PICOT Statement Paper

A PICOT starts with a designated patient population in a particular clinical area and identifies clinical problems or issues that arise from clinical care. The intervention should be an independent, specified nursing change intervention. The intervention cannot require a provider prescription. Include a comparison to a patient population not currently receiving the intervention, and specify the timeframe needed to implement the change process.

Formulate a PICOT statement using the PICOT format provided in the assigned readings. The PICOT statement will provide a framework for your capstone project.

In a paper of 500-750 words, clearly identify the clinical problem and explain how addressing it can result in a positive patient outcome.


Make sure to address the following in the PICOT statement:

  1. Evidence-Based Solution
  2. Nursing Intervention
  3. Patient Care
  4. Health Care Agency
  5. Nursing Practice

Prepare this assignment according to the guidelines found in the APA Style Guide, located in the Student Success Center. An abstract is not required.

This assignment uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion.

Evidence-Based Practice Process Quality Assessment: EPQA Guidelines

Mei Ching Lee, PhD, RN • Karen L. Johnson, PhD, RN • Robin P. Newhouse, PhD, RN, NEA-BC, FAAN • Joan I. Warren, PhD, RN-BC, NEA-BC


Keywords: evidence-based practice, evidence-based practice process, quality assessment


ABSTRACT

Background: Nurses are increasingly engaged in evidence-based practice (EBP) processes to answer significant questions and guide nursing practice. However, there are no criteria to evaluate the rigor and quality of EBP projects, making the decision about whether to implement a recommended practice change questionable.

Aim: The purpose of this study was to achieve consensus among nationally recognized EBP nurse experts on criteria that could be used to appraise the methodological quality of an EBP project as well as to serve as a guideline to plan for an EBP project.

Methods: A modified two-round Delphi method was used. Twenty-three nationally known EBP experts were invited by e-mail to participate in completing a web-based questionnaire.

Results: Items converged after two rounds (response rates: 52% [n = 12/23] for Round 1 and 35% [n = 8/23] for Round 2), resulting in the development of the EBP Process Quality Assessment (EPQA) guidelines, which include 34 items.

Implications: The EPQA guidelines can be used to guide and evaluate the methodological quality of EBP projects. They can be used in practice settings to critically appraise an EBP project prior to translating recommendations into practice. Educators can use the EPQA guidelines as a rubric to evaluate student EBP projects. EPQA guidelines can be utilized in research to assess interventions and to build or improve EBP capacity.

BACKGROUND

Evidence-based practice (EBP) is an essential element in delivering quality patient care (The Joint Commission, 2008). The use of EBP guidelines improves patient outcomes (Cochrane Collaboration, 2005; Horbar et al., 2004; Thomas et al., 1999) as well as outcomes for family members, staff, the organization, and community (Worral, Levin, & Arsenault, 2010).

Many accrediting bodies in the United States have increased their attention on the use of evidence to make decisions and provide care that will improve patient outcomes (Worral et al., 2010). The Joint Commission recognizes the use of EBP as an effective way to improve healthcare delivery (The Joint Commission, 2008). Organizations achieving Magnet recognition must possess established and evolving EBP programs that include infrastructure and resources to support the advancement of EBP (American Nurses Credentialing Center, 2008). Professional organizations, such as the American Association of Critical Care Nurses and the Oncology Nursing Society, have extensive tools and resources on their websites to help members conduct EBP projects. Many models have been developed to educate and guide nurses on EBP methods. Scholarship for EBP is an essential curricular element for baccalaureate education (American Association of Colleges of Nursing, 2008). Schools of nursing have integrated

EBP methods into their curricula to meet accreditation requirements. Similarly, accreditation standards for postbaccalaureate nurse residency programs require inclusion of an EBP project in the curricula (Commission on Collegiate Nursing Education, 2008). Many organizations sponsor EBP workshops and related continuing education programs. Education programs emphasize EBP as an essential component of nursing care and provide guidance on how to conduct an EBP project (Worral et al., 2010). As a result of all these initiatives, the number of EBP projects produced and published has grown exponentially.

It is imperative that EBP projects are conducted using rigorous methods to ensure valid, unbiased recommendations because EBP projects are conducted to solve problems and make clinical decisions (Newhouse, Dearholt, Poe, Pugh, & White, 2007). The final goal in conducting an EBP project is to provide valid practice recommendations based on a thorough review and critical appraisal of evidence. The evidence is used to determine if a change in current practice is needed (Cvach & Lee, 2010). The decision to translate EBP recommendations into clinical practice requires confidence in the validity of the practice recommendations. Practice recommendations from a poorly conducted EBP project may be biased and therefore should not be translated into practice. Prior to translating recommendations into practice, a critical appraisal of the methodological quality used to generate the recommendations must be made.

140 Worldviews on Evidence-Based Nursing, 2013; 10:3, 140–149. © 2013 Sigma Theta Tau International

“Quality” as a concept is difficult to define. Verhagen et al. (1998) suggest that quality is “a set of parameters in the design and conduct of a (project) that reflects the validity of the outcome, and is related to external and internal validity and the statistical model used” (Verhagen et al., 1998, p. 1239). Methodological quality should be assessed in all steps of the EBP process: formulation of a focused clinical question, the search for and critical appraisal of the evidence, translation of the evidence into practice recommendations, and evaluation of the outcomes resulting from implementation of the recommendations.

Published criteria guidelines exist for quality assessment of randomized controlled trials (Verhagen et al., 1998) and systematic reviews and meta-analyses (Moher, Liberati, Tetzlaff, Altman, & the PRISMA Group, 2009). These guidelines help authors improve reporting of results and are also useful for the critical appraisal of published reports. To our knowledge, no similar published guidelines exist to aid in the quality assessment of EBP projects.

Significance

EBP in nursing is recognized as an essential element in quality care. Accrediting bodies stress the importance of using evidence to make decisions and impact patient outcomes. As a result, the number of EBP projects produced and published in recent years has grown exponentially. However, assessment tools are lacking to help clinicians evaluate the rigor and quality of these EBP projects prior to adopting their recommendations into practice. This is a significant problem: wide variances in the EBP process can result in flawed recommendations that may adversely affect patient outcomes, be inefficient, and not be cost effective. Criteria are needed to aid in the evaluation of the rigor and quality of EBP projects to ensure nursing care is delivered with the best evidence available.

Purpose

The purpose of this study was to achieve consensus among nationally recognized EBP nurse experts on criteria that could be used to appraise the methodological quality of an EBP project as well as to serve as a guideline to plan for an EBP project.

METHODS

Design

A modified Delphi method was used to generate criteria that could be used in an instrument designed to evaluate the methodological quality of an EBP project. The Delphi method is widely used and accepted for consensus building on a specific topic (Hsu & Sandford, 2007). Nurses have used the Delphi method to ascertain priorities or determine developments for research, education, and clinical practice (Kirkwood, Wales, & Wilson, 2003). The Delphi method was selected for this study because it maintains subject anonymity and minimizes bias or coercion due to the influence of individual members in group discussion (Hsu & Sandford, 2007).

The Delphi method uses a series of rounds in which each participant is given a list of items to review and evaluate. The research team summarizes the individual evaluations, revises the list, and summarizes the opinions of the participants as a whole. The items are then sent back to individual participants to review and evaluate again. This iteration and feedback process continues until consensus is reached among participants. In most cases, a series of three rounds is recommended to reach consensus.
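The iteration-and-feedback loop described above can be sketched as a short procedure. The function and callback names below are hypothetical illustrations of the process, not part of the study's actual instrumentation:

```python
# A minimal sketch of the Delphi iteration described above.
# run_delphi, collect_ratings, revise_items, and consensus_reached are
# hypothetical names chosen for illustration.

def run_delphi(items, collect_ratings, revise_items, consensus_reached, max_rounds=3):
    """Iterate Delphi rounds until the panel converges or rounds run out."""
    for round_number in range(1, max_rounds + 1):
        ratings = collect_ratings(items)      # each expert rates every item anonymously
        items = revise_items(items, ratings)  # team summarizes feedback and revises the list
        if consensus_reached(ratings):        # stop early once items converge
            return items, round_number
    return items, max_rounds
```

In this study the items converged after the second round, so the planned third round was not needed.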

IRB Approval

This study was reviewed and approved as an exempt study by the Institutional Review Board at the University of Maryland, Baltimore.

Procedures

A team (the manuscript authors) was formed to conduct this research. All authors are doctorally prepared nurses who lead organizational EBP projects and teach EBP and research methods to graduate nursing students. The team identified and selected items on the draft checklist, constructed the surveys, and analyzed and interpreted the qualitative and quantitative data. All authors contributed to the development and revision of the manuscript and approved the final submitted version.

Instrument Development

Items in the initial instrument were developed and modified, with permission, from an established guideline that evaluates the quality of published systematic reviews and meta-analyses, the PRISMA statement (Moher et al., 2009). Since the PRISMA guideline was developed for the evaluation of systematic reviews, which use a different methodology than EBP, adjustment of the criteria was needed. Domains were created to represent various aspects of the EBP process (question generation, methods for evidence synthesis, etc.).

Selection of experts. To establish content validity, known EBP nurse experts were identified and invited to participate via e-mail. All invited experts were doctorally prepared clinicians or academicians. All had publications related to EBP in peer-reviewed journals, books, or both. Many were recognized public speakers on the topic of EBP. Content validity is a crucial factor in instrument development, as it establishes whether items on an instrument adequately measure a desired domain of content (Grant & Davis, 1997). Criteria used in the selection of the content experts included a history of publications and national presentations related to EBP methods. While there is no consensus on the optimal number of participants in a Delphi study, others have suggested 10–15 participants are adequate if the background of the participants is homogeneous (Delbecq, Van de Ven, & Gustafson, 1975). Anticipating a 60% response rate, we invited 25 experts to participate.


EPQA Guidelines

Participation of experts. For each round, experts received instructions, definitions of terms, and a questionnaire. They were provided with an Internet link to access the questionnaire online. All responses were anonymous, IP addresses were not tracked, and no identifiers were collected. This allowed each participant to express opinions with no pressure for conformity in communication or exchange of information, producing unbiased, focused opinions (Hsu & Sandford, 2007). Participants were asked to rate the relevancy of each item in the questionnaire using a 4-point modified Likert scale (1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, 4 = highly relevant). Additionally, for each item they were asked to comment on the clarity of the item (“Was the item well-written, distinct?”). If they felt an item was not clear, they were asked to suggest how to add clarity to it. At the conclusion of the questionnaire, experts were asked to comment on the overall comprehensiveness of the items, whether items should be added, and whether some component of an EBP project was not captured.

Delphi rounds. As previously recommended (Delbecq et al., 1975), for each round the experts were given 2 weeks to complete the questionnaire. After each round, the research team met to review the results, revise or delete items based on participant comments, and evaluate and select the items to include in the next round’s questionnaire.

Analyses

The analysis of the responses from the Delphi rounds was both qualitative and quantitative. Quantitatively, the mean, median, and standard deviation of the 4-point modified Likert scale scores were examined for each item in the survey. An a priori decision was made by the team to retain items that met the following two criteria: (1) a median score of 3.25 or higher and (2) at least 70% of the Delphi subjects rating the item 3 or higher (Hsu & Sandford, 2007). Qualitatively, the suggestions and comments from the participants were summarized in narrative form to provide an audit trail of group decisions.
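The two retention criteria amount to a simple check on each item's set of expert ratings. A minimal sketch, using hypothetical rating data on the 4-point scale (the function name and sample ratings are illustrative, not from the study):

```python
import statistics

def retain_item(ratings):
    """Apply the a priori retention rule to one item's expert ratings.

    An item is retained when (1) the median rating is 3.25 or higher and
    (2) at least 70% of respondents rated the item 3 or higher.
    """
    median = statistics.median(ratings)
    share_3_or_higher = sum(1 for r in ratings if r >= 3) / len(ratings)
    return median >= 3.25 and share_3_or_higher >= 0.70

# Hypothetical ratings from ten experts (1 = not relevant ... 4 = highly relevant)
print(retain_item([4, 4, 4, 3, 4, 3, 4, 4, 3, 4]))  # retained: median 4.0, 100% rated >= 3
print(retain_item([4, 4, 2, 3, 2, 3, 2, 1, 3, 2]))  # dropped: median 2.5, 50% rated >= 3
```

As reported below, every item in the Round 2 questionnaire satisfied both criteria, so all 34 items were retained.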

RESULTS

Round 1

Response rate for Delphi Round 1 (conducted in May 2011) was 52% (12/23). Table 1 includes the category headings, items, means, standard deviations, and median scores for each item included in the Round 1 questionnaire. Means for the 22 items ranged from 3.40 (relevant) to 4.0 (highly relevant), indicating agreement that all items were relevant. All items were rated 3 or higher on the scale by more than 70% of Delphi subjects. All median scores were 4.0 with the exception of one item, item 10, “Describes methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level)” (median = 3.5). The three items with the greatest dispersion of scores were: item 7, “States the process for title, abstract and article screening for selecting studies” (M = 3.47, SD = 1.08, median = 4);

item 18, “For all outcomes considered (benefit or harms), include a table with summary data for each intervention group, effect estimates and confidence intervals, ideally with a forest plot” (M = 3.50, SD = 0.85, median = 4); and item 22, “Describes sources of funding for systematic review and other support (e.g., supply of data), and the role of funders for the evidence-based practice project” (M = 3.40, SD = 0.84, median = 4).

All qualitative responses were reviewed and discussed among the investigative team. Several respondents indicated the need to define “EBP project” to distinguish it from a systematic review. The following definition was added to the Round 2 questionnaire: A project that is generated in response to a clinical or administrative problem. The project delineates a clear, precise, and answerable question. A strategic and comprehensive procedure is used to search for evidence (research and nonresearch) to answer the question. Evidence is reviewed and critically appraised for quality and consistency using an established rating scale. The evidence is synthesized and recommendations for further research, practice, or policy changes are made.

There were multiple comments stating that some items were more appropriate for a systematic review than for an EBP project. Twelve items were added, reflecting new content or revision of existing items so that each item captured a unique concept. The resulting questionnaire for Round 2 included 34 items.

Round 2

Response rate for Round 2 (conducted in July 2011) was 35% (8/23). Table 2 includes the category headings, items, means, standard deviations, and median scores for each item included in the Round 2 questionnaire. Item means ranged from 3.63 to 4.0 and all median scores were 4.0, indicating that all items were evaluated to be highly relevant. All items were rated 3 or higher by more than 70% of Delphi subjects. All 34 items met the a priori criteria, and all items in the Round 2 questionnaire were retained. All qualitative responses were reviewed by the investigative team. No further item revisions were required and consensus on items was attained. Although Round 3 was planned, it was determined to be unnecessary and was not conducted.

DISCUSSION AND IMPLICATIONS

A two-round modified Delphi method was used to generate 34 items to serve as criteria for evaluating the methodological quality of an EBP project. The EBP Process Quality Assessment (EPQA) Guidelines can be used by clinicians to evaluate the quality of EBP projects in practice, by researchers to test the fidelity of the EBP process, and by educators as a grading rubric for EBP projects required in coursework.

Practice

EBP activities are increasing exponentially in the healthcare setting. Regulatory agencies, accrediting bodies, consumer groups, and professional organizations call for care that is based on evidence. The conduct of EBP and research is required to achieve Magnet designation. However, many practices are being implemented unchecked under the guise of an evidence base. Leaders and staff alike in healthcare settings are often older and did not have the opportunity to formally learn about the rigor required for a practice to be deemed an EBP. Frequently leaders misguidedly refer to quality

Table 1. Results of Delphi Round 1

Item 1. Identifies the report/project as an evidence-based practice project (M = 3.55, SD = 0.52, median = 4.00)

Item 2. Provides a structured summary which includes, as applicable: data to provide the background of the problem, statement of the problem, objective of the EBP project, setting, inclusion and exclusion criteria, source(s) of evidence, appraisal method, limitations, conclusion, recommendation and implications. (M = 4.00, SD = 0.00, median = 4.00)

Introduction

Item 3. Describes the rationale for the evidence-based practice project including data to support the problem and what is already known. (M = 4.00, SD = 0.00, median = 4.00)

Item 4. Provides an explicit statement of the question being addressed with reference to participants or population/intervention/comparison/outcome (PICO). (M = 3.70, SD = 0.48, median = 4.00)

Method

Item 5. Explicitly describes the search method, inclusion and exclusion criteria, and rationale for search strategy limits. (M = 3.90, SD = 0.32, median = 4.00)

Item 6. Describes multiple information sources (e.g., databases, contact with study authors to identify additional studies, or any other additional search strategies) included in the search strategy, and date. (M = 3.80, SD = 0.42, median = 4.00)

Item 7. States the process for title, abstract and article screening for selecting studies. (M = 3.47, SD = 1.08, median = 4.00)

Item 8. Describes the method of data extraction (e.g., independently or process for validating data from multiple reviewers). (M = 3.70, SD = 0.68, median = 4.00)

Item 9. Includes conceptual and operational definitions for all variables for which data were abstracted (e.g., define blood pressure as systolic blood pressure, diastolic blood pressure, ambulatory blood pressure, automatic cuff blood pressure or arterial blood pressure). (M = 3.90, SD = 0.32, median = 4.00)

Item 10. Describes methods used for assessing risk of bias of individual studies (including specification of whether this was done at the study or outcome level). (M = 3.50, SD = 0.53, median = 3.50)

Item 11. States the principal summary measures (e.g., risk ratio, difference in means). (M = 3.60, SD = 0.52, median = 4.00)

Item 12. Describes the method of combining results of studies including quality, quantity, and consistency of evidence. (M = 3.80, SD = 0.42, median = 4.00)

Item 13. Specifies assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies). (M = 3.80, SD = 0.42, median = 4.00)

Item 14. Describes appraisal procedure and conflict resolution. (M = 3.70, SD = 0.48, median = 4.00)

Results

Item 15. Provides number of studies screened, assessed for eligibility, and included in the review, with reasons for exclusion at each stage, ideally with a flow diagram. (M = 3.80, SD = 0.63, median = 4.00)

Item 16. For each study, presents characteristics for which data were extracted (e.g., study size, design, method, follow-up period) and provides citations. (M = 3.90, SD = 0.32, median = 4.00)

Item 17. Presents data on risk of bias of each study and, if available, any outcome-level assessment. (M = 3.50, SD = 0.71, median = 4.00)

Item 18. For all outcomes considered (benefit or harms), includes a table with summary data for each intervention group, effect estimates, and confidence intervals, ideally with a forest plot. (M = 3.50, SD = 0.85, median = 4.00)

