
Feature: Evaluating the Evidence

Determining the level of evidence

Nonresearch evidence

Glasofer, Amy DNP, RN, NE-BC; Townsend, Ann B. DrNP, RN, ANP-C, CNS-C

doi: 10.1097/01.CCN.0000654792.71629.00
Figure: Evidence hierarchy

This is the third article in a series meant to help nurses determine the strength of a piece of evidence to support evidence-based practice (EBP). The American Association of Critical-Care Nurses (AACN) is one of many professional organizations calling for nurses to use evidence to guide practice.1 In the previous articles, we established that the strength of evidence refers to the certainty or confidence in the conclusions of a piece of evidence.2 This series follows the Johns Hopkins evidence hierarchy.3 Previous installments covered Levels I through III, which include experimental, quasi-experimental, and nonexperimental research.4,5 This article reviews the types of nonresearch evidence useful in clinical decision-making.

If randomized controlled trials (RCTs) are the DNA of scholarly evidence, nonresearch evidence is like circumstantial evidence—any one piece alone is insufficient for a certain conclusion. However, nurses can use a collection of high-quality nonresearch evidence reaching consistent conclusions for EBP. To understand nonresearch evidence, it is helpful first to define research. According to the Electronic Code of Federal Regulations, research refers to a systematic investigation designed to develop or contribute to generalizable knowledge.6 There are two critical elements in this definition: systematic, which refers to having a predetermined and replicable plan for data collection and analysis, and generalizable, meaning that the results will apply to other populations, settings, and samples. Nonresearch evidence lacks rigorous, reproducible methods and/or does not produce new knowledge that is transferable across populations and settings. According to the Johns Hopkins model, this includes Level IV (clinical practice guidelines and consensus or position statements) and Level V evidence (literature reviews, expert opinion, quality improvement, financial evaluations, program evaluations, case reports, community standards, clinician experience, and consumer preference).3

Level IV: Clinical practice guidelines, consensus or position statements

Although the terms clinical practice guidelines (CPGs) and consensus or position statements are included in Level IV evidence, they differ in their development. The National Academy of Medicine (NAM, formerly the Institute of Medicine) defines CPGs as “statements that include recommendations, intended to optimize patient care, that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.”7 The development of a CPG generally occurs in four stages:

  1. A systematic review is conducted, synthesizing the best-available evidence on the topic of interest.
  2. The guidelines are constructed, tested, and revised with review from multidisciplinary experts and professional organizations.
  3. The completed guideline is disseminated.
  4. The CPG is revised as new evidence emerges.7,8

The purpose of CPGs is to make recommendations that improve practice by narrowing the gap between evidence and practice, to improve the quality and cost of care by reducing variation in practice, and to foster accountability in practice.9,10 An important feature of CPGs is that they specify clinical processes and outcomes that can be measured to improve quality of care through quality initiatives.10 An example of a CPG is the International Guidelines for Management of Severe Sepsis and Septic Shock 2016.11

Similar to CPGs, consensus or position statements are developed by experts and professional organizations with a peer review process, but their purpose is limited to providing expert and consensus opinion, informational statements, and organizational recommendations on specific topics.3,9 In contrast to CPGs, consensus or position statements do not make specific recommendations for clinical practice.9 For example, the American Nurses Association (ANA) issues official position statements that reflect the views of nursing on relevant practice issues.12 The process for developing an ANA position statement includes identification and approval of a topic, selection of a practice panel to research and draft the position statement, posting of the draft on the ANA website for public comment, and revision and adoption of the official statement. ANA Official Position Statements cover topics such as nursing practice, ethics and human rights, and patient safety.12 Although consensus statements have a less extensive evidence base than CPGs, both require rigorous, systematic, and transparent development.9 A limitation of both consensus statements and CPGs is that this development process may vary in quality, which is why they are Level IV evidence.
The NAM has established standards for CPG development to help clinicians critically evaluate CPGs for use in clinical practice.7 By the NAM criteria, a trustworthy CPG should be based on valid scientific evidence; be reliable and reproducible; be clinically applicable to the identified patient population; be flexible and clear; be accurately documented; be developed by a multidisciplinary panel; and include plans for review as new evidence emerges.7 Among the appraisal tools available for assessing CPGs, one cited by the NAM is the Appraisal of Guidelines for Research & Evaluation (AGREE) instrument, updated in 2010 as AGREE II.7 The Johns Hopkins model provides tools to evaluate the quality of position statements based on the types and levels of evidence included, the stakeholders involved, the potential for bias, and the clarity of recommendations.3

Level V

Literature review. Dearholt and Dang define a literature review as “a summary of published literature without systematic appraisal of evidence quality or level.”3 Given the abundance of publications, literature reviews provide a useful summary of existing evidence on a topic.13 In the first article in this series, we learned that systematic reviews are Level I evidence, the highest level. Yet literature reviews are Level V. Though both provide a review of a topic, there are significant differences (see Systematic vs. literature review). Literature reviews do not require strict criteria for inclusion of evidence types; exhaustive, rigorous, and reproducible search methods; or appraisal of evidence, and can be conducted by individual authors. Therefore, the conclusions presented in literature reviews can be influenced by incomplete sampling of existing evidence, inclusion of weak evidence, or author bias. Though literature reviews can provide a helpful summary of the literature, nurses should interpret their findings with caution. Literature reviews that are conducted by experts, provide scientific rationale, and draw definitive conclusions can offer a foundation for change in practice.3,13 Sorenson and colleagues published an example of a high-quality literature review on compassion fatigue in healthcare providers.14 The article is published in the peer-reviewed Journal of Nursing Scholarship, the official journal of the international nursing organization Sigma. All the authors have professional certifications, education, and professional affiliations supporting their subject matter expertise. Their methods are clearly detailed and include a flow diagram detailing their review strategies. Finally, the conclusions drawn are connected to the results of the literature reviewed.

Table: Systematic vs. literature review

Expert opinion. Expert opinions may be published as commentary, position statements, case reports, letters to the editor, or as a source of evidence in CPGs.3,15 When consulting expert opinion as a source of evidence, the reader must assess the credentials of the expert. Experts should be externally recognized, have reputable academic and professional affiliations, have a history of publications and presentations on the topic, and use the best existing evidence to support their opinion.3,16 Once these criteria are verified, nurses can use high-quality expert opinion to guide practice. Chiumello and colleagues published an expert opinion regarding respiratory support in patients with acute respiratory distress syndrome.17 Based on the author credentials and affiliations, the publication source, and the extensive list of references, this publication could be used as a source to guide EBP.

Organizational experience (quality improvement, financial evaluation, program evaluation). Quality improvement (QI) projects seek to systematically improve care delivery within an organization using continuous processes that can be measured and improved to provide predictable outcomes.18 QI projects apply EBP and CPGs to care settings using a rapid-cycle framework such as the plan-do-study-act improvement model. QI projects may include both internal and external sources of evidence to support interventions. QI is a continuous process in which changes are implemented, measured, and refined to improve the desired clinical outcome.19 Because the purpose of QI is to achieve change at the organizational level, the results are not generalizable to other settings.

Similar to QI projects, financial evaluations assess the impact of change. The primary focus of a financial evaluation is to measure the economic outcomes of a project. Of interest is the potential return on investment, which represents the savings of an investment compared with the cost. High-quality financial evaluations should report appropriate data sources for costs, clearly define the outcome measures, describe the analytical model, and perform sensitivity analyses.3 Gellert and colleagues published an example of a financial analysis.20 They sought to evaluate the efficiency gained by converting the sign-in process for their electronic health record to single sign-on. They converted the time saved by single sign-on to a dollar amount representing salary for clinician time, and decreased costs for replacing equipment, estimating that single sign-on saved the organization over $3 million across 19 hospitals each year.20
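The return-on-investment concept described above can be expressed as a simple calculation. The following is a minimal sketch, not a method from the Gellert study; the function name and all dollar figures are hypothetical, chosen only to illustrate the arithmetic:

```python
def return_on_investment(savings: float, cost: float) -> float:
    """ROI expressed as net savings relative to the cost of the investment.

    A positive value means the project returned more than it cost;
    the figures passed in are assumed to cover the same time period.
    """
    return (savings - cost) / cost


# Hypothetical example: a project costs $250,000 to implement and
# saves $400,000 per year in clinician time and equipment costs.
roi = return_on_investment(savings=400_000, cost=250_000)
print(f"ROI: {roi:.0%}")  # prints "ROI: 60%"
```

As the sources note, a credible financial evaluation would also document where the cost and savings figures come from and test how sensitive the result is to those estimates.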

The purpose of a program evaluation (PE) is to evaluate the functioning of a program and identify opportunities to modify the program. While QI is focused on improving, PE is focused on evaluating.21 According to the CDC, PEs follow a set of formal guidelines to systematically collect “information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development.”22 The CDC provides a framework offering guidelines to conduct a rigorous PE, including the following steps in a continuous cycle: engage stakeholders, describe the program, focus the evaluation design, gather credible evidence, justify conclusions, ensure use, and share lessons learned.22 Additionally, the CDC offers standards to assess the quality of a PE based on utility, feasibility, propriety, and accuracy.22 Logan and colleagues published an example of a PE following the CDC framework evaluating tuberculosis contact investigation programs.23 Their evaluation focused on engaging stakeholders, describing current-state programs, and developing tools to conduct self-evaluation.21

Case report. A case report is a summary of information on an unusual situation.3 Case reports are appropriate when a clinician wishes to report on an unusual case, or an exemplar of successful or unsuccessful care.24 The details provided in these reports can offer insights into specific situations, be useful in providing information on new or rare disorders, and provide insights that are not otherwise found in research reports.24 Quality case reports deliver this information in the context of existing knowledge.25 The CARE (CAse REport) guidelines exist to support development of high-quality case reports.25 Because case reports are based on individuals, they are not generalizable or reliable for practice change as a sole source of evidence.2 Case reports should not be confused with case studies or case series, which are research approaches used to develop in-depth understanding of complex issues in the natural context.26 Duignan and colleagues provide an example of a case report offering details of a patient with serotonin syndrome.27 The authors connect the case to existing literature and make recommendations for the practice of clinicians working in the ED.27

Community standard, clinician experience, and consumer preference. Even in situations where there is an abundance of high-level, high-quality research, nurses must incorporate community standards, clinician experience, and consumer preference in EBP. Community standards represent the current practice in a community.2 They can be ascertained by consulting with providers, agencies, and other external organizations. Additionally, individual clinician experience, especially among experienced clinicians, adds depth to practice when incorporated in an evidence-based framework. Internal clinician expertise should be included when implementing a new practice.2 Finally, at the center of all that critical care nurses do is the patient. The Agency for Healthcare Research and Quality includes patient-centered care as one of the six domains of healthcare quality, defining it as “providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions.”28 EBP requires that patients have options based on the best-available evidence while preserving their autonomy.3 Sensitivity and respect for the individual's personal, religious, cultural, and socioeconomic beliefs and experiences are essential to optimal outcomes. A qualitative study by Fang on hepatitis B screening among Hmong Americans illustrates how consideration of community standards, clinician experience, and patient preference is necessary to deliver evidence-based care.29

Conclusion

The AACN defines EBP as “a lifelong approach to clinical decision making to improve clinical outcomes and includes use of best evidence, clinical expertise, and values of patients and their families.”1 Today, nurses have access to an overwhelming amount of research and nonresearch evidence. Evidence hierarchies, such as the AACN or Johns Hopkins hierarchies, exist to evaluate and grade evidence so that nurses can focus on the best-available evidence to support practice.2,25 To apply these hierarchies, nurses must have at least a basic understanding of research design so they can differentiate between various forms of evidence. This series has provided an overview of experimental research, nonexperimental research, and nonresearch sources of evidence, offering the foundational knowledge necessary to appraise the level of evidence: an essential step in endeavoring to practice in an evidence-based way.

REFERENCES

1. Peterson MH, Barnason S, Donnelly B, et al. Choosing the best evidence to guide clinical practice: application of AACN levels of evidence. Crit Care Nurse. 2014;34(2):58–68.
2. Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. 4th ed. Philadelphia, PA: Wolters Kluwer; 2019.
3. Dearholt SL, Dang D. Johns Hopkins Nursing Evidence-Based Practice Model and Guidelines. 3rd ed. Indianapolis, IN: Sigma Theta Tau International; 2018.
4. Glasofer A, Townsend AB. Determining the level of evidence: experimental research appraisal. Nurs Crit Care. 2019;14(6):22–25.
5. Glasofer A, Townsend AB. Determining the level of evidence: nonexperimental research designs. Nurs Crit Care. 2020;15(1):24–27.
6. Electronic Code of Federal Regulations. 2019. https://ecfr.io/Title32/pt32.2.219#se32.2.219_1102.
7. Graham R, Mancher M, Miller Wolman D, et al. Clinical practice guidelines we can trust. In: Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Washington, DC: National Academies Press; 2011.
8. Global Programme on Evidence for Health Policy, World Health Organization. Guidelines for development of WHO guidelines: version 10, March 2003. http://archives.who.int/eml/expcom/expcom14/1other/guid_for_guid.pdf.
9. D'Arcy Y. Practice guidelines, standards, consensus statements, position papers: what they are, how they differ. Am Nurse Today. 2007.
10. Kredo T, Bernhardsson S, Machingaidze S, et al. Guide to clinical practice guidelines: the current state of play. Int J Qual Health Care. 2016;28(1):122–128.
11. Rhodes A, Evans LE, Alhazzani W, et al. Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016. Crit Care Med. 2017;45(3):486–552.
12. ANA Official Position Statements. www.nursingworld.org/practice-policy/nursing-excellence/official-position-statements.
13. Baker JD. The purpose, process, and methods of writing a literature review. AORN J. 2016;103(3):265–269.
14. Sorenson C, Bolick B, Wright K, Hamilton R. Understanding compassion fatigue in healthcare providers: a review of current literature. J Nurs Scholarsh. 2016;48(5):456–465.
15. Ponce OJ, Alvarez-Villalobos N, Shah R, et al. What does expert opinion in guidelines mean? A meta-epidemiological study. Evid Based Med. 2017;22(5):164–169.
16. Herman RA, Raybould A. Expert opinion vs. empirical evidence: the precautionary principle applied to GM crops. GM Crops Food. 2014;5(1):8–10.
17. Chiumello D, Brochard L, Marini JJ, et al. Respiratory support in patients with acute respiratory distress syndrome: an expert opinion. Crit Care. 2017;21:240.
18. Agency for Healthcare Research and Quality. Approaches to quality improvement. Module 4. 2013. www.ahrq.gov/ncepcr/tools/pf-handbook/mod4.html.
19. Merrill KC. Is this quality improvement or research. American Nurse Today. 2015. www.americannursetoday.com/quality-improvement-research.
20. Gellert G, Crouch JF, Gibson LA, Conklin GS. An evaluation of the clinical and financial value of work station single sign-on in 19 hospitals. Perspect Health Inf Manag. 2019;16:1–14.
21. Lowe NK, Cook PF. Differentiating the scientific endeavors of research, program evaluation, and quality improvement studies. J Obstet Gynecol Neonatal Nurs. 2012;41:1–3.
22. Centers for Disease Control and Prevention. Program performance and evaluation office: introduction. www.cdc.gov/eval/guide/introduction/index.htm.
23. Logan S, Boutotte J, Wilce M, Etkind S. Using the CDC framework for program evaluation in public health to assess tuberculosis contact investigation programs. Int J Tuberc Lung Dis. 2003;7(12 suppl 3):S375–S383.
24. Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11:100.
25. Porcino A. Not birds of a feather: case reports, case studies, and single-subject research. Int J Ther Massage Bodywork. 2016;9(3):1–2.
26. Gagnier JJ, Kienle G, Altman DG, et al. The CARE Guidelines: consensus-based clinical case report guideline development. J Diet Suppl. 2013;10(4):381–390.
27. Duignan KM, Quinn AM, Matson AM. Serotonin syndrome from sertraline monotherapy: a case report. Am J Emerg Med. [e-pub Nov. 16, 2019].
28. Agency for Healthcare Research and Quality. Six domains of health care quality. www.ahrq.gov/talkingquality/measures/six-domains.html.
29. Fang DM, Stewart SL. Social-cultural, traditional beliefs, and health system barriers of hepatitis B screening among Hmong Americans: a case study. Cancer. 2018;124(suppl 7):1576–1582.
Wolters Kluwer Health, Inc. All rights reserved.