
Improving Apparent Cause Analysis Reliability: A Quality Improvement Initiative

Crandall, Kristen M. MSN, RN, CPN; Sten, May-Britt MSN, RN-BC, CPHQ; Almuhanna, Ahmed MHA; Fahey, Lisbeth MSN, RN; Shah, Rahul K. MD, MBA

Pediatric Quality and Safety: May/June 2017 - Volume 2 - Issue 3 - p e025
doi: 10.1097/pq9.0000000000000025
Individual QI Projects from Single Institutions

Introduction: Apparent cause analysis (ACA) is a process in quality improvement used to examine events. A baseline assessment of completed ACAs at a tertiary care free-standing pediatric academic hospital revealed they were ineffective due to low-quality analysis, unreliable action plans, and poor spread, leading to error recurrence. The goal of this project was to increase ACA action plan reliability scores while maintaining or decreasing turnaround time.

Methods: The Model for Improvement served as the framework for this quality improvement initiative. We developed a key driver diagram, established measures, tested interventions using plan-do-study-act cycles, and implemented the effective interventions. To measure reliability, we created a high reliability toolkit that links each action item/intervention to a level of reliability, and we scored each ACA action plan to determine an overall reliability score. Action plans scored at a low level of reliability required revision before implementation.

Results: Average ACA action plan reliability scores increased from 86.4% to 96.1%. ACA turnaround time decreased from a baseline of 13.2 days to 8.6 days. Stakeholders reported a subjective increase in satisfaction with the revamped ACA process.

Conclusions: Incorporating high reliability principles into ACA action plan development increased the effectiveness of ACA while decreasing turnaround time. The high reliability toolkit was instrumental in providing an organizational resource for approaching this subset of cause analyses. The toolkit provides a way for safety/quality leaders to connect with stakeholders to design highly reliable solutions that improve safety for patients, families, and staff.

Supplemental Digital Content is available in the text.

Children’s National Medical Center, Washington, DC.

Presented at the Children’s Hospital Association 2017 Quality and Safety Conference, Orlando, FL, March 21, 2017.

Supplemental digital content is available for this article. Clickable URL citations appear in the text.

Received for publication February 1, 2017; Accepted April 17, 2017.

Published online May 25, 2017

*Corresponding author. Address: Kristen M. Crandall, MSN, RN, CPN, Children’s National Medical Center, 111 Michigan Avenue, NW, Washington, DC 20010, PH: 202-476-6387; Fax: 202-476-5988, Email: kristenmcrandall@gmail.com

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC-BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.


INTRODUCTION

A fundamental tenet of patient safety and quality improvement (QI) is the need for thorough, impartial, expeditious, rigorous, and actionable analyses of near misses, adverse events, and errors. The most commonly used processes for such analyses are apparent cause analysis (ACA), root cause analysis (RCA), and common cause analysis. These processes identify reasonable actions to create safer systems and prevent future events.1

An ACA is a limited investigation of an event with 2 purposes: to identify actions to address the problem/immediate condition and to collect event information that aids in the identification of organizational trends.2 The ACA format gives structure to learning and understanding about the event and facilitates the construction of an action plan to prevent recurrence.3 ACAs typically focus on no-harm, minimal-harm, and near-miss events that occur in discrete work settings and conventionally do not cross boundaries. ACAs are completed by trained, local stakeholders or leaders who are experts in the discrete work setting where the event occurred. RCAs, by contrast, are expansive, focus on significant events, and affect myriad units rather than a discrete cohort of individuals or stakeholders. Common cause analysis synthesizes learnings from safety events, ACAs, and RCAs, identifying common etiologies and facilitating broad, far-reaching improvements.1

Myriad peer-reviewed publications address RCAs and their role and value in driving health-care quality and safety.4–8 Because RCAs examine more serious and consequential events, this attention is understandable. The fact remains that far fewer articles discuss and evaluate ACAs than RCAs.

At Children’s National, we use ACAs approximately 10 times more frequently than RCAs. Common cause analyses, by definition, are less frequent than both. Events are reported through the electronic safety event reporting system and triaged by the patient safety team; events with a high risk for harm and system-induced events are typically triaged for ACA completion. Baseline assessment (August 2015 to January 2016) revealed that ACAs were of low quality, had unreliable action plans, and were not effectively spread, resulting in the recurrence of events and, ultimately, harm and errors. Before this initiative, there was no system to quantitatively measure ACA effectiveness in real time; retrospective scoring of ACA action plans therefore provided a baseline reliability score of 86.4%. Identification of this low reliability score prompted us to embark on this QI initiative to evaluate our ACA process and assess ACA action plan reliability. This project aimed to increase corrective action reliability scores on all ACAs from a baseline of 86.4% to 95% by December 2016 and to sustain that level for 6 months. This report details our efforts and improvement results.


METHODS

This project was undertaken as a QI initiative at Children’s National and does not constitute human subjects research; as such, it was not subject to institutional review board oversight. The Model for Improvement9 served as the QI methodology. Focused interviews and stakeholder surveys guided identification of the key drivers of education (knowledge), process, and culture and established the needed interventions (Fig. 1). As with any system change, the team was concerned that turnaround time could initially be negatively impacted; therefore, turnaround time emerged as a balancing measure.

Fig. 1

Mathematically, reliability is measured as the number of actions that achieve the intended result divided by the total number of attempts (or actions taken).10 Level 1 (score of 85%) reliability11 reflects intent, vigilance, and hard work; it includes interventions such as awareness and training, feedback mechanisms, memory aids, or basic standardization. When implementing level 1 actions, 1 or 2 failures in 10 attempts should be expected.11 Level 2 (score of 95%) reliability11 reflects human factors and reliability science; it includes interventions such as intentional redundancy, decision aids/reminders integrated into the system (such as the electronic health record), differentiation (such as visual cues to set apart look-alike processes), real-time identification of failures, making the default the desired action, standardizing essential tasks, and scheduling key tasks (time stamping). When implementing level 2 actions, fewer than 5 failures in 100 attempts should be expected.11 Level 3 (score of 99%) reliability11 reflects system design, including making the system visible; clear, unambiguous communication; preoccupation with failure; reluctance to simplify interpretations; sensitivity to operations; commitment to resilience; and deference to expertise. When implementing level 3 actions, fewer than 5 failures in 1,000 attempts should be expected.11
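Expressed as a formula (a restatement of the definition above, using our own notation rather than that of the cited sources), the reliability of an action is

\[ R = \frac{n_{\mathrm{success}}}{n_{\mathrm{attempts}}} \]

so the expected failure rates translate to approximately R = 0.85 for level 1 (1–2 failures per 10 attempts), R ≥ 0.95 for level 2 (fewer than 5 failures per 100 attempts), and R ≥ 0.99 for level 3 (fewer than 5 failures per 1,000 attempts). For example, a level 2 intervention that achieves the intended result on 96 of 100 attempts yields R = 96/100 = 0.96.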

To address education, we created a high reliability toolkit and provided it as a resource to ACA teams for action plan development (see figure, Supplemental Digital Content 1, http://links.lww.com/PQ9/A11). The toolkit aims to support the development of effective action plans and to help all ACA participants connect action items with reliability principles. It assigns each specific action a reliability score based on high reliability principles; these actions range from least to most effective.10,11

At the end of February 2016, we introduced the high reliability toolkit to the organization as a resource for action plan development. In May 2016, we stipulated that all ACA action plans include a minimum of 1 level 2 intervention from the high reliability toolkit. Safety department staff scored the returned action items and assigned each ACA an overall reliability score. ACAs that did not achieve a minimum score of 95% (level 2 reliability) were returned to the stakeholders for action plan modification and incorporation of a higher reliability intervention. Each month, we collated ACA reliability scores and calculated a monthly mean ACA reliability score as an outcome measure.
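The article does not spell out the rule for aggregating item-level scores into an overall plan score. A minimal sketch of the scoring step, under the assumption that a plan's overall score is that of its highest-level action (consistent with the level 2 minimum and the monthly means reported in the Results), might look like the following; all names here are illustrative, not part of the actual system.

from statistics import mean

# Reliability scores per level, as defined in the toolkit (85/95/99).
LEVEL_SCORES = {1: 85, 2: 95, 3: 99}

def plan_score(action_levels: list[int]) -> int:
    """Overall reliability score for one ACA action plan (assumed: highest item score)."""
    return max(LEVEL_SCORES[level] for level in action_levels)

def needs_revision(action_levels: list[int]) -> bool:
    """Plans below 95% (i.e., lacking a level 2+ intervention) return to stakeholders."""
    return plan_score(action_levels) < 95

def monthly_mean(plans: list[list[int]]) -> float:
    """Outcome measure: mean reliability score across a month's completed ACAs."""
    return mean(plan_score(p) for p in plans)

# Example: three ACAs in a month -- one with a level 3 action, one with a level 2
# action, and one with only level 1 actions (flagged for revision).
plans = [[1, 3], [2, 1], [1, 1]]
print([needs_revision(p) for p in plans])  # [False, False, True]
print(monthly_mean(plans[:2]))             # 97.0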

Process improvements implemented included measuring ACA stakeholder satisfaction, establishing a follow-up process with stakeholders, changing the ACA form format/wording, creating an electronic ACA, refining the stakeholder contact list, establishing ACA launching criteria, and centralizing resources. ACA stakeholders provided qualitative satisfaction and feedback data that informed interventions. Initially, the ACA process lacked the follow-up needed to ensure that actions were implemented and evaluated successfully.4 Stakeholders shared that the ACA format and wording needed improvement, so we revised the forms and adjusted the wording to better support the culture within our organization (Fig. 2).

At the start of this project, stakeholders completed ACAs on a paper form uploaded to our electronic safety event reporting system. Converting the paper form to an electronic form accessed within the specific electronic safety event file consolidated all information into the established electronic system. The ACA educational resources and tools are linked directly into the safety event reporting system, and the organization’s internal patient safety Web site now consolidates all ACA resources, such as educational materials, completed examples, and the high reliability toolkit. To ensure the right people were involved in ACAs, the stakeholder contact list was refined and updated. The patient safety department and the subject matter experts in the area where the safety event occurred decide together whether to launch an ACA. This conversation serves multiple purposes: it facilitates a shared mental model between the patient safety team and the ACA key stakeholders, allows the stakeholder experts to provide input before ACA determination/launch, and promotes stakeholder engagement.

In parallel with this work, the organization rolled out additional just culture education to all leaders, supporting fair and just accountability by examining the processes involved in events as well as the decisions that staff make. This encourages fair, effective management of safety events, holding staff accountable for their actions but not for the failures of the system.12 To help ensure that stakeholders responsible for ACAs are held accountable, ACA data are shared transparently and executive leaders support the ACA process. Engaging and empowering leaders to be active participants in ACAs produced a shift: departmental leaders now self-identify the need for an ACA and proactively reach out to the patient safety team for resources.


RESULTS

Establishment of the high reliability toolkit, together with holding teams accountable for implementing high reliability interventions, resulted in an improvement in the overall ACA reliability score. ACAs completed after June 1, 2016, were a minimum of 95% reliable, with some months as high as 97%. All ACAs since June 2016 have had at least 1 level 2 reliability intervention incorporated into the action plan, and some have included level 3 interventions. A centerline shift occurred in the mean ACA reliability score, from 86.4% to 96.1% (Fig. 3). ACA turnaround time, monitored closely as a balancing measure, decreased from an average of 13.2 days to 8.6 days (Fig. 4).

Fig. 2

Fig. 3


DISCUSSION

This QI initiative targeted improvements in ACA at Children’s National Health System. The aim was to increase the reliability scores of ACAs and to move toward higher reliability interventions in action plan design and implementation, with the expectation that fewer déjà vu (repeat) errors/events would occur, resulting in decreased harm. An impediment to this initiative was the lack of peer-reviewed literature on measuring the reliability of cause analysis action plans in real time, as this has not previously been quantified.

Historically, cause analysis effectiveness has been measured by the absence of déjà vu errors and by the number of actions that achieve results divided by the total number of actions attempted.10 To be highly reliable, organizations must achieve the intended results most of the time; both of these measures of success are lagging and do not facilitate real-time, rapid improvement and action. Through this initiative, we created a high reliability toolkit (see figure, Supplemental Digital Content 1, http://links.lww.com/PQ9/A11) to standardize reliability assessment and enable real-time scoring of reliability. Often, only professionals trained in safety science have the knowledge to apply and integrate principles of high reliability; in contrast, all stakeholders responsible for completing ACAs have access to our toolkit, which is implemented as part of the ACA process. The toolkit facilitated stakeholder engagement and empowered participants to apply high reliability principles in a simple, easy-to-understand format that translates those principles into actions and methods tailored to the front-line health-care professional. In addition to increasing the accessibility and usability of high reliability principles, implementation of the toolkit engaged historically disconnected parties and departments (such as nonclinical support areas) in cause analyses.

With any system change, efficiency can be negatively impacted initially. Our expectation was that ACA turnaround time would increase as we added rigidity and structure to the ACA process; as such, ACA turnaround time was selected as a balancing measure. We examined the reasons for variation in ACA turnaround time and found that stakeholder time, resources, and knowledge were drivers. We were surprised that, despite the additional scrutiny and focus on action plan reliability, ACA turnaround time decreased. This was an unexpected finding, which we are examining; a potential hypothesis is that the standardized toolkit, with its additional resources and clear rubric, facilitated expeditious completion of ACAs.

Shifting the focus to quality and reliability had an impact on our organization’s safety culture. Before this QI initiative, it was not unusual to face resistance when launching an ACA, as the ACA process was at times viewed as onerous and of little value. Providing resources and empowering stakeholders to utilize high reliability principles as part of the ACA process increased stakeholder engagement and decreased resistance. On several occasions since this improvement initiative, stakeholders have reached out to the patient safety team requesting that an ACA be launched, whereas in the past the process was traditionally pushed out by the patient safety team. Every month, select stakeholders present ACAs at the established patient safety committee for review of analysis quality and intervention effectiveness/reliability, which has promoted effective spread throughout the organization. Focusing on ACA reliability has been a tangible step in positive culture change, which we will examine further when we reassess our safety culture using the Safety Attitudes Questionnaire.

There are limitations to this QI initiative that should be considered when interpreting the findings. The initiative was performed at a single institution, so its generalizability is limited; however, our institution conducts ACAs similarly to other comparable health care organizations, and we therefore presume that this approach can be successfully trialed elsewhere. An additional limitation is that ACAs were scored by our internal patient safety team, and scoring was not validated with other stakeholders in our organization. This presented a dilemma, as the toolkit had to be created internally and was then used to retrospectively score prior ACAs. Ideally, this would have been done prospectively, but because the high reliability toolkit was created as part of this QI initiative, such an assessment was not possible. The priority was to improve the care of our patients by decreasing repeat safety events. A multicenter study of ACA effectiveness and quality is needed, as peer-reviewed literature on this topic is lacking.

Fig. 4


CONCLUSIONS

Incorporating high reliability principles into ACA action plan development increased ACA reliability scores while decreasing turnaround time. The high reliability toolkit was instrumental in providing an organizational framework for approaching this subset of cause analyses. The toolkit is a way for safety leaders to connect with stakeholders to design highly reliable solutions that can improve safety for patients, families, and staff.


ACKNOWLEDGMENTS

Assistance with the study: Nationwide Quality Improvement Essentials Course Faculty (Andrew Bethune, Dr. Richard Brilli, Karen Heiser, Linda Stoverock, and Jahnavi Valleru), Nafis Khan, Catherine Williams, and Michael Shaw.


DISCLOSURE

The authors have no financial interest to declare in relation to the content of this article.


REFERENCES

1. Hettinger AZ, Fairbanks RJ, Hegde S, et al. An evidence-based toolkit for the development of effective and sustainable root cause analysis system safety solutions. J Healthc Risk Manag. 2013;33:11–20.
2. Children’s Hospitals’ Solutions for Patient Safety. Apparent Cause Analysis & Common Cause Analysis. 2015. Online.
3. Apparent Cause Analysis for Healthcare. 2008. Norfolk, Va.: Healthcare Performance Improvement.
4. Wu AW, Lipshutz AK, Pronovost PJ. Effectiveness and efficiency of root cause analysis in medicine. JAMA. 2008;299:685–687.
5. Wagner C, Merten H, Zwaan L, et al. Unit-based incident reporting and root cause analysis: variation at three hospital unit types. BMJ Open. 2016;6:e011277.
6. Taitz J, Genn K, Brooks V, et al; NSW RCA Review Committee. System-wide learning from root cause analysis: a report from the New South Wales Root Cause Analysis Review Committee. Qual Saf Health Care. 2010;19:e63.
7. Brook OR, Kruskal JB, Eisenberg RL, et al. Root cause analysis: learning from adverse safety events. Radiographics. 2015;35:1655–1667.
8. Weick K, Sutcliffe K. Managing the Unexpected. 2007. 1st ed. San Francisco, Calif.: Jossey-Bass.
9. Langley GL, Moen R, Nolan KM, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2009. 2nd ed. San Francisco, Calif.: Jossey-Bass Publishers.
10. Nolan T, Resar R, Haraden C, et al. Improving the reliability of health care. IHI Innovation Series White Paper. 2004. Boston, Mass.: Institute for Healthcare Improvement.
11. An Outline of Design Concepts for Improving Reliability. 2011. Cincinnati, Ohio: James M. Anderson Center for Health Systems Excellence, Cincinnati Children’s Hospital Medical Center.
12. Boysen PG 2nd. Just culture: a foundation for balanced accountability and patient safety. Ochsner J. 2013;13:400–406.
Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.