Over the last decade, multiple reports (Committee on Quality of Health Care in America, Institute of Medicine, 2001; Kohn et al., 2000; McGlynn et al., 2003) have increased awareness that healthcare does not reliably meet patients' needs and can even cause harm. A wide array of improvement efforts has emerged to address the situation and, although hope remains, progress toward the goals set by the Institute of Medicine is frustratingly slow. In light of this situation, the Institute for Healthcare Improvement (IHI) explored utilizing reliability principles, successfully applied in other industries, to support improvement efforts in healthcare. Thus far, these principles and the resulting method (tested predominantly in hospitals) have shown some promising results toward consistent and appropriate care, reduction in defects, and improved outcomes (Nolan et al., 2004; Resar, 2006).
This article describes the application of these reliability methods within 1 ambulatory care setting and presents evolving responses to the following questions:
- Is the IHI reliability method useful in ambulatory care settings?
- Does application of the IHI reliability approach differ for ambulatory care?
SYSTEM AND PROCESS APPROACH
The Committee on Quality of Health Care in America, Institute of Medicine (2001) writes that it is systems improvement, not simply the training and education of professionals, that will lead to the transformation of care. Yet taking a systems and process view of healthcare can be a significant shift for some healthcare professionals. For example, a common assumption in medicine is that quality is determined by the physician's skill, hard work, and expertise rather than by the system (Berwick et al., 2002). This assumption has led to an emphasis on professional autonomy and to variability in how a process is performed (Espinosa & Nolan, 2000; Resar, 2006).
The IHI defines reliability as failure-free operation over time (Nolan et al., 2004). It is the failure-free operation of processes over time that contributes to the reliable operation of the system. In this article, a process is defined as the manner in which work gets done. A process takes inputs from materials, methods, people, environment, and equipment; acts upon those inputs; and delivers an output to one or more customers. These multiple work processes make up the system of care (Deming, 1982, 1994). For the purpose of this article, Deming's definition of system will be adopted: “a system is a network of interdependent components that work together to try to accomplish the aim of the system” (1994, p. 50). Improving the processes of the system toward the system aim is foundational to quality improvement theory and methods.
THE IHI RELIABILITY METHOD
As part of its efforts to learn more about the application of reliability principles in healthcare, the IHI organized a Learning and Innovation Community of clinical teams from hospital settings to test the effectiveness of reliability principles and methods. The community used definitions of levels of reliability based on a mathematical framework, but designed for simplicity and usefulness in healthcare. For example, a 10−1 level of reliability is defined for healthcare as 80% to 90% success rate (1 or 2 failures out of 10 opportunities) as opposed to that number's precise mathematical definition. Strategies closely associated with these levels of reliability were identified, explored, and tested in the community. A 3-tier application model (revised and adapted by the authors for outpatient settings in Table 1) was developed on the basis of this work. Much learning occurred in the community and is documented in the IHI Innovation Series white paper (Nolan et al., 2004).
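The simplified level definitions above can be read as success-rate bands. As a minimal sketch of that reading, the function below classifies a process into an approximate tier from audit counts; the cut points for the 10−2 and 10−3 tiers are our assumptions, loosely following the simplified healthcare definitions, not exact thresholds from the white paper:

```python
def reliability_level(successes, opportunities):
    """Classify a process into an approximate IHI-style reliability tier.

    Thresholds are illustrative: only the 10^-1 band (80%-90% success,
    ie, 1 or 2 failures out of 10 opportunities) is taken from the text;
    the higher bands are assumed analogues.
    """
    rate = successes / opportunities
    if rate >= 0.999:
        return "10^-3"  # roughly no more than 1 failure per 1,000 opportunities
    if rate >= 0.95:
        return "10^-2"  # roughly no more than 5 failures per 100 opportunities
    if rate >= 0.80:
        return "10^-1"  # 1 or 2 failures out of 10 opportunities
    return "chaotic"    # below 10^-1: performance is essentially unpredictable
```

For example, a clinic completing a process in 17 of 20 audited charts (85%) would sit in the 10−1 band under this sketch.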
One key learning from the community was that reliability strategies associated with higher levels of success include consideration of human factors in the design of work processes. Human factors can be described as the “study of the interrelationships between humans, the tools humans use, and the environment in which we live and work” (Kohn et al., 2000, p. 63). Historically, reliability in healthcare has depended on intent, memory, hard work, and vigilance (Resar, 2006). In any work situation, humans are vulnerable to stress, complexity, and fatigue. Reliability science takes such factors into account and supports redesign of work processes to aid memory and accurate task completion. As noted in systems and process thinking, failures are more often a result of poor design of tasks and work flow than of individuals' efforts.
In the authors' experience, progressive implementation of reliability strategies increases the likelihood of successful process improvement. Lower-level reliability strategies (eg, 10−1, Prevent initial failure) should be implemented before strategies for achieving higher levels of reliability (eg, 10−2, Identify failures and mitigate) because the latter usually require more resources, time, and attention (see Table 1).
APPLICATION OF THE IHI RELIABILITY METHOD IN AN OUTPATIENT SETTING
CareSouth Carolina, Inc, is a rural healthcare system in Hartsville, South Carolina, serving 31,000 medically underserved patients. Over the last 10 years, CareSouth has vigorously pursued quality improvement and has experienced significant success. For example, in 1999, CareSouth joined a collaborative to improve the care of diabetes. They applied change concepts from the Care Model (Wagner, 1998) and monitored the success primarily using outcome measures such as HbA1c levels. Within 3 to 6 months, their pilot clinic improved the average HbA1c in their population from 12% to 9%.
Despite this success, however, CareSouth was aware of their failures and frustrated with insufficient progress. Their increasing awareness of the value of improving processes versus focusing only on the outcome led them to test the application of the IHI reliability method. In retrospect, after becoming familiar with reliability principles, CareSouth believes that their failures were due in part to utilizing primarily changes aimed at a 10−1 level of reliability and only a few aimed at a 10−2 level (see Table 1). They also did not intentionally build the changes into the system after testing. These insights became much clearer after an attempt to spread the pilot clinic's success with diabetes to other clinics failed.
The CareSouth spread strategy consisted of teaching about changes from the Care Model such as using training, awareness, checklists, and performance feedback to improve care. These are all lower-level (10−1) reliability methods. However, many strategies for achieving higher (10−2) reliability, which were applied by the pilot clinic and critical to the pilot clinic's improvement, were not recognized as such and, therefore, were not included in the spread strategy. This was most likely due to the organization's lack of familiarity with the language and concepts of reliability. The higher-reliability strategies applied by the pilot clinic were as follows:
- Affordances: Make the desired action the default; for example, patients were sent to the laboratory on arrival at the clinic.
- Build in reminders: Patients needing an HbA1c test were given a pink-colored reminder notice that served as a visual reminder for the laboratory to do an HbA1c test.
- Standardization: To prevent repeat laboratory draws, the laboratory always drew a standard amount of blood that enabled them to conduct additional tests if ordered by the physician later in the visit.
After becoming familiar with reliability methods (with the assistance of IHI), CareSouth began to systematically define their desired level of reliability of processes and progressively apply the relevant strategies. For example, for diabetes, they identified 5 processes that were felt to be based on evidence and tightly linked to improved outcomes, hence requiring high consistency of completion. These processes were as follows:
- 2 HbA1c tests annually, at least 90 days apart,
- nutrition education,
- body mass index (BMI) performed and noted in chart,
- prescription of statin (if indicated), and
- annual low-density lipoprotein testing.
CareSouth identified a new pilot improvement team, which included 2 physician practices. To determine the level of reliability for specific processes, they regularly reviewed 20 records of patients with diabetes. Performance on delivering all 5 processes was determined along with reasons for failures. For example, the team progressively applied the following 10−1 (Prevent initial failure) and 10−2 (Identify failures and mitigate) changes to increase the reliability of obtaining a BMI:
- Staff were educated about the intention to obtain BMI at every visit (10−1 reliability method).
- Performance feedback for staff was initiated (percentage of patients with diabetes with a completed BMI) (10−1 reliability method).
- Body mass index became a data element on the standardized Core Elements flow sheet (10−2 reliability method, make use of habits and patterns).
- The Core Elements flow sheet was placed on the front page of the medical record (10−2 reliability method, Affordance and differentiation).
- Body mass index was assessed multiple times: by the nurse checking in the patient, by the physician, and by the care manager (10−2 reliability method, Redundancies).
- The care manager reviewed the record the day before the visit to determine if a BMI was entered into the registry (10−2 reliability method, Redundancies).
- The job description for all personnel was updated to include the task of ensuring BMI documentation at every visit (10−2 reliability method, Standardization).
- Patients were consulted to help develop educational materials on the basis of initial negative reactions to the way they were approached about BMI (10−3 reliability method, Monitor and feedback).
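The monthly 20-record review described above amounts to computing two measures: a completion rate for each process (to locate failures) and a stricter all-or-none rate (all 5 processes delivered). A minimal sketch of that tally, with entirely hypothetical record data and invented field names:

```python
# Hypothetical monthly audit sample: each chart notes whether each of the
# 5 targeted diabetes processes was completed (only 2 records shown).
records = [
    {"hba1c_x2": True, "nutrition_ed": True, "bmi": True,
     "statin": True, "ldl": True},
    {"hba1c_x2": True, "nutrition_ed": False, "bmi": True,
     "statin": True, "ldl": True},
]

processes = ["hba1c_x2", "nutrition_ed", "bmi", "statin", "ldl"]

# Per-process completion rate: shows where failures are concentrated.
for p in processes:
    rate = sum(r[p] for r in records) / len(records)
    print(f"{p}: {rate:.0%}")

# All-or-none rate: share of audited patients who received every process,
# the stricter measure the team tracked toward 100%.
all_or_none = sum(all(r[p] for p in processes) for r in records) / len(records)
print(f"all 5 processes: {all_or_none:.0%}")  # prints "all 5 processes: 50%"
```

The all-or-none measure is deliberately harsher than the per-process rates: one missed process anywhere counts the whole patient as a failure, which is what makes it a useful signal for reliability work.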
The percentage of patients with diabetes with a completed BMI improved from very low (<20%) to 100%, a level that has been sustained for 6 months. The progressive and intentional approach to improve BMI reliability is reflective of the team's approach to improve the reliability of all 5 components of care. However, different reliability methods were utilized depending on the types of failures identified. For example, a common reason for failure to prescribe a statin (when indicated) was lack of awareness: physicians needed further training on indications and choice of medication (10−1 reliability method).
The results of CareSouth's effort are encouraging. In the pilot clinic, all 5 processes of recommended diabetes care were eventually completed 100% of the time in each of the 20 patients reviewed monthly. Figure 1 shows results for 1 physician. Recently, the outcome measure of HbA1c < 7 for all patients with diabetes in the registry for both physicians in the pilot clinic appears to have improved (Fig. 2).
With the application of 10−3 reliability strategies, the pilot clinic continues to review its data monthly to identify failures and determine additional areas for redesign. For example, chart reviews revealed that failures in HbA1c testing commonly occurred for patients whose visits to the clinic were infrequent. CareSouth identified that this particular patient population required a different work process, and reliability strategies were applied to make necessary improvements. For example, checking the registry each month and calling these patients became a standardized part of the care manager's role. This change resulted in improved HbA1c testing from 56% to greater than 90% in this patient population.
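The care manager's standardized monthly registry check can be thought of as a simple segmentation query: flag patients whose last HbA1c falls outside an outreach window. The sketch below is an assumption-laden illustration (patient identifiers, dates, and the 180-day window are all invented, not CareSouth's actual criteria):

```python
from datetime import date

# Hypothetical registry extract: (patient id, date of last HbA1c test).
registry = [
    ("p1", date(2008, 5, 20)),
    ("p2", date(2007, 9, 15)),
]

def overdue_for_hba1c(last_test, as_of, window_days=180):
    """True if the patient has had no HbA1c within the outreach window."""
    return (as_of - last_test).days > window_days

# The monthly registry check: infrequent visitors the care manager
# should call rather than wait for a clinic visit that may not happen.
today = date(2008, 7, 18)
call_list = [pid for pid, last in registry if overdue_for_hba1c(last, today)]
print(call_list)  # -> ['p2']
```

The point of the design is that outreach no longer depends on the patient appearing in clinic: the registry, not the visit, triggers the work.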
CareSouth began applying reliability methods to many other processes using the 3-tier method. For example, they improved the rate of pain assessment from less than 30% of patients to the current level of 98%. Starting with 10−1 reliability methods, CareSouth noted that there was no standardized pain assessment tool in use. Once a tool was established, organization-wide training was required because it was a new practice for all staff. Performance feedback for staff on use of the tool was given monthly.
The next stage of improvement addressed human factor issues (10−2 reliability methods). For example, taking advantage of existing staff habits and patterns, CareSouth integrated pain assessment into the Core Elements tool, which had become standard in all clinics. In this way, completion of pain assessment became the required default action. Redundancy was built in by having the nurse complete the pain assessment and the provider then check the assessment and discuss it with the patient. Continued review of process performance revealed that patients did not always understand the pain assessment questions, which prevented completion of the assessment. Learning from the previous BMI work, patients were engaged to help revise the pain assessment tool, which also incorporated visual aids (eg, faces with frowns and smiles were added to the form as indicators of pain level).
CareSouth has innovatively applied their learning in clinical reliability to other areas. Using reliability methods, they improved the collections processes, resulting in an approximate $20,000 monthly increase in revenue with a tandem reduction in bad debt. Continued confidence and learning in the principles and methods of reliability have led them to deeply integrate this approach into their structure. For example, they have developed a monthly organizational report with performance results on all key processes currently identified as central to optimal patient care.
The 3-tier approach to achieving intended levels of reliability, adapted by IHI for healthcare, has led to significant improvements in certain inpatient processes. There is less reported experience in outpatient settings. The experience of CareSouth in applying reliability strategies in the outpatient setting has demonstrated improvements in multiple processes, and evidence is emerging that outcomes may be changing as a result. On the basis of the CareSouth case example and the authors' limited experience in helping teams apply reliability methods in the office practice environment, reliability principles appear to show considerable potential for application to ambulatory care settings.
Furthermore, in the authors' experience, outpatient clinical improvement teams from a variety of organizations are consistently enthusiastic about reliability. A common request in evaluations is to incorporate training in reliability principles at the initiation of improvement efforts. Although many reliability strategies encompass familiar improvement methods that have been taught for years, grouping them into a framework that progressively leads to higher reliability seems to provide a useful construct and a welcome pathway for improvement.
For example, a clinic working on improving chronic care described to the authors their effort to adapt clinical guidelines for chronic illness care and translate key recommendations into flow sheets. This effort then led to extensive trainings supported by performance feedback, resulting in levels of improvement far below their hopes. Similar to CareSouth's experience, this team also identified their reliance on 10−1 reliability strategies, and lack of 10−2 reliability methods, as a factor in their disappointing results.
The CareSouth experience suggests that, once learned, these principles can be applied to any process, including those interventions that are foundational to patient-centered care. One key learning from their experience is to start simple, applying reliability methods to a single process that clearly needs to be highly reliable. As skill and understanding increase, gradually move to more complexity.
Progression in complexity may also help identify subgroups of patient populations (referred to as segments by IHI) who have differing needs, such as individuals who are infrequent visitors to the clinic or who have chronic pain, and often require variations in work processes to meet these needs. The segmented approach may help us find the way to achieving highly reliable care for all patients.
Although early in our exploration, the authors have also identified several characteristics of outpatient settings that differ from inpatient settings and may influence the application of reliability methods. In particular, the patient, family, and community are the primary “care managers” in the outpatient setting. We believe that strong collaborative engagement is necessary for reliability in the outpatient environment, and applying reliability to collaborative, self-management support, and patient-centered processes is a critical and undeveloped area. In addition, processes in the outpatient environment address both acute and chronic issues, occur episodically, and are sustained over a period of time, often years. The effect of this difference is not entirely known; however, this episodically defined long-term relationship is expected to be a significant factor in application.
In the inpatient setting, reliability methods have been applied to groups of processes called bundles. As noted by Carol Haraden, PhD, vice president at IHI, “a bundle is a structured way of improving the processes of care and patient outcomes; a small, straightforward set of practices—generally three to five—that, when performed collectively and reliably, have been proven necessary and sufficient to improve patient outcomes” (Institute for Healthcare Improvement, 2006). Supported by strong evidence, these work processes generally occur at a specific time and place and are performed by a specific person, “no matter what” (Institute for Healthcare Improvement, 2006). The authors recognize that there are most likely tightly linked processes crucial to outcomes in the outpatient environment, and the CareSouth example in diabetes suggests the usefulness of such groupings of processes to improve reliability. However, the episodic nature of outpatient processes, occurring over a long period of time, excludes the use of bundles as defined by IHI.
In addition, the lack of close proximity and the episodic, long-term relationship complicate the identification of important related processes that are assumed to be synergistic. Although perhaps not as obvious or closely related in time, other contextual factors might have equal influence but are less visible. CareSouth, for example, instituted a systemwide care manager role to ensure the development of self-management goals and follow-up with patients. It is conceivable that the care manager processes are as tightly linked to improvement in HbA1c levels as the 5 processes that were the focus of improvement at the clinic level.
The awareness of a workforce outcome associated with the CareSouth pilot clinic stimulated much discussion between the authors. The pilot clinic site that applied reliability methods to diabetes care processes has the highest percentage of staff participating in the CareSouth wellness program. This group of staff also has shown significant improvement in their health status (eg, blood pressure control, smoking cessation). Furthermore, overall CareSouth staff satisfaction (measured twice yearly for the entire organization) has more than doubled during the time when the organization has been implementing reliability strategies. These factors give rise to questions regarding the potential effect of applying reliability methods on the working environment itself or, inversely, the working environment's interplay with reliability.
Perhaps the reliability work gives a sense of systematized structure and fosters teamwork toward a common and known aim—factors often associated with results (Resar et al., 2005). Or, perhaps linking important processes supports adult learning and the cognitive strategy of clustering, defined by Stolovich and Keeps (2002) as “different ways to arrange information for easier perception, understanding, retention and recall” (p. 95), supporting the ability of staff to function more effectively. Or perhaps the bundle concept, or the tight linking of processes, is also a human factors issue related to human organization, learning, and motivation. Perhaps the improvement in staff wellness at the CareSouth clinic, which used reliability methods, is indicative of a tight link between the well-being of patients and the well-being of staff. Although the authors currently have more questions than answers, further explorations into useful methods for grouping processes, as well as a better and more complete understanding of the reasons for their effect, are intriguing and vital areas for future development.
Reliability methods appear to have great promise for improving both inpatient and outpatient processes toward reducing harm and meeting patient needs. Regardless of the setting, however, not every process can be made highly reliable. There are simply not sufficient resources, and not every process requires the highest level (10−3) of reliability. Systematic prioritization and grouping of processes for high-reliability improvement, using criteria based in evidence, experience, and multiple perspectives (patient, workforce, and provider), need to be further understood and developed. In addition, identifying and exploring intended and unintended consequences of the reliability journey might help move toward a more enlightened view of one's system, consistent with the aim of a system proposed by Deming (1994) years ago:
“The aim proposed here for any organization is for everybody to gain … over the long term. For example, with respect to employees, the aim might be to provide for them good management, opportunities for training and education for further growth, plus other contributors to joy in work and quality of life” (p. 51).
REFERENCES
Berwick, D. M., Godfrey, A. B., & Roessner, J. (2002). Curing health care. San Francisco: Jossey-Bass.
Committee on Quality of Health Care in America, Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.
Deming, W. E. (1982). Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology Center for Advanced Engineering Study.
Deming, W. E. (1994). The new economics for industry, government, education. Cambridge, MA: Massachusetts Institute of Technology Center for Advanced Engineering Study.
Espinosa, J. A., & Nolan, T. W. (2000). Reducing errors made by emergency physicians in interpreting radiographs: Longitudinal study. BMJ, 320. Retrieved July 18, 2008.
Institute for Healthcare Improvement. (2006). What is a bundle? Cambridge, MA: Institute for Healthcare Improvement.
Kohn, L., Corrigan, J., & Donaldson, M. (Eds.). (2000). To err is human: Building a safer health system. Washington, DC: National Academies Press.
McGlynn, E. A., Asch, S. M., Adams, J., Keesey, J., Hicks, J., DeCristofaro, A., et al. (2003). The quality of health care delivered to adults in the United States. New England Journal of Medicine, 348. Retrieved July 18, 2008.
Nolan, T., Resar, R., Haraden, C., & Griffin, F. (2004). Improving the reliability of health care (IHI Innovation Series white paper). Boston, MA: Institute for Healthcare Improvement.
Resar, R. K. (2006). Making noncatastrophic health care processes reliable: Learning to walk before running in creating high-reliability organizations. Health Services Research, 41(4, Pt. 2), 1677–1689.
Resar, R., Pronovost, P., Haraden, C., Simmonds, T., Rainey, T., & Nolan, T. (2005). Using a bundle approach to improve ventilator care processes and reduce ventilator-associated pneumonia. Joint Commission Journal on Quality and Patient Safety, 31
Stolovich, H., & Keeps, E. (2002). Telling ain't training. Alexandria, VA: American Society for Training and Development.
Wagner, E. H. (1998). Chronic disease management: What will it take to improve care for chronic illness? Effective Clinical Practice, 1
Keywords: healthcare quality; patient-centered care; practice improvement; process improvement; reliability; safety

© 2009 Lippincott Williams & Wilkins, Inc.