Welch, Shari J. MD
Though U.S. emergency physicians take comfort in the knowledge that they practice in one of the most advanced health care systems in the world, the fact is that this system is highly unreliable and fraught with error. In his landmark article “Error in Medicine,” Lucian Leape recounted a number of disturbing statistics demonstrating how flawed health care can be. (JAMA 1994;272:1851.)
Autopsy studies have shown that 35 percent to 40 percent of deaths are caused by missed diagnoses. One study showed the average ICU had 1.7 errors in treatment per patient per day. When looking at operational errors, the data are even worse. Positive urine cultures were either untreated or not followed up 52 percent of the time. More sobering still, the Joint Commission on Accreditation of Healthcare Organizations reported in 2002 that over a seven-year period, more than half of sentinel events involving death or permanent injury occurred in the emergency department. (ED Management 2002;14:133.) How can such a high-tech medical environment prove so operationally unreliable? Can anything be done to change this?
When those involved in health care process improvement talk about reliable processes, they are referring to something specific and quantitative. According to Thomas Nolan, a leading authority in health care performance improvement and a senior fellow at the Institute for Healthcare Improvement, reliability is defined as failure-free operation over time from the point of view of the patient. (Improving the Reliability of Health Care, Innovation Series 2004 White Paper, Institute for Healthcare Improvement, www.ihi.org; accessed July 28, 2006.) Put another way, it is the capability of a process, procedure, or health service to perform its intended function in the required time under existing conditions. Reliability equals the number of actions that achieve the intended result divided by the total number of actions taken.
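Nolan's definition reduces to simple arithmetic. As a minimal sketch (the function name and example counts are illustrative, not from the article):

```python
def reliability(intended_results: int, total_actions: int) -> float:
    """Reliability = actions that achieve the intended result / total actions."""
    if total_actions <= 0:
        raise ValueError("at least one action is required")
    return intended_results / total_actions

# Example: 90 of 100 surgical patients received timely antibiotics.
print(reliability(90, 100))  # 0.9
```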
Almost all studies that investigate the reliability of the application of clinical evidence conclude that its reliability level is 10⁻¹, or one or two defects for every 10 attempts. In fact, many processes in health care function at only 80 percent reliability. A number of international studies have shown an error rate in hospitalized patients of 10 percent (or 10⁻¹), which is the level at which most health organizations currently perform. (CMAJ 2004;170:1678; N Z Med J 2002;115:U271.) For example, ACE inhibitor administration for left ventricular systolic dysfunction, thrombolytic administration in under 30 minutes, and survival rates for acute MI are all at less than 95 percent efficacy, or 10⁻¹.
If 90 percent of surgery patients receive antibiotics within an hour of surgical incision, the reliability of that process is 10⁻¹. When we talk about a process having a reliability of 10⁻², we mean fewer than five defects per 100 attempts, and so forth. To take a process to a higher level of reliability (for example, fewer than five failures out of 1,000 opportunities, or 10⁻³), a rigorous methodology must be deliberately applied. Examples of varying reliability in medicine and the associated terminology are shown in Table 1.
The Institute for Healthcare Improvement has suggested that any process with a reliability of 10⁻¹ (an 80 percent to 90 percent efficacy rate) has no common process articulated, meaning no standardization in care. This includes many procedures and processes in medicine, from DVT prophylaxis of inpatients to follow-up of outpatient urine cultures. A randomly chosen health care worker would be unlikely to be able to describe that operation. A process or procedure with 10⁻² reliability has medium to high variation. A 10⁻³ reliability indicates a well-designed system with low variation and cooperative relationships. (For comparison, aviation passenger safety is measured at 10⁻⁶. Nuclear power plants must demonstrate a design capable of operating at 10⁻⁶ before they can be built.)
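The order-of-magnitude labels above can be made concrete with a small classifier. This is a sketch using the thresholds the article describes (fewer than five failures per 10ⁿ opportunities qualifies as 10⁻ⁿ); the function name and the 10⁻⁶ cap are my own assumptions:

```python
def reliability_tier(failures: int, opportunities: int, max_tier: int = 6) -> str:
    """Label a process 10^-n when it shows fewer than five failures
    per 10**n opportunities, up to the 10^-6 level cited for aviation."""
    rate = failures / opportunities
    tier = 0
    while tier < max_tier and rate < 5 / 10 ** (tier + 1):
        tier += 1
    return f"10^-{tier}" if tier else "worse than 10^-1"

print(reliability_tier(15, 100))   # 10^-1: one or two defects per 10
print(reliability_tier(3, 100))    # 10^-2: fewer than five per 100
print(reliability_tier(4, 1000))   # 10^-3: fewer than five per 1,000
```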
What should be our goal in emergency medicine and in health care overall? To have our patients receive appropriate care in a timely fashion 99.9 percent of the time? Would that be a worthy goal? Would that be good enough? To put this reliability into perspective, consider other industries: if they operated at 99.9 percent reliability (and very little in medicine reaches even that level), we would be looking at 84 unsafe landings a day, a plane crash every third day, 16,000 pieces of lost mail per hour, and 37,000 ATM errors per hour!
If we compare encounters per fatality with other industries, the airlines have less than one fatality per 100,000 encounters. More hazardous is driving in a car: one fatality per 10,000 encounters. Frighteningly, in health care there is more than one fatality due to adverse events per 1,000 encounters. This is more hazardous than mountain climbing or bungee jumping!
Reliability principles are used successfully in other industries (such as aviation, nuclear power, and flight operations on the deck of an aircraft carrier) to improve the overall performance of complex systems and compensate for the limits of human ability. Reliability principles can improve safety and the rate at which a system constantly produces desired outcomes. When designing for a process with 10⁻¹ reliability, basic failure prevention is the goal. A common process may not yet be articulated. Some basic strategies for achieving 10⁻¹ reliability, or an 80 percent to 90 percent success rate, are having common equipment, standard orders, a personal checklist, feedback on compliance, and awareness and training, as well as a pledge to work harder next time.
Relying on such basic strategies should take the process to an 80 percent to 90 percent success rate, but it won't get to the next level of 10⁻², or fewer than five failures per 100 opportunities. To achieve 10⁻² performance, more sophistication is needed. The methodology here focuses on identifying and mitigating basic failures; it is essentially an effort to error-proof the process by making errors visible and adapting processes to prevent or mitigate them. Strategies for 10⁻² reliability include using decision aids and reminders built into the system, making the desired action the default, building in redundancy (double checks) and scheduling, taking advantage of existing habits and patterns, and standardizing the process.
One of the most difficult concepts for physicians to embrace in medicine is standardization. Dr. Roger Resar of the Institute for Healthcare Improvement points out that medicine is currently overwhelmed with concerns about patient safety, and that the best way to eliminate error is to eliminate unnecessary variation. (Aust Fam Physician 2005;34:1.) Staff members are at the greatest risk of error when there is no standardized care. Dr. Resar suggests that local organizations adopt a clear, simple, and standardized approach to patient care, and that this care be continually refined. Standardization has a number of benefits for health care processes, including easier training and competency assessment and the opportunity to apply evidence-based principles. Implementation, support, feedback, and learning from defects are all easier when processes are standardized.
Physicians and staff frequently object to standardization because they have a mindset of designing for perfection, with 100 percent of patients being successfully treated. This is the approach physicians take with their patients one at a time. When asked to devise a standardized process, physicians can always think of outliers and exceptions to a protocol, and they use them as an objection to standardization. While the 100 percent goal is admirable for clinical outcomes, it becomes a stumbling block in reliability efforts. Rather, the goal is to make the process foolproof for the majority of patients, then refine it to take it to higher levels of reliability. Providers can then study the failures to learn how to make the process more reliable. This shift in mindset is critical for health care workers to understand. Letting go of the perfection goal is a necessary first step on the road to reliability for health care processes, and nowhere more so than in the emergency departments of this country. (See Table 2.)
Some of the resistance to standardization may have to do with earlier approaches to standardization. Typically, experts devised a protocol, and this was the end of the design. The protocol was considered the finished product. Customization was infrequent, there was no plan to ensure compliance, and there was little leadership. The newer approach to standardization is to begin with good evidence-based medicine or system-based knowledge and to encourage customization. Changes are monitored and defects (unwanted, unexpected, or unplanned outcomes) trigger the move to a learning system. Leadership drives the expectation of compliance. Instead of hoping that physicians will opt into the standardization, the new model requires an explanation for why they opt out.
It may be worth comparing medicine with the aviation model for standardization. In contrast to medicine, the aviation industry assumes that errors and failures are inevitable, and it designs systems to absorb them, building in multiple buffers, automation, and redundancy. Glance into a cockpit and you will see extensive feedback given to the pilot in duplicate and triplicate. Second, procedures are standardized to the maximum extent possible. Specific protocols must be followed for trip planning, operations, and maintenance. Compare that with the breast-beating at the mention of cookbook medicine. What would you say if the pilot announced on your next flight that he doesn't want to do cookbook aviation and has his own technique for landing the plane? Pilots use checklists and regularly take proficiency examinations. Standardization and safety have been institutionalized in aviation.
Other research findings point to cultural differences between aviation and medicine. Health care workers are more likely to deny the effects of stress and fatigue, to find it more difficult to discuss errors, and to have a harder time accepting personal susceptibility to error when compared with cockpit crews. (Brit Med J 2000;320:745.)
It has been suggested that medicine lags behind other industries that are safety-critical. Medicine needs to shift from a blame culture in the face of adverse events to a learning culture. (Brit Med J 2000;320:811.)
Even assuming that most medical processes in emergency medicine will eventually be standardized, standardization alone likely will not provide reliability higher than 95 percent. To go further will take additional effort. Dr. Resar and his team at IHI propose a three-tiered strategy for moving from 10⁻¹ to 10⁻² reliability, and they note that increased reliability does not occur by accident. (See Table 3.)
In this three-tiered paradigm, the first step involves designing simply to eliminate initial failure. This is not a lofty goal; it is simply to reach 10⁻¹, or 80 percent to 90 percent efficacy, by eliminating the most common defect in the process right off the bat. The second step involves redundancy: Can double checks be built in for steps in the process? In emergency medicine, these double checks might come from radiology or pharmacy. This represents a model for building reliability into ED operations for clinical care: Clinical information obtained at triage prompts the placement of standardized order sets on the chart. These orders are activated at a number of points in the patient's flow through the ED, and are reinforced by redundancy from pharmacy, radiology, admitting, and so forth.
The final step in the three-tiered approach involves identifying the critical failures inherent in the process and trying to remedy them through process redesign. This is, in effect, anticipating the outliers, the cases that physicians might point to as exceptions to a protocol, and then designing around them.
A good analogy can be found in the banking industry. In the early days of ATMs, a recurrent human error became apparent: Customers frequently left their ATM cards in the machine after receiving their cash. This was a burden for banks, which had to reissue or return the cards. Research showed that if cash was dispensed before the ATM card was returned, there was a predictable risk that the customer would forget the card. The process was studied and refined. In anticipation of this failure, the process was modified either to return the patron's card earlier in the transaction or to use swipe readers so that the card never leaves the patron's hand. The error of leaving ATM cards behind was eliminated by changing the process. (Brit Med J 2000;320:770.)
Efforts at improving a clinical process should start with small pilot studies and a carefully selected target sample. This methodology is vastly different from the medical research model. Instead of random sampling with a large population of patients, a subset of patients is chosen that is easy to target for success. At this stage, the process is not designed for the outliers, or as Dr. Resar calls them, “onesies and twosies” (referring to the one to two percent of a population that will fail in a process). Rather, the design is for the overall population. Then the process is refined by making small changes, measuring the results, and conducting small repetitive audits to monitor the responses to change. These audits are called tests of change. Over time the reliability of the process will increase, and it is all done at the local level.
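The cycle of small repeated audits described above can be sketched in a few lines. The sample sizes and success probabilities here are purely illustrative assumptions, not figures from the article:

```python
import random

def audit(process, sample_size: int = 20) -> float:
    """One small audit: sample a handful of recent cases and report
    the observed reliability (successes / cases sampled)."""
    return sum(process() for _ in range(sample_size)) / sample_size

# A stand-in for a clinical process with a given chance of success.
def make_process(p_success: float):
    return lambda: random.random() < p_success

# Audit after each successive test of change (illustrative numbers).
for p in (0.80, 0.90, 0.96):
    print(f"observed reliability ~ {audit(make_process(p)):.2f}")
```

Each pass through the loop plays the role of one test of change: make a small modification, audit a small sample, and watch whether the observed reliability moves in the right direction.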
Ultimately, reliability in emergency medicine will be possible with the help of information technology and clinical decision support. We will be able to achieve 10⁻² reliability with standardized order sets prompted through technology. Imagine this: You are evaluating a young woman with abdominal pain. The computer recognizes the pattern and combination of clinical data: childbearing age plus abdominal pain plus a positive pregnancy test. Your communication device goes off and politely asks, “Do you want an Rh factor sent? Would you also like an ultrasound?” You might be distracted by another critically ill patient or be eating a ham sandwich, but the system makes the care you provide more reliable. Imagine a system that alerts you that your patient has pneumonia and that the time window for antibiotics is about to close. Imagine a system that uses the latest evidence-based medicine to prompt you regarding critical actions. It is the technological equivalent of a string tied around your finger, a comprehensive reliability program with information technology at its heart. This is our future. We should embrace these concepts and incorporate them into our practices today.
© 2006 Lippincott Williams & Wilkins, Inc.