Early quality improvement programs were quite basic and usually consisted of chart audits (looking for documentation deficiencies), focused audits (chest pain was a frequently audited chief complaint) and tracking a few metrics such as patients leaving against medical advice or the department's turnaround time. Though these crude audits and measures were just a start, physicians involved in these types of QI activities discovered something surprising. Whatever they focused their sights on would inevitably show improvement, and the culture of their departments changed.
Those departments with robust quality improvement programs frequently fostered a culture focused on patient satisfaction, and the surveys that had bedeviled their physicians subsequently showed improvement. Similarly, the department that was constantly adjusting and changing its processes to improve efficiency also fostered a climate of innovation. Published ideas about customer service in the ED, bedside registration, team triage, discharge teams, and other best practices originated in departments with strong quality improvement ideologies and programs. (J Healthcare Mgt 1998;43:421; Am J Emerg Med 2002;20:267; Ann Emerg Med 2002;32:169; personal data.)
What these number crunchers realized was that data talk. It is one thing to complain about bed waits for admitted patients; it is another to audit and show administration that it takes two hours to get an inpatient bed. The shrewd quality improvement director discovered that data can be used to support requests for staffing, support the purchase of new equipment, and serve as a public relations tool. (When an intubation audit in our department revealed that anesthesia was never called and a surgical airway was never required in 100 consecutive ED intubations, we made certain this good news was widely circulated at our institution.)
Using ED Metrics
Benchmarking has become an important part of quality improvement in the ED. Those who began using ED metrics soon found themselves looking for standards with which to compare their data. One real difficulty has been in the terminology. AMA is a frequently used metric, but many departments had difficulty separating AMA patients from those who left without being seen. These numbers often were cross-contaminated. Many institutions have gone to the term LBTC (left before treatment complete) to capture all patients leaving after being recognized as an ED encounter but before being discharged by the physician. Likewise the term TAT (turnaround time) has been used alongside LOS (length of stay) and throughput time.
If a hospital does not have bedside registration, does the clock start in the waiting room or when the patient has been placed in a room? Clear definition of these metrics and consistent utilization is imperative for departments to benchmark against one another. As many physicians involved in quality improvement have realized, hospitals must be matched for volume, acuity, and other demographics. Tertiary referral centers, for instance, are not analogous to smaller community hospitals. Defining benchmarks for different subsets of hospitals is a challenge for the coming decade.
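The importance of a clearly defined clock start can be made concrete. The sketch below, with illustrative timestamps and field names that are assumptions rather than any particular tracking system's schema, shows how the same visit yields two different turnaround times depending on whether the clock starts at arrival or at rooming:

```python
from datetime import datetime

# Hypothetical event timestamps for one ED encounter; the field names
# are illustrative, not from any specific tracking system.
encounter = {
    "arrival": datetime(2004, 3, 1, 14, 5),     # patient signs in at triage
    "roomed": datetime(2004, 3, 1, 14, 50),     # placed in a treatment room
    "discharge": datetime(2004, 3, 1, 17, 20),  # discharged by the physician
}

def turnaround_minutes(enc, clock_start="arrival"):
    """Turnaround time (TAT/LOS) in minutes.

    The clock_start argument makes the definition explicit: a hospital
    without bedside registration must decide whether the clock starts
    in the waiting room or when the patient is placed in a room.
    """
    return (enc["discharge"] - enc[clock_start]).total_seconds() / 60

tat_from_arrival = turnaround_minutes(encounter, "arrival")  # 195 minutes
tat_from_roomed = turnaround_minutes(encounter, "roomed")    # 150 minutes
```

For this single visit the two definitions differ by 45 minutes, which is exactly why benchmarking departments against one another requires that everyone use the same clock start.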
As Jim Augustine, MD, of the Emergency Department Benchmarking Alliance points out, there are approximately the same number of weather stations in the U.S. as there are emergency departments. While the former constantly shares data, there is virtually no data sharing by EDs. (Presentation Benchmarks 2004, March, Orlando, FL.) This is everyone's loss. The wholesale sharing of census, demographic, and operational data by like hospitals will better enable them to predict which patients will come to their department, when they will arrive, and what services they will require. Is the radiology department staffed to maximally meet the ED's needs? Does scheduling of the psychiatric social worker and case manager parallel the needs for those services in the ED? Most departments do not yet have a handle on those kinds of data, but we can envision a time when these data from large cohorts of hospitals will allow establishment of such guidelines.
Physician profiling has been another part of quality improvement. Most physicians have noted that feedback to them about their practice and performance typically helps to bring in the outliers and foster an environment of accountability. (Ann Emerg Med 2001;38:533.) On the other hand, this aspect is being emphasized less as programs move toward the idea of real process improvement. Many have suggested that operational problems and delays be viewed as system problems rather than individual problems and that solutions be sought accordingly. Though we have de-emphasized the physician specific data at our hospital, we publish it anonymously twice a year, and most physicians like to see how they measure up on various practice parameters.
The advent of computerized tracking systems has had a huge impact on quality improvement efforts around the country. According to Voluntary Hospitals of America, Inc. (VHA), a cooperative of 2,200 nonprofit hospitals and health care systems that works to improve member performance, 44 percent of emergency departments have tracking systems, and 56 percent have an ED information system. (www.vhatools.com/ed.) No longer must the administrator hand-audit to track data on departmental operations. Hand audits are giving way to monthly reports with census data, time interval data, and often resource utilization data ready for retrospective review. Many departments can even get data based on chief complaint. We, for instance, get reports on patient volumes and turnaround times by chief complaint, and therein lies another big challenge for ED quality improvement: devising a standard chief complaint scheme. (Acad Emerg Med 2004;11:1170.) Does chest pain, for instance, include chest pressure, chest heaviness, and chest tightness? What about stabbing chest pain or “pain when I breathe?” Once again, for adequate benchmarking, emergency physicians need to define their terms and standardize the way different subsets of patients are tracked. In the future, one could envision a chief complaint-based benchmark for turnaround time for chest pain, abdominal pain, or toothache.
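A standard chief complaint scheme is, at bottom, a mapping from free-text complaints to canonical categories. A minimal sketch follows; the categories and keyword lists are illustrative assumptions, not a published standard:

```python
# Map free-text chief complaints to canonical categories before any
# benchmarking. The keyword lists below are illustrative only; a real
# scheme would be far more extensive and clinically validated.
CHEST_PAIN_TERMS = ("chest pain", "chest pressure", "chest heaviness",
                    "chest tightness", "pain when i breathe")
ABDOMINAL_TERMS = ("abdominal", "belly", "stomach")

def canonical_complaint(free_text):
    """Return the canonical category for a free-text chief complaint."""
    text = free_text.strip().lower()
    if any(term in text for term in CHEST_PAIN_TERMS):
        return "CHEST PAIN"
    if any(term in text for term in ABDOMINAL_TERMS):
        return "ABDOMINAL PAIN"
    if "tooth" in text:
        return "TOOTHACHE"
    return "OTHER"
```

Under this mapping, “stabbing chest pain” and “pain when I breathe” both roll up into the chest pain category, so turnaround-time benchmarks by complaint compare like with like.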
Taking information technology in the ED a step further and simultaneously turning traditional quality improvement on its head, some departments are forging ahead with ED dashboards. At LDS Hospital in Salt Lake City, the information technology team designed a customized tracking system integrated with all the other information systems at the hospital. As a result, these systems talk to each other. When the lab results on a patient are in, the lab computer tells the tracking system. When patients are waiting in triage, the registration log tells the tracking system. In real time, these dashboards tell how many patients are in the department, what the average turnaround time is, how long it is taking for lab results, how long it is taking to get x-rays, how many people are waiting for beds, how long they have been waiting, the average acuity in the department, and the patient/nurse ratio.
All of these operational data are available in real time so we can address operational problems in real time rather than looking at them retrospectively. We are currently working on “triggers” and “remedies” so that when there is a backlog at triage, for instance, we set up a secondary triage to meet this surge capacity need. When the turnaround time for x-rays exceeds one hour, we start taking ambulatory patients upstairs to the main x-ray department. There are many industry models for using real-time data to make process improvements. When more than six cars are waiting in the drive-through at many McDonald's restaurants, for example, they ask the car whose order is causing the delay to pull into a parking spot, and they hand deliver the order when it is ready. Though EDs have grown accustomed to retrospective quality improvement reports, real-time data and real-time process improvement can be expected to become the norm.
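The trigger-and-remedy idea can be sketched as a set of threshold rules evaluated against a live dashboard snapshot. The x-ray rule below comes from the text; the triage threshold, metric names, and data structure are assumptions for illustration:

```python
# Each trigger pairs a dashboard metric with a threshold condition and
# a remedy. The x-ray threshold (one hour) is from the article; the
# triage queue threshold and metric names are illustrative assumptions.
TRIGGERS = [
    ("triage_queue", lambda v: v > 6,
     "Open secondary triage to absorb the surge"),
    ("xray_tat_minutes", lambda v: v > 60,
     "Send ambulatory patients to the main x-ray department"),
]

def active_remedies(dashboard):
    """Return remedies whose trigger condition the live snapshot meets."""
    return [remedy for metric, condition, remedy in TRIGGERS
            if condition(dashboard.get(metric, 0))]

snapshot = {"triage_queue": 9, "xray_tat_minutes": 45}
# Here only the triage backlog trigger fires, so the single active
# remedy is to open secondary triage.
```

In a working dashboard, a rule engine like this would run on every data refresh and surface the active remedies to the charge nurse in real time.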
Another innovation is an ED data mart, in which all operational data from the tracking system are stored in a warehouse for subsequent mining. We are in the process of performing analyses on these data, and hope in the near future to be able to report time and volume data by chief complaint, disposition, resources used, and age, and to cross reference all of these data. These types of operational data, especially as they become available to larger and larger hospital cohorts, ought to enable us to predict with great specificity what our patients will need from us and when they will need it, and establish benchmarks and guidelines for these services.
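The kind of mining described above reduces, in its simplest form, to grouping warehoused visit records and aggregating. A sketch, with made-up records and field names that are assumptions, not our data mart's schema:

```python
from statistics import median

# Illustrative warehoused visit records; the field names and the
# numbers are made up for this sketch.
visits = [
    {"complaint": "CHEST PAIN", "tat_minutes": 210},
    {"complaint": "CHEST PAIN", "tat_minutes": 180},
    {"complaint": "TOOTHACHE", "tat_minutes": 55},
    {"complaint": "ABDOMINAL PAIN", "tat_minutes": 240},
]

def tat_by_complaint(records):
    """Report visit volume and median turnaround time per chief complaint."""
    groups = {}
    for rec in records:
        groups.setdefault(rec["complaint"], []).append(rec["tat_minutes"])
    return {complaint: {"volume": len(times), "median_tat": median(times)}
            for complaint, times in groups.items()}

report = tat_by_complaint(visits)
# For these records, chest pain shows a volume of 2 with a median
# turnaround time of 195 minutes.
```

The same grouping pattern extends to disposition, resources used, and age, and cross-referencing those dimensions is simply grouping on more than one field at a time.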
Accomplishing these three tasks — devising an ED cohort system, defining metrics, and standardizing a triage chief complaint system — will catapult us into the realm of process improvement. Once these are in place, the opportunities for benchmarking performance will be unlimited. The expected gains in ED quality improvement in the coming decade as we fully utilize available information technology will be mind-boggling. (Acad Emerg Med 2004;11:1206.) Whether practitioners work in a department with modest IT support or at a cutting-edge center, everyone needs the vision to see how information technology can positively affect the care we provide. Every ED needs the ability to measure performance and devise operational improvements based on data. Though we may have started in the QA business to keep our administrators happy, the past two decades have demonstrated that quality improvement can enhance our ability to care for our patients and improve operational efficiency. Remember: data talk. Are you listening?