Emergency Medicine News: November 2005 - Volume 27 - Issue 11
Quality Matters

Matchmaking: Emergency Medicine Benchmarking Basics

Welch, Shari J. MD


Author Information

Dr. Welch is the quality improvement director in the emergency department at LDS Hospital in Salt Lake City, a clinical faculty member at the University of Utah School of Medicine, a quality improvement consultant to Utah Emergency Physicians, and a member of the Emergency Department Benchmarking Alliance.

“Benchmarking is the process of identifying, understanding, and adapting outstanding practices and processes from organizations anywhere in the world to help your organization improve its performance.” — American Productivity and Quality Center


“Benchmarking is an ongoing outreach activity: The goal of the outreach is identification of best operating practices that, when implemented, produce superior performance.” — Bogan and English, Benchmarking for Best Practices

Benchmarking in emergency medicine is fast becoming a craze. Spurred along by evidence-based clinical goals, the Joint Commission on Accreditation of Healthcare Organizations' core measures, and the pay-for-performance (P4P) scheme proposed by the Centers for Medicare and Medicaid Services, all hospitals seem to be engaged in some degree of benchmarking activity. (A Comprehensive Review of Development and Testing for National Implementation of Hospital Core Measures. www.jcaho.org/pms, March 2005; Acad Emerg Med 2002;9[11]:1131; and JCAHO news release, Sept. 16, 2004.)

Unfortunately, many hospital EDs are benchmarked against one another inappropriately with erroneous results. It's matchmaking gone awry! While shared clinical goals based on evidence make good sense in our specialty, the mechanics of how to achieve those goals, given the world of differences in emergency departments across the land, are more problematic. Institutions need to be matched along many characteristics before comparing performance.

For example, the acute MI patient needs to be in the cardiac cath lab within 90 minutes for the best outcome. Now, how do we reach that goal? It requires hospital-specific solutions that take into account myriad benchmarks for emergency medicine, not an easy task.

Consider the institutional variables. Are there enough interventional cardiologists taking call? How can EKGs be done more quickly? Are cath lab personnel ready? Solutions to the problem of meeting this 90-minute benchmark are devised using the resources of a particular institution and its medical community.

Let's focus on a benchmarking basic that often confounds physicians. There are data that define and data that measure. It is best not to confuse the two. Examples of data that define are census, pediatric volume, payer mix, admission rate, transfer rate, trauma designation, and critical care. These are data that characterize a department and help place it in the right cohort group for benchmarking. These data are typically independent of the practices and policies of the emergency department. They are the equivalent of the “genetic make-up” of a department. By gathering and understanding these data, directors and managers can seek to characterize their department properly.

On the other hand, there are data that measure, although in 2005 these data are still inconsistently defined and utilized. These metrics would include turnaround time or length of stay, walkaway rate (against medical advice, leaving without being seen, leaving before treatment is completed), unscheduled follow-up, and stays greater than six hours, to name a few of the very crude measures utilized in ED benchmarking in the past. In the future, directors and managers will increasingly search for more granular operational data for benchmarking purposes.
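To make the distinction concrete, here is a minimal sketch in Python; the field names and example values are hypothetical, not drawn from any survey.

from dataclasses import dataclass

@dataclass
class EDProfile:
    """Data that DEFINE a department: largely independent of its
    practices and policies -- its 'genetic make-up'."""
    annual_census: int        # total visits per year
    pediatric_pct: float      # % of visits categorized as pediatric
    admission_rate: float     # % of visits admitted
    transfer_rate: float      # % of visits transferred out
    trauma_designation: str   # e.g., "Level I" or "none"

@dataclass
class EDPerformance:
    """Data that MEASURE a department: reflect how it operates."""
    median_los_minutes: float       # length of stay / turnaround time
    walkaway_rate: float            # AMA + LWBS + left before completion, %
    stays_over_6h_pct: float        # % of visits lasting more than 6 hours
    unscheduled_followup_pct: float

# A hypothetical suburban ED: the profile places it in a cohort;
# the performance numbers are then compared within that cohort only.
profile = EDProfile(48_000, 22.0, 14.5, 1.2, "none")
performance = EDPerformance(165.0, 2.1, 4.8, 1.0)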


Apples and Oranges

Unfortunately, hospital administrators tend to compare the emergency departments within their domains head to head, no matter how dissimilar they are, such as comparing a trauma center with a suburban ED. Even more ludicrous, they compare the patient satisfaction scores of the labor and delivery ward with those of the ED. (Think about the absurdity of this practice: young mothers who deliver bundles of joy in the hospital are routinely compared, in terms of patient satisfaction, with patients who are injured or ill!) They like to compare their real estate side by side, no matter how flawed the methodology.

This points to an often overlooked principle in benchmarking: Hospitals should be benchmarked against like hospitals from the appropriate cohort group. Like is compared with like. We can compare Wendy's and McDonald's along many parameters. What would be the point of benchmarking Chez Panisse and McDonald's? Both feed people, but the comparisons stop there.

Table 1 shows characteristics for three hospitals from the same health care system, with identities masked. Notice the differences along many parameters. It would be inappropriate and misleading to benchmark these hospitals against one another.

[Table 1: Characteristics of three masked hospitals from the same health care system]

Notice the last line in the table. In the state of Utah, the health department categorizes hospitals using a formula that assigns each a Case Mix Index (CMI). (Utah Emergency Department Encounter Data, Bureau of Emergency Medical Services and the Office of Health Care Statistics, July 2001.) The formula, which uses length of stay, total charges, and severity of illness data, helps place hospitals in the peer group most appropriate for benchmarking. A CMI of 1 is the state average; a CMI of 1.15 means that a hospital's overall case mix requires 15 percent greater intensity of resource use than the state as a whole. Hospitals in Utah have CMIs ranging from 0.43 to 1.44. As can be seen, none of the three hospitals was assigned to the same peer group. (See Table 2.)

[Table 2: Peer group assignments for the three hospitals by Case Mix Index]

For the purposes of benchmarking in Utah, hospitals are assigned to one of six groups based on the type of care, the location, and the CMI. This scheme might be applied across the country for benchmarking emergency departments.
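A sketch of how such a scheme might be encoded follows; the Utah report's exact groupings and cutoffs are not reproduced in this article, so the thresholds and labels below are hypothetical.

def assign_peer_group(teaching: bool, urban: bool, cmi: float) -> str:
    """Assign a hospital to one of six hypothetical peer groups
    using type of care, location, and Case Mix Index (CMI).
    A CMI of 1.0 is the state average; 1.15 means the case mix
    requires 15% greater intensity of resource use."""
    if teaching:
        return "Group 1: teaching/referral"
    if urban:
        if cmi >= 1.1:
            return "Group 2: urban, high CMI"
        if cmi >= 0.9:
            return "Group 3: urban, average CMI"
        return "Group 4: urban, low CMI"
    return "Group 5: rural, average CMI" if cmi >= 0.9 else "Group 6: rural, low CMI"

# Hypothetical examples spanning Utah's reported CMI range (0.43 to 1.44):
print(assign_peer_group(teaching=True, urban=True, cmi=1.44))
print(assign_peer_group(teaching=False, urban=False, cmi=0.43))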

The VHA is a cooperative providing programs and services to 1,400 nonprofit hospitals to help them improve operational efficiency and clinical effectiveness. Based in Irving, TX, with 18 local offices, it maintains a large database informed by its online survey process. Jeanne McGrayne, a consultant for VHA, has started cohorting hospitals along several parameters: volume, admission rate, and trauma level designation, for example. (McGrayne J. Consultant VHA, presented at ED Benchmarks 2005, March 4, 2005, Orlando, FL.) She found that staffing varied along these parameters, suggesting that something different goes on in emergency departments based on these categorizations.


ED Benchmarking Alliance

Similarly, the Emergency Department Benchmarking Alliance (EDBA), a group of high-performance American emergency departments that share a commitment to quality, has found differences in operating statistics by volume and acuity. A major data accrual effort took place a year ago that revealed some interesting trends. EDBA stratified its hospitals based on parameters involving volume and hospital type. (See Table 3.)

[Table 3: EDBA stratification of member hospitals by volume and hospital type]

Jim Augustine, MD, of EDBA found a number of interesting trends based on this categorization; notably, the operating statistics of hospitals in each category were very similar. (EDBA Survey, Fall 2004, Unpublished.) Again, this suggests that practice environments can be characterized along quantifiable parameters, and that this quantification can in turn provide a stratification scheme for benchmarking.

Pediatric volumes are defined as the percentage of patients seen in the ED who are categorized by the hospital as pediatric patients. This was not defined in an absolute fashion for the EDBA survey, so the EDs may have used the ages of 12, 14, 15, or 17 to define the upper age limit for pediatrics. Small hospitals in general have a higher percentage of patients in the younger age group. Some referral centers make an effort to reduce the number of pediatric patients seen in the ED by working cooperatively with the metropolitan children's hospitals and educating the community on the more effective use of those EDs. Most hospitals that do not actively discourage visits by young patients see more than 20 percent pediatric volume. It has been observed that pediatric patients presenting to community EDs have lower admission rates and lower overall acuity than adult patients.

By definition, admissions refer to the percentage of patients seen in the ED and then placed in an inpatient area of the hospital. This is generally a very well-defined number because all hospitals have uniform requirements for coding those patients. Admission rates are highest in referral centers and in the large metropolitan EDs (particularly those large EDs that see a higher percentage of adult patients). Admission rates are inversely correlated with transfers to other hospitals!

Transfers are the percentage of patients seen in the ED and then transferred from the ED to another ED or hospital. This is generally a very well-defined number because all hospitals are required under EMTALA to manage, document, and maintain data on transferred patients. Small EDs have the highest percentage of transfers; referral centers have the lowest. No hospitals seeing more than 50,000 patients per year transfer more than one percent of the ED patients. Only a few large community hospitals have more than one percent transfer rates. The transfers from these sites are composed mostly of patients suffering major trauma, those requiring interventional cardiac services, those requiring pediatric specialty services, or patients suffering acute mental health emergencies in a hospital that does not offer those services. Transfer rates seem to correlate inversely with admission rates.

EMS arrivals and admissions are tracked as the percentage of ED patients who arrive by ambulance and the percentage of those patients who are admitted. This includes ambulances serving the community as 9-1-1 providers and ambulances performing routine transports. Each institution has a predictable profile of these numbers from year to year. Large community hospitals appear to have the highest arrival rates by EMS; only small EDs had less than 10 percent arrival by EMS. Most hospitals admitted a significantly higher percentage of patients who arrived by EMS than of those arriving by other forms of transportation. Over the entire study group, about one-third of patients arriving by EMS were admitted, indicating that EMS services are, overall, being used by significantly ill or injured patients. For most hospitals, arriving by EMS predicts admission about four to five times as often as arriving via private means.
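The arithmetic behind that last claim, sketched with hypothetical counts chosen to match the proportions reported above:

# Hypothetical annual counts for a large community ED.
ems_arrivals, ems_admitted = 9_000, 3_000         # about 1 in 3 EMS arrivals admitted
walkin_arrivals, walkin_admitted = 41_000, 2_900  # private transport / walk-in

ems_rate = ems_admitted / ems_arrivals            # 0.333
walkin_rate = walkin_admitted / walkin_arrivals   # about 0.071

# Arriving by EMS predicts admission roughly 4 to 5 times as often:
print(f"EMS admission rate:     {ems_rate:.1%}")
print(f"Walk-in admission rate: {walkin_rate:.1%}")
print(f"Ratio: {ems_rate / walkin_rate:.1f}x")    # about 4.7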

Visits per square foot is not reported in the literature, but EDBA has begun tracking this statistic to get a feel for how compact an ED is and how variable ED footprints are across the alliance. The metric is defined as the annual number of ED visits divided by the gross square footage of the emergency department. EDBA members averaged 3.4 annual visits per square foot, with some as high as 6.1.
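A worked example, with hypothetical figures near the EDBA average:

annual_visits = 55_000   # hypothetical annual ED census
gross_sq_ft = 16_200     # hypothetical gross ED footprint
print(annual_visits / gross_sq_ft)  # about 3.4 visits per square foot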


Operational Data

Emergency departments interested in quality improvement and efficiency are just beginning to scratch the surface in acquiring the necessary operational data. To get a feel for the utilization of resources by various types of EDs, EDBA surveyed member hospitals about the numbers of services per 100 ED visits. These services included EKGs, radiographs, CT scans, and respiratory treatments. Though many facilities had no access to these data, there was one apparent trend: the highest utilization rates are in the largest hospitals.
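Normalizing raw counts to services per 100 visits puts EDs of different sizes on a common footing; a minimal sketch with hypothetical counts:

annual_visits = 48_000
services = {"EKG": 12_500, "radiograph": 21_000,
            "CT scan": 6_200, "respiratory treatment": 2_300}  # hypothetical

for name, count in services.items():
    rate = count / annual_visits * 100
    print(f"{name}: {rate:.1f} per 100 ED visits")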

The trends described so far involve data that define an institution and help place it in the appropriate benchmarking group. Data that measure, by contrast, serve as indirect measures of quality, and performance on these measures differs by hospital cohort group. Other data that measure include stays greater than six hours, complaint ratios, operational metrics (turnaround times for the lab, x-ray, etc.), and unscheduled follow-up.

The average ED length of stay indicates the number of minutes spent in the ED for each group of patients (admitted, discharged, and combined). It also may be referred to as turnaround time (TAT) or throughput. It is generally a poorly defined number because each hospital identifies time markers differently. In particular, does the clock start at registration? At triage? In the treatment room? Many EDBA ED leaders said these intervals are increasing. Length of stay appears to be fairly consistent within ED types and volumes served; the largest centers have the longest lengths of stay in all measured intervals.
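A small sketch illustrates why the number is poorly defined: the same hypothetical visit yields three different lengths of stay depending on which marker starts the clock.

from datetime import datetime

# Hypothetical timestamps for a single ED visit.
registration = datetime(2005, 11, 1, 14, 2)
triage       = datetime(2005, 11, 1, 14, 15)
treatment    = datetime(2005, 11, 1, 14, 40)
departure    = datetime(2005, 11, 1, 17, 10)

for label, start in [("registration", registration),
                     ("triage", triage),
                     ("treatment room", treatment)]:
    los = (departure - start).total_seconds() / 60
    print(f"Clock starts at {label}: LOS = {los:.0f} minutes")
# Prints 188, 175, and 150 minutes for the same visit.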

Walkaway rates count any patient recognized by the ED who leaves before treatment is completed; EDBA examined these rates against average length of stay and type of facility. It is clear that patients in larger EDs tolerate longer-than-average lengths of stay before they leave without completing treatment.

There is a growing amount of literature on those who leave before treatment is complete, but no study identifies time thresholds based on the volumes or characteristics of the hospital and ED. Some factors do seem to affect the rates of patients who leave before treatment is complete, and these factors must come into play in different ways in different-sized facilities. It is possible that patients' expectations differ as they enter these EDs, but it is also possible that operational and design characteristics drive patient behavior. Even with scattered data points, it appears that EDs processing patients more rapidly are rewarded with low walkaway rates. The relationship does not appear to be linear, but when EDs are grouped by size and characteristics, efficient EDs preserve their patient volumes. (See Table 4 for an analysis of EDs by grouping.)

[Table 4: Walkaway rates by ED size and grouping]

The take-home message is this: Benchmarking to improve performance is here, and will be a part of quality improvement efforts for years to come. Benchmarking data should be incorporated into QI efforts and utilized by the practitioners in a particular shop, who may then devise homegrown solutions to their operational problems. Expect benchmarking data for all ED operations (turnaround time for x-ray, lab, CT, etc.) to arrive soon and with increasing granularity. The first step, though, is to define ourselves.

Lest we be made to benchmark against the wrong facilities, we must know our own data so that our performance can be gauged against an appropriate benchmarking partner. Find your cohorts! Like an online dating service, each emergency department needs to find its prospective mates through data accrual and sharing. ED-Harmony.com? This is matchmaking for emergency medicine!

© 2005 Lippincott Williams & Wilkins, Inc.
