The Institute of Medicine's reports on hospital safety, “To Err Is Human”1 and “Crossing the Quality Chasm,”2 were a call to action for the U.S. health care system generally and for hospitals in particular. Among the many points these reports made was the observation that hospitals had lost sight of some very basic safety elements. It was no surprise, then, that the Joint Commission's first set of patient safety goals, issued in 2003, included the use of two patient identifiers to prevent errors in patient identification and recommended steps to prevent “wrong-patient” surgery, or that the second set of patient safety goals, issued in 2004, included handwashing to reduce health-care-associated infections.3 The value of improvement in these areas was obvious, and the leadership at UCLA immediately made 100% compliance a top safety priority.
A hospital-wide hand-hygiene campaign at UCLA began in 1998. We established performance improvement teams and identified nursing-unit-based “champions,” nurses who were team leaders responsible for promoting and carrying out the centralized hand-hygiene program on their particular nursing unit. Staff received education on the safety processes. Safety material given to patients included encouragement for them to ask caregivers about their hand hygiene. Staff gave input on the best hand-hygiene products to purchase and where to place them. Despite these activities, it became evident that there was no way to determine whether the educational and other efforts were having any benefit. We began a measurement program in which nursing unit staff conducted peer audits on inpatient nursing units to measure compliance with the hand-hygiene safety standard. Although compliance was consistently reported at 100%, feedback from patients and their family members, as well as from the staff and physicians who had been patients, indicated that not all staff members adhered to the standards. In addition, this was not an effective way to measure compliance with hand-hygiene rules on the part of physicians and other staff. The challenges for the organization, then, became to establish a method to collect more objective and more reliable safety goal performance data, to report that feedback to care providers, and to measure improvement over time.
As part of a hospital service initiative, student volunteers had been interviewing patients for three years. In 2003, the hospital's administration asked the director of patient relations and the director of the nursing administration project to expand the student volunteer program and organize a group of undergraduate volunteers to conduct patient safety practice observations for the hospital. This program was called Measure to Achieve Patient Safety (MAPS).
To develop the program, volunteer services assisted in the initial recruitment of students. A training program was developed with the assistance of the hospital's Department of Infection Control. The Department of Nursing provided guidance on the use of two patient identifiers, and the Department of Quality Management developed a database for tracking observations. At the outset, it was clear that partnerships with the Department of Nursing and other hospital departments and the hospital's medical staff would be essential. The central task of organizing, training, and supervising students was assigned to an experienced data manager in the Department of Patient Relations.
Even with an interdisciplinary alliance, the project directors agreed that the team would need an additional level of leadership to establish the program, create and document observation processes, and successfully train and supervise students. The hospital budget was not able to support the project with new full-time-equivalent (FTE) staff. The project directors therefore decided to recruit student leaders, a model that had been successfully used in our service initiative, in which we used student volunteers to interview patients regarding the professional behavior of the housestaff they had encountered. Advertisements ran on MonsterTrak and campus Web sites, and response was strong. The project directors interviewed candidates with respect to their availability, motivation and reliability, previous leadership responsibilities, and quantitative skills and to discern any alignment of MAPS with their stated career objectives. The first student leader selected in 2004 was a second-year master's in health care administration (MHA) student. Since then, interns have been premed, nursing, or MHA students.
Student leaders supervise between 15 and 20 students. They are de facto project managers who operate under the direction of the project directors. They are responsible for recruitment, promoting teamwork, creating monthly schedules of student observation times, responding to students' questions, and meeting with nurse leaders to address their needs and concerns. They also continuously fine-tune observation tools, ensure accuracy of observation sheets, and generate monthly reports to the charge nurses, the unit directors, and the medical center's quality leaders, including the chief of staff, the licensing director, and the chief medical officer.
After it became clear that this program was going to help increase compliance with hand-hygiene standards, it was expanded to focus on the challenges of correct patient identification: specifically, the use of two patient identifiers as part of safe medication practices and to ensure proper patient handoffs at the time of transport from a unit for a procedure, to avoid a wrong-patient error.
The UCLA Medical Center and the David Geffen School of Medicine are located on the main campus of UCLA, a fact that proved key to our ability to recruit highly motivated and qualified students. Selection criteria for students who apply to MAPS are their interest and past involvement in volunteer work and their commitment and reliability in other projects. Student leaders make announcements and pass out flyers in lecture halls and health campus clubs. However, most of the students who become part of the program have been recruited during the medical center's general volunteer orientation, when student leaders give a five-minute “pitch” highlighting the benefits that MAPS offers to pre-health-career students.
On acceptance to the program, students attend a one-hour MAPS orientation provided by the student leaders under the supervision of the two project directors. To achieve consistent and reliable observations, students complete six hours of training, during which they learn the theory behind the safety measures. They are taught how to do observational measurements by using a standardized tool. They are taught professional conduct, modes of being courteous and respectful in patient care areas, and ways to respond if questioned by staff, physicians, or patients. New volunteers watch several MAPS volunteers making their observations and then perform at least five supervised observations before beginning independent work.
Feedback from nursing unit directors has proven important to the development of the program and to the content of the training of student volunteers; it has resulted in improvement and clarification of information on the observation tools. For example, when handwashing compliance was low, the nursing directors initially added a “reason” code for noncompliance, such as “Handwashing for less than 15 seconds,” which gave nursing leaders critical information to use in educating their staffs.
Ongoing training is provided at mandatory monthly meetings, which include a meal provided by the medical center. Meeting agendas always include a review of the data reports for the previous month and a discussion of comments, questions, and concerns that arise about the data-collection process. In addition, feedback from nursing staff or other clinical areas is shared so that students know how the data are being used.
Team building and motivation
Maintaining a constant team of dedicated and motivated students requires a combination of structure and activities. Students need to have a sense of ownership in the program, need to feel engaged, and need to understand the importance of their work. Student leaders and project directors work hard to reinforce the value of the data that students collect, to emphasize that this information is helping to save lives in the hospital setting, and to confirm that this valuable information could not have been obtained without their help.
Often, there are groups of student volunteers with the same major who are taking the same courses. MAPS meetings provide a forum within which students can network, make friends, and form study groups. These opportunities have arisen out of the monthly meeting (and meal), which has provided the entire team with a place and time to form bonds and get to know each other outside of the time spent in data collection on the floor.
A regular part of ongoing training in the monthly meetings is the appearance of guest speakers, including school of medicine faculty, hospital administrators, nurse leaders, and department directors. In addition, nurse leaders often stop by the monthly meeting, simply to express their thanks and acknowledge the volunteers' hard work and dedication to helping to make the UCLA hospital as safe as possible.
At the end of the academic year, a dinner is held for the volunteers and student leaders, who are recognized for their hard work and achievement; this has been a way of keeping the volunteers engaged by creating excitement around the project and providing a foundation for returning volunteers. Many of the volunteers take the summer off, but more than half of the volunteers return in the fall to continue with MAPS.
The MAPS program is a win–win relationship among the hospital, the student volunteers, and the school of medicine. The David Geffen School of Medicine at UCLA and the UCLA Health System strive to provide career-education opportunities to undergraduate students who are considering careers in the health care professions. The UCLA Medical Center had a strong history of recruiting reliable, high-performing, pre-health-career student volunteers and of matching them with projects that support the center's mission. MAPS is among the best examples of these programs. MAPS students are educated about the “why” of these safety goal observations, the origin of the safety goals, the Institute of Medicine reports, and Joint Commission standards, and they also have the opportunity to hear presentations by speakers of interest. Participation in MAPS inspires these student volunteers and future care providers to internalize the safety goals and become safety goal advocates of the future. As a result of participation in MAPS, a student is better prepared to make a sound decision about whether a health profession is the best career choice for himself or herself. An even greater benefit is realized by the student leaders, who obtain mentored leadership training in the real-life setting to which they aspire.
Infrastructure needed to start and sustain MAPS
The direct costs to begin MAPS were minimal, and the costs to sustain MAPS remain nominal. Computers, office supplies, food for meetings, and volunteer incentives cost approximately $5,000 per year. The program has now become sufficiently well established that separate office space was created for the students in the program. During the first year of MAPS, two project directors and one project supervisor dedicated between 5% and 15% of their time, totaling approximately 0.3 FTEs. This time was dedicated to training interns, establishing training modules, conducting training, and creating data-collection tools and reports. The nursing project director, in particular, spent considerable time communicating with nursing unit directors and other clinical leaders and establishing credibility of the program among these groups. By Year 4, when most of the systems were established, the MAPS supervisor role had been eliminated, and the project directors estimate that they now dedicate approximately 5% of their time, or approximately 0.1 FTE, to the program.
Data-collection tools have been created for each type of measurement. Initially, these tools were fairly complex and included multiple elements, but, over time, it became clear that simpler tools were better. The form for the handwashing observation appears in Supplemental Digital Figure 1 (at http://links.lww.com/ACADMED/A3), and the form for the medication administration/syringe (i.e., patient identification) observation appears in Supplemental Digital Figure 2 (at http://links.lww.com/ACADMED/A4). These tools gather information on the time the observation was conducted, the patient's room number and bed, the profession of the person being observed, and whether (yes/no) the caregiver was compliant with the required elements of either a handwashing observation or a medication-administration observation. Neither staff names nor patient-identifying information are captured on the observation tools.
Students are trained to fill out these data tools completely for every observation they do. If a caregiver's badge is turned over, or if the volunteer is unsure of the caregiver's position, the volunteer is trained to ask the caregiver at the end of the observation to which profession he or she belongs. To maintain trust in MAPS, the data tools do not, however, ask for specific employee names.
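The fields captured on these tools can be summarized as a simple record structure. The sketch below is purely illustrative; the field names are our own shorthand, not those printed on the actual forms:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """One row on a MAPS observation tool (illustrative field names)."""
    time: str                          # time the observation was conducted
    room_and_bed: str                  # patient's room number and bed
    profession: str                    # profession of the person observed
    compliant: bool                    # met all required elements (yes/no)
    reason_code: Optional[str] = None  # e.g., "Washed for less than 15 seconds"
    # Note: no staff names and no patient-identifying information are recorded.
```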
One of the most important and effective tasks of the volunteers is making two copies of their observation tools at the end of the shift. One copy is turned in to the charge nurse of that specific nursing unit, and the other is kept in a central location for data aggregation and reporting. We have found that much of the success of MAPS has been due to the ability to provide nurse leaders with these “on-the-spot” copies of safety goal performance right after observations were conducted. This real-time data reporting allows real-time interventions with staff.
Data are collected largely on a convenience basis, with the significant factors being the students' class schedules and availability. Most of the observations are obtained between 8:00 am and 12:00 pm and between 3:00 pm and 8:00 pm. Seasonality is also a factor because of the absence of students during term breaks and exam weeks. Typically, data collection is high from October through May, and smaller samples are collected from June through September.
Elements of handwashing
The required elements of a handwashing activity are that the caregivers cleanse their hands with soap and water for 15 seconds or use an alcohol-based hand cleanser between patient contacts. Students are trained to watch for “trigger events”—staff behaviors that signal the start of an observable hand-hygiene opportunity. A caregiver's entrance into or departure from a patient room or a caregiver's contact with the patient or any of the patient's bedside belongings, such as linen, bed railings, or medication and intravenous fluids, all can be such trigger events. For a student to conduct a handwashing observation, he or she must first be able to physically see that the caregiver has had contact with the patient or bedside belongings. If a patient is behind curtains or closed doors, the observation is aborted and no data are collected. Similarly, if a caregiver simply walks in the room to check on the patient but does not have physical contact with the patient or bedside belongings, then he or she is not required to wash his or her hands. When a caregiver does have patient contact, handwashing is indicated, and students collect data. Students are trained to count the number of seconds the caregiver spends in washing his or her hands. If the caregiver washes his or her hands, but not for the whole 15 seconds required, the observed action becomes noncompliant, and the “Washed for less than 15 seconds” box on the tool is checked. Similarly, if the caregiver washes his or her hands for 15 seconds but fails to use soap, the observed action becomes noncompliant, the “Other” box is checked, and the reason is entered in the space given on the tool.
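The compliance rule described above reduces to a small decision procedure. The following sketch is illustrative only, assuming the 15-second soap-and-water threshold and the reason codes named in the text:

```python
from typing import Optional, Tuple

def handwashing_compliant(used_alcohol_rub: bool,
                          washed_with_soap: bool,
                          seconds_washed: int) -> Tuple[bool, Optional[str]]:
    """Classify a single handwashing observation (illustrative sketch).

    An alcohol-based cleanser is compliant, as is soap-and-water washing
    for at least 15 seconds; anything else is noncompliant, with a reason
    code matching the boxes on the observation tool.
    """
    if used_alcohol_rub:
        return True, None
    if washed_with_soap and seconds_washed >= 15:
        return True, None
    if washed_with_soap:
        # Washed with soap, but for under the required 15 seconds.
        return False, "Washed for less than 15 seconds"
    # Any other failure (e.g., long enough but no soap) falls under "Other".
    return False, "Other"
```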
Two elements are required to be measured in a medication-administration observation: (1) Was either the medication administration record (MAR, which is the official document prepared by the pharmacy that contains the patient's name and medical record number and the medication, dose, and route and time of administration) or another medication order sheet taken with the medications to the patient's bedside? And (2) Was the patient identified before medication was administered, by the checking of at least two patient identifiers against the MAR or other order sheet?
Events that trigger a medication-administration observation include several activities by nurses: entering the medication room, handling the Pyxis machine, carrying medications into a patient's room, or walking down the hallway with an MAR or other medication order sheet. Students are trained so that, when they witness one of these events, they ask the caregiver whether he or she will be giving medication to a patient. If the answer is “yes,” the volunteer asks the caregiver for permission to conduct a patient safety observation. However, if the caregiver is behind glass doors in a room of the intensive care unit (ICU), so that the volunteer is physically able to see a trigger event without entering the room, the volunteer is allowed to conduct the observation without having to ask the caregiver for permission.
For element 1 to be compliant, the MAR or other medication order sheet must be taken to the patient's bedside. If the MAR or other medication order sheet is taken into the room but not used at the bedside, then that element is noncompliant, and “No” is checked on the form for that step. For element 2, the patient must be identified by using at least two of the appropriate identifiers (e.g., patient's name and medical record number for inpatients, or patient's name and date of birth for outpatients). The volunteer must be able to clearly hear the caregiver orally check these two identifiers. If the caregiver does not check them out loud, the volunteer is trained to ask the caregiver at the end of the medication administration which two patient identifiers he or she used. For a medication administration to be entirely compliant, both elements must be marked “Yes” on the observation form.
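The two-element rule can be summarized as a short predicate. This is an illustrative sketch with hypothetical parameter names, not part of the actual MAPS tooling:

```python
def med_admin_compliant(order_sheet_at_bedside: bool,
                        identifiers_checked: int) -> bool:
    """Illustrative sketch of the two-element rule: the MAR or other
    medication order sheet must be used at the patient's bedside
    (element 1), and at least two patient identifiers must be checked
    against it (element 2). Both elements must be "Yes" for the
    medication administration to be entirely compliant."""
    element_1 = order_sheet_at_bedside
    element_2 = identifiers_checked >= 2
    return element_1 and element_2
```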
A number of processes are required to prevent wrong-patient surgery, including a formal time-out in the operating room or procedure area. Another step that we identified as reducing the chances of wrong-patient surgery is a formal patient handoff from the bedside nurse to the transporter assigned to take the patient to the operating room or other procedural area. The steps in the handoff include the following: (1) the transporter obtains a requisition slip with the patient's name and medical record number, (2) the nurse checks the patient chart for a procedure order, (3) the nurse and the transporter together check two patient identifiers to ensure that the correct patient is being taken from the unit to the procedure area, (4) a patient transport slip is signed to confirm the handoff, and (5) the chart goes with the patient to the procedure area. Each step is separately confirmed by observation.
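The five handoff steps lend themselves to a simple checklist representation. The sketch below is illustrative, with step wording paraphrased from the text:

```python
from typing import Dict

# The five steps, paraphrased from the handoff procedure described above.
HANDOFF_STEPS = [
    "transporter obtains requisition slip with name and medical record number",
    "nurse checks patient chart for a procedure order",
    "nurse and transporter together check two patient identifiers",
    "patient transport slip is signed to confirm the handoff",
    "chart accompanies the patient to the procedure area",
]

def handoff_compliant(steps_confirmed: Dict[str, bool]) -> bool:
    """Each step is separately confirmed by observation; the handoff is
    compliant only if every step was observed to occur."""
    return all(steps_confirmed.get(step, False) for step in HANDOFF_STEPS)
```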
Monthly compliance reports
Compliance reports are distributed at the end of every month by the intern. Incomplete observations and those incorrectly filled out are not included. Monthly reports are produced for the type of observation (i.e., handwashing, medication administration, and blood draws) and for the area (i.e., the individual floors and positions on each floor). These monthly reports are distributed to charge nurses, unit directors, and hospital administrators. The MAPS Summary Report from May 2008 on handwashing appears in Supplemental Digital Figure 3 (http://links.lww.com/ACADMED/A5). These reports allow nurse leaders to see how well the staffers in various positions on their floors did that month in meeting safety goal performance, and they allow immediate coaching. The monthly reports also help to hold each unit accountable for the compliance rate on its floor; we found that, because of the availability of this information, some floors were able to increase their compliance rates by almost 40% from one month to the next. The number of observations varies, but they average 700–800 per month.
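The monthly aggregation amounts to a tally of compliant versus total observations per unit and position. The sketch below is illustrative only, not the actual database logic, and assumes incomplete or incorrectly filled-out observations have already been excluded:

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def monthly_compliance(
    observations: Iterable[Tuple[str, str, bool]]
) -> Dict[Tuple[str, str], float]:
    """Tally (unit, profession, compliant) records into percentage
    compliance rates per unit and profession for a monthly report."""
    totals = defaultdict(lambda: [0, 0])  # (unit, profession) -> [compliant, total]
    for unit, profession, compliant in observations:
        key = (unit, profession)
        totals[key][1] += 1
        if compliant:
            totals[key][0] += 1
    return {key: round(100.0 * c / n, 1) for key, (c, n) in totals.items()}
```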
The best measure of the success of this approach has been the improvement in key safety measures. The monthly data from the past two years on handwashing, the use of two patient identifiers, and handoffs are shown in Figure 1, Figure 2, and Figure 3, respectively. Handwashing has increased from a baseline of 50% to a level consistently >90%; the nurse's checking of two patient identifiers at the time of medication administration increased to 100%, and the nurse-to-transporter handoff for patients leaving units has increased from 40% to 90%. To compare the value of handwashing with that of other measures necessary to reduce hospital-associated infections, we provide Figure 4, which shows that the UCLA hospital's rates of methicillin-resistant Staphylococcus aureus are substantially lower than those reported from many other hospitals.4
Two of the most important and widely accepted quality and safety imperatives in hospitals are hand hygiene for infection control and the use of two identifiers to prevent wrong-patient procedures. The cornerstones of any clinical improvement project, regardless of the methodologic approach used, are measurement and feedback. The challenge in implementing many safety processes is that compliance or noncompliance with the proper procedure occurs at the bedside and is the responsibility throughout a medical center of hundreds, if not thousands, of individual caregivers, with many “moments of truth” occurring each day. It is a challenge to find a measurement system that can support an authentic improvement effort in a procedure such as handwashing.
Because handwashing is a process aimed at reducing hospital-acquired infections, it might be possible to use the actual outcome itself—rates of infection—as the measurement. However, it has not been universally possible to connect a reduction in infections to an increase in handwashing.5 Because no one seriously questions the value of hand hygiene, this inability is likely to be the result of the multifactorial nature of health-care-associated infections. Nonetheless, the outcome measurement of infection reduction is probably too far removed from daily practice to be a driver of hand-hygiene improvement. Thus, measurement of the process itself is required.
A number of surrogates have been attempted for hand-hygiene measurement, including measuring the amount of hand cleanser used and using electric sensors on the soap or alcohol dispenser. However, the gold standard is direct observation.6 Numerous studies of techniques to improve hand hygiene have used direct observation as the means to measure the effectiveness of a specific intervention. Almost all of these studies have been relatively short, and most have been conducted in ICU settings only.7,8 The challenge to a sustained improvement program is to have both regular measurement over a prolonged period and effective feedback to practitioners across the entire hospital. This is difficult in routine practice because of the labor costs. We tried to use peers in their regular job environment, but this proved impractical because peers themselves are busy with their jobs, may not be motivated to complete a formal tool for each evaluation, and may not be accurate in their observations. At the time of this study, we were unable to afford the approximately three FTEs that are the equivalent of the student effort, which is probably true of most hospitals.
There are a number of methodological problems with using observations in research studies, including interrater reliability and the Hawthorne effect, which may bias research findings.6 All of the problems associated with using observations in research studies are likely to be present in this study. However, our goal was not to do a research project but to improve patient care. Our methodology may lack the rigor necessary to prove a research hypothesis, but it has been more than sufficient to improve safety. Two drawbacks of our program are that it is somewhat seasonal and that we have yet to systematically measure the night shift. The structure of the old hospital, which mostly had two-bed rooms, meant that observation of patients before contact with them was often limited (the patient in the bed farther from the door often would have the curtain pulled for privacy and couldn't be seen from the hallway), and thus our findings may be missing elements of full compliance with the guidelines of the Centers for Disease Control and Prevention.9 Our observation tool is significantly simpler than other such tools used for research purposes.10 Nonetheless, we believe that we have created a hospital-wide focus on and enthusiasm for hand-hygiene practices, of which the measurement is now a supporting element. Evidence of this surfaced recently when we moved the entire hospital to a new building. After the move, there were numerous complaints from nurses that the alcohol hand sanitizers were not well placed for easy hand hygiene before each patient contact. These complaints resulted in a doubling of the number of alcohol dispensers in the new hospital.
It is also important to note that improvement in infection rates requires more than hand hygiene. The UCLA program has included the use of multidisciplinary teams, educational programs, reinforcement of that education by leadership, active monitoring and focused interventions in nursing units, and a focus on surface cleaning. With this approach, we have achieved relatively low rates of several key infections, but we have not reached zero.
One of the challenges for hospitals in improving safety profiles is keeping focus. With ever-increasing outside pressure to add required safety measures, each of which requires resources to achieve improvement, there is a tendency for safety programs to shift those scarce resources from initiative to initiative. A corollary to the challenge of focus is that most of the important safety improvements take more than a few months to achieve results, especially in a large organization. For example, our handoff-for-procedures process took more than two years to achieve acceptable results. One of the first case reports in the Annals of Internal Medicine series on medical errors was a wrong-patient surgery.11 Our hospital had a near-miss about the same time that the report was published. That article advised, and we ourselves separately determined, that defense in depth (i.e., interventions at multiple steps in the process) is the only way to prevent events, such as wrong-patient or wrong-site surgery, that should never happen—so-called “never events.” The handoff of patients when they leave a unit for a procedure was a point of opportunity to prevent a wrong-patient event. We set about creating a structure for the nurse-to-transporter handoff of a patient who is leaving the unit that required checking the schedule for a physician order and checking two patient identifiers. For two years, we achieved no results in addressing the obvious threat of transport of the wrong patient from a nursing unit to a procedure unit—until we began measuring. Even then, it took an additional 14 months to achieve a sustained change in behavior. Thus, the ability to provide measurement and feedback over a sustained period is a critical factor in the success of any quality or safety initiative.
Another lesson learned is that there must be a collaborative relationship between leaders at all levels and those being measured, so that feedback is met with a desire to improve rather than with resentment. Such a relationship doesn't develop by accident. The nurse leader for the MAPS team had been a long-time unit director and therefore had great credibility with other nursing leaders. The chief medical officer was actively involved in translating results to physicians. Such a collaborative approach is much easier to achieve when the initiative is one that is as well accepted by bedside caregivers as is hand hygiene. It has been much more difficult for us to achieve results with some of the Medicare core measures, for which the required processes or outcomes are less well accepted.
The benefits and experience that the undergraduate volunteers receive by being part of this program are central to the success of MAPS. Undergraduate students are highly motivated by the direct exposure that MAPS provides. As one student said, “It is really neat for me to see how the material that we are being taught in lectures is actually applicable in a hospital setting and, more important, in patients' lives.” We have also found that, for many volunteers, this experience of being able to observe models of their own potential future professions has helped to pave their career paths into the medical field or into defined areas of interest.
Another of the great benefits that MAPS has provided to its volunteers has been the personal internalization of safety goals and the desire to be the “safest practitioners” in their fields. Through the data-collection process, volunteers have learned the importance of being detail-oriented in safety goal performance. As one of the volunteers said, “I have realized what a difference simple handwashing can make in a patient's life and that [patients] are trusting [us to] make their hospital stay the safest possible. Because of MAPS, I now hold myself accountable for my own safety goal performance.”
A benefit that we have found to be of most interest to the volunteers is the opportunity to establish relationships with nurse leaders and persons in hospital administration. Many of our volunteers have reported that the guidance and support that they receive from the staff with respect to their personal career goals have been phenomenally helpful. Some of this guidance has been provided via educational meetings with faculty guest speakers that inform volunteers about various topics in the health care setting. These meetings provide a place where volunteers can ask questions or get clarification on different aspects of the medical field. Of the seven student leaders involved with MAPS with whom we remain in contact, three with MHAs are working at health-related companies, two have matriculated in nursing degree programs, one is in health consulting, and one is now applying to medical school.
Because the volunteers are committed to providing the hospital with such high-quality data, it is also of great importance to the UCLA Medical Center that we commit ourselves to giving our volunteers high-quality tools, support, and knowledge for their own future health care professions.
The use of trained student volunteers has been a critical factor in the success of several important patient safety initiatives at UCLA. The MAPS program is a low-cost, high-yield program. The improvement shown in these hard-to-measure processes is evidence of the broad acceptance of the program by our clinical staff and also is a reflection of the effort that goes into acculturating and training students. Most academic medical centers and many community hospitals have access to this highly motivated labor pool.
1 Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human. Washington, DC: National Academies Press; 2000.
2 Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academies Press; 2001.
5 Backman C, Zoutman DE, Marck PB. An integrative review of the current evidence on the relationship between hand hygiene interventions and the incidence of health care-associated infections. Am J Infect Control. 2008;36:333–348.
6 Haas JP, Larson EL. Measurement of compliance with hand hygiene. J Hosp Infect. 2007;66:6–14.
7 Bischoff WE, Reynolds TM, Sessler CN, Edmond MB, Wenzel RP. Handwashing compliance by health care workers: The impact of introducing an accessible, alcohol-based hand antiseptic. Arch Intern Med. 2000;160:1017–1021.
8 Gould DJ, Chudleigh J, Drey NS, Moralejo D. Measuring handwashing performance in health service audits and research studies. J Hosp Infect. 2007;66:109–115.
9 Boyce JM, Pittet D; Healthcare Infection Control Practices Advisory Committee; HICPAC/SHEA/APIC/IDSA Hand Hygiene Task Force. Guideline for Hand Hygiene in Health-Care Settings. Recommendations of the Healthcare Infection Control Practices Advisory Committee and the HICPAC/SHEA/APIC/IDSA Hand Hygiene Task Force. Society for Healthcare Epidemiology of America/Association for Professionals in Infection Control/Infectious Diseases Society of America. MMWR Recomm Rep. 2002;51(RR-16):1–45.
10 McAteer J, Stone S, Fuller C, et al. Development of an observational measure of healthcare worker hand-hygiene behavior: The hand-hygiene observation tool (HHOT). J Hosp Infect. 2008;68:222–229.
11 Chassin MR, Becher EC. The wrong patient. Ann Intern Med. 2002;136:827–833.
© 2009 Association of American Medical Colleges