CARDIAC ARRESTS in hospitals are usually preceded by observable signs of deterioration, which often appear 6 to 8 hours before the arrest occurs. Studies suggest many patients exhibit signs and symptoms of medical deterioration that go untreated prior to a cardiac arrest.1–4 Schein et al. found that 84% of patients had documented observations of clinical deterioration or new complaints within 8 hours of cardiopulmonary arrest; in 70% of patients, deterioration of either respiratory or mental function was observed during this time.1 Hodgetts et al. noted that 17% of cardiac arrests occurred in patients who were being cared for in an inappropriate clinical area.5 Cardiac arrest was potentially avoidable in 95% of those patients; in contrast, it was potentially avoidable in 60% of patients cared for in appropriate areas.5
In 2004, the Institute for Healthcare Improvement (IHI) launched the 100,000 Lives Campaign in an effort to rapidly and dramatically improve patient outcomes. IHI is a not-for-profit organization whose mission is to improve the safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity of care delivery. IHI identified six strategies aimed at saving 100,000 lives in U.S. hospitals:
- Deploy rapid response teams.
- Improve care for acute myocardial infarction.
- Prevent adverse drug events.
- Prevent central line infections.
- Prevent surgical site infections.
- Prevent ventilator-associated pneumonia.6
One of the campaign's six interventions was to support hospitals in deploying rapid response teams. As a result of this and other quality initiatives, many hospitals across the United States developed rapid response systems in an effort to reduce cardiac arrests and other sudden life-threatening events.
Rapid response systems include processes of event detection, response triggering, response process improvement, and an administrative structure.7 Rapid response systems empower staff and family members to quickly summon a designated group of critical care clinicians to the bedside to evaluate a patient's worsening condition. The team can intervene to treat the acutely ill patient at the bedside, or the team may assist in the immediate transfer of the patient to an ICU.
Room for improvement
In general, intervention by a rapid response team in the United States is triggered by one aspect of a patient's condition at a time, typically an extreme change in a particular vital sign. For example, a significant rise or drop in BP or a significant change in respiratory rate (for example, either below 10 or above 30 breaths per minute) would trigger a call.
While this single-aspect approach has been effective, what if organizations could identify at-risk patients even before such an extreme change? What if a system could respond to multiple aspects of a patient's condition at the same time and identify at-risk patients at the first sign of a subtle change in vital signs?
A few hospitals have moved to implement such systems through an early warning scoring (EWS) system. Combined with a rapid response system, an EWS has been associated with significant reductions in both cardiac arrests and unscheduled admissions to the ICU.8
IHI recently finished a virtual web series, called an Expedition, to coach hundreds of organizations on how to implement these systems. The key points learned from this work and the story from one exemplar hospital are outlined here.
How EWS works
An EWS is a physiologic scoring system, typically used in general medical-surgical units, designed to detect deterioration before patients experience a catastrophic medical event. The score is accompanied by a descriptive step-by-step guide or algorithm of actions to take based on the patient's assessment score.
An EWS can add another layer of early detection to the rapid response team system, helping staff recognize high-risk patients before their condition deteriorates.4,9,10 Although the idea of an EWS is still relatively new in the United States, this concept is being successfully applied in many hospitals in the United Kingdom.4,10,11
Developing an EWS
Stony Brook University Medical Center (SBUMC), a New York State hospital on Long Island, comprises Stony Brook University School of Medicine and Stony Brook University Hospital. SBUMC is the only tertiary care hospital and Level 1 trauma center in Suffolk County. SBUMC currently has 591 beds, including 58 in adult critical care and 52 in pediatric and neonatal critical care.
SBUMC implemented its rapid response team in December 2005, first testing it on a small scale (during one shift in one unit) and then expanding it when the organization deemed the initiative successful. The rapid response team initiative was fully deployed throughout the hospital (24 hours a day, 7 days a week) by February 2007. Our rapid response team consists of an NP or medical resident, a critical care nurse, and a respiratory therapist.
After developing the rapid response team, however, SBUMC found that not all at-risk patients were being identified; nurses didn't have a complete set of criteria to identify a failing patient early and trigger a call to the team. Staff began to look for ways to improve the initiative.
SBUMC staff members first learned of successful implementation of a pediatric EWS (PEWS) system at a National Institute for Children's Healthcare Quality conference in March 2007. A presentation by Cincinnati Children's Hospital indicated that the implementation of its PEWS system decreased mortality, length of stay, and code blue events outside the ICU. Cincinnati Children's Hospital adapted and applied a PEWS assessment tool originally developed by Royal Alexandra Hospital for Sick Children in Brighton, England.12 Staff attending the session brought the tools developed by Cincinnati Children's Hospital back to SBUMC and suggested testing similar tools in our pediatric units.
After our staff modified the PEWS tool obtained at the conference for SBUMC use, the general pediatric unit tested it during the summer of 2007 (see Pediatric Early Warning Score [PEWS]). The PEWS tool monitored three physiologic indicators: behavioral, cardiovascular, and respiratory. We added a grid identifying age-appropriate limits for hypotension to the tool to make the cardiovascular indicator assessment more accurate.
The direct care nurse or clinical assistant (unlicensed assistive personnel, or UAP) obtains vital signs and determines the PEWS score every 4 hours; the UAP collects the vital signs, and the direct care nurse ensures the accuracy of the PEWS assessment. The nurse scores each indicator according to a specific behavior or range of vital signs. Each physiologic indicator is assigned a score, ranging from 0 to 3, depending on the assessment outcome. A score of 0 is considered normal or acceptable. Scores ranging from 1 to 3 are considered abnormal or unacceptable.
Scores for all indicators are added to create the PEWS score. The total PEWS score is assigned a color based on the sum of these numbers: a total of 0 to 2 is green, 3 is yellow, 4 is orange, and 5 or higher is red.
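The tally-and-color step described above is simple enough to sketch in code. The following Python is an illustrative sketch only; the function and parameter names are our own and not part of the SBUMC tool, but the thresholds follow the article (0 to 2 green, 3 yellow, 4 orange, 5 or higher red):

```python
# Illustrative sketch of the PEWS tally: each indicator is scored 0 (normal)
# to 3 (most abnormal), the three scores are summed, and the total maps
# to a color that drives the response algorithm.
def pews_color(behavioral: int, cardiovascular: int, respiratory: int) -> str:
    total = behavioral + cardiovascular + respiratory
    if total <= 2:
        return "green"
    if total == 3:
        return "yellow"
    if total == 4:
        return "orange"
    return "red"  # total of 5 or higher
```

For example, a patient scoring 1 on behavior, 1 on cardiovascular, and 2 on respiratory would total 4 and display as orange on the unit's board.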
We developed an algorithm/process flow diagram to depict the actions required based on the resulting color. Each color requires the nurse to complete a designated series of action items. This algorithm ensures standardization in the application of the patient assessment and adequate communication of the patient's score. It also validates the nurse's decision to contact the patient's attending healthcare provider or the rapid response team during off hours. If at any time the attending healthcare provider doesn't respond within 10 minutes of the page, the nurse is directed to call the rapid response team for assessment and treatment of the patient.
Here are the actions mandated by each color:
- A yellow score requires the reassessment of the patient by the charge nurse on duty. If the charge nurse confirms that the score is accurate, he or she determines whether intervention is required, documents assessment and intervention in the medical record, and reassesses the patient within 2 hours.
- An orange score requires reassessment by the charge nurse and notification of the first- or second-year medical resident. The resident alerts the senior resident and attending healthcare provider of the change in the patient's medical condition, and medical staff takes appropriate action. The direct care nurse reassesses the patient within 1 hour.
- A red score requires notification of the rapid response team and resident. The resident alerts the senior resident and attending healthcare provider, who are all expected to respond to the patient's bedside. The rapid response team and primary care team collaborate on the patient's plan of care. The direct care nurse reassesses the patient within 1 hour.
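The color-based algorithm above amounts to a lookup from color to an ordered set of notification steps plus a reassessment window. This sketch condenses the bullets above; the function and constant names are our own, not SBUMC's:

```python
# Illustrative condensation of the color-based response algorithm.
# Reassessment windows (in hours) follow the article's bullets.
REASSESS_HOURS = {"green": 4, "yellow": 2, "orange": 1, "red": 1}

def actions_for(color: str) -> list[str]:
    """Return the ordered notification steps for a given PEWS color."""
    if color == "green":
        return ["continue routine monitoring and scoring every 4 hours"]
    if color == "yellow":
        return ["charge nurse reassesses, intervenes if needed, and documents"]
    if color == "orange":
        return ["charge nurse reassesses",
                "notify first- or second-year resident",
                "resident alerts senior resident and attending provider"]
    # red: the full team is expected at the bedside
    return ["notify rapid response team and resident",
            "resident alerts senior resident and attending (bedside response)"]
```

A separate rule sits on top of this table: if the attending doesn't answer a page within 10 minutes, the nurse calls the rapid response team regardless of color.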
Documentation and communication
Around the time we were testing the EWS, SBUMC was involved in the implementation of its electronic medical record (EMR). We agreed to document the PEWS assessment by writing in the hybrid paper medical record during this testing phase. This allowed us to modify the assessment tool more quickly and limit the resources devoted to the electronic build of the assessment until we finalized the tool.
To assist with communication of the PEWS, SBUMC purchased a magnetic white board to hang on the wall at the nursing station. Room numbers were posted on the white board instead of patient names to ensure privacy. Color magnets placed on the board in the appropriate space provided a display of the patient score, allowing a quick, “at a glance” view of the unit's acuity level and helping in the assignment of patients to nurses on the unit. Soon after testing, we agreed that resident-to-resident handoffs for patients with an orange or red score would take place at the bedside to provide timely knowledge of at-risk patients on the unit.
We implemented the PEWS initiative on our remaining pediatric unit, a hematology/oncology unit, in the fall of 2007.
Extending the initiative to adult patients
After deploying the PEWS in all our pediatric units, we developed a modified EWS (MEWS) system for use with our adult patients (see Modified Early Warning System [MEWS]). We measured these physiologic elements in our MEWS: respiratory rate, heart rate, systolic BP, level of consciousness, and temperature. Compared with PEWS, the MEWS assessment was more complicated because it scored deviations both above and below the normal or acceptable range. We also expanded the algorithm for responding to the MEWS score to include reassessment by the direct care nurse every hour for 4 consecutive hours to ensure patient stability. If the patient didn't remain stable for 4 consecutive hours, the team considered transferring the patient to a higher level of care.
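The article doesn't publish SBUMC's MEWS bands, so the respiratory-rate thresholds below are hypothetical. The sketch only illustrates the structural difference from PEWS noted above: values both above and below the normal range earn points, and larger deviations earn more:

```python
# Hypothetical banded scoring for one MEWS element (respiratory rate).
# These thresholds are illustrative, NOT the actual SBUMC bands; the
# article states only that deviations in either direction are scored.
def score_respiratory_rate(breaths_per_min: int) -> int:
    bands = [
        (9, 3),    # 9 or fewer: severely low
        (11, 1),   # 10-11: mildly low
        (20, 0),   # 12-20: normal range, no points
        (24, 1),   # 21-24: mildly elevated
        (29, 2),   # 25-29: moderately elevated
    ]
    for upper_bound, score in bands:
        if breaths_per_min <= upper_bound:
            return score
    return 3       # 30 or more: severely elevated
```

Each of the five physiologic elements would be scored this way and the points summed into the total MEWS, just as the three PEWS indicators are.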
We also developed an assessment tool and an algorithm and applied them to our obstetric patients (OB-EWS) in early 2008. Following the lessons learned from testing the PEWS on the pediatric units, we documented the scores manually in the hybrid paper medical record and posted the scores on a magnetic white board on the wall at the nurses' station.
Although the direct care nurses and nurse managers generally deemed the PEWS/MEWS assessment tool appropriate, staff members believed the tool generated a high incidence of inappropriate yellow and orange scores, each of which required the charge nurse to reassess the patient.
We conducted a prevalence study in November 2007 to determine common triggers for the yellow scores and the actions required as a result. In this study, we reviewed the medical records of 10 random patients on each unit over a 24-hour period for accuracy of their scores and whether the identified algorithm had been followed. The study showed that 52% of the patients with an elevated MEWS had an altered respiratory rate. Further analysis of the data demonstrated that the clinical assistants had inaccurately assessed the respiratory rate. RNs' observation of clinical assistants during vital signs collection revealed a need for more education on the correct technique for measuring respiratory rate.
We conducted another prevalence study in January 2008 to reassess the tool and to assess staff accuracy in using it. The incidence of altered respiratory rate triggering the elevated score dropped to 29%, and accuracy in staff assessment of the patient was measured at 99.1%.
Once we finalized our PEWS and MEWS tools, we worked with our information technology department to build the screen into our EMR. SBUMC staff worked with the vendor to develop an electronic assessment screen that pulled vital signs into the assessment form and tallied the EWS score. The screen also highlighted required actions for the abnormal scores, such as “alert rapid response team.”
About 1 year after the initial phase of the EWS initiative, we decided all orange and red scores would automatically generate a text page to the rapid response team director. When the director receives a red score, a patient assessment is required. An orange score prompts the director to call the patient's unit to offer assistance. This systematic change greatly increased the number of rapid response team calls triggered each month.
We're also testing early warning scores in the automated screening of severe sepsis in our general medical units. The scores include altered vital signs and mental status—signs of severe sepsis. We're working with our information technology staff to develop a method of pulling the EWS into an automated severe sepsis screening process. This process is in its infancy. Two of our medical units are currently testing it to assess efficacy and accuracy.
Barriers and shortcomings
Our EMR system doesn't allow the nurse to easily enter vital signs and EWS scores at the bedside. Although computers on rolling carts are available, many nurses document their assessments in the computer at the nursing station, contributing to a delay in process flow and timely documentation. Placing color magnets on the white board to indicate a change in the score creates an added step and causes further delays.
Ideally, providing pocket computers to our nursing staff to document the patient's EWS and vital signs would encourage a more efficient process. An electronic board depicting the unit's collective EWS scores would also improve efficiency. We need to explore alternate funding sources so we can implement this automated process.
Although we fully instituted the rapid response team in 2007, we continue to hear of incidents in which healthcare providers criticize the nursing staff for calling the rapid response team based on the nurses' concerns about the patient. Automatically paging the rapid response team for orange and red EWS scores eliminates the nursing staff's hesitancy to initiate a rapid response team call.
Results of instituting system
Code blue calls outside of the pediatric and neonatal ICU decreased postimplementation: 21 codes were called in 2008, compared with 15 codes in 2009 and 9 codes in the first two quarters of 2010. However, analysis of length of stay and mortality after implementation didn't reveal an improvement from the baseline period.
Reviewing adult rapid response team data for 2007 to 2010 revealed a modest decrease in the number of code blue calls as the number of rapid response team calls increased. The number of rapid response team calls over this period grew from 361 in 2007 to 1,225 in 2010. Consequently, the percentage of codes occurring outside of ICUs decreased from 51.67% in 2007 to 47.33% in 2010.
Our number of rapid response team calls now holds steady at about 100 calls per month. Rapid response team calls increased significantly when we initiated an automated alpha page to the rapid response team director for all orange and red EWS scores. The rapid response team director assesses these patients to determine if intervention is required and proceeds accordingly. The rapid response team perceives this automated notification and required follow-up as extremely valuable in preventing further patient decline.
Improving patient outcomes
A standardized acuity assessment and communication method to recognize and avoid patient decline may reduce patient mortality and length of stay. Standardization increases reliability and decreases variation in the delivery of patient care, and pairing a standardized assessment tool with a corresponding algorithm or guide of action steps ensures reliable delivery of that care. For pediatric patients, the observed-to-expected mortality dropped slightly postimplementation, as did the average length of stay. Unfortunately, a decrease in adult length of stay and observed-to-expected mortality wasn't realized. All mortalities are reviewed to identify opportunities for improvement, which are brought to the appropriate quality venue for action. Our staff believes the EWS is beneficial in the early recognition and prevention of further patient decline, and we continue to evaluate the system and collect data.