Original Articles

Comparing the Outcomes of Reporting and Trigger Tool Methods to Capture Adverse Events in the Emergency Department

Lee, Wen-Huei MD*; Zhang, Ewai MD*; Chiang, Charng-Yen MD*; Yen, Yung-Lin MD*; Chen, Ling-Ling BSN, RN; Liu, Mei-Hsiu BSN, RN; Kung, Chia-Te MD*; Hung, Shih-Chiang MD*‡

doi: 10.1097/PTS.0000000000000341


A reliable, feasible, and valid monitoring system to identify adverse events and errors can enhance patient safety in the emergency department (ED).1 An adverse event is defined as physical injury or potential harm arising from medical services or interventions. However, it is difficult to capture adverse events in medical services.2 Commonly used patient safety performance indicators or measurement methods include incident reporting, patient safety indicators, and trigger tool methods. Each applies a different methodology to measure adverse events and requires varying amounts of resources.3,4 Incident reporting is now widely adopted by healthcare-governing agencies in developed countries.3 However, the reporting rate is a major concern in terms of incident reporting system effectiveness5,6; systems with low reporting rates cannot reliably or accurately measure the burden of adverse events.7

Patient safety indicators capture potential adverse events by screening hospital discharge data. This is considered an effective alternative to other more resource-intensive methods.8 However, incidents derived from administrative data cannot always be confirmed as actual adverse events. For example, death occurring in a patient with an expected low-mortality diagnosis may be a natural clinical consequence rather than a result of an adverse event. Furthermore, certain patient safety indicators, such as postoperative hemorrhage, may be relevant only to hospital-level surgical specialties.9

Trigger tool methods use a medical chart review process facilitated by the identification of words or events in charts that indicate potential adverse events.10 They reportedly identify more adverse events than other methods11 but demand more manual chart review effort.4 Although different methods and indicators have various advantages and disadvantages, difficulties exist in their application to ED settings because most of them focus on events related to hospitalization, such as surgical complications and nosocomial infections.1,7

Little is known about which methods best detect adverse events in the ED or about the characteristics of the adverse events captured by different identification methods. A recent study showed the superiority of trigger tool methods over reporting methods as active surveillance for ED patient safety.12 However, other studies have suggested that combining measurement methods may be a promising solution for identifying adverse events in the ED.13,14 A combined approach might capture additional adverse events or errors and may identify events with unique characteristics that more accurately reflect patient safety problems in the ED environment. Here, we developed a monitoring system that combined incident reporting and trigger tool methods to capture adverse events and errors in the ED. We then compared the number, type, and physical impact of the adverse events captured by the trigger tool and reporting methods applied to the same cohort of patients admitted to the ED. This study aimed to investigate which method better captured adverse events and to describe their characteristics.


METHODS

Study Design and Setting

We conducted a prospective observational cohort study for a 1-year period from January 1, 2013, to December 31, 2013, at an academic medical center in Southern Taiwan. There were 110,675 ED visits during this time. Adult nontrauma patients in the treatment area and patients boarding in the observation area while awaiting ward admission were treated by board-certified emergency physicians. This study was approved by the institutional review board of the study hospital.

Participant Selection

The study population comprised adult nontrauma patients who had received medical care in the treatment or observation area of the ED of the study hospital. Patients who left the ED before being seen by a physician and incidents that could not be validated by medical chart review or interview with medical personnel were excluded. Reported incidents proven after investigation to not be patient safety issues were excluded from the analysis. Patients with in-hospital cardiac arrest were excluded if the arrest did not occur in the ED. The remaining incidents during the study period were enrolled as study incidents.

Methods and Measurements

We implemented 2 principal method categories (incident reporting and trigger tool) to identify adverse events or errors. The incident reporting and trigger tool method categories comprised 2 and 5 tracks, respectively (Fig. 1). We adopted 2 tracks to collect the incident reports. First, a voluntary electronic adverse event reporting system installed in the computerized health information system of the ED allowed medical personnel to report any adverse events or errors (track 1). Second, we collected incident reports from the ED nurses' daily logbook; the ED nurses of each working shift were required to report any adverse events, errors, or violations of daily routine practice to the on-duty head nurse on a mandatory basis (track 2). A research assistant collected incidents from the reporting methods each workday.

FIGURE 1. Study flow diagram of the monitoring system for adverse events and errors in the ED.

Five tracks of trigger tool methods were used to detect incidents. These trigger events included the following: 72-hour revisit admissions (track 3), unexpected cardiopulmonary resuscitation (track 4), in-hospital cardiac arrest of ED patients (track 5), unexpected transfer to an observation room with continuous vital signs monitoring (track 6), and unscheduled transfer to the intensive care unit within 24 hours of admission to a general ward (track 7). These trigger events were retrieved from the computerized health information system every month, with the exception of track 4, which was reported by medical personnel. If a patient encountered more than 1 incident during the same ED visit, each incident was counted separately. Overlapping incidents, that is, those identified by more than 1 track, were counted as 1 incident in the calculation of the subtotal number of incidents in the same method category.
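The overlap-counting rule above can be sketched as follows. The record structure, field names, and example incidents are hypothetical illustrations, not taken from the study's actual information system:

```python
# Deduplicate incidents identified by more than 1 track within the same
# method category: overlapping incidents count once toward the subtotal.
# Each record is (visit_id, incident_id, track); tracks 1-2 are reporting
# methods and tracks 3-7 are trigger tool methods, as in the study design.

REPORTING_TRACKS = {1, 2}
TRIGGER_TRACKS = {3, 4, 5, 6, 7}

def subtotal(records, tracks):
    """Count unique incidents captured by any track in the category."""
    return len({(visit, incident)
                for visit, incident, track in records
                if track in tracks})

records = [
    ("V001", "fall", 1),
    ("V001", "fall", 2),       # same incident caught by both reporting tracks
    ("V001", "med-error", 2),  # same visit, different incident: counted separately
    ("V002", "arrest", 4),
]

assert subtotal(records, REPORTING_TRACKS) == 2
assert subtotal(records, TRIGGER_TRACKS) == 1
```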

Measurement Outcomes

An adverse event was defined as a physical injury or potential harm arising from medical services or interventions. An error was defined as the failure of a planned action to be completed as intended or the use of an incorrect plan for a specific aim. Errors resulting in physical injury or potential harm were classified as adverse events.

The primary measurement outcomes were the number and positive predictive rate of adverse events and errors in each track. The positive predictive rate was defined as the proportion of adverse events or errors occurring among the study incidents in each track. The secondary outcome was the classification of identified adverse events according to the schema of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), which groups events into the following 5 primary classifications: impact, type, domain, cause, and prevention and mitigation.15 We focused on the physical impacts and types of adverse events captured in the ED.
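The positive predictive rate defined above is a simple proportion of confirmed events among flagged study incidents; a minimal sketch, using made-up counts rather than the study's data:

```python
def positive_predictive_rate(confirmed, flagged):
    """Proportion of flagged study incidents in a track that were
    confirmed as adverse events or errors on review."""
    if flagged == 0:
        raise ValueError("track flagged no incidents")
    return confirmed / flagged

# Hypothetical track: 40 confirmed events among 160 flagged incidents.
rate = positive_predictive_rate(40, 160)
print(f"{rate:.1%}")  # 25.0%
```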

Physical impacts included definite physical injury or potential harm. If an adverse event would most likely cause physical injury, irrespective of whether definite or detectable harm was identified, we considered it as resulting in potential harm. We classified adverse event types as communication, patient management, or clinical performance. The communication subclassification addressed communication problems between patients, medical staff, and nonmedical staff. Patient management involved problems with improper delegation, failure to track or follow up, incorrect referral or consultation, or the questionable use of resources. Clinical performance included the full range of failures that could lead to iatrogenic events during the preintervention, intervention, and postintervention phases of clinical care.

Identification of Adverse Outcomes

Two research nurses, 2 emergency medicine resident doctors, and 4 board-certified emergency physicians participated in the review process. All of them attended a 4-hour training course on adverse event identification before the research program was launched. In the training course, they together reviewed 36 incident reports (track 1) collected during December 2012. The criteria used to define and classify adverse events were clarified by consensus (Supplemental Digital Content 1).

A 2-stage process was used to identify the adverse outcomes. A research nurse or resident doctor investigated the study incidents and summarized the patient demographics, presenting illnesses, medical histories, and incident descriptions. Two or 3 board-certified attending emergency physicians then reviewed the summarized results and medical charts to identify adverse events and errors. We adopted a widely used 6-point Likert scale to determine the reviewers' confidence in identifying errors and adverse events, with 1 and 6 indicating no evidence and strong evidence of an adverse event or error, respectively.16 If both reviewers scored the level of certainty as greater than 4, the study incident was classified as an adverse event or error. If there was disagreement between the 2 investigators, a third investigator reviewed the study incident; the final determination of an adverse event or error was based on the agreement of 2 of the 3 investigators. Interrater agreement of the reviewers' scoring was good for the identification of adverse events (κ statistic, 0.86).
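The 2-stage adjudication rule and the interrater agreement statistic can be sketched as follows; the function names, example scores, and ratings are illustrative assumptions, not the study's actual data or software:

```python
from collections import Counter

def adjudicate(score_a, score_b, score_c=None):
    """Apply the 2-stage rule: certainty > 4 from both reviewers on the
    6-point Likert scale confirms an adverse event/error; on disagreement,
    a third reviewer breaks the tie (agreement of 2 of 3 decides)."""
    a, b = score_a > 4, score_b > 4
    if a == b:
        return a
    if score_c is None:
        raise ValueError("disagreement: third reviewer score required")
    return sum([a, b, score_c > 4]) >= 2

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two reviewers' binary calls (1 = adverse event)."""
    n = len(ratings_a)
    observed = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[k] * counts_b[k]
                   for k in set(counts_a) | set(counts_b)) / n ** 2
    return (observed - expected) / (1 - expected)

assert adjudicate(6, 5) is True        # both certain: confirmed
assert adjudicate(6, 2, 5) is True     # tie broken by third reviewer
assert adjudicate(6, 2, 1) is False    # 2 of 3 reviewers unconvinced
```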

The investigators also classified the physical impact and type of the identified adverse events according to the JCAHO definition. If the 2 investigators disagreed on the classification of an adverse event, a final decision was made after discussion.


Statistical Analysis

The 7 tracks were grouped into 2 methods (reporting and trigger tool methods) for the data analysis. For demographic and descriptive data, intergroup comparisons were made using Student t test. Categorical variables were reported as numbers and percentages; intergroup comparisons were made using the χ2 test or Fisher exact test where appropriate. The level of significance was set at a P value of less than 0.05 (2-tailed). All analyses were performed using SAS software (Version 9.3; SAS Institute Inc, Cary, NC).
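For a 2 × 2 table, the χ2 test used for the categorical comparisons can be sketched in a self-contained way using the identity that a 1-df chi-square P value equals erfc(√(x/2)); the counts below are hypothetical, not the study's data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test for a 2x2 contingency table [[a, b], [c, d]],
    without continuity correction; returns (statistic, two-sided P value).
    With 1 degree of freedom, P = erfc(sqrt(statistic / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, math.erfc(math.sqrt(stat / 2))

# Hypothetical comparison: 30/100 vs 10/100 incidents with confirmed events.
stat, p = chi2_2x2(30, 70, 10, 90)
print(f"chi2 = {stat:.1f}, P = {p:.4f}")  # chi2 = 12.5, P = 0.0004
```

When any expected cell count is small, the Fisher exact test mentioned above would be the appropriate substitute.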


RESULTS

During the study period, there were 69,327 adult nontrauma ED visits. Of the 2828 incidents included in the 7 tracks of the monitoring system, 179 incidents (6.3%) were excluded according to the exclusion criteria (Fig. 1). A total of 2649 incidents were analyzed, representing 3.8% of the study population. A total of 20 adverse events (7.0%) and 6 errors (1.6%) were identified by more than 1 track (Table 1). After each overlapping incident was counted as a single incident, 285 adverse events and 365 errors were captured. Therefore, approximately 0.9% of adult nontrauma ED visits had associated adverse events or errors. Of the 285 adverse events, 220 (77.2%) were captured by the reporting methods, 74 (26.0%) by the trigger tool methods, and 9 (3.2%) by both the reporting and trigger tool methods. The positive predictive rate of adverse events captured by the reporting methods was significantly greater than that of the trigger tool methods (34.3% versus 3.2%; P < 0.001).

TABLE 1. Comparison of the Number and Positive Predictive Rate of Adverse Events and Errors Captured by Reporting and Trigger Tool Methods

Nurse incident reporting, which captured 119 adverse events, had the greatest positive predictive rate (65.4%) of all tracks of the monitoring system, followed by electronic adverse event reporting (22.7%). The trigger tool for unexpected cardiopulmonary resuscitation identified 7 adverse events and had the highest positive predictive rate (17.5%) of all the trigger tool methods. The clinical characteristics of the patients with adverse events did not differ significantly between the reporting and trigger tool methods except with regard to ED discharge diagnosis categories (Table 2). Patients with adverse events captured by the trigger tool methods had a greater proportion of infectious disease diagnoses and a lower proportion of oncology-related discharge diagnoses.

TABLE 2. Comparison of Clinical Characteristics of Patients With Adverse Events Captured by Reporting and Trigger Tool Methods

In total, 81.7% of adverse events involved temporary minor physical impacts, including near misses, no harm, or undetectable injuries (Table 3). The reporting and trigger tool methods each identified 3 adverse events that resulted in death, 2 of which were captured by both method categories. The distribution of adverse event types was consistent between the trigger tool and reporting methods (Table 4). Most adverse events captured by the monitoring system (86.7%) were related to clinical performance, whereas 10.2% involved patient management problems and 3.2% involved communication problems. Approximately 76.9% of the adverse events occurred during the intervention phase of clinical performance (Table 5). The most frequent types of adverse events during the intervention phase were patient falls (29.2%), omission of an essential procedure (17.8%), and correct procedure with complications (16.6%). Inaccurate diagnosis (13.4%) was the most common adverse event during the preintervention phase.

TABLE 3. Comparison of the Physical Impact of Adverse Events Captured by Reporting and Trigger Tool Methods
TABLE 4. Comparison of the Types of Adverse Events Captured by Reporting and Trigger Tool Methods
TABLE 5. Comparison of the Clinical Performance Adverse Events Captured by Reporting and Trigger Tool Methods

Compared with the reporting methods, the odds ratio (OR) of the trigger tool methods for the proportion of study incidents among the included incidents was 3.9 (95% confidence interval [CI], 3.54–4.33), whereas the OR for the positive predictive rate of adverse events was 0.1 (95% CI, 0.09–0.16). The trigger tool methods were better than the reporting methods at capturing adverse events during the preintervention and postintervention phases (OR, 17.0; 95% CI, 8.48–34.16) as well as those resulting in severe physical impact or death (OR, 5.40; 95% CI, 2.62–11.10).
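For readers who wish to reproduce this kind of estimate, an odds ratio with a Wald 95% CI can be computed from a 2 × 2 table as sketched below; the counts are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = events/non-events in group 1; c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30/70 events/non-events vs 10/90.
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR 3.86 (95% CI, 1.77-8.42)
```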


DISCUSSION

Here, we implemented a monitoring system combining different methods to capture adverse events in adult nontrauma ED visits. From a general perspective, the reporting methods captured approximately 77% of the adverse events in the monitoring system with a positive predictive rate of approximately 34%; only 3 reported events needed to be reviewed to confirm 1 adverse event. Conversely, a review of 30 medical records with trigger events identified only 1 adverse event. Furthermore, the reviewers encountered fewer difficulties during the identification of reported events because more sources of information (e.g., incident reports, interviews with medical personnel) were available. The reporting methods were therefore more effective and efficient than the trigger tool methods.

Between the 2 reporting methods, the nurse incident reporting mechanism captured approximately 54% of the adverse events, more than the electronic adverse event reporting mechanism. This may be due to the different natures of these methods. Nurse incident reporting is a mandatory mechanism by which ED nurses are obligated to report incidents; adverse events that attracted greater attention from nurses, such as patient falls and complications of intravenous catheterization, were easily captured by nurse incident reporting. In contrast, electronic adverse event reporting is a voluntary action by medical personnel; the decision to report is based on their perception of clinical management and subjective assessment. We also found that most of the errors were identified by electronic adverse event reporting, which was better able to capture errors than adverse events. This is because medical personnel were more sensitive to violations of routine practice and were able to recognize problems during the delivery of routine clinical treatment or procedures.

An open and nonpunitive adverse event reporting method was implemented in our hospital more than 10 years ago. The overall average positive response rate of the patient safety culture survey in our hospital increased from 34.9% in 2008 to 50.2% in 2014. Among the 7 domains of the patient safety culture survey, the positive response rate of our ED was 40.6% to 51%, rates similar to those of other medical centers in Taiwan. Adverse events captured by our electronic adverse event reporting method were mainly reported by nurses (80%), followed by physicians (18%) and other support staff (2%). Reporting methods were superior to trigger tool methods in capturing adverse events and had a higher positive predictive rate. Thus, they should be considered the core component of an ED monitoring system for adverse events. The major concerns about reporting methods are the low reporting rates and reporting biases. Our results show that developing a list of mandatory reportable adverse events and errors in the ED is both reasonable and effective. The purpose of mandatory reporting is to enhance patient safety and facilitate individual and team accountability; it should not be considered a punitive mechanism. Furthermore, it may be helpful for institutions to create and sustain a culture of patient safety among healthcare personnel by developing and instituting policies and providing professional training.

In addition to the reporting methods, the trigger tool methods captured additional adverse events and errors. Only a small proportion of adverse outcomes and 50% of sentinel adverse events were identified by both the trigger tool and reporting methods. Therefore, if only trigger tool or only reporting methods are adopted, a substantial number of adverse outcomes, including those involving serious physical injuries, will go unidentified; this is corroborated by previous studies.17 The trigger tool methods were better able to identify adverse events involving more severe injuries as well as those occurring in the preintervention and postintervention phases. These results demonstrate that our intention to adopt trigger events that could flag more incorrect diagnoses or serious adverse events was fulfilled. Conversely, the reporting methods were good at detecting adverse events during the intervention phase. Most of our trigger events were captured automatically by the computerized information system; thus, they could detect adverse outcomes more prospectively rather than relying fully on spontaneous reporting.18

Previous studies highlighted communication as a common cause of adverse events.19 Most adverse events captured by our monitoring system, by contrast, were related to clinical performance. Other studies of patient safety in the ED similarly revealed that adverse events were most commonly related to clinical performance and patient management.12,20 Compared with hospitalized patients, patients in the ED have a shorter clinical course involving fewer medical personnel, which might result in fewer communication problems. In addition, the "type" classification schema of the JCAHO includes only 3 principal groups. Most of the common classification groups adopted by other studies, such as diagnostic issues, procedural complications, or medication adverse effects, fall within the "clinical performance" category, which might increase the proportion of adverse events attributed to clinical performance.

In summary, our results revealed that the adverse events captured by the trigger tool and reporting methods had different clinical characteristics. Different tracks within the monitoring systems contributed to elucidating the reality of patient safety in the ED. The combined use of reporting and trigger tool methods had synergistic benefits for detecting adverse events in the ED.

A previous epidemiological study conducted using the Harvard Medical Practice Study method reported a 0.7% occurrence rate of adverse events in hospitalized patients in Taiwan.21 Another study in Taiwan found that a low percentage (2%–3%) of hospitalized patients encountered adverse events in the ED.21 The occurrence rate of adverse events and errors detected by our monitoring system was 0.9% of the study population, which was higher than the findings of other adverse event studies of EDs in Taiwan (range, 0.05%–0.6%).21,22 The occurrence rates of adverse events for hospitalized and ED patients were no higher than those reported in Western countries (3%–8% and 9%–12%, respectively).18,20,23,24 This may be related to differences between studies with respect to study populations, identification processes, and definitions of adverse events and errors. We also focused on the occurrence of adverse events as a continuous surveillance indicator within a hospital using a constant monitoring mechanism. Determining how many patients are harmed, which is a core function of a monitoring system, requires knowledge of which adverse events occur and how they impact patient safety.1,13

The positive predictive rate of our trigger events was approximately 3%, which was lower than those of other studies.12,20,24 This might be because we adopted a narrow definition of an adverse event: we identified reported or trigger events as adverse events only when physical impacts had occurred or were expected to occur. The unnecessary prolongation of symptoms, with subsequent unscheduled return to the ED or hospitalization, was not considered a physical impact. In addition, suboptimal management or disposition without physical injury was defined as an error. We reached this consensus to avoid controversy and improve interrater agreement.

This study had several limitations. Its methodology was based on reporting and trigger tool methods; therefore, the weaknesses inherent to these 2 methods could not be avoided. Although the trigger events could be identified by the computerized information system, adverse event confirmation relied on medical chart review; thus, if the information in the medical chart was deficient, adverse events or errors might have been overlooked. Furthermore, the reporting methods relied on medical personnel to report adverse outcomes related to patient safety. Definitions of adverse events or errors may differ among personnel, and numerous factors affect the decision to report. Therefore, because of variability in practitioners' preferences to report or not report certain types of adverse events, reporting bias is inevitable. We did not follow patients discharged from the ED; thus, we might have missed adverse outcomes if patients did not revisit our ED. Delayed adverse events might have been missed if they occurred more than 24 hours after admission. Nosocomial infections were difficult to identify because the clinical course in the ED was shorter than that of hospitalized patients. We adopted patient safety indicators to capture adverse events related to surgical interventions or procedures for trauma patients in our ED; thus, we included only adult nontrauma patients in this study. Different triggers might be needed for trauma and pediatric ED patients. However, the concept of combining the reporting and trigger tool methods could be generalized.


CONCLUSIONS

Despite its limitations, this study demonstrated the usefulness of routine patient safety monitoring systems in ED environments. We found that reporting methods were more effective than trigger tool methods for capturing adverse events in adult nontrauma ED visits. However, trigger tool methods were better able to capture adverse events with severe physical impacts. The combined use of reporting and trigger tool methods had synergistic benefits for the detection of adverse events in the ED.


REFERENCES

1. Pham JC, Alblaihed L, Cheung DS, et al. Measuring patient safety in the emergency department. Am J Med Qual. 2014;29:99–104.
2. Walshe K. Adverse events in health care: issues in measurement. Qual Health Care. 2000;9:47–52.
3. Farley DO, Haviland A, Champagne S, et al. Adverse-event-reporting practices by US hospitals: results of a national survey. Qual Saf Health Care. 2008;17:416–423.
4. Naessens JM, O'Byrne TJ, Johnson MG, et al. Measuring hospital adverse events: assessing inter-rater reliability and trigger performance of the global trigger tool. Int J Qual Health Care. 2010;22:266–274.
5. Schectman JM, Plews-Ogan ML. Physician perception of hospital safety and barriers to incident reporting. Jt Comm J Qual Patient Saf. 2006;32:337–343.
6. Milch CE, Salem DN, Pauker SG, et al. Voluntary electronic reporting of medical errors and adverse events. An analysis of 92,547 reports from 26 acute care hospitals. J Gen Intern Med. 2006;21:165–170.
7. Thomas EJ, Petersen LA. Measuring errors and adverse events in health care. J Gen Intern Med. 2003;18:61–67.
8. Isaac T, Jha AK. Are patient safety indicators related to widely used measures of hospital quality? J Gen Intern Med. 2008;23:1373–1378.
9. Sedman A, Harris JM 2nd, Schulz K, et al. Relevance of the agency for healthcare research and quality patient safety indicators for children's hospitals. Pediatrics. 2005;115:135–145.
10. Rozich JD, Haraden CR, Resar RK. Adverse drug event trigger tool: a practical methodology for measuring medication related harm. Qual Saf Health Care. 2003;12:194–200.
11. Kirkendall ES, Kloppenborg E, Papp J, et al. Measuring adverse events and levels of harm in pediatric inpatients with the global trigger tool. Pediatrics. 2012;130:e1206–e1214.
12. Calder L, Pozgay A, Riff S, et al. Adverse events in patients with return emergency department visits. BMJ Qual Saf. 2015;24:142–148.
13. Handler JA, Gillam M, Sanders AB, et al. Defining, identifying, and measuring error in emergency medicine. Acad Emerg Med. 2000;7:1183–1188.
14. Tomas-Vecina S, Chanovas-Borrás MR, Roqueta-Egea F, et al. Measuring patient safety in the emergency department: the Spanish experience. Am J Med Qual. 2014;29:362–363.
15. Chang A, Schyve PM, Croteau RJ, et al. The JCAHO patient safety event taxonomy: a standardized terminology and classification schema for near misses and adverse events. Int J Qual Health Care. 2005;17:95–105.
16. Forster AJ, Rose NG, van Walraven C, et al. Adverse events following an emergency department visit. Qual Saf Health Care. 2007;16:17–22.
17. Olsen S, Neale G, Schwab K, et al. Hospital staff should use more than one method to detect adverse events and potential adverse events: incident reporting, pharmacist surveillance and local real-time record review may all have a place. Qual Saf Health Care. 2007;16:40–44.
18. Soop M, Fryksmark U, Koster M, et al. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care. 2009;21:285–291.
19. Vincent C, Taylor-Adams S, Stanhope N. Framework for analyzing risk and safety in clinical medicine. BMJ. 1998;316:1154–1157.
20. Calder LA, Forster A, Nelson M, et al. Adverse events among patients registered in high-acuity areas of the emergency department: a prospective cohort study. CJEM. 2010;12:421–430.
21. Lin FY, Lin HC, Wu JJ, et al. Incident report in a medical center emergency department. Formosan J Med. 2014;17:497–507.
22. Chen CT, Hsiao CT, Chen JC. Medical incidents in the emergency department. J Taiwan Emerg Med. 2006;8:115–120.
23. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ. 2001;322:517–519.
24. Chern CH, How CK, Wang LM, et al. Decreasing clinically significant adverse events using feedback to emergency physicians of telephone follow-up outcomes. Ann Emerg Med. 2007;49:196–205.

Keywords: emergency department; patient safety; adverse events

Supplemental Digital Content

Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc.