Factors Influencing the Reporting of Adverse Perioperative Outcomes to a Quality Management Program

Katz, Robert I. MD*; Lagasse, Robert S. MD†

doi: 10.1213/00000539-200002000-00020
ECONOMICS AND HEALTH SYSTEMS RESEARCH

Quality management programs have used several data reporting sources to identify adverse perioperative outcomes. We compared reporting sources and identified factors that might improve data capture. Adverse perioperative outcomes between January 1, 1992, and December 31, 1994, were reported to the Department of Anesthesiology Quality Management program by anesthesiologists, hospital chart reviewers, and other hospital personnel using incident reports. The reports were compared for preoperative health status, severity of outcome, and associated human error. Subsequently, personnel representing the various sources were surveyed regarding factors that might affect their reporting of adverse outcomes. Of 37,924 anesthetics, 734 (1.9%) adverse outcomes were reported, 519 (71%) of which were identified by anesthesiologists, 282 (38%) by chart reviewers, and 67 (9.1%) by incident report. There was no statistically significant difference in reporting rates by anesthesiologists according to preexisting disease, severity of outcome, or presence of human error. Thirteen cases involving human error, however, resulted in disabling patient injury, with a higher rate of self-reporting for these cases (92%, P < 0.05). Rates of reporting by chart reviewers varied (P < 0.05) according to severity of patient illness and severity of outcome. Incident reports identified only 67 adverse outcomes (9.1%), but included a significantly higher percentage of the adverse outcomes involving human error (23.3%, P < 0.05). Twenty attending anesthesiologists, 15 resident anesthesiologists, 29 operating room nurses, 19 postanesthesia care unit nurses, and 6 hospital chart reviewers responded to the survey. Only the potential to improve quality of patient care influenced or strongly influenced a decision by all groups to report an adverse outcome to a peer review process. Physician self-reporting is a more reliable method of identifying adverse outcomes than either medical chart review or incident reporting.

Implications: Physician self-reporting is a more reliable method of identifying adverse outcomes than either medical chart review or incident reporting. Reporting by chart reviewers is biased both by the severity of outcome and severity of patient illness, whereas incident reports tend to focus on human error. All groups feel compelled to report adverse outcomes when the data may result in improved patient care.

Departments of Anesthesiology, *State University of New York at Stony Brook, Stony Brook, and †Albert Einstein College of Medicine, Montefiore Medical Center, Bronx, New York

December 7, 1999.

Address correspondence and reprint requests to Robert S. Lagasse, MD, Department of Anesthesiology, Weiler Division of Montefiore Medical Center, 1825 Eastchester Rd., Bronx, NY 10461. Address e-mail to boblagasse@aol.com.

Presented in part at the Annual Conference of the International Anesthesia Research Society, Orlando, FL, March 7–11, 1998.

We believe that the maintenance of a high standard of patient care requires that adverse outcomes be identified and evaluated. Methods of identifying adverse outcomes have included incident reporting by nonphysician personnel, physician self-reporting, and concurrent or retrospective medical record review by trained hospital employees. Cullen et al. (1) found that incident reporting identified only a small percentage of adverse drug events, whereas Brennan et al. (2) concluded that the majority of adverse outcomes could be identified by medical record review performed by hospital-trained chart reviewers. O’Neil et al. (3), however, reported that physician house staff identified the same number of adverse and potentially preventable incidents as did medical record review. None of these studies directly compared all three reporting mechanisms.

In 1995, we described a system of self-reporting adverse outcomes to a departmental peer review process (4) and concluded that the studied physicians reliably reported such events. This study compares the reporting incidence of various sources and identifies factors that might affect this incidence.


Methods

Outcome Data

All anesthetics performed at the University Hospital of the State University of New York at Stony Brook found to be associated with an adverse outcome between January 1, 1992, and December 31, 1994, were reported to the department’s peer review process. Adverse outcomes were considered to be all instances of patient harm that could potentially be related to anesthesia, whether transient or permanent. Reporting sources included the anesthesiologist (resident and/or attending), other clinical personnel (nurses, operating room technicians, etc.), and the Medical Care Review Team (trained chart reviewers employed by the hospital).

Anesthesiologists reported adverse outcomes on a continuous basis by filing a written self-reporting form (Figure 1) with the department at the time of the occurrence. The reporting form included a narrative of events and an analysis of errors. Reporting forms were submitted anonymously but included provider-specific data for purposes of physician credentialing, as required by statute in New York.

Figure 1. The written self-reporting form filed by anesthesiologists (see text).

Other clinical personnel submitted traditional “incident reports” directly to the department or to the Medical Care Review Team, which consisted of seven medical chart reviewers employed by the hospital. The Medical Care Review Team examined incident reports and screened the medical records of all inpatients within 24 h of surgery, and at least every 4 days thereafter, for all adverse outcomes, which were reported to the various clinical departments. The Medical Care Review Team was oriented to the specifics of the anesthesia quality management program and given copies of the reporting form for reference, but did not use the form per se. The Medical Care Review Team reported incidents but did not perform a quality analysis. The department was informed of their findings at monthly meetings between the departmental Quality Management (QM) Director (RSL) and the Director of the Medical Care Review Team.

Anesthesia personnel became aware of adverse incidents through direct experience in the operating room (OR) or postanesthesia care unit (PACU) and through postoperative visits, which departmental policy required for all inpatients. Adverse outcomes in ambulatory surgical patients were detected by a phone call made the day after surgery by ambulatory surgery unit nursing staff or by responses to a written patient satisfaction survey sent to all ambulatory patients. Problems or complaints discovered by the ambulatory staff were referred to the attending anesthesiologist who administered the anesthetic and, via incident report, to the Medical Care Review Team. The number of adverse outcomes and their reporting sources were recorded each month. A single incident could be reported by multiple sources.

According to previously described methodology (4), each case was reviewed by a preliminary committee that performed fact-finding and prepared an abstract for presentation to the Departmental Peer Review Committee. That committee, consisting of all attending faculty and residents, met monthly to determine whether a human error by the anesthesia team contributed to the outcome and to grade the severity of the outcome. The members of the anesthesia team were not identified as such at the monthly meeting but could, if they wished, comment on the care given. Human errors were categorized by the committee as failing to perform a technique properly, misusing equipment, disregarding available data, failing to seek appropriate data, or responding to data incorrectly. Outcomes were categorized as 1) no change in hospital course, 2) increased care or risk without organ failure or damage, 3) increased care or risk with reversible organ failure or damage, 4) increased care or risk with irreversible organ failure or damage, and 5) death. Failure to reach consensus on a case analysis was resolved by majority opinion of the attending staff present.
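
For illustration only, this classification scheme could be represented in code roughly as follows; the category wording mirrors the text above, but the class and field names are our own invention and are not part of the departmental program.

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class HumanError(Enum):
    # Category wording follows the committee's scheme described above.
    IMPROPER_TECHNIQUE = "failed to perform a technique properly"
    EQUIPMENT_MISUSE = "misused equipment"
    DISREGARDED_DATA = "disregarded available data"
    FAILED_TO_SEEK_DATA = "failed to seek appropriate data"
    INCORRECT_RESPONSE = "responded to data incorrectly"


class OutcomeSeverity(Enum):
    NO_CHANGE = 1            # no change in hospital course
    INCREASED_CARE = 2       # increased care or risk without organ failure or damage
    REVERSIBLE_DAMAGE = 3    # increased care or risk with reversible organ failure or damage
    IRREVERSIBLE_DAMAGE = 4  # increased care or risk with irreversible organ failure or damage
    DEATH = 5


@dataclass
class PeerReviewFinding:
    severity: OutcomeSeverity
    human_error: Optional[HumanError]  # None when no anesthesia-team error was found
    reporting_sources: set             # e.g., {"self-report", "chart review", "incident report"}


# Hypothetical example: a reversible injury judged to involve disregard of available data,
# identified both by physician self-report and by chart review.
finding = PeerReviewFinding(
    severity=OutcomeSeverity.REVERSIBLE_DAMAGE,
    human_error=HumanError.DISREGARDED_DATA,
    reporting_sources={"self-report", "chart review"},
)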


Survey Data

At the conclusion of the 3-year period, the 36 resident and 25 attending anesthesiologists composing the department, along with the 60 OR nurses, 29 PACU nurses, and 7 medical chart reviewers, were given a voluntary, anonymous survey of factors that might influence their reporting of adverse perioperative events. Factors considered potentially relevant were identified at a focus group meeting of the department’s preliminary QM committee and the Director of the Medical Care Review Team. Factors were ranked on a five-point Likert-style scale, a methodology commonly used to measure respondents’ attitudes (5), ranging from “strongly influence” (1) to “no influence” (5).

The χ2 test or Fisher’s exact test, as appropriate, was used to examine differences in reporting rates among different case types, groups of responding individuals, individual attending anesthesiologists, and resident classes. One-way analysis of variance was used to analyze survey responses with respect to influence on reporting. The Fisher protected least significant difference test was used for multiple comparisons following one-way analysis of variance. In all cases, P < 0.05 was considered statistically significant.
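
As an illustration of this analytic approach, and not a reproduction of the authors’ original computations, the sketch below shows how such tests could be run in Python with scipy.stats; all counts and Likert scores in it are invented for the example.

from itertools import combinations
from scipy import stats

# Chi-square test (or Fisher's exact test when expected counts are small) comparing
# reporting rates between two hypothetical case types.
# Rows = case type, columns = [reported, not reported]; counts are invented.
table = [[55, 35],
         [464, 180]]
chi2, p, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():  # common rule of thumb for switching to Fisher's exact test
    _, p = stats.fisher_exact(table)
print(f"Reporting-rate difference: P = {p:.3f}")

# One-way ANOVA on Likert responses (1 = strongly influence ... 5 = no influence),
# with Fisher's protected LSD: pairwise comparisons only after a significant omnibus test.
groups = {
    "anesthesiologists": [1, 2, 1, 1, 2, 3],
    "chart reviewers":   [1, 1, 2, 1, 1, 1],
    "OR nurses":         [2, 3, 2, 4, 3, 2],
}
f_stat, p_anova = stats.f_oneway(*groups.values())
if p_anova < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        _, p_pair = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: P = {p_pair:.3f}")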


Results

Outcome Data

Of 37,924 anesthetics performed, 734 (1.9%) were associated with an adverse outcome. Five hundred nineteen (71%) adverse outcomes were reported by anesthesiologists, 282 (38%) by chart reviewers, and 67 (9.1%) by other hospital personnel using incident reports (P < 0.05). Human error was judged to have contributed to 90 (12.3%) adverse outcomes. Of these 90, 55 (61%) were self-reported, 25 (27.8%) were reported by chart reviewers, and 21 (23.3%) were identified by incident reports. One hundred nineteen adverse outcomes (16.2% of the total; 42.2% of those reported by chart reviewers) were reported by both physicians and chart reviewers, 6 (0.8%) by both physicians and incident reports, 6 (0.8%) by both chart reviewers and incident reports, and 6 (0.8%) by all three modalities.
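
For readers who want to retrace the arithmetic, the percentages quoted in this paragraph follow directly from the published counts; the short sketch below (variable names are ours) reproduces them.

# Reproducing the percentages above from the published counts; denominators follow the text.
total_anesthetics = 37_924
adverse = 734
by_source = {"anesthesiologists": 519, "chart reviewers": 282, "incident reports": 67}

print(f"Adverse outcome rate: {adverse / total_anesthetics:.1%}")        # 1.9%
for source, n in by_source.items():
    print(f"{source}: {n / adverse:.1%} of adverse outcomes")            # 70.7%, 38.4%, 9.1%

human_error = 90
print(f"Human error: {human_error / adverse:.1%} of adverse outcomes")   # 12.3%

both_md_and_chart = 119
print(f"Physicians and chart reviewers: {both_md_and_chart / adverse:.1%} of all "
      f"adverse outcomes, {both_md_and_chart / by_source['chart reviewers']:.1%} "
      f"of chart-reviewed cases")                                        # 16.2%, 42.2%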

There was no statistically significant difference in rates of self-reporting by anesthesiologists according to preexisting disease (66% of adverse outcomes in ASA physical status III–V self-reported, versus 77% in ASA physical status I and II patients), severity of outcome (72% of Category 1–2 versus 69% of Category 3–5) or human error (61% of human errors versus 72% of system errors). Thirteen cases involving human error (14.4%) resulted in disabling patient injury. Twelve (92%) of these were self-reported, a rate significantly higher (P < 0.05) than the rate of self-reporting for all cases involving an adverse outcome. There was no statistically significant difference in rates of adverse outcomes or self-reporting among departmental attendings or among resident classes.

Chart reviewers reported a higher percentage (P < 0.05) of cases involving ASA physical status III–V patients (206, 50.1%) than ASA physical status I and II patients (76, 24.7%), and patients in outcome Category 3–5 (137, 59.8%) than Category 1–2 (145, 28.7%). Incident reports identified only 67 (9.1%) adverse outcomes, but 21 of these (23.3%) involved human error (P < 0.05).

Reporting rates and significant differences among the three sources are shown in Table 1.

Table 1. Reporting rates and significant differences among the three reporting sources (see text).


Survey Data

Twenty attending anesthesiologists (80%), 15 residents (42%), 6 medical chart reviewers (86%), 29 OR nurses (48%), and 19 PACU nurses (66%) responded to the survey, for an overall response rate of 57%. Preliminary analysis revealed no significant differences between attending and resident anesthesiologists on any question; therefore, the data for anesthesiologists were pooled. There were significant differences among anesthesiologists, chart reviewers, OR nurses, and PACU nurses regarding factors that might lead them to report an adverse outcome (Table 2), although all groups were most strongly influenced by the potential for improving quality of patient care (P < 0.05). The factors having the least influence on anesthesiologists included professional relationships with involved practitioners, the time involved, and fear of the involved practitioner losing either license or privileges. Medical chart reviewers, in addition to the potential for improving quality of patient care, were most strongly influenced by value as a teaching tool and by an occurrence being the object of a focused study. Chart reviewers were least influenced by fear of the involved practitioner losing license or privileges, fear of repercussions, fear of litigation or subpoena, personal or professional relationships with involved practitioners, time involved in filing a report, and the expectation that other personnel would also be filing a report. OR and PACU nurses varied in their responses but were, in general, least influenced by the time involved in filing a report and the expectation that other personnel would also be filing a report.

Table 2. Factors influencing the decision to report an adverse outcome, by survey group (see text).


Discussion

The essentials of our quality management-peer review process have been previously described (4). We have found the majority of reported adverse outcomes, particularly those associated with disabling patient injury caused by human error, to be self-reported. Contrary to the findings of O’Neil et al. (3), our data indicate that physician self-reporting is capable of identifying far more adverse events than medical record review. The reasons for these differences are uncertain but may be related to the stability of our reporting mechanism. The article by O’Neil et al. (3) suggests that their system was not in place before their study. We found that the percentage of self-reporting tended to increase with time (4), reaching a stable rate approximately one year after the system was begun. O’Neil et al. (3) might have found similar results over a longer study period. Our reporting system included attending physicians as well as residents, which may also account for some of the difference between our results and theirs.

The type of data being recorded, and the purpose for which they are recorded, may also affect reporting rates. Sanborn et al. (6) found that only 4.1% of the abnormal incidents (in heart rate, blood pressure, temperature, and oxygen saturation) detected by automated anesthesia record keeping were reported by physicians. These data were stored but were neither revealed to nor used by the department. Also, by the time the physicians were required to fill out a study questionnaire at the end of each case, they would in most instances have known whether their patients had suffered an adverse outcome.

The initial report of our system (4) included only those indicators recommended at that time by the Joint Commission on Accreditation of Healthcare Organizations for clinical departments of anesthesiology (7), all of which were outcome based (i.e., neurologic deficit, death, myocardial infarction, dental or ocular injury). Our current indicators have been accepted as relevant to patient outcome by a majority vote of the department and have been expanded and modified over the years. In an earlier version of our system, we also found that abnormal heart rate and blood pressure values unassociated with adverse outcomes were rarely reported. We decided to delete indicators not associated with an adverse outcome because they were not considered clinically significant by themselves. Physicians are more inclined to report incidents that result in an adverse outcome, particularly if reporting and discussion of such incidents might lead to improvements in patient care.

Costanza et al. (8), Lewis et al. (9), and Ellrodt et al. (10) have published studies supporting the contention that physicians will comply with guidelines whose importance and validity they accept. In this context, in Sanborn et al.’s (6) study, 37% of hypotensive episodes (presumably regarded by the involved physicians as a more serious deviation from the norm) were reported, versus an average self-reporting rate of 4.1% for all indicators.

When examining the cases submitted for review, we found that chart reviewers were less likely than physicians to identify transient injuries and incidents in healthy patients. Self-reporting rates among physicians, however, were constant across all individual factors, although human error leading to disabling patient injury was reported at a higher rate than other adverse outcomes. For all classes of outcomes, other than those involving severe patient injury and those in severely ill patients, physicians were more likely than chart reviewers to report an incident. Physician self-reporting is a more sensitive means of detecting cases meeting predetermined indicator criteria and is minimally affected by either severity of the outcome or the involvement of human error. Incident reports tend to be biased toward human error, while cases detected by chart reviewers are more specific, but not more sensitive, for the outcomes of death or permanent injury. It should be noted that the medical chart reviewers did give a relatively low survey priority to the question of “outcome expected or accepted.”

We must acknowledge the possibility that some adverse outcomes were not detected by any of the three mechanisms under review. Postoperative visits are required by departmental policy, but we have no way of ensuring that they are performed. Also, if physicians choose to conceal adverse events and not record them in the medical record, those events cannot be detected by chart review. Our detection rates do, however, compare favorably with those of other studies. Chopra et al. (11), using pooled data from anesthesia providers, nursing staff, and patients, found a rate of faults, accidents, near accidents, and complications of 0.13%, with no specific human error rate reported. Short et al. (12), using self-reporting of “critical incidents,” found a total rate of 0.78%, with a 0.63% rate of human error. Kumar et al. (13), also using self-reporting of critical incidents, found a total incident rate of 0.45%, with a human error rate of 0.36%. Our total incident rate was much higher (1.9%), and our human error rate somewhat lower (0.24%), than those reported in these studies. Our data are not directly comparable, however, because they were confined to adverse outcomes, whereas the data of Chopra et al. (11), Short et al. (12), and Kumar et al. (13) included numerous mishaps that only potentially could have resulted in an adverse outcome. In addition, although all adverse outcomes that met our indicators were included in our system, our results count as human errors only those mishaps directly attributable to members of the Department of Anesthesiology.

A number of surveys have examined factors that affect the incidence of self-reporting by healthcare providers. Hall et al. (14) found uncertainty regarding one’s role, as well as deficient in-service education, to be major constraints on participation. A survey of physicians in the United Kingdom found that reasons for underreporting adverse drug reactions included lack of time, unavailability of reporting forms, and the feeling that absolute confidence in the diagnosis was required before filing a report (15). A survey of anesthesiologists in New Zealand (16) found that 55% would occasionally omit or falsify data on anesthesia records. Stated reasons included dissatisfaction with the record form, a feeling that the record is a distraction, and the view that the record is unimportant for the use of future anesthetists. Participants denied concern over medical-legal issues as a motive for falsifying data.

Our survey did not find time factors, uncertainty, or ignorance of the system to materially affect a decision to self-report, nor did it find physicians to be excessively concerned with medical-legal issues. One possible reason is that the basics of our system are reviewed at least quarterly as part of the introduction to the monthly quality assurance meetings. It is stressed that provider-specific QM data, including the minutes of QM meetings, are confidential and that federal law (United States Health Code 2805) prohibits their use for punitive or disciplinary purposes. Unavailability of forms is not a problem at our institution, either, because they are kept at the main desks of both the ORs and the recovery rooms. All groups in our survey agreed that the potential to improve quality of patient care influenced or strongly influenced (average survey response < 2) a decision to report an adverse outcome. Severity of the outcome and value as a teaching tool were also positively associated with a decision to report (average survey response < 3) by all three groups. Apart from these factors, there was wide variance in the responses.

Cooper (17) states that an effective system of self-reporting requires that the value of self-reporting be demonstrated and that a culture attributing error to negligence be changed. Deming (18), the “father of quality management,” wrote that quality improvement efforts will succeed only if participants believe in the process. They must have confidence that their efforts will result in real improvements to the work environment, and they must have confidence that these efforts will be appreciated and supported, not punished. We believe that current research supports these contentions. Anesthesiologists will comply with a system of self-reporting if they understand the process, if there is institutional and departmental encouragement and support for the process, and if the process is nonpunitive and can result in real improvements in patient care.


References

1. Cullen DJ, Bates DW, Small SD, et al. The incident reporting system does not detect adverse drug events. J Quality Improvement 1995; 21:541–8.
2. Brennan TA, Localio AR, Leape LL, et al. Identification of adverse events occurring during hospitalization: a cross-sectional study of litigation, quality management and medical records at two teaching hospitals. Ann Intern Med 1990; 112:221–6.
3. O’Neil AC, Petersen LA, Cook EF, et al. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med 1993; 119:370–6.
4. Lagasse RS, Steinberg ES, Katz RI, Saubermann AJ. Defining quality of perioperative care by statistical process control of adverse outcomes. Anesthesiology 1995; 82:1181–8.
5. Likert R. Archives of psychology: a technique for the measurement of attitudes. Greenwich: RS Woolworth, 1932.
6. Sanborn KV, Castro J, Kuroda M, Thys DM. Detection of intraoperative incidents by electronic scanning of computerized anesthesia records. Anesthesiology 1996; 85:977–87.
7. Accreditation manual for hospitals. Oakbrook Terrace: Joint Commission on Accreditation of Healthcare Organizations, 1992.
8. Costanza ME, Stoddard AM, Zapka JG, et al. Physician compliance with mammography guidelines: barriers and enhancers. J Am Board Fam Pract 1992; 5:143–52.
9. Lewis LM, Lasater LC, Ruoff BE. Failure of a chest pain clinical policy to modify physician evaluation and management. Ann Emerg Med 1995; 25:9–14.
10. Ellrodt AG, Conner L, Riedinger M, Weingarten S. Measuring and improving physician compliance with clinical practice guidelines. Ann Intern Med 1995; 122:277–82.
11. Chopra V, Bovill JG, Spierdijk J. Accidents, near accidents and complications during anaesthesia: a retrospective analysis of a 10-year period in a teaching hospital. Anaesthesia 1990; 45:3–6.
12. Short TG, O’Regan A, Lew J, Oh TE. Critical incident reporting in an anaesthetic department quality assurance programme. Anaesthesia 1992; 47:3–7.
13. Kumar V, Barcellos WA, Mehta MP, Carter JG. An analysis of critical incidents in a teaching department for quality assurance: a survey of mishaps during anaesthesia. Anaesthesia 1988; 43:883–6.
14. Hall M, McCormack P, Arthurs N, Feely J. The spontaneous reporting of adverse drug reactions by nurses. Br J Clin Pharmacol 1995; 40:173–5.
15. Belton KJ, Lewis SC, Payne S, et al. Attitudinal survey of adverse drug reaction reporting by medical practitioners in the United Kingdom. Br J Clin Pharmacol 1995; 39:223–6.
16. Galletly DC, Rowe WL, Henderson RS. The anesthetic record: a confidential survey on data omission or modification. Anaesth Intens Care 1991; 19:74–8.
17. Cooper JB. Is voluntary reporting of critical events effective for quality assurance? Anesthesiology 1996; 85:961–4.
18. Deming WE. Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1986.
© 2000 International Anesthesia Research Society