Medical Education: Original Clinical Research Report

Drug Calculation Errors in Anesthesiology Residents and Faculty: An Analysis of Contributing Factors

Black, Shira DO*; Lerman, Jerrold MD, FRCPC, FANZCA*,†,‡; Banks, Shawn E. MD§; Noghrehkar, Dena MD; Curia, Luciana MD*; Mai, Christine L. MD, MS-HPEd; Schwengel, Deborah MD, MEd; Nelson, Corey K. MD#; Foster, James M. T. MD**; Breneman, Stephen MD, PhD*; Arheart, Kris L. PhD††

Anesthesia & Analgesia 128(6):p 1292-1299, June 2019. | DOI: 10.1213/ANE.0000000000004013

Abstract

KEY POINTS

  • Question: What is the frequency and magnitude of drug calculation errors by residents and faculty in US anesthesiology programs?
  • Findings: Using the results of a 15-question drug calculation test, we determined that anesthesiology residents and faculty frequently commit computational drug dosing errors; inexperienced residents and aging faculty commit more errors, although residents commit errors of more extreme magnitude.
  • Meaning: We posit that the frequency of computational drug errors, particularly of a large (or small) order of magnitude by residents in this study, warrants remedial training for residents in drug dilutions and dosing, together with clinical safeguards to preclude harm from these errors.

Early critical event reporting systems identified medication errors as one of the most common causes of perioperative morbidity and mortality.1,2 Preventable medication errors that resulted in harm to patients ranged from 0.3%–4.7% of hospital admissions in Australia to 2%–15% in the United Kingdom.3,4 Direct observation indicated that the frequency of medication errors in hospital wards was 19%, of which 17% were dosing-related errors.5 In 2003, the American Society of Anesthesiologists (ASA) closed claims study confirmed that drug administration errors were also a major source of iatrogenic harm to patients during anesthesia6; 38% of drug errors have been attributed to physician error.7 International data supported these concerns because 89% of anesthesiologists in New Zealand reported committing ≥1 drug administration error, and 12.5% believed that their errors had actually harmed patients.8 A subsequent prospective study of 7794 anesthetics in New Zealand reported an overall incidence of drug errors of 0.75% (or 1 drug error for every 133 anesthetics), based on self-reporting by anesthesiologists.9 In 2009, evidence indicated that in the United States, ≥1 death occurred daily, and 1.3 million patients were injured annually, all attributable to drug errors.3

The level of provider experience is an important determinant of the rate of drug errors.10 Junior residents and physicians committed more drug calculation errors than more experienced physicians.11–14 The magnitude of the drug errors in one study increased with fewer hours of sleep during the preceding night and with more years of professional experience.15 Although several strategies, including electronic medical records, decision-supported software, unit dosing by pharmacy, 2-person drug dosing checks, workshops, and programmable smart pumps, have reduced drug errors,16–23 most anesthesiologists continue to calculate drug doses and administer drugs independently, in many instances without checks and balances (eg, in operating theaters, pediatrics, intensive care, obstetrics, emergency medicine, trauma, and cardiac arrest),9,10,24 the consequences of which may contribute to serious morbidity and mortality.25

The severity of dosing errors in children is often greater than in adults because drug doses in children are usually administered on a weight basis, and the range of weights in children varies by 300-fold, whereas that in adults varies only by 5- to 6-fold.9–11,26,27 For some medications, the calculations are challenging, and they introduce the risk of errors from a number of different sources,12 the most common being computation, units, dilution, weight, and transcription. Whether considered individually or in aggregate, these errors may result in 10-fold or greater overdoses in children, which could lead to serious and potentially fatal sequelae.11,12,26
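To make the source of such 10-fold errors concrete, the short sketch below (in Python, with a hypothetical drug, dose, and patient weight that are not taken from the study test) works through a weight-based bolus and shows how a single misread concentration produces a 10-fold overdose:

    # Hypothetical weight-based bolus (illustrative values only, not from the study test)
    dose_per_kg_ug = 10                      # prescribed dose: 10 micrograms/kg
    weight_kg = 12                           # a 12-kg child
    intended_dose_ug = dose_per_kg_ug * weight_kg                        # 120 micrograms

    vial_concentration_ug_per_ml = 100       # vial labeled 100 micrograms/mL
    correct_volume_ml = intended_dose_ug / vial_concentration_ug_per_ml  # 1.2 mL

    # Misreading the label as 10 micrograms/mL (a misplaced decimal) leads to drawing up
    # 12 mL of the true 100 micrograms/mL solution: 1200 micrograms, a 10-fold overdose.
    misread_concentration_ug_per_ml = 10
    erroneous_volume_ml = intended_dose_ug / misread_concentration_ug_per_ml  # 12 mL
    delivered_dose_ug = erroneous_volume_ml * vial_concentration_ug_per_ml    # 1200 micrograms

    print(correct_volume_ml, erroneous_volume_ml, delivered_dose_ug)          # 1.2 12.0 1200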

The purpose of this study was to investigate the frequency of computational drug errors in a large national sample of anesthesia residents and faculty via a standardized computational test, to identify potential contributing factors, and to determine the magnitude of the dosing errors.

METHODS

The directors of 11 anesthesia residency programs were invited by the senior author (J.L.), along with their anesthesiology residents and faculty, to participate in this prospective study testing the computational skills in their departments at grand rounds. Seven directors agreed to participate.

Institutional review board approval was obtained at each of the 7 academic anesthesiology departments in the United States with a waiver of consent. This study adhered to the applicable Enhancing the Quality and Transparency of Health Research guidelines. This study was not registered.

At a single grand rounds at each site, an information sheet and the drug calculation test were distributed (without previous notification) to all anesthesiology residents and faculty in attendance. The information sheet described the purpose and nature of the study, emphasizing that completion of the test at the rounds was voluntary and anonymous. Written consent was not required by any review board; if the participant completed the test, then consent was implied. The test was proctored by the program director or his/her associate at each participating center to ensure that participants did not collaborate in their answers. Those who declined to complete the test were excused from the rounds. Demographic data were not collected from nonresponders. All sites completed the same drug calculation test.

The test was not time constrained, and participants were allocated as much time as needed to answer the 15 questions. Participants were permitted to use any instrument on their person (calculator, smartphone, and computer) to calculate their answers. Completed tests were collected, graded by 2 independent and blinded investigators, and then collated.

Test Construction and Psychometric Properties

The test was constructed in a 3-step manner as outlined below.

A preliminary version of the test instrument, which consisted of 25 questions, was first vetted by 3 academic anesthesiologists (Dr R. Cox, University of Alberta, Edmonton, AB, Canada; Dr T. Erb, University Hospital, Basel, Switzerland; and Dr T. Yemen, University of Virginia, Charlottesville, VA) for accuracy, relevance, and range of difficulty. After editing, a second version of the questions was trialed in a pilot study with 20 anesthesiologists at an international anesthesia meeting, after which the questions were further edited to yield the final version of the test. The final test consisted of 15 questions to address the spectrum of clinical relevance and numerical computations. The test questions were worded to require answers in different units (eg, as a dose, a dose per body weight, or an infusion rate over time) to reflect the types of units used when administering drugs in anesthesia. To ensure a consistent answer format, most questions stipulated the number of decimal places for the answer (Supplemental Digital Content, Appendix 1, https://links.lww.com/AA/C697). Specifically, the questions tested the examinees’ ability to calculate drug dilutions, drug concentrations, and infusion rates, and thus required knowledge of fundamental mathematical principles that are applicable to adult and pediatric drug doses. Several drugs that are not commonly used in anesthesia were included to test the participants’ computational skills without relying on any prior knowledge of or familiarity with the usual dosages for these drugs. The purpose of the study was to evaluate the medication-related computational skills of resident and faculty anesthesiologists.
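As a hedged illustration of the kinds of computations the test targeted (dilutions, concentrations, and weight-based infusion rates), the sketch below uses hypothetical numbers; it is not an actual test item:

    # Hypothetical dilution and infusion-rate calculation (illustrative only, not a test item)
    stock_mg = 50                  # 50 mg of drug in the vial
    diluent_volume_ml = 250        # diluted to a total volume of 250 mL
    concentration_mg_per_ml = stock_mg / diluent_volume_ml          # 0.2 mg/mL
    concentration_ug_per_ml = concentration_mg_per_ml * 1000        # 200 micrograms/mL

    weight_kg = 70                 # adult patient
    dose_ug_per_kg_per_min = 5     # prescribed infusion dose
    dose_ug_per_min = dose_ug_per_kg_per_min * weight_kg            # 350 micrograms/min
    rate_ml_per_h = dose_ug_per_min * 60 / concentration_ug_per_ml  # 105 mL/h

    print(concentration_ug_per_ml, rate_ml_per_h)                   # 200.0 105.0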

A confirmatory factor analysis was used to determine whether the 15 questions represented a single factor. Cronbach alpha was used to determine the internal consistency and reliability of the summary question scale. The confirmatory factor analysis suggested that all 15 items loaded on the same latent scale (“knowledge”) with standardized factor loadings between 0.32 and 0.71 (>0.30 is good), a root mean square error of approximation fit statistic of 0.033 (<0.06 is good), and a comparative fit index fit statistic of 0.93 (>0.95 is good),28,29 supporting the interpretation of a total score. All items correlated positively with the total score and with each other, resulting in an acceptable level of internal consistency, Cronbach α = .71. Confirmatory factor analysis was performed using MPlus (Muthen & Muthen, Los Angeles, CA).
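For readers who want to reproduce the internal-consistency calculation on comparable item-level data, a minimal sketch of the Cronbach alpha computation is shown below (assuming a participants-by-items matrix of 0/1 scores; the study itself used MPlus for the confirmatory factor analysis and SAS for the remaining statistics):

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        # scores: rows = participants, columns = the 15 items, entries = 1 (correct) or 0 (incorrect)
        k = scores.shape[1]                              # number of items
        item_variances = scores.var(axis=0, ddof=1)      # variance of each item
        total_variance = scores.sum(axis=1).var(ddof=1)  # variance of each participant's total score
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Usage with simulated responses (illustrative only; independent simulated items will give an
    # alpha near 0, whereas the study's correlated real item data yielded alpha = .71)
    rng = np.random.default_rng(0)
    simulated_scores = (rng.random((371, 15)) < 0.83).astype(int)
    print(round(cronbach_alpha(simulated_scores), 2))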

Statistical Methods and Analysis

In the absence of data on which to base a sample size calculation, a convenience sample of 2%–5% of the total number of anesthesia residents in the United States was expected from the 7 participating institutions. We posited that this sample would be representative of the national population of US anesthesiology residents and faculty and would provide sufficient geographic representation to reliably estimate the computational error rates.

Demographic data of the participants, including training status (resident or faculty), year of residency training or years of experience (after completion of residency), and the number of hours of sleep during the preceding night, were collected. The numbers of residents and faculty with correct answers (% correct answers) and the number of errors per participant were determined, the latter compared among institutions. Unanswered questions were considered incorrect answers.

Factors that significantly affected the error rates, including the institution, the participant’s position (resident or faculty), and the resident’s year of training, were compared using generalized linear models. Model mean percentages and SEs are presented along with the overall P value for each model. When the overall P value was significant, pairwise comparison P values from a Fisher-protected least significant difference test were calculated.

A generalized linear mixed model for repeated measures was used to determine the effect of the type of drug calculation (percent, rate, dose, and ratio) and the number of calculations involved on the error rate (categories of question type and number of calculations were repeated within participants). When the overall P value was significant, pairwise comparison P values from a Fisher-protected least significant difference test were calculated.
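A rough sketch of how the question-type analysis could be approached in Python is given below; as a stand-in for the generalized linear mixed model described above, it fits a population-averaged binomial GEE with an exchangeable correlation structure to account for repeated questions within participants (the data frame and column names are illustrative, not the authors' dataset, and the original analysis was performed in SAS 9.3):

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Illustrative long-format data: one row per participant-question pair (assumed column names)
    df = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
        "question_type": ["percent", "rate", "dose"] * 6,
        "error": [1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0],  # 1 = incorrect answer
    })

    # Binomial GEE: effect of question type on the probability of an error,
    # with repeated measures clustered within participants
    model = smf.gee(
        "error ~ C(question_type)",
        groups="participant",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    ).fit()
    print(model.summary())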

A generalized linear model was used to perform an analysis of a Poisson-distributed variable to compare the number of errors per person between residents and faculty. Means and SEs are presented in the Results text. In addition, a generalized linear model for a normally distributed variable was used to perform a repeated-measures analysis to compare the average magnitude of the errors per participant between residents and faculty; results are shown as means and SEs. Pearson correlations were used to assess the strength of the relationship of years of experience and hours of sleep with the number of errors and the mean error magnitude per participant.
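Similarly, a minimal sketch of the errors-per-person comparison and the experience correlation, assuming a data frame with one row per participant (column names are illustrative, not the study dataset), might look like this:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Illustrative per-participant data (assumed column names)
    df = pd.DataFrame({
        "errors": [2, 3, 1, 4, 0, 3, 2, 5, 1, 3],        # incorrect answers out of 15
        "position": ["resident", "resident", "faculty", "faculty", "resident",
                     "faculty", "resident", "faculty", "resident", "faculty"],
        "experience_years": [1, 2, 10, 20, 3, 15, 2, 25, 4, 8],
    })

    # Poisson generalized linear model comparing the number of errors per person
    # between residents and faculty
    poisson_fit = smf.glm("errors ~ position", data=df,
                          family=sm.families.Poisson()).fit()
    print(poisson_fit.summary())

    # Pearson correlation of years of experience with the number of errors
    print(df["errors"].corr(df["experience_years"], method="pearson"))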

Post hoc, the magnitude of the calculation errors was analyzed. An answer within 5% of the correct answer was defined as a “correct” answer; incorrect answers were those that deviated from the correct answer by more than 5% in either direction. This permitted an analysis of the magnitude of incorrect answers. The magnitude of incorrect answers was grouped empirically into 5 ranges13,14: ≥5% to <2-fold greater or less than the correct answer; ≥2-fold to <10-fold greater or less; ≥10-fold to <100-fold greater or less; ≥100-fold to <1000-fold greater or less; and ≥1000-fold greater or less than the correct answer.
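A brief sketch of this post hoc classification, assuming each answer is compared with its correct value as a fold ratio and that all correct answers are positive quantities (the category boundaries follow the text above), could look like this:

    def classify_answer(answer: float, correct: float) -> str:
        # "Correct" was defined as within 5% of the correct answer
        if abs(answer - correct) / correct <= 0.05:
            return "correct (within 5%)"
        if answer <= 0:
            # Degenerate answers (zero or negative) are treated as extreme errors (assumption)
            return ">=1000-fold greater or less"
        # Fold difference, treating overestimates and underestimates symmetrically
        ratio = max(answer / correct, correct / answer)
        if ratio < 2:
            return ">=5% to <2-fold greater or less"
        if ratio < 10:
            return ">=2-fold to <10-fold greater or less"
        if ratio < 100:
            return ">=10-fold to <100-fold greater or less"
        if ratio < 1000:
            return ">=100-fold to <1000-fold greater or less"
        return ">=1000-fold greater or less"

    # Example: the correct answer is 2.5 mg but the participant answered 25 mg (a 10-fold error)
    print(classify_answer(25.0, 2.5))   # ">=10-fold to <100-fold greater or less"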

Statistical analysis was undertaken by a statistician who was blind to the institutions. A 2-tailed P < .05 was accepted. Data are displayed as means ± SEM unless indicated otherwise. Statistical analyses other than the confirmatory factor analysis were performed using SAS 9.3 (SAS Institute Inc, Cary, NC).

RESULTS

Seven institutions participated in this study: Massachusetts General Hospital, Boston, MA; State University of New York at Buffalo, Buffalo, NY; Johns Hopkins University, Baltimore, MD; University of Miami, Miami, FL; University of Rochester, Rochester, NY; State University of New York Upstate Medical University, Syracuse, NY; and University of California at Irvine, Irvine, CA. A total of 371 anesthesia residents in the first through fourth postgraduate years and anesthesia faculty consented to participate. The numbers of residents and faculty were evenly distributed among the 7 institutions (Table 1). However, only 5 first-year residents completed the test: 1 from one institution and 4 from another. Given the small number of first-year residents who participated, their responses were not included in the analysis. A total of 209 residents (postgraduate year 2 to postgraduate year 4) and 162 faculty completed the test and were included in the analysis; 1.2% of the questions on all of the tests combined were unanswered.

Only 20% of residents and 25% of faculty scored 100% on the test. A detailed analysis of the error rates by residents and faculty according to their institution, level of training, and years of experience is presented in Table 1. The mean error rate (%) was calculated as (the number of incorrect answers ÷ 15) × 100. For the entire sample, the mean correct answer rate was 83% (95% CI, 81.4–84.8), yielding a mean error rate of 17% (95% CI, 15.2–18.6). Figure 1A displays the percentage of residents and faculty who answered each of the questions correctly. The error rates for the residents (17.5% [1.1]) and faculty (16.3% [1.2]) were similar (P = .51) (Table 1). The total number of incorrect answers by residents (2.9 ± 0.1 errors/resident) and faculty (3.0 ± 0.2 errors/faculty) did not differ significantly (P = .504) (Figure 1B). However, the error rates among the residents differed according to their level of training (P = .031); the error rate for postgraduate year 1 residents was significantly greater than that for postgraduate year 2 (P = .012), but not for postgraduate year 3 (P = .076). The error rates for postgraduate year 2 and postgraduate year 3 residents did not differ (P = .48) (Table 1). The overall error rates differed significantly among the 7 institutions (P = .021) (Table 1): the error rate for institution A was significantly less than that for institutions B (P = .007), E (P = .008), and G (P = .007). Similarly, the rate for institution D was significantly less than that for institutions B (P = .029), E (P = .038), and G (P = .026).

Table 1. Distribution of Faculty and Resident Respondents and Computational Error Rates

Figure 1. The percentage of residents and faculty versus the percentage of correct test scores (A). The percentage of residents or faculty versus the number of incorrect answers per participant (B).

The error rates differed significantly among the types of questions (P = .001) (Table 2). Percentage questions yielded the greatest error rates compared with rate (P < .001), dose (P = .001), and ratio (P = .015) questions. Ratio questions yielded significantly greater error rates than rate questions (P < .001), and dose questions yielded significantly greater error rates than rate questions (P = .005). The error rates according to the number of operations required to obtain an answer also differed significantly overall (P < .001) (Table 2). The error rates from greatest to least were for 5 > 4 > 2 > 3 > 1 mathematical operations (pairwise P < .001, except for 1 versus 3 operations [P = .796] and 4 versus 5 [P = .70]). Six questions included drugs with which most anesthesiologists are unfamiliar (Nos. 3, 4, 7, 8, 13, and 15); the remaining 9 questions included familiar drugs. The unfamiliar drugs included drugs to treat pulmonary hypertension and heart failure, as well as drugs for epidural infusions in a child and dantrolene for malignant hyperthermia. The average number of errors for these 6 questions (from all participants combined) (82 ± 15) was similar to that for the remaining 9 (54 ± 10) (P = .12). The questions associated with the greatest frequency of errors were Nos. 13 (n = 116), 4 (n = 112), and 6 (n = 106).

Table 2. Error Rates for the Type of Question and Number of Operations

The mean years of experience for residents was 1.9 ± 0.1 years and for faculty was 15.4 ± 0.7 years; when combined, the mean was 7.6 ± 0.5 years. The Pearson correlation between experience and error rate for faculty was modest but significant (R = 0.22; P = .007), whereas that for residents was not significant (R = −0.11; P = .12). When residents and faculty were combined, years of experience did not correlate with the error rate (R = 0.04; P = .39). The mean (± SE) number of hours of sleep for residents was 6.2 ± 0.1 and for faculty was 6.3 ± 0.1; when combined, it was 6.3 ± 0.1 hours. There were no significant relationships between the error rate and the duration of sleep the previous night (all R < 0.06; P > .42).

Post hoc, the authors posited that drug calculation errors that were up to twice the correct dose might result in minor adverse events but were less likely to result in serious adverse events than those that were >2-fold greater or less than the correct dose, based on the therapeutic index for most anesthetic drugs in humans. As a result, we compiled the frequency of errors that were >2-fold greater or less than the correct dose (Figure 2; Supplemental Digital Content, Appendix 1, https://links.lww.com/AA/C697). The magnitude of the errors in aggregate for the residents (2.6 ± 0.1) and faculty (2.5 ± 0.1) was similar (P = .42) and independent of the years of experience (R = 0.07; P = .24) and the number of hours of sleep the night before (R = −0.06; P = .29). The answers to 2 questions, No. 4 (n = 88) and No. 13 (n = 91), comprised 26% of the errors of large magnitude. Overall, 57% of the errors of large magnitude exceeded the correct answer and 43% were less.

Figure 2. The magnitude of the incorrect answers for each of the 15 questions. Errors that were <2-fold greater or less than the correct answer were omitted from the graph (see the Results section for explanation). Each histogram displays the percentage of the incorrect answers that were >2-fold greater or less than the correct answer, grouped into 4 error size categories as shown in the legend. The size categories were based, in part, on the work of Glover and Sussmane.12

We also determined that 6% of the participants committed errors between 10- and 100-fold greater (or less) than the correct answer. For errors (total = 99 errors) that exceeded 100-fold greater (or less) than the correct answer, residents (n = 51) committed 68% of the errors, twice the number committed by the faculty (n = 29). Residents committed >1 incorrect answer of large magnitude (>100-fold greater or less than the correct answer) with a frequency of 13 out of 51 (or 25%), compared with faculty, at 3 out of 29 (or 10%) (P = .10).

DISCUSSION

We determined the frequency and magnitude of computational errors in 371 residents and faculty from 7 anesthesiology residency programs in the United States by analyzing their responses to a written test of drug doses and infusions. We undertook this investigation because drug errors occur frequently in anesthesiology (once in every 133–274 anesthetics), are more common in emergency situations, and may lead to serious sequelae.27,30 Multiple strategies have been proposed to mitigate drug errors, although they have not been uniformly adopted in anesthesia practice20,21 and, in some cases such as smart pumps, have been overridden by clinicians.23 And yet, one strategy, the standardization, technology, pharmacy, and culture paradigm arising from a consensus conference of the Anesthesia Patient Safety Foundation, has been successfully implemented to reduce drug errors.31 In this single, snapshot test of computational skills in a large cohort of residents and faculty, in which the time to complete the test was not constrained and technology devices (eg, phone and calculator) were permitted, a minority of residents (20%) and faculty (25%) answered all of the questions correctly. Residents and faculty erred similarly in 17% of questions, although errors of a large magnitude, 100-fold greater or less than the correct answer, were more frequent in residents than in faculty. The results of this study identify an underappreciated deficiency in the computational skills of both anesthesia residents and faculty.

Computational proficiency is not a prerequisite for medical school, although some entrance examinations in Europe now include a numerical reasoning component.32 When the computational skills of 168 medical students were tested with 3 drug dosing questions, only 10% answered all 3 questions correctly, and 25% answered all 3 incorrectly.33 Several studies reported that the error rate by junior postgraduates was greater than by senior residents,14,34 which is consistent with our results (Table 1). Furthermore, a small proportion of pediatric residents (10%–30%) committed 10-fold errors, and 5% committed 1000-fold errors, also consistent with our results (Figure 2).11,12,26 What distinguishes this study from previous studies is that we tested computational proficiency in a large cohort of residents and faculty (10-fold greater than in a previous study),26 in a multi-institutional, pan-national design,11,12,33,35 and with a test that included 6 questions focused on drugs with which participants would be less familiar (Supplemental Digital Content, Appendix 1, https://links.lww.com/AA/C697).11,12,25,26 These latter questions minimized the impact of familiarity on the answers (also known as crystallized intelligence) compared with the remaining questions.11,12,33,35,36 Notably, neither the error rates nor the frequency of large (or small) errors for these 6 questions with unfamiliar drugs differed significantly from those for the remaining 9 questions with more familiar drugs. We believe that this approach increased the external validity of our results and supports our recommendation that educational strategies37 be adopted.

Developing a strategy to improve the computational skills of anesthesia residents requires a multipronged approach. The first strategy would establish a baseline by testing the computational skills of each resident at admission into the program.13,19,32,33 The second would require that each resident (with emphasis on those who performed poorly on the admission test) participate in serial workshops to address knowledge gaps and improve their computational skills.13,19,37,38 The third would involve periodic follow-up assessments throughout the residency program to ensure maintenance of computational proficiency.3 At the end of the training program, a final test would confirm whether an acceptable level of computational skill had been maintained.19,38 If not, then remedial computational training would be provided. We posit that this process for improving computational skills, when combined with systemwide strategies that include standardization, technology, pharmacy, and culture; electronic medical record dosing alerts; and/or electronic apps, will attenuate the frequency and magnitude of drug errors by anesthesia residents.

Addressing the error rates in faculty requires an approach that is distinct from that for the residents. In this study, even experienced anesthesiologists committed computational errors at a rate ≥15%, with 10% committing >1 error of a large order of magnitude. Available strategies to attenuate faculty computational errors include adopting smart pumps, relying on the pharmacy to prepare correct doses, using electronic medical record alerts, and applying the standardization, technology, pharmacy, and culture strategy. The results of the few studies of the effectiveness of smart pumps to prevent dosing errors are mixed.23,39,40 In some, the reduction in errors was limited because clinicians overrode the soft limits on the pumps. Cost-cutting strategies in pharmacies have increasingly limited their ability to prepare drug dilutions, infusions, and unit dosing. Nonetheless, preparing drug infusions in a central pharmacy or from commercially prepared stock solutions reduces drug concentration errors compared with those prepared by nurses on the ward.41,42 Furthermore, automated preparation techniques in the pharmacy reduce errors compared with manually prepared solutions, and when ward nurses prepared drug infusions using prefilled drug syringes, fewer drug errors resulted than with infusions prepared from standard drug vials. A novel notion is to include computational skills testing in maintenance of competence programs in anesthesia education, as well as to institute remedial training for those who fail or who commit frequent errors or errors of large magnitude. In aggregate, these strategies may reduce the frequency and magnitude of drug dosing errors by faculty anesthesiologists, although studies are required to establish their effectiveness.

The frequency of incorrect answers by faculty increased with years of experience, although experience accounted for only 5% of the variability in the error rate.15 We used the respondents’ “years of experience” as a surrogate index for aging in this subsequent analysis. Aging is associated with a gradual deterioration in memory and executive cognitive functions; however, its effect on computational skills has not been fully elucidated.36 The test used in this study assessed 2 different skills: computation or arithmetic skills, and problem solving. The former skills are well preserved with age and may actually improve as a result of the years of experience during which these skills become firmly established.42 However, the latter, problem solving, may deteriorate. Problem solving depends on 2 executive functions: the inhibitory process and fluid intelligence. The inhibitory process, which becomes progressively impaired with age (beyond 50 or 60 years of age), is the ability to separate extraneous information from that required to solve the problem.42,43 The second, fluid intelligence, which may begin to deteriorate as early as 50 years of age,44 allows innovative thinking to solve unfamiliar problems without relying on previous experience or knowledge. In our test, 40% of the questions involved drugs with which the participants were unfamiliar, thus requiring a greater element of problem solving. Thus, the small decrease in the number of correct answers with years of experience may reflect the offsetting effects of preserved computational skills and waning inhibitory and fluid intelligence processes.

Computational proficiency has been investigated to only a limited extent in anesthesia.21,30,33,35,37 In a study of 141 physicians and surgeons, drug calculation errors occurred in the majority of participants who completed the 12-question test, although anesthesiologists committed fewer errors than both physicians and surgeons.45 The source of the errors was multifactorial: misreading the question, using incorrect operations (eg, dividing instead of multiplying), and/or procedures (eg, incorrectly placing the decimal point or reporting the answer in incorrect units). Although many errors may go unnoticed and have minimal impact on patient outcomes, others may lead to serious sequelae.46–48 Anesthesiologists administer a small number of drugs on a daily basis, possibly limiting the frequency of errors they commit, although the frequency and magnitude of errors with familiar and unfamiliar drugs in this study were similar. Nonetheless, several approaches have demonstrated that computational skills may be improved long term (up to 4 years after the initial workshop training), including drug calculation lectures and participation in workshops and simulation in both medical and pharmacy schools.3,49 Further studies are needed to identify the source(s) of and to develop effective strategies to reduce the most frequent and largest drug errors in anesthesia.

Drug concentrations are often labeled differently on the packaging, resulting in errors when the provider has to convert the units to a measurable unit.33,50,51 Studies demonstrated that dosing errors were more common when concentrations were expressed as ratios or percentages.14,33,50,51 Our results are consistent with those findings (Table 2).

Computational errors of the order of magnitude reported here are exceedingly dangerous in anesthesia. Ten-fold or greater errors have been reported in several studies,11,12,48,52 with 1 student (5%) reportedly committing a 1000-fold calculation error.11 In this study, 6% of the participants committed drug errors 10- to 100-fold greater (or less) than the correct answer, and 6.7% of the residents committed >1 error with a magnitude in excess of 100-fold greater or less than the correct answer, findings that are consistent with previous studies.11,12,48,52 The clinical impact of errors depends on the therapeutic index of the drugs involved in the dosing errors.48,53 Dosing errors that are 100-fold greater (or less) than the correct dose almost certainly will result in substantial consequences that may range from prolonged recovery or intensive care admission to the need for cardiopulmonary resuscitation; drugs with narrow therapeutic indices are twice as likely to yield drug-related adverse events as drugs with wide therapeutic indices.47,53,54 Systemwide strategies to mitigate such errors, such as “smart pumps,” are unevenly applied and, in some cases, are easily overridden.22,23 It is for these reasons that education, safeguards, and/or algorithms are needed in anesthesia residency programs.

There are several limitations in this study. First, we studied only a sample of residents compared with the total number in residency programs in anesthesiology in the United States (209 of the 5578 residency positions, or 3.7%) and a small fraction of faculty. Whether this sampling accurately reflects the computational proficiency of the entire population of residents and faculty remains unclear. Moreover, all participants work in the United States, possibly limiting the external validity of our data. Second, we did not randomly select the anesthesia programs from across the country to participate in this study, but rather used a convenience sample of those institutions that had program directors who were interested in this study. Thus, sampling bias of the residency programs may have skewed our results. Because individuals could opt out of participating in the test, there is a risk that those who chose not to participate could have included a disproportionately large number of residents and faculty with weaker math skills, known as response bias. Third, although we vetted the questions through experts and a sample of clinicians before finalizing the syntax and number of questions, we standardized neither the number of questions with similar concentration units (eg, percentages) nor the number of computational operations per question, which may have resulted in a disproportionate number of questions of a specific type. Finally, because the study was conducted in a quiet “laboratory” setting (outside of the operating room), the error rates reported here may underestimate the rates that would have been obtained had the tests been conducted in a stressful operating room environment with multiple sensory distractions.4,55,56

In conclusion, we determined that both residents and faculty in anesthesiology at 7 academic institutions in the United States committed a substantial number of drug calculation errors during computational skill testing. Errors occurred more frequently in less experienced residents and in more experienced faculty. Very large or very small dosing errors occurred infrequently, although they were more commonly committed by residents than by faculty. These findings represent a serious potential risk for harm to patients.

DISCLOSURES

Name: Shira Black, DO.

Contribution: This author helped collect data, submit to the institutional review board at University of Rochester, write the first draft, edit subsequent drafts, and approve the final copy.

Name: Jerrold Lerman, MD, FRCPC, FANZCA.

Contribution: This author helped develop the test, submit to the institutional review board at University of Buffalo and University of Rochester, collect data, write and edit all drafts, prepare figures, and approve all copies.

Name: Shawn E. Banks, MD.

Contribution: This author helped submit to the institutional review board at University of Miami, collect data, edit drafts, and approve all copies.

Name: Dena Noghrehkar, MD.

Contribution: This author helped submit to the institutional review board at University of Buffalo, collect data, edit drafts, and approve the final copy.

Name: Luciana Curia, MD.

Contribution: This author helped submit to the institutional review board at University of Rochester, collect data, edit drafts, and approve the final copy.

Name: Christine L. Mai, MD, MS-HPEd.

Contribution: This author helped submit to the institutional review board at Harvard, collect data, edit drafts, and approve the final copy.

Name: Deborah Schwengel, MD, MEd.

Contribution: This author helped submit to the institutional review board at Johns Hopkins University, collect data, edit drafts, and approve the final copy.

Name: Corey K. Nelson, MD.

Contribution: This author helped submit to the institutional review board at UC Irvine, collect data, edit drafts, and approve the final copy.

Name: James M. T. Foster, MD.

Contribution: This author helped submit to the institutional review board at SUNY Upstate, collect data, edit drafts, and approve the final copy.

Name: Stephen Breneman, MD, PhD.

Contribution: This author helped collect data, write the first draft, edit subsequent drafts, and approve the final copy.

Name: Kris L. Arheart, PhD.

Contribution: This author helped prepare the manuscript, edit drafts, analyze the data statistically, and approve the final copy.

This manuscript was handled by: Edward C. Nemergut, MD.

REFERENCES

1. Beecher HK, Todd DP. A study of the deaths associated with anesthesia and surgery: based on a study of 599,548 anesthesias in ten institutions 1948-1952, inclusive. Ann Surg. 1954;140:2–35.
2. Craig J, Wilson ME. A survey of anaesthetic misadventures. Anaesthesia. 1981;36:933–936.
3. Wallace D, Woolley T, Martin D, Rasalam R, Bellei M. Medication calculation and administration workshop and hurdle assessment increases student awareness towards the importance of safe practices to decrease medication errors in the future. Educ Health (Abingdon). 2016;29:171–178.
4. Abeysekera A, Bergman IJ, Kluger MT, Short TG. Drug error in anaesthetic practice: a review of 896 reports from the Australian Incident Monitoring Study database. Anaesthesia. 2005;60:220–227.
5. Barker KN, Flynn EA, Pepper GA, Bates DW, Mikeal RL. Medication errors observed in 36 health care facilities. Arch Intern Med. 2002;162:1897–1903.
6. Bowdle TA. Drug administration errors from the ASA closed claims project. ASA Newsletter. 2003;6:11–13.
7. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE prevention study group. JAMA. 1995;274:35–43.
8. Merry AF, Peck DJ. Anaesthetists, errors in drug administration and the law. N Z Med J. 1995;108:185–187.
9. Webster CS, Merry AF, Larsson L, McGrath KA, Weller J. The frequency and nature of drug administration error during anaesthesia. Anaesth Intensive Care. 2001;29:494–500.
10. Cooper L, DiGiovanni N, Schultz L, Taylor AM, Nossaman B. Influences observed on incidence and reporting of medication errors in anesthesia. Can J Anaesth. 2012;59:562–570.
11. Rowe C, Koren T, Koren G. Errors by paediatric residents in calculating drug doses. Arch Dis Child. 1998;79:56–58.
12. Glover ML, Sussmane JB. Assessing pediatrics residents’ mathematical skills for prescribing medication: a need for improved training. Acad Med. 2002;77:1007–1010.
13. Degnan BA, Murray LJ, Dunling CP, et al. The effect of additional teaching on medical students’ drug administration skills in a simulated emergency scenario. Anaesthesia. 2006;61:1155–1160.
14. Wheeler DW, Wheeler SJ, Ringrose TR. Factors influencing doctors’ ability to calculate drug doses correctly. Int J Clin Pract. 2007;61:189–194.
15. Parshuram CS, To T, Seto W, Trope A, Koren G, Laupacis A. Systematic evaluation of errors occurring during the preparation of intravenous medication. CMAJ. 2008;178:42–48.
16. Sethuraman U, Kannikeswaran N, Murray KP, Zidan MA, Chamberlain JM. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department. Acad Emerg Med. 2015;22:714–719.
17. Choi CK, Saberito D, Tyagaraj C, Tyagaraj K. Organizational performance and regulatory compliance as measured by clinical pertinence indicators before and after implementation of Anesthesia Information Management System (AIMS). J Med Syst. 2014;38:5.
18. Kaufmann J, Wolf AR, Becke K, Laschat M, Wappler F, Engelhardt T. Drug safety in paediatric anaesthesia. Br J Anaesth. 2017;118:670–679.
19. Smith NA, Wheeler DW. Intensive teaching of drug calculation skills: the earlier the better. Qual Saf Health Care. 2010;19:158.
20. Martin LD, Grigg EB, Verma S, Latham GJ, Rampersad SE, Martin LD. Outcomes of a failure mode and effects analysis for medication errors in pediatric anesthesia. Paediatr Anaesth. 2017;27:571–580.
21. Lobaugh LMY, Martin LD, Schleelein LE, Tyler DC, Litman RS. Medication errors in pediatric anesthesia: a report from the wake up safe quality improvement initiative. Anesth Analg. 2017;125:936–942.
22. Polisena J, Sinclair A, Hilfi H, Bedard M, Sedrakyan A. Wireless smart infusion pumps: a descriptive analysis of the continuous quality improvement data. J Med Biol Eng. 2018;38:296–303.
23. Byrne S, Do QM, Gretzinger D, Kresta P. Are “smart pumps” preventing medication errors? CMBES Proc. 2017;29:1.
24. Flannery AH, Parli SE. Medication errors in cardiopulmonary arrest and code-related situations. Am J Crit Care. 2016;25:12–20.
25. Wheeler SJ, Wheeler DW. Medication errors in anaesthesia and critical care. Anaesthesia. 2005;60:257–273.
26. Wong IC, Ghaleb MA, Franklin BD, Barber N. Incidence and nature of dosing errors in paediatric medications: a systematic review. Drug Saf. 2004;27:661–670.
27. Kozer E, Scolnik D, Jarvis AD, Koren G. The effect of detection approaches on the reported incidence of tenfold errors. Drug Saf. 2006;29:169–174.
28. Harrington D. Confirmatory Factor Analysis. 2009.New York, NY: Oxford University Press.
29. Gaskin J. Confirmatory Factor Analysis (CFA). Gaskination's StatWiki. 2016. Available at: http://statwiki.kolobkreations.com. Accessed November 22, 2018.
30. Glavin RJ. Drug errors: consequences, mechanisms, and avoidance. Br J Anaesth. 2010;105:76–82.
31. Vanderveen T, Graver S, Noped J, et al. Successful implementation of the new paradigm for medication safety: standardization, technology, pharmacy, and culture (STPC). APSF Newsletter. 2010;25:26–28.
32. Wright SR, Bradley PM. Has the UK clinical aptitude test improved medical student selection? Med Educ. 2010;44:1069–1076.
33. Wheeler DW, Remoundos DD, Whittlestone KD, House TP, Menon DK. Calculation of doses of drugs in solution: are medical students confused by different means of expressing drug concentrations? Drug Saf. 2004;27:729–734.
34. Taylor AA, Byrne-Davis LM. Clinician numeracy: use of the medical interpretation and numeracy test in foundation trainee doctors. Numeracy. 2017;10:Article 5, 1–20.
35. Avidan A, Levin PD, Weissman C, Gozal Y. Anesthesiologists’ ability in calculating weight-based concentrations for pediatric drug infusions: an observational study. J Clin Anesth. 2014;26:276–280.
36. Rozencwajg P, Schaeffer O, Lefebvre V. Arithmetic and aging: impact of quantitative knowledge and processing speed. Learn Individ Differ. 2010;20:452–458.
37. McQueen DS, Begg MJ, Maxwell SR. eDrugCalc: an online self-assessment package to enhance medical students’ drug dose calculation skills. Br J Clin Pharmacol. 2010;70:492–499.
38. Wheeler DW, Degnan BA, Murray LJ, et al. Retention of drug administration skills after intensive teaching. Anaesthesia. 2008;63:379–384.
39. Manrique-Rodríguez S, Sánchez-Galindo AC, López-Herce J, et al. Impact of implementing smart infusion pumps in a pediatric intensive care unit. Am J Health Syst Pharm. 2013;70:1897–1906.
40. Adapa RM, Mani V, Murray LJ, et al. Errors during the preparation of drug infusions: a randomized controlled trial. Br J Anaesth. 2012;109:729–734.
41. Hedlund N, Beer I, Hoppe-Tichy T, Trbovich P. Systematic evidence review of rates and burden of harm of intravenous admixture drug preparation errors in healthcare settings. BMJ Open. 2017;7:e015912.
42. Norris JE, McGeown WJ, Guerrini C, Castronovo J. Aging and the number sense: preserved basic non-symbolic numerical processing and enhanced basic symbolic processing. Front Psychol. 2015;6:999.
43. Mathis A, Schunck T, Erb G, Namer IJ, Luthringer R. The effect of aging on the inhibitory function in middle-aged subjects: a functional MRI study coupled with a color-matched Stroop task. Int J Geriatr Psychiatry. 2009;24:1062–1071.
44. Chen X, Hertzog C, Park DC. Cognitive predictors of everyday problem solving across the lifespan. Gerontology. 2017;63:372–384.
45. Simpson CM, Keijzers GB, Lind JF. A survey of drug-dose calculation skills of Australian tertiary hospital doctors. Med J Aust. 2009;190:117–120.
46. Kaushal R, Bates DW, Landrigan C, et al. Medication errors and adverse drug events in pediatric inpatients. JAMA. 2001;285:2114–2120.
47. Nanji KC, Patel A, Shaikh S, Seger DL, Bates DW. Evaluation of perioperative medication errors and adverse drug events. Anesthesiology. 2016;124:25–34.
48. Gokhul A, Jeena PM, Gray A. Iatrogenic medication errors in a paediatric intensive care unit in Durban, South Africa. S Afr Med J. 2016;106:1222–1229.
49. Atayee RS, Awdishu L, Namba J. Using simulation to improve first-year pharmacy students’ ability to identify medication errors involving the top 100 prescription medications. Am J Pharm Educ. 2016;80:86.
50. Wheeler DW, Remoundos DD, Whittlestone KD, et al. Doctors’ confusion over ratios and percentages in drug solutions: the case for standard labelling. J R Soc Med. 2004;97:380–383.
51. Wheeler DW, Carter JJ, Murray LJ, et al. The effect of drug concentration expression on epinephrine dosing errors: a randomized trial. Ann Intern Med. 2008;148:11–14.
52. Ross LM, Wallace J, Paton JY. Medication errors in a paediatric teaching hospital in the UK: five years operational experience. Arch Dis Child. 2000;83:492–497.
53. Blix HS, Viktil KK, Moger TA, Reikvam A. Drugs with narrow therapeutic index as indicators in the risk management of hospitalised patients. Pharm Pract (Granada). 2010;8:50–55.
54. Kozer E, Scolnik D, Keays T, Shi K, Luk T, Koren G. Large errors in the dosing of medications for children. N Engl J Med. 2002;346:1175–1176.
55. McDowell SE, Ferner HS, Ferner RE. The pathophysiology of medication errors: how and where they arise. Br J Clin Pharmacol. 2009;67:605–613.
56. Prakash V, Koczmara C, Savage P, et al. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting. BMJ Qual Saf. 2014;23:884–892.

Copyright © 2019 International Anesthesia Research Society