Medication administration (MA) in hospitals is high risk and complex. The importance of MA accuracy has been widely noted in the literature.1-4 Strategies for protecting patients during MA are limited because most errors go unreported: the clinician is often unaware an error has occurred.5 Nurses are both the last line of defense against the MA errors of others and a potential perpetrator of error. Because clinicians may not be aware of their errors, accurate data on the incidence of MA errors are compromised, and understanding of how MA practices are associated with errors is inherently limited. Barker et al,5 noting that the majority of MA errors go unreported, found that direct observation with medical record review, originally developed in 1962, was a reliable method to determine MA accuracy.6 Direct observation has emerged as the measurement “gold standard,” recently reaffirmed by Meyer-Massetti et al7 in a systematic review of medication safety assessment methods, which noted that different assessment methods offer unique advantages and resource requirements.
Medication administration accuracy benefits from adherence to a series of practices that reduce human error, commonly called the “5 rights”—right patient, drug, dose, route, and time. Despite tradition-based face validity and qualitative explication, “rights” taxonomies are not evidence based.8-11 The impact of unit-level work environments on patient outcomes has been widely studied, and the resulting literature has fueled systematic reviews and syntheses examining the impact of nurse staffing, technology, and work interruptions on medication safety.12-15 Findings suggest multiple factors converge to explain and predict medication-related errors.
This study, from the Collaborative Alliance for Nursing Outcomes (CALNOC) ongoing study of hospital MA accuracy, examined the independent and predictive strength of microsystem characteristics, including unit-level nurse workload, staff nurse characteristics, and selected risk assessment and preventive interventions, on nurses’ performance of key safe practices during MA and MA outcome errors. A secondary aim was to identify factors that may be useful to advancing improvements in MA accuracy. To build and test the multivariate models, Clarke and Donaldson’s16 conceptual model guided the approach to examining predictive links between variables.
Research questions were as follows:
1. How do selected microsystem factors impact safe practices during administration of medications?
2. To what extent does nurses’ performance of safe practices during MA predict MA accuracy?
3. Do factors emerge that will be useful to clinical and administrative leaders in advancing improvements in MA accuracy?
The research questions were investigated using a cross-sectional design, with systematic direct-observation point-in-time sampling and medical record review. This study was approved by the institutional review boards at the University of California, San Francisco, and Cedars-Sinai Research Institute.
Sample and Setting
Study hospitals, drawn from hospitals participating in CALNOC’s benchmarking registry, comprised a voluntary convenience sample. Each hospital determined, based on strategic priorities and available resources, which adult units/staff/shifts were observed and how often observations were conducted. The resulting sample included data from 124 adult medical-surgical units in 46 hospitals, comprising 148 direct-observation studies (a minimum of 100 doses per study) with 15,600 doses observed from January 2009 through April 2010.
Predictor variables were drawn from data submitted by hospitals in their ongoing nursing-sensitive benchmarking and are detailed in Table 1. These variables, a majority of which are endorsed by the National Quality Forum,17 are defined elsewhere.18 Quarterly hospital-acquired pressure ulcer (HAPU) prevalence studies provided a “snapshot” of the unit patient population and care, including HAPU risk assessment and prevention, and were used to characterize the typical patients and exemplar quality practices on each participating unit. These relatively stable data were aggregated to 1 observation per unit for the study period. RN expertise data were obtained from CALNOC’s annual RN survey on participating units. Hospital characteristics, described below, were captured upon hospital enrollment in CALNOC and updated annually.
The MA accuracy assessment direct-observation approach was developed in 2004 by CALNOC, a self-sustaining, not-for-profit, nursing-sensitive benchmarking registry serving nearly 300 hospitals in 6 states. The CALNOC MA accuracy assessment has been described elsewhere19 and builds on the seminal work of Barker et al20,21 and Pepper.22 The CALNOC MA accuracy assessment has been used for research and quality improvement.23-26
For this study, MA accuracy was defined as a dose administered exactly as ordered by the physician. Other key definitions included the following:
* MA error was a dose administered differently than ordered by the physician.
* MA accuracy was operationalized as the prevalence of errors in MA, both type of error and number of actual errors, in relation to the number of dose opportunities for error.
* Opportunity for error (OE) was the basic unit of data. An OE included any dose ordered plus any unordered doses given and any ordered doses omitted.
To optimize the reliability and validity of the direct observations, nurse observers, selected by study hospitals from their staff, were trained in observation techniques and coding by CALNOC. Observer training included rater-to-standard coding exercises with examples of all safe practices and outcome errors. To ensure that observers were truly naive to the patient treatment and medication orders, they did not observe on their “home unit.”
CALNOC adopted the following safe practices from the “rights” literature, and observers documented whether the nurse:
1. compared medication with MA record,
2. minimized distraction or interruption during medication preparation or administration,
3. ensured medication is labeled throughout process from preparation to administration,
4. checked 2 forms of patient identification prior to administration of medication,
5. explained medication to the patient or family as appropriate, and
6. charted/documented MA immediately after completion.
Following chart review and comparison with the patient’s record, the following 10 possible outcomes were coded:
1. No error observed.
2. Unauthorized drug error: administration of dose never ordered for that patient.
3. Wrong dose error: any dose of a drug (excluding an injectable drug) that contained wrong number of dosage units (such as tablets) or is, in the judgment of the observer, more than 17% greater or less than the correct dosage.
4. Wrong form error: the administration of drug dose in different form than ordered by the prescriber when the prescriber wrote a specific dosage form.
5. Wrong route error: medication administered to the patient using different route than ordered.
6. Wrong technique error: use of inappropriate procedure or improper technique in administration of drug. Focus is on technique violations that can alter drug effect.
7. Extra dose error: any dose given in excess of total number of times authorized by physician order.
8. Omission error: failure to give an ordered dose that appears on the MA record by the time the next dose is due. Patient refusals or drugs appropriately withheld are not considered omissions. An order found in the medical record that does not appear on the MA record is also coded as an omission.
9. Wrong time error: administration of dose more than 60 minutes before or after scheduled administration time. If food is involved in the order, dose should be given within 30 minutes of scheduled time.
10. Drug not available error: administration of dose more than 60 minutes after scheduled administration time due to nonavailability of the medication.
Naive observers directly observed staff prepare, administer, and document medications, dose-by-dose. Selected safe practices and specifics of the medication dose, route, timing, and techniques were noted. After each patient encounter, or after each cluster of patients, the observer extracted medication orders from the medical record, compared what was observed with each active medication order, coding the MA outcome per dose. Data were entered on Excel worksheets and electronically submitted to CALNOC. Predictor variables were obtained for the same period from the CALNOC data set for each study hospital. Automated data checking and verification were implemented prior to upload.
Medication administration assessment data consisted of 1 record for each medication dose observed. Dose-level values were calculated first. Doses were then aggregated to the encounter level, with each encounter including all doses administered during a single observation of a single patient. For patients who received more than 1 dose during a single observation, safe practices and outcomes for the encounter were summarized to avoid correlations among same-encounter doses. Because most observation periods were time limited, the assumption was made that a patient was not likely to be sampled twice within a single 100-dose observation, with encounters and patients typically constituting a 1-to-1 relationship.
The dose omission error rate was documented but excluded from further analyses because omissions were not directly observable. “Wrong time” and “wrong technique” errors were also excluded from further analyses because these codes proved less reliable: observer reporting and interpretation varied and were confounded by variation in hospital policies.
Observers did not document the nurse’s identity but recorded whether the medication was administered by an RN or a licensed vocational nurse. More than 95% of doses were administered by RNs, leaving too little variability for analysis; thus, this measure was not included further. The prevalence of both safe practice deviations and MA outcome errors was calculated individually for each deviation or error type and in aggregate for overall safe practice deviations and outcome errors. Noting the possibility of multiple safe practice deviations and outcome errors per dose and multiple doses per encounter, the following summary variables were calculated for each encounter:
1. Average percent of MA errors per encounter calculated as 2 measures:
a. Average safe practice deviations expressed as percent per encounter: total number of times any safe practice deviation was coded across all doses ÷ (total doses in the encounter × 6 possible practice deviations).
b. Average MA outcome errors expressed as percent per encounter: total number of times any outcome error was coded across all doses ÷ (total doses in the encounter × 6 possible outcome errors).
2. Percent of each individual safe practice deviation/MA outcome error type expressed as percent per encounter. Total number of times the specific practice deviation/outcome error was coded across all doses in the encounter divided by total number of doses in the encounter for each safe practice deviation and outcome error.
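As a hedged sketch, the per-encounter summaries described above might be computed as follows. The data layout and names are our assumptions for illustration, not CALNOC’s actual format or code.

```python
# Illustrative sketch (not the authors' code) of the encounter-level
# summary rates: each dose record counts how many of the 6 safe
# practices were missed and how many of the 6 analyzed error types occurred.

def encounter_rates(doses, n_practices=6, n_error_types=6):
    """Average safe-practice-deviation and outcome-error percentages
    for one encounter (all doses observed for one patient)."""
    total_doses = len(doses)
    deviation_count = sum(d["deviations"] for d in doses)
    error_count = sum(d["errors"] for d in doses)
    avg_deviation_pct = 100 * deviation_count / (total_doses * n_practices)
    avg_error_pct = 100 * error_count / (total_doses * n_error_types)
    return avg_deviation_pct, avg_error_pct

# Example encounter: 4 doses, 2 safe practice deviations, no outcome errors
encounter = [{"deviations": 1, "errors": 0}, {"deviations": 1, "errors": 0},
             {"deviations": 0, "errors": 0}, {"deviations": 0, "errors": 0}]
```

Summarizing at the encounter level, as here, avoids the within-encounter correlation that dose-level analysis would introduce.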
The unit of analysis was 1 record per unit over the year of the study, with predictor variables aggregated to 1 record per unit. Temporal contiguity was preserved, with predictor variables, as well as MA observations, captured for the same study period. Summary statistics were computed for each MA outcome and for model covariates.
Because examination of unit-level distributions showed that average safe practice deviations and average medication outcome errors were relatively rare, 2 outcome variables were analyzed: (1) the logged (for improved symmetry) average safe practice deviations and (2) the number of MA outcome errors. Regression models were fitted to assess the significance and magnitude of the effect of covariates on these outcomes. Model 1 was designed to predict average safe practice deviations from unit/patient characteristics, nurse staffing and workload, and RN expertise. Model 2 included the same predictor variables, adding average safe practice deviations as a predictor of MA errors.
Model 1 used an ordinary least-squares regression, predicting the log-transformed average safe practice deviations. To address the skewed distribution of outcome errors, with a large proportion of units having no outcome errors (termed “excess zeroes”), zero-inflated models were used for the number of outcome errors per year at each unit (model 2). Zero-inflated Poisson and zero-inflated negative binomial (ZINB) regression models are suited to fitting data with excess zeroes. Akaike’s information criterion for goodness of fit was used to select the final model. A rigorous exposition of these methods is provided elsewhere.27,28
The total number of dose administrations per unit (rate denominator) and a standard set of predictors were used as model covariates to adjust for differences between hospitals and units. The standard set of covariates was entered first into each model to control for patient and organizational/structural hospital and unit characteristics and included average patient age, percent of medical patients, average number of patients (occupied beds) in pressure ulcer prevalence studies (unit size proxy), hospital ownership type, and academic status. Potential unit-level predictors tested after including the standard set of covariates included nursing staffing and workload, RN expertise (education and experience), and CALNOC clinical process variables such as risk assessment and protocol implementation. Additional unit covariates included percentage of male patients, percentage of RN voluntary turnover, prevalence of patients with sitters, number of discharges per month, and patient length of stay at time of pressure ulcer prevalence study. Hospital characteristics studied included hospital size (average daily census) and setting (urban or rural). Interactions of significant workload, expertise, and processes of care main effects were tested for significance.
Only those variables that were significant in independent tests for an MA outcome error were considered for the final models. The significant covariates were added one at a time in order of significance (forward selection) to construct the final multiple regression models. A similar procedure was followed for the safe practice deviation rate. Potential intrafacility unit correlation was ignored in modeling because 29 of the 46 hospitals reported MA observation data on only 1 or 2 units.
To illustrate clinical significance of the results, we provide the estimated change in MA accuracy for a 1-SD change in continuous covariates and the difference in outcome for different levels of categorical covariates. Results were transformed back to a linear scale for interpretability where necessary. We also provide estimated changes in outcome errors for various scenarios of changes in selected predictors.
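For the log-transformed outcome of model 1, this back-transformation to a percent change works as in the brief sketch below (our illustration; the coefficient value shown is hypothetical, chosen only to produce an effect on the scale reported in the Results).

```python
import math

def pct_change_per_sd(beta, sd):
    """Estimated percent change in the untransformed outcome for a
    1-SD increase in a predictor, when the model predicts log(outcome)."""
    return (math.exp(beta * sd) - 1) * 100

# Hypothetical coefficient: beta = -0.185 on a predictor with SD 1.0
# yields roughly a 17% reduction in the outcome on the linear scale.
```

This is the standard interpretation of coefficients in a log-linear model: effects are multiplicative on the original scale rather than additive.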
A total of 124 adult acute care units participated, with a mean of 1.2 (SD, 0.6) MA observation studies and a median of 1 study per unit, ranging from 1 to 6. The mean number of patients per study was 25.6 (SD, 12.1), with a median of 23, ranging from 11 to 93. The mean number of doses per patient (encounter) was 4.7 (SD, 1.6), with a median of 4.5, ranging from 1 to 29. Seventy-eight (0.5%) of the 15,600 doses observed were omission errors and were excluded from further analysis because they were not directly observable. Dose omission errors were experienced by 22 (0.6%) of the total 3881 patients.
Descriptive statistics for predictor variables, including unit/patient characteristics, nurse staffing and workload, and RN expertise are shown in Table 1. Average unit safe practice deviations and average outcome errors along with individual safe practice deviations and outcome error types are presented in Table 2. The average unit safe practice deviation rate was 10.55% of doses per encounter, and average unit MA outcome error rate was 0.26% of doses per encounter. The most frequent safe practice deviations were “nurse distracted/interrupted” (25.81%) followed by “chart medication immediately” (10.18%) and “check 2 forms of ID” (10.11%). Apart from “wrong time” and “wrong technique,” the most frequent outcome errors were “drug not available” (0.60%) followed by “wrong dose” (0.37%). “Wrong time” and “wrong technique” error rates, not included in further analyses, were 3.3% and 2.4%, respectively.
Results of the multivariable regression analyses, with the predicted changes in logged average safe practice deviations (model 1) and average outcome errors (model 2) for 1-SD increase in each predictor variable, are found in Table 3. Standard covariates described above were also included in both models to control for hospital and unit characteristics. Figure 1 graphically illustrates the modeling results shown in the right column of Table 3 using Clarke and Donaldson’s16 structure-process-outcome format. Model 1 illustrates the effect of unit/patient characteristics and staffing and workload (unit structure) variables on the clinical process outcome (safe practice deviations). Adjusted for all other covariates in the model, the numbers in parentheses in Figure 1 show the estimated percent change in safe practices for a 1-SD increase in each of the significant predictors. The arrows next to each predictor show the direction of change for each predictor required to reduce safe practice deviations.
For unit/patient characteristics, a 1-SD (4.3%) increase in percentage of patients with a sitter decreases medication safe practice deviations by 19.1%. For staffing and workload, a 1-SD increase in licensed hours per patient day (1.0) reduces safe practice deviations by 16.9%, but patient (bed) turnover must be decreased to reduce safe practice deviations: a 1-SD increase (16.7%) in patient (bed) turnover rate would increase average practice deviations by 27.6% (Table 3). Together, the covariates explained about 50% of the between-unit variation in the log average practice deviation rate. RN expertise variables were not significant in any of the models.
Model 2 (Figure 1) demonstrates significant adjusted effects on MA outcome errors. A 1-SD increase (9.6%) in RN hours is estimated to decrease average outcome errors by 17.6%. Logistic regression findings showed that having more practice deviations (a 1-SD increase, or 8%) would decrease the percentage of units with zero outcome errors from 44.4% to 4.6%. The covariates in model 1 were significantly associated with MA outcome errors only when safe practice deviations were excluded from model 2; therefore, safe practice deviations can be considered a mediator of their association with MA outcome errors.
Figure 2 illustrates the relative effects of increasing the care hours provided by RNs and decreasing safe practices deviations on average outcome errors, showing different scenarios for each of these predictors. Each line illustrates a specific change in safe practice deviations. Increasing the care hours provided by RNs without improving safe practice deviations produces significantly less change in outcome errors. For example, increasing care hours provided by RNs by 10% (from sample mean of 73.8% to 83.8%) without any change in safe practices decreases outcome errors by 18%. On the other hand, a 5% decrease in safe practice deviations from the average rate of 10.55% to 5.55% would reduce outcome errors by approximately 46% without any change in hours of care provided by RNs.
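If outcome errors follow a log-linear (multiplicative) model, the effects of simultaneous changes in predictors combine by multiplying rate ratios. The sketch below is our illustration of that property, with rate ratios back-calculated from the two single-change scenarios reported above; it is not the authors’ computation.

```python
def combined_effect(*rate_ratios):
    """Percent change in outcome errors when several predictor
    changes are applied together, assuming multiplicative effects."""
    total = 1.0
    for rr in rate_ratios:
        total *= rr
    return (total - 1) * 100

# Rate ratios implied by the reported single-change scenarios:
rr_rn_hours = 0.82        # +10% RN hours alone: ~18% fewer errors
rr_safe_practices = 0.54  # deviations 10.55% -> 5.55% alone: ~46% fewer
```

Under this assumption, applying both changes together would reduce outcome errors by roughly 56%, more than either change alone.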
Discussion and Summary
The commitment of 46 study hospitals to observe practice and assess the accuracy of MA suggests that MA assessment benefits hospitals’ ongoing efforts to improve medication safety. The frequency of observed safe practice deviations was similar to previous reports of baseline performance using comparable methods.23,24,29 Findings from this robust sample reveal nurses’ adherence to MA safe practices improved MA accuracy, validating the importance of these crucial processes of care.
This study validated the mediating power of safe practices when nurse staffing and the pace of patient turnover threaten MA accuracy. Although medication delivery errors account for a minority of medication-related errors, they are generally not intercepted, thus exposing patients to hazards ranging from benign to lethal. While more staff (licensed hours), reduced workload (patient turnover), and greater use of prevention protocols for patients at risk for falls (use of sitters) and pressure ulcers (prevention protocol implemented) were independently associated with reductions in safe practice deviations (model 1; Figure 1), safe practice deviations, mediating between predictors and MA outcome errors, were the primary determinants of reductions in MA errors (model 2; Figure 1). Reliable MA accuracy, despite chaotic work environments, may hinge on adherence to safe practices, dose-by-dose. In an era in which mandated nurse-to-patient ratios have gained the attention of policy makers,16 it is noteworthy that increasing RN hours without changing adherence to safe practices resulted in significantly less reduction in MA errors (Figure 2) than improving safe practices alone.
Implications for Nurse Leaders
Systematically assessing the characteristics of unit microsystems and examining how differences within and between units impact nurses’ adherence to key safe practices are priorities to improve MA accuracy. Understanding and managing this variation may expedite transformational change.30 Furthermore, understanding when unit staffing and/or workload reaches a threshold for error may help leaders manage staffing resources and patient flow to avert unit-specific MA accuracy “tipping points.”
Integrating these findings with staffing plans, nursing education, and clinical practice competency validation could improve the outcomes of MA. Introducing the 6 safe practices during basic nursing education and validating the use of the 6 safe practices through competency verification could improve patient safety. These results also highlight the association of interruptions and distractions with MA errors,29 suggesting the potential for improvement in patient safety through changing the practice environment.
The finding that greater use of prevention protocols for patients at risk for falls (ie, use of sitters) and pressure ulcers is linked to fewer safe practice deviations may represent unit-level safety culture. Linking medication safety with reliable performance in other areas of unit-level adverse event risk assessment and prevention may help leaders in their strategic improvement efforts. While the literature linking nurse staffing to patient outcomes has been compelling, despite inconsistencies,16 this study suggests safe practices are crucial mediators of MA outcomes. Finally, this study revealed the value of direct observation, as an adjunct to other data sources, in the leader’s quest to advance patient safety and operationalized the “rights” as 6 observable safe practices for MA.
This study was made possible by the voluntary commitment of hospitals participating in nursing-sensitive benchmarking drawn from a region with regulatory mandates reducing acute care nurse staffing variability.18 Unlike a controlled study, participating units were determined by each hospital for varied strategic reasons. Variation in observer coding may limit these findings. Unmeasured systematic differences may differentiate study hospitals from hospitals at large. Furthermore, this study did not measure technology penetration related to MA. Although we posit that the relationships found would hold under different conditions, more study is needed. Because many of the study hospitals contributed data on 1 or 2 units, we did not control for clustering of units within hospitals. However, underestimation of SEs as a result of not accounting for clustering is expected to be minimal.
1. Institute of Medicine, Board on Health Care Services. Preventing Medication Errors. Washington, DC: National Academies Press; 2006.
2. Shane R. Current status of administration of medicines [published online ahead of print March 3, 2009]. Am J Health Syst Pharm. 2009; 66 (5 suppl 3): S42–S48.
3. Burke KG, Mason DJ, Alexander M, Barnsteiner JH, Rich VL. Making MA safe: report challenges nurses to lead the way [published online ahead of print April 2, 2005]. Am J Nurs. 2005; 105 (Suppl 3): 2–3.
4. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995; 274 (1): 35–43.
5. Barker KN, Flynn EA, Pepper GA, Bates DW, Mikeal RL. Medication errors observed in 36 health care facilities. Arch Intern Med. 2002; 162 (16): 1897–1903.
6. Barker KN, Flynn EA, Pepper GA. Observation method of detecting medication errors. Am J Health Syst Pharm. 2002; 59 (23): 2314–2316.
7. Meyer-Massetti C, Cheng CM, Schwappach DL, et al. Systematic review of medication safety assessment methods [published online ahead of print January 25, 2011]. Am J Health Syst Pharm. 2011; 68 (3): 227–240.
8. Carlton G, Blegen MA. Medication-related errors: a literature review of incidence and antecedents. Annu Rev Nurs Res. 2006; 24: 19–38.
9. Eisenhauer LA, Hurley AC, Dolan N. Nurses’ reported thinking during medication administration. J Nurs Scholarsh. 2007; 39 (1): 82–87.
10. Elliott M, Liu Y. The nine rights of medication administration: an overview. Br J Nurs. 2010; 19 (5): 300–305.
11. Dickson GL, Flynn L. Nurses’ clinical reasoning: processes and practices of medication safety [published online ahead of print August 30, 2011]. Qual Health Res. 2012; 22 (1): 3–16.
12. Mark BA, Belyea M. Nurse staffing and medication errors: cross sectional or longitudinal relationships? Res Nurs Health. 2009; 32 (1): 18–30.
13. Wulff K, Cummings GG, Marck P, Yurtseven O. Medication administration technologies and patient safety: a mixed-method systematic review. J Adv Nurs. 2011; 67 (10): 2080–2095.
14. Brady AM, Malone AM, Fleming S. A literature review of the individual and systems factors that contribute to medication errors in nursing practice [published online ahead of print August 22, 2009]. J Nurs Manag. 2009; 17 (6): 679–697.
15. Biron AD, Loiselle CG, Lavoie-Tremblay M. Work interruptions and their contribution to MA errors: an evidence review [published online ahead of print May 6, 2009]. Worldviews Evid Based Nurs. 2009; 6 (2): 70–86.
16. Clarke SP, Donaldson NE. Nurse staffing and patient care quality and safety. In: Hughes RG, ed. Patient Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville, MD: AHRQ; 2008.
17. National Quality Forum. National Voluntary Consensus Standards for Nursing-Sensitive Care: An Initial Performance Measure Set. A Consensus Report. Washington, DC: National Quality Forum; 2004.
18. Brown DS, Donaldson N, Burnes Bolton L, Aydin CE. Nursing-sensitive benchmarks for hospitals to gauge high-reliability performance [published online ahead of print October 16, 2010]. J Healthc Qual. 2010; 32 (6): 9–17.
19. Donaldson NE, Aydin CE, Fridman M, Foley M. Improving medication administration safety: using naive observation to assess practice and guide improvements in process and outcomes. J Healthc Qual. In press.
20. Barker KN, Flynn EA, Pepper GA. Observation method of detecting medication errors [published online ahead of print December 20, 2002]. Am J Health Syst Pharm. 2002; 59 (23): 2314–2316.
21. Barker KN, Flynn EA, Pepper GA, Bates DW, Mikeal RL. Medication errors observed in 36 health care facilities [published online ahead of print August 28, 2002]. Arch Intern Med. 2002; 162 (16): 1897–1903.
22. Pepper GA. Errors in drug administration by nurses [published online ahead of print February 15, 1995]. Am J Health Syst Pharm. 1995; 52 (4): 390–395.
23. Helmons PJ, Wargel LN, Daniels CE. Effect of bar-code-assisted MA on MA errors and accuracy in multiple patient care areas [published online ahead of print June 19, 2009]. Am J Health Syst Pharm. 2009; 66 (13): 1202–1210.
24. Kliger J, Blegen MA, Gootee D, O’Neil E. Empowering frontline nurses: a structured intervention enables nurses to improve MA accuracy [published online ahead of print January 2, 2010]. Jt Comm J Qual Patient Saf. 2009; 35 (12): 604–612.
25. Ching JM, Long C, Williams BL, Blackmore C. Using lean to improve MA safety: in search of the “perfect dose.” Jt Comm J Qual Patient Saf. 2013; 39 (5): 195–204.
26. Cochran G. Comparing the Effectiveness of Medication Use Systems in Small Rural Hospitals. Rockville, MD: Agency for Healthcare Research and Quality; 2009.
27. Cameron AC, Trivedi PK. Regression Analysis of Count Data. Cambridge, UK: Cambridge University Press; 1998.
28. Atkins DC, Gallop RJ. Rethinking how family researchers model infrequent outcomes: a tutorial on count regression and zero-inflated models. J Fam Psychol. 2007; 21 (4): 726–735.
29. Westbrook JI, Woods A, Rob MI, Dunsmuir WTM, Day R. Associations of interruptions with an increased risk and severity of medication administration errors. Arch Intern Med. 2010; 170 (6): 683–690.
30. Leape L, Berwick D, Clancy C, et al. Transforming healthcare: a safety imperative. Qual Saf Health Care. 2009; 18 (6): 424–428.