
A Method for Measuring System Safety and Latent Errors Associated with Pediatric Procedural Sedation

Blike, George T. MD; Christoffersen, Klaus PhD; Cravero, Joseph P. MD; Andeweg, Steven K. MD; Jensen, Jens MS

doi: 10.1213/01.ANE.0000152614.57997.6C
Pediatric Anesthesia: Research Report

The practice of sedating patients in the hospital for diagnostic and therapeutic procedures may be associated with life-threatening respiratory depression. We describe a method that uses a simulated event to identify latent system failures. A reproducible simulated scenario was developed with realistic physiology that degraded over time if no interventions occurred and improved when treated appropriately. Management of the scenario was observed in an ideal setting, a radiology department, and an emergency department. Event management was videotaped. The simulator’s physiological data were saved automatically at 5-s intervals. Deviations from “best practice” were measured by using a set of video markers for event detection, diagnosis, and treatment. The simulator data files were used to calculate time out of range for critical variables. Hypoxia and hypotension lasted 4.5 and 5.5 min in the radiology and emergency departments, respectively, compared with 0 min in the gold standard setting. Many latent failures were identified by reviewing the video. This study supports the feasibility of using available human simulation as a crash-test dummy to more objectively quantify rescue system performance in actual sedation care settings. This method revealed vulnerabilities in personnel and in care systems even though sedation care regulatory requirements were met.

IMPLICATIONS: We describe a method of using available human simulation to test for actual latent errors (accidents waiting to happen). In this study, the rare event studied was that of apnea secondary to sedation. However, the implications go beyond sedation to represent a generic patient safety problem: that of suboptimal rescue capabilities.

Department of Anesthesiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire

Funded in part by National Institutes of Health Grant 1 RO3 HD041229-01, the National Institute for Child Health and Human Development, and the Burnap Fund for the PainFree Pediatric Simulator.

Accepted for publication November 22, 2004.

Address correspondence and reprint requests to George T. Blike, MD, Department of Anesthesiology, Dartmouth-Hitchcock Medical Center, One Medical Center Dr., Lebanon, NH 03756. Address e-mail to George.Blike@hitchcock.org.

Human error and its potential effect on patient safety are recognized as leading causes of preventable death in the United States (1). Managing the pain and anxiety associated with medical procedures is complex and presents typical patient safety challenges. The pressure to satisfy patient desires for painless, stress-free care for procedures of all types has led to an explosion in the use of more potent sedative medications by nonanesthesiologists. For example, the use of potent drugs such as ketamine and propofol by gastroenterologists, emergency medicine physicians, and intensivists has grown significantly in the last several years (2–4). The safety of these practice changes has proven difficult, if not impossible, to assess with prospective studies. Although severe respiratory depression is relatively rare, millions of patients are exposed to this risk, resulting in significant aggregate morbidity and mortality. There are no large, sufficiently powered multicenter trials to evaluate safety in this context. Instead, the literature is replete with descriptions of how sedative medications can be used in a variety of settings on a series of patients (usually <200 in a cohort) without a fatality (5–7).

Given the low expected incidence of a sedation-induced crisis, it is not surprising that these studies rarely uncover a critical event. The authors invariably conclude that the described sedation practice is safe. For example, Heuss et al. (8) published a study of 2000 patients who were sedated with propofol administered by sedation nurses under the supervision of the gastroenterologist. Despite the lack of statistical power, the abstract reads, “CONCLUSION: The administration of propofol by registered nurses (RNs), with careful monitoring under the supervision of the gastroenterologist, is safe for conscious sedation during gastroenterology endoscopic procedures.” In the absence of critical incidents, however, safety cannot be presumed, because no data are available to assess the efficacy of rescue processes.

Sedation care delivery systems that lack the ability to manage respiratory depression pose a serious threat to patients. In the mid-1980s, 86 deaths in the USA were attributed to the just-released anxiolytic midazolam. Because providers dosed midazolam according to recommendations that assumed a potency equivalent to that of diazepam (the actual potency is 2 to 4 times greater), there were many overdoses. All but three of the deaths occurred outside of the operating room in settings where anesthesiologists were not typically present (9). A common theme was that the nonanesthesiologist sedation providers failed to support respiration adequately when needed. Although anesthesiologists were also using midazolam, one must presume that their ability to diagnose apnea and support ventilation (rescue capability) provided a safeguard against brain damage and death due to hypoxia. In the aftermath of this public health catastrophe, regulatory agencies identified anesthesiologists as sedation “experts” and partnered with the American Society of Anesthesiologists to ensure sedation safety. Anesthesiologists have since been charged with providing guidelines and oversight for sedation practices in the hospital setting (10).

Despite the introduction of guidelines for safe sedation care and Joint Commission on Accreditation of Healthcare Organizations standards, conscious sedation for minor procedures remains associated with an alarming number of fatalities. Cote et al. (11) published a review of more than 100 pediatric sedation-related critical incidents that resulted in death or severe neurological injury. Their critical incident analysis identified that the overwhelming majority of these deaths were avoidable and were associated with a “failure to rescue,” i.e., a failure to provide airway and ventilatory support in a timely fashion. This report and others have served as a warning that pediatric patients are at especially high risk for injury during sedation.

We believe that available human simulation can be used to uncover latent conditions in actual care-delivery settings that are currently perceived to be “safe”; demonstrating this was the primary goal of this work. The secondary goal was to provide evidence supporting the face validity of a new methodological approach we have developed to measure aggregate clinical microsystem performance and safety in a semiobjective manner. The use of a provocative test to uncover hidden vulnerabilities is logical. We describe a method for creating a representative challenge, measuring performance, and identifying care management problems in clinical settings in which rescue systems are assumed to work but are rarely tested.

This field study tests a new methodological approach for measuring patient safety in health care settings in which rescue systems are rarely used. Pediatric sedation (i.e., the acute treatment of procedure-related pain and stress with medications in children) served as the test domain for this investigation. We explored the potential of available human simulation to objectively and provocatively test the rescue capability of current sedation care settings. A pediatric simulator was programmed to challenge a radiology department sedation team and an emergency department (ED) sedation team with the same sedation critical event. Rescue performance was compared with a “gold standard” generated by a pediatric anesthesiologist who performs a large volume of pediatric sedation with standard tools and techniques. In addition, we describe the use of a qualitative method for reviewing videotapes of the simulation exercises and identifying performance errors along with their associated contributory factors.


Methods

Pediatric sedation was used as the test domain for this investigation. Pediatric sedation was chosen because children are a high-risk, low-error-tolerance subset of all patients receiving sedative medications. They are more likely to experience errors in management, and those errors are more likely to result in negative outcomes (11).

Two settings were selected to test the method. The interventional radiology department and the ED both provide pediatric sedation care on a regular basis in a context that biases them to use more potent medications titrated to a deeper level of sedation. In both settings, most procedures cause both pain and anxiety. The radiology practice often uses oral chloral hydrate or IV midazolam and/or fentanyl in combination, with a critical care-level nurse monitoring noninvasive blood pressure and oxygen saturation (Spo2) and providing direct observation. Airway equipment is brought in a toolbox to care areas such as the computed tomography scanner. A radiologist provides physician backup. The emergency medicine practice uses ketamine in combination with atropine for brief painful procedures (fracture reduction) and fentanyl plus midazolam for longer, less stimulating interventions. An emergency room nurse and an emergency physician are in attendance providing the IV sedation. Monitoring includes noninvasive blood pressure, Spo2, electrocardiogram, respiratory rate (RR), and direct observation. Airway equipment is available in a code cart. Both groups use the code team for additional support when managing critical events. Neither group uses capnometry. Succinylcholine is not readily available, nor is airway equipment customized and ready for the patient. The number and skill set of clinicians who provide sedation or support resuscitation are not controlled.

When a code blue is initiated at Dartmouth-Hitchcock Medical Center, responders include registered nurses (RNs); flight nurses; critical care nurses from adult, pediatric, and neonatal intensive care units (ICUs); respiratory therapists; anesthesiology residents; physician supervisors of the sedation provider; secretarial support; and administrative support. The pediatric code blue response at this study institution consists of 17 individuals: 1 general internal medicine resident, postgraduate year (PGY) 3; 1 general internal medicine resident, PGY 1; 1 intensive care nursery (ICN) pediatric resident, PGY 3; 1 ICN pediatric resident, PGY 1; 1 ICN pediatric critical care RN; 1 pediatric resident, PGY 2; 1 pediatric charge RN; 1 pediatric ICU RN; 1 emergency medicine flight RN; 1 ED RN; 2 respiratory therapists; 1 anesthesia resident, PGY 2–4; 1 cardiac care unit/cardiothoracic ICU RN; 2 medical students on internal medicine rotation; and 1 distribution equipment support person. Code responders were not alerted to the fact that the response was a simulation exercise until arrival and direct observation of the simulated patient.

All sedation providers and code team members (>300 clinicians) were notified that simulation exercises to assess pediatric sedation safety were being conducted. These exercises were to be unannounced. Participation was mandatory, but clinicians were assured that the goals were to meet hospital patient safety objectives, not to impugn individuals. IRB approval was obtained, and informed consent was waived because video data maintained provider confidentiality and because the exercises were deemed to be training exercises.

To provide a reference against which to compare the performance of the emergency medicine and radiology teams, we conducted a third run of the method under “ideal” conditions. An experienced pediatric anesthesiologist, without knowledge of the scenario features, provided sedation and managed the apneic event by using standard resources.1 This served to define a demonstrably attainable standard of performance, both for the process followed and in terms of how well the critical variables were maintained within their desired ranges.

According to the critical incident review of pediatric sedation-related deaths, respiratory depression leading to airway obstruction and/or central apnea was the most serious event associated with a negative outcome when rescue systems failed (11). A single scenario was developed, using the Medical Education Technologies Incorporated (METI) pediatric simulator, that presented a child with an obstructed airway followed by frank apnea. The patient behavior to be modeled was scripted by three pediatric anesthesiologists. The simulator was programmed so that ventilation, oxygenation, and hemodynamics matched those of a normal 4-yr-old child under sedation. Airway obstruction was followed by desaturation that varied according to whether supplemental oxygen was being used. Sustained hypoxia led to bradycardia and hypotension according to an algorithm driven by programmed normal physiology for a child of this age. If treatment for airway obstruction was properly instituted with an oral airway and a chin lift, the simulated obstruction was reversed, and the patient was left with only apnea. If bag-mask ventilation was provided properly, the hypoxia resolved and the bradycardia ceased. If providers did not have proper equipment available or failed to use resuscitative equipment appropriately, the simulated child remained hypoxic and bradycardic.
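The scenario logic just described can be summarized as a simple rule-driven state model. The following is a minimal sketch of that logic for illustration only; the actual METI simulator uses proprietary physiologic models, and the state names, rates, and thresholds below are assumptions rather than the study's settings.

# Minimal, hypothetical sketch of the scripted scenario logic described above.
# The real simulator runs proprietary physiologic models; the states, rates,
# and thresholds here are illustrative assumptions, not the study's settings.

def step(state, dt, on_oxygen, oral_airway_and_chin_lift, effective_bag_mask):
    """Advance the simulated patient one time step of dt seconds."""
    if oral_airway_and_chin_lift:
        state["obstructed"] = False          # obstruction relieved; central apnea persists

    ventilated = effective_bag_mask and not state["obstructed"]

    if ventilated:
        state["spo2"] = min(100.0, state["spo2"] + 5.0 * dt)   # reoxygenation with rescue breaths
    else:
        desat_rate = 0.5 if on_oxygen else 1.5                 # slower desaturation on supplemental O2
        state["spo2"] = max(20.0, state["spo2"] - desat_rate * dt)

    # Sustained hypoxia drives bradycardia and hypotension; recovery reverses them.
    if state["spo2"] < 60.0:
        state["hr"] = max(40.0, state["hr"] - 2.0 * dt)
        state["sbp"] = max(40.0, state["sbp"] - 1.5 * dt)
    else:
        state["hr"] = min(110.0, state["hr"] + 2.0 * dt)
        state["sbp"] = min(100.0, state["sbp"] + 1.5 * dt)
    return state


patient = {"obstructed": True, "spo2": 97.0, "hr": 110.0, "sbp": 100.0}
for t in range(0, 300, 5):                    # 5-s steps, matching the simulator log interval
    patient = step(patient, dt=5.0, on_oxygen=False,
                   oral_airway_and_chin_lift=(t >= 120),
                   effective_bag_mask=(t >= 150))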

Qualitative methods were used to calibrate the simulator settings. Medical students with no airway training were compared with anesthesiology residents in their second year of training. We assumed that medical students, as novices, would have difficulty managing airway obstruction or providing positive-pressure ventilation, as is observed in the operating room environment. In contrast, anesthesiology residents in their second year, with hundreds of hours of airway management experience, should be able to provide bag-mask ventilation with relative ease. Final simulator settings were selected such that medical students failed to successfully ventilate the simulated patient, whereas the second-year anesthesia residents succeeded without help.

Data were obtained from video recordings of each team’s performance and from the simulator output log describing the physiological state of the simulated patient. The video recordings captured a single wide-angle view of rescue performance by the care team and code responders. The video was recorded from the foot of the patient bed at a height of 8 feet, looking down slightly. Monitors, equipment, clinicians, interventions, and the simulated patient were all visible on the tape. Panning was used to cover the entire range of activities within the confines of the room in which care was being provided. Audio was from the single microphone integrated in the digital video camera.

The simulator output was used to perform quantitative analyses of the quality of control exercised by the teams over the patient’s physiology. Hypoxia and hypotension were defined as Spo2 <60% and systolic blood pressure (SBP) <50 mm Hg, respectively, because these variables would be associated with negative patient outcomes over time. Time out of range for each of these variables was calculated for the gold standard exercise and the two “in context” exercises within actual sedation care settings. This method of analysis has been previously described for comparison of simulator performance between novices and experts (12).
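As a concrete illustration of this calculation, the sketch below totals time out of range for the two thresholds from a log sampled every 5 s. The file format and column names (spo2, sbp) are assumptions made for illustration; the simulator's actual export format is not described here.

import csv

HYPOXIA_SPO2 = 60.0      # SpO2 threshold (%) used in the analysis
HYPOTENSION_SBP = 50.0   # systolic blood pressure threshold (mm Hg)
SAMPLE_INTERVAL_S = 5.0  # simulator log sampled every 5 s

def time_out_of_range(log_path):
    """Return (seconds hypoxic, seconds hypotensive) from a simulator log.

    Assumes a CSV file with 'spo2' and 'sbp' columns; the actual simulator
    output format may differ, so this is illustrative only.
    """
    hypoxic_s = hypotensive_s = 0.0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["spo2"]) < HYPOXIA_SPO2:
                hypoxic_s += SAMPLE_INTERVAL_S
            if float(row["sbp"]) < HYPOTENSION_SBP:
                hypotensive_s += SAMPLE_INTERVAL_S
    return hypoxic_s, hypotensive_s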

The video recordings were analyzed to produce process traces describing the behavior of each team during the case and to identify specific care management problems (13). These problems were further analyzed to identify contributing factors by using a published taxonomy (14,15). The goal of this analysis was to identify deviations in performance from the standard of care that was proven possible in the gold standard performance exercise. For each care management problem observed by experts, contributory factors were listed that represented the latent errors in the system of care (i.e., the accidents waiting to happen under the right triggering conditions). This methodology has been used by the Department of Anesthesia Quality Assurance Committee to review >200 critical incidents and/or close-call reports. All three expert reviewers (pediatric anesthesiologists practicing at the Children’s Hospital at Dartmouth) had experience performing critical incident review with the methodology of Vincent et al. (14). In addition, all three experts reviewed the gold standard case video record before reviewing the video records of care in the emergency medicine and radiology settings. Contrasting observed care against the gold standard was intended to reduce reviewer variability (16). Individual review data were compiled to fill a data table template (Table 1). The aggregate data were reviewed by the group to confirm face validity. If two of three reviewers believed that an identified performance deviation was trivial and unlikely to have the potential for patient harm, it was deleted. The experts developed a tool for itemizing the equipment and behaviors associated with gold standard practice and the time ranges that they agreed were reasonable for categorizing performance as good, intermediate, or poor (Table 2, gold standard column).



Results

The data files generated from each simulated exercise were imported into an Excel spreadsheet (Microsoft, Redmond, WA). The fields of RR, Spo2, heart rate (HR), and SBP were graphed as shown in Figures 1–3. The graphical representations were all annotated with 1) descriptions of the state of the simulated patient before the initiation of sedation; 2) activities to treat potential obstruction; 3) activities associated with the provision of positive-pressure ventilation; 4) calls for help/declaring an emergency; 5) lowest Spo2, HR, and SBP; and 6) the total time hypoxic and hypotensive.
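For illustration, the following sketch reproduces this plotting and annotation step in Python rather than Excel. It is a minimal example assuming a spreadsheet with columns time_s, rr, spo2, hr, and sbp; the annotation times shown are hypothetical placeholders, not study data.

import pandas as pd
import matplotlib.pyplot as plt

# Illustrative only: assumes the exported spreadsheet has a time column plus
# the four physiologic fields graphed in the study.
df = pd.read_excel("simulation_run.xlsx")        # columns: time_s, rr, spo2, hr, sbp

fig, ax = plt.subplots(figsize=(10, 5))
for field in ["rr", "spo2", "hr", "sbp"]:
    ax.plot(df["time_s"], df[field], label=field.upper())

# Annotate key events (times below are hypothetical examples, not study data).
for t, label in [(0, "sedation started"), (120, "obstruction treated"),
                 (150, "positive-pressure ventilation"), (180, "call for help")]:
    ax.axvline(t, linestyle="--", linewidth=0.8)
    ax.text(t, ax.get_ylim()[1], label, rotation=90, va="top", fontsize=8)

ax.set_xlabel("Time (s)")
ax.legend()
plt.tight_layout()
plt.show()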


A summary of the behavioral markers associated with gold standard performance, as defined by the three experts, and the deviations observed in the two simulation exercises is shown in Table 2. Tables 3 and 4 show the aggregate descriptions (based on the three individual expert reviews) of the care management problem categories, the care management problems themselves, and the contributory factors deemed to be associated with each problem.



Discussion

Identifying hazards and vulnerabilities in the performance of complex health care delivery processes is a methodologically challenging problem. Error-reporting systems are useful in focusing attention on some issues, but their level of resolution is relatively limited, and they are subject to various reporting biases (17). Direct observation of actual care settings by using video recording has proven fruitful in identifying system vulnerabilities for certain types of frequently occurring events (e.g., emergency airway management in the ED during trauma care) (18). However, techniques that rely on opportunistic observation of actual cases are obviously ill suited for the investigation of relatively rare events (e.g., respiratory arrests due to oversedation in pediatric patients). Although they are associated with morbidity and mortality, the infrequent nature of these events makes it impractical to wait for an accumulation of naturally occurring cases. For these types of events, a more deliberately provocative approach is required.

We sought to develop a reproducible method that could be used to assess the systems that provide sedation care in a typical hospital. We used an interactive patient simulator to model a classic oversedation response for a pediatric patient. Unplanned incidents have been observed previously in full-scope simulation performed in a simulation center (19). However, a limitation is that the clinicians may be using unfamiliar equipment in unfamiliar surroundings. In this study, we designed the scenario as a portable event that could be inserted into actual sedation care settings to observe how the teams in these settings responded to manage the event. This corresponds to what human factors researchers refer to as a “field experiment,” in which certain aspects of a naturally occurring work situation are deliberately manipulated by investigators to permit targeted observations of a specific type of problem-solving scenario (13). This class of investigations is intended to allow observation of behavior under conditions that are highly representative of actual work conditions2 (20). Specifically, we wanted to create conditions that would allow us to observe performance with the actual resources (i.e., the personnel, equipment, and procedures) that would be brought to bear on a real patient experiencing the same type of event. In this study, we extend the analysis of initial simulation exercises by using a gold standard comparison with both quantitative and qualitative methods (21).

Our primary conclusion is that this methodological approach proved feasible with available technology. We were able to program the PediaSim to exhibit respiratory depression followed by obstructive apnea and central apnea, with resultant hypoxia, bradycardia, and hypotension. The scenarios were reproduced by the simulator in a consistent fashion when calibration protocols were followed. The simulator could be moved significant distances within our facility and could be interfaced with the actual devices used by clinicians in the three settings in which it was evaluated. Physiologic data files and event logs were easy to synchronize to video by using the time the sedation provider administered the sedative medication as the scenario start time. A single wide-angle-view videotape (Sony Digital8 Handycam) proved adequate as a data source and allowed for post hoc analysis by multiple experts. The audio provided by the digital video camera was also adequate for analysis, but the amount of noise made it difficult to follow individual commentary (the code response to one of the exercises resulted in >15 providers in a small room).
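For illustration, the following minimal sketch shows how such an alignment can be done, treating the moment of sedative administration (identifiable on the video) as the shared reference point. The timestamps used are hypothetical examples, not study data.

from datetime import datetime, timedelta

# Illustrative only: maps simulator log timestamps onto the video timeline by
# treating the moment the sedative was administered (visible on video) as the
# shared reference point. All timestamps below are hypothetical examples.
sedation_given_video = timedelta(minutes=2, seconds=14)        # elapsed video time
sedation_given_log = datetime(2004, 6, 1, 10, 3, 22)           # simulator wall-clock time

def log_time_to_video(log_timestamp):
    """Map a simulator log timestamp onto the video's elapsed-time axis."""
    return sedation_given_video + (log_timestamp - sedation_given_log)

print(log_time_to_video(datetime(2004, 6, 1, 10, 7, 52)))      # 0:06:44 into the video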

Although the technical feasibility has been addressed, other aspects of feasibility remain. The simulator tested cost $120,000, and major modifications were needed to achieve basic realism (adult physiological behaviors were seen in the pediatric simulation). In addition, it took two hours to set up an exercise and two hours to run a simulation and break down the equipment. The video analysis described required approximately 100 man-hours for the first simulation and approximately 10 man-hours for subsequent simulations once the coding schemes were established. Despite the cost, these data have led clinical units to correct the vulnerabilities identified in their systems. Being able to see how an overdose-related respiratory depression might be managed has proved invaluable to those responsible for rescue system reliability. Although preliminary and limited, these data support a fundamentally new application of available simulation in the medical domain.

Besides feasibility, we sought to begin to validate the use of provocative testing in the sedation care domain as a means for objectively measuring safety. Specifically, we were interested in the measurement of rescue capability. This study demonstrates that the method being explored has great potential for achieving this aim. The data files generated during the simulation exercise represent the data available when a patient simulator is used as a “crash test dummy”: the simulator sensors measure treatment effects, and internal computer models replicate the patient behavior that would be expected in the absence of treatment. Thus, the simulator and a well crafted scenario can be used as a probe to identify system components that are associated with control and control failures. One measure of control failure is the time out of range, which was calculated for the basic patient state variables of oxygenation, ventilation, and circulation. The gold standard performance had no periods in which these critical variables were out of range. In contrast, the radiology and emergency medicine teams and their care settings failed to restore oxygenation, ventilation, and circulation for 4 minutes 30 seconds and 5 minutes 30 seconds, respectively. In addition, time-synchronized video allowed system activities to be correlated with the restoration of patient state or a failure to do so. For example, in both the ED and radiology simulation exercises, sedation providers failed to establish oxygenation or ventilation during initial airway management of the simulated apneic event. Restoration of oxygenation and ventilation was, however, tightly correlated in time with respiratory therapists assuming airway management: using an oral airway, applying an occlusive mask, closing the pop-off valve of the self-inflating positive-pressure ventilation device, and turning on the oxygen flows. These activities were not performed by the primary sedation team, even though its members were certified in basic life support and/or advanced cardiac life support. The simulated apneic event differentiated the management provided by nurses and physicians who do not routinely perform bag-mask ventilation from that provided by clinicians who deliver this care on a regular basis (flight nurses, respiratory therapists, or anesthesiologists). These findings strongly support the contention that available simulation technology is of adequate fidelity to differentiate novice from expert airway management and ventilation care.

The qualitative analysis performed, using the approach described by Vincent et al. (14), identified multiple care management problems and contributing factors (Tables 3 and 4) in each of the simulation exercises. The care management problems were associated with errors and contributory factors across the full spectrum of resources supporting sedation care, from the blunt end (system resources) to the sharp end (provider interface factors) of the system (22). To delineate these factors, we used multiple experts analyzing the same video recording of each simulation exercise. This represents a novel use of simulation in medicine.

Simulation has been used primarily as a tool for training. Training applications range from teaching basic skills to high-order crisis resource allocation and team collaboration (23). This study supports the finding that even in training exercises, latent system failures can be uncovered while managing a simulated event. DeAnda and Gaba (19) identified 132 unplanned incidents during 19 simulations using a modified critical incident methodology (range, 3–14; mean, 6.9 per simulation exercise). In addition, the classes of incidents were similar to those identified previously by Cooper et al. (24,25) in field studies.

In addition to its practical aims, this work seeks to exemplify the type of research that will increasingly be needed in efforts to improve patient safety. Emerging perspectives on health care quality and patient safety have stressed the need to adopt a systems-oriented view in identifying and responding to problems (1). The systems viewpoint acknowledges that health care delivery processes are fundamentally complex and involve numerous, often hidden, interactions and interdependencies among elements (26). In fact, it is often these unanticipated interactions that create paths for the expression of latent failures as adverse outcomes in the work setting (27). It is practically impossible to recreate the complexity of such systems in a highly controlled laboratory setting. Moreover, in the attempt to find leverage points for improving the performance of health care processes, we are often more interested in the discovery of relevant issues than in the verification of precisely defined hypotheses. What are needed, therefore, are principled methods for investigating the performance of complex systems in their natural context. The field experiment methodology used here represents one such approach that human factors researchers have used successfully in several domains. These researchers have shown that, to make research relevant to practical problems in complex work systems, naturalistic studies are a necessary complement to traditional, controlled experimental research (13,28–30).

Several logistical problems were faced in executing this complex work. Legal counsel was required to understand how to properly protect clinician confidentiality. This information provided a framework for gaining IRB approval. The institutional leadership needed to support the goals and the indirect costs. The chief medical director, director of graduate medical education, medical director of cardiopulmonary resuscitation, and director of nursing supported the work, and the indirect costs of simulated rescue responses, before the initiation of any of the exercises. These leaders championed the effort primarily as a training exercise, understood the research objectives, and emphasized the need for findings with strong face validity to be acted on. This led to the final major barrier: fear of what might be found. The emergent view of the researchers, educators, quality improvement leaders, and risk managers was that we should not be afraid to test our rescue systems because we might find problems. Ultimately, we instituted a mechanism to ensure that any credible threats to patient safety identified through the exercises would be acted on as quality improvement opportunities resulting in corrective action.

As with any simulation exercise, the major limitation of our investigation relates to the realism of the simulated event. The scenario we developed is based on expert opinion and experience in terms of how a 20-kg child would behave if given an overdose of several of the most commonly used sedative medications. In addition, for the mannequin to function as a “crash test dummy” that records the quality of the resuscitation performed, the simulated behavior needs to be validated both in the absence of treatment and in the presence of treatment. For this investigation, calibration of the simulator was performed in only a cursory fashion comparing medical students and anesthesiology residents in a qualitative manner. This type of validation must be formalized for future comparative studies of sedation system safety.

A second important limitation relates to the clinicians who participated. Clearly, the participants in simulation exercises may not have the same level of motivation or anxiety as clinicians responding to an actual pediatric respiratory arrest. Because the exercises were unannounced, activity before arrival at the bedside would not be influenced. The balance of motivation and anxiety in study participants will need to be quantified in the future. Future work in this area will need to test the correlation of latent system failures that are identified in simulation with those observed in actual codes.

In conclusion, this study demonstrates that using simulation as a safety probe is feasible and of enormous potential impact. Both qualitative and quantitative data were generated that provide insight regarding pediatric sedation system vulnerabilities and potential opportunities for creating enhanced safety. Future investigations using this methodology will benefit from a more sophisticated scenario and must be validated against empiric data.

Because some groups perform initial resuscitation well, we would like to be able to increase the difficulty of the scenario. We envision a 3-level scenario in which achieving the goals of Level 1 automatically moves the team to Level 2, and achieving the goals of Level 2 automatically triggers the highest difficulty, Level 3. This strategy will allow us to challenge providers ranging from novices to experts in airway management.
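A minimal sketch of how such level-escalation logic might be driven from the simulator's event log is shown below; the level goals and the trigger function are assumptions made for illustration, not an implemented feature of the scenario.

# Minimal, hypothetical sketch of the proposed 3-level escalating scenario.
# Level goals and the trigger logic are assumptions for illustration only.

LEVELS = {
    1: "airway obstruction relieved and bag-mask ventilation established",
    2: "ventilation maintained despite a more difficult airway",
    3: "full resuscitation of a prolonged arrest",
}

def run_escalating_scenario(goal_achieved):
    """Advance through Levels 1-3 as long as each level's goals are met.

    In practice, goal_achieved(level) would be driven by the simulator's
    event log; here it is any callable returning True or False.
    """
    level = 1
    while level <= 3:
        print(f"Level {level}: {LEVELS[level]}")
        if not goal_achieved(level):
            return level          # the team's performance ceiling
        level += 1
    return 3                      # all levels completed

# Example: a team that manages Levels 1 and 2 but not Level 3.
ceiling = run_escalating_scenario(lambda lvl: lvl < 3)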

Additionally, video coding schemes need to be developed that delineate ideal performance and allow for the easy identification of performance deviations. Although these investigations are being pursued to advance the efficacy and safety of pediatric procedural sedation, our ultimate goal is to provide a fundamental method for assessing safety in domains in which events are rare but inevitable. Only by designing our health care systems to use not only preventive strategies, but also strategies that capture error and support recovery, will safety be maximized.


References

1. Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington, DC: National Academy Press, 2000.
2. Holloway VJ, Husain HM, Saetta JP, Gautam V. Accident and emergency department led implementation of ketamine sedation in pediatric practice and parental response. J Emerg Med 2000;17:25–8.
3. Petrack EM, Christopher NC, Kriwinsky J. Pain management in the emergency department: patterns of analgesic utilization. Pediatrics 1997;99:711–4.
4. Lowrie L, Weiss AH, Lacombe C. The pediatric sedation unit: a mechanism for pediatric sedation. Pediatrics 1998;102:E30.
5. Havel CJ Jr, Strait RT, Hennes H. A clinical trial of propofol vs midazolam for procedural sedation in a pediatric emergency department. Acad Emerg Med 1999;6:989–97.
6. McCarty EC, Mencio GA, Walker LA, Green NE. Ketamine sedation for the reduction of children’s fractures in the emergency department. J Bone Joint Surg Am 2000;82:912–8.
7. Egelhoff JC, Ball WS, Koch BL, et al. Safety and efficacy of sedation in children using a structured sedation program. AJR Am J Roentgenol 1997;168:1259–62.
8. Heuss LT, Schnieper P, Drewe J, et al. Risk stratification and safe administration of propofol by registered nurses supervised by the gastroenterologist: a prospective observational study of more than 2000 cases. Gastrointest Endosc 2003;57:664–71.
9. Department of Health and Human Services, Office of Epidemiology and Biostatistics, Center for Drug Evaluation and Research, Data Retrieval Unit HFD-737, June 27, 1989.
10. Bailey PL, Pace NL, Ashburn MA, et al. Frequent hypoxemia and apnea after sedation with midazolam and fentanyl. Anesthesiology 1990;73:826–30.
11. Cote CJ, Notterman DA, Karl HW, et al. Adverse sedation events in pediatrics: a critical incident analysis of contributing factors. Pediatrics 2000;105(4 Pt 1):805–14.
12. King PH, Pierce D, Higgins M, et al. A proposed method for the measurement of anesthetist care variability. J Clin Monit Comput 2000;16:121–5.
13. Woods DD. Process-tracing methods for the study of cognition outside of the experimental psychology laboratory. In: Klein G, Orasanu J, Calderwood R, Zsambok C, eds. Decision making in action: models and methods. Norwood, NJ: Ablex, 1993:228–54.
14. Vincent C, Taylor-Adams S, Chapman EJ, et al. How to investigate and analyze clinical incidents: clinical risk unit and association of litigation and risk management protocol. BMJ 2000;320:777–81.
15. Vincent C. Understanding and responding to adverse events. N Engl J Med 2003;348:1051–6.
16. Roth EM, Bennett K, Woods DD. Human interaction with an ‘intelligent’ machine. Int J Man-Machine Stud 1987;27:479–525.
17. Cook RI, Woods DD, Miller C. A tale of two stories: contrasting views of patient safety. Chicago, IL: National Patient Safety Foundation, 1998.
18. Mackenzie CF, Jeffries NJ, Hunter A, et al. Comparison of self reporting deficiencies in airway management with video analyses of actual performance. Hum Factors 1996;38:623–35.
19. DeAnda A, Gaba D. Unplanned incidents during comprehensive anesthesia simulation. Anesth Analg 1990;71:77–82.
20. Brunswik E. Perception and the representative design of psychological experiments. 2nd ed. Berkeley: University of California Press, 1956.
21. Blike GT, Cravero JP, Nelson E. Same patients, same critical events: different systems of care, different outcomes—description of a human factors approach aimed at improving the efficacy and safety of sedation/analgesia care. Qual Managed Health Care 2001;10:17–36.
22. Cook R, Woods D. Operating at the sharp end: the complexity of human error. In: Bogner MS, ed. Human error in medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, 1994:255–310.
23. Gaba D, Fish K, Howard S. Crisis management in anesthesiology. Philadelphia: Churchill Livingstone, 1994.
24. Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anesthetic mishaps: a study of human factors. Anesthesiology 1978;49:399–406.
25. Cooper JB, Newbower RS, Kitz RJ. An analysis of major errors and equipment failures in anesthesia management: considerations for prevention and detection. Anesthesiology 1984;60:34–42.
26. Perrow C. Normal accidents: living with high-risk technologies. New York: Basic Books, 1984.
27. Reason J. Human error. New York: Cambridge University Press, 1990.
28. Hoffman R, Woods D. Studying cognitive systems in context: preface to the special issue. Hum Factors 2000;42:1–7.
29. Vicente KJ. Heeding the legacy of Meister, Brunswik and Gibson: toward a broader view of human factors research. Hum Factors 1997;39:323–8.
30. Vicente KJ. Toward Jeffersonian research programmes in ergonomics science. Theor Issues Ergon Sci 2000;1:93–112.

1 This is the standard of care provided by anesthesiologists working in a pediatric procedural sedation unit at our children’s hospital that has been in place for >2 yr with approximately 1300 sedations per year.

2 We prefer this more precise term to the commonly used concept of ecological validity (Brunswik E. Perception and the representative design of psychological experiments. 2nd ed. Berkeley, CA: University of California Press, 1956).

© 2005 International Anesthesia Research Society