
Concepts and Commentary

The Pharmacology of Simulation: A Conceptual Framework to Inform Progress in Simulation Research

Weinger, Matthew B. MD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: February 2010 - Volume 5 - Issue 1 - p 8-15
doi: 10.1097/SIH.0b013e3181c91d4a


The widespread adoption of simulation-based training by medical and nursing schools and hospitals,1 the rapid growth of the Society for Simulation in Healthcare, the establishment of this journal, and federal funding for simulation research (see, for example, the Agency for Healthcare Research & Quality (AHRQ) request for applications for simulation-based research grants; accessed March 15, 2008), among other developments, all point to the increasing role of simulation in healthcare. This represents a long anticipated and still evolving change in how healthcare practitioners, educators, and researchers conduct their everyday business.

However, the transition from learning on patients to learning before caring for patients is just beginning, and there is still much to learn and to do. Moreover, simulation can support our growing understanding of the role of life-long learning and competency-based assessment in healthcare.2–5 At present, the evidence base for the most appropriate use of simulation under different circumstances is growing but still sparse.6,7 We need to establish when to use simulation, what kind(s) of simulation to use, how to do it, and how to evaluate its impact. Moving forward, we must take a rigorous, systematic approach to generate the evidence of how simulation can have the greatest possible impact. In this article, I present a conceptual framework based on the discipline of pharmacology that I believe can help to guide future progress.

At the highest level of such a paradigm, simulation would be viewed as a therapeutic or a diagnostic intervention. In the case of simulation as a therapy, the intervention could be training to effect changes in human knowledge, skills, attitudes, or behaviors (KSAB). Simulation could be used as a diagnostic intervention for evaluation, certification or credentialing, and research (ie, testing of a hypothesis). In the remainder of this article, I use simulation-based training as the foundation of my remarks, although the conceptual framework should apply equally to diagnostic uses of simulation.

In this discussion, healthcare simulation is defined to include all experiential learning techniques that incorporate modeling or emulation of actual patient care. Thus, the principles espoused below apply equally well to partial task trainers, standardized humans, and manikin-based simulation, or any combination of methods (eg, hybrid or multimodal simulations).

One advantage of reframing simulation interventions and their effects on learning or behavior into pharmacologic terms is that there are established rigorous methods for analysis of the resulting data, for example, for the evaluation of shifts in dose-effect curves.8,9 In the interests of focus and brevity, this article does not delve into the statistical methods that could be used to measure the effects of a simulation intervention if it were designed consistently with the pharmacologic principles described below. Rather, the purpose of this initial article is to provide readers unfamiliar with pharmacology with the foundational knowledge and an appreciation of the potential value of such an approach.


For purposes of illustration, let us imagine that a simulation is a “drug” intended to treat a condition—in this case, for example, a defined absence of necessary KSAB in an individual or a group of individuals deemed essential to high-quality patient care. How then should we dose the simulation (drug) to attain the desired effect (eg, improvement in the KSAB deficit)? (Of course, a KSAB deficit is not really a “disease” condition to be treated with an educational intervention “drug” but is described as such here for purposes of the analogy; this issue is discussed more fully in the Limitations section below.) Fortunately, the science of pharmacology, based on hundreds of years of practical experience, provides us with guidance. For simplicity, in what follows, I use the abbreviation SIM to represent a use of simulation as if it were a drug.

The pathway from the discovery of a putative drug to its successful use in everyday practice is an arduous process that typically takes at least 10 and often more than 20 years. Similarly, we cannot expect the relatively new field of simulation-based training to move immediately to widespread effectiveness and positive impact before going through several stages from preliminary evidence based on research (demonstrating promise), to initial successful implementation in one or more individual training sites (demonstrating efficacy), to ultimately wider acceptance as the feasibility and effectiveness is demonstrated at multiple sites. On attaining this last state, the simulation intervention could be accepted as an “evidence-based practice.”

What is required for a Food and Drug Administration-approved drug, known to be efficacious in randomized controlled trials, to be used safely and effectively in actual clinical practice? Table 1 suggests that there are “10 rights” of medication administration—these can be similarly applied to a simulation intervention. The first step is to define the objective (or desired outcome) of the simulation intervention. Then, for the simulation to have its desired effect, we must design the intervention to appropriately address questions of who, what, where, when, why, and how. For each simulation intervention, these critical questions can only be answered by rigorous research and deep experience. Just as in medicinal pharmacology, the context of use is a major factor in effectiveness. To illustrate these points, the elements of a specific simulation-based training intervention, to improve the quality of interprofessional handoffs, are shown in the last column of Table 1. Moreover, this example is used in Table 2 to outline some of the critical attributes of the design of a prototype simulation intervention study.

Table 1:
The Ten Rights of Simulation (vs. Medication Administration As an Analogy)
Table 2:
Attributes of a Prototype Simulation Study Designed Based on the Proposed Conceptual Framework


In our conceptual framework, SIM pharmacokinetics (PK)11,12 refers to the factors that affect how much of an impact the simulation actually has on the trainee's assimilation (learning) of the desired knowledge, skills, attitudes, and behaviors (KSAB).10 In other words, SIM PK describes how the SIM affects the trainee (ie, changes in their KSAB) as demonstrated, for example, by immediate postsimulation evaluation.

A more important question to society is how much of what is learned in the artificial environment of simulation training transfers to actual patient care. This is described by SIM pharmacodynamics (PD)12—what (of the KSAB learned during SIM) the learner does when confronted with real patient care. Thus, SIM PD refers to the factors that affect the degree to which what is learned during simulation is actually manifest when the trainee takes care of real patients. If the SIM is delivered to a sufficient number of clinicians, and the desired KSAB are manifest in everyday practice, then there should be a sustained improvement in patient care (ie, patient care quality will be improved). The parallels between pharmacology and simulation are depicted in Figure 1.

Figure 1.:
Simulation pharmacokinetics (PK) and pharmacodynamics (PD). The relationship between the traditional PK/PD and the simulation (SIM) analogy. SIM PK reflects the impact the simulation has on the trainee's learning of the desired knowledge, skills, attitudes, and behaviors (KSAB). SIM PD reflects how much of what is learned during simulation is actually manifest during patient care.

Thus, Miller's triangle13 provides a framework for the distinction between simulation PK and PD (Fig. 2). SIM PK is the relationship between simulation-based training (the drug dose) and the recipient's basic knowledge, skills, and attitudes (the “blood level” of learning), the bottom three levels of Miller's triangle. Rethans et al14 argue that this level of performance is a description of competence (ie, “what is demonstrated in controlled representations of professional practice”). However, clinical expertise is a function of real-world performance, as opposed to performance on written examinations or even simulation-based evaluations.15

Figure 2.:
Simulation PK and PD in the context of Miller's triangle. SIM PK represents the bottom three segments, whereas SIM PD is the top segment (peak) of triangle (an accepted general model of learning and competence). Note that the three PK components (bottom half of figure) must be actively engaged to accomplish the desired behavior successfully in the real clinical environment (PD). The circle around the “Does” (or PD) component reflects the importance of contextual factors in the expression of what is learned within real-world clinical practice.

SIM PD is the relationship between the trainees' capacity to do what is expected of them and their actual performance in the real world; this is the top (“does”) level of Miller's triangle. Actual performance in the real-world environment will be context dependent, being influenced by both internal (individual) and external (environmental) factors.14 Translating the traditional model of PK and PD to our simulation pharmacology analogy, PK is what simulation training does to the trainee, whereas PD is what the trainee does to the patient.

Of course, the best outcome is for a simulation intervention to improve patients' health by decreasing morbidity and mortality. This is the “gold standard” for outcome but is the most difficult to demonstrate after an educational intervention. However, increasingly, studies have been able to show, after simulation-based training, greater compliance with evidence-based practices,16 changes in clinician behavior,17 and even real-world decreases in adverse events.18

The proposed model allows us to systematically think about and to examine the many factors that can affect SIM PK and SIM PD. The SIM PK will be affected, for example, by the receptivity of the trainee toward the material presented.19 Factors affecting SIM PK may include personal (cultural background, prior training and experiences, motivation and incentives, biases, mood, extant attitudes, etc.), interpersonal (teamwork and communication skills, instructor-trainee interaction and prior history), task, environmental (the simulation center ambiance, in vivo versus in vitro simulation, interruptions, distractions, etc.20), and organizational (hospital safety culture, institutional priorities, incentives/disincentives, production pressure, etc.) variables.

The SIM PD is likely even more complex. How well do behaviors learned in the simulation center get activated during actual patient care? Although there is still insufficient evidence about the factors that affect transfer of training, mastery of desired skills and the effective use of deliberate practice, for example, can enhance the transfer of simulation-based learning to the clinical environment.21,22 Attaining the highest levels of real-world performance requires self-awareness (or reflection) and deliberate directed practice.21,23,24

Another factor that has been studied is the role of simulation fidelity. High fidelity in simulation may not be critical,25 particularly for skill-based training.26 Other important SIM PD factors likely include a variety of individual, situational (contextual), and organizational factors. There may even be factors that lead to negative transfer of training (ie, behaviors learned during simulation that are inappropriate or maladaptive in the real-work environment).


A given SIM dose will produce some peak effect. For example, after a simulation experience, an individual learner may be more likely to exhibit a desired behavior on post-SIM evaluations or during observations of clinical practice. These dose-time relationships are shown in Figure 3. Curve A shows the effects of a simulation intervention, call it SIM1 (delivered at time zero; arrow), that has a modest effect that is relatively short lived (3 months in this example). Curve B shows a more potent simulation-based intervention, SIM2 (also delivered at time zero), which produces a greater initial effect that is sustained for a longer time than is SIM1. However, eventually (6 months), the effects of SIM2 on trainee behavior decrease below an acceptable level (dashed horizontal line). “Redosing” the SIM2 training (curve C) at 6 months (second arrow) results in a higher peak effect that then provides a sustained level of performance that could last much longer than the initial SIM2 exposure (12-month duration depicted).

Figure 3.:
Dose-time relationship for simulation training. The relationship between the type and dose of simulation over time and its effects on trainees' KSAB (y-axis) as manifest in their clinical practice over time (x-axis). The three curves show the effects of different simulation interventions (SIM1 and SIM2) delivered at different times (see text for details).

The most important point here is that the effects of a single simulation encounter will diminish over some period of time. There is a voluminous literature showing that learning (whether by a rat in an operant chamber or a student cramming for an examination) is greatest immediately after exposure and then extinguishes over time. Some experiences can be so powerful that their effects never extinguish; this is a rare event and is much less likely to occur in the simulation center than with salient events in the clinical environment (eg, an error leading immediately to an actual patient's death).

More effective (potent) SIM interventions can be expected to produce a greater effect that may be longer lasting (ie, extinguish more slowly). A more potent intervention, for example, may involve more hours of practice in the simulated environment.27 However, as in a drug trial, the efficacy of a SIM can only be properly assessed with a well-controlled study incorporating appropriate experimental design attributes.

Following with our pharmacology-based conceptual model, it is predicted that a second SIM intervention (ie, redosing; SIM2 in Fig. 3) administered to the same subject(s) before the complete extinguishing of the effects of the initial simulation exposure will lead to a higher peak effect and a longer duration (ie, elimination half-life) until the resulting effect diminishes to a level below that considered minimally acceptable by the trainee, trainer, or other stakeholder. In fact, the current literature is confounded by different “potencies” of simulation intervention, and this may partially explain why some single interventions seem to be effective,28,29 whereas others are not.30,31 However, note that simulation intervention effectiveness (or potency) is influenced by many other curricular and evaluation measurement factors. For example, in the study by Knudson et al,29 simulation-trained surgical residents performed better in team-based crisis management skills but not in trauma management knowledge or basic treatment skills, when compared with a didactic curriculum.
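The redosing behavior just described can be made concrete with a toy model. The sketch below is purely illustrative: neither the first-order (exponential) decay form nor any of the parameter values come from the simulation literature; they are assumptions chosen to mirror curves B and C of Figure 3.

```python
import math

def sim_effect(peak, half_life_months, months):
    """Effect remaining `months` after a single SIM dose, assuming
    first-order (exponential) decay of learning (an illustrative
    assumption, not an established property of simulation training)."""
    k = math.log(2) / half_life_months
    return peak * math.exp(-k * months)

def redose(residual_effect, boost, ceiling=100.0):
    """Redosing before complete extinction adds to the residual effect,
    capped here at a notional 100% ceiling."""
    return min(residual_effect + boost, ceiling)

# Hypothetical SIM2: peak effect of 80 (arbitrary units), 3-month half-life
residual_at_6mo = sim_effect(80, 3, 6)      # 20.0 -> a quarter of the peak remains
after_redose = redose(residual_at_6mo, 80)  # 100.0 -> higher peak than the first dose alone
```

Under these assumed parameters, a second dose at 6 months lands on a 20-point residual and therefore yields a higher combined peak than the initial exposure, which then takes longer to decay below the minimally acceptable level, as in curve C of Figure 3.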

The above discussion is most applicable to novice learners. Certainly, an experienced surgeon does not require frequent or continuous “infusions” of training in those skills that they perform regularly. At a mastery level, an annual or semiannual “validation” may be all that is required, similar to experienced airline pilots. In such cases, the more appropriate analogy might be that they have been “immunized” and now require periodic “titer” checks and possible “booster” vaccinations. However, as noted in a previous editorial15 and supported by the literature, experience does not equal expertise, and KSAB that are not regularly used will decay over time.23,24,32


Recasting the evaluation of a simulation intervention into a dose-time construct allows consideration of the critical question of the frequency of training (ie, dosing interval) to maintain the desired level of competence. The optimal simulation interventions would start with a robust loading (or bolus) dose to rapidly attain a large behavioral effect (Fig. 4). However, because of the expected decay of learning, redosing will be required. Although many required curricula (eg, Advanced Cardiac Life Support (ACLS) and Pediatric Advanced Life Support (PALS)) are repeated at annual or biennial intervals, there is evidence that in the interim performance decays significantly below minimum acceptable criteria.33–35 In the optimal situation, similar to the pharmacology of intravenous drug infusions, the desired effect would best be maintained by a continually repeated intervention (equivalent to a continuous infusion). This might be accomplished by very frequent (eg, weekly) exposures to simulation learning within the clinical environments (eg, in situ mock codes). More research is necessary to ascertain the optimal dose, timing, and frequency (redosing) of simulation-based training for technical and behavioral skills.

Figure 4.:
Bolus Dose and Continuous Infusions. Similar to the pharmacology of intravenous drug infusions, optimal simulation interventions might start with a loading or bolus dose to rapidly attain a peak effect. Because of the decay of learning, to then maintain the desired effect, a second very frequently repeated intervention (equivalent to a continuous infusion) would be needed.
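The bolus-plus-infusion analogy can likewise be sketched numerically. The model below is hypothetical throughout: it assumes first-order weekly decay and treats frequent small exposures (eg, monthly in situ mock codes) as maintenance doses; all parameter values are invented for illustration.

```python
import math

def effect_course(loading, maintenance, interval_wk, half_life_wk, horizon_wk):
    """Week-by-week effect for a loading (bolus) dose at week 0 followed by
    small repeated doses every `interval_wk` weeks -- the 'continuous
    infusion' analogy. First-order decay is an illustrative assumption."""
    k = math.log(2) / half_life_wk
    effect, course = 0.0, []
    for week in range(horizon_wk):
        if week == 0:
            effect += loading
        elif week % interval_wk == 0:
            effect += maintenance       # maintenance 'micro-dose'
        course.append(effect)           # record effect just after any dosing
        effect *= math.exp(-k)          # decay over the following week
    return course

course = effect_course(loading=80, maintenance=15, interval_wk=4,
                       half_life_wk=12, horizon_wk=26)
```

With an 80-point loading dose, a 12-week half-life, and a 15-point maintenance dose every 4 weeks, the simulated effect never falls below roughly three-quarters of the initial peak over the 26-week horizon, whereas the loading dose alone would have decayed to about 18 points by week 26.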


The effects of a simulation intervention can be expressed as a dose-effect relationship (Fig. 5). For each simulation intervention, a range of “doses” (perhaps varying by the intensity and duration of the training intervention) can be delivered and their effects measured. Statistical methods permit quantitative comparison of dose-effect curves of different simulation interventions.8,9 For SIM1 (curve A) from Figure 3, a substantial dose is required to produce an acceptable performance effect (eg, perhaps 80% of the maximum) as shown for curve A of Figure 5. Moreover, no matter how much SIM1 is provided, a 100% effect cannot be attained (in pharmacology, this is called a partial agonist). In contrast, SIM2 (curve B) is a more potent intervention, so that not only is one able to attain a greater peak effect (100% maximum possible effect) but also a lower dose is required (ie, the dose required to attain a 50% effect or ED50 is lower; curve B is to the left of curve A).36 If SIM1 is administered under less optimal circumstances (eg, less receptive trainees, suboptimal instruction or simulation environment, or impediments to transfer of training such as poor institutional safety culture), then the curve will shift from curve A to curve C. The maximum effect of SIM1 is diminished further, and a higher dose of SIM1 is required to attain the equivalent effect (ie, the ED50 is shifted to the right and downward as shown in the figure by the X arrow). One might think of whatever factor is causing SIM1 to manifest as curve C (instead of curve A) as an antagonist of the simulation intervention. Finally, it is possible to make a simulation intervention more potent, for example, SIM2 could produce curve D instead of curve B, a parallel shift to the left (as shown by arrow Y in the figure). 
Factors that could make an intervention more potent might include a previous, relevant learning experience, or a preexisting culture or system of care that facilitates or reinforces the simulation curriculum. Future research must evaluate the relative contributions to overall learning (and resulting patient care outcomes) of different simulation modalities and methods and their interactions.

Figure 5.:
Dose-effect relationships for simulation training. The magnitude of effect of several possible simulation interventions. For each intervention (A–D), a range of “doses” is depicted (see text for details). Changes in drug attributes or the pharmacologic milieu can shift the dose-effect curves right (X, less potent) or left (Y, more potent).
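One plausible way to parameterize such curves is the classic Emax (Hill) model from pharmacology. The sketch below is illustrative only; the emax, ED50, and dose values are invented to mimic curves A and B of Figure 5, not derived from any simulation study.

```python
def emax_effect(dose, emax, ed50, hill=1.0):
    """Classic Emax (Hill) dose-effect model, borrowed from pharmacology.
    emax < 100 corresponds to a 'partial agonist' (curve A); a lower ed50
    corresponds to a more potent intervention (curve B). A rightward shift
    (higher ed50, arrow X) models an 'antagonized' intervention; a leftward
    shift (lower ed50, arrow Y) models a potentiated one."""
    return emax * dose**hill / (ed50**hill + dose**hill)

# SIM1 as a partial agonist: tops out at 80% effect, ED50 of 8 'hours' of training
sim1_at_8h = emax_effect(8, emax=80, ed50=8)    # 40.0 -> half of its 80% maximum
# SIM2: full (100%) effect attainable and more potent (lower ED50)
sim2_at_8h = emax_effect(8, emax=100, ed50=4)
```

Because every dose-pair of parameters yields a measurable effect, curves A through D can all be generated by varying only `emax` and `ed50`, which is what makes the shifted-curve comparisons of Figure 5 statistically tractable.8,9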


It is inevitable in the coming years that many healthcare providers will be exposed to multiple simulation experiences, sometimes over relatively short time periods. What will be the net effects? Although it is natural to assume that “more is better,” there may be diminishing returns or alternatively, with repeated exposure to simulation, less total contact time with simulation may be required to achieve the desired educational objectives (ie, synergistic effects).

In modern healthcare, there are many kinds of intentional (and unintentional) interventions that can affect care quality. These interactions are increasingly going to be critically important to patient outcomes. To illustrate, let us assume we want to enhance patient safety by getting nurses to speak up when they see an unsafe act or situation during actual patient care. We then design a training course (perhaps using standardized physicians and manikin patients) where we put individual nurses in situations where they need to speak up or the (simulated) patient could be harmed. During debriefing of these encounters, the instructor emphasizes the importance of speaking up, works with the nurses on specific ways to do so, and then even provides a “speak up” template or protocol to support the intervention. At the end of the course, the nurse trainees complete a written examination delineating why speaking up is important. Then, a week later, they each perform a posttraining assessment to demonstrate that they indeed do speak up appropriately in a simulated clinical situation. Thus, we have measured the PK of the intervention and established that these nurses now know what to do and have shown that they can do it.

What is going to happen when these nurses go back to their everyday work? Will they manifest this newly learned behavior? What factors will affect their willingness and ability to do so? How effective will this intervention be if none of the physicians were even aware of it and/or respond in a negative and derisive manner to nurses' attempts to speak up? On the other hand, what if all the ward physicians were required to go through similar training? From a pharmacologist's point of view, the first case would represent a situation where our intervention (the “speak up drug”) either has no effect at all or has a small effect that is rapidly extinguished. One might also speculate that the nurse trainee will be quite resistant (or tolerant) to subsequent doses (ie, additional “speak up” training sessions). In the second case, one would expect appreciable synergy between the two training interventions (MD and RN) with greater net effect than if either one was done alone (note that many factors would influence this interaction including dose, timing, concurrent training, etc.).

Learning does not occur in a vacuum. Trainees bring their prior experience to the simulation center, and what they learn is then modified by subsequent experiences. As alluded to above, simulation trainees return to the world of live patient care where what they have learned can be practiced, reinforced, or even quickly extinguished. If enough clinicians are trained to do something (eg, “speak up”) in a particular way and the clinical care environment facilitates and supports this behavior, the intervention will be more likely to be successful (ie, positive transfer of training). Moreover, reinforcement of learning (and even accelerated in situ learning by those who are less well trained) can occur. For example, in the postanesthesia care unit handover training intervention described earlier (Tables 1 and 2), we found that over time, cultural changes and informal in situ guidance in the postanesthesia care unit led to improved handover performance by providers who had not yet received the formal simulation training.


What happens when trainees receive two simulation interventions contemporaneously—how will these two learning experiences interact and what will be the net effect on actual clinical behavior? Let us, for example, consider two distinct simulation-based training courses intended for anesthesia residents. One course, developed and delivered by the anesthesiology department, is focused on improving crisis management and teamwork skills, similar to Anesthesia Crisis Resource Management (ACRM).37,38 We will call this annual 8-hour experience SIM1. The other course, a one-time 2-hour simulation-based experience (call it SIM2) developed and delivered by hospital educators, is intended to improve interdisciplinary handoffs and transitions of care.39 Let us further assume that each intervention alone has a previously known effect on the target audience (or receptor in drug parlance), represented in Figure 6 by point A for SIM1 and point B for SIM2.

Figure 6.:
SIM-SIM interactions. The interactive effects of two simulation interventions displayed as an isobologram (see text for details). As described by Tallarida, an isobologram for some particular effect (eg, 100% of the maximum) is a straight line between the dose of SIM1 alone (point A on the y axis) and SIM2 alone (point B on the x axis). The isobolographic line connecting these intercept points (additivity line) is the locus of all dose pairs that, based on these potencies, should give the same net effect. An actual dose pair, such as point C, attains this effect with lesser quantities and is superadditive (or synergistic), whereas in the dose pair denoted by point D, greater quantities are required and is therefore subadditive. A suitable statistical analysis is required to demonstrate the nature of the interaction.

One might predict that a trainee who received both courses would manifest better communication and teamwork skills during subsequent patient care than if the same resident had been exposed to either one of these courses alone, but how much better? The line between the two points A and B in Figure 6 is called an isobologram40,41 and represents all combinations of lower doses of the two interventions together that would be predicted to produce the same net effect as the higher doses (A and B) of either one alone. Although this has been shown to be true of two drugs acting competitively at the same receptor,42 I could not find any simulation literature that evaluated such an educational interaction. If the combination of the two interventions led to a greater effect than that predicted by the isobologram (or, as shown by point C, a smaller dose of each was required to produce an equivalent net effect), then the interventions would be said to be synergistic (or supraadditive). In contrast, what if the philosophy and intent of the two courses were diametrically opposed? Perhaps the handoff course focused on a rigid standardized approach (“you must always do it this exact way”) with no discussion of adaptive or resilient teamwork or communication skills. In this case, it is possible that the two courses could conflict with each other (whether due to mixed messages, confusion, or other factors), such that the combined effectiveness may be less than that predicted from either one alone (ie, a subadditive or antagonistic interaction; point D in Fig. 6). In other words, when combined, the two SIM interventions would need larger relative doses to produce the same net effect as either one alone. The nature of these interactions can be analyzed statistically.42,43 Studies to evaluate these kinds of interactions will need to be designed and undertaken if we are to understand, and benefit maximally from, comprehensive experiential learning-oriented curricula.
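The isobolographic analysis described by Tallarida reduces to a simple index for any dose pair. In the sketch below, all doses are hypothetical; the single-intervention doses stand in for points A and B of Figure 6.

```python
def interaction_index(d1, d2, ed_sim1, ed_sim2):
    """Tallarida-style interaction index for a dose pair (d1, d2), given the
    doses of SIM1 alone (ed_sim1) and SIM2 alone (ed_sim2) that each produce
    the target effect. An index of 1 lies on the additivity line (isobole);
    < 1 is supraadditive (synergistic); > 1 is subadditive (antagonistic)."""
    return d1 / ed_sim1 + d2 / ed_sim2

# Assume SIM1 alone needs 8 h and SIM2 alone 2 h for the target effect (A and B)
synergy = interaction_index(2, 0.5, 8, 2)      # 0.5 -> below the line (point C)
antagonism = interaction_index(6, 1.5, 8, 2)   # 1.5 -> above the line (point D)
```

Here the pair (2 h of SIM1, 0.5 h of SIM2) attains the target effect with half the additive dose requirement, whereas the pair (6 h, 1.5 h) needs half again more; demonstrating either pattern empirically would, of course, require the formal statistical analyses cited above.42,43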


Clinical pharmacology primarily deals with appropriate drug choice (ie, what simulation intervention?) given the desired clinical situation (training effect) and the relationship between that drug's dosing regimen (ie, how much and how often?) and the resulting effect(s) and side effects. We have not yet discussed how simulation actually produces its effects on learners. This is analogous to the mechanisms (or physiology) of a drug's action. Pharmacologists study pharmacogenetics, second messenger systems, receptor mechanisms, etc. For healthcare simulation, we need to know the mechanisms by which a SIM intervention does (or does not) affect a learner's KSAB. This question can be addressed through an understanding of learning theory, cognitive science, educational psychology, social psychology, etc. There is still much to learn about the “basic science” of simulation learning. For more information on these topics, the reader is referred to many of the articles cited in this article and general texts.44–46


The conceptual model presented here has many limitations. Simulation-based interventions are not “drugs” and, despite what some might believe, a KSAB deficiency is not a disease. The analogy is not perfect nor is it intended to be. Rather, what is important is the extent to which it is useful in guiding thinking and planning for simulation research.

Nevertheless, the differences between integrated experiential interventions that affect human behavior and drug therapies that affect physiology may be less than many believe. Biology and behavior are inextricably linked, and many of the perceived differences may be due to our inability to understand, operationalize, and measure relevant dependent and independent variables. Just as in clinical medicine, where blood pressure serves as a surrogate for end-organ perfusion, in the social sciences we use words/utterances and observable behavior as surrogates for underlying human cognitive and perceptual processes.

Using this model, to do SIM PK experiments, one must have a measure of the “drug” level in the learner, but this may be difficult to obtain. We do not currently have good assays. However, the fact that we do not yet know how to effectively and efficiently assay simulation's “drug” levels or the resulting effects should be considered a call to action to the simulation research community. That said, we do not have to wait for perfect assays to pursue evaluation of dose-time and dose-effect relationships.

In the interest of clarity and brevity, I do not address the application of this conceptual model to simulation for “diagnostic” purposes; that is, for evaluation, certification or credentialing, and research (ie, testing of a hypothesis). This could be considered analogous to a diagnostic drug intervention (eg, an edrophonium test for a myasthenic patient or a cortisol test for adrenal dysfunction), in which case the concepts of dose-time and dose-effect may still be applicable. However, this topic will need to be explored further.


This article presents a framework for the design and evaluation of simulation research to assist our specialty in creating the empirical evidence on which to base decisions about curriculum design and implementation. With this conceptual model, simulation dose, route, timing, and effect can be formally analyzed and presented. The notion of simulation PK (the effects of an intervention on KSAB in the simulated environment) and PD (effects on subsequent behavior during actual patient care) can help to guide the design of simulation research. Similarly, the use of simulation dose-time, dose-effect, and drug-drug interaction relationships provide structure to the evaluation of multimodal interventions and the need to consider the entire educational milieu.

Rigorous simulation research is indeed difficult, not to mention time-consuming and expensive, but it is possible. Given the increasing resource constraints of modern healthcare, we will invariably see more demands to “prove” that (especially high-fidelity) simulation actually “makes a difference.” Thus, we must develop a rigorous epistemology of simulation and base both our work and our recommendations on this foundation. It is hoped that this article stimulates readers to ponder and then to pursue this course of action.


The author thanks Emil Petrusa and this journal's peer reviewers, whose suggestions and comments substantially improved the manuscript, and gratefully acknowledges the editorial assistance of Ray Booker. This article is dedicated to Dr. George F. Koob for his contributions to my scientific education.


1. Issenberg SB, Scalese RJ. Simulation in healthcare education. Perspect Biol Med 2008;51:31–46.
2. Bhatti NI, Cummings CW. Competency in surgical resident training: defining and raising the bar. Acad Med 2007;82:569–573.
3. Michelson JD, Manning L. Competency assessment in simulation-based procedural education. Am J Surg 2008;196:609–615.
4. Scalese RJ, Obeso VT, Issenberg SB. Simulation technology for skills training and competency assessment in medical education. J Gen Intern Med 2007;23(suppl 1):46–49.
5. ten Cate O. Competency based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med 2007;82:542–549.
6. Lynagh M, Burton R, Sanson-Fisher R. A systematic review of medical skills laboratory training: where to from here? Med Educ 2007;41:879–887.
7. Sutherland L, Middleton P, Anthony A, Hamdorf J, Cregan P, Scott D, Maddern G. Surgical simulation: a systematic review. Ann Surg 2006;243:291–300.
8. Litchfield JJ, Wilcoxon F. A simplified method of evaluating dose-effect experiments. J Pharmacol Exper Ther 1949;96:99–113.
9. Tallarida RJ, Murray RB. Manual of Pharmacologic Calculations. 2nd ed. New York: Springer-Verlag; 1987.
10. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10–28.
11. Gibaldi M, Perrier D. Pharmacokinetics. 2nd ed. London: Informa Healthcare; 1982.
12. Tozer TN, Rowland M. Introduction to Pharmacokinetics and Pharmacodynamics: The Quantitative Basis of Drug Therapy. Philadelphia: Lippincott Williams & Wilkins; 2006.
13. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65(suppl):S63–S67.
14. Rethans J-J, Norcini JJ, Baron-Maldonado M, Blackmore D, Jolly BC, LaDuca T, Lew S, Page GG, Southgate LH. The relationship between competence and performance: implications for assessing practice performance. Med Educ 2002;36:901–909.
15. Weinger MB. Experience ≠ expertise: can simulation be used to tell the difference? Anesthesiology 2007;107:691–694.
16. Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest 2008;133:56–61.
17. Seymour NE, Gallagher AG, Roman SA, O'Brien MK, Bansal VK, Andersen DK, Satava RM. Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg 2002;236:458–463.
18. Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med 2009;169:1420–1423.
19. Vikis EA, Mihalynuk TV, Pratt DD, Sidhu RS. Teaching and learning in the operating room is a two-way street: resident perceptions. Am J Surg 2008;195:594–598.
20. Stefanidis D, Korndorffer JJ, Markley S, Sierra R, Heniford B, Scott D. Closing the gap in operative performance between novices and experts: does harder mean better for laparoscopic simulator training? J Am Coll Surg 2007;205:307–313.
21. McGaghie WC, Siddall VJ, Mazmanian PE, Myers J for the American College of Chest Physicians Health and Science Policy Committee. Lessons for continuing medical education from simulation research in undergraduate and graduate medical education: effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest 2009;135:62S–68S.
22. McGaghie WC. Research opportunities in simulation-based medical education using deliberate practice. Acad Emerg Med 2008;15:995–1001.
23. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79(suppl):S70–S81.
24. Ericsson KA, Lehmann AC. Expert and exceptional performance: evidence of maximal adaptation to task constraints. Annu Rev Psychol 1996;47:273–305.
25. Lane C, Rollnick S. The use of simulated patients and role-play in communication skills training: a review of the literature to August 2005. Patient Educ Counsel 2007;67:13–20.
26. Gopher D, Weil M, Baraket T. Transfer of skill from a computer game trainer to flight. Hum Factors 1994;36:1–19.
27. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Effect of practice on standardised learning outcomes in simulation-based medical education. Med Educ 2006;40:792–797.
28. Owen H, Mugford B, Follows V, Plummer JL. Comparison of three simulation-based training methods for management of medical emergencies. Resuscitation 2006;71:204–211.
29. Knudson MM, Khaw L, Bullard MK, Dicker R, Cohen MJ, Staudenmayer K, Sadjadi J, Howard S, Gaba D, Krummel T. Trauma training in simulation: translating skills from SIM time to real time. J Trauma 2008;64:255–263.
30. Gordon JA, Shaffer DW, Raemer DB, Pawlowski J, Hurford WE, Cooper JB. A randomized controlled trial of simulation-based teaching versus traditional instruction in medicine: a pilot study among clinical medical students. Adv Health Sci Educ Theory Pract 2006;11:33–39.
31. Schwartz LR, Fernandez R, Kouyoumjian SR, Jones KA, Compton S. A randomized comparison trial of case-based learning versus human patient simulation in medical student education. Acad Emerg Med 2007;14:130–137.
32. Murray DJ, Boulet JR, Avidan M, Kras JF, Henrichs B, Woodhouse J, Evers AS. Performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology 2007;107:705–713.
33. Kurrek MM, Devitt JH, Cohen MM. Cardiac arrest in the OR: how are our ACLS skills? Can J Anaesth 1998;45:130–132.
34. Bayley R, Weinger M, Meador S, Slovis C. Impact of ambulance crew configuration on simulated cardiac arrest resuscitation. Prehospital Emerg Care 2008;12:62–68.
35. Smith K, Gilcreast D, Pierce K. Evaluation of staff's retention of ACLS and BLS skills. Resuscitation 2008;78:59–65.
36. Negus S, Pasternak G, Koob G, Weinger M. Antagonist effects of betafunaltrexamine and naloxonazine on alfentanil-induced antinociception and muscle rigidity in the rat. J Pharmacol Exp Ther 1993;264:739–745.
37. Howard S, Gaba D, Fish K, Yang G, Sarnquist F. Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med 1992;63:763–770.
38. Blum RH, Raemer DB, Carroll JS, Sunder N, Felstein DM, Cooper JB. Crisis resource management training for an anaesthesia faculty: a new approach to continuing education. Med Educ 2004;38:45–55.
39. Slagle JM, Kuntz A, France D, Speroff T, Madbouly A, Weinger MB. Simulation training for rapid assessment and improved teamwork: lessons learned from a project evaluating clinical handoffs. Proc Hum Fac Ergon Soc 2007;51:668–671.
40. Tallarida RJ, Porreca F, Cowan A. Statistical analysis of drug-drug and site-site interactions with isobolograms. Life Sci 1989;45:947–961.
41. Gessner PK. Isobolographic analysis of interactions: an update on applications and utility. Toxicology 1995;105:161–179.
42. Tallarida RJ. Interactions between drugs and occupied receptors. Pharmacol Ther 2007;113:197–209.
43. Tallarida RJ. Statistical analysis of drug combinations for synergism. Pain 1992;49:93–97.
44. Schunk DH. Learning Theories: An Education Perspective. 5th ed. Upper Saddle River, NJ: Prentice Hall; 2007.
45. Woolfolk AE. Educational Psychology. 11th ed. Upper Saddle River, NJ: Prentice Hall; 2009.
46. Gazzaniga MS, Ivry RB, Mangun GR. Cognitive Neuroscience: The Biology of the Mind. 3rd ed. New York: W.W. Norton & Co; 2008.

Simulation; Pharmacologic principles; Educational theory; Dose-response relationships; Dose-dose interactions; Experiential learning; Training; Data analysis

© 2010 Society for Simulation in Healthcare