The Creation of Standard-Setting Videos to Support Faculty Observations of Learner Performance and Entrustment Decisions

Calaman, Sharon, MD; Hepps, Jennifer H., MD; Bismilla, Zia, MD; Carraccio, Carol, MD, MA; Englander, Robert, MD, MPH; Feraco, Angela, MD; Landrigan, Christopher P., MD, MPH; Lopreiato, Joseph O., MD, MPH; Sectish, Theodore C., MD; Starmer, Amy J., MD, MPH; Yu, Clifton E., MD; Spector, Nancy D., MD; West, Daniel C., MD and the I-PASS Study Education Executive Committee

doi: 10.1097/ACM.0000000000000853

Entrustable professional activities (EPAs) provide a framework to standardize medical education outcomes and advance competency-based assessment. Direct observation of performance plays a central role in entrustment decisions; however, data obtained from these observations are often insufficient to draw valid high-stakes conclusions. One approach to enhancing the reliability and validity of these assessments is to create videos that establish performance standards to train faculty observers. Little is known about how to create videos that can serve as standards for assessment of EPAs.

The authors report their experience developing videos that represent five levels of performance for an EPA for patient handoffs. The authors describe a process that begins with mapping the EPA to the critical competencies needed to make an entrustment decision. Each competency is then defined by five milestones (behavioral descriptors of performance at five advancing levels). Integration of the milestones at each level across competencies enabled the creation of clinical vignettes that were converted into video scripts and ultimately videos. Each video represented a performance standard from novice to expert. The process included multiple assessments by experts to guide iterative improvements, provide evidence of content validity, and ensure that the authors successfully translated behavioral descriptions and vignettes into videos that represented the intended performance level for a learner. The steps outlined are generalizable to other EPAs, serving as a guide for others to develop videos to train faculty. This process provides the level of content validity evidence necessary to support using videos as standards for high-stakes entrustment decisions.

S. Calaman is associate professor, Department of Pediatrics, Drexel University College of Medicine and St. Christopher’s Hospital for Children, Philadelphia, Pennsylvania.

J.H. Hepps is assistant professor, Department of Pediatrics, Uniformed Services University of the Health Sciences and Walter Reed National Military Medical Center, Bethesda, Maryland.

Z. Bismilla is assistant professor, Department of Pediatrics, University of Toronto and The Hospital for Sick Children, Toronto, Ontario, Canada.

C. Carraccio is vice president for competency-based assessment, American Board of Pediatrics, Chapel Hill, North Carolina.

R. Englander is senior director for competency-based learning and assessment, Association of American Medical Colleges, Washington, DC.

A. Feraco is clinical fellow in pediatric hematology/oncology, Dana-Farber and Boston Children’s Hospital Cancer and Blood Disorders Center, Harvard Medical School, Boston, Massachusetts.

C.P. Landrigan is associate professor, Department of Medicine and Pediatrics, Harvard Medical School, and Department of Medicine, Boston Children’s Hospital and Brigham and Women’s Hospital, Boston, Massachusetts.

J.O. Lopreiato is professor, Department of Pediatrics, Uniformed Services University of the Health Sciences and Walter Reed National Military Medical Center, Bethesda, Maryland.

T.C. Sectish is professor of pediatrics, Harvard Medical School, and Department of Medicine, Boston Children’s Hospital, Boston, Massachusetts.

A.J. Starmer is staff physician and lecturer in pediatrics, Harvard Medical School and Boston Children’s Hospital, Boston, Massachusetts, and volunteer affiliate professor, Department of Pediatrics, Oregon Health and Science University (OHSU) and OHSU Doernbecher Children’s Hospital, Portland, Oregon.

C.E. Yu is associate professor, Department of Pediatrics, Uniformed Services University of the Health Sciences and Walter Reed National Military Medical Center, Bethesda, Maryland.

N.D. Spector is professor of pediatrics, Department of Pediatrics, Drexel University College of Medicine and St. Christopher’s Hospital for Children, Philadelphia, Pennsylvania.

D.C. West is professor of pediatrics, Department of Pediatrics, University of California, San Francisco (UCSF), School of Medicine and UCSF Benioff Children’s Hospital, San Francisco, California.

I-PASS Study Education Executive Committee: Boston Children’s Hospital/Harvard Medical School (primary site): Christopher P. Landrigan, MD, MPH, Theodore C. Sectish, MD, Amy J. Starmer, MD, MPH; Boston Children’s Hospital: Elizabeth L. Noble, Lisa L. Tse; Cincinnati Children’s Hospital Medical Center/University of Cincinnati College of Medicine: Jennifer K. O’Toole, MD, MEd; The Hospital for Sick Children/University of Toronto: Zia Bismilla, MD, FRCPC, Maitreya Coffey, MD, FRCPC; Lucile Packard Children’s Hospital/Stanford University School of Medicine: Lauren A. Destino, MD, Jennifer L. Everhart, MD, Shilpa J. Patel, MD (currently at Kapi’olani Children’s Medical Center for Women and Children/University of Hawai’i at Mānoa John A. Burns School of Medicine); OHSU Doernbecher Children’s Hospital/Oregon Health and Science University: Amy J. Starmer, MD, MPH; Primary Children’s Hospital/Intermountain Healthcare/University of Utah School of Medicine: James F. Bale Jr, MD, Rajendu Srivastava, MD, MPH, Adam T. Stevenson, MD; St. Louis Children’s Hospital/Washington University School of Medicine in St. Louis: F. Sessions Cole, MD; St. Christopher’s Hospital for Children/Drexel University College of Medicine: Sharon Calaman, MD, Nancy D. Spector, MD; UCSF Benioff Children’s Hospital/University of California, San Francisco, School of Medicine: Glenn Rosenbluth, MD, Daniel C. West, MD; Walter Reed National Military Medical Center/Uniformed Services University of the Health Sciences: Jennifer H. Hepps, MD, Joseph O. Lopreiato, MD, MPH, Clifton E. Yu, MD.

Funding/Support: The I-PASS Study is primarily supported by the U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation (1R18AE000029-01). The study was developed with input from the Initiative for Innovation in Pediatric Education and the Pediatric Research in Inpatient Settings Network (supported by the Children’s Hospital Association, the Academic Pediatric Association, the American Academy of Pediatrics, and the Society of Hospital Medicine). A.J.S. was supported by the Oregon Comparative Effectiveness Research K12 Program (Agency for Healthcare Research and Quality, 1K12HS019456-01). Additional funding for the I-PASS Study is provided by the Medical Research Foundation of Oregon, Physician Services Incorporated Foundation (of Ontario, Canada), and Pfizer (unrestricted medical education grant). Computer modules used in the I-PASS curriculum were developed by Concurrent Technologies Corporation. This project was also supported by a contribution from the American Board of Pediatrics Foundation.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Correspondence should be addressed to Sharon Calaman, St. Christopher’s Hospital for Children, 160 E. Erie Ave., Philadelphia, PA 19134; telephone: (215) 427-8846; e-mail: Sharon.calaman@drexelmed.edu.

Entrustable professional activities (EPAs) provide a framework to standardize outcomes and advance assessment in competency-based medical education (CBME).1–3 This advance is important because CBME provides a way to train physicians to better meet the needs of society and opens the door to making judgments about advancement based on entrustment decisions rather than completion of training time requirements.1,4 As the medical education community moves toward making entrustment decisions in the context of specialty-specific EPAs, the way in which we assess learners will be critical because the consequences of these assessments will be significant for learners and patients.5

Central to assessing whether a learner can be entrusted to perform a particular EPA is direct observation of the learner in clinical settings by faculty supervisors.1–3 However, assessment data obtained from these types of observations are often insufficient for drawing valid conclusions for high-stakes decisions because observations can be influenced by characteristics unrelated to the skills being assessed, such as the trainee’s likeability.6–10 A number of strategies have been employed in an attempt to improve the validity of faculty observations.8,11 The most common approach has been to try to train individual faculty to be better assessors. Approaches to this type of faculty development have varied widely in methods and efficacy,8,11–13 but one of the most effective strategies has been to use video recordings of either real or simulated activities, together with frame-of-reference training, to set standards with the goal of reducing variability between raters.7,9,14–17

EPAs require the integration of competencies within and across domains.1,18 EPAs thus provide a more “panoramic” view of the learner integrating competencies to deliver care, while the individual competencies and their milestones provide a more granular view of learner abilities. Integrating competencies and milestones into an EPA framework allows one to create a shared mental model of observable behaviors that serve as “scaffolding” for the EPA. The EPA provides a more “holistic perspective of learner assessment.”18 Thus, by their nature, EPAs represent a complex set of observable behaviors that can be recognized by faculty observers in the workplace environment. Although entrustment of a learner is a binary decision, the pathway to that entrustment involves stages of development that correspond to the levels of required supervision for a given learner. This adds a meaningful construct to trainee feedback and assessment.19 Rater training is important in this process, and developing standard-setting videos is essential.

The use of video standards differs from the common use of trigger videos in medical education. The goal of trigger videos is to generate discussion, not to accurately represent performance, and best practices for creating trigger videos are well established.20–22 Typically, they should have one to three main learning objectives, with “trigger” points selected before the script is developed.20 In contrast, little is known about how to create standard-setting videos that have sufficient validity evidence to represent different levels of performance for assessment and high-stakes decision making. The purpose of this article is to help fill this gap by describing our experience creating video standards that represent five distinct levels of performance for learners engaged in the professional activity of patient handoffs. The steps we describe are generalizable to other EPAs and can serve as a guide for the development of videos that can be used to train faculty assessors and establish standards for a broad range of assessments.

Common Approaches to Creating Standard-Setting Videos

There are two reported approaches to creating standard-setting videos: unscripted and scripted. In the unscripted approach, learners are videotaped in real or standardized clinical situations, and performance examples are identified, classified, and used as case studies. This approach has been used to train faculty raters by allowing them to practice rating case study videos.14,17,23 However, for setting standards for EPA entrustment decisions, the unscripted approach has limited utility because it is difficult to capture real learner examples of each of the performance levels, especially at advanced levels where differences can be subtle. An alternative approach, and the approach we chose to pursue, is to create scripted standardized videos using actors to represent members of the health care team. The threefold intended outcomes of this approach are discrimination among multiple performance levels, especially when the differences are nuanced; development of a shared mental model of behaviors to align behaviors with predetermined performance levels; and provision of a rigorous process to validate content. Scripted videos have been used to improve teaching and feedback, communication, and professionalism skills as well as to teach ethics and diagnostic decision making.16,20,22 Experience of other investigators has identified four key principles for creating effective scripted videos: engage faculty to create and validate the scripts; limit the script to no more than three main points; use real staff and settings to make the video as realistic as possible; and keep videos as short as possible (30 seconds to 2 minutes).16,20

Approach to Creating Video Standards for EPAs

Create and validate EPA-level anchors and vignettes

Our goal was to construct standard-setting videos reflecting novice to expert performance of learners engaged in the EPA of patient handoffs. This process is summarized in Figure 1. We started with a draft EPA entitled “Facilitate handovers to another healthcare provider either within or across settings” using an approach that has been previously described.2,18,24 Briefly, the initial step in the process was describing the functions required to perform the EPA, then judiciously mapping these functions to the domains of competencies and their associated milestones that are most critical to entrusting a learner to perform an effective handoff without supervision. We then created a matrix of the designated competencies (Figure 1, left-hand column) and their milestones (across the rows). The latter provide a series of behavioral descriptions for learners at each of five performance levels (corresponding to Dreyfus and Dreyfus’s novice, advanced beginner, competent, proficient, and expert levels).25,26 For clarification, whereas the Dreyfus and Dreyfus model outlines an additional performance level beyond expert, that of master, the Pediatrics Milestones did not include a level for mastery because primary source literature to discriminate between these two levels was not available. By integrating the behavioral descriptions down each of the five columns representing a performance level, we created five clinical vignettes, adapting the language of the original milestones to the context of the given EPA (Figures 1 and 2). To validate the behavioral descriptions and vignettes, a panel of experts in the field of patient handoffs reviewed their content. These experts had studied handoffs and were well versed in the available literature. If such experts are not readily available when creating other standard-setting videos, a careful review of the literature would be an important step. During this review process, for example, additional subcompetencies were added and some of the original ones were removed. Members of the Milestones Working Group and the I-PASS* Study Group engaged in an iterative process to better articulate behaviors that were not explicitly captured by the milestones-based language, remove nonobservable elements, and translate high-inference behaviors into low-inference behaviors (described below) until consensus was reached. The product of this final step served as the starting point for video development (Figure 2).
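
To make the structure of this step concrete, the following minimal sketch (in Python) illustrates how behavioral descriptors arranged by competency (rows) and performance level (columns) can be read down a single column to assemble the draft vignette for that level. The competency names and descriptor text here are hypothetical placeholders, not the actual Pediatrics Milestones language.

```python
# Hypothetical competency x performance-level matrix; descriptor text is illustrative only.
milestone_matrix = {
    "Organize and prioritize patient information": [
        "Level 1: reads data verbatim without prioritization",
        "Level 2: groups data by problem with little prioritization",
        "Level 3: prioritizes the most active problems",
        "Level 4: consistently prioritizes by acuity",
        "Level 5: synthesizes and prioritizes by illness severity",
    ],
    "Communicate contingency plans": [
        "Level 1: omits contingency planning",
        "Level 2: mentions contingencies only when prompted",
        "Level 3: includes contingencies for major problems",
        "Level 4: routinely includes if/then plans",
        "Level 5: anticipates problems and gives clear if/then plans",
    ],
}

def draft_vignette(level: int) -> str:
    """Integrate the descriptors down one column (performance level)
    to form the starting point for that level's clinical vignette."""
    behaviors = [rows[level - 1] for rows in milestone_matrix.values()]
    return " ".join(behaviors)

print(draft_vignette(5))  # draft text for the expert-level vignette
```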

Develop the scaffolding for script development

To develop the video scripts, we first dissected the expected behaviors for each level of performance into single, discrete, observable behaviors or skills to serve as scaffolding for each performance level. An example of this process, using a portion of the vignette for performance level 5 of the EPA, is provided in Figure 3. The initial draft of the scaffolding was reviewed and revised by handoff content experts until consensus was obtained. We found that careful attention to this step was essential because the video scripts and filming followed from this work, and key differences between levels of performance were often subtle and nuanced. Taking shortcuts at this step risks producing scripts and videos that lack sufficient discriminatory features for standard setting. Special effort is required to adequately capture in a script, and in the subsequent video, the discriminatory features associated with varying levels of performance (e.g., performing a task occasionally versus frequently).

A particularly challenging aspect of this process was that some components of the original milestones-based descriptors (which served as the core of the vignettes) are high-inference behaviors that cannot be directly observed. High-inference behaviors are defined as vague and open to subjectivity, whereas low-inference behaviors can be clearly observed and objectively quantified.27,28 In the example illustrated in Figure 3, the idea that a learner “internalizes the professional responsibility aspect of handoff communication” is a high-inference behavior. To overcome this challenge, we translated high-inference descriptors into low-inference, observable behaviors by identifying the observable elements that support drawing high-inference conclusions. These observable behaviors may count toward more than one element in the performance description.

Develop video scripts

We developed the script for the most advanced level of performance first, with subsequent levels following from that. We paid particular attention to the types and complexity of the patient scenarios because these would have to be consistent throughout the scripts for all five performance levels. The patient cases needed to be complex enough to demonstrate differences in level of performance but not so complex as to distract the observer; the observer should be focused on the learners, not on managing the medical aspects of the case. In addition, because the videos would be disseminated across a broad range of institutions and institutional cultures, it was important that the patient case scenarios be common enough to be broadly generalizable.

On the basis of the EPA scaffolding, two of us (S.C. and J.H.) developed initial drafts of the scripts, which were then reviewed and revised by a small team of simulation experts. Through this process, multiple revisions in patient cases, dialogue, and flow were made in an iterative fashion until consensus was reached. Once the script for the most advanced level of performance was completed, we worked backward to develop scripts that reflected the behaviors in the clinical vignettes associated with the other four levels of performance.

Validation of video scripts and iterative improvement

Once the draft scripts for the five levels were finalized, a panel of experts in handoff content, distinct from the simulation experts, reviewed them to assess whether they appeared to accurately reflect each performance level of the EPA. This process also ensured that the behaviors depicted discriminated sufficiently between levels of performance and were appropriate to the intended level of competence rather than a reflection of some other unrelated attribute, such as likeability. As a final validation step, the revised scripts were reviewed again by the full group of simulation, handoff, and milestone experts for any additional revisions and approval.

Dress rehearsal videos

Once the video scripts were complete, we created dress rehearsal videos: versions filmed without full attention to the setting, using actors who perform a casual read-through of the script. We learned the importance of this step from our first attempt at developing standard-setting videos. Without it, we had to reshoot videos because they contained behavioral elements that were not detailed in the written script (likeability behaviors, overall length, uniformity) and that became apparent only in the video version. The goal of dress rehearsal videos was to cross-check whether we had successfully translated the scripts into videos that were valid representations of the intended performance before investing resources in higher-quality video production. To do this, we recruited content experts, as well as additional nonexpert faculty who were not acquainted with the scripts, to review the videos. Each rater was blinded to the intended performance level of the trainees in the video to determine whether they could match the observed performance to its intended level. We found this to be an especially important validation step in our efforts to convert high-inference to low-inference behaviors, a check that was not possible by review of the written scripts alone. Finally, the EPA vignettes as originally scripted focused on both the giver and the receiver of the handoff. When we tried to portray this in the dress rehearsal videos, we found that it made faculty assessment more difficult and resulted in greater variability in faculty ratings. The videos were too long for effective training, and it was harder to determine which skills reflected which learner. Faculty ratings were more consistent when we focused on varying the behavior of a single actor, the resident giving the handoff, while keeping the receiver’s performance at a consistent level.

Video filming and editing

Because little has been published about best practices for filming standard-setting videos, we chose to adapt best practices from trigger and training videos. We also made practical decisions to film in a clinic conference room and briefly at the clinic main desk, attempting to balance fidelity of the environment with the need to control noise and interruptions. To minimize production costs, resident physicians were used as the actors in the videos, even in the nursing and student roles. We found that we could use residents in these roles without a noticeable difference in fidelity. Importantly, we used the same actors throughout the videos of all five levels to avoid potential bias due to gender or race. To record the videos, we used a Sony HDR-CX900 HD camcorder to record in MP4 format, which allowed for editing in iMovie afterward. The final versions of these videos were developed in 2013. For purposes of internal dissemination, the videos are available from the authors by request.

Final validation of EPA-level videos

Training raters for high-stakes decision making requires a higher level of validity than for ordinary decision making, rendering this additional step critical to ensuring the content validity of the videos.29 Therefore, as a final content validation step, expert and nonexpert faculty rated the proposed final version of each video using assessment tools derived from the same EPA scaffolding used to create the video script components. We then calculated the proportion of faculty raters who assigned the rating level that matched the intended video performance, in a manner similar to a previously described content validity index.30 A priori, we established a proportion ≥ 0.8 as evidence of content validity sufficient for high-stakes decisions. After one round of ratings, we achieved this standard in all five videos; had we not, we would have continued with additional rounds of revision and rating until we did. Additional studies are under way, using the assessment tool described above, to test the validity of using these videos as standards for faculty direct observation of handoffs, both in objective structured clinical examination handoff exercises and in live patient handoffs, for the purpose of making entrustment decisions.
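
As a concrete illustration of this final check, the short sketch below (in Python) computes, for each video, the proportion of raters whose assigned level matched the intended level and compares it against the a priori 0.8 standard. The rater data and variable names are hypothetical examples, not our actual results.

```python
# Hypothetical illustration of the content-validity check described above.
# ratings[video] lists the performance levels assigned by faculty raters;
# intended[video] is the level the video was scripted to portray.
ratings = {
    "video_level_1": [1, 1, 1, 2, 1, 1, 1, 1, 1, 1],
    "video_level_5": [5, 5, 4, 5, 5, 5, 5, 5, 5, 5],
}
intended = {"video_level_1": 1, "video_level_5": 5}

THRESHOLD = 0.8  # a priori proportion required as evidence of content validity

for video, assigned in ratings.items():
    matches = sum(level == intended[video] for level in assigned)
    proportion = matches / len(assigned)
    status = "meets" if proportion >= THRESHOLD else "falls below"
    print(f"{video}: proportion = {proportion:.2f} ({status} the 0.8 standard)")
```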

Conclusions and Next Steps

With the new focus in medicine on CBME, direct faculty observation of learner performance will be a key approach to assessment. Developing valid ways to assess learner performance presents many challenges, but a critically important one is the need to establish consensus standards of performance that can be reliably observed and that support high-stakes entrustment decisions. Rigorously developed standard-setting videos following the process we describe can help fill this gap. Standard-setting videos do have limitations, however, because the authentic clinical setting is a “messy” place where it is not possible to control variables in the ways we did when creating the patient handoff standard-setting videos. A key to using these videos and any associated assessment tools in the real-life setting will be to conduct studies that generate additional validity evidence. Even at this stage, however, standard-setting videos provide a significant leap forward in helping faculty hone their direct observation skills, develop a shared mental model of targeted behaviors for a given EPA, and align specific behaviors with levels of performance illustrated by competencies and milestones on the developmental trajectory from novice to expert. In other words, the power of standard-setting videos lies in their potential to serve not only as visual representations of different levels of learner performance in carrying out professional activities but also as a shared mental model for training faculty to consistently recognize the behaviors in question and align them with the appropriate predetermined performance level. Perhaps most important, these videos will enable us to conduct additional empirical studies in the clinical workplace to discover the level of supervision that typically equates with a given performance level illustrated by the video and, ultimately, the video performance level that equates with entrustment.

On the basis of our experience working with an EPA for patient handoffs, we have outlined a rigorous process that can be used to create standard-setting videos for any EPA (Figure 1). The process begins with integration of the milestones at each level across competencies, which enables the creation of clinical vignettes that are converted into video scripts and ultimately videos, each representing a performance standard from novice to expert. The process includes multiple assessments by experts to guide iterative improvements, provide evidence of content validity, and ensure that the behavioral descriptions and vignettes are successfully translated into videos that represent the intended performance level for a learner. We suggest five key principles: (1) Start by mapping the EPA to the critical competencies needed to make an entrustment decision; each competency is then defined by five milestones (behavioral descriptors of performance at five advancing levels), and these discrete, observable behaviors or skills serve as the scaffolding for script development. (2) Focus on low-inference, observable behaviors and skills, and avoid attempts to portray high-inference skills or reasons for behaviors that cannot be easily and unambiguously represented on video. (3) At each stage of script and video development, ensure content validity through an iterative process with multiple levels of review by experts in the milestones as well as in the specific clinical activity related to the EPA. (4) Before investing resources in high-quality video production, create dress rehearsal videos to test whether neutral observers can recognize the level-distinguishing behaviors intended to be portrayed in the written scripts. (5) Test the validity of the level-specific videos against related assessment tools using both expert and nonexpert raters. This last principle is a key step in generating the content validity evidence needed to support using standard-setting videos for drawing the high-stakes conclusions necessary for entrustment decisions and for CBME to reach its full potential.

Acknowledgments: The primary authors thank the members of the I-PASS Study Education Executive Committee for their contributions to this article.

* I-PASS is a mnemonic for the key elements of the handoff process: I: Illness severity; P: Patient summary; A: Action items; S: Situation awareness and contingency planning; S: Synthesis by receiver.

References

1. Jones MD Jr, Rosenberg AA, Gilhooly JT, Carraccio CL. Perspective: Competencies, outcomes, and controversy—linking professional activities to competencies to improve resident education and practice. Acad Med. 2011;86:161–165
2. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547
3. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–1177
4. Berwick DM, Finkelstein JA. Preparing medical students for the continual improvement of health and health care: Abraham Flexner and the new “public interest.” Acad Med. 2010;85(9 suppl):S56–S65
5. Carraccio C, Burke AE. Beyond competencies and milestones: Adding meaning through context. J Grad Med Educ. 2010;2:419–422
6. Hauer KE, Ten Cate O, Boscardin C, Irby DM, Iobst W, O’Sullivan PS. Understanding trust as an essential element of trainee supervision and learning in the workplace. Adv Health Sci Educ Theory Pract. 2014;19:435–456
7. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: A conceptual model. Med Educ. 2011;45:1048–1060
8. Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med. 2003;15:270–292
9. Holmboe ES, Ward DS, Reznick RK, et al. Faculty development in assessment: The missing link in competency-based medical education. Acad Med. 2011;86:460–467
10. Kogan JR, Conforti LN, Iobst WF, Holmboe ES. Reconceptualizing variable rater assessments as both an educational and clinical care problem. Acad Med. 2014;89:721–727
11. Noel GL, Herbers JE Jr, Caplow MP, Cooper GS, Pangaro LN, Harvey J. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med. 1992;117:757–765
12. Newble D. Techniques for measuring clinical competence: Objective structured clinical examinations. Med Educ. 2004;38:199–203
13. Woehr DJ, Huffcutt AI. Rater training for performance appraisal: A quantitative review. J Occup Organ Psychol. 1994;67:189–205
14. Boulet JR, De Champlain AF, McKinley DW. Setting defensible performance standards on OSCEs and standardized patient examinations. Med Teach. 2003;25:245–249
15. de Leng B, Dolmans D, van de Wiel M, Muijtjens A, van der Vleuten C. How video cases should be used as authentic stimuli in problem-based medical education. Med Educ. 2007;41:181–188
16. Losh DP, Mauksch LB, Arnold RW, et al. Teaching inpatient communication skills to medical students: An innovative strategy. Acad Med. 2005;80:118–124
17. Ottolini MC, Cuzzi S, Tender J, et al. Decreasing variability in faculty ratings of student case presentations: A faculty development intervention focusing on reflective practice. Teach Learn Med. 2007;19:239–243
18. Carraccio CL, Englander R. From Flexner to competencies: Reflections on a decade and the journey ahead. Acad Med. 2013;88:1067–1073
19. Ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ. 2013;5:157–158
20. Ber R, Alroy G. Twenty years of experience using trigger films as a teaching tool. Acad Med. 2001;76:656–658
21. Alroy G, Ber R. Doctor–patient relationship and the medical student: The use of trigger films. J Med Educ. 1982;57:334–336
22. Ber R, Alroy G. Teaching professionalism with the aid of trigger films. Med Teach. 2002;24:528–531
23. Aylward M, Nixon J, Gladding S. An entrustable professional activity (EPA) for handoffs as a model for EPA assessment development. Acad Med. 2014;89:1335–1340
24. Gilhooly J, professor of pediatrics, Oregon Health and Science University; Carraccio C, vice president for competency-based assessment, American Board of Pediatrics; Englander R, senior director for competency-based assessment and learning, Association of American Medical Colleges. Personal communication with Nancy Spector, January 18, 2012.
25. Carraccio CL, Benson BJ, Nixon LJ, Derstine PL. From the educational bench to the clinical bedside: Translating the Dreyfus developmental model to the learning of clinical skills. Acad Med. 2008;83:761–767
26. Dreyfus HL, Dreyfus SE. Mind Over Machine. New York, NY: Free Press; 1988
27. Chitsabesan P, Corbett S, Walker L, Spencer J, Barton JR. Describing clinical teachers’ characteristics and behaviours using critical incidents and repertory grids. Med Educ. 2006;40:645–653
28. Bush A, Kennedy J, Cruickshank D. An empirical investigation of teacher clarity. J Teach Educ. 1977;28:53–54
29. Feldman M, Lazzara EH, Vanderbilt AA, DiazGranados D. Rater training to support high-stakes simulation-based assessments. J Contin Educ Health Prof. 2012;32:279–286
30. Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35:382–385
© 2016 by the Association of American Medical Colleges