As competency-based education prioritizes attained competence over time spent in training, the relationship between time in training and the magnitude of learning effects is relevant. Learning requires time and effort by learners.1 More time and effort lead to more learning, assuming all other factors are constant. However, the relationship between time and learning effect can be influenced by many factors. Individuals may differ in learning capability, but programs usually do not allow for time variation; the way time-on-task is filled can be more or less effective, independent of the amount of time; and contexts may be more or less conducive to learning.2–5
Since the rise of educational theory in the 20th century, researchers have examined the relationship between time, effort, and learning. In medical education, however, the rationale underpinning the chosen length of training programs has never been well argued.6 As an example, the difference in training length for specialists in dentistry and ophthalmology (in the United States, 8 vs 12 years after secondary school, respectively; in Europe, 5 or 6 vs 11 years)—both specialties that focus on disease and dysfunction in a similarly sized segment of the body—is a legacy of history more than an evidence-informed decision. With the introduction of competency-based education, the length of training is being debated by medical educators.7 A shift from time-based education with variable outcomes to competency-based education suggests that the length of training should depend on the needs of individual learners.8,9
In this article, we discuss educational and related theories developed over the past 50 years that bear on the time individuals need for training, with particular reference to interindividual variability. In addition, we discuss concepts relevant to time-variable medical training, including deliberate practice, neuroscience and cognition, motivation theories, professional identity formation, and entrustment processes.
The Carroll Model: Putting Time in the Equation of Learning
The roots of competency- and outcomes-based education may be found in the educational thinking that emerged in the post–World War II period around educational objectives, initiated by educational scientist Ralph Tyler.10 Tyler proposed that schools should rethink their purpose and organize learners’ experiences toward those purposes.11 His work affected medical education, as he worked closely with Case Western Reserve University to define objectives for medical training. Tyler’s insights were followed by Benjamin Bloom’s work expanding taxonomies of educational objectives.12,13 Once objectives for education became accepted, John Carroll, an educational psychologist, proposed an educational model that included aptitude and perseverance. He proposed that the degree of learning that occurs is a function of the time spent by the learner and the time needed by the same learner to attain an educational objective—that is, learning effect = f (time spent/time needed). In other words, a learner succeeds in learning a given task to the extent that he or she spends the amount of time that he or she requires to learn the task, which differs for each individual.14–16 This general thinking, which focuses solely on individual factors, had a pervasive influence on behaviorist educational theory in the 1960s and 1970s and was also applied to medical training at that time.17
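Carroll's relation, as described in the text, can be written compactly; the annotation below restates the model rather than adding to it:

```latex
% Carroll's model of school learning (1963): the degree of learning is a
% function of the ratio of time spent to time needed, and the time needed
% differs for each learner.
\[
\text{learning effect} \;=\; f\!\left(\frac{\text{time spent}}{\text{time needed}}\right)
\]
% When time spent is at least time needed, the ratio reaches 1 and the
% learner attains the objective; a time-fixed program holds the numerator
% constant while the denominator varies across learners.
```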
Mastery Learning, Deliberate Practice, and Learning Curves
The Carroll model subsequently inspired Bloom18 to develop a model of education called mastery learning, a concept that quickly caught the attention of medical educators.19 Mastery learning is rooted in behaviorist principles and established itself as successful in the 1980s. Its learning effects are well documented, particularly in the cognitive domain.20,21 In mastery learning, learners engage in repetitive practice, practicing tasks with increasing levels of difficulty while receiving feedback during the learning experiences. Learning outcomes from mastery learning programs were meant to be uniform—that is, with little or no variation among learners. Importantly, educational time to achieve standardized outcomes was expected to vary among learners.
With the expansion of simulation techniques, mastery learning has enjoyed a revival in medical education.22–24 Repetitive practice for clinical skills can be organized in a mastery learning arrangement, using simulation. McGaghie et al25,26 described seven features of mastery learning in this context (see List 1). Time variation as a necessary component of mastery learning is captured in the seventh feature (see List 1): “… until the mastery standard is reached,” and the time this takes supposedly differs for each learner.
List 1. Features of Mastery Learning, Based on the Work of McGaghie et al25
- A mastery learning course should start with baseline or entry-level diagnostic testing.
- The course must state clear learning objectives, sequenced as units, usually in increasing difficulty.
- There must be engagement in educational activities (e.g., deliberate skills practice, calculations, data interpretation, reading) focused on reaching the objectives.
- For each educational unit, a minimum passing standard is set (e.g., test score).
- Formative testing must inform the learner about unit completion.
- Advancement to the next educational unit is contingent upon completion of the previous unit at or above the mastery standard.
- There must be continued practice or study on an educational unit until the mastery standard is reached.
Whereas mastery learning sets standardized performance goals to be measured with similar tests for all students, Ericsson’s expert performance approach with deliberate practice is focused on developing reproducibly superior performance in an individual. In the training of musicians, Ericsson et al27 found the most effective form of practice to be one-on-one training by a teacher assessing the students’ skills, assigning personalized practice goals and training tasks, and providing immediate feedback to enhance students’ gradual attainment of training goals. They named this model deliberate practice.27 Ericsson proposed that, rather than innate talent, deliberate practice is the key to expert performance,28 which implies that anyone with reasonable capability can attain expert performance goals. More recently, Ericsson28 described how the expert performance approach with deliberate practice could be applied to physicians’ training. Whereas mastery learning provides short-term goals (such as mastering the insertion of a central venous catheter), the expert performance approach aims to attain long-term goals to help learners continue with integrated skill acquisition throughout their professional careers. If the conditions for deliberate practice29 are applied by the learner and supported by the educational context, they should lead to time-efficient skills acquisition. The essentials of deliberate practice are clear performance goals; focused, repetitive practice; rigorous, reliable measurements of results; informative feedback (e.g., from simulators or teachers); monitoring and error correction; and subsequent further practice. There is some debate about the domains and contexts in which deliberate practice can be applied.30 Outside the workplace, however, such as in simulation centers, deliberate practice has been convincingly shown to be effective.24,29
In some areas of clinical medicine, the relationship between practice effort, time, and proficiency has been well established. One example is in colonoscopy procedures. Chung et al31 graphed the typical learning curves of 12 gastroenterology fellows based on 3,243 colonoscopies (see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A514). They concluded that, at their center, 200 colonoscopies are necessary to attain the proficiency to conduct colonoscopies with reasonable efficiency; however, as the graph shows, individuals differ. Some fellows reached proficiency with less practice than others, supporting the idea that time-variable training, beyond a minimum threshold, is useful. On average, however, trainees may arguably be required to perform at least 150 to 200 colonoscopies under supervision before unsupervised execution may be considered. Colonoscopy represents a clinical workplace procedure that is done repeatedly, with clear targets (“reach the cecum within 20 minutes without missing pathology; next, do this within 15 minutes”), and with deliberate practice conditions as specified above. Although expert performance takes different people different amounts of time, there seems to be at least a floor of experience required for anyone to achieve endoscopic competence. However, the practice of health care in other workplace settings may very often not meet the conditions specified for deliberate practice, in large part because of the complexity of the goals, the absence of rigorous and timely feedback, and the lack of opportunities to reattempt the work in a manner that incorporates the feedback.
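To illustrate how such a proficiency threshold might be read off a learning curve, the sketch below finds the first procedure at which a trainee's recent success rate reaches a target. The data, window size, and 90% target are hypothetical choices for illustration, not values taken from Chung et al:

```python
# Illustrative sketch: estimate the number of supervised procedures a
# trainee performs before a moving success rate crosses a proficiency
# target. All numbers here are hypothetical, not from the cited study.

def procedures_to_proficiency(outcomes, target=0.9, window=20):
    """Return the first procedure count at which the success rate over the
    last `window` procedures reaches `target`, or None if never reached."""
    for i in range(window, len(outcomes) + 1):
        recent = outcomes[i - window:i]  # outcomes of the last `window` cases
        if sum(recent) / window >= target:
            return i
    return None

# Two hypothetical trainees: one stabilizes after few failures, the other
# alternates longer before succeeding consistently, mirroring the
# interindividual variation described above.
fast = [0] * 10 + [1] * 60       # fails first 10 cases, then succeeds
slow = [0, 1] * 30 + [1] * 40    # alternates for 60 cases, then succeeds

print(procedures_to_proficiency(fast))
print(procedures_to_proficiency(slow))
```

Both trainees eventually cross the same standard, but at different procedure counts; a fixed minimum case number treats the mean of such curves as if it applied to every individual.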
Learning in the workplace is not always organized to provide repetitive, deliberate practice opportunities in a more or less standardized setting, as occurs in colonoscopy training. That does not mean that time variation between individual trainees cannot be shown. Another area that relates proficiency to length of training is progress testing, in which a comprehensive examination, sampling the spectrum of medicine or a specialty, is administered repeatedly throughout the entire curriculum and reveals trainees’ progress in knowledge acquisition over time.32 For example, Supplemental Digital Appendix 2 (available at http://links.lww.com/ACADMED/A514) shows scores on a Dutch nationwide formative radiology progress test administered twice a year to all residents (postgraduate years 1 through 5) from April 2005 to April 2009.33 The superimposed lines represent a recent discussion regarding the transition to a summative examination with a cutoff score that all residents should meet. Most residents will have passed this threshold after three years, but some fifth-year residents still do not meet this criterion at the end of training and, after a shift toward a competency-based program, would require remediation (the shaded block).
From these examples, it becomes clear that the time learners need to attain a minimum level of competence varies, supporting the general notions from education theory. Practice is needed, setting mastery goals is useful, and deliberate practice may enhance learning, decrease time, and lead to higher levels of proficiency. Assuming that a time-variable training model is designed to specifically reduce training time, however, would be incorrect. Slower learners may need more time to reach high levels of proficiency and should have time and space to do so.
Time Variability and the Theories of Self-Regulation and Motivation
Learners’ agency is dependent on their motivation to act. In clinical workplaces, self-regulation of learning is key in the development of competence, much more so than in preclinical classroom education.34 It requires a sequenced pattern of thinking and planning that includes a forethought phase, a performance phase, and a reflection phase,35 which is compatible with a cyclical experiential learning process.36 These phases may be concluded with questions like “Where was I going?” “How did I get there?” and “Where to next?”37
Individual variation in the time it takes to develop competence may in part be explained by motivation and self-regulated learning ability. Although motivation to some extent is an individual characteristic, it is also affected by contextual factors and emotions that in turn affect achievement.38–40 Therefore, a discussion of motivation theories, as they relate to time variability, is warranted; we briefly focus on goal orientation theory and self-determination theory (SDT).
Learners may have different goal orientations. Someone with a goal of true learning (called mastery goal orientation) is less focused on performance and more concerned with gaining new knowledge and skills. In contrast, someone with a goal of performance (called performance goal orientation) is focused on receiving positive and avoiding negative judgments of his or her competence.41 Competency-based curricula that focus on attaining a required level of performance carry the risk of preferentially strengthening a performance goal orientation in learners, particularly in medicine’s competitive learning culture, with the possibility that additional time in the same context after reaching (minimum) standards could be considered wasted time or a sign of less capability in a competitive world.42
SDT is another major motivational theory. It describes three innate psychological needs of human beings that must be satisfied to generate and maintain intrinsic motivation: a feeling of competence, a feeling of autonomy, and a feeling of relatedness to a group or community, such as friends, family, or colleagues. A clinical context that satisfies these needs—by rewarding competence, granting responsibilities when possible, and providing autonomy at work (e.g., by accepting learners as important members of the health care team)—can be expected to foster intrinsic motivation.43 Intrinsic motivation drives self-regulation of learning. SDT posits that motivation weighted toward the intrinsic side of the extrinsic–intrinsic spectrum of motivation—often called autonomous motivation—correlates with academic achievement and well-being.44 This supports the idea that variations in context affecting motivation will lead to variations in time to attain this achievement. Unfortunately, clinical contexts are increasingly not conducive to autonomy-supportive conditions,45 and trainee well-being is increasingly challenged.46,47 As time-variable training is dependent on a context that drives motivation, medical schools and hospitals should be aware of their learning climate.48
Neurocognitive Perspectives of Time and Learning
Successful learning cannot be reduced to overt learning activities alone. From a neurocognitive perspective, learning also happens during periods that are not explicitly devoted to learning.49 Neuroscience has revealed that learning (i.e., the adaptation of the brain) happens not only during intentional learning activities but also implicitly or during informal interactions and periods of rest. Sleep fosters memory consolidation.50–53 It is well established that, for learning goals requiring extensive rehearsal, spaced practice over a prolonged time is more effective than massed practice in a condensed time, despite similar investments of practice hours.54–57 Evidently, something happens in periods of nonpractice that contributes to a learning effect, akin to muscle development after physical training. Educational programs should therefore consider the importance of time that is not filled with learning activities, at least not with repetition of similar activities in a short period of time, and the importance of mental rest for learning. If only overt learning activities are counted and assessed, one may lose sight of the importance of covert processing time. That holds true for time-fixed as well as time-variable programs, but it may require more explicit awareness among educators in time-variable programs.
Another cognitive science perspective relevant to time-variable training is covert or latent learning—the learning that occurs in the absence of any obvious reinforcement or noticeable behavioral change58—which is necessary to develop clinical reasoning skills. Clinical reasoning is assumed to involve two mental processes: System 1, or fast, automatic thinking, and System 2, or slow, analytic thinking.59,60 Adequate System 1 thinking, roughly conceptualized as pattern recognition, requires extensive clinical experience. Much of it cannot be taught in educational courses.61 It develops during clinical practice, when patient encounters and their management add to a repository of illness scripts: the mental store that allows an experienced clinician to rapidly recognize a pattern of signs and symptoms in a particular patient based on patients encountered in the past.62,63 Building this repository of illness scripts takes time and experiences that are not easily planned. The diversity of patient encounters may be understood and valued in hindsight, but preplanning clinical placements of a designated duration does not guarantee a deliberate mix of patient encounters. The current rotational system has instead been called a “curriculum by random opportunity.”64
Time Variability From the Perspective of Professional Identity Formation
The professional identity of a physician has been defined as “a representation of self, achieved in stages over time during which the characteristics, values, and norms of the medical profession are internalized, resulting in an individual thinking, acting, and feeling like a physician.”65 Time-variable training from the perspective of knowledge and skills, as exemplified in mastery learning, does not take into account that professional identity development may take time, independent of specific skill acquisition. As Hafferty66 noted, “While any occupational training involves learning new knowledge and skills, it is the melding of knowledge and skills with an altered sense of self that differentiates socialization from training.”
Professional identity continues to develop after training, and there is no predetermined identity formation goal that trainees must achieve before graduating. Although there are processes related to identity formation that are common to all trainees, it is likely that individuals differ not only in how this process affects their identity but also in the time identity formation takes. Educational programs can affect this process.67 For instance, rituals in medical training—the white coat ceremony, the awarding of the degree, the granting of patient care responsibility through entrustment68—are likely to influence a learner’s sense of socialization into the profession and her or his formation of a medical identity.
Identity formation is not easily captured in a specific time frame, but it may be hampered when training is substantially shortened and learners must act as professionals while not feeling ready for it. Several theories have added to our understanding of the features of the (clinical) workplace that affect learning and professional development, such as communities of practice and cognitive apprenticeship, reflective practice, social cognitive theory, and experiential learning.36,69–72 None of these theories explicitly focuses on time as a relevant variable, yet many or all implicitly support the idea that becoming a professional takes time and social interaction.
Entrustment as an Objective of Training
Although objectified knowledge and skills are usually regarded as the ultimate standard for graduation from an educational program, the concept of entrustment to perform professional tasks may be a more suitable approach in the health professions. Entrustment decisions, which basically answer the question “Can I leave this trainee to carry out health care tasks without supervision?” represent the trust that must ultimately be conveyed. Theoretical insights around trust and entrustment decisions in clinical training have been emerging since the concept of entrustable professional activities (EPAs) was introduced.73 Whether an entrustment decision is made is determined by several factors, grouped into five sources: (1) features of the trainee, (2) features of the supervisors, (3) characteristics of the context, (4) the nature of the task that is being entrusted, and (5) the relationship between trainee and supervisor.74–76 Time is critical, as trust develops over time, and entrustment processes may be best served by longitudinal attachments.77,78 Decisions to trust involve the valuation of risks: Are the risks acceptable? Has the trainee passed a threshold that justifies her or his unsupervised execution of an EPA?79–81 In time-variable training, the nature of the supervisory context and the sequencing of activities and entrustment decisions for EPAs may be critical for a learner’s readiness for certification.
EPAs imply time variability. As not all EPAs are mastered at the level of unsupervised practice at the end of training (or, for undergraduate medical education, at the level of indirect supervision), EPAs foster the deliberate granting of responsibilities as soon as learners are deemed ready for it. As training progresses, trainees may become gradually qualified and entrusted with EPAs one at a time, while transforming from a trainee into a professional.73
To account for individual differences in the time needed to master EPAs, individualized training pathways allowing for time variability will be necessary,82,83 so that learners only begin performing unsupervised tasks when they can perform them competently. If trust in a learner is established early across all essential EPAs of a program, a decrease in total program length for that individual can be considered. However, it is just as reasonable to expect that some learners will require an increased total time.
The time needed to attain specific educational objectives may vary among learners. Carroll made this observation in 1963, when he proposed that the learning effect is a function of the time spent on learning divided by the time needed by the learner. With current knowledge and insights, we can conclude that for health professions education, this equation is only part of the story. The model was inspired by classroom learning in primary and secondary school, while in health professions education, the clinical workplace adds a significant dimension. Mastery learning and deliberate practice in health professions education, most prominently exemplified in simulation and training in the procedural domains, do roughly follow Carroll’s model, but training in the clinical environment brings several other factors to this equation. Carroll focused on overt and testable learning objectives; he did not take experiential learning into account, did not relate it to professional identity formation, and certainly did not think of entrustment to perform professional responsibilities as the objective of training. Alternating high-intensity experiences and rest time may be particularly conducive to learning; the development of professional identity requires time; and self-regulation habits, goal orientation, and motivation influence the time and effort an individual puts into learning. Finally, time is a factor if trust and entrustment become central components of the assessment of clinical trainees.
Reviewing the richness of these underlying concepts, we can conclude that, in approaching time variability, the Carroll equation is somewhat simplistic in its application to the breadth of medical training. Future work will need to systematically and thoughtfully attend to the additional factors that we have articulated and perhaps create a new equation of time variability that is relevant to both in vitro and in situ medical training.
The authors wish to thank Anders Ericsson, PhD, from the Department of Psychology, Florida State University, Tallahassee, Florida, and Justin Sewell, MD, Zuckerberg San Francisco General Hospital and Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California, for reviewing sections of this article.
1. Bloom BS. Time and learning. Am Psychol. 1974;29:682–688.
2. Lee HS, Anderson JR. Student learning: What has instruction got to do with it? Annu Rev Psychol. 2013;64:445–469.
3. Asch DA, Nicholson S, Srinivas SK, Herrin J, Epstein AJ. How do you deliver a good obstetrician? Outcome-based evaluation of medical education. Acad Med. 2014;89:24–26.
4. Smirnova A, Ravelli ACJ, Stalmeijer RE, et al. The association between learning climate and adverse obstetrical outcomes in 16 nontertiary obstetrics–gynecology departments in the Netherlands. Acad Med. 2017;92:1740–1748.
5. Ravesloot CJ, van der Schaaf MF, Kruitwagen CLJJ, et al. Predictors of knowledge and image interpretation skill development in radiology residents. Radiology. 2017;284:758–765.
6. Custers EJFM, ten Cate O. The history of medical education in Europe and the United States, with respect to time and proficiency. Acad Med. 2018;93(3 suppl):S49–S54.
7. Emanuel EJ, Fuchs VR. Shortening medical training by 30%. JAMA. 2012;307:1143–1144.
8. Long DM. Competency-based residency training: The next advance in graduate medical education. Acad Med. 2000;75:1178–1183.
9. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Acad Med. 2002;77:361–367.
10. Tyler RW. Basic Principles of Curriculum and Instruction. Chicago, IL: University of Chicago Press; 1949.
11. Chan BY. After Tyler, what? A current issue in curriculum theory. Educ J. 1977:21–31.
12. Bloom B, Engelhart M, Furst E, Hill W, Krathwohl D. Taxonomy of Educational Objectives: The Classification of Educational Goals; Handbook I: Cognitive Domain. New York, NY: Longmans, Green; 1956.
13. Krathwohl DR, Bloom BS, Masia BB. Taxonomy of Educational Objectives, the Classification of Educational Goals. Handbook II: Affective Domain. New York, NY: David McKay Co., Inc.; 1973.
14. Carroll JB. A model of school learning. Teach Coll Rec. 1963;64:723–733.
15. Carroll JB. The Carroll model—A 25-year retrospective and prospective view. Educ Res. 1989;18:26–31.
16. Gettinger M. Time allocated and time spent relative to time needed for learning as determinants of achievement. J Educ Psychol. 1985;77:3–11.
17. Jason H. Effective medical instruction: Requirements and possibilities. In: Van den Bussche R, Simpson M, eds. Proceedings of a 1969 International Symposium on Medical Education. Leuven, Belgium: Medica; 1970:5–8.
18. Bloom BS. Learning for mastery. Instr Curriculum. 1968;1:1–11.
19. McGaghie WC, Miller GE, Sajid AW, Telder TW. Competency-Based Curriculum Development in Medical Education—An Introduction. Chicago, IL: Center for Educational Development, University of Illinois and World Health Organization; 1978.
20. Kulik C-LC, Kulik JA, Bangert-Drowns RL. Effectiveness of mastery learning programs: A meta-analysis. Rev Educ Res. 1990;60:265–299.
21. Schunk DH. Learning Theories: An Educational Perspective. 6th ed. Boston, MA: Pearson Education Inc.; 2012.
22. Lineberry M, Soo Park Y, Cook DA, Yudkowsky R. Making the case for mastery learning assessments: Key issues in validation and justification. Acad Med. 2015;90:1445–1450.
23. Yudkowsky R, Park YS, Lineberry M, Knox A, Ritter EM. Setting mastery learning standards. Acad Med. 2015;90:1495–1500.
24. Motola I, Devine LA, Chung HS, Sullivan JE, Issenberg SB. Simulation in healthcare education: A best evidence practical guide. AMEE guide no. 82. Med Teach. 2013;35:e1511–e1530.
25. McGaghie WC, Siddall VJ, Mazmanian PE, Myers J; American College of Chest Physicians Health and Science Policy Committee. Lessons for continuing medical education from simulation research in undergraduate and graduate medical education: Effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest. 2009;135(3 suppl):62S–68S.
26. McGaghie WC. Mastery learning: It is time for medical education to join the 21st century. Acad Med. 2015;90:1438–1441.
27. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406.
28. Ericsson KA. Acquisition and maintenance of medical expertise: A perspective from the expert-performance approach with deliberate practice. Acad Med. 2015;90:1471–1486.
29. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011;86:706–711.
30. Macnamara BN, Hambrick DZ, Oswald FL. Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychol Sci. 2014;25:1608–1618.
31. Chung JI, Kim N, Um MS, et al. Learning curves for colonoscopy: A prospective evaluation of gastroenterology fellows at a single center. Gut Liver. 2010;4:31–35.
32. Albanese M, Case SM. Progress testing: Critical analysis and suggested practices. Adv Health Sci Educ Theory Pract. 2016;21:221–234.
33. Ravesloot C, van der Schaaf M, Haaring C, et al. Construct validation of progress testing to measure knowledge and visual skills in radiology. Med Teach. 2012;34:1047–1055.
34. Sandars J, Cleary TJ. Self-regulation theory: Applications to medical education: AMEE guide no. 58. Med Teach. 2011;33:875–886.
35. Zimmerman BJ. Becoming a self-regulated learner: An overview. Theory Pract. 2002;41:64–70.
36. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall, Inc.; 1984.
37. Teunissen PW, Dornan T. Lifelong learning at work. BMJ. 2008;336:667–669.
38. Mega C, Ronconi L, De Beni R. What makes a good student? How emotions, self-regulated learning, and motivation contribute to academic achievement. J Educ Psychol. 2014;106:121–131.
39. Kusurkar RA, Ten Cate TJ, van Asperen M, Croiset G. Motivation as an independent and a dependent variable in medical education: A review of the literature. Med Teach. 2011;33:e242–e262.
40. White C, Gruppen L, Fantone J. Self-regulated learning in medical education. In: Swanwick T, ed. Understanding Medical Education. 2nd ed. Chichester, UK: Wiley Blackwell; 2014:201–211.
41. Dweck CS. Self-Theories: Their Role in Motivation, Personality, and Development. Philadelphia, PA: Psychology Press; 1999.
42. Teunissen PW, Bok HG. Believing is seeing: How people’s beliefs influence goals, emotions and behaviour. Med Educ. 2013;47:1064–1072.
43. Ten Cate TJ, Kusurkar RA, Williams GC. How self-determination theory can assist our understanding of the teaching and learning processes in medical education. AMEE guide no. 59. Med Teach. 2011;33:961–973.
44. Ryan R, Deci E. Self-Determination Theory: Basic Psychological Needs in Motivation, Development and Wellness. New York, NY: Guilford Press; 2017.
45. Halpern SD, Detsky AS. Graded autonomy in medical education—Managing things that go bump in the night. N Engl J Med. 2014;370:1086–1089.
46. Rotenstein LS, Ramos MA, Torre M, et al. Prevalence of depression, depressive symptoms, and suicidal ideation among medical students: A systematic review and meta-analysis. JAMA. 2016;316:2214–2236.
47. Mata DA, Ramos MA, Bansal N, et al. Prevalence of depression and depressive symptoms among resident physicians: A systematic review and meta-analysis. JAMA. 2015;314:2373–2383.
48. Gruppen L, Rytting M, Marti K. The educational environment. In: Dent J, Harden R, Hunt D, eds. A Practical Guide for Medical Teachers. 5th ed. Edinburgh, Scotland: Elsevier; 2017:376–383.
49. Soderstrom NC, Bjork RA. Learning versus performance: Review of studies. In: Dunn D, ed. Oxford Bibliographies Online: Psychology. Oxford, England: Oxford University Press; 2013.
50. Maquet P. The role of sleep in learning and memory. Science. 2001;294:1048–1052.
51. Kuriyama K, Stickgold R, Walker MP. Sleep-dependent learning and motor-skill complexity. Learn Mem. 2004;11:705–713.
52. Fogel SM, Ray LB, Binnie L, Owen AM. How to become an expert: A new perspective on the role of sleep in the mastery of procedural skills. Neurobiol Learn Mem. 2015;125:236–248.
53. Diekelmann S, Born J. The memory function of sleep. Nat Rev Neurosci. 2010;11:114–126.
54. Cepeda NJ, Pashler H, Vul E, Wixted JT, Rohrer D. Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychol Bull. 2006;132:354–380.
55. Kang SHK. Spaced repetition promotes efficient and effective learning: Policy implications for instruction. Policy Insights Behav Brain Sci. 2016;3:12–19.
56. Kapler IV, Weston T, Wiseheart M. Spacing in a simulated undergraduate classroom: Long-term benefits for factual and higher-level learning. Learn Instr. 2015;36:38–45.
57. Gerbier E, Toppino TC. The effect of distributed practice: Neuroscience, cognition, and education. Trends Neurosci Educ. 2015;4:49–59.
58. Soderstrom NC, Bjork RA. Learning versus performance: An integrative review. Perspect Psychol Sci. 2015;10:176–199.
59. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2005;39:98–106.
60. Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84:1022–1028.
61. ten Cate O. Introduction. In: ten Cate O, Custers EJFM, Durning SJ, eds. Principles and Practice of Case-Based Clinical Reasoning Education—A Method for Pre-Clinical Students. Cham, Switzerland: Springer International Publishing; 2018.
62. Barrows HS, Feltovich PJ. The clinical reasoning process. Med Educ. 1987;21:86–91.
63. Custers EJFM, Boshuizen HPA, Schmidt HG. The role of illness scripts in the development of medical diagnostic expertise: Results from an interview study. Cogn Instr. 1998;16:367–398.
64. Diachun L, Charise A, Goldszmidt M, Hui Y, Lingard L. Exploring the realities of curriculum-by-random-opportunity: The case of geriatrics on the internal medicine clerkship rotation. Can Geriatr J. 2014;17:126–132.
65. Cruess RL, Cruess SR, Boudreau JD, Snell L, Steinert Y. Reframing medical education to support professional identity formation. Acad Med. 2014;89:1446–1451.
66. Hafferty F. Professionalism and the socialization of medical students. In: Cruess RL, Cruess SR, Steinert Y, eds. Teaching Medical Professionalism. Cambridge, UK: Cambridge University Press; 2009:53–73.
67. Jarvis-Selinger S, Pratt DD, Regehr G. Competency is not enough: Integrating identity formation into the medical education discourse. Acad Med. 2012;87:1185–1190.
68. ten Cate O, Hart D, Ankel F, et al. Entrustment decision making in clinical training. Acad Med. 2016;91:191–198.
69. Wenger E. Communities of Practice: Learning, Meaning, and Identity. Cambridge, UK: Cambridge University Press; 1998.
70. Schön DA. Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. San Francisco, CA: Jossey-Bass; 1987.
71. Collins A, Brown J, Newman S. Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics. In: Resnick L, ed. Knowing, Learning and Instruction: Essays in Honor of Robert Glaser. Hillsdale, NJ: Lawrence Erlbaum Associates; 1989:453–494.
72. Billett S. Towards a model of workplace learning: The learning curriculum. Stud Contin Educ. 1996;18:43–58.
73. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–1177.
74. Sterkenburg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85:1408–1417.
75. Dijksterhuis MG, Voorhuis M, Teunissen PW, et al. Assessment of competence and progressive independence in postgraduate clinical training. Med Educ. 2009;43:1156–1165.
76. Sheu L, O’Sullivan PS, Aagaard EM, et al. How residents develop trust in interns: A multi-institutional mixed-methods study. Acad Med. 2016;91:1406–1415.
77. Hirsh DA, Holmboe ES, ten Cate O. Time to trust: Longitudinal integrated clerkships and entrustable professional activities. Acad Med. 2014;89:201–204.
78. Englander R, ten Cate O. Longitudinal integrated clerkships and entrustable professional activities: A perfect match. In: Poncelet A, Hirsh D, eds. Longitudinal Integrated Clerkships (LICs)—Principles, Outcomes, Practical Tools and Future Directions. North Syracuse, NY: Gegensatz Press; 2016.
79. Damodaran A, Shulruf B, Jones P. Trust and risk: A model for medical education. Med Educ. 2017;51:892–902.
80. Holzhausen Y, Maaz A, Cianciolo AT, ten Cate O, Peters H. Applying occupational and organizational psychology theory to entrustment decision-making about trainees in health care: A conceptual model. Perspect Med Educ. 2017;6:119–126.
81. Ten Cate O. Managing risks and benefits: Key issues in entrustment decisions. Med Educ. 2017;51:879–881.
82. Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37:983–1002.
83. Wiersma F, Berkvens J, Ten Cate O. Flexibility in individualized, competency-based workplace curricula with EPAs: Analyzing four cohorts of physician assistants in training. Med Teach. 2017;39:535–539.