Primary care delivery is ripe for disruption. Rising out-of-pocket health care costs, greater availability of health information and technology, the drive toward 24-hour convenience, and the shortage of primary care physicians are changing the way people seek care.1 Patients are moving away from physician-led medical homes toward “medical neighborhoods” inhabited by sprawling health systems, urgent care clinics, pharmacy chains, technology companies, and more. Providers from different health professions have joined physicians in delivering primary care, including nurse practitioners (NPs) and physician assistants (PAs),2,3 each arriving in this space by variable paths of content and duration.
In this edition of Academic Medicine, Dewan and Norcini4 examine this variability and ask, What is the minimum education and training required to practice primary care? In their analysis they find that physicians undergo a minimum of 8 years of training, including 110 weeks of supervised clinical experience (SCE); NPs undergo a minimum of 6 years of training, including 27.5 weeks of SCE; and PAs undergo a minimum of 6 years of training, including 45 weeks of SCE.4 Dewan and Norcini point out that all of these “giants” of primary care practice provide care upon graduation, and there is scant literature describing which route is best. They call for further studies to determine optimal training duration and eventual scope of practice.
We commend Dewan and Norcini for calling out the seemingly arbitrary nature of time-in-training and for questioning the status quo, but to us, the giant’s height (length of training) should not be the focus. Instead, we want to know what the giant can do.
Use of time-in-training as a surrogate for competence is pervasive in education systems, with funding, curricula, and staffing grounded in these models. Beginning in primary school, students with varying abilities and learning needs progress through curricula and grade levels at mostly the same pace, with only extreme outliers being advanced early or held back. However, learners in any profession have their own trajectories of competence attainment that may be skill and context specific.5 Primary care is not a uniform entity. It ranges from chronically ill elderly patients with complex needs, to twentysomething millennials with acute problems, to pregnant women, to families, and everything in between.
Training should be fit for purpose and produce high-quality outcomes for patients. Competence should be defined by these outcomes. Time to competence development will be variable for different training programs depending on purpose, and also variable for people within those programs, even with shared purpose.6 While time is a tool for competence attainment, it should not be the metric by which we measure readiness for unsupervised practice.7
Instead, we should scrutinize the type of learners our trainees become. Competence is often context specific, and performance in one setting does not guarantee performance in another. If a provider has a change in practice location or patient population, will her previous training suffice? Health care providers should become master adaptive learners who reflect on their practice, identify gaps between learning and performance, use proven learning strategies, assess the effect of new knowledge on outcomes, and adjust their practice based on these efforts.8 The master adaptive learner model describes an “optimal adaptability corridor” between highly efficient routinized clinical experiences (e.g., performing well-woman checks according to guidelines) and creative exploration of complex or nuanced clinical problems (e.g., treatment of diabetes in a socioeconomically disadvantaged population).8 In today’s health care environment, both are needed, and there can be no single path or timeline to becoming a master adaptive learner.
Should patients choose a physician, an NP, or a PA for their primary care? Dewan and Norcini4 rightly point out that the current level of evidence directly comparing these providers with respect to patient outcomes is poor. They call for more rigorous studies, including randomized controlled head-to-head comparisons to determine the shortest training duration needed to reach desired outcomes. But we think this would be akin to researching the smallest number of lessons needed to learn piano. The number of lessons depends on the student, the teacher, the type of playing desired (e.g., scales versus improvisational jazz), and the eventual audience (e.g., for fun versus recital). If we arbitrarily said, “Everybody gets 10 lessons,” the resulting performances would likely be of varying quality. Instead, we should study how to measure competence so that training programs of any type can make defensible decisions about when trainees can be allowed to “play” unsupervised.
Competency-based medical education (CBME) suggests four necessary steps to achieve this.9 First, we must determine desired outcomes of care. What does good primary care look like? In what setting? For whom? Second, we must work with the public to define performance levels that constitute competence, proficiency, and expertise. Third, we must develop a framework for assessment to know that trainees are reaching predetermined goal performance levels. And fourth, we must evaluate whether the program meets the needs of patients and society.
CBME moves us away from norm-referenced, fixed-time decision structures to criterion-referenced, time-variable experiences.9 We acknowledge that this will not be easy for medical education to accept. Although the evidence base for this work is growing every day, we need to continue to collect validity evidence connecting educational structure to care outcomes.10 Only once we know what competence is in a given setting can we look backward and ask, What is the shortest way to get there? We also acknowledge that time-variable education poses significant logistical issues for training programs anchored to fixed-time training structures. Does it matter how long it takes two budding pianists to learn Für Elise if both can play it well eventually? Yes, if the first student takes 1 week and the second takes 10 years. It is the same with medical trainees. Programs will have to be creative in developing reasonable time parameters in time-variable environments and in determining what to do with outliers on either end of the spectrum (some people will not graduate at all, even with time variability).
Perhaps medical education can learn from Major League Baseball (MLB). Players with a wide spectrum of abilities and skill levels are drafted from high school, college, and other teams, usually spending multiple years in the minor league system. While players improve their performance through deliberate practice and coaching, they are not on a set schedule. Advancement is based on a robust set of metrics, and decision makers use this information to form an entrustment that the player can advance. Some arrive early and have illustrious careers (Ken Griffey Jr. started in the major leagues at age 19 and played 21 seasons), and others arrive late and stay briefly (Pete Rose Jr. started in the major leagues at age 27 and played 11 games). Some players are highly efficient and can do only one thing well (Mariano Rivera, unanimous Hall of Famer, a relief pitcher who relied largely on a single type of pitch), and others show extreme creativity and flexibility (Pete Rose Sr., the only major league player to start 500 games and make five All-Star appearances at five different positions). Depending on the point of the game, all these players had value. Sometimes the team needed Rivera, and sometimes the team needed Rose Sr. Each of these players learned a craft in the minor leagues and was judged not by his length of training but by how he performed. Each player had to be a master adaptive learner and continuously hone his skills as the game and other players adjusted around him.
This tension between specialization and generalization by practitioner and type of practitioner is also common in the primary care fields, as is the need to continuously learn over time. Of course, this analogy breaks down when considering important differences between baseball and medical training and practice. Baseball is just a game, and only a select few ever make it to the major leagues. On the other hand, nearly everyone who enters medical training eventually enters practice, and in medicine it is often not clear whether different practitioners are playing the same sport, let alone in the same league. Further, the outcome of the health care game (i.e., how well each player performs) is often not clear. However, unlike medicine, MLB has been able to improve player assessment through investment in and development of technology (e.g., Statcast)11 and advanced metrics, and link these assessments to the ultimate desired outcomes (team wins). The questions What is the minimum amount of time a player needs to make it to the major leagues? What can they do there? and How long do they stay there? always have the same answer: It depends on the abilities of the player, the needs of the team, and the outcomes on the field.
Perhaps our approach to training primary care “giants” has something to learn from MLB. Each player has a different path to the game. Some skills take longer than others to develop, and some people take longer than others to develop the same skills. All players must continuously learn about the game or they will be passed by (training does not end at “graduation” to the major leagues). On a winning team, all players are giants if they play their role well, regardless of how they arrived there. Currently, because we cannot see the real score of the game with respect to patient outcomes, we use scores such as time-in-training as a faulty proxy: We all arrive together and everybody is a giant, or nobody is, depending on your perspective. Instead, we should define the ideal outcomes of the game, teach players to be master adaptive learners in their given environment so all giants continuously grow, and then work backward to determine the most efficient way to get there.
2. Everett CM, Morgan P, Jackson GL. Primary care physician assistant and advance practice nurses roles: Patient healthcare utilization, unmet need, and satisfaction. Healthc (Amst). 2016;4:327–333.
3. Jackson GL, Smith VA, Edelman D, et al. Intermediate diabetes outcomes in patients managed by physicians, nurse practitioners, or physician assistants: A cohort study. Ann Intern Med. 2018;169:825–835.
4. Dewan MJ, Norcini JJ. Pathways to independent primary care clinical practice: How tall is the shortest giant? Acad Med. 2019;94:950–954.
5. Pusic MV, Boutis K, Hatala R, Cook DA. Learning curves in health professions education. Acad Med. 2015;90:1034–1042.
6. Warm EJ, Held JD, Hellmann M, et al. Entrusting observable practice activities and milestones over the 36 months of an internal medicine residency. Acad Med. 2016;91:1398–1405.
7. Lucey CR, Thibault GE, Ten Cate O. Competency-based, time-variable education in the health professions: Crossroads. Acad Med. 2018;93(3S Competency-Based, Time-Variable Education in the Health Professions):S1–S5.
8. Cutrer WB, Miller B, Pusic MV, et al. Fostering the development of master adaptive learners: A conceptual model to guide skill acquisition in medical education. Acad Med. 2017;92:70–75.
9. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645.
10. Holmboe ES, Sherbino J, Englander R, Snell L, Frank JR; ICBME Collaborators. A call to action: The controversy of and rationale for competency-based medical education. Med Teach. 2017;39:574–581.