Perspectives

A Purpose-Driven Fourth Year of Medical School

Dewan, Mantosh MD; Norcini, John PhD

doi: 10.1097/ACM.0000000000001949

Abstract

The fourth year of medical school, fully one-fourth of a long curriculum, has been repeatedly found to be ineffective, if not wasteful.1–3 Reviews2,4 have summarized long-standing concerns about its purpose, its optimal type and organization, the academic quality of its courses, an excessive focus on securing residency positions (preresidency syndrome), uncertainty about the optimal ratio of required to elective courses, and grade inflation. Outcomes of three- and four-year medical schools have also been found to be equivalent.5 These findings have led to calls to eliminate the fourth year as unnecessary and expensive,3 counterbalanced by suggestions to make better use of this time.4,6–10 What would a “better” fourth year with a distinct structure linked to a clear purpose look like?

Defining the Purpose

Ten Cate11 presents a coherent context in which to consider the purpose of the fourth year. He writes: “Abraham Flexner, reporting a century ago about the state of U.S. and European medical education, did not yet mention internships or any hospital training after graduation. This reflected the prevailing practice of relying on undergraduate medical education as sufficient preparation for lifelong medical practice…. In the 21st century the medical degree, while still significant in its legal status, has become an intermediate station in a long educational trajectory, rather than an end point.”11(p966) Essentially, “the purpose of medical training has moved from readiness for independent medical practice to readiness for postgraduate training.”12(p7)

How well do we prepare students for this purpose? A survey of training directors13 found that, as four-year programs are currently structured, graduates are seen as inadequately prepared to enter residency and are thought to require more clinical and nonclinical training. Medical students train in a system of block rotations on teams that are typically inpatient. Faced with increasing patient acuity and judged on clinical productivity, attending physicians are busy treating patients and writing notes, which leaves both residents and medical students with a diminished role and less independence. Although there is a concerted effort to train students in outpatient settings, this remains the exception and usually lacks long-term continuity.14 All these factors conspire to graduate medical students who have never carried even semi-independent responsibility for “my patients” for a significant length of time and who are seen as unprepared to start residency.7–10 This shortfall, and suggestions on how to remedy it, has been addressed from the vantage point of specialties such as family medicine,7 internal medicine,9 and emergency medicine.10 More recently, Englander et al15 have distilled 13 core entrustable professional activities (EPAs) that could serve both as graduation requirements and as prerequisites for entering any residency (List 1).

List 1

Core Entrustable Professional Activities for Entering Residency15

  1. Gather a history and perform a physical examination
  2. Prioritize a differential diagnosis following a clinical encounter
  3. Recommend and interpret common diagnostic and screening tests
  4. Enter and discuss orders/prescriptions
  5. Document a clinical encounter in the patient record
  6. Provide an oral presentation of a clinical encounter
  7. Form clinical questions and retrieve evidence to advance patient care
  8. Give or receive a patient handover to transition care responsibility
  9. Collaborate as a member of an interprofessional team
  10. Recognize a patient requiring urgent or emergent care and initiate evaluation and management
  11. Obtain informed consent for tests and/or procedures
  12. Perform general procedures of a physician
  13. Identify system failures and contribute to a culture of safety and improvement

The purpose of the fourth year should be clear: it is a gateway to residency, best seen as an early point on the clinical continuum that runs from supervised training to independent postresidency practice. New and increasingly sophisticated, actionable objectives now define the foundation of the MD degree.15 What we need is a structure that supports two critical educational elements: first, allowing students to repeatedly demonstrate these competencies in routine clinical practice; and second, enabling faculty to assess them with the most meaningful, objective, performance-based measures.

Optimizing the Fourth Year

We propose that the fourth year be anchored by a yearlong longitudinal ambulatory experience of at least three days each week, characterized by work on an outpatient interprofessional team with consistent faculty supervision and mentoring, increasing independence, a clear focus on education, and rigorous assessment of meaningful outcomes. Under this model, we advocate for the following:

The senior medical student (“student doctor”) would be integral to an interprofessional team led by a faculty physician. The student doctor would be assigned a limited panel of patients (e.g., 25–50) with a wide range of common conditions and would pick up 1 or 2 new patients each week. Generous time would be allotted—an hour for initial appointments and a half hour for follow-ups—so the student could thoroughly evaluate the patient, look up information, and consult with the team as needed. This schedule would also allow the student to address social determinants of health and lifestyle changes (e.g., smoking cessation) to improve adherence,16 treat the patient, and write notes. Flexibility and time would also be built in for home visits and for seeing patients in other settings—for instance, when they end up in the emergency room, are admitted, or are sent to specialists for consultation. Electives and residency interviews would need to be accommodated, but even during these periods (except on away electives), a minimum of continuity would be maintained by having the student in the outpatient clinic at least one day each week.

Students in longitudinal clerkships uniquely benefit from their “direct engagement in the continuities of care, supervision, and curriculum.”14(p643) In the only study comparing third-year students in traditional block rotations with those in a longitudinal integrated clerkship (LIC),14 LIC students showed distinct advantages that would likely be magnified if the LIC took place in the fourth year. Compared with conventional third-year students, LIC students were more likely to have seen patients before the initial diagnosis or decision for admission and again after discharge; they were rated significantly higher on patient centeredness (including being truly caring with patients, dealing with ethical dilemmas, and involving patients and families in decision making); and they had a better understanding of social context and of patients from diverse backgrounds and at different stages of the life cycle. They also received more mentoring and feedback from attending physicians and were less exposed to the negativity of the hidden curriculum. Rather than experiencing the “ethical erosion so typical of traditional clerkships, the LIC seemed to reinforce students’ humanistic patient-centered values.”14(p648)

The study concluded that LIC students also had more satisfying, confidence-building, rewarding, humanizing, and transformational experiences than conventional third-year students, with less boredom and marginalization. They had a better understanding of how the health system works, had the knowledge base to be competent practitioners, knew their strengths and limitations, dealt well with ambiguity, and engaged in self-reflection.14

However, LIC students described their experience as more hectic and stressful than did their conventional third-year counterparts, and they reported equal frustration. They felt less well prepared to practice in hospital settings but better prepared for the ambulatory setting.14 The maturity and inpatient experience gained during the third year would likely lessen these concerns if the LIC occurred in the fourth year. Objectively, LIC students scored equally well on subject (knowledge) examinations and demonstrated superior clinical skills on the end-of-year objective structured clinical examinations (OSCEs). There were no differences in their career choices, and they matched into their first choice of residency at an equal rate.14

One critical new element in our proposal is that the student doctor must be given a significant degree of autonomy, with readily available supervision and support from the team. This is not current practice, which is understandable given the high acuity and short lengths of stay on inpatient rotations. It needs to change, and it can, given the carefully chosen, less acute, more chronic patients the student would treat in the outpatient setting. An aspirational level of independence is suggested by the supervisory requirements for midlevel practitioners. For instance, a physician assistant with two years of training (one of which is clinical) can work independently with review and cosigning of notes every 30 days. Similarly, a student doctor with three years of training (one of which is clinical) could be supervised closely at first (e.g., for the first month) and then given increasing independence, with the faculty member moving in graded steps from supervising physician to collaborating physician as appropriate for each student. For instance, the student doctor could evaluate and treat the patient, with the faculty supervisor briefly stepping in at the end of the visit to confirm the plan with the patient. This marked increase in independence is what would distinguish the fourth-year continuity experience from the many excellent third-year LICs currently offered at a select but growing number of medical schools worldwide.14 Fourth-year student doctors would need to be independent enough that patient outcomes could reasonably be attributed primarily to their work.

Because this still would be primarily an educational year, careful assessment and regular formative feedback will be important. Rethans et al17 suggest a useful three-stage model. This consists of a screening phase for all students, focusing on important elements of real practice; a continuous quality improvement phase for those who pass the screen; and a detailed assessment process for those “at risk.”

For instance, under the model we propose, faculty could directly observe the student, review a video recording, or use an incognito standardized patient18 to assess a student doctor’s achievement of the 13 core EPAs for entering residency (List 1), enabling them to give detailed and specific formative feedback. Again, a longitudinal experience with a high level of independent practice will be necessary to adequately assess some of these 13 EPAs. Hirsh et al14 found that, compared with third-year LIC students, students in traditional block rotations were significantly less likely to see patients before the initial diagnosis or decision for admission, or after discharge. This makes it difficult for students in traditional, acute care settings, which center on attending and resident physicians, to meaningfully demonstrate competence in some of the required EPAs. Among these are the EPAs to recommend and interpret common diagnostic and screening tests; enter and discuss orders/prescriptions; give or receive a patient handover to transition care responsibility; (perhaps) collaborate as a member of an interprofessional team; recognize a patient requiring urgent or emergent care and initiate evaluation and management; obtain informed consent for tests and/or procedures; perform general procedures of a physician; and identify system failures and contribute to a culture of safety and improvement.15 In the more hospitable continuity clinic setting, these core EPAs could be tested, and additional specialty-specific EPAs could be added on an individual basis to ensure that the fourth year prepares graduates for success in their residency.7–10 Lomis et al19 offer practical guidance on implementing these EPAs based on two years of experience at 10 pilot sites. The complete list of competencies (perhaps pegged to milestones in that specialty) could be sent post Match to residency directors as an “educational handover,” as suggested by Sozener et al.20(p676)

Toward a More Meaningful, Performance-Based Assessment

A real-world, independent, yearlong, team-based practice would improve clinical expertise and ensure a seamless readiness for student doctors to begin residency. As an important additional benefit, for the first time, there would be the opportunity for meaningful, performance-based assessments such as documenting what percentage of patients with diabetes or depression were satisfactorily maintained or improved using standard measures such as HbA1c or the Beck Depression Inventory. As Norcini et al have said21(p1462): “Much of the research on the competence of graduates has focused largely on educational measures of quality. A more fundamental question is: Are there differences in clinical outcomes for patients cared for by these physicians?”
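To make concrete what such a performance-based (Level 4) metric might look like operationally, the minimal sketch below shows one hypothetical way a continuity clinic could compute the share of a student doctor’s diabetes panel that was maintained at, or improved toward, an HbA1c goal over the year. The data structure, the 7.0% goal, and the 0.5-point improvement threshold are illustrative assumptions for this sketch, not standards drawn from this article or its references.

```python
# Minimal sketch (assumption-laden, not from the article): one way to compute a
# Level 4 outcome metric -- the percentage of a student doctor's diabetes panel
# maintained at, or improved toward, an HbA1c goal over the year.
from dataclasses import dataclass
from typing import List


@dataclass
class DiabetesPatient:
    baseline_hba1c: float  # HbA1c (%) at the start of the year
    final_hba1c: float     # HbA1c (%) at the end of the year


def percent_maintained_or_improved(panel: List[DiabetesPatient],
                                   goal: float = 7.0,
                                   min_improvement: float = 0.5) -> float:
    """Share of the panel ending at goal or improving by >= min_improvement points.

    The 7.0% goal and 0.5-point threshold are illustrative, not validated targets.
    """
    if not panel:
        return 0.0
    successes = sum(
        1
        for p in panel
        if p.final_hba1c <= goal
        or (p.baseline_hba1c - p.final_hba1c) >= min_improvement
    )
    return 100.0 * successes / len(panel)


if __name__ == "__main__":
    # Hypothetical three-patient panel for demonstration only.
    panel = [
        DiabetesPatient(baseline_hba1c=8.2, final_hba1c=7.4),  # improved by 0.8
        DiabetesPatient(baseline_hba1c=6.9, final_hba1c=6.8),  # maintained at goal
        DiabetesPatient(baseline_hba1c=9.1, final_hba1c=9.0),  # neither
    ]
    print(f"Maintained or improved: {percent_maintained_or_improved(panel):.0f}%")
```

An analogous calculation could be run for patients treated for depression, using a validated scale such as the Beck Depression Inventory in place of HbA1c.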

Unfortunately, medical education, with its system of short rotations and lack of primary responsibility for patient care, has had to focus on educational measures of competence and does not allow for meaningful, “work-based” or performance assessment. Lacking a credible way to study outcomes, we certify competence but do not know whether our educational system is effective in producing good doctors. Therefore, finding an objective method to measure meaningful outcomes is an important and urgent challenge facing medical education today.21–23

The assessment gap in medical education becomes evident if we apply the most established framework for assessing competence, the Kirkpatrick Levels,24 to the “real-world” workplace (Table 1). Although originally a framework for evaluating programs, Kirkpatrick’s rubric also offers insight into individual assessment.22,25 It can be thought of as guiding assessment of the four important elements of medical education: the teacher; the student in a controlled, educational environment; the student in independent practice; and the clinical outcomes of the patients treated by that student.22

Table 1: Application of Kirkpatrick’s Levels to Medical Student Education

The base Level 1, “reaction,” is the evaluation of the teacher by the student and is regularly assessed with questionnaires and surveys. Such evaluations are easy to construct and take little effort to complete. They tell you something about the teacher but nothing about the learner (Table 1).

Level 2 outcomes, “learning,” focus on the immediate changes in learners’ knowledge, skills, and attitudes that result from training. These changes are assessed by classroom tests during training and are almost always a graduation requirement. The ubiquitous multiple-choice question examination efficiently evaluates knowledge transfer, and activities such as OSCEs26 and EPAs assess new clinical skills. These directly observed Level 2 evaluations have been promoted by the Liaison Committee on Medical Education, which requires direct observation of core clinical skills throughout the medical education program.27 Englander et al15 have compiled an elegant list of 13 EPAs that students must be able to complete independently, both as a common requirement for graduating from medical school and to be ready to function on day one of residency. Level 2 activities (e.g., EPAs) adequately and efficiently assess students’ mastery of knowledge and skills in a protected environment but tell us less about how they would practice independently in the real world (Level 3) or about the clinical outcomes of their patients (the desired, work-based Level 4).

Level 3 outcomes, “behavior,” focus on the changes in everyday, independent clinical practice that result from training. Ideally, this evaluation should occur through routine or covert direct observation in the clinical setting three to six months after training. For instance, does the physician routinely complete the 13 EPAs foundational to the MD degree,15 follow best practice protocols (e.g., washing her hands, giving prophylactic antibiotics or vaccinations), and scrupulously observe every step in a checklist?28

As the fourth year is currently structured, medical students do not have a routine independent practice, and clinical rotations are three months or less. These structural problems rule out Level 3 assessments of medical students under current conditions.

Level 4 outcomes, “results,” are the “gold standard”—the ultimate results for the patient that stem from application of new learning. As is increasingly recognized,21,23 this final result is not an evaluation of the teacher (Level 1) or the physician (Levels 2 and 3) but the effect of the physician’s intervention on the patient. Level 4 is only about change in the patient (Table 1).

Unfortunately, because clerkships are relatively brief and students do not independently treat patients, students cannot demonstrate clinical improvement in their patients (Level 4 outcomes). The proposed yearlong clinical practice in the fourth year would make this possible. A rigorous process was used to create the list of 13 Level 2 and Level 3 EPAs as core requirements for graduation.15 This process should be repeated to generate a list of objective, meaningful Level 4 outcome measures for the commonest conditions seen in a general outpatient practice, such as headaches, diabetes, hypertension, infections, depression, arthritis, obesity, and dementia.

This proposed model offers numerous advantages for both the clinic and the student. Particularly in underserved areas, hosting a relatively independently practicing student doctor may allow a few additional patients to be treated at the clinic. Given the extra time and care that the student could lavish on patients, lifestyle changes and better adherence may improve clinical outcomes. Even less experienced third-year LIC students have reported establishing meaningful relationships with patients and making a real difference in their patients’ well-being.14

Students would be expected to graduate as knowledgeable, skilled, ethical, and humane physicians with increased confidence, caring, and capacity for self-reflection. Given the expected positive effects of this experience, there is “hope that this model may also inspire students’ idealism about the future of the profession.”14(p649) Clearly, this proposed fourth year will require a major realignment of resources and faculty time. However, it is entirely consistent with the direction in which health services are moving, with their growing emphasis on community-based primary care provided by interprofessional teams. As we have discussed, preliminary data suggest that third-year LIC experiences are an excellent educational model. Our proposed fourth year builds on these data and adds two important elements: first, a gradual but marked increase in clinical independence; and second, the first opportunity in medical student education for meaningful, performance-based assessment.

Longitudinal clinical experiences have existed for decades, suggesting that the legitimate barriers to this model are surmountable. These barriers include finding a large number of clinical placements that offer an interprofessional setting, a wide range of patients, and committed on-site faculty. This faculty-intensive outpatient model is more expensive than training centered in tertiary care hospitals, which benefit from subsidized resident teachers.14 Several sources of income may be needed, including student tuition and revenue generated by the faculty member briefly seeing and billing for the student’s patients. Recognizing that a 1:1 faculty-to-student model may be difficult to sustain, Hirsh et al14 present a creative modification in which four to eight students form “learning communities” with close faculty oversight.

Additional challenges include careful scheduling, coordination, performance assessment, and the collection of outcomes data; however, these are already required of current training programs.

We have described the basics of a purpose-driven, assessment-rich fourth year consisting of a team-based outpatient primary care practice, with its many advantages. If implemented, it would provide medical students with a rich variety of real-world clinical experiences, making them more competent and confident on the first day of residency, including in their ability to work within interprofessional teams. A recent report29 delineates the successful learning components of LICs: continuity and relationships with preceptors, patients, places, and peers; and integration of and flexibility within the curriculum. As long as these elements are maintained, our model can be adapted to mix and match a wide range of sites and teaching arrangements. For instance, it could be broadened to include ambulatory specialty settings, such as oncology or neurology, or patient-centered medical homes.

The assessment metrics (validated clinical scales) built into this model would also allow us—for the first time—to certify competence at the only truly meaningful level (i.e., good patient outcomes) and to be confident that the work of our more experienced and expert graduates leads to demonstrable collaboration, healing, and better lives for their patients.

References

1. Cooke M, Irby D, O’Brien B. Educating Physicians: A Call for Reform of Medical School and Residency. San Francisco, CA: Jossey-Bass; 2010.
2. Benson NM, Stickle TR, Raszka WV Jr. Going “fourth” from medical school: Fourth-year medical students’ perspectives on the fourth year of medical school. Acad Med. 2015;90:1386–1393.
3. Abramson SB, Jacob D, Rosenfeld M, et al. A 3-year M.D.—Accelerating careers, diminishing debt. N Engl J Med. 2013;369:1085–1087.
4. Walling A, Merando A. The fourth year of medical education: A literature review. Acad Med. 2010;85:1698–1704.
5. Lockyer JM, Violato C, Wright BJ, Fidler HM. An analysis of long-term outcomes of the impact of curriculum: A comparison of the three- and four-year medical school curricula. Acad Med. 2009;84:1342–1347.
6. Goldfarb S, Morrison G. The 3-year medical school—Change or shortchange? N Engl J Med. 2013;369:1087–1089.
7. Nevin J, Paulman PM, Stearns JA. A proposal to address the curriculum for the M-4 medical student. Fam Med. 2007;39:47–49.
8. Langdale LA, Schaad D, Wipf J, Marshall S, Vontver L, Scott CS. Preparing graduates for the first year of residency: Are medical schools meeting the need? Acad Med. 2003;78:39–44.
9. Angus S, Vu TR, Halvorsen AJ, et al. What skills should new internal medicine interns have in July? A national survey of internal medicine residency program directors. Acad Med. 2014;89:432–435.
10. Manthey DE, Coates WC, Ander DS, et al.; Task Force on National Fourth Year Medical Student Emergency Medicine Curriculum Guide. Report of the Task Force on National Fourth Year Medical Student Emergency Medicine Curriculum Guide. Ann Emerg Med. 2006;47:e1–e7.
11. Ten Cate O. What is a 21st-century doctor? Rethinking the significance of the medical degree. Acad Med. 2014;89:966–969.
12. Ten Cate O. Trusting graduates to enter residency: What does it take? J Grad Med Educ. 2014;6:7–10.
13. Lyss-Lerman P, Teherani A, Aagaard E, Loeser H, Cooke M, Harper GM. What training is needed in the fourth year of medical school? Views of residency program directors. Acad Med. 2009;84:823–829.
14. Hirsh D, Gaufberg E, Ogur B, et al. Educational outcomes of the Harvard Medical School–Cambridge integrated clerkship: A way forward for medical education. Acad Med. 2012;87:643–650.
15. Englander R, Flynn T, Call S, et al. Toward defining the foundation of the MD degree: Core entrustable professional activities for entering residency. Acad Med. 2016;91:1352–1358.
16. Shivale S, Dewan M. The art & science of prescribing. J Fam Pract. 2015;64:400–406, 406A.
17. Rethans JJ, Norcini JJ, Barón-Maldonado M, et al. The relationship between competence and performance: Implications for assessing practice performance. Med Educ. 2002;36:901–909.
18. Rethans JJ, Gorter S, Bokken L, Morrison L. Unannounced standardised patients in real practice: A systematic literature review. Med Educ. 2007;41:537–549.
19. Lomis K, Amiel JM, Ryan MS, et al.; AAMC Core EPAs for Entering Residency Pilot Team. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC core entrustable professional activities for entering residency pilot. Acad Med. 2017;92:765–770.
20. Sozener CB, Lypson ML, House JB, et al. Reporting achievement of medical student milestones to residency program directors: An educational handover. Acad Med. 2016;91:676–684.
21. Norcini JJ, Boulet JR, Dauphinee WD, Opalek A, Krantz ID, Anderson ST. Evaluating the quality of care provided by graduates of international medical schools. Health Aff (Millwood). 2010;29:1461–1468.
22. Dewan M, Walia K, Meszaros ZS, Manring J, Satish U. Using meaningful outcomes to differentiate change from innovation in medical education. Acad Psychiatry. 2017;41:100–105.
23. Nasca TJ, Weiss KB, Bagian JP, Brigham TP. The accreditation system after the “next accreditation system.” Acad Med. 2014;89:27–29.
24. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler Publishers; 1994.
25. Manring J, Norcini J, Dewan M. Evaluating competence in brief psychotherapy. In: Dewan M, Steenbarger B, Greenberg R, eds. The Art and Science of Brief Therapies. 3rd ed. Washington, DC: American Psychiatric Press; 2017.
26. Bergus GR, Woodhead JC, Kreiter CD. Using systematically observed clinical encounters (SOCEs) to assess medical students’ skills in clinical settings. Adv Med Educ Pract. 2010;1:67–73.
27. Liaison Committee on Medical Education. Functions and structure of a medical school 2015–16. www.lcme.org/publications. Accessed July 31, 2017.
28. Gawande A. The Checklist Manifesto: How to Get Things Right. New York, NY: Henry Holt; 2009.
29. Latessa RA, Swendiman RA, Parlier AB, Galvin SL, Hirsh DA. Graduates’ perceptions of learning affordances in longitudinal integrated clerkships: A dual-institution, mixed-methods study. Acad Med. 2017;92:1313–1319.
Copyright © 2017 by the Association of American Medical Colleges