The Time Is Now

Using Graduates’ Practice Data to Drive Medical Education Reform

Triola, Marc M. MD; Hawkins, Richard E. MD; Skochelak, Susan E. MD, MPH

doi: 10.1097/ACM.0000000000002176
Invited Commentaries

Medical educators are not yet taking full advantage of the publicly available clinical practice data published by federal, state, and local governments, which can be attributed to individual physicians and evaluated in the context of where they attended medical school and residency training. Understanding how graduates fare in actual practice, both in terms of the quality of the care they provide and the clinical challenges they face, can aid educators in taking an evidence-based approach to medical education. Although in their infancy, efforts to link clinical outcomes data to educational process data hold the potential to accelerate medical education research and innovation. This approach will enable unprecedented insight into the long-term impact of each stage of medical education on graduates’ future practice. More work is needed to determine best practices, but the barrier to using these public data is low, and the potential for early results is immediate. Using practice data to evaluate medical education programs can transform how the future physician workforce is trained and better align continuously learning medical education and health care systems.

M.M. Triola is associate professor of medicine, associate dean for educational informatics, and founding director, Institute for Innovations in Medical Education, NYU School of Medicine, New York, New York; ORCID: https://orcid.org/0000-0002-6303-3112.

R.E. Hawkins is president and chief executive officer, American Board of Medical Specialties, Chicago, Illinois. He was vice president for medical education outcomes, American Medical Association, Chicago, Illinois, at the time of writing.

S.E. Skochelak is group vice president for medical education, American Medical Association, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-9522-4888.

Editor’s Note: An Invited Commentary by S. Chahine et al. appears on pages 829–832.

Funding/Support: This work was funded in part by a grant from the American Medical Association Accelerating Change in Medical Education initiative.

Other disclosures: Richard E. Hawkins is an employee of the American Board of Medical Specialties. Susan E. Skochelak is an employee of the American Medical Association.

Ethical approval: Reported as not applicable.

Previous presentations: Some of the topics addressed in this Invited Commentary were presented at the National Academy of Medicine, Health and Medicine Division, Graduate Medical Education Outcomes and Metrics workshop on October 11, 2017, in Washington, DC.

Correspondence should be addressed to Marc M. Triola, NYU Medical Center, 550 First Ave., Medical Science Building G107, New York, NY 10016; e-mail: marc.triola@nyumc.org; Twitter: @marctriola.

The ultimate goal of medical education is to train our graduates to deliver the best care to their patients. To be successful in achieving this goal, we need to better understand the quality and impact of our schools and training programs. Despite the high stakes for patient care, fundamental questions about our medical education programs remain difficult to answer. For example, is medical education, with all of its resources, costs, and societal burdens, doing its job and meeting our nation’s needs? Are we preparing our graduates for the future of clinical practice? Do we teach our students the emerging skills needed by physicians and care teams?1

We currently use a variety of measures to evaluate the effectiveness of our educational programs and to assess individual learners. Program evaluation approaches may include gathering structural, process, or outcome measures. Individual learner measures often focus on learner feedback and outcomes, such as performance on standardized exams, success in residency placement, performance with simulated patients, and responses to follow-up surveys after graduation. Although these measures have been shown to be useful in providing feedback to educational programs, it is imperative that we also include patient outcomes in our evaluations, given that an overarching purpose of medical education is to improve patient care and that there is a large public investment in supporting medical education.2–5 Research has linked medical education to practice outcomes; however, few educational programs get any substantive feedback on the quality of care delivered by their graduates, the types of patients they are seeing, and the clinical and procedural landscape in which they are working.

There have been numerous historical barriers to using clinical and administrative data to follow our learners after graduation: physicians crisscross institutional boundaries as they progress through their training, limiting visibility across the continuum; the length of time between graduation and individually attributable clinical outcomes may limit inferences about the impact of prior educational experiences; sharing clinical data for the purposes of educational quality improvement faces significant policy barriers; we have yet to identify the clinical outcomes that are sensitive to educational interventions6; and educational programs often do not know where their graduates are practicing or where to find data on their outcomes.

Opportunities

Open clinical datasets from federal and state health agencies have the potential to overcome these barriers and provide a powerful new tool for educational programs to achieve a panoramic view of their graduates. The open data movement began in earnest in 2013, when then-President Obama signed an executive order entitled “Making Open and Machine Readable the New Default for Government Information.” The goal of this initiative was to “improve government transparency; increase opportunities for research, mobile health application development, and data-driven quality improvement; and make health-related information more accessible.”7 This effort resulted in free public access to hundreds of datasets from all federal agencies involved in the U.S. health care system via websites such as data.gov.

Many of these public health datasets identify the individual physician providing care, increasing their usefulness to educational programs for tracking graduates. Datasets such as the Centers for Medicare and Medicaid Services Physician Compare database list all physicians in Medicare along with their medical school and year of graduation. Using each physician’s National Provider Identifier (NPI), these data can in turn be linked to other freely available sources of quality measures, prescribing patterns, diagnoses seen, and procedures performed. Another example of a dataset that contains identifiable physician data is the Medicare Provider Utilization and Payment dataset, which has detailed information, limited to Medicare fee-for-service patients, on a given physician’s practice. Included in this dataset are details on each outpatient and inpatient visit, the procedures performed, tests ordered, and the associated charges and reimbursements. The related Medicare Part D Prescriber dataset includes all prescriptions written by licensed care providers under the Medicare Part D Prescription Drug Program. Each drug or other prescription is listed along with the number of patients, the number of days supplied and refills, whether a generic or brand name was chosen, and how many of the patients were over the age of 65.

Some states also release detailed data on all inpatient admissions, enabling evaluations of patient outcomes, costs, and practice patterns over time. New York State, for example, releases detailed data on each of the approximately 2.5 million hospital discharges that occur yearly. For every encounter, these data include identifiers for the primary attending physician and for those who performed any surgeries or other major procedures. The New York dataset also includes patient demographics, length of stay, diagnoses, procedures, outcomes, mortality, costs, and charges for each individual hospital admission. Proprietary datasets like the American Medical Association Physician Masterfile can provide even more detailed data on physicians in practice, including their practice type, educational history, residency training, and certifications.
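
To make the linkage these datasets support concrete, the sketch below shows the basic join an educational program might perform. It is illustrative only: the file names and column names (physician_compare.csv, partd_prescriber.csv, medical_school, graduation_year, npi) are placeholders, not the actual CMS file layouts or field names.

```python
import pandas as pd

# Hypothetical CSV extracts; the real CMS releases use different file
# names and column layouts, so treat every name below as a placeholder.
compare = pd.read_csv("physician_compare.csv")   # one row per physician
partd = pd.read_csv("partd_prescriber.csv")      # one row per physician-drug pair

# Identify one school's graduates by school name and graduation year.
grads = compare[
    (compare["medical_school"] == "EXAMPLE SCHOOL OF MEDICINE")
    & (compare["graduation_year"].between(1990, 2015))
]

# Link the graduates to their Medicare Part D prescribing records via
# the NPI, the shared identifier across these public datasets.
grad_rx = grads.merge(partd, on="npi", how="inner")

print(f"{grads['npi'].nunique()} graduates matched; "
      f"{len(grad_rx)} linked prescribing rows")
```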

These data and others like them are a treasure trove for medical education programs. For the first time, schools and training programs have direct access to the practice data of their graduates, regardless of their location or year of graduation. Using practice data to better understand the influence of education programs on quality outcome measures would move us toward a “continuously learning medical education system” that is grounded in the authentic clinical behaviors and outcomes of our learners.8 The amount of data available is significant and, in most cases, far greater than in traditional medical education research studies. At NYU School of Medicine, for example, we were able to identify 8,514 of our medical school graduates and 11,904 of our residency training program graduates in the data sources listed above. Medical education programs can use these data to take a health services research approach, prospectively following their graduates into practice.

At both the school and national levels, we now can explore the relationship between curricular reform and the ultimate outcomes, value, and quality of care delivered; develop, for schools with a workforce mission, a much deeper understanding of which patients our graduates are caring for and in what settings; and build predictive models of what care will look like and what future skills current students and trainees will need. Data that are in the public domain can facilitate collaboration among schools, either grouped geographically or organized around a common purpose, to address national research questions. Secondary uses of these data by medical education researchers could include retrospective and prospective analyses to validate prior medical education studies, such as those showing a link between training at a high-cost center and subsequently practicing in a high-cost manner.9

At NYU School of Medicine, we have created an education data warehouse that combines the rich data we have on our current and historical students with the public data available on their subsequent practice. Early analyses are providing insight into the relationship between our last two curricular reforms and the changing prescribing patterns of our graduates. We also are comparing our medical school and residency program graduates with the graduates of other schools and training programs on measures of value-based care and overall costs. Using data on the diagnoses seen and procedures performed, we have created new dashboards for our medical school curriculum committee and residency program leadership that allow them to explore what their graduates are doing in practice and how that is changing over time. We hope this insight brings new context to our curriculum design efforts and the structure of clinical education and training. Other medical schools in the American Medical Association’s Accelerating Change in Medical Education Consortium are embarking on similar analyses of their graduates’ practice performance outcomes, based on the models we have developed at NYU School of Medicine.
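
As an illustration of the kind of cohort analysis described above, the following sketch groups graduates by curricular era and summarizes one prescribing measure. Every file name, column name, and cutoff year is a stand-in; the real analysis would depend on the actual data layout and the actual dates of a school’s reforms.

```python
import pandas as pd

# Hypothetical linked extract: one row per graduate-drug pair, with
# graduation_year, generic_claims, and total_claims columns (assumed).
grad_rx = pd.read_csv("graduate_prescribing.csv")

# Assign each graduate to a curricular era; the cutoff years here are
# placeholders, not the actual dates of NYU's curricular reforms.
bins = [0, 2000, 2010, 3000]
labels = ["pre-reform", "first reform", "second reform"]
grad_rx["cohort"] = pd.cut(grad_rx["graduation_year"], bins=bins, labels=labels)

# Generic-dispensing rate per cohort: one simple, claims-derived proxy
# for value-conscious prescribing that a program could track over time.
summary = grad_rx.groupby("cohort", observed=True).agg(
    generic=("generic_claims", "sum"),
    total=("total_claims", "sum"),
)
summary["generic_rate"] = summary["generic"] / summary["total"]
print(summary["generic_rate"])
```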

Challenges

There are challenges to using these practice data to evaluate medical education programs. Most clinical outcomes reflect the decisions of a team, the local practices of a system, or even the workflows of a particular electronic medical record, potentially limiting their attribution to an individual and her or his educational programs. National data sources focus on Medicare, which limits the types of physicians included and the spectrum of patients seen. There is also a long lag time between a student beginning medical school and having sufficient measurable outcomes to accurately reflect her or his practice upon completion of training. However, the comprehensiveness of these data does allow us to reduce confounders by comparing graduates from different programs working within the same settings. There also are potential unintended consequences of leveraging practice outcomes in this way. We will need to think carefully about how these outcomes should be attributed to medical schools and training programs, neither of which has direct control over the clinical settings in which these data are collected.
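
One way to operationalize the within-setting comparison mentioned above is to contrast graduates of different programs only inside the same facility, as in the sketch below. The dataset, the column names (facility_id, attending_program, length_of_stay), and the program labels are all hypothetical, and mean length of stay stands in for whatever outcome a study would actually use.

```python
import pandas as pd

# Hypothetical discharge-level extract in the spirit of a state dataset
# such as New York's; all column names are placeholders.
dx = pd.read_csv("discharges.csv")

# Mean length of stay per training program *within* each facility, so
# that facility-level practice patterns drop out of the comparison.
within = (
    dx.groupby(["facility_id", "attending_program"])["length_of_stay"]
    .mean()
    .unstack("attending_program")
)

# Restrict to facilities where both programs of interest practice, then
# examine the paired within-facility differences.
paired = within[["Program A", "Program B"]].dropna()
print(paired["Program A"].sub(paired["Program B"]).describe())
```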

Conclusions

Using these open datasets, medical education programs now can follow their graduates into clinical practice, creating a feedback loop for continuous quality improvement of educational programs. This approach has the potential to transform how we refine our curricula, prepare our future physician workforce, and create a continuously learning system that further aligns medical education outcomes with the outcomes that matter to our patients and our health care system. It is imperative that we use these resources to drive evidence-based decisions that improve medical education at every level.

Although more work is needed to determine best practices, the barrier to using this approach for research is low, and the potential for early results is immediate. Schools and national organizations can convene collaborative groups to work together on research and innovation projects that are national in scope and extend across the continuum of medical education. Medical education researchers can lead the discussion to define fundamental questions of how and when these data should be used. Academic medical centers can develop education leaders who are able to interpret and act on the insights drawn from these data. Lastly, national discussion on this topic should inform what data are reported and how they can be enhanced to improve medical education in the interest of the public good.

References

1. Weinstein DF. Optimizing GME by measuring its outcomes. N Engl J Med. 2017;377:2007–2009.
2. van der Leeuw RM, Lombarts KM, Arah OA, Heineman MJ. A systematic review of the effects of residency training on patient outcomes. BMC Med. 2012;10:65.
3. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960.
4. Asch DA, Nicholson S, Srinivas SK, Herrin J, Epstein AJ. How do you deliver a good obstetrician? Outcome-based evaluation of medical education. Acad Med. 2014;89:24–26.
5. Tamblyn R. Outcomes in medical education: What is the standard and outcome of care delivered by our graduates? Adv Health Sci Educ Theory Pract. 1999;4:9–25.
6. Kalet AL, Gillespie CC, Schwartz MD, et al. New measures to establish the evidence base for medical education: Identifying educationally sensitive patient outcomes. Acad Med. 2010;85:844–851.
7. Martin EG, Helbig N, Shah NR. Liberating data to transform health care: New York’s open data experience. JAMA. 2014;311:2481–2482.
8. Stuart G, Triola M. Enhancing Health Professions Education Through Technology: Building a Continuously Learning Health System. Proceedings of a Conference Sponsored by the Josiah Macy Jr. Foundation in April 2015. New York, NY: Josiah Macy Jr. Foundation; 2015.
9. Phillips RL Jr, Petterson SM, Bazemore AW, Wingrove P, Puffer JC. The effects of training institution practice costs, quality, and other characteristics on future practice. Ann Fam Med. 2017;15:140–148.
© 2018 by the Association of American Medical Colleges