Scholarly Perspectives

Measuring Graduate Medical Education Outcomes to Honor the Social Contract

Phillips, Robert L. Jr MD, MSPH1; George, Brian C. MD, MA2; Holmboe, Eric S. MD3; Bazemore, Andrew W. MD, MPH4; Westfall, John M. MD, MPH5; Bitton, Asaf MD, MPH6

Academic Medicine 97(5):643–648, May 2022. DOI: 10.1097/ACM.0000000000004592

Abstract

As of 2020, annual federal and state support for graduate medical education (GME) had grown to nearly $19 billion, funding 139,848 physician training positions in 1,657 teaching hospitals across the United States (Table 1). 1–5 These public subsidies are provided with the understanding that training institutions will use this governmental funding to meet the health care needs of society, both now and in the future. Decisions about how that funding is further allocated are deferred to individual training sites. This arrangement represents an implicit social contract between teaching hospitals and the American public: the government sustains the funding, and in return GME training programs produce a workforce that can meet the needs of communities and the broader society. 6,7 Surprisingly, this social contract contains little accountability for how that public funding is used.

Table 1:
Graduate Medical Education Public Funding, 2018–2020

The Terms of the Social Contract

Government funding

Federal funding for GME was added to the Medicare authorizing legislation in 1965, and by 2019 Medicare funding for GME alone totaled $11.218 billion (Table 1). 5 This funding is allocated primarily to academic teaching hospitals via a complex formula tied to patient care. While Congress prescribed fiscal oversight for these Medicare-funded GME training programs, the Centers for Medicare & Medicaid Services (CMS) has no authority to set quality, workforce, or service targets for them. In January 2020, a letter to Senate Committee on Finance Chairman Charles E. Grassley from then CMS Administrator Seema Verma stated this bluntly: “…the Medicare statute on GME is prescriptive and limits [Medicare’s] authority to make payments to hospitals for the costs of approved GME programs, and does not provide the authority to take into consideration workforce needs.” 8

While Medicare funding represents much of the dedicated public GME support, as of 2020 the Veterans Health Administration (VHA)—a component of the U.S. Department of Veterans Affairs (VA)—supported more than 11,000 GME positions and was nearing the end of an expansion, begun in 2015, that added 1,500 new positions. 9 The VA also provided $1.6 billion in GME funding in 2020, most of which supported training in VA hospitals in partnership with existing Medicare-funded GME training programs (Table 1). 2 As with Medicare funding, the VA funding stream lacks an oversight mechanism for the educational outcomes of the programs it funds.

Additionally, as of 2019, the Health Resources & Services Administration (HRSA) provided about half a billion dollars in GME funding via its teaching health center (THC) and children’s hospitals GME funding mechanisms (Table 1). 4 Unlike the CMS and VA, the HRSA is required by Congress to report evaluations of the training costs and workforce outcomes of children’s hospitals GME and THC funding. 10

As of 2018, states also contributed over $5 billion in GME funding, mostly via Medicaid, much of it supplemented by federal funding (Table 1). 3 While many states have little insight into the impact of their GME funding, there are 2 notable exceptions. In New York (the largest state funder of GME), the NYU Grossman School of Medicine tracks workforce outcomes by linking training data to practice outcomes data. 11 The NYU assessments include analyses comparing NYU residency program graduates with graduates of other schools and training programs on differences in value-based care and overall health care costs. These data are then fed back to graduate medical educators and administrators to guide local improvement efforts and strategic investments. The University of Minnesota recently developed a similar capacity: its Medical Education Outcomes Center currently integrates 2 dozen data sources and connects alumni practice data to education data. 12 These pilot efforts could be instructive for governmental funders in terms of the methods and types of data needed for GME outcomes assessment.

There are currently 2 websites that offer limited assessment of Medicare GME funding and the positions it supports. Both use the trainee full-time equivalent (FTE) counts that teaching hospitals report to the CMS via their Hospital Cost Reports. Publicly usable formats of these data are accessible on the CMS and Robert Graham Center websites. 5,13 The Medicare funding reported in this article is from the CMS website, but the Robert Graham Center website uses Hospital Cost Report data to estimate primary care versus subspecialty training positions as well as trainee FTE counts beyond the Medicare GME caps, that is, trainee FTEs funded by other sources or by the hospitals themselves. For 2018, the total number of FTEs above the Medicare GME cap was 22,150. While these FTEs are not funded by Medicare, it is not possible to account for their actual funding sources, which may include other federal funds or other mechanisms (usually the hospitals themselves), 13 nor is it possible to know what proportion are fellowship positions, for which the CMS allows only 0.5 FTE of direct GME support. In all, the reported above-cap Direct Medical Education FTEs represent more than $2.5 billion beyond Medicare funding.
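To make the scale of that above-cap training concrete, the following is a minimal back-of-envelope sketch of the arithmetic, assuming a hypothetical average per-resident direct GME amount (actual Medicare per-resident amounts vary by hospital and year and are not a single national figure):

```python
# Back-of-envelope estimate of the value of above-cap trainee FTEs.
# The per-FTE dollar figure below is an illustrative assumption,
# not a CMS-published amount.

ABOVE_CAP_FTES = 22_150            # above-cap FTEs reported for 2018
ASSUMED_DOLLARS_PER_FTE = 115_000  # hypothetical average direct GME cost per FTE

estimated_value = ABOVE_CAP_FTES * ASSUMED_DOLLARS_PER_FTE
print(f"Estimated above-cap training value: ${estimated_value / 1e9:.2f} billion")
# -> about $2.55 billion, in line with the "more than $2.5 billion" cited above
```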

Types of accountability

We believe that 2 primary categories of accountability need to be considered in the future use of public GME funding: outcome assurance and improvement. Some accountability efforts might be categorized as being primarily outcome assurance efforts, where funders have specific expectations about the outcomes produced by training programs and institutions. Examples of these types of outcomes might include workforce distribution (geographic and specialty), workforce diversity, physician competence, or trainee-associated health care costs. The second category of accountability efforts is focused on outcome improvement. For these efforts, funders might have specific expectations about the processes by which training programs and institutions continuously improve. Examples of these processes might be implementation of a standardized assessment system, reporting of key educational outcomes, or engagement with systems that provide feedback to the program about the patient outcomes of their graduates. Throughout the rest of this article, we use the term accountability to include both outcome assurance and improvement efforts.

Calls for increased accountability

Accountability in the use of public GME funding encourages those receiving the funds to use them in ways that are aligned with societal needs. For example, training institutions may wish to use GME funding for their own purposes and needs, yet those local interests may be in conflict with the public’s interests in terms of training physicians who will best meet the needs of society. This potential for misaligned goals, or even outright conflicts of interest, is not new. One year after the founding of Medicare, the American Medical Association’s Citizens Commission (better known as the Millis Commission) raised the prescient concern that GME is a unique professional education situation in which the training institutions “have service rather than education as their primary function.” 7 Contemporaneously, the Association of American Medical Colleges (AAMC) also acknowledged this tension, recognizing that without “careful attention to appraising the needs of society for health care and health personnel,” public funds could be lost. 6 Related concerns about the lack of accountability for GME funding continue to be highlighted by the AAMC and others. 14–17

Unfortunately, there is a history of GME training programs and institutions using public funding in ways that are not aligned with the interests of society. They tend to do so not out of malice but in response to very real and important business interests. However, without the counterbalance provided by public accountability, these local effects can lead to unwanted outcomes across the GME system. For example, researchers have identified numerous factors associated with important workforce outcomes, yet these results have had little general effect on training. 18–22 Similarly, there is little evidence that training expansion has had any correlation with state workforce needs (e.g., geographic location, specialty type) or community health needs assessments or that redistributions of training slots have met federal intentions (e.g., to establish more rural training programs). 23–25 Instead, teaching hospitals seem to use public funds to establish training programs within more lucrative clinical services (e.g., cardiology, orthopedics). 26 Furthermore, when additional public funding is provided to support larger societal interests, there is generally little appetite for pursuing those broader goals. 23,27

Lack of transparency about how institutions use GME funding and concerns about self-serving behaviors have led to calls for social mission definition, greater accountability, better transparency, pay for performance, oversight functions, and/or outright funding cuts. 11,22,27–33 Some of these calls have been echoed by the Institute of Medicine, which has called for greater accountability for GME funding several times over the last 4 decades. 21,22 Most recently, a workshop focused on the issue of GME accountability and hosted by the National Academies of Sciences, Engineering, and Medicine concluded that the capacity for routinely measuring GME outcomes now exists. 33,34 The workshop also noted that “Thibault challenged that perhaps the medical and GME communities are not fulfilling all of their responsibilities and that it is time to consider values other than self-regulation. The GME community may not be able to do this alone, and it may need to accept the idea that it needs to partner with government and regulatory bodies.” 34 Supplemental Digital Appendix 1 (at https://links.lww.com/ACADMED/B224) offers synopses of calls for GME outcomes assessment or accountability from these and other publications from over the last 50 years.

Implementing GME Accountability

The path toward greater accountability in GME funding is becoming clearer. It entails more robust assessment, data sharing across organizations, and a greater focus on system accountability.

Assessment

As the basis for meaningful accountability, standardized assessment of both trainee- and institutional-level GME outcomes should become routine. Measured outcomes for trainees might include, for example, clinical competence and resource utilization during training or practice geography, referral patterns, and scope of practice once they graduate. Example educational outcomes for institutions might include community engagement of their trainees or the specialty choices of their graduates. A 2014 Institute of Medicine report called for a national GME infrastructure that could conduct research and develop policy regarding “the sufficiency, geographic distribution, and specialty configuration of the physician workforce.” 22 A low-burden start would be data transparency that could enable purposeful decision making and support outcome and training improvement. Implementation of standardized assessment should begin by using data already collected for other purposes. For example, Medicare claims can be used to assess the risk-adjusted performance of both early-career physicians and the training system. 11,35,36 Similarly, the American Board of Family Medicine, 1 of 24 specialty certification boards, surveys all family medicine residents upon graduation from training and again 3 years later to assess a variety of issues, such as preparation for practice, burnout, and practice type/location, and is also using claims data to measure training-level outcomes. 37,38 More generally, training institutions must be given incentives and support for engaging in more robust measurement for the purposes of quality assurance and improvement. Even low-burden assessments will likely be uncomfortable and imperfect but should be adopted in the education space just as they are increasingly being used in clinical enterprises and in most publicly financed enterprises outside of health care.
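To illustrate the kind of claims-linked, risk-adjusted assessment described above, the following is a minimal sketch; every column name and value is hypothetical, and real analyses (such as those in references 11, 35, and 36) rely on far richer risk adjustment and privacy-protected data linkage:

```python
# Sketch: compare program-level graduate outcomes after adjusting for case mix.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical linked dataset: one row per early-career physician, with a
# training program identifier and an outcome derived from claims data.
df = pd.DataFrame({
    "physician_id": [1, 2, 3, 4, 5, 6],
    "program_id": ["A", "A", "B", "B", "C", "C"],
    "years_in_practice": [2, 3, 2, 1, 3, 2],
    "patient_risk_score": [1.1, 0.9, 1.3, 1.0, 0.8, 1.2],
    "cost_per_episode": [980, 1040, 1310, 1270, 890, 1150],
})

# Adjust the outcome for patient risk and career stage, then average the
# residuals by training program to get a crude program-level signal.
model = smf.ols("cost_per_episode ~ patient_risk_score + years_in_practice",
                data=df).fit()
df["adjusted_cost"] = model.resid

program_effects = df.groupby("program_id")["adjusted_cost"].mean()
print(program_effects)  # positive = costlier than expected after adjustment
```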

In parallel, developing better ways to measure GME outcomes could be a useful area of investment for GME stakeholders, as others have already pointed out. 17,39–41 Here, too, there has been progress. In 1995, the World Health Organization published a report on medical education accountability that offered a “social accountability grid” of values (relevance, quality, cost-effectiveness, and equity) arrayed by domains (education, research, and service). 42 Their report explains:

The purpose of this [social accountability] grid is not to rank or compare institutions, but to help an individual institution measure its progress in addressing social accountability and to stimulate institutional action. In doing so, the grid is intended to provide a means to facilitate the translation of good intentions into operational and practical terms so that the recommendations made by various national authorities, organizations and groups over the years can be implemented…. The grid is designed to measure the progress of institutional efforts in education, research and service towards this goal. 42

This social accountability grid is well aligned with more recent U.S. calls for measurement of GME outcomes, including those from a 2018 National Academies of Sciences, Engineering, and Medicine workshop and a 2021 Council on Graduate Medical Education issue brief, both of which made more specific recommendations about what outcomes should be used to evaluate the GME system. 34,43

Some measures will be particularly useful for quality assurance, others will mostly be used to guide training improvement, and some will be helpful in both contexts. For example, robust assessment of GME outcomes offers a way for the public to understand the relative value of returns on their nearly $19 billion investment (quality assurance) and to guide strategic changes in funding (quality improvement). That being said, lessons from clinical quality improvement efforts suggest that a careful accounting of the purposes of data collection—quality assurance, quality improvement, or research—will be important for effective GME accountability. 44

Data sharing

Medical professionals train and work across many institutions and are tracked and assessed by many different stakeholders. For example, teaching hospitals produce physicians who train in one or more GME programs, which are embedded within multiple hospitals and/or health care systems. Those physicians eventually practice within other health care systems that also influence the care they provide. Few of the data collected about these physicians are shared beyond the collecting entity, and when they are shared, the data are often commoditized. Reluctance to make these data more available is, in some cases, related to the funds they generate from research and marketing. In other cases, the data may produce outcome assessments that are awkward for some teaching hospitals. However, this data fragmentation has hindered the ability to make longitudinal inferences about many important GME outcomes.

GME outcomes data should be aggregated, analyzed, and reported in such a way that they can effectively be used by the many stakeholders involved, including trainees, training programs, institutions, payors, and ultimately patients. This means that organizations that currently have sovereignty over data resources will need to share their data for the greater good. That said, the burden of data-sharing efforts should not fall solely on individual institutions. Governance and infrastructure should be established to support the aggregation, analysis, and dissemination of GME outcomes data in a cost-effective and ethical manner. 33,34,45 The federal government is the logical stakeholder for this data infrastructure since it is the largest funder of GME (via the CMS, HRSA, and VA), but a GME consortium organized by the AAMC and/or American Medical Association could also make sense and offer improvement support as well.

One of the first uses of aggregated data might be to support ongoing GME reform efforts. GME is critical to the career development of individual physicians, to the functioning of many teaching institutions, and to the production of a sufficiently trained and appropriately distributed physician workforce. Unfortunately, the current lack of established GME outcome measures hampers improvement efforts in each of these areas. The aggregation of high-quality measures is required to accurately assess the performance of individual graduates against their national peers, the performance of residency programs and teaching institutions using the patient outcomes of their graduates, and the collective performance of the GME system in producing the future physician workforce. Illustrating this potential, claims data have already been used in research settings to assess the performance of training programs. 35,36 Similar approaches could, for example, be used by regulators to identify programs that consistently graduate trainees who do or do not meet the needs of their communities or of society more broadly.
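As a toy illustration of that regulatory screening use, the sketch below flags programs whose aggregated, risk-adjusted outcome scores fall well outside the national distribution; the scores and the review threshold are invented for illustration only:

```python
# Hypothetical standardized program-level outcome scores, expressed in
# standard deviations from the national mean (all values invented).
program_scores = {"A": -0.4, "B": 1.8, "C": 0.1, "D": 2.3, "E": -1.9}

REVIEW_THRESHOLD = 1.5  # toy cutoff, not drawn from any actual regulator

# Flag programs that deviate markedly from national peers in either direction.
flagged = {p: s for p, s in program_scores.items() if abs(s) > REVIEW_THRESHOLD}
print(flagged)  # {'B': 1.8, 'D': 2.3, 'E': -1.9}
```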

System accountability

The focus for accountability must expand to encompass the entire GME system, including the funders, accrediting and certifying bodies, and GME institutions themselves. One approach is to directly tie GME funding to larger societal goals (e.g., training more rural physicians or general surgeons), as history suggests that without some form of accountability the GME system is unlikely to spontaneously embrace the level of change needed to fully honor the implicit social contract. The Washington, Wyoming, Alaska, Montana, and Idaho (WWAMI) program is one of the nation’s most successful models for rural health training and continuous outcome assessment. WWAMI has benefited from Medicare GME funding redistribution, the VHA GME expansion, and HRSA support because these resources have allowed it to increase the rural workforce, and it has the data to demonstrate this and other outcomes. 46 Thus, WWAMI is an exemplar of system accountability with focused, actionable metrics.

A complementary approach might be a payor mandate that all training programs participate in a process of continuous educational quality improvement using standardized measures. Payors can support such a mandate through cooperative agreements that permit centralized, standardized data collection, aggregation, and reporting. In support of those efforts, the Accreditation Council for Graduate Medical Education could task its specialty-specific residency review committees with making the collection of data on training outcomes and quality improvement a requirement for program reaccreditation.

Payors should be motivated to support such mandates given the track record of clinical quality improvement collaboratives in producing system-level benefits. 47,48 Indeed, collaborative quality improvement approaches are now being implemented in GME. As one example, the American Board of Family Medicine has partnered with its program director organization to implement a cross-sectional census of family physicians who are 3 years out of training, feeding those data back to training programs. 49,50 Similar efforts are underway in general surgery, where a national consortium of training programs is using a standardized operative performance assessment tool that permits system-level assessments of training outcomes. 51 In addition to serving as accountability metrics for individual programs, such data can also be used for research to build a more evidence-based education system and a stronger capacity to improve outcomes.

The role of GME funders in accountability

The CMS, the dominant funder of GME, currently lacks the authority to hold training institutions accountable for their educational outcomes. Congress should not only give the CMS this authority, as has been proposed in the past, but also direct it to better align the allocation of GME training slots with population and societal needs. 30 Similarly, the VHA should have a keen interest in educational outcomes, such as understanding how many of the trainees it funds go on to care for veterans or to practice in geographic areas where community-level care for veterans is needed. The HRSA has statutory requirements to assess its investments in children’s hospitals GME and THCs. To this end, the HRSA operates the National Center for Health Workforce Analysis and funds a series of health and rural health workforce centers across the country, to which it could add other data sources, specifically Medicare and other claims data, to better understand how its investments improve the rural, generalist, and safety net workforces. The HRSA is the only federal GME funder with assessment requirements and capacity; as such, it offers a precedent for extending these to the CMS and VHA. Finally, states could support innovation on a smaller scale, which could provide local returns and serve to inform future national efforts. For example, New York benefits from having both the NYU Langone Institute for Innovations in Medical Education and the Center for Health Workforce Studies at the School of Public Health, University at Albany-State University of New York.

More oversight and, ultimately, accountability would help payors make more strategic investments in GME. Action items for increasing GME oversight and accountability include:

  • Implementing data agreements and infrastructure that enable cross-institutional outcome evaluation,
  • Convening data holders to identify important outcome measures and best practices,
  • Authorizing the CMS to perform or commission GME assessments,
  • Enabling the HRSA National Center for Health Workforce Analysis to support assessments more broadly,
  • Requiring the VHA to assess the outcomes of its funded entities,
  • Coordinating methods and data sharing across governmental agencies, such as the CMS, VHA, and HRSA, and
  • Engaging philanthropy to raise awareness of the importance of GME outcomes assessment and to influence its priorities.

Conclusions

Since the inception of publicly funded GME, medical and policy leaders have identified the need for GME to be responsive to the needs of communities and the broader society. More recent calls have explicitly emphasized a need for greater accountability for physician workforce outcomes. This will be challenging, as it requires tracking educational outcomes back to the institutions that shape a physician’s education and publicly sharing data on those outcomes in a transparent manner. While this type of assessment requires data from many sources, some data sources needed to get started have been identified and methods have been published. 11,51,52 There is room for improvement in both the data and the methods, and support for research and data development will be necessary to make assessments more effective and strategic. Future GME oversight should therefore include measurement of the outcomes associated with the systems in which physicians train and practice. Doing so would enable better accountability, inform future training improvement efforts, and assure the public and funders that the nearly $19 billion in public GME subsidies are being used in ways that explicitly meet and best honor what is now only an implicit social contract.

References

1. U.S. Government Accountability Office. Physician Workforce: HHS Needs Better Information to Comprehensively Evaluate Graduate Medical Education Funding. https://www.gao.gov/assets/700/690854.pdf. Published March 9, 2018. Accessed December 17, 2020.
2. Department of Veterans Affairs, Office of Budget. Volume II: Medical Programs and Information Technology Programs. https://www.va.gov/budget/products.asp. Accessed December 9, 2021.
3. Association of American Medical Colleges. Medicaid Graduate Medical Education Payments: Results From the 2018 50-State Survey. Washington, DC: Association of American Medical Colleges; 2019. https://store.aamc.org/downloadable/download/sample/sample_id/284. Accessed December 9, 2021.
4. U.S. Department of Health and Human Services, Health Resources and Services Administration. Fiscal Year 2021: Justification of Estimates for Appropriations Committees. Rockville, MD: Department of Health and Human Services; 2020. https://www.hrsa.gov/sites/default/files/hrsa/about/budget/budget-justification-fy2021.pdf. Accessed December 9, 2021.
5. Centers for Medicare & Medicaid Services. Hospital 2552-2010 Form. https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/Cost-Reports/Hospital-2010-form. Accessed December 9, 2021.
6. Coggeshall LT. Planning for Medical Progress Through Education: A Report Submitted to the Executive Council of the Association of American Medical Colleges. Philadelphia, PA: Association of American Medical Colleges; 1965.
7. Citizens Commission on Graduate Medical Education. The Graduate Education of Physicians. Chicago, IL: American Medical Association; 1966.
8. Verma S. Letter to the Honorable Charles E. Grassley, Chairman, Committee on Finance, United States Senate. https://www.finance.senate.gov/imo/media/doc/Chairman%20Grassley%20GME%20Response.pdf. Published January 6, 2020. Accessed December 27, 2021.
9. Chang BK, Brannen JL. The Veterans Access, Choice, and Accountability Act of 2014: Examining Graduate Medical Education Enhancement in the Department of Veterans Affairs. Acad Med. 2015;90:1196–1198.
10. U.S. Department of Health and Human Services, Health Resources and Services Administration. Report to Congress: Teaching Health Center Graduate Medical Education Direct and Indirect Training Expenses Report. Rockville, MD: Health Resources and Services Administration; 2019. https://bhw.hrsa.gov/sites/default/files/bureau-health-workforce/about-us/reports-to-congress/report-to-congress-thcgme-2019.pdf. Accessed December 9, 2021.
11. Triola MM, Hawkins RE, Skochelak SE. The time is now: Using graduates’ practice data to drive medical education reform. Acad Med. 2018;93:826–828.
12. Rosenberg ME, Gauer JL, Smith B, Calhoun A, Olson APJ, Melcher E. Building a medical education outcomes center: Development study. JMIR Med Educ. 2019;5:e14651.
13. Robert Graham Center. Data Tables: Graduate Medical Education For Teaching Hospitals, Fiscal Year 2018. https://www.graham-center.org/rgc/maps-data-tools/data-tables/gme.html. Accessed December 9, 2021.
14. Cohen JJ. Honoring the “E” in GME. Acad Med. 1999;74:108–113.
15. Oliver TR, Grover A, Lee PR. Variations in Medicare Payments for Graduate Medical Education in California and Other States. Oakland, CA: California Health Care Foundation; 2001. https://www.chcf.org/publication/variations-in-medicare-payments-for-graduate-medical-education-in-california-and-other-states. Accessed December 9, 2021.
16. Stevens RA. Graduate medical education: A continuing history. J Med Educ. 1978;53:1–18.
17. Whitcomb ME. Research in medical education: What do we know about the link between what doctors are taught and what they do? Acad Med. 2002;77:1067–1068.
18. Rosenthal TC. Outcomes of rural training tracks: A review. J Rural Health. 2000;16:213–216.
19. Levin Z, Meyers P, Peterson L, Habib A, Bazemore A. Practice intentions of family physicians trained in teaching health centers: The value of community-based training. J Am Board Fam Med. 2019;32:134–135.
20. Phillips RL, Petterson S, Bazemore A. Do residents who train in safety net settings return for practice? Acad Med. 2013;88:1934–1940.
21. Institute of Medicine. Primary Care Physicians: Financing Their Graduate Medical Education in Ambulatory Settings. Washington, DC: National Academies Press; 1989.
22. Institute of Medicine. Graduate Medical Education That Meets the Nation’s Health Needs. Washington, DC: National Academies Press; 2014.
23. Chen C, Xierali I, Piwnica-Worms K, Phillips R. The redistribution of graduate medical education positions in 2005 failed to boost primary care or rural training. Health Aff (Millwood). 2013;32:102–110.
24. Coutinho AJ, Klink K, Wingrove P, Petterson S, Phillips RL Jr, Bazemore A. Changes in primary care graduate medical education are not correlated with indicators of need: Are states missing an opportunity to strengthen their primary care workforce? Acad Med. 2017;92:1280–1286.
25. Rittenhouse DR, Ament AS, Grumbach K. Sponsoring institution interests, not national plans, shape physician workforce in the United States. Fam Med. 2020;52:551–556.
26. Weida NA, Phillips RL Jr, Bazemore AW. Does graduate medical education also follow green? Arch Intern Med. 2010;170:389–390.
27. Phillips RL Jr, Bitton A. Tectonic shifts are needed in graduate medical education to ensure today’s trainees are prepared to practice as tomorrow’s physicians. Acad Med. 2014;89:1444–1445.
28. Grover A. GME and the Future of the Pathologist Workforce. Presented at: 2013 CAP Policy Meeting; Washington, DC; May 6, 2013. http://www.cap.org/apps/docs/advocacy/policy_meeting/gme.pdf. Accessed December 9, 2021.
29. Council on Graduate Medical Education. Advancing primary care. http://purl.fdlp.gov/GPO/gpo11461. Accessed December 9, 2021.
30. U.S. Department of Health and Human Services. HHS FY2016 Budget in Brief. https://www.hhs.gov/about/budget/budget-in-brief/cms/medicare/index.html. Accessed December 9, 2021.
31. Weinstein DF. The elusive goal of accountability in graduate medical education. Acad Med. 2015;90:1188–1190.
32. Reddy AT, Lazreg SA, Phillips RL Jr, Bazemore AW, Lucan SC. Toward defining and measuring social accountability in graduate medical education: A stakeholder study. J Grad Med Educ. 2013;5:439–445.
33. Weinstein DF. Optimizing GME by measuring its outcomes. N Engl J Med. 2017;377:2007–2009.
34. National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Care Services. Graduate Medical Education Outcomes and Metrics: Proceedings of a Workshop. Washington, DC: National Academies Press; 2018.
35. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009;302:1277–1283.
36. Bansal N, Simmons KD, Epstein AJ, Morris JB, Kelz RR. Using patient outcomes to evaluate general surgery residency program performance. JAMA Surg. 2016;151:111–119.
37. Peterson LE, Carek P, Holmboe ES, Puffer JC, Warm EJ, Phillips RL. Medical specialty boards can help measure graduate medical education outcomes. Acad Med. 2014;89:840–842.
38. Coutinho AJ, Levin Z, Petterson S, Phillips RL Jr, Peterson LE. Residency program characteristics and individual physician practice characteristics associated with family physician scope of practice. Acad Med. 2019;94:1561–1566.
39. Magraw RM, Fox DM, Weston JL. Health professions education and public policy: A research agenda. J Med Educ. 1978;53:539–546.
40. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960.
41. The Commonwealth Fund Task Force on Academic Health Centers. Training Tomorrow’s Doctors: The Medical Education Mission of Academic Health Centers. New York, NY: The Commonwealth Fund; 2002. https://www.commonwealthfund.org/sites/default/files/documents/___media_files_publications_fund_report_2002_apr_training_tomorrows_doctors__the_medical_education_mission_of_academic_health_centers_ahc_trainingdoctors_516_pdf.pdf. Accessed December 9, 2021.
42. Boelen C, Heck JE. Defining and Measuring the Social Accountability of Medical Schools. Geneva, Switzerland: World Health Organization; 1995. https://apps.who.int/iris/bitstream/handle/10665/59441/WHO_HRH_95.7.pdf?sequence=1. Accessed December 9, 2021.
43. Council on Graduate Medical Education. Investing in a health workforce that meets rural needs. https://www.hrsa.gov/sites/default/files/hrsa/advisory-committees/graduate-medical-edu/publications/cogme-rural-health-issue-brief.pdf. Published February 2021. Accessed April 13, 2021.
44. Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: Improvement, accountability, and research. Jt Comm J Qual Improv. 1997;23:135–147.
45. Micheli M, Ponti M, Craglia M, Berti Suman A. Emerging models of data governance in the age of datafication [published online ahead of print September 1, 2020]. Big Data Soc. doi:10.1177/2053951720948087.
46. Allen SM, Ballweg RA, Cosgrove EM, et al. Challenges and opportunities in building a sustainable rural primary care workforce in alignment with the Affordable Care Act: The WWAMI program as a case study. Acad Med. 2013;88:1862–1869.
47. Winkley Shroyer AL, Bakaeen F, Shahian DM, et al. The Society of Thoracic Surgeons Adult Cardiac Surgery Database: The driving force for improvement in cardiac surgery. Semin Thorac Cardiovasc Surg. 2015;27:144–151.
48. Luckenbaugh AN, Miller DC, Ghani KR. Collaborative quality improvement. Curr Opin Urol. 2017;27:395–401.
49. Weidner AKH, Chen FM, Peterson LE. Developing the national family medicine graduate survey. J Grad Med Educ. 2017;9:570–573.
50. Hansen A, Peterson LE, Fang B, Phillips RL Jr. Burnout in young family physicians: Variation across states. J Am Board Fam Med. 2018;31:7–8.
51. George BC, Bohnen JD, Williams RG, et al.; Procedural Learning and Safety Collaborative (PLSC). Readiness of US general surgery residents for independent practice. Ann Surg. 2017;266:582–594.
52. Chen C, Petterson S, Phillips RL, Mullan F, Bazemore A, O’Donnell SD. Toward graduate medical education (GME) accountability: Measuring the outcomes of GME institutions. Acad Med. 2013;88:1267–1280.


Copyright © 2022 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the Association of American Medical Colleges.