Significant attention is currently focused on graduate medical education (GME) financing in the United States. Although it represents just 0.55% of annual health care expenditure,1 the investment of over $15 billion annually from federal and state entities, including $9.7 billion from Medicare,2 is substantial. Key stakeholders in GME, including state and federal policy makers, perceive a lack of accountability and transparency for GME funding and express concern that the GME system has not kept pace with changing societal needs.3–10 Given its sizable investment, the public’s interest in GME funding is justified.
Congress first explicitly stated its intent to fund GME as part of Medicare in 1965.11 Since the 1980s, teaching institutions have received funds in two forms: direct GME reimbursement and indirect medical education (IME) payments. Within institutions, monies are distributed with limited transparency to achieve ill-defined outcomes. Evidence suggests the existence of geographic and specialty-specific maldistribution of physicians,12–15 while, at the same time, residency graduates lack the skills needed for today’s practice environment (e.g., care coordination, quality improvement).16,17 The cost of training residents approximates the total public expenditure on GME,18,19 but there is wide variation in how GME funding is used across institutions and regions.
To address these concerns, in 2012, the Institute of Medicine (IOM, now the National Academy of Medicine) convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society’s current and future needs. The resulting report2 provided recommendations for oversight and mechanisms of GME funding, with performance-based GME payment being a “key requirement” to achieving transparency and accountability, but did not provide specific details about the content and development of metrics for these payments. In this Article, we propose potential metrics in the hope of starting a national conversation about performance-based funding for GME. Our intention is not to propose a GME funding mechanism but, rather, to offer possible metrics that could inform future funding decisions.
The IOM committee, however, was not the first to recommend performance-based GME payment. In 2010, the Medicare Payment Advisory Commission (MedPAC) recommended to Congress that 50% of IME payments be diverted into a performance-based GME fund.20 MedPAC proposed targeted metrics in five areas (quality improvement, evidence-based medicine, multidisciplinary teamwork, care coordination, and health information technology) that had been previously identified by a RAND Corporation study,21 but stopped short of proposing specific performance-based metrics. To our knowledge, we are the first to propose such metrics for GME funding.
This work complements a collaborative policy project on GME funding and workforce issues between the Alliance for Academic Internal Medicine (AAIM) and the American College of Physicians.22 We (internal medicine or medicine–pediatrics physicians) believe that sound metrics, developed and debated within the entire profession, will enhance the value of the public investment in GME. Given the limitation inherent in a specialty-specific group, we propose the GME performance metrics detailed below as a “conversation starter” for all stakeholders in academic medicine.
This Article outlines the process by which we created a set of 17 potential metrics to start the conversation on performance-based funding for GME. Eight of the metrics are described below as exemplars to add context and to help readers obtain a deeper understanding of the complexities of performance-based GME funding. We also describe considerations and precautions for metric implementation.
We are current and former volunteer members of the AAIM Education and AAIM Health Policy Committees, from university- and community-based programs of various sizes and locations across the United States, with backgrounds in GME advocacy. We conducted three in-person meetings, multiple conference calls, and numerous e-mail communications over the course of 25 months (from January 2015 to January 2017) to accomplish this work.
To establish potential meaningful performance-based metrics for GME funding, we considered the question: What should GME be held accountable23 for in exchange for public funding? We challenged ourselves to identify what should be measured as opposed to what can be readily measured. We explicitly strove to be provocative yet pragmatic and to build on existing reporting processes to develop a novel set of metrics to drive innovation in GME training and demonstrate the added value of GME to justify public funding.
To start, we systematically reviewed seminal articles and reports highlighting expectations of GME from a variety of stakeholder groups, including the IOM,2 MedPAC,20 Josiah Macy Jr. Foundation,24,25 Council on Graduate Medical Education,26 and physician professional organizations. We used an iterative process to categorize expectations and establish a framework for our metric development process. We selected the Institute for Healthcare Improvement (IHI) Triple Aim27 as an organizing principle because it describes the ideal health care system that GME should support. List 1 provides the eight categories we identified as the foundation of our metric development process.
Foundational Categories for Performance-Based Graduate Medical Education Metricsa
- Value, benefit, and cost
- Access to care
- Attention to the care of the underserved
- Patient safety
- Patient- and family-centered care
- Communication, teamwork, and transitions of care
- Educational environments
- Physician well-being
aAs identified by the authors.
Despite the prominent and controversial debate over physician workforce, we elected to limit the role of workforce outcomes in our proposed metrics. Many external factors that exceed the influence of the GME community contribute to the makeup and distribution of the physician workforce (e.g., educational debt, physician reimbursement, practice environment). Addressing these factors will require broader strategies22 that consider local and regional variations in workforce requirements, shifts in populations over time, and the role of advanced practice providers. For this, we favor a process by which local, state, and national bodies first determine physician workforce needs and then use dynamic GME funding to incentivize training in needed specialties and locations. We also elected to exclude metrics pertaining to individual competence and procedural experience because these domains are best addressed through existing professional self-regulation mechanisms (i.e., accreditation and board certification) and should be separate from any conversation about programmatic or institutional funding.
To ensure that we captured the breadth of viewpoints related to measurement and accountability, we added four perspectives through which to consider the categories—the GME community-at-large (collective GME), GME sponsoring institution (institutional), GME training program (program), and trainee/graduate of a program (trainee/graduate)—to our process. This approach broadened the opportunities for measurement of GME performance.
Working in pairs, we identified potential performance metrics for each perspective for an assigned category. This work was done using a simple matrix for each of the eight categories that included each of the four perspectives as well as some prompting questions. Chart 1 provides an example matrix for the category value, benefit, and cost. We modified each matrix slightly prior to use by the pairs to allow for contextual variations between each category.
Pairs reviewed existing performance-based metrics used in health care (e.g., National Quality Forum metrics28) with specific attention to collected and reported metrics that could be repurposed for the value-added measurement of GME performance. The pairs iteratively narrowed their lists of metrics on the basis of feasibility, perceived validity, potential for adverse or unintentional consequences, and alignment with the IHI Triple Aim to arrive at a maximum of 12 potential metrics for their category (3 metrics for each of the 4 perspectives). Pairs wrote narratives describing the background considerations, application, and risks for each metric. The entire group then discussed the proposed metrics and narratives for continued editing and clarification. Some proposed metrics were eliminated from consideration because they did not align with our identified goals of being pragmatic, aligning with current data collection mechanisms, or advancing the pursuit of the IHI Triple Aim. We also eliminated those metrics that could not be related to the entire GME continuum and/or other specialties.
To finalize the list of potential metrics, each pair was asked to identify up to 5 final metrics, regardless of perspective, for their category on the basis of feedback from the entire group and alignment with our goals. The entire group then reviewed the 33 proposed final metrics and eliminated 16 more from consideration, retaining only those felt to have the greatest potential for implementation.
The resultant 17 metrics covering 8 categories are listed in Table 1 (exemplars) and Table 2 (nonexemplars). The 8 exemplar metrics, one for each category, are described in detail below to illustrate the metrics’ variety, complexity, and potential uses. Each metric includes a background describing our justification for inclusion, potential mechanisms for measurement (i.e., for collecting and interpreting data), and key considerations for application. While this was not done for the nonexemplar metrics, many of the same mechanisms of measurement and considerations apply.
Of note, several proposed metrics mention the use of data related to the Accreditation Council for Graduate Medical Education (ACGME) Clinical Learning Environment Review (CLER) program. The CLER program provides formative assessment to teaching hospitals in six defined pathways pertinent to learning environments conducive to professional growth and development.29 Institutions measure and track their performance within each pathway to improve GME processes and outcomes. We believe that national, aggregated CLER data, written as a performance metric, may be appropriate for global GME funding determinations. However, we strongly oppose the direct use of institution-specific CLER performance data, as this would unintentionally undermine competence-based training22 and the stated intent of the CLER program.29 Because we predict institutions will prioritize data collection to document growth for CLER pathways, we aligned several of our proposed metrics with them to minimize the administrative burden of collecting additional data.
Metric: Value of care provided
Category: Value, benefit, and cost
U.S. health care costs are unsustainably high, with an estimated $750 billion attributed annually to wasteful and unnecessary care.30 While it is impossible to know how much waste is directly attributable to GME hospitals, evidence suggests that these hospitals play a key role in their graduates’ practice regarding care intensity, spending patterns, and certain patient outcomes,31–34 an effect that lasts at least 15 years after graduation.32,33 GME has taken significant steps to improve learners’ understanding and provision of high-value care (HVC),35 including the jointly created AAIM-American College of Physicians HVC curriculum36 and the Choosing Wisely campaign,37 which houses over 400 recommendations from over 70 national societies. Evidence suggests that HVC education can change behaviors and attitudes among students, trainees, and practicing physicians.38–40 Thus, the value of care provided at teaching hospitals should serve as a predictor of the future practice patterns of graduates.
Mechanism for measurement: A ratio consisting of institutional measurement of the total adjusted costs plus the costs from harm (e.g., excess readmissions, excess length of stay, hospital-acquired infections) for patients treated for the top five diagnosis-related groups, divided by the number of patient cases treated for those groups. The first report would establish a baseline to assess the change in the value-based ratio over time. High-performing institutions, when compared with similar teaching hospitals, would meet this metric even if performance did not improve from baseline.
The feasibility of collecting this information is reasonable as most GME hospitals already collect and report this information but do not use it to measure the value of care provided. One barrier is that this metric measures only the costs of care without directly measuring benefit as defined by outcomes of care. However, when combined with other institutional metrics that compare outcomes across institutions, we believe the relative value of care can be determined. In developing this metric, there may need to be some adjustment for acuity, perhaps based on the case mix index. When applying this metric, caution should be taken to avoid creating financial “double jeopardy,” in which teaching hospitals (especially critical access hospitals) would lose direct care payment via new performance-based reimbursement structures in addition to losing funding from a future performance-based GME funding system.
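The value-based ratio described above reduces to simple arithmetic: cost per case including the cost of harm, tracked against a baseline. The sketch below illustrates that calculation; all function names, field names, and dollar figures are hypothetical and are not prescribed by this proposal.

```python
# Sketch of the value-based ratio: (total adjusted costs + costs from harm)
# divided by the number of cases for a diagnosis-related group.
# All numbers below are invented for illustration only.

def value_ratio(adjusted_costs, harm_costs, case_count):
    """Cost per case, including the cost of harm (lower is better)."""
    if case_count <= 0:
        raise ValueError("case_count must be positive")
    return (adjusted_costs + harm_costs) / case_count

def change_from_baseline(baseline_ratio, current_ratio):
    """Fractional change in the ratio; negative values indicate improvement."""
    return (current_ratio - baseline_ratio) / baseline_ratio

# Hypothetical institution, one diagnosis-related group, two reporting periods
baseline = value_ratio(adjusted_costs=9_000_000, harm_costs=1_000_000, case_count=2_000)
current = value_ratio(adjusted_costs=8_800_000, harm_costs=700_000, case_count=2_000)
print(round(baseline, 2))                                   # baseline cost per case
print(round(change_from_baseline(baseline, current), 3))    # negative = improvement
```

As the text notes, this ratio captures cost but not benefit, so in practice it would be interpreted alongside outcome metrics and adjusted for acuity (e.g., case mix index).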
Metric: Care management processes for populations of patients
Category: Access to care
The clinical training environment of an institution significantly impacts the outcomes of care provided by the graduates of that institution.31–33 Accordingly, residents who train in institutions that emphasize fair and equitable access to care, as evidenced by population management care processes and services, are more likely to promote population-based care in their future practice. Metrics for institutions should target the presence of care management processes within the health delivery system as well as the formal education of residents and faculty in this area.
Mechanism for measurement: The presence of care management programs that reach across a broad institutional catchment area to target defined populations (e.g., high-risk or high-utilizing patients), and the impact those programs have on defined outcomes related to access to care for those populations.
Data for this metric could be obtained through self-report of institutions and/or by accessing available databases or registries. To maximize impact, we recommend flexibility for GME institutions to choose care management programs that address local or regional needs and to engage learners in the development and execution of these programs. In the early use of this metric, institutions should be rewarded for improving their defined outcomes as opposed to being penalized for having suboptimal performance at baseline.
Metric: Reducing health disparities
Category: Attention to the care of the underserved
Underserved populations are disproportionately poor, undereducated, and nonwhite. These socioeconomic factors, coupled with reduced access to care, contribute to health disparities.41 Care for underserved populations is an undeniable component of GME’s social contract, though not an explicit expectation of GME funding. Teaching hospitals have historically provided a disproportionate share of charity care,42 and residents who train in institutions that care for underserved patients tend to work in similar settings after graduation.43 Therefore, resident education should occur in institutions that are actively working to reduce health disparities, and such efforts should be included in any future GME performance-based funding model.
Mechanism for measurement: Institutional improvement in two specific patient outcomes (e.g., colorectal cancer screening) that evidence suggests are influenced by disparities (e.g., race, geography) at that institution. Efforts may be confined to a specific specialty or clinic.
Significant complexity exists in attributing responsibility for health disparities, with local and regional factors contributing to an institution’s health disparities profile. For example, health insurance status is the major determinant of access to care and, therefore, a major contributor to health disparities, yet the availability of non-employer-based insurance differs widely from state to state. This and similar factors are often beyond an institution’s direct influence. The early use of this metric should focus on efforts to reduce disparities while allowing institutions to identify how their respective unique contexts influence broader local and regional goals.
Metric: Performance on patient safety metrics
Category: Patient safety
Patient safety indicators (PSIs) are statistical performance measures monitored by hospitals, payers, regulators, and others to identify potential adverse events (e.g., in-hospital falls, decubitus ulcer formation).44 PSIs play a crucial role in hospitals’ monitoring of safe patient care and increasingly affect reimbursement for patient care activities. We recommend that institutions be held accountable for resident engagement in improving safety for identified institutional priorities meant to impact PSIs.
Mechanism for measurement: Institutional improvement on a composite score of self-identified PSIs in which residents have actively engaged in the improvement process. A percentage or proportion of trainees actively participating in patient safety initiatives could also be reported.
We believe there is significant opportunity to implement this metric. Data for many PSIs are already collected by GME institutions (e.g., the Agency for Healthcare Research and Quality, Leapfrog) and could also be collected via institutional self-report. Additionally, resident engagement is encouraged as part of an optimal learning environment.29 Existing institutional ACGME program surveys and aggregate, national CLER data could be used to measure resident engagement in patient safety activities. Limitations with regard to this metric are the growing concern that the measurement of PSIs may be flawed and biased45,46 and the risk of financial double jeopardy (see above). In the future, it would be useful to develop specific PSIs for use by GME programs or allow institutions to adapt existing PSIs to meet local needs.
Metric: Survey of patient experience
Category: Patient- and family-centered care
Perspective: Collective GME
Patient and family engagement in personal care decisions is increasingly valued and measured in health care. Evidence shows that patient experience and doctor–patient communication are key to clinical outcomes,47–49 including psychological and functional status and symptom recovery.50,51 Linking institutional and program performance via metrics associated with patient experience is important for future GME performance-based funding.
Mechanism for measurement: Collective GME community improvement over time on a composite score of focused elements regarding patient experience or patient self-management using existing or adapted surveys or measurement tools (e.g., the Clinical and Group Consumer Assessment of Healthcare Providers and Systems survey, the C-I-CARE framework) and assessments of the educational experience for residents and faculty (e.g., ACGME program surveys).
Many aspects of patient- and family-centered care are already measured in clinical and educational settings; however, data are very contextual. Extenuating circumstances outside the control of a program or institution make attribution, and therefore GME funding decisions, at the level of a specific program or institution problematic. However, it is appropriate for the entire GME enterprise to be held accountable for improving patient experience at teaching hospitals by requiring the purposeful inclusion of curriculum, assessments, or novel interventions designed to positively impact patient experience or engagement. As assessment tools improve, focus could be placed on institution- or program-level performance.
Metric: Hospital readmission rate
Category: Communication, teamwork, and transitions of care
There is widespread agreement that 21st-century patient care requires the participation of professionals from numerous disciplines, all practicing at the top of their abilities. Interprofessional team-based care demands a level of interpersonal communication that is not traditionally a focus of physician training.52,53 For almost 10 years, the Centers for Medicare & Medicaid Services (CMS) has tracked and reported elements of the quality of care provided by hospitals for high-volume admitting diagnoses (e.g., pneumonia), including each hospital’s 30-day readmission rates.54 More recently, the CMS has strengthened penalties regarding reimbursement for readmission rates in excess of those expected for each diagnosis.55 The stable transition from hospital to home, as assessed by short-term readmission rates, is an attractive metric for GME institutions because it can be a proxy for elements of resident communication and teamwork.
Mechanism for measurement: Hospital 30-day readmission rates for target diagnoses tracked and reported by the CMS with the opportunity for a “readmission bonus” for hospitals with rates in the lowest quintile or that have made significant improvements over the prior year.
Hospitals have a responsibility to ensure safe and appropriate discharges. Structured multidisciplinary programs have effectively reduced readmission rates in GME hospitals.56 While readmission rates are not under the sole control of GME institutions, it is reasonable to assume that trainees participating in effective discharge planning practices will incorporate these principles and priorities into their future practice. Tying readmission rates to GME funding underscores the importance of transitions of care training in medical education. Given the significant difficulties with attributing the reasons for readmission, institutions should be accountable for risk-adjusted improvement in the rate of readmission.
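The “readmission bonus” rule described above has two qualifying paths: lowest-quintile performance, or significant year-over-year improvement. The following sketch makes that logic concrete; the quintile cutoff, improvement threshold, and all rates are hypothetical illustrations, not values proposed by this Article.

```python
# Hypothetical sketch of the "readmission bonus": a hospital qualifies if its
# risk-adjusted 30-day readmission rate falls in the lowest (best) quintile,
# or if it improved significantly over the prior year.

def lowest_quintile_cutoff(rates):
    """Rate at or below which a hospital is in the lowest 20% of peers."""
    ordered = sorted(rates)
    k = max(1, len(ordered) // 5)   # number of hospitals in the lowest quintile
    return ordered[k - 1]

def qualifies_for_bonus(hospital_rate, all_rates, prior_rate=None,
                        improvement_threshold=0.02):
    """Bonus for lowest-quintile performance or year-over-year improvement."""
    in_lowest_quintile = hospital_rate <= lowest_quintile_cutoff(all_rates)
    improved = (prior_rate is not None
                and (prior_rate - hospital_rate) >= improvement_threshold)
    return in_lowest_quintile or improved

# Ten hypothetical peer hospitals' risk-adjusted 30-day readmission rates
rates = [0.18, 0.15, 0.21, 0.14, 0.19, 0.16, 0.22, 0.13, 0.20, 0.17]
print(qualifies_for_bonus(0.14, rates))                   # in lowest quintile
print(qualifies_for_bonus(0.19, rates, prior_rate=0.23))  # improved year over year
```

Rewarding improvement as well as absolute rank reflects the text’s recommendation that institutions be held accountable for risk-adjusted improvement rather than penalized for baseline performance.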
Metric: Educational portfolios
Category: Educational environments
Perspective: Program and/or trainee/graduate
The worldwide movement toward competency-based medical education and measurement of educational outcomes57 has led to redesigned assessments throughout physician training. The educational portfolio has been shown to be useful in certain areas of undergraduate medical education (UME)58,59 and postgraduate medical education,60 with successful implementation being described in internal medicine,61 family medicine,62 and surgery63 in the United States as well as in the United Kingdom.64 Portfolios can be used to document individual participation and performance in many of the activities and metrics proposed in this Article (e.g., engagement in patient safety activities,65 team-based care, high-value patient-centered care) and to encourage the development of reflective practice.66 We propose the robust use of educational portfolios as a performance-based metric, recognizing that the format, content, and use of portfolios are evolving.
Mechanism for measurement: Percentage of completed portfolios per program. (A completed portfolio documents participation in scholarly activities, clinical performance data review, and engagement in quality improvement activities.) Optimal content of portfolios may vary by specialty.
This metric will require an institutionally maintained portfolio management system for GME; appropriate infrastructure, including trained educators to facilitate completion, which is currently lacking for most institutions; and provision of data for use in reflection and quality improvement activities. Of note, many of these data correspond to those collected for the previously described metrics (i.e., institutional self-reports, ACGME surveys) but might also include Web Accreditation Data System data. Ultimately, however, this metric would require new data-reporting mechanisms by programs and institutions and the sharing of data across the continuum of medical education. Caution must be taken to ensure that the added documentation burden does not increase job dissatisfaction. Opportunity lies in linking GME educational portfolios to UME educational portfolios and the outcomes of practicing physicians as a future performance-based GME metric.
Metric: Institutional system for physician well-being
Category: Physician well-being
Physician burnout in the United States is reaching epidemic proportions and continuing to grow,67 with as many as 75% of residents manifesting difficulties with burnout or mood disorders.68–74 Burnout and depression in physicians adversely affect clinical and educational performance, undermining the profession’s commitment to quality and safety in patient care.75–81 This impact on quality is so profound that provider wellness has been proposed as a potential fourth arm of the IHI Triple Aim.82 To positively impact the desired patient outcomes of our other metrics, we recommend a metric designed to promote the well-being of medical students, trainees, and practicing physicians at GME institutions.83
Mechanism for measurement: Presence of a proactive, institutionally based system that promotes and monitors physician well-being and burnout while providing confidential treatment to affected physicians.
Data for this metric could be collected by the ACGME via surveys of residents and faculty and institutional self-reporting. A recent meta-analysis suggests that individual-focused and organizational interventions (e.g., mindfulness, stress management, small-group discussions) can reduce burnout.83 Combining these types of approaches may improve physician well-being. At this time, it is not clear which interventions are most impactful; further studies are needed. Implementation of this metric would need to align with evolving evidence and the ongoing activities of the ACGME, which has made physician wellness a priority.
In 2013, federal and state governments contributed over $15 billion to GME.2 Expecting greater transparency and accountability for the outcomes of this sizable taxpayer investment is reasonable. The IOM report noted that the current system “does not yield useful data on program outcomes and performance” and recommended “modernizing GME payment methods based on performance, to ensure program oversight and accountability, and to incentivize innovation in the content and financing of GME.”2 In this Article, we have proposed broad-ranging performance-based metrics that can be used, along with others that evolve over time, in a redesigned GME funding system. Collectively, these metrics can be used to leverage GME and achieve our common objective of a physician workforce that provides high-quality, patient-centered, affordable health care to patients and populations.
In developing our proposed metrics, we aimed to use existing data sets and registries or capitalize on data collection processes that were either already planned or already in place. We also endeavored to identify novel metrics with new or unmeasured data points by asking, “What should GME be held accountable for?” We believe that collectively our proposed metrics provide measurable and impactful opportunities for all specialties, locations, and training settings.
To enhance their applicability, we suggest that data for multiple metrics, selected from those provided in the exemplar and nonexemplar tables, be collected and reported as a formal composite measure. By obtaining data from many sources, GME funding will not be overly reliant on any singular data set or point. Ideally, there would be a larger menu of metrics, including those presented in this report, that align with institutional priorities and specialties from which institutions would choose. The use of such a menu approach and a focus on change in performance from baseline could help to mitigate unintended consequences, particularly at safety net institutions.
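Combining several metrics into a formal composite measure, as suggested above, typically involves normalizing each metric to a common scale and weighting it. The sketch below shows one plausible approach; the normalization scheme, metric names, reference ranges, and weights are all assumptions for illustration, not part of the proposal.

```python
# Sketch of a composite GME performance score: normalize each raw metric to
# 0-1 against a reference range (handling metrics where lower raw values are
# better), then take a weighted mean. All names and numbers are hypothetical.

def normalize(value, worst, best):
    """Map a raw metric value to 0-1, where 1 is best; works for inverted scales."""
    score = (value - worst) / (best - worst)
    return min(1.0, max(0.0, score))

def composite_score(metric_scores, weights):
    """Weighted mean of normalized scores (weights need not sum to 1)."""
    total_weight = sum(weights[name] for name in metric_scores)
    return sum(weights[name] * s for name, s in metric_scores.items()) / total_weight

# Hypothetical institution: readmission uses an inverted scale (lower is better)
metric_scores = {
    "readmission": normalize(0.16, worst=0.25, best=0.10),
    "psi_composite": normalize(0.80, worst=0.0, best=1.0),
    "portfolio_completion": normalize(0.90, worst=0.0, best=1.0),
}
weights = {"readmission": 2.0, "psi_composite": 1.0, "portfolio_completion": 1.0}
print(round(composite_score(metric_scores, weights), 3))
```

A menu-based system, as the text describes, would let institutions select which metrics enter the dictionary above, so that no single data source dominates the funding determination.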
Given the wide variety of GME training institutions and programs, we believe that the reporting of performance-based data must account for local variations in institutional mission, geographical location, and population served while continuing to promote national standards. In this manner, GME could be used to leverage local health systems improvement as opposed to simply justifying continued public funding for GME. For example, internal medicine residency programs’ curricula improved after the ACGME changed program rules in response to areas found to be deficient (e.g., information technology use).84 So, too, could performance metrics be used to improve national and local GME.
Ultimately, we anticipate that performance-based metrics will inform the extent and distribution of publicly funded GME dollars. As with any performance-based program, a threshold can trigger either an incentive (carrot) or a penalty (stick) for future funding. We recommend building an incentive-based system that facilitates ongoing efforts to improve GME and avoids unintentional harms. Whether the basis for incentives or penalties, ultimately, GME performance-based metrics should be evidence-based; dynamic; tied to improved outcomes of care; and, most important, transparent to teaching hospitals, health care payers, medical education accrediting organizations, and the public.
There are multiple limitations to the recommendations presented in this report. We are internal medicine and medicine–pediatrics physicians and do not represent the perspectives of all physician specialties. To the extent possible, we designed these metrics with broad applicability and adaptability across specialties. Many of the exemplar metrics presented are from the institutional perspective and attributed to the sponsoring institution, even if they are applicable to a subset of residency programs. We recognize that current data sets do not capture relevant performance metrics attributable to a program or trainee. This presents an opportunity for the GME community to develop mechanisms for collecting relevant metrics that inform quality of care at these levels. Our proposed metrics differ from the metrics historically used in UME (e.g., board exam scores, Match rate), which may lead to a lack of synergy across the continuum. As mentioned previously, GME imprints meaningful habits that endure throughout careers,31–33 resulting in the opportunity to leverage GME funding for health systems improvement. We anticipate medical schools and UME moving in this direction over time. Finally, our process for determining metrics was iterative and did not use a formal consensus-building process (e.g., Delphi), nor did we seek feedback from external experts. Incorporation of these steps was time- and resource-prohibitive. We anticipate, and in fact hope, that other medical organizations and experts will engage in this conversation.
It is critical to develop pilot projects to study the effect and unintended consequences of performance-based GME metrics prior to full-scale implementation of any new performance-based funding model. In accordance with the IOM report recommendations,2 we believe these pilots should be supported via new or existing GME funding mechanisms. We recommend establishing stable, reasonable GME performance metrics to allow programs and teaching hospitals to adapt their learning environment to achieve them. Additionally, we recognize that the health care system is dynamic, with rapidly changing technologies, patient demographics, and delivery systems. Any performance-based model will need continuous oversight and curation so that the metrics remain relevant over time.
We recognize the inherent difficulty in measuring these, or any, proposed performance-based metrics and the implications that performance-based GME funding has for teaching institutions and programs. Our proposal is intended to be provocative and start a meaningful conversation within the profession and among stakeholder organizations. We feel that GME stakeholders and medical educators must work together to incentivize and support change, as accountability and transparency of GME funding is a critical first step to building a strong and sustainable GME system for decades to come.
Acknowledgments: The authors would like to thank the dedicated Alliance for Academic Internal Medicine (AAIM) staff for their ongoing support throughout this initiative and Teresa Hartman, MLS, at the University of Nebraska McGoogan Library of Medicine for her assistance in preparing this manuscript.
1. Gold JP, Stimpson JP, Caverzagie KJ. Envisioning a future governance and funding system for undergraduate and graduate medical education. Acad Med. 2015;90:1224–1230.
2. Eden J, Berwick D, Wilensky G. Graduate Medical Education That Meets the Nation’s Health Needs. Washington, DC: National Academies Press; 2014.
3. Blumenthal D. Training Tomorrow’s Doctors: The Medical Education Mission of Academic Health Centers. http://www.commonwealthfund.org/publications/fund-reports/2002/apr/training-tomorrows-doctors--the-medical-education-mission-of-academic-health-centers. Published April 1, 2002. Accessed October 24, 2017.
4. Council on Graduate Medical Education. Nineteenth Report: Enhancing Flexibility in Graduate Medical Education. https://www.hrsa.gov/advisorycommittees/bhpradvisory/cogme/Reports/nineteenthrpt.pdf. Published September 2007. Accessed October 24, 2017.
5. Holmboe ES, Bowen JL, Green M, et al. Reforming internal medicine residency training. A report from the Society of General Internal Medicine’s task force for residency reform. J Gen Intern Med. 2005;20:1165–1172.
6. Institute of Medicine, Committee on the Future Health Care Workforce for Older Americans. Retooling for an Aging America: Building the Health Care Workforce. Washington, DC: National Academies Press; 2008.
7. Ludmerer KM, Johns MM. Reforming graduate medical education. JAMA. 2005;294:1083–1087.
8. Meyers FJ, Weinberger SE, Fitzgibbons JP, Glassroth J, Duffy FD, Clayton CP; Alliance for Academic Internal Medicine Education Redesign Task Force. Redesigning residency training in internal medicine: The consensus report of the Alliance for Academic Internal Medicine Education Redesign Task Force. Acad Med. 2007;82:1211–1219.
9. Mullan F. Workforce Issues in Health Care Reform: Assessing the Present and Preparing for the Future. Testimony Before the United States Senate Finance Committee. March 12, 2009. https://www.finance.senate.gov/imo/media/doc/031209fmtest1.pdf. Accessed December 27, 2017.
10. Medicare Payment Advisory Commission. Report to the Congress: Improving Incentives in the Medicare Program. http://www.medpac.gov/docs/default-source/reports/Jun09_EntireReport.pdf?sfvrsn=0. Published June 2009. Accessed October 24, 2017.
11. Rich EC, Liebow M, Srinivasan M, et al. Medicare financing of graduate medical education. J Gen Intern Med. 2002;17:283–292.
12. Council on Graduate Medical Education. Physician Distribution and Health Care Challenges in Rural and Inner-City Areas. Washington, DC: Health Resources and Services Administration; 1998.
13. Zhang X, Phillips RL Jr, Bazemore AW, et al. Physician distribution and access: Workforce priorities. Am Fam Physician. 2008;77:1378.
14. Petterson SM, Phillips RL Jr, Bazemore AW, Koinis GT. Unequal distribution of the U.S. primary care workforce. Am Fam Physician. 2013;87(11). http://www.aafp.org/afp/2013/0601/od1.html. Accessed December 1, 2017.
15. United States Government Accountability Office. Testimony Before the Committee on Health, Education, Labor, and Pensions, U.S. Senate: Primary Care Professionals: Recent Supply Trends, Projections, and Valuation of Services. GAO-08-472T. Washington, DC: Government Accountability Office; 2008. http://www.gao.gov/new.items/d08472t.pdf. Accessed November 27, 2017.
16. Mattar SG, Alseidi AA, Jones DB, et al. General surgery residency inadequately prepares trainees for fellowship: Results of a survey of fellowship program directors. Ann Surg. 2013;258:440–449.
17. Crosson FJ, Leu J, Roemer BM, Ross MN. Gaps in residency training should be addressed to better prepare doctors for a twenty-first-century delivery system. Health Aff (Millwood). 2011;30:2142–2148.
18. Steinmann AF. Threats to graduate medical education funding and the need for a rational approach: A statement from the Alliance for Academic Internal Medicine. Ann Intern Med. 2011;155:461–464.
19. Ben-Ari R, Robbins RJ, Pindiprolu S, Goldman A, Parsons PE. The costs of training internal medicine residents in the United States. Am J Med. 2014;127:1017–1023.
20. Medicare Payment Advisory Commission. Chapter 4. In: Report to the Congress: Aligning Incentives in Medicare. https://www.aacom.org/docs/default-source/grad-medical-education/jun10_entirereport.pdf?sfvrsn=2. Published June 2010. Accessed October 24, 2017.
21. Cordasco KM, Horta M, Lurie N, Bird CE, Wynn BO. How Are Residency Programs Preparing Our 21st Century Internists? A Review of Internal Medicine Residency Programs’ Teaching on Selected Topics. http://220.127.116.11/documents/Jul09_ResidencyPrograms_CONTRACTOR_CB.pdf. Published July 2009. Accessed October 24, 2017.
22. Butkus R, Lane S, Steinmann AF, et al. Financing U.S. graduate medical education: A policy position paper of the Alliance for Academic Internal Medicine and the American College of Physicians. Ann Intern Med. 2016;165:134–137. http://annals.org/aim/fullarticle/2520466/financing-u-s-graduate-medical-education-policy-position-paper-alliance. Accessed November 20, 2017.
23. American Medical Association Council on Medical Education. CME Report 5: Accountability and Transparency in Graduate Medical Education Funding. Chicago, IL: American Medical Association; 2016.
24. Josiah Macy Jr. Foundation. Ensuring an Effective Physician Workforce for America: Recommendations for an Accountable Graduate Medical Education System. http://www.macyfoundation.org/docs/macy_pubs/Effective_Physician_Workforce_Conf_Book.pdf. Published April 2011. Accessed October 24, 2017.
25. Josiah Macy Jr. Foundation. Ensuring an Effective Physician Workforce for the United States: Recommendations for Graduate Medical Education to Meet the Needs of the Public. http://macyfoundation.org/docs/macy_pubs/JMF_GME_Conference2_Monograph(2).pdf. Published November 2011. Accessed October 24, 2017.
26. Council on Graduate Medical Education. Twenty-First Report: Improving Value in Graduate Medical Education. http://www.hrsa.gov/advisorycommittees/bhpradvisory/cogme/reports/twentyfirstreport.pdf. Published August 2013. Accessed October 24, 2017.
27. Berwick DM, Nolan TW, Whittington J. The Triple Aim: Care, health, and cost. Health Aff (Millwood). 2008;27:759–769.
28. National Quality Forum. Measures, Reports & Tools. http://www.qualityforum.org/measures_reports_tools.aspx. Accessed October 24, 2017.
29. Accreditation Council for Graduate Medical Education. Introduction. In: Clinical Learning Environment Review (CLER): CLER Pathways to Excellence—Expectations for an Optimal Clinical Learning Environment to Achieve Safe and High Quality Patient Care. http://www.acgme.org/acgmeweb/Portals/0/PDFs/CLER/CLER_Brochure.pdf. Accessed October 24, 2017.
30. Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013.
31. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174:1640–1648.
32. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312:2385–2393.
33. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009;302:1277–1283.
34. Epstein AJ, Nicholson S, Asch DA. The Production of and Market for New Physicians’ Skill. Cambridge, MA: National Bureau of Economic Research; 2013.
35. Weinberger SE. Providing high-value, cost-conscious care: A critical seventh general competency for physicians. Ann Intern Med. 2011;155:386–388.
36. Smith CD, Levinson WS; Internal Medicine HVC Advisory Board. A commitment to high-value care education from the internal medicine community. Ann Intern Med. 2015;162:639–640.
37. Choosing Wisely. About. http://www.choosingwisely.org/about-us/. Accessed October 24, 2017.
38. Post J, Reed D, Halvorsen AJ, Huddleston J, McDonald F. Teaching high-value, cost-conscious care: Improving residents’ knowledge and attitudes. Am J Med. 2013;126:838–842.
39. Kost A, Genao I, Lee JW, Smith SR. Clinical decisions made in primary care clinics before and after choosing wisely. J Am Board Fam Med. 2015;28:471–474.
40. PerryUndem Research/Communication. Unnecessary Tests and Procedures in the Health Care System: What Physicians Say About the Problem, the Causes, and the Solutions—Results From a National Survey of Physicians. http://www.choosingwisely.org/wp-content/uploads/2015/04/Final-Choosing-Wisely-Survey-Report.pdf. Published May 1, 2014. Accessed October 24, 2017.
41. Fiscella K, Williams DR. Health disparities based on socioeconomic inequities: Implications for urban health care. Acad Med. 2004;79:1139–1147.
42. Association of American Medical Colleges. Patient Care at AAMC-Member Teaching Hospitals. https://www.aamc.org/download/379180/data/patientcareone-pager. Published March 2013. Accessed October 24, 2017.
43. Phillips RL, Petterson S, Bazemore A. Do residents who train in safety net settings return for practice? Acad Med. 2013;88:1934–1940.
44. Agency for Healthcare Research and Quality. Patient Safety Indicators Overview. http://www.qualityindicators.ahrq.gov/modules/psi_overview.aspx. Accessed October 24, 2017.
45. Rajaram R, Barnard C, Bilimoria KY. Concerns about using the patient safety indicator-90 composite in pay-for-performance programs. JAMA. 2015;313:897–898.
46. Berwick DM. Era 3 for medicine and health care. JAMA. 2016;315:1329–1330.
47. Berwick DM. A user’s manual for the IOM’s “Quality Chasm” report. Health Aff (Millwood). 2002;21:80–90.
48. Glickman SW, Boulding W, Manary M, et al. Patient satisfaction and its relationship with clinical quality and inpatient mortality in acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2010;3:188–195.
49. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients’ perception of care and measures of hospital quality and safety. Health Serv Res. 2010;45:1024–1040.
50. Stewart MA. Effective physician–patient communication and health outcomes: A review. CMAJ. 1995;152:1423–1433.
51. Beck RS, Daughtridge R, Sloane PD. Physician–patient communication in the primary care office: A systematic review. J Am Board Fam Pract. 2002;15:25–38.
52. Interprofessional Education Collaborative Expert Panel. Core Competencies for Interprofessional Collaborative Practice: Report of an Expert Panel. Washington, DC: Interprofessional Education Collaborative; 2011. https://www.aamc.org/download/186750/data/core_competencies.pdf. Accessed November 20, 2017.
53. Cox M, Naylor M. Transforming Patient Care: Aligning Interprofessional Education With Clinical Practice Redesign. New York, NY: Josiah Macy Jr. Foundation; 2013. http://macyfoundation.org/docs/macy_pubs/JMF_TransformingPatientCare_Jan2013Conference_fin_Web.pdf. Accessed November 20, 2017.
54. Medicare. Hospital Compare: Hospital Readmissions Reduction Program. https://www.medicare.gov/hospitalcompare/readmission-reduction-program.html. Accessed October 24, 2017.
55. Centers for Medicare & Medicaid Services. Readmissions Reduction Program (HRRP). https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Revised April 18, 2016. Accessed October 24, 2017.
56. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: A randomized trial. Ann Intern Med. 2009;150:178–187.
57. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958.
58. Buckley S, Coleman J, Davison I, et al. The educational effects of portfolios on undergraduate student learning: A Best Evidence Medical Education (BEME) systematic review. BEME guide no. 11. Med Teach. 2009;31:282–298.
59. Friedman Ben David M, Davis MH, Harden RM, Howie PW, Ker J, Pippard MJ. AMEE medical education guide no. 24: Portfolios as a method of student assessment. Med Teach. 2001;23:535–551.
60. Tochel C, Haig A, Hesketh A, et al. The effectiveness of portfolios for post-graduate assessment and education: BEME guide no 12. Med Teach. 2009;31:299–318.
61. Donato AA, George DL. A blueprint for implementation of a structured portfolio in an internal medicine residency. Acad Med. 2012;87:185–191.
62. McEwen LA, Griffiths J, Schultz K. Developing and successfully implementing a competency-based portfolio assessment system in a postgraduate family medicine residency program. Acad Med. 2015;90:1515–1526.
63. Webb TP, Aprahamian C, Weigelt JA, Brasel KJ. The Surgical Learning and Instructional Portfolio (SLIP) as a self-assessment educational tool demonstrating practice-based learning. Curr Surg. 2006;63:444–447.
64. Tochel C, Beggs K, Haig A, et al. Use of web based systems to support postgraduate medical education. Postgrad Med J. 2011;87:800–806.
65. Taylor BB, Parekh V, Estrada CA, Schleyer A, Sharpe B. Documenting quality improvement and patient safety efforts: The quality portfolio. A statement from the academic hospitalist taskforce. J Gen Intern Med. 2014;29:214–218.
66. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: A systematic review. Adv Health Sci Educ Theory Pract. 2009;14:595–621.
67. Dyrbye L, Shanafelt T. A narrative review on burnout experienced by medical students and residents. Med Educ. 2016;50:132–149.
68. Shanafelt TD, Bradley KA, Wipf JE, Back AL. Burnout and self-reported patient care in an internal medicine residency program. Ann Intern Med. 2002;136:358–367.
69. Ripp JA, Bellini L, Fallar R, Bazari H, Katz JT, Korenstein D. The impact of duty hours restrictions on job burnout in internal medicine residents: A three-institution comparison study. Acad Med. 2015;90:494–499.
70. Ripp J, Babyatsky M, Fallar R, et al. The incidence and predictors of job burnout in first-year internal medicine residents: A five-institution study. Acad Med. 2011;86:1304–1310.
71. Dyrbye LN, West CP, Satele D, et al. Burnout among U.S. medical students, residents, and early career physicians relative to the general U.S. population. Acad Med. 2014;89:443–451.
72. Lebensohn P, Dodds S, Benn R, et al. Resident wellness behaviors: Relationship to stress, depression, and burnout. Fam Med. 2013;45:541–549.
73. Cooke GP, Doust JA, Steele MC. A survey of resilience, burnout, and tolerance of uncertainty in Australian general practice registrars. BMC Med Educ. 2013;13:2.
74. Milstein JM, Raingruber BJ, Bennett SH, Kon AA, Winn CA, Paterniti DA. Burnout assessment in house officers: Evaluation of an intervention to reduce stress. Med Teach. 2009;31:338–341.
75. Fahrenkopf AM, Sectish TC, Barger LK, et al. Rates of medication errors among depressed and burnt out residents: Prospective cohort study. BMJ. 2008;336:488–491.
76. Joules N, Williams DM, Thompson AW. Depression in resident physicians: A systematic review. Open J Depression. 2014;3:89–100. http://file.scirp.org/pdf/OJD_2014080814110512.pdf. Published August 2014. Accessed October 24, 2017.
77. West CP, Tan AD, Habermann TM, Sloan JA, Shanafelt TD. Association of resident fatigue and distress with perceived medical errors. JAMA. 2009;302:1294–1300.
78. Garrouste-Orgeas M, Perrin M, Soufir L, et al. The Iatroref study: Medical errors are associated with symptoms of depression in ICU staff but not burnout or safety culture. Intensive Care Med. 2015;41:273–284.
79. Welp A, Meier LL, Manser T. The interplay between teamwork, clinicians’ emotional exhaustion, and clinician-rated patient safety: A longitudinal study. Crit Care. 2016;20:110.
80. Profit J, Sharek PJ, Amspoker AB, et al. Burnout in the NICU setting and its relation to safety culture. BMJ Qual Saf. 2014;23:806–813.
81. de Oliveira GS Jr, Chang R, Fitzgerald PC, et al. The prevalence of burnout and depression and their association with adherence to safety and practice standards: A survey of United States anesthesiology trainees. Anesth Analg. 2013;117:182–193.
82. Bodenheimer T, Sinsky C. From triple to quadruple aim: Care of the patient requires care of the provider. Ann Fam Med. 2014;12:573–576.
83. West CP, Dyrbye LN, Erwin PJ, Shanafelt TD. Interventions to prevent and reduce physician burnout: A systematic review and meta-analysis. Lancet. 2016;388:2272–2281.
84. Chaudhry SI, Lien C, Ehrlich J, et al. Curricular content of internal medicine residency programs: A nationwide report. Am J Med. 2014;127:1247–1254.