
Invited Commentaries

Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

Arora, Vineet M. MD, MAPP

Academic Medicine 93(6):833–834, June 2018. DOI: 10.1097/ACM.0000000000002209


“Big data” is the buzzword of the moment in health care. The use of predictive analytics, machine learning, and artificial intelligence is touted as transformational for clinical care, and the advent of electronic medical records (EMRs) has made it possible to collect the data that fuel these tools. Hospitals and health systems are already investing in analytic capability to predict the next readmission or to proactively identify patients who are at risk for critical illness before anyone recognizes it.

If health care delivery can be transformed by big data, can graduate medical education (GME) be as well? Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice.

The time is certainly ripe for such a transformation in measuring GME outcomes. Recently, Weinstein1 articulated the need to rigorously measure GME outcomes after a National Academy of Medicine report2 called for reforms to how GME is delivered and financed. While experts agree on the need to ensure that GME meets our nation’s health needs, there is little consensus on how to measure GME’s performance in meeting this goal. This was the subject of a workshop on GME outcomes and metrics held at the National Academy of Medicine in October 2017.3 A key theme from that meeting was that big data holds great promise to inform GME performance at the individual, institutional, and national levels.

Using Big Data to Inform Clinical Experience

Currently, GME leaves trainees’ clinical experience to chance. Residents are assigned to care for patients somewhat haphazardly, without attention to the illnesses those patients have or the type of medical attention they will need. As a result, the clinical experience of trainees is variable. While efforts to capture and standardize trainees’ clinical experience exist, such as the Accreditation Council for Graduate Medical Education surgical case logs,4 they rely on manual data entry by residents and do not capture the multifactorial nature of a patient’s illness or procedural care.

While a case count is a low-level measure captured by EMRs, advanced methods using big data can answer more fundamental questions of importance.5 For example, machine learning could be applied to big data to identify the minimum number of procedures a trainee should perform to achieve optimal complication rates.
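
As a toy illustration only, the sketch below simulates a trainee’s procedure log and estimates the case count at which a rolling complication rate settles at its plateau. The data, the 20-case window, and the plateau rule are all assumptions made for illustration, not a validated method.

```python
# Hypothetical sketch: estimate the case count at which a trainee's
# complication rate plateaus. All data here are simulated; the window
# size and tolerance are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 consecutive cases whose complication risk decays with experience.
case_number = np.arange(1, 201)
p_complication = 0.05 + 0.15 * np.exp(-case_number / 30)
complications = rng.random(case_number.size) < p_complication

def plateau_case_count(outcomes, window=20, tolerance=0.02):
    """Return the first case count at which the rolling complication
    rate stays within `tolerance` of the final (asymptotic) rate."""
    rates = np.convolve(outcomes, np.ones(window) / window, mode="valid")
    asymptote = rates[-window:].mean()
    near_plateau = np.abs(rates - asymptote) <= tolerance
    for i in range(len(near_plateau)):
        if near_plateau[i:].all():      # near the asymptote, and stays there
            return i + window
    return None

print("Estimated minimum case count:", plateau_case_count(complications))
```

A real analysis would, of course, need to model case mix, supervision, and patient risk rather than raw counts.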

Providing Clinically Meaningful Data to Trainees

For data to be meaningful to trainees, the data need to reflect care actually provided by residents. This assumes both that the data capture a process or outcome within a resident’s control and that they apply to patients cared for by that resident. While isolated measures such as venous thromboembolism prophylaxis exist, measures related to clinical care often rely on a chain of processes that do not solely reflect care provided by trainees. For example, whether a patient leaves the hospital before noon is affected not only by the resident’s performance but also by the timing of the patient’s acceptance to a rehabilitation facility or of a family member’s availability to pick the patient up.

Attribution algorithms that are better able to identify the patients cared for by each resident are a critical step toward using big data to inform training. With residents working fewer hours and completing multiple handoffs, correctly attributing patients to individual trainees is challenging. If trainees do not trust that the data reflect the care they provided, it will be too easy for them to disregard those data. One major challenge with administrative data is that care is typically attributed to the billing attending, with no mention of residents. Thus, national data comparing hospital performance cannot separate care delivered by residents from care delivered by clinicians without residents. The simple addition of a billing identifier coding care provided by residents would advance measurement of trainee-delivered care at a national level and allow benchmarking between GME-sponsoring institutions.
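
To make the attribution idea concrete, here is a minimal sketch of one possible heuristic: credit each patient-day to the resident who generated the most EMR actions (notes, orders) that day. The log format and identifiers are hypothetical, and a production algorithm would need to account for handoffs, cross-cover, and team structure.

```python
# Hypothetical sketch of a simple attribution heuristic over an EMR
# audit log. Tuples are (patient_id, date, resident_id); all values
# are invented for illustration.
from collections import Counter, defaultdict

emr_actions = [
    ("pt1", "2018-03-01", "resA"), ("pt1", "2018-03-01", "resA"),
    ("pt1", "2018-03-01", "resB"),  # e.g., a cross-cover note after handoff
    ("pt1", "2018-03-02", "resB"),
    ("pt2", "2018-03-01", "resB"),
]

def attribute_patient_days(actions):
    """Map each (patient, date) to the resident with the most actions."""
    counts = defaultdict(Counter)
    for patient, date, resident in actions:
        counts[(patient, date)][resident] += 1
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

for (patient, date), resident in sorted(attribute_patient_days(emr_actions).items()):
    print(f"{date}  {patient} -> {resident}")
```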

The Promise of New Data Sources

Measurement and data reporting in teaching hospitals focus on the key measures required for public reporting and value-based purchasing. While such data are useful for clinical comparisons, they are unlikely to single-handedly yield breakthroughs in measuring GME performance.

However, merging such data with newer data sources holds great promise. Researchers have used Doximity, a social network of physicians, to link clinical outcomes to the site of medical school training.6 Ambient data during clinical encounters could be captured via voice-activated recording technology (e.g., Amazon Alexa) coupled with natural language processing and sentiment analysis to examine patterns in verbal communication. Using similar methodology, for instance, studies of body-worn video cameras on police officers during traffic stops demonstrated racial bias in utterances related to “respect.”7
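
As a rough illustration, the snippet below scores invented clinician utterances with an off-the-shelf sentiment lexicon (NLTK’s VADER). A generic sentiment score is only a stand-in for the purpose-built “respect” models used in the body-camera study, not a replication of them.

```python
# Hypothetical sketch: sentiment-score transcribed utterances. The
# utterances are invented; VADER is a general-purpose lexicon, not a
# clinical or "respect"-specific model.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

utterances = [
    "Thanks so much for your patience; let's go over the plan together.",
    "Stop asking questions and just take the medication.",
]
for text in utterances:
    compound = analyzer.polarity_scores(text)["compound"]  # ranges -1 to +1
    print(f"{compound:+.2f}  {text}")
```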

Likewise, engineering technology can measure the effectiveness of patterns of touch during simulated encounters using haptic data. While most simulations and clinical observations use checklists to measure protocol adherence, rigorous analysis of verbal conversation or haptic data could yield additional metrics, such as the level of empathy demonstrated in a clinical encounter or proficiency in physical exam skills. Other sources of ambient data could include geolocation from mobile phones or time spent charting in EMRs. While such practices may raise concerns about invasion of privacy, it is worth noting that companies already use similar approaches in advertising and marketing.
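
As one concrete example of ambient EMR data, a crude estimate of time spent charting could be derived from audit-log timestamps by splitting clicks into sessions at idle gaps, as sketched below. The log format and the 5-minute idle threshold are assumptions for illustration.

```python
# Hypothetical sketch: estimate charting time from EMR audit-log click
# timestamps. Clicks separated by more than `idle_gap` start a new session.
from datetime import datetime, timedelta

timestamps = [datetime(2018, 3, 1, 7, m) for m in (0, 2, 3, 9, 40, 42)]

def charting_time(stamps, idle_gap=timedelta(minutes=5)):
    """Sum the durations of click sessions separated by gaps > idle_gap."""
    stamps = sorted(stamps)
    total, session_start = timedelta(), stamps[0]
    for prev, curr in zip(stamps, stamps[1:]):
        if curr - prev > idle_gap:      # idle gap: close the current session
            total += prev - session_start
            session_start = curr
    return total + (stamps[-1] - session_start)

print("Estimated charting time:", charting_time(timestamps))  # 0:05:00
```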

Investing in Big Data Expertise in GME

A major hurdle in using big data to inform GME is the lack of expertise within the educational enterprise to analyze such data. GME offices are often lean units focused on onboarding, operations, accreditation, and remediation. The analytic capability and storage such data demand require willing partners with expertise and resources from other areas, such as quality or informatics. Partnership will not be seamless, since the language and data sources of GME are often disconnected from those of the clinical enterprise. As leadership roles that bridge hospital and GME take shape, experts in data science will be needed to bridge educational and clinical data as well. While New York University pioneered the creation of an educational data warehouse,8 generalizing this model to smaller, independent academic medical centers remains elusive. Therefore, solutions that are transferrable across sites and contexts and that leverage available data will be of greatest interest.

Of course, the greatest hurdle in using big data to inform GME is that it demands a big investment of expertise and resources. Even so, the return to society can be exponentially bigger than any initial investment. In the words of Stanford University big data scientist Atul Butte, “Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world.”9 Certainly, big data can also change residency training.

Acknowledgments: The author thanks the participants of the National Academy of Medicine’s Graduate Medical Education (GME) Outcomes and Metrics Workshop, particularly Dr. Debra Weinstein and the Data in the Future panel consisting of Drs. Rachel Werner, Anupam Jena, and Alvin Rajkomar.

References

1. Weinstein DF. Optimizing GME by measuring its outcomes. N Engl J Med. 2017;377:2007–2009.
2. Eden J, Berwick D, Wilensky G; Institute of Medicine. Graduate Medical Education That Meets the Nation’s Health Needs. Washington, DC: National Academies Press; 2014.
3. National Academy of Medicine. Graduate medical education outcomes and metrics—Workshop. http://www.nationalacademies.org/hmd/Activities/Workforce/GMEoutcomesandmetrics/2017-OCT-10.aspx. Accessed February 28, 2018.
4. Accreditation Council for Graduate Medical Education. Case log information. http://www.acgme.org/Specialties/Case-Log-Information/pfcatid/24/Surgery. Accessed February 28, 2018.
5. Rajkomar A, Ranji SR, Sharpe B. Using the electronic health record to identify educational gaps for internal medicine interns. J Grad Med Educ. 2017;9:109–112.
6. Tsugawa Y, Jena AB, Orav EJ, Jha AK. Quality of care delivered by general internists in US hospitals who graduated from foreign versus US medical schools: Observational study. BMJ. 2017;356:j273.
7. Voigt R, Camp NP, Prabhakaran V, et al. Language from police body camera footage shows racial disparities in officer respect. Proc Natl Acad Sci U S A. 2017;114:6521–6526.
8. Triola MM, Pusic MV. The education data warehouse: A transformative tool for health education research. J Grad Med Educ. 2012;4:113–115.
9. Goldman B. King of the mountain: Digging data for a healthier world. Stanford Medicine. Summer 2012. http://sm.stanford.edu/archive/stanmed/2012summer/article3.html. Accessed March 2, 2018.
Copyright © 2018 by the Association of American Medical Colleges