Research Reports

An Examination of Medical Malpractice Claims Involving Physician Trainees

Myers, Laura C. MD, MPH; Gartland, Rajshri M. MD, MPH; Skillings, Jillian; Heard, Lisa MSN, RN; Bittner, Edward A. MD, PhD; Einbinder, Jonathan MD, MPH; Metlay, Joshua P. MD, PhD; Mort, Elizabeth MD, MPH

doi: 10.1097/ACM.0000000000003117


Both patients and medical professionals can experience tremendous burden after being involved in patient harm events that lead to medical malpractice claims. The average physician spends 4.2 years with an open malpractice claim.1 Open claims can have a negative impact on physicians’ daily practice, which can last for years and even continue after the claim is closed.2

In 2007, Singh and colleagues3 studied 240 malpractice claims that were closed between 1984 and 2004 from 5 large insurers. The sample consisted of claims with explicit medical errors in which trainees were involved. They found that these claims were more likely to involve excessive workloads, handoff issues, technical competence, and lack of supervision.3 However, this study did not address the broader landscape of medical malpractice involving physician trainees, that is, when trainees are involved in patient harm events without an explicit error.

Although the Accreditation Council for Graduate Medical Education (ACGME) mandated that training programs include education on patient safety in 2012,4,5 we have not yet seen data published on the unique factors associated with physician trainees being directly involved in harm events that ultimately become malpractice claims. Our goal was to identify patient-, provider-, and claim-related factors of medical malpractice claims in which physician trainees were directly involved in the harm events to target harm prevention strategies. Preventing these events from happening in the future would benefit both patients and providers. Based on previous literature about procedural competency and supervision,6–8 we hypothesized that surgical specialties practicing in settings with higher patient acuity would be at a higher risk of having physician trainees directly involved in harm events, which might be due to inadequate supervision.

Method

Study design

We performed a matched case–control study of medical malpractice claims. The Partners Healthcare institutional review board waived the project because no protected health identifiers were used (#2017P001506).

Data source

We used data from the Comparative Benchmarking System (CBS) database, which contains > 400,000 medical malpractice claims from 45 states (~30% of malpractice claims in the United States). It is operated by Harvard’s malpractice insurer, Controlled Risk Insurance Company (CRICO). Researchers have used the CBS for rigorous analyses in the past.9–11 Institutions voluntarily contribute claims from all clinical departments. For each claim, CBS-trained nurse coders working for CRICO have access to all materials, including, for example, expert witness testimony if a claim went to trial. The CBS has a robust quality assurance program that includes biweekly coder conference calls, an annual coders’ conference, and an auditing process whereby 15% of claims are reviewed quarterly. Coding time varies per claim. CRICO developed the database’s proprietary taxonomy, and a governance committee oversees it.

Definition of medical malpractice claims and variables used

We defined medical malpractice claims as written requests for compensation due to injury, according to previous articles.12,13 We used the National Association of Insurance Commissioners’ scale for injury severity, which includes the categories of fatal, permanent, and temporary, except that we merged emotional (originally its own category) with temporary, and we further divided fatal injury into fetal and nonfetal death.14,15 We reported case disposition as a binary variable depending on whether an indemnity payment was made or not. Payment could be made through settlements or verdicts in favor of the plaintiff. Teaching hospitals were defined according to the Council of Teaching Hospitals and Health Systems of the Association of American Medical Colleges.16 That is, they must have an affiliation with a medical school that is accredited by the Liaison Committee on Medical Education and sponsor or significantly participate in at least 4 approved, active residency programs, of which at least 2 are medicine, surgery, obstetrics–gynecology (OB/GYN), pediatrics, family practice, or psychiatry. We describe the other variables we used, including academic medical center and contributing factor, in Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A783). There were negligible missing data for the variables used (< 0.1%).

Identifying case and control claims

We examined claims closed between 2012 and 2016 that were contributed by teaching hospitals. A coded field called the service extender flag (SEF, see below) became available in the CBS in 2012. We chose a 5-year window because institutions entering the CBS must code their previous 5 years of data. No institution left the CBS during this time period, which ensured a stable pool of claims. We included only closed claims for variable completeness; however, the claims could be paid or unpaid.

Figure 1 shows a flowchart of how we classified the case and control claims (see below). We required that all defendants be insured by the same facility. Given the number of teaching hospitals that own community hospitals where trainees occasionally rotate, this stipulation increased the likelihood that the harm event occurred at the main teaching facility. This was crucial so that control claims originated from institutions in which physician trainees had the potential to be involved in harm events. Otherwise, claims from community affiliates could potentially appear in the control group and not provide a valid comparison if trainees only provided a fraction of care there.

Figure 1:
Flowchart showing how medical malpractice cases and controls (medical malpractice claims from the Comparative Benchmarking System database contributed by teaching hospitals and closed between 2012 and 2016) were classified. Nurse coders assigned the service extender flag (SEF) to a claim if they deemed that one of the following people was directly involved in the patient harm event: residents (including interns), fellows, medical students, nursing students, physician assistants, or nurse practitioners. The SEF does not necessarily mean that the person was named as a defendant; rather, it means that the person was involved in the harm event based on the coder’s full review of the claim files. The authors classified claims as cases if they had a resident, a fellow, or both directly involved in a harm event or as controls if they were from the same facilities as the case claims but did not have a resident or fellow directly involved in the harm event.

We identified claims that used the SEF, which nurse coders invoked if any of the following people were directly involved in the harm event: residents (including interns), fellows, medical students, nursing students, physician assistants, or nurse practitioners. Table 1 provides a few hypothetical examples of claims with the SEF. A claim with the SEF did not necessarily mean that the person was named as a defendant; rather, it meant that the person was involved in the harm event based on the coder’s full review of the claim files. We classified claims as cases if the SEF was used to denote a resident, a fellow, or both being directly involved in a harm event. We classified claims as controls if they were from the same facilities as the case claims but did not have a resident or fellow directly involved in the harm event. To maximize power, control claims could have the SEF if it denoted someone other than a resident or fellow. Throughout this article, we use the terms cases and controls to refer to claims involving a physician trainee (resident or fellow) or not, respectively.

Table 1:
Hypothetical Examples of Medical Malpractice Claims With a Service Extender Flag (SEF)a

We confirmed the accuracy of the SEF as follows. L.C.M. read the claim abstract of a random sample of claims (20% of the full sample) to determine if a person with a service extender role was directly involved in the harm event. The single, blinded reviewer (L.C.M.) obtained good agreement with the nurse coders (Kappa = 0.63, 95% confidence interval [CI] 0.56, 0.71) according to a commonly cited scale.17
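For context, Cohen’s kappa compares the observed agreement between two raters with the agreement expected by chance alone. A minimal sketch of the calculation (the ratings below are illustrative, not the study’s data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected if each rater labeled independently at their own base rates
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in set(counts_a) | set(counts_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# 1 = service extender directly involved, 0 = not involved (hypothetical ratings)
reviewer = [1, 1, 0, 0, 1, 0, 0, 0]
coder    = [1, 0, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(reviewer, coder), 2))  # → 0.47
```

On the commonly cited Altman scale,17 values of 0.61 to 0.80 (such as the study’s 0.63) represent good agreement.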

Statistical analysis

We generated descriptive statistics to summarize the characteristics of the cases and controls. We then performed multivariable regression using variables chosen a priori: patient and provider variables known at the time of the harm event that could plausibly be associated with trainees being directly involved in harm events and could be used to target harm prevention strategies. We did not include variables occurring after the harm event, such as harm severity, or claim-related factors, such as allegation, as these may not come to light until months or years after the event. The outcome was physician trainee involvement in harm events. We used a generalized estimating equation (GEE) with an exchangeable correlation structure and report results as odds ratios (ORs). We accounted for clustering at the state level because the state was the primary sampling unit and the basis for tort law. We used SAS 9.4 (SAS Institute, Cary, North Carolina) for all analyses. We set the 2-tailed threshold for significance at P = .05.

Results

From the original pool of all claims closed between 2012 and 2016 (30,973), there were 581 case claims, with the SEF denoting the direct involvement of a resident only (471, 81%), a fellow only (75, 13%), or both (35, 6%) in the harm event (Figure 1). There were 2,610 control claims from the same facilities as the case claims. Claims originated from 32 teaching institutions and 9 states (California, Colorado, Florida, Illinois, Massachusetts, Maryland, New Jersey, Pennsylvania, and Wisconsin) and the District of Columbia.

Table 2 shows the characteristics of the cases and controls. Both cases and controls primarily came from academic medical centers (525, 90% vs 2,231, 85%; P = .004). There was a statistically significant difference in the regional distribution of cases versus controls. For example, cases were less likely than controls to take place in the West (198, 34% vs 1,213, 46%; P < .001). There was no statistically significant difference in the percentage of cases versus controls closed over the years of the study (P = .87). While there was no statistically significant difference in median filing time between the groups, cases had significantly longer median open claim times than controls (19 vs 17 months, P = .04). There was no statistically significant difference between case and control claims in terms of the loss date occurring in July (48, 8% vs 225, 9%; P = .87) or on a weekend (83, 14% vs 383, 15%; P = .85).

Table 2:
Characteristics of Medical Malpractice Claims Closed Between 2012 and 2016 From Teaching Hospitals From the Comparative Benchmarking System Databasea

Table 2 also shows the severity of harm, the care setting in which the harm occurred, and the allegation categories. There were significantly more fetal deaths (13, 2% vs 20, < 1%; P = .005) and permanent injuries (211, 36% vs 725, 28%; P < .001) in cases than in controls. Cases were more likely than controls to take place in the inpatient setting (378, 65% vs 1,428, 55%; P < .001). Cases were more likely than controls to have an allegation related to surgical treatment (185, 32% vs 708, 27%; P = .02) or OB/GYN treatment (63, 11% vs 103, 4%; P < .001) and less likely to have a diagnosis-related allegation (79, 14% vs 457, 18%; P = .02).

Finally, Table 2 also shows the types of defendants named on the claim, whether claims were procedure related, and whether claims were paid. Case claims had a significantly higher rate of having a trainee named as a defendant than control claims (184, 32% vs 233, 9%; P < .001), but the rates of physician staff and hospitals named as defendants were not statistically significantly different. Cases were more likely than controls to involve a procedure (410, 71% vs 1,509, 58%; P < .001) and to be paid (284, 49% vs 857, 33%; P < .001). Supplemental Digital Appendix 2 (at http://links.lww.com/ACADMED/A783) lists the top 5 most common procedures based on their frequency within the cases. The 2 procedures that were more common in cases than controls were intubation (28, 5% vs 58, 2%; P = .001) and manually assisted vaginal delivery (9, 2% vs 9, < 1%; P = .002).
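As a quick consistency check, the unadjusted odds ratio for procedure involvement can be reconstructed from the counts reported above (410 of 581 cases vs 1,509 of 2,610 controls). This is only a sketch; the OR of 1.58 in Table 4 is additionally adjusted for covariates and state-level clustering, so the two values are expected to differ:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table of counts."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Procedure involvement: 410/581 case claims vs 1,509/2,610 control claims
or_procedure = odds_ratio(410, 581 - 410, 1509, 2610 - 1509)
print(round(or_procedure, 2))  # → 1.75
```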

Table 3 lists the most common final diagnoses based on their frequency within the cases and their corresponding frequency in the controls. Compared with controls, cases had significantly more events of puncture or laceration during a procedure (62, 11% vs 131, 5%; P < .001) and of hypoxic or ischemic brain injury (predominantly in neonates) (18, 3% vs 27, 1%; P < .001). Table 3 also lists the most common contributing factors based on their frequency within the cases and their corresponding frequency in the controls. The median number (interquartile range) of contributing factors per claim was 3 (2, 5) for cases and 2 (2, 4) for controls (P < .001). The most common contributing factors in cases were inadequate supervision (140, 24%) and technical performance with a known complication (137, 24%). Of the 140 case claims where inadequate supervision was a contributing factor, 104 (74%) involved a procedure and the majority (> 50%) involved providers in surgery or OB/GYN.

Table 3:
Most Common Final Diagnoses and Contributing Factors in Medical Malpractice Claims Closed Between 2012 and 2016 From Teaching Hospitals From the Comparative Benchmarking System Databasea

Table 4 summarizes the results of the multivariable regression analysis identifying significant factors associated with direct trainee involvement in harm events. Cases were significantly more likely than controls to occur in the emergency department than in the inpatient setting (OR = 1.65, 95% CI 1.43, 1.91; P < .001). Cases were significantly more likely than controls to be in specialties such as oral surgery/dentistry and OB/GYN than in general surgery (OR = 7.99, 95% CI 2.93, 21.83; P < .001, and OR = 1.85, 95% CI 1.24, 2.66; P < .001, respectively), although notably the CI for oral surgery/dentistry was wide. Cases had higher odds than controls of involving a procedure (OR = 1.58, 95% CI 1.27, 1.96; P < .001).

Table 4:
Results of a Multivariable Regression Analysis to Identify Significant Factors Associated With Physician Trainees Being Directly Involved in Harm Eventsa

Discussion

In summary, our study demonstrated that claims in which physician trainees were directly involved in the harm events were rare overall. When they did happen, residents were involved more often than fellows. Physician trainees from surgical specialties were at the highest risk, especially when practicing in the emergency department setting. We found the most common final diagnosis to be puncture or laceration during a procedure. Therefore, we felt that procedural safety was the best area to target with prevention strategies.

There has always been a tension between learner autonomy and patient safety. Simulation has been implemented in high-risk fields like surgery, anesthesia, and OB/GYN18–20 and often uses scenarios from previous adverse events or malpractice claims. A Harvard group found that the actuarial risk for anesthesiologists who participated in simulation decreased enough that the insurer was able to decrease premiums.20 Other studies have shown that experiential learning through simulation is not only effective for critical thinking and communication skills but also for procedural skills.21,22 If simulation is an available resource, we advocate for both procedural and team-based simulations. Yearly sessions could address topics that are high yield to trainees from different years of training while not imposing too much of a time commitment. These sessions could be combined with a required annual departmental event, such as N95 fit testing, to ensure compliance. Identifying a departmental champion for the simulation program would ensure that content remained relevant over time.

Besides simulation, program directors could consider other prevention strategies related to procedural safety. Trainees and attendings could cosign logs after completing a procedure and attendings could provide trainees real-time, face-to-face feedback. Attendings could also rate the difficulty of the procedure and ease of completion by the trainee so that program directors reviewing the logs could better evaluate trainees’ experience. Additionally, program directors could consider raising their threshold for granting procedural independence, as the concept of “practice makes perfect” has been shown to be true in various studies examining the volume of procedures done by an individual.23 Lastly, it is essential to have a culture of safety.24 Trainees must feel comfortable asking for help even if they have already met the criteria to perform a procedure independently.

Inadequate supervision was also an important theme in the data we presented. It was a contributing factor in only about a quarter of case claims involving physician trainees; however, three-quarters of the claims with inadequate supervision as a contributing factor also involved procedures. Not surprisingly, a meta-analysis by Snowdon and colleagues6 showed that patients’ mortality and rate of complications during invasive procedures decreased with more direct supervision. Yet, there is no definitive evidence that “overlapping surgeries,” procedures in which the completion of an attending’s previous procedure overlaps with the start of their next one, pose a safety risk to patients.25

Every year in July, the physician trainee workforce turns over at teaching hospitals. We might have expected a higher rate of harm events involving physician trainees in July, but we did not observe one. This suggests that trainees were not more likely to be directly involved in harm events at the time when, given their inexperience, we might have expected them to be most vulnerable. Although some articles demonstrate a so-called “July effect,”26 others suggest that heightened supervision in July compensates for inexperienced trainees.27,28

Some institutions have implemented in-house nighttime and weekend coverage to ensure attending supervision during off-hours. However, these programs are expensive and logistically challenging. We advocate for using the ACGME’s framework for progressive responsibility as a framework for providing enough supervision without placing excessive burden on faculty.29 In this framework, program directors assess trainees’ milestones for independent practice to delineate which activities require direct versus indirect supervision. The different levels of indirect supervision range from supervisors being physically present to them being available by phone. This framework may allow programs, especially smaller programs which may have a limited number of supervisors, to provide supervision more efficiently.

Our study found that trainees were named as defendants in only a minority (184, 32%) of claims in which they were directly involved in the harm event, yet they were also named in 233 (9%) of claims in which they were deemed not to have been directly involved in the harm event. Our estimates are in the same range as those found by Studdert and colleagues12 who reported that physician trainees were named on 30% of all claims. However, our results, which are broken down by direct involvement in the harm event, are important for designing prevention strategies. For example, a harm prevention strategy addressing inadequate supervision might prevent physician trainees from being directly involved in harm events. However, a change in tort law would be necessary to decrease physician trainees being named in claims when they are not actually involved in harm events.

Differences in state law could influence the percentages of trainees being named as defendants. For example, some states, such as Florida, give immunity to trainees, so trainees cannot be named as defendants unless they practice outside their scope.30 Claims from Florida were still included in our study because (1) the SEF identifies claims in which trainees are involved in the harm event regardless of defendant status and (2) trainees can still be named as defendants in Florida if practicing outside of their scope. In contrast, other states with institutional charitable caps, such as Massachusetts, may contribute a higher number of claims with trainees as defendants because lawyers are potentially more motivated to name as many providers as possible in claims. Because tort laws vary widely between states,31 we accounted for clustering at the state level in the multivariable regression analysis, but these results should be interpreted as the average landscape for physician trainees across many states.

Similarly, the observation that case claims were more likely to result in a payment, either through settlement or verdict in favor of the plaintiff, could simply reflect the states’ legal milieu rather than characteristics of the harm events themselves. More settlements might occur in states where court processes are slow or tribunals do not exist to dismiss malpractice claims without merit. The higher payment rate in case claims may be related to the high severity of harm; there were more fetal deaths and permanent injuries in case claims compared with control claims. However, it is unclear whether trainees caused this higher severity of harm or were simply more likely to be at the bedside during such crises, as they are the frontline workforce in many high-acuity hospitals.

There are several advantages to the database and study design we used. First, the database itself contains ~30% of malpractice claims in the United States, which enabled us to study rare events. We narrowed the sample to claims contributed by teaching hospitals with all defendants insured by the same facility so that the control group would be eligible for the same events as the case group. We confirmed the reliability of the SEF, which provided chart review–level granularity about trainee involvement in harm events. Previous studies of malpractice claims involving physician trainees are over a decade old,3,12,32 use legal databases that are subject to reporting bias,33–35 lack a control group,33–35 or are specialty- or procedure-specific.33–37

However, there are several potential limitations to our study. We did not have data on total physician coverage-years (an estimate of the total number of physicians insured over a certain period of time), so we could not generate a rate at which trainees were involved in medical malpractice claims. Our results mainly apply to training programs at academic medical centers rather than training programs based at community hospitals or off-site rotations at community hospitals. Floors within a teaching hospital may be designated as nonteaching, so some claims within the control group may not have been eligible for the exposure (i.e., trainees could not have been involved simply because they were not present on that floor to participate in the patient’s care). Lastly, we focused only on harm events that resulted in claims but acknowledge that harm events not resulting in claims are equally worth preventing and studying.

As leaders in medical education and patient safety, we face the challenge of training tomorrow’s physician workforce while preserving the safety of today’s patients. We hope that program directors, especially those in surgical specialties, consider implementing some of the prevention strategies that we discussed above, including: (1) using high-yield procedural and team-based simulation, (2) using procedure logs with supervisors’ cosignatures, (3) having higher thresholds for granting procedural independence, (4) having a culture of safety, and (5) using the ACGME’s framework for progressive responsibility. The goal of these strategies is to prevent patients from being harmed as well as to prevent trainees from being exposed to potentially traumatic events early in their career.

Acknowledgments:

The authors thank the Controlled Risk Insurance Company (CRICO) for providing the Comparative Benchmarking System data. They also want to acknowledge Jessica Moran, JD, Georgetown School of Law, for her assistance in interpreting legal terminology, Paul Currier, MD, Massachusetts General Hospital (MGH), for providing access to an SAS license, Harvard Catalyst, and Xiu Liu, senior specialist for analytics and reporting at the Center for Quality and Safety at MGH, for programming assistance.

References

1. Seabury SA, Chandra A, Lakdawalla DN, Jena AB. On average, physicians spend nearly 11 percent of their 40-year careers with an open, unresolved malpractice claim. Health Aff (Millwood). 2013;32:111–119.
2. Shapiro RS, Simpson DE, Lawrence SL, Talsky AM, Sobocinski KA, Schiedermayer DL. A survey of sued and nonsued physicians and suing patients. Arch Intern Med. 1989;149:2190–2196.
3. Singh H, Thomas EJ, Petersen LA, Studdert DM. Medical errors involving trainees: A study of closed malpractice claims from 5 insurers. Arch Intern Med. 2007;167:2030–2036.
4. Accreditation Council for Graduate Medical Education. Clinical Learning Environment Review. http://www.acgme.org/What-We-Do/Initiatives/Clinical-Learning-Environment-Review-CLER. Accessed November 7, 2019.
5. Myers JS, Bellini LM. Leveraging the continuum: A novel approach to meeting quality improvement and patient safety competency requirements across a large department of medicine. Acad Med. 2018;93:1321–1325.
6. Snowdon DA, Hau R, Leggat SG, Taylor NF. Does clinical supervision of health professionals improve patient safety? A systematic review and meta-analysis. Int J Qual Health Care. 2016;28:447–455.
7. Mourad M, Kohlwes J, Maselli J, Auerbach AD; MERN Group. Supervising the supervisors—Procedural training and supervision in internal medicine residency. J Gen Intern Med. 2010;25:351–356.
8. American Board of Internal Medicine. Policies and procedures for certification. https://www.abim.org/~/media/ABIM%20Public/Files/pdf/publications/certification-guides/policies-and-procedures.pdf. Published October 2019. Accessed November 7, 2019.
9. Gartland RM, Bloom JP, Fong ZV, et al. What have we learned from malpractice claims involving the surgical management of benign biliary disease?: A 128 million dollar question. Ann Surg. 2019;269:785–791.
10. Quinn GR, Ranum D, Song E, et al. Missed diagnosis of cardiovascular disease in outpatient general medicine: Insights from malpractice claims data. Jt Comm J Qual Patient Saf. 2017;43:508–516.
11. Myers LC, Skillings J, Heard L, Metlay JP, Mort E. Medical malpractice involving pulmonary/critical care physicians. Chest. 2019;156:907–914.
12. Studdert DM, Mello MM, Gawande AA, et al. Claims, errors, and compensation payments in medical malpractice litigation. N Engl J Med. 2006;354:2024–2033.
13. Gandhi TK, Kachalia A, Thomas EJ, et al. Missed and delayed diagnoses in the ambulatory setting: A study of closed malpractice claims. Ann Intern Med. 2006;145:488–496.
14. Sowka MP. The medical malpractice closed claims study. Conducted by the National Association of Insurance Commissioners. Conn Med. 1981;45:91–101.
15. National Association of Insurance Commissioners. Malpractice Claims: Final Compilation: Medical Malpractice Closed Claims, 1975-1978. Brookfield, WI: National Association of Insurance Commissioners; 1980.
16. Association of American Medical Colleges. Council of Teaching Hospitals and Health Systems (COTH). https://www.aamc.org/members/coth. Accessed November 7, 2019.
17. Altman DG. Practical Statistics for Medical Research. London, UK: Chapman and Hall; 1990.
18. Arriaga AF, Gawande AA, Raemer DB, et al.; Harvard Surgical Safety Collaborative. Pilot testing of a model for insurer-driven, large-scale multicenter simulation training for operating room teams. Ann Surg. 2014;259:403–410.
19. Lenguerrand E, Winter C, Innes K, et al.; Thistle group. THISTLE: Trial of Hands-on Interprofessional Simulation Training for Local Emergencies: A research protocol for a stepped-wedge clustered randomised controlled trial. BMC Pregnancy Childbirth. 2017;17:294.
20. Shannon DW. How a captive insurer uses data and incentives to advance patient safety. Patient Safety Quality Healthc. 2009;4:18–26.
21. Huang GC, Newman LR, Schwartzstein RM, et al. Procedural competence in internal medicine residents: Validity of a central venous catheter insertion assessment instrument. Acad Med. 2009;84:1127–1134.
22. Soffler MI, Hayes MM, Smith CC. Central venous catheterization training: Current perspectives on the role of simulation. Adv Med Educ Pract. 2018;9:395–403.
23. Morche J, Mathes T, Pieper D. Relationship between surgeon volume and outcomes: A systematic review of systematic reviews. Syst Rev. 2016;5:204.
24. Bump GM, Calabria J, Gosman G, et al. Evaluating the clinical learning environment: Resident and fellow perceptions of patient safety culture. J Grad Med Educ. 2015;7:109–112.
25. Sun E, Mello MM, Rishel CA, et al.; Multicenter Perioperative Outcomes Group (MPOG). Association of overlapping surgery with perioperative outcomes. JAMA. 2019;321:762–772.
26. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: Impact of the academic year-end changeover on patient outcomes: A systematic review. Ann Intern Med. 2011;155:309–315.
27. Myers L, Mikhael B, Currier P, et al.; American Heart Association’s Get With the Guidelines-Resuscitation Investigators. The association between physician turnover (the “July Effect”) and survival after in-hospital cardiac arrest. Resuscitation. 2017;114:133–140.
28. Shah AA, Zogg CK, Nitzschke SL, et al. Evaluation of the perceived association between resident turnover and the outcomes of patients who undergo emergency general surgery: Questioning the July phenomenon. JAMA Surg. 2016;151:217–224.
29. Accreditation Council for Graduate Medical Education. Common program requirements. https://www.acgme.org/Portals/0/PDFs/Common_Program_Requirements_07012011[2].pdf. Published July 1, 2011. Accessed November 7, 2019.
30. State University Systems of Florida Board of Governors Self-Insurance Programs. Liability protections and frequently asked questions. http://flbog.sip.ufl.edu/liability-protection-afforded. Accessed November 7, 2019.
31. Kachalia A, Mello MM. New directions in medical liability reform. N Engl J Med. 2011;364:1564–1572.
32. Helms LB, Helms CM. Forty years of litigation involving residents and their training: II. Malpractice issues. Acad Med. 1991;66:718–725.
33. Anandalwar SP, Choudhry AJ, Choudhry AJ, et al. Litigation in laparoscopic cholecystectomies. Am Surg. 2014;80:E179–E181.
34. Thiels CA, Choudhry AJ, Ray-Zack MD, et al. Medical malpractice lawsuits involving surgical residents. JAMA Surg. 2018;153:8–13.
35. Wegman B, Stannard JP, Bal BS. Medical liability of the physician in training. Clin Orthop Relat Res. 2012;470:1379–1385.
36. Anesthesia Quality Institute. About Closed Claims Project. https://www.aqihq.org/ACCMain.aspx. Accessed November 25, 2019.
37. Gurley KL, Grossman SA, Janes M, et al. Comparison of emergency medicine malpractice cases involving residents to nonresident cases. Acad Emerg Med. 2018;25:980–986.

Supplemental Digital Content

Copyright © 2019 by the Association of American Medical Colleges