Reddy, Siddharta G. MPH; Babbott, Stewart F. MD; Beasley, Brent W. MD; Nadkarni, Mohan M. MD; Gertner, Eric J. MD, MPH; Holmboe, Eric S. MD
In 2001, the Institute of Medicine (IOM) highlighted health information technology (HIT) as a key feature needed to transform health care in the United States to a safer, more efficient, and patient-centered system.1 Policy makers are pushing hard for practices to embrace HIT to improve care delivery, primarily through implementation of the electronic health record (EHR), which is arguably the most recognizable component of HIT.2 The federal government, in its recent passage of the American Recovery and Reinvestment Act of 2009, has made HIT a priority as evidenced by plans to spend $19 billion over five years to ensure widespread implementation of EHRs.3 Initially, we expect the federal government to offer incentives for EHR adoption and meaningful use, but over time, hospitals and providers—and, by extension, continuity clinic training sites for graduate medical education—that fail to integrate EHRs will be penalized financially.4 Logically, then, current residents and fellows must be prepared to use HIT effectively in clinical practice.
A committee of the Office of the National Coordinator for Health Information Technology has recommended the use of basic EHR technology as a start,4 but the variety of products and tiers (basic, full-featured) offered by multiple vendors has muddied comparative analysis of which aspects of HIT are most important. Nevertheless, several studies have documented various benefits of implementing HIT and EHRs, and a subset of these has included the medical education setting.5–9 Aside from the educational benefit of having point-of-care access to clinical and decision-support information, potential exists for improving care by facilitating electronic communication among providers to improve continuity of care and workflow.5,6 Additionally, residents have reported receiving increased feedback after clinical encounters when they use electronic systems to record patient notes, which, in turn, helps ensure that they meet faculty evaluation requirements.7 Targeted retrieval of electronic records may also improve the identification of documentation errors8 and help program directors tailor learner experiences by periodically analyzing documented encounters.9
Multiple investigations, including those at the institutional, community-based, statewide, and national levels, have found that EHR adoption rates range from 9% to 29%.10–14 The most recent national study of EHRs in the ambulatory, private-office setting, based in part on the IOM-defined functionalities (List 1),15 found that 13% of practices installed a basic system (functionalities I–III), and just 4% installed a fully functional system (functionalities I–IV).16 Practices that implemented fully functional EHRs tended to be more satisfied than users of basic systems, and practices with younger physicians tended to have greater prevalence of both basic and fully functional EHRs.16
Residents are likely to encounter EHRs during their inpatient services as many academic medical centers and Veterans Health Administration (VHA) hospitals have embraced EHRs,6,17,18 but we do not know much about either the current exposure of residents to EHRs or residents' use of this technology in the ambulatory practice setting. Given that the majority of clinical care in the United States actually occurs in the ambulatory setting, learning to use HIT and EHRs to provide longitudinal care to populations of patients is a core competency for all residents. Specifically, residents need to leverage the population management and care coordination functionalities of EHRs to understand the needs of and connect with their patients, providing better continuity of care with the aim of improving measurable outcomes.19
Using the IOM's eight functionalities for EHRs (List 1) as a guide, we sought to describe the prevalence and state of HIT and EHR use among ambulatory training sites of U.S. internal medicine (IM) postgraduate training programs. We also sought to learn which specific functionalities residency training clinics had and were using in clinical practice.
This study targeted clinic directors from all Accreditation Council for Graduate Medical Education–accredited IM training programs in the United States where residents undergo any aspect of their required longitudinal ambulatory training. First, we contacted 378 IM program directors by e-mail, requesting their participation by asking them to identify their continuity clinic medical director (or multiple directors, if residents trained in more than one ambulatory clinic site). We followed up with up to three reminders to nonresponders.
Second, to compensate for program directors whom we could not contact or who did not respond, we also invited any members of the Society of General Internal Medicine Resident Clinic Director Interest Group (MRCDIG) who were still active in their directorship roles. We added this group to the clinic director sample, eliminating any duplicates.
Lastly, beginning in July 2007, we invited the clinic director sample by e-mail over a series of eight waves (we identified clinic directors over several months). We also sent up to nine e-mail and up to two telephone reminders with each wave to increase the response rate until the survey closed in January 2008. We offered respondents a small honorarium ($50).
The survey instrument consisted of a 67-item clinic demographic survey and a 65-item American Board of Internal Medicine (ABIM) system readiness survey. We aligned the latter survey with the previously validated Physician Practice Connections Readiness Survey (PPC-RS) developed by the National Committee for Quality Assurance, a not-for-profit organization aimed at improving health care by assessing quality, measuring performance, and recognizing providers who meet certain practice standards.20 Practices can use the PPC-RS to generate an estimated score based on a 100-point scale.21 Table 1 provides the definitions we used for the various types of electronic data systems (EDSs) described in the survey. The system readiness survey inquires about several system-specific components, including the presence and functionality of EHRs and other EDSs in residency clinics, as well as demographic characteristics of the individual clinic director and practice setting. Some item response options (simple dichotomous responses [i.e., yes/no]) allowed respondents to indicate the presence or absence of functionality. Three multipart questions asked clinic directors to describe administrative and clinical data present in their EDS and to explain how they used charting tools. The survey limited affirmative responses to the prior three months and to data that applied to 75% or more of patients; therefore, we could evaluate systems that had been in place for some time and were already in use for a majority of the patient population. Another question, regarding functionality of a clinic's electronic prescription writer (EPW), similarly restricted affirmative responses to data that applied to 75% or more of patients. Respondents had the option to answer “do not know” for 31 items, including the three multipart questions described here.
We adapted the survey for delivery on the Internet using Grapevine Surveys (Markham, Ontario, Canada); we provided unique log-in information to each respondent, allowing him or her to complete the questionnaire over multiple sessions if needed.
Reasoning that certain information may not be readily available to all respondents, we made all survey items optional; thus, the number of responses for individual questions did not always correspond to the total number of respondents. We determined a survey to be complete if the director completing it had answered at least one of the last three items (the last section of the survey dealt with personal demographic characteristics). Nearly all respondents who progressed to the end of the survey answered one or more of these three questions. We used chi-square analysis to compare the proportion of survey completers with that of noncompleters. We calculated frequencies for all responses and described them using distributions, means, medians, standard deviations, and ranges. We used two-sided tests with P values of less than .05 to determine statistical significance. We performed all analyses using SPSS 14.0 (SPSS, Inc., Chicago, Illinois) and tracked survey completion with Microsoft Access (Microsoft, Redmond, Washington).
Protection of human participants
The ABIM, the sponsor of this research, has a business associates (BA) agreement with its diplomates (a designation for physicians certified in IM by the ABIM) that allows the use of data for research purposes. The population of physicians whom we invited to participate in the research were diplomates of the ABIM. The ABIM has a similar BA agreement with training programs to use data for research purposes. Nevertheless, we took several steps to ensure the protection of participant confidentiality. Participants submitted their survey responses through a password-protected, secured Web survey platform. We did not collect any data identifying individual clinic patients. We used participant-identifying information only for invitation, follow-up, and honorarium processing, and these data were located separately on a password-protected computer. Lastly, we removed all identifying information prior to analysis.
We identified 356 clinic directors from 264 programs (70% of all accredited U.S. IM training programs). A total of 221 clinic directors (62%), including 22 whom we identified through the MRCDIG, completed the entire survey, representing 185 programs (49% of all accredited U.S. IM programs). We excluded incomplete surveys from analyses. Clinic directors from larger programs (≥50 trainees; 105/180 [58%]) constituted proportionally more survey completers (P < .001) than those from smaller programs (<50 trainees; 80/198 [40%]).
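The completer comparison reported above can be reproduced with a standard chi-square test of independence on the 2 × 2 table of completers and noncompleters by program size. This is an illustrative sketch using the counts given in this section, assuming SciPy is available:

```python
from scipy.stats import chi2_contingency

# Completers vs. noncompleters by program size, from the counts reported above:
# larger programs (>=50 trainees): 105 of 180 completed; smaller (<50): 80 of 198.
table = [
    [105, 180 - 105],  # larger programs: completers, noncompleters
    [80, 198 - 80],    # smaller programs: completers, noncompleters
]

# chi2_contingency applies the Yates continuity correction by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.5f}")  # P < .001, as reported
```

The resulting P value falls below .001, consistent with the significance level stated in the text.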
Of 219 respondents, 147 (67%) reported that their clinic used an EDS (defined in Table 1) that contains searchable administrative patient information. The EDS may have included an EHR, but other systems, such as a practice management system or registry, were also applicable. Directors reported that their clinics used the systems to maintain billing codes for services provided (94/141, 67%), past and current diagnoses (113/144, 78%), emergency contacts (113/147, 77%), and insurance identification or social security number (126/144, 88%). While clinic directors were mostly aware of the types of data their EDS maintained, three demographic variables elicited the highest percentage of “do not know” responses: marital status (30/146, 21%), language preference (31/144, 22%), and voluntarily self-identified race/ethnicity (30/144, 21%). Nearly all clinics, according to directors, also used their EDS to collect at least some of the basic patient information typically gathered during a visit, such as name (144/147, 98%), address (140/147, 95%), and dates of previous visits (136/144, 94%). When asked specifically about an EHR, 121 (56% of 216 respondents) reported using one. Table 2 shows EDS and EHR availability by clinic type.
Health information and data
Of those clinics with either an EDS or an EHR system, 134 (62% of 216) clinic directors reported that residents have access to systems capable of storing searchable patient clinical data in coded fields. The most commonly available patient data in these systems, according to directors, were results from laboratory tests (124/137, 90%), imaging (120/135, 89%), and pathology (120/138, 87%). More than three out of four clinics stored in their EDS basic patient information, such as weight (102/135, 76%), blood pressure (104/137, 76%), and allergies and adverse reactions (112/137, 82%) on a majority (≥75%) of patients; more than half of clinics also had height (82/135, 61%) and body mass index (BMI) data (73/133, 55%) for a majority of clinic patients. Of 138 clinics, 80 (58%) tracked the status of preventive services for the majority (≥75%) of their patients. A quarter of clinics, however, had no electronically stored BMI data beyond the paper medical record (34/133, 26%); likewise, 19% (26/137) lacked blood pressure data, 22% (30/135) lacked height data, and 19% (25/135) lacked weight data. The remaining clinic directors either did not know whether their EDS stored information on patients or reported that their electronic systems held data on less than 75% of their patients.
According to our responding directors, electronic systems in 62 clinics (28% of 221) permitted retrieval of stored data to identify patients needing follow-up care. Generating such lists enabled residents at 42 of 62 clinics (68%) to anticipate patients due for preventive care. Directors reported that over half of the clinics (36/62, 58%) used lists to track patients who were prescribed certain medications, patients who had chronic conditions that needed follow-up care (35/62, 56%), and patients who were due for a particular test (33/61, 54%). Twenty-six (43% of 61) clinics generated patient lists for cases in which a resident or faculty member needed to review a patient's chart for reasons that included abnormal test results or missed visits, and 18 clinics (29% of 62) used lists to acquire patient data ahead of an upcoming visit.
Directors of 90 clinics (41% of 217) reported that they used a separate system to track critical referrals until the consultation report was returned to the clinic: 26 (29%) of these systems were electronic, 34 (38%) were paper based, and 30 (33%) used a system that combined paper and electronic means. However, 118 (54% of 217) clinic directors reported not having a system outside the paper medical chart. We did not inquire about whether the ordering physician was responsible for checking the chart for results of the critical referral.
According to directors, residents in more than half of clinics tracked laboratory, imaging, and abnormal tests (and flagged overdue results) using a system other than the medical chart. Overall, of 220 clinics, 74 (34%) used an electronic system, whereas 48 (22%) used a paper-based system to track laboratory tests until the clinician viewed them. Results were similar for tracking imaging results: 72 clinics (33%) used an electronic system and 41 (19%) used a paper-based system. Both systems flagged overdue results. Table 3 reveals that, according to our responding directors, clinics in larger programs were more likely to use an electronic system for tracking both lab and imaging results. We did not ask how clinicians were alerted to overdue results in paper-based systems.
To flag abnormal test results, 91 clinics (41% of the 220 responding clinic directors) used an EDS to notify the resident or other clinician, 94 (43% of 220) used a paper-based system, and 34 (15% of 220) reported no formal system. Again, larger programs tended to use an electronic system to bring abnormal test results to a clinician's attention (Table 3). To document that residents or clinicians had followed up with patients with abnormal test results, 49 (23% of 213) clinics used an EDS, with no variation by program size. Ninety clinics (42% of 213) used a paper-based system to document this encounter, and 70 (33% of 213) had no formal system to do so.
Order entry and order management
Physicians at a minimum of 90% of responding clinics reported having an EDS that could retrieve text reports for lab results (198/219) and imaging results (199/219) directly from the facility that performed the service; 183 of 218 clinics (84%) could receive the images as well. Half of the clinics reported an EDS capable of actually ordering either type of test (109/218 for lab, 50%; 111/219 for imaging, 51%).
Overall, according to clinic directors, physicians at 100 (46% of 217) clinics were able to note in their EDS that the right clinical staff received and responded to the results, 49 (23% of 216) respondents indicated that their EDS flagged duplicate orders, and 31 (14% of 218) systems alerted clinicians if they ordered inappropriate tests. Larger programs reported having significantly greater prevalence of EDS functionality along these lines (Table 3).
The directors at half of the clinics (n = 110) reported having an EPW. Of these 110 clinic directors, 96 (88%) reported that residents could either print prescriptions in the clinic for their patients or send a fax or electronic message directly to a pharmacy. Furthermore, our respondents reported that 60 EPWs (55%) connected to a pharmacy, 11 (10%) connected to a pharmacy's benefit manager, and only 34 (31%) EPWs were capable of receiving electronic renewal requests (the survey did not specify whether requests for prescription renewals could be sent by patients).
Electronic prescription reference programs offer knowledge support to clinicians when placing prescription orders (Table 1). For instance, of the 107 clinics with an EPW that responded to this item, 36 (34%) said their EPW suggested drug substitutions, and 27 (25%) reported that their EPW provided patient-specific formulary drug alternatives, including, in both cases, suggestions for appropriate generic medications. Electronic systems provided general medication information as well as information tailored for specific patients (Table 4).
According to responding directors, 68 clinics (31% of 218) offered patients the opportunity for secure e-mail consultations with a staff physician, resident, or other provider, and 44 clinics (20% of 218) provided patients with a Web site to meet their nonurgent needs (e.g., scheduling, prescription refills, test results). Of 218 clinic directors, 44 (20%) reported that their clinic had a written policy for measuring this Web-based communication with patients, and 20 (9%) further reported that their efforts were working well, whereas 24 (11%) reported that their process could use improvement, and 171 (78%) lacked any written policy. A small percentage of clinics (7%, 16/218) measured their performance in responding to patient requests that were received electronically.
Electronic communication and connectivity
Of 120 responding clinics with an EHR, directors at 114 (95%) reported using unique patient identifiers. Providers in 108 of 121 clinics (89%) used National Provider Identifier (NPI) codes or other unique identifiers in their EHRs. (The Health Insurance Portability and Accountability Act of 1996 mandated the NPI as part of the National Plan and Provider Enumeration System in anticipation of facilitating electronic transactions between providers and payers for health care services. The Centers for Medicare and Medicaid Services assigns NPIs to providers and requires providers participating in Medicare to obtain an NPI. Other payers use the identifier also.) Residents in 102 (86% of 119) clinics used standardized codes for clinical information, and 100 (83% of 120) EHRs stored standardized medication and allergy data that followed nationally accepted standards. Fifty-two (43% of 120) respondents indicated that their clinics used and maintained codes, such as Logical Observation Identifiers Names and Codes (LOINC), to identify clinical observations, diagnostic results, and allergies. (LOINC is a universal system of identifying laboratory and other clinical observations that was developed to facilitate exchange of these data between clinicians and laboratories.)
Nearly all of the 121 responding clinic directors whose clinics had an EHR provided insight into the extent of their systems' interoperability, that is, the ability to send data to and receive data from external sources (e.g., other care providers or the public). More than four out of five clinics reported that their EDS had the capacity to receive lab (99/120, 83%) and imaging (99/118, 84%) test results, and about three-quarters reported that theirs could receive inpatient data (87/119, 73%) and prescription data (87/121, 72%). More than two-thirds of these clinics (83/121, 68%) reported the capacity to receive medical histories, and 54% (68/125) could receive physical findings from other providers. One in four clinics was able to transmit in kind to other providers' EDSs, including clinical information (30/119, 25%) and orders and requested appointments (33/118, 28%). One-third also sent prescription (40/119, 34%) and diagnostic (37/116, 32%) information to other providers and patients. Finally, personal health records played the smallest role in clinic connectivity: Only 25 (21% of 118) clinics reported being able to receive self-monitoring information from patients, and only 20 (17% of 120) clinics reported being able to transmit clinical information back to patients.
As reported earlier, 90 (41% of 217) clinics reported using a systematic approach (paper based, electronic, or a combination) outside the medical chart for tracking critical referrals. Regardless of the system used, some portion of the patient's information may have been sent electronically in a report accompanying the transmission. Four out of five clinics sent the reason for the consultation (50/60, 83%) as well as clinical data that were applicable to the referral (48/59, 81%). Forty-three (72% of 60) also sent clinical findings, and 40 (69% of 58) added the providers involved in that patient's care. Half provided a plan of care (30/59, 51%), and one-third of clinics provided the name of a person providing support for the patient (22/60, 37%), the patient's family history (21/60, 35%), and his or her social history (19/59, 32%). Less than one in four clinics (13/59, 22%) included the patient's functional status in the electronic report.
Discussion and Conclusions
In the United States, a substantially higher proportion of ambulatory clinics involved in training future IM physicians possess an EDS, specifically an EHR, compared with ambulatory office practices without IM residents.16 From an educational outcomes point of view, residents who graduate from these programs have a valuable opportunity to become competent in using an EDS to provide high-quality patient care in the continuity clinic setting, and they may have a greater chance of learning to treat their patients from a population-based perspective. The residency graduate's transition to the typical community ambulatory office setting, however, may be a difficult one. Members of this new workforce, potentially prepared with technical know-how, will too often find themselves in practices or inpatient settings that lack either an EHR22 or a more advanced HIT system; as a result, they may experience difficult career transitions and be less likely to order prescriptions electronically, to examine their patient performance measures readily in order to engage in quality-improvement activities, or to share clinical data for patients seen by multiple providers.23
Although the news regarding HIT in IM ambulatory training clinics compared with the external environment is encouraging, still only about half the sample of ambulatory sites we surveyed had EHRs. This is disappointing given that graduates are entering a health care environment in which the government has set timelines for mandatory implementation of electronic health strategies.24 In addition, a recent examination of 26 IM residency program curricula by the Medicare Payment Advisory Commission found that although many programs offer trainees exposure to an EHR, these systems are often not full-featured and lack important functionalities.2
EDSs of some kind were available in slightly more clinics (67%) than were EHRs specifically, but these systems seemed to be designed primarily for administrative purposes, such as scheduling and billing. Many clinics contend with multiple systems, both electronic and paper based, to order tests, to manage results, to flag abnormal results, or to notify the physicians—and too many of these systems are not integrated. Given the episodic care and poor continuity that exists in the longitudinal experience of many IM residency clinics, misaligned systems impede the coordination of care for patients seen by residents.25
Among the clinics with EHR systems in place, functionality and inclusion of patient data were suboptimal. For instance, one-third of clinics did not generate lists to identify patients needing previsit planning, and more than one-half of clinics did not use lists to anticipate preventive tests. Systematically incorporating planned visits into the care for patients with chronic disease is a basic principle of the Chronic Care Model and the Patient-Centered Medical Home.26 These holes illustrate missed opportunities that, if corrected—with residents participating in the processes—could improve patient health and equip trainees with pragmatic skills for future care of patients in contemporary care models (e.g., the Chronic Care Model or Medical Home) using existing resources.
Clinics associated with programs having 50 or more trainees tended to have greater adoption of HIT, including the presence of an EDS. Moreover, larger programs also reported using significantly greater functionality with respect to their HIT systems. An analysis of clinics by program size reveals that, as HIT functionality is added, the proportion of clinics having the cumulative functionalities decreases more rapidly for smaller programs than for larger programs (Table 5). These findings are not surprising because larger programs tend to be housed in larger academic health centers with more capability to expand and align their HIT infrastructure. Hospital-based (i.e., funded by the hospital and located in or near the facility) or hospital-supported (i.e., funded by the hospital, but located elsewhere) training programs, which constitute 82% of the clinic types in our sample, were equally represented in both smaller and larger programs. Hospitals already rely on graduate medical education (GME) indirect training funds to support noneducational operations and subsidize safety net care.27 One hypothesis is that larger programs have a larger pool of indirect funds with which to purchase HIT. Thus, it is not unreasonable to suppose that these programs can outlay larger HIT investments for the continuity clinics that they already support, especially if clinics are located within or near the facility, as was the case among the hospital-based and -supported clinics.
The majority of clinics we surveyed were either hospital based or hospital supported, and slightly more than half had an EHR in the clinic. VHA clinics, which have affiliations with 107 U.S. medical schools, constituted only a small percentage of the clinics we surveyed. Funded by the Department of Veterans Affairs, clinics in the VHA network have had an evolving, comprehensive (patient clinical data, financial, and administrative) EHR system in place since the mid-1980s.17 However, only one-third of freestanding (resident program supported) and federally qualified health centers reported having an EHR. Given the financial constraints these institutions already face in providing care to a patient population that is challenged economically, it seems both understandable and regrettable that these clinics lack modern HIT systems that may improve care.
This study has several limitations. First, to attain our study sample, we relied on program directors to provide us with the names and contact information of their clinic directors, thus creating a potential for nonresponse bias. Further, we were not able to determine the number or types of clinics not represented in the survey. However, we do know which programs did not participate in the survey, and a comparison of program-level data for responders and nonresponders shows that they are similar. Nonetheless, individual clinic characteristics of nonresponders are still unknown. Also, the questions about electronic systems were embedded in a long survey along with other specific aspects of the clinic, which may have resulted in a lower response rate. The potentially lower response rate and unknown characteristics of individual nonresponding clinics may limit the generalizability of our results; however, 62% of the sample did complete the entire survey and represented nearly half of all accredited IM programs in the United States. Ten of 22 clinic directors identified through the MRCDIG completed the survey. Their residents were more likely to generate electronic lists for patients needing follow-up care (P = .042), but they were otherwise similar to the rest of the clinics in terms of HIT prevalence and functionality. Second, lack of familiarity among the clinic directors about EDSs and related components may have created some difficulty in answering specific questions, but our results do represent what the clinic directors know and understand about the clinics they are presumably responsible for as directors; for example, respondents offered comments, through survey free-text fields, regarding
* their clinic's plans for implementation of an EHR (e.g., “received funding,” “in the next two years”),
* specific functional limitations of their electronic systems (e.g., “we do not get alerts except for [abnormal] radiology test [results]”), and
* modifications to their systems (e.g., “turned off the duplicate test alerts because they mostly flagged intentional duplication”).
Another, and perhaps the most important, limitation is that we did not survey residents directly about how they used electronic systems in the clinic or how satisfied they were with the functionality of HIT (when present). Future studies should compare trainees' experiences in outpatient versus inpatient settings and specifically focus on electronic system functionality, satisfaction, and whether experience affects career choice.
HIT has the potential to substantially improve health care delivery in the ambulatory setting. A decade ago, Bates and colleagues28 demonstrated that implementation of a physician order entry system resulted in a significant reduction (55%, P = .01) of nonintercepted serious medical errors in a large tertiary hospital. Demakis and colleagues29 showed significantly improved resident compliance with standard-of-care measures for chronic disease management at VA ambulatory care clinics after use of an electronic reminder system, though the influence of the reminders seemed to wane over time. Adams and colleagues30 documented greater clinical detail in pediatricians' documentation of history, risk assessment, and developmental milestones in an ambulatory practice after EHR implementation. And, more recently, a Massachusetts survey of physician practices found that a large number of practices have the ability to generate patient registries based on diagnoses, lab results, and medications and that practices with EHRs were more likely to generate registries by all three factors.31
That EHRs will become commonplace throughout the U.S. health care system is not a question of “if” but of “when.” The Health Information Technology for Economic and Clinical Health Act, a component of the American Recovery and Reinvestment Act, suggests that opportunities exist for residency ambulatory clinics to take advantage of HIT funding and guidance in order to be at the forefront of health care reform. Public and not-for-profit hospitals, facilities serving medically underserved populations, federally qualified health centers, small office practices involved in primary care, and entities involved in training health professionals are among the prioritized beneficiaries of this act, all of which were represented in our study sample and all of which are involved in the training of IM residents.3 The physician-in-training is in a crucial position to acquire the HIT proficiency needed to use these systems fully on entering practice. The electronic systems that residents use to acquire HIT proficiency must, therefore, be modern, integrated, and fully functional; they must be user-friendly; and they must be capable of maintaining sufficient patient data. The authors urge residency program directors to collaborate with training sites in order to implement or upgrade HIT systems, especially where full functionality or tight integration among multiple coexisting systems is lacking. More important, residency programs must evaluate their trainees' HIT proficiency to meet the ultimate goal of improving patient care.
This study was supported by funding from the American Board of Internal Medicine. The study sponsor had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript. The authors report no conflicts of interest, including specific financial interests and relationships and affiliations relevant to the subject matter or materials discussed in the manuscript.