Research Reports

The AAMC Standardized Video Interview and the Electronic Standardized Letter of Evaluation in Emergency Medicine: A Comparison of Performance Characteristics

Hopson, Laura R. MD; Regan, Linda MD; Bond, Michael C. MD; Branzetti, Jeremy MD; Samuels, Elizabeth A. MD, MPH, MHS; Naemi, Bobby PhD; Dunleavy, Dana PhD; Gisondi, Michael A. MD

doi: 10.1097/ACM.0000000000002889


Residency selection is a high-stakes process for medical students and residency programs. Extensive research has demonstrated that selection criteria such as United States Medical Licensing Examination (USMLE) Step exam scores, class rank, and interview-day assessments are not strong predictors of future performance during residency.1–5 Personal competency factors, such as interpersonal and communication skills and professionalism, are poorly represented in standard application materials5,6 but may be the most predictive of professional success for potential trainees.7–12

In 1999, the Council of Residency Directors in Emergency Medicine (CORD-EM) developed the Standardized Letter of Recommendation (SLOR), which combines quantitative ratings and traditional narrative comments in a single document. Emergency medicine (EM) program directors place great importance on this residency selection tool, which has undergone several iterations and is currently known as the electronic Standardized Letter of Evaluation (eSLOE). The eSLOE includes background information about the applicant, clinical rotation grade, ratings of qualifications required for success in an EM residency program, and global ratings comparing the applicant with other applicants.13,14 Although the eSLOE does not directly measure an applicant’s interpersonal and communication skills and professionalism, there are some limited data demonstrating the tool’s value for predicting learner success during EM residency.15

Behavioral and situational interviews may be useful methods to assess applicants’ aptitude in interpersonal and communication skills and professionalism.16 In 2016, the Association of American Medical Colleges (AAMC) developed the AAMC Standardized Video Interview (SVI), which consists of 6 brief behavior-based questions designed to measure applicants’ interpersonal and communication skills and knowledge of professionalism. Applicants’ responses, recorded by webcam, are scored by trained raters against a standardized rubric to generate a total score ranging from 6 to 30 (with higher scores indicating higher proficiency). Program directors can access applicants’ SVI total scores through the Electronic Residency Application Service after completing instructional modules, which include implicit bias training. The SVI is thus designed to help residency programs consider, as part of a holistic review of an applicant, relevant information about critical personal competencies in a structured, objective interview format, avoiding the lower predictive validity and greater opportunity for interviewer bias associated with unstructured interviews. The SVI was available for operational pilot use in the EM residency selection process during the 2017–2018 application cycle for the 2018 National Resident Matching Program Main Residency Match (2018 Match). SVI total scores during this cycle were normally distributed, with a mean of 19.1, a range of 6 to 30, and a standard deviation (SD) of 3.1.17

Using national data from the 2018 Match application cycle, this study compared the performance characteristics of the eSLOE and SVI to examine the validity of these tools for EM residency selection. We compared these tools in 3 broad areas: (1) correlations between eSLOE ratings and SVI total scores; (2) correlations with other selection variables, including the relationships between eSLOE ratings, SVI total scores, EM rotation grades, USMLE scores, and honor society memberships; and (3) group differences, or how eSLOE ratings and SVI total scores differ by gender, race, and applicant type.

Method

All data used in this study were drawn from the CORD-EM eSLOE data repository or the AAMC Data Warehouse. The eSLOE data were securely sent to the AAMC by CORD-EM on March 29, 2018. SVI, demographic, and outcome data were obtained from the AAMC Data Warehouse on May 3, 2018. Data were linked by applicant ID using the merge function of IBM SPSS Statistics for Windows version 25.0 (IBM Corp., Armonk, New York). Individuals provided explicit consent for their data to be used for research purposes when they registered for the SVI. The institutional review board of the American Institutes for Research approved SVI research for this study (FWA00001666), including a waiver of informed consent given the minimal risk to participants. This study was conducted in accordance with STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines.18
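As an illustrative analogue only (the study used the merge function in IBM SPSS Statistics rather than pandas, and the file and column names below are hypothetical), the record linkage amounts to an inner join on the shared applicant identifier:

```python
import pandas as pd

# Hypothetical extracts of the two data sources, each keyed by applicant ID.
esloe = pd.read_csv("cord_em_esloe_ratings.csv")
svi = pd.read_csv("aamc_svi_demographics_outcomes.csv")

# Inner join on the applicant identifier yields the matched analytic sample;
# applicants present in only one source drop out of the matched analyses.
matched = esloe.merge(svi, on="applicant_id", how="inner")
```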

Study sample

We identified 2 population datasets: SVI and eSLOE. These data were then matched by applicant. The eSLOE population was defined as EM applicants with a completed eSLOE submitted from March 15, 2017, through March 26, 2018. The SVI population was defined as applicants with valid SVI total scores who completed the SVI from June 10, 2017, through August 1, 2017, encompassing the entire population of SVI test takers for 2017.

Description of variables

eSLOE ratings.

Ten eSLOE items with quantitative ratings were selected for analysis based on their relevance to the proposed research questions:

  • EM rotation grade (eSLOE 3b): 1 item rated using a 5-point scale from “fail” to “honors” (although no “fails” were reported);
  • applicant qualifications for EM compared with other applicants/peers (eSLOE B1–B6): 6 items rated using 3-point scales, from “below peers (lower 1/3)” to “above peers (top 1/3)” (5 items) or from “less than peers” to “more than peers” (1 item);
  • predicted success for the applicant (eSLOE B7): 1 item rated using a 3-point scale (“good,” “excellent,” “outstanding”); and
  • global assessment: 2 items using similar rating scales to rank the applicant in comparison with applicants recommended last year by the rater(s) (eSLOE C1) and to predict the applicant’s rank list position (eSLOE C2).

The wording and scale format for each item are provided in the sample eSLOE available as Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A713.

SVI total score.

The SVI total score for each applicant was calculated by converting each of the 6 raw question scores (rated 1 to 5) into a z score to account for potential rater effects, summing these standardized scores into a total score, and then transforming that total back to the original scale of 6 to 30. Additional information about the SVI, including sample questions and a general scoring rubric, is provided in Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A713.
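For illustration only, the sketch below mirrors that scoring logic in Python under two stated assumptions: z scores are computed within rater (to dampen rater leniency or severity), and the standardized totals are mapped back onto the 6-to-30 metric by matching the mean and SD of the raw summed scores. The file and column names are hypothetical, and the AAMC’s operational scoring procedure may differ.

```python
import pandas as pd

# Hypothetical long-format table: one row per (applicant, rater, question),
# with raw_score rated 1-5. Names are illustrative, not the AAMC's.
responses = pd.read_csv("svi_responses.csv")

# Standardize each raw score within rater to account for rater effects.
responses["z"] = responses.groupby("rater_id")["raw_score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

# Sum the 6 standardized scores and the 6 raw scores for each applicant.
z_total = responses.groupby("applicant_id")["z"].sum()
raw_total = responses.groupby("applicant_id")["raw_score"].sum()  # natural range 6-30

# Map the standardized totals back onto the 6-30 metric by matching the
# mean and SD of the raw totals, then clip to the scale bounds.
svi_total = (z_total - z_total.mean()) / z_total.std(ddof=0)
svi_total = (svi_total * raw_total.std(ddof=0) + raw_total.mean()).clip(6, 30)
```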

Other selection variables.

Applicants’ first-attempt scores on the USMLE Step 1 and Step 2 Clinical Knowledge (CK) exams and pass/fail results on the Step 2 Clinical Skills (CS) exam were used in this study. Self-reported membership in the Alpha Omega Alpha Honor Medical Society (selection generally based on academic accomplishments) and the Gold Humanism Honor Society (selection based on compassionate care and community service) was also used to address convergent and divergent validity, given broad participation among applicants and the availability of these variables in the AAMC Data Warehouse. The nomination processes and opportunities for these honor society memberships vary by medical school.

Data analyses

All correlational analyses and subgroup difference calculations were conducted using IBM SPSS Statistics for Windows version 25.0. For applicants with multiple eSLOE ratings, data were aggregated into a single rating by taking the simple average (mean) of ratings. This approach reflects how many residency programs use eSLOE ratings during their selection processes and also prevented applicants with multiple eSLOE ratings from carrying more weight in our final analysis. Spearman rank correlations were used when the outcome was an ordinal/ranked variable, such as an eSLOE rating. Point-biserial correlations were used when the outcome was dichotomous, such as honor society membership or the USMLE Step 2 CS pass indicator. Pearson correlations were used for relationships between continuous variables, such as SVI total score and USMLE Step 1 or Step 2 CK score. Subgroup differences were calculated for 3 eSLOE ratings (eSLOE B7, C1, and C2), selected as representative general/overall ratings, and are presented using Cohen’s d, a version of the standardized mean difference that accounts for the variance around group mean ratings, as follows:
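$$d = \frac{\bar{X}_{\text{focal}} - \bar{X}_{\text{reference}}}{SD_{\text{pooled}}}$$

where $\bar{X}_{\text{focal}}$ and $\bar{X}_{\text{reference}}$ are the mean ratings of the focal and reference subgroups and $SD_{\text{pooled}}$ is the pooled standard deviation of the two groups; this is the standard form of the statistic.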

Cohen’s d is widely used in selection research and allows for group difference comparisons to occur on a standardized scale by producing a value equivalent to the proportion of a single-SD difference (e.g., a d of 0.50 represents a difference of 0.5 SD).
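A minimal Python sketch of these analytic choices follows, using synthetic stand-in data (the study itself was conducted in SPSS; the variable names, distributions, and group indicator below are illustrative assumptions, not study values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2884  # size of the matched sample; all values below are synthetic

svi_total = rng.normal(19.3, 3.2, n)          # continuous SVI total score (6-30 scale)
esloe_c1 = rng.integers(1, 6, n)              # ordinal global assessment rating
aoa_member = rng.integers(0, 2, n)            # dichotomous honor society membership
step1 = rng.normal(230, 18, n)                # continuous USMLE Step 1 score
female = rng.integers(0, 2, n).astype(bool)   # subgroup indicator

# Ordinal outcome (eSLOE rating): Spearman rank correlation.
rho, _ = stats.spearmanr(svi_total, esloe_c1)

# Dichotomous outcome (honor society membership): point-biserial correlation.
r_pb, _ = stats.pointbiserialr(aoa_member, svi_total)

# Two continuous variables (SVI total score and Step 1 score): Pearson correlation.
r, _ = stats.pearsonr(svi_total, step1)

# Cohen's d: focal-group mean minus reference-group mean, over the pooled SD.
def cohens_d(focal, reference):
    n1, n2 = len(focal), len(reference)
    pooled_var = ((n1 - 1) * np.var(focal, ddof=1) +
                  (n2 - 1) * np.var(reference, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(focal) - np.mean(reference)) / np.sqrt(pooled_var)

d_gender = cohens_d(svi_total[female], svi_total[~female])
```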

Results

The SVI population sample included 3,469 potential EM applicants with valid SVI total scores. A total of 7,544 cases of completed eSLOEs were found, belonging to 3,223 applicants. We removed 79 test cases, duplicates, and cases where the rater indicated that they did not know the applicant or that the applicant did not rotate in their emergency department. The final eSLOE population sample was 3,205 applicants with valid eSLOE ratings. Of these 3,205 applicants, 2,884 were matched to a corresponding SVI total score, indicating a 90% match rate between the SVI and eSLOE population samples. (A flow diagram is provided in Supplemental Digital Appendix 3A at http://links.lww.com/ACADMED/A713.)

The matched sample had a mean SVI total score of 19.33 (SD 3.17), with scores approximately normally distributed across the range of 6 to 30. These scores were almost identical to the scores of the cohort of all 2017 SVI participants.17 Table 1 summarizes the demographic characteristics of the matched sample, which was similar to the full EM applicant pool in the 2018 Match, although the matched sample had proportionally fewer foreign medical graduates (FMGs: non–U.S. citizen attendees of international medical schools) and international medical graduates (US-IMGs: U.S. citizen attendees of international medical schools), who were less likely to have completed eSLOEs than attendees of U.S. MD-granting medical schools (US-MDs) and DO-granting medical schools (DOs).

Table 1: Demographic Characteristics of the 2017–2018 eSLOE–SVI Matched Sample and the Emergency Medicine (EM) Residency Applicant Pool in the 2018 Match

In the eSLOE population sample (n = 3,205), the eSLOE ratings were provided by 708 unique raters or rater groups from 241 unique institutions. (These values represent best approximations after attempts to reconcile similar but discrepant rater information due to inconsistencies in the manual entry of rater and institution names on the eSLOEs.) The majority of applicants had 2 to 3 completed eSLOEs (75% [2,395/3,205]; for the number of completed eSLOEs per applicant, see Supplemental Digital Appendix 3B at http://links.lww.com/ACADMED/A713). The majority of the eSLOEs (73% [5,447/7,465]) were group ratings, that is, evaluations reflecting a consensus decision among a team or group of multiple raters (typically departmental education leadership), rather than evaluations provided by a single rater. Overall, 94.6% of eSLOEs (7,062/7,465) were written by education leadership, individually or as a group consensus letter, with the remainder written by individual faculty members. Few eSLOEs were incomplete. Relatively few raters made use of the bottom rating categories for each item, as reflected in Figure 1.

Figure 1: Response category distributions for electronic Standardized Letter of Evaluation (eSLOE) ratings in the 2018 Match application cycle (n = 7,465 valid ratings for 3,205 applicants). These data represent emergency medicine (EM) leadership and faculty ratings of medical students applying to EM residency programs, submitted from March 15, 2017, through March 26, 2018, predicting their future success in EM residency (eSLOE B7) and ranking them in relation to peers/other applicants (eSLOE C1 and C2). A sample eSLOE, with the wording of each item, is provided in Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A713.

Correlations

Table 2 reports means and SDs for SVI total score and the 10 eSLOE ratings. It also reports correlations between SVI score, rotation grade (eSLOE 3b), and ratings of qualifications for EM (eSLOE B1–B7) and global assessment ratings (eSLOE C1 and C2) for the matched sample. All correlations between SVI score and eSLOE ratings were statistically significant at the P < .05 level, with magnitudes in the range for small positive relationships (generally approaching r = 0.20).

Table 2: AAMC SVI Total Score and eSLOE Ratings, 2017–2018: Means and Correlations

Table 3 reports correlations between SVI total score, eSLOE global assessment ratings (eSLOE C1 and C2), USMLE Step exam scores, and honor society memberships in the matched sample. Correlations between SVI and Step exam scores were smaller than correlations between eSLOE ratings and Step exam scores, with the magnitude of these relationships ranging from r = 0.05 to 0.10 for SVI score and from r = 0.12 to 0.30 for eSLOE ratings. The correlations between SVI score and honor society membership status were small, with r = 0.09 for Alpha Omega Alpha membership and r = 0.12 for Gold Humanism Honor Society membership. The global assessment ratings of applicant ranking (eSLOE C1) and rank list prediction (eSLOE C2) were both correlated at r = 0.20 with Alpha Omega Alpha membership and at r = 0.17 and r = 0.16, respectively, with Gold Humanism Honor Society membership.

Table 3: AAMC SVI Total Score, eSLOE Ratings, USMLE Step Exam Scores, and Honor Society Memberships, 2017–2018: Means and Correlations

Group differences

Table 4 demonstrates that when using white, male, and US-MD applicants as the reference groups, subgroup differences for SVI total score and eSLOE ratings in the matched sample were small or nonexistent. None of the reported group differences reached the generally accepted threshold for moderate differences (d > 0.50),19 except for applicant type. US-MDs had higher predictions of success (eSLOE B7) than US-IMGs (d = 0.53) and had slightly higher SVI scores than DOs (d = 0.42) and US-IMGs (d = 0.34).

Table 4: Group Differences by Gender, Race/Ethnicity, and Applicant Type in AAMC SVI Total Score and eSLOE Ratings, 2017–2018 eSLOE–SVI Matched Sample (n = 2,884)

When comparing Latino or Asian applicants with white applicants, there were no differences in SVI total score or eSLOE ratings that met the threshold for a small effect (d > 0.20). SVI scores very slightly favored black applicants compared with white applicants (d = −0.14). The eSLOE ratings showed small black–white subgroup differences: eSLOE ratings for success prediction (eSLOE B7), applicant ranking (eSLOE C1), and predicted location on the rank list (eSLOE C2) were slightly higher for white applicants than for black applicants (d = 0.27 for B7; d = 0.40 for C1; d = 0.36 for C2). Both SVI total score and eSLOE ratings also slightly favored female compared with male applicants, but these subgroup differences were small (d approaching −0.20).

Discussion

EM program directors use multiple tools that attempt to provide useful, reliable, and valid information about applicants to select the best candidates to invite to interview and to match into their programs. This study sought to compare an existing valued tool (the eSLOE) with a methodologically rigorous but new tool (the AAMC SVI). Both of these tools have the potential to address applicants’ personal competencies in distinct ways—the eSLOE through a summative personal observation of behavior, and the SVI through a structured interview assessment of critical competencies. We found that ratings for all 10 eSLOE items selected for this analysis had small positive correlations with SVI total score in the 2018 Match application cycle. These findings suggest that the eSLOE and SVI are measuring similar but not duplicative aspects of a candidate’s profile. Although eSLOE global assessment ratings (eSLOE C1 and C2) broadly assess an applicant’s clinical acumen (e.g., medical knowledge, patient care), the SVI is specifically designed to measure an applicant’s interpersonal and communication skills and knowledge of professionalism.

The eSLOE is widely accepted as the standard tool for assessment of EM residency candidates, with program directors noting that the most important factor in determining which applicants to invite to interview is eSLOE data.14 The SVI is new to the residency selection process and has been met with reservations throughout the EM community.20 Our analysis suggests that the SVI and eSLOE can be used as selection tools that are related but not redundant.

An additional study objective was to examine correlations of these tools with other selection variables, including the relationships between SVI total score, eSLOE ratings, EM rotation grade (eSLOE 3b), USMLE Step exam scores, and honor society memberships. Although correlations between SVI score and eSLOE ratings were small and positive, correlations between SVI score and academic variables (Step exam scores and Alpha Omega Alpha membership) were negligible. The correlation between SVI score and Gold Humanism Honor Society membership (r = 0.12) was slightly higher but still small, perhaps reflecting the variation in selection criteria across medical schools. The relatively higher correlations between eSLOE ratings and Alpha Omega Alpha membership (r = 0.20 for eSLOE C1 and C2) are an indication that eSLOE ratings better reflect medical knowledge or cognitive skills than do SVI scores. The literature suggests that neither Gold Humanism Honor Society nor Alpha Omega Alpha membership are good predictors of residency performance.4,15,21,22 The absence of large correlations between academic variables and either the SVI score or eSLOE ratings suggests that these 2 selection tools may provide unique information about applicants’ personal competencies.

An important limitation to note is that the correlations between SVI total score and USMLE Step exam scores reported here refer only to the matched eSLOE–SVI sample used for this study and do not describe the relationship between these assessments for the full population. In the EM applicant population (n = 3,469), SVI and Step exam scores were correlated at r = 0.09 for Step 1, r = 0.12 for Step 2 CK, and r = 0.15 for Step 2 CS.17 The primary reason for the notable difference in the Step 2 CS correlation (r = 0.05 for our sample and r = 0.15 for the population) is that our matched study sample had a disproportionately reduced number of Step 2 CS fail scores (23 fails out of 737 total) when compared with the population (84 fails out of 1,058 total).

The correlations between SVI total score and USMLE Step exam scores were smaller than the correlations between eSLOE ratings and Step exam scores, suggesting that the SVI provides unique information about applicants that is not simply a function of cognitive ability. The correlations between SVI total score and eSLOE ratings found in our study are consistent with the correlations reported between personal characteristics and performance variables in employment and educational testing, including those reported in a seminal meta-analysis by Barrick and Mount23 that found personality–performance correlations in the r = 0.1–0.2 range. The sizes of reported correlations in our sample were also likely attenuated by the high skew in eSLOE ratings; as shown in Figure 1, few raters made use of the bottom rating categories. This finding has also been noted in previous studies of the Standardized Letter of Evaluation (SLOE).24,25 The magnitude required for a correlation to have practical significance varies based on the context and potential outcomes; the correlation between smoking and lung cancer (r = 0.1), for example, is small yet widely considered important.26 Until performance outcomes are defined, a limitation of these results is that the practical significance of the eSLOE rating and SVI score correlations cannot be conclusively determined.

In EM and several other specialties, standardized letters are considered important for their ability to communicate information more quickly and effectively than traditional narrative letters.13,14,27 Yet there are limitations to every tool. Studies of the SLOE (and the initial version known as the SLOR) have shown a tendency among raters to preferentially use higher global assessment (e.g., eSLOE C1 and C2) rating categories.24,25,28 Writer experience may bias toward higher rating categories,24,25,28 as writers often do not adhere to published guidelines for completing standardized tools.29 Writers may also vary in how they approach applicant ratings, with some considering the entirety of an individual’s record in addition to clinical performance, while others consider only clinical rotation data. Importantly, eSLOE raters are not necessarily trained to assess interpersonal and communication skills and professionalism in the same ways that SVI raters are trained. However, the length of the relationship between applicants and eSLOE raters is valuable. In addition, the large correlations between eSLOE items (e.g., eSLOE C1 and C2 at r = 0.91) may indicate that certain questions are providing duplicative information, although discordant information for these ratings may have substantial value in the assessment of some applicants.

We found both the SVI total score and eSLOE ratings slightly favored female applicants, and the eSLOE’s global assessment items for applicant ranking (eSLOE C1) and predicted location on the rank list (eSLOE C2) slightly favored white applicants. The absence of large group differences for the SVI is consistent with the literature describing a lack of group differences in structured interviews30; it may also reflect efforts to standardize the SVI rubric scales as well as extensive rater training to minimize implicit bias. The effect size differences for gender and race found in our sample for eSLOE ratings are consistent with meta-analyses on group differences in employee performance appraisals. These studies found that supervisors, on average, tend to rate women higher than men (d = 0.14) and white workers higher than black workers (d = 0.27).31–33 The consistency of our results with these meta-analyses supports the validity of our findings of racial and gender differences in eSLOE ratings and demonstrates that this critical tool in EM residency selection is subject to the same systemic racial biases present in employee appraisal.

Although we found that quantitative eSLOE ratings slightly favored female applicants, previous studies have reported varied results when analyzing gender bias in the narrative portion of the eSLOE.34–36 This scoring advantage for female applicants does not persist through narrative letters, residency assessments, or subsequent faculty hiring and promotion decisions. Bias favoring men has been demonstrated in residency milestone assessments, as well as physician hiring and promotion decisions, especially in academic EM.37–44

SVI raters undergo standardized implicit bias training, but this type of training is not routinely provided to physicians who complete eSLOE assessments. Although the magnitude of black–white differences in eSLOE ratings in our sample (d = 0.27–0.40) is smaller than frequently reported effect size differences for cognitive ability tests (d values of approximately 1.0),45 the presence of racial differences favoring white applicants is concerning given the lack of diversity in EM compared with other medical specialties.46,47 A demographic report for EM trainees in 2017 indicated that the population was 73% white and 63% male.47 Further research is necessary to evaluate the impact of racial differences in applicant assessments on the residency selection process.

There is limited literature examining characteristics of successful international medical graduates in U.S. residency programs. The high degree of heterogeneity in this cohort in our sample severely limits our ability to draw conclusions. Factors affecting assessments of international medical graduates may include variations in training, implicit and overt rater bias, citizenship status, and perceptions of language proficiency.48 Future research in this domain is needed.

Our study was not designed to identify factors contributing to group differences in SVI total scores or eSLOE ratings. There are several factors that may contribute to these differences, including implicit or overt rater bias, rater demographics and characteristics, demographics of the overall EM workforce, societal inequities between groups, and opportunity bias. If EM is to develop an inclusive and diverse workforce reflective of the communities served, the specialty must make conscious and focused efforts to identify and address potential bias in recruitment, training, and assessment practices. This should include efforts such as provision of implicit bias training to EM educators and standardization of eSLOE rating scales to reflect criterion-referenced rather than norm-based ratings. It is important to note that many criteria used in residency selection also demonstrate racial group differences, including election to Alpha Omega Alpha and USMLE Step exam scores.49,50 The presence of group differences in multiple evaluation criteria compounds the challenges to residency programs seeking to identify and match a successful and diverse resident cohort.

Conclusions

This study is a first step in examining the AAMC SVI, a new assessment tool, and comparing it with the eSLOE, an established, frequently used assessment tool in EM. The positive correlations between eSLOE ratings and SVI total score illustrate that these are related but not redundant tools for residency selection in EM. Our findings indicate that the SVI assesses personal competencies that are related to residency selection criteria, with SVI score having a small positive relationship with eSLOE performance ratings that is concordant with the magnitude of previously published personality–performance relations from other fields. This study does not suggest that the SVI should replace the eSLOE but, rather, that the use of multiple tools is likely to provide a more nuanced portrait of the applicant. Development of performance outcome data for residency and the predictive value of both eSLOE ratings and SVI total score should be further explored. We encourage further collaborations and partnerships between organizations and individual residency programs to examine the relationship between EM selection tools and varying performance outcomes. The issues of group differences in the eSLOE ratings in this study are concordant with systemic biases in employee appraisal, and our findings should serve as a call for further refinement of the instrument as well as additional training of raters who use this valuable and widely accepted tool. Further research will be needed to assess the effectiveness of such interventions, their impact on the diversity within the field, and the extent to which racial and gender differences affect various aspects of the residency selection process.

Acknowledgments:

The authors wish to thank the Council of Residency Directors in Emergency Medicine (CORD-EM) Board of Directors for facilitating access to the electronic Standardized Letter of Evaluation (eSLOE) database as well as DeAnna McNett for her contributions to data preparation and analysis and Michele Byers for administrative support of the project. They also wish to thank Keith Dowd for helping clean and link data files; Laura Fletcher for standardizing tables, figures, and references; and Renee Overton for her feedback on early versions of the manuscript.

References

1. Schaverien MV. Selection for surgical training: An evidence-based review. J Surg Educ. 2016;73:721–729.
2. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: Predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach. 2006;28:103–116.
3. Bandiera G, Abrahams C, Ruetalo M, Hanson MD, Nickell L, Spadafora S. Identifying and promoting best practices in residency application and selection in a complex academic health network. Acad Med. 2015;90:1594–1601.
4. Prober CG, Kolars JC, First LR, Melnick DE. A plea to reassess the role of United States Medical Licensing Examination Step 1 scores in residency selection. Acad Med. 2016;91:12–15.
5. Stephenson-Famy A, Houmard BS, Oberoi S, Manyak A, Chiang S, Kim S. Use of the interview in resident candidate selection: A review of the literature. J Grad Med Educ. 2015;7:539–548.
6. Dunleavy D, Geiger T, Overton R, Prescott J. Results of the 2016 program directors survey: Current practices in residency selection. https://store.aamc.org/downloadable/download/sample/sample_id/180. Published September 2016. Accessed June 27, 2019.
7. Dupras DM, Edson RS, Halvorsen AJ, Hopkins RH Jr, McDonald FS. “Problem residents”: Prevalence, problems and remediation in the era of core competencies. Am J Med. 2012;125:421–425.
8. Zbieranowski I, Takahashi SG, Verma S, Spadafora SM. Remediation of residents in difficulty: A retrospective 10-year review of the experience of a postgraduate board of examiners. Acad Med. 2013;88:111–116.
9. Regan L, Hexom B, Nazario S, Chinai SA, Visconti A, Sullivan C. Remediation methods for milestones related to interpersonal and communication skills and professionalism. J Grad Med Educ. 2016;8:18–23.
10. Joint Commission on Accreditation of Healthcare Organizations. Joint Commission Center for Transforming Healthcare releases targeted solutions tool for hand-off communications. Jt Comm Perspect. August 2012;32:1–3.
11. Papadakis MA, Hodgson CS, Teherani A, Kohatsu ND. Unprofessional behavior in medical school is associated with subsequent disciplinary action by a state medical board. Acad Med. 2004;79:244–249.
12. Cherry MG, Fletcher I, O’Sullivan H, Dornan T. Emotional intelligence in medical education: A critical review. Med Educ. 2014;48:468–478.
13. Keim SM, Rein JA, Chisholm C, et al. A standardized letter of recommendation for residency application. Acad Emerg Med. 1999;6:1141–1146.
14. Love JN, Smith J, Weizberg M, et al.; SLOR Task Force. Council of Emergency Medicine Residency Directors’ standardized letter of recommendation: The program director’s perspective. Acad Emerg Med. 2014;21:680–687.
15. Bhat R, Takenaka K, Levine B, et al. Predictors of a top performer during emergency medicine residency. J Emerg Med. 2015;49:505–512.
16. Levashina J, Hartwell CJ, Morgeson FP, Campion MA. The structured employment interview: Narrative and quantitative review of the research literature. Pers Psychol. 2014;67:241–293.
17. Bird SB, Hern HG, Blomkalns A, et al. Innovation in residency selection: The AAMC Standardized Video Interview. Acad Med. 2019;94:1489–1497.
18. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. PLoS Med. 2007;4:e296.
19. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.
20. Buckley RJJ, Hoch VC, Huang RD. Lights, camera, empathy: A request to slow the Emergency Medicine Standardized Video Interview Project study. AEM Educ Train. 2018;2:57–60.
21. Hayden SR, Hayden M, Gamst A. What characteristics of applicants to emergency medicine residency programs predict future success as an emergency medicine resident? Acad Emerg Med. 2005;12:206–210.
22. Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med. 2000;154:256–260.
23. Barrick MR, Mount MK. The big five personality dimensions and job performance: A meta-analysis. Pers Psychol. 1991;44:1–26.
24. Grall KH, Hiller KM, Stoneking LR. Analysis of the evaluative components on the Standard Letter of Recommendation (SLOR) in emergency medicine. West J Emerg Med. 2014;15:419–423.
25. Love JN, Deiorio NM, Ronan-Bentle S, et al.; SLOR Task Force. Characterization of the Council of Emergency Medicine Residency Directors’ standardized letter of recommendation in 2011–2012. Acad Emerg Med. 2013;20:926–932.
26. Nakagawa S, Cuthill IC. Effect size, confidence interval and statistical significance: A practical guide for biologists. Biol Rev Camb Philos Soc. 2007;82:591–605.
27. Girzadas DV Jr, Harwood RC, Dearie J, Garrett S. A comparison of standardized and narrative letters of recommendation. Acad Emerg Med. 1998;5:1101–1104.
28. Beskind DL, Hiller KM, Stolz U, et al. Does the experience of the writer affect the evaluative components on the standardized letter of recommendation in emergency medicine? J Emerg Med. 2014;46:544–550.
29. Hegarty CB, Lane DR, Love JN, et al. Council of Emergency Medicine Residency Directors standardized letter of recommendation writers’ questionnaire. J Grad Med Educ. 2014;6:301–306.
30. Huffcutt AI, Roth PL. Racial group differences in employment interview evaluations. J Appl Psychol. 1998;83:179–189.
31. Roth PL, Purvis KL, Bobko P. A meta-analysis of gender group differences for measures of job performance in field studies. J Manag. 2012;38:719–739.
32. McKay PF, McDaniel MA. A reexamination of black-white mean differences in work performance: More data, more moderators. J Appl Psychol. 2006;91:538–554.
33. Roth PL, Huffcutt AI, Bobko P. Ethnic group differences in measures of job performance: A new meta-analysis. J Appl Psychol. 2003;88:694–706.
34. Li S, Fant AL, McCarthy DM, Miller D, Craig J, Kontrick A. Gender differences in language of standardized letter of evaluation narratives for emergency medicine residency applicants. AEM Educ Train. 2017;1:334–339.
35. Isaac C, Chertoff J, Lee B, Carnes M. Do students’ and authors’ genders affect evaluations? A linguistic analysis of medical student performance evaluations. Acad Med. 2011;86:59–66.
36. Madera JM, Hebl MR, Martin RC. Gender and letters of recommendation for academia: Agentic and communal differences. J Appl Psychol. 2009;94:1591–1599.
37. Dayal A, O’Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med. 2017;177:651–657.
38. Mueller AS, Jenkins TM, Osborne M, Dayal A, O’Connor DM, Arora VM. Gender differences in attending physicians’ feedback to residents: A qualitative analysis. J Grad Med Educ. 2017;9:577–585.
39. Lautenberger DM, Dandar VM, Raezer CL, Sloane RA. The state of women in academic medicine: The pipeline and pathways to leadership, 2015–2016. https://store.aamc.org/downloadable/download/sample/sample_id/228. Published 2016. Accessed June 27, 2019.
40. Edmunds LD, Ovseiko PV, Shepperd S, et al. Why do women choose or reject careers in academic medicine? A narrative review of empirical evidence. Lancet. 2016;388:2948–2958.
41. Wehner MR, Nead KT, Linos K, Linos E. Plenty of moustaches but not enough women: Cross sectional study of medical leaders. BMJ. 2015;351:h6311.
42. Kuhn GJ, Abbuhl SB, Clem KJ; Society for Academic Emergency Medicine (SAEM) Taskforce for Women in Academic Emergency Medicine. Recommendations from the Society for Academic Emergency Medicine (SAEM) Taskforce on women in academic emergency medicine. Acad Emerg Med. 2008;15:762–767.
43. Freund KM, Raj A, Kaplan SE, et al. Inequities in academic compensation by gender: A follow-up to the National Faculty Survey Cohort Study. Acad Med. 2016;91:1068–1073.
44. Jena AB, Olenski AR, Blumenthal DM. Sex differences in physician salary in US public medical schools. JAMA Intern Med. 2016;176:1294–1304.
45. Roth PL, Bevier CA, Bobko P, Switzer FS III, Tyler P. Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Pers Psychol. 2001;54:297–330.
46. Deville C, Hwang WT, Burgos R, Chapman CH, Both S, Thomas CR Jr. Diversity in graduate medical education in the United States by race, ethnicity, and sex, 2012. JAMA Intern Med. 2015;175:1706–1708.
47. Marco CA, Nelson LS, Baren JM, et al.; Research Committee, American Board of Emergency Medicine; American Board of Emergency Medicine. American Board of Emergency Medicine report on residency and fellowship training information (2016–2017). Ann Emerg Med. 2017;69:640–652.
48. Liang M, Curtin LS, Signer MM, Savoia MC. Understanding the interview and ranking behaviors of unmatched international medical students and graduates in the 2013 Main Residency Match. J Grad Med Educ. 2015;7:610–616.
49. Boatright D, Ross D, O’Connor P, Moore E, Nunez-Smith M. Racial disparities in medical student membership in the Alpha Omega Alpha Honor Society. JAMA Intern Med. 2017;177:659–665.
50. Edmond MB, Deschenes JL, Eckler M, Wenzel RP. Racial bias in using USMLE Step 1 scores to grant internal medicine residency interviews. Acad Med. 2001;76:1253–1256.


Copyright © 2019 by the Association of American Medical Colleges