The coronavirus disease 2019 (COVID-19) pandemic has strained our economy, health care system, and personal lives, and is now disrupting the 2020 residency application cycle. Travel limitations have halted away rotations,1 testing centers are unable to meet the demand for licensure examinations,2 and the Coalition for Physician Accountability has delayed the application timeline by 1 month3 and recommended that all programs transition to virtual interviewing.4
These COVID-19-related stressors are laid on top of a resident selection process already under duress: exploding application numbers5 overwhelm program directors, leading them to stray from holistic review of applicants and instead resort to screening tools, such as board scores and clerkship grades, that are unlikely to predict the best-performing residents.6 The interview offer and scheduling process is haphazard,7 and some applicants cancel at the last minute.8 Others use interviews to “practice” for more preferred programs9 or hoard interviews: an estimated 12% of internal medicine applicants accounted for half of all interviews in 2017.10 Widespread virtual interviewing in the upcoming cycle will reduce the time and monetary barriers that have reined in such overinterviewing. Programs are left struggling to ascertain which interviewees are genuinely interested5 and may feel pressure to offer even more interviews than before or risk going unfilled. The hypercompetitive cycle repeats as applicants, caught in a prisoner’s dilemma, feel they must continue to increase the number of programs to which they apply and at which they interview.11
Capping applications has been suggested,5,12–14 a practice that has met with mixed acceptance from candidates15 and programs.16,17 Candidates’ reasons for the number of applications they submit vary.16,18 Moreover, the relationship between a candidate’s number of applications and their probability of matching is confounded by multiple factors,19 threatening the viability of a one-size-fits-all application cap. Various specialties have proposed other ideas, such as preference signaling,20,21 preinterview rank lists,22 and additional stages in the residency match.23–25 Although we support these bold ideas in the long term, the infrastructure needed to implement them thoughtfully may take years to build. Instead, applicants and programs need expedited improvements for the upcoming COVID-affected application cycles.
Thus, we propose limiting the number of interviews candidates may attend in the upcoming residency application cycles. In contrast to applications, the number of contiguous ranks—a reasonable surrogate for interviews26—is a proxy for an applicant’s competitiveness and correlates well with probability of matching. For example, among U.S. MD,27 U.S. DO,28 and U.S. citizen international medical graduates (IMGs)29 applying in the largest specialties (i.e., internal medicine, family medicine, pediatrics, emergency medicine, and anesthesiology, which together allocate two-thirds of all residency positions), ranking 4–10 programs is associated with a > 90% chance of matching. For non-U.S. citizen IMGs, who represent about 20% of the applicant pool, ranking 8–20 programs is associated with a similar chance of matching.29 Above these upper thresholds, additional ranks are not associated with dramatic improvements in match rate, justifying an interview cap.
Educators have promoted interview caps before9,30–32 without elaborating on how they might be operationalized. Here, we describe a novel, self-enforcing model for interview caps, framed by our recent experiences as trainees.
An Interview Ticket System for Capping Interviews
The interview ticket system (ITS) we propose is a residency program–based intervention to cap residency interviews (Figure 1). First, each specialty electing to participate would agree to a standard per-applicant interview cap based on specialty-specific data. For example, across all types of applicants (MD and DO, U.S. and international) in internal medicine, evidence supports that ranking 12 programs is associated with a > 95% chance of matching, with little improvement in match rate thereafter.27–29 Thus, internal medicine could set a cap of 12 interviews. Next, residency programs within each specialty would elect to participate and notify all applicants of their participation before the application season.
Applicants would then receive their set number of interview tickets—each a unique code (e.g., applicant ID + random number)—electronically from the ITS before the interview season. Interview offers and scheduling would occur via existing infrastructure, which is important because scheduling systems vary by specialty and program.33 At the time of an interview with a participating program, applicants would cash in their ticket to that program, and the ticket would be converted from “unused” to “used” through an electronic interface with the ITS. If a program attempted to convert a ticket that was invalid or already used, the ITS would immediately alert the program. If the applicant made an error, this could be rectified. However, if the applicant had indeed interviewed at more programs than the cap, the participating program would have the liberty to decide what to do with that information and could initiate a dialogue with the student. Such behavior would presumably reflect poorly on the applicant and serve to enforce the system. If a student were to interview at a nonparticipating program, no ticket would be exchanged, and that interview would not count against their allotted number of tickets.
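To make the ticket lifecycle concrete, the issue–redeem–alert flow described above can be sketched in a few lines of code. This is a hypothetical, minimal model rather than a specification: the class name, ticket code format, and return values are illustrative choices of ours, not part of the proposal.

```python
import secrets

class InterviewTicketSystem:
    """Minimal sketch of the proposed ITS ticket lifecycle (illustrative only)."""

    def __init__(self, cap):
        self.cap = cap          # specialty-agreed per-applicant interview cap
        self.tickets = {}       # ticket code -> {"applicant": id, "used_by": program or None}

    def issue_tickets(self, applicant_id):
        """Issue `cap` unique ticket codes (applicant ID + random suffix) before interview season."""
        codes = [f"{applicant_id}-{secrets.token_hex(4)}" for _ in range(self.cap)]
        for code in codes:
            self.tickets[code] = {"applicant": applicant_id, "used_by": None}
        return codes

    def redeem(self, code, program):
        """Convert a ticket from 'unused' to 'used' when cashed in at an interview."""
        ticket = self.tickets.get(code)
        if ticket is None:
            return "ALERT: invalid ticket"       # program is alerted immediately
        if ticket["used_by"] is not None:
            return "ALERT: ticket already used"  # possible over-interviewing; program decides what to do
        ticket["used_by"] = program
        return "ok"
```

A nonparticipating program simply never calls `redeem`, so, as in the proposal, such interviews leave the applicant’s allotment untouched.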
In our discussions with local data scientists, the ITS as proposed would be technologically feasible and simple to design. Ideally, a disinterested third party without a direct stake in who obtains interviews—which could include national organizations in academic medicine, specialty groups, or an external not-for-profit organization—would host and maintain the ITS. The system would generate rich data regarding applicant interview behavior. With appropriate deidentification and data protection, pairing such data with Association of American Medical Colleges (AAMC) and National Resident Matching Program datasets could provide medical school advisors actionable information for advising medical students in the future.12
The ITS would benefit both applicants and programs. Highly competitive applicants would attend fewer interviews but would see little change in their match rate, as the incremental yield of each interview above the evidence-based cap is minimal.26 The interview offers that highly competitive applicants would “give up” due to the cap would then be available to less competitive applicants, potentially leading to an improved match rate. Most important, all applicants would be able to signify genuine interest to programs by using an interview ticket—especially critical when away rotations have been suspended. Finally, moving to virtual interviews that are limited in number should help decrease time spent interviewing and traveling, which may help make up for lost clinical training time due to COVID-19.34
Participating programs will in turn benefit by only interviewing serious candidates, which should reduce the number of interviews (and ranks) programs require to fill. Prior research indicates that > 85% of programs’ cost of interviewing applicants is from faculty salary overhead and lost productivity35,36; capping interview numbers may reduce these costs. With successive iterations of an interview cap, the yield of each interview slot offered by a program resulting in a match would increase, permitting fewer interview slots and promoting more holistic review. By capping interviews, applicants would also be more thoughtful about which interviews they initially accept—counterbalancing the likely increase in cancelations with virtual interviews that could strain even the most nimble programs.16
The ITS could be implemented in different ways for different specialties. For example, some specialties may prefer that ticket exchange occur at the time of interview offer acceptance, which would significantly reduce the number of late cancellations but would add complexity regarding how—or whether—tickets would be reinstated as “unused” should an applicant cancel their interview. Ticket exchange could theoretically be integrated into the scheduling process; however, the many different scheduling systems used by programs would make this technologically challenging.
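If a specialty did opt for ticket exchange at offer acceptance, reinstatement on cancellation amounts to a single state transition. The sketch below is a hypothetical illustration; the dictionary layout and function name are our assumptions, not details of the proposal.

```python
def reinstate(tickets, code):
    """Return an exchanged ticket to 'unused' when an applicant cancels an
    interview whose ticket was cashed in at offer acceptance (illustrative)."""
    ticket = tickets.get(code)
    if ticket is None or ticket["used_by"] is None:
        return False            # unknown ticket, or nothing to reinstate
    ticket["used_by"] = None    # ticket becomes available to spend again
    return True
```

A specialty worried about gaming could instead log each reinstatement, preserving the rich data on cancellation behavior mentioned earlier.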
Finally, several applicant groups warrant special consideration. Individuals who are couples matching tend to submit more applications and attend more interviews than their peers, with the match likelihood of each dyad dependent on factors related to both applicants. A specialty-specific, one-size-fits-all interview cap could negatively affect certain couples. To address this, couples matching could be excluded from the ITS or given an increased interview cap, although such approaches might create perverse incentives for couples matching. Likewise, IMG and DO applicants require more interviews than their U.S. MD counterparts to achieve a similar match rate; care would be needed to ensure each specialty accounts for these groups in determining its interview cap number. For those applying to more than one specialty in parallel, we recommend they abide by the cap of each specialty to which they apply.
ITS Limitations and Alternatives
The ITS we propose has several limitations. Obtaining specialty-wide consensus and student acceptance of a specific cap number could be contentious. To help assuage fears that such a cap could harm match outcomes for applicants or programs, we recommend modeling within a specialty, as has been done for preference signaling.37 Robust program participation will be critical for evidence-based caps to have their intended effect. To encourage adequate participation, specialties should aim to identify an esteemed physician leader within their specialty to champion ITS implementation, and could enforce an “all in” policy to ensure sufficient program participation. Although we would recommend a conservative (high) cap number in the first implementation, additional tickets could be distributed to all applicants midcycle if needed, which should mitigate fears of making such tickets too scarce. Additionally, specialties implementing a cap would need to recognize that the data supporting interview caps are based on in-person interviews and could change if the number of interview slots programs offer changes. Within specialties choosing to use the ITS, programs should commit to interviewing roughly the same number of applicants.
Another critique we anticipate is legal threat: what if a student who attended “only” the capped number of interviews (say, 12) does not match and sues because they could not attend 13 interviews? We think this situation is unlikely to occur, as the number of applicants in such a category would be exceedingly small. We are not attorneys but can find no precedent for such a suit. Nevertheless, we recognize that the mere fear of litigation might make an organization hesitant to host the ITS. We would respond that, in our proposed ITS model, participation is voluntary for both applicants and programs. Furthermore, programs could assert that they would favorably evaluate applicants who adhered to the ITS cap but would not strictly exclude applicants who chose not to participate. This would further reduce avenues for legal challenge but might create a Hobson’s choice for applicants.
Alternatives to a ticket system exist for capping interviews. One option would be an honor system, with specialties or programs asking students to self-regulate the number of interviews they complete. To strengthen this system, programs could ask applicants at the time of their interview to sign a nonbinding honor code acknowledgment indicating that they are not exceeding the specialty-designated maximum number of interviews. Several medical schools have built preclinical examinations around an honor code, with the vast majority of students abiding by the code.38 Finally, interview caps could be student governed through an online forum, similar to existing forums39–41 through which candidates share application data and interview information, for which some reliability evidence exists.42 However, the resident selection process is high stakes and draws applicants from many different communities; we worry that the few students who disregard the honor code, or a student-governed cap, could create a toxic and inequitable environment for other applicants.
Caps Will Require Preinterview Transparency and High-Quality Interviewing
For individuals whose interviews are capped, each interview would carry more weight. Accurate preinterview advising and information sharing would be essential for applicants to know which interviews to accept and attend. Although specialty-level data exist,43 applicants’ compatibility with specific programs is difficult for advisors to prognosticate. Recent work suggests even program directors do not understand what other programs are doing to vet applicants.44 Ongoing efforts to improve the transparency of selection factors and compatibility, such as the AAMC Residency Explorer,45 should be bolstered to provide applicants and advisors a clear-eyed picture of which programs would be a good match. Programs can do their part by improving their websites—which often are not comprehensive46–50—before the application season, ideally through standardized tabulation of the program features that are most important to applicants for determining if they are a good fit with a program.51 Without such program-led transparency, applicants may rely on flawed social media ranking services.52 Interview day activities could be changed to virtual experiences before applicants expend a precious interview. Programs could share interactive virtual hospital tours53–55 or didactic sessions39—perhaps even the virtual lectures created for resident education during the COVID-19 pandemic56–59—to help applicants get a better feel for the program before accepting an interview.
In addition to improving preinterview transparency, standardizing the interview offer and scheduling process would help to optimize ITS implementation. For example, in the 2019–2020 residency application cycle, the majority of obstetrics–gynecology residency programs committed to standardized interview offer dates and allowed 72 hours for applicants to accept interviews.60 All specialties should adopt similar standards to mitigate applicant anxiety61 and allow applicants to use their allotted tickets without concern for additional interview offers coming at a later date.
Improving the interview day will also be vital. Unstructured interviews lack interrater reliability,62,63 and unblinded interviews may merely recapitulate board scores.64 Virtual interviewing may offer an opportunity for programs to revamp their interview process to align with best practices such as using structured interviews,65 standardizing questions, using multiple observers, training interviewers in format and scoring, and blinding the interviewer to minimize bias.66 Prior experiences with virtual interviews67–69—including lessons learned from the AAMC Standardized Video Interview70—must directly inform educators and programs as they design virtual interviews for the upcoming application cycle. Finally, educators must consider how virtual interviews may perpetuate or shift inequities in the resident selection process. The reduced cost of interviewing71 may open a door for some lower-income applicants, yet those applicants without fast or reliable Internet or whose home environments are not optimal for a professional interview may be disadvantaged.55 Thoughtfully mitigating these and other known biases72 will be critical.
COVID-19 has created unimaginable challenges, yet also offers an unprecedented opportunity for reflection, innovation, and transformative change. Our health care system has mobilized large-scale changes in weeks that were previously projected to take years—we can do the same for our residency interview process. Interview caps do not cure all that ails the residency application process but would ensure each interview represents a genuine interaction between applicant and program. The ITS we propose would be a low-cost, simple, and self-enforcing implementation of such interview caps that requires no changes to existing infrastructure for application submission and interview offer and scheduling. Combined with increased preinterview transparency and attention to high-quality interviewing practices, capping interviews would make the residency interview process more reliable, equitable, and constructive. The medical education community has been calling for change in the resident selection process for decades; there has never been a better time to innovate and improve the experience for both applicants and programs.
The authors wish to thank J. Bryan Carmody, Louis Miller, and Matthew R. Carey for their critical review of the manuscript.
1. Association of American Medical Colleges. Coronavirus (COVID-19) and the VSLO Program. https://students-residents.aamc.org/attending-medical-school/article/coronavirus-covid-19-and-vslo-program. Accessed September 2, 2020.
2. United States Medical Licensing Examination. Coronavirus (COVID-19) 4/10/2020 update: Prometric closures and Step 1, Step 2 CK, and Step 3. https://www.usmle.org/announcements/?ContentId=268. Accessed September 2, 2020.
3. Association of American Medical Colleges. ERAS 2021 Residency Timeline. https://students-residents.aamc.org/applying-residency/article/eras-timeline-md-residency. Published 2020. Accessed September 2, 2020.
4. The Coalition for Physician Accountability. Final report and recommendations for medical education institutions of LCME accredited, U.S. osteopathic, and non-U.S. medical school applicants. https://www.aamc.org/system/files/2020-05/covid19_Final_Recommendations_Executive%20Summary_Final_05112020.pdf. Accessed September 2, 2020.
5. Weissbart SJ, Kim SJ, Feinn RS, Stock JA. Relationship between the number of residency applications and the yearly match rate: Time to start thinking about an application limit? J Grad Med Educ. 2015;7:81–85.
6. Wagner JG, Schneberk T, Zobrist M, et al. What predicts performance? A multicenter study examining the association between resident performance, rank list position, and United States Medical Licensing Examination Step 1 Scores. J Emerg Med. 2017;52:332–340.
7. Klein MR, Sanguino SM, Salzman DH. A challenge to disrupt the disruptive process of residency interview invitations. J Grad Med Educ. 2019;11:375–377.
8. Nilsen K, Walling A, Callaway P, et al. “The end game”—Students’ perspectives of the National Residency Matching Program: A focus group study. Med Sci Educ. 2018;28:729–737.
9. Katsufrakis PJ, Uhler TA, Jones LD. The residency application process: Pursuing improved outcomes through better understanding of the issues. Acad Med. 2016;91:1483–1487.
10. Lee AH, Young P, Liao R, Yi PH, Reh D, Best SR. I dream of Gini: Quantifying inequality in otolaryngology residency interviews. Laryngoscope. 2019;129:627–633.
11. Berger JS, Cioletti A. Viewpoint from 2 graduate medical education deans application overload in the residency match process. J Grad Med Educ. 2016;8:317–321.
12. Pereira AG, Chelminski PR, Chheda SG, et al.; Medical Student to Resident Interface Committee Workgroup on the Interview Season. Application inflation for internal medicine applicants in the match: Drivers, consequences, and potential solutions. Am J Med. 2016;129:885–891.
13. Naclerio RM, Pinto JM, Baroody FM. Drowning in applications for residency training: A program’s perspective and simple solutions. JAMA Otolaryngol Head Neck Surg. 2014;140:695–696.
14. Molina Burbano F, Yao A, Burish N, et al. Solving congestion in the plastic surgery match: A game theory analysis. Plast Reconstr Surg. 2019;143:634–639.
15. Ward M, Pingree C, Laury AM, Bowe SN. Applicant perspectives on the otolaryngology residency application process. JAMA Otolaryngol Head Neck Surg. 2017;143:782–787.
16. Puscas L. Viewpoint from a program director they can’t all walk on water. J Grad Med Educ. 2016;8:314–316.
17. Sweet ML, Williams CM, Stewart E, et al. Internal medicine residency program responses to the increase of residency applications: Differences by program type and characteristics. J Grad Med Educ. 2019;11:698–703.
18. Angus SV, Williams CM, Kwan B, et al. Drivers of application inflation: A national survey of internal medicine residents. Am J Med. 2018;131:447–452.
19. Association of American Medical Colleges. Apply Smart: Data to consider when applying for residency. https://students-residents.aamc.org/applying-residency/filteredresult/apply-smart-data-consider-when-applying-residency. Accessed September 2, 2020.
20. Bernstein J. Not the last word: Want to match in an orthopaedic surgery residency? Send a rose to the program director. Clin Orthop Relat Res. 2017;475:2845–2849.
21. Salehi PP, Benito D, Michaelides E. A novel approach to the National Resident Matching Program—The star system. JAMA Otolaryngol Head Neck Surg. 2018;144:397–398.
22. Melcher ML, Wapnir I, Ashlagi I. May the interview be with you: Signal your preferences. J Grad Med Educ. 2019;11:39–40.
23. Hammoud MM, Andrews J, Skochelak SE. Improving the residency application and selection process: An optional early result acceptance program. JAMA. 2020;323:503–504.
24. Monir JG. Reforming the match: A proposal for a new 3-phase system. J Grad Med Educ. 2020;12:7–9.
25. Arnold L, Sullivan C, Okah FA. A free-market approach to the match: A proposal whose time has not yet come. Acad Med. 2018;93:16–19.
26. Carmody JB. Applying smarter: A critique of the AAMC Apply Smart tools. J Grad Med Educ. 2020;12:10–13.
27. National Resident Matching Program. Charting outcomes in the match: U.S. allopathic seniors. https://www.nrmp.org/wp-content/uploads/2018/06/Charting-Outcomes-in-the-Match-2018-Seniors.pdf. Updated July 2018. Accessed September 2, 2020.
28. National Resident Matching Program. Charting outcomes in the match: Senior students of U.S. osteopathic medical schools. https://mk0nrmp3oyqui6wqfm.kinstacdn.com/wp-content/uploads/2018/06/Charting-Outcomes-in-the-Match-2018-Osteo.pdf. Published 2018. Accessed September 2, 2020.
29. National Resident Matching Program. Charting outcomes in the match: International medical graduates. https://mk0nrmp3oyqui6wqfm.kinstacdn.com/wp-content/uploads/2018/06/Charting-Outcomes-in-the-Match-2018-IMGs.pdf. Published 2018. Accessed September 2, 2020.
30. Frush BW, Byerley J. High-value interviewing: A call for quality improvement in the match process. Acad Med. 2019;94:324–327.
31. Gruppuso PA, Adashi EY. Residency placement fever: Is it time for a reevaluation? Acad Med. 2017;92:923–926.
32. Hammoud MM, Standiford T, Carmody JB. Potential implications of COVID-19 for the 2020–2021 residency application cycle. JAMA. 2020;324:29–30.
33. Burk-Rafel J. You missed a late-night email. Could you lose your dream residency? AAMC News and Insights. https://www.aamc.org/news-insights/you-missed-late-night-email-could-you-lose-your-dream-residency. Updated October 4, 2018. Accessed September 2, 2020.
34. Association of American Medical Colleges. Important guidance for medical students on clinical rotations during the coronavirus (COVID-19) outbreak. https://www.aamc.org/news-insights/press-releases/important-guidance-medical-students-clinical-rotations-during-coronavirus-covid-19-outbreak. Updated April 17, 2020. Accessed September 2, 2020.
35. Moore DB. Not a cheap investment: Estimating the cost of the 2017 to 2018 ophthalmology residency match to the applicant and program. J Acad Ophthalmol. 2018;10:e158–e162.
36. Van Dermark JT, Wald DA, Corker JR, Reid DG. Financial implications of the emergency medicine interview process. AEM Educ Train. 2017;1:60–69.
37. Whipple ME, Law AB, Bly RA. A computer simulation model to analyze the application process for competitive residency programs. J Grad Med Educ. 2019;11:30–35.
38. Ross PT, Keeley MG, Mangrulkar RS, Karani R, Gliatto P, Santen SA. Developing professionalism and professional identity through unproctored, flexible testing. Acad Med. 2019;94:490–495.
39. Otomatch. https://www.otomatch.com. Accessed September 2, 2020.
40. The Student Doctor Network. https://www.studentdoctor.net/profession/medical. Accessed September 2, 2020.
42. Hu S, Laughter MR, Dellavalle RP. Reliability of self-reported data on social media versus National Residency Match Program charting outcomes for dermatology applicants. J Am Acad Dermatol. 2020;83:1842–1844.
43. National Resident Matching Program. Results of the 2018 NRMP Program Director Survey. https://www.nrmp.org/wp-content/uploads/2018/07/NRMP-2018-Program-Director-Survey-for-WWW.pdf. Published 2018. Accessed September 2, 2020.
44. Garber AM, Kwan B, Williams CM, et al. Use of filters for residency application review: Results from the internal medicine in-training examination program director survey. J Grad Med Educ. 2019;11:704–707.
46. Svider PF, Gupta A, Johnson AP, et al. Evaluation of otolaryngology residency program websites. JAMA Otolaryngol Head Neck Surg. 2014;140:956–960.
47. Stoeger SM, Freeman H, Bitter B, Helmer SD, Reyes J, Vincent KB. Evaluation of general surgery residency program websites. Am J Surg. 2019;217:794–799.
48. Patel SJ, Abdullah MS, Yeh PC, Abdullah Z, Jayaram P. Content evaluation of physical medicine and rehabilitation residency websites [published online ahead of print December 16, 2019]. PM&R. doi:10.1002/pmrj.12303
49. Reilly EF, Leibrandt TJ, Zonno AJ, Simpson MC, Morris JB. General surgery residency program websites: Usefulness and usability for resident applicants. Curr Surg. 2004;61:236–240.
50. Shaath DS, Whittaker TJ. Evaluation of ophthalmology residency program web sites. J Acad Ophthalmol. 2019;11:e44–e48.
51. Phitayakorn R, Macklin EA, Goldsmith J, Weinstein DF. Applicants’ self-reported priorities in selecting a residency program. J Grad Med Educ. 2015;7:21–26.
52. Lorch AC, Miller JW, Kloek CE. Accuracy in residency program rankings on social media. J Grad Med Educ. 2019;11:127–128.
53. Hariton E, Bortoletto P, Ayogu N. Residency interviews in the 21st century. J Grad Med Educ. 2016;8:322–324.
54. Healy WL, Bedair H. Videoconference interviews for an adult reconstruction fellowship: Lessons learned. J Bone Joint Surg Am. 2017;99:e114.
55. Pourmand A, Lee H, Fair M, Maloney K, Caggiula A. Feasibility and usability of tele-interview for medical residency interview. West J Emerg Med. 2018;19:80–86.
56. Chick RC, Clifton GT, Peace KM, et al. Using technology to maintain the education of residents during the COVID-19 pandemic. J Surg Educ. 2020;77:729–732.
57. Kogan M, Klein SE, Hannon CP, Nolte MT. Orthopaedic education during the COVID-19 pandemic. J Am Acad Orthop Surg. 2020;28:e456–e464.
58. Kwon YS, Tabakin AL, Patel HV, et al. Adapting urology residency training in the COVID-19 era. Urology. 2020;141:15–19.
59. Comer BT, Gupta N, Mowry SE, Malekzadeh S. Otolaryngology education in the setting of COVID-19: Current and future implications. Otolaryngol Head Neck Surg. 2020;163:70–74.
60. Murphy B. The Match process is packed with stress. Ob-gyns aim to fix it. American Medical Association: Improve GME. https://www.ama-assn.org/education/improve-gme/match-process-packed-stress-ob-gyns-aim-fix-it. Published 2019. Accessed September 2, 2020.
61. Strand EA, Sonn TS. The residency interview season: Time for commonsense reform. Obstet Gynecol. 2018;132:1437–1442.
62. Conway JM, Jako RA, Goodman DF. A meta-analysis of interrater and internal consistency reliability of selection interviews. J Appl Psychol. 1995;80:565–579.
63. Patrick LE, Altmaier EM, Kuperman S, Ugolini K. A structured interview for medical school admission, Phase 1: Initial procedures and results. Acad Med. 2001;76:66–71.
64. Smilen SW, Funai EF, Bianco AT. Residency selection: Should interviewers be given applicants’ board scores? Am J Obstet Gynecol. 2001;184:508–513.
65. Marcus-Blank B, Dahlke JA, Braman JP, et al. Predicting performance of first-year residents: Correlations between structured interview, licensure exam, and competency scores in a multi-institutional study. Acad Med. 2019;94:378–387.
66. Stephenson-Famy A, Houmard BS, Oberoi S, Manyak A, Chiang S, Kim S. Use of the interview in resident candidate selection: A review of the literature. J Grad Med Educ. 2015;7:539–548.
67. Breitkopf DM, Green IC, Hopkins MR, Torbenson VE, Camp CL, Turner NS 3rd. Use of asynchronous video interviews for selecting obstetrics and gynecology residents. Obstet Gynecol. 2019;134(suppl 1):9S–15S.
68. Shah SK, Arora S, Skipper B, Kalishman S, Timm TC, Smith AY. Randomized evaluation of a web based interview process for urology resident selection. J Urol. 2012;187:1380–1384.
69. Pasadhika S, Altenbernd T, Ober RR, Harvey EM, Miller JM. Residency interview video conferencing. Ophthalmology. 2012;119:426–426.
70. Bird SB, Hern HG, Blomkalns A, et al. Innovation in residency selection: The AAMC standardized video interview. Acad Med. 2019;94:1489–1497.
71. Edje L, Miller C, Kiefer J, Oram D. Using Skype as an alternative for residency selection interviews. J Grad Med Educ. 2013;5:503–505.
72. Maxfield CM, Thorpe MP, Desser TS, et al. Bias in radiology resident selection: Do we discriminate against the obese and unattractive? Acad Med. 2019;94:1774–1780.