To the Editor:
McGaghie et al1 offer a summary of recent thinking about test validity, primarily citing the work of Kane. Kane suggests a strategy for validation research that focuses on the chain of inferences that supports the interpretation of examination results. However, McGaghie and colleagues' discussion of the use of scores from the United States Medical Licensing Examination (USMLE) seems unnecessarily restrictive, in part because the authors limit both the aspects of Kane's work and the spectrum of relevant research considered.
Kane notes that it is important for score users to clearly state the claims included in their interpretation of test results. We agree that evidence supporting USMLE performance as highly predictive of successful acquisition of the full range of knowledge, skills, and attitudes important for patient care is limited. Nevertheless, USMLE Step 1 and Step 2 Clinical Knowledge performance is undeniably related to mastery of applied basic and clinical science knowledge. If program directors consider a solid foundation in these domains to be an important measure of readiness for growth and development during graduate medical education, then we believe it is reasonable for them to use USMLE scores as a factor in their consideration of applicants.
Kane argues that credible validity evidence based on correlations with a criterion measure requires compelling evidence for the criterion measure. In the case of the criteria cited in McGaghie and colleagues' article, this would require making the case that success in a residency program can be broadly and convincingly defined as success in isolated clinical and procedural skills. This seems like an unreasonable trivialization of residency training.
The design of the USMLE is directed by a broad group of physicians and scientists whose goal is to assess knowledge and skills essential to safe and effective practice as the candidate begins to assume responsibility for patient care. The detection of relationships between USMLE performance and markers of resident performance, albeit modest, as McGaghie et al note, provides evidence in support of this effort. Furthermore, relationships between USMLE performance and performance on other standardized examinations, also dismissed by the authors, speak to the likelihood of continued success on measures that have significant consequences for the individual and for his or her program.
We support McGaghie and colleagues' call for the development of more standardized tools for use in residency selection, and, until this goal is achieved, users of USMLE scores need to clearly understand the limitations of reliance on those scores as a sole criterion in this process. Nevertheless, USMLE scores provide meaningful information on a candidate's fundamental basic and clinical science knowledge, and, when used as one of many measures of candidate readiness, these scores allow a useful comparison among individuals from a broad range of backgrounds and diverse educational experiences.
Gerard F. Dillon, PhD
Vice president, USMLE, National Board of Medical Examiners, Philadelphia, Pennsylvania; firstname.lastname@example.org.
Brian E. Clauser, EdD
Vice president, Measurement Consulting Services, National Board of Medical Examiners, Philadelphia, Pennsylvania.
Donald E. Melnick, MD, MACP
President, National Board of Medical Examiners, Philadelphia, Pennsylvania.
The USMLE program is sponsored by the National Board of Medical Examiners and the Federation of State Medical Boards.