The Development and Preliminary Validation of a Rubric to Assess Medical Students’ Written Summary Statements in Virtual Patient Cases

Smith, Sherilyn MD; Kogan, Jennifer R. MD; Berman, Norman B. MD; Dell, Michael S. MD; Brock, Douglas M. PhD; Robins, Lynne S. PhD

doi: 10.1097/ACM.0000000000000800
Research Reports

Purpose The ability to create a concise summary statement can be assessed as a marker for clinical reasoning. The authors describe the development and preliminary validation of a rubric to assess such summary statements.

Method Between November 2011 and June 2014, four researchers independently coded 50 summary statements, randomly selected from a large database of medical students' summary statements in virtual patient cases, each to create an assessment rubric. Through an iterative process, they created a consensus assessment rubric and applied it to 60 additional summary statements. Cronbach alpha calculations determined the internal consistency of the rubric components, intraclass correlation coefficient (ICC) calculations determined interrater agreement, and Spearman rank–order correlations determined the correlations between rubric components. Researchers' comments describing their individual rating approaches were analyzed using content analysis.

Results The final rubric included five components: factual accuracy, appropriate narrowing of the differential diagnosis, transformation of information, use of semantic qualifiers, and a global rating. Internal consistency was acceptable (Cronbach alpha 0.771). Interrater reliability for the entire rubric was acceptable (ICC 0.891; 95% confidence interval 0.859–0.917). Spearman calculations revealed a range of correlations across cases. Content analysis of the researchers' comments indicated differences in their application of the assessment rubric.

Conclusions This rubric has potential as a tool for feedback and assessment. Opportunities for future study include establishing interrater reliability with other raters and on different cases, designing training for raters to use the tool, and assessing how feedback using this rubric affects students’ clinical reasoning skills.

Supplemental Digital Content is available in the text.

S. Smith is professor, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington.

J.R. Kogan is associate professor, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania.

N.B. Berman is professor, Department of Pediatrics, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, and executive medical director, MedU, Lebanon, New Hampshire.

M.S. Dell is professor, Department of Pediatrics, Case Western Reserve University School of Medicine, and director of undergraduate medical education, Rainbow Babies and Children’s Hospital, Cleveland, Ohio.

D.M. Brock is associate professor, Department of Family Medicine and MEDEX Northwest, University of Washington School of Medicine, Seattle, Washington.

L.S. Robins is professor, Departments of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: This study received an exemption from the institutional review board of the University of Washington (October 21, 2011).

Previous presentations: Association of American Medical Colleges annual meeting, November 2013, Philadelphia, Pennsylvania; Council on Medical Student Education in Pediatrics annual meeting, March 2014, Ottawa, Ontario, Canada.

Supplemental digital content for this article is available at

Correspondence should be addressed to Sherilyn Smith, Seattle Children’s Hospital, 4800 Sand Point Way NE, Mailstop: MA 7.226, Seattle, WA 98006; telephone: (206) 987-2073; e-mail:

© 2016 by the Association of American Medical Colleges