To examine the validity evidence for a scrub training knowledge assessment tool to demonstrate the utility and robustness of a multimodal, entrustable professional activity (EPA)-aligned, mastery learning scrub training curriculum.
Validity evidence was collected for the knowledge assessment used in the scrub training curriculum at Stanford University School of Medicine from April 2017 to June 2018. The knowledge assessment comprised 25 selected-response items mapped to curricular objectives, EPAs, and operating room policies. A mastery passing standard was established using the Mastery Angoff and Patient-Safety approaches. Learners were assessed before the curriculum, after the curriculum, and 6 months after curriculum completion.
From April 2017 to June 2018, 220 medical and physician assistant students participated in the scrub training curriculum. The mean pre- and postcurriculum knowledge scores were 74.4% (standard deviation [SD] = 15.6) and 90.1% (SD = 8.3), respectively, yielding a Cohen's d of 1.10 (P < .001). The internal reliability of the assessment was 0.71. Students with previous scrub training performed significantly better on the precurriculum knowledge assessment than those without previous training (81.9% [SD = 12.6] vs 67.0% [SD = 14.9]; P < .001). The mean item difficulty was 0.74, and the mean item discrimination index was 0.35. The Mastery Angoff overall cut score was 92.0%.
This study describes the administration of, and provides validity evidence for, a knowledge assessment tool within a multimodal, EPA-aligned, mastery-based scrub training curriculum. The authors support the use of scores derived from this assessment for evaluating scrub training knowledge among medical and physician assistant students.