Cognitive load (CL) theory provides a framework to inform simulation instructional design. Reliable measures of CL types (intrinsic [IL], extraneous [EL], and germane load [GL]) in simulation are lacking. We developed the novel Cognitive Load Assessment Scales in Simulation (CLAS-Sim) and report validity evidence using Kane's framework.
This quasi-experimental study tested the effect of a segmented (pause-and-debrief) versus standard (end-of-case debrief) intervention on pediatric residents' performance and self-rated CL in 2 simulations, one complex and one simple case. After each simulation, participants completed 22 items measuring the CL types. Three validity inferences were examined: scoring (instrument development and principal component analysis); generalization (internal consistency reliability of CL-component items across cases); and extrapolation [CLAS-Sim correlations with the single-item Paas scale, which measures overall CL; differences in CL by primary task performance (high vs low); and discriminant validity of IL under different instructional-design conditions].
Seventy-four residents completed both simulations and postcase CLAS-Sim measures. The principal component analysis yielded 3 components: 4-item IL, 4-item EL, and 3-item GL scales (Cronbach's α, 0.68–0.77). The Paas scores correlated with CLAS-Sim IL and total CL scores in both cases (rs range, 0.39–0.70; P ≤ 0.001). High complex-case performers reported lower IL and total CL (analyses of variance, each P < 0.001). In multivariate analyses of variance, CLAS-Sim IL, GL, and total CL varied across both cases by arm (each P ≤ 0.018); the segmented-debrief arm reported lower IL than the standard-debrief arm in both cases (each P ≤ 0.01).
The CLAS-Sim shows preliminary validity evidence for distinguishing 3 CL types; further study is needed to evaluate the impact of simulation-design elements on CL and learning.