
A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations

Pecaric, Martin PhD; Boutis, Kathy MSc, MD; Beckstead, Jason PhD; Pusic, Martin MD, PhD

doi: 10.1097/ACM.0000000000001234


Cognitive simulations can promote effective skill development. Examples of this from health professions education include virtual patients and cognitive task trainers.1,2 These simulations are typically based on well-tested instructional models such as deliberate practice in which the learner engages in an aspect of a clinical task and then is given feedback on their performance.3,4 Ideally, the simulation is repeated in performance–feedback cycles governed by a learning curve relationship.5 The efficiency with which one learns using this model depends in part on the difficulty of the material (e.g., a steep learning curve) and on the effectiveness of the feedback.6–8

In learning cognitive clinical skills, feedback that is oriented to the process of problem solving can be more effective than pure outcome feedback.6,9,10 We define process data as data that capture details about how a learner interacts with a learning case, which is distinct from outcome data, which describe the correctness of the final learner response.11 Consider the simulation of a radiology case in which a learner interprets a radiograph of an elbow joint to determine the presence or absence of a fracture. Outcome feedback would compare the learner’s interpretation with that of an expert radiologist. However, simply being given the correct answer (e.g., “there is a supracondylar fracture”) is often not as effective as being shown the process by which the fracture is found (e.g., “by following an imaginary line along the anterior border of the humerus, supracondylar fractures are detected when …”). Further, both educators and learners are better informed about how a learner acquires a skill when they examine the process by which a learner arrives at a diagnosis.12–14

Advances in educational technology have made it possible to collect tremendous amounts of process data in a digital environment.15 Collecting this large amount of data and analyzing it for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning.15,16 Big data are large, heterogeneous data sets that require specialized tools beyond those that are built into standard databases or reporting software.17,18 Data analytics refers to the use of statistical or data mining techniques to synthesize information or to make useful predictions.16,17,19 When data analytics are applied to education data, they are often referred to as learning analytics.16,17,19

The BD/LA approach is often used in online courses delivered to grade school or college learners.15,19 Specifically, massive open online courses use learning analytics data to detect levels of engagement, track learner progress, and assign badges documenting achievement.19 In the education of health care professionals, the application of BD/LA is more limited to date. This is expected to change as more health professions education content becomes available in online learning repositories.20–22

In this paper, we discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations, using the data from an education study where the subject matter was radiograph interpretation.23 We first give a brief overview of the radiology study and then, through the lens of a cognitive model, outline a number of BD/LA measures that could support learning from this radiograph case simulation and other similar cognitive simulations.

The Cognitive Simulation Example: Radiograph Interpretation

To demonstrate how a BD/LA approach can be used for process-level feedback in cognitive simulation, we present a reanalysis of data from a series of studies on learning how to interpret radiographs. This visual diagnosis task can be well simulated in an online environment23,24 and is also ideal because the cognitive models for it are well understood.25,26 Research by Kundel et al26 used a number of empirical approaches to elucidate the cognitive basis of radiograph interpretation. They outlined four phases (or cognitive subtasks) of radiograph interpretation demonstrated by expert radiologists: orientation to the radiograph, searching/scanning the radiograph, feature detection, and decision making. In Table 1, we list the four cognitive subtasks of interpreting a radiograph with corresponding BD/LA strategies for optimizing interpretation accuracy.

Table 1: Cognitive Basis for a Big Data/Learning Analytics Approach to Radiograph Interpretation

We have published several reports describing screen-based simulations, using the pediatric ankle radiograph as our model.23,27 The methodological details are described in full in the original reports and in Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A364).23,27 Briefly, we collected 234 consecutive cases from the pediatric emergency department at the Hospital for Sick Children, Toronto, Ontario, Canada, representing over two years’ experience for a full-time pediatric emergency medicine practitioner. Forty-six participants from three expertise levels (medical students [n = 20], residents [n = 18], and attending physicians [n = 8]) each interpreted the 234 ankle radiographs through a standard Web browser (see Figure 1). They were presented with a one-line summary of the clinical history and could then view unmarked anteroposterior, oblique, and lateral views of the ankle in whatever order and with whatever frequency they preferred, with no time limit. Once ready, the user completed a dialogue box indicating whether the case was normal or abnormal (fractured). For abnormal cases, they further indicated the location of the fracture directly on the image. They received immediate feedback consisting of the original radiologist’s classification, onscreen highlighting of any abnormal features on the images, and the full text of the radiology report.

Figure 1: Layout of the six screens for one ankle radiograph case. Participants started each case on the history page and then could access any of the radiology views (V1–V3) in any order they chose and with any frequency. Once the participant was ready to commit to their diagnosis, they clicked a one-way “Submit” button which led to the feedback 1 page, which showed the correctness of their answer and gave them access to the text of the original radiologist’s dictated report (feedback 2 page). All transitions from screen to screen were documented in the MySQL database. Abbreviation: AP indicates anteroposterior.

The Big Data Approach

Strictly defined, big data are characterized by the collection of high volumes of data of considerable variety. The core idea is to generate and collect as much relevant data as are available. In our radiology example, the cases were presented within a PHP 5.5 (Zend Technologies Ltd., Cupertino, California) and Adobe Flash Player (Adobe Systems Inc., Mountain View, California) shell that allowed us to track, within an institutional-review-board-approved protocol (see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A364), every click interaction generated by every user. Specifically, each click was time-stamped and documented in a MySQL database, including every page turn, every answer, each user’s confidence in each answer, and the Cartesian x,y coordinates of the point where the user indicated a fracture/abnormality might be. This was repeated for each of the hundreds of cases completed by the participants, resulting in tens of thousands of data points.
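To make the logging concrete, the sketch below (in Python, with SQLite standing in for the study’s MySQL database) shows the kind of time-stamped click-event record described above; the table layout and field names are illustrative assumptions, not the study’s actual schema.

```python
import sqlite3
import time

# Minimal sketch of a click-event log; schema and field names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE click_events (
        user_id    INTEGER,
        case_id    INTEGER,
        screen     TEXT,   -- e.g., 'history', 'view_ap', 'view_oblique', 'view_lateral'
        event_type TEXT,   -- e.g., 'page_turn', 'answer', 'mark_abnormality'
        x          REAL,   -- Cartesian coordinates of a suspected fracture mark, if any
        y          REAL,
        ts         REAL    -- Unix time stamp of the click
    )
""")

def log_event(user_id, case_id, screen, event_type, x=None, y=None):
    """Record one time-stamped click interaction."""
    conn.execute(
        "INSERT INTO click_events VALUES (?, ?, ?, ?, ?, ?, ?)",
        (user_id, case_id, screen, event_type, x, y, time.time()),
    )

# Example: a user opens the anteroposterior view, then marks a suspected fracture.
log_event(user_id=7, case_id=42, screen="view_ap", event_type="page_turn")
log_event(user_id=7, case_id=42, screen="view_ap", event_type="mark_abnormality",
          x=118.5, y=240.0)
```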

In our previous studies using this cognitive simulation, we were able to generate meaningful learning curves for individuals,27 detect developmental differences in how sensitivity is traded off against specificity,23 show that the presentation sequence can bias an individual’s responses,28 and show that the base rate of abnormality in an image set can result in different interpretations.29 In the sections that follow, we highlight relevant work from these prior studies and present additional analyses to show how process information collected in the digital environment makes it possible to individualize instruction in a manner that is often not possible in the actual clinical environment.

Orientation to the radiograph

In Figure 1, we show the layout of each case within our image bank. Each case had six screens with the first screen showing a text summary of the history, as might be seen on a radiograph requisition. Kundel et al26 describe the first phase of reading a radiograph case as an orientation to the image where general visual information (overall position, boundaries of major organs, gross abnormalities, etc.) is integrated with “clinical data and experience [to determine] the overall search pattern which follows.”26

Proportion of cases with clinical history review.

Without a method for eye tracking, it is difficult to precisely capture the process of orientation to a radiograph. However, one important aspect of orientation is reviewing the patient history to determine patient age and to establish the type of clinical presentation; this clinical information has been shown to improve abnormality detection rates.30,31 We postulated that the proportion of cases in which a participant tracked back to verify the clinical history could be a useful learning analytic (process) measure. This proved to be the case: a learner’s proportion of “look-backs” (or re-reviews) was statistically significantly associated with diagnostic accuracy, even when adjusted for expertise level (see Figure 2 and the regression tables supplied in Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A365). The implication of this measure is that we could accumulate an individual profile of this beneficial behavior and provide targeted feedback to promote it.

Figure 2: Comparison of the proportion of cases participants correctly interpreted with the proportion of the cases where they looked back to the history page at least once before committing to their diagnosis. Each point represents a single individual (medical student, resident, or attending physician) who completed the 234 ankle radiograph cases. The lines show that looking back at the history page is associated with increasing accuracy of case interpretation to a statistically significant degree (for regression tables, see Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A365).
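As a minimal illustration of how such a measure could be derived from the event log, the following Python sketch computes, per participant, the proportion of cases with at least one return to the history page; the event format is a hypothetical simplification of the logs described earlier.

```python
from collections import defaultdict

def lookback_proportion(events):
    """Per-user proportion of cases with at least one return to the history screen.

    `events` is an ordered list of (user_id, case_id, screen) page turns;
    every case starts on the history page, so a second visit is a look-back.
    """
    visits = defaultdict(list)                 # (user, case) -> ordered screens
    for user, case, screen in events:
        visits[(user, case)].append(screen)

    per_user = defaultdict(lambda: [0, 0])     # user -> [cases with look-back, total]
    for (user, case), screens in visits.items():
        per_user[user][0] += screens.count("history") >= 2
        per_user[user][1] += 1
    return {u: lb / n for u, (lb, n) in per_user.items()}

events = [
    (1, 10, "history"), (1, 10, "view_ap"), (1, 10, "history"), (1, 10, "answer"),
    (1, 11, "history"), (1, 11, "view_ap"), (1, 11, "answer"),
]
print(lookback_proportion(events))  # {1: 0.5} -- looked back in 1 of 2 cases
```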

Searching/scanning the radiograph

On the basis of their eye-tracking studies, Kundel et al26 describe the next process after orientation in interpreting a radiograph as searching/scanning.26 The radiographer “moves the axis of the gaze … over the film in rapid jumps, with intervening fixations … in eye movements that are neither random nor stereotyped.”26 Two process measures could help to quantify this process: verification that the learners access every available view in the radiograph series, and measurement of the total time spent on the case.

Proportion of cases with only partial image review.

An ankle radiograph series is made up of three distinct images (or views): anteroposterior, oblique, and lateral. A complete search for pathology involves thoroughly inspecting each of the views. In our study, we were able to detect when a learner submitted an answer before considering all of the radiograph views. Incomplete searches occurred in 2,388/10,764 (22.2%) of the cases completed, including 1,230/7,225 (17.0%) of the cases in which the learner declared the case to be free of pathology. Among these “normal” declarations, false negatives were statistically significantly more likely when views had been skipped than when they had not (389/1,230 [31.6%] vs. 1,451/5,995 [24.2%]; P < .001). Thus, a learner who tended not to review all available images could be shown the proportion of their cases with incomplete image review and the effect on their interpretation performance.
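For readers who wish to reproduce this comparison, the sketch below runs a chi-square test on the published counts using scipy; it is a re-expression of the reported result, not the original analysis code.

```python
from scipy.stats import chi2_contingency

# Rows: "normal" calls with vs. without skipped views;
# columns: false negatives vs. correct negatives (counts from the text).
table = [
    [389, 1230 - 389],    # skipped views: 389 false negatives of 1,230 normal calls
    [1451, 5995 - 1451],  # all views seen: 1,451 false negatives of 5,995 normal calls
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p < .001, consistent with the text
```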

Time on case.

In general, experts perform cognitive tasks more quickly than do novices.24,32 In our study, however, those who were fastest at interpreting the radiographs were in fact the most novice (i.e., the medical students).33 Further, the median time spent on the cases increased with increasing level of expertise (see Figure 3). We speculate, based on Kundel et al’s26 cognitive model and informal conversations with our participants, that the novices spent much less time scanning a radiograph because they did not know where to look.33 That is, their mental models of potential abnormalities were not as fully developed as those of the experts, so there was less for them to search for and their scanning times were correspondingly shorter.34 The speed difference is unlikely to be due to the novices being faster at the other cognitive subtasks of radiograph interpretation—that is, orientation, feature detection, or decision making. This measure could enable beneficial reflection by the learner, both within expertise level (e.g., “compared with other medical students, you spend less time scanning …”) and between expertise levels (e.g., “radiologists typically spend twice as long scanning these radiograph cases compared with your average time”).

Figure 3: Smoothed moving average of time to complete each case by expertise level (medical student, resident, or attending physician). A filtered moving average shows the linear trend in time per case (in seconds) as the participants completed all 234 radiograph cases. Details of the smoothing process are supplied in Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A364).
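The exact smoothing filter is described in the supplemental appendix; as a stand-in, the sketch below computes a simple centered moving average of time per case over a synthetic sequence of 234 cases.

```python
import numpy as np

def moving_average(times_s, window=15):
    """Centered moving average of time-per-case (seconds) across the case sequence."""
    kernel = np.ones(window) / window
    return np.convolve(times_s, kernel, mode="valid")

rng = np.random.default_rng(0)
times = rng.gamma(shape=4.0, scale=10.0, size=234)  # synthetic completion times
print(moving_average(times)[:5])
```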

The Learning Analytics Approach: Immediate Individualized Information

In the previous section, we gave examples of the big data approach, in which we collected larger amounts of process data so as to develop insights into learner cognition. In this section, we turn to the learning analytics part of the approach, showing how computer algorithms can convert the data into useful visualizations or alerts so as to influence the learning process in real time.

Feature detection

In the cognitive model of radiograph interpretation, feature detection involves correctly identifying abnormal visual features on a radiograph on the basis of a well-developed mental model of what is normal.26,35,36

Visualizations of common errors.

A potentially useful learning analytic related to feature detection is the heat map visualization of common novice errors. Each incorrectly marked localization of a fracture by a learner provides information about the misconceptions novices have in distinguishing normal from abnormal features on radiographs (Figure 4). (A color version of Figure 4 is supplied in Supplemental Digital Appendix 3 at http://links.lww.com/ACADMED/A366.) Metacognitive feedback based on these visualizations could strengthen learners’ mental model of the characteristics of an actual fracture compared with confusable normal features.

Figure 4: Example of how learning analytics could support metacognition. Images show a normal pediatric ankle radiograph case of a 10-year-old child. The unmarked original radiograph views are on the left, whereas on the right the same images are overlaid with a heat map representation of how often an area was erroneously specified as being an abnormal feature (warmer areas, represented by darker shades of gray, correspond to higher frequency). In this case, 81/115 (70.4%) medical students that took part in our hinting study33 incorrectly classified the case as having a fracture, often confusing features of the growth plates with potential fractures. Feedback based on this information could strengthen learners’ mental model of the characteristics of a fracture versus normal features. For a color version of this figure, see Supplemental Digital Appendix 3 (at http://links.lww.com/ACADMED/A366). Abbreviations: AP indicates anteroposterior; Obl, oblique; Lat, lateral.
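A heat map of this kind can be built by accumulating the erroneously marked (x, y) coordinates from many learners into a two-dimensional histogram and overlaying it on the image. The sketch below uses synthetic marks clustered near a growth plate, a hypothetical stand-in for the pattern described in the figure.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Synthetic cluster of erroneous fracture marks near a confusable growth plate.
xs = rng.normal(120, 15, size=300)
ys = rng.normal(240, 20, size=300)

# Accumulate the marks into a 2D histogram over a 512 x 512 pixel radiograph.
heat, _, _ = np.histogram2d(xs, ys, bins=50, range=[[0, 512], [0, 512]])

plt.imshow(heat.T, origin="lower", extent=[0, 512, 0, 512], cmap="hot", alpha=0.7)
plt.colorbar(label="Frequency of erroneous marks")
plt.title("Synthetic heat map of incorrectly marked 'fractures'")
plt.savefig("heatmap.png")
```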

Decision making

Kundel et al26 describe the final step in radiograph interpretation, decision making, as the process of considering a feature detected on the radiograph in the context of known patterns of pathology and then deciding whether the feature in question is indeed pathologic and in need of flagging.26

Detecting and reflecting on learners’ implicit biases.

Diagnostic decisions may be influenced by unconscious biases. In a previous report, we described how the sequence of image presentation can affect respondents’ answers.28 To illustrate, consider the following two sequences of the same 10 radiographs, half of which are normal (N) and half of which are abnormal (A):

  • NNNANAAANA
  • ANANANANAN

In the first sequence there are runs of the same type of image; in the second sequence the rate of alternation between image types is high.

In the study, we found participants were more likely to repeat a given response, normal or abnormal, if they had recently experienced a repetitive sequence of cases, as in the first example.28 Conversely, if a case series showed a high rate of alternation between normal and abnormal, participants showed a tendency beyond chance to alternate their answers, even though the confirmed status of a prior case should have no bearing on their response to subsequent cases. Running calculations of the case alternation rate can allow detection of this maladaptive bias, making customized feedback possible.28
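The alternation rate itself is straightforward to compute. The sketch below applies it to the two illustrative sequences above; a windowed version of the same calculation could run in real time to flag stretches where a learner’s responses track the stimulus sequence.

```python
def alternation_rate(seq):
    """Fraction of consecutive pairs that differ, e.g., 'ANANANANAN' -> 1.0."""
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

print(alternation_rate("NNNANAAANA"))  # ~0.56 -- runs of the same type
print(alternation_rate("ANANANANAN"))  # 1.0   -- maximal alternation

def running_alternation(seq, window=10):
    """Windowed alternation rate, usable as a running bias detector."""
    return [alternation_rate(seq[i:i + window]) for i in range(len(seq) - window + 1)]
```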

Path map showing completeness and nature of visual search.

A path map shows in detail how long and in what order the user considered the screens of a cognitive simulation. This type of representation allows learners and educators to spot procedural deficiencies or irregularities in the diagnostic strategies used.

Consider the path map visualizations shown in Figure 5, which can be examined from both the learner perspective (Panels A–C) and the case perspective (Panels C–F). The maps can be quickly reviewed for suboptimal behaviors (Panels B and D reveal cases where radiograph views were skipped prior to the formulation of a diagnosis, suggesting premature closure). Although the sequential consideration of all images, as shown in the other panels, may be considered ideal, experienced radiographers use a contrastive strategy in making a diagnostic decision: if a radiographic feature (a potential fracture) is identified in one view, it is then specifically sought out in the other views to confirm the validity of the initial interpretation (see Panel E).31,36 Another helpful pattern is the re-review of the clinical history before the trainee commits to a decision (Panel F). Path map visualizations allow novices to consider their diagnostic search patterns relative to those of experts, identify lapses such as skipping radiograph views, and adopt strategies that could improve diagnostic accuracy.

Figure 5: Visualizations of a user’s path (i.e., a path map) through a single radiograph case. The six panels each show the path of one learner through one case. Panels A–C are of the same learner (Resident 2) on three different cases, and Panels C–F are of four different learners (Resident 2, Medical 1, Attending 5, and Radiologist 3) all doing the same case. The x-axis shows the five main pages of the case, and the y-axis is the time spent on a particular page of the case. In particular, the cases show skipped views (Panels B and D), a contrastive strategy of toggling back and forth between views (Panel E), and a re-review of the clinical history page (Panel F).
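A path map can be drawn directly from the page-transition log: the x-axis indexes the case’s screens and the y-axis tracks elapsed time, so the polyline records both the order of visits and the time spent on each screen. The sketch below plots one synthetic case that includes a history re-review; screen names and times are hypothetical.

```python
import matplotlib.pyplot as plt

screens = ["history", "AP", "oblique", "lateral", "answer"]
# Hypothetical visit log for one case: (screen, seconds spent on it).
visits = [("history", 5), ("AP", 12), ("oblique", 8), ("lateral", 10),
          ("history", 3), ("answer", 4)]

x, y, elapsed = [], [], 0
for screen, seconds in visits:
    x.append(screens.index(screen))  # which screen was visited
    y.append(elapsed)                # when the visit started
    elapsed += seconds

plt.plot(x, y, marker="o")
plt.xticks(range(len(screens)), screens)
plt.ylabel("Elapsed time (s)")
plt.title("Synthetic path map for one case")
plt.savefig("pathmap.png")
```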

The Value of Process Data for Adaptive Learning Systems

In the preceding sections, we discussed the augmented collection of learning process data in a digital environment and how using the BD/LA approach can prompt more precise reflection on a trainee’s approach to a cognitive task. We used the examples, for the skill of radiograph interpretation, of collecting increased data on learner behaviors such as how often they checked the clinical history and whether they considered each of the available images. Learning analytics allow us to accumulate the data from individual cases to inform learning at a metacognitive level. Our examples included the use of heat maps or calculated measures of interpretation bias.

Previous research has shown that this orientation toward process, as opposed to an orientation toward the outcome or goal, can be a more effective learning approach. In two separate studies, Zimmerman and Kitsantas37,38 explored the acquisition of both motor (dart throwing) and cognitive (writing) skills in teenage girls and found evidence for a phased social cognitive model of learning, in which an initial process orientation that ultimately shifts toward a goal orientation is the most effective sequence. Brydges et al6 applied these ideas in a health professions setting. They studied medical students learning suturing skills, randomizing them to practice the skill either using process-oriented goals (“have the needle enter the tissue perpendicular to the skin”) or using outcome-oriented goals (“ensure that every knot is square”). They found that the process-oriented group performed better on standardized blinded transfer tests of suturing skills. Needs assessments for simulation-based health professions education research have subsequently highlighted the need to better elaborate the shift from process to outcomes (or mastery) orientation.8,39,40 Our consideration of an online radiograph interpretation example not only adds to this literature several qualitative and quantitative differences detected among learners but also demonstrates the capacity of online cognitive simulations for fine-grained collection and dynamic analysis of process data.

More generally, what we are proposing is to reconsider the way digital educational process data are used, at least in the simulation of cognitive tasks in the health professions. The increased use of digital methods, whether during online learning such as we have described here or through the collection of digital data from the electronic health record or mobile devices, makes process data available at a very fine level of granularity. This large, heterogeneous, instantly available trove of data acts as a substrate for learning analyses and visualizations that can be customized to the individual learner or more generally to the learning system.15,22

Both big data and learning analytics are ideally applied in service of a system of adaptive learning, in which an educational experience is customized to the individual needs of the learner. In Figure 6, we show a schematic of a typical adaptive learning system as described in a U.S. Department of Education brief on the use of learning analytics.15 As the student interacts with the content—radiology cases in our example—they generate student learning data such as the time on case, running estimates of accuracy, and the number of radiograph views skipped. These data can be inputs for a predictive model, as can facts from the student information system such as gender, prior radiology experience, or specific aptitudes. The predictive model generates algorithmic analyses or insights that can iteratively adapt the content seen by the user in terms of diagnosis, difficulty of case, and sequence of cases, or the analyses can prompt alerts (e.g., when a user tends to skip views). Adapting content by type, difficulty, or number of cases has been successfully addressed in the field of computerized adaptive testing, where a precise running estimate of student ability is matched to the difficulty of the items presented.41 In addition, metacognitive insights can come either from predictive-model-generated information, such as the heat maps of incorrect responses, or through the intervention of faculty and teachers interpreting any number of analytic data on a dashboard, such as learning curves, calibration data, or the nature of the mistakes being made. Indeed, such a system could enable data-driven education management and coaching. One could envisage a specific information dashboard designed for the educator in charge, monitoring across individual learners for any number of metrics, such as outlier behavior, levels of engagement, and rate of learning. The first reports of such dashboards in general education are emerging.42,43

Figure 6: Schematic of an adaptive learning system. The educational data flow is shown with a box and arrows diagram. To start, the student interacts with the content (1). The data generated (e.g., correctness of response) is written to a database (2) that is continually updated as the student works through the content. The generated learning data are combined with existing data from the student information system to form the inputs for a predictive model (3). In turn, this information is joined with knowledge of the available material to determine how best to adapt the learning experience for the student (e.g., choice of next case, alert that the student is prone to a given bias, alert to maladaptive behaviors like skipping views) (4) (see main text). Finally, the whole learning cycle can be supervised by faculty, teachers, and administrators (5). Figure adapted from Bienkowski et al.15
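As a minimal sketch of step 4 of this loop, the function below turns running learner metrics into alerts and a next-case selection matched to a running ability estimate; the metric names, thresholds, and difficulty-matching rule are illustrative assumptions, not components of a deployed system.

```python
def adapt(metrics, case_bank):
    """Turn one learner's running metrics into alerts and a next-case choice.

    metrics: dict of running statistics for one learner (hypothetical fields).
    case_bank: list of available cases, each with a difficulty score in [0, 1].
    """
    alerts = []
    if metrics["skip_rate"] > 0.20:
        alerts.append("You skipped at least one view in over 20% of recent cases.")
    if metrics["alternation_bias"] > 0.75:
        alerts.append("Your answers may be tracking the presentation sequence.")

    # Match case difficulty to the ability estimate, as in adaptive testing.
    next_case = min(case_bank, key=lambda c: abs(c["difficulty"] - metrics["ability"]))
    return alerts, next_case

metrics = {"skip_rate": 0.25, "alternation_bias": 0.40, "ability": 0.60}
case_bank = [{"id": 1, "difficulty": 0.30}, {"id": 2, "difficulty": 0.65}]
print(adapt(metrics, case_bank))
```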

Limitations

There are limitations to this learning strategy that warrant consideration. First, we freely admit that our proxy measures vary in their fidelity to Kundel et al’s26 cognitive model. In the absence of eye-tracking information, the measure of orientation (checking the clinical history) is a placeholder for a more tightly construct-relevant process measure. Likewise, time as a proxy for searching/scanning has a number of confounds that render our interpretation of the observed patterns speculative. Costs related to technology and programming may also be a barrier to implementation: these capabilities are not automatically incorporated into current information systems, and the resources and expertise required to perform such analytics are not necessarily embedded in the current operating budgets of universities or hospitals. Further, while we proposed measures most relevant to radiograph interpretation, application to other tasks would require careful consideration of which (big) data are most relevant to better learning for that task. Generating sufficient data also requires a commitment on the part of learners to complete many hours of cases; thus, the learners’ motivation, perceived value of the task, degree of self-regulation, and emotional state are all factors that can determine the success of acquiring accurate data on learner processes and skill.44 Finally, with respect to adaptive learning systems, although the sheer volume of data can be a tremendous advantage, it will soon overwhelm the educational designer unless he or she uses a cognitive task model based on empirical research to narrow down the data abstraction and interpretation.

Conclusions

In conclusion, we have discussed how the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on the accuracy of a novice clinician’s skill development. Specifically, we have described how collecting and analyzing process data can be used to improve learning in a cognitive simulation of radiograph interpretation. Examples included measuring learner behaviors, such as skipping radiograph views or prematurely calling off a search, and detecting bias due to presentation sequence. Learning analytics allow us to dynamically generate novel visualizations including heat and path maps that can lead to metacognitive insights. Overall, the BD/LA approach to learning a single diagnostic task is characterized by the ability to collect conceptually targeted data in a manner that is far more comprehensive than would be possible in a nonsimulation context. These rich data displays will have considerable faculty development implications, but finer-grained feedback, based on a deeper consideration of the cognitive underpinnings of specific tasks, such as our visual diagnosis example, should help both learners and educators improve.

Acknowledgments: The authors gratefully acknowledge the participants in the original trial.

References

1. Cook DA, Triola MM. Virtual patients: A critical literature review and proposed next steps. Med Educ. 2009;43:303–311.
2. Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: A systematic review and meta-analysis. Acad Med. 2010;85:1589–1602.
3. Bond W, Kuhn G, Binstadt E, et al. The use of simulation in the development of individual cognitive expertise in emergency medicine. Acad Emerg Med. 2008;15:1037–1045.
4. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 Suppl):S70–S81.
5. Pusic MV, Boutis K, Hatala R, Cook DA. Learning curves in health professions education. Acad Med. 2015;90:1034–1042.
6. Brydges R, Carnahan H, Safir O, Dubrowski A. How effective is self-guided learning of clinical technical skills? It’s all about process. Med Educ. 2009;43:507–515.
7. Ericsson KA. Enhancing the development of professional performance: Implications from the study of deliberate practice. In: Ericsson KA, ed. Development of Professional Expertise. New York, NY: Cambridge University Press; 2009:405–431.
8. Motola I, Devine LA, Chung HS, Sullivan JE, Issenberg SB. Simulation in healthcare education: A best evidence practical guide. AMEE guide no. 82. Med Teach. 2013;35:e1511–e1530.
9. Azevedo R, Bernard RM. A meta-analysis of the effects of feedback in computer-based instruction. J Educ Comput Res. 1994;13:111–127.
10. Balzer WK, Doherty ME, O’Connor R. Effects of cognitive feedback on performance. Psychol Bull. 1989;106:410–433.
11. Zimmerman BJ. Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. Am Educ Res J. 2008;45:166–183.
12. Ericsson KA. Protocol analysis and expert thought: Concurrent verbalizations of thinking during experts’ performance on representative tasks. In: Ericsson KA, Charness N, Feltovich P, Hoffman RR, eds. Cambridge Handbook of Expertise and Expert Performance. Cambridge, England: Cambridge University Press; 2006:223–242.
13. Renkl A. Worked-out examples: Instructional explanations support learning by self-explanations. Learn Instr. 2002;12:529–556.
14. Reed SK, Dempster A, Ettinger M. Usefulness of analogous solutions for solving algebra word problems. J Exp Psychol Learn. 1984;11:106–125.
15. Bienkowski M, Feng M, Means B. Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief. Washington, DC: U.S. Department of Education; 2013. https://tech.ed.gov/wp-content/uploads/2014/03/edm-la-brief.pdf. Accessed March 2, 2016.
16. Ellaway RH, Pusic MV, Cameron T, Galbraith R, Farrell J. Developing the role of big data and analytics in medical education. Med Teach. 2014;36:216–222.
17. Cukier KN, Mayer-Schoenberger V. The rise of big data: How it’s changing the way we think about the world. Foreign Aff. 2013;92:28–40.
18. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309:1351–1352.
19. Baker R. Data mining for education. Int Encycl Educ. 2010;7:112–118.
20. Ruiz JG, Mintzer MJ, Leipzig RM. The impact of E-learning in medical education. Acad Med. 2006;81:207–212.
21. Siemens G, Long P. Penetrating the fog: Analytics in learning and education. Educ Rev. 2010;46:30–40.
22. Pinto A, Brunese L, Pinto F, Acampora C, Romano L. E-learning and education in radiology. Eur J Radiol. 2011;78:368–371.
23. Boutis K, Pecaric M, Seeto B, Pusic M. Using signal detection theory to model changes in serial learning of radiological image interpretation. Adv Health Sci Educ Theory Pract. 2010;15:647–658.
24. Carney PA, Bogart TA, Geller BM, et al. Association between time spent interpreting, level of confidence, and accuracy of screening mammography. AJR Am J Roentgenol. 2012;198:970–978.
25. Lesgold A, Rubinson H, Feltovich P, Glaser R, Klopfer D, Wang Y. Expertise in a complex skill: Diagnosing x-ray pictures. In: Chi MTH, Glaser R, Farr MJ, eds. The Nature of Expertise. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988:311–342.
26. Kundel HL, Nodine CF, Carmody D. Visual scanning, pattern recognition and decision-making in pulmonary nodule detection. Invest Radiol. 1978;13:175–181.
27. Pusic M, Pecaric M, Boutis K. How much practice is enough? Using learning curves to assess the deliberate practice of radiograph interpretation. Acad Med. 2011;86:731–736.
28. Beckstead JW, Boutis K, Pecaric MR, Pusic MV. Stimulus sequence features influence physicians’ response tendencies in radiological image interpretation. Appl Cogn Psychol. 2013;27:625–632.
29. Pusic MV, Andrews JS, Kessler DO, et al. Prevalence of abnormal cases in an image bank affects the learning of radiograph interpretation. Med Educ. 2012;46:289–298.
30. Berbaum KS, Franken EA Jr, Dorfman DD, Barloon TJ. Influence of clinical history upon detection of nodules and other lesions. Invest Radiol. 1988;23:48–55.
31. Kundel HL. Visual search in medical images. In: Beutel J, Kundel HL, Van Metter RL, eds. Handbook of Medical Imaging. Volume 1: Physics and Psychophysics. Bellingham, Wash: SPIE Press; 2000:838–855.
32. Nodine CF, Kundel HL, Mello-Thoms C, et al. How experience and training influence mammography expertise. Acad Radiol. 1999;6:575–585.
33. Boutis K, Pecaric M, Shiau M, et al. A hinting strategy for online learning of radiograph interpretation by medical students. Med Educ. 2013;47:877–887.
34. Johnson-Laird PN. Mental models in cognitive science. Cogn Sci. 1980;4:71–115.
35. Norman GR, Coblentz CL, Brooks LR, Babcook CJ. Expertise in visual diagnosis: A review of the literature. Acad Med. 1992;67(10 Suppl):S78–S83.
36. Taylor PM. A review of research into the development of radiologic expertise: Implications for computer-based training. Acad Radiol. 2007;14:1252–1263.
37. Zimmerman BJ, Kitsantas A. Self-regulated learning of a motoric skill: The role of goal setting and self-monitoring. J Appl Sport Psychol. 1996;8:60–75.
38. Zimmerman BJ, Kitsantas A. Acquiring writing revision skill: Shifting from process to outcome self-regulatory goals. J Educ Psychol. 1999;91:241–250.
39. Stefanidis D. Optimal acquisition and assessment of proficiency on simulators in surgery. Surg Clin North Am. 2010;90:475–489.
40. Issenberg SB, Ringsted C, Ostergaard D, Dieckmann P. Setting a research agenda for simulation-based healthcare education: A synthesis of the outcome from an Utstein style meeting. Simul Healthc. 2011;6:155–167.
41. Wainer H, Dorans NJ, Flaugher R, Green BF, Mislevy RJ. Computerized Adaptive Testing: A Primer. 2nd ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
42. Maldonado RM, Kay J, Yacef K, Schwendimann B. An interactive teacher’s dashboard for monitoring groups in a multi-tabletop learning environment. In: Cerri SA, Clancey WJ, Papadourakis G, Panourgia K, eds. Intelligent Tutoring Systems: 11th International Conference Proceedings. Berlin, Germany: Springer; 2012:482–492.
43. Few S. Putting it all together. In: Information Dashboard Design: Displaying Data for At-a-Glance Monitoring. 2nd ed. Burlingame, Calif: Analytics Press; 2013:203–233.
44. Ellaway RH, Pusic M, Yavner S, Kalet AL. Context matters: Emergent variability in an effectiveness trial of online teaching modules. Med Educ. 2014;48:386–396.


Copyright © 2016 by the Association of American Medical Colleges