Research Reports

A Medical Student Inquiry Behavior Assessment Tool: Development and Validity Evidence

Brondfield, Sam MD; Boscardin, Christy PhD; Strewler, Gordon MD; Hyland, Katherine PhD; Oakes, Scott A. MD; Nishimura, Holly; Crawford, Jenny MA, MPH; Hauer, Karen E. MD, PhD

doi: 10.1097/ACM.0000000000002520

Abstract

The expanding body of health information, ranging from basic science discoveries to therapeutic and clinical systems innovations, challenges physicians-in-training not only to learn current knowledge but also to cultivate the skills to manage and contribute to future knowledge.1 However, educational and assessment practices traditionally favor memorization of facts over fostering the habits of mind necessary to express intellectual curiosity, seek new knowledge, and solve complex problems.2 To address this gap in undergraduate medical education (UME), the University of California, San Francisco (UCSF) School of Medicine incorporated inquiry as a core component of its curriculum. To articulate and guide medical students’ development of essential inquiry behaviors in small groups, we created an assessment tool.

Teaching and Assessing Inquiry

Inquiry captures engagement in learning through questioning, seeking information, and wrestling with scenarios that lack ready answers.3,4 Inquiry-based teaching improves students’ science learning, as measured by achievement outcomes, compared with traditional teaching methods within a variety of scientific disciplines and educational levels.3 Scientific inquiry entails asking questions, investigating phenomena, acquiring understanding of concepts and principles, collecting and analyzing evidence, and developing and communicating explanations.4 Clinical inquiry entails deductive clinical reasoning through eliciting patient data, asking questions, and assembling key information to identify more likely diagnoses.5 Both scientific and clinical inquiry are regularly at play when a physician investigates a new disease-related causal pathway or therapeutic approach or reasons through a difficult case.5,6

In the 2010 Carnegie Foundation report Educating Physicians: A Call for Reform of Medical School and Residency, Cooke et al called for engaging learners with challenging problems and encouraging their participation in inquiry and innovation.7,8 Consequently, the UCSF School of Medicine Bridges curriculum incorporated a new inquiry-based curriculum—the Core Inquiry curriculum—that integrates scientific and clinical concepts and fosters an “inquiry habit of mind,” defined as the process of approaching the unknown with curiosity and skepticism, challenging current concepts, and creating new knowledge.9

An inquiry-based curriculum is learner centered, with learners generating their own learning objectives. Inquiry therefore draws from constructivist theory, in which learners construct individual understanding by building on their own knowledge.10 Learners actively investigate questions with faculty guidance, aiming for increasingly self-directed inquiry behaviors.11 One commonly used model is problem-based learning (PBL), which is a type of inquiry or discovery learning.12,13 Barrows14 described the core characteristics of PBL as learner-centered, self-directed learning; small, facilitated student groups; and authentic problems to explore. PBL has been shown to be superior to traditional curricular models in promoting knowledge retention and skill acquisition.15 UME curricula can incorporate these characteristics in an entirely PBL-based format or, as in the UCSF School of Medicine Bridges curriculum,9 by supplementing with inquiry-based learning.

Tools exist to evaluate inquiry-based learning activities,11 but there is a need for a tool with validity evidence to assess medical student inquiry behaviors. In this article, we describe the development of and present validity evidence for a novel medical student inquiry behavior assessment tool to guide development of these behaviors in small groups. This tool lists important medical student inquiry behaviors, with a focus on observed behaviors to optimize feedback to students.16 Through our tool development process, we set out to answer the research question of how best to capture the construct of inquiry in a feasible assessment tool.

Method

Core Inquiry curriculum

UCSF School of Medicine is a public, urban, research-intensive, four-year medical school with approximately 150 students per class. An inquiry leadership group composed of nine faculty educators with expertise relevant to inquiry in research, basic and/or clinical sciences, or medical education (including G.S., K.H., and S.A.O., as well as colleagues of C.B. and K.E.H.) created the Core Inquiry curriculum, the inquiry-based portion of the UCSF School of Medicine’s four-year Bridges curriculum, and implemented it in academic year 2016–2017.

The Core Inquiry curriculum is a longitudinal curriculum that approaches problems through multiple lenses within six scientific domains—biomedical, clinical, educational, epidemiological, social and behavioral, and systems. It includes weekly inquiry small groups during the first 60 weeks of the curriculum. Each group consists of eight or nine students who remain together throughout the preclinical curriculum and a faculty facilitator who may change occasionally, though a facilitator typically leads multiple sessions with the same students. The curriculum also includes a 2-week inquiry immersion block in year 1, a 4-week immersion block in year 3, and a capstone project in year 4.

The weekly small groups feature case-based problem solving, journal clubs, and debates. These sessions reinforce content from the parallel preclinical foundational sciences (Foundations) curriculum and approach problems through the lenses of two or more scientific domains. Students choose their own learning objectives, seek evidence from the primary literature to justify explanations, critically evaluate their own and peers’ explanations, and collaborate in their small groups. List 1 provides an example outline of two sequential inquiry small groups in year 1.

Tool development and validity evidence

To assess our students’ inquiry skills development, we developed and collected validity evidence for our medical student inquiry behavior assessment tool in two phases, as described below. Using Messick’s validity framework,17–19 we examined four categories of validity evidence for the tool: content, response process, internal structure, and consequential validity.

The UCSF Institutional Review Board approved this study as exempt.

Phase 1: Tool development and modified Delphi study (content validity)

Tool development.

We followed established guidelines for designing an assessment tool.20 Three of us (educators S.B., C.B., and K.E.H.) developed the tool with consultation from the inquiry leadership group (members included G.S., K.H., and S.A.O.). S.B. and K.E.H. first reviewed the literature to summarize published definitions of inquiry and inquiry behaviors. They searched ERIC, Google Scholar, and PubMed in August–October 2015 using the terms inquiry, teaching methods, science instruction, problem-based learning, and education, and they also reviewed references within identified English-language articles. Through iterative discussion, S.B., C.B., and K.E.H. organized inquiry behaviors into four categories: cognitive, metacognitive, attitudinal, and social. We then e-mailed the inquiry leadership group to invite their participation in a focus group (without incentive) in November 2015. S.B. and C.B. conducted the focus group with all nine inquiry leaders to review the behaviors we identified and gather feedback on the alignment and appropriateness of our categories. The focus group participants recommended adjustments, such as avoiding overlapping items. We synthesized the results of the literature review and focus group into 40 candidate inquiry behaviors for the proposed tool.

Modified Delphi study.

Next, we performed a modified two-round Delphi study in February–March 2016 to generate consensus on the most salient inquiry behaviors. We modified the original Delphi process by using a survey with structured items (based on the results of our literature review and focus group) phrased as questions (“Does the student…?”), rather than open-ended items.21 We invited all faculty leaders in the preclinical Foundations curriculum and/or Core Inquiry curriculum (n = 33), several of whom (including G.S., K.H., and S.A.O.) were also in the inquiry leadership group, and all final-year medical students in the UCSF Health Professions Education Pathway22 (n = 14) to participate. We chose these faculty for their content and teaching expertise and their experience working with early learners. We chose these students for their training in medical education and their experiences in small groups.23 We had previously interacted with some of the student participants in teaching settings. Student participants received a $10 electronic gift card for completing both survey rounds; faculty participants did not receive an incentive. We did not expect initial consistency between faculty and student groups but wanted each to inform the other’s responses through the study.

Participants received each round of the modified Delphi survey by e-mail. In the first round, participants rated the importance of the 40 candidate inquiry behaviors using a five-point scale (from 1 = absolutely do not include to 5 = very important to include). In the second round, participants re-rated each behavior after viewing their individual first-round response alongside the group’s mean and standard deviation (SD). Participants could also suggest revisions or propose additional behaviors.

For each behavior, we calculated the content validity index (CVI)—that is, the percentage of second-round respondents who rated it as 5 (very important to include). We set the inclusion threshold CVI at 70%, near the 75% median and within the 50% to 97% range reported in a systematic review.24 We chose a rating of 5 to ensure that faculty and students highly valued each behavior. Five items met this inclusion criterion.
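
For readers who want to reproduce this kind of calculation, the following Python sketch computes a CVI and applies the 70% inclusion threshold; the function name and ratings are hypothetical illustrations, not data or code from the study.

```python
# Illustrative sketch only: computes the content validity index (CVI) for one
# candidate behavior from hypothetical second-round ratings on the 1-5 scale.

def content_validity_index(ratings, top_rating=5):
    """Percentage of respondents who gave the item the top rating."""
    return 100.0 * sum(r == top_rating for r in ratings) / len(ratings)

# Hypothetical ratings from 36 second-round respondents (not study data).
example_ratings = [5] * 27 + [4] * 7 + [3] * 2
cvi = content_validity_index(example_ratings)
print(f"CVI = {cvi:.1f}%; include item: {cvi >= 70.0}")  # 70% inclusion threshold
```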

We used one-way analysis of variance (ANOVA) to compare second-round faculty and student ratings to ensure that these groups did not strongly disagree about behaviors. We examined effect sizes (d = absolute value [faculty mean − student mean] / overall SD) to compare mean faculty and student group ratings relative to the spread of the entire group of raters. We defined effect sizes less than 0.3 as small, 0.3–0.8 as moderate, and greater than 0.8 as large; smaller effect sizes denoted closer faculty–student agreement.
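
A minimal Python sketch of this comparison appears below, using invented faculty and student ratings for a single behavior; scipy’s f_oneway performs the one-way ANOVA, and the effect size follows the formula given above.

```python
# Illustrative sketch only: one-way ANOVA and the effect size defined above,
# applied to hypothetical faculty and student ratings for a single behavior.
import numpy as np
from scipy.stats import f_oneway

faculty = np.array([5, 5, 4, 5, 5, 4, 5, 5, 4, 5])     # hypothetical ratings
students = np.array([5, 4, 4, 5, 5, 4, 4, 5, 5, 4])    # hypothetical ratings

f_stat, p_value = f_oneway(faculty, students)           # compare group means

overall_sd = np.std(np.concatenate([faculty, students]), ddof=1)
d = abs(faculty.mean() - students.mean()) / overall_sd  # effect size per the formula above

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}; d = {d:.2f}")
```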

Phase 2: Collection of additional validity evidence

Response process validity.

For the five inquiry behaviors identified, we (S.B., C.B., G.S., K.H., and K.E.H.) wrote frequency anchors for ratings of 1, 2, and 3, corresponding to “never,” “occasionally,” and “consistently,” respectively, along with brief descriptors of student behavior at each level. We drafted instructions to articulate the goal of the tool and minimum expected performance for first- and second-year students. The instructions stated that students performing below expectations would be required to meet with an inquiry faculty member to discuss improvement strategies. To refine the five items for clarity, in May 2016 we conducted structured cognitive interviews25 with expert educators experienced with small-group facilitation who did not participate in the modified Delphi study. These faculty, who were colleagues of C.B. and K.E.H., were invited by e-mail and were not offered incentives. The interviews focused on feasibility of observing the behaviors and rating scale clarity.

As a pilot, we distributed the tool via Qualtrics (Provo, Utah) in May 2016 after a single pilot inquiry small group (prior to implementation of the Core Inquiry curriculum) to all 18 faculty facilitators, who together worked with all of the first-year students. (Facilitators were recruited into these roles, without incentives, by the inquiry leadership group.) We asked facilitators to complete the assessment tool for all students in their small group based on their performance in the pilot small group, and we reviewed these assessments. We also obtained feedback from facilitators through a free-response survey (15 facilitators), an open-ended e-mail (5 facilitators), and a focus group (3 facilitators). Based on the pilot feedback, in the final tool we simplified the rating scale to three ordered levels without frequency anchors (see Results section).

For faculty development, facilitators viewed a required 17-minute video on facilitating inquiry small groups. Facilitators could also attend recommended one-hour in-person training sessions before each small group; the inquiry assessment tool was described during each session.

Internal structure validity.

In academic year 2016–2017, we implemented the tool via E*Value (Fall 2016 version, MedHub, Minneapolis, Minnesota) four times in the first year of the Core Inquiry curriculum, as both a faculty assessment of students and a student self-assessment, to determine whether the tool captures inquiry skills development over time. Completion of the assessment tool was required. Sixty-seven unique facilitators (some of whom facilitated multiple small groups across the year) and 152 first-year students used the tool. At the end of each quarter, students completed self-assessments, and the small-group facilitator who worked with a particular student for the largest number of inquiry small groups assessed that student.

To gather evidence for reliability, we compared faculty assessments and student self-assessments using a two-tailed paired Student t test with a 0.05 significance threshold. Analyses were conducted in June 2017 using GraphPad QuickCalcs (https://www.graphpad.com/quickcalcs/ttest1.cfm; GraphPad Software, San Diego, California). For the statistical analysis, we assigned to each of the three ordered levels the numerical scores 1–3 that had been used as the initial frequency anchors: a score of 1 indicated that less of the inquiry behavior was observed, and a score of 3 indicated that more was observed. We applied the Bonferroni correction for multiple (20) comparisons. In each quarter, we included only students for whom both faculty and self-assessments were available.
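
The following Python sketch illustrates this analysis for one item in one quarter, using scipy’s paired t test and a Bonferroni-adjusted significance threshold; the scores shown are hypothetical, not study data.

```python
# Illustrative sketch only: two-tailed paired t test with a Bonferroni-adjusted
# threshold, applied to hypothetical paired scores (1-3) for one item in one
# quarter; only students with both a faculty and a self-assessment are paired.
import numpy as np
from scipy.stats import ttest_rel

faculty_scores = np.array([3, 3, 2, 3, 3, 2, 3, 3, 3, 2])  # hypothetical data
self_scores = np.array([3, 2, 2, 3, 2, 2, 3, 3, 2, 2])     # hypothetical data

t_stat, p_value = ttest_rel(faculty_scores, self_scores)    # two-tailed by default

alpha = 0.05
n_comparisons = 20          # 5 items x 4 quarters
bonferroni_alpha = alpha / n_comparisons

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"significant after Bonferroni correction: {p_value < bonferroni_alpha}")
```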

Consequential validity.

Students who did not meet expectations on the inquiry assessment tool reviewed written facilitator feedback from the tool, alone and with their Bridges faculty coach (a clinician–educator who provides advice, assistance, and encouragement in all aspects of the student’s education and professional development). We reviewed faculty facilitator assessments of these students from the quarter when they did not meet expectations and from the following quarter.

Results

Content validity (Phase 1)

The two-round response rate for the modified Delphi survey was 77% (36/47) overall: 26 (79%) of 33 faculty and 10 (71%) of 14 students. The five behaviors that met the inclusion threshold were as follows: select relevant questions to pursue; justify explanations with evidence; critically evaluate his/her explanation in light of alternative possibilities; allow for the possibility that his/her own knowledge may not be completely correct; and collaborate well with peers. We dropped the remaining 35 behaviors, with 1 initial exception (further described below).

Table 1 shows the second-round mean (SD) rating, effect size, and CVI for each of the 40 candidate behaviors by group and overall. The average SD across all items was 0.69 in the first round and 0.64 in the second round.

Table 1:
Forty Inquiry Behaviors Considered for Inclusion in a Medical Student Inquiry Behavior Assessment Tool, as Rated by an Invited Group of UCSF School of Medicine Faculty and Medical Students in the Second Round of a Modified Delphi Survey, 2016a

Faculty and students generally agreed on behavior importance: ANOVA demonstrated no significant differences between faculty and student ratings (data not shown), and effect sizes were mostly small. Only one of the five selected behaviors had a moderate effect size (select relevant questions to pursue, d = 0.78). Behaviors that were close to but did not reach the CVI threshold for inclusion also had small effect sizes, indicating strong agreement between faculty and students.

Additional validity evidence (Phase 2)

Response process validity.

We subsequently explored the five selected behaviors as well as one that was close to but did not meet the initial threshold—reflect on his/her own gaps in understanding (mean rating = 4.58, d = 0.03, CVI = 69.4)—in cognitive interviews with three expert faculty educators who had not participated in the modified Delphi survey. Based on feedback that reflection was difficult for faculty to observe, we excluded the student reflection behavior.

During the pilot, the 18 faculty facilitators completed a single assessment of the first-year students in their small group (data not shown). Qualitative feedback on the tool from the pilot included support for the small number of items and the use of a three-point scale. One faculty facilitator felt that the tool did not capture students’ gaps in understanding or their role in the small-group dynamic. Another suggested an assessment of the whole group. Four facilitators described two behaviors—critically evaluate his/her explanations in light of alternative possibilities and allow for the possibility that his/her own knowledge may not be completely correct—as not feasible to observe in small groups. All 18 facilitators were able to observe the other three behaviors. Facilitators felt that assessment would be easier over multiple small groups. Some felt that because students generally performed well, it was difficult to distinguish among students on a three-point scale, although most facilitators favored keeping it.

Based on the pilot feedback, we simplified the tool’s rating scale to three ordered levels without frequency anchors, and we simplified the associated descriptors to guide behavior observation and enable facilitators to distinguish among students more easily. The final five-item tool is provided in Chart 1 and Supplemental Digital Appendix 1 at https://links.lww.com/ACADMED/A613.

The faculty development video accumulated 62 views among the 67 faculty facilitators, with a view defined as watching at least 75% of the total video time. It was not possible to determine whether a single person viewed the video more than once. Based on approximate attendee counts, most facilitators attended the in-person trainings before each small group, though attendance was not formally recorded.

Chart 1:
A Novel Medical Student Inquiry Behavior Assessment Tool Used for Faculty Assessment and Student Self-Assessment of First-Year Medical Students in the Core Inquiry Curriculum, UCSF School of Medicine, 2016–2017a

Internal structure validity.

Tables 2 and 3 display faculty assessment and student self-assessment data for the end-of-quarter assessments using the final tool during 2016–2017. The percentage of completed assessments was high for both groups (Table 2). As evidence of construct validity, faculty and student scores increased on most items over the year, indicating skills development over time (Table 3). For three items—select relevant questions, justify explanations with evidence, and critically evaluate explanations—both faculty and student scores increased by late in the year, with a steeper rise in faculty scores. For the item allow for the possibility that own knowledge may not be completely correct, both groups initially gave high scores, followed by a drop and then a small rise late in the year. For the item collaborate well with peers, faculty and student scores were high throughout the year, though faculty scores fluctuated more than student scores.

Table 2:
Completion of Faculty Assessments and Student Self-Assessments for First-Year Medical Students (n = 152) Using the Medical Student Inquiry Behavior Assessment Tool, Core Inquiry Curriculum, UCSF School of Medicine, Academic Year 2016–2017a
Table 3:
Comparison of Scores From Faculty Assessments and Self-Assessments of 152 First-Year Medical Students Using the Medical Student Inquiry Behavior Assessment Tool, Core Inquiry Curriculum, UCSF School of Medicine, Academic Year 2016–2017a

Using the paired t test, we found no statistically significant difference between faculty and student scores on most items at most time points, indicating evidence of interrater reliability. Where differences were statistically significant, faculty scores exceeded student scores; these differences occurred primarily in quarters 3 and 4 and, after the Bonferroni correction for multiple (20) comparisons, only in quarters 3 and 4.

Consequential validity.

Two (1.3%) of 152 students did not meet expectations based on faculty scores. Both students subsequently met expectations in the following quarter.

Discussion

We designed a feasible inquiry behavior assessment tool to guide development of medical student inquiry behaviors in small groups and gathered validity evidence from multiple sources. The five items included in this tool are observable, measurable core inquiry behaviors that medical students can demonstrate in small groups. By synthesizing the broadly defined construct of inquiry into five behaviors, we have additionally outlined a potential definition of inquiry for UME curricula: selecting relevant questions to pursue, justifying explanations with evidence, critically evaluating one’s own explanations in light of alternative possibilities, allowing for the possibility that one’s own knowledge may not be completely correct, and collaborating well with peers.

Validity evidence is essential for an assessment tool.17–19 The results of our modified Delphi study provided content validity evidence. Cognitive interviews and pilot feedback provided response process validity evidence. The pilot also provided essential on-the-ground feedback to evaluate the convergence of our theoretical work with practical small-group needs.

Evidence for reliability showed that faculty and student assessments aligned across all five items. Although we found overall consistency in ratings, faculty scores were systematically higher, particularly late in the year. Faculty may have scored students higher to help students achieve course expectations, which became more stringent later in the year. Students may have rated themselves lower, knowing that humility was valued in one of the items (allow for the possibility that his/her own knowledge may not be completely correct). The finding that the two lowest-scoring students subsequently earned higher scores in the next quarter provided potential evidence for consequential validity.

The medical student inquiry behavior assessment tool is intended to measure manifestations of a difficult-to-measure construct: an inquiry habit of mind. Our five items represent consensus but may miss some elements of inquiry. For example, others have proposed a 20-item tool to measure intellectual curiosity,2 a related medical education construct.26 However, an inquiring, curious learner may not necessarily express corollary observable behaviors, particularly within 50-minute small-group sessions. We identified important, observable inquiry behaviors that faculty can feasibly use as a proxy for an inquiry habit of mind, a construct that modern medical educators view as critical.7,8

This study has several limitations. It is a single-institution study, and our findings may not generalize more broadly. The adequacy of the importance scale used in the modified Delphi study may be limited, given the overall high ratings. It was difficult to gather internal structure validity evidence based on student self-assessments. We cannot determine whether the finding that most students met minimal expectations reflects inflated faculty ratings. We did not gather validity evidence regarding the relationship of this tool to other variables, but future investigations could examine the relationship between small-group inquiry behaviors and prior medical school admissions information, future clinical clerkship inquiry behaviors, or other measures of knowledge and performance.

In conclusion, we created a novel tool to guide the development of medical student inquiry behaviors in small groups and gathered validity evidence from multiple sources for its use. The tool is feasible and ready for use within inquiry-based curricula to promote medical student self-assessment and to guide faculty feedback to students. This tool can also guide design of inquiry learning objectives and curricula in medical schools that are beginning to incorporate inquiry skill set acquisition. Our next steps include continuing to implement the tool in medical student small groups beyond the first year of the Core Inquiry curriculum, obtaining ongoing faculty facilitator and student feedback on ways to improve the tool and its use, and developing a similar tool to assess relevant inquiry behaviors in the clinical setting. Ultimately, measurement of inquiry behaviors as part of adaptive expertise27 in practice will be needed to determine how physicians incorporate these behaviors into their professional careers.

List 1

Outline of Two Sequential Inquiry Small Groups in the UCSF School of Medicine Core Inquiry Curriculum, as Implemented in 2016–2017 With 152 First-Year Medical Students

Week 1: 50-minute small group

  • Small-group faculty facilitator introduces session and agenda (5–10 minutes)
  • Students assign a student leader, scribe, and timekeeper
  • As a group, students read a vignette describing a clinical case or learning topic
  • Students identify and refine learning objectives for the case
  • Facilitator asks guiding questions only if students are off track
  • Students decide who will research each learning objective
  • Facilitator summarizes discussion (5–10 minutes)

Week 2: 50-minute small group

  • Small-group faculty facilitator introduces session and agenda (5–10 minutes)
  • Same students act as leader, scribe, and timekeeper
  • Students present their research findings
  • Students engage in discussion with each other
  • Facilitator asks guiding questions only if students are off track
  • Students brainstorm next steps to address questions not adequately answered in the literature
  • Facilitator summarizes discussion (5–10 minutes)

Abbreviation: UCSF indicates University of California, San Francisco.

Acknowledgments: The authors would like to thank Michelle Hermiston, MD, PhD, for helpful discussions and Mark Lovett for assistance with data acquisition.

References

1. Eppler MJ, Mengis J. The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. Inform Soc. 2004;20:325–344.
2. Sternszus R, Saroyan A, Steinert Y. Describing medical student curiosity across a four year curriculum: An exploratory study. Med Teach. 2017;39:377–382.
3. Furtak EM, Seidel T, Iverson H, Briggs DC. Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Rev Educ Res. 2012;82:300–329.
4. National Research Council. National Science Education Standards. 1996. Washington, DC: National Academies Press.
5. Norman GR, Patel VL, Schmidt HG. Clinical inquiry and scientific inquiry. Med Educ. 1990;24:396–399.
6. Cutrer WB, Miller B, Pusic MV, et al. Fostering the development of master adaptive learners: A conceptual model to guide skill acquisition in medical education. Acad Med. 2017;92:70–75.
7. Cooke MC, Irby DM, O’Brien BC. Educating Physicians: A Call for Reform of Medical School and Residency. 2010. San Francisco, CA: Jossey-Bass.
8. Irby DM, Cooke M, O’Brien BC. Calls for reform of medical education by the Carnegie Foundation for the Advancement of Teaching: 1910 and 2010. Acad Med. 2010;85:220–227.
9. University of California, San Francisco School of Medicine. Bridges curriculum. http://meded.ucsf.edu/bridges. Revised May 2017. Accessed October 17, 2018.
10. Whitman N. A review of constructivism: Understanding and using a relatively new theory. Fam Med. 1993;25:517–521.
11. Spronken-Smith R, Walker R. Can inquiry-based learning strengthen the links between teaching and disciplinary research? Stud High Educ. 2010;35:723–740.
12. Lee VS. What is inquiry-guided learning? New Dir Teach Learn. 2012;2012:5–14.
13. Bruner JS. The act of discovery. Harv Educ Rev. 1961;31:21–32.
14. Barrows HS. Problem-based learning in medicine and beyond: A brief overview. New Dir Teach Learn. 1996;1996:3–12.
15. Dochy F, Segers M, Van den Bossche P, Gijbels D. Effects of problem-based learning: A meta-analysis. Learn Instr. 2003;13:533–568.
16. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645.
17. Messick S. Validity. In: Linn RL, ed. Educational Measurement. 3rd ed. 1989. New York, NY: American Council on Education/Macmillan; 13–103.
18. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med. 2006;119:166.e7–166.e16.
19. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837.
20. Artino AR Jr, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36:463–474.
21. Hsu CC, Sandford BA. The Delphi technique: Making sense of consensus. Pract Assess Res Eval. 2007;12:1–8.
22. Chen HC, Wamsley MA, Azzam A, Julian K, Irby DM, O’Sullivan PS. The Health Professions Education Pathway: Preparing students, residents, and fellows to become future educators. Teach Learn Med. 2017;29:216–227.
23. Wyatt-Smith C, Klenowski V, Colbert P. Designing Assessment for Quality Learning. 2014. Dordrecht, the Netherlands: Springer.
24. Diamond IR, Grant RC, Feldman BM, et al. Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67:401–409.
25. Willis GB, Artino AR Jr. What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5:353–356.
26. Dyche L, Epstein RM. Curiosity and medical education. Med Educ. 2011;45:663–668.
27. Mylopoulos M, Woods NN. Having our cake and eating it too: Seeking the best of both worlds in expertise research. Med Educ. 2009;43:406–413.


Copyright © 2018 by the Association of American Medical Colleges