Academic Medicine: March 2010, Volume 85, Issue 3
doi: 10.1097/ACM.0b013e3181cccbd4
Scholarly Concentrations

How to Measure Success: The Impact of Scholarly Concentrations on Students—A Literature Review

Bierer, S. Beth PhD; Chen, Huiju Carrie MD, MSEd


Author Information

Dr. Bierer is director of evaluation and assistant professor of medicine, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland Clinic, Cleveland, Ohio.

Dr. Chen is director, Health Professions Education Pathway, and associate clinical professor of pediatrics, University of California, San Francisco School of Medicine, San Francisco, California.

Correspondence should be addressed to Dr. Bierer, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, 9500 Euclid Avenue NA25, Cleveland, OH 44195; telephone: (216) 444-3283; fax: (216) 445-4471; e-mail: biererb@ccf.org.


Abstract

Purpose: Scholarly concentrations (SCs) are elective or required curricular experiences that give students opportunities to study subjects in-depth beyond the conventional medical curriculum and require them to complete an independent scholarly project. This literature review explores the question, “What impact do SC programs have on medical students?”

Method: In 2008, the authors retrieved published articles using Medline, ERIC, and PsycINFO electronic databases and scanned reference lists to locate additional citations. They extracted data from selected articles using a structured form and used Kirkpatrick's evaluation model to organize learner outcomes into four categories: reactions, learning, behavior, and results.

Results: Of 1,640 citations, 82 full-text papers were considered, and 39 studies met inclusion criteria. Most articles described SC programs that offered students research opportunities. Fourteen articles provided evidence that SC experiences influenced students' choice of clinical specialty or fostered their interest in research. Eight articles reported that SCs improved students' understanding of research principles and methods. Nineteen articles reported publications and presentations to document students' ability to apply acquired knowledge and skills. Twelve studies confirmed the entry of SC graduates into academic medicine with continued engagement in research or success in obtaining grant funding. Students' criticisms focused on requiring research during clinical training and the effort needed to complete scholarly projects.

Conclusions: The diversity of articles and variable results prevent definitive conclusions about the value of SCs. Findings suggest several implications for future SC program evaluations and educational research. The authors advocate increased rigor in evaluation designs to demonstrate SCs' true impact.

The Association of American Medical Colleges (AAMC) advocates training a diverse workforce of physicians who can make contributions to society beyond clinical medicine.1 Medical schools have a decades-long history of exposing students to and training them in biomedical research.2–5 Increasingly, schools provide opportunities for students to cross-train in areas beyond the conventional medical curriculum. These areas of study have included subjects as diverse as ethics, humanities, medical education, community health, and global health6–11; specific examples of areas of intensive, independent study and schools that offer them are described in this issue of Academic Medicine and elsewhere in the literature. In light of this growing trend, educators and administrators should understand the effects and consequences of scholarly concentrations (SCs) on medical students. In this report, we identify and classify the outcomes that educators, administrators, and investigators have used to evaluate the impact of these programs on students. We examine the outcomes to provide implications for future program evaluation and educational research.


Definition of SCs

For this review, we define SCs as elective or required curricular experiences that allow medical students to study specific subjects (both medical and nonmedical) in-depth beyond the conventional medical school curriculum. SCs can vary in duration from six weeks to several years, and they often feature didactic instruction, mentorship opportunities, and hands-on experiences. Our definition also includes a required, student-generated academic product (e.g., poster, formal presentation, written thesis) that demonstrates successful program completion. Our review includes dual-degree tracks (e.g., MD–PhD, MD–MPH), which we classify as elective SCs that award degrees on completion. Because dual-degree programs are older and more established, their evaluations may report more long-term outcomes. Thus, we believe evaluations of dual-degree programs can contribute to a literature review of SCs in medical education.


Method

Search strategy

In 2008, we used the Medline, PsycINFO, and ERIC electronic databases to locate articles related to the review question, “What impact do SC programs have on medical students?” One of us (the first author, S.B.B.) used the MeSH terms “education, undergraduate, medical” and “medical, students” to retrieve an initial set of articles. She then conducted a series of separate searches using key words (“research activity,” “research experience,” “research project,” “area of concentration,” “scholarly,” “concentration,” “dual degree,” “MD-PhD,” “MSTP,” “MPH,” and “MBA”) to limit citations. She searched medical education journals (Academic Medicine, Medical Education, Medical Teacher, Teaching and Learning in Medicine, and Advances in Health Sciences Education) electronically using the same key words, read the titles and abstracts of the articles identified in the key word searches, and developed a master list of citations. She also scanned the cited references of full-text articles to locate additional papers for consideration.
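As an illustration of the keyword-limiting step, the minimal Python sketch below filters an initial citation set against the review's key words. It is illustrative only: the actual searches were run through the databases' own interfaces, and the citation records shown are invented.

# Illustrative sketch of the keyword filter; not the actual search tooling.
# Key words come from the Method section, lowercased for case-insensitive matching.
KEYWORDS = [
    "research activity", "research experience", "research project",
    "area of concentration", "scholarly", "concentration",
    "dual degree", "md-phd", "mstp", "mph", "mba",
]

def matches_keywords(citation):
    """Return True if any review key word appears in the title or abstract."""
    text = (citation.get("title", "") + " " + citation.get("abstract", "")).lower()
    return any(keyword in text for keyword in KEYWORDS)

# Invented records standing in for citations retrieved with the MeSH terms.
initial_set = [
    {"title": "A required research project for medical students", "abstract": ""},
    {"title": "Teaching clinical anatomy with simulation", "abstract": ""},
]
master_list = [c for c in initial_set if matches_keywords(c)]
print(len(master_list))  # 1: only the first record matches a key word

Substring matching of this kind is crude ("mph" also matches "lymph," for example), which is why titles and abstracts were read before citations joined the master list.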

Inclusion/exclusion criteria

We included articles that (1) were written in English, (2) were published in peer-reviewed journals, and (3) reported evaluations of SCs. We did not factor in the quality of evaluation designs when selecting articles. Instead, we considered only articles that presented data to demonstrate the impact of SCs on medical students. Such data included students' reactions/satisfaction, skill development/attitude change, academic products/performance, career interests, and/or employment history. We excluded articles if the SC programs did not require students to submit a scholarly project. We also excluded large-scale evaluations of either specific, cross-institutional initiatives (e.g., outcomes associated with research awards from the Howard Hughes Medical Institute) or survey research involving multiple schools (e.g., AAMC Graduation Questionnaire data) because these did not meet our definition of SCs. For instance, we did not consider an article by Gallin and colleagues12 because it presented a large-scale evaluation of the Doris Duke Clinical Research Fellowship Program, an extramural research initiative sponsored by multiple institutions rather than a single medical school. During the selection process, each of us independently read approximately 50% of the full-text articles from the master list to determine whether they met the inclusion/exclusion criteria. We both read the same article only if one of us was uncertain about its eligibility for the review. We included an article only if we both agreed it met the inclusion criteria.

Data extraction

Together, we developed a four-page data extraction form based on the literature and guidelines for a Best Evidence Medical Education (BEME) review.13 In addition to bibliographic information, this form captured data about SC programs and various evaluation outcomes (List 1). Initially, we both read and discussed approximately 50% of the articles to ensure that we completed the data extraction forms consistently. We then divided and independently rated the remaining articles in the review. One of us (S.B.B.) entered the data from the extraction forms into SPSS 16.0 (SPSS Inc., Chicago, Illinois) for analysis.

Analysis

We adapted Kirkpatrick's model of evaluation outcomes to organize, analyze, and synthesize the student outcomes we identified in the literature review. Kirkpatrick developed this model in the 1960s to evaluate the effectiveness of training programs at four levels: reactions, learning, behavior, and results.14 More recently, Kirkpatrick's model has served as a common metric to organize outcomes of diverse studies reported in BEME literature reviews.15 We categorized the outcomes of SCs as follows:

1. Impact on Satisfaction (Reactions).

Learners' views of the SCs including their perceived value, organization, content, methods, instructional materials, and teaching effectiveness.

2a. Impact on Career Interests and Perceptions (Learning).

Changes in learners' attitudes, interests, perceptions, or career interests during or after participation in SCs.

2b. Impact on Knowledge and Skill Development (Learning).

Changes in learners' knowledge (e.g., acquisition of concepts, procedures, and principles), skills (e.g., acquisition of critical-thinking/problem-solving, psychomotor, and/or social skills), or confidence during or after participation in SCs.

3. Impact on Scholarly Work (Behavior).

Learners' transfer of learning gained during SCs to the workplace.

4a. Impact on Clinical Specialty and Career (Results).

Changes attributed to SCs at the individual, organizational, or societal level (e.g., production of additional physician–investigators).

4b. Impact on Patients (Results).

Improvements, attributed to SCs, in the health or well-being of patients.
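The minimal Python sketch below (an illustration only, not the actual extraction form) shows how these six categories could serve as a lookup table when tagging each outcome reported in a study:

# Illustrative coding scheme based on the adapted Kirkpatrick categories above.
KIRKPATRICK_LEVELS = {
    "1": "Impact on satisfaction (reactions)",
    "2a": "Impact on career interests and perceptions (learning)",
    "2b": "Impact on knowledge and skill development (learning)",
    "3": "Impact on scholarly work (behavior)",
    "4a": "Impact on clinical specialty and career (results)",
    "4b": "Impact on patients (results)",
}

def tag_outcome(level, description):
    """Pair a reported outcome with its adapted-Kirkpatrick category."""
    if level not in KIRKPATRICK_LEVELS:
        raise ValueError("Unknown level: " + level)
    return {"level": level, "category": KIRKPATRICK_LEVELS[level], "outcome": description}

# Hypothetical outcome from a reviewed study:
print(tag_outcome("3", "Publication of the scholarly project in a peer-reviewed journal"))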


Results

The search produced 1,640 citations. We reduced this to 82 full-text papers after screening abstracts/titles, conducting journal-specific searches, and reviewing reference citations. Of these, 39 studies met inclusion criteria. We excluded the remaining 43 articles because they did not provide data on outcomes, were not about SCs, reported large-scale evaluations, or were about SC programs that did not require a final scholarly project (Figure 1).


We report results in two major sections. First, we present descriptive characteristics of the articles included in the review. Then, we use our adaptation of Kirkpatrick's model to organize and discuss SC outcomes.

Descriptive analysis of included studies
Setting.

The majority of studies evaluated SC programs in 24 U.S. medical schools (13 private and 11 public). Six studies reported outcomes of SC programs in Canadian or European medical schools. Eight schools with long-standing SC programs or with multiple SC programs were featured in more than one of the 39 studies. Appendix 1 lists the schools included in the review.

Program type and inception.

The 39 reports described required (n = 13), degree-granting (n = 11), or elective (n = 15) experiences. Thirty-two articles mentioned the year the SC was first implemented. Of the articles in our review, Ebbert2 reported the earliest SC program, at Yale University, which implemented a thesis requirement for all medical students over 150 years ago. Four institutions created MD–PhD programs during the 1970s,16–19 and about half of the SC programs in this review began in the early 1980s. Start dates for elective SC programs varied considerably and showed no discernible pattern.

Program goals.

Twenty-four studies explicitly described the goals of SC programs.3–7,9,18–35 We analyzed the content of these goals and coded them into three categories: experience/reactions, mentoring/career development, and knowledge/skill development. Table 1 presents paraphrased goals and shows that many of these SC programs focused on students' knowledge and skill development (e.g., the ability to communicate ideas/results in a well-written report). Dual-degree programs, especially MD–PhD programs, usually had career-oriented goals for graduates (e.g., the production of students who can function as independent investigators).4,5

Evaluation methods and scope.

Most studies used questionnaires (n = 30) to collect data from SC program participants. Of these, three studies used pre/post research designs,6,29,39 four reported student satisfaction across class cohorts,2,3,28,36 and one study compared student outcomes across three institutions.35 Eight articles reported outcomes obtained from school-maintained records,7,18–20,25,26,37,38 two from structured interviews with SC program participants,39,40 and two from content analyses of students' research projects.23,41 Thirty-six reports mentioned the years of data included in SC program evaluations; the number of years of outcome data ranged from 1 to 30 (mean = 9.1 years, standard deviation = 8.6). Fourteen papers stated evaluation questions clearly,2,4,7,24–26,28,29,37,39,41–44 and we could infer the evaluation questions for another 17 studies5,6,9,16–19,21–23,27,30,32,33,35,36,45 by reading the goals and program descriptions of the SC program described. Initially, we tried to determine, using the data presented, whether reports completely or partially answered the stated or inferred evaluation questions. We struggled to make these judgments reliably, especially with inferred evaluation questions. Therefore, we do not report these data. Only four articles included statements about obtaining institutional review board (IRB) approval.7,26,41,42

Reported student outcomes
Level 1: Impact on satisfaction.

Twenty studies reported students' opinions of SC programs.3,6,7,9,17,20–26,28,30,36,39,42,43,45,46 Of these, 14 studies explicitly provided either questionnaire ratings or selected open-ended comments to demonstrate students' perceptions of SC experiences.3,9,17,20–26,36,39,42,45 Five studies reported that 80% to 94% of students would undertake SCs again if given the opportunity.25,27,36,39,45 Several studies documented student satisfaction with their SC preceptors as evidenced by students' high ratings of advising attributes such as availability,20,22,28,45 appropriate guidance,6,22,23,28,43 expertise,20,28 interest in the student and/or project,22,28 and professionalism.6 Students' criticisms of SC programs typically involved the scholarly project, which some students thought caused unwelcome stress,22,36,39 took too much effort,22,28,43 lacked appropriate structure,39 focused too much on laboratory research,20 and/or detracted from clinical opportunities.20,22

Few of the studies we reviewed directly explored the question of whether SCs should become a curriculum requirement.2,43 In a survey of Yale alumni, Ebbert2 learned that 60% of respondents thought all Yale medical students should complete a research project during their medical training. Closer examination of survey results revealed that alumni with medical school appointments (69%) held this opinion more often than those in private practice (57%). Frishman43 found that 50% of Albert Einstein College of Medicine graduates who completed an elective, six-month research experience during medical school (n = 49) thought research should not become a graduation requirement. Interestingly, medical students at the University of Calgary consistently assigned low satisfaction ratings to a required research experience even though many students had positive perceptions of their preceptors and scholarly projects.22,28 Harasym and colleagues22 discovered that dissatisfied students viewed Calgary's research requirement as less valuable if they had previous research experience or if they wanted more time to pursue clinical training opportunities.

Level 2A: Impact on career interests and perceptions.

A number of studies examined whether SC programs affected students' attitudes or career interests. Several articles reported that completing SCs during medical school influenced students' choice of clinical specialty2,6,24,25,29,30,36,45 or interest in research.20,24,26,29,30 One study observed that SC programs increased students' interest in rural medicine,9 and two others reported that SC programs encouraged students to consider academic careers.29,43 Ogunyemi and colleagues23 reported that 20% of students at the Charles Drew School of Medicine had decreased interest in research after completing a required thesis. The authors do not speculate whether these students were generally less interested in research given Charles Drew's mission to prepare primary care physicians for careers in urban health care.

Level 2B: Impact on knowledge and skill development.

Several investigations used students' self-ratings to document how completing SCs improved their abilities to evaluate the literature critically,7,24,29,43 to write a scientific paper,24,29,43 and/or to conduct ethically responsible research.6,24 Other students thought undertaking SCs gave them a broader perspective on patient care,9,17,36 improved their understanding of research principles/processes,6,22,23,43 and enhanced their knowledge.9,22 Another study compared the self-assessments of MD–MPH and MD-only students in appraising scientific literature.7 Though MD–MPH graduates reported higher confidence in this area than the comparison group (80% versus 33%), the low survey response rate of 16% made drawing valid conclusions difficult. Interestingly, no other investigation in the literature we reviewed used group comparisons to examine the impact of SCs on students' learning. Two studies used pre/post research designs to document changes in student learning.6,39 In one, students (n = 9) completed questionnaires to assess their knowledge before and after a research elective.6 Difference scores showed that students learned the most about IRB procedures and research processes. In the other, Shapiro et al39 interviewed 11 students on two occasions. In the post-SC interviews, all students said the research elective improved their confidence in designing and conducting clinical research, and nine students thought their technical skills improved during the SC.

Only one study in the review reported faculty assessments of students' scholarly products.22 In this investigation, 276 preceptors and an evaluation committee used a 14-item checklist to rate students' written research projects using a four-point scale (1 = poor to 4 = excellent). On average, both groups rated student performance above 3 on all criteria, but preceptors consistently rated student performance higher than the evaluation committee did on all items. Because of scoring discrepancies and a potential halo effect, the authors (Harasym and colleagues) recommend using a separate evaluation committee to assign grades to students' required research projects.

The last frequently reported outcome in this category was the number of students who completed degrees to fulfill SC requirements. Eight articles reported the total number of students who received PhDs4,5,25,37 or MPH degrees7,19,25,38 during their medical school training.

Level 3: Impact on scholarly work.

Many of the studies cited scientific presentations and publications to document students' ability to apply acquired knowledge and skills in a career setting. Most articles reporting these data relied on students' self-reports or school records (e.g., voluntary participation in school-sponsored research symposia). Studies generally did not mention specific methods (electronic searches, conference proceedings, etc.) used to confirm the accuracy of self-report data; only one article used Medline searches to identify students' publications.40
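A programmatic audit of self-reported publications is straightforward in principle. The minimal Python sketch below, an illustration rather than any reviewed study's method, counts PubMed records for an author via NCBI's public E-utilities; the author name is hypothetical, and common names would require careful disambiguation.

# Illustrative sketch: count PubMed records for an author using NCBI E-utilities.
import json
import urllib.parse
import urllib.request

def pubmed_count(author):
    """Return the number of PubMed records matching an author search."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": author + "[Author]",
        "retmode": "json",
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as response:
        result = json.load(response)
    return int(result["esearchresult"]["count"])  # the count is returned as a string

print(pubmed_count("Smith JA"))  # hypothetical graduate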

Publication in peer-reviewed journals seemed to serve as the “gold standard” of academic success across elective, required, and dual-degree SC programs. In fact, several articles reported SC students' publication rates in peer-reviewed journals, which ranged from 8% to 85%.3,22–24,26,31,32,36,40,43,44,46 Rates of giving presentations at regional or national meetings ranged from 10% to 41%.3,28,33,34,44,46 Three studies presented class cohort data to show how students' publications and presentations increased over time.23,26,36 Other articles discussed the nature of SC program students' scholarly work by listing the titles of students' projects,6–9,23,24,32,34,39,43,46 categorizing students' projects by topic area,22,23,26,41,43,46 or reporting the research designs that students employed.23,43

Level 4A: Impact on clinical specialty and career.

Several studies explored whether relationships existed between students' SC activities during medical school and their choice of clinical specialty. Ebbert2 learned that Yale graduates tended to enter clinical specialties in the departments where they completed their required research theses. He also noted that 26% of Yale graduates' current research activity was related to the topic of their student projects. Similarly, Chongsiriwatana and colleagues41 discovered that students who completed women's health projects to fulfill the University of New Mexico's research requirement matched more often into residency programs focusing on women's health than students who did research in other areas. Another study reported that those who took MPH courses during medical school entered pediatrics and preventive medicine at greater rates than MD-only graduates,19 whereas another stated that 83% of students who completed an elective honors research program in the Department of Dermatology obtained dermatology residency placements.32 One study found that 62% of responding graduates (n = 107) from Duke University's MD–PhD program obtained academic appointments in clinical departments (e.g., internal medicine, pathology, pediatrics) or nonclinical departments (e.g., biotechnology, basic sciences, informatics) where they could devote 48% to 65% of their time to research.37 We could not discern from these studies whether students selected certain SC topics to enhance their standing during the residency application process or whether students explored SC opportunities that later shaped their career interests.

Other studies used students' continued engagement in research to convey the long-term impact of SC programs. For instance, several schools reported their SC program graduates' current time commitment to research,2,4,16,18,19,27,30,35,43 involvement with research since medical school,2,19 success in obtaining National Institutes of Health (NIH)/extramural support,4,5,18,27,30,37 and/or record of publications after graduation.5,27,35 To explore the impact of a required research experience on later research involvement, Segal and colleagues35 surveyed graduates of the University of Pennsylvania who completed a research requirement and graduates of two similarly ranked medical schools with elective SC programs. Results showed that graduates with any research experience during medical school, whether required or elective, conducted and published more postgraduate research than those without any research experience (49% versus 32%). In another study, investigators interviewed 274 graduates of the University of Groningen in the Netherlands to explore the relationship between completion of extracurricular research during medical school and research productivity.40 Graduates who did research during medical school published more frequently after graduation (mean = 4 articles) than graduates without research experience (mean = 1 article). Of the graduates with research exposure in medical school, approximately 50% published during medical school, and those who had published during medical school had higher publication rates afterward (mean = 6 articles) than those who had not (mean = 2 articles).

Some evaluations of SC programs identified other career outcomes of graduates. Ten studies reported the number of graduates from elective,30,43 required,2,35 or dual-degree programs4,5,16,18,27,37 who obtained academic appointments or achieved faculty rank (e.g., professor) after residency training. Three reports of MD–PhD programs presented the institutions where graduates obtained academic appointments, including their recruitment to home institutions.4,18,37 Two reports described graduates' occupations27,37 (clinical medicine, industry, ministry, etc.) and leadership roles27 (journal editor, NIH section leader, etc.) to demonstrate how SC programs helped prepare physicians for careers in academic medicine as originally intended.

Level 4B: Impact on patients.

No articles in the review reported outcomes to document how or whether SC programs influenced changes in health care delivery or improved patient care practices or outcomes.


Discussion

This review includes a diverse set of SC programs whose topics of study, program goals, and instructional methods varied considerably. We chose to limit our definition of SCs to programs that require students to generate academic products to demonstrate successful program completion, and we offer this requirement as a potential standard for SCs. Given the range of program goals and characteristics, we will not speculate on what additional characteristics a gold standard SC should have.

Many reports used descriptive, single-group designs to examine program outcomes, or they relied heavily on self-report data. For these reasons, we cannot offer definitive conclusions about the long-term impact of SCs on students. The literature only suggests that SC programs influence students' perceptions, learning, research productivity, and career decisions (Appendix 1). Even so, this review underscores the importance of using outcomes to judge the effectiveness of educational interventions in medicine. The challenge now is to identify and prioritize the best outcomes to measure to demonstrate program impact. Some believe medical educators must first determine whether educational programs actually influence learners' behaviors.47,48 Others advocate investigations that link curricula to patient outcomes.49,50 Alternative lines of inquiry could explore how SC programs affect organizational structures or institutional culture. Evaluating complex, difficult-to-measure outcomes of educational programs requires leadership, expertise, and resources. The current literature reveals a tendency to measure what is easy to collect (student feedback) rather than what is important to know (behavioral, institutional, or societal outcomes); continuing that practice will not advance research in this area.

Implications

Several lessons have emerged from this review that can help educators determine the merit and worth of SC programs. We believe the following implications for program evaluation and educational research will apply to most SC programs regardless of their topic areas or program designs.

Create program goals.

Goals help shape a program's design, target faculty development initiatives, and clarify performance expectations. Goals may also help stakeholders identify the desired short- and long-term outcomes of educational programs. Most MD–PhD programs in this review had research-related goals and used publications and academic positions to demonstrate program impact. These outcomes may not apply to SC programs with different missions. Thus, if not already in place, faculty should develop goals for their SC programs and use these goals to communicate what they hope to achieve. Although goals may evolve over time, they do provide benchmarks for monitoring progress and evaluating impact.

Develop logic models.

After defining goals, faculty should identify which outcomes to collect systematically. To help with this process, the Kellogg Foundation developed a handbook in 1989 (and updated it in 2004) to show nonprofit agencies how to use logic models to align evaluation plans to program goals.51 The logic model graphically depicts the inputs, activities, outputs, and outcomes of programs. Inputs refer to the resources needed to operate a program such as grant dollars, in-kind departmental support, or laboratory resources. Activities document the tasks required to implement the program (curriculum design, student recruitment, faculty development, etc.). Outputs identify the direct results of programs, and these are often stated numerically (e.g., the number of students enrolled in SCs or the number of students who completed research projects). Outcomes list the anticipated short-term (one to two years after program completion) and long-term effects (five or more years after program completion) of programs. When used as an evaluation tool, logic models can uncover program assumptions, identify needed resources, generate program outcomes, and communicate evaluation plans.51,52 Therefore, we encourage faculty to explore how (or even if) logic models can improve the planning and evaluation of their SC programs.
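As an illustration, the four components might be filled in for a hypothetical research-oriented SC program as follows. This minimal Python sketch draws on the examples above and is not a template from the Kellogg handbook.

# Illustrative logic model for a hypothetical research-oriented SC program.
logic_model = {
    "inputs": [  # resources needed to operate the program
        "grant dollars", "in-kind departmental support", "laboratory resources",
    ],
    "activities": [  # tasks required to implement the program
        "curriculum design", "student recruitment", "faculty development",
    ],
    "outputs": [  # direct results, usually stated numerically
        "number of students enrolled in the SC",
        "number of students who completed research projects",
    ],
    "outcomes": {  # anticipated effects of the program
        "short term (1-2 years after completion)": ["improved critical appraisal skills"],
        "long term (5+ years after completion)": ["graduates engaged in funded research"],
    },
}

for component, entries in logic_model.items():
    print(component + ":", entries)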

Select appropriate designs.

Only a few studies in this review used quasi-experimental designs to evaluate SCs. In the future, investigators should select more rigorous designs (pretest/posttest, cohort analyses of SC versus non-SC graduates, etc.) if they want to explore the impact of SC programs on learners' behaviors.47 Multi-institutional research could also advance understanding of the effectiveness of educational interventions across sites, as in Segal and colleagues' study,35 which showed a link between undergraduate research experience and postgraduate scholarly activity.

Collect multiple sources of data.

Multiple challenges (e.g., “just-in-time” planning, lack of resources) often arise that supersede evaluation activities when faculty design new educational programs. Kirkpatrick14 recommends always collecting reaction feedback from participants because doing so (with questionnaires) is easy and the resulting data are better than no data at all. Though many view participant feedback as “soft,” these data can provide information for improving program components, establishing quality benchmarks, and justifying decisions to expand or eliminate programs.14 Limiting evaluation data to participant feedback, however, will not provide sufficient evidence to link educational programs to long-term outcomes. We encourage researchers to collect, when feasible, additional data from multiple sources (e.g., NIH databases, AAMC databases, faculty, alumni, patients, employers) using different methods (e.g., MEDLINE searches, interviews, questionnaires, observation, social network analysis). Doing so should allow researchers to explore other outcomes in Kirkpatrick's model and will provide opportunities to compare data across sources and methods, which will help support or refute the reliability and validity of evaluation findings.
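For example, triangulating a single outcome across two sources might look like the minimal Python sketch below, in which graduates' self-reported publication counts from an alumni survey are checked against a bibliographic database search. All data are invented.

# Illustrative triangulation of one outcome across two data sources.
self_reported = {"Smith JA": 6, "Jones RB": 2}    # invented alumni-survey counts
database_counts = {"Smith JA": 6, "Jones RB": 4}  # invented database-search counts

def compare_sources(survey, database):
    """Flag graduates whose self-reported count disagrees with the database."""
    comparison = {}
    for name, reported in survey.items():
        found = database.get(name, 0)
        comparison[name] = {
            "self_report": reported,
            "database": found,
            "agrees": reported == found,
        }
    return comparison

print(compare_sources(self_reported, database_counts))

Agreement across sources supports the reliability of the evaluation data; discrepancies, as for the second hypothetical graduate, would warrant follow-up.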

Assess students' abilities.

Although many of the SC programs in this review required students to generate and present scholarly projects, few programs reported student performance on these projects. The recent shift toward competency-based education in medicine53–55 may encourage SC programs to develop rigorous assessment systems and tools to provide students with feedback about their performance of complex skills. Faculty, patients, or peers could provide students with diagnostic feedback about their performance in multiple domains such as professionalism, interpersonal communication, medical knowledge, and research ability. These data might help SC programs determine whether students are meeting program-specific outcomes such as formulating testable hypotheses or practicing patient-centered interviewing skills. Aggregating performance data across students could also help SCs identify areas for program improvement.

Obtain IRB approval.

Ethical guidelines for human participants research in medical education have changed substantially.56,57 Professional practice now requires investigators to submit evaluation plans for SC programs to IRBs, which determine whether such plans warrant an exemption or require consent from participants.58,59 Many IRBs may decide that the evaluation of normal educational practices meets federal exemption criteria,60 making formal ethical review unnecessary. Other IRBs may require detailed protocols for curriculum evaluation, which may make identifying and measuring outcomes of new or evolving educational initiatives difficult. Data registries offer one approach to streamlining the IRB approval process. Initially used to record and monitor patient outcomes for clinical research, a data registry is a form of IRB application with the flexibility to amend existing protocols quickly for developing programs while still protecting the rights of human participants. Data registries may also support faculty scholarship in that faculty can request registry data for academic products (posters, presentations, publications) while still maintaining ethical standards.

Identify explanatory theories.

Finally, few reported studies in medical education research use theory to identify “why programs work.”61 Our review of published evaluations of SC programs corroborates this trend: only one of the articles we reviewed used theory to explain the outcomes of a clinical research experience in family medicine.39 Medical educators should explore the social science literature to identify theoretical models that can contribute to the academic medicine community's understanding of SC programs. For instance, Bakken and colleagues62 used social cognitive career theory (SCCT) as a framework to explore the limitations of training programs designed to produce physician–investigators. SCCT contends that positive learning experiences (e.g., interactions between a person and his or her environment) increase confidence in career-related abilities (career self-efficacy) and shape career interests (outcome expectations). Personal attributes (gender, marital status, parental status, etc.) and other variables (clinical demands, mentor availability, dual-career conflicts, funding resources, etc.), however, may decrease self-efficacy and adversely influence outcome expectations, which could discourage physicians from pursuing clinical research careers. SC programs with career-oriented goals may wish to use SCCT as one theory to identify and evaluate relationships among variables to understand how and why programs work.

Limitations

This review has several limitations. First, many of the reported outcomes in this review were related to research-oriented goals and activities. We do not know whether the outcomes of these programs (i.e., peer-reviewed publications or careers in academic medicine) generalize to SC programs with different goals (e.g., appreciating the role of the humanities in medicine). Second, publication bias may exist because evaluations of some programs, especially those with elective SCs, reported favorable outcomes and offered few suggestions for improvement. Third, we may have omitted articles, given our inclusion/exclusion criteria and reliance on electronic searches. We identified 30 additional articles after scanning reference citations and searching medical education journals; nonetheless, we may have overlooked some articles. Finally, we did not calculate interrater agreement after completing data extraction forms because of the diverse outcomes in this set of articles. Ongoing discussion between raters likely reduced this problem.

To conclude, educational programs, especially resource-intensive ones, require evaluations to inform decision making. The outcomes reported in Appendix 1 suggest that SCs influence students in several ways (i.e., fostering in-depth study, encouraging scholarly activity, increasing research productivity, and influencing career decisions). This review also identifies several lessons that may improve the evaluation of SCs. At this point, the literature does not provide sufficient evidence to allow definitive statements about the value of SC programs. Future research should employ more rigorous evaluation designs to show the direct impact of SC programs on both learners and, ideally, patient outcomes. Until then, medical school leaders will have to rely on the available literature and their own experiences to decide whether, given the resources expended, SC programs achieve their desired goals.


Funding/Support:

None.


Other disclosures:

None.


Ethical approval:

Not applicable.


References

1. Martin JB. Report of the Ad Hoc Committee of Deans: Educating Doctors to Provide High Quality Medical Care: A Vision for Medical Education in the United States. Washington, DC: Association of American Medical Colleges; 2004.

2. Ebbert A Jr. A retrospective evaluation of research in the medical curriculum. J Med Educ. 1960;35:637–643.

3. Fisher WR. Medical student research: A program of self education. J Med Educ. 1981;56:904–908.

4. Frieden C, Fox BJ. Career choices of graduates from Washington University's Medical Scientist Training Program. Acad Med. 1991;66:162–164.

5. Schwartz P, Gaulton GN. Addressing the needs of basic and clinical research: Analysis of graduates of the University of Pennsylvania MD-PhD program. JAMA. 1999;281:96–97, 99.

6. DeHaven MJ, Chen L. Teaching medical students research while reaching the underserved. Fam Med. 2005;37:315–317.

7. Harris R, Kinsinger LS, Tolleson-Rinehart S, Viera AJ, Dent G. The MD-MPH program at the University of North Carolina at Chapel Hill. Acad Med. 2008;83:371–377.

8. Shafer A. Stanford University School of Medicine, Arts and Humanities Medical Scholars Program. Acad Med. 2003;78:1059–1060.

9. Zorzi A, Rourke J, Kennard M, Peterson M, Miller K. Combined research and clinical learning make rural summer studentship program a successful model. Rural Remote Health. 2005;5:401.

10. Kanter SL, Wimmers PF, Levine AS. In-depth learning: One school's initiatives to foster integration of ethics, values, and the human dimensions of medicine. Acad Med. 2007;82:405–409.

11. Rickards E, Borkan J, Gruppuso P. Educating the next generation of leaders in medicine: The scholarly concentrations program at the Warren Alpert Medical School of Brown University. Med Health R I. 2007;90:275–276, 280–282.

12. Gallin EK, LeBlancq SM; Clinical Research Fellowship Program Leaders. Launching a new fellowship for medical students: The first years of the Doris Duke Clinical Research Fellowship Program. J Investig Med. 2005;53:73–81.

13. Harden RM, Grant J, Buckley G, Hart IR. Best evidence medical education. Adv Health Sci Educ Theory Pract. 2000;5:71–90.

14. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. San Francisco, Calif: Berrett-Koehler Publishers Inc.; 2006.

15. BEME. Starting a review. Available at: http://www.bemecollaboration.org/beme/pages/start.html. Accessed November 11, 2009.

16. Abelmann WH, Nave BD, Wilkerson L. Generation of physician-scientists manpower: A follow-up study of the first 294 graduates of the Harvard-MIT Program of Health Sciences and Technology. J Investig Med. 1997;45:272–275.

17. Chauvin SW, Rodenhauser P, Bowdish BE, Shenoi S. Double duty: Students' perceptions of Tulane's MD-MPH dual degree program. Teach Learn Med. 2000;12:221–230.

18. McClellan DA, Talalay P. M.D.-Ph.D. training at the Johns Hopkins University School of Medicine, 1962-1991. Acad Med. 1992;67:36–41.

19. Rosenberg SN. A survey of physicians who studied public health during medical school. Am J Prev Med. 1998;14:184–188.

20. Blazer D, Bradford W, Reilly C. Duke's 3rd year: A 35-year retrospective. Teach Learn Med. 2001;13:192–198.

21. Grochowski C, Halperin EC, Buckley EG. A curricular model for the training of physician scientists: The evolution of the Duke University School of Medicine curriculum. Acad Med. 2007;82:375–382.

22. Harasym PH, Mandin H, Sokol PA, Lorscheider FL. Development of a research elective program for first- and second-year medical students. Teach Learn Med. 1992;4:173–179.

23. Ogunyemi D, Bazargan M, Norris K, et al. The development of a mandatory medical thesis in an urban medical school. Teach Learn Med. 2005;17:363–369.

24. Jacobs CD, Cross PC. The value of medical student research: The experience at Stanford University School of Medicine. Med Educ. 1995;29:342–346.

25. Stellman JM, Cohen S, Rosenfield A. Evaluation of a one-year masters of public health program for medical students between their third and fourth years. Acad Med. 2008;83:365–370.

26. Zier K, Friedman E, Smith L. Supportive programs increase medical students' research interest and productivity. J Investig Med. 2006;54:201–207.

27. Wilkerson L, Abelmann WH. Producing physician-scientists: A survey of graduates from the Harvard-MIT Program in Health Sciences and Technology. Acad Med. 1993;68:214–218.

28. Smith FG, Harasym PH, Mandin H, Lorscheider FL. Development and evaluation of a research project program for medical students at the University of Calgary Faculty of Medicine. Acad Med. 2001;76:189–194.

29. Houlden RL, Raja JB, Collier CP, Clark AF, Waugh JM. Medical students' perceptions of an undergraduate research elective. Med Teach. 2004;26:659–661.

30. Solomon SS, Tom SC, Pichert J, Wasserman D, Powers AC. Impact of medical student research in the development of physician-scientists. J Investig Med. 2003;51:149–156.

31. Rhyne RL. A scholarly research requirement for medical students: The ultimate problem-based learning experience. Acad Med. 2000;75:523–524.

32. Wagner RF Jr, Lewis SA. Teaching medical students dermatology research skills: Six years of experience with the University of Texas Medical Branch dermatology non-degree research honors program, 2001-2006. Dermatol Online J. 2006;12:20.

33. Gonzales AO, Westfall J, Barley GE. Promoting medical student involvement in primary care research. Fam Med. 1998;30:113–116.

34. Rosenblatt RA, Desnick L, Corrigan C, Keerbs A. The evolution of a required research program for medical students at the University of Washington School of Medicine. Acad Med. 2006;81:877–881.

35. Segal S, Lloyd T, Houts PS, Stillman P, Jungas RL, Greer RB 3rd. The association between students' research involvement in medical school and their postgraduate medical activities. Acad Med. 1990;65:530–533.

36. Elwood JM, Pearson JC, Madeley RJ, et al. Research in epidemiology and community health in the medical curriculum: Students' opinions of the Nottingham experience. J Epidemiol Community Health. 1986;40:232–235.

37. Bradford WD, Anthony D, Chu CT, Pizzo SV. Career characteristics of graduates of a medical scientist training program, 1970-1990. Acad Med. 1996;71:484–487.

38. Boyer MH. A decade's experience at Tufts with a four-year combined curriculum in medicine and public health. Acad Med. 1997;72:269–275.

39. Shapiro J, Coggan P, Rube A, Morohasi D, Fitzpatrick C, Danque F. The process of faculty-mentored student research in family medicine: Motives and lessons. Fam Med. 1994;26:283–289.

40. Reinders JJ, Kropmans TJ, Cohen-Schotanus J. Extracurricular research experience of medical students and their scientific output after graduation. Med Educ. 2005;39:237.

41. Chongsiriwatana KM, Phelan ST, Skipper BJ, Rhyne RL, Rayburn WF. Required research by medical students and their choice of a women's health care residency. Am J Obstet Gynecol. 2005;192:1478–1480.

42. Watt CD, Greeley SW, Shea JA, Ahn J. Educational views and attitudes, and career goals of MD-PhD students at the University of Pennsylvania School of Medicine. Acad Med. 2005;80:193–198.

43. Frishman WH. Student research projects and theses: Should they be a requirement for medical school graduation? Heart Dis. 2001;3:140–144.

44. Remes V, Helenius I, Sinisaari I. Research and medical students. Med Teach. 2000;22:164–167.

45. Legardeur B, Lopez A, Johnson WD. Evaluation of short research experiences in cancer. J Cancer Educ. 1993;8:265–268.

46. McPherson JR, Mitchell MM. Experience with providing research opportunities for medical students. J Med Educ. 1984;59:865–868.

47. Shea JA, Arnold L, Mann KV. A RIME perspective on the quality and relevance of current and future medical education research. Acad Med. 2004;79:931–938.

48. Shea JA. Mind the gap: Some reasons why medical education research is different from health services research. Med Educ. 2001;35:319–320.

49. Prystowsky JB, Bordage G. An outcomes research perspective on medical education: The predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–336.

50. Chen FM, Burstin H, Huntington J. The importance of clinical outcomes in medical education. Med Educ. 2005;39:350–351.

51. W.K. Kellogg Foundation. Logic Model Guide. Available at: http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf. Accessed November 11, 2009.

52. Frechtling JA. Logic Modeling Methods in Program Evaluation. San Francisco, Calif: Jossey-Bass; 2007.

53. Epstein RM. Assessment in medical education. N Engl J Med. 2007;356:387–396.

54. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Acad Med. 2002;77:361–367.

55. Ben-David MF. The role of assessment in expanding professional horizons. Med Teach. 2000;22:472–477.

56. Caelleigh AS. Expanding of informed consent at university medical centers to include trainees as subjects of social-science research: Implications for science educators. Sci Editor. 2002;25:79–85.

57. Roberts LW, Geppert C, Connor R, Nguyen K, Warner TD. An invitation to medical educators to focus on ethical and policy issues in research and scholarly practice. Acad Med. 2001;76:876–885.

58. Henry RC, Wright DE. When do medical students become human subjects of research? The case of program evaluation. Acad Med. 2001;76:871–875.

59. Kanter SL. Ethical approval for studies involving human participants: Academic Medicine's new policy. Acad Med. 2009;84:149–150.

60. United States Department of Health and Human Services. Basic HHS Policy for Protection of Human Research Subjects. Available at: http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm#46.101. Accessed November 11, 2009.

61. Cook DA, Bordage G, Schmidt HG. Description, justification, and clarification: A framework for classifying the purposes of research in medical education. Med Educ. 2008;42:128–133.

62. Bakken LL, Byars-Winston A, Wang MF. Viewing clinical research career development through the lens of social cognitive career theory. Adv Health Sci Educ Theory Pract. 2006;11:91–110.



© 2010 Association of American Medical Colleges
