Evaluation of Program Outcomes

Going Beyond Kirkpatrick in Evaluating a Clinician Scientist Program: It's Not “If It Works” but “How It Works”

Parker, Kathryn PhD; Burrows, Gwen MA; Nash, Heather; Rosenblum, Norman D. MD

doi: 10.1097/ACM.0b013e31823053f3

Abstract

Evaluation of medical education programs is increasingly important. Educators are accountable to the learner to deliver high-quality education, to funders to deliver education in a cost-effective and efficient manner, to other educators to generate new knowledge on program effectiveness, and to the public to ensure the competency of their graduates. A multitude of approaches and models exist to help educators evaluate their programs. In medical education, commonly used models and approaches emphasize the outcomes rather than the processes that lead to outcomes.1 Many medical educators conceptualize outcomes as a series of levels through which the learner progresses (satisfaction, learning, behavior, and results or effects of behavior).2–6 Evidence suggests that programs—including well-constructed ones—fail to demonstrate outcomes at the higher levels of behavioral change, even in sophisticated learners.7–11 Outcome-focused models may provide program stakeholders with a shared understanding of the program's rationale, goal, and intended outcomes—as well as the inputs and processes required to arrive at those outcomes. Yet these models provide little insight into a program's “theory”—that is, the processes by which it yields desired outputs and outcomes—and they do not capture unintended outcomes.12–14 Accordingly, outcome-focused models provide little insight into the underlying mechanisms that hinder or enable the achievement of higher-level program outcomes. Evaluative strategies that capture and allow for the analysis of unintended or emergent outcomes may provide medical educators with a richer, more comprehensive understanding of program theory and value.

We investigated how evaluating emergent and unintended outcomes can inform an educational program in ways that extend beyond the results of usual, outcome-based methods. The Canadian Child Health Clinician Scientist Program (CCHCSP), which educates a new generation of child and youth health clinician scientists, uses a logic model—an outcome-focused framework—as its primary program evaluation strategy. We devised supplementary methods to determine whether the CCHCSP affects learners in ways that are unanticipated and whether identifying such unanticipated outcomes might provide novel insights into the program's effectiveness.15

The Program

The overarching objective of the CCHCSP is to educate a new generation of clinician scientists in child and youth health within an interdisciplinary model. The CCHCSP is a partnership among 17 Canadian child-and-youth-health-focused academic health centers (AHCs). The CCHCSP mandates mentored research training at the doctoral, postdoctoral, or career development level. CCHCSP trainees engage in a curriculum that consists of four major components: Web-based learning modules, interdisciplinary mini-symposia, an annual national symposium, and international meetings.

The five online learning modules focus on the following subject areas: (1) research ethics and conflicts of interest, (2) research design and clinical research practice, (3) managing one's career as a clinician scientist, (4) oral and written communication, and (5) knowledge translation. The online curriculum also consists of 12 case studies, each of which highlights concepts across two or more modules. Case studies contain a case scenario, learning objectives, guiding questions, and links to modular content that specifically relate to the objectives and questions. For example, the case “Wheezing in Winnipeg” focuses on study design and research ethics in clinical trials and on collaboration within a multidisciplinary framework. The content modules are available for use by trainees at any time. Small groups, which exist in each of the CCHCSP's 17 partner AHCs, meet on a regular basis to use the case studies as a starting point for discussion and learning.

The CCHCSP uses research exercises at mini-symposia as key elements for learning. A typical symposium focuses on a specific theme (e.g., “collaborating in research groups”) and consists of a series of keynote lectures, working seminars that address the theme, and small-group sessions that focus on pivotal questions, which again relate to the theme. At one symposium, for example, participants were challenged to develop interdisciplinary approaches to research from planning and grant writing to study implementation and publication.

The annual national symposium, which takes place in a different Canadian city each year, also features workshops, but these focus on specific skill sets relevant to the emerging clinician scientist. Other critical components of the symposia are research presentations by graduating CCHCSP trainees and poster presentations by all other trainee attendees, a keynote address on an issue of relevance to clinician scientists, and time for the program's leaders and faculty to conduct progress meetings with trainees and (separately) with supervisors.

CCHCSP trainees participate in international symposia via the CCHCSP partnership with the Pediatric Scientist Development Program (PSDP) and the Training Upcoming Leaders in Pediatric Science (TULIPS) Program. Participation in these meetings provides an opportunity to share scientific work, to network with other leaders in the field, and to take part in career development seminars that focus on skills such as grant writing and manuscript writing.

Method

Logic-model-based evaluation of the CCHCSP

Soon after its creation, the CCHCSP developed an evaluation framework using a program logic model. This logic model (Chart 1) was created to serve as a communication tool for the program, to inform program development, and to serve the evaluation needs of program developers and funders. The model organizes the program components being evaluated, including inputs, outputs, and anticipated outcomes. The logic model helps to identify and define types of data needed to measure processes and outcomes for each program component. These data are collected through both quantitative and qualitative assessment tools developed by the CCHCSP.
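To illustrate, in schematic form, how a logic model organizes components and the data needed to measure them, consider the following minimal sketch; the component names and measures are hypothetical placeholders, not the CCHCSP's actual logic model entries (Chart 1 presents those).

```python
# Hypothetical, simplified logic model skeleton; all entries are
# illustrative placeholders, not the CCHCSP's actual model (see Chart 1).
logic_model = {
    "inputs":   ["funding", "faculty mentors", "partner AHCs"],
    "outputs":  {"curriculum delivered": "participation rates",
                 "symposia held":        "satisfaction ratings"},
    "outcomes": {"short_term": {"research productivity": "publication counts"},
                 "long_term":  {"independent careers":   "positions secured"}},
}

# Each measured element maps to the data needed to evaluate it.
for component, measure in logic_model["outputs"].items():
    print(f"{component} -> collect: {measure}")
```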

Chart 1: Original Program Logic Model (i.e., Before the Inclusion of Ibarra's Theory) for the Canadian Child Health Clinician Scientist Program (CCHCSP)

We sought, and the Hospital for Sick Children granted, ethical approval for this retrospective study. We included data from all the cohorts, up to and including the 2009 graduating cohort, in our analysis for this study.

Quantitative assessment

Tools developed through the program's logic model track whether graduates were sufficiently exposed to and engaged in the program to learn and develop transdisciplinary research skills. We examined participation rates in all program components. We also used five-point, Likert-like scales to assess graduates' satisfaction with some of the components (the annual national symposium and the various mini-symposia). To assess whether the CCHCSP achieved the desired short-term research goals, the program evaluators used both output and outcome measures. To assess program outputs, we examined program graduates' research productivity by counting the total number of peer-reviewed publications they authored or coauthored and the total number of invited presentations they gave. We also calculated the total amount of grant funding graduates received.

To further assess outcomes, we calculated the mean and range of the number of disciplines represented in the trainees' and graduates' research, and we counted the number of program graduates who secured positions as clinician scientists.
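To make these tabulations concrete, a minimal scripting sketch follows; the records, field names, and values are hypothetical illustrations of the measures described above, not the program's actual data.

```python
from statistics import mean

# Hypothetical graduate records; all field names and values are
# illustrative only, not actual CCHCSP data.
graduates = [
    {"publications": 6, "symposium_rating": 4.5, "disciplines": 3},
    {"publications": 2, "symposium_rating": 4.0, "disciplines": 2},
    {"publications": 9, "symposium_rating": 4.4, "disciplines": 8},
]

# Output measure: total peer-reviewed publications across graduates.
total_publications = sum(g["publications"] for g in graduates)

# Satisfaction: mean of five-point, Likert-like ratings.
mean_rating = mean(g["symposium_rating"] for g in graduates)

# Outcome measure: mean and range of disciplines represented per project.
disciplines = [g["disciplines"] for g in graduates]
print(total_publications,
      round(mean_rating, 2),
      round(mean(disciplines), 1),
      (min(disciplines), max(disciplines)))
```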

Qualitative assessment

To further explore the achievement of goals, one of us, the evaluation committee chair of the CCHCSP (G.B.), conducted confidential, one-on-one, face-to-face exit interviews, each lasting about 30 minutes, with all program graduates, as well as with trainees who did not complete the program (see Appendix 1 for the exit interview questions). The interviewer (G.B.) summarized the resulting data using grounded theory methodology,16 and then we grouped and analyzed recurring themes.

Contextual knowledge of program development is critical to the interpretation of evaluation data. To inform future evaluation efforts and the development/revision of program components, we held a utilization-focused, information-based deliberation with stakeholders (e.g., the program director, administrative staff, program evaluation leaders, evaluation specialists). During this deliberation, we (both program evaluators and stakeholders) examined the qualitative data and noted one emergent or unintended outcome: Many graduates commented on gaining a clearer understanding of their professional identity (see Results).

Consequently, we conducted a literature search in an effort to understand these emerging data. Using the search terms professional, identity, and socialization (and combinations thereof), we searched PubMed, ERIC, CINAHL, RDRB, Harvard Business School Archives, and Google Scholar in August 2009 for literature from the last 20 years that explored how health care professionals acquire or alter their professional identities.

Results

The sample, participation, and satisfaction

From 2003 to 2009, the program graduated 21 trainees, each of whom received training in one of seven clinical disciplines: medicine (11 graduates), nursing (2), physical therapy (1), psychology (1), occupational therapy (2), speech-language pathology (3), and dentistry (1). The number and diversity of these disciplines indicate the multidisciplinary breadth of the program.

A large number of graduates engaged in each of the four program components (Table 1). All funded graduates participated in the online learning modules and in at least one of the national symposia. A smaller number of the funded graduates, 12 (57%), participated in mini-symposia. This lower participation rate is largely because mini-symposia were not offered in all regions and because some graduates chose not to attend owing to their workload. Many graduates (n = 15 [71%]) also participated in the meetings of one or both international partner organizations (i.e., TULIPS, PSDP).

Table 1: Participation of Funded Graduates in the Canadian Child Health Clinician Scientist Program by Program Component, 2003–2009

Graduate satisfaction with the mini-symposia, as well as with the seven national symposia held between 2003 and 2009, was high. For the national symposia, the average overall rating by all participants (including graduates) was 4.33 out of 5; annual averages ranged from a low of 4.2 in 2004 to a high of 4.5 in 2006. Participation in the mini-symposia was not compulsory, but those who did participate rated their experience highly, with an average rating of 4.4 out of 5. We did not collect satisfaction data on the online modules or on the meetings of international partner organizations.

Program outputs and outcomes

Productivity.

The data we examined to assess the program's research outputs indicate that graduates' productivity did increase. The number of peer-reviewed publications they produced, the number of invited presentations they gave, and the amount of funding they garnered (Table 2) indicate that graduates were beginning to function as independent child health researchers with the skills needed to conduct interdisciplinary research, which was the program's ultimate goal.

Table 2: Research Productivity of Graduates of the Canadian Child Health Clinician Scientist Program, 2003–2009

Interdisciplinary research and careers.

Graduates' research projects indicated that they engaged in interdisciplinary research as established investigators (a program outcome articulated in the logic model); the mean number of disciplines represented in their research was 3.2 (range, 2–8). Evidence that graduates were able to develop independent research careers after the program (another outcome articulated in the logic model) lies in the fact that 17 of the 18 graduates (94%) who sought a position as a clinician scientist secured one (the 3 graduates pursuing postgraduate studies were excluded from the denominator).
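The 94% figure follows from adjusting the denominator to exclude graduates still in postgraduate study; as a minimal arithmetic check, using only the counts reported above:

```python
# Counts reported above: 21 graduates, 3 excluded as still in
# postgraduate study, 1 of the remaining 18 not yet placed.
total, still_studying, unplaced = 21, 3, 1
sought = total - still_studying      # 18 graduates sought positions
placed = sought - unplaced           # 17 secured positions
print(f"{placed / sought:.0%}")      # -> 94%
```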

Qualitative inquiry

Table 3 provides a summary of the themes that emerged from the interviews regarding program graduates' experiences with and perceptions of the CCHCSP. We found that the ability of the CCHCSP to expose graduates to interdisciplinary research was a critical program outcome. Furthermore, graduates indicated that the CCHCSP broadened their perspectives about research processes and systems and increased their professional networks and support systems.

Table 3: Experiences, by Theme, of the 21 Program Graduates of the Canadian Child Health Clinician Scientist Program as Identified in Exit Interviews, 2003–2009

Unintended program outcome and further inquiry

One outcome that emerged from the data, which was not articulated a priori in the original logic model, was that the program helped clarify for graduates what being a clinician scientist means. Fourteen of the 21 graduates (67%) said that one of the most important ways the program prepared them for their careers was to provide understanding and clarity about the clinician scientist role, which in turn contributed to their understanding of themselves as clinician scientists. Some graduates linked this outcome explicitly to professional identity alteration (Table 3, bottom row).

In trying to understand the significance of this finding, we reanalyzed the interview data for further information about how graduates altered their professional identities and about the role the program played in that process. The interview data, although invaluable, were insufficient to explain how the program worked to effect enhanced self-identification as a clinician scientist.

As mentioned above, we conducted a search of the literature to find further information about professional identity. Our search resulted in 508 citations. We refined our search to focus on studies that were theory based or that tested a specific theoretical assumption on the professional identity of health care professionals. This refined search resulted in 20 citations. Some of these references provided information regarding numerous theories on how individuals are socialized into a health care profession, on factors that affect the acquisition of a primary professional identity,17–25 and on the role that education can play in professional identity formation.17,25–27 However, we could not find any theory in the published health care literature on how professionals with an established identity acquire a new one—that is, how practicing clinicians may come to see themselves as clinician scientists.

Ibarra's theory on provisional selves

Next, we broadened our search to include studies in the business and organizational development literature and found one theory on how individuals change or modify their professional identity. Herminia Ibarra's28 theory of provisional selves proposes that professionals adopt new identities through a trial-and-error process of experimenting with “provisional selves”—possible, yet not fully formed, professional identities. Provisional selves are new professional identities that are constructed, rehearsed, and refined as a professional takes on a new role. A provisional self is constructed during “times of career change or transition, as people identify role models, experiment with unfamiliar behaviors, and evaluate their progress.”28 Ibarra asserts that professionals engage in three tasks when adapting to new roles: (1) observing role models to identify possible identities, (2) experimenting with provisional selves, and (3) evaluating these experiments against internal standards and external feedback.

A new program theory for CCHCSP

We merged logic model data (including qualitative information from exit interviews) and findings from the literature on professional identity change to articulate a new program theory, or model, for the CCHCSP (Figure 1). We present this program theory as a series of hypothetical statements, supported by program data (quantitative output and outcome measures, interview results), that illustrate how the CCHCSP works to enable the learner to identify as a clinician scientist.

Figure 1: Canadian Child Health Clinician Scientist Program theory incorporating Ibarra's28 theory of professional identity and logic model data.

We hypothesize that the process by which this program enables a trainee to identify as a clinician scientist begins with a synthesis of three critical components (Figure 1, Boxes 1–3). First, the trainee needs to engage in the curriculum (Figure 1, Box 1), which includes interacting with a group of peers from diverse clinical and research disciplines. Engagement in the curriculum, in turn, contributes to increased research productivity (Figure 1, Box 2). Engagement in the mini-symposia, the annual national symposium, and the international meetings where trainees regularly present their research, as well as engagement in the research process itself, provides opportunities for trainees to work with and observe role models (mentors), which, according to Ibarra's theory, enables the trainee to experiment with provisional selves (Figure 1, Box 3).

Next, as the trainee engages in this experimentation, three sources (not identified explicitly in the model) provide information to the trainee that informs the evaluation of each provisional self. The trainee evaluates his or her experiences against external feedback provided by various faculty or researchers. This external feedback (formal or informal in nature) may come from the trainee's mentor, a peer, another faculty member, or from the program leaders. The trainee also evaluates each provisional self against his or her own internal standards. If the provisional self is in accordance with these internal standards, full or partial adoption of the provisional self is more likely to occur. Finally, the trainee evaluates each provisional self against external standards. External standards may be those set by the program—but they may also be those that are set by the system in which a clinician scientist works and/or those that are learned through activities such as applying for grants, submitting manuscripts for publication, and seeking jobs. This trial-and-error process is an iterative and dynamic one. As each provisional self is evaluated against external feedback and internal standards, the trainee's internal standards may be altered or reinforced.
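One way to read this iterative structure is as a schematic loop. The sketch below is purely our illustrative abstraction of the hypothesized process in Figure 1—every name, score, and threshold is invented—and is not a formalization of Ibarra's theory or of program data.

```python
# A deliberately toy rendering of the hypothesized trial-and-error loop.
# All names, scores (0-5 scale), and the threshold are invented for
# illustration; none of this formalizes Ibarra's theory or program data.
def identity_trials(provisional_selves, internal_standard, threshold=2.5):
    """Evaluate each provisional self against external feedback, the
    trainee's internal standard, and external standards; adopt the first
    that fits well enough."""
    for self_, (feedback, external_standard) in provisional_selves.items():
        fit = (feedback + internal_standard + external_standard) / 3
        # Each trial may alter or reinforce the trainee's internal standard.
        internal_standard = (internal_standard + feedback) / 2
        if fit >= threshold:
            return self_, internal_standard  # identity adopted
    return None, internal_standard           # no identification is also a valid outcome

# Hypothetical trials: {provisional self: (external feedback, external standard)}
trials = {"bench scientist": (1.5, 2.0), "clinician scientist": (4.0, 4.5)}
print(identity_trials(trials, internal_standard=3.0))
```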

This process of evaluation directly influences the trainee's confidence and interest in pursuing research (Figure 1, Box 4) and generates clarity of the role of the clinician scientist and of the system in which a clinician scientist works (Figure 1, Box 5). This confidence/interest and this clarity are components necessary for the trainee to determine if he or she will identify as a clinician scientist (Figure 1, Box 6). (Notably, these two constructs—confidence/interest and role clarity—were also identified by the program's logic model and exit interview data.)

The trainee who identifies as a clinician scientist will actively seek employment in the role (Figure 1, Box 7), thus contributing to the next generation of clinician scientists (Figure 1, Box 8).

Discussion and Conclusions

Our rationale for this evaluation was to uncover the value and impact of the CCHCSP beyond what was articulated in the program's original logic model—that is, to understand what trainees experienced as they progressed through the program. Quantitative data collected as part of the logic model evaluation support the conclusion that CCHCSP graduates were engaged in the program and were productive during and after it. Qualitative data helped us understand how and why graduates were successful, and, importantly, they revealed the unexpected outcome that graduates identified as clinician scientists.

By articulating the program's theory with, first, information from the logic model framework (including outcome information that emerged through exit interviews) and, then, a literature-informed understanding of how professionals adopt a new identity, we potentially gain a greater comprehension of the program's influence and of how the CCHCSP functions.

Information that we collected from the interview data is consistent with assumptions in Ibarra's theory on professional identity. Common to both is the importance of the network and mentor in establishing or changing a professional identity. Ibarra asserts that the mentor is a possible provisional self: that the trainee “tries on” various characteristics of the mentor prior to the adoption of any of these characteristics. Similarly, exit interview data indicate that the mentoring and networking provided by the program are key to achieving a greater understanding of what is required as a clinician scientist. Although the program explicitly built in a strong mentoring component, the importance of more informal mentoring was also clearly evident. Ibarra asserts, and these data support, that this mentor–mentee relationship is critical for conveying commonly held understandings about, and the cultural norms of, the clinician scientist community, which are in turn vital to the process of graduates identifying themselves as clinician scientists, members of the community. Understanding this “hidden curriculum” (implicit in many program components) provides insight into how this program works to bring about the observed outcomes.

Another commonality between our results and Ibarra's theory is the importance of role clarity. Ibarra asserts that trainees try on provisional selves to gain a better understanding of the reality of the profession and their place in it. Role clarity was a dominant theme in the exit interviews. Through this process of obtaining clarity, a trainee might conclude that being a clinician scientist is not a desirable role. This outcome, rarely articulated in a program's logic model, might be erroneously viewed as an indicator of the program's failure. Educators often view their educational efforts as failures if predetermined outcomes are not achieved (e.g., a participant does not pursue work as a clinician scientist). In fact, their ability to influence learners' growth and improvement in ways that are not known a priori (e.g., a learner comes to realize she prefers to spend the majority of her time in direct patient care) should be viewed as an indicator of program success.

The resulting program theory, although provocative, has not been empirically tested and requires further investigation. Assertions of the theory may be tested individually (Which type of provisional self is likely to be adopted? Which program components best facilitate experimentation with various identities?), or assertions may be evaluated as indicators of overarching program success (Can we correlate the long-term success of graduates with self-identification as a clinician scientist?). Further testing of this theory would necessitate data from all or most program graduates (as opposed to the 21 graduates whose outcomes we studied for this work).

This program evaluation approach has implications for future program assessment and development efforts. The CCHCSP's logic model has been modified to include identity as a clinician scientist as an outcome. Further testing of the program's theory will have implications for the development of future program components. Some program components (i.e., the mini-symposia and national symposia) may be critical to the testing of provisional selves because they enable or foster mentoring relationships (as opposed to the online modules, which provide technical knowledge of research but do not influence the professional identity process). This knowledge may affect how program developers construct subsequent program modules.

As program developers and evaluators, we learned three key lessons through this process. First, the development of a program's logic model is part of sound program planning and evaluation. A small but worthwhile investment, it brings knowledge of the program to its key stakeholders and should be created at the outset. Second, the articulation of a program's theory provides a greater understanding of how the program brings about not only predetermined and intended outcomes but also unintended or emergent outcomes. Articulating the underlying theory is vital to program improvement and knowledge generation. Finally, program developers' willingness to bring open-ended inquiry and to integrate the right people and perspectives (e.g., theories, including those from the literature of other fields) leads to new hypotheses and research in educational programming.

Acknowledgments:

This work was supported by a Canadian Institutes of Health Research Strategic Training Initiative in Health Research grant (to N.D.R.) and by a Canada Research Chair (to N.D.R.). The authors would like to thank Canadian Child Health Clinician Scientist Program trainees and mentors for their engagement in the program.

Funding/Support:

Funding for the Canadian Child Health Clinician Scientist Program (CCHCSP) was provided by the Canadian Institutes of Health Research, the Sick Kids Foundation, the British Columbia Children's Hospital Foundation, the Women's and Children's Health Research Institute (Edmonton), and the Manitoba Institute of Child Health.

Ethical approval:

Ethical approval was sought and obtained from the Hospital for Sick Children.

Other disclosures:

None.

Previous presentations:

This paper was presented at the Association for Medical Education in Europe annual meeting in September 2010 in Glasgow, Scotland.

References

1 Tian J, Atkinson NL, Portnoy B, Gold RS. A systematic review of evaluation in formal continuing medical education. J Contin Educ Health Prof. 2007;27:16.
2 Kirkpatrick DL. Evaluating Training Programs: The Four Levels. San Francisco, Calif: Berrett-Koehler Publishers; 1998.
3 Dixon J. Evaluation criteria in studies of continuing education in the health professions: A critical review and a suggested strategy. Eval Health Prof. 1978;1:47–65.
4 Walsh PL. Evaluating educational activities. In: Adelson R, Watkins FS, Caplan RM, eds. Continuing Education for the Health Professional: Educational and Administrative Methods. Rockville, Md: Aspen; 1985:71–100.
5 U.S. Department of Health and Human Services. Healthy People 2010: Understanding and Improving Health. 2nd ed. Washington, DC: U.S. Government Printing Office; 2000.
6 Freeth D, Hammick M, Reeves S, Koppel I, Barr H. Effective Interprofessional Education: Development, Delivery and Evaluation. London, UK: Blackwell; 2005.
7 Mansouri M, Lockyer J. A meta-analysis of continuing medical education effectiveness. J Contin Educ Health Prof. 2007;27:6.
8 Noe RA, Schmitt N. The influence of trainee attitudes on training effectiveness: Test of a model. Pers Psychol. 1986;39:497–523.
9 Clement RW. Testing the hierarchy theory of training evaluation: An expanded role for trainee reactions. Public Pers Manage. 1982;11:176–184.
10 Smith PE. Management modeling training to improve morale and customer satisfaction. Pers Psychol. 1976;29:351–359.
11 Severin D. The predictability of various kinds of criteria. Pers Psychol. 1952;5:93–104.
12 Bickman L, ed. Advances in program theory. New Directions for Program Evaluation. Fall 1990:1–124. Special issue.
13 Chen H, Rossi PH. Evaluating with sense: The theory-driven approach. Eval Rev. 1983;7:283–302.
14 Weiss CH. Theory-based evaluation: Past, present and future. New Dir Eval. 1997;76:41–55.
15 Patton MQ. Utilization-Focused Evaluation. Thousand Oaks, Calif: Sage Publications; 2008.
16 Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research. Thousand Oaks, Calif: Sage Publications; 2000.
17 Monrouxe LV. Identity, identification and medical education: Why should we care? Med Educ. 2010;44:40–49.
18 Andrew N, Ferguson D, Wilkie G, Corcoran T, Simpson L. Developing professional identity in nursing academics: The role of communities of practice. Nurse Educ Today. 2009;29:607–611.
19 Crawford P, Brown B, Majomi P. Professional identity in community health nursing: A thematic analysis. Int J Nurs Stud. 2008;45:1055–1063.
20 Deppoliti D. Exploring how new registered nurses construct professional identity in hospital settings. J Contin Educ Nurs. 2008;39:255–262.
21 Lindquist I, Engardt M, Granham L, Poland F, Richardson B. Physiotherapy students' professional identity on the edge of working life. Med Teach. 2006;28:270–276.
22 Lingard L, Garwood K, Schryer CF, Spafford MM. A certain art of uncertainty: Case presentation and the development of professional identity. Soc Sci Med. 2003;56:603–616.
23 Roberts SJ. Development of a positive professional identity: Liberating oneself from the oppressor within. ANS Adv Nurs Sci. 2000;22:71–82.
24 Niemi PM. Medical students' professional identity: Self-reflection during the preclinical years. Med Educ. 1997;31:408–415.
25 Miller JL. Level of RN educational preparation: Its impact on collaboration and the relationship between collaboration and professional identity. Can J Nurs Res. 2004;36:132–147.
26 Clandinin DJ, Cave MT. Creating pedagogical spaces for developing doctor professional identity. Med Educ. 2008;42:765–770.
27 Grealish L, Trevitt C. Developing a professional identity: Student nurses in the workplace. Contemp Nurse. 2005;19:137–150.
28 Ibarra H. Provisional selves: Experimenting with image and identity in professional adaptation. Adm Sci Q. 1999;44:764–791.
Appendix 1: Exit Interview Questions Asked of Participants, Including Graduates, of the Canadian Child Health Clinician Scientist Program (CCHCSP), 2003–2009
© 2011 Association of American Medical Colleges