One of the great joys of being editors of Academic Medicine is receiving a variety of manuscripts that expose us to new ideas that could fundamentally change how health professionals are educated about delivering health care and promoting health. Some of the most important ideas are found in descriptions of new programs. Other ideas are in the substantial number of submissions that report on program evaluation. Those submissions range from describing program innovations with minimal or informal evaluation, as discussed by Kanter,1 to providing rigorous evidence of the value of a program. While the description of a program conveys the goals and aspirations of the authors, the rigor of the evaluation provides the needed evidence of what works and in what context. Journals, by carrying out thorough peer review, play an important part in assessing the evidence that new programs work as advertised.
For the purposes of this editorial, we use the following definitions:
- A program description is a report of a discrete educational process or initiative, with information about how it works and its impact.
- A program evaluation is the systematic collection and analysis of information related to the design, implementation, and outcomes of an education program, for the purpose of monitoring and improving the quality and effectiveness of the program. (This definition is slightly modified from that used by the Accreditation Council for Graduate Medical Education to define the evaluation of a resident education program.2)
Various frameworks for conducting program evaluation have been described,3 and these frameworks can be integrated into the design of the program from the start, such as the use of a logic model4 that describes how a program is supposed to work. Frye and Hemmer5 also provide an overview of evaluation frameworks that should be considered for medical education evaluation scholarship. Because we want to encourage the dissemination of successful programs—as well as an understanding of those that are not successful—we focus this editorial on the topic of program evaluation to help authors achieve greater clarity and consistency in the program evaluations they submit and to stimulate discussion of this important topic.
Program evaluations can be components of different types of manuscripts that are submitted to the journal. Table 1 presents the journal’s general expectations for three of these types of manuscripts—Articles, Innovation Reports, and Research Reports—and includes a link to more detailed instructions about them.
Submissions to Academic Medicine that contain program evaluations but lack a research question or robust evaluation evidence are generally best suited for presentation either as Articles or as Innovation Reports. The Article format allows for longer program descriptions with rich background and context, often with discussion about how the approach might be applied more broadly and be improved through future work. Articles often focus on a mature program where practical experience is being harnessed without the application of rigorous research methods. For example, Englander et al6 describe the development of the core entrustable professional activities for entering residency through a consensus process in an Article that combines multiple stages of innovation, analysis, and evaluation to arrive at a product that could guide curricular and program development at a national level.
In contrast, an Innovation Report introduces a new, preliminary approach to a challenge facing the wider academic medicine community. Innovation Reports are particularly suited for local, novel, often cutting-edge approaches. For example, in an Innovation Report, Crabtree et al7 present an attempt to link an evidence-based medicine educational program for second-year medical students to improvements in clinical care. Students produced evidence-based tools to assist in the management of a variety of problems such as sepsis, status epilepticus, and pediatric sickle cell disease. The students and clinical teams provided evaluations of the project to improve both the education and clinical approaches for future iterations of the project and to evaluate whether the project was meeting the goal of linking education with quality care. This kind of Innovation Report gives readers a glimpse into programs at an early stage of development and emphasizes how a preliminary evaluation of the early experience informs next steps.
Research Reports constitute yet another category of manuscript submitted to the journal that can include program evaluations. This category is generally reserved for submissions with one or more specific research questions, rigorous evaluation methods, robust data analysis, and findings that advance understanding about addressing an educational need. A study by Cedfeldt et al8 provides an example of a program evaluation of a resident time-off policy as part of a Research Report. This evaluation project asked a research question about why the time-off policy was not being utilized by many residents and used a rigorous analysis of data to reach a conclusion. Given the time and effort required, Research Reports are more likely to focus on mature programs. Ideally, though not always, Research Reports incorporate findings from beyond a single institution.
Program descriptions can also be published with little or no evaluation. Descriptive reports can provide value by facilitating the implementation of similar efforts at other sites and perhaps increasing the likelihood of subsequent research. However, unless such reports of local programs are clearly innovative or generalizable, they would likely fit best in other journals devoted to disseminating educational materials without the expectation of rigorous evaluation.
Selected Guidelines for Creating Program Evaluations
Below we outline selected elements of program evaluation and offer general guidance to help authors and reviewers get the most out of Academic Medicine’s review process and provide the greatest benefit to the journal’s readers. Table 1, including the link to the journal’s “Complete Instructions for Authors,” can be useful to authors in choosing which type of manuscript (e.g., Article) their program evaluation should be part of.
Identify and describe the problem or gap that the program was designed to address. The problem might be circumscribed and straightforward (e.g., the need for an online curriculum addressing radiation safety), large but focused (insufficient clinician education about prescribing opiates), or complex (underutilization of palliative care, or geographic maldistribution of physicians). Note for whom this problem is relevant: Who are the stakeholders (e.g., learners, funders, potential beneficiaries)? Is the problem limited to the local institution or broadly applicable?
Describe previous approaches to the problem through a review of relevant literature. A thorough problem description should summarize prior published experience with successful or unsuccessful approaches to the problem in order to demonstrate the current approach’s novelty and relationship to previous work.
Identify the population for whom the program was intended, i.e., the participants. Most programs are intended for a particular population and may not be successful if applied to a different population. For example, a program meant for medical students might not work well for practicing physicians. It is important to describe the population and why it was chosen. Manuscripts with more rigorous methodologies will describe how the findings may extend beyond the current population being studied.
Describe the program. This means describing the conceptual framework that underlies the program, the actual components of the program, and the intended outcomes of the program. For example, Nothnagle et al9 provide a particularly clear program description of a project to enhance self-directed learning skills in family medicine residents from concept to implementation. Particularly for Research Reports, it is important to explain the supporting theory and/or conceptual framework. In terms of the actual components of the program, the authors should define the target participants and provide a clear and appropriately detailed outline of the program’s scope, architecture, and implementation. Optimally, the manuscript should include enough detail so that its program could be replicated by others. It should also include information that could have affected the implementation or outcome (e.g., the proximity of a medical school to a law school for a medical–legal collaboration project). Explaining the rationale for the various program components helps readers understand the design process and consider how they might adapt the program for application in other settings. Finally, clarifying the desired outcomes provides the basis for evaluating whether the program successfully met its goals. Durning et al10 suggest that in health professions education, the program evaluation should begin with the definition of success, so that it is clear whether the program has been successful or not. Musick11 suggests that the evaluation should begin with the question of why the evaluation is being undertaken and for whom.
Discuss the protection of subjects. Did the program and evaluation go through an IRB process? What were the results? For Academic Medicine, if the authors are collecting data about people (patients, students, faculty), the authors should either obtain IRB review or state why and by whom this requirement was waived.
Describe the evaluation. As appropriate to the type of manuscript (e.g., Research Report), describe the type of evaluation (formative or summative; process and outcome measures), evaluation design (randomized, case study, quasi-experimental), and the method of evaluation (quantitative or qualitative), as well as the evaluation questions, outcome measures, data, and analytic approaches. If tools were used to collect data about the program, describe those tools and their validity. Did the authors evaluate efficacy (e.g., the program was implemented in a controlled environment) or effectiveness (it was implemented in a real-world setting)? Authors should also describe the evaluators and their connection to the program.
Describe the results of the program that answer the research questions. Present the high points of the results, and, when necessary, use tables and figures to supplement the narrative description. Results should show whether the goals were met compared with a baseline and should include process and outcome measures, as applicable. We recommend including findings and observations that might influence feasibility of implementation in other settings, such as cost and specific resource requirements.
Build on the existing literature. The manuscript should place the program evaluation in the context of the existing literature. Comment on unanswered questions, unanticipated consequences, and how the evaluation was used to improve the program. Discuss the program’s generalizability to other institutions, specialties, settings, or even to other challenges. Limitations and potential sources of bias should be addressed, and any conflicts of interest must be disclosed.
Many Roads to Success
The development and evaluation of new programs represent an important segment of health professions scholarship and express the creativity of our health professions community. Fortunately, there are several manuscript formats that can be utilized when describing these programs and their evaluations. Submitting manuscripts about innovative programs and evaluations in a thorough and consistent manner will help to place worthy contributions into readers’ hands and influence the spread of successful programs. We encourage our community to provide feedback to Academic Medicine concerning the guidelines for the presentation of program evaluation scholarship presented above so that the journal can continue to be the best venue for sharing this important work.
Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the Department of Defense, or the U.S. Government.
David P. Sklar, MD
Debra F. Weinstein, MD
Jan D. Carline, PhD
Steven J. Durning, MD, PhD
Deputy editor for research
1. Kanter SL. Toward better descriptions of innovations. Acad Med. 2008;83:703–704.
2. Accreditation Council for Graduate Medical Education. Glossary of terms. https://www.acgme.org/Portals/0/PDFs/ab_ACGMEglossary.pdf. Published July 1, 2013. Accessed June 28, 2017.
3. Hawkins R, Durning SJ. Program evaluation. In: Holmboe E, Durning SJ, Hawkins R, eds. Practical Guide to the Evaluation of Clinical Competence. 2nd ed. Amsterdam, the Netherlands: Elsevier; 2017:303–330.
4. Helitzer D, Willging C, Hathorn G, Benally J. Using logic models in a community-based agricultural injury prevention project. Public Health Rep. 2009;124(suppl 1):63–73.
5. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE guide no. 67. Med Teach. 2012;34:e288–e299.
6. Englander R, Flynn T, Call S, et al. Toward defining the foundation of the MD degree: Core entrustable professional activities for entering residency. Acad Med. 2016;91:1352–1358.
7. Crabtree EA, Brennan E, Davis A, Squires JE. Connecting education to quality: Engaging medical students in the development of evidence-based clinical decision support tools. Acad Med. 2017;92:83–86.
8. Cedfeldt AS, Bower E, Flores C, Brunett P, Choi D, Girard DE. Promoting resident wellness: Evaluation of a time-off policy to increase residents’ utilization of health care services. Acad Med. 2015;90:678–683.
9. Nothnagle M, Goldman R, Quirk M, Reis S. Promoting self-directed learning skills in residency: A case study in program development. Acad Med. 2010;85:1874–1879.
10. Durning SJ, Hemmer P, Pangaro LN. The structure of program evaluation: An approach for evaluating a course, clerkship, or components of a residency or fellowship training program. Teach Learn Med. 2007;19:308–318.
11. Musick DW. A conceptual model for program evaluation in graduate medical education. Acad Med. 2006;81:759–765.