Research paradigms and theoretical perspectives of researchers influence the research process—even research questions posited by researchers are shaped by how they view the world. Using the accepted Research in Medical Education (RIME) articles as exemplars, we explore the following questions: What are the philosophical underpinnings of the researchers, and are they made visible in their publications? What research paradigms potentially guide their choice of methods? We then discuss how the results from research can influence actions of other medical educators. We highlight how research design encompasses paradigms, theory, methodology, and methods. Before proceeding further, we provide a brief primer on the key concepts that we used while preparing this summary. More detailed resources1–6 related to the primer are provided in the bibliography for those interested. We encourage readers to consider the applicability of these concepts and theories to guide their own practice and research.
Primer: Researchers follow various philosophical frameworks or paradigms that impact their research activities. Paradigms can be viewed as a set of beliefs or worldviews about the nature of the world and the relationship of the world to individuals or objects, or simply as epistemological stances.7 These paradigms have evolved over time and continue to do so. Researchers may use different labels for the paradigms, with some overlap; therefore, the paradigms should be regarded as loosely bound, related schools of thought rather than rigid frameworks.3 The best-known paradigms include positivism/postpositivism, postmodernism/poststructuralism, critical theory/ideological paradigm, constructivism, and pragmatism.3,8 It is not the intent of the authors to provide an encompassing review of paradigms; rather, we aim to provide easy-to-understand descriptions for readers as a primer to our subsequent conversation.
Positivism is historically the best-known paradigm.3 Researchers following this paradigm believe that there is a single truth to be discovered and that reality is observable, is measurable, and can be discovered in cause-and-effect studies.3,9 The idea of a fixed, measurable reality received much criticism, with researchers conceding that measurement is fallible, researchers have biases, and complete objectivity is not possible.3 This led to the postpositivist paradigm, with researchers striving for objectivity but understanding that complete neutrality is not obtainable in any absolute sense.9 Postpositivists understand that “truth” is an approximation or “what we know about reality is true for now—until more studies find a better explanation, a more accurate measure.”9,10 Postmodernism posits that the reality of a phenomenon is subjectively relative to those who experience it.11 This paradigm is also variably referred to as poststructural or postcolonial and offers critiques of modern society.3 Postmodern researchers aim to deconstruct existing narratives and produce reconceptualized descriptions of phenomena.12 Critical theory/ideological paradigm aims to address perceived social injustices through research and advocating for policy changes, acknowledging that reality is shaped by social, political, cultural, economic, ethnic, and gender values crystallized over time.7 The researcher in this case (as opposed to postmodernism) has a proactive stance, with values being central to the task, purpose, and methods of research.13 Constructivism recognizes that reality is constructed by those who experience it and is relative.14 Knowledge consists of the constructions made through our interactions, blurring the line between researchers and participants and therefore challenging objectivity.
Knowledge is cocreated by researchers and participants and is time dependent, individual dependent, and context dependent.9 Pragmatism is often regarded as an “alternate paradigm” that sidesteps issues of truth and reality, freeing the researcher from having to choose among the range of paradigms from positivism/postpositivism to constructivism.15 These researchers focus on the purpose of research while appreciating social, historical, and political context—describing truth as what works at the time.16
Methodology is the description and justification of methods, not the methods themselves. Methods are the actual research actions or practical activities of research.1 An astute, reflexive researcher will pay careful attention to describing their paradigm in relation to the research, making explicit the philosophy informing the work. The role and place of values in the research process, specifically the influence of values on the relationship between paradigm, methodology, and methods, is known as axiology—that is, the research paradigm is axiological in that the values of the researcher inform the choice of methods. The word praxeology stems from praxis—action. Praxeology is the branch of science that studies the effects of human actions, or that makes meaning of and interprets actions.4,5 It specifically answers the question of what research means for making education work, and work better, in the everyday practice of teaching.4
How Are Research Paradigms Enacted in RIME Publications?
We aim to go beyond providing the readers with a general summary of the articles by exploring how paradigms of each research team influenced the methodology. Using “AM Last Page: A Guide to Research Paradigms Relevant to Medical Education”17 as our guiding framework, we reviewed each accepted RIME paper to consider how authors’ research paradigm might be made visible and informed choices related to methods used—axiology (Table 1). We then comment on the impact of the research in the everyday practice of teaching—praxeology. We recognize that without explicit statements from the authors, the categorizations below represent our own interpretations, and the authors or other readers may disagree. We first consider the research papers we felt represented a positivist/postpositivist paradigm, followed by papers with critical theory, constructivist, and pragmatist paradigms.
Positivist/postpositivist paradigm research
Knowledge in this paradigm is gained by hypothesis testing. Kulasegaram et al18 articulate a clear hypothesis that “the sequence of discovery, followed by direct instruction would benefit students’ transfer of skills, but not their retention or immediate post-test performance.” The study design is an elegant randomized controlled trial comparing learner suturing skills performance in a simulated setting before formal instruction (Do then See) versus the more typical sequence of formal instruction followed by practice (See then Do). The researchers gathered data related to self-efficacy and student performance, scored on a five-point global rating scale by a blinded rater. The researchers’ postpositivist paradigm is reflected in the design of the experiment, with its focus on the measurement of skill acquisition. True to postpositivism, the authors acknowledge that their outcomes cannot be completely objective and were impacted by the use of a single rater, the context, and the time allocated for the suturing task. Additionally, they note that they could have chosen to study different outcomes—specifically, behavioral processes, which may then have been best studied using constructivist paradigms. Regarding axiology, for the purpose of their study the postpositivist paradigm is well aligned with the research design and provides useful information. The authors show that students who are first allowed to explore and struggle with the task on their own and then receive instruction perform better on a later transfer task compared with those who are instructed first and then allowed to explore the task through independent practice. The results of this study can and should impact medical educators involved in simulation, as they challenge traditional teaching involving direct instruction followed by completion of tasks. A thoughtful combination of guided self-regulated learning and direct instruction can optimize such learning opportunities.
The objective of positivist/postpositivist research (quantitative or qualitative) is to test hypotheses and to establish cause-and-effect relationships between variables.19 The studies by Feldman et al20 and Onishi et al21 both investigate the effect of testing on education. The postpositivist paradigm was well suited to answer Feldman and colleagues’ research question regarding the use of a multiple-choice-question-based test-enhanced learning (TEL) intervention on knowledge, which incorporated both pre- and posttesting at a national medical conference. Undertaking a randomized controlled trial, they found that repeated retrieval of facts improved retention of declarative knowledge over time in a continuing medical education (CME) setting. Continuing professional development (CPD) is essential in ensuring that health care professionals remain up-to-date with the latest changes in their fields of practice. Health care professionals attend CME courses and are often required to retake licensing exams to demonstrate knowledge and proficiency. What this study adds to the existing literature is that cognitive tests promoting retrieval practice are just as valuable for CPD as they are in undergraduate or postgraduate education. CME-granting agencies should consider repeated TEL to help attendees solidify their knowledge.
The focus of Onishi and colleagues’ research is the impact of board certification decisions based on whether trainees must pass all subcomponents to pass the entire assessment (i.e., noncompensatory scoring) or whether performance on one subcomponent can compensate for performance on other sections of the assessment (compensatory scoring). Grounded in the postpositivist paradigm and using psychometric theory, the authors examined a national database in Japan for internal structure and consequential validity evidence of composite scores and composite decisions related to compensatory and noncompensatory scoring. They demonstrate that decisions about compensatory and noncompensatory scoring of certification assessments have dramatic effects on reliability. Specifically, noncompensatory scoring, which emerges from a determination that a certain element absolutely must be passed in the name of competence and safety, often will significantly reduce the overall reliability of the assessment. Although noncompensatory scoring does have benefits in ensuring that learners are fully competent in specific content areas with implications for patient safety, there is a need to consider psychometric consequences, including reduced overall decision reliability. While the axiology of the study (i.e., the postpositivist paradigm lens) and its methods are well aligned, the authors do acknowledge that the setting, context, specialty, learner sample, and values of the examiners could all influence the study. The results of this study are particularly important for educators involved in the planning of high-stakes exams at their own institutions or at national levels—for instance, the United States Medical Licensing Examination (USMLE) or licensing examinations.
Though the studies by Feldman et al and Onishi et al regarding the measurement of knowledge approach education from a postpositivist perspective, they both raise questions for future research that are best suited to a constructivist approach. Although knowledge retention is important, it is equally important to ensure that CPD improves patient care, or what the physician “does” at the bedside. Constructivist methods may be the best approach to answering questions about real-life performance, about the value judgments made around what is tested and assessed, and about the power structures, belief systems, and cultural biases that value certain facts and performances over others. Constructivist studies may provide a different perspective from, or could add to, the results of the postpositivist research.
Foster et al22 and Park et al23 also use positivist and postpositivist paradigms to study specific workplace tasks and performance. Foster et al view knowledge as objective—that is, a fixed reality, which is measurable. They used the USMLE test setting as an opportunity to survey medical students regarding the use of electronic health records (EHRs). Data revealed that while the percentage of clerkships nationwide using EHRs has increased over a five-year period, the percentage of students entering notes into those records has remained stable, and the percentage of students entering orders has actually decreased. This study falls mainly in the positivist paradigm as they focus on prediction and aim to produce generalizable data, though the authors mention that context (availability of EHRs) can impact the results. Their work has implications for two ongoing conversations, one regarding the new Medicare guideline that allows and encourages documentation by medical students in patient charts, and another regarding the Association of American Medical Colleges’ core entrustable professional activities document, which recommends proficiency in writing patient orders and progress notes prior to entering residency. Medical educators involved in curricular planning must ensure that students have adequate opportunity to practice these tasks and receive feedback.
Park et al strive to obtain objective knowledge, acknowledging that complete objectivity is not achievable. They demonstrate that when measures of workload are included in the statistical analysis of competency ratings for pediatric interns, the reliability and precision of assessment scores increase substantially. Consistent with a postpositivist paradigm, the study uses measurement to clarify and give insights into complex phenomena. The study highlights the importance of identifying confounders to workplace-based assessment (WBA), such as workload of trainees. The Accreditation Council for Graduate Medical Education now mandates the use of milestones reporting, for which WBA is essential. This paper provides food for thought for educators regarding the accuracy of WBA without taking into account other complexities of the workplace including trainee workload.
Burkhardt et al24 generate a hypothesis about the benefits of using a predictive enrollment management (EM) model in eliminating subjectivity of judgments in the medical school admissions process, where educational leaders such as medical school admissions officers make predictions about who will attend their school. They acknowledge that their results are deductive and that outcomes may not be absolutely objective. The authors demonstrate that the use of a postpositivist tool can improve the process by eliminating subjective judgments. EM models can provide admissions offices with information about the likely behavior of applicants, which can be valuable particularly in the recruitment of underrepresented minorities and women and in balancing in-state versus out-of-state applicants. The authors show noninferiority, and in some aspects superiority, of their model compared with human judgment. This study is an example of how deliberate steps to improve diversity can be helpful.
Another positivist study in this year’s RIME selection is the scoping review by Aakre et al.25 The researchers maintain a neutral, objective, independent, and value-free stance to study quantitative and qualitative research related to electronic knowledge and point-of-care education resources. UpToDate was the most frequently cited electronic resource. A paucity of research was noted regarding the educational or clinical impact of knowledge resources. Given the ever-accelerating growth of medical knowledge, and the consequent need for effective knowledge synthesis and translation to practice, further research in this field is important, particularly in ensuring that resources have an impact on knowledge and at the point of care.
Critical theory and constructivist paradigm research
The studies by Gonzalez et al26 and Sukhera et al27 seek to understand the experiences of educators and learners engaged in implicit bias training. The researchers have a constructivist paradigm and use grounded theory. True to constructivism, the authors believe that multiple truths are constructed between individuals, and the results of the studies showcase how the perception of reality is socially created. Both studies highlight the importance of a constant process of reflection and self-improvement. Sukhera et al discuss the tensions emerging from revealing bias and how learners and teachers navigate these tensions, while the work by Gonzalez et al demonstrates the unpreparedness of faculty to facilitate conversations related to bias. The results can be used for faculty development related to bias instruction and to allay anxieties when participants discover their own biases. The American College of Physicians recently released a position statement advocating for the provision of regular and recurring implicit bias training by all organizations that employ physicians.28 The results of the research by Gonzalez et al and Sukhera et al will be helpful for educators involved in curriculum design, providing important tips to inform the process.
In a critical synthesis review, Wisener and Eva29 examine the literature to explore incentives for teaching. Critical synthesis revolves around two main types of methodology: criticism and synthesis. Although the authors do not explicitly state their paradigm, a review of the article reveals that they take a critical stance, acknowledging that multiple truths exist. The authors provide a synthesis of the literature—not just from medical education but also from the psychology, behavioral, and organizational literatures—and then go a step further to critique it. They list self-reported incentives by faculty and, in their critique, discuss how faculty neglect to raise concerns about the financial disincentives associated with teaching (i.e., money that is given up by not being in clinic). Additionally, they point out that the influence of an incentive depends on factors shaping different individuals’ motivation and that the type of incentive offered has to be carefully thought out. The critical synthesis highlights that the field needs greater clarity regarding how, when, and why incentives operate within the many contexts in which medical educators work.
Pragmatism paradigm research
Gowda et al30 explore the effects of a course on observation through examining art, and its influence on reflective ability, tolerance for uncertainty, and awareness of personal bias. The authors do not explicitly state their paradigm but do acknowledge the need for a diversity of methods to produce rigor, adopting a real-world stance, i.e., a “horses for courses [whatever works] approach to the question, and clarifying the mix.”16 The authors use a mixed-methods design, drawing on previously established rating scales for reflective ability as well as focus groups to explore the effects of the course qualitatively. They found improvement in the reflective ability of students participating in the course and an increased awareness of the role of subjectivity in observing and interpreting the world. This study is important for all educators, but particularly those involved in teaching clinical skills. Careful observation should not be presumed to be automatically present in students and should be considered a learned skill. Additionally, the capacity to reflect on the uncertainty of observations can help students deal with ambiguity. The study is also valuable because it provides evidence to support the integration of arts and humanities in the curriculum.
In this year’s RIME articles, the authors have either clearly articulated conceptual frameworks (Burkhardt et al,24 Gonzalez et al,26 Kulasegaram et al18) or stated on whose work they are building (Aakre et al,25 Feldman et al,20 Foster et al,22 Gowda et al,30 Onishi et al,21 Park et al,23 Sukhera et al,27 Wisener and Eva29). However, only two papers clearly articulated a research paradigm (i.e., Gonzalez et al and Sukhera et al); these authors state that they approached their research questions from a constructivist perspective. It is possible that qualitative researchers feel they need to state their paradigm clearly to justify their methodology. The papers that we felt represented positivist, postpositivist, and pragmatism paradigms did not explicitly state their paradigmatic position. It is interesting to note that the majority of papers belonged to the positivist/postpositivist paradigms. We would like to point out that the research paradigms attributed to the research papers (other than Gonzalez et al and Sukhera et al) are our own interpretations, based on the information provided in each paper. Without clear statements from the authors, we had difficulty making these interpretations, and it is possible that authors may disagree with them.
There is little available in the medical education literature about research papers articulating research paradigms. However, a review of experimental studies in medical education revealed that less than half the articles examined contained an explicit statement of the conceptual framework used, leading to calls for medical education researchers to clearly state the conceptual framework they have used in their research.31,32 We are unable to comment on the number of medical education research papers overall that contain explicit statements regarding paradigmatic stances, but we encourage all medical education researchers to clearly state not just their conceptual frameworks but also their research paradigm. Explicit statements of paradigm by postpositivist authors, for example, might influence the discourse around knowledge and the limitations of any one way of knowing. Researchers should also consider adding a statement of how they maintained axiological integrity and discuss the praxeology of their research. Reviewers, editors, and readers need to evaluate the rigor of research, and clear statements by authors should be the norm. We acknowledge that it can be challenging to “intentionally know what we know,” and to articulate and label may at times seem impossible, but knowing the limits of our paradigm is as important as articulating it.33
Good-quality research attends to paradigm, methodology, and methods and demonstrates internal consistency among them.1 This axiological integrity then has the ability to retain values in transferring, translating, or synthesizing research evidence.34 We recommend that medical education researchers consider the axiological integrity of their research. Researchers can consider maintaining a reflexive journal to serve as a repository for their memories, reflections, and perspectives over time. The space that a journal provides for metacognitive reflection on the research process can help in the development of reflexivity.35
In conclusion, researchers often approach the questions that they investigate from a particular research paradigm. This stance determines the types of questions that can be asked and the methods that are chosen. Although paradigmatic clarity is important, it is important to recognize that no one view or approach can describe all aspects of the complex phenomena of human learning. Each approach has its strengths and limitations. Many of the issues that are most important in education can be explored through multiple lenses. The studies from this year’s RIME program demonstrate a diversity of paradigms underlying the questions that researchers address—by considering the diversity of lenses, the limitations of one’s own approach and the possibilities for future steps become clearer. This realization brings humility to our research and opens us up to greater possibilities.
The authors would like to thank the following individuals for their comments and input: members of the 2018 Research in Medical Education Program Planning Committee—Reena Karani, Bridget O’Brien, Karen Hughes Miller, Tanya Horsley, Win May, Jeanne Farman, Yoon Soo Park, and Lara Varpio; and Elliot P. Douglas, professor of environmental engineering sciences, Engineering School of Sustainable Infrastructure and Environment, associate director for research, Institute for Excellence in Engineering Education, and Distinguished Teaching Scholar, University of Florida.
1. Carter SM, Little M. Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qual Health Res. 2007;17:1316–1328.
2. Ponterotto JG. Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. J Couns Psychol. 2005;52(2):126–136.
3. Glesne C. Becoming Qualitative Researchers: An Introduction. 5th ed. Boston, MA: Pearson; 2015.
4. Biesta G. On the two cultures of educational research, and how we might move ahead: Reconsidering the ontology, axiology and praxeology of education. Eur Educ Res J. 2015;14(1):11–22.
6. Bunniss S, Kelly DR. Research paradigms in medical education research. Med Educ. 2010;44:358–366.
7. Lincoln YS, Lynham SA, Guba EG. Paradigmatic controversies, contradictions, and emerging confluences, revisited. In: Denzin NK, Lincoln YS, eds. The SAGE Handbook of Qualitative Research. Vol 4. Washington, DC: Sage; 2011:97–128.
8. Sommer Harrits G. More than method? A discussion of paradigm differences within mixed methods research. J Mix Methods Res. 2011;5(2):150–166.
10. Godfrey-Smith P. Theory and Reality: An Introduction to the Philosophy of Science. Chicago, IL: University of Chicago Press; 2003.
11. Birks M. Practical philosophy. In: Mills J, Birks M, eds. Qualitative Methodology: A Practical Guide. Cornwall, UK: Sage; 2014:17–29.
12. Koro-Ljungberg M, Douglas EP. State of qualitative research in engineering education: Meta-analysis of JEE articles, 2005–2006. J Eng Educ. 2008;97(2):163–175.
13. Kincheloe JL, McLaren P. Rethinking critical theory and qualitative research. In: Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage; 2000:279–313.
14. Schwandt TA. Three epistemological stances for qualitative inquiry: Interpretivism, hermeneutics, and social constructionism. In: Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage; 2000:189–213.
15. Feilzer MY. Doing mixed methods research pragmatically: Implications for the rediscovery of pragmatism as a research paradigm. J Mix Methods Res. 2010;4(1):6–16.
16. Maudsley G. Mixing it but not mixed-up: Mixed methods research in medical education (a critical narrative review). Med Teach. 2011;33:e92–e104.
17. Bergman E, de Feijter J, Frambach J, et al. AM last page: A guide to research paradigms relevant to medical education. Acad Med. 2012;87:545.
18. Kulasegaram K, Axelrod D, Ringsted C, Brydges R. Do one then see one: Sequencing discovery learning and direct instruction for simulation-based technical skills training. Acad Med. 2018;93(11 suppl):S37–S44.
19. Creswell JW. Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research. 3rd ed. Upper Saddle River, NJ: Prentice Hall; 2007.
20. Feldman M, Fernando O, Wan M, Martimianakis MA, Kulasegaram K. Testing test-enhanced continuing medical education: A randomized controlled trial. Acad Med. 2018;93(11 suppl):S30–S36.
21. Onishi H, Park YS, Takayanagi R, Fujinuma Y. Combining scores based on compensatory and noncompensatory scoring rules to assess resident readiness for unsupervised practices: Implications from a national primary care certification examination in Japan. Acad Med. 2018;93(11 suppl):S45–S51.
22. Foster LM, Cuddy MM, Swanson DB, Holtzman KZ, Hammoud MM, Wallach PM. Medical student use of electronic and paper health records during inpatient clinical clerkships: Results of a national longitudinal study. Acad Med. 2018;93(11 suppl):S14–S20.
23. Park YS, Hicks PJ, Carraccio C, Margolis M, Schwartz A. Does incorporating a measure of clinical workload improve workplace-based assessment scores? Insights for measurement precision and longitudinal score growth. Acad Med. 2018;93(11 suppl):S21–S29.
24. Burkhardt JC, DesJardins SL, Teener CA, Gay SE, Santen SA. Predicting medical school enrollment behavior: Comparing an enrollment management model to expert human judgement. Acad Med. 2018;93(11 suppl):S68–S73.
25. Aakre CA, Pencille LJ, Sorensen KJ, et al. Electronic knowledge resources and point-of-care learning: A scoping review. Acad Med. 2018;93(11 suppl):S60–S67.
26. Gonzalez CM, Garba RJ, Liguori A, Marantz PR, McKee MD, Lypson ML. How to make or break implicit bias instruction: Implications for curriculum development. Acad Med. 2018;93(11 suppl):S74–S81.
27. Sukhera J, Wodzinski M, Teunissen PW, Lingard L, Watling C. Striving while accepting: Exploring the relationship between identity and implicit bias recognition and management. Acad Med. 2018;93(11 suppl):S82–S88.
28. Butkus R, Serchen J, Moyer DV, Bornstein SS, Hingle ST; Health and Public Policy Committee of the American College of Physicians. Achieving gender equity in physician compensation and career advancement: A position paper of the American College of Physicians. Ann Intern Med. 2018;168:721–723.
29. Wisener KM, Eva KW. Incentivizing medical teachers: Exploring the role of incentives in influencing motivations. Acad Med. 2018;93(11 suppl):S52–S59.
30. Gowda D, Dubroff R, Willieme A, Swan-Sein A, Capello C. Art as sanctuary: A four-year mixed-methods evaluation of a visual art course addressing uncertainty through reflection. Acad Med. 2018;93(11 suppl):S8–S13.
31. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43:312–319.
32. Bordage G. Reasons reviewers reject and accept manuscripts: The strengths and weaknesses in medical education reports. Acad Med. 2001;76:889–896.
33. Koro-Ljungberg M, Yendol-Hoppey D, Smith JJ, Hayes SB. (E)pistemological awareness, instantiation of methods, and uninformed methodological ambiguity in qualitative research projects. Educ Res. 2009;38(9):687–699.
34. Kelly M, Ellaway RH, Reid H, et al. Considering axiological integrity: A methodological analysis of qualitative evidence syntheses, and its implications for health professions education [published online ahead of print May 14, 2018]. Adv Health Sci Educ. doi:10.1007/s10459-018-9829-y
35. Gerstl-Pepin C, Patrizio K. Learning from Dumbledore’s pensieve: Metaphor as an aid in teaching reflexivity in qualitative research. Qual Res. 2009;9(3):299–308.