2006 RIME Wrap-Up Addresses

Moving the Field Forward: Going Beyond Quantitative–Qualitative*

Bordage, Georges

doi: 10.1097/ACM.0b013e31813e661d

The RIME conference is a major venue for presenting research in medical education. The central issue for such a conference is how it can move the field forward, from both a theoretical and a practical perspective. The issue is often framed in terms of quantitative versus qualitative approaches—different ideologies, different forms of data, different data-collection methods, different data-analysis methods, and so on. Indeed, this wrap-up session was framed as such. However, this oft-repeated debate is not productive, because each approach is useful in its own right, and the two are often most productive when used in complementary fashion. Instead, I propose that a more productive way to move the field forward is to devote greater effort to theory building, theory testing, and programmatic research.

Quantitative and qualitative approaches to research are often portrayed as a dichotomy or an opposition. Indeed, there are some contrasts. In quantitative approaches, the data are typically numbers and charts, compared with words and thick descriptions in qualitative approaches—thus the quantity versus quality characterization. Researchers use different designs, different data-validation procedures, and different data-analysis procedures: typically, experiments versus grounded theory, interrater and internal-consistency reliabilities versus triangulation and negative case analysis, and hypothesis testing with statistical analysis versus content analysis and theme identification. There may also be differences in paradigms and ideologies—deductive, positivist hypothesis testing versus inductive, constructivist theory building. The debate is often joined by the quantitative side rejecting “the detailed descriptions from select individuals” in qualitative studies and by the qualitative side discounting “the number crunching that obscures the social reality” in quantitative studies (from Ercikan and Roth1).

In everyday life, however, and in the domain of research, the world has both quantitative and qualitative characteristics. In physics, for example, a small quantitative change in temperature brings about a qualitative change from solid to liquid.1 In the classroom, there are both the frequency of student–student and student–teacher interactions and the content of those interactions. Likewise, certain questions require different, complementary research approaches. “What is the effect of two instructional strategies on standardized performance?” calls for a randomized trial with the numbers that result from standardized assessment. “How do students who are taught using different instructional strategies understand the instructional task?” calls for a research method that elicits their words, such as an interview, a focus group, or a survey with open-ended questions. To perpetuate the debate and the perception of opposition between quantitative and qualitative perspectives is unproductive. A more productive endeavor is to focus on the research questions asked and on building programmatic research that fosters both theory testing and theory building, using quantitative and qualitative approaches as appropriate. Doing so yields a greater depth of understanding of significant questions and thereby moves the field forward.

The research question plays a key role in the research enterprise, as indicated recently by Schuwirth and van der Vleuten,2 who argue that “it’s not the method that determines whether a study is scientifically rigorous, it’s the strength of the research question.” The strength of the question is further enhanced by the theory it draws from, or tries to test, and by the researcher’s purpose in formulating the question. When Lemieux and I3 did research on the nature of the relationships of medical knowledge in memory, we used structural semantics theory to frame our questions. The theory provided ready-made variables to investigate. In that instance, the relationships were portrayed as dichotomous abstractions, called semantic axes or qualifiers, such as acute–chronic associated with the onset of symptoms or mono–poly associated with arthritis. The semantic qualifiers form networks of coherent relationships around which knowledge is organized. For example, an acute–recurrent–mono arthritis is associated with gout or possibly septic arthritis, whereas a chronic–poly arthritis is associated with rheumatoid arthritis or osteoarthritis. The theory provides an articulated view of how knowledge might be organized. Conceptual frameworks and theories are typically expressed in thick descriptions—that is, qualitative text that can be used to generate questions and test hypotheses. Theories by nature provide hypotheses to be tested, such as the positive association that might exist between the use of semantic qualifiers and diagnostic accuracy and comprehension. Using various observational and experimental methods, we tested semantic theory in the context of medical knowledge, sometimes with positive results3,4 and at other times not,5 thus illustrating the dynamic nature of systematic scientific inquiry.

One of the results of this dynamic process is the evolving nature of the theory itself, the way it portrays the world. During our work on diagnostic reasoning, we further extended structural semantic theory by categorizing clinicians’ discourses along two organizational dimensions: a semantic dimension (conveying poor versus rich meaning) and a syntactic dimension (depicting limited versus extended speech). This categorization yielded four discourse types: reduced discourses (limited speech, poor meaning), dispersed discourses (extended speech, poor meaning), elaborated discourses (extended speech, rich meaning), and compiled discourses (limited speech, rich meaning).6 It was not a matter of either theory building or hypothesis testing; it was both. Taken together, they helped us gain a greater understanding of the nature and dynamics of discourse structures as they might relate to knowledge relationships in the memory of medical students and experienced clinicians. In summary, theory building stemmed from thick descriptions, which yielded constructs and variables that were used to frame the problem and to generate hypotheses—conjectures, to use Norman’s expression7—that were and continue to be tested, sometimes using qualitative approaches, other times using quantitative approaches, as appropriate.

An example of theory building was provided in the RIME proceedings by Varpio and colleagues8 in their study of physicians’ and nurses’ transformation of patient information from electronic patient records. Using constructivist grounded theory and visual rhetoric analyses, they observed that the physicians and nurses worked off the record—that is, “not only interacting with patient data on paper printouts rather than in a virtual environment, but also reengineering the information on these printouts to reflect their own working preferences and practices.” From an educational perspective, they speculate that “these transformation moments harbor critical lessons for novices regarding how to value some kinds of patient information over others, how to prioritize actions, how to organize clinical work, and how to negotiate collaborative practices.” They conclude that their results suggest that “there is a function in transformation ‘dysfunction’ that should not be ignored and that might be productively cultivated for novice learning.” Their conclusion can now serve as a new hypothesis to be tested—for example, by manipulating either the visual displays of the electronic patient records or the paper review procedures and observing their effects, whether quantitative or qualitative, on novices’ learning. To use a Shulman analogy, moving the field forward is a commute between theory building and theory testing, not simply one or the other. It involves using theories to frame and generate our questions and then using the resulting scholarship to support or modify those theories. Again, quantitative or qualitative approaches, or both, can be used productively, as appropriate, to achieve theory building and theory testing.

Bernard of Chartres once observed that “the serious students of ancient literature and philosophy in his own time were able to see farther than the ancients not because of better vision but because of the eminence to which their gigantic predecessors had raised them” (from Stock9). How often do researchers in medical education stand on the shoulders of giants? On conceptual frameworks or theories? Unfortunately, not often enough! Of the 18 RIME 2006 hypothesis-testing studies reported in the proceedings, I found only four (22%) that contained a theoretical or conceptual framework. The remaining 14 reports give the reader the sense of a blind expedition, with no clear statement of what precisely was being pursued or of the best means of getting there. Only two of the six RIME 2006 qualitative studies reported some theory building; the rest tended to focus on practical applications. Regardless of whether the approach is quantitative or qualitative, quality research requires explicit statements about theory building and hypothesis testing, rather than jumping, possibly prematurely, to solutions. The results reported during this RIME conference are comparable with those we found in a systematic review of the reporting of medical education experiments in six journals, in which 45% of the reports had no theoretical or conceptual framework.10

Instead of addressing conceptual and theoretical issues in their discussion sections, the articles in the RIME proceedings said much about biases, such as large but biased samples or small and possibly biased samples, and little about theory. I found that only a quarter of the articles mentioned theory in the discussion. Clear, explicit statements about the specific nature of the theory being built, and acknowledgment of the specific type of evidence gathered concerning the hypotheses tested (i.e., for, against, or inconclusive), are essential for quality reporting that can inform researchers, educators, and policy makers and provide depth of understanding for the field.

Eva and Regehr,11 in their work on self-assessment, provide a good example of how theories evolve. After a careful analysis of the rhetoric about self-assessment and a critical review of the literature, they deconstructed the notion of self-assessment and proposed a reconceptualization of the construct, along with the identification of obsolete, defunct questions that are no longer worth asking because they have already been answered or are misguided.12 Such defunct questions included: How well do practitioners self-assess? “Poorly!” How can we improve self-assessment? “You can’t!” How can we measure self-assessment? “Don’t bother!” On the basis of their critical review of the evidence and a renewed way of thinking about self-assessment, they suggest new questions, such as “What external data would help identify areas requiring updating? How can we collect and deliver these data in a meaningful form? How can we convince people to believe this feedback and incorporate it into self-concept? How can we get people to act on this?” Constructs and theories are not static, inert entities. Instead, they are dynamic entities to be challenged, built on, and refashioned.

The strength and pertinence of the research question also lie in the purpose of the studies undertaken by the researchers. Purposes can be grouped into three main categories1,13: descriptive studies (What did we do? What happened?), effectiveness studies (How well did it work?), and clarification studies (Why and how did it work?). Using this classification, I found that the dominant purpose of the studies reported in the RIME conference proceedings was descriptive (61%), followed by effectiveness (29%) and then clarification (10%). This is analogous to what Schmidt13 reported during the 2005 AMEE conference for studies on problem-based learning (64%, 29%, and 7%, respectively). Depth of understanding, and thus moving the field forward, is most likely to come from clarification studies. This is not to say that descriptive or effectiveness studies are not worthwhile; rather, it is important to achieve a better balance among the three purposes. The 10% of clarification studies pales in comparison with the 61% of descriptive studies. The paucity of clarification studies may be a reflection of the paucity of theoretical and conceptual frameworks. Theories provide the framework and mechanisms for gaining understanding and clarification.

Unfortunately, much of the research in medical education is done in an ad hoc, opportunistic fashion, with researchers going from one topic to the next, often with little conceptual underpinning, as illustrated above, and without critical appraisal of the literature. The lack of a critical or up-to-date review of the literature was among the top 10 reasons for rejecting manuscripts submitted to the RIME conference in the past.14 One way to alleviate this problem is by conducting theory-based, programmatic research rather than isolated, disjointed research projects. Programmatic research allows for the judicious selection of research topics followed by long-term, systematic investigations—not one, but a series of studies that build and test theories from multiple perspectives. A healthy cycle of theory-based programmatic inquiry was exemplified during the conference by the work of Woods et al15 from the McMaster group on the role of basic and clinical science knowledge. From a critical review of the literature and theories, Woods and her colleagues generated hypotheses to be tested, used appropriate methods, obtained results, and generated new hypotheses to be tested further.

The challenge for new and upcoming medical education researchers is to select a research topic of importance and pursue its investigation over time, as exemplified by the McMaster group. As a new investigator in the field, or one seeking direction, ask yourself: What do I want to be known for 5, 10, and 15 years from now? What expertise will I have built? Collaborative work among clinicians and educators can help bring together complementary sets of expertise and modes of inquiry to produce pertinent, high-quality research capable of bearing fruit and moving the field forward; see Albert et al16 regarding collaboration and multiple conceptual frameworks. Programmatic research is one way of enhancing the theoretical foundation of our work, but as Regehr17 indicated in a recent editorial, there are also other mechanisms to consider, such as more venues in journals and conferences for longer, thoughtful presentations of the theoretical and conceptual underpinnings of our work. Such venues can lead researchers to “reflect on the theoretical choices they made along their research pathway, where they were wrong, naïve, or underdeveloped in their thinking, and how their findings have contributed to better understanding of an area and/or evolved more useful theoretical constructs.”17

The main theme of the 2006 Association of American Medical Colleges meeting was pursuing excellence. Pursuing excellence in medical education research calls for (1) explicit statements of theory building and hypothesis testing, be they from quantitative or qualitative studies, (2) asking more why questions, that is, conducting more clarification studies to gain depth of understanding, and (3) engaging in more theory-based, programmatic research that can yield results capable of moving the field forward. As Riehl18 states in comparing medical research and education research: “research is an ongoing conversation and quest, punctuated occasionally by important findings that can and should alter practice, but more often characterized by continuing investigations … [that,] taken cumulatively, can inform the work of practitioners.” Thus, a healthy cycle is created between theory and research and between research and practice, one that can lead to better medical education and, ultimately, better patient care.

Acknowledgments

The author is grateful to Drs. Ilene Harris (University of Illinois at Chicago) and David Cook (Mayo Medical School) for their comments and suggestions during the preparation of the wrap-up session and the writing of this essay.

References

1. Ercikan K, Roth WM. What good is polarizing research into qualitative and quantitative? Educ Res. 2006;35:14–23.
2. Schuwirth LWT, van der Vleuten CPM. Challenges for educationalists. BMJ. 2006;333:544–546.
3. Bordage G, Lemieux M. Semantic structures and diagnostic thinking of experts and novices. Acad Med. 1991;66(10 suppl):S70–S72.
4. Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med. 1994;69:883–885.
5. Nendaz M, Bordage G. Promoting diagnostic problem representation. Med Educ. 2002;36:761–767.
6. Lemieux M, Bordage G. Structuralisme et pédagogie médicale: étude comparative des stratégies cognitives d’apprentis-cliniciens. Semiotic Inq. 1986;6:143–179.
7. Norman GR. Theory-testing research versus theory-based research. Adv Health Sci Educ Theory Pract. 2004;9:175–178.
8. Varpio L, Schryer CF, Lehoux P, Lingard L. Working off the record: physicians’ and nurses’ transformations of electronic patient record-based patient information. Acad Med. 2006;81(10 suppl):S35–S39.
9. Stock B. Antiqui and moderni as “giants” and “dwarfs”: a reflection of popular culture? Mod Philol. 1979;76:370–374.
10. Cook DA, Beckman TJ, Bordage G. Quality of reporting experimental studies in medical education: a systematic review. Med Educ. 2007;41(8).
11. Eva KW, Regehr G. Self-assessment in the health professions: a reformulation and research agenda. Acad Med. 2005;80(10 suppl):S46–S54.
12. Regehr G. Deconstructing self-assessment: implications for a research agenda. Paper presented at: Research in Medical Education Conference, Annual Meeting of the Association of American Medical Colleges; November 2006; Seattle, Wash.
13. Schmidt HG. Influence of research on practices in medical education: the case of problem-based learning. Paper presented at: Annual Meeting of the Association for Medical Education in Europe; September 2005; Amsterdam, the Netherlands.
14. Bordage G. Reasons reviewers reject and accept manuscripts: the strengths and weaknesses in medical education reports. Acad Med. 2001;76:889–896.
15. Woods NN, Neville AJ, Levinson AJ, Howey EHA, Oczkowski WJ, Norman GR. The value of basic science in clinical diagnosis. Acad Med. 2006;81(10 suppl):S124–S127.
16. Albert M, Hodges B, Regehr G. Research in medical education: balancing service and science. Adv Health Sci Educ Theory Pract. 2007;12:103–115.
17. Regehr G. Introducing “I wish I knew then…” Adv Health Sci Educ Theory Pract. 2007;12:1117–1119.
18. Riehl C. Feeling better: a comparison of medical research and education research. Educ Res. 2006;35:24–29.

*Presented in part during the 2006 RIME Wrap-up session.

© 2007 Association of American Medical Colleges