
Research Design

McGaghie, William C.; Bordage, Georges; Crandall, Sonia; Pangaro, Louis



  • The research design is defined and clearly described, and is sufficiently detailed to permit the study to be replicated.
  • The design is appropriate (optimal) for the research question.
  • The design has internal validity; potential confounding variables or biases are addressed.
  • The design has external validity, including subjects, settings, and conditions.
  • The design allows for unexpected outcomes or events to occur.
  • The design and conduct of the study are plausible.


Research design has three key purposes: (1) to provide answers to research questions; (2) to provide a road map for conducting a study using a planned and deliberate approach; and (3) to control or explain quantitative variation, or to organize qualitative observations.1 The design helps the investigator focus on the research question(s) and plan an orderly approach to the collection, analysis, and interpretation of data that address the question.

Research designs have features that range along a continuum from controlled laboratory investigations to observational studies. The continuum is seamless rather than sharply segmented, running from structured and formal to evolving and flexible. A simplistic distinction between quantitative and qualitative inquiry does not work, because research excellence in many areas of inquiry often involves the best of both. The basic issues are: (1) Given a research question, what are the best research design options? (2) Once a design is selected and implemented, how is its use justified in terms of its strengths and limits in the specific research context?

Reviewers should take into account key features of research design when evaluating research manuscripts. The key features vary in different sciences, of course, and reviewers, as experts, will know the ones for their fields. Here the example is from the various social sciences that conduct research into human behavior, including medical education research. The key features for such studies are stated below as a series of five general questions addressing the following topics: appropriateness of the design, internal validity, external validity, unexpected outcomes, and plausibility.

Is the research design appropriate (or as optimal as possible) for the research question? The matter of congruence, or “fit,” is at issue because most research in medical education is descriptive, comparative, or correlational, or addresses new developments (e.g., creation of measurement scales, manipulation of scoring rules, and empirical demonstrations such as concept mapping2,3).

Scholars have presented many different ways of classifying or categorizing research designs. For example, Fraenkel and Wallen4 have recently identified seven general research methods in education: experimental, correlational, causal-comparative, survey, content analysis, qualitative, and historical. Their classification illustrates some of the overlap (sometimes confusion) that can exist among design, data-collection strategies, and data analysis. One could use an experimental design, collect data using an open-ended survey, and analyze the written answers using content analysis. Each method or design category can be subdivided further. Rigorous attention to design details encourages an investigator to focus the research method on the research question, which brings precision and clarity to a study. To illustrate, Fraenkel and Wallen4 break down experimental research into four subcategories: weak experimental designs, true experimental designs, quasi-experimental designs, and factorial designs. Medical education research reports should clearly articulate the link between research question and research design and should embed that description in citations to the methodologic literature to demonstrate awareness of fine points.

Does the research have internal validity (i.e., integrity) to address the question rigorously? This calls for attention to a potentially long list of sources of bias or confounding variables, including selection bias, attrition of subjects or participants, intervention bias, strength of interventions (if any), measurement bias, reactive effects, study management, and many more.

Does the research have external validity? Are its results generalizable to subjects, settings, and conditions beyond the research situation? This is frequently (but not exclusively) a matter of sampling subjects, settings, and conditions as deliberate features of the research design.

Does the research design permit unexpected outcomes or events to occur? Are allowances made for expression of surprise results the investigator did not consider or could not anticipate? Any research design too rigid to accommodate the unexpected may not properly reflect real-world conditions or may stifle the expression of the true phenomenon studied.

Is the research design plausible, given the research question, the intellectual context of the study, and the practical circumstances in which the study is conducted? Common flaws in research design include failure to randomize correctly in a controlled trial, small sample sizes resulting in low statistical power, brief or weak experimental interventions, and missing or inappropriate comparison (control) groups. Signals of research implausibility include an author's failure to describe the research design in detail, failure to acknowledge context effects on research procedures and outcomes, and the presence of features of a study that appear unbelievable, e.g., perfect response rates or flawless data. Often there are tradeoffs in research between theory and pragmatics, precision and richness, elegance and application. Is the research design attentive to such compromises?

Kenneth Hammond explains the bridge between design and conceptual framework, or theory:

Every method, however, implies a methodology, expressed or not; every methodology implies a theory, expressed or not. If one chooses not to examine the methodological base of [one's] work, then one chooses not to examine the theoretical context of that work, and thus becomes an unwitting technician at the mercy of implicit theories.1


1. Hammond KR. Introduction to Brunswikian theory and methods. In: Hammond KR, Wascoe NE (eds). New Directions for Methodology of Social and Behavioral Sciences, No. 3: Realizations of Brunswik's Representative Design. San Francisco, CA: Jossey-Bass, 1980:2.
2. McGaghie WC, McCrimmon DR, Mitchell G, Thompson JA, Ravitch MM. Quantitative concept mapping in pulmonary physiology: comparison of student and faculty knowledge structures. Am J Physiol: Adv Physiol Educ. 2000;23:72–81.
3. West DC, Pomeroy JR, Park JK, Gerstenberger EA, Sandoval J. Critical thinking in graduate medical education: a role for concept mapping assessment? JAMA. 2000;284:1105–10.
4. Fraenkel JR, Wallen NE. How to Design and Evaluate Research in Education. 4th ed. New York: McGraw-Hill, 2000.


Campbell DT, Stanley JC. Experimental and Quasi-experimental Designs for Research. Boston, MA: Houghton Mifflin, 1981.

Cook TD, Campbell DT. Quasi-experimentation: Design and Analysis Issues for Field Settings. Chicago, IL: Rand McNally, 1979.

Fletcher RH, Fletcher SW, Wagner EH. Clinical Epidemiology: The Essentials. 3rd ed. Baltimore, MD: Williams & Wilkins, 1996.

Hennekens CH, Buring JE. Epidemiology in Medicine. Boston, MA: Little, Brown, 1987.

Kazdin AE (ed). Methodological Issues and Strategies in Clinical Research. Washington, DC: American Psychological Association, 1992.

Patton MQ. Qualitative Evaluation and Research Methods. 2nd ed. Newbury Park, CA: Sage, 1990.

Section Description

Review Criteria for Research Manuscripts

Joint Task Force of Academic Medicine and the GEA-RIME Committee

© 2001 Association of American Medical Colleges