Study Design Algorithm

Andrews, Jeff MD, FRCSC1; Likis, Frances E. DrPH, NP, CNM2

Journal of Lower Genital Tract Disease: October 2015 - Volume 19 - Issue 4 - p 364–368
doi: 10.1097/LGT.0000000000000144
Commentary

Objectives To aid authors in correctly naming their study design, to assist readers and reviewers who must decide what the design was for some published studies, and to provide consistency in evaluating the design of published studies, especially for those conducting systematic reviews and evidence synthesis.

Methods An annotated algorithm is used to prompt serial questions and analysis that identify a single study design.

Results The algorithm begins with a research article. Primary clinical research is divided into experimental and observational studies. Key determinants include identifying the study question and the population, intervention, comparison, and outcome. Experimental therapy and prognosis studies are subdivided into 4 design types. Observational therapy and prognosis studies are subdivided into 7 design types. Experimental diagnosis and screening studies are subdivided into 2 types. Observational diagnosis and screening studies are subdivided into 5 types.

Conclusions An annotated algorithm may be used by authors, readers, and reviewers to consistently determine the design of clinical research studies.

An annotated algorithm helps the user determine a study design by answering questions about the research study.

1Editor-in-Chief, Journal of Lower Genital Tract Disease, Bethesda, MD, USA; and 2Editor-in-Chief, Journal of Midwifery & Women's Health; Associate Director, Vanderbilt University Evidence-based Practice Center, Nashville, TN, USA

Reprint requests to: Jeff Andrews, MD, FRCSC, Editor-in-Chief, Journal of Lower Genital Tract Disease, Bethesda, MD, USA. E-mail: jandrews@asccp.org

Disclosure of sources of financial support: No external funding was secured for this study.

Disclosure of conflicts of interest: The authors have no financial relationships or conflicts of interest relevant to this article to disclose.

Authorship Statement: All authors have participated sufficiently in the conception, design, and writing of the manuscript to take public responsibility for all aspects of the work. Each author has reviewed the manuscript and believes it represents valid work. The final manuscript has been reviewed and approved as submitted for publication by all authors.

Clinical research involves the analysis of data or information to answer a discrete clinical question or to support or refute a hypothesis. Planning research should begin with the development of a clinical research question (Table 1). The next step in planning research is developing the study design, which should be clearly set before the research begins. The study design should be described in the Methods section of the publication clearly enough that another researcher could duplicate the study and any reader could discern the methodology. Different study designs have inherent internal and external validity challenges, as well as differing risks of bias.

TABLE 1

Published research frequently does not contain an adequate description of the study design or an accurate classification. Inconsistent, inaccurate, and imprecise use of design taxonomy is a challenge, and such poor reporting results in a lack of clarity. Discrepancies are frequent between the stated intent of the investigator, including design, and the actual study implementation and data analysis. Common discrepancies concern whether there was a comparison (control) group, whether the study was experimental, and whether data collection was prospective or retrospective. Sometimes, mixed methods are used in a single study, or more than one question or hypothesis is being investigated.

Mislabeling of the study design impairs the appropriate indexing and sorting of evidence. This complicates secondary research and research synthesis, in which systematic reviews and meta-analyses require selection of studies, assessment of the risk of bias, analysis and interpretation of study results, and grading of the body of evidence.

There is a clear need for consistent use of terminology and study design labels. Editors encourage authors to describe the study design with a commonly used term and present the key elements of study design. To aid authors in correctly naming their study design, and to aid readers and reviewers who must decide for themselves what the design was for some published studies, an annotated algorithm is presented. If the response to a question in the taxonomy algorithm is unclear, the user is advised to assume that the condition was not met. A full description of the study design types may be found in an online glossary at http://journals.lww.com/jlgtd/Pages/Study-Design-Glossary.aspx.

The annotated algorithm starts with a research study (Fig. 1). Annotations for Figure 1:

  • (1) Question: Did the research investigate human participants' outcomes?
  • (2) Question: Did the investigator/author collect the data/information used in the study? The data source and analysis could be at 1 of 3 levels. If the investigator collected the original data firsthand, there was primary data analysis. If the investigator acquired or reused data/information collected during another original research study, or from a database or registry, for the purpose of reanalyzing the original data with different statistical techniques or answering new questions with old data, there was secondary data analysis. Both primary and secondary data analyses are considered primary clinical research in the algorithm. If the investigator gathered data/information from 2 or more previously published sources, there was clinical research synthesis.
  • (3) Primary clinical research (the focus of this algorithm): The next 2 steps should be to define the clinical question (Table 1), and to determine the population, intervention, comparison, and outcome (PICO/PICOTS) from the Methods and Results (Table 2). These facts will be needed to move through the algorithm.
  • (4) Clinical research synthesis: investigators summarized available published studies in the form of narrative review, systematic review, and/or meta-analysis.
  • (5) Question: Did the investigator intervene by assigning/allocating the intervention/exposure after receiving ethics approval and after each participant/patient consented to the research? For diagnostic and screening studies to be classified as experimental, the result of the index test must have influenced clinical decision making, and clinically important outcomes must have been measured.
  • (6) Experimental study: As the decision to conduct the research study preceded the intervention, all experimental studies are prospective.
  • If the clinical question was about therapy or prognosis (Table 1), go to Figure 2.
  • If the clinical question was about diagnosis or screening (Table 1), go to Figure 3.
  • (7) Observational study: Observational studies may be prospective, retrospective, or cross-sectional. This includes research where there was a description of outcomes in participants, without an intervention or exposure.
  • If the clinical question was about therapy or prognosis or frequency/rates (prevalence, incidence, trend, risk factor, association, causation, and etiology; Table 1), go to Figure 4.
  • If the clinical question was about diagnosis or screening (Table 1), go to Figure 5.
  • Figure 2 identifies the experimental therapy/prognosis study designs. Annotations for Figure 2:
  • (8) Question: Was there a comparison to assess the effect/association of an intervention/exposure and an outcome? This comparison could be between 2 or more groups or within the same individual or group(s). There could be more than one intervention.
  • (9) Prospective case series: all participants received the same intervention/exposure and did not have the same primary outcome(s) measured before and after the intervention/exposure. This is a noncomparative study design.
  • (10) Quasi-experimental study, also known as before-after study: the same outcome was measured before and after the intervention/exposure. Subtypes: (a) if 3 or more measures were taken before and 3 or more measures were taken after, the study was an interrupted time series; (b) if there was only one participant, the research was an N-of-1 study. Even if the order of assignment of interventions was random or the participants received blinded placebo or intervention, this was not a randomized trial.
  • (11) Question: Did the investigator randomly allocate participants to the intervention choices?
  • Figure 4 identifies the observational therapy/prognosis/frequency/rate study designs (prevalence, incidence, trend, risk factor, association, causation, and etiology). Annotations for Figure 4:
  • (12) Question: Was there a comparison to assess the effect/association of an intervention/exposure and an outcome? This could be between 2 or more groups or within the same individual or group(s).
  • (13) Noncomparative observational study: all participants received the same intervention(s)/exposure, and did not have the same primary outcome(s) measured before and after the intervention/exposure. This includes retrospective case series, case report(s), time series, focus group studies, and correlational studies.
  • (14) Question: Was there simultaneous acquisition of both exposure/intervention and outcome data?
  • (15) Question: What was the factor for determining the groups (or the before/after)? Were the groups defined by intervention/exposure or by outcome?
  • (16) Question: Did the intervention/exposure occur after the study was designed and commenced? If necessary, ask the author or check the date on the ethics approval form. If unable to determine, assign to “no”.
  • (17) Controlled before-after study: same outcome measured before and after intervention. Subtypes: (a) if 3 or more measures were taken before and 3 or more measures were taken after, the study was a controlled interrupted time series; (b) if there was only one participant, the study was an N-of-1 study. Even if the order of assignment of interventions was random, or the participant(s) received blinded placebo or intervention, this was not a randomized trial.
  • Figure 3 identifies the experimental diagnostic or screening study designs. Annotation for Figure 3:
  • (18) Question: Were the participants randomized to 2 or more diagnostic/screening tests/regimens and then were matching interventions performed based on the assigned test results?
  • Figure 5 identifies the observational diagnostic (or screening) study designs. Annotations for Figure 5:
  • (19) Question: Was there a comparison between the index test and at least one reference standard?
  • (20) Question: Were comparisons made between different testers, machines, laboratories, or human interpreters?
  • (21) Question: Were there 2 or more groups of participants, identified by one or more characteristics?
  • (22) Question: Were the groups defined by intervention/exposure (a test result) or by outcome (clinical outcome, disease state, or diagnosis)?
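The top-level branching of the algorithm (annotations 1 through 7) can be sketched as a small decision function. This is an illustrative sketch only: the field names and the dict-based interface are hypothetical, and the question set is simplified to the first figure. Per the algorithm's advice, any question whose answer is unclear defaults to "no" (the condition was not met).

```python
# Sketch of the algorithm's top-level branching (annotations 1-7).
# Field names are hypothetical; unanswered questions default to "no",
# mirroring the advice to assume an unclear condition was not met.

def answer(study, question):
    """Return the recorded answer, defaulting to False when unclear."""
    return study.get(question, False)

def classify_top_level(study):
    # (1) Did the research investigate human participants' outcomes?
    if not answer(study, "human_participant_outcomes"):
        return "not clinical research (outside this algorithm)"
    # (2) Data gathered from 2 or more previously published sources?
    if answer(study, "data_from_published_sources"):
        # (4) Narrative review, systematic review, and/or meta-analysis.
        return "clinical research synthesis"
    # (3) Primary clinical research: define the question and PICO, then
    # (5) ask whether the investigator allocated the intervention/exposure.
    experimental = answer(study, "investigator_allocated_intervention")
    diagnostic = answer(study, "question_is_diagnosis_or_screening")
    if experimental:
        # (6) All experimental studies are prospective.
        return ("experimental diagnosis/screening study (Figure 3)"
                if diagnostic
                else "experimental therapy/prognosis study (Figure 2)")
    # (7) Observational: prospective, retrospective, or cross-sectional.
    return ("observational diagnosis/screening study (Figure 5)"
            if diagnostic
            else "observational therapy/prognosis/frequency study (Figure 4)")

# Example: an investigator-randomized trial in human participants.
print(classify_top_level({
    "human_participant_outcomes": True,
    "investigator_allocated_intervention": True,
}))  # -> experimental therapy/prognosis study (Figure 2)
```

The later figures refine each branch further (e.g., annotation 11's randomization question separates randomized trials from other experimental designs); the same default-to-"no" rule applies at every step.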
TABLE 2

Keywords: study design; algorithm; PICO; study question

Copyright © 2015 by the American Society for Colposcopy and Cervical Pathology