Case-control studies are hard to understand. This type of research, which goes backward in time from outcome to exposure, has been termed “research in reverse.”1 For most clinicians, this temporal sequence is counter-intuitive. Alternative terms for this research design are “retrospective” or “trohoc” (“cohort” spelled backward).2
In a case-control study, those in the case group have the disease of interest (eg, women with ovarian cancer), and those in the control group do not (eg, women without ovarian cancer). Investigators then look back in time to see what proportion of people in the case and control groups had the exposure of interest (eg, oral contraceptives). Those in the control group indicate the background frequency of exposure in the population.3 If a higher proportion of those in the case group than in the control group had the exposure of interest, then a positive (harmful) association exists. Conversely, if a lower proportion of those in the case group than in the control group had the exposure, then a negative (protective) association is evident. For example, in a large meta-analysis of ovarian cancer and oral contraceptives, 31% of women with ovarian cancer and 37% of women without ovarian cancer had ever used oral contraceptives, indicating protection against this disease.4
Although challenging to grasp, case-control studies have several appealing features.5,6 They are useful for studying rare events or events that take a long time to develop, such as cancer. They often can be done faster and with less expense than a cohort study, which tracks participants forward in time from exposure to outcome. However, case-control studies have important limitations. Because of their research design, they do not allow incidence rates or relative risks to be calculated. Instead, the odds ratio is a useful proxy for the relative risk, when the outcome is rare.7 When the outcome is not rare (eg, greater than 10% to 15%), the odds ratio exaggerates the relative risk. In addition, case-control studies are more susceptible to selection and information bias than are prospective cohort studies.8
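The rare-outcome caveat can be shown with a brief numerical sketch. The 2x2 counts below are hypothetical, chosen only to illustrate the divergence; they come from no study cited here:

```python
# Hypothetical 2x2 cohort counts, for illustration only:
# a, b = exposed with / without the outcome; c, d = unexposed with / without.

def risk_measures(a, b, c, d):
    rr = (a / (a + b)) / (c / (c + d))  # relative risk (requires cohort data)
    odds_ratio = (a * d) / (b * c)      # odds ratio (computable from case-control data)
    return rr, odds_ratio

# Rare outcome (about 1-2%): the odds ratio closely approximates the relative risk.
print(risk_measures(20, 980, 10, 990))    # RR = 2.00, OR = 2.02

# Common outcome (25-50%): the odds ratio exaggerates the relative risk.
print(risk_measures(500, 500, 250, 750))  # RR = 2.00, OR = 3.00
```

With the same twofold relative risk, the odds ratio drifts from 2.0 to 3.0 as the outcome becomes common, which is why the proxy holds only for rare outcomes.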
Mislabeling of cohort studies as “case-control” studies is a chronic problem in the published literature, including general medical journals.9 To estimate the frequency in obstetrics and gynecology journals, I reviewed published articles billed as “case-control” in the title.
MATERIALS AND METHODS
I used PubMed to identify published articles that featured “case-control” in the title and were published in four U.S. journals: American Journal of Obstetrics & Gynecology, Fertility and Sterility, Journal of Reproductive Medicine, and Obstetrics & Gynecology. The search strategy specified the journal name and required the words “case control” in the title; the search extended from January 1, 1970, through May 13, 2009. I reviewed the abstract of each article and the full text of articles when the methods were not clear from the abstract. Only original research articles were included; reviews, commentaries, and letters to the editor were excluded. I considered cross-sectional studies, which are hybrids between case-control and cohort studies,10 to be case-control studies for this evaluation; five such reports were found.
I then calculated the proportion of articles labeled as “case-control” in the title that were, in fact, not case-control studies. I calculated Fisher’s exact 95% confidence intervals (CIs) around these proportions with Open Epi software (http://www.openepi.com/Menu/OpenEpiMenu.htm) and compared proportions between journals with STATCALC in Epi Info 6 (http://www.cdc.gov). Journal results are presented anonymously, with journal A having the lowest percentage of mislabeled reports and journal D the highest. I also looked for temporal trends in this problem.
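As a transparency check on these intervals, the exact binomial (Clopper-Pearson) confidence interval can be reproduced in a few lines. This reimplementation, via bisection on the binomial cumulative distribution, is my assumption about the method OpenEpi applies, not the original software:

```python
# Sketch of the exact (Clopper-Pearson) binomial confidence interval,
# assumed to be the method behind the OpenEpi exact CIs reported here.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k + 1))

def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson exact CI for a proportion k/n, found by bisection."""
    def solve(f, lo=0.0, hi=1.0):
        for _ in range(60):                 # interval shrinks to ~1e-18
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return lo
    lower = 0.0 if k == 0 else solve(lambda p: binom_cdf(k - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

# Figures from the Results: 2 of 22 early reports misused the term.
lo, hi = exact_ci(2, 22)
print(f"{2/22:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # 9% (95% CI 1-29%)
```

Running this on the 2-of-22 figure reproduces the "9% (95% CI 1-29%)" reported below.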
RESULTS

The search yielded 124 articles with “case-control” in the title. The number of articles ranged from 13 to 63 in the four journals (Table 1). The proportion of published reports mislabeled as “case-control” ranged from 13% (95% CI 3-34%) in journal A to 36% (95% CI 18-57%) in journal D, a 2.8-fold difference (95% CI 0.9-9.0). Thirty percent of studies were mislabeled overall.
Publication of case-control studies increased over the decades, as did incorrect use of the term “case-control.” In the 1970s and 1980s, two of 22 reports (9%, 95% CI 1-29%) misused the term. In the 1990s and 2000s, 35 of 102 reports (34%, 95% CI 25-43%) made the error.
DISCUSSION

Confusion about research methods remains a generic problem in obstetrics and gynecology. This is not unique to case-control studies; for example, many published reports claiming to be “randomized controlled trials” were not randomized controlled trials.11 In the last two decades, more than a third of published reports claiming to be “case-control” in the title were, in fact, not case-control studies. Although likely due to naivete, inaccurate reporting of research is a serious problem.12 Ironically, one report13 called itself a “case-control” study in the title and a “retrospective cohort study” in the abstract.
Most of the mislabeled studies were retrospective cohort studies. Researchers commonly designate an exposed group the “cases” and the unexposed group the “controls” in a cohort study.14 This terminology is inappropriate. In both retrospective and prospective cohort studies, exposure or nonexposure defines the inclusion criteria, and both groups are then followed forward in time to outcomes.10 In a case-control study, disease (case) or nondisease (control) defines the inclusion criteria for the research, and investigators look back in time for disproportionate exposures. Stated alternatively, cohort studies start with exposure, and case-control studies begin with outcome.
Strengths and weaknesses of this survey deserve mention. A strength was the inclusion of four journals, which should improve the generalizability of these findings. The survey encompassed nearly four decades, and a single assessor uniformly judged all articles. That only the author judged the articles is also a potential weakness, because inadvertent misclassification may have occurred; the impact of any such misclassification on the overall results would likely be small. The sample was not random, and the extent to which these 124 publications represent the universe of alleged case-control studies is unclear. Improper use of the term “case-control” may be even more common in articles that do not specify “case-control” in the title.
Current peer-review processes at these four journals have been variably successful in weeding out mislabeled “case-control” studies; journal A did better than the other three (each with more than 30% of reports mislabeled). Investigators clearly need more training in the lexicon and methods of research.12 Editors15 and reviewers16 need to be more vigilant as well. Given the scope and chronicity of this problem, manuscripts claiming to be a “case-control” study merit special scrutiny. For example, these submissions might benefit from review by an epidemiologist. This precaution would not be foolproof, however, because one of the mislabeled “case-control” studies was submitted from a Department of Epidemiology. Case-control studies are simply hard to understand.
1. Schulz KF, Grimes DA. Case-control studies: research in reverse. Lancet 2002;359:431–4.
2. Feinstein AR. Clinical biostatistics. XX. The epidemiologic trohoc, the ablative risk ratio, and “retrospective” research. Clin Pharmacol Ther 1973;14:291–307.
3. Grimes DA, Schulz KF. Compared to what? Finding controls for case-control studies. Lancet 2005;365:1429–33.
4. Beral V, Doll R, Hermon C, Peto R, Reeves G. Ovarian cancer and oral contraceptives: collaborative reanalysis of data from 45 epidemiological studies including 23,257 women with ovarian cancer and 87,303 controls. Lancet 2008;371:303–14.
5. Schlesselman JJ. Case-control studies. Design, conduct, analysis. New York (NY): Oxford University Press; 1982.
6. Peipert JF, Grimes DA. The case-control study: a primer for the obstetrician-gynecologist. Obstet Gynecol 1994;84:140–5.
7. Grimes DA, Schulz KF. Making sense of odds and odds ratios. Obstet Gynecol 2008;111:423–6.
8. Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet 2002;359:248–52.
9. Garg SK, Chase HP, Marshall G, Hoops SL, Holmes DL, Jackson WE. Oral contraceptives and renal and retinal complications in young women with insulin-dependent diabetes mellitus. JAMA 1994;271:1099–102.
10. Grimes DA, Schulz KF. An overview of clinical research: the lay of the land. Lancet 2002;359:57–61.
11. Grimes DA. Randomized controlled trials: “it ain’t necessarily so” [editorial]. Obstet Gynecol 1991;78:703–4.
12. Altman DG. The scandal of poor medical research. BMJ 1994;308:283–4.
13. Incerti M, Ghidini A, Locatelli A, Poggi SH, Pezzullo JC. Cervical length < or = 25 mm in low-risk women: a case control study of cerclage with rest vs rest alone. Am J Obstet Gynecol 2007;197:315.e1–4.
14. Fanfani F, Fagotti A, Gagliardi ML, Ruffo G, Ceccaroni M, Scambia G, et al. Discoid or segmental rectosigmoid resection for deep infiltrating endometriosis: a case-control study. Fertil Steril 2009 April 24. [Epub ahead of print].
15. Altman DG. Poor-quality medical research: what can journals do? JAMA 2002;287:2765–7.
16. Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med 2008;101:507–14.