Inspiration to pursue research topics in analytic epidemiology comes from many sources. We may follow leads from descriptive epidemiology to identify the underlying reasons for a time trend, ethnic disparity, or geographic pattern in disease occurrence. We may follow a trail of evidence built on previous analytic epidemiology studies that have yielded promising results. There may be a compelling policy question that calls for epidemiologic information—for example, monitoring whether an approved, widely used drug is safe, or whether regulatory limits for an environmental toxicant provide sufficient protection from public health harm. Mechanistic or toxicologic evidence may encourage epidemiologic studies to find out whether a plausible mechanism for disease causation is in fact operative. We may just be curious about whether something we experience affects our health. All are legitimate sources of epidemiologic ideas, and all have been pursued successfully.
A more complex question is when we should stop pursuing a topic. Friends have cynically suggested to me that we seem to stop only when no one will provide money that lets us keep going. Ideally, we would stop when we have generated definitive answers, but of course that almost never occurs. Given the nature of discovery in science, new knowledge inevitably raises more questions. A more modest and realistic proposal is to assess, for any given candidate study, the marginal costs and marginal benefits.1,2 Marginal costs, including the study's expense in time and money, are relatively straightforward. Marginal benefits are more complex, depending on the choice of metric (scientific or public health product? near-term benefit? long-term benefit?). Conceptually, at least, we would take into account previous pertinent studies and ask how the next study would alter the state of knowledge. We are not asking “should this topic be studied at all?” or “would a definitive answer to the question be beneficial?” but rather, “what is the incremental benefit of a specific new investigation in light of what has already been done?” In other words, what does the new approach offer? Without some reckoning among priorities in this manner, proponents will advocate based only on perceived marginal benefits, no matter how small or how costly to attain.
With this framework for setting research priorities, I will offer my (subjective, arbitrary) list of causal pathways for epidemiologic perseveration, loosely defined as the refusal to let go of an idea despite the current evidence:
1. Hope overriding experience. As human beings, we have strong intuitive notions about what is good for us (eg, fruits and vegetables in the diet, physical activity, water free of impurities), and what is harmful (eg, stress, high-fat foods, environmental pollutants). These common-sense notions are a reasonable basis for launching research. However, when well-designed epidemiologic studies fail to corroborate our expectations, we would do well to accept the empirical evidence generated by the admittedly imperfect tools of epidemiology, and acknowledge when the evidence does not support our intuition. Going back repeatedly to try to prove what must be right in the face of continuing lack of support is the research analogy to looking, again and again, for lost keys in the place where they should logically be found, despite having already looked and failed to find them. The underlying idea may well be correct, but gathering more and more data that fail to support the idea is wasteful.
2. Ideology overriding evidence. Individuals and societies have clear notions of right and wrong regarding social justice, environmental protection, and other values. These fundamental principles are not subject to empirical verification through research, eg, “proving” that social inequality or ethnic discrimination is unjust. Epidemiology can generate information that supports these values only when the data cooperate—showing that income inequality is related to worse health status, or lack of medical care access results in delayed disease diagnosis and treatment, and therefore poorer outcomes. But when we fail to find evidence supporting “the right policy” (determined legitimately on ideological grounds), we fear that our data will be used by the opposition to argue for the opposing policy. When the data are uncooperative, epidemiologists find it difficult to accept that the overriding principles are valid independent of or even in opposition to the epidemiologic data—in part because this calls into question the value of having conducted the epidemiologic evaluation in the first place. The misguided solution is to keep studying the issue until the evidence bends in our favor, presuming that with persistence, the epidemiology ultimately will align itself with what is ethically right. But what if improved prenatal care access (or improved quality) does not actually prevent preterm birth, and what if induced abortion really does have some long-term adverse health consequences? Do we keep seeking data that demonstrate prenatal care in fact prevents preterm birth, or that induced abortion has no adverse consequences but only benefits? Perhaps we as a society would be better served by focusing the debate not on the epidemiologic evidence, but on the overriding reasons why access to prenatal care and abortion services are needed. We may question whether epidemiology is helpful when there are overwhelming ideological reasons for a particular policy.
3. Biology overriding epidemiology. Epidemiologists tend to feel most secure, like “real scientists,” when we pursue biologically based research questions. By doing so, we receive approval from our more-respected colleagues working in the laboratory. Furthermore, in the hierarchy of legitimate research motivations (even among fellow epidemiologists), biologic plausibility conveys the most prestige. Thus, it is not surprising that we are reluctant to abandon a biologically well-grounded idea just because it does not yield supportive epidemiologic evidence. However, many highly plausible and appealing mechanisms of disease causation will not generate the expected epidemiologic findings—a result that should not surprise us, given the complexity of human biology and the challenges of extrapolation from in vitro systems and across species. The biologic pathway may be more complex than we recognize, with checks and balances that mitigate the particular sequence of arrows in the causal diagram on which we are focused. Alternatively, the pathway may not be quantitatively important, or our tools may be too blunt to measure and document its true impact. Negative epidemiologic findings do not, of course, disprove the underlying biologic evidence and theory. Such findings may, however, provide evidence that the hypothesized pathway is inconsequential for public health applications, subject to the limitations of epidemiology at a particular point in time. As the technology for addressing a given pathway advances, there may be value in revisiting the previously discouraging evidence, but only when there is a clear opportunity to address the hypothesis with more rigor and precision.
4. Poorly done research as the argument for doing better. The fact that a research question has been addressed in a methodologically inferior manner does not itself justify addressing the issue more rigorously. The only exceptions are situations in which the faulty research has encouraged unproductive new studies or inappropriate policy. Otherwise, further research must be driven by the worthiness of the topic—not merely by the ability to address it more effectively. If it's not worth doing, it's not worth doing well. Where the original justification for the research was weak and the quality of the epidemiology was limited, there is good reason to consider pursuing other topics of greater promise, rather than expending resources solely to correct the record. Similarly, contradictory findings alone do not provide a compelling basis for a new study. Unless the conflict is important to resolve (beyond just the fact that it arouses our curiosity and we are able to design a study that can reconcile or override earlier research), one more study similar to those that have come before has little merit. An additional study would only generate another dot on the meta-analysis—an expensive but modest increment in the cumulative weight of evidence.
5. Continuing past the point of diminishing returns. Epidemiologic technology in the form of measurement tools, research designs, and other structural features has inherent limits. There is a state-of-the-art that defines the outer bounds of exposure-measurement technology, disease identification, and mechanistic understanding, and this boundary expands as more advanced tools are developed. Once the epidemiologic research community has given a topic our best collective shot, perhaps several times, using all the tools at our disposal, there is little benefit to doing more of the same, and certainly no value in going backwards to repeat earlier, inferior approaches.3 Where the volume and quality of prior research is substantial, the next study of similar quality will inevitably make little contribution. This is true even if the previous studies are contradictory and the original question remains important and unresolved. We have confronted this challenge in the epidemiologic study of magnetic fields and childhood leukemia, with very large studies completed in most of the developed world, using state-of-the-art exposure assessment and rigorous methods of implementation. Insofar as the topic remains unresolved, we can only await opportunities for better research. We do need to face up to epidemiology's limits, by suspending further work when epidemiology's capabilities offer little or no promise of forward progress, and then resuming work as the repertoire of methods expands.
6. Funding sources that are willing to support unproductive research. Those who fund epidemiology have a wide range of motives, as diverse as the realms of application for the knowledge that results. While special-interest groups have provided a jump-start to important new research directions that would not have emerged otherwise, their enthusiasm for the question (and desire for a particular answer) may well continue beyond the point where epidemiology has much more to offer. And in a seductive if cynical dance, the epidemiologists hold out the promise of more and better (ie, more favorable) answers, and the funding agencies hold out hope for research that will finally resolve the issue, ideally proving what they knew to be true all along.4 Industry has been known to persist in seeking studies to raise doubt about their demonstrably harmful product (eg, tobacco),5 and citizen advocacy groups have been known to provide or lobby for continued support of topics in the face of strong empirical discouragement (eg, vaccines and autism). Funding agencies are all advocacy groups of one sort or another, and in these times of limited funding there is no shortage of epidemiologists willing to engage in any topic that comes with research support. We should make sure we can provide a more compelling underlying justification for why we continue along a line of research than simply to say “because we can.”
What's to be done to avoid epidemiologic perseveration? First, we need to try to overcome our biases and keep a focus on the goal of epidemiology—to generate knowledge that advances science and public health. We should at least strive to attend to that big question in the face of temptations and impulses that distract us. Second, we need dispassionate outside evaluators to help us overcome our narrow self-interest, ie, collections of experts to evaluate research direction and potential, thereby guiding funding agencies in the most productive directions. Groups do this more effectively than any given individual, especially the person whose work is being judged. Third, we would do well to be able to articulate in simple and persuasive terms (to ourselves as well as others) how this new research will help—why it is needed. Before we begin the study, can we write the closing sentences of the manuscript's introduction, where we succinctly indicate why this work is important? Finally, and perhaps most idealistically, we have some ethical obligation to maintain objectivity and be ready to acknowledge when we have hit an impasse. We need to be ready to look at a topic we have pursued or would like to pursue, and confess that we have run out of productive ideas and opportunities for moving forward. We need to recognize when it is time to move on to other important questions where our talents and efforts will be more beneficial.
Thanks to Jay Kaufman and Andrew Olshan for helpful comments on the manuscript.