Having some policy on conflict of interest may be unavoidable today and is probably okay when focused on the “big ones.” However, I worry that if we advertise potential conflicts of interest too much, the message being picked up is that scientific results are for sale.
Let me provide an example, and it is not an isolated one. In the fall of 2003, the Dutch National Health Council issued a report on environmental tobacco smoke. As an update of a 1990 report, it delivered a concise and rather mainstream message, based primarily on other authoritative reviews. However, the report came out just before a parliamentary debate on smoking bans, and it quickly became the focus of a heated media controversy. An editorial in a major newspaper said, “There's a contrast between studies funded by governmental agencies, which are against smoking, and scientific reports to which the Tobacco Industry has contributed. So, the final word about the scientific side of the issue is not yet in.”
This is the stuff that opinion leaders and policy makers read (they don't read epidemiology journals), and it worries me that the way science is used by the tobacco industry and the way it is used by responsible government agencies are being portrayed as equally flawed.
Is this image of “science for sale” the image that we are, inadvertently, promoting by our current preoccupation with “conflict of interest”? (The phrase alone fetches some 4500 hits in PubMed.) After disclosing our financial interests, are we going to disclose other conflicts arising from, for example, our religious or political preferences, as the draft policy suggests? Will we end up wringing our hands over, and excusing ourselves from, peer reviewing work by even distant colleagues, just to be on the safe side?1
And then, how are we going to use this information? Isn't it contradictory to say that “conflict of interest can affect many stages of data collection and analysis” and then not to use disclosed conflicts of interest as a basis for editorial decisions? If conflicts of interest bias our studies, should we not be able to detect that bias (and act upon it) by applying all the critical skills we can muster as editors and reviewers in the prepublication stage? If conflicts of interest lead to flawed work that we are then going to publish anyway, how will we use this “evidence” in the already difficult process of translating science into policy advice? Are we going to exclude articles from our policy consideration solely on the basis of a disclosed conflict of interest? Or, are we being sidetracked into a debate that is no longer about science but about second-guessing how our financial and other ties might have influenced the results of our studies?
As acknowledged in the draft policy document, there will always be conflicts of interest in epidemiology, given its applied nature. Efforts by stakeholders to influence policy through science have been with us for a long time. Some of our colleagues ally themselves with specific stakeholder interests to become “hired guns,” sadly promoting the image of “science for sale.” The extent to which commercial interests have influenced environmental health science and regulation is sometimes staggering.2 Continued concern about orchestrated attempts to “manufacture uncertainty” to prevent or delay sensible public health interventions seems well justified.3
I'm leaning toward the position that the best we can do as public health scientists is to focus critically on the quality of our science first, second, and third, and only then worry about the lesser issues (such as religion, politics, and money).
Journal editors and publishers have a clear role to play here. It does not make me happy as a reviewer to receive manuscripts whose collective conflict-of-interest statements run to more pages than the methods section. It does not make me happy as an author when a journal editor rejects a paper explicitly and primarily because the study produced a null result. Nor does it make me happy as a policy advisor to have to deal with shoddy articles published by “hired guns” in lesser or out-of-specialty journals in an effort to tilt the balance of the peer-reviewed evidence in their preferred direction. (None of these experiences stems from EPIDEMIOLOGY, needless to say.) In all of these areas, journal editors, publishers, and reviewers can do a great deal to boost the quality and integrity of the science, and I happen to think that this is more important than knowing exactly who paid whom how much, and why.
ABOUT THE AUTHOR
BERT BRUNEKREEF is a professor of Environmental Epidemiology at the Institute for Risk Assessment Sciences at the University of Utrecht, the Netherlands. He has studied the health effects of indoor and outdoor pollution for the last 25 years. He has been closely involved in policy advice on environmental health issues for both the Dutch government and the World Health Organization.
1. Maurissen JP, Gilbert SG, Sander M, et al. Workshop proceedings: managing conflict of interest in science. A little consensus and a lot of controversy. Toxicol Sci.
2. Markowitz G, Rosner D. Deceit and Denial: The Deadly Politics of Industrial Pollution. Berkeley, CA: University of California Press; 2002.
3. Michaels D, Monforton C. Manufacturing uncertainty: contested science and the protection of the public's health and environment. Am J Public Health. 2005;95(Suppl 1):S39–S48.