From the Department of Philosophy, University of Miami, Coral Gables, Florida.
Correspondence: Susan Haack, Department of Philosophy, University of Miami, Coral Gables, FL 33124. E-mail: email@example.com
“It seems to me that there is a good deal of ballyhoo about scientific method.”1 —Percy Bridgman
Greenland2 writes in defense of what he calls “risk-factor epidemiology.” If I have understood correctly, these are epidemiologic studies in which data are collected without any test hypothesis being stipulated in advance, or in which data collected for other purposes are analyzed to look for correlations between this or that exposure and these or those diseases. But critics such as Feinstein3 or Skrabanek,4–7 who write pejoratively of “black-box epidemiology” or “data-dredging,” think such studies substandard. Feinstein argues that they are especially susceptible to unreliability, and Skrabanek complains that often they are just “scaremongering made respectable by the use of sophisticated statistical methods.”4 This is quite a tangle; but I will try, in what remains of my 1000 words, to disentangle it.
As Feinstein3 points out, when a hypothesis is specified before data are collected, precautions can be taken against relying on poor information about exposure, dosage, or diagnosis, against mistaking higher diagnosis rates for higher disease rates, and so on—foresight that is impossible when masses of data collected for other purposes are searched for unanticipated correlations. This seems right. Moreover, when very large numbers of statistically significant correlations are discovered in a search of multiple datasets, some of them will likely be the result of chance. However, all this is quite compatible with Greenland's2 argument, which is that the correlations such studies turn up can prompt us to speculate about what causal mechanism might tie together the apparently unrelated factors. Together with evidence from toxicology, animal studies, and so on, these correlations help confirm an etiologic hypothesis. This also seems right, as Greenland illustrates very nicely with his example of the role of lipid peroxidation in the etiology of renal cancer, and the epidemiologic and other evidence that supports that hypothesis.
Feinstein's3 correct observation that no amount of toxicologic, in vivo, and other types of evidence can turn a poor statistical study into a good one does not undermine Greenland's2 argument either. The structure of evidence is not linear, like a mathematical proof, but ramifies like a crossword puzzle.8,9 The reasonableness of a crossword entry depends on how well it is supported by the clue and other intersecting entries, how reasonable those other entries are (independent of this one), and how much of the crossword has been completed; similarly, degree of warrant depends on how supportive the evidence is of the claim in question, how secure it is (independent of that claim), and how comprehensive it is. Feinstein is concerned with the potential insecurity of the results of the controversial kinds of study, Greenland with their potential supportiveness with respect to an etiologic claim.
Like security, supportiveness comes in degrees. How strongly evidence supports a claim depends on how well the evidence and the claim fit together in an explanatory story, as evidence from genetics, phage studies, stereochemistry, x-ray crystallography, and so on interlock to support the conclusion that DNA is a double-helical, backbone-out macromolecule that has like-with-unlike base pairs—and do so more strongly than any one of these pieces of evidence does. Similarly, statistical evidence of a correlation between risk-factor F and disease D, toxicologic evidence of the physiological ill effects of F, and studies showing an increased occurrence of D in animals exposed to F, could interlock to support the conclusion that exposure to F causes elevated risk of D; and this combined evidence may do so more strongly than any component piece of evidence alone could do. Granted, supportiveness and security interact; if the epidemiologic evidence is weak on independent security, the degree of warrant of the conclusion will be lessened. On the other hand, independent evidence of a causal mechanism could justifiably boost confidence in the reliability of the epidemiologic evidence.
Skrabanek's concerns are in part like Feinstein's, focused on security; but mainly he complains that, in this era of epidemiology without epidemics, much of what goes on in the discipline is trivial or alarmist.4 Doubtless—aided and abetted by journalists eager for a salable story and attorneys eager for a new cause of action—unreliable epidemiologic studies often do raise unnecessary public alarm; and doubtless there is plenty of banal work being done. However, everywhere in the academy the “publish or perish” mentality has encouraged a flood of academic busy-work; everywhere in the sciences pressure to obtain grants has encouraged special pleading for “further research” into this or that, and created incentives for scientists to go prematurely to the press; and everywhere that research has a potential relevance to policy, the line between inquiry and advocacy tends to get blurred.10
In any case, this too is irrelevant to Greenland's argument; for these are matters, not of “methodology,”2 but of the health of our academic—indeed, our intellectual—culture. Skrabanek's complaints about “poor data... manipulated to reach a foregone conclusion”7 bring to mind the lapidary response of French sociologist Georges Sorel,11 when asked what was the most important method in his field: “Honesty.” Which prompts the following concluding thought: methodologic protocols, procedures, rules, and so on, have their place; however, they are no substitute for well-informed imagination, shrewd judgment of the weight of evidence, and good faith in inquiry—as, I suspect, in his era of “epidemiology without methodologic ballyhoo,” John Snow understood well enough.
ABOUT THE AUTHOR
SUSAN HAACK is Cooper Senior Scholar in Arts and Sciences, Professor of Philosophy, and Professor of Law at the University of Miami.
1. Bridgman P. On ‘scientific method’ (1949). In: Bridgman P. Reflections of a Physicist. New York: Philosophical Library; 1955:81–83.
2. Greenland S, Gago-Dominguez M, Castelao JE. The value of risk-factor (“black box”) epidemiology. Epidemiology. 2004;15:529–535.
3. Feinstein AR. Scientific standards in epidemiologic studies of the menace of everyday life. Science. 1988;242:1257–1263.
4. Skrabanek P. The poverty of epidemiology. Perspect Biol Med. 1992;35:182–185.
5. Skrabanek P. The emptiness of the black box. Epidemiology. 1994;5:553–555.
6. Skrabanek P, McCormick J. Follies and Fallacies in Medicine. Buffalo, NY: Prometheus Books; 1990.
7. Skrabanek P. The epidemiology of errors. Lancet. 1993;342:1502.
8. Haack S. Evidence and Inquiry: Towards Reconstruction in Epistemology. Oxford: Blackwell; 1993.
9. Haack S. Clues to the puzzle of scientific evidence: a more-so story. In: Haack S. Defending Science—Within Reason: Between Scientism and Cynicism. Buffalo, NY: Prometheus Books; 2003:57–92.
10. Haack S. Preposterism and its consequences. In: Haack S. Manifesto of a Passionate Moderate: Unfashionable Essays. Chicago: University of Chicago Press; 1998:188–208.
11. Andreski S. Social Sciences as Sorcery. New York: St. Martin's Press; 1972.
© 2004 Lippincott Williams & Wilkins, Inc.