Department of Epidemiology, Box 357236, University of Washington, Seattle WA 98195.
Address correspondence to: Noel S. Weiss, Department of Epidemiology, Box 357236, University of Washington, Seattle WA 98195.
While some of my research over the years has had no clear application to disease prevention, another portion has had direct implications for clinical and public health decisions. Not surprisingly, the latter type of research has been (and continues to be) the most rewarding. Mining cancer surveillance data to explore demographic correlates of cancer incidence,1,2 for example, gives me some satisfaction, but not nearly as much as being able to document an effect of certain hormonal regimens on the risk of endometrial cancer.3,4 Or as much as estimating the potential reduction in mortality from colorectal or prostate cancer with screening.5,6
However, having identified a way of using hormones that appears to raise the risk of endometrial cancer, or a screening approach for colorectal cancer that seems to be effective, I somehow never go on, in my articles reporting these findings, to recommend avoidance of those hormones or the establishment of a screening program. What would it take for me to include a policy recommendation in an article that reports research results?
The short answer to the above question: A lot. The longer answer: All of the following conditions would have to be fulfilled.
1. The results from my study and others would have to be consistent in observing a strong association. At a minimum, the association would need to be consistent enough and strong enough to infer a true causal or protective action of the exposure in question.
2. Information would have to be available on all potentially important consequences of the exposure, not just the one I have investigated. The decision by a postmenopausal woman to use a particular hormone regimen ought to be based only in small part on the impact of that regimen on endometrial cancer risk, since there are other potential health consequences (both risks and benefits) related to such use. While mortality from prostate cancer is a key factor in deciding on the desirability of screening for the presence of that disease, the impact of screening on other health outcomes (e.g., morbidity due to treatment of lesions that were detected by screening) must be taken into account as well.
3. The economic and other non-health-related consequences of a particular recommendation would have to be well understood. For example, even if screening appears to reduce mortality from prostate cancer, the costs of screening and evaluating men who falsely test positive (combined with treatment-related morbidity) may be too great a price to pay, especially in populations with a low risk of prostate cancer.
4. The apparent benefits associated with the implementation of the recommendation would have to exceed the costs by so great a margin that my failure to conduct a formal decision analysis would not detract from my recommendation. Even if I were trained to do such an analysis, the length limitations of a research paper probably would not provide me the space to do so.
Are the above conditions ever met? Once in a while. For example, what if I had conducted a large cohort study of cigarette smoking or heavy occupational asbestos exposure, in which the full range of important health outcomes had been examined? The size and breadth of the associations observed would argue loudly for action, and I would eagerly have lent my voice in support of such action in my research paper. Or what if I found a strong association between the consumption of inadequately cooked ground beef and the occurrence of the hemolytic uremic syndrome? The absence of any obvious benefit from undercooking and the low cost of adequate cooking would compel me to make a recommendation in virtually the same breath I used to describe the study results.
However, in my experience, examples such as these are the exception rather than the rule. In most situations, a policy recommendation will need to be based on a systematic enumeration and weighing of all potential benefits and costs of an intervention if it is to be credible. I believe that such an enumeration and weighing, if done properly, is beyond the scope of most of our research articles.
1. Weiss NS, Homonchuck T, Young JL. Incidence of the histologic types of ovarian cancer: The US Third National Cancer Survey, 1969–1971. Gynecol Oncol 1977; 5:161–167.
2. Flood DM, Weiss NS, Cook LS, Emerson JC, Schwartz SM, Potter JD. Colorectal cancer incidence in Asian migrants to the United States and their descendants. Cancer Causes Control 2000; 11:403–411.
3. Weiss NS, Szekely DR, English DR, Schweid AI. Endometrial cancer in relation to patterns of menopausal estrogen use. JAMA 1979; 242:261–264.
4. Weiss NS, Sayvetz TA. Incidence of endometrial cancer in relation to the use of oral contraceptives. N Engl J Med 1980; 302:551–554.
5. Selby JV, Friedman GD, Quesenberry CP Jr, Weiss NS. A case-control study of screening sigmoidoscopy and mortality from colorectal cancer. N Engl J Med 1992; 326:653–657.
6. Richert-Boe KE, Humphrey LL, Glass AG, Weiss NS. Screening digital rectal examination and prostate cancer mortality: a case-control study. J Med Screen 1998; 5:99–103.