Epidemiology

May 2006 - Volume 17 - Issue 3
doi: 10.1097/01.ede.0000209441.24307.92
Commentary

Participation in Population Studies

Hartge, Patricia


Author Information

From the National Cancer Institute, Bethesda, MD.

Address correspondence to: Patricia Hartge, National Cancer Institute, Division of Cancer Epidemiology and Genetics, Executive Plaza South, Rm. 8093, 6120 Executive Boulevard, Bethesda, MD 20892. E-mail: hartgep@mail.nih.gov

Editors' note: Related commentaries appear on pages 255 and 258.

Many areas of epidemiologic practice are flourishing as population registries and other large databases make it easier than ever to collect the information researchers seek on virtually all the individuals they have defined as eligible. Meanwhile, in studies that require participants to respond (by being interviewed or completing a questionnaire, or by providing biologic samples), epidemiologists face a real and growing threat from nonparticipation. We are not alone in our predicament, because survey researchers in general report that they must spend more effort to get even moderate response rates. It appears that the very high response rates of earlier decades are no longer within reach today. We must first understand the scope and nature of the problem, soberly evaluate our current practices, and then look for major changes that will take us forward.


Are Response Rates Today Actually Poor?

To assess the state of response rates, we must rely on what is reported in journal articles. Analytic epidemiology studies published in 2003 in 10 high-impact journals in epidemiology (American Journal of Epidemiology; Annals of Epidemiology; Cancer Epidemiology, Biomarkers and Prevention; EPIDEMIOLOGY; International Journal of Epidemiology; and Journal of Clinical Epidemiology), public health (American Journal of Public Health), and medicine (Lancet, New England Journal of Medicine, and Journal of the American Medical Association) showed remarkable patterns both in the reporting of response rates and in the rates achieved.1 Authors provided some information on participation—either the “response rate” of participants divided by the total eligible or the “participation rate” of participants divided by the subtotal of eligible living persons approached for the study (or the numbers with which to calculate one or both rates)—for only 41% of cross-sectional, 56% of case–control, and 68% of cohort studies. Publications in 2003 included studies conducted over many years; average participation fell from 1970 to 2002 for all designs and most steeply for controls in population-based studies. In such control groups, participation rates of 70% and response rates of 50% are not uncommon today. During the past 3 decades, proportionally more studies have collected biologic specimens, but fewer than 3 in 10 reported participation for the specimen component.
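To make the distinction between the two denominators concrete, here is a minimal sketch (in Python, with invented counts) of how the same study can show a 70% participation rate but a much lower response rate:

    # Invented counts for a single hypothetical study.
    eligible_total = 1000  # all persons defined as eligible
    approached     = 800   # eligible living persons actually approached
    participants   = 560   # persons who completed the study component

    response_rate      = participants / eligible_total  # 0.56
    participation_rate = participants / approached      # 0.70

    print(f"response rate:      {response_rate:.0%}")       # 56%
    print(f"participation rate: {participation_rate:.0%}")  # 70%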

Is Poor Participation Introducing Substantial Bias in Our Estimates?

Epidemiologists measure the association between an exposure and health outcome by choosing the right effect parameter, estimating it, calculating the imprecision, and considering the possible direction and magnitude of the bias. For rate ratios, the “bias factor” (estimated rate ratio divided by the true rate ratio) is the parameter that captures direction and magnitude of bias.2 In the case–control setting, the willingness to participate (W) may or may not vary with a particular exposure. Expressed mathematically, the “bias ratio” (W_E for the exposed, divided by W_U for the unexposed) may or may not be 1.0, and it may be different in cases (W_E(case)/W_U(case)) than in controls (W_E(control)/W_U(control)). The bias ratio for the study as a whole thus becomes [W_E(case)/W_U(case)] / [W_E(control)/W_U(control)].

Several implications emerge immediately. A 70% response rate in cases and 60% in controls could create almost no bias for one exposure yet large bias for another exposure within the same study. Equal response rates in cases and controls offer no protection if the relevant correlates of nonparticipation are different in the sick and the healthy, as they so often are. On the other hand, rather poor response rates may be nothing to worry about if willingness is essentially unrelated to exposure. Even if willingness differs with exposure, bias still will not result unless this tendency is stronger (or weaker) in cases than in controls. This pattern is hard to gauge, but it is the real threat.
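A small simulation makes these implications tangible. The sketch below (all numbers invented; the W notation follows the formula above) applies willingness probabilities to the expected case–control counts and recovers the observed odds ratio as the true odds ratio multiplied by the bias ratio:

    # Illustrative only: how willingness to participate distorts the
    # observed odds ratio in a case-control study (invented numbers).

    def observed_odds_ratio(p_exp_case, p_exp_ctrl, w):
        """Observed OR after selection by willingness probabilities w."""
        a = p_exp_case       * w["E_case"]  # exposed cases retained
        b = (1 - p_exp_case) * w["U_case"]  # unexposed cases retained
        c = p_exp_ctrl       * w["E_ctrl"]  # exposed controls retained
        d = (1 - p_exp_ctrl) * w["U_ctrl"]  # unexposed controls retained
        return (a / b) / (c / d)

    true_or, p_ctrl = 2.0, 0.20
    p_case = true_or * p_ctrl / (1 - p_ctrl + true_or * p_ctrl)  # 1/3

    # 70% response in cases, 60% in controls, unrelated to exposure:
    # bias ratio = 1.0, so no bias despite unequal response rates.
    w1 = {"E_case": 0.7, "U_case": 0.7, "E_ctrl": 0.6, "U_ctrl": 0.6}

    # Similar overall rates, but exposed controls respond less often:
    # bias ratio = 1.3, and the observed OR inflates from 2.0 to 2.6.
    w2 = {"E_case": 0.7, "U_case": 0.7, "E_ctrl": 0.5, "U_ctrl": 0.65}

    for w in (w1, w2):
        br = (w["E_case"] / w["U_case"]) / (w["E_ctrl"] / w["U_ctrl"])
        print(f"bias ratio {br:.2f} -> observed OR "
              f"{observed_odds_ratio(p_case, p_ctrl, w):.2f}")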

How Sensible Is Sensitivity Analysis?

Careful investigation of the statistical structure of bias has uncovered subtle features and variations of this central paradigm and doubtless will yield more. We and our statistically sophisticated colleagues should pursue this area. Meanwhile, we all should exploit the simple insights offered by the bias ratio and use sensitivity analyses to explore the impact of plausible levels of differential response for a particular exposure in a particular study. First, the exposure in question (even just the simplest yes/no version of exposure) sometimes has been measured in another study with a high response rate. The established correlates of the exposure then can inform a model of the probable determinants of nonresponse, and with such a model, a range of likely risk ratios or other effect estimates can be constructed. Second, if the effect parameters in one study agree closely with those from studies with other designs, one can take some comfort.
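A minimal sketch of such a sensitivity analysis (all values assumed for illustration): divide the observed odds ratio by each plausible bias ratio to see the band of corrected estimates the data could support.

    # Sensitivity sketch (assumed values): an observed OR of 1.8 corrected
    # over a range of plausible nonresponse bias ratios, which external
    # data on the exposure's correlates might justify.
    observed_or = 1.8
    plausible_bias_ratios = [0.8, 0.9, 1.0, 1.1, 1.25, 1.5]

    for br in plausible_bias_ratios:
        corrected = observed_or / br  # observed OR = true OR x bias ratio
        print(f"bias ratio {br:4.2f} -> corrected OR {corrected:.2f}")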

Another tack is to exploit contrasts within the study. One might look across related exposures for signs of distortion. For example, if the rate ratio estimates for coffee and smoking in a case–control study closely resemble those in the larger literature (even if these are otherwise uninteresting parameters), one might be slightly reassured about the estimate for alcohol. If a subset of the study population had better response, for example, the women or the residents of the Midwest, one should examine whether the estimates differ between the higher-response and lower-response subsets.
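The subset comparison can be as simple as the following sketch (2 × 2 counts invented): estimate the odds ratio separately in the higher-response and lower-response strata and inspect how far their ratio departs from 1.

    # Invented 2x2 counts: exposed/unexposed cases and exposed/unexposed
    # controls, in a higher-response and a lower-response stratum.
    def odds_ratio(a, b, c, d):
        return (a / b) / (c / d)

    or_high = odds_ratio(80, 120, 60, 140)  # e.g., the women
    or_low  = odds_ratio(35, 65, 30, 70)    # e.g., the men

    print(f"OR, higher-response subset: {or_high:.2f}")
    print(f"OR, lower-response subset:  {or_low:.2f}")
    print(f"ratio of ORs: {or_high / or_low:.2f}")  # near 1.0 reassures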

Although sensitivity analysis always carries less authority than accurate direct measurement, such analysis can sharpen the interpretation of the results. With solid and relevant ancillary data on the likely values of the components of the nonresponse bias ratio, the exercise approaches standardization. With flimsy data or information peripheral to the problem, it dissolves into handwaving.

Are Biomarkers Different?

With carefully measured molecular exposures, one gets closer to the presumed causal nexus, so epidemiologic researchers, reviewers, and even editors sometimes give “wet epidemiology” a pass on the problem of nonresponse. We should know better than to assume that ignorance protects us. It is not impossible that willingness to participate could vary with a particular polymorphism. It is reassuring that the first report3 of such effects found no evidence of a participation differential for 108 polymorphisms, 8 haplotypes, and 9–15 short tandem repeats in data on 2955 individuals in 3 studies, but much more research is needed. Typically, true nonparticipation has to be inferred from a variety of proxy measures: late responders who must be contacted many times or who require extra incentives to participate; people who are unwilling to give blood but willing to give buccal cell specimens; people who participate during an early phase of the study with poor response rates rather than during a later phase with good rates; or people with genetic material from an early study who decline later participation. Not one of these strategies is fully satisfying, but all of the quantitative or semiquantitative information we can collect will help with a reasoned interpretation of the findings from molecular epidemiology.
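One way to make such proxy measures quantitative (a sketch with invented counts, in which late responders stand in for nonparticipants): test whether genotype frequencies differ between early and late responders.

    # Proxy analysis sketch: do genotype frequencies differ between early
    # responders and late responders? (Counts are invented.)
    from scipy.stats import chi2_contingency

    #          AA   Aa   aa
    early = [ 420, 460, 120]  # responded at first contact
    late  = [ 140, 150,  40]  # needed repeated contact or extra incentives

    chi2, p, dof, expected = chi2_contingency([early, late])
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.2f}")
    # A small p would hint that willingness tracks the polymorphism; a
    # large p offers only weak reassurance, as with the report cited above.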

What Practical Steps Can Raise Participation?

Current literature shows wide variation in the magnitude of effects of key factors on response rate, but a rather clear overall pattern emerges.4–14 The salience of the topic exerts the strongest effect on willingness. Incentives enhance response, especially incentives given early in the recruitment process. Personality, training, and experience of the recruiter have major effects, whereas demographic attributes have lesser effects, depending more on the specific setting. House-to-house or other in-person approaches typically (but not uniformly) elicit higher response than initial telephone contacts, but they are more expensive and harder to monitor for quality assurance.

Molecular epidemiology presents its own challenges. Anecdotal information from dozens of epidemiologists conducting studies now suggests that each small change in practice (eg, raising levels of incentives, moving incentive payments earlier, retraining staff, streamlining procedures, reducing the time or burden of the interview) can make a marginal improvement. There are no simple solutions. Whatever the strategy, it does not appear possible today to achieve response rates as high as the best efforts of 15 years ago.

Is Telephone-Based Interviewing, Recruitment, or Sampling Still Viable?

Epidemiologists confront several distinct and largely countervailing changes in telephone patterns. Mobile phones and multiple landlines make it easier to reach respondents who are willing to be interviewed by phone. Computer-assisted telephone interviewing (CATI) permits branching, online table lookups, and other features that produce a very flexible and powerful questionnaire.

If persons have already been selected and recruited, the telephone interview may work better than ever before. However, appropriate selection and successful recruitment are 2 big “ifs.” The telephone no longer provides us with a serviceable sampling frame. We can no longer rely on the rough equation of one working residential phone number per household, yet that equation is the underlying basis of random-digit dialing (RDD) as a means of selecting a sample of the general population. Original RDD schemes have been adapted over time (eg, with assistance from lists), but the current challenge seems more fundamental. Without radical changes, we soon will not be able to use telephones to draw good approximations of samples of the general population.
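For readers who have not worked with RDD, a deliberately simplified sketch of the idea (hypothetical exchanges; real designs such as Waksberg two-stage or list-assisted sampling are more elaborate): draw known area-code/exchange banks at random and append random suffixes, so that unlisted numbers can still be selected.

    # Simplified RDD illustration; not a production sampling scheme.
    import random

    # Hypothetical area-code/exchange ("NPA-NXX") banks believed to
    # contain working residential numbers.
    residential_banks = ["301-496", "301-402", "240-276"]

    def draw_rdd_sample(n, banks, seed=0):
        """Generate n candidate numbers with random 4-digit suffixes."""
        rng = random.Random(seed)
        return [f"{rng.choice(banks)}-{rng.randrange(10000):04d}"
                for _ in range(n)]

    for number in draw_rdd_sample(5, residential_banks):
        # Each candidate must still be screened: is it working, is it
        # residential, and how many lines serve the household?
        print(number)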

The changes in telecommunications, marketing, and culture have also undermined the use of the telephone for recruitment. Indeed, the use of the telephone to recruit people (with a known telephone number) into a study hangs by a thread. It can work in settings with very good staff, but it is getting harder and more expensive. The best strategy seems to be to approach people through several avenues, for example, by sending a letter or an e-mail as well as telephoning. Recruitment by telephone will not get easier. More likely, it will coexist, at least for a while, with the other approaches.

Should We “Keep Those Cards and Letters Coming”?

Postcards and letters for recruitment and retention remain valuable.8,10,13 Likewise, self-administered questionnaires have their place. We are learning that the appearance of these hard-copy materials matters greatly as people sort the wheat from the chaff in the large volume of mail solicitations that stuff their mailboxes. Big envelopes and special delivery help, as do signatures and letterheads from respected institutions. We learn again and again that long, boring, or confusing letters and questionnaires get the poor response they richly deserve, no matter how crisp the printing.

Has the Web Replaced the Printed Page and the Telephone? Will It?

A look into the not-so-distant future suggests that cards and letters will perform best when bundled with other approaches in a polite but persistent attempt to get through the chatter of modern life. Postal questionnaires will also survive, but they too will be bundled with options to enter the data through telephone or web. The hard copy will serve to assure the participant that the scope is modest and the questions are clear and easy to answer. Then actual assessment can be done by any convenient mode. For instance, the web will play a greater role in data collection by allowing respondents flexibility in when and where they answer questions. It is also very likely to improve follow-up of cohorts.

Web-based data collection is young but growing fast. By contrast, web-based selection basically has not begun, at least not to the standards of most epidemiologists. As online coverage and cross-indexing of e-mail addresses with other identifiers grow, the Internet may emerge as an alternative selection frame. For now, the unrepresentative nature of the online world and various double-counting problems make selection based on e-mail addresses an area to develop and to watch but not yet a tool to use.

Finally, the new technology of the web is almost certain to meld with the old technology of dwelling-based selection. It should eventually become feasible to use satellite surveys of places and classic multistage sampling, now used in RDD, to select defined dwellings at random and in advance.

In summary, if the problem at hand requires a population-based design, then the investigator must consider every available trick to get adequate response. Pretests and pilot studies will disclose the likely costs and benefits of various maneuvers. Ancillary data might be collected to characterize all the nonresponders, or a random sample of them, especially on the variables of interest. Occasionally, one can use a tandem approach of measuring some effects both in the setting with the likely-to-be-poorer response rates (but more detail, for example) and in a setting with excellent response (but less detail). As ever, the researcher would do well to imagine the discussion section of the published results before going into the field.


ACKNOWLEDGMENTS

I am grateful for the insightful suggestions of Lindsay Morton.


ABOUT THE AUTHOR

PATRICIA HARTGE is the Deputy Director of Epidemiology and Biostatistics in the Division of Cancer Epidemiology and Genetics at the National Cancer Institute. She has conducted research on the etiology of non-Hodgkin lymphoma, ovarian and breast cancer, bladder cancer, and other tumors. She has developed and adapted various study designs and field research methods for cancer epidemiology.


REFERENCES

1. Morton LM, Cahill J, Hartge P. Reporting participation in epidemiologic studies: a survey of practice. Am J Epidemiol. 2006;163:197–203.

2. Chen J, Wacholder S, Morton LM, et al. Quantifying selection bias in epidemiologic studies [Abstract]. Am J Epidemiol. 2005;161(suppl):S145.

3. Bhatti P, Sigurdson AJ, Wang SS, et al. Genetic variation and willingness to participate in epidemiologic research: data from three studies. Cancer Epidemiol Biomarkers Prev. 2005;14:2449–2453.

4. Slattery ML, Edwards SL, Caan BJ, et al. Response rates among control subjects in case–control studies. Ann Epidemiol. 1995;5:245–249.

5. Olson SH. Reported participation in case–control studies: changes over time. Am J Epidemiol. 2001;154:574–581.

6. Curtin R, Presser S, Singer E. Survey nonresponse over the past quarter century. Public Opinion Quarterly. 2005;69:87–98.

7. Moorman PG, Newman B, Millikan RC, et al. Participation rates in a case–control study: the impact of age, race, and race of interviewer. Ann Epidemiol. 1999;9:188–195.

8. Ronckers C, Land C, Hayes R, et al. Factors impacting questionnaire response in a Dutch retrospective cohort study. Ann Epidemiol. 2004;14:66–72.

9. Corbie-Smith G, Viscoli CM, Kernan WN, et al. Influence of race, clinical, and other socio-demographic features on trial participation. J Clin Epidemiol. 2003;56:304–309.

10. Dunn KM, Jordan K, Croft PR. Does questionnaire structure influence response in postal surveys? J Clin Epidemiol. 2003;56:10–16.

11. Patten SB, Li FX, Cook T, et al. Irritable bowel syndrome: are incentives useful for improving survey response rates? J Clin Epidemiol. 2003;56:256–261.

12. Stang A, Ahrens W, Jockel KH. Control response proportions in population-based case–control studies in Germany. Epidemiology. 1999;10:181–183.

13. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324:1183.

14. Rosoff PM, Werner C, Clipp EC, et al. Response rates to a mailed survey targeting childhood cancer survivors: a comparison of conditional versus unconditional incentives. Cancer Epidemiol Biomarkers Prev. 2005;14:1330–1332.


© 2006 Lippincott Williams & Wilkins, Inc.
