Incidence estimation requires the ability to count new cases of disease over time. Complete and rapid reporting may be needed for certain diseases, but for others a periodic system, whereby reporters record new cases arising in fixed time intervals, may suffice. If incidence is to be estimated over a long period, consideration needs to be given to the frequency of reports and its correlate, the length of the reporting window, and how these might influence accuracy. Petridou et al1 concluded that recall bias, not telescoping bias, was the primary problem for estimation of annual injury incidence, and recommended narrow time windows with frequent interviews. However, requests for information that are too frequent could affect estimates by reducing participation.
To reduce this burden, one could consider time sampling, whereby a reporter is assigned to report for random samples of the total time period of interest. This approach seems to have been first proposed for surveillance schemes for work-related diseases in the United Kingdom,2-5 under which physicians report incident cases: some physicians were asked to provide time-sampled data (1 random month in 12), and others were asked to provide data continuously (every month).
Given its potential as an epidemiologic tool, it is important to evaluate the time-sampling method. All else being equal, incidence in the whole period could be estimated by multiplying reported cases by the reciprocal of the sampling fraction. However, although some behavioral changes (eg, increased participation) might reduce systematic error, other behavioral changes might create new biases. Schmidt et al6 compared intermittent versus continuous surveillance for estimating prevalence of episodic diseases via a simulation study but assumed no behavioral differences.
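The scaling step described above can be sketched as follows; this is a minimal illustration of the arithmetic, with a hypothetical function name and counts, not the surveillance schemes' actual estimator:

```python
def estimate_period_cases(observed_cases, sampling_fraction):
    """Scale cases observed in a sampled window up to the full period
    by the reciprocal of the sampling fraction (all else being equal)."""
    if not 0 < sampling_fraction <= 1:
        raise ValueError("sampling fraction must be in (0, 1]")
    return observed_cases / sampling_fraction

# A reporter seeing 3 cases in 1 randomly sampled month out of 12
# yields an estimated 36 cases for the year.
print(estimate_period_cases(3, 1 / 12))
```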
We investigated whether annual incidence estimates differ depending on whether a time-sampling method—specifically 1 random month in 12 (1/12)—or a full-coverage method (12/12) is used. Because a retrospective comparison based on data from the aforementioned surveillance schemes2-5 would likely be confounded by systematic population differences,7 a randomized design was used. Our specific objective was to compare incidence estimates of work-related disease in a section of the UK workforce, based on physician reporting by the 2 methods.
A randomized controlled trial (RCT) with a crossover design was used. In December 2003, we had access to a network of approximately 500 UK physicians in the Occupational Physician Reporting Activity (OPRA) surveillance scheme4 for occupational disease, under which previous reporting had been done on a time-sampling basis; specifically, each member had been asked to report for 1 randomly chosen month a year. Eligible reporters for the RCT were members in December 2003 who had reported at least 1 case of disease per year during 2001-2003, the restriction being to ensure adequate numbers of future cases. All eligible physicians were invited to take part.
Those who accepted were randomly allocated, with probability 0.5, to 1 of the 2 groups: group 1 would report every month in 2004 and 1 randomly chosen month in 2005, and group 2 would report for 1 randomly chosen month in 2004 and every month in 2005. At the beginning of a reporting month, physicians were sent a card on which to record all newly diagnosed cases of work-related disease in the following month, or else return it marked “I have no new cases to report.” The reporting process and instructions were identical to usual practice.4,5 The workforces for whom participants provided care were assumed to be of fixed size during 2004-2005; all else being equal, changes in case counts should therefore reflect changes in incidence. The study was part of a program5 approved by the UK Multicenter Research Ethics Committee (MREC number 02/8/72).
The analysis aimed to estimate the effect of reporting frequency (1/12 vs. 12/12) on incidence per month, regardless of diagnosis. Partial data from incomplete sets of returns, and months with zero counts, were included. The possibility of a carryover effect8 from 2004 to 2005—as evidenced by apparently different effects of frequency in the 2 years—was considered. We used a 2-level (reporters, months) Poisson regression model for repeated measures with gamma random effects, implemented in Stata v9.9 Given substantial variation in incidence between reporters, the model included a reporter offset term, derived from historical data and equal to the logarithm of the reporter's mean cases per month over 1996-2003. In effect, this meant that the model outcome variable was a case ratio (ie, monthly cases divided by the reporter's previous mean). The effect of time sampling was estimated from the ratio of this quantity under 1/12 reporting versus 12/12 reporting; this is a rate ratio (RR). To help interpret results, we examined time trends within the 12/12 reporting periods over and above expected seasonal variation7: for this, the reporter's offset term was multiplied by seasonal multipliers—estimated from the 1996-2003 data—which varied by month of year from 0.79 to 1.21, with an average of 1.00.
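Ignoring the random-effects structure of the model above, a Poisson regression with a single binary covariate and a fixed offset has a closed-form maximum-likelihood estimate: the rate ratio is a ratio of case ratios (total observed over total expected cases in each arm). A minimal sketch with made-up totals, not the study's raw data:

```python
def rate_ratio(cases_sampled, expected_sampled, cases_full, expected_full):
    """Ratio of case ratios: (observed / expected) under 1/12 reporting
    divided by (observed / expected) under 12/12 reporting. 'Expected'
    is the exponentiated offset: historical mean cases per month summed
    over the months actually reported."""
    return (cases_sampled / expected_sampled) / (cases_full / expected_full)

# Hypothetical totals chosen to mirror the mean case ratios in the text
# (1.15 under 1/12 reporting, 0.80 under 12/12); this crude ratio of
# ratios differs from the model-based RR of 1.26, which also accounts
# for between-reporter variation.
print(rate_ratio(115, 100, 800, 1000))
```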
Ninety-seven reporters were eligible and invited to take part; 63 (65%) accepted. Thirty-two were allocated to group 1 and 31 to group 2. Mean cases per month before the study (1996-2003) varied among participants from 0.8 to 23.4; the largest value was an outlier, almost twice the next largest. The groups were balanced in terms of previous reporting density, with a median of 3.4 cases per month in each.
One reporter from group 1 withdrew from reporting before contributing any data; there were 8 further withdrawals. Reporters were more likely to withdraw just before or in a 12/12 year than a 1/12 year (7 vs. 2). Response rates among those not withdrawing were high: in 2004, all group 2 reporters responded in the allocated month, whereas group 1 returned 344 (93%) of a possible 368 cards. In 2005, 27 of the remaining 28 members in group 1 responded, whereas group 2 returned 313 (97%) of the possible 323 cards. The combined effect of withdrawal and nonresponse was that 87% (715/819) of possible returns were obtained—92% for 1/12 periods and 87% for 12/12.
The mean reporter case ratio was higher under 1/12 reporting (1.15) than under 12/12 (0.80) (Table). The difference was larger in 2004; the RRs comparing 1/12 with 12/12 reporting (estimated from Poisson regression) were 1.34 and 1.17 for 2004 and 2005, respectively. The difference between years could be chance variation or could be due to experience in 2004 affecting 2005 behavior, ie, a carryover effect. A test of the difference in RRs, using a statistical interaction term, gave P = 0.67. If the difference is due to chance, then the best estimate of the impact of time sampling comes from both years combined: RR = 1.26 (95% confidence interval [CI] = 1.11 to 1.42), estimated from a no-interaction model. RRs after excluding the outlier reporter, or those who did not return all 13 cards, were similar (1.25 and 1.24).
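The combined estimate's confidence interval is consistent with a Wald interval on the log scale. In the sketch below, the standard error of log(RR) is back-calculated from the published interval for illustration; it is not reported in the paper:

```python
from math import exp, log

def wald_ci_rate_ratio(rr, se_log_rr, z=1.96):
    """95% Wald confidence interval for a rate ratio: exponentiate
    log(rr) +/- z * SE, where SE is the standard error of log(rr)."""
    return exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)

# RR = 1.26 with an assumed SE of log(RR) of about 0.063 gives an
# interval close to the published 1.11 to 1.42.
lo, hi = wald_ci_rate_ratio(1.26, 0.063)
print(round(lo, 2), round(hi, 2))
```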
Case counts declined during the study, reaching 0.80 of the historical mean in 2005 (Table). The RR for 2005 versus 2004, from a no-interaction model, was 0.87 (95% CI = 0.77 to 0.98). However, other data7 from all OPRA reporters had suggested an increase of 4% per year in UK incidence over 1996–2005. To explore study time trends in finer detail for 12/12 reporters, seasonally adjusted case ratios for each month were derived (Fig.); these suggest a decline from January to December. The corresponding model-based estimates of decline per month in reported incidence, over and above expected seasonal change and assuming a linear trend, were −1.1% (95% CI = −2.8 to 0.6) and −3.1% (95% CI = −4.7 to −1.5) for 2004 and 2005, respectively, and −2.2% (95% CI = −3.3 to −1.1) for 2004–2005.
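As a quick arithmetic check of what the combined trend implies, a 2.2% decline per month compounds over 12 months (assuming the estimated linear trend applies multiplicatively each month):

```python
monthly_factor = 1 - 0.022        # -2.2% per month, 2004-2005 combined
annual_factor = monthly_factor ** 12
# Roughly 0.77, ie, about a 23% decline over a year if sustained.
print(round(annual_factor, 3))
```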
This RCT suggests that estimated incidence would be 26% higher (95% CI = 11%-42%) if reporting were 1/12 rather than 12/12. Given the randomized design, these results are unlikely to be due to confounding. However, generalization to other settings should take account of the specific conditions: participants had previous experience of one of the study conditions (1/12 reporting), although there was nothing novel about the new task apart from its frequency. A reminder system7 prompted reporters who did not return their cards within a fixed time. When the trial ended, 59% of participants volunteered to become permanent 12/12 reporters.
One cannot conclude with certainty which type of reporting gave estimates nearer the truth without knowing the true incidence. However, the within-year declines seen in the Figure for 12/12 reporters are not consistent with longer term time trends seen for other Occupational Physician Reporting Activity reporters in the United Kingdom7; perhaps 12/12 reporters found the task of assembling and reporting cases every month onerous. Therefore, there is a suggestion of evidence against 12/12 and in favor of 1/12 reporting. However, the possibility that 1/12 reporting contained a degree of telescoping bias cannot be ruled out.
Schemes for long-term estimation of incidence may have attributes in common with surveillance systems said to be characterized by practicality rather than accuracy.10 However, practicalities should not rule out efforts to increase data quality. Comparative studies, preferably randomized, can help to evaluate alternatives that affect acceptability and representativeness.11
Nicola Cherry established the OPRA scheme in 1996. We thank the members of the surveillance schemes who volunteered to take part in the trial.
1. Petridou E, Dessypris N, Frangakis CE, Belechri M, Mavrou A, Trichopoulos D. Estimating the population burden of injuries: a comparison of household surveys and emergency department surveillance. Epidemiology
2. Meredith SK, McDonald JC. Work-related respiratory disease in the United Kingdom, 1989-1992: report on the SWORD project. Occup Med
3. Meredith SK, McDonald JC. Surveillance systems for occupational disease. Ann Occup Hyg
4. Cherry NM, Meyer JD, Holt DL, Chen Y, McDonald JC. Surveillance of work-related disease by occupational physicians in the UK: OPRA 1996–99. Occup Med
5. The Health and Occupational Reporting (THOR) network, The University of Manchester. Available at: http://www.medicine.manchester.ac.uk/coeh/thor/schemes/opra. Accessed January 2, 2009.
6. Schmidt WP, Luby SP, Genser B, Barreto ML, Clasen T. Estimating the longitudinal prevalence of diarrhea and other episodic diseases: continuous versus intermittent surveillance. Epidemiology
7. McNamee R, Carder M, Chen Y, Agius R. Measurement of trends in incidence of work-related skin and respiratory diseases, UK 1996–2005. Occup Environ Med
8. Jones B, Kenward M. Design and Analysis of Cross-Over Trials. 2nd ed. London: Chapman and Hall; 2003.
9. StataCorp. Statistical Software: Release 9.2. College Station, TX: Stata Corporation; 2005.
10. Last J. Dictionary of Epidemiology. 4th ed. New York: Oxford University Press; 2001.
11. Centers for Disease Control and Prevention. Updated Guidelines for Evaluating Public Health Surveillance Systems. MMWR