The usefulness of the regression discontinuity design is not limited to situations in which the pre- and postintervention measurements are of the same variable. The design may also be used when the outcome differs from the pretreatment test, as in the article by Bor et al.5 This opens the possibility of even more applications. Again, one may expect that in real-life estimations the windows will be much wider than in the theoretical limiting situation, and that the farther persons are below a particular threshold value of a prognostic variable, the more their expected outcomes will differ from those above the threshold, and vice versa. One can make assumptions about the form of the relationship between the preintervention measurement and the outcome and posit that a shift of the regression functions at the threshold corresponds to a treatment effect.
In general, in situations wherein a linear relation is credible,1 or a functional relation between test and outcome is already known (or can be derived from a large part of the data), the results will have greater credibility than if the form of a model has to be empirically tried out. Linearity of the regression line in the untreated can be assumed if both the pre- and posttreatment variables have normal distributions. Of course, perfect normal distributions from minus to plus infinity do not exist in biology, but many biological variables are reasonably bell-shaped, even if somewhat lopsided, or they can be transformed. Linear regression will then offer good approximations, except perhaps at extreme ends of the data.
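The idea of a regression-line shift at the threshold can be made concrete with a small simulation. The sketch below is not from the article; the blood-pressure framing, effect size, and all numbers are illustrative assumptions. A single linear model with an intercept shift at the threshold recovers the treatment effect as the coefficient on the treatment indicator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not from the article): systolic blood pressure
# as the preintervention measurement; everyone at or above 140 mmHg is treated.
n = 4000
x = rng.normal(140.0, 15.0, n)          # preintervention measurement
treated = (x >= 140.0).astype(float)    # deterministic ("sharp") assignment
tau = -8.0                              # assumed true treatment effect

# Outcome: linear in the pretest, plus a shift of the regression line
# for the treated.
y = 20.0 + 0.9 * x + tau * treated + rng.normal(0.0, 5.0, n)

# Fit one linear model with an intercept shift at the threshold;
# the coefficient on `treated` estimates the effect "at the threshold."
X = np.column_stack([np.ones(n), x, treated])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
tau_hat = beta[2]
print(f"estimated effect: {tau_hat:.2f} (true: {tau})")
```

Because assignment is a deterministic function of the observed pretest, conditioning on the pretest in the regression removes the confounding that the nonrandom allocation would otherwise cause.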
WHAT IS ESTIMATED?
Bor et al5 emphasize that the regression discontinuity design estimates a local causal treatment effect "at the threshold," which differs from an RCT, in which the average treatment effect in the total trial population is estimated. A Bayesian perspective was recently proposed by Geneletti et al,8 with an extensive discussion of the causality assumptions. According to Bor et al,5 the causal effect would become similar to that of an RCT only in the ideal case of a constant additive effect over all preintervention measurements. They nevertheless see estimation of a local threshold effect as important, because if there still is an effect "at the threshold," this shows that the threshold for treatment might be too high. The reasoning about the windows above and below the threshold is symmetrical: extrapolating the outcomes of the treated persons to the untreated gives the latter's counterfactuals under treatment and estimates the same effect.
As with any design, there are caveats. First, several stem from the choices of doctors or patients: (1) the intensity of treatment might differ farther away from the threshold, and (2) if the threshold is fuzzy,5,8 other considerations came into play in allocating treatment, which raises the suspicion of confounding by indication. Second, there are biological and statistical considerations: (1) the effect of treatment may differ in persons closer to the threshold from that in persons at more extreme values (ie, interaction), and (2) the relations may be nonlinear and, worse, of unknown form. To counter such objections, most literature on regression discontinuity emphasizes the value of showing graphically what happens, of using different windows around the intervention threshold, and of being alert to nonlinearity and interactions.
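The recommendation to use different windows around the threshold can be sketched as a simple sensitivity check. This is a minimal, hypothetical illustration (all numbers and the `rd_estimate` helper are assumptions, not from the article): refit the discontinuity model in successively narrower windows and see whether the estimated effect stays stable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: when the linear model is correct, the estimate should
# be stable (though noisier) as the window around the threshold shrinks.
n = 8000
threshold = 140.0
x = rng.uniform(100.0, 180.0, n)
treated = (x >= threshold).astype(float)
y = 20.0 + 0.9 * x - 8.0 * treated + rng.normal(0.0, 5.0, n)

def rd_estimate(half_width):
    """Effect estimate using only observations within +/- half_width of the threshold."""
    keep = np.abs(x - threshold) <= half_width
    X = np.column_stack([np.ones(keep.sum()), x[keep], treated[keep]])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return beta[2]

estimates = {w: rd_estimate(w) for w in (40.0, 20.0, 10.0)}
for w, est in estimates.items():
    print(f"window +/-{w:.0f}: estimated effect {est:.2f}")
```

If the estimates drifted systematically as the window narrowed, that would suggest nonlinearity or interaction rather than a stable local effect.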
A fierce debate that ran in the 1990s should be mentioned. Many a statistician would intuitively suspect that measurement error in the first measurement, the preintervention measurement, would bias the estimate of the treatment effect.11 This intuition is incorrect, however: the treatment effect is unbiased if the intervention threshold is based only on the observed preintervention measurement (and the relation between pre- and postmeasurement is correctly specified).12,13 In contrast, if the threshold is applied not to one observed preintervention measurement but to the "true" underlying value (for example, when a doctor prescribes antihypertensive treatment on the basis of repeated blood pressures, that is, "true hypertension"), corrections for measurement error must be made.12,13 Researchers might have difficulty recognizing the problem, because in an analysis they might be inclined to use the single measured value that is readily available before the intervention, whereas that value is part of a series that led to the decision to treat. In such instances, as an alternative to correcting for the measurement error of a single preintervention value, one might use a "true underlying value" obtained from several measurements as the threshold.
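The measurement-error result lends itself to a small simulation. The sketch below is a hypothetical illustration of the contrast described above, not an analysis from the cited papers, and all numbers are assumptions: the outcome depends on the true underlying value, while the analyst only observes an error-prone pretest. When the threshold is applied to the observed value, the analysis on the observed value is unbiased; when the threshold is applied to the true value (as with "true hypertension" established from repeated measurements), the same analysis is not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical quantities (all illustrative): true value, one error-prone
# measurement of it, and an outcome that depends on the true value.
n = 20000
tau = -8.0
true_value = rng.normal(140.0, 15.0, n)                # "true" underlying value
observed = true_value + rng.normal(0.0, 10.0, n)       # one noisy measurement
noise = rng.normal(0.0, 5.0, n)

def fit_effect(assign_on):
    """Assign treatment by thresholding `assign_on`; analyze with the observed value."""
    treated = (assign_on >= 140.0).astype(float)
    y = 20.0 + 1.0 * true_value + tau * treated + noise
    X = np.column_stack([np.ones(n), observed, treated])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

effect_obs = fit_effect(observed)     # threshold on the observed value: unbiased
effect_true = fit_effect(true_value)  # threshold on the true value: biased
print(f"assignment on observed value: {effect_obs:.2f} (true effect {tau})")
print(f"assignment on true value:     {effect_true:.2f} (biased)")
```

The intuition is that when assignment is a deterministic function of the observed value, conditioning on that observed value fully captures the allocation mechanism; when assignment depends on the true value, the observed value no longer does so, and the residual error leaks into the treatment coefficient.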
LET’S TRY IT OUT
Whatever the caveats and debates, this is an interesting idea with several variants in its practical application. One wonders, actually, whether it should be called "a design" rather than an instance of the general idea of estimating counterfactual outcomes by a model instead of by a comparison group. The latter idea has become commonplace in epidemiology, for instance when G-computation is used. A regression discontinuity design is less efficient than an RCT, and larger numbers are needed, but it can be applied in existing care settings and can use existing data. Thus, the idea should be experimented with. It might be useful to check its robustness again on data from RCTs (as was done earlier by Finkelstein et al4) or in other situations in which the effect is well known. It might also be useful to examine in which situation the regression discontinuity design is more robust: when pre- and postintervention measurements are the same, or when the outcome studied differs from the preintervention measurement. We have the benefit of a large methodological literature, going back several decades and spanning several fields of observational science, from education to economics. Now it is our turn to study the feasibility and robustness of this design in diverse areas of medicine and public health!
ABOUT THE AUTHORS
JAN P. VANDENBROUCKE is Royal Academy Professor at the Department of Clinical Epidemiology, Leiden University Medical Center, the Netherlands. His main interest is translating techniques of observational epidemiology to the research interests of tertiary-care physicians. He was introduced to the regression discontinuity design in a course by Olli Miettinen in 1977 in the Netherlands, and has repeatedly tried to interest clinical investigators in this design. SASKIA LE CESSIE is an associate professor at the Departments of Medical Statistics and Clinical Epidemiology of the Leiden University Medical Center. She is interested in statistical methods used in epidemiologic research, in particular methods to derive causal effects from observational data.
REFERENCES
1. Rubin DB. Assignment to treatment group on the basis of a covariate. J Educ Stat. 1977;2:1–26
2. Trochim W. The regression-discontinuity design. In: Agency for Health Care Policy and Research Conference Proceedings: Research Methodology: Strengthening Causal Interpretations of Nonexperimental Data. Washington, DC: U.S. Department of Health and Human Services; 1990
3. Finkelstein MO, Levin B, Robbins H. Clinical and prophylactic trials with assured new treatment for those at greater risk: I. A design proposal. Am J Public Health. 1996;86:691–695
4. Finkelstein MO, Levin B, Robbins H. Clinical and prophylactic trials with assured new treatment for those at greater risk: II. Examples. Am J Public Health. 1996;86:696–705
5. Bor J, Moscoe E, Mutevedzi P, Newell ML, Bärnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014;25:729–737
6. Zuckerman IH, Lee E, Wutoh AK, Xue Z, Stuart B. Application of regression-discontinuity analysis in pharmaceutical health services research. Health Serv Res. 2006;41:550–563
7. Boot CPM. Risicofactoren voor coronaire hartziekten: screening en interventie in een huisartspraktijk [Risk Factors for Coronary Heart Disease: Screening and Intervention in One General Practice] [dissertation]. Leiden, Netherlands: University of Leiden; 1979
8. Geneletti S, O’Keefe AG, Sharples LD, Richardson S, Baio G. Bayesian regression discontinuity designs: incorporating clinical knowledge in the causal analysis of primary care data. Available at: http://arxiv.org/pdf/1403.1806v1.pdf. Accessed 28 March 2014
10. Linden A, Adams JL, Roberts N. Evaluating disease management programme effectiveness: an introduction to the regression discontinuity design. J Eval Clin Pract. 2006;12:124–131
11. Stanley TD, Robinson A. Sifting statistical significance from an artefact of regression-discontinuity design. Evaluation Review. 1990;14:166–181
12. Cappelleri JC, Trochim WMK, Stanley TD, Reichardt CS. Random measurement error does not bias the treatment effect estimate in the regression-discontinuity design: I. the case of no interaction. Evaluation Review. 1991;19:395–419
13. Reichardt CS, Trochim W, Cappelleri J. Reports of the death of the regression-discontinuity design are greatly exaggerated. Evaluation Review. 1995;19:39–63
© 2014 by Lippincott Williams & Wilkins, Inc
Source: Epidemiology. 25(5):738–741, September 2014.