Observational research is susceptible to many types of bias. Special attention is given to misclassification, uncontrolled confounding, and selection bias, although many other threats to validity exist as well.1,2 Adjusting for biases due to these mechanisms has a long history. Many of the methods specify “bias parameters” (eg, sensitivity and specificity of misclassification), which dictate the assumed extent of the bias. Historically, authors have attempted to determine how large the bias parameter would have to be to make the main effect null.3 Others have specified bias parameters and determined bounds within which the adjusted effect estimate must reside.4,5 More recently, bias parameters have been employed to find bias-adjusted effect estimates. For example, Greenland and Lash6 give formulae to adjust effect estimates for bias due to uncontrolled confounding, misclassification, or selection bias.
Recently, many authors have proposed probabilistic bias analysis methods that treat the bias parameters as random variables with probability distributions.6–13 These methods serve to adjust the main effect estimate and propagate uncertainty surrounding the bias parameter, incorporating this uncertainty into the variance estimate of the adjusted main effect. The approach taken in probabilistic bias analysis is to repeatedly draw a random sample from the bias parameter distribution(s) and use those sampled parameters to adjust the effect estimate. The resulting distribution can be summarized with a “bias-adjusted” main effect, as well as uncertainty or simulation intervals.6,8–11,14
Interpretation of the results of a probabilistic bias analysis raises the question of whether they should be viewed under the common frequentist framework or, alternatively, as Bayesian. The explicit use of a distribution for an unknown bias parameter argues strongly for a Bayesian interpretation; indeed, Lash9 interprets probabilistic bias analysis as a semi-Bayes approach in which prior distributions are used for some parameters and not for others. Other authors have implemented simplifications under which a Bayesian interpretation is possible or have performed an outright Bayesian analysis.8,12,13,15–18 However, explicitly Bayesian approaches to bias modeling are difficult to implement (with some exceptions18,19), and a clear advantage of probabilistic bias analysis over a Bayesian analysis is ease of use. Unfortunately, the extent to which probabilistic bias analysis results mimic Bayesian results remains unclear, with only one published empirical comparison and few theoretical arguments.8,19 In the present paper, we compare probabilistic bias analysis with Bayesian bias modeling for the special case of a dichotomous exposure measured with error in a case-control study.
We consider a case-control study in which interest focuses on estimating the association between exposure, E, and case/control status, C. We assume y0 of n0 controls (C = 0) are classified as exposed, whereas y1 of n1 cases (C = 1) are classified as exposed. The true exposure E is misclassified, and we observe the apparent exposure, E*. We allow the possibility of differential misclassification and define the sensitivity and specificity of exposure measurement as pi = Pr(E* = 1|E = 1, C = i) and qi = Pr(E* = 0|E = 0, C = i), respectively. We further define the measure of etiologic interest, the true odds ratio (OR), as
Ψ = [r1/(1 − r1)]/[r0/(1 − r0)], where ri = Pr(E = 1|C = i). An analyst can choose to ignore misclassification of exposure status and estimate
Ψ* = [θ1/(1 − θ1)]/[θ0/(1 − θ0)], where θi = Pr(E* = 1|C = i). Unfortunately, as is well known, the OR based on apparent exposure status will not generally equal the OR based on true exposure status. If one is willing to assume known fixed values for pi and qi, a simple adjustment is available to calculate a bias-adjusted OR:
Ψc = [(θ1 + q1 − 1)/(p1 − θ1)]/[(θ0 + q0 − 1)/(p0 − θ0)]. (1)
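As a concrete sketch (with variable names of our own choosing), the adjustment for fixed sensitivities and specificities can be computed directly from the apparent exposure probabilities; the corrected odds in group i are (θi + qi − 1)/(pi − θi):

```python
def adjusted_or(theta1, theta0, p1, p0, q1, q0):
    """Bias-adjusted OR for fixed sensitivities (p) and
    specificities (q): corrected odds are (theta + q - 1)/(p - theta)."""
    odds1 = (theta1 + q1 - 1) / (p1 - theta1)
    odds0 = (theta0 + q0 - 1) / (p0 - theta0)
    return odds1 / odds0
```

With perfect classification (pi = qi = 1), the adjustment reduces to the crude OR, as expected.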
However, as shown by Gustafson et al,13 an arbitrarily small discrepancy between the true and assumed sensitivities and specificities can lead to an arbitrarily large bias in the adjustment. This is particularly worrying, because it implies that a good guess at sensitivity and specificity may not be sufficient; a perfect guess may be required.
Such concerns, as well as attempts to appropriately incorporate uncertainty in the bias parameter into standard error estimates for the adjusted OR, have led many authors to treat the sensitivities and specificities as random variables with probability distributions that (ideally) are specified using previous research.6,8,9,11,14 Many choices are available for the distribution of sensitivity and specificity. For the time being, we will remain generic in our specification of the prior parameter distribution and say the sensitivities and specificities have some probability distribution functions, fpi(pi) and fqi(qi), where a choice for f might be the logistic-normal, beta, triangular, or trapezoidal distribution. We initially assume the sensitivities and specificities among cases and controls are not correlated for ease of presentation, but we relax this assumption in subsequent sections.
Probabilistic bias analysis draws random samples in 2 steps. The first involves sampling sensitivity and specificity values (p1, p0, q1, q0) from their bias parameter distributions. This step incorporates uncertainty regarding the extent of the misclassification. Sampled values are plugged into expression (1), replacing θ0, θ1 with the observed exposure probabilities among controls and cases (y0/n0 and y1/n1, respectively), producing an adjusted OR. However, not all combinations of sensitivity and specificity are admissible given the observed data. For instance, if q0 ≤ 1 − θ̂0, then the adjusted OR could be undefined or negative. Other combinations of sensitivity and specificity are capable of producing similarly nonsensical results. We give bounds for admissible values of (p1, p0, q1, q0) in the Appendix. Lash9 recommends discarding samples that produce negative cell counts (ie, samples that lie outside the bounds of admissibility) and drawing new values. Because of these inadmissibility conditions, the distribution of (p1, p0, q1, q0) depends on the observed exposure probabilities, and we write it as f1(p1, p0, q1, q0|θ̂0, θ̂1).
The second step in probabilistic-bias-analysis adjustment for misclassification incorporates random sampling error. Standard asymptotic approximations are used to assume that the log of the bias-adjusted OR from the first step has a normal sampling distribution, with density f2(log(Ψ)|log(Ψc)) = N(log(Ψc), V(log(Ψc))). The variance term is typically estimated using the standard Woolf formula in the unadjusted data, although other options exist.15,20 For each iteration of a probabilistic bias analysis, a random sample is drawn from a normal distribution with mean equal to the adjusted log-OR obtained in the first step. These 2 steps are iterated a large number of times; the resulting adjusted ORs are saved at each iteration, and inferences are based on their distribution.
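The two steps above can be sketched as a single iteration in plain Python. This is a minimal illustration, not the published implementation: the function name is our own, `sample_pq` stands in for whatever bias-parameter distributions the analyst specifies, and for simplicity the admissibility check keeps only better-than-chance draws (pi + qi > 1).

```python
import math
import random

def pba_draw(y1, n1, y0, n0, sample_pq, rng):
    """One iteration of probabilistic bias analysis (both steps).
    sample_pq(rng) draws (p1, p0, q1, q0) from the analyst's
    bias-parameter distributions (a user-supplied function)."""
    th1, th0 = y1 / n1, y0 / n0
    # Step 1: discard and redraw until the sample is admissible
    # (here only the better-than-chance region p + q > 1 is kept).
    while True:
        p1, p0, q1, q0 = sample_pq(rng)
        if th1 + q1 > 1 and p1 > th1 and th0 + q0 > 1 and p0 > th0:
            break
    # Bias-adjusted log-OR from the sampled sensitivities/specificities.
    log_or = math.log(((th1 + q1 - 1) / (p1 - th1)) /
                      ((th0 + q0 - 1) / (p0 - th0)))
    # Step 2: add random error using the Woolf variance of the crude log-OR.
    var = 1 / y1 + 1 / (n1 - y1) + 1 / y0 + 1 / (n0 - y0)
    return log_or + rng.gauss(0.0, math.sqrt(var))
```

Iterating this function many times and summarizing the resulting draws (median, percentile interval) reproduces the usual probabilistic-bias-analysis output.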
A probabilistic-bias-analysis approach bears a distinct resemblance to a Bayesian analysis. The distribution of bias parameters (p1, p0, q1, q0) would be referred to as a prior distribution in a Bayesian analysis. A Bayesian analysis would also specify prior distributions for the parameters r0 and r1. Although we would suggest specifying informative priors in many cases, in this paper, we use uniform priors on r0 and r1 to aid in our comparison with probabilistic bias analysis, which typically does not use informative priors. We note that an improper "beta(0,0)" prior with an effective sample size of zero on these parameters could be considered more compatible with the probabilistic-bias-analysis approach. However, it can be verified that this produces an improper posterior in the present context. Thus, we instead use the weak uniform prior, ie, beta(1,1) with an effective sample size of two. Inference from a Bayesian analysis is based on the posterior distribution, which is typically not available in closed form, although samples from it can be generated using a Markov chain Monte Carlo (MCMC) algorithm. In the first step, the algorithm draws random samples of sensitivities and specificities from their conditional posterior distribution: g1(p1, p0, q1, q0|θ0, θ1). The samples from step one are then used to generate samples from g2(θ0, θ1|p1, p0, q1, q0, y0, y1) in a second step. Neither distribution has a standard form, and Metropolis-Hastings steps can be used to generate these samples. The samples from the 2 steps are plugged into expression (1) to generate an adjusted OR. The θ1, θ0 from the second step are used in the distribution g1(p1, p0, q1, q0|θ0, θ1) at the next iteration. As in probabilistic bias analysis, adjusted ORs from a large number of iterations are collected, and inferences are based on the distribution of those iterates.
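For intuition, a heavily simplified sketch of the Bayesian model for a single arm (cases or controls) follows. The function name, starting values, and step sizes are our own choices, and we use one joint random-walk Metropolis sampler over (r, p, q) rather than the two blocked Metropolis-Hastings steps described above; the target posterior is the same, with a uniform prior on r.

```python
import math
import random

def mh_arm(y, n, log_prior_pq, iters, rng, steps=(0.05, 0.01, 0.01)):
    """Sketch of the Bayesian model for one arm:
    y ~ Binomial(n, theta), theta = r*p + (1-r)*(1-q),
    uniform (beta(1,1)) prior on the true prevalence r, and
    log_prior_pq(p, q) the analyst's bias-parameter log-prior.
    Returns posterior draws of the true prevalence r."""
    def log_post(r, p, q):
        if not (0 < r < 1 and 0 < p < 1 and 0 < q < 1):
            return -math.inf
        theta = r * p + (1 - r) * (1 - q)
        return (y * math.log(theta) + (n - y) * math.log(1 - theta)
                + log_prior_pq(p, q))
    state = [y / n, 0.9, 0.9]        # start at the crude prevalence
    cur = log_post(*state)
    draws = []
    for _ in range(iters):
        prop = [x + rng.gauss(0, s) for x, s in zip(state, steps)]
        lp = log_post(*prop)
        # Symmetric random-walk Metropolis acceptance rule.
        if lp >= cur or rng.random() < math.exp(lp - cur):
            state, cur = prop, lp
        draws.append(state[0])
    return draws
```

Running this for cases and controls separately and plugging the resulting prevalence draws into the OR gives a posterior OR sample analogous to the one described in the text.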
The similarity between probabilistic bias analysis and Bayesian approaches to misclassification is intriguing. From a Bayesian perspective, probabilistic bias analysis places a prior distribution on the bias parameters. Both approaches sample iteratively from conditional posterior distributions, ie, prior distributions that depend on the observed data. The first step in both algorithms samples sensitivities and specificities from the conditional posterior distribution. The second step of both algorithms samples an adjusted OR (either explicitly as in probabilistic bias analysis or by sampling θ1,θ0 and plugging them into expression (1) as in the Bayesian approach). Finally, inference rests on the distribution of the adjusted OR, referred to as a posterior distribution in the Bayesian approach or a “bias-adjusted” distribution in a probabilistic bias analysis. In the event that the 2 approaches sample from the same distributions in the first and second steps, we would expect them to give identical results.8
We first focus on the conditional posterior distributions used to sample sensitivities and specificities in the 2 approaches. To aid in our exposition, we examine only the conditional distribution of specificity in both algorithms (similar results are obtained for the sensitivity parameters). The conditional distribution of specificity in probabilistic bias analysis can be written as f(qi|pi, ci, di, θ̂i) = fqi(qi)IB(qi|θ̂i), where IB(qi|θ̂i) is an indicator function that takes the value 1 if the random variable qi falls within the admissibility bounds given in the Appendix (thus ensuring a positive adjusted OR), evaluated at θ̂i = yi/ni. The conditional distribution in probabilistic bias analysis is simply the bias parameter distribution, fqi(qi), truncated to the region implied by the admissibility bounds. If nearly all of the density of the prior distribution lies within these bounds, the truncation will be trivial and the prior and conditional posterior distributions will be nearly identical.
The conditional posterior distribution in a Bayesian analysis is somewhat different: g(qi|pi, ci, di, θi) ∝ [fqi(qi)/|pi + qi − 1|]IB(qi|θi). There are 2 important distinctions between this distribution and the one used in a probabilistic bias analysis. First, the distribution in probabilistic bias analysis places a "hard bound" on what values of qi are possible because the bound depends on θ̂i, which does not vary from iteration to iteration in a probabilistic bias analysis. In contrast, the distribution in the Bayesian analysis places a "soft bound" on qi because the values of θi change at each iteration of the algorithm. Therefore, specificity values that may be impossible to attain in probabilistic bias analysis are possible in the Bayesian analysis.
The second distinction is the division by the scaling factor |pi + qi − 1| in the Bayesian approach. The effect of this scaling factor can range from substantial to relatively insignificant. We illustrate the impact of the scaling factor on the conditional posterior distribution of the specificity in Figure 1, where we assume logit(qi) ∼ Normal(b, σ2). We show the shape of the prior distribution as well as the shape of the conditional posterior distribution under the probabilistic bias analysis and Bayesian approaches. We illustrate first with a somewhat imprecise prior: b = logit(0.7) and σ2 = 1; and second with a more precise prior: b = logit(0.7) and σ2 = 0.05. When pi + qi is close to 1, then the conditional posterior distribution in the probabilistic bias analysis and Bayesian approaches can look considerably different, suggesting the Bayesian and probabilistic bias analysis approaches could produce quite different adjusted ORs. On the other hand, if pi + qi is substantially larger than 1 or prior knowledge is sufficient to produce a relatively compact density function, the scaling factor will have little effect. Intuitively, this rescaling of the prior distribution amounts to redistributing that part of the prior distribution that falls in an inadmissible region (once the data are observed). The inadmissible part of the prior distribution corresponds to a belief in a small (admissible) value of qi. For this reason, prior distributions that fall outside the admissibility bounds will tend to result in conditional posterior distributions with higher density near the bound.
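The two conditional densities (both unnormalized) can be compared numerically under the logit-normal prior used in Figure 1. This is an illustrative sketch with our own function names:

```python
import math

def logit_normal_pdf(q, b, s2):
    """Density of q when logit(q) ~ Normal(b, s2)."""
    x = math.log(q / (1 - q))
    return math.exp(-(x - b) ** 2 / (2 * s2)) / (
        math.sqrt(2 * math.pi * s2) * q * (1 - q))

def admissible(p, q, theta):
    """Appendix admissibility bounds on (p, q), given apparent prevalence theta."""
    return (p >= theta and q >= 1 - theta) or (p <= theta and q <= 1 - theta)

def pba_density(q, p, theta_hat, b, s2):
    """PBA: the prior truncated at the (fixed) admissibility bound."""
    return logit_normal_pdf(q, b, s2) if admissible(p, q, theta_hat) else 0.0

def bayes_density(q, p, theta, b, s2):
    """Bayes: the same truncated prior rescaled by 1/|p + q - 1|."""
    if not admissible(p, q, theta):
        return 0.0
    return logit_normal_pdf(q, b, s2) / abs(p + q - 1)
```

Evaluating both on a grid of q values shows the behavior described above: near pi + qi = 1 the rescaling factor dominates, whereas well away from it the two densities differ by an almost constant factor and yield essentially the same shape.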
The second step of both approaches incorporates random error into the adjusted OR. Probabilistic bias analysis typically relies on an asymptotic argument for the normality of the log-OR and draws samples directly from this asymptotic distribution. Lash9 used the Woolf estimator for the variance of the unadjusted log-OR. In contrast, the Bayesian approach updates the observed exposure probabilities (θ0, θ1) and then computes an adjusted OR using expression (1). This comparison suggests the 2 approaches could be made more comparable by bootstrapping θ0, θ1 in the second step of probabilistic bias analysis.6,9 The bootstrap estimates would be used to calculate an adjusted OR and would also provide a soft bound on the sensitivities and specificities, similar to the Bayesian approach.
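One minimal way to implement the bootstrap suggestion is a parametric binomial bootstrap of the apparent exposure counts (function name our own):

```python
import random

def bootstrap_thetas(y1, n1, y0, n0, rng):
    """Resample the apparent exposure counts, yielding new
    theta-hats each iteration; the admissibility bounds then
    shift from draw to draw, mimicking the Bayesian 'soft bound'."""
    b1 = sum(rng.random() < y1 / n1 for _ in range(n1))
    b0 = sum(rng.random() < y0 / n0 for _ in range(n0))
    return b1 / n1, b0 / n0
```

Each probabilistic-bias-analysis iteration would then use these resampled prevalences, rather than the fixed observed ones, in the adjustment and admissibility check.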
We illustrate the potential for differences between the probabilistic bias analysis and Bayesian approaches using data from the National Birth Defects Prevention Study, a population-based case-control study of congenital defects. Our interest was in estimating the association between maternal smoking in the periconceptional period (1 month before becoming pregnant to 3 months after) and an infant having an oral facial cleft.21 Exposure misclassification is frequently a concern in case-control studies of birth defects when exposure information is ascertained after the mother knows that her infant has a birth defect. MacLehose et al16 previously reported on a Bayesian approach to exposure misclassification in this study. Here, we compare the results of an explicit Bayesian analysis with an analysis performed using the probabilistic bias analysis approach.
Table 1 shows the crude bivariate table of self-reported smoking and case/control status. Unadjusted for any potential misclassification, the data indicate a moderate increased risk of oral facial cleft among infants whose mothers smoked in the periconceptional period (OR = 1.4 [95% CI = 1.2–1.7]). A substantial amount of research has been done to evaluate the sensitivity and specificity of self-reported smoking among control mothers.22–24 Unfortunately, we are aware of no previous studies that evaluate the accuracy of self-report among mothers of infants with oral facial cleft. In specifying our prior distributions, it is reasonable to expect correlation between sensitivity among cases and controls and correlation between specificity among cases and controls. We follow Chu et al15 in specifying a correlated logistic-normal prior for sensitivities and specificities:
(logit(p0), logit(p1)) ∼ bivariate normal with means (a0, a1), variances (σp02, σp12), and correlation ρp, (2)
(logit(q0), logit(q1)) ∼ bivariate normal with means (b0, b1), variances (σq02, σq12), and correlation ρq. (3)
We complete the specification for the specificity parameters by assuming b0 = logit(0.94), b1 = logit(0.94),σq02 = 0.02, σq12 = 0.02, and ρq = 0.8. These hyperparameter values imply that our best guess at the prior specificity is 0.94, and we are 95% certain the specificity is between 0.92 and 0.95. A correlation of 0.8 has previously been used by Fox et al.14 We choose to apply probabilistic bias analysis and Bayesian approaches to misclassification under 2 sets of priors for sensitivity. Both sets of priors assume a0 = logit(0.91), σp02 = 0.02, and ρp = 0.8. This implies that our best guess for the sensitivity among controls is 0.91, and we are 95% certain the sensitivity among controls is between 0.88 and 0.93. However, for the first set of priors, we choose a diffuse prior on the sensitivity among cases: a1 = logit(0.5), σp12 = 2.0. For cases, this prior implies our best guess for sensitivity is 0.5, and we are 95% certain the sensitivity is between 0.05 and 0.95. In practice, such a vague prior will rarely be realistic and is included here to highlight the potential for divergence between probabilistic bias analysis and Bayesian analysis. Our second prior for sensitivity is altered such that a1 = logit(0.91), σp12 = 1.4. This prior implies that our best guess for the sensitivity among cases is 0.91, and we are 95% certain of values between 0.50 and 0.99. This represents a wide but more defensible prior. A paper by MacLehose et al16 provides more detail on prior specification and alternative prior specifications that address the low specificity of self-reported smoking in this problem.
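Draws from a correlated logistic-normal prior of this form can be generated with a Cholesky-style construction; the sketch below uses our own function name and the hyperparameter values above only as example inputs.

```python
import math
import random

def draw_logit_normal_pair(m0, m1, v0, v1, rho, rng):
    """Draw (x0, x1) where (logit(x0), logit(x1)) is bivariate
    normal with means (m0, m1), variances (v0, v1), correlation rho."""
    z0, z1 = rng.gauss(0, 1), rng.gauss(0, 1)
    l0 = m0 + math.sqrt(v0) * z0
    # Cholesky factor of a 2x2 correlation matrix: (rho, sqrt(1 - rho^2)).
    l1 = m1 + math.sqrt(v1) * (rho * z0 + math.sqrt(1 - rho ** 2) * z1)
    expit = lambda t: 1 / (1 + math.exp(-t))
    return expit(l0), expit(l1)
```

Calling this once for the sensitivities (a0, a1, σp02, σp12, ρp) and once for the specificities gives one draw of (p0, p1, q0, q1) from the prior.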
The Bayesian approach is implemented through a MCMC algorithm similar to the one used by Gustafson and colleagues.13 We ran the algorithm for 10,000 iterations and excluded the initial 1000 iterations as a burn-in period. We implemented the probabilistic bias analysis as outlined earlier, with 10,000 iterations of the algorithm (no burn-in period is required with probabilistic bias analysis).
Figure 2 shows the posterior distribution for p1 and log(OR) from the 2 approaches under the vague and the weakly informative priors. The vague prior places considerable probability outside the admissibility bounds and, as a result, the posterior distribution of sensitivity in the Bayesian approach favors smaller values than in the probabilistic bias analysis. The data contain information that informs our estimate of sensitivity, and so the Bayesian posterior distribution does not resemble the probabilistic-bias-analysis posterior distribution. The posterior distribution of the log(OR) is notably more peaked under probabilistic bias analysis and has a far heavier right tail under the Bayes approach. This is not surprising, as the lower sensitivity values favored in the Bayes approach correspond to higher adjusted ORs. The posterior distributions under the weakly informative prior are also shown in Figure 2. Very little prior density falls outside the admissibility bounds in this example. As a result, the posterior distributions of sensitivity in the Bayes and probabilistic bias analysis approaches are virtually identical, as are the posterior log(OR) distributions.
Although analysis of a single dataset is instructive, we also examine the performance of the probabilistic bias analysis and Bayesian approaches in simulation studies to address mean squared error (MSE) and 95% interval width (the difference between the upper and lower interval limits for the OR). We performed 2 sets of simulations, one in which the distribution for sensitivity among cases was close to the admissibility bounds and the other in which it was further away. For both simulations, we specified true exposure prevalences as r0 = 0.45 and r1 = 0.50, implying ψ = 1.2. Further, we assumed q0 = 0.95, q1 = 0.70, and p0 = 0.9 in both simulations. Finally, in the first simulations, we specified p1 = 0.60, whereas in the second set of simulations, we specified p1 = 0.80. Note that θ1 = 0.45 in the first set of simulations and θ1 = 0.55 in the second set. For each set of simulations, we generated data for 1000 cases and 1000 controls.
We chose highly informative prior distributions for q0, q1, and p0 and explored the results of the analyses under 3 prior specifications for p1. We used the distributions described in expressions (2) and (3), and we chose parameter values such that the prior on the sensitivity in cases can be thought of as vaguely, weakly, or highly informative. We specify a0 = logit(0.9), a1 = logit(0.6), σp02 = 0.02, σp12 = 3, and ρp = 0.8. We refer to this prior as vaguely informative in the sense that it states we are 95% certain the true sensitivity among cases is between 0.05 and 0.98, with a best guess of 0.6. As in the applied example, our intention is not to suggest such a vague prior is defensible, but to detect points of divergence between probabilistic bias analysis and Bayesian analyses. The weakly informative prior specifies σp12 = 1, with other parameters kept at the same values. This implies we are 95% certain the true sensitivity lies between 0.17 and 0.91. Finally, the highly informative prior specifies σp12 = 0.1, implying we are 95% certain the true value lies between 0.45 and 0.74.
In the second set of simulations (where the true value of p1 = 0.80), we define similar vaguely (σp12 = 3), weakly (σp12 = 1), and highly (σp12 = 0.1) informative priors for this set of simulations with a1 = logit(0.8). To complete the specification of the models in both simulations, prior distributions were specified for the specificity parameters: b0 = logit(0.95), b1 = logit(0.7), σq02 = 0.02, σq12 = 0.02, and ρq = 0.8. These parameters for specificity are "correctly" specified in the sense that they are centered around the truth with modest uncertainty.
In each of the 6 simulations (2 true sensitivity values × 3 prior specifications), we generated 100 datasets. Each dataset was analyzed using probabilistic bias analysis and Bayesian approaches. Each probabilistic bias analysis was run for 10,000 iterations, and each Bayesian analysis was run for 10,000 iterations (following a 1000-iteration burn-in phase). Because the resulting distributions tended to have a very heavy right skew, we opted to base inferences on the median rather than the mean. Point estimates and highest posterior density intervals were computed for each simulation. We compared the approaches under the different prior specifications by MSE as well as by average interval width (the upper 95% interval limit for the OR minus the lower limit).
Results from the simulation study are given in Table 2. For the set of simulations in which the true p1 is close to the admissibility bounds, the Bayesian approach had reduced MSE when a vaguely or weakly informative prior was used. With a highly informative prior, no difference was seen in MSE. In the set of simulations in which the true p1 is further from the admissibility bounds, the Bayesian approach had increased MSE relative to the probabilistic bias analysis approaches when the vaguely informative prior was used. With weakly or highly informative prior distributions, no difference in MSE was observed.
The average posterior interval widths are also given in Table 2. Posterior interval widths can be quite large in uncertainty analyses, and this is reflected in the results. Vaguely and weakly informative priors resulted in wide posterior intervals. We note that scenarios in which the probabilistic bias analysis and Bayesian approaches result in substantially different MSE are often (although not always) accompanied by particularly wide posterior intervals.
A careful comparison of the probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies indicates that the 2 methods frequently perform equally well. In many situations, the results of a probabilistic bias analysis can be interpreted explicitly as Bayesian results with a uniform prior on r0 and r1. Situations do exist in which probabilistic bias analysis should not be viewed as approximately Bayesian, and in these situations it can perform poorly. Fortunately, such situations are relatively easy to detect and typically involve unrealistic prior specifications.
Probabilistic bias analysis represents a Bayesian approach in which the prior distribution of the bias parameter is not correctly updated with the observed data. The data contain somewhat limited information to update the prior distributions, so this inaccurate updating may be relatively inconsequential. The admissibility bounds given in the Appendix restrict the conditional posterior bias parameter distributions to certain regions of the sample space, once the data are observed. If the prior distributions for the bias parameters are almost completely contained within the bounds, then the data have nothing to contribute to the estimation of that bias parameter, and the prior and posterior distributions will look nearly identical. In this case, probabilistic bias analysis is a nearly perfect approximation of a Bayesian analysis. However, if a non-negligible amount of the prior distribution falls outside of the admissibility bounds, then the data can inform the bias parameter distribution. In this case, the probabilistic bias analysis results may not be a suitable approximation of the Bayesian approach, and interpretation of the probabilistic bias analysis may be questionable.
Our simulation results indicate that the Bayesian approach can have lower mean squared errors when substantial portions of the bias parameter prior distribution fall outside the admissibility bounds. It is important to note that in these cases, posterior bias parameter distributions in a Bayesian analysis tend to allocate more probability density to values of that bias parameter that result in a higher variance for the adjusted OR. For instance, in our applied example, under a vaguely informative prior, sensitivity values near 0.26 are favored relative to probabilistic bias analysis. These values of sensitivity imply very few unexposed cases, resulting in a markedly increased variance and wide intervals.
By calculating the probability that the prior distribution falls outside the admissibility bounds, an analyst can easily check whether probabilistic bias analysis is likely to be a decent approximate Bayesian analysis. If the total probability is larger than a few percent, the approximation may be poor. However, in the event that probabilistic bias analysis is a poor approximation of the Bayesian result, our simulations indicate that this is exactly the scenario in which no adjustment method is likely to be very useful—the posterior interval width for the adjusted OR is so large that it makes the results relatively useless. On seeing that a non-negligible portion of the prior probability falls in the inadmissible region, it may be tempting to alter one's prior so that this would not happen. Such tampering with a prior distribution in light of the observed data is to be discouraged; such a procedure would have unknown properties.
We have repeatedly focused on the distribution of sensitivity or specificity conditional on θ0,θ1. This was to aid in our comparison with the probabilistic bias analysis approach that also views these parameters conditionally. Conditionally, there are inadmissible values of sensitivity and specificity; however, because θ0,θ1 are random and vary from iteration to iteration, all values of sensitivity and specificity are admissible marginally (albeit perhaps very unlikely). Further, in our Bayesian approach, we have assumed a uniform prior on r0 and r1 to aid in our comparison with probabilistic bias analysis. In many instances, epidemiologists may wish to incorporate substantive information on the exposure-disease relationship. Greenland's data augmentation approach may be the most practical in such instances.18,19 However, specifying informative priors on the exposure probabilities may be quite difficult, in that prior knowledge about these probabilities may be based on studies that also experience exposure misclassification.
Misclassification is a common problem in observational research. Probabilistic bias analysis and Bayesian techniques are methods to adjust effect estimates for misclassification errors. Probabilistic bias analysis has an appealing advantage over the Bayesian approach, as it is much easier to implement. The extent to which these various techniques mimic one another has remained unknown. Further, without a formal theoretical framework (for example, Bayesian or frequentist), the interpretation of probabilistic bias analysis results has remained problematic. The results of our research indicate that probabilistic bias analysis is a useful, often nearly exact, approximation of a Bayesian analysis.
APPENDIX: BOUNDS FOR ADMISSIBLE SENSITIVITIES AND SPECIFICITIES
The bounds for admissible sensitivities and specificities are identical in the probabilistic bias analysis and Bayesian approaches (eAppendix, http://links.lww.com/EDE/A534). We develop them by noting that the prevalence of true exposure must fall between 0 and 1: 0 ≤ ri ≤ 1. Note that the relationship between the true and apparent exposure probabilities is θi = ripi + (1 − ri)(1 − qi). Inverting this equation, we see the following must hold:
0 ≤ [θi − (1 − qi)]/(pi + qi − 1) ≤ 1. Isolating θi, we find the following bounds: 1 − qi ≤ θi ≤ pi (when pi + qi > 1) or pi ≤ θi ≤ 1 − qi (when pi + qi < 1). Alternatively, we could solve the inequality for pi and qi to obtain the following bounds: (θi ≤ pi ≤ 1 ∩ 1 − θi ≤ qi ≤ 1) ∪ (0 ≤ pi ≤ θi ∩ 0 ≤ qi ≤ 1 − θi).
We thank Sander Greenland and Jay Kaufman for very helpful reviews of an earlier draft.
1. Greenland S. Accounting for uncertainty about investigator bias: disclosure is informative: how could disclosure of interests work better in medicine, epidemiology and public health? J Epidemiol Community Health. 2009;63:593–598.
2. Maclure M, Schneeweiss S. Causation of bias: the episcope. Epidemiology. 2001;12:114–122.
3. Cornfield J, Haenszel W, Hammond EC, Lilienfeld AM, Shimkin MB, Wynder EL. Smoking and lung cancer: recent evidence and a discussion of some questions. J Natl Cancer Inst. 1959;22:173–203.
4. Flanders WD, Khoury MJ. Indirect assessment of confounding: graphic description and limits on effect of adjusting for covariates. Epidemiology. 1990;1:239–246.
5. MacLehose RF, Kaufman S, Kaufman JS, Poole C. Bounding causal effects under uncontrolled confounding using counterfactuals. Epidemiology. 2005;16:548–555.
6. Greenland S, Lash TL. Bias analysis. In: Rothman KJ, Greenland S, Lash TL eds. Modern Epidemiology. 3rd ed. Philadelphia: Lippincott-Williams-Wilkins; 2008:345–380.
7. Eddy DM, Hasselblad V, Shachter RD. Meta-analysis by the Confidence Profile Method: The Statistical Synthesis of Evidence. Statistical Modeling and Decision Science. Boston: Academic Press; 1992.
8. Greenland S. Multiple-bias modelling for analysis of observational data. J R Stat Soc A. 2005;168:1–25.
9. Lash TL. Applying Quantitative Bias Analysis to Epidemiologic Data. New York: Springer; 2009.
10. Lash TL, Fink AK. Semi-automated sensitivity analysis to assess systematic errors in observational data. Epidemiology. 2003;14:451–458.
11. Phillips CV. Quantifying and reporting uncertainty from systematic errors. Epidemiology. 2003;14:459–466.
12. Gustafson P. Measurement Error and Misclassification in Statistics and Epidemiology: Impacts and Bayesian Adjustments. Boca Raton: Chapman & Hall/CRC; 2004.
13. Gustafson P, Le N, Saskin R. Case-control analysis with partial knowledge of exposure misclassification probabilities. Biometrics. 2001;57:598–609.
14. Fox MP, Lash TL, Greenland S. A method to automate probabilistic sensitivity analyses of misclassified binary variables. Int J Epidemiol. 2005;34:1370–1376.
15. Chu HT, Wang ZJ, Cole SR, Greenland S. Sensitivity analysis of misclassification: a graphical and a Bayesian approach. Ann Epidemiol. 2006;16:834–841.
16. MacLehose RF, Olshan AF, Herring AH. Bayesian methods for correcting misclassification: an example from birth defects epidemiology. Epidemiology. 2009;20:27–35.
17. Steenland K, Greenland S. Monte Carlo sensitivity analysis and Bayesian analysis of smoking as an unmeasured confounder in a study of silica and lung cancer. Am J Epidemiol. 2004;160:384–392.
18. Greenland S. Bayesian perspectives for epidemiologic research III. Bias analysis via missing-data methods. Int J Epidemiol. 2009;38:1662–1673.
19. Greenland S. Relaxation penalties and priors for plausible modeling of nonidentified bias sources. Stat Sci. 2009;24:195–210.
20. Greenland S. Variance estimation for epidemiologic effect estimates under misclassification. Stat Med. 1988;7:745–757.
21. Honein MA, Rasmussen SA, Reefhuis J. Maternal smoking and environmental tobacco smoke exposure and the risk of orofacial clefts. Epidemiology. 2007;18:226–233.
22. Klebanoff MA, Levine RJ, Clemens JD, DerSimonian R, Wilkins DG. Serum cotinine concentration and self-reported smoking during pregnancy. Am J Epidemiol. 1998;148:259–262.
23. Klebanoff MA, Levine RJ, Morris CD. Accuracy of self-reported cigarette smoking among pregnant women in the 1990s. Paediatr Perinat Epidemiol. 2001;15:140–143.
24. Pickett KE, Rathouz PJ, Kasza K, Wakschlag LS, Wright R. Self-reported smoking, cotinine levels, and patterns of smoking in pregnancy. Paediatr Perinat Epidemiol. 2005;19:368–376.