# The Authors Respond

Cole, Stephen R.; Chu, Haitao; Brookhart, M. Alan; Edwards, Jessie K.

doi: 10.1097/EDE.0000000000000707
Letters

Department of Epidemiology, UNC-Chapel Hill, Chapel Hill, NC, cole@unc.edu

Department of Biostatistics, University of Minnesota, Minneapolis, MN

Department of Epidemiology, UNC-Chapel Hill, Chapel Hill, NC

## To the Editor:

In darkness of night without lumen, one takes voices heard to be human.

We thank Dr. McIsaac1 for his interest in our letter.2 We apologize that in our attempt to be concise and amusing, we were unclear. Dr. McIsaac1 is correct that we conflated the values of the parameter $\theta$ with the validity of the reports. To clarify, consider this revised example. Say we have chosen at random one of two unfair coins, which are weighted so that coin 1 and coin 2 land heads on average 1/3 and 2/3 of throws, respectively. Rather than consulting experts, we flip the chosen coin 12 times and it lands heads seven times. The results in our Table hold using this revised example.
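As a check on the arithmetic, here is a short Python sketch (ours, not part of the original letter) computing the posterior probability that the chosen coin was coin 2, given seven heads in 12 throws and a 50/50 prior over the two coins:

```python
from math import comb

def binom_lik(k, n, p):
    """Binomial likelihood of k heads in n throws when heads has probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood of the observed 7 heads in 12 throws under each coin.
lik_coin1 = binom_lik(7, 12, 1/3)
lik_coin2 = binom_lik(7, 12, 2/3)

# With equal prior probability on each coin, the posterior probability
# that coin 2 was chosen is the normalized likelihood.
post_coin2 = lik_coin2 / (lik_coin1 + lik_coin2)  # = 0.80
```

Note the 0.80 here assumes each flip is observed directly and reported perfectly, which is exactly the assumption relaxed below.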

Dr. McIsaac1 introduced a parameter to index the validity of the data reports and thereby separate the data validity from the value of the parameter of interest $\theta$. Here we briefly offer another perspective to distinguish the value of the parameter of interest $\theta$ from the validity of the data. To extend our revised example, say we were not able to flip the coin but instead gathered reports from 12 others who did flip the coin. Say the data are unchanged: seven heads in 12 throws reported. If we were to assume sensitivity and specificity of 1, then the results in our Table again hold. However, imagine we had prior knowledge or validation data suggesting that the sensitivity ($Se$) and specificity ($Sp$) of a heads report were exactly 0.7 and 0.8, respectively. Then the modified likelihood at $\theta = 2/3$ is 0.21515, and the posterior probability of $\theta = 2/3$ is reduced from 0.8 under perfect sensitivity and specificity to 0.75, as might be expected. This modified likelihood is

$$\binom{12}{7}\left[Se\,\theta + (1 - Sp)(1 - \theta)\right]^{7}\left[1 - Se\,\theta - (1 - Sp)(1 - \theta)\right]^{5},$$

and it allows for imperfect sensitivity ($Se$) and specificity ($Sp$) of reports yet collapses to the standard form2 when sensitivity and specificity are both 1.

Generally, one should strive to judge the validity of data separately from the data values. But in epidemiology, as in life, the rabits (random bits) of information received are partially divorced from context. Of course, this dialog itself simply represents a working model or useful fiction, which may be absurd in particular contexts. If we think the data are not at all associated with reality, then we will view all information as noise not signal. Such radical skepticism is a form of dogmatism. Nihilists cannot learn. On the other hand, the likelihoods we and Dr. McIsaac1 presented assume that we are equally certain about results from all sources. If, in an attempt to distinguish alternative facts from reality, we assign different values of sensitivity and specificity for different reports, then there is a danger in selectively reinforcing our priors, driving ourselves toward dogma, even if we begin with an open mind.

In this après-truth world,3 (p. 159) it may be helpful to state conclusions explicitly. We reaffirm our initial claim that dogmatists cannot learn,2 with the appendix by Dr. McIsaac1 that nondogmatists can be led astray by incorrect information, in the form of a mistaken prior or imperfect data.

Stephen R. Cole

Department of Epidemiology

UNC-Chapel Hill

Chapel Hill, NC

cole@unc.edu

Haitao Chu

Department of Biostatistics

University of Minnesota

Minneapolis, MN

M. Alan Brookhart

Jessie K. Edwards

Department of Epidemiology

UNC-Chapel Hill

Chapel Hill, NC

## REFERENCES

1. McIsaac M. Re: Dogmatists cannot learn. Epidemiology. 2017;28:e61–e62.
2. Cole SR, Chu H, Brookhart MA, Edwards JK. Dogmatists cannot learn. Epidemiology. 2017;28:e10–e11.
3. Blackburn S. Truth. New York: Oxford; 2005.

Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.