Linear No-threshold (LNT) vs. Hormesis

Paradigms, Assumptions, and Mathematical Conventions that Bias the Conclusions in Favor of LNT and Against Hormesis

Sacks, Bill1; Meyerson, Gregory2

doi: 10.1097/HP.0000000000001033
REVIEW PAPER

The linear no-threshold assumption misunderstands the complex multiphasic biological response to ionizing radiation, focusing solely on the initial physical radiogenic damage. This misunderstanding is enabled (masked and amplified) by a number of mathematical approaches that bias results in favor of linear no-threshold and away from alternatives, like hormesis, that take biological response into account. Here we explore a number of these mathematical approaches in some detail, including the use of frequentist rather than Bayesian statistical rules and methods. We argue that a Bayesian approach cuts through an epidemiological stalemate, in part because it enables a better understanding of the concept of plausibility, which in turn properly rests on empirical evidence of actual physical and biological mechanisms. The concept of plausibility has sometimes been misused to justify the mathematically simple and convenient assumption of linearity without a threshold, in particular with the everywhere-positive slope that is central to linear no-threshold and its variants. Linear no-threshold’s dominance in the area of dose regulation further rests on a misapplication of the precautionary principle, which holds only when a putative caution has positive effects that outweigh the negative unintended consequences. In this case the negative consequences far outweigh the presumed hazards.

1US Food and Drug Administration, Center for Devices and Radiological Health (retired), Diagnostic Radiologist (retired);

2North Carolina Agricultural and Technical State University, Department of English.

The authors declare no conflicts of interest.

For correspondence contact Bill Sacks by email at wsacks830@gmail.com.

(Manuscript accepted 18 October 2018)

INTRODUCTION

THE LINEAR no-threshold (LNT) assumption concerning ionizing radiation entails that there is no threshold dose[3] below which the health impact is either harmless or beneficial. In other words, LNT entails that all ionizing radiation is harmful down to zero dose. It further adduces the independent claim that all ionizing radiation has cumulative negative effects over a lifetime.

However, as widespread and long-standing as advocacy of LNT may be, it describes, more or less accurately, only the immediate damage to DNA and other cellular and organismal components. LNT is incomplete and therefore fails as an accurate description of reality. First, LNT fails to account for the universal adaptive and protective responses to harmful agents that are shared by living organisms; these responses are induced and stimulated in the low-dose range, whether fully or only partially effective, and are inhibited and/or overwhelmed only in the high-dose range. Second, LNT disregards a major portion of the radiobiological scientific literature that demonstrates empirically the presence of both (1) thresholds of response with respect to dose and dose rate and (2) beneficial response below those thresholds (hormesis). This disregard is obscured by the implication that such literature is either nonexistent, erroneous, or scientifically irrelevant.

Since the biological responses have been thoroughly discussed elsewhere (Sacks et al. 2016; Sacks and Siegel 2017), the present article confines itself to describing a number of arbitrary and unjustifiable mathematical approaches that are instrumental to LNT’s unsupported conclusions. These approaches often mask false conclusions and/or appear to justify them. In addition to the various statistical maneuvers used by LNT proponents in epidemiologic studies to conceal the existence of nonlinearity at low doses, as outlined by Scott (2018), the approaches considered in the present article chiefly include the following:

  • Use of frequentist vs. Bayesian statistics;
  • Restriction to a single parametric (algebraically expressible) formula for the relationship between dose and response covering the entire dose range from zero to high;
  • Restriction to a family of dose-response relationships that are either linear or quadratic, but always with slope that is everywhere positive;
  • Requirement that the dose-response relationship go through the origin (associating zero dose with zero effect a priori—what if doses can be too low for optimum health?);
  • Regarding data that are inconsistent with expectations as noise rather than signal;
  • Restriction to only one hypothesis, rather than several, as the only possible alternative to the null, a feature of frequentist reasoning; and
  • Choosing the favored hypothesis, rather than the conventional no-effect hypothesis, to play the role of the null.

The most consistently defended aspect of LNT is the absence of a threshold (NT) rather than linearity (L), and certain concessions are sometimes made by LNT proponents who propose a linear-quadratic formula or a factor that reduces the slope of the line at low doses, called the dose and dose-rate effectiveness factor (DDREF). But a threshold requires that the slope of the dose-response relationship be either negative or at least zero somewhere in the low-dose range, and this they rule out a priori (by assumption). For simplicity we focus on the linear example, but our argument applies to their insistence on everywhere-positive slope.

Linearity of response, or everywhere-positive slope, down to zero dose entails that all radiation is harmful and, in particular, causes future cancer, with a probability that declines but remains finite as zero dose is approached. The assumption of cumulativeness of damage over one’s lifetime eliminates the time factor by failing to distinguish between the biological harms of chronic vs. sporadic exposure. LNT’s neglect of biological processes that take time and the perhaps unwitting use of biasing mathematical approaches have the generally unintended (or perhaps sometimes intended) consequence of continual reinforcement of radiophobia among physicians and patients. And it leads governments to recommend such things as lowering harmless or beneficial levels of radon in homes while overreacting in the face of nuclear power plant accidents.

On the use of plausibility

John Boice, past president of the National Council on Radiation Protection and Measurements (NCRP), said, by way of endorsement, “it is the current judgment by national and international scientific committees that no alternative dose-response relationship appears more plausible than the LNT model on the basis of present scientific knowledge [emphasis ours]” (Boice 2015). He then compared a linear-quadratic to a linear model and said, “the statistical uncertainty in the data for <100 mGy (weighted colon dose) is large as seen by the wide confidence intervals. In fact, the best fit in the range of <2 Gy is ‘linear quadratic’ and not linear, but I’m challenged to see any practical difference.”

But what makes a proposition plausible? And on what basis are Boice’s candidate models confined to linear and linear quadratic? For Boice, plausibility seems to be influenced at least by convenience and simplicity: mathematical criteria that are independent of empirically demonstrable reality, residing wholly within the mathematics and its relationship to the user rather than in the actual way that nature behaves.

Plausibility, however, is not independent of empirically demonstrable reality but depends on it in part. And since what is plausible to one person may not be plausible to another, plausibility is a joint property of empirical reality and the experience, predilection, or relevant paradigm that the observer brings to the issue—no matter how many observers or scientific committees agree.

Bayesianism vs. frequentism

A Bayesian approach begins with prior empirical knowledge and bases probability of a proposition’s being true on that prior, continually using evidence from experiment or observation (epidemiological) to update the prior to a posterior—meaning prior to and posterior to the acquisition of new evidence or data. Thus, since updating based on new evidence is the essence of learning, Bayesianism enshrines the learning process in its approach. Frequentism, in contrast, does not update its starting assumptions (if it recognizes them at all) and does not account for learning, which leaves starting paradigms untouched, even if numerical conclusions require amendment.

Empirical evidence, as all agree, demonstrates that harm is greater in the high-dose range than in the lower. By assuming that the dose-response relationship is linear throughout the dose range from zero to high, and because a straight line with positive slope can always be drawn through such a set of data points, frequentism never recognizes a challenge from the data to its starting paradigm. In contrast, Bayesians are open to learning that a particular starting paradigm (prior) may require modification. This is discussed in more detail in the section below headed “What family of curves fits the data best?” Furthermore, a curve that begins with negative slope in the low-dose range is far more plausible than a straight line or any curve with everywhere-positive slope, as it accords with empirical reality (discussed in greater detail below).

There are situations in which frequentism is an adequate statistical method of eliciting valid information from the data, but there are situations, as we describe for low-dose radiation, where frequentism fails and Bayesianism is necessary.

Frequentism’s arbitrary assumptions and neglect of ambiguity

To better elucidate the basic concept underlying frequentism, we offer the following example: if 16% of US men are observed to develop prostate cancer some time in their lives (frequency), then the probability that any individual US male will develop prostate cancer is taken to be the same 16%. Thus, the essential feature of frequentism, following R.A. Fisher, is that the probability of a future outcome (prostate cancer) for an individual is defined as the observed frequency in the past of that outcome in a population of which that individual is a member (US males). So, past is mapped onto future, population onto individual, and frequency onto probability.

However, every individual is a member of an indefinite number of populations, each, in general, with a different frequency of a given outcome like cancer. For example, what is the probability that a particular individual will develop prostate cancer if he belongs to all of the following populations: men over 65, men with positive family history of prostate cancer, men who exercise regularly, even men with red hair, and so on ad infinitum?

The frequency of prostate cancer is likely to be different in each of these populations, except by coincidence. So in order to estimate this individual’s probability of developing prostate cancer, a choice must be made about categorizing this individual as a member of one of many possible populations. Given the ambiguity, such a choice is necessarily arbitrary and subjective, and at best conventional rather than evidence based, but generally unrecognized as such. In mapping past population frequency onto future individual probability, frequentism unwittingly obscures the problem of population ambiguity.[4]
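
To make the ambiguity concrete, here is a minimal sketch (in Python, with invented frequencies used purely for illustration) of how the same individual inherits a different "probability" depending on which reference population is chosen:

    # Reference-class ambiguity: the same man belongs to many populations,
    # each with a different observed frequency of prostate cancer.
    # All frequencies below are invented for illustration only.
    populations = {
        "US males (overall)": 0.16,
        "men over 65": 0.26,
        "men with positive family history": 0.30,
        "men who exercise regularly": 0.12,
    }

    # Frequentism maps each population's past frequency onto the same
    # individual's future "probability," so the answer depends on an
    # arbitrary choice of reference class:
    for label, freq in populations.items():
        print(f"As a member of '{label}': P(prostate cancer) = {freq:.0%}")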

Bayesianism’s understanding of subjectivity and objectivity

There is indeed an element of subjectivity in Bayesianism, just as there is in frequentism, despite frequentists’ common ignorance or denial of this feature. However, the subjectivity occurs at different junctures and has different consequences. Bayesians, unlike frequentists, are fully aware of the role that their subjectivity plays, but it occurs for Bayesians because for them probability represents their degree of belief in a proposition. Furthermore, that degree of belief is subjected to evidence as it becomes available, such that different Bayesians observing the same incoming data may start from different degrees of belief (priors), based on different experiences and different predilections, but their degrees of belief will converge toward progressively closer agreement as they update their priors with incoming evidence into posteriors. This learning process (which is an epistemological phenomenon; that is, it concerns human perception and knowledge of the referent rather than the referent itself) occurs in never-ending rounds of updating and observing, which renders all conclusions tentative to varying degrees and subject to subsequent modification should the evidence so demand. So the subjectivity of the Bayesians is the subjectivity endemic to any learning process and intrinsic to realist epistemology.
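
The convergence of differing priors under a shared stream of evidence can be illustrated with a minimal conjugate-updating sketch; the true frequency, the two priors, and the simulated data stream are all hypothetical:

    # Two Bayesians start from very different Beta priors over an unknown
    # event frequency and update on the same simulated data stream.
    # Their posterior means converge as evidence accumulates.
    import random

    random.seed(1)
    true_p = 0.3                        # hypothetical true frequency
    beliefs = {"skeptic": (1, 9),       # prior mean 0.10
               "believer": (9, 1)}      # prior mean 0.90

    for n in range(1, 501):
        x = 1 if random.random() < true_p else 0
        for name, (a, b) in beliefs.items():
            beliefs[name] = (a + x, b + 1 - x)   # conjugate Beta update
        if n in (1, 10, 100, 500):
            means = {k: a / (a + b) for k, (a, b) in beliefs.items()}
            print(n, {k: round(m, 3) for k, m in means.items()})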

Fig. 1

Frequentists, in contrast, hold that probability is not a degree of belief but rather a property solely of that aspect of the outside world under consideration; thus, they regard probability as wholly ontological (concerning external reality rather than human perception and knowledge associated with it). Probability as a degree of belief, by contrast, is a joint property of the outside world and the perceiver, largely based on her/his previous experience and knowledge; to a Bayesian, therefore, probability is both ontological and epistemological. Bayesianism regards the frequentist position as an instance of the mind-projection fallacy, in which that which is partly in the mind of the observer is wrongly projected entirely onto the outside world. Thus, LNT proponents, as frequentists, wrongly think that linearity (or linear-quadraticity) is a property of empirical reality rather than an arbitrary (albeit convenient) mathematical restriction that they impose on nature a priori.

The single parametric (algebraically expressible) approach to the dose-response relationship

There is additional unacknowledged subjectivity in fitting algebraic curves to data, which often blocks perception of alternative choices. In particular, the insistence on a singular algebraic expression covering the entire dose range, in the face of a discoverable (and already discovered) multiplicity of actual physical and biological mechanisms within that range, acts as an obstacle to realistic comprehension. If actual mechanisms do not permit such simplification, the single algebraic representation will fail to represent the reality accurately. In the hands of influential organizations and individuals—voices of authority—the negative consequences of such neglect for public health are magnified.

Thus, LNT proponents insist not only that there be a single parametric curve/formula covering the entire dose range, but insist further that it be linear—or, in what is a relatively recent concession to complexity, linear-quadratic or modified by application of a DDREF, an artifact introduced to enable retention of linearity, even if piecewise. In fact, DDREF is an oxymoron; the relationship is either linear or not, and it cannot be both. Their insistence on a single parametric representation covering the entire dose range thus forces them to extrapolate from higher doses, where empirical evidence, it is agreed, demonstrates resulting illness and/or death, to lower doses all the way down to zero dose. That there is no valid empirical evidence of harm in this lower dose range is ignored, and epidemiological studies that claim to provide such evidence have been shown to contain numerous errors, as touched on below.

The inadequacy of a single algebraic relationship to characterize the entire dose range may easily be seen with the help of the following analogy. At different ranges of temperature, a graph accurately describing the volume of a particular aliquot of water would need to reflect that below freezing the volume changes according to one relationship (and formula), between freezing and boiling it changes according to a different relationship (and formula), and above boiling the relationship depends strongly on the container, if any is present—but under no circumstance would it be the same as in the two lower ranges. An a priori demand that the volume-temperature relationship be linear throughout the three ranges can be seen to be unwarranted. The relationship needs to be determined separately in each of the three ranges—ranges that are bounded and separated, in this case, by phase changes.

The necessity of assessing the physics and biology in each range applies as well to the dose-response relationship for ionizing radiation, for which the interactions of the physical and biological mechanisms on either side of the lower and upper thresholds differ from one another.

What family of curves fits the data best?

LNT proponents generally insist on linearity as the simplest and most plausible family of curves. They fail, as mentioned above, to recognize that plausibility is in part a subjective judgment peculiar to each observer and dependent on each one’s previous experience and knowledge, and is not a property of external reality alone. But insofar as plausibility does also reflect empirical reality, LNT proponents completely neglect the experimentally and observationally demonstrated biological response mechanisms.

To illustrate the mathematical point apart from empirical evidence, let’s consider a hypothetical data set (Fig. 2).

Fig. 2

We can ask, “What’s the straight line that best fits these data?” or “What’s the quadratic that best fits these data?” or “What’s the (whatever) that best fits these data?” One could even ask, “What’s the sine wave that best fits these data?” and so on. The choice of family of curves is independent of, and not dictated by, the data; it derives from preconceptions as to how the data may best be described. Whatever choice is made constrains or biases the possible results and conclusions and rules many out a priori. The fact that LNT proponents reject certain alternative choices by assumption (including hormesis) is closely related to their failure to recognize that such rejection (or, for that matter, acceptance) is a joint property of their own perceptions and external reality, not simply a property of external reality alone. They thus narrow their interpretive options without warrant, and they would be doing so even if there were no empirical evidence for hormesis.

The more general question that should be asked is, “How do we choose which family of curves to fit to these data?” Once that’s decided, it’s a simple matter to find the best representative of the chosen family. But there are an indefinite number of families, each with its own best-fitting representative. Only a small proportion of curves can be described by a single parametric formula covering the entire range, and there must be independent evidence that the data can be so represented at all, let alone evidence for which family to choose. It cannot simply be taken for granted. Such evidence can only come from considering both the physics of radiogenic damage and the biology of organismal responses to that damage over time. Ignoring the latter leads inevitably to falsehood, generally defended nevertheless by LNT proponents.
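
The point that the family is chosen, not dictated by the data, can be made concrete: given the same data set, a best-fitting member can be produced for any family one names. In the sketch below, the data, noise level, and candidate families are all invented for illustration:

    # Every chosen family yields its own "best fit" to the same data.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    dose = np.linspace(0, 10, 30)
    # hypothetical response with an initial dip (hormetic shape) plus noise
    response = 0.05 * (dose - 3) ** 2 - 0.45 + rng.normal(0, 0.2, dose.size)

    best_line = np.polyfit(dose, response, 1)   # best straight line
    best_quad = np.polyfit(dose, response, 2)   # best quadratic

    def sine(x, a, w, phi, c):                  # even a sine wave has one
        return a * np.sin(w * x + phi) + c

    best_sine, _ = curve_fit(sine, dose, response, p0=[1.0, 0.5, 0.0, 0.0])

    print("best line:      ", np.round(best_line, 3))
    print("best quadratic: ", np.round(best_quad, 3))
    print("best sine wave: ", np.round(best_sine, 3))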

Just from a mathematical point of view, many data sets can be described piecewise, with a different algebraic formula in each of several ranges; for example, piecewise linear (Fig. 3).

Fig. 3

This would apply in the face of the empirical finding that there are three different relationships between physical damage and biological defense responses operating in three different dose ranges.
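
A minimal sketch of such a piecewise description follows, with invented data and with the two breakpoints assumed known for simplicity (in practice they would themselves have to be located from the physics and biology):

    # Fit a separate straight line in each of three dose ranges instead of
    # forcing a single formula over the whole range. Data are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    dose = np.linspace(0, 12, 60)
    truth = np.piecewise(
        dose,
        [dose < 4, (dose >= 4) & (dose < 8), dose >= 8],
        [lambda d: -0.3 * d,                  # initial negative slope
         -1.2,                                # flat middle range
         lambda d: -1.2 + 0.8 * (d - 8)])     # rising high-dose range
    response = truth + rng.normal(0, 0.15, dose.size)

    for lo, hi in [(0, 4), (4, 8), (8, 12)]:
        mask = (dose >= lo) & (dose <= hi)
        slope, intercept = np.polyfit(dose[mask], response[mask], 1)
        print(f"range {lo}-{hi}: fitted slope = {slope:+.2f}")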

An additional unwarranted assumption made by LNT proponents is that the line must go through the origin of the dose-response relationship. This is tantamount to assuming that at zero dose there will be zero effect; but what if a minimum level of radiation is necessary for health? Then zero dose would have a significant and harmful effect compared to optimal nonzero doses. Yet this is ruled out a priori by the LNT-associated assumptions; indeed these assumptions fly in the face of both experimental and observational (epidemiological) evidence that at very low doses, near or at zero, adaptive and protective defenses are at best hampered in their development and in some cases even prevented (Mitchel et al. 2008). Thus, Boice’s pronouncement, following national and international scientific committees, that “no alternative dose-response relationship appears more plausible than the LNT model on the basis of present scientific knowledge” does not even qualify as plausible, let alone as the most plausible.

As an aside, those epidemiological studies that purport to demonstrate evidence in favor of LNT, and there are many, have been generically refuted as they contain circular reasoning, significant misestimates of individual radiation exposures, failure to consider certain relevant confounders, and often specious statistical and category manipulation—all in addition to the omission of biological defense mechanisms, which alone prevents plausibility, let alone validity, from entering into their conclusions (Sacks et al. 2016).

Empirical mechanisms should govern whether unexpected data points are judged to represent noise or signal

When the ordinate position of a data point is inconsistent with the operative paradigm (whether acknowledged or unacknowledged), it is routinely regarded as the unexplained result of noise, perhaps due to measurement error or statistical fluke. As long as the operative paradigm is based on the preponderance of available evidence, this is a reasonable conclusion for the time being, even though it may subsequently prove to be invalid. But when the paradigm neglects a major portion of the available evidence, then regarding unexpected data points as noise generally rests on (unwarranted) preconceptions.

Thus, consider the data set of Lloyd et al. (1992) that was used as a major basis of the conclusions in the 2006 Biological Effects of Ionizing Radiation (BEIR) VII report of the National Academy of Sciences (NAS) BEIR Committee. Both Lloyd et al. and the BEIR Committee chose to regard as noise the zero-dose data point (the control value) that showed a greater response than that at still low but nonzero doses. In other words, if the zero-dose value (control) they found were to be taken as signal, the initial slope would be negative, contrary to the everywhere-positive slope that LNT proponents postulate. So, both Lloyd et al. and the BEIR VII Committee felt justified, based solely on their predilection, in regarding it as noise and ignoring it (Siegel, Greenspan, et al. 2018).

Again, a paradigm can obscure either if it is not based on the preponderance of available evidence or if it ignores, rather than refutes, explanations by those who regard the data as signal. Such an explanation was indeed forthcoming in an earlier study by a number of the same authors (Pohl-Rüling et al. 1983), a study that Lloyd et al. even cite, though BEIR VII ignores it—namely, that the initial negative slope indicates repair of damaged DNA at low doses, even if this repair is inhibited at high doses. However, even Pohl-Rüling et al. failed to invoke the removal mechanisms for damaged cells—mechanisms available when repair is incomplete or fails entirely. Those removal mechanisms reside at the cellular, tissue, and organismal levels. But at least Pohl-Rüling et al. did not regard the elevated zero-dose data point, and consequent initial negative slope, as noise; they regarded it as signal requiring an explanation (Siegel, Greenspan, et al. 2018).

Restriction of alternative hypotheses to a single candidate and misassignment of the role of the null hypothesis

According to the self-imposed rules associated with frequentist statistics, the following protocol applies: (1) choice of one hypothesis, the validity of which the investigators hope to demonstrate with the data, and (2) assignment of the role of the null to a second hypothesis that the investigators hope the data will allow them to reject in favor of the desired hypothesis. The desired hypothesis is called the alternative hypothesis, meaning alternative to the null, and the null is always the starting point. This is an all-or-none approach, with the data granting either ability or inability to reject the null and, correspondingly, either ability or inability to accept the desired alternative. The null cannot, by the rules of frequentism, be accepted, nor can the alternative be rejected. Furthermore, frequentism considers no other hypotheses, at least not in this round. Finally, the all-or-none feature of frequentism stands in contrast to the Bayesian view of evidence as lending varying degrees of support to a variety of hypotheses, including LNT (setting aside the requirement of plausibility, which is a nonmathematical consideration).

While not strictly required by frequentist protocol, a null usually means, as the word implies, something like “no effect.” But in any case, the null is always intended as a hypothesis that is set up, it is hoped, to be rejected in favor of the desired hypothesis. The reason the null cannot be accepted as true is that failure of the data to permit its rejection may reflect not the truth of the null but merely too small a sample. But again, that holds only if consideration is restricted to two relatively close hypotheses, neglecting the possibility of a third, fourth, or any additional hypotheses. There is always an asymmetry in treatment between the null and the alternative, with one accepted by default unless the data permit its rejection.

Bayesianism, in contrast, does away with such asymmetry, as it can simultaneously handle any number of candidate hypotheses, and judgment is then made as to which among them best reflects the data or, equivalently, which among them is best supported by the evidence. And with adherence to empirical reality as a guide, the candidate hypotheses should all be plausible. Thus, for Bayesians, all plausible hypotheses are on an equal footing until the selection is made by the data. If we abandon empirical reality as a guide and appeal instead to mathematical extrapolation, simplicity, and/or convenience (as with LNT), it is possible and even likely that the candidate alternative hypothesis will fail to reflect reality (as LNT does).

According to frequentist protocol, to demonstrate the validity of the LNT-predicted effect at a discrete and definite set of different doses, particularly in the low-dose range, the data must allow rejection of the null hypothesis of no effect in favor of some harmful effect that is linearly related to dose at each of the chosen doses throughout the dose range. Furthermore, LNT’s assumed positive slope of increasing harm at increasing doses must obtain throughout the entire dose range, and, conversely, the lower the dose the less the harm, but without vanishing until zero dose is reached—otherwise there would be a threshold, below which the slope either becomes zero or at low enough doses becomes negative. The requirement of everywhere-positive slope also is imposed by LNT proponents on other possible relationships, such as linear-quadratic (equivalent to quadratic with vertex displaced from the origin).

It is easy to see that for a straight line (or quadratic, etc.) that goes through the origin—reflecting zero harm at zero dose—the lower the dose the lower the difference between the presumed harm and no effect. Thus, for the data to allow rejection of the properly assigned null (no effect) in favor of the alternative (LNT), larger and larger sample sizes are required by frequentists at lower and lower doses in order to preserve any reasonable statistical power. But if the alternative comes closer and closer to the null, the probability of rejection of the null, if the alternative is true, becomes less and less, i.e., the power diminishes toward 2.5% (for a two-sided test), where the null and alternative coincide. This means that even if the alternative (LNT) were true, it would become more and more difficult to acquire a sample size large enough to demonstrate its validity (by being able to reject the null in its favor).
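
The shrinking power can be computed directly. In the sketch below (a two-sided z-test with hypothetical effect sizes, noise level, and sample size), the probability of rejecting the null in the direction of harm falls toward 2.5% as the hypothesized excess effect shrinks toward zero:

    # Power of a two-sided z-test as the alternative approaches the null.
    from scipy.stats import norm

    alpha, sigma, n = 0.05, 1.0, 10_000
    z_crit = norm.ppf(1 - alpha / 2)             # about 1.96

    for delta in [0.5, 0.1, 0.02, 0.004, 0.0]:   # hypothesized excess effect
        theta = delta * n ** 0.5 / sigma         # standardized effect size
        upper = 1 - norm.cdf(z_crit - theta)     # reject in harmful direction
        lower = norm.cdf(-z_crit - theta)        # reject in opposite direction
        print(f"delta={delta:6.3f}: directional power={upper:.3f}, "
              f"total rejection probability={upper + lower:.3f}")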

In light of this inescapable difficulty in the low-dose range when using the frequentist protocol, LNT proponents adopt one of two approaches: (1) claim that the alternative (LNT) is true, while admitting that it is impossible to demonstrate, i.e., accept LNT on faith despite the absence of confirmatory evidence in the low-dose range (Boice 2015); or (2) assign the role of the null to the favored hypothesis (LNT). The latter, however, violates frequentism’s own protocol and, additionally, defies logic, yet this does not prevent some LNT proponents from taking this approach (Siegel et al. 2017). By eliminating use of a null, Bayesianism avoids this entire problem.

With regard to the first approach, while it is true that the validity of LNT (if it were true) could not be demonstrated in the low-dose range, as its proponents rightly declare, this difficulty is not the actual reason that its validity cannot be demonstrated. The reason is that biological and physical considerations, bolstered by both experimental and observational empirical evidence, demonstrate that LNT is false. Nevertheless, its proponents excuse their failure to confirm LNT by appealing to the undeniable difficulty of demonstrating it (even if it were true), and thereby use false pretenses to cling to this unsupportable claim.

In fact, some LNT proponents, acknowledging the opposition’s favoring of a third hypothesis, that of hormesis (beneficial response to low-dose radiation) below a particular dose threshold, go so far as to claim that neither can hormesis be demonstrated in the low-dose range (using the frequentist approach), for the same reason of insufficient available sample size. However, this turns out to be false, as hundreds, if not thousands, of both experimental laboratory and observational epidemiological studies have demonstrated the validity of hormesis, with statistical significance and even using the frequentist approach (Luckey 1991; Sanders 2010).[5] This is because hormesis does not share the smooth convergence toward no effect that hobbles the LNT hypothesis.

But Bayesian reasoning makes it easier to demonstrate that the preponderance of evidence supports the validity of hormesis and the falsity of LNT. Bayesianism permits multiple hypotheses to be considered simultaneously, rather than being confined to the two that frequentism entails, one of which (the null) is invoked with the sole intention of rejection—at least in any one instance of comparison. Again, the Bayesian approach essentially asks the question of any number of hypotheses, “Which of these several hypotheses do the data best support?” or more broadly, “What is the relative strength of support lent by the data to each of the various hypotheses under consideration?” The considered hypotheses could include no effect (though not in the role of a null) and even, if plausibility is set aside, LNT.
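
As an illustrative sketch of that question, the following compares three fixed candidate hypotheses against the same hypothetical data by their likelihoods, starting from equal priors. The curves, noise level, and data are all invented, and a full Bayesian comparison would integrate over each model's free parameters; the point here is only that several hypotheses are weighed simultaneously rather than one being cast as a null:

    # Weigh several candidate dose-response hypotheses at once by how
    # strongly the same data support each (equal priors assumed).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    dose = np.linspace(0.1, 5, 25)
    noise = 0.05
    # hypothetical data generated with an initial hormetic dip
    data = (-0.3 * dose * np.exp(-dose) + 0.05 * dose
            + rng.normal(0, noise, dose.size))

    models = {
        "no effect": lambda d: np.zeros_like(d),
        "LNT (linear)": lambda d: 0.06 * d,
        "hormetic dip": lambda d: -0.3 * d * np.exp(-d) + 0.05 * d,
    }

    log_lik = {name: norm.logpdf(data, loc=f(dose), scale=noise).sum()
               for name, f in models.items()}
    m = max(log_lik.values())
    w = {k: np.exp(v - m) for k, v in log_lik.items()}
    total = sum(w.values())
    for name, weight in w.items():
        print(f"{name}: posterior probability = {weight / total:.3f}")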

Furthermore, the fact that Bayesianism allows one to observe the learning process as more and more data are incorporated into the overall body of evidence and priors are continually updated to posteriors is a benefit that would hold whether or not the favored hypothesis were true. For Bayesians, the focus is the comparison of the degree to which the growing body of evidence supports each of the candidate hypotheses, rather than simply accepting one and rejecting the other(s) altogether.

Misuse of the concept of the null

This second approach—assigning the role of the null to LNT—is sometimes used by LNT proponents in the face of insufficient statistical power (based on insufficient sample size) to distinguish LNT from no effect. This, again, is a violation of frequentism’s own protocol and, not so incidentally, is irrelevant to a Bayesian approach. Since a proper null in frequentist protocol is specifically set up to be rejected in favor of the desired hypothesis (data permitting), the role of the null can neither logically nor legitimately be assigned to the desired hypothesis. To do so anyway, logic and protocol aside, as some do, means in effect that if the data do not warrant rejection of the desired hypothesis (illegitimately regarded as the null), then it is justified to accept it as true. But rejection of the desired hypothesis, if it were true, would require that the data be far enough from the response predicted by LNT at any particular dose in the low range that there would be only a 2.5% probability (for a two-tailed test or 5% for a one-tailed test) of this data point, or one farther away, occurring by chance, which in turn means that there is a 97.5% (or 95%) probability that the data will not permit the desired hypothesis to be rejected by chance. The failure to reject the pseudonull (desired hypothesis) is then wrongly claimed to imply that it is true. However, nonrejectability of a proper null never permits the conclusion that the null is true, since nonrejectability can also be due to insufficient sample size.
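
A small simulation makes the arithmetic vivid. Under the invented numbers below, the true effect is zero, yet an underpowered study fails to reject the pseudonull (a small LNT-predicted excess) about 95% of the time, and that non-rejection is then miscast as confirmation:

    # Simulate assigning the favored hypothesis the role of the null.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    true_effect = 0.0        # reality: no excess effect at this low dose
    pseudonull = 0.02        # small excess predicted by the favored hypothesis
    sigma, n, trials = 1.0, 50, 10_000
    z_crit = norm.ppf(0.975)

    not_rejected = 0
    for _ in range(trials):
        sample = rng.normal(true_effect, sigma, n)
        z = (sample.mean() - pseudonull) / (sigma / n ** 0.5)
        if abs(z) < z_crit:              # data fail to reject the pseudonull
            not_rejected += 1

    print(f"pseudonull 'confirmed' in {not_rejected / trials:.1%} of trials")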

Thus, this misassignment of the role of the null to the desired hypothesis, rather than to the hypothesis of no effect, is an illegitimate manipulation of the already fraught frequentist ideation, and one that strongly biases the result in favor of the desired hypothesis, i.e., LNT or quadratic or even a third possibility, but always with everywhere-positive slope.

The erroneous procedure of misassigning the role of the null to the desired hypothesis, even in frequentism’s own terms, misplaces the burden of proof. So, while LNT proponents may find data that prohibit them from rejecting the misassigned null of LNT (at a series of low doses), and then claim that they have demonstrated the validity of LNT, this approach lies outside the realm of valid scientific procedure, even by the rules of frequentism.

Another example of a misplaced burden of proof is the following. The current US Environmental Protection Agency (EPA) policy takes the position that the LNT model is accurate unless “compelling evidence to the contrary” is presented. This approach is included in the agency’s guidelines, which direct the use of LNT even if the scientific evidence cannot substantiate that conclusion. This is a circular argument that excludes alternative models from consideration (Cardarelli and Ulsh 2018).

With the phrase, “unless ‘compelling evidence to the contrary’ is presented,” the US EPA puts the burden of proof by fiat on those who would provide such evidence to the contrary rather than on those, like the US EPA, who maintain that radiation as an agent of harm is an exception to the rule that extant biological organisms have evolved adaptive responses to defend themselves against virtually all agents with which they come in contact at low enough doses. As is often said, extraordinary claims require extraordinary evidence, whereas there is no evidence for linearity in the low-dose range or for the absence of a threshold. Aside from the fact that strong evidence for hormesis has been repeatedly produced in numerous studies from around the world (Luckey 1991; Sanders 2010), the concept of compelling, while pretending to characterize the evidence, is actually a joint characterization of the evidence and the subjective eye of the beholder. The EPA is thus declaring that, to them, no amount of evidence would be compelling, nor do they offer a standard by which evidence could possibly be judged compelling to them. And by neglecting to offer such a standard they imply that compelling means absolute and independent of the subjective state of mind of the judges. This declaration renders LNT inviolable, but such a claim is again outside the realm of valid scientific procedure. They succeed in this mission not through scientific procedure but rather through political power.

Hormesis

The word “hormesis” comes from the Greek word meaning “to stimulate,” the same root as that of the word “hormone”; in this case it refers to the stimulation of adaptive and protective defense mechanisms in biological organisms. Just as with the relationship between volume and temperature from below freezing to above boiling for an aliquot of water, there are three ranges of dose with respect to the dose-response relationship: (1) too little for optimal health, (2) just right, and (3) too much. In the first range, there is too little radiation exposure to stimulate the development and continued soundness of a healthy immune system and possibly other levels of defense (Mitchel et al. 2008). In the third range, there is so much radiation exposure that it sickens or kills by inhibiting and overwhelming repair and/or removal mechanisms.

The second (middle) range is, for obvious reasons, often referred to as the Goldilocks zone, in which there is sufficient radiation exposure to develop and maintain a healthy immune system, yet not so much as to overwhelm either the immune system or the additional protective biological repair/removal responses at suborganismal levels—nuclear, cellular, or tissue.

The main point is that in the three ranges the interactions between the biological processes and the physical damage differ from one another, so that no single parametric mathematical formula can represent the entire range. While there may be qualitatively similar damage throughout the dose range, dissimilar adaptive and protective reactions to that damage require dissimilar formulas to represent the net outcome. Mathematics is useful as a tool that is subordinate to actual processes and laws of nature and is intended to describe those processes and mimic their features in mathematical form. However, when math is used independently of the actual processes, as with LNT, the math dominates us and hinders our understanding.

Much theorizing, even in physics, is guided by an esthetic sense of mathematical symmetry and beauty, including examples such as string theory, relativity theory, and quantum mechanics. Even the age-old search for a unified field theory in physics is motivated by a desire to express neatly and concisely all four fundamental forces as manifestations of a single underlying force (Hossenfelder 2018). But such a motivation is like looking for your keys under the light when you know you have dropped them in the dark down the block. You are more likely to find them where they fell rather than where the light is better, and more likely to accurately explain nature’s laws with mathematical formulations that reflect those laws, whatever they may turn out to be, rather than with mathematics chosen for its esthetic appeal, simplicity, or convenience (like the best-fitting straight line covering three ranges that host three different mechanisms). Such criteria generally lead to fruitless searches for confirmation of the chosen formulation. If esthetics, simplicity, and convenience can be achieved with a mathematical description that closely approximates the actual relationships, fine; but this should not be assumed universally possible, and empirical reality should take precedence over these extrascientific proclivities. It is especially harmful when the attraction to esthetics blinds researchers to an entire voluminous body of evidence, whose existence is ignored or even denied (Luckey 1991; Sanders 2010).

The negative consequences of mathematical biasing in radiation science

To summarize, LNT enlists two propositions, the first inherent in the linearity throughout the dose range and the second adduced arbitrarily and independently: (1) all radiation harms to one degree or another, e.g., causes cancer, and (2) the probability of future harm is cumulative over one’s lifetime.

A corollary of the latter claim is that chronic exposure to ionizing radiation has the same effect as sporadic exposure with the same summed dose over time. That is, the hypothesis of cumulativeness eliminates the time factor from biological processes, whereas repair and/or removal of damaged cells requires time—usually on the order of hours, according to some studies (Rothkamm and Löbrich 2003; Löbrich et al. 2005).

If radiogenic damage were persistent and therefore cumulative, meaning if there were no repair or removal, or if such mechanisms were incomplete, then regulation to limit exposure over time might indeed help protect public health. But if repair/removal can keep pace with or outpace the radiogenic damage at low doses, then such regulatory limits are counterproductive. Not only are they expensive to administer, taking limited resources away from needed applications and applying them to unneeded procedures, but such regulations can actually cause harm both by interfering with natural protections and by fostering radiophobia that drives harmful decisions. Chief among such radiophobic decisions are the following:

  • Individual decisions to avoid medically needed radiological imaging;
  • Lowering radiation exposure for imaging studies below the level needed for interpretable images;
  • Government-sponsored forced mass relocations in the vicinity of nuclear power plant accidents that would harm no one outside the plant;
  • Propaganda that hinders the expansion of clean, sustainable, and safest-source-of-all nuclear energy for electricity and other energy needs; and
  • Obstacles to research funding for the beneficial therapeutic uses of low-dose radiation.

In short, there is no safe side, no side of caution, on which to err. The precautionary principle that directs us to err on the side of caution can only have the intended effect when there is no negative consequence to such erring or when the consequences are less than the harm that it is intended to prevent. But the reverse is the case for radiation regulatory practices. As long as the biological rate of repair of radiogenic damage or removal of unrepaired damaged cells can keep up with the rate at which damage is inflicted (determined by the dose rate), there is no persistent damage and therefore the damage will not be cumulative.
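
A toy kinetic sketch of that last point: if damage accrues at a constant rate set by the dose rate and is repaired/removed in proportion to the damage present, the damage level plateaus rather than accumulating. Both rate constants below are invented for illustration:

    # dD/dt = k_dmg - k_rep * D: damage plateaus at k_dmg / k_rep when
    # repair/removal keeps pace with the rate of infliction.
    k_dmg = 5.0      # lesions inflicted per hour (set by dose rate)
    k_rep = 0.5      # fraction of standing damage repaired per hour
    dt, hours = 0.01, 24.0
    damage, t = 0.0, 0.0

    while t < hours:
        damage += (k_dmg - k_rep * damage) * dt   # simple Euler step
        t += dt

    print(f"damage after {hours:.0f} h: {damage:.1f} "
          f"(steady state = {k_dmg / k_rep:.1f})")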

Moreover, Löbrich et al. (2005) found that the DNA double-strand breaks produced by computed tomography (CT) are repaired and/or removed between 5 and 24 h after the CT. Incidentally, in their small study the postscan numbers were lower than the prescan numbers, though their sample size was too small for statistical significance (using a frequentist approach). That is, the CT scan might actually have left the patients in a healthier state in that regard than before the scan; at the least, they appeared to return to the pre-CT condition. And the diagnostic information from a CT, of course, is often necessary to guide appropriate treatment.

The very availability of a healthier state than before the low-dose radiation exposure rests on one additional fact of major importance: every second of every day, normal metabolism in our mitochondria damages our DNA and other cellular components through the generation of reactive oxygen species (ROS). This damage occurs in every cell in our bodies (except for nucleus-lacking red blood cells) at a rate that exceeds by some six orders of magnitude (a million-fold) the rate due to radiation doses ever encountered from natural background, medical diagnostic procedures, or the exterior of nuclear power plants.

Therefore, if there were no evolved adaptive and protective repair/removal mechanisms for this continual endogenous damage, we would be extinct. Radiation in low doses has been found repeatedly to increase the repair/removal rate, so that even the naturally occurring endogenous DNA damage ends up at a lower level than in the absence of the radiation (see, for example, Löbrich et al. 2005; Feinendegen 2016). Neglect by LNT proponents of these facts alone renders LNT an incomplete and therefore fallacious paradigm, as mathematically attractive and convenient as it may be. Knowledge and admission of these findings are crucial to understanding the way that low doses of ionizing radiation, which like high doses unquestionably produce initial damage, can actually leave us healthier within hours than before the exposure. That benefit is the essence of hormesis.

Influences that preserve LNT

If there is a threshold for harm, as thousands of studies demonstrate, lowering dose further while still within the hormetic range, to a level lower than the optimum, is unnecessary, wasteful, harmful, and fear-inducing, with the significant negative consequences that unreasoned and unknowing fear always carries with it. Yet the conventional practice among radiologists and nuclear physicians calls for lowering exposure for medically necessary radiological imaging studies to a level as low as reasonably achievable (ALARA), without knowing where either the threshold or the optimum acute exposure lies. In view of the foregoing, nothing could, in fact, be more unreasonable.

There are a number of obstacles to gaining consensus based on the preponderance of evidence. These include, but are not confined to, the following:

  • Unquestioned adherence to accepted voices of authority, regardless of the evidential preponderance;
  • Retreat from LNT to the claim of agnosticism with regard to LNT vs. hormesis on behalf of the entire field of radiation science;
  • Appearing to take into account the opposing view only to dismiss it;
  • Seeking safe harbor by hiding behind the medical standard of care; and
  • Rare but occasional deliberate dishonesty.

For example, with regard to the third bulleted point, there is an article promoting LNT (Halm et al. 2014) that even cites the Löbrich et al. (2005) paper mentioned above, only to ignore the observed reversion to baseline of the number of double-strand breaks between 5 and 24 h following a CT scan. The authors still claim persistent and hence cumulative damage, concluding that “unnecessary radiation-producing procedures should be eliminated when possible and, if appropriate, non-ionizing techniques such as US [ultrasound] or MRI [magnetic resonance imaging] should be used” (Sacks and Siegel 2017).

And with regard to the fifth bulleted point, while most of these practices may be the result of paradigm blindness rather than deliberate dishonesty, there is at least one instance in the BEIR VII report of apparent dishonesty on the part of at least one contributor to the report. A fragment of a sentence was quoted from another study, of which Löbrich was again one of the authors (Rothkamm and Löbrich 2003), in which he and his colleague observed early damage followed by repair or elimination of the damaged cells in vitro. But only the sentence fragment reporting the damage was quoted, while the rest of the sentence and the following sentence, reporting subsequent return to baseline, were omitted. While the others on the committee may have only been guilty of negligence in failing to check this citation, it is difficult to escape the conclusion that the person who lifted the fragment was guilty of deliberate fraud (Sacks et al. 2016). There is a decided difference between negligence and fraud, but the harm is the same.

Thus, for this reason and for the reasons mentioned above that are more likely a result solely of paradigm blindness, BEIR VII has no legitimate claim to being considered a voice of authority, yet it is widely treated as a sort of bible in radiation science (Siegel, Greenspan, et al. 2018). The NCRP has also been exposed as undeserving of this status (Jaworowski and Waligorski 2003).

Until these voices of authority, as presently constituted, are recognized as undeserving of such status and are questioned and overcome by the entire medical and regulatory professions, we will make no progress away from superstition, radiophobia, and a general flight from scientific procedure and evidence-based medicine.

SUMMARY

The following conventional (or unconventional) assumptions and usages all contribute to biasing the results against hormesis and in favor of LNT:

  • Use of frequentist statistics;
  • Demand for a single parametric formula throughout the dose range;
  • Restriction to the linear or quadratic family of curves with everywhere-positive slope;
  • Requirement that the line go through the origin (i.e., that zero dose = zero effect);
  • Regarding data that depart from preconceived patterns as noise;
  • Restriction to one alternative hypothesis; and
  • Assigning the role of null to the desired hypothesis, to be accepted rather than rejected.

Such biasing away from biological and physical reality causes untold harm to countless people, and at least certain of those responsible should be held accountable.

REFERENCES

Boice JD Jr. The Boice Report #40: LNT 101. Health Phys News 43:25–26; 2015.
Cardarelli JJ, Ulsh BA. It is time to move beyond the linear no-threshold theory for low-dose radiation protection. Dose-Response 16:1–24; 2018. DOI 10.1177/1559325818779651.
Feinendegen L. Quantification of adaptive protection following low-dose irradiation. Health Phys 110:276–280; 2016.
Halm BM, Franke AA, Lai GF, Turner HC, Brenner DJ, Zohrabian VM, DiMauro R. Gamma-H2AX foci are increased in lymphocytes in vivo in young children 1 h after very low-dose x-irradiation: a pilot study. Pediatr Radiol 44:1310–1317; 2014.
Hossenfelder S. Lost in math: how beauty leads physics astray. New York: Basic Books; 2018.
Jaworowski Z, Waligorski M. Problems of US policy on radiation protection. Exec Intel Rev 30:18–26; 2003.
Lloyd DC, Edwards AA, Leonard A, Deknudt GL, Verschaeve L, Natarajan AT, Darroudi F, Obe G, Palitti F, Tanzarella C. Chromosomal aberrations in human lymphocytes induced in vitro by very low doses of x rays. Int J Radiat Biol 61:335–343; 1992.
Löbrich M, Rief N, Kühne M, Heckmann M, Fleckenstein J, Rübe C, Uder M. In vivo formation and repair of DNA double-strand breaks after computed tomography examination. Proc Natl Acad Sci USA 102:8984–8989; 2005.
Luckey TD. Radiation hormesis. Boston: CRC Press; 1991.
Mitchel REJ, Burchart P, Wyatt H. A lower dose threshold for the in vivo protective adaptive response to radiation. Tumorigenesis in chronically exposed normal and Trp53 heterozygous C57BL/6 mice. Radiat Res 170:765–775; 2008.
National Council on Radiation Protection and Measurements. Implications of recent epidemiologic studies for the linear-nonthreshold model and radiation protection. Bethesda, MD: NCRP; Commentary 27; 2018.
Pohl-Rüling J, Fischer O, Haas G, Obe G, Natarajan AT, van Buul PPW, Buckton KE, Bianchi N, Larramendy ML, Kucerova M, Polikova Z, Leonard A, Fabry L, Palitti F, Sharma T, Binder W, Mukherjee RN, Mukherjee U. Effect of low-dose acute x-irradiation on the frequencies of chromosomal aberrations in human peripheral lymphocytes in vitro. Mutat Res 110:71–82; 1983.
Rothkamm K, Löbrich M. Evidence for a lack of DNA double strand break repair in human cells exposed to very low x-ray doses. Proc Natl Acad Sci USA 100:5057–5062; 2003.
Sacks B, Siegel JA. Preserving the anti-scientific linear no-threshold myth: authority, agnosticism, transparency, and the standard of care. Dose-Response 15:1–4; 2017. DOI 10.1177/1559325817717839.
Sacks B, Meyerson G, Siegel JA. Epidemiology without biology: false paradigms, unfounded assumptions, and specious statistics in radiation science (with commentaries by Inge Schmitz-Feuerhake and Christopher Busby and a reply by the authors). Biol Theory 11:69–101; 2016. DOI 10.1007/s13752-016-0244-4.
Sanders CL. Radiation hormesis and the linear-no-threshold assumption. Berlin Heidelberg: Springer-Verlag; 2010.
Scott BR. A critique of recent epidemiologic studies of cancer mortality among nuclear workers. Dose-Response 16:1–9; 2018. DOI 10.1177/1559325818778702.
Siegel JA, Sacks B, Socol Y. The LSS cohort of atomic bomb survivors and LNT. Comments on “Solid Cancer Incidence among the Life Span Study of Atomic Bomb Survivors: 1958–2009” (Radiat Res 187:513–537; 2017) and “Reply to the Comments by Mortazavi and Doss” (Radiat Res 188:369–371; 2017). Radiat Res 188:463–464; 2017.
Siegel JA, Sacks B, Pennington CW, Welsh JS. DNA repair following exposure to ionizing radiation is not error-free: but this does not increase cancer incidence or mortality. J Nucl Med 59:359; 2018. DOI 10.2967/jnumed.117.198804.
Siegel JA, Greenspan BS, Maurer AH, Taylor AT, Phillips WT, Van Nostrand D, Sacks B, Silberstein EB. The BEIR VII estimates of low-dose radiation health risks are based on faulty assumptions and data analyses: a call for reassessment. J Nucl Med 59:1017–1019; 2018. DOI 10.2967/jnumed.117.206219.

[3] As an aside, important in most contexts, the concept of dose covers a multitude of possibilities, as all it describes is the energy deposited in tissue per kilogram of tissue. It overlooks the type of radiation—high or low linear-energy-transfer (LET)—as well as whether the source is external or internalized through inhalation, injection, or ingestion. We will pretend that low-LET external radiation is the subject of our consideration for the sake of simplicity, as that is adequate to make the points we want to make, and the arguments apply equally to all other forms of ionizing radiation.

[4] Additional areas of frequentist subjectivity, which we mention only in passing, include the following:

  • Arbitrary choice of p < 0.05 as the cutoff for statistical significance, also expressed as the 95% confidence interval or CI (or sometimes 90%, particularly when it helps achieve the preconceived result without requiring impractically large sample size); and
  • Choice of a one-sided instead of two-sided test based on the preconception that some conceivable outcome is impossible in reality, a preconception that is sometimes arbitrary and mistaken.

For example, consider the latest report (Commentary 27) from the NCRP (2018), whose cover exhibits the graph shown in Fig. 1. Their displayed selection of models a priori rules out hormesis (which would be indicated by a dip below the baseline), thus revealing a one-sided point of view that is taken to justify the use of a one-sided statistical test. Such choices precede the performance of a study and bias the outcome and conclusions in a particular direction that may not be warranted.

[5] It is futile for us to cite one or two such studies, given their prolific abundance, so we direct the reader’s attention to these two books that, while not particularly recent, provide long lists of them. Furthermore, hormesis-supporting evidence continues to be forthcoming from studies—both in vitro and in vivo—performed by scientists around the world, studies that are easily available to anyone who seeks them with an open mind. Two websites that direct visitors’ attention to many such studies are www.radiationeffects.org, website of the international organization of radiation scientists, Scientists for Accurate Radiation Information (SARI), and https://www.x-lnt.org, website of the SARI-affiliated XLNT Foundation.

Keywords:

cancer; hormesis; radiation, ionizing; radiobiology

© 2019 by the Health Physics Society