Naeser1 uses Popperian falsification to eliminate several candidate methods of calculating surgically induced astigmatism. While I agree that many of the methods should be discarded, I do not believe that Naeser's argument provides satisfactory justification.
Popper made it quite clear that a single instance of falsification was not necessarily sufficient grounds for abandoning a theory. He points out, for example, that “the observed motion of Uranus might have been regarded as a falsification of Newton's theory.”2 He also comments, “I do believe that the second law [of thermodynamics] is actually refuted by Brownian movement,” but he does not then abandon the second law. Furthermore, it is important “to accept refutations (though not too easily). These rules are essentially somewhat flexible. As a consequence the acceptance of a refutation is nearly as risky as the tentative adoption of a hypothesis.”2 There must be “clear disagreement [emphasis added] for acceptance of a refutation.”3 In Magee's words,4 Popper says “we should not abandon our theories lightly, for this would involve too uncritical an attitude towards tests, and would mean that the theories themselves were not tested as rigorously as they should be. So although [Popper] is what might be called a naïve falsificationist at the level of logic, he is a highly critical falsificationist at the level of methodology. Much misunderstanding of his work has sprung from a failure to appreciate this distinction.”
Kuhn5 puts matters more strongly: “… no theory ever solves all the puzzles with which it is confronted at a given time …. If any and every failure to fit were ground for theory rejection, all theories ought to be rejected at all times.”
Naeser took paired keratometric measurements on a spherical steel ball and used several candidate methods to calculate confidence intervals or regions on the mean induced astigmatism. Because the actual value is presumably zero, any method that resulted in a statistically significant difference from zero must be false and, hence, should be discarded. I argue, however, that this accepts the tests too uncritically and the refutation too easily.
We have made similar measurements on a steel ball.6 A feature of such measurements is that the spread of the data is not much greater than the scale of the discreteness of the measurements. This immediately raises the possibility of artifact and departure from normality.7 (An example can be seen in Figure 2 of reference 6.) Because the statistical methods that Naeser uses depend on the assumption of normality, some doubt is immediately cast on his conclusions. Matters are made worse by the fact that measurement roundoff can have a considerable effect on the sample variance and, hence, on the confidence interval. He does not address these matters. Because his paper presents no raw data or graphical representations, one is unable to assess these effects in this case.
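The effect of roundoff can be illustrated with a simple simulation. The sketch below is hypothetical and uses no data from Naeser's paper: the step size, true spread, and sample size are all assumed for illustration only. It compares a normal-theory confidence half-width computed from continuous readings with the one computed after the same readings are snapped to an assumed instrument scale.

```python
# Hypothetical illustration (not Naeser's data): rounding readings to an
# instrument's step size can alter the sample variance and hence the
# width of a confidence interval on the mean. All parameters assumed.
import math
import random

random.seed(1)

STEP = 0.25      # assumed instrument resolution in diopters
TRUE_SD = 0.10   # assumed true spread, deliberately of the order of STEP


def half_width_95(values):
    """Approximate 95% normal-theory confidence half-width for the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 1.96 * math.sqrt(var / n)


raw = [random.gauss(0.0, TRUE_SD) for _ in range(30)]
rounded = [round(v / STEP) * STEP for v in raw]  # readings snapped to the scale

print(half_width_95(raw), half_width_95(rounded))
```

Because the spread is comparable to the step, the rounded sample's variance, and so its interval, can differ appreciably from that of the underlying continuous values.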
Even in the absence of artifact and departure from normality, the statistical tests, by their very nature, are probabilistic. There is no certainty in their conclusions, including that a particular method has been proved false.
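The probabilistic point can also be made concrete. The sketch below (with assumed spread, sample size, and number of trials) repeatedly tests a true null hypothesis of zero mean at the 5% level: a fraction of experiments close to the nominal level rejects the truth, so a single statistically significant result is not, by itself, a decisive refutation.

```python
# Sketch with assumed parameters: even when the true induced astigmatism
# is exactly zero, a 5%-level test "falsifies" the true hypothesis in
# roughly 5% of repeated experiments.
import math
import random

random.seed(2)


def rejects_zero(sample, crit=1.96):
    """Two-sided test of mean == 0 using the estimated standard error."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    return abs(mean) / math.sqrt(var / n) > crit

trials = 2000
false_rejections = sum(
    rejects_zero([random.gauss(0.0, 0.1) for _ in range(30)])
    for _ in range(trials)
)
print(false_rejections / trials)  # close to the nominal 0.05
```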
It is not at all clear, therefore, that Popperian falsification of the methods is justifiable. This applies, in particular, to the methods characterized by Naeser as vector analysis or astigmatic decomposition.
Naeser asserts that his example 2 falsifies the simple subtraction method of calculating change in astigmatism. For a change from 1.0 diopter (D) along 0 degrees to 1.0 D along 90 degrees, Naeser asserts that the change is 2.0 D along 0 degrees. Because the simple subtraction method gives 1 – 1 = 0 D, instead of the “correct” value 2.0 D, it is discarded. But it seems to me not unreasonable for someone simply to say that he or she regards induced astigmatism as the difference without regard to axis. (This is not to say that I would do so.) Simple subtraction would, then, be correct by definition and there would be no refutation. Surely, by deciding that the correct answer is 2.0 D, one is really committing the logical fallacy of petitio principii or begging the question.
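The two calculations in the example can be set side by side. The sketch below uses the standard double-angle vector representation of cylinder, in which magnitude C at axis θ maps to the point (C·cos 2θ, C·sin 2θ); axis sign conventions vary between authors, so only the magnitudes are compared here.

```python
# Naeser's example 2, computed two ways. The double-angle mapping is the
# usual one for astigmatic decomposition; only magnitudes are compared
# because axis conventions differ between authors.
import math


def to_double_angle(cyl, axis_deg):
    """Map cylinder magnitude and axis to a point in double-angle space."""
    a = math.radians(2 * axis_deg)
    return (cyl * math.cos(a), cyl * math.sin(a))

pre = to_double_angle(1.0, 0)    # 1.0 D along 0 degrees
post = to_double_angle(1.0, 90)  # 1.0 D along 90 degrees

# Simple subtraction of magnitudes, ignoring axis:
simple = 1.0 - 1.0  # 0.0 D

# Vector (astigmatic-decomposition) difference:
dx, dy = post[0] - pre[0], post[1] - pre[1]
vector_magnitude = math.hypot(dx, dy)  # 2.0 D
```

The two definitions simply answer different questions; that they disagree is a matter of definition, not of empirical refutation.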
The crux of the matter lies in the definition of astigmatism. Naeser defines it in one way, others in another. It is no surprise, therefore, that other definitions are going to produce values that are “false” by Naeser's definition. It is surely a mistake to invoke Popperian falsification for what is false by definition.
The issues here are complicated,8 but this particular use of Popperian falsification does not help, I suggest, to resolve them.
William F Harris PhD
Johannesburg, South Africa
1. Naeser K. Popperian falsification of methods of assessing surgically induced astigmatism. J Cataract Refract Surg 2001; 27:25-30
2. Popper KR. Unended Quest: an Intellectual Autobiography. Glasgow, Fontana/Collins, 1976; 42, 99, 165, 166
3. Popper KR. The Poverty of Historicism. 2nd ed. London, Routledge and Kegan Paul, 1960; 133
4. Magee B. Popper. London, Fontana/Collins, 1973; 23, 24
5. Kuhn TS. The Structure of Scientific Revolutions. 2nd ed. Chicago, IL, University of Chicago Press, 1970; 146
6. Cronje-Dunn S, Harris WF. Keratometric variation: the influence of a fluid layer. Ophthalmic Physiol Opt 1996; 16:234-236
7. Harris WF. Clinical measurement, artefact, and data analysis in dioptric power space. Optom Vis Sci; in press
8. Harris WF. Analysis of astigmatism in anterior segment surgery. J Cataract Refract Surg 2001; 27:107-128