Estimates of test-retest variability (TRV) in the form of a 95% range have been suggested as providing a cutoff value (or “change-criterion”) against which measured acuity changes can be judged to decide whether they are indicative of a clinically important change. This approach is based on ensuring that the specificity of the procedure is 95% in individuals with no real change. In an earlier article we investigated empirically the ability of the procedure to detect varying degrees of change (its sensitivity). In this article, we develop a simple statistical model to examine further the sensitivity of the approach.
A statistical model was developed, and predictions from the model were compared with empirical visual acuity data.
The model predicts that for changes equal in size to the change-criterion, sensitivity will be 50%. For changes 1.65 times the change-criterion, sensitivity is 90%, rising to 95% for changes 1.84 times the size of the change-criterion. Predicted sensitivities agreed well with those measured empirically.
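The quoted multipliers are consistent with a model in which test-retest differences are normally distributed and the change-criterion is set at 1.96 standard deviations of the difference distribution; the abstract does not state the model's form, so this is an assumption on our part. A minimal sketch under that assumption, using only the Python standard library:

```python
from math import erf, sqrt

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def sensitivity(change_ratio, z_crit=1.96):
    """Probability of detecting a true change expressed as a multiple
    of the change-criterion, assuming test-retest differences are
    N(true change, sigma_d) and the criterion is z_crit * sigma_d
    (the 95% range)."""
    delta = change_ratio * z_crit  # true change in units of sigma_d
    # Upper-tail probability that the measured change exceeds the
    # criterion; the opposite tail is negligible for these ratios.
    return phi(delta - z_crit)

for r in (1.00, 1.65, 1.84):
    print(f"{r:.2f} x criterion -> sensitivity {sensitivity(r):.2f}")
# 1.00 x criterion -> sensitivity 0.50
# 1.65 x criterion -> sensitivity 0.90
# 1.84 x criterion -> sensitivity 0.95
```

The three printed values match the sensitivities reported above, which is why a normal-difference model with a 1.96-standard-deviation criterion seems a plausible reading.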
The 95% range for TRV is often used to decide whether measured changes are indicative of clinically important changes. Evaluating the performance of visual acuity charts using a method analogous to that of estimating the sensitivity of a screening test highlights some limitations of this method. Use of the 95% range as a change-criterion ensures a high specificity, but a simple statistical model indicates that changes must approach twice the size of the change-criterion before they will be detected with sensitivity in excess of 95%. This has implications for the clinician attempting to assess the ability of visual acuity charts, and of other similar tests, to detect change.