It is commonplace for the optometric and ophthalmologic literature to include visual acuity (VA) measurements. Frequently, VA data are used to characterize populations of research participants, sometimes being used as a criterion for “normal” vision and at other times being used to divide research populations into groups that have relatively similar levels of visual disability or disease severity. In many research studies, VA is an outcome measure, often the primary one, that is used to quantify progression rates or responses to treatments within individuals or across selected population groups.
DESIGNATING VA SCORES
VA measurements express angular size, and there are several alternative ways of doing this. One common clinical practice is to score VA using “Snellen fractions” in which the numerator indicates the testing distance and the denominator indicates the size of the test optotypes. This ratio expresses an angle. There is no universally accepted testing distance. Metric units are used to express testing distances (such as 6 m, 5 m, 4 m, 3 m, and 1 m), and in the United States imperial units are used (such as 20 ft and 10 ft). In continental Europe and in some Asian countries, VA scores are usually expressed as “decimal acuity” values. This is a single number that is equal to the value obtained when the numerator of the “Snellen fraction” is divided by the denominator. Occasionally, mainly in research reports, the VA is expressed as the minimum angle of resolution (MAR), and this gives a single number that indicates the angular size [in arc-minutes (minarc)] of the critical detail within the test target. For most common optotypes, the size of the critical detail is taken to be one-fifth of the height of the optotype. The MAR is equal to the reciprocal of the decimal acuity value. It is widely accepted in the clinical research community that logarithmic scaling should be used for most statistical analyses and for quantifying differences or changes in VA.1 In the ophthalmic literature, VA values are often expressed in terms of “logMAR,” which is the common logarithm of the MAR. The logMAR scale facilitates the analysis of VA data and enables a simple system for giving credit for every test target correctly recognized.
Giving credit for every extra letter read significantly enhances the sensitivity of VA measurement and thus gives tighter confidence limits for identifying changes or differences.2 On the logMAR scale, logMAR = 0.0 corresponds to the Snellen fractions of 20/20 and 6/6, to a decimal acuity of 1.0, and to an MAR = 1 minarc; and logMAR = 1.0 corresponds to 20/200, 6/60, 0.1, or 10 minarc. Larger logMAR values mean poorer VAs, and logMAR values become negative for good VAs when the MAR becomes <1.0.
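As a rough illustration of how these notations relate, the conversions above can be sketched in a few lines of code. This is an editorial illustration only, not part of the original text; the function names are my own:

```python
import math

def decimal_from_snellen(numerator, denominator):
    """Decimal acuity is the Snellen numerator divided by the denominator.

    Numerator (testing distance) and denominator (letter-size distance)
    must share the same units, so 20/20 and 6/6 give the same value.
    """
    return numerator / denominator

def mar_from_decimal(decimal_acuity):
    """MAR (in arc-minutes) is the reciprocal of decimal acuity."""
    return 1.0 / decimal_acuity

def logmar_from_snellen(numerator, denominator):
    """logMAR is the common (base-10) logarithm of the MAR."""
    return math.log10(mar_from_decimal(decimal_from_snellen(numerator, denominator)))

# 20/20 (or 6/6): decimal 1.0, MAR 1 minarc, logMAR 0.0
# 20/200 (or 6/60): decimal 0.1, MAR 10 minarc, logMAR 1.0
```

Note that VAs better than 20/20 yield negative logMAR values, consistent with the scale described above.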
Whatever format authors choose to use when they present VA values in the professional and scientific literature, interested readers should be able to convert the values to their own favored system for specifying VA. The conversions can sometimes be slow and laborious, but the alternative VA specification systems all quantify the same angles, and there is no inherent ambiguity. No meaning is lost in converting VA values from one system to another.
More problematic are the frequent inconsistencies and inadequacies in identifying the VA tests that were used.
A popular acronym is BCVA, which stands for best-corrected VA. But does BCVA always mean the same thing? And does it mean what it says? The “best-corrected” adjective suggests a refraction was performed just before the VA measurement was made, but it is not usual for reports of BCVA to identify the refraction procedure and when it was performed. The terms “presenting visual acuity” or “habitual visual acuity” are often used to refer to measurements made when patients are using their usual spectacles or contact lenses, but occasionally this too is said to be “BCVA.” When VA is being measured through progressive addition lenses, how safe is it to say that the VA that was measured was best-corrected? Should BCVA measurements distinguish between spectacles and contact lenses? Strong spectacle lenses will enhance VA in hyperopia and diminish VA in myopia.
Authors or clinicians reporting BCVA values should be explicit and say exactly what they mean by “best corrected” or “BCVA.”
“Snellen acuity” is another term that is commonly seen in the literature and in clinicians' daily communications, but, more often than not, its meaning is vague. It is usually taken to mean that the VA was measured with a traditional Snellen chart: one with a single large letter at the top and progressively more letters per row as the sizes become smaller. There are innumerable different versions of what are called Snellen charts, with wide variations in the choice and design of the optotypes, the size progression sequences, the numbers of letters at each size level, and the spacing between letters and between rows.
Sometimes, Snellen acuity will be intended to mean that VA was measured with a common projector chart. In other clinical environments, the same term could suggest that a Snellen chart was back-illuminated on a light box. Snellen acuity might also be used to convey that the Snellen chart was a portable card front-illuminated by the ambient room illumination or by a lamp directed at the chart. Visual acuity scores can be significantly affected by the chart design. The major problems with most Snellen charts are that they have too few letters at the larger size levels and this leads to imprecise and insensitive measurement of VA.
“Snellen acuity” is a term that should be used carefully, and authors should explicitly identify the charts that were used.
Bailey and Lovie3 introduced chart design principles that effectively standardize the visual task on VA tests so that, unlike Snellen charts, the visual task is the same at all size levels. For this standardization of visual task, there must be the same number of optotypes at each size level; there must be a logarithmic (constant ratio) progression of size; and the spacing between letters and between rows must be proportional to the size of the optotypes. Then size becomes the only significant variable from one row to the next. The Early Treatment of Diabetic Retinopathy Study (ETDRS) chart4 follows these design principles using Sloan letters as the optotype, and a standard testing distance of 4 m. The ETDRS chart and testing protocol have become the “gold standard” for VA measurement, especially in research.
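On a chart built to these principles, the letter-by-letter crediting mentioned earlier becomes simple arithmetic: with five optotypes per row and 0.1 logMAR steps between rows, each correctly read letter is worth 0.02 logMAR. A minimal sketch of this scoring rule, assuming a standard ETDRS-style chart whose largest row is logMAR 1.0 (the function and its defaults are my own illustration, not a published protocol):

```python
def letter_by_letter_logmar(total_correct, top_row_logmar=1.0,
                            letters_per_row=5, row_step=0.1):
    """Letter-by-letter logMAR score on an ETDRS-style chart.

    Each correct letter earns row_step / letters_per_row
    (0.02 logMAR on a standard chart). Reading a full row leaves
    the score exactly at that row's logMAR value.
    """
    credit_per_letter = row_step / letters_per_row
    return (top_row_logmar + row_step) - credit_per_letter * total_correct

# Reading only the five letters of the 1.0 row gives logMAR 1.0;
# reading all 55 letters down to the 0.0 row gives logMAR 0.0.
```

The logarithmic size progression is what makes this per-letter credit uniform across the whole chart.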
There are many other charts that follow the same design principles, and there is a broad range of different optotypes, including different sets of letters, numbers, pictorial symbols, Landolt rings, Tumbling Es, and characters and letters from many different languages. Often, such charts are referred to as “LogMAR charts” because they are associated with VA scores being expressed in terms of logMAR. Although, at first glance, the various LogMAR charts are very similar in appearance, they do not necessarily yield equivalent VA scores.
Some optotype sets are easier or more difficult to recognize than others. On many of the charts that use numbers, symbols, or characters from different languages, the optotypes have heights and widths that are significantly different from those used for traditional 5×5 optotypes (e.g., Tumbling E, Landolt ring, Sloan letters), yet they are given the same size designation. Even though such charts may have the LogMAR chart format, the spacing between adjacent optotypes can be significantly narrower or wider than the 5-unit spacing between the adjacent optotypes used in Tumbling E, Landolt ring, and ETDRS charts. Depending on the ocular disorder, spacing can have significant effects on VA scores.5
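To see what a size designation implies physically, recall that a traditional 5×5 optotype is five times the height of its critical detail, so its height subtends 5 × MAR arc-minutes at the testing distance. A sketch of that geometry (my own illustration, assuming the 5×5 convention described above):

```python
import math

def optotype_height_mm(logmar, test_distance_m):
    """Physical height of a traditional 5x5 optotype.

    The critical detail subtends MAR = 10**logmar arc-minutes, and
    letter height is 5x the critical detail, so the height subtends
    5 * MAR arc-minutes at the testing distance.
    """
    mar_arcmin = 10 ** logmar
    height_angle_rad = math.radians(5 * mar_arcmin / 60)
    return 1000 * test_distance_m * math.tan(height_angle_rad)

# At 4 m, a logMAR 0.0 letter is about 5.8 mm tall and a
# logMAR 1.0 letter about 58 mm tall.
```

A chart whose optotypes depart from the 5×5 proportions can carry the same size label while presenting letters of a measurably different physical height, which is exactly why the label alone does not identify the chart.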
When clinicians and researchers use alternative LogMAR charts for VA measurement, they should be aware of how optotype difficulty, size, and spacing may differ from other charts and be conscious that such differences can affect results.
Again, reports of VA measured with LogMAR charts should identify the specific chart used. For example, it is not sufficient to say a “LogMAR chart with numbers” because, among available number charts, there can be differences of >20% in the height of the numbers or in the spacing between numbers, even when the labeled size ratings are all the same.
It is important that researchers and curious clinicians have the information required to replicate the vision testing procedures that are reported in the literature. Authors and editors have a responsibility to ensure that research reports clearly define their terms and are specific in identifying the tests that were used. Testing conditions such as chart luminance and viewing distances should be reported. The procedures used in conducting the VA measurements affect scores. Authors should clearly describe their methods, including their rules for stopping, whether patients were encouraged to guess, any use of pointing or other help with localization, and the method used to assign the VA score.
I urge authors and clinicians who report VA scores to explicitly define the terms they use and to give specific details when identifying the test charts and describing their procedures.
Ian L. Bailey
School of Optometry
University of California, Berkeley
415 Minor Hall
Berkeley, California 94720-2020