We examined the possible associations between variables using mixed-effects analyses of variance, with “eyes” as a repeated measure within “students,” “school” as a fixed factor, and the covariate appropriate to each analysis. For our Bland-Altman analyses, we normalized the x axis to be centered at x = 0 and then tested the hypothesis that the y intercept was zero.
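The Bland-Altman centering step described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and the function and variable names are ours.

```python
def bland_altman(scores_a, scores_b):
    """Bland-Altman quantities for two measurement methods.

    Returns the centered means (x axis, shifted so its mean is 0),
    the signed differences, and the least-squares slope and intercept
    of difference on centered mean.  Because x is centered, testing
    the intercept against zero tests whether the two methods disagree
    on average, as described in the text.
    """
    means = [(a + b) / 2.0 for a, b in zip(scores_a, scores_b)]
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    grand = sum(means) / len(means)
    x = [m - grand for m in means]        # normalize x axis to center at 0
    sxx = sum(v * v for v in x)
    sxy = sum(v * d for v, d in zip(x, diffs))
    slope = sxy / sxx if sxx else 0.0
    intercept = sum(diffs) / len(diffs)   # OLS intercept at x = 0
    return x, diffs, slope, intercept
```

With a constant offset between the two methods, the fitted intercept recovers that offset and the slope is zero.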
We used Rasch analysis (Winsteps version 3.69 software26) with the Andrich rating scale model27,28 to score the IVI_C, collapsing the five-category response scale to three categories15 and evaluating the performance of individual questions using fit statistics.29,30 The question about confidence in getting to school was eliminated from analyses because of an item infit mean square statistic outside published norms.26,30 We converted Rasch “person measures” to a 0- to 100-point scale for ease of interpretation.
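The conversion of Rasch person measures to a 0- to 100-point scale is, in the simplest case, a linear rescaling. The sketch below uses the sample extremes as anchor points, which is our assumption for illustration; the article does not state its anchors.

```python
def rescale_measures(measures, lo=None, hi=None):
    """Linearly map Rasch person measures (logits) onto 0-100.

    The anchor points lo and hi default to the sample extremes; that
    default is an assumption of this sketch, not the article's
    (unstated) choice of anchors.
    """
    lo = min(measures) if lo is None else lo
    hi = max(measures) if hi is None else hi
    return [100.0 * (m - lo) / (hi - lo) for m in measures]
```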
The simplest indication of the utility of the Ohio Contrast Cards is that they were used successfully. Of 26 partially sighted students enrolled in the study, we obtained Ohio Contrast Card data from both eyes of 17 students and on one eye of 8 students (the other eye having no light perception). Malingering was detected in one additional student, and her data were eliminated from the data set.
Within students, vision in the two eyes was quite similar. This was revealed in preliminary analyses of variance under the general linear model, with “better/worse eye” (from the average of the grating and letter charts) crossed with “test” (letter chart vs. grating cards). For both visual acuity and contrast threshold, there was an effect of test but no effect of better versus worse eye (F1,60 = 1.530, P = .221, for acuity; F1,60 = 1.368, P = .247, for contrast threshold). The similarity of performance across eyes is not surprising because most students' blinding conditions were presumptively bilateral (Appendix A, available at http://links.lww.com/OPX/A302). There was no difference in the overall level of visual performance between the Ohio State School for the Blind and summer students on a multivariate ANOVA after pooling data across the two eyes (F4,18 = 1.510, P = .241).
Clinically, it is often useful to compare a patient's contrast sensitivity to his/her visual acuity.31 The mixed-procedures analysis revealed strong associations between Bailey-Lovie logMAR acuity and Pelli-Robson contrast threshold (Fig. 3A: F1,22.740 = 30.392, P < .0001) and between the Teller Acuity Card and Ohio Contrast Card results (Fig. 3B: F1,30.606 = 8.613, P = .006), after eliminating nonsignificant effects for school.
Grating Tests Versus Letter Tests
Figures 4A and 4B compare the performance on the grating tests to the corresponding letter tests. Most data fell below the solid equality lines (slope = 1), indicating that the grating tests revealed better performance than the letter tests. Teller Acuity Card logMAR performance depended on the Bailey-Lovie logMAR value (Fig. 4A: F1,22.964 = 40.294, P < .0001), after eliminating a nonsignificant effect of school. Contrast threshold measured using the Ohio Contrast Cards was associated with Pelli-Robson contrast threshold (Fig. 4B: F1,32.913 = 33.078, P < .0001) and also with school (F1,32.935 = 5.881, P = .021). The contrast threshold data generally fell below the lower dashed line (Fig. 4B), indicating that performance on the grating tests was often better than the best prediction from the limit of reproducibility on the Pelli-Robson chart.25
The results from Figs. 4A and 4B are shown as Bland-Altman33 plots in Figs. 5A and 5B, which show the signed differences between the logarithms of the letter and grating scores as a function of their means. The Bland-Altman acuity difference did not depend significantly on the Bland-Altman mean acuity score (F1,27.797 = 3.262, P = .082), but there was a statistically significant effect of school (F1,32.212 = 7.059, P = .012). After averaging across eyes (for the students who were tested in both eyes), post hoc t tests revealed a significant residual for Ohio State School for the Blind students (mean, 0.373 [SD, 0.237]; t12 = 5.447; P < .0001), but no significant residual for the summer students (mean, 0.194 [SD, 0.300]; t7 = 1.72; not statistically significant). For the contrast data, the Bland-Altman difference data did not depend significantly on the Bland-Altman average data (F1,27.209 = 1.312, P = .262). School was significant overall (F1,27.726 = 4.944, P = .034). After pooling across eyes, the average value for the Ohio State School for the Blind students was significantly above zero (mean, 0.458 [SD, 0.249]; t12 = 6.367; P < .0001), whereas the average for the summer camp students (mean, 0.159; t7 = 0.974; not statistically significant) was not above zero. The results for both visual acuity and contrast sensitivity averages remained statistically significant after correction for two post hoc comparisons in each case.
Thus, Bland-Altman analysis confirms the impression from Fig. 4 that performance on the grating tests was generally better than performance on the letter charts for the Ohio State School for the Blind students, whereas the summer students did about the same on the letter charts and grating cards.
Modeling the Pelli-Robson Contrast Sensitivity Results
The Pelli-Robson chart was designed to test contrast sensitivity near the peak of the contrast sensitivity function, that is, at 3 m for the normal observer. In their original article, Pelli et al.4 suggested adjusting the test distance for low-vision patients, and a 1-m test distance is commonly used when the patient has reduced visual acuity. We tested at 1 m for all but two students (students 1 and 2). However, even the 1-m standard may be inappropriate for some low-vision patients. Contrast sensitivity measured using the Pelli-Robson chart could fall short of both the optimum contrast sensitivity and the contrast sensitivity measured using square-wave gratings, for this reason alone.
We dealt with this problem by performing a post hoc analysis comparing the empirical Pelli-Robson contrast sensitivity to the predicted level of the contrast sensitivity function at the spatial frequency of the Pelli-Robson letters at 1 m. We modeled the square-wave contrast sensitivity function using the square-wave template in Fig. 1A,5 which was based on the standard model of contrast detection.20,21 We assumed linear pooling of contrast within spatial frequency-tuned channels, and we used the parameters for channel spacing (0.5 octave), channel bandwidth (1.4 octaves) and channel pooling Minkowski exponent (β = 4) from Table 2 of Watson and Ahumada.21 The high constant contrast sensitivity at low spatial frequencies occurs because the contrast of the many harmonic components of the Fourier spectrum of the square wave decreases with increasing spatial frequency, but this decrease is matched by the increasing density of the harmonics along a log spatial-frequency axis.
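The plateau argument in this paragraph can be checked numerically: within a log-frequency channel of fixed bandwidth, the linearly pooled square-wave harmonic contrast is nearly independent of the fundamental frequency once the fundamental is well below the channel center. The sketch below uses the stated 1.4-octave bandwidth with a Gaussian tuning profile and flat within-channel sensitivity; the Gaussian shape and the cutoffs are assumptions of this sketch, not the article's exact model.

```python
import math

def channel_response(center_f, fundamental, bandwidth_oct=1.4):
    """Linearly pooled response of one log-frequency channel to the
    odd harmonics of a unit-contrast square wave.

    Channel tuning is modeled as a Gaussian on a log2 axis with the
    given full width at half height (1.4 octaves, from the text);
    the Gaussian shape itself is an assumption of this sketch.
    """
    sigma = bandwidth_oct / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    upper = center_f * 2.0 ** (5.0 * sigma)   # ignore harmonics far above the channel
    total, k = 0.0, 1
    while k * fundamental <= upper:
        f = k * fundamental
        amp = 4.0 / (math.pi * k)             # k-th square-wave harmonic contrast
        gain = math.exp(-(math.log2(f / center_f) ** 2) / (2.0 * sigma ** 2))
        total += amp * gain                   # linear pooling within the channel
        k += 2                                # odd harmonics only
    return total

# Halving the fundamental halves the harmonic amplitudes but doubles
# their density along the frequency axis, so the channel response
# barely changes -- the low-frequency plateau described in the text.
r_low = channel_response(8.0, 0.1)
r_lower = channel_response(8.0, 0.05)
```

Consistent with the text, `r_low` and `r_lower` agree closely, and both sit below the response to a grating centered on the channel.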
We translated the contrast sensitivity function template relative to log spatial frequency and log contrast sensitivity axes to match the student's Teller Acuity Card and Ohio Contrast Card threshold data. This strategy requires the reasonable assumption that the shape of the spatial contrast sensitivity function for square waves, like the contrast sensitivity function shape for sine waves, is the same for low-vision students as for normally sighted individuals (Fig. 1C).9 We then used that template to estimate the spatial frequency and contrast sensitivity of the peak of the square-wave contrast sensitivity function for each student and also his/her contrast sensitivity at the peak of the spatial frequency band used to identify the Pelli-Robson letters. We estimated this channel frequency to be 1.466 cy/deg, using the formula from Majaj et al.,34 which has been replicated by others35 and was also shown to apply equally well to the visual periphery of normally sighted eyes9 (see Fig. 4A of Chung and Legge9) and to the central vision of amblyopic eyes.36 For comparison, the spatial frequency suggested by the three legs of the Sloan E (2.5 cy/letter) was 0.889 cy/deg at 1 m. We discuss the implications of this choice of spatial frequency below. The Pelli-Robson scores for students 1 and 2, who were tested at distances much closer than 1 m, were omitted from the corresponding graphs and analyses.
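The translation-fitting step can be sketched with a simplified two-piece template: a plateau and a linear falling limb on log-log axes (the article's template came from the channel model and is smoother). Solving for the horizontal and vertical shifts that pass the template through the Ohio Contrast Card point and the Teller Acuity Card cutoff is then a two-equation problem. All shape parameters below are illustrative assumptions, not the article's values.

```python
PLATEAU_LOGS = 2.0   # assumed template plateau level (log10 sensitivity)
KNEE_LOGF = 0.0      # assumed knee position (log10 c/deg)
SLOPE = -2.5         # assumed falling-limb slope on log-log axes

def template(logf):
    """Toy square-wave CSF template: flat below the knee, linear above."""
    if logf <= KNEE_LOGF:
        return PLATEAU_LOGS
    return PLATEAU_LOGS + SLOPE * (logf - KNEE_LOGF)

def fit_shifts(logf_occ, logs_occ, logf_cut):
    """Shifts (dx, dy) that place the template through the measured
    low-frequency contrast point (logf_occ, logs_occ) and through
    log sensitivity 0 at the grating acuity cutoff logf_cut.
    Assumes the contrast point lands on the plateau and the cutoff
    on the falling limb of the shifted template."""
    dy = logs_occ - PLATEAU_LOGS
    dx = logf_cut - (PLATEAU_LOGS + dy) / (-SLOPE)
    return dx, dy

def shifted_template(logf, dx, dy):
    """Evaluate the translated template at a given log10 frequency."""
    return template(logf - dx) + dy
```

The shifted template evaluated at log10(1.466) then gives the predicted Pelli-Robson contrast sensitivity for that student.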
Figure 6A shows the template fitted to a typical student's Teller Acuity Card (black dot) and Ohio Contrast Card (red dot) data. The contrast sensitivity function allowed us to estimate the spatial frequency and the level of maximum sensitivity (blue diamond) and the level of contrast sensitivity at the spatial frequency of the Pelli-Robson letters (yellow square). Quantity a is the amount by which the Ohio Contrast Cards underestimate contrast sensitivity relative to the maximum contrast sensitivity of which the student is capable, quantity b is the amount by which the Pelli-Robson chart is predicted to underestimate contrast sensitivity relative to the maximum based on the visibility of the letters alone, quantity c is the amount by which student Pelli-Robson performance differs from its predicted value, and the sum of quantities b + c is the amount that empirical Pelli-Robson performance falls below the maximum. Figs. 6B and 6C show some examples of data and their fitted contrast sensitivity functions.
Best Test Distance
Although Pelli and his colleagues4 suggested a testing distance of 3 m, the testing distance in a low-vision setting is 1 m by convention. If the testing distance is much too long, the fundamental spatial frequency of the Pelli-Robson letters will be well onto the falling limb of the contrast sensitivity function (Fig. 6), and the patient's full visual ability to detect contrast will not be measured. The spatial frequency at the maximum of the model contrast sensitivity function allows us to estimate the best testing distance for each eye in this study (shown in Fig. 7A). Fifty-eight percent of students' eyes needed to be tested at 0.5 m or closer, and 16% of students' eyes needed to be tested at 0.25 m or closer. When the Pelli-Robson chart is used clinically, the examiner often has the patient's Bailey-Lovie logMAR acuity (VA) in hand, so Fig. 7B compares the estimated best test distance to the logMAR data of each eye. This association was statistically significant (F1,34 = 42.170, P < .0001) after eliminating students 1 and 2 and the data from eyes for which logMAR data were not available. The results in Fig. 7B suggest a mnemonic rule of thumb: the testing distance should be approximately (1.5–VA) meters (solid line). Testing distances farther than (2–VA) meters are too far away, and distances closer than (1–VA) meters are probably too close (upper and lower dashed lines, respectively).
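The rule of thumb above can be written directly. The 0.25-m floor in this sketch is our added practical assumption, not part of the published rule.

```python
def pelli_robson_distance(logmar, floor_m=0.25):
    """Rule-of-thumb Pelli-Robson test distances (meters) from a
    patient's Bailey-Lovie logMAR acuity (VA), per the text:
    recommended ~ (1.5 - VA) m, with (2 - VA) m too far and
    (1 - VA) m probably too close.  The floor is an assumed
    practical minimum, not from the article.
    """
    recommended = max(1.5 - logmar, floor_m)
    near_limit = max(1.0 - logmar, floor_m)
    far_limit = max(2.0 - logmar, floor_m)
    return recommended, near_limit, far_limit
```

For example, a patient with 1.0 logMAR acuity would be tested at about 0.5 m, no farther than 1 m.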
Predicted Pelli-Robson Performance
Fig. 4C shows the predicted square-wave contrast sensitivity at the Pelli-Robson spatial frequency for each student as a function of his/her observed Pelli-Robson score. These two quantities were reliably associated (F1,23.984 = 18.143, P < .0001 on a mixed-effects analysis), but there was no significant effect of school (F1,23.760 = 2.966, P = .098). A mixed-effects analysis showed that the Bland-Altman difference scores (Fig. 5C) did not depend on the Bland-Altman averages (F1,32.390 = 0.011, not statistically significant) or school (F1,28.933 = 3.337, P = .078, not statistically significant). After pooling across eyes, the Bland-Altman difference scores were not different from zero (mean, −0.018; t20 = −0.194, not statistically significant). Thus, the model fit the data quite well, suggesting that students identified the letters at about the same level of contrast as they required to detect them.
Estimating Student Visual Capabilities
It is natural to wonder how well the Pelli-Robson chart and Ohio Contrast Cards estimated the best contrast sensitivity of which a student was capable, when tested at the best distance. One advantage of the Ohio Contrast Cards is that their grating spatial frequency (0.15 c/deg) was almost always below the maximum of the contrast sensitivity function (the red dots were mostly to the left of the blue diamonds in Fig. 6), so the maximum possible error is set by quantity e in Fig. 6A, that is, the separation (−0.189 log10 units, or a factor of approximately 0.65) between the peak of the contrast sensitivity function and the constant contrast sensitivity level at low spatial frequencies. In fact, 67% of the values of a were less than 0.15 log10 units (white bars in Fig. 7C). By comparison, the contrast sensitivity at the Pelli-Robson letter frequency (1.47 cy/deg) could be much lower than the maximum of the contrast sensitivity function (quantity b). Only 29% of the estimated values of b were less than 0.15 log10 units, the median value of b was 0.36 log10 units, and for two eyes, b was more than 1 log10 unit (gray bars in Fig. 7C). Thus, the letters were not optimally visible to the students. As discussed previously, the empirical Pelli-Robson contrast sensitivity (upright green triangle in Fig. 6A) was not significantly different from the estimated square-wave contrast sensitivity at the Pelli-Robson frequency (Figs. 3C, 4C), so the average value of quantity c was not significantly different from zero (mean, 0.091 [SD, 0.447]; t30 = 0.938; not statistically significant). The median value of (b + c) (black bars in Fig. 7C) was 0.507 log10 units (mean, 0.516 [SD, 0.363]). In short, performance in identifying the Pelli-Robson letters was far short of the best students could have achieved.
With the corrected grating contrast sensitivity data from our theoretical analysis in hand, we are in a position to determine whether the same individuals showed reduced letter chart performance on both the acuity and the contrast measures. Figure 8 shows the amount by which each student's Pelli-Robson score fell short of the prediction based on his/her contrast sensitivity function (quantity c in Fig. 6A), as a function of the amount by which his/her Bailey-Lovie score fell short of his/her Teller Acuity Card score (quantity d in Fig. 6A). After eliminating the nonsignificant effect of school, the mixed-effects analysis of variance revealed that quantities c and d were highly associated with each other (F1,27.012 = 14.084, P < .0001). A linear regression line fit the data well: c = −0.140 + 0.827 * d (bold line in Fig. 8). This suggests that students varied in their skill in identifying letters and that this variation had a similar impact on their overall visual performance on both eye charts.
Our Modeling Assumptions
Of course, the contrast sensitivity function on which these analyses are based is the model described in the introduction, and the spatial frequency band that students used to identify the Pelli-Robson letters is the value from Majaj et al.,34 Chung and Legge,9 and Pelli et al.36 We tried various other templates, including ones with higher and lower horizontal sections at low spatial frequencies and more or fewer broadly or narrowly tuned channels, and found little qualitative difference from the predictions we report here. We also investigated the impact of assuming that the spatial frequency band used to identify the Pelli-Robson letters was 0.889 cy/deg (2.5 cy/letter, as for the Sloan E22). For the lower letter spatial frequency, quantity b was smaller than we estimate here (but still highly statistically significantly different from zero) and c was larger (and statistically significant for students in both schools), whereas the sum b + c (black bars in Fig. 7C) was not affected. Further research will be required to evaluate these assumptions empirically.
Vision-related Quality of Life
The main reason for measuring a visually disabled patient's contrast sensitivity is to understand the likely impact of the patient's low vision on his/her vision-related quality of life. Therefore, for students who provided data from both eyes, we chose the eye with the better logMAR acuity on the Bailey-Lovie Chart for comparison to the IVI_C scores because we expected that the better eyes would be the limiting visual factor in our students' lives. For students who used only one eye, the tested eye's vision data were used. Data from students who did not provide all four measures on at least one eye were excluded from the analysis.
Because the four vision tests were highly correlated with one another (Figs. 3, 4), we performed a stepwise linear regression to find the vision test(s) that accounted for statistically significant amounts of the variance in the IVI_C scores, while controlling for the significant correlations among the vision tests, with an additional factor for school. Only the Ohio Contrast Card data were significantly related to students' scores on the IVI_C test (partial correlation coefficient = −0.565, t = −3.134, P = .005; Fig. 9). The other three vision tests showed partial correlations between −0.061 and +0.136, P > .548, and the partial correlation for school was −0.330, P = .133. We obtained similar results when we eliminated the rightmost data point in Fig. 9 as a possible outlier: only the Ohio Contrast Card data predicted the IVI_C scores (partial correlation coefficient = −0.565, t = −3.134, P = .005; the other three tests having partial correlations between −0.011 and 0.077, P > .650, and there was still no effect of school). When we repeated the analysis while excluding the Ohio Contrast Card data, the Pelli-Robson data were strongly related to IVI_C scores (partial correlation coefficient = −0.563, t = −3.742, P = .001), with a significant effect of school (partial correlation coefficient = −0.486, t = −3.332, P = .001), with Ohio State School for the Blind students having a more satisfactory vision-related quality of life. These results suggest that the Pelli-Robson chart can also provide valuable information. Together, these analyses point to sensory visual contrast sensitivity as a potent predictor of vision-related quality of life, as other investigators have found.2
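A partial correlation of the kind reported here can be computed by correlating residuals after regressing out the controlled variable. The sketch below handles a single covariate, whereas the article's stepwise procedure controlled for several predictors at once; it is an illustration of the quantity, not the authors' analysis.

```python
import math

def _residuals(y, x):
    """Residuals of y after removing its least-squares fit on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    b = sum((v - mx) * (w - my) for v, w in zip(x, y)) / sxx
    a = my - b * mx
    return [w - (a + b * v) for v, w in zip(x, y)]

def partial_corr(y, x, z):
    """Partial correlation of y and x, controlling for one covariate z:
    the ordinary correlation of their residuals after regressing on z."""
    ry, rx = _residuals(y, z), _residuals(x, z)
    sxy = sum(a * b for a, b in zip(rx, ry))
    return sxy / math.sqrt(sum(v * v for v in rx) * sum(v * v for v in ry))
```

When y = x + z exactly, the partial correlation of y and x given z is 1, because regressing out z leaves identical residuals in both.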
In this project, we compared the results of four vision tests, including the new Ohio Contrast Card test. The four test results were generally correlated with one another. However, as is the common experience of clinicians who use both tests, performance on the Teller Acuity Cards was generally better, and often substantially better, than the performance on the Bailey-Lovie letter chart. This result is in good agreement with recent work of Bittner et al.37
Of the four tests we examined, only better contrast sensitivity on the Ohio Contrast Cards was independently associated with better vision-related quality of life as indicated by the IVI_C questionnaire. We suspect that this strong association occurred because these students' vision-related quality of life is mostly determined by their limited ability to perform simple, everyday tasks for which large stimuli are the most important. For example, orientation and mobility require the ability to see the terrain underfoot, stairs, doorways, and large obstacles, and social interactions benefit from the ability to see people's faces and to judge people's emotions from their posture. All these aspects of these students' lives depend on the visibility of stimuli that become hard to see when they are too low in contrast, not when they are too small, and none of them depend on students' ability to recognize and identify letters or other optotypes.
Performance for Gratings Versus Letters
Our modeling exercise showed that student ability to see and identify optotypes was compromised compared with what is expected in a normally sighted person. In the case of the Pelli-Robson chart, this is probably mostly a sensory deficit, because the observed Pelli-Robson performance was similar to the prediction based on the estimated contrast sensitivity function. By comparison, student performance on the Bailey-Lovie chart was generally below the high spatial frequency cutoff predicted by their Teller Acuity Card performance, suggesting that there are additional limitations on letter acuity.
We consider two additional limitations, which probably apply to both contrast sensitivity and acuity. First, even for normally sighted observers, a higher level of contrast is generally required to identify stimuli, rather than to simply detect them.38,39 The dashed fiducial lines in Figs. 4C, 5B, and 5C are based on the estimate from Pelli et al.38 that 0.230 log10 units higher contrast is required for identifying letters than for detecting them. The fiducial lines agree with the residual results reasonably well on average, even though the Bland-Altman average data are not different from zero, because the residual variance is quite large. To obtain a prediction for visual acuity, we translated the contrast sensitivity function template downward by 0.230 log10 units to predict the acuity cutoff for identifying letters. For a typical student with average Ohio Contrast Card and Teller Acuity Card performance, the estimated Bailey-Lovie logMAR acuity was 0.104 log10 units below the Teller Acuity Card cutoff (dashed line in Fig. 5A). Quantity d was statistically significantly larger than 0.104 log units (t31 = 4.227, P < .0001). Thus, the additional contrast required for letter identification is consistent with the Pelli-Robson data, but does not explain the poor Bailey-Lovie Chart data compared with the Teller Acuity Card acuity data.
Second, there may be additional difficulties posed by the task of identifying the optotypes on an eye chart. For example, if a student has a scotoma that conceals part of an optotype such as a letter, he/she must scan the various parts of the letter to identify it. This makes letter identification more a question of educated inference than of actual recognition or reading. A scotoma is likely a more important factor for optotypes near the acuity limit than it is for the larger letters of the Pelli-Robson chart. By comparison, a grating is simply detected in the card tests we used here, not recognized or identified. Furthermore, the information for grating detection is distributed throughout the stimulus, and the student may be able to find it on the left or right of the card, even if the grating as a whole is only partly seen (see Bittner et al.37 for a similar discussion). Many stimuli that a partially sighted student sees cover a large part of the visual field (e.g., a wall), even when the specific stimulus (e.g., the edge of a doorway) is localized. For this reason, measuring the visibility of a large contrast or acuity grating may be a better way of determining how well a student with compromised visual capabilities can function in everyday life. This observation is in good agreement with the strong association between the Ohio Contrast Card data and the IVI_C results.
Several individual students had such great difficulty reading the letters that the explanations offered previously seem inadequate. Students 1 and 2 had severely compromised performance on the Pelli-Robson chart despite being tested at only a few centimeters' distance, and a third student was unable to identify the letters on either letter chart (data plotted outside the boxes in Figs. 3 and 4). It is beyond the scope of this article to determine what specifically caused these outlying and excluded students' difficulty in identifying letters in the presence of often much better performance on the card tests. However, we note that all were intellectually able, keeping up with their curriculum using Braille and other accommodations. The vision-related quality-of-life scores for students 1 and 2 are indicated in Fig. 9, which shows that these students were not obviously more disabled than the other partially sighted students in this study.
We particularly draw the reader's attention to student 1, who had cortical visual impairment. He was an outlier on most graphs because his grating performance was so much better than his letter performance. His better-eye performance on the letter charts revealed 2.84 logMAR (20/1400) visual acuity and 0.70 log10 contrast sensitivity, whereas his performance on the grating cards revealed 0.397 logMAR (20/50) acuity and 1.9 log10 contrast sensitivity. We suspect that student 1 might use dorsal-stream–based strategies40 to interact with the physical world, perhaps mediated by a nonstriate pathway to the sensory-motor areas in the parietal cortex, which may have been spared by his injury. We note that he enjoys remarkably good mobility without a cane and succeeds in many other motor skills required in everyday life, despite his severe cortical vision impairment. The fact that the card tests used looking and pointing rather than recognizing or reading as the indicator task is consistent with this view.
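The Snellen equivalents quoted for student 1 follow from the standard conversion between logMAR and the 20-ft Snellen denominator, which can be checked directly:

```python
def logmar_to_snellen_denominator(logmar):
    """Standard conversion: 20-ft Snellen denominator = 20 * 10**logMAR."""
    return 20.0 * 10.0 ** logmar
```

For student 1, 0.397 logMAR gives a denominator of about 50 (20/50), and 2.84 logMAR gives about 13,800, which the text rounds to 20/1400.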
The Ohio Contrast Cards
This project was designed to determine whether the Ohio Contrast Cards showed promise as a useful test for use with low-vision patients who cannot recognize and identify letters or other optotypes and for whom the spatial frequency maximum of the contrast sensitivity function is unknown. We recognize that this is only the first step in the development of this new test and that dedicated research on the reproducibility of the measurements will be required before it can be used with confidence in the clinic. Furthermore, work with patients in other settings, for example, elderly patients or patients with multiple disabilities, will be needed before we can confidently recommend the Ohio Contrast Cards for all types of clinical practice where visually impaired patients are seen.
The Ohio Contrast Cards were convenient to use in tandem with the Teller Acuity Cards. The typical Ohio State School for the Blind student showed approximately 0.458 log10 units better performance on the Ohio Contrast Cards than on the Pelli-Robson chart, or approximately three groups of three letters on the Pelli-Robson chart. This difference is statistically significant and would probably be clinically significant as well. Similarly, the Teller Acuity Cards produced visual acuity values that are approximately 0.346 logMAR better than the Bailey-Lovie Chart, a difference that is statistically significant and, at approximately 3.5 lines on the Bailey-Lovie Chart, clinically significant as well. The Ohio Contrast Card contrast sensitivity was the only one of the four scores measured here that correlated statistically significantly with students' scores on the IVI_C vision-related quality-of-life questionnaire. This suggests that the combination of a very low spatial frequency grating stimulus, a looking and pointing indicator task, and contrast sensitivity measurement shows promise for the clinical objective of advising the patient and his/her family and caregivers about the success the patient is likely to enjoy in the tasks of everyday life.
1. McDonald MA, Dobson V, Sebris SL, et al. The Acuity Card Procedure: A Rapid Test of Infant Acuity. Invest Ophthalmol Vis Sci 1985;26:1158–62.
2. Lennie P, Van Hemel SB. Visual Impairments: Determining Eligibility for Social Security Benefits. Washington, DC: National Academy Press; 2002.
3. Owsley C, Sloane ME. Contrast Sensitivity, Acuity, and the Perception of ‘Real-World’ Targets. Br J Ophthalmol 1987;71:791–6.
4. Pelli D, Robson J, Wilkins A. The Design of a New Letter Chart for Measuring Contrast Sensitivity. Clin Vis Sci 1988;2:187–99.
5. Campbell FW, Howell ER, Johnstone JR. A Comparison of Threshold and Suprathreshold Appearance of Gratings With Components in the Low and High Spatial Frequency Range. J Physiol 1978;284:193–201.
6. Atkinson J, Braddick O, Moar K. Development of Contrast Sensitivity Over the First 3 Months of Life in the Human Infant. Vision Res 1977;17:1037–44.
7. Hyvärinen L, Rovamo J, Laurinen P, et al. Contrast Sensitivity Function in Evaluation of Visual Impairment due to Retinitis Pigmentosa. Acta Ophthalmol (Copenh) 1981;59:763–73.
8. Hyvärinen L, Rovamo J, Laurinen P, et al. Contrast Sensitivity in Monocular Glaucoma. Acta Ophthalmol (Copenh) 1983;61:742–50.
9. Chung ST, Legge GE. Comparing the Shape of Contrast Sensitivity Functions for Normal and Low Vision. Invest Ophthalmol Vis Sci 2016;57:198–207.
10. Bailey IL, Lovie JE. New Design Principles for Visual Acuity Letter Charts. Am J Optom Physiol Opt 1976;53:740–5.
11. Elliott DB, Sanderson K, Conkey A. The Reliability of the Pelli-Robson Contrast Sensitivity Chart. Ophthalmic Physiol Opt 1990;10:21–4.
12. Reeves BC, Wood JM, Hill AR. Reliability of High- and Low-Contrast Letter Charts. Ophthalmic Physiol Opt 1993;13:17–26.
13. Kiser AK, Mladenovich D, Eshraghi F, et al. Reliability and Consistency of Visual Acuity and Contrast Sensitivity Measures in Advanced Eye Disease. Optom Vis Sci 2005;82:946–54.
14. Cochrane G, Lamoureux E, Keeffe J. Defining the Content for a New Quality of Life Questionnaire for Students with Low Vision (the Impact of Vision Impairment on Children: IVI_C). Ophthalmic Epidemiol 2008;15:114–20.
15. Cochrane GM, Marella M, Keeffe JE, et al. The Impact of Vision Impairment for Children (IVI_C): Validation of a Vision-specific Pediatric Quality-of-life Questionnaire Using Rasch Analysis. Invest Ophthalmol Vis Sci 2011;52:1632–40.
16. West SK, Rubin GS, Broman AT, et al. How Does Visual Impairment Affect Performance on Tasks of Everyday Life? The SEE Project. Salisbury Eye Evaluation. Arch Ophthalmol 2002;120:774–80.
17. Robson JG. Spatial and Temporal Contrast-Sensitivity Functions of the Visual System. J Opt Soc Am 1966;56:1141–2.
18. Nachmias J. Effect of Exposure Duration on Visual Contrast Sensitivity With Square-Wave Gratings. J Opt Soc Am 1967;57:421–7.
19. DePalma JJ, Lowry EM. Sine-Wave Response of the Visual System. II. Sine-Wave and Square-Wave Contrast Sensitivity. J Opt Soc Am 1962;52:328–35.
20. Carney T, Tyler CW, Watson AB, et al. Modelfest: Year One Results and Plans for Future Years. Human Vis Electronic Imaging V 2000;3959:140–51.
21. Watson AB, Ahumada AJ Jr. A Standard Model for Foveal Detection of Spatial Contrast. J Vis 2005;5:717–40.
22. Põder E. Spatial-Frequency Spectra of Printed Characters and Human Visual Perception. Vision Res 2003;43:1507–11.
23. Mayer DL, Beiser AS, Warner AF, et al. Monocular Acuity Norms for the Teller Acuity Cards Between Ages One Month and Four Years. Invest Ophthalmol Vis Sci 1995;36:671–85.
24. Brown AM, Dobson V, Maier J. Visual Acuity of Human Infants at Scotopic, Mesopic and Photopic Luminances. Vision Res 1987;27:1845–58.
25. Dougherty BE, Flom RE, Bullimore MA. An Evaluation of the Mars Letter Contrast Sensitivity Test. Optom Vis Sci 2005;82:970–5.
26. Linacre J. WINSTEPS Rasch Measurement Computer Program, 3.69 ed. Chicago, IL: Winsteps.com; 2009.
27. Andrich D. A Rating Formulation for Ordered Response Categories. Psychometrika 1978;43:561–73.
28. Rasch G. Probabilistic Models for Some Intelligence and Achievement Tests. Copenhagen: Danish Institute for Educational Research; 1960.
29. Massof RW, Rubin GS. Visual Function Assessment Questionnaires. Surv Ophthalmol 2001;45:531–48.
30. Pesudovs K, Burr JM, Harley C, et al. The Development, Assessment, and Selection of Questionnaires. Optom Vis Sci 2007;84:663–74.
31. Brown B, Lovie-Kitchin JE. High and Low Contrast Acuity and Clinical Contrast Sensitivity Tested in a Normal Population. Optom Vis Sci 1989;66:467–73.
32. Woods RL, Lovie-Kitchin J. The Reliability of Visual Performance Measures in Low Vision. In: OSA Technical Digest Series. Vision Science and Its Applications, Vol. 1. Washington, DC: Optical Society of America; 1995:246–9.
33. Bland JM, Altman DG. Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement. Lancet 1986;1:307–10.
34. Majaj NJ, Pelli DG, Kurshan P, et al. The Role of Spatial Frequency Channels in Letter Identification. Vision Res 2002;42:1165–84.
35. Oruç I, Barton JJ. Critical Frequencies in the Perception of Letters, Faces, and Novel Shapes: Evidence for Limited Scale Invariance for Faces. J Vis 2010;10:20.
36. Pelli DG, Levi DM, Chung ST. Using Visual Noise to Characterize Amblyopic Letter Identification. J Vis 2004;4:904–20.
37. Bittner AK, Jeter P, Dagnelie G. Grating Acuity and Contrast Tests for Clinical Trials of Severe Vision Loss. Optom Vis Sci 2011;88:1153–63.
38. Pelli DG, Burns CW, Farell B, et al. Feature detection and letter identification. Vision Res 2006;46:4646–74.
39. Watson AB, Robson JG. Discrimination at Threshold: Labelled Detectors in Human Vision. Vision Res 1981;21:1115–22.
40. Weiskrantz L. Blindsight: A Case Study and Implications. Oxford, England: Clarendon Press; 1986.
© 2017 American Academy of Optometry