Bland-Altman analysis is widely used for assessing the repeatability of measurements or for comparing measurements taken with different instruments. An early article on the technique (Bland and Altman^{1}) has been cited 31,000 times, with 1903 of those citations occurring in the ophthalmic literature.^{2}

The technique involves analyzing pairs of measurements taken from a set of subjects, either from one instrument (to assess repeatability) or from two different instruments (for method comparison). An example using simulated data is shown in Fig. 1A.

FIGURE 1: Bland-Altman plot showing the difference between two methods of measuring spherical equivalent refraction (A and B), plotted against the mean measurements. (A) Confidence intervals for limits of agreement calculated using the exact two-sided tolerance approach.^{3,4} (B) Confidence intervals for limits of agreement calculated using the approximate one-sided tolerance approach suggested by Bland and Altman.^{1,5}

Fig. 1A shows spherical equivalent refractive error measurements made with two different instruments, A and B, on the same 10 subjects. The difference between the two measurements is plotted on the y axis and the mean of the two measurements on the x axis. The mean of differences d¯ (in this case, −0.08 D) and the standard deviation of differences (SD_{diff}) (in this case, 0.73 D) are calculated, and d¯ is plotted as a horizontal reference line, along with two other reference lines, termed 95% limits of agreement. The 95% limits of agreement are calculated using equations based on the standard normal distribution as d¯ ± 1.96 SD_{diff} (in this case, an upper limit of agreement of 1.35 D and a lower limit of agreement of −1.51 D). The 95% limits of agreement have been described by Bland and Altman^{1} as an interval containing 95% of the population of differences. This statement, which is often echoed by other authors (e.g., Armstrong et al.^{6}), is not accurate.
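The limits of agreement calculation is straightforward to sketch in code. The paired readings below are invented for illustration (they are not the article's simulated data set); Python's `statistics` module supplies the sample standard deviation:

```python
from statistics import mean, stdev

# Hypothetical paired spherical equivalent readings (D) from instruments A and B.
# These values are invented for illustration only.
a = [-1.25, -0.50, 0.25, -2.00, -0.75, 1.00, -1.50, 0.00, -0.25, -1.00]
b = [-1.00, -0.75, 0.50, -2.50, -0.25, 0.75, -1.25, -0.50, 0.25, -1.75]

diffs = [x - y for x, y in zip(a, b)]
d_bar = mean(diffs)        # mean of differences
sd_diff = stdev(diffs)     # sample SD of differences (n - 1 denominator)

# Conventional 95% limits of agreement
upper_loa = d_bar + 1.96 * sd_diff
lower_loa = d_bar - 1.96 * sd_diff
print(f"d_bar = {d_bar:.2f} D, SD_diff = {sd_diff:.2f} D")
print(f"limits of agreement: {lower_loa:.2f} D to {upper_loa:.2f} D")
```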

This is because the sample statistics, d¯ and SD_{diff}, are only estimates of the underlying population mean of differences μ_{diff} and population standard deviation σ_{diff}. It is true that 95% of a normally distributed population of differences lies between μ_{diff} ± 1.96 σ_{diff}. However, we can only estimate the probability that 95% of the population of differences lies between d¯ ± 1.96 SD_{diff}. For normally distributed data, it turns out that, unless sample sizes are enormous (>45,349),^{7} there is less than an even chance (i.e., <50%) that 95% of the population of differences lies between the limits of agreement d¯ ± 1.96 SD_{diff}. For the sample size shown in Fig. 1, n = 10, the chance that 95% of the population of differences lies between the limits of agreement is only 37%.^{7}
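The 37% figure can be checked by simulation. The sketch below, a Monte Carlo estimate under my own choice of replication count, draws repeated samples of size n from a standard normal population of differences and counts how often d¯ ± 1.96 SD_{diff} contains at least 95% of that population:

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def coverage_chance(n, reps=40000, seed=1):
    """Monte Carlo estimate of the probability that d_bar +/- 1.96*SD
    captures at least 95% of a standard normal population of differences."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(sample) / n
        s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
        content = phi(m + 1.96 * s) - phi(m - 1.96 * s)
        if content >= 0.95:
            hits += 1
    return hits / reps

# The article (citing reference 7) reports 37% for n = 10.
print(coverage_chance(10))
```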

This uncertainty about how likely the limits of agreement are to contain 95% of the population is the reason that techniques have been developed to calculate confidence intervals for limits of agreement.^{1,3–5,7–13} An example is shown in Fig. 1. This is not a new idea; approximate methods for calculating such confidence intervals have been available since the 1940s^{14} and were suggested for Bland-Altman analysis by Bland and Altman in their seminal 1986 article.^{1} An example of Bland and Altman's approximate method is shown in Fig. 1B. In 2015, building on Ludbrook's earlier work, a more precise method for calculating confidence limits for limits of agreement, based on exact two-sided tolerance factors, was published.^{3,9} The exact method and the approximate method can give quite different results even for relatively large sample sizes.^{7} Despite the usual rigor that researchers apply to describing potential errors in their data, they almost never report confidence limits for limits of agreement. My 2015 article reviewed the use of confidence limits for limits of agreement in articles in Optometry and Vision Science. Of the 160 articles that reported the use of Bland-Altman analysis, only one^{15} reported confidence limits for its limits of agreement. Because it was the prevailing technique, that article used Bland and Altman's approximate method.

Since 2015, the use of confidence limits on limits of agreement has increased but is far from universal. The purpose of this article is, first, to review the use of confidence limits on limits of agreement in Optometry and Vision Science from 2016 to 2018, including whether approximate or exact methods have been used. Second, to assist researchers, the article includes as digital supplementary content an Excel workbook to calculate confidence limits for limits of agreement and an Excel workbook to draw the appropriate limits of agreement and confidence limits on a Bland-Altman plot.

METHODS
The terms “Bland,” “Altman,” “Bland-Altman,” “LoA,” and “limits of agreement” were searched for on the Optometry and Vision Science website, with the search restricted to the words appearing in articles and to the 3-year range from January 2016 to December 2018 inclusive. The search was conducted selecting the “all fields” limitation, which searches in the following fields: “title,” “author,” “abstract,” “full text,” “volume,” and “issue.” The search appears not to recognize terms that occur only in references or in figure captions. The journal articles were then read to determine whether they had used confidence intervals for 95% limits of agreement and, if used, whether the confidence intervals were exact or approximate. There was no assessment of the scientific merit of using Bland-Altman analysis or of the appropriateness of reporting Bland-Altman results, beyond assessing the use of confidence intervals for limits of agreement.

RESULTS
There were 47 articles^{16–62} that mentioned Bland-Altman analysis. Two of these articles^{45,50} did not actually report a result but rather discussed the use of Bland-Altman analysis in future studies. In addition, there were eight articles^{63–70} that mentioned “LoA” or “limits of agreement” but did not mention Bland-Altman. Two of these articles^{69,70} did not actually report limits of agreement but discussed other studies using them, and another article had used the abbreviation “LOA” for “lower order aberrations.”^{65} Excluding these five articles left 50 studies that used, or were judged to have used, Bland-Altman analysis.

Of the 50 remaining studies, eight reported the use of confidence intervals for limits of agreement. Of those eight studies, four studies^{24,27,32,49} used Bland and Altman's approximate method,^{1,5} and four studies^{33,36,42,46} used the exact method.^{3}

Of the 50 studies using Bland-Altman analysis, 48 reported subject numbers in their analysis, with two studies^{51,52} not reporting subject numbers in their Bland-Altman analysis. Some studies performed Bland-Altman analyses with more than one sample size; I assessed the smallest sample size from each study. These ranged from n = 3 to n = 2072, with a median sample size of 40. Of the four studies that used the approximate method for calculating confidence intervals for limits of agreement, the minimum sample size was 30 and the maximum was 2072, with a median of 179.5. Of the four studies that used the exact method, the minimum sample size was 10 and the maximum was 127, with a median of 23.5. Of the studies that did not use confidence intervals for limits of agreement, sample sizes ranged from n = 3 to n = 785, with a median of 40.

DISCUSSION
This review of Optometry and Vision Science articles found that, of 50 studies using Bland-Altman limits of agreement, only eight (16%) reported confidence intervals for limits of agreement. This is a considerable improvement on a 2015 review of 160 Optometry and Vision Science articles, which found only one study^{15} (0.6%) that reported such confidence intervals. A similar review of the anesthesiology literature in 2000 found that 2 (5%) of 42 articles reported such confidence intervals.^{10}

It seems that researchers' use of confidence intervals for Bland-Altman limits of agreement has increased. This may be due to increased awareness of the value of their use and to the availability of new techniques and software. Bland and Altman's approximate techniques for calculating confidence intervals are used as frequently as the exact method. The superiority of the exact method over the approximate method has been addressed previously,^{7} but I note that one study (Xiong et al.^{27}) used the approximate method with a sample size of 2072. The tables published for the exact method provide coefficients only up to a maximum sample size of 1001, so Xiong et al. were using the only method available to them. At that sample size, the confidence limits calculated by any method are very close to the limits of agreement, and the approximation is certainly adequate.

The median smallest sample size used was 40. Such a sample size might seem large enough that confidence limits are not relevant, but limits of agreement have wider confidence ranges than those for μ_{diff} and σ_{diff}. For a sample size of 40, the exact 95% confidence limits for 95% limits of agreement run from d¯ ± 1.6214 SD_{diff} to d¯ ± 2.5244 SD_{diff}. One study appears to have used a sample size of three to calculate limits of agreement, although the method of calculation was difficult to discern from the methods description.^{66} For the second smallest sample size used (n = 5),^{55} the authors did not calculate confidence intervals for their limits of agreement. Those sample sizes were for a subgroup analysis of repeatability (the main analysis used a sample size of 40), and the limits of agreement for that subgroup seem to have been incorrectly calculated, but 95% confidence limits for 95% limits of agreement would have been from d¯ ± 1.2397 SD_{diff} to d¯ ± 6.1569 SD_{diff}. Subgroup analyses often involve small sample sizes, and for those, calculating confidence limits for limits of agreement can be useful. One study^{57} performed a Bland-Altman analysis on a sample size of n = 6 for a comparison between calculated IOL power and nominal power, plotted against nominal power. I have reproduced their figure, slightly altered, with confidence limits for limits of agreement added to show how wide they can be for such small sample sizes (Fig. 2). The limits of agreement in Fig. 2 are approximately ±0.4 D, but the 95% confidence limits for those limits of agreement are between ±0.25 and ±1.1 D. Small sample sizes mean a larger range of possible limits of agreement, and calculating the confidence range for limits of agreement can help in interpreting them. A range of ±1.1 D in Fig. 2 may still be an acceptable range for IOL power variation, but one cannot know that until the confidence range is calculated.

FIGURE 2: Example of the value of adding confidence limits for limits of agreement for small sample sizes. The figure is taken from an Optometry and Vision Science article,^{57} but inner and outer bounds have been added for the limits of agreements.^{3}

The exact method described in 2015^{3} is based on two-sided tolerance factors. It calculates an interval, symmetrical around the sample mean of differences d¯, which has a given confidence that at least 95% of the population is contained in that interval. The interpretation is illustrated in Fig. 1A. The outer bands show an interval for which there is 97.5% confidence that at least 95% of the population is contained. The inner bands show an interval for which there is 2.5% confidence that at least 95% of the population is contained. Readers may not be familiar with this way of expressing confidence limits, but there is value in using this method. In particular, it captures the notion that limits of agreement should be considered as a pair of bounds between which 95% of the population lies. Thus, there is a pair of bounds so narrow that it is unlikely (i.e., 2.5% confidence) that at least 95% of the population lies between them, and there is a pair of bounds for which it is very likely (i.e., 97.5% confidence) that at least 95% of the population lies between them. My 2015 article published exact tables (Table 2 and Supplementary Table 2) for calculating these bounds, but similar tables have been published elsewhere,^{71,72} including in ISO standards.^{73} I termed this “calculating confidence limits as a pair” because they describe a pair of bounds for the interval. The approach is more correctly termed calculating two-sided tolerance limits for an interval containing 95% of the population of differences. The 2015 article showed examples plotted as conventional error bars, but my current preference is to plot them as shown in Figs. 1A and 2, to show that they are pairs of bounds to an interval.
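The exact tolerance factors come from published tables and require numerical integration to compute. As a rough stand-in, the sketch below uses the common chi-square approximation for two-sided tolerance factors (with a Wilson-Hilferty chi-square quantile); this is an approximation, not the article's exact method, but for n = 40 it lands close to the exact outer factor of 2.5244:

```python
import math

Z_975 = 1.959964  # 97.5th percentile of the standard normal distribution

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile.
    This sketch supports only the two quantiles needed below."""
    if abs(p - 0.025) > 1e-9 and abs(p - 0.975) > 1e-9:
        raise ValueError("only p = 0.025 or p = 0.975 supported here")
    z = -Z_975 if p < 0.5 else Z_975
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

def tolerance_factor(n, confidence=0.975):
    """Approximate two-sided tolerance factor k, so that d_bar +/- k*SD_diff
    contains at least 95% of the differences with the stated confidence.
    Uses the chi-square approximation; exact factors come from tables."""
    df = n - 1
    chi2 = chi2_quantile(1.0 - confidence, df)
    return Z_975 * math.sqrt(df * (1.0 + 1.0 / n) / chi2)

# Outer (97.5% confidence) and inner (2.5% confidence) factors for n = 40;
# the exact table values quoted in the text are 2.5244 and 1.6214.
print(round(tolerance_factor(40, confidence=0.975), 3))
print(round(tolerance_factor(40, confidence=0.025), 3))
```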

Bland and Altman^{1,5} provided an alternative approach, an approximate method that is similar to the use of one-sided tolerance factors. This approach estimates confidence intervals for single boundaries. The upper confidence limits show approximately the 97.5 and 2.5% confidence limits for a boundary below which 97.5% of the population lies. The lower confidence limits show approximately the 97.5 and 2.5% confidence limits for a boundary above which 97.5% of the population lies. It has been shown elsewhere^{7} that the approximate method differs from the exact method, being overly permissive for n < 40 for outer limits and n < 499 for inner limits.
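One common statement of Bland and Altman's approximate method gives each limit of agreement a standard error of roughly SD_{diff} × sqrt(1/n + 1.96²/(2(n − 1))), with a t-based interval around it. The sketch below applies that form to the Fig. 1 values; the t critical value is supplied by hand, since the Python standard library has no t quantile function:

```python
import math

def approx_loa_ci(d_bar, sd_diff, n, t_crit):
    """Approximate confidence intervals for the 95% limits of agreement,
    using se(LoA) ~ sd_diff * sqrt(1/n + 1.96**2 / (2*(n - 1))).
    t_crit is the two-tailed 5% critical value for n - 1 df, supplied
    by the caller (e.g. 2.262 for n = 10)."""
    lower = d_bar - 1.96 * sd_diff
    upper = d_bar + 1.96 * sd_diff
    se = sd_diff * math.sqrt(1.0 / n + 1.96 ** 2 / (2.0 * (n - 1)))
    half = t_crit * se
    return (lower - half, lower + half), (upper - half, upper + half)

# Values from Fig. 1: d_bar = -0.08 D, SD_diff = 0.73 D, n = 10
lower_ci, upper_ci = approx_loa_ci(-0.08, 0.73, 10, t_crit=2.262)
print(f"lower LoA CI: {lower_ci[0]:.2f} to {lower_ci[1]:.2f} D")
print(f"upper LoA CI: {upper_ci[0]:.2f} to {upper_ci[1]:.2f} D")
```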

A contributing factor to the continued use of approximate (as opposed to exact) confidence limits for limits of agreement might be the availability of software. At least one company includes the approximate method for calculating Bland and Altman limit of agreement confidence limits among its software options. Obtaining exact confidence limits, by contrast, requires generating the Bland-Altman plot, calculating the confidence limits separately, and then modifying the plot using drawing software to add the confidence intervals to the limits of agreement. Researchers may find that onerous, so I have included as supplementary material two Excel workbooks to assist with drawing confidence intervals for limits of agreement.

The first workbook (Supplemental Digital Content 1, available at https://links.lww.com/OPX/A429 ) takes the input of d¯ , SD_{diff} , and n . These are entered into a box with the title “Input.” Those values are used to calculate the 95% limits of agreement and their confidence limits based on two-sided tolerance factors, along with a confidence interval for d¯ and a probability that d¯ is different from zero (two-tailed, based on the t distribution), which are shown in a box marked “Output.” The workbook is locked except for the input cells. These currently contain data from Fig. 1 as an example, but these can be deleted, and researchers can insert their own values. The author will make unlocked versions of the workbook available on request.
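The mean-of-differences part of that output can be reconstructed in a few lines. The sketch below is my own reconstruction, not the workbook itself; because the standard library lacks a t-distribution CDF, it reports the t statistic against a caller-supplied critical value rather than an exact p value:

```python
import math

def mean_diff_summary(d_bar, sd_diff, n, t_crit):
    """95% CI for the mean of differences and a two-tailed t test of
    d_bar != 0. t_crit is the two-tailed 5% critical value for n - 1 df
    (e.g. 2.262 for n = 10), supplied by the caller."""
    se = sd_diff / math.sqrt(n)
    ci = (d_bar - t_crit * se, d_bar + t_crit * se)
    t_stat = d_bar / se
    significant = abs(t_stat) > t_crit
    return ci, t_stat, significant

# Values from Fig. 1: d_bar = -0.08 D, SD_diff = 0.73 D, n = 10
ci, t_stat, sig = mean_diff_summary(-0.08, 0.73, 10, t_crit=2.262)
print(f"95% CI for mean difference: {ci[0]:.2f} to {ci[1]:.2f} D")
print(f"t = {t_stat:.2f}; differs from zero at the 5% level: {sig}")
```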

The second workbook (Supplemental Digital Content 2, available at https://links.lww.com/OPX/A430 ) is illustrated in Fig. 3 and takes an input of two columns of matched pairs data, reading 1 and reading 2. The workbook will take up to 1001 pairs of values entered under the heading “INPUT NUMBERS HERE.” It calculates and outputs useful statistics, including d¯ , SD_{diff} , and n, and confidence intervals for the mean of differences and 95% limits of agreement, and produces a chart with the appropriate reference lines drawn on. The entry columns are unlocked and can be altered by the user. All other cells and sheets are locked. An example of the output chart is shown in Fig. 3 (reproducing the data in Fig. 1 ). Researchers might find this graphical output useful, although they should adjust the axis labels so that they provide appropriate information and meet the requirements of the journals to which they wish to submit. As mentioned, the output is locked, so to adjust the figure, it is necessary to copy the output sheet to another sheet and adjust axes, labels, and scales as appropriate. In addition, the author will make unlocked versions of the workbook available on request. A user guide for both workbooks (Supplemental Digital Content 3) is available at https://links.lww.com/OPX/A431 .

FIGURE 3: Worksheet input and output page for calculating confidence limits for limits of agreement. The input is two columns of matched pair data (left). The output is Bland-Altman statistics and a Bland-Altman plot with confidence limits for 95% limits of agreements.

When reporting Bland-Altman limits of agreement, authors should report confidence intervals for the limits of agreement. If the summary statistics are reported in table form, the inner and outer confidence limits should be reported for the limits of agreement. If Bland-Altman plots are generated, then researchers should present the information on confidence limits in the figure legend, represent the confidence limits graphically in the figure itself, or both. The latter option (drawing the confidence limits) provides readers with a clear visual representation of the likely errors in limits of agreement and is, in my opinion, the preferred option, especially when those errors are large compared with the limits of agreement (i.e., when subject numbers are small). However, individual journals and editors may have different requirements, and authors may find that the addition of the confidence intervals detracts from the clarity of a figure. In those circumstances, the confidence intervals may be included in figure legends.

CONCLUSIONS
The use of confidence intervals for Bland-Altman limits of agreement has increased in Optometry and Vision Science , although some authors are still using approximate methods when better methods are available. To encourage the use of exact confidence intervals for limits of agreement, Excel workbooks have been included to assist with the calculations and plot the data with appropriate reference lines.

REFERENCES
1. Bland JM, Altman DG. Statistical Methods for Assessing Agreement between Two Methods of Clinical Measurement. Lancet 1986;1:307–10.

2. Clarivate Analytics. Web of Science. Available at: http://wokinfo.com/. Accessed December 13, 2018.

3. Carkeet A. Exact Parametric Confidence Intervals for Bland-Altman Limits of Agreement. Optom Vis Sci 2015;92:e71–80.

4. Carkeet A. Comment on: Statistical Methods for Conducting Agreement (Comparison of Clinical Tests) and Precision (Repeatability or Reproducibility) Studies in Optometry and Ophthalmology. Ophthalmic Physiol Opt 2015;35:345–6.

5. Bland JM, Altman DG. Measuring Agreement in Method Comparison Studies. Stat Methods Med Res 1999;8:135–60.

6. Armstrong RA, Davies LN, Dunne MC, et al. Statistical Guidelines for Clinical Studies of Human Vision. Ophthalmic Physiol Opt 2011;31:123–36.

7. Carkeet A, Goh YT. Confidence and Coverage for Bland-Altman Limits of Agreement and their Approximate Confidence Intervals. Stat Methods Med Res 2018;27:1559–74.

8. Hamilton C, Stamey J. Using Bland-Altman to Assess Agreement between Two Medical Devices—Don't Forget the Confidence Intervals! J Clin Monit Comput 2007;21:331–3.

9. Ludbrook J. Confidence in Altman-Bland Plots: A Critical Review of the Method of Differences. Clin Exp Pharmacol Physiol 2010;37:143–9.

10. Mantha S, Roizen MF, Fleisher LA, et al. Comparing Methods of Clinical Measurement: Reporting Standards for Bland and Altman Analysis. Anesth Analg 2000;90:593–602.

11. Olofsen E, Dahan A, Borsboom G, et al. Improvements in the Application and Reporting of Advanced Bland-Altman Methods of Comparison. J Clin Monit Comput 2015;29:127–39.

12. Zou GY. Confidence Interval Estimation for the Bland-Altman Limits of Agreement with Multiple Observations per Individual. Stat Methods Med Res 2013;22:630–42.

13. Donner A, Zou GY. Closed-form Confidence Intervals for Functions of the Normal Mean and Standard Deviation. Stat Methods Med Res 2010;21:347–59.

14. Wald A, Wolfowitz J. Tolerance Limits for a Normal Distribution. Ann Math Stat 1946;17:208–15.

15. McClenaghan N, Kimura A, Stark LR. An Evaluation of the M&S Technologies Smart System II for Visual Acuity Measurement in Young Visually-normal Adults. Optom Vis Sci 2007;84:218–23.

16. McAnany JJ, Smith BM, Garland A, et al. iPhone-based Pupillometry: A Novel Approach for Assessing the Pupillary Light Reflex. Optom Vis Sci 2018;95:953–8.

17. Pierro L, Iuliano L, Gagliardi M, et al. Central Corneal Thickness Reproducibility among Ten Different Instruments. Optom Vis Sci 2016;93:1371–9.

18. Lee TE, Yoo C, Kim YY. Comparison of Three Different Tonometers in Eyes with Angle Closure. Optom Vis Sci 2019;96:124–9.

19. Wittich W, St. Amour L, Jarry J, et al. Test-retest Variability of a Standardized Low Vision Lighting Assessment. Optom Vis Sci 2018;95:852–8.

20. Arribas-Pardo P, Mendez-Hernandez C, Cuiña-Sardiña R, et al. Tonometry After Intrastromal Corneal Ring Segments for Keratoconus. Optom Vis Sci 2017;94:986–92.

21. Dave PA, Dansingani KK, Jabeen A, et al. Comparative Evaluation of Foveal Avascular Zone on Two Optical Coherence Tomography Angiography Devices. Optom Vis Sci 2018;95:602–7.

22. Dey A, David RL, Asokan R, et al. Can Corneal Biomechanical Properties Explain Difference in Tonometric Measurement in Normal Eyes? Optom Vis Sci 2018;95:120–8.

23. Badakere SV, Choudhari NS, Rao HL, et al. Comparison of Scleral Tono-Pen Intraocular Pressure Measurements with Goldmann Applanation Tonometry. Optom Vis Sci 2018;95:129–35.

24. Yeung D, Sorbara L. Scleral Lens Clearance Assessment with Biomicroscopy and Anterior Segment Optical Coherence Tomography. Optom Vis Sci 2018;95:13–20.

25. Pearce JG, Maddess T. Inter-visit Test-retest Variability of OCT in Glaucoma. Optom Vis Sci 2017;94:404–10.

26. Nguyen NH. Holographic Refraction and the Measurement of Spherical Ametropia. Optom Vis Sci 2016;93:1235–42.

27. Xiong S, Lv M, Zou H, et al. Comparison of Refractive Measures of Three Autorefractors in Children and Adolescents. Optom Vis Sci 2017;94:894–902.

28. Alabdulkader B, Leat SJ. A Standardized Arabic Reading Acuity Chart: The Balsam Alabdulkader-Leat Chart. Optom Vis Sci 2017;94:807–16.

29. Garcia N, Melvi G, Pinto-Fraga J, et al. Lack of Agreement among Electrical Impedance and Freezing-point Osmometers. Optom Vis Sci 2016;93:482–7.

30. Haworth KM, Chandler HL. Seasonal Effect on Ocular Sun Exposure and Conjunctival UV Autofluorescence. Optom Vis Sci 2017;94:219–28.

31. Weise KK, Swanson MW, Penix K, et al. King-Devick and Pre-season Visual Function in Adolescent Athletes. Optom Vis Sci 2017;94:89–95.

32. Khadka J, Fenwick EK, Lamoureux EL, et al. Item Banking Enables Stand-alone Measurement of Driving Ability. Optom Vis Sci 2016;93:1502–12.

33. Nguyen MT, Berntsen DA. Aberrometry Repeatability and Agreement with Autorefraction. Optom Vis Sci 2017;94:886–93.

34. Tong KK, Lujan BJ, Zhou Y, et al. Directional Optical Coherence Tomography Reveals Reliable Outer Nuclear Layer Measurements. Optom Vis Sci 2016;93:714–9.

35. Hopkins GR 2nd, Dougherty BE, Brown AM. The Ohio Contrast Cards: Visual Performance in a Pediatric Low-vision Site. Optom Vis Sci 2017;94:946–56.

36. Sorkin N, Rosenblatt A, Barequet IS. Predictability of Biometry in Patients Undergoing Cataract Surgery. Optom Vis Sci 2016;93:1545–51.

37. Harvey EM, Leonard-Green TK, Mohan KM, et al. Interrater and Test-retest Reliability of the Beery Visual-motor Integration in Schoolchildren. Optom Vis Sci 2017;94:598–605.

38. Ramasubramanian V, Glasser A. Predicting Accommodative Response Using Paraxial Schematic Eye Models. Optom Vis Sci 2016;93:692–704.

39. Portela-Camino JA, Martín-González S, Ruiz-Alcocer J, et al. A Random Dot Computer Video Game Improves Stereopsis. Optom Vis Sci 2018;95:523–35.

40. Arango T, Morse AR, Seiple W. Comparisons of Two Microperimeters: The Clinical Value of an Extended Stimulus Range. Optom Vis Sci 2018;95:663–71.

41. Demirel S, Ozmert E, Batioglu F, et al. A Color Perimetric Test to Evaluate Macular Pigment Density in Age-related Macular Degeneration. Optom Vis Sci 2016;93:632–9.

42. Ulaganathan S, Read SA, Collins MJ, et al. Measurement Duration and Frequency Impact Objective Light Exposure Measures. Optom Vis Sci 2017;94:588–97.

43. Laby DM, Kirschen DG. The Refractive Error of Professional Baseball Players. Optom Vis Sci 2017;94:564–73.

44. Ostrin LA. Objectively Measured Light Exposure in Emmetropic and Myopic Adults. Optom Vis Sci 2017;94:229–38.

45. Khadka J, Fenwick E, Lamoureux E, et al. Methods to Develop the Eye-tem Bank to Measure Ophthalmic Quality of Life. Optom Vis Sci 2016;93:1485–94.

46. Hirano M, Hutchings N, Simpson T, et al. Validity and Repeatability of a Novel Dynamic Visual Acuity System. Optom Vis Sci 2017;94:616–25.

47. Pearce JG, Maddess T. Retest Variability in the Medmont M700 Automated Perimeter. Optom Vis Sci 2016;93:272–80.

48. Ortiz-Toquero S, Rodriguez G, de Juan V, et al. Rigid Gas Permeable Contact Lens Fitting Using New Software in Keratoconic Eyes. Optom Vis Sci 2016;93:286–92.

49. Han QM, Cong LJ, Yu C, et al. Developing a Logarithmic Chinese Reading Acuity Chart. Optom Vis Sci 2017;94:714–24.

50. Fuller DG, Chan N, Smith B. Neophyte Skill Judging Corneoscleral Lens Clearance. Optom Vis Sci 2016;93:300–4.

51. de Fez D, Luque MJ, García-Domene MC, et al. Can Applications Designed to Evaluate Visual Function Be Used in Different iPads? Optom Vis Sci 2018;95:1054–63.

52. Tan B, Zhou Y, Yuen TL, et al. Effects of Scleral-lens Tear Clearance on Corneal Edema and Post-lens Tear Dynamics: A Pilot Study. Optom Vis Sci 2018;95:481–90.

53. Riede-Pult BH, Evans K, Pult H. Investigating the Short-term Effect of Eyelid Massage on Corneal Topography. Optom Vis Sci 2017;94:700–6.

54. Markoulli M, Gokhale M, You J. Substance P in Flush Tears and Schirmer Strips of Healthy Participants. Optom Vis Sci 2017;94:527–33.

55. Rosenfield M, Ciuffreda KJ. Evaluation of the Svone Handheld Autorefractor in a Pediatric Population. Optom Vis Sci 2017;94:159–65.

56. Sailoganathan A, Rou LX, Buja KA, et al. Assessment of Visual Acuity in Children Using Crowded Lea Symbol Charts. Optom Vis Sci 2018;95:643–7.

57. Bonaque-González S, Bernal-Molina P, Marcos-Robles M, et al. Optical Characterization Method for Tilted or Decentered Intraocular Lenses. Optom Vis Sci 2016;93:705–13.

58. Shin MC, Chung SY, Hwang HS, et al. Comparison of Two Optical Biometers. Optom Vis Sci 2016;93:259–65.

59. Brussee T, van den Berg TJTP, van Nispen RMA, et al. Association between Contrast Sensitivity and Reading with Macular Pathology. Optom Vis Sci 2018;95:183–92.

60. Brussee T, van den Berg TJ, van Nispen RM, et al. Associations between Spatial and Temporal Contrast Sensitivity and Reading. Optom Vis Sci 2017;94:329–38.

61. Richdale K, Bullimore MA, Sinnott LT, et al. The Effect of Age, Accommodation, and Refractive Error on the Adult Human Eye. Optom Vis Sci 2016;93:3–11.

62. Gür Güngör S, Akman A, Küçüködük A, et al. Non-contact and Contact Tonometry in Corneal Edema. Optom Vis Sci 2016;93:50–6.

63. Kim E, Bakaraju RC, Ehrmann K. Power Profiles of Commercial Multifocal Soft Contact Lenses. Optom Vis Sci 2017;94:183–96.

64. Twa MD, Schulle KL, Chiu SJ, et al. Validation of Macular Choroidal Thickness Measurements from Automated SD-OCT Image Segmentation. Optom Vis Sci 2016;93:1387–98.

65. Leube A, Ohlendorf A, Wahl S. The Influence of Induced Astigmatism on the Depth of Focus. Optom Vis Sci 2016;93:1228–34.

66. Perches S, Collados MV, Ares J. Repeatability and Reproducibility of Virtual Subjective Refraction. Optom Vis Sci 2016;93:1243–53.

67. McAllister F, Harwerth R, Patel N. Assessing the True Intraocular Pressure in the Non-human Primate. Optom Vis Sci 2018;95:113–9.

68. El-Nimri NW, Walline JJ. Centration and Decentration of Contact Lenses during Peripheral Gaze. Optom Vis Sci 2017;94:1029–35.

69. Laby DM, Kirschen DG, Govindarajulu U, et al. The Hand-eye Coordination of Professional Baseball Players: The Relationship to Batting. Optom Vis Sci 2018;95:557–67.

70. Kee CS, Leung TW, Kan KH, et al. Effects of Progressive Addition Lens Wear on Digital Work in Pre-presbyopes. Optom Vis Sci 2018;95:457–67.

71. Odeh RE, Owen DB. Tables for Normal Tolerance Limits, Sampling Plans, and Screening. New York: Marcel Dekker, Inc; 1980.

72. Odeh RE. Tables of Two-sided Tolerance Factors for a Normal Distribution. Commun Stat Simulat 1978;7:183–201.

73. International Organization for Standardization (ISO). Statistical Interpretation of Data—Part 6: Determination of Statistical Tolerance Intervals: ISO 16269-6:2014. Geneva, Switzerland: ISO; 2014.