
Multivariate analysis of refractive data

Mathematics and statistics of spherocylinders

Naeser, Kristian, MD; Hjortdal, Jesper, MD, PhD, Dr Med

Journal of Cataract & Refractive Surgery: January 2001 - Volume 27 - Issue 1 - p 129-142
doi: 10.1016/S0886-3350(00)00816-6

Abstract

In a recent study, Naeser and Hjortdal1 reported a method for statistical analysis of astigmatism. The confidence area for astigmatism was reported as an ellipse in 2-dimensional space. These calculations were based on the concept of polar values2–8 and were performed by relatively simple mathematical methods.1,9

The present study discusses statistical analysis of spherocylinders, with specific reference to cataract and refractive surgery. The objective is to present methods that may be useful not only for the 3-dimensional analysis of spherocylinders but also for multivariate analyses of more complex optical entities. This multidimensional analysis requires formal mathematical matrix calculation. For dimensions beyond 2, matrix calculations become tedious. Fortunately, matrix operations are well suited for computer calculations, but the user must have a minimal understanding of the nature of matrix algebra.

We will review basal matrix operations and expressions necessary for describing multivariate data. We will present the vector format for a spherocylinder and briefly discuss the use of scalars. Subsequently, we will use these methods for a clinical comparison of autorefraction and manifest refraction. Finally, we will consider the possibilities of performing these calculations in commercially available computer programs.

Materials and methods

This paper is both a theoretical and practical study. For the theoretical part, we attempted to maintain the expressions for variances, covariances, and correlation coefficients as in our previous study of bivariate analysis of astigmatism,1 based on the theory described by Hald.9 The material on matrix algebra was derived from several sources, but we have attempted to maintain the logic reported in a recent comprehensive textbook on the subject.10 The text has been meticulously edited and includes only the absolute minimum information, allowing the nonmathematician reader to understand and perform multivariate analysis.

For the practical part, we performed subjective refraction and autorefraction on 50 eyes. There were 10 female and 40 male eyes. The individuals were staff of the Department of Ophthalmology, Århus, Denmark; their mean age was 31 years (range 18 to 48 years). Manifest refraction to the nearest 0.25 diopter (D) in sphere and cylinder was performed. Autorefraction was performed with a Nikon NRK-8000 autorefractor. All spherocylinders were primarily registered in a minus-cylinder axis format. These spherocylinders were transformed to a plus-cylinder power format (see below and example 3). All spherocylinders were converted to a power vector, based on the spherical equivalent power and 2 polar values separated by an arc of 45 degrees.1,4 Subsequent statistical analyses were performed with these entities. Data were processed with the SPSS (Statistical Package for the Social Sciences) statistical program. Matrix calculations were performed with the Mathcad (Mathsoft Inc.) program. The accuracy of the autorefraction was taken as the paired difference for each refractive element derived from autorefraction and manifest refraction. Univariate, bivariate, and trivariate statistical and graphical analyses were performed.

Matrix algebra and multivariate statistics

This section describes multivariate statistical methods and derives expressions for eigenvalues and eigenvectors, which characterize the dimensions and spatial orientations of the relevant confidence ellipsoids.

Basic Concepts

Multiple variables can be arranged in a data matrix of rows and columns. An [N × p] matrix has N rows and p columns and is denoted by X:

In this data matrix, each row could represent p variables of refractive data on a single individual, with a total of N individuals. For each variable x, the first suffix refers to the row number, while the second suffix refers to the column number. The rows in a data matrix are called row vectors. Data occurring in the rth row of X, written as a row vector, are denoted by x′r:

When these data are written in a column vector, the vector is denoted by xr:

Generally, the row vector x′ is said to be the transpose of the column vector x.

Calculation Methods

Addition of 2 or several matrices is possible provided the number of rows and columns is identical; ie, both matrices should be of the type [N × p]. Identical row–column elements are added or subtracted. Addition of the 2 [2 × 2] matrices x and y:

Multiplication with a constant is accomplished by multiplying the constant with each element in the matrix. Multiplying the constant k with the matrix x:

Multiplication of 2 Matrices

Two matrices can be multiplied provided the number of columns in the first matrix is identical to the number of rows in the second matrix. The 2 matrices should be of the form [k × n] and [n × m], creating a new matrix of the form [k × m]. This is a so-called row-column multiplication, in which you go across the row in the first matrix and down the column in the second:

Multiplying the [2 × 2] matrix x with the [2 × 1] matrix (vector) y creates the [2 × 1] matrix (vector) z, with z1 = x11 · y1 + x12 · y2 and z2 = x21 · y1 + x22 · y2.
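As a minimal illustration of these operations (a sketch in Python with NumPy; the numerical values are hypothetical):

```python
import numpy as np

# Two hypothetical [2 x 2] matrices and a [2 x 1] vector
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = np.array([[5.0, 6.0],
              [7.0, 8.0]])
v = np.array([[1.0],
              [2.0]])

print(x + y)      # addition of identically sized matrices, element by element
print(2.5 * x)    # multiplication with a constant
print(x @ v)      # row-column multiplication: [2 x 2] times [2 x 1] gives a [2 x 1] vector
print(x.T)        # transpose: the row vectors of x become column vectors
```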

Special Matrices

A square matrix is of the form [p × p]. A symmetric matrix is a square matrix in which the diagonal from the upper left corner to the lower right corner contains independent variables, while the remaining variables are mirrored in this diagonal. Mathematically, the symmetric matrix is defined by X′ = X; see equations 2 and 3 for vector transpositions. The identity matrix I is a symmetric matrix with the number 1 diagonally from upper left to lower right and zeroes in the remaining positions. The identity matrix is defined by XI = IX = X for any matrix X. A [2 × 2] identity matrix is given by I = [1 0; 0 1].

The covariance matrix Σ, sometimes called the variance–covariance matrix, is a symmetric matrix with the variances for the specific variables on the diagonal and the covariances in the remaining positions. The correlation matrix P is a symmetric matrix with the number 1 on the diagonal and correlation coefficients in the remaining positions. The mean of vector x is the combined mean of its variables, expressed as

The variances for x1 and x2 are denoted s[x1]2 and s[x2]2, respectively. The covariance between x1 and x2 is denoted s[x1;x2]. The correlation coefficient between x1 and x2 is denoted r[x1;x2]. For an [N × 2] matrix x, the mean vector, the covariance matrix, and the correlation matrix can be shown as

The mean vector and the covariance matrix fully characterize a multivariate data set. The correlation matrix together with the variances can be used as an alternative to the covariance matrix. Both notations are seen in statistical software packages and textbooks.
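These summary matrices can be computed directly from a data matrix; a minimal sketch in Python with NumPy (the [N × 2] data are hypothetical):

```python
import numpy as np

# Hypothetical [N x 2] data matrix: N observations of 2 refractive variables
X = np.array([[ 0.25, -0.50],
              [ 0.00, -0.75],
              [ 0.50, -0.25],
              [ 0.25, -1.00],
              [ 0.75,  0.00]])

mean_vector = X.mean(axis=0)                 # combined mean of the 2 variables
cov_matrix = np.cov(X, rowvar=False)         # variances on the diagonal, covariances off it
corr_matrix = np.corrcoef(X, rowvar=False)   # ones on the diagonal, correlation coefficients off it

print(mean_vector)
print(cov_matrix)
print(corr_matrix)
```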

Traces and Determinants

The trace of a symmetric [p × p] matrix is equal to the sum of its diagonal elements going from the upper left corner to the lower right corner. For the covariance matrix Σ, the trace, denoted by tr(Σ), is defined by

tr(Σ) = s[x1]2 + s[x2]2 + … + s[xp]2

For the covariance matrix shown in equation 9, the trace is the sum of the two variances.

The determinant of a covariance matrix Σ, denoted by |Σ|, is defined by

|Σ| = the sum over j = 1 to p of (−1)^(1+j) · s1j · |Σ1j|

Here, |Σ1j| is the determinant of the matrix obtained from Σ by deleting its first row and jth column. This type of calculation becomes complicated for higher orders of p but can easily be performed on computer programs. For a [2 × 2] matrix, the calculations can be abbreviated somewhat, as the determinant is equal to the difference between the products derived from multiplication in the diagonal directions.

For the covariance matrix Σ shown in equation 9, we calculate the determinant as |Σ| = s[x1]2 · s[x2]2 − s[x1;x2]2.
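A minimal numerical check of the trace and the [2 × 2] determinant shortcut (Python with NumPy; the covariance matrix is hypothetical):

```python
import numpy as np

# Hypothetical [2 x 2] covariance matrix
sigma = np.array([[0.30, 0.10],
                  [0.10, 0.20]])

print(np.trace(sigma))          # sum of the diagonal elements
print(np.linalg.det(sigma))     # determinant
# For a [2 x 2] matrix: difference between the products along the 2 diagonals
print(sigma[0, 0] * sigma[1, 1] - sigma[0, 1] * sigma[1, 0])
```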

Eigenvalues and Eigenvectors

These two entities define the dimension and orientation of the multivariate probability density function. The eigenvalues λ of a covariance matrix Σ are the roots of the polynomial equation |Σ − λ · I| = 0, where I is the identity matrix.

In a [p × p] matrix, this equation becomes a polynomial of the order p in λ. Each eigenvalue λ of the symmetric matrix Σ has a nonzero vector, c, called an eigenvector, satisfying the matrix equation Σ · c = λ · c.

Thus, the matrix Σ has p eigenvalues and p corresponding eigenvectors. Any eigenvector is orthogonal to the remaining eigenvectors. In the case of a covariance matrix of real numbers, the eigenvalues and eigenvectors are always real. The eigenvalues of a [p × p] matrix can be arranged after magnitude (λ1 ≥ λ2 ≥ λ3 … ≥ λp), while the corresponding eigenvectors (c1, c2, c3 … cp) are denoted by the same suffixes. Eigenvectors are not unique, so they are often normalized to unit length; ie, the scalar value or the Euclidean length of the eigenvector is 1.

Some important properties of a symmetric matrix, such as a covariance matrix, are (1) the trace is equal to the sum of its eigenvalues, or mathematically,

tr(Σ) = λ1 + λ2 + … + λp

(2) the determinant is equal to the product of its eigenvalues, or mathematically,

|Σ| = λ1 · λ2 · … · λp

These 2 relations make it possible to check the correctness of the calculations.

In example 1, we will demonstrate the calculation of eigenvalues and eigenvectors from a [2 × 2] covariance matrix.

Example 1. Calculation of the eigenvalues (λ1 and λ2) and eigenvectors (c1 and c2) of the following covariance matrix Σ:

Eigenvalues. |Σ − λ · I| = 0 implies that

λ2 − 11 · λ + 18 = 0

The 2 roots of this equation are 9 and 2. So the 2 eigenvalues of Σ are λ1 = 9 and λ2 = 2.

Checking for correctness: The trace of the covariance matrix is equal to 8 + 3 = 11, which is equal to the sum of the eigenvalues = 9 + 2. The determinant of the matrix is equal to 8 · 3 − 6 = 18, which is equal to the product of the eigenvalues = 2 · 9.

Eigenvectors. To compute an eigenvector corresponding to the largest eigenvalue 9, we use the equation Σ · c = λ · c, which implies that

In this context, c1 and c2 indicate the Cartesian coordinates for a single eigenvector. Any vector in which the first element is √6 times larger than its second will be an eigenvector of Σ. Normalization or scaling of the vector to unit length, together with this relation between the 2 coordinates, requires that

c12 + c22 = 1 and c1 = √6 · c2

So, the normalized eigenvector c1 corresponding to the eigenvalue λ1 = 9 can be represented by the following 2 vectors

In a similar fashion, we can show that the normalized eigenvectors c2 corresponding to the eigenvalue 2 are

The 2 representations of each normalized eigenvector are separated by an arc of 180 degrees. By convention, the 2 eigenvectors in the 2 first quadrants, from zero to 180 degrees, are usually chosen, and only this representation will be used in the following. It is easily seen that the 2 eigenvectors c1 and c2 are orthogonal, but this can also be demonstrated mathematically as c1′ · c2 = 0.

The 2 orthogonal eigenvectors are shown in 2-dimensional space in Figure 1.

Figure 1. (Naeser) This relates to example 1 and shows the normalized eigenvectors c1 and c2. The eigenvectors are orthogonal and both have lengths of 1.
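The figures in example 1 can be verified numerically. A sketch in Python with NumPy; the off-diagonal element √6 is an assumption implied by the stated trace (8 + 3 = 11) and determinant (8 · 3 − 6 = 18):

```python
import numpy as np

# Covariance matrix of example 1; the off-diagonal element sqrt(6) is assumed,
# as implied by the stated trace (11) and determinant (18)
sigma = np.array([[8.0,          np.sqrt(6.0)],
                  [np.sqrt(6.0), 3.0         ]])

eigenvalues, eigenvectors = np.linalg.eigh(sigma)  # eigh handles symmetric matrices
order = np.argsort(eigenvalues)[::-1]              # arrange after magnitude, largest first
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]              # columns are the normalized eigenvectors

print(eigenvalues)                                 # approximately [9.0, 2.0]
print(eigenvectors)                                # unit length, mutually orthogonal
print(np.trace(sigma), eigenvalues.sum())          # trace equals the sum of the eigenvalues
print(np.linalg.det(sigma), eigenvalues.prod())    # determinant equals their product
```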

Testing Hypotheses and Constructing Confidence Regions for Multivariate Data

Univariate data can be tested with a t test. Multivariate data can be tested in a similar manner with the Hotelling T2.11 For univariate data, the Hotelling T2 statistic is equal to the square of the Student t statistic; ie, T2 = t2. For multivariate data, the Hotelling T2 statistic is distributed as an F test with (p, n − p) degrees of freedom in the following manner, where n is the sample size, p is the number of dimensions examined, and α is the type 1 error:

T2 = [p · (n − 1)/(n − p)] · F[α; p, n − p]

These results can be used for descriptive and analytical statistical purposes and to construct confidence regions about the mean refractive status.

Consider a 2-dimensional ellipsoid with its center in the origin of a coordinate system with axes labelled x1 and x2. The lengths of the axes are determined by the values of a and b, each representing half the axis lengths. Provided the ellipsoid has axes parallel to the coordinate axes, it can be reported with the usual equation as

x12/a2 + x22/b2 = 1 (equation 20)

Confidence regions for multivariate data generally have the form of a p-dimensional ellipsoid in p-dimensional space. The center of the ellipsoid is determined by the combined mean of the involved univariate data, ie, by the mean vector. The axes are parallel to the eigenvectors, while the lengths of the axes are determined by the eigenvalues and the chosen confidence limit. It is most convenient1,9,10 to introduce a new equidistant coordinate system, with its center defined by the mean vector and with axes y1, y2, … … yp parallel to the respective eigenvectors. In this coordinate system, the confidence ellipsoid attains the general form shown in equation 20.

A (1 − α) · 100% confidence region for the combined mean value is an ellipsoid centered at the mean vector, oriented along the eigenvectors, and with axis half-lengths of √(λi · [p · (n − 1)/(n · (n − p))] · F[α; p, n − p]) for i = 1, 2, … p.

In the (y1, y2, … yp) coordinate system, the confidence ellipsoid of the mean, ie, the region of constant probability density around the combined mean value, is determined by the value for T2. From equation 19, it follows that the confidence ellipsoid is defined by

y12/(λ1 · T2/n) + y22/(λ2 · T2/n) + … + yp2/(λp · T2/n) = 1

Similarly, the tolerance or normal1 ellipsoid, the area including a certain proportion of the observations, can be reported as

y12/(λ1 · T2) + y22/(λ2 · T2) + … + yp2/(λp · T2) = 1

Equations 22 and 23 are analogous to the use of standard errors of the mean and standard deviations, respectively, for univariate data.
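As a minimal sketch of this test (Python with NumPy and SciPy; the data matrix and the function name hotelling_t2_test are hypothetical), the T2 statistic can be converted to an F value and a P value as follows:

```python
import numpy as np
from scipy import stats

def hotelling_t2_test(X, mu0=None):
    """One-sample Hotelling T2 test of the combined mean against mu0 (default: the zero vector)."""
    n, p = X.shape
    mu0 = np.zeros(p) if mu0 is None else np.asarray(mu0)
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    d = xbar - mu0
    t2 = n * (d @ np.linalg.solve(S, d))       # T2 = n * d' * inverse(S) * d
    f_stat = t2 * (n - p) / (p * (n - 1))      # T2 is distributed as [p(n - 1)/(n - p)] * F(p, n - p)
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, f_stat, p_value

# Hypothetical [n x 3] matrix of paired power-vector differences
rng = np.random.default_rng(0)
X = rng.normal(scale=[0.2, 0.15, 0.7], size=(50, 3))
print(hotelling_t2_test(X))
```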

Example 2. Suppose the data in example 1 were derived from 50 eyes and the mean vector amounted to (1, 2).

The 95% confidence ellipsoid for the theoretical true mean, μ, is constructed as follows: The number of dimensions p = 2 and the number of subjects n = 50, so that T2 = [2 · 49/48] · F[0.05; 2, 48] = 2.04 · 3.19 = 6.51.

In the (y1, y2) coordinate system with center in [1, 2] and axes parallel to c1 and c2, the confidence ellipse emerges as y12/1.17 + y22/0.26 = 1 (equation 24).

This ellipse is demonstrated in Figure 2. The squared radii of the ellipse are the figures in the denominators of the final expression in equation 24. The point (0,0) is not included in the confidence area, indicating a combined mean value significantly different from zero.

Figure 2. (Naeser) Bivariate data from example 2 displaying the 95% confidence ellipse for the combined mean. These data may be reported in the original (x1, x2) coordinate system but are more conveniently described in the transformed (y1, y2) Cartesian model with origin in the combined mean (1,2) and axes parallel to the eigenvectors described in Figure 1. The unequivocal correlations between points in the 2 systems have been reported.1 In this new coordinate system the ellipse is characterized by: y12/1.17 + y22/0.26 = 1.
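The denominators in equation 24 can be reproduced from the eigenvalues of example 1 and the F distribution; a small check in Python with SciPy (the variable names are illustrative):

```python
import numpy as np
from scipy import stats

n, p, alpha = 50, 2, 0.05
eigenvalues = np.array([9.0, 2.0])            # lambda1 and lambda2 from example 1

f_crit = stats.f.ppf(1 - alpha, p, n - p)     # F[0.05; 2, 48], approximately 3.19
t2_crit = p * (n - 1) / (n - p) * f_crit      # critical Hotelling T2 (equation 19)

denominators = eigenvalues * t2_crit / n      # approximately 1.17 and 0.26, as in equation 24
semi_axes = np.sqrt(denominators)             # radii of the 95% confidence ellipse
print(denominators, semi_axes)
```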

Format for describing a single spherocylinder

For characterizing a single spherocylinder in conventional notation, the following are required:

  1. S: spherical power, expressed in diopters.
  2. M: magnitude of net astigmatism, expressed in diopters.
  3. α: direction of the steeper astigmatic meridian (stronger plus meridian or stronger minus axis), expressed in degrees. A power format is symbolized by @.
  4. SEP: the spherical equivalent power, defined as SEP = S + ½ · M. The spherocylinder in conventional minus-cylinder axis format can be expressed as (S + M) − M × α.

The polar value method was originally defined in relation to the steeper convex astigmatic meridian, derived from a keratometer reading. To avoid confusion, we will maintain this notation in the description of refractive spherocylinders in this study. The minus-cylinder axis format is transformed to a plus-cylinder power format:

See example 3 for a calculation of real data.

Regardless of the cylinder notation, this format is useless for statistical analysis, which requires conversion to a suitable power vector.1,12–14

An astigmatism can be uniquely described by 2 separated polar values, such as KP(90) and KP(135), where1–4

The correct format for reporting a spherocylinder has been the subject of some controversy,14–17 and various formats can serve many purposes. A consensus has emerged around the following notation. In the column vector format, the power vector n is defined as the column with the elements ½ · KP(90), ½ · KP(135), and SEP. In the row power vector format,

n′ = (½ · KP(90), ½ · KP(135), SEP)

All entities are “sphere-equivalent” powers.15,17 Each power can be calculated separately, in random combination and with all kinds of advanced mathematical conversions. The polar values are calculated as in equations 27 and 28, in which we now calculate the polar value of the corrective lens. The final result can be retransformed to conventional notation by the following equations:

where p is an integer, as previously described.1 Equation 31 yields both a plus and a minus root,18 but for the present purpose we will consider only the plus root.4
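The conversion between conventional notation and the power vector n can also be sketched in code. The sketch below (Python with NumPy) assumes a polar-value convention, KP(Φ) = M · cos 2(α − Φ) with α the steeper plus-cylinder meridian, that is consistent with the worked values in example 3 below; the function names and the sample refraction are hypothetical, and the sign and reference-meridian conventions should be checked against equations 27 to 32:

```python
import numpy as np

def to_power_vector(sphere, cyl, axis_deg):
    """Minus-cylinder refraction (S, C, axis) to the power vector n = (KP(90)/2, KP(135)/2, SEP).
    Assumes KP(phi) = M * cos(2 * (alpha - phi)), with alpha the steeper meridian."""
    m = -cyl                                   # plus-cylinder magnitude M (cyl is negative)
    alpha = axis_deg % 180.0                   # steeper meridian = minus-cylinder axis
    sep = sphere + cyl / 2.0                   # spherical equivalent power
    kp90 = m * np.cos(np.radians(2.0 * (alpha - 90.0)))
    kp135 = m * np.cos(np.radians(2.0 * (alpha - 135.0)))
    return np.array([kp90 / 2.0, kp135 / 2.0, sep])

def to_plus_cylinder(n):
    """Back-transform a power vector to plus-cylinder power notation (S, +M, alpha);
    only the plus root of the net astigmatism is used, as in the text."""
    kp90, kp135, sep = 2.0 * n[0], 2.0 * n[1], n[2]
    m = np.hypot(kp90, kp135)                  # net astigmatism M
    alpha = 0.5 * np.degrees(np.arctan2(-kp135, -kp90)) % 180.0
    s = sep - m / 2.0                          # sphere of the plus-cylinder form
    return s, m, alpha

# Hypothetical refraction: -0.25 / -1.00 x 120 in minus-cylinder axis format
n = to_power_vector(-0.25, -1.00, 120)
print(n)                                       # approximately ( 0.25,  0.43, -0.75)
print(to_plus_cylinder(n))                     # approximately (-1.25, 1.00, 120.0)
```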

Example 3. Autorefraction and subjective refraction were registered from the right eye of a single individual, as described in Materials and Methods. The difference between the autorefraction and the subjective refraction was calculated as follows. Transforming the measured spherocylinder from minus-cylinder axis format to plus-cylinder power notation and further on to power vector n′ nomenclature:

Autorefraction:

Subjective refraction:

Difference; autorefraction minus subjective refraction with reverse transformation:

The 2 measurements and their difference are shown in Figure 3. This difference is the basal trivariate variable used in estimation of the accuracy of autorefraction.

Figure 3. (Naeser) This refers to example 3. The various refractions are shown as points in a 3-dimensional space. A = vector OA = subjective refraction = (0.25, 0.44, −0.75); B = vector OB = autorefraction = (0.2, 0.32, −0.63); C = vector OC = precision or error of autorefraction = autorefraction − subjective refraction = B − A = vector OB − vector OA = vector AB = (−0.05, −0.12, 0.12).

Scalars

Occasionally, it may be useful to present refractive data as a single quantity, a “scalar”.15–17 A scalar is the Euclidean length, |n|, of a power vector. The mathematical expression for the scalar, or length, of the power vector n is based on the Pythagorean theorem:

|n| = √[(½ · KP(90))2 + (½ · KP(135))2 + SEP2]

Following rearrangement, it is seen that this expression is independent of the astigmatic direction, relying solely on astigmatic magnitude and spherical equivalent power:

|n| = √[(½ · M)2 + SEP2]

Example 4. The scalar value of the difference between autorefraction and subjective refraction, shown in example 3, is √[(−0.05)2 + (−0.12)2 + 0.122] = 0.18 D.

The scalar can be used to express the total dioptric sphere-equivalent strength of a dioptric power vector, but this is a piece of reduced information and does not show the 3-dimensional direction of the power vector. After conversion to a scalar, directional data14 are irretrievably lost. The scalar or its squared value (Mahalanobis distance) can be used to test data for multivariate normal distribution.9,10
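Example 4 can be checked directly from the difference vector given in Figure 3 (Python with NumPy):

```python
import numpy as np

# Power-vector difference from example 3: autorefraction minus subjective refraction
n_diff = np.array([-0.05, -0.12, 0.12])

scalar = np.linalg.norm(n_diff)                 # Euclidean length of the power vector (equation 34)
half_m = np.hypot(n_diff[0], n_diff[1])         # half the net astigmatism, M/2
sep = n_diff[2]
print(scalar)                                   # approximately 0.18 D
print(np.hypot(half_m, sep))                    # same value via equation 35: sqrt((M/2)^2 + SEP^2)
```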

Variance for spherocylindrical power

The total variance of astigmatic power is the sum of the variances of the polar values KP(90) and KP(135).1 Following an analogous argument,1 it can easily be shown that the total variance of spherocylindrical power is the sum of the variances of the elements in the power vector n:

Total variance = s[½ · KP(90)]2 + s[½ · KP(135)]2 + s[SEP]2

where s[x]2 denotes the variance for a specific variable.

Estimation of total variance is extremely important in evaluating refractive procedures, as the variance directly reflects the predictability of the procedure. A high predictability (= a low total variance) is always desirable for refractive procedures.

Accuracy of autorefraction

In this context, the subjective refraction is assumed to be the gold standard and the accuracy of autorefraction is therefore the difference between the results derived from autorefraction and subjective refraction. The difference for each individual was calculated as described in example 3. We consider these differences in the following. The differences are summarized in Table 1. The data are analyzed in a univariate, bivariate, and trivariate manner.

Table 1. Paired differences (autorefraction − subjective refraction) for the elements in the power vector n; univariate data. None of the elements differed significantly from zero.

Univariate Analysis

In univariate analysis, we are dealing with confidence intervals, which are reported in Table 1. None of the average univariate refractive data differed significantly from zero; ie, there was no statistically significant difference between autorefraction and subjective refraction on the univariate level.

The standard deviation for the SEP was by far the largest. According to equation 36, the total variance amounted to 0.663 D2, 87% of which was provided by the variance for SEP. The variance for ½ · KP(90) reached approximately twice the variance for ½ · KP(135); an F test revealed a difference between these 2 variances at a significance level beyond 1%.


Bivariate Analysis

In bivariate analysis, we are dealing with confidence areas. The relevant data in the form of means, variances, covariances, and correlation coefficients are shown in Tables 1 to 3. All covariances and correlation coefficients are close to zero, indicating confidence ellipses with only slight inclination and rotation compared to the original coordinate system.1 The 3 confidence ellipses are shown in Figure 4.

  1. The bivariate distribution of ½ · KP(90) and ½ · KP(135) is uniquely characterized by the following average and [2 × 2] covariance matrices:
Table 2. Correlation coefficients for the elements in the power vector n.

Table 3. Covariances for the elements in the power vector n.

Figure 4. (Naeser) Bivariate analysis; 95% confidence ellipses for the combined means of 2 refractive components. All 3 ellipses contain the point (0,0), indicating a lack of difference from zero at the 5% level. The center of each ellipse is the combined mean for the 2 univariate distributions. Top: combined mean values and 95% CI for ½ · KP(90) and ½ · KP(135). Middle: combined mean values and 95% CI for ½ · KP(90) and SEP. Bottom: combined mean values and 95% CI for SEP and ½ · KP(135).

The bivariate mean did not differ statistically significantly from zero (Hotelling T2 = 1.766, P = .182 with (2, 48) degrees of freedom). The eigenvalues and their corresponding eigenvectors are as follows:

The 95% confidence ellipse for the bivariate mean is shown in Figure 4, top.

The remaining 2 bivariate distributions are calculated in a similar fashion.

  2. The bivariate mean for ½ · KP(90) and SEP did not differ significantly from zero (Hotelling T2 = 1.161, P = .322) (Figure 4, middle). The eigenvalues amounted to 0.059 and 0.577.

  3. The bivariate mean for ½ · KP(135) and SEP did not differ significantly from zero (T2 = 0.530, P = .592) (Figure 4, bottom).


Trivariate Analysis

In trivariate analysis, we are dealing with confidence volumes, ie, confidence ellipsoids in 3-dimensional space. The difference between the 2 spherocylinders is therefore a 3-dimensional distribution characterized by the following mean and covariance matrices:

Eigenvalues amounted to 0.060, 0.026, and 0.578.

The 3 orthogonal eigenvectors were calculated as

Hotelling T2 = 1.178; P = .328 with (3, 47) degrees of freedom. Individual values and the 95% tolerance ellipsoid are depicted in 3-dimensional space in Figure 5. According to equation 23, this ellipsoid is described by the following in the converted coordinate system:

Figure 5. (Naeser) Trivariate analysis. Individual refractive data and the 95% tolerance ellipsoid for observations. In the large figure a, the ellipsoid is viewed from above. In the smaller figures b-c, the ellipsoid is viewed from a position parallel to one of the axes in the coordinate system. In b, for example, we look down along the axis for SEP on the plane determined by ½ · KP(90) and ½ · KP(135).

y12/(λ1 · T2) + y22/(λ2 · T2) + y32/(λ3 · T2) = 1

where F[0.05,3,47] = 2.80 and Hotelling T2 = 8.76.

The confidence ellipsoid for the combined mean is shown in Figure 6; the combined mean did not differ significantly from zero at the 5% level, as the point (0,0,0) is included in the confidence volume. The confidence ellipsoid is constructed in accordance with equation 22.

Figure 6. (Naeser) This figure is analogous to Figure 5 but reports the 95% CI for the 3 combined mean values. In a, the point (0,0,0) is clearly indicated inside the ellipsoid, indicating a lack of significant difference between the 2 methods at the 5% level.
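A sketch of the complete trivariate analysis (Python with NumPy and SciPy): given an [n × 3] matrix of paired power-vector differences, it returns the mean vector, covariance matrix, eigenvalues and eigenvectors, the Hotelling T2 test of the combined mean against zero, and the semi-axes of the confidence and tolerance ellipsoids. The input data and the function name trivariate_analysis are hypothetical:

```python
import numpy as np
from scipy import stats

def trivariate_analysis(D, alpha=0.05):
    """Analyze an [n x 3] matrix of paired power-vector differences: mean vector,
    covariance matrix, eigenvalues/eigenvectors, Hotelling T2 test of the combined
    mean against zero, and semi-axes of the confidence (equation 22) and tolerance
    (equation 23) ellipsoids."""
    n, p = D.shape
    mean = D.mean(axis=0)
    S = np.cov(D, rowvar=False)

    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]                 # arrange after magnitude
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    t2 = n * (mean @ np.linalg.solve(S, mean))        # Hotelling T2
    f_stat = t2 * (n - p) / (p * (n - 1))             # convert to an F statistic (equation 19)
    p_value = stats.f.sf(f_stat, p, n - p)

    t2_crit = p * (n - 1) / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
    conf_semi_axes = np.sqrt(eigvals * t2_crit / n)   # confidence ellipsoid of the mean
    tol_semi_axes = np.sqrt(eigvals * t2_crit)        # tolerance ellipsoid for observations
    return mean, S, eigvals, eigvecs, f_stat, p_value, conf_semi_axes, tol_semi_axes

# Hypothetical paired differences (autorefraction minus subjective refraction) for 50 eyes
rng = np.random.default_rng(1)
D = rng.normal(loc=[0.0, 0.0, -0.05], scale=[0.16, 0.11, 0.76], size=(50, 3))
for item in trivariate_analysis(D):
    print(item)
```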

Discussion

The objective of this study was to present the refractive surgeon with relevant power vectors and to describe their statistical management. The perspective for this method is most importantly the assessment of refractive procedures.

The usual format of sphere, cylinder, and axis may characterize a single refraction or power but is not suited for statistical analysis.1,12–14 For that purpose, conversion to power vectors1,13–23 is necessary. Various power vectors were recently reviewed (Table 2 in reference 13), and each power vector contains a spherical, a meridional, and an oblique astigmatic component. For the present purpose, analyzing spherocylinders, we chose the vector n consisting of an SEP and the halves of 2 polar values separated by an arc of 45 degrees (equations 27 to 29). These 2 polar values can be conceived as 2 cross cylinders, one at 90–180 and one at 45–135 degrees. This power vector is identical to Rabbetts's power v16 and to equation 5 in Harris's paper.17 The diminution of polar values to half magnitude is essential to obtain a sphere-equivalent format.15,17,24 This power vector has the advantage that it may be modified in accordance with the preoperative steeper meridian, thereby reporting the surgically induced flattening and torque of the cylinder. This principle was recently described.8

The present power vector can be used to describe a population of refractive data as in example 3 in which the polar values KP(90)2 and KP(135)4 were used. The polar values were developed for refractive surgery, in which the change in meridional polar values reflects the flattening of the surgical meridian, while the oblique polar values describe the surgically induced rotation or torque.3,4 In the analysis of series of refractive surgeries, it would be relevant to use the polar values of the surgical meridian or the meridian of the corrective plus-cylinder meridian as reference8 in each case, with subsequent compilation of data.

For keratometer readings, a positive polar value indicates a steepening of that specific corneal meridian. For example, a surgically induced with-the-rule corneal astigmatism is characterized by a positive KP(90). In the present study, we have described the polar values of the refractive or corrective astigmatic lens, in which a positive KP(90) indicates a corrective plus lens in the vertical meridian, typically seen in an ocular against-the-rule astigmatism. This type of astigmatism is typically corrected by a flattening of the horizontal meridian. This discrepancy may seem somewhat confusing but is analogous to the usual concepts for refraction; ie, a myopic eye with too much refractive power in comparison to axial length is correctable with a minus lens.

Advantages of the polar value method include invariance under several operations. Polar values are invariant during transposition from plus- to minus-cylinder format.1 From equation 35 it follows that the length of the power vector n relies solely on the values of the astigmatic magnitude and the spherical equivalent power, while it does not vary with astigmatic direction. This implies maintenance or invariance of shape for the confidence ellipsoid during change of reference meridian or rotation of the coordinate system. These changes of reference meridian or rotations will cause a rotation of the confidence ellipsoid around the SEP axis (as in Figure 8 in reference 21).

The confidence area for astigmatism can be reported as an ellipse in 2-dimensional space. The center of the ellipse is the combined mean of 2 polar values2–8 separated by an arc of 45 degrees, while the dimensions and orientation of the ellipse are determined by the variances of and correlation coefficient between these polar values.1 The lengths of the axes of the ellipse are determined by the modified variances of the polar values, and these calculations can be performed with rather simple mathematical methods.1,9

The theory of multivariate analysis of spherocylinders has been developed in several theoretical and practical papers,20–28 which may be relatively inaccessible to the nonmathematician. In the present study, we attempted to describe the complete theory in a single, intelligible paper and to use power vectors relevant for refractive surgery. Statistical analysis of 3-dimensional data is performed with the Hotelling T2. The multivariate mean is simply the average of all univariate variables involved. Using matrix algebra, the orientation of the confidence ellipsoid is determined by the eigenvectors, while the eigenvalues, characterizing the lengths of the ellipsoid radii, are identical to the described1 modified variances. In the new (y1, y2, y3) coordinate system, the covariances between the variables are zero. The confidence ellipsoid in this equidistant coordinate system is therefore fully characterized by the following covariance matrix in which the eigenvalues are listed down the diagonal:

With covariances of zero, it is easily seen that the trace and the determinant amount to the sum and the product, respectively, of the eigenvalues. This is analogous to equations 17 and 18. With 5 decimals, we obtain a determinant 8.904 · 10−4 and a trace of 0.663 from both the original covariance matrix and the matrix listed above. We also note that the trace is mathematically identical to the total variance, as shown in equation 36.

Accuracy and precision of autorefractors have been assessed in several recent studies, but inconsistent power vectors have often been used. Zadnik and coauthors29 disregarded the astigmatism and only compared the spherically equivalent components. Solomon30 compared a number of autorefractors, but used only the vertical and horizontal components of the power vectors.15,24 Wesemann and Dick31,32 used a “weighted axis difference”, not based on optical33 principles. Several studies used the correct power vectors but failed to perform multivariate analysis.34–36 Rubin and Harris37 examined the variation in autorefraction over time using correct multivariate techniques; in this study, confidence ellipsoids were demonstrated as stereo-pair scatter plots.

In the present study, we compared autorefraction to manifest refraction in 50 healthy eyes. Each spherocylinder was transformed to a relevant power vector, and the precision of autorefraction was defined as the difference between autorefraction and manifest refraction. For individual data, the variation was considerably larger for the SEP than for astigmatism. It agrees with our clinical experience that the astigmatism derived from autorefraction is nearly identical to the final subjective refraction, while the sphere needs some adjustment. Table 1 reveals a significantly larger variation for the vertical–horizontal astigmatic components compared to the oblique powers. Similar observations were made by Elliott and coauthors.35 The cause of this discrepancy remains speculative, but the vertical power may be more sensitive to blinking and palpebral tonus than the oblique components.

For aggregate data, univariate, bivariate, and trivariate statistical analyses failed to demonstrate any significant average differences between the 2 refraction methods. No refractive components and no combinations of refractive components displayed significant mean differences. In other words, in groups of healthy eyes, autorefraction may be used as a substitute for manifest refraction.

It is possible to disintegrate videokeratographic data into not only spherocylinders but also more irregular components using Fourier analysis.38 Similarly, Zernike polynomials may characterize aberrations derived from wavefront analysis.39 Statistical analysis of these irregular components is easily managed by the same matrix operations. The averages for each component are added to the mean vector and the variances and covariances to the covariance matrix, with subsequent calculation of new eigenvalues and eigenvectors. The Hotelling T2 can be used for statistical analysis of any dimensionality.

Multivariate analysis can be performed with most major statistical software packages. Graphical analysis illustrates the statistics. Calculation of eigenvalues and eigenvectors may be done on mathematical programs or simply on a pocket calculator. For the computer graphics, we had to write a program, which is apparently unavailable on a commercial basis. Analysis of spherocylinders, including evaluation of refractive procedures, can be performed in an exact manner with multivariate statistics. Besides the presently described basal operations, multivariate statistics support exciting functions such as factor, canonical, and discriminant analysis, which generally allow the investigator to identify the more significant variables. However, these subjects must be picked up from textbooks.

References

1. Naeser K, Hjortdal JØ. Bivariate analysis of surgically induced regular astigmatism. Mathematical analysis and graphical display. Ophthalmic Physiol Opt 1999; 19:50-61
2. Naeser K. Conversion of keratometer readings to polar values. J Cataract Refract Surg 1990; 16:741-745
3. Naeser K, Behrens JK, Naeser EV. Quantitative assessment of corneal astigmatic surgery: expanding the polar values concept. J Cataract Refract Surg 1994; 20:162-168
4. Naeser K, Behrens JK. Correlation between polar values and vector analysis. J Cataract Refract Surg 1997; 23:76-81
5. Naeser K. Assessment of surgically induced astigmatism; call for an international standard (editorial). J Cataract Refract Surg 1997; 23:1278-1280
6. Naeser K, Guo S. Precision of autokeratometry expressed as confidence ellipses in Euclidian 2-space. Ophthalmic Physiol Opt 2000; 20:160-168
7. Naeser K. Format for reporting surgically induced astigmatism on aggregate data. J Cataract Refract Surg 1998; 24:1550-1552
8. Naeser K, Hjortdal J. Polar value analysis of refractive data. J Cataract Refract Surg 2001; 27:86-94
9. Hald A. Statistical Theory, with Engineering Applications. New York, NY, London: John Wiley & Sons, 1952; 585-623
10. Johnson DE. Applied Multivariate Methods for Data Analysis. Pacific Grove, CA: Duxbury Press, 1998; 1-92, 397-438
11. Hotelling H. The generalisation of Student's ratio. Ann Math Stat 1931; 2:360-378
12. Raasch T. Clinical refraction in three-dimensional dioptric space revisited. Optom Vis Sci 1997; 74:376-380
13. Harris WF. Dioptric power; its nature and its representation in three- and four-dimensional space. Optom Vis Sci 1997; 74:349-366
14. Thibos LN, Wheeler W, Horner D. Power vectors: an application of Fourier analysis to the description and statistical analysis of refractive error. Optom Vis Sci 1997; 74:367-375
15. Harris WF. Dioptric strength: a scalar representation of dioptric power. Ophthalmic Physiol Opt 1994; 14:216-218
16. Rabbetts RB. Scalar representation of astigmatism. Ophthalmic Physiol Opt 1996; 16:257-260
17. Harris WF. Scalar representation of dioptric power. Author's reply. Ophthalmic Physiol Opt 1996; 16:261-262
18. Bennett AG. A new approach to the statistical analysis of ocular astigmatism and astigmatic prescriptions. The Frontiers of Optometry: First International Congress 1984; 2:35-42
19. Saunders H. The algebra of sphero-cylinders. Ophthalmic Physiol Opt 1985; 5:157-163
20. Harris WF. Representation of dioptric power in Euclidian 3-space. Ophthalmic Physiol Opt 1991; 11:130-136
21. Deal FC Jr, Toop J. Recommended coordinate systems for thin spherocylindrical lenses. Optom Vis Sci 1993; 70:409-413
22. McKendrick AM, Brennan NA. Distribution of astigmatism in the adult population. J Opt Soc Am A 1996; 13,2:206-214
23. Harris WF. Astigmatism. Ophthalmic Physiol Opt 2000; 20:11-30
24. Harris W, Rubin A, Malan D. Reassessing autorefractors. Optician 1993; 206:35-38
25. Harris WF. Direct, vec and other squares, and sample variance-covariance of dioptric power. Ophthalmic Physiol Opt 1990; 10:72-80
26. Harris WF. The mean and variances of samples of dioptric powers: the basic calculations. Clin Exp Optom 1990; 73:89-92
27. Harris WF. Statistical inference on mean dioptric power: hypothesis testing and confidence regions. Ophthalmic Physiol Opt 1990; 10:363-372
28. Harris WF, Malan DJ, Rubin A. Ellipsoidal confidence regions for mean refractive status. Optom Vis Sci 1991; 68:950-953
29. Zadnik K, Mutti DO, Adams AJ. The repeatability of measurement of the ocular components. Invest Ophthalmol Vis Sci 1992; 33:2325-2333
30. Solomon H. Comparison of autorefractors. Optician 1993; 205:35-44
31. Wesemann W, Dick B. Erfahrungen mit dem handgehaltenen Autorefraktometer “Retiniomax” bei Erwachsenen und Kindern. Klin Monatsbl Augenheilkd 1997; 211:387-394
32. Wesemann W, Dick B. Accuracy and accommodation capability of a handheld autorefractor. J Cataract Refract Surg 2000; 26:62-70
33. Harris WF, Van Gool RD. Criteria for assessing autorefractor accuracy and accommodation (letter). J Cataract Refract Surg 2000; 26:944-945
34. Bullimore MA, Fusaro RE, Adams CW. The repeatability of automated and clinician refraction. Optom Vis Sci 1998; 75:617-622
35. Elliott M, Simpson T, Richter D, Fonn D. Repeatability and accuracy of automated refraction: a comparison of the Nikon NRK-8000, the Nidek AR-1000 and subjective refraction. Optom Vis Sci 1997; 74:434-438
36. Harvey EM, Miller JM, Dobson V, et al. Measurement of refractive error in native American preschoolers: validity and reproducibility of autorefraction. Optom Vis Sci 2000; 77:140-149
37. Rubin A, Harris WF. Refractive variation during autorefraction: multivariate distribution of refractive status. Optom Vis Sci 1995; 72:403-410
38. Hjortdal JØ, Erdmann L, Bek T. Fourier analysis of video-keratographic data; a tool for separation of spherical, regular astigmatic and irregular astigmatic corneal power components. Ophthalmic Physiol Opt 1995; 15:171-185
39. Liang J, Grimm B, Goelz S, Bille JF. Objective measurement of wavefront aberrations of the human eye with the use of a Hartmann-Schack wavefront sensor. J Opt Soc Am A 1994; 11:1949-1957
© 2001 by Lippincott Williams & Wilkins, Inc.