Impairment of visual acuity (VA) has been shown to be the most sensitive single predictor of ophthalmic injury in an emergency department (ED) setting and, thus, an important indicator of the need for urgent ophthalmic consultation.1 Inconsistencies in the frequency and quality of VA measurements in such settings can inhibit optimal workflow and resource management. In an attempt to improve both the frequency and consistency of VA measurements, we evaluated the effects of implementing a novel near VA chart in our ED.
Our ED incorporates VA measurement by nonophthalmic ED technicians as an essential screening technique for patients with visual or eye complaints. Allowing technicians who are not licensed physicians but are well trained in VA measurement to perform this task brings otherwise untapped human skills to bear on the clinical environment. Variation in VA measurement in ED settings has many causes, including patient immobility, poor lighting, and lack of a standardized protocol. Logistical and physical barriers, such as inconsistent testing distance, patient immobility, or altered levels of consciousness, may preclude a patient from being tested at all. In our ED, space constraints and the physical testing requirements of distance acuity testing allowed for placement of only two inconveniently located Snellen distance VA charts, exacerbating the aforementioned challenges in VA assessment. In addition, the design of Snellen charts contributes to relatively high test–retest variability, such that a person's measured VA may change by more than two lines due to chance alone rather than a true change in acuity.2
The test instrument implemented in this study, the Runge Sloan letter near card (Runge card, Figure 1, Good-Lite, Elgin, Illinois), possesses features that might increase the frequency of VA assessments and reduce variation in testing conditions. The Runge card uses Sloan letters of similar legibility (Z, N, H, R, V, K, D, C, O, S).3,4 Also, rather than being arranged in lines of equal VA, the Runge card features three rows of letters, with each letter across each line corresponding to a certain level of VA. The letters are progressively smaller, with the smallest letter corresponding to a VA of 20/16, or logarithm of the minimum angle of resolution (logMAR) of −0.1, and the largest letter corresponding to a VA of 20/1,000, or logMAR of 1.7. A patient is instructed to read each letter across the line as one would read across a line of text in a book or newspaper, simplifying testing instructions. Like the Early Treatment Diabetic Retinopathy Study (ETDRS) distance chart, which is considered the gold standard for VA assessment in clinical trials, the Runge card progresses regularly through letter sizes.2,5 The Runge card has an equal number of letters (three) for each level of VA, unlike the Rosenbaum near card6 or the Snellen distance chart. In addition, the Runge card is conveniently portable and has a built-in mechanism to verify a standardized measuring distance. Any limitations of near charts are counterbalanced by their convenience and portability for use in an ED setting.
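The Snellen-to-logMAR relation underlying these endpoint values is a simple base-10 logarithm of the minimum angle of resolution and can be verified in a few lines; the function name below is our own illustration, not part of any published protocol.

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 20/40) to logMAR.

    The minimum angle of resolution (MAR) is denominator/numerator,
    so 20/20 gives MAR = 1 and logMAR = 0; logMAR is log10(MAR).
    """
    return math.log10(denominator / numerator)

# The extremes of the Runge card described above:
print(round(snellen_to_logmar(20, 16), 1))    # smallest letters -> -0.1
print(round(snellen_to_logmar(20, 1000), 1))  # largest letter -> 1.7
```

Each 0.1 increment of logMAR corresponds to one line on a regularly progressing chart, which is why a two-line difference is treated as roughly 0.2 logMAR in the analyses below.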
Validity testing of the Runge Sloan letter near card has shown that it provides agreement with the ETDRS distance chart that approximates that of the Snellen chart, with a more uniform level of agreement over varying degrees of visual impairment.7 These results enable providers to better interpret results of VA testing performed at near.
We hypothesized that implementing the Runge near card in an ED setting could increase the frequency of VA assessments in the department while also increasing the agreement rate with VAs subsequently measured by ophthalmic physicians-in-training. We presumed that such agreement, as well as the frequency of VA assessment by nonophthalmic ED technicians, could serve as a useful indicator of the clinical impact of implementing the Runge near card, given that the measurements by ophthalmic physicians-in-training are used in clinical decision-making at our institution and many others.
The ED, affiliated with the Department of Emergency Medicine at our institution, is a Level 1 trauma center that is the largest provider of emergency medicine services in Wisconsin, recording over 75,000 visits annually.
Before any intervention at the hospital ED, VAs were taken by nonophthalmic ED technicians. For the entire ED, there were only two Snellen distance VA charts. Concern for eye injury, trauma, or other ocular disturbances would result in consultation of ophthalmology residents. However, there were no consistent guidelines for the technicians as to when to assess VA and no requirement to assess VA before an ophthalmology consult. These factors, paired with inconveniences such as patient immobility and the chaotic nature of the ED, often resulted in the absence of VA assessments by the technicians. As part of a process improvement effort, we first assessed the preintervention baseline frequency of VA assessments and baseline agreement between nonophthalmic ED technicians and ophthalmology residents in the ED; second, we implemented the Runge Sloan letter near card; and third, we assessed changes from the baseline. Institutional review board/ethics committee approval was obtained to review, analyze, and report these data and to perform relevant chart reviews.
Phase 1—Preintervention Data Collection
We conducted a retrospective chart review (inclusion criteria: age ≥ 18 years, VA ≥ 20/500) of our hospital ED ophthalmology consults from December 1st, 2016, to April 30th, 2017. The Snellen distance VAs for each eye taken by the ED technicians and the Runge Sloan letter near card VAs for each eye taken by the ophthalmology residents during the ED consult were recorded for 120 patients. The number of times an ophthalmology consult was placed without an accompanying VA assessment was also recorded for these patients.
Phase 2—Intervention
Over 100 ED technicians were trained in the measurement of VA with the Runge card. The two Snellen distance VA charts in the ED were replaced with nine Runge cards. These Runge cards were placed in six team stations (localized staff work areas providing line-of-sight communication among staff and subserving a selected pod of evaluation rooms), two triage rooms, and the trauma bay. Runge card use was initiated on May 1st, 2017. For ease of use, the Runge cards provided to the ED all had a premeasured 16-inch (41 cm) cord attached to the card with a pinhole (PH) occluder attached at the end of the cord (Figure 2). Patients were instructed to wear corrective lenses if needed for optimal near acuity during the testing. Testing was performed under ambient lighting with examination room lights on, typically in the same room for both technicians and residents. Patients were instructed to hold the PH occluder to their nose with the PH window open and hold the card at the distance of the attached cord. Patients were asked to read one of the three Runge card lines from left to right until they could no longer read the letters. The best-read VA was recorded. Using a different line, the test was repeated on the same eye with the PH occluder in place. This protocol was repeated for the other eye. Written instructions were also posted in the ED.
Phase 3—Postintervention Data Collection
We conducted a retrospective chart review (inclusion criteria: age ≥ 18 years, VA ≥ 20/500) of our hospital ED ophthalmology consults from May 1st, 2017, to November 15th, 2017. The Runge Sloan letter near card VAs for each eye taken by the ED technicians and the ophthalmology residents during the ED consult were recorded for 97 patients. The number of times an ophthalmology consult was placed without an accompanying VA assessment was also recorded for these patients. If available, spectacle use, including use of either glasses or contacts, and whether trauma was an indication for ophthalmic consultation were also recorded for these patients.
Phase 4—Statistical Analysis
Generalized estimating equation models, accounting for left and right eye measurements within the same patient without PH, were used to obtain the proportion of measurement pairs that had no more than prespecified logMAR VA differences between ED technicians and ophthalmology residents. These prespecified differences were ≤2 VA lines (0.204 logMAR) and ≤5 lines (0.498 logMAR). We chose 2 VA lines as an analysis threshold because 0.2 logMAR, approximately equivalent to 2 VA lines, is considered a valid test–retest variability threshold, with anything outside this threshold being indicative of real change.3 A threshold of 5 VA lines difference was considered a measurement inconsistency of sufficient magnitude to clearly indicate clinical nonutility of one or both of the measurements. Subanalyses using the same methods also assessed good versus poor VA as defined by the ED technician measurements (>20/80 vs. ≤20/80, respectively). Odds ratios were calculated comparing agreement between the good and poor VA groups. A chi-square test was also used to assess preintervention versus postintervention comparisons, as well as within-group comparisons, with a p value of <.05 considered significant. All p values reported for these and subsequent analyses were unadjusted for multiple comparisons. The aforementioned methods were also used for the postintervention analyses. We analyzed 116 total eyes from 67 patients, 95 eyes from 60 patients with good VA, and 21 eyes from 17 patients with poor VA. Variation in the numbers of eyes and patients for this and subsequent analyses resulted from VAs outside our inclusion criterion of VA ≥20/500 or missed VA testing in one of a patient's eyes.
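The agreement metric just described can be illustrated with made-up data. Note that this naive sketch simply computes the raw proportion of measurement pairs within a threshold; the study itself used generalized estimating equations to account for the correlation between the two eyes of one patient, which this version deliberately omits.

```python
# Naive agreement-rate sketch (hypothetical data, no inter-eye correlation
# adjustment). Each pair is one eye measured by both a technician and a
# resident, expressed in logMAR.
def agreement_rate(tech_logmar, resident_logmar, threshold):
    pairs = list(zip(tech_logmar, resident_logmar))
    within = sum(1 for t, r in pairs if abs(t - r) <= threshold)
    return within / len(pairs)

tech = [0.0, 0.3, 1.0, 0.1, 0.7]      # hypothetical technician measurements
resident = [0.1, 0.2, 0.4, 0.1, 0.3]  # hypothetical resident measurements

print(agreement_rate(tech, resident, 0.204))  # <=2-line agreement -> 0.6
print(agreement_rate(tech, resident, 0.498))  # <=5-line agreement -> 0.8
```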
For postintervention analysis in which a more detailed within-group analysis was planned, we added the following metrics: PH use, best VA obtained by any method (PH use or not), spectacle use, or consult indication of trauma. For VA measurements without PH, we analyzed 129 total eyes from 70 patients, 92 eyes from 53 patients with good VA, and 37 eyes from 44 patients with poor VA. For measurements with PH, we analyzed 106 total eyes from 60 patients, 77 eyes from 44 patients with good VA, and 29 eyes from 20 patients with poor VA.
Frequency of Visual Acuity Assessments
The retrospective chart reviews of ED encounters compared the frequency with which VA assessment was omitted by the ED technicians before requesting an ophthalmology consult, before versus after the intervention. Before intervention, the rate of failure to include a VA assessment in the patient encounter was 36% (43 of 120 encounters); after intervention, the rate improved to 21% (20 of 97 encounters; p = .01, chi-square test).
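This comparison can be reproduced with a plain Pearson chi-square on the 2×2 table of encounter counts. The stdlib-only helper below is our own sketch (no continuity correction), not the study's statistical software, so the exact p value may differ slightly from the reported one.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p) with df = 1."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-square with 1 df: P(Z^2 > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Missed-VA counts from the text: 43 of 120 encounters before intervention,
# 20 of 97 after.
stat, p = chi_square_2x2(43, 120 - 43, 20, 97 - 20)
print(round(stat, 2), round(p, 3))  # -> 6.03 0.014
```

Rounded to two decimal places, the resulting p value matches the reported p = .01.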
Agreement of Visual Acuity Measurements
Overall Agreement Rates and Effect of Acuity Level Before Intervention
Before intervention, for a within-group analysis using the groupings of overall, good VA, and poor VA, we examined the proportions of VA measurements differing between technicians and residents by ≤2 lines and by ≤5 lines without PH. In the following analyses, reported percentages pertain to the number of eyes undergoing VA measurement. Table 1 shows the proportions of VA measurements differing between technicians and residents by ≤2 lines and by ≤5 lines without PH. PH testing was not uniformly performed in the preintervention group and could not be evaluated. Distributions of VA measurement pairs are shown graphically in Figure 3. Notably, agreement was more likely in patients with good VA relative to poor VA without PH (for ≤2 lines difference, O.R. 3.4, p = .02; for ≤5 lines difference, O.R. 5.9, p = .007) (Table 1).
Table 1. Visual Acuity Measurement Agreement Rate Between Technicians and Residents Before and After Intervention, by Visual Acuity Level and Use of Pinhole Testing
Columns report the percentage of eyes agreeing within 2 lines and within 5 lines, for the overall, good VA, and poor VA groups, before intervention (without PH) and after intervention (without and with PH).
Odds ratios comparing the rate of agreement for good vs. poor VA: before intervention without PH, 3.4 (p = .02) within 2 lines and 5.9 (p = .007) within 5 lines; after intervention without PH, 2.5 (p = .05) and 3.8 (p = .003); after intervention with PH, 11 (p = .001) and 5.1 (p = .05).
a p < .05, comparing agreement rate between pinhole and nonpinhole use.
VA = visual acuity; PH = pinhole; n = total number of eyes.
When comparing preintervention and postintervention data using the chi-square test, there was no significant difference in the proportions of VA measurements differing between technicians and residents by ≤2 and ≤5 lines in the overall group or in the subgroups of good and poor VA without PH.
Overall Agreement Rates and Effect of Acuity Level and PH Testing After Intervention
After intervention, for a within-group analysis using the same groupings of overall, good VA, and poor VA, Table 1 shows the proportions of VA measurements differing between technicians and residents by ≤2 lines and ≤5 lines, both without and with PH. Figure 4 shows the scatterplot distribution of VA measurement pairs. Notably, levels of agreement at the ≤2 lines threshold increased with use of adjunctive PH testing, from 51% to 64% in the overall group (p < .05) and from 58% to 73% in the subgroup with good VA (p < .05).
Effect of PH on Obtaining Best-Measured Visual Acuity
A subanalysis to determine which VA assessment method (with or without PH) yielded the best-measured VA showed that PH most often yielded the best VA for both the ED technicians and the ophthalmology residents. For ED technicians, the best VA was obtained with PH in 49% of eyes and without PH in 23%; acuity measurements were equal with and without PH in 28%. For ophthalmology residents, the best VA was obtained with PH in 41% and without PH in 17%; equal measurements were obtained in 42%. Using the best VA irrespective of PH use did not significantly change agreement within two or five lines. Factors such as trauma or spectacle use also did not influence agreement.
A limitation of this, as in other, process improvement endeavors, was the difficulty in controlling for multiple variables simultaneously. For maximum impact, we implemented a new VA instrument, the Runge near card, and a program of training in its use. As such, it is not possible to separate the effects of staff education and features of the Runge card on its utilization or on the frequency of VA measurement. We believed that our approach optimized our resources for maximal impact.
To the best of our knowledge, this study represents the first attempt to implement the Runge Sloan letter near card in an ED setting. Feedback obtained from the ED staff indicated no complaints or technical problems concerning use of the Runge card. By informal survey, ED staff indicated that the portability and ease of use were two of the best qualities of the Runge card. We expect utilization and agreement rates to increase as the ED staff become more familiar with the Runge card and after additional educational efforts regarding its use.
When comparing preintervention and postintervention data, there was no significant difference in the proportions of VA measurements differing between technicians and residents by ≤2 and ≤5 lines in the overall group or in the subgroups of good and poor VA without PH. Despite no significant change in overall agreement, we found that the use of PH increased levels of agreement between ED technicians and ophthalmology residents under the criterion of being within two lines (Table 1). In addition, for both the ED technicians and the ophthalmology residents, the use of PH gave the best VA, suggesting its utility in obtaining the best assessment of visual function. The use of PH in ED settings may not be common practice, but these results suggest that PH testing is a useful adjunct in the ED to increase levels of agreement between ED technicians and ophthalmology residents. Use of the PH can mitigate the variable effect of uncorrected refractive error on VA testing, a likely reason for the improved consistency we observed among examiners in an ED setting, where spectacle correction suited to the patient's refractive error may not be readily available for near-acuity measurement.
We found considerable variation between ED technician VA scores and ophthalmology resident VA scores with the Runge card. Nevertheless, the overall postintervention group with use of the PH showed an agreement rate within 2 lines or better in 64% of eyes (73% of eyes with good VA) despite numerous testing variables, suggesting that the agreement rate could increase with further quality improvement efforts. It should also be noted that the two-line standard is stringent, given that test–retest variation under highly controlled conditions using a standard ETDRS chart is approximately 1–2 lines.8,9 Regarding testing variables, at the initial implementation, formal termination rules for assigning VA were specified for the ED technicians but not for the residents: whereas the ED technicians worked primarily in the ED, the residents rotated through it only transiently.
The quick adoption and acceptance of the Runge card in the ED suggests that we might improve our agreement rates by changing to a more complex but more accurate line assignment methodology. Based on psychometric data, a line assignment method in which multiple letters per VA level are tested and the criterion for assignment is getting more than half of them correct might be more optimal for clinical use3 than our initial methodology of using only one line on the Runge card, chosen for simplicity's sake at a preliminary stage of implementation. Our plan is to add updated VA assessment instructions to the examiner side of the Runge cards and to post updated instructions in the ED. In addition, we will retrain all of our ophthalmology residents in the updated line assignment methodology. Our goal is a consistent methodology for both ED technicians and ophthalmology residents, which may improve agreement between VA scores.
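The majority-correct assignment rule just described can be sketched as follows; the function and data layout are our own illustration under the stated criterion, not the card's published scoring protocol.

```python
# Hypothetical sketch of a majority-correct line assignment rule. For each
# logMAR level, record how many of its letters the patient read correctly;
# assign the best (lowest-logMAR) level at which more than half were correct.
def assign_va(results):
    """results: {logmar_level: (n_correct, n_presented)}.

    Returns the assigned logMAR, or None if no level passed.
    """
    passed = [level for level, (correct, presented) in results.items()
              if 2 * correct > presented]
    return min(passed) if passed else None  # lower logMAR = better acuity

# Example: 3 of 3 correct at logMAR 1.0, 2 of 3 at 0.5, 1 of 3 at 0.2
print(assign_va({1.0: (3, 3), 0.5: (2, 3), 0.2: (1, 3)}))  # -> 0.5
```

A rule of this form makes the termination criterion explicit and identical for every examiner, which is the consistency goal described above.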
Similar to the reasoning behind implementation of the Runge card, implementation of mobile technologies in VA assessment appears to be an increasingly common initiative among health care providers.10,11 Perera et al12 found considerable variability in the quality and accuracy of Snellen charts developed for the iPhone, due to factors such as lack of standardization and glare from the device screen. More recent studies have validated the Android-based Peek Acuity application in both adult and child populations.13,14 However, many physicians use smartphones that are not Android based, each with its own VA testing applications;12 the difficulty of ensuring consistency across different examiners, together with cost, may favor the Runge card in clinical settings, particularly those with limited resources.
Our findings of an increase in frequency of VA assessment conducted by the ED technicians before requesting an ophthalmology consult suggest the potential utility of training in the use of the Runge Sloan letter near card in such environments. We strongly believe that our experience can be duplicated elsewhere and perhaps improved upon by heeding the experience we describe herein, particularly regarding standardization of VA line assignment rules among examiners. Our findings also suggest that introduction of VA cards utilizing an equivalent design, such as the Runge Near Vision Pocket Card with LEA SYMBOLS® for pediatric patients and the Tumbling “E” Runge Pocket Near Vision Test Card for illiterate patients, might produce similar results. Of particular note, we found that testing with pinhole seemed to be a useful adjunct in the ED setting to increase agreement between nonophthalmic ED technicians and ophthalmology physicians-in-training.
Implementation of the Runge near card with pinhole testing can improve the accuracy of visual function assessment, which may allow for better resource management in an ED setting.1
James F. Wu, BS, is an MD degree candidate at the Medical College of Wisconsin, Milwaukee, WI. His special interests and institutionally-supported scholarly pathways in the medical school curriculum are in the areas of healthcare quality improvement and ophthalmology research.
Alexis Visotcky, MS, is a member of the Biostatistics research staff at the Medical College of Wisconsin, Milwaukee, WI.
Aniko Szabo, PhD, is an associate professor of Biostatistics at the Medical College of Wisconsin, Milwaukee, WI. In addition to teaching and research, she directs the Biostatistical Consulting Service at MCW.
Stephen Eyler, MD, is a comprehensive ophthalmologist in clinical practice at the Illinois Eye Center in Peoria, IL. He completed his residency in ophthalmology at the Medical College of Wisconsin, during which time his residency curriculum included a process improvement activity in the field of ophthalmology.
Peter Siegmann is a licensed paramedic (EMT-P) in the State of Wisconsin and serves in a supervisory and training role for technical staff at the Froedtert Hospital Emergency Department, Milwaukee, WI.
Gregory J. Griepentrog, MD, is a clinical associate professor of Ophthalmology in the Orbit and Ophthalmic Plastic Surgery Service at the Medical College of Wisconsin, Milwaukee, WI. He additionally serves as the Department of Ophthalmology & Visual Sciences Patient Safety and Quality Officer.
Clinton C. Warren, MD, is an assistant professor of Ophthalmology and vitreoretinal surgeon at the Medical College of Wisconsin in Milwaukee, WI. He currently serves on the Residency Action Committee at the Medical College of Wisconsin.
Dennis P. Han, MD, is a clinical professor of Ophthalmology and former Vitreoretinal Section Chief and Clinical Imaging Service Director at the Medical College of Wisconsin, Milwaukee, WI. He is certified in Lean healthcare efficiency and serves as a consultant to clinical practices in the area of healthcare process improvement. He is the scholarly pathway advisor and supervisor for James F. Wu.
Runge Sloan letter near cards were provided at no charge by their manufacturer, the Good-Lite Company, for quality improvement purposes at our institution. The Good-Lite Company had no access to, and played no role in, the study protocol design, the generation, analysis, or interpretation of data, or the creation of this manuscript.
1. Al-Qurainy IA, Dutton GN, Ilankovan V, Titterington DM, Moos KF, el-Attar A. Midfacial fractures and the eye: The development of a system for detecting patients at risk of eye injury. Br J Oral Maxillofac Surg. 1991;29:363-367.
2. Kaiser PK. Prospective evaluation of visual acuity assessment: A comparison of Snellen versus ETDRS charts in clinical practice (an AOS thesis). Trans Am Ophthalmol Soc. 2009;107:311-324.
3. Sloan LL. New charts for the measurement of visual acuity at far and near distances. Am J Ophthalmol. 1959;48:807-813.
4. Ferris FL, Kassoff A, Bresnick GH, Bailey I. New visual acuity charts for clinical research. Am J Ophthalmol. 1982;94(1):91-96.
5. Bailey IL, Lovie JE. New design principles for visual acuity letter charts. Am J Optom Physiol Opt. 1976;53(11):740-745.
6. Horton JC, Jones MR. Warning on inaccurate Rosenbaum cards for testing near vision. Surv Ophthalmol. 1997;42(2):169-174.
7. Cooke MD, Winter PA, McKenney KC, et al. An innovative visual acuity chart for urgent and primary care settings: Validation of the Runge near vision card. Eye (Lond). 2019;33:1104-1110.
8. Rosser DA, Laidlaw DA, Murdoch IE. The development of a reduced logMAR visual acuity chart for use in routine clinical practice. Br J Ophthalmol. 2001;85:432-436.
9. Rosser DA, Cousens SN, Murdoch IE, Fitzke FW, Laidlaw DA. How sensitive to clinical change are ETDRS logMAR visual acuity measurements? Invest Ophthalmol Vis Sci. 2003;44:3278-3281.
10. Shehzad MR, Ahmad I. Validation of smart phone based visual acuity charts for community outreach programs. Ophthalmol Pakistan. 2016;6(4):15-17.
11. O'Neill S, McAndrew DJ. The validity of visual acuity assessment using mobile technology devices in the primary care setting. Aust Fam Physician. 2016;45(4):212-215.
12. Perera C, Chakrabarti R, Islam FM, Crowston J. The Eye Phone study: Reliability and accuracy of assessing Snellen visual acuity using smartphone technology. Eye (Lond). 2015;29(7):888-894.
13. Bastawrous A, Rono HK, Livingstone IA, et al. The development and validation of a smartphone visual acuity test (Peek Acuity) for clinical practice and community-based fieldwork. JAMA Ophthalmol. 2015;133(8):930-937.
14. Rono HK, Bastawrous A, Macleod D, et al. Smartphone-based screening for visual impairment in Kenyan school children: A cluster randomised controlled trial. Lancet Glob Health. 2018;6(8):e924–e932.