PVS Context in Primary Care Settings
The FGS opened with a general question about the process of well-child care (“Walk us through a WCV for a 3- or 4-year-old child”). All practices described a process whereby nurses or medical staff first completed a number of screening tests, including visual acuity (see Table 5), according to an electronic or printed protocol. We next asked participants to tell us about a facet of well-child care their office did particularly well. Most practices discussed immunizations and mentioned facilitating factors including reminders, audit and feedback, acceptance by parents, and requirements for school entry.
PVS Office Practices
Next, we focused the discussion on office practices related to PVS. Seven of nine practices used an electronic medical record with an age-specific well-child protocol that included a prompt to record visual acuity for each eye tested alone. Two of these seven protocols also specified testing for strabismus. All offices reported that acuity screening was done by a nurse or medical assistant who had been trained in the primary care setting (except one nurse who reported “nursing school”). Each practice reported that the physician made the final decision to pass or refer. No physicians reported a direct role in the routine acuity screening process. All practices used a printed bill: five included a separate procedure code for quantitative vision screening (99173), two used a code for visual field testing (92081), and two had no separate code for PVS.
Seven of nine practices used the Kindergarten (“Sailboat”) Chart (Precision Vision, LaSalle, IL; Fig. 3), with the academic practice using a Lea chart and the final practice not testing acuity at pre-school age. Most practices used a 20-foot test distance, with the academic and one other practice using 10 feet at pre-school age. Most practices tested children standing in a hallway, using a cover paddle to occlude the eye, with occasional comments that they allowed the less mature children to use both eyes together. Reported time to complete screening varied from 2 to 15 min. During FG discussions, most groups agreed that routine screening started at age 4 or 5 years. One practice that implemented photo-screening stopped its use on retirement of the physician leader who bought the device. This practice had not yet begun a new method for pre-school children. Two practices mentioned having the SureSight autorefractor but did not discuss this as a routine method for pre-school children.
Facilitators and Barriers to PVS
Facilitators or enabling factors for acuity screening are shown in Table 6. All practices said acuity was a routine part of their WCV. Six practices said their “practice was set up for screening,” with comments pertaining to space and patient flow or to administrative factors like electronic medical records. Four practices mentioned external factors (required by Medicaid or other agencies). Three practices mentioned attributes of the test itself that made their job easier (fast, not dependent on compliance, more like a game).
The most frequent barrier to acuity screening was young age, with all practices saying that 3- and 4-year-old children were more difficult to test (see Table 7). Equally frequent was an “uncooperative” child, described as not developmentally ready for the test, shy, immature, afraid to give a wrong answer, nervous about shots, or “scared to death.” Lack of reimbursement was a problem for all practices. Five practices mentioned time constraints. Other barriers were less frequent and were staff-based (belief that children not attending pre-school or school were not ready to name letters or shapes), family-based (parents or siblings interfere with testing process), and practice-based (crowded environment).
Issues Related to Screening Outcome
To facilitate and expand our discussion about the outcome of PVS, we developed a flowchart during each session using the prompt that “most practices have children that pass, do not pass, or do not cooperate with PVS.” We next asked practices to assign a percentage of children in each category (see Fig. 4). This initiated a discussion of factors related to “not passing” (failing) vs. “uncooperative,” as well as a discussion about what happened to children in each category. Fig. 4 shows that, on average, practices estimated that similar percentages of 4- and 5-year-old children are judged to fail acuity screening (13 and 11%, respectively), whereas 4-year-olds are less likely than 5-year-olds to pass (63 vs. 80%) and more likely to be judged “uncooperative” (25 vs. 6%). Practices were unable to estimate similar percentages for 3-year-old children.
When we asked about the criteria used to fail a child, we observed that the nurses directed this question to the physician (n = 5), or the physician answered first (n = 3); only one nurse stated the criteria. Physicians and staff within practices generally agreed on a criterion that would raise a concern, but across practices the value varied from 20/25 to 20/50 for a 4-year-old child. All (n = 9) practices agreed that the nurse handled the actual interaction with the child, including recording the acuity value or indicating that the child was uncooperative, whereas the physicians made the decision to refer or rescreen. A child who passed acuity screening might be referred by the physician for other reasons such as strabismus, parental concern, another positive finding on the physical examination, or sometimes a belief that an eye examination was indicated for children who were entering school. When asked whether a practice would ever not refer a child who failed acuity screening, practices said “no,” but all practices (n = 9) indicated that they did not routinely refer children they deemed “uncooperative.” When asked what happened to children who did not cooperate, practices gave different answers, either rescreening next year, rescreening within the year, or referring to an eye specialist. No physician mentioned routinely retesting uncooperative children themselves.
Once a decision to refer was made, most offices described a similar process of urging (n = 3) or making (n = 5) an appointment with a local eye specialist. Some practices routinely referred to either optometrists (n = 4) or ophthalmologists (n = 3), and others chose one or the other depending on the suspected condition (n = 1) or proximity to the practice (n = 1). In general, offices seemed satisfied with the eye specialists they used and the care their patients received; no office said anything negative about their consulting eye specialists.
Questions related to billing revealed non-uniform practices: most practices screened and/or billed only if the patient was thought to be covered for the service (n = 6), two said they would bill if the practice could document a complaint and diagnosis (n = 2), and one did not bill at all (n = 1). All practices were reluctant to use the separate code for non-covered patients, as few third party payers offered reimbursement, leaving many parents with out-of-pocket charges.
Willingness to Pilot New Tests
Because earlier research suggested that most practices were using non-recommended or previously recommended acuity tests,24 we demonstrated the VIP37 acuity test and then the Eye Check test, both using single surrounded Lea figures calibrated for use at 5 feet. Responses were generally positive for both tests: participants liked the occluding glasses and the lap board, the pointing or matching response, and the 5-foot test distance. All practices responded favorably to one or both of the tests shown. These “first impressions” were obtained immediately after a demonstration by one of the authors (CMD) and were not informed by hands-on experience with either test. Most practices agreed that detecting children with strabismus and amblyopia was a priority at pre-school age and expressed willingness to pilot a new method.
The FG format encouraged practice groups to review and discuss their experiences, allowing us to probe beyond existing data obtained by surveys24,25,33,41 or during monitored research projects,27,28,30,33,41–43 to better understand PVS processes in primary care settings. Three to six different groups often are adequate to reach data saturation,36 and our final sample size of 59 individuals from nine private pediatric practices is similar to other FG studies,44–47 although much smaller than survey studies.25,33,41
All offices but one quantified vision with an acuity chart, administered by nurses or medical assistants; no offices were using devices to screen pre-school children. Physicians conducted the physical examination and made decisions related to referral or follow-up without observing or repeating the acuity test. The majority of nurses and CMAs had worked in the practice for <5 years, indicating a need to train new personnel periodically.
Prompts to record monocular acuity were included in most offices' protocols for WCVs at ages 3, 4, and 5 years. Although prompts have successfully improved other practice-based behavior,48 they did not result in universal PVS. Discussions at the practice level corroborated our previous report that 3-year-olds are hard to screen in primary care settings24 and added that many practices have difficulty screening children at age 4 years. Most practices use the Kindergarten chart at 20 feet despite its lack of evidence, lack of endorsement, and poor success rate as reported by our participants. Yet, these practices did not express dissatisfaction with the test, instead expressing frustration with the child. No practice mentioned reviewing current AAP recommendations, other pediatric literature, or vision screening literature in search of a better method. A more active approach by eye and pediatric specialists to translate current knowledge about developmentally appropriate and sensitive acuity screening methods will be necessary to improve practices.49
After the demonstrations, most practices voiced some advantages of using the single surrounded Lea targets at the 5-foot test distance compared with their current technique. Willingness to pilot either new method was queried because “Diffusion of Innovations” theory posits that “trialability” (the degree to which an innovation may be experimented with on a limited basis) predicts more rapid uptake of an innovation.50 Two practices were less willing to change without studies comparing the new tests with their current test (one used Allen cards and the other the Kindergarten Chart). Various other facilitators that were mentioned by practices with regard to immunizations could be applied to PVS, including audits, feedback, and performance incentives, as well as parent education and school requirements. We elicited additional barriers that cannot be solved without changing health care systems. Time was consistently voiced as a barrier to acuity screening. Little time is available for the increasing number of screening tests and counseling that are recommended for preventive care visits.51,52 Like others,25,33 we showed that lack of reimbursement makes acuity testing difficult for many practices to justify, especially at younger ages when testing takes longer.
Similar to a previous study,43 we found that many children who “do not pass” because of poor cooperation were rescreened at the next WCV, and that this practice was more common for younger children. Reluctance to refer all children who do not pass acuity testing may help explain the large differences in the percentage of children referred after screening in community (28%) vs. medical settings (6%).42 Given our prior finding that attendance at WCVs decreases markedly after the 4-year visit,26 this practice may result in missing children with “severe” amblyopia (20/125 to 20/400 acuity in the amblyopic eye), whose outcome is better when treatment is started by age 4 years.53 Newer methods with good testability and accuracy might decrease this uncertainty and result in more and earlier referrals.
Our participants were primary care pediatricians and staff members who were aware of our interest in PVS and had time to reflect on their practices before the scheduled FGS. Because our discussions were limited to one session lasting <1 hour, we were unable to thoroughly cover all topics. The degree to which their responses are representative of primary care practices overall is unknown because the FG methodology does not address generalizability.36 In contrast to quantitative studies, such as previous surveys that included large and representative samples of pediatricians and family physicians, FGs involve a limited number of participants who discuss topics in depth. This format is very useful to probe interesting or unexpected findings from previous studies, or to refine the methods, protocol, or procedures of future quantitative studies. Krueger and Casey have suggested the concept of “transferability” in which the reader decides how the FG results might transfer to their situation or environment.36
Traditionally, FGs are composed of groups who share attributes that are more demographic in nature, such as sex, age, or occupation. We thought it appropriate to our needs to include all practice staff in FGSs. Although many of the “practice pattern” questions could have been included on a survey type instrument, we wanted the opportunity to probe further for certain questions, especially regarding “how” and “why” practices chose to do things in a certain way. We believe our approach gave a fuller picture than we may have obtained either from demographically matched FGs (e.g., all the nurses across practices) or from surveys. Another useful approach would have been individual interviews, but this approach was not feasible given time and budgetary constraints.
FGSs conducted in pediatric primary care offices revealed some facilitators for PVS, including agreement that VS is an important part of routine well-child care, and existence of protocol-based prompts for visual acuity results. Most barriers involved difficulty testing young children (<5 years old) with existing methods, lack of time, and lack of reimbursement. Many practices agreed that the acuity tests we demonstrated (single surrounded Lea pictures at 5 feet) might overcome some existing barriers and promote universal PVS in the medical home. Few practices were aware of the availability of new evidence-based acuity tests; thus, active translational efforts are needed to change current primary care practices. FG responses will be useful in designing future interventions to improve rates of PVS in the medical home.
University of Alabama at Birmingham School of Optometry
1716 University Boulevard
Birmingham, AL 35294
This publication was made possible by a supplement to Award number R01EY015893 from the National Eye Institute, funded under the American Recovery and Reinvestment Act of 2009. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Eye Institute or the National Institutes of Health. The parent grant is registered at ClinicalTrials.gov as NCT01109459.
1. Foote FM. Progress in meeting the eye problems of children. Am J Public Health Nations Health 1950;40:313–6.
2. American Academy of Pediatrics, Committee on Practice and Ambulatory Medicine and Section on Ophthalmology. Eye examination in infants, children, and young adults by pediatricians. Pediatrics 2003;111:902–7.
5. Ethan D, Basch CE, Platt R, Bogen E, Zybert P. Implementing and evaluating a school-based program to improve childhood vision. J Sch Health 2010;80:340–5.
8. Powell C, Wedner S, Richardson S. Screening for correctable visual acuity deficits in school-age children and adolescents. Cochrane Database Syst Rev 2005:CD005023.
9. Blum HL, Peters HB, Bettman JW. Vision Screening for Elementary Schools: The Orinda Study. Berkeley, CA: University of California Press; 1959.
10. Preslan MW, Novak A. Baltimore vision screening project. Ophthalmology 1996;103:105–9.
11. Yawn BP, Lydick EG, Epstein R, Jacobsen SJ. Is school vision screening effective? J Sch Health 1996;66:171–5.
12. Traboulsi EI, Cimino H, Mash C, Wilson R, Crowe S, Lewis H. Vision first, a program to detect and treat eye diseases in young children: the first four years. Trans Am Ophthalmol Soc 2008;106:179–85.
13. Bodack MI, Chung I, Krumholtz I. An analysis of vision screening data from New York City public schools. Optometry 2010;81:476–84.
14. Marshall EC, Meetz RE, Harmon LL. Through our children's eyes—the public health impact of the vision screening requirements for Indiana school children. Optometry 2010;81:71–82.
15. Williams C, Northstone K, Howard M, Harvey I, Harrad RA, Sparrow JM. Prevalence and risk factors for common vision problems in children: data from the ALSPAC study. Br J Ophthalmol 2008;92:959–64.
16. Das M, Spowart K, Crossley S, Dutton GN. Evidence that children with special needs all require visual assessment. Arch Dis Child 2010;95:888–92.
17. Moore B. The Massachusetts preschool vision screening program. Optometry 2006;77:371–7.
19. The Multi-ethnic Pediatric Eye Disease Study. Prevalence of amblyopia and strabismus in African American and Hispanic children ages 6 to 72 months: the Multi-ethnic Pediatric Eye Disease Study. Ophthalmology 2008;115:1229–36.
20. Friedman DS, Repka MX, Katz J, Giordano L, Ibironke J, Hawse P, Tielsch JM. Prevalence of amblyopia and strabismus in white and African American children aged 6 through 71 months: the Baltimore Pediatric Eye Disease Study. Ophthalmology 2009;116:2128–34.
21. Chua B, Mitchell P. Consequences of amblyopia on education, occupation, and long term vision loss. Br J Ophthalmol 2004;88:1119–21.
22. Rahi J, Logan S, Timms C, Russell-Eggitt I, Taylor D. Risk, causes, and outcomes of visual impairment after loss of vision in the non-amblyopic eye: a population-based study. Lancet 2002;360:597–602.
23. US Preventive Services Task Force. Vision screening for children 1 to 5 years of age: US Preventive Services Task Force recommendation statement. Pediatrics 2011;127:340–6.
24. Marsh-Tootle WL, Funkhouser E, Frazier MG, Crenshaw K, Wall TC. Knowledge, attitudes, and environment: what primary care providers say about pre-school vision screening. Optom Vis Sci 2010;87:104–11.
25. Kemper AR, Clark SJ. Preschool vision screening in pediatric practices. Clin Pediatr (Phila) 2006;45:263–6.
26. Marsh-Tootle WL, Wall TC, Tootle JS, Person SD, Kristofco RE. Quantitative pediatric vision screening in primary care settings in Alabama. Optom Vis Sci 2008;85:849–56.
27. Stange KC, Flocke SA, Goodwin MA, Kelly RB, Zyzanski SJ. Direct observation of rates of preventive service delivery in community family practice. Prev Med 2000;31:167–76.
28. Hambidge SJ, Emsermann CB, Federico S, Steiner JF. Disparities in pediatric preventive care in the United States, 1993–2002. Arch Pediatr Adolesc Med 2007;161:30–6.
29. Levinson DR. Most Medicaid Children in Nine States are not Receiving All Required Preventive Screening Services, HHS OEI-05-08-00520. Washington, DC: U.S. Department of Health & Human Services, Office of Inspector General; 2010.
30. Shaw JS, Wasserman RC, Barry S, Delaney T, Duncan P, Davis W, Berry P. Statewide quality improvement outreach improves preventive services for young children. Pediatrics 2006;118:1039–47.
31. Zaba JN, Reynolds W, Mozlin R, Costich J, Slavova S, Steele GT. Comparing the effectiveness of vision screenings as part of the school entrance physical examination to comprehensive vision examinations in children ages 3 to 6: an exploratory study. Optometry 2007;78:514–22.
32. Castanes MS. Major review: the underutilization of vision screening (for amblyopia, optical anomalies and strabismus) among preschool age children. Binocul Vis Strabismus Q 2003;18:217–32.
33. Kemper AR, Clark SJ. Preschool vision screening by family physicians. J Pediatr Ophthalmol Strabismus 2007;44:24–7.
34. Wilkinson S. Focus group research. In: Silverman D, ed. Qualitative Research: Theory, Method, and Practice, 2nd ed. Thousand Oaks, CA: Sage Publications; 2004:177–99.
35. Onwuegbuzie AJ, Dickinson WB, Leech NL, Zoran AG. Toward more rigor in focus group research: a new framework for collecting and analyzing focus group data. Int J Qualit Meth 2009;8:1–21.
36. Krueger RA, Casey MA. Focus Groups: A Practical Guide for Applied Research, 4th ed. Thousand Oaks, CA: Sage Publications; 2009.
37. Vision in Preschoolers (VIP) Study Group. Preschool vision screening tests administered by nurse screeners compared with lay screeners in the vision in preschoolers study. Invest Ophthalmol Vis Sci 2005;46:2639–48.
38. Maguire MG, Vision in Preschoolers Study Group. Children unable to perform screening tests in vision in preschoolers study: proportion with ocular conditions and impact on measures of test accuracy. Invest Ophthalmol Vis Sci 2007;48:83–7.
39. Boyatzis RE. Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks, CA: Sage Publications; 1998.
40. Patton M. Qualitative Research and Evaluation Methods, 3rd ed. Thousand Oaks, CA: Sage Publications; 2002.
41. Wall TC, Marsh-Tootle W, Evans HH, Fargason CA, Ashworth CS, Hardin JM. Compliance with vision-screening guidelines among a national sample of pediatricians. Ambul Pediatr 2002;2:449–55.
42. Hartmann EE, Bradford GE, Chaplin KN, Johnson T, Kemper AR, Kim S, Marsh-Tootle WL; The PUPVS Panel for American Academy of Pediatrics. Project Universal Preschool Vision Screening: a demonstration project. Pediatrics 2006;117:226–37.
43. Wasserman RC, Croft CA, Brotherton SE. Preschool vision screening in pediatric practice: a study from the Pediatric Research in Office Settings (PROS) Network. American Academy of Pediatrics [erratum in Pediatrics 1992;90:1001]. Pediatrics 1992;89:834–8.
44. Holley CD, Lee PP. Primary care provider views of the current referral-to-eye-care process: focus group results. Invest Ophthalmol Vis Sci 2010;51:1866–72.
45. Lane FJ, Huyck M, Troyk P, Schug K. Responses of potential users to the intracortical visual prosthesis: final themes from the analysis of focus group data. Disabil Rehabil Assist Technol 2012;7:304–13.
46. Dine CJ, Kahn JM, Abella BS, Asch DA, Shea JA. Key elements of clinical physician leadership at an academic medical center. J Grad Med Educ 2011;3:31–6.
47. Leighton P, Lonsdale AJ, Tildsley J, King AJ. The willingness of patients presenting with advanced glaucoma to participate in a trial comparing primary medical vs primary surgical treatment. Eye (Lond) 2012;26:300–6.
48. Grol R, Wensing M. Effective implementation: a model. In: Grol R, Wensing M, Eccles M, eds. Improving Patient Care: The Implementation of Change in Clinical Practice. New York, NY: Elsevier Butterworth Heinemann; 2005:41–57.
49. Westfall JM, Mold J, Fagnan L. Practice-based research—“Blue Highways” on the NIH roadmap. JAMA 2007;297:403–6.
50. Rogers EM. Diffusion of preventive innovations. Addict Behav 2002;27:989–93.
51. LeBaron CW, Rodewald L, Humiston S. How much time is spent on well-child care and vaccinations? Arch Pediatr Adolesc Med 1999;153:1154–9.
52. Zuckerman B, Stevens GD, Inkelas M, Halfon N. Prevalence and correlates of high-quality basic pediatric preventive care. Pediatrics 2004;114:1522–9.
53. Repka MX, Kraker RT, Beck RW, Birch E, Cotter SA, Holmes JM, Hertle RW, Hoover DL, Klimek DL, Marsh-Tootle W, Scheiman MM, Suh DW, Weakley DR. Treatment of severe amblyopia with weekend atropine: results from 2 randomized clinical trials. J AAPOS 2009;13:258–63.
Keywords: health behavior; primary care; pediatric; strabismus; amblyopia; pre-school; guideline adherence; vision screening

© 2012 American Academy of Optometry