Marsh-Tootle, Wendy L.*; Frazier, Marcela G.†; Kohler, Connie L.‡; Dillard, Carey M.§; Davis, Kathryn‖; Schoenberger, Yu-Mei¶; Wall, Terry C.**
As early as 1950, the public health community recognized the importance of vision screening to promote school success.1 The importance of early eye examination and vision assessment is recognized by eye specialists,2,3 primary care doctors,2,4 teachers,5,6 and the American Public Health Association.7 No large-scale studies are available to tell how many children enter school with untreated vision problems,8 as no national system for data collection exists. Local and regional projects have estimated that 10 to 30% of children fail school-based vision screenings, depending on age, race/ethnicity, geographic location, and socioeconomic status.9–15 Children with special needs have markedly higher rates of vision problems, including low vision.16 At least one state (MA) has passed recent legislation stipulating that children with special needs be examined by eye specialists before school entry.17
Amblyopia is the most common cause of vision impairment in childhood and the most common cause of monocular vision loss among adults 20 to 70 years of age.18 Under-detection and under-treatment of amblyopia, which affects 1 to nearly 4% of children, are common, thus contributing to preventable vision loss.19,20 The majority of children with amblyopia present without complaints or observable signs,2 but can be detected by vision screening.19 Adults with amblyopia are less likely to have completed higher university degrees, and more likely to develop vision impairment in the other eye.21,22
The United States Preventive Services Task Force recommends vision screening at least once for all children between the ages of 3 and 5 years, to detect the presence of amblyopia or its risk factors.23 The recommended screening process includes history and physical examination (observation, corneal light reflex, red reflex, cover test, ophthalmoscopy) and quantitative vision screening (usually visual acuity2,24,25), with refractive screening recently suggested by the Task Force.23,26–30
Three recent studies report rates of pre-school vision screening (PVS) in the medical home, with variable results. By hybrid data collection, i.e., by direct observation of acuity screening and chart review by trained nurses, 5% of children aged 3 to 6 years were screened in a large sample of primary care practices in Ohio27; by analysis of billing data in AL, a state that reimburses vision screening separately from the well-child examination, 12% (3 years), 23% (4 years), and 47% (5 years) of children who attended well-child visits (WCVs) were screened with acuity or photorefraction26; and during a quality improvement project in Vermont, 62% of 4-year-old children were “screened” (type of vision screening not stated).30 In a large national sample (including approximately 3000 physicians and 24,000 patient visits each year, during a 10-year period), 11 to 17% of children up to 18 years of age had acuity screening during office visits reported as part of the National Ambulatory Medical Care Survey.28 These are perhaps the most representative data, as the sampling of physicians and patient visits is not restricted by geographic location, insurance carrier, or patient characteristics. Some studies29,30 do not specify the nature of the vision screening reported. This is important because “subjective” screening, by history and physical examination, is also required, but not sufficient to fulfill the recommendations of the American Academy of Pediatrics (AAP).2 No known study addresses the sensitivity of screening in primary care offices; however, one study raises concerns that many children who pass have vision problems.31 Additional gaps occur after screening: a school-based study has shown an average delay of 1 to 2 years between failing a screening and being examined by an eye specialist.11
Proposed barriers preventing children from receiving proper vision screening include social factors (ignorance, inconvenience, language, and a perceived lack of providers), financial factors affecting low income families, and political barriers, resulting in limited funding of preventive care services.32 Other barriers reported by pediatricians and family physicians who completed mailed surveys include the following: children are uncooperative, acuity is time-consuming, and reimbursement is low.25,33
In a previous study, we obtained online responses from physicians participating in the intervention arm of a randomized controlled trial designed to evaluate an educational Web site for its ability to improve PVS.24 Multiple-choice questions revealed additional barriers (PVS interrupts patient flow) and facilitators (recognizing that amblyopia often presents without outward signs) that were related to physician-reported adherence to PVS guidelines from the AAP.2 Few physicians responded to open-ended questions, and we noted that methods beyond surveys would be needed to explore additional elements of the practice environment that are related to universal PVS. The purpose of this exploratory focus group (FG) study is to enhance our understanding of PVS in pediatric primary care settings by collecting open-ended interview data directly from providers and staff within their office settings, and to assess readiness to pilot simplified tests of visual acuity using single surrounded Lea pictures at 5 feet. We invited all staff members who are involved in any aspect of PVS to participate in FGs to further probe the factors related to PVS within the office setting.
We collaborated with the Alabama Chapter of the AAP, who announced the study with a brief e-mail to all members and provided us with a list of potential practices from which to recruit. We telephoned 73 practices and invited them to participate in an hour-long within-practice focus group session (FGS) about WCVs including PVS. To encourage participation, we traveled to each office separately and offered lunch and a $25 gift card to each participant. Once the meeting had been scheduled, we faxed consent forms and a survey about PVS to be completed in advance of the FG meeting.
FG research involves engaging a small number of people in a group discussion that is “focused” around a topic or issue of interest.34 To guide the discussion, we developed a topic guide to probe how vision screening is conducted in the context of a WCV of a pre-school child. The size of the FG was determined by the number of staff involved in PVS at each practice. We planned to conduct FGs until “saturation” was reached, i.e., when no new information was obtained, signifying that collecting further data may have no additional interpretive value.35 We expected to conduct at least three FGs, with the upper limit dependent on saturation35,36 and without an intention to enroll practices after saturation was reached.
The topic guide and Health Insurance Portability and Accountability Act–compliant consent forms were approved by the University of Alabama at Birmingham Institutional Review Board for Human Use, and each study participant gave written informed consent.
To better understand the background, practice role, and current PVS knowledge and practices of the participants, we collected surveys and signed consent forms at the start of the FGS. The survey consisted of 13 questions about PVS practices. Survey responses were not reviewed or discussed during the FG.
In an effort to prevent biasing the discussion, we hired a public health graduate student (KD) who had limited prior knowledge of PVS guidelines or issues to moderate the FGs. Sessions were recorded with a digital audio recorder. The research assistant (CMD) took notes to be used in case of equipment failure. Participants were asked to introduce themselves using first names only and to describe their role at the clinic for transcription purposes (to maintain confidentiality, yet know whether the comments were made by a physician or a nurse). A semi-structured topic guide was used to ensure that all topics were covered as planned. We asked practices if they used an electronic or printed protocol to guide the WCV, and we requested a blank copy of the protocol to help us understand how vision screening was addressed. We also requested a copy of the fee slip, to review the procedure codes used by the practices.
After a discussion about office practices, the moderator asked whether clinic personnel would be willing to use a new type of PVS test. Following this discussion, the research assistant (CMD) demonstrated two acuity tests using the commercially available protocols, materials, and recording forms: first, the Vision in Pre-schoolers (VIP) Screening Kit (Good Lite Company, Elgin, IL; see Fig. 1), and then the Eye Check Screening Test (Good Lite Company; see Fig. 2). After each demonstration, FG participants were asked for feedback about the test and protocol just demonstrated, including any barriers or facilitators (factors participants agreed made PVS easier for them) to using the test in their office. No members of the study group have any financial interest in the Good Lite Company or either test.
After the discussion ended, an optometrist (MGF or WLM-T) joined the group to answer any further questions, to thank FG participants for their time, and to invite the practice to participate in future research. Participants were furnished with a folder containing the protocols and recording forms that are packaged with the VIP and EyeCheck tests, as well as two publications from the VIP group.37,38 One of the optometrists addressed comments about sensitivity and specificity of the VIP test if asked, and presented the EyeCheck as an alternative format that had not been separately validated.
FG recordings were transcribed and individually reviewed by a team of six investigators who met face-to-face on three separate occasions to conduct the analysis according to principles from qualitative research suggested by Krueger and Casey.35
Themes (recurrent unifying concepts or statements about the subject of inquiry)39 and categories (groups of themes) were identified using the “long table” approach, a common “time-tested” method described by Krueger and Casey.36 First, all investigators independently read the anonymized transcripts with the purpose of understanding the scope of responses to the topic guide across all practices, and identifying emerging themes. Next, transcripts were color-coded by practice, reorganized by topic, and compiled into one computer file for investigators to review. At the second meeting, investigators discussed and identified themes and came to an agreement on categories. Two investigators (WLM-T and MGF) then drafted a written summary, in table form, with categories as table titles, listing themes with supporting quotes. Third, the team met to discuss any discrepancies between the investigators, which were few, and agreed on the final grouping and selection of the prevailing themes and representative quotes. The latter are included for the reader to judge the validity of the themes and categories we constructed.40 Finally, WLM-T and MGF compared the themes to the original transcripts, resolving any discrepancies or ambiguous items, and counting the frequency with which participants voiced a theme (thus not over-counting themes or phrases that were repeatedly voiced by only one participant). Final summaries were e-mailed to all authors, and replies were received confirming agreement with the final themes and counts.
The written survey results were entered into an Excel database and summarized to complement FG themes.
Eighteen practices (25% of those contacted) were immediately interested in participating, and 13 practices were scheduled between December 2009 and March 2010 (five practices were not able to schedule the FGS before we reached saturation and ended the study). Four practices (three county health departments and a family practice) are not included in this analysis because of different practice patterns; we plan to analyze these separately. Data from nine pediatric practices with 4 to 12 participants per practice [59 total participants, including 13 physicians, 32 nurses/certified medical assistants (CMAs), and 14 front desk or billing staff] are included. One academic practice was included despite some unique characteristics including large numbers of residents. In all the practices, at least one pediatrician and one nurse attended. Tables 1 and 2 summarize participants' education and experience based on the written survey. Approximately 60% of participants did not have a 4-year college degree, and approximately 60% of nurses and other staff members worked in the current practice for ≤5 years.
PVS Survey Responses
Survey responses from 14 participants (front desk and billing personnel) were not included in Tables 3 and 4 because they did not have a direct role in PVS. Of the remaining 13 MDs, 20 nurses, and 12 CMAs (Tables 3 and 4), 82% said they themselves performed some aspect of vision screening for pre-school children. Physicians most frequently reported doing external inspection, fix/follow, red reflex, cover test, and fundoscopy, whereas staff reported performing visual acuity. Most respondents reported a lower success rate for visual acuity at age 3 vs. 4 years (see Table 4). Medical staff reported starting acuity screening at younger ages than physicians did. When asked how other sources of PVS affected office practices at pre-school age, the most frequent response was “no effect” or no response was given. We questioned other sources of PVS at kindergarten age separately because all kindergarteners attending public school in Alabama are screened with a photorefractor. Responses regarding kindergarten screening were similar, with “no effect” and no response most frequent.
PVS Context in Primary Care Settings
The FGS opened with a general question about the process of well-child care (“Walk us through a WCV for a 3- or 4-year-old child”). All practices described a process whereby nurses or medical staff first completed a number of screening tests including visual acuity (see Table 5), according to a protocol that was electronic or printed. We next asked participants to tell us about a facet of well-child care their office did particularly well. Most practices discussed immunizations and mentioned facilitating factors including reminders, audit, feedback, acceptance by parents, and requirement for entry into school.
PVS Office Practices
Next, we focused the discussion on office practices related to PVS. Seven of nine practices used an electronic medical record with an age-specific well-child protocol that included a prompt to record visual acuity for each eye tested alone. Two of these seven protocols also specified testing for strabismus. All offices reported that acuity screening was done by a nurse or medical assistant who had been trained in the primary care setting (except one nurse who reported “nursing school”). Each practice reported that the physician made the final decision to pass or refer. No physicians reported a direct role in the routine acuity screening process. All practices used a printed bill: five with a separate procedure code for quantitative vision screening (99173), two with a code for visual field testing (92081), and two with no separate code for PVS.
Seven of nine practices used the Kindergarten (“Sailboat”) Chart (Precision Vision, LaSalle, IL; Fig. 3), with the academic practice using a Lea chart and the final practice not testing acuity at pre-school age. Most practices used a 20-foot test distance, with the academic and one other practice using 10 feet at pre-school age. Most practices tested children standing in a hallway, using a cover paddle to occlude the eye, with occasional comments that they allowed the less mature children to use both eyes together. Reported time to complete screening varied from 2 to 15 min. During FG discussions, most groups agreed that routine screening started at age 4 or 5 years. One practice that implemented photoscreening stopped its use on retirement of the physician leader who bought the device. This practice had not yet adopted a replacement method for pre-school children. Two practices mentioned having the SureSight autorefractor but did not discuss this as a routine method for pre-school children.
Facilitators and Barriers to PVS
Facilitators or enabling factors for acuity screening are shown in Table 6. All practices said acuity was a routine part of their WCV. Six practices said their “practice was set up for screening,” with comments pertaining to space and patient flow or to administrative factors like electronic medical records. Four practices mentioned external factors (required by Medicaid or other agencies). Three practices mentioned attributes of the test itself that made their job easier (fast, not dependent on compliance, more like a game).
The most frequent barrier to acuity screening was young age, with all practices saying that 3- and 4-year-old children were more difficult to test (see Table 7). Equally frequent was an “uncooperative” child, described as not developmentally ready for the test, shy, immature, afraid to give a wrong answer, nervous about shots, or “scared to death.” Lack of reimbursement was a problem for all practices. Five practices mentioned time constraints. Other barriers were less frequent and were staff-based (belief that children not attending pre-school or school were not ready to name letters or shapes), family-based (parents or siblings interfere with testing process), and practice-based (crowded environment).
Issues Related to Screening Outcome
To facilitate and expand our discussion about the outcome of PVS, we developed a flowchart during each session using the prompt that “most practices have children that pass, do not pass, or do not cooperate with PVS.” We next asked practices to assign a percentage of children in each category (see Fig. 4). This initiated a discussion of factors related to “not passing” (failing) vs. “uncooperative,” as well as a discussion about what happened to children in each category. Fig. 4 shows that, on average, practices estimated that similar percentages of 4- and 5-year-old children are judged to fail acuity screening (13 and 11%, respectively), whereas 4 year olds are less likely than 5 year olds to pass (63 vs. 80%) and more likely to be judged “uncooperative” (25 vs. 6%). Practices were unable to estimate similar percentages for 3-year-old children.
When we asked about the criteria used to fail a child, we observed that the nurses directed this question to the physician (n = 5), or the physician answered first (n = 3); only one nurse stated the criteria. Physicians and staff within practices generally agreed on a criterion that would raise a concern, but across practices the value varied from 20/25 to 20/50 for a 4-year-old child. All (n = 9) practices agreed that the nurse handled the actual interaction with the child, including recording the acuity value or indicating that the child was uncooperative, whereas the physicians made the decision to refer or rescreen. A child who passed acuity screening might be referred by the physician for other reasons such as strabismus, parental concern, another positive finding on the physical examination, or sometimes a belief that an eye examination was indicated for children who were entering school. When asked whether a practice would ever not refer a child who failed acuity screening, practices said “no,” but all practices (n = 9) indicated that they did not routinely refer children they deemed “uncooperative.” When asked what happened to children who did not cooperate, practices gave different answers, either rescreening next year, rescreening within the year, or referring to an eye specialist. No physician mentioned routinely retesting uncooperative children themselves.
Once a decision to refer was made, most offices described a similar process of urging (n = 3) or making (n = 5) an appointment with a local eye specialist. Some practices routinely referred to either optometrists (n = 4) or ophthalmologists (n = 3), and others chose one or the other depending on the suspected condition (n = 1) or proximity to the practice (n = 1). In general, offices seemed satisfied with the eye specialists they used and the care their patients received; no office said anything negative about their consulting eye specialists.
Questions related to billing showed non-uniformity across practices: most screened and/or billed only if the patient was thought to be covered for the service (n = 6), two said they would bill if the practice could document a complaint and diagnosis, and one did not bill at all. All practices were reluctant to use the separate code for non-covered patients, as few third party payers offered reimbursement, leaving many parents with out-of-pocket charges.
Willingness to Pilot New Tests
Because earlier research suggested that most practices were using non-recommended or previously recommended acuity tests,24 we demonstrated the VIP37 and then the EyeCheck test, both using single surrounded Lea figures calibrated for use at 5 feet. Responses were generally positive for both tests: participants liked the occluding glasses and the lap board, the pointing or matching response, and the 5-foot test distance. All practices responded favorably to one or both of the tests shown. These “first impressions” were obtained immediately after a demonstration by one of the authors (CMD) and were not informed by hands-on experience with either test. Most practices agreed that detecting children with strabismus and amblyopia was a priority at pre-school age and expressed willingness to pilot a new method.
The FG format encouraged practice groups to review and discuss their experiences, allowing us to probe beyond existing data obtained by surveys24,25,33,41 or during monitored research projects,27,28,30,33,41–43 to better understand PVS processes in primary care settings. Three to six different groups are often adequate to reach data saturation,36 and our final sample size of 59 individuals from nine private pediatric practices is similar to other FG studies,44–47 although much smaller than survey studies.25,33,41
All offices but one quantified vision with an acuity chart, administered by nurses or medical assistants; no offices were using devices to screen pre-school children. Physicians conducted the physical examination and made decisions related to referral or follow-up without observing or repeating the acuity test. The majority of nurses and CMAs worked in the practice for <5 years, indicating a need to train new personnel periodically.
Prompts to record monocular acuity were included in most offices' protocols for WCVs at ages 3, 4, and 5 years. Although prompts have successfully improved other practice-based behavior,48 they did not result in universal PVS. Discussions at the practice level corroborated our previous report that 3 year olds are hard to screen in the primary practice settings24 and added that many practices have difficulty screening children at age 4 years. Most practices use the Kindergarten chart at 20 feet despite its lack of evidence, lack of endorsement, and poor success rate as reported by our participants. Yet, these practices did not express dissatisfaction with the test, instead expressing frustration with the child. No practice mentioned reviewing current AAP recommendations, other pediatric literature, or vision screening literature in search of a better method. A more active approach by eye and pediatric specialists to translate current knowledge about developmentally appropriate and sensitive acuity screening methods will be necessary to improve practices.49
After the demonstrations, most practices voiced some advantages of using the single surrounded Lea targets at the 5-foot test distance compared with their current technique. Willingness to pilot either new method was queried because “Diffusion of Innovations” theory posits that “trialability” (the degree to which an innovation may be experimented with on a limited basis) predicts more rapid uptake of an innovation.50 A few practices were less willing to change without studies comparing the new tests with their current test (one practice used Allen cards, another the Kindergarten Chart). Various other facilitators that were mentioned by practices with regard to immunizations could be applied to PVS, including audits, feedback, and performance incentives, as well as parent education and school requirements. We elicited additional barriers that cannot be solved without changing health care systems. Time was consistently voiced as a barrier to acuity screening. Little time is available for the increasing number of screening tests and counseling that are recommended for preventive care visits.51,52 Like others,25,33 we showed that lack of reimbursement makes acuity testing difficult for many practices to justify, especially at younger ages when testing takes longer.
Similar to a previous study,43 we found that many children who “do not pass” because of poor cooperation were rescreened at the next WCV, and that this practice was more common for younger children. Reluctance to refer all children who do not pass acuity testing may help explain the large differences in percentage of children referred after screening in community (28%) vs. medical settings (6%).42 Given our prior finding that attendance at WCVs decreases markedly after the 4-year visit,26 this practice may result in missing children with “severe” amblyopia (20/125 to 20/400 acuity in the amblyopic eye), whose outcome is better when treatment is started by age 4 years.53 Newer methods with good testability and accuracy might decrease this uncertainty and result in more and earlier referrals.
Our participants were primary care pediatricians and staff members who were aware of our interest in PVS and had time to reflect on their practices before the scheduled FGS. Because our discussions were limited to one session lasting <1 hour, we were unable to thoroughly cover all topics. The degree to which their responses are representative of primary care practices overall is unknown because the FG methodology does not address generalizability.36 In contrast to quantitative studies, such as previous surveys that included large and representative samples of pediatricians and family physicians, FGs involve a limited number of participants who discuss topics in depth. This format is very useful to probe interesting or unexpected findings from previous studies, or to refine the methods, protocol, or procedures of future quantitative studies. Krueger and Casey have suggested the concept of “transferability” in which the reader decides how the FG results might transfer to their situation or environment.36
Traditionally, FGs are composed of groups who share demographic attributes such as sex, age, or occupation. We thought it appropriate to our needs to include all practice staff in FGSs. Although many of the “practice pattern” questions could have been included on a survey-type instrument, we wanted the opportunity to probe further for certain questions, especially regarding “how” and “why” practices chose to do things in a certain way. We believe our approach gave a fuller picture than we may have obtained either from demographically matched FGs (e.g., all the nurses across practices) or from surveys. Another useful approach would have been individual interviews, but this approach was not feasible given time and budgetary constraints.
FGSs conducted in pediatric primary care offices revealed some facilitators for PVS, including agreement that vision screening is an important part of routine well-child care, and the existence of protocol-based prompts for visual acuity results. Most barriers involved difficulty testing young children (<5 years old) with existing methods, lack of time, and lack of reimbursement. Many practices agreed that the acuity tests we demonstrated (single surrounded Lea pictures at 5 feet) might overcome some existing barriers and promote universal PVS in the medical home. Few practices were aware of the availability of new evidence-based acuity tests; thus, active translational efforts are needed to change current primary care practices. FG responses will be useful in designing future interventions to improve rates of PVS in the medical home.
University of Alabama at Birmingham School of Optometry
1716 University Boulevard
Birmingham, AL 35294
This publication was made possible by a supplement to Award Number R01EY015893 from the National Eye Institute, funded under the American Recovery and Reinvestment Act of 2009. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Eye Institute or the National Institutes of Health. The parent grant is registered as ClinicalTrials.gov number NCT01109459.
1. Foote FM. Progress in meeting the eye problems of children. Am J Public Health Nations Health 1950;40:313–6.
2. American Academy of Pediatrics, Committee on Practice and Ambulatory Medicine and Section on Ophthalmology. Eye examination in infants, children, and young adults by pediatricians. Pediatrics 2003;111:902–7.
5. Ethan D, Basch CE, Platt R, Bogen E, Zybert P. Implementing and evaluating a school-based program to improve childhood vision. J Sch Health 2010;80:340–5.
8. Powell C, Wedner S, Richardson S. Screening for correctable visual acuity deficits in school-age children and adolescents. Cochrane Database Syst Rev 2005:CD005023.
9. Blum HL, Peters HB, Bettman JW. Vision Screening for Elementary Schools: The Orinda Study. Berkeley, CA: University of California Press; 1959.
10. Preslan MW, Novak A. Baltimore vision screening project. Ophthalmology 1996;103:105–9.
11. Yawn BP, Lydick EG, Epstein R, Jacobsen SJ. Is school vision screening effective? J Sch Health 1996;66:171–5.
12. Traboulsi EI, Cimino H, Mash C, Wilson R, Crowe S, Lewis H. Vision first, a program to detect and treat eye diseases in young children: the first four years. Trans Am Ophthalmol Soc 2008;106:179–85.
13. Bodack MI, Chung I, Krumholtz I. An analysis of vision screening data from New York City public schools. Optometry 2010;81:476–84.
14. Marshall EC, Meetz RE, Harmon LL. Through our children's eyes—the public health impact of the vision screening requirements for Indiana school children. Optometry 2010;81:71–82.
15. Williams C, Northstone K, Howard M, Harvey I, Harrad RA, Sparrow JM. Prevalence and risk factors for common vision problems in children: data from the ALSPAC study. Br J Ophthalmol 2008;92:959–64.
16. Das M, Spowart K, Crossley S, Dutton GN. Evidence that children with special needs all require visual assessment. Arch Dis Child 2010;95:888–92.
17. Moore B. The Massachusetts preschool vision screening program. Optometry 2006;77:371–7.
19. The Multi-ethnic Pediatric Eye Disease Study. Prevalence of amblyopia and strabismus in African American and Hispanic children ages 6 to 72 months: the Multi-ethnic Pediatric Eye Disease Study. Ophthalmology 2008;115:1229–36.
20. Friedman DS, Repka MX, Katz J, Giordano L, Ibironke J, Hawse P, Tielsch JM. Prevalence of amblyopia and strabismus in white and African American children aged 6 through 71 months: the Baltimore Pediatric Eye Disease Study. Ophthalmology 2009;116:2128–34.
21. Chua B, Mitchell P. Consequences of amblyopia on education, occupation, and long term vision loss. Br J Ophthalmol 2004;88:1119–21.
22. Rahi J, Logan S, Timms C, Russell-Eggitt I, Taylor D. Risk, causes, and outcomes of visual impairment after loss of vision in the non-amblyopic eye: a population-based study. Lancet 2002;360:597–602.
23. US Preventive Services Task Force. Vision screening for children 1 to 5 years of age: US Preventive Services Task Force recommendation statement. Pediatrics 2011;127:340–6.
24. Marsh-Tootle WL, Funkhouser E, Frazier MG, Crenshaw K, Wall TC. Knowledge, attitudes, and environment: what primary care providers say about pre-school vision screening. Optom Vis Sci 2010;87:104–11.
25. Kemper AR, Clark SJ. Preschool vision screening in pediatric practices. Clin Pediatr (Phila) 2006;45:263–6.
26. Marsh-Tootle WL, Wall TC, Tootle JS, Person SD, Kristofco RE. Quantitative pediatric vision screening in primary care settings in Alabama. Optom Vis Sci 2008;85:849–56.
27. Stange KC, Flocke SA, Goodwin MA, Kelly RB, Zyzanski SJ. Direct observation of rates of preventive service delivery in community family practice. Prev Med 2000;31:167–76.
28. Hambidge SJ, Emsermann CB, Federico S, Steiner JF. Disparities in pediatric preventive care in the United States, 1993–2002. Arch Pediatr Adolesc Med 2007;161:30–6.
29. Levinson DR. Most Medicaid Children in Nine States are not Receiving All Required Preventive Screening Services, HHS OEI-05-08-00520. Washington, DC: U.S. Department of Health & Human Services, Office of Inspector General; 2010.
30. Shaw JS, Wasserman RC, Barry S, Delaney T, Duncan P, Davis W, Berry P. Statewide quality improvement outreach improves preventive services for young children. Pediatrics 2006;118:1039–47.
31. Zaba JN, Reynolds W, Mozlin R, Costich J, Slavova S, Steele GT. Comparing the effectiveness of vision screenings as part of the school entrance physical examination to comprehensive vision examinations in children ages 3 to 6: an exploratory study. Optometry 2007;78:514–22.
32. Castanes MS. Major review: the underutilization of vision screening (for amblyopia, optical anomalies and strabismus) among preschool age children. Binocul Vis Strabismus Q 2003;18:217–32.
33. Kemper AR, Clark SJ. Preschool vision screening by family physicians. J Pediatr Ophthalmol Strabismus 2007;44:24–7.
34. Wilkinson S. Focus group research. In: Silverman D, ed. Qualitative Research: Theory, Method, and Practice, 2nd ed. Thousand Oaks, CA: Sage Publications; 2004:177–99.
35. Onwuegbuzie AJ, Dickinson WB, Leech NL, Zoran AG. Toward more rigor in focus group research: a new framework for collecting and analyzing focus group data. Int J Qualit Meth 2009;8:1–21.
36. Krueger RA, Casey MA. Focus Groups: A Practical Guide for Applied Research, 4th ed. Thousand Oaks, CA: Sage Publications; 2009.
37. Vision in Preschoolers (VIP) Study Group. Preschool vision screening tests administered by nurse screeners compared with lay screeners in the vision in preschoolers study. Invest Ophthalmol Vis Sci 2005;46:2639–48.
38. Maguire MG, Vision in Preschoolers Study Group. Children unable to perform screening tests in vision in preschoolers study: proportion with ocular conditions and impact on measures of test accuracy. Invest Ophthalmol Vis Sci 2007;48:83–7.
39. Boyatzis RE. Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks, CA: Sage Publications; 1998.
40. Patton M. Qualitative Research and Evaluation Methods, 3rd ed. Thousand Oaks, CA: Sage Publications; 2002.
41. Wall TC, Marsh-Tootle W, Evans HH, Fargason CA, Ashworth CS, Hardin JM. Compliance with vision-screening guidelines among a national sample of pediatricians. Ambul Pediatr 2002;2:449–55.
42. Hartmann EE, Bradford GE, Chaplin KN, Johnson T, Kemper AR, Kim S, Marsh-Tootle WL; The PUPVS Panel for American Academy of Pediatrics. Project Universal Preschool Vision Screening: a demonstration project. Pediatrics 2006;117:226–37.
43. Wasserman RC, Croft CA, Brotherton SE. Preschool vision screening in pediatric practice: a study from the Pediatric Research in Office Settings (PROS) Network. American Academy of Pediatrics [erratum in Pediatrics 1992;90:1001]. Pediatrics 1992;89:834–8.
44. Holley CD, Lee PP. Primary care provider views of the current referral-to-eye-care process: focus group results. Invest Ophthalmol Vis Sci 2010;51:1866–72.
45. Lane FJ, Huyck M, Troyk P, Schug K. Responses of potential users to the intracortical visual prosthesis: final themes from the analysis of focus group data. Disabil Rehabil Assist Technol 2012;7:304–13.
46. Dine CJ, Kahn JM, Abella BS, Asch DA, Shea JA. Key elements of clinical physician leadership at an academic medical center. J Grad Med Educ 2011;3:31–6.
47. Leighton P, Lonsdale AJ, Tildsley J, King AJ. The willingness of patients presenting with advanced glaucoma to participate in a trial comparing primary medical vs primary surgical treatment. Eye (Lond) 2012;26:300–6.
48. Grol R, Wensing M. Effective implementation: a model. In: Grol R, Wensing M, Eccles M, eds. Improving Patient Care: The Implementation of Change in Clinical Practice. New York, NY: Elsevier Butterworth Heinemann; 2005:41–57, 158–72.
49. Westfall JM, Mold J, Fagnan L. Practice-based research—“Blue Highways” on the NIH roadmap. JAMA 2007;297:403–6.
50. Rogers EM. Diffusion of preventive innovations. Addict Behav 2002;27:989–93.
51. LeBaron CW, Rodewald L, Humiston S. How much time is spent on well-child care and vaccinations? Arch Pediatr Adolesc Med 1999;153:1154–9.
52. Zuckerman B, Stevens GD, Inkelas M, Halfon N. Prevalence and correlates of high-quality basic pediatric preventive care. Pediatrics 2004;114:1522–9.
53. Repka MX, Kraker RT, Beck RW, Birch E, Cotter SA, Holmes JM, Hertle RW, Hoover DL, Klimek DL, Marsh-Tootle W, Scheiman MM, Suh DW, Weakley DR. Treatment of severe amblyopia with weekend atropine: results from 2 randomized clinical trials. J AAPOS 2009;13:258–63.
Key Words: health behavior; primary care; pediatric; strabismus; amblyopia; pre-school; guideline adherence; vision screening