In the United States today, an estimated >6 million children receive general anesthesia each year for surgical and nonsurgical procedures.1 Preclinical studies have established that anesthesia is toxic to the brain in neonatal animals.2 Studies completed between 1999 and 2010 involving exposure to anesthetics in common clinical use have shown neuronal apoptosis or neurodegeneration in the developing brains of mammals, including rats, mice, and nonhuman primates. In addition, animals exposed early in development have demonstrated abnormal attention, learning and memory, and behavioral changes. However, this work is limited by the lack of human trials, making conclusions regarding human brain development after exposure highly tentative. To our knowledge, little rigorous clinical research has investigated the neurodevelopmental outcomes of children exposed to anesthesia as infants or toddlers. This is concerning because researchers agree that early brain injury results in changes that affect the subsequent acquisition of higher order cognitive skills.3,4 As Taylor and Alden note, “The critical issue…is not whether there are sequelae, but the extent to which normal brain development is possible in spite of early brain insult.”4 (p. 56; emphasis added). In light of this understanding, it is important to conduct rigorous, well-designed studies that can enlighten both researchers and clinicians regarding any iatrogenic effects of anesthesia.
The purpose of this article is to discuss the issue of outcome measurement after anesthesia administered during infancy. Several recent studies that apply academic achievement measures or a characterization of learning problems as measures of outcome will be reviewed to elucidate the contributions and limitations of the extant literature. Based on this review, the methodology of neuropsychological assessment will be discussed as a way to increase the validity and sensitivity of forthcoming studies that are planned to evaluate the short- and long-term effects of anesthetic exposure during infancy and early childhood.
INVESTIGATIONS OF NEURODEVELOPMENT AFTER EARLY ANESTHESIA
Within the field of anesthesiology, investigations of neurodevelopmental effects have most frequently used a historical cohort design that relies on retrospective “data of convenience” collected for other purposes. One group of studies gleaned data from academic achievement tests administered over the course of the child’s school years. The second group used both academic achievement and intelligence quotient (IQ) scores to determine classification of learning disability (LD), which was then applied as a measure of outcome.
Assessing Central Nervous System Integrity Using School-Based Performance
With the advent of standardized testing methods, school records frequently provide information regarding a population of children. For example, Hansen et al.5 completed a large cohort study to investigate the outcome of children who had been exposed to anesthesia during inguinal hernia repair in infancy. This Danish study had the advantage of several large national databases that provided not only demographic and medical information but also comprehensive information regarding school history. The sample included adolescents born between the years 1986 and 1990. Exposed children (n = 2689) were compared with a control group (n = 14,575) randomly selected from the overall sample and matched by age to the exposed group. Outcome was designated as the average composite score attained from a nationally mandated test of general academic achievement that assessed Danish, foreign languages, mathematics, science, and social studies. Teacher ratings of performance across these same subjects were also combined to produce an average rating. After adjusting for various medical (e.g., birth weight) and demographic confounds, neither between-group comparison reached statistical significance. In this study, it is worth noting that children were excluded if they were unable to follow the general curriculum because of special needs, defined as neuropsychological or severe functional limitations. When the proportion of these “nonattainers” was compared based on exposure status, the exposed group experienced a significantly higher risk of cognitive or functional limitations, suggesting that children who could not be included in the main analyses probably had worse outcomes than those included in the sample.
A second, uncontrolled study by Block et al.6 was limited by the comparison of academic achievement test results to published test norms. That is, the data analyses were completed by comparing the sample described below to the representative national normative group supplied by the test publisher. Although this statistical method offers convenience, the lack of a control group results in the inability to account for confounding variables in the comparison sample.7 In this study, outcome was assessed by comparing a historical cohort of children to the test mean of the Iowa Test of Basic Skills (ITBS). Similar to the academic achievement instrument used in the study by Hansen et al.,5 this test provides a composite score that combines the individual’s performance on reading, language, mathematics, and social studies. Investigators applied the earliest recorded test scores that were available, reporting that 89% of these scores were obtained when children in the sample (n = 287) were in second to fourth grades (i.e., approximately 7 to 10 years old). The sample showed statistically lower ITBS scores when compared with the national mean. However, when the investigators eliminated children with confounding medical conditions (e.g., prematurity/low birth weight; acute respiratory distress syndrome; intracranial hemorrhage, among other conditions) from the subsample of children whose parents had consented to medical record review (n = 133), the remaining 58 children did not show lower than expected ITBS scores in relation to the standardization sample. Further analysis, however, indicated that a larger proportion of these children scored below the fifth percentile than in the test standardization sample, suggesting a meaningful difference in the sample distribution between the 2 groups.
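The distributional comparison described above can be made concrete with a one-sided binomial test: if the sample truly matched the standardization sample, only 5% of children would be expected to score below the fifth percentile, so an observed excess can be tested against that expectation. The sketch below is illustrative only; the counts are hypothetical, not those reported by Block et al.

```python
# Hypothetical illustration of comparing a sample's proportion scoring
# below the 5th percentile against the 5% expected from a test's
# standardization sample. The counts are invented for illustration.
from math import comb

def binomial_tail(n, k, p=0.05):
    """P(X >= k) for X ~ Binomial(n, p): the chance of observing at
    least k children below the 5th percentile if the sample truly
    matched the normative distribution."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 58, 8  # hypothetical: 8 of 58 children score below the 5th percentile
p_value = binomial_tail(n, k)
print(f"{k}/{n} = {k/n:.1%} observed vs 5.0% expected, one-sided p = {p_value:.4f}")
```

A small p value indicates that the sample's lower tail is heavier than the normative distribution predicts, which is the form of evidence the further analysis above relied on.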
Issues Inherent in Using Academic Achievement as a Measure of Outcome
Key issues related to using school-based measures of academic achievement as a surrogate for outcome in medical studies are best discussed within the context of the purpose of the tests. Rather than to determine central nervous system (CNS) integrity, achievement tests were developed to evaluate academic performance. These tests typically provide standardized assessments of specific skills and knowledge at specific grade levels. Since achievement is relatively fluid and can change from year to year based on a variety of environmental factors (e.g., quality of teaching, school absences), achievement tests differ from tests of intelligence (i.e., IQ) or aptitude, which tend to reflect more stable traits. Schools often use the results of academic achievement tests to place students into appropriate grade levels or to group students for subject-specific instruction. Subject matter for achievement tests most frequently includes math and reading and also may incorporate science, social studies, and writing. Lower scores may suggest that a student is in need of remediation, while higher scores may suggest that a student is prepared for instruction in more advanced material. Although schools may independently use achievement tests, these tests are often state-mandated. In the United States under the No Child Left Behind initiative, achievement tests have commonly been used to assess broad student proficiency, meaning that they are developed to explicitly assess the knowledge and skills that students have acquired at a particular point in their education.8
Academic achievement tests are subject to several criticisms that relate to threats to the validity of the testing. First, achievement tests developed for educational settings by nature rely heavily on content. Content validity is generally based on direct comparison of the material presented on the test to the learning objectives used in the classroom. This comparison is sometimes at the expense of a more comprehensive validity analysis that examines test constructs, concurrent measures, and the test’s ability to predict carefully delineated results (e.g., later school achievement, college grades).9 Second, the implementation of No Child Left Behind mandates “high-stakes testing” across the nation. High-stakes testing refers to incentivizing teacher effectiveness as assessed through student performance by rewarding those schools and districts that meet specific, predetermined standards.10 As a result, the application of academic achievement tests is now more uniform across school systems and more comparable with the national educational testing mandated in Denmark5 and Australia.11 Although this may seem to provide some advantage for researchers who wish to use these test results to assess outcome (e.g., related to earlier medical events), educational researchers have long been critical of mandated achievement tests’ ability to remain neutral with respect to the curriculum and still effectively monitor and motivate pedagogy.12 Experts argue that it is problematic to impose such testing procedures without injecting a degree of bias into teaching approaches. That is, with such high-stakes testing, educators may focus on material expected to appear on the test in hopes of showing better student performance and, thus, greater teacher effectiveness. Finally, researchers have also argued that, although achievement tests are meant to identify specific skills and knowledge, they are vulnerable to the confounds of the child’s innate IQ and self-regulation skills.13
Learning Disability as Outcome
Perhaps in an effort to address the limitations of academic achievement tests, some researchers have identified children who experienced early general anesthesia and were subsequently identified with LD. To our knowledge, the most methodologically sound investigation of the late neurotoxic effects of anesthesia was completed by Wilder et al.14 A major advantage of this study is that the authors identified a sample of children with LD who had been exposed to general anesthesia. Using medical records available for >5000 children from 5 counties in Minnesota, children were identified as having received anesthesia before 4 years of age. Of this group, 593 children met criteria for the exposure group, with approximately 75% undergoing a single surgical procedure. To determine LD in these children, a second large database of all children seen for an evaluation of learning problems was available. Children were compared with respect to their individual IQ and academic achievement test results. Using these test results, the investigators applied 3 decision rules to identify LD, ranging from the most to the least statistically rigorous: the predicted discrepancy method, which used the psychometric properties of the 2 measures to predict the expected academic achievement level from the measured IQ score and compared it with actual scores; the simple difference method, which directly compared IQ and academic achievement; and the low achievement method, which identified LD by an academic achievement score in the low-average range or below in the presence of an IQ score at the ninth percentile or higher. Application of the 3 LD identification rules indicated that 932 children from the original sample developed LD before 19 years of age. After controlling for relevant variables (i.e., gender, birth weight, gestational age), regression analyses indicated a dose–response relationship: the risk for LD increased (P < 0.001) with the number of exposures to anesthesia, but did not increase for children with a single exposure.
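For readers unfamiliar with these decision rules, the sketch below illustrates their logic on standard scores (mean 100, SD 15). All cutoffs, the assumed test correlation, and the example scores are assumptions chosen for illustration; they are not the exact thresholds used by Wilder et al.

```python
# Illustrative sketch of the 3 LD decision rules described in the text.
# Cutoffs and the IQ-achievement correlation r are assumed values for
# illustration, not the thresholds used by Wilder et al.

def predicted_discrepancy(iq, achievement, r=0.6, mean=100, sd=15, cut_sd=1.75):
    """Regression-based method: predict achievement from IQ via the
    tests' correlation r, then flag LD when actual achievement falls
    more than cut_sd standard errors of estimate below the prediction."""
    predicted = mean + r * (iq - mean)
    se_estimate = sd * (1 - r**2) ** 0.5
    return (predicted - achievement) > cut_sd * se_estimate

def simple_difference(iq, achievement, cut_points=22):
    """Simple difference method: directly compare the 2 standard scores."""
    return (iq - achievement) >= cut_points

def low_achievement(iq, achievement, ach_cut=90, iq_floor=80):
    """Low achievement method: achievement in the low-average range or
    below, with IQ at or above a floor (the 9th percentile corresponds
    to a standard score of roughly 80)."""
    return achievement <= ach_cut and iq >= iq_floor

# A child with IQ 110 but achievement 82 is flagged by all 3 rules
# under these assumed cutoffs.
iq, ach = 110, 82
print(predicted_discrepancy(iq, ach), simple_difference(iq, ach), low_achievement(iq, ach))
```

Note how the regression-based method accounts for the imperfect correlation between the 2 tests, which is why it is described above as the most statistically rigorous of the 3 rules.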
Classification Issues Associated with LD
Results of the study by Wilder et al.14 were highly dependent on designating children in the sample as LD, a characterization the research team applied in several ways. The evolving definition of LD among educators provides the context for why the definition of LD had to be varied in the study by Wilder et al.
The definition of LD has evolved over the years. The term LD came into formal use with the passage of the Individuals with Disabilities Education Act (IDEA) in 1975.15 LD, as defined by the code of federal regulations,16 is not explained by visual, hearing, or motor disabilities, mental retardation, emotional disturbances, or a result of socioeconomic status that results in impoverished educational opportunities or inadequate schooling. As of 2009, about 5% of all public school students were identified as having LD, with males comprising nearly two-thirds of students receiving special education services.17 The most commonly identified LD is dyslexia, which involves challenges with word fluency and word recognition.17 Other common disorders include dyscalculia, dysgraphia, auditory and visual disorders, and nonverbal learning disabilities.17
According to IDEA, LD is a disorder in the basic psychological processes involved in understanding or using language. That is, individuals with LD have, among other problems, difficulties with language and/or mathematics that may affect the ability to communicate, listen, think, read, and do more complex learning-related tasks incorporating these skills. Such disabilities can stem from any number of sources, including perceptual anomalies and/or brain insults, and they are commonly described as occurring on a continuum of mild to profound impairment. Before 2004, the presence of a severe discrepancy between IQ and academic achievement had been the key in identifying a child as LD. As noted earlier, the various methods of identifying the required discrepancy between IQ and achievement led to inconsistent labeling of the disability both within and across educational settings.
In the early days of the IDEA, efforts were made to standardize guidelines for the identification of LD across states. However, in the >3 decades since these efforts, inconsistencies in the implementation of guidelines and the misclassification of students to provide services for all children requiring academic assistance resulted in a nearly 200% increase in the number of children identified as having LD.15 The 2004 reauthorization of IDEA saw 2 major revisions aimed at amending this overidentification. First, the presence of a severe discrepancy between a child’s intellectual and achievement abilities was no longer relevant to the identification of LD.16 Second, a new identification process referred to as response to intervention (RTI) was implemented. RTI essentially requires the use of “generally effective” instruction in the classroom, monitors the student’s progress, revises instruction accordingly, and then again monitors progress.18 When a child does not respond to the instruction, he or she is either evaluated for special education using a risk versus deficit model or immediately qualified for special education.19
It is important to point out that LD is an educational label rather than a clinical diagnosis. Because one of the major uses of the label is to qualify the child for an individualized educational plan, this label has been variously applied by educators and its usage has been driven to some extent by the resources of the school districts. With respect to research, the various applications of the 2 models (i.e., severe discrepancy versus RTI) of LD identification require that studies using LD classification as an outcome measure define exactly how the children in the sample were identified. In particular, the RTI literature notes the overidentification of LD before 2004, a period during which many of the subjects in the studies discussed were designated as LD. Perhaps more importantly, the studies discussed here, whether they used either academic achievement levels or LD status as surrogate measures of neurodevelopmental outcome, provided limited information regarding the status of the brain. As noted by experts within the field of neuropsychology, even the IQ measures used in the definition of LD are highly dependent on school experience and are not particularly sensitive to the status of the CNS.20
ASSESSING NEURODEVELOPMENTAL EFFECTS OF ANESTHESIA WITH NEUROPSYCHOLOGICAL INSTRUMENTS
Noting weaknesses in applying either academic achievement or LD status as a quantification of outcome, we discuss a groundbreaking study as an example of how neuropsychological testing might be applied to address the question of the neurodevelopmental effects associated with earlier exposure to anesthesia. Lezak et al.21 define neuropsychology as “…an applied science concerned with the behavioral expression of brain dysfunction.” (p. 3). They go on to elaborate that one important use of neuropsychological test data is to investigate specific brain disorders with instruments of known sensitivity to CNS insult. Ing et al.11 focused on what might be termed the “outcome problem” inherent in cohort studies that use educational test results to ascertain the effects of early anesthetic exposure. These investigators examined the association between exposure before 3 years of age and cognitive outcome at 10 years of age. Unlike the studies discussed earlier, these investigators capitalized on a large cohort study that applied a battery of age-appropriate neuropsychological instruments. This study is, to our knowledge, the first of its kind with the availability of neuropsychological test results and serves to illustrate the advantage of using instruments sensitive to the CNS to delineate neurodevelopmental deficits in discrete cognitive domains.
The Western Australia Pregnancy Cohort (n = 2868) used in the study by Ing et al.11 included children born between 1980 and 1992. Children were assessed 8 times between the ages of 1 and 16 years with the most extensive testing occurring at 10 years of age. After adjusting the sample for loss to follow-up, 321 of the remaining 2608 children had surgical procedures requiring anesthesia between the ages of 1 and 3 years. Standardized neuropsychological tests of language function, attention, abstract reasoning, motor skills, and a parent report of behavior were included in the battery. Between-group analyses, corrected for multiple comparisons, indicated statistically significant differences on measures of language and abstract reasoning between the exposed and unexposed children. The clinical implication of these findings was evaluated by calculating a disability rating for each of the variables that reached statistical significance. After adjustment for confounders, results indicated a significant difference between the incidence of clinical disability between the exposed and the unexposed children on measures of higher order language abilities and abstract reasoning. Finally, a similar comparison between exposed children regrouped according to single versus multiple exposures indicated differences in the same cognitive areas. An interesting finding of this study was that despite the statistically different performances on neuropsychological tests of language and abstract reasoning, the Peabody Picture Vocabulary Test, a well-recognized surrogate for IQ level, did not show between-group differences. Also worthy of comment is the fact that no differences in parent rating scales were found, suggesting that parent surveys are probably not sensitive to cognitive changes that may be associated with exposure, particularly in instances when the exposure is early in the child’s history and not associated with a complex medical condition.
The problems of the extant research regarding neurodevelopmental toxicity were discussed cogently in an article by Sun.2 The author notes not only the retrospective nature of most studies but also the use of outcome variables of convenience, including a classification of LD or developmental delay, academic achievement test results, or subjective parent rating scales. Sun et al.1 report the results of a carefully designed, controlled prospective pilot study that used neuropsychological instruments to investigate a single exposure to anesthesia before the age of 3 years (Pediatric Anesthesia NeuroDevelopment Assessment [PANDA]). Importantly, the PANDA pilot study established that neuropsychological testing was feasible within the age range of interest and provided precedent for the use of siblings to control for the important confounds of genetics, socioeconomic status, and overall family environment. Studies of head injury have identified these variables as being particularly important when outcome is measured.22 Although the sample size was small and the age range restricted, this study serves as a model for the design of future research investigations of later neurodevelopment after anesthetic exposure as an infant or toddler.
Advantages of Neuropsychological Instruments as Outcome Measures
The literature reviewed for this article illustrates how cohort studies can inform future research designs. The study by Hansen et al.5 showed that when the proportion of nonattainers identified by neuropsychological testing was divided based on exposure history, the exposed group experienced a significantly higher risk of low cognitive performance or functional limitations. The findings of Block et al.6 indicated that the distribution of exposed children was significantly different and lower than that of a national standardization sample, providing further rationale for applying more rigorous methodology to investigate the cognitive status of children with a history of early exposure. Although Wilder et al.14 did not identify differences in LD status between study groups, LD status for almost all the children in the sample was calculated from scores obtained at lower educational levels (i.e., second–fourth grade), a procedure that likely underestimated the rate of LD in the overall sample. In contrast to this body of work, the study that applied neuropsychological test results indicated that exposed children showed deficits in the areas of language and abstract reasoning but not on a measure of IQ.11 In summary, these findings indicate that future trials investigating neurodevelopmental outcome after anesthesia in children will be improved by prospective studies with state-of-the-art procedures designed to assess the status of the CNS.
Cognitive Domains and Associated Neuropsychological Instruments
By convention, carefully validated and standardized neuropsychological tests are organized into 7 domains.21,23
Intelligence
Intelligence testing provides a general measure of overall ability, social understanding, and practical knowledge. The neuropsychologist uses IQ test results to lay the foundation for tests more sensitive to brain dysfunction rather than to indicate CNS damage.21 However, intelligence testing has gained complexity and subsequent controversy since its initial conceptualization by Binet (see Das24 for a comprehensive discussion of this topic).
Language
Although language, like most other cognitive domains, can be deconstructed into many component parts, key areas to evaluate in a general battery include instruments that assess both expressive and receptive speech. In addition, auditory comprehension (i.e., the ability to understand and follow complex verbal commands) and verbal fluency are frequently assessed. Although there are a variety of age-appropriate instruments to assess the multiple aspects of language in children, examples of key tests are included in Table 1.
Learning and Memory
Memory is the capacity to register (i.e., learn), retain, and retrieve information. Neuropsychologists frequently measure memory with respect to verbal, visual, and tactile performance. This field has developed a number of comprehensive memory tests for adults, children, and adolescents that allow for the understanding of how the attendant domains of attention, visual-spatial skills, and executive abilities all impact memory. One limitation of this domain is that memory in children younger than 3 years is usually not developed to the point that traditional assessment techniques are valid.
Visual-Perceptual and Visual-Spatial Skills
These skills generally refer to visual-perceptual, visual-spatial, or visual-constructional abilities. Visual-perceptual tasks often assess aspects of visual inattention that can range from impulsivity to more localizing symptoms of visual neglect. Other visual-spatial skills require the individual’s ability to rotate his or her own body in space, to match the angle of a line from a mixed array, or to perform visual discriminations by matching a discrete segment to an integrated design. Constructional problems usually involve drawing/copying or building, emphasizing tasks that may generalize to deficits in daily living skills.
Attention and Executive Function
This complex domain includes measures of abstract reasoning, encompassing the ability to filter out nonessential competing stimuli by focusing, sustaining, and/or dividing attention in order to organize material, solve novel problems, and maintain mental flexibility using input from other brain regions (e.g., memory; visual-spatial information). Novel problem solving and organizational abilities are frequently referred to as executive function because these activities serve to manage and coordinate both cognition and behavior. Because tests of executive function by definition depend on novelty, they are highly vulnerable to practice effects; thus, it is not appropriate to apply the same measures repeatedly over time, as might be required in longitudinal studies.
Motor and Psychomotor Abilities
A comprehensive neuropsychological evaluation frequently includes measures assessing dexterity and strength in the upper extremities. Performance on these tests can be compared with respect to right versus left hands, allowing for the individual to act as his/her own control. This comparison informs as to the relative integrity of the 2 brain hemispheres. In children, performance on a simple task that requires both a controlled motor behavior and speed provides much information regarding impulsivity and other aspects of problem-solving style. “Psychomotor” instruments add a cognitive challenge to an otherwise simple task such as copying symbols or inserting pegs into a board, providing an assessment of brain function under challenging circumstances.
Test Selection Across Developmental Levels
This brief discussion of the varied tests associated with these 7 domains highlights the complexity of neuropsychological methodology and underscores that the specificity of results increases as tests within the cognitive domains become more fine-grained. In addition, over the last 15 years, neuropsychology has gained a developmental perspective and benefits from tests specifically constructed to measure brain function and development during childhood.25 The PANDA pilot study1 used prospective neuropsychological testing to document outcome but this sample was limited to children between the ages of 7 and 10 years. Although not discussed in the article, limitations of testing methodology may have dictated the age range of the sample. Indeed, a continuing problem when planning research on the outcome of a medical condition such as traumatic brain injury (TBI) is the selection of appropriate neuropsychological instruments. Case-control designs that follow infants and children after injury depend on the availability of appropriate instrumentation within specific cognitive domains that extend across a broad age range. In this case, test selection presents a particular challenge. Unlike instruments designed for adults, pediatric instruments differ depending on the developmental epoch, even though they may be measuring the same cognitive construct. Recently, researchers within the pediatric TBI community in conjunction with National Institutes of Health provided recommendations for common data elements appropriate across developmental epochs to assess outcome after TBI.26
Table 1 is based on the National Institutes of Health model and presents a compendium of age-appropriate instruments (i.e., from infancy through later adolescence) that are available to assess 7 cognitive domains. This battery, although not developed by consensus, is intended to provide an example of instruments available that might be used by investigators as they unravel the toxic effects of anesthesiology on the developing brain. The interested reader is referred to the comprehensive texts for more detailed information regarding individual tests.21,23
Animal studies have documented that anesthesia holds the potential to damage the immature nervous system, and cohort studies completed with children have provided the foundation for the exploration of the iatrogenic effects of anesthesia administered to infants or toddlers. These studies relied on testing originally designed for the purpose of classifying school progress or identifying those in need of specialized educational services. In spite of these limitations, findings have provided preliminary evidence that suggests children exposed to anesthesia early in life are different from children of the same age who were never exposed. These studies are important because they provide the basis for future controlled, prospective studies that investigate outcome after anesthesia. The methodology of neuropsychological testing, initially developed to assess the integrity of the CNS in adults, now includes valid, comparable instruments to assess children across the age range. Thus, neuropsychological testing provides investigators with highly sensitive, robust measures of outcome that can be applied in prospective studies. As investigators and clinicians seek to weigh the risks and benefits of anesthesia exposures, this method is likely to play an important role assessing the overall integrity of the brain and identifying deficits within specific cognitive domains.
Name: Sue R. Beers, PhD.
Contribution: This author conceptualized the manuscript, organized the initial draft, completed the final draft, and submitted the manuscript to the journal.
Attestation: Sue R. Beers approved the final manuscript and is the designated archival author.
Name: Dana L. Rofey, PhD.
Contribution: This author reviewed and critiqued the manuscript concept, wrote the sections on achievement testing and learning disability, and provided comments on the final draft.
Attestation: Dana L. Rofey approved the final manuscript.
Name: Katie A. McIntyre, MS.
Contribution: This author assisted the first author in writing the neuropsychological assessment section, completed Table 1, and prepared the reference section.
Attestation: Katie A. McIntyre approved the final manuscript.
This manuscript was handled by: Peter J. Davis, MD.
1. Sun LS, Li G, DiMaggio CJ, Byrne MW, Ing C, Miller TL, Bellinger DC, Han S, McGowan FX. Feasibility and pilot study of the Pediatric Anesthesia NeuroDevelopment Assessment (PANDA) project. J Neurosurg Anesthesiol. 2012;24:382–8
2. Sun L. Early childhood general anaesthesia exposure and neurocognitive development. Br J Anaesth. 2010;105(Suppl 1):i61–8
3. Lenneberg EH. Biological Foundations of Language. New York: John Wiley; 1967
4. Taylor HG, Alden J. Age-related differences in outcomes following childhood brain insults: an introduction and overview. J Int Neuropsychol Soc. 1997;3:555–67
5. Hansen TG, Pedersen JK, Henneberg SW, Pedersen DA, Murray JC, Morton NS, Christensen K. Academic performance in adolescence after inguinal hernia repair in infancy: a nationwide cohort study. Anesthesiology. 2011;114:1076–85
6. Block RI, Thomas JJ, Bayman EO, Choi JY, Kimble KK, Todd MM. Are anesthesia and surgery during infancy associated with altered academic performance during childhood? Anesthesiology. 2012;117:494–503
7. Hansen TG, Henneberg SW, Morton NS, Christensen K. The value of observational cohort studies. Paediatr Anaesth. 2010;20:880–94
8. Nichols S, Glass G, Berliner D. High-stakes testing and student achievement: Updated analyses with NAEP data. Education Policy Analysis Archives. 2012;20:1–35
9. Pedhazur EJ, Schmelkin LP. Measurement, Design, and Analysis: An Integrated Approach. New York: Psychology Press; 2013
10. Ryan JE. The perverse incentives of the No Child Left Behind Act. New York University Law Review. 2004;79:932–89
11. Ing C, DiMaggio C, Whitehouse A, Hegarty MK, Brady J, von Ungern-Sternberg BS, Davidson A, Wood AJ, Li G, Sun LS. Long-term differences in language and cognitive function after childhood exposure to anesthesia. Pediatrics. 2012;130:e476–85
12. Resnick LB, Resnick DP. Tests as standards of achievement in school. In: Pfleiderer J, ed. Proceedings of the 1989 ETS Invitational Conference: The Uses of Standardized Tests in American Education. Princeton, NJ: Educational Testing Service; 1989:63–80
13. Duckworth AL, Quinn PD, Tsukayama E. What no child left behind leaves behind: the roles of IQ and self-control in predicting standardized achievement test scores and report card grades. J Educ Psychol. 2012;104:439–51
14. Wilder RT, Flick RP, Sprung J, Katusic SK, Barbaresi WJ, Mickelson C, Gleich SJ, Schroeder DR, Weaver AL, Warner DO. Early exposure to anesthesia and learning disabilities in a population-based birth cohort. Anesthesiology. 2009;110:796–804
15. Kavale KA, Spaulding LS. Is response to intervention good policy for specific learning disability. Learn Disabil Res Pract. 2008;23:169–79
16. Individuals with Disabilities Education Improvement Act of 2004 (IDEA), Pub. L. No. 108-446 (2004)
17. Cortiella C. The State of Learning Disabilities. New York, NY: National Center for Learning Disabilities; 2011
18. Fuchs D, Mock D, Morgan PL, Young CL. Responsiveness-to-intervention: definitions, evidence, and implications for the learning disabilities construct. Learn Disabil Res Pract. 2003;18:157–71
19. Vaughn S, Fuchs LS. Redefining learning disabilities as inadequate response to instruction: the promise and potential problems. Learn Disabil Res Pract. 2003;18:137–46
20. Zhu J, Weiss LG, Prifitera A, Coalson D. The Wechsler Intelligence Scales for children and adults. In: Goldstein G, Beers SR, eds. Comprehensive Handbook of Psychological Assessment. Hoboken, NJ: John Wiley; 2004:51–75
21. Lezak MD, Howieson DB, Loring DW. Neuropsychological Assessment. 4th ed. New York: Oxford University Press; 2004
22. Taylor HG, Yeates KO, Wade SL, Drotar D, Stancin T, Minich N. A prospective study of short- and long-term outcomes after traumatic brain injury in children: behavior and achievement. Neuropsychology. 2002;16:15–27
23. Strauss E, Sherman EMS, Spreen O. A Compendium of Neuropsychological Tests: Administration, Norms, and Commentary. New York: Oxford University Press; 2006
24. Das JP. Theories of intelligence: issues and applications. In: Goldstein G, Beers SR, eds. Comprehensive Handbook of Psychological Assessment. Hoboken, NJ: John Wiley; 2004:5–24
25. Goldstein G, Beers SR. Introduction to section one: The Wechsler Intelligence Scales for children and adults. In: Goldstein G, Beers SR, eds. Comprehensive Handbook of Psychological Assessment. Hoboken, NJ: John Wiley; 2004:3–4
26. McCauley SR, Wilde EA, Hicks R, Anderson V, Bedell G, Beers SR, Campbell TS, Chapman SB, Ewing-Cobbs L, Gerring JP, Gioia GA, Levin HS, Michaud LJ, Prasad MR, Swaine BR, Turkstra LS, Wade SL, Yeates KO. Recommendations for the use of common outcome measures in pediatric traumatic brain injury research. J Neurotrauma. 2012;29:678–705
27. Beers SR, Wisniewski SR, Garcia-Filion P, Tian Y, Hahner T, Berger RP, Bell MJ, Adelson PD. Validity of a pediatric version of the Glasgow Outcome Scale-Extended. J Neurotrauma. 2012;29:1126–39
28. Bayley N. Bayley Scales of Infant and Toddler Development: Administration Manual. 3rd ed. San Antonio, TX: Psychological Corporation; 2006
29. Wechsler D. Wechsler Preschool and Primary Scale of Intelligence: Administration Manual. 4th ed. San Antonio, TX: Psychological Corporation; 2012
30. Wechsler D. Wechsler Abbreviated Scale of Intelligence: Administration Manual. 2nd ed. San Antonio, TX: Psychological Corporation; 2011
31. Semel E, Wiig EH, Secord WA. Clinical Evaluation of Language Fundamentals—Preschool: Administration Manual. 2nd ed. San Antonio, TX: Psychological Corporation; 2003
32. Semel E, Wiig EH, Secord WA. Clinical Evaluation of Language Fundamentals—Fourth Edition: Administration Manual. San Antonio, TX: Psychological Corporation; 2013
33. Dunn LM, Dunn DM. Peabody Picture Vocabulary Test: Administration Manual. 4th ed. Minneapolis, MN: NCS Pearson; 2007
34. Williams KT. Expressive Vocabulary Test: Administration Manual. 2nd ed. Minneapolis, MN: NCS Pearson; 2007
35. Delis DC, Kramer JH, Kaplan E, Ober BA. California Verbal Learning Test—Children’s Edition: Administration Manual. San Antonio, TX: Psychological Corporation; 1994
36. Korkman M, Kirk U, Kemp S. NEPSY-II: Administration Manual. 2nd ed. San Antonio, TX: Psychological Corporation; 2010
37. Beery KE, Buktenica NA, Beery NA. The Beery-Buktenica Developmental Test of Visual-Motor Integration: Administration Manual. 6th ed. Minneapolis, MN: NCS Pearson; 2010
38. Gioia GA, Espy KA, Isquith PK. Behavior Rating Inventory of Executive Function—Preschool Version: Administration Manual. Lutz, FL: Psychological Assessment Resources; 2003
39. Gioia GA, Isquith PK, Guy SC, Kenworthy L. Behavior Rating Inventory of Executive Function: Administration Manual. Lutz, FL: Psychological Assessment Resources; 2000
40. Bruininks RH, Bruininks BD. Bruininks-Oseretsky Test of Motor Proficiency: Administration Manual. 2nd ed. Minneapolis, MN: NCS Pearson; 2005
41. Lafayette Instrument. Grooved Pegboard Test: User Instructions. Lafayette, IN: Lafayette Instrument Co; 2002
42. Wechsler D. Wechsler Intelligence Scale for Children: Administration Manual. 4th ed. San Antonio, TX: Psychological Corporation; 2003
43. Achenbach TM, Rescorla LA. Manual for the ASEBA Preschool Forms and Profiles. Burlington, VT: ASEBA; 2000
44. Achenbach TM, Rescorla LA. Manual for the ASEBA School-Age Forms and Profiles. Burlington, VT: ASEBA; 2001