SCREENING measures offer a brief snapshot of child development. One of the earliest screening practices appeared in Spartan society, where an assessment similar to an Apgar test was used with newborns; infants who failed the test had their lives terminated. Fortunately, we have progressed and now use better methods to get children the help and support they need. Today, developmental screening measures are used in an early detection system aimed at locating children with delays/disabilities who need early intervention (EI) or early childhood special education (ECSE) services. Screening measures can also be used in a tracking program, or developmental surveillance, to monitor children who are at risk for developing a disability. Given the importance of screening in the life of a child and his or her family, we should be concerned about how well our screening measures function. At this time, empirically validated screening practices, procedures, and tools to identify children who require specialized supports and services are limited.
It is important to have an evidence base for modern methods of screening young children for at least two reasons. First, high-quality experimental studies strengthen support for a measure's use (Buysse & Wesley, 2006; Snyder, 2006; Snyder, Lawson, Thompson, & Stricklin, 1993). Research studies conducted on a measure help consumers understand the strengths and weaknesses of the assessment tool. Work has been done to analyze research practices in EI and ECSE (Carta, 2002; McLean, Snyder, Smith, & Sandall, 2002; Odom et al., 2005; Smith et al., 2002; Thompson, Diamond, McWilliam, Snyder, & Snyder, 2005).
Second, high-quality research allows for replication and extension. The more clearly a study describes its methods, the easier it is for other researchers to replicate the work and extend knowledge. In turn, replication can validate the original research findings and point to additional areas for exploration.
To document whether screening tools have empirical support, screening test manuals should be examined, as these manuals should report information on experimental studies. However, busy professionals may not have time to conduct their own research on screening measures beyond what is published in a manual, and the information in a manual is unlikely to be exhaustive or comparative and is typically updated only with new editions. Previous research reviews have examined available evidence on conventional and authentic assessment practices for determining eligibility for Individuals with Disabilities Education Act special services (Bagnato, Macy, Salaway, & Lehman, 2007; Macy & Bagnato, 2010). To better understand the effectiveness and accuracy of screening tools, and to provide users with the evidence needed to make sound decisions, a review of the literature was conducted to locate empirical studies.
METHODS
The main purpose of this research synthesis is to describe the evidence available on developmental screening measures. Fourteen developmental screening measures were chosen through the following process. A “developmental screening measure” was defined as a tool that screens multiple developmental domains (e.g., cognitive, adaptive, social, communication, motor). Screening measures were excluded if they focused on a single developmental area (e.g., language), disorder (e.g., autism), or academic content area (e.g., reading). The National Early Childhood Technical Assistance Center (Ringwalt, 2008) and the American Academy of Pediatrics (AAP; American Academy of Pediatrics, Council on Children With Disabilities, Bright Futures Steering Committee, Medical Home Initiative for Children With Special Needs Project Advisory Committee, 2006) have publications that identify commercially available developmental screening tools. These two lists were used to derive a list of 19 developmental screening measures. From the list of 19, tools were excluded if they had fewer than two research studies meeting the research criteria as of 2011 (i.e., Developmental Assessment of Young Children, First STEP, Kaufman Brief Intelligence Test, and Kaufman Survey of Early Academic and Language Skills).
The following 14 measures met the inclusion criteria:
- Ages and Stages Questionnaire (Squires, Twombley, Bricker, & Potter, 2009),
- Bayley Infant Neurodevelopmental Screener (Bayley, 2006),
- Battelle Developmental Inventory Screening Test (Newborg, 2005),
- Brigance II Screens (Brigance & Glascoe, 2002, 2005a, 2005b),
- Child Development Inventories (Ireton, 1987, 1988, 1994),
- Developmental Activities Screening Inventory (Fewell & Langley, 1984),
- Denver II (Frankenburg et al., 1992, 1996),
- Developmental Indicators for the Assessment of Learning (DIAL-3; Mardell-Czudnowski & Goldenberg, 1975, 1998),
- Developmental Observation Checklist System (Hresko, Miguel, Sherbenou, & Burton, 1994),
- Early Screening Inventory (Meisels, Marsden, Wiske, & Henderson, 2008),
- Early Screening Profiles (Harrison et al., 1990),
- Learning Accomplishment Profile Diagnostic Screen (Nehring et al., 1997),
- McCarthy Screening Test (McCarthy, 1978), and
- Parents' Evaluation of Developmental Status (Glascoe, 1997a).
Studies were included in the research synthesis if the investigation (a) researched one or more of the 14 selected developmental screening measures, (b) involved young children from birth to kindergarten with disabilities or at risk for developing a disability due to environmental or biological risk conditions, (c) examined the usefulness, accuracy, consistency, and/or effectiveness of the tool for screening young children with disabilities and/or at risk, and (d) was published in a peer-reviewed, scholarly publication.
Excluded were studies that used the screening tool to examine variables other than the tool itself, or that used the screener as an outcome measure. For example, a researcher who wanted to know the impact of an intervention on mother–child interactions used the Brigance to measure children's development before and after the intervention. This type of study was excluded from the research synthesis because the tool was used mainly to investigate the interaction between dyads (i.e., the dependent variable) and not to examine the psychometric properties of the measure. Dissertation/master's thesis studies, conference papers/proceedings, conceptual papers, technical reviews, and non-peer-reviewed studies were also excluded.
The search spanned the fields of psychology, developmental disabilities, special education, allied health (speech and language therapy, physical therapy, occupational therapy), and EI. Ancestral searches were also conducted. Key words used for the literature search included the titles of the developmental screening measures. Databases used were Academic Search Elite, Education Resources Information Center, Google Scholar, Health Source, Psychological Abstracts, PsycARTICLES, PsycINFO, PubMed/MEDLINE, and Teacher Reference Center.
RESULTS
A total of 222 studies were found that met criteria for this research synthesis on the 14 developmental screening measures (Table 1).
Table 1: Developmental Screening Assessment Characteristics
The screening instruments with the most published research were the (a) Ages and Stages Questionnaire, (b) Denver/Denver Developmental Screening Test, and (c) McCarthy/McCarthy Screening Test. The oldest study found was published in 1971 on the Denver, and several screening measures had studies published within a year of this literature review (i.e., 2011). The children in these studies ranged from infants and toddlers to preschoolers and kindergartners. A total of 135,087 young children, in the United States and abroad, were included in research studies on these developmental screening measures. Reliability, validity, and utility studies were published on the 14 instruments.
RELIABILITY
Reliability studies indicate how consistently screening measures identify children with delays/disabilities. Stability of scores and child performance across time, settings, and raters is another indicator of how reliably a screening assessment performs. Three types of reliability research were examined: (a) interitem, (b) interrater, and (c) test–retest.
Interitem
Internal consistency is evidenced by strong interitem correlations within a scale. Eight studies investigated interitem reliability; the first appeared in 1983 and the last in 2002. The studies ranged from using items from a screening tool to track progress over time (Bagnato, Suen, Brickley, Smith-Jones, & Dettore, 2002), to classifying children across age groups (Suen, Mardell-Czudnowski, & Goldenberg, 1989), to comparing items from a screening measure across populations of children around the world (Shapira & Harel, 1983; Valencia & Rankin, 1985; Williams & Williams, 1987).
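For readers unfamiliar with interitem indices, the short sketch below computes Cronbach's alpha, a common internal consistency coefficient, from a small matrix of hypothetical item scores. The data are invented for illustration and do not come from any study in this synthesis.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (children x items) matrix of item scores."""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of children's total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: six children scored 0/1/2 on four communication items.
scores = np.array([
    [2, 2, 1, 2],
    [1, 1, 1, 0],
    [2, 2, 2, 2],
    [0, 1, 0, 0],
    [1, 2, 1, 1],
    [2, 1, 2, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```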
Table 2-a: Research Studies on Developmental Screening Assessments
Table 2-b: Research Studies on Developmental Screening Assessments
Table 2-c: Research Studies on Developmental Screening Assessments
Table 2-d: Research Studies on Developmental Screening Assessments
Table 2-e: Research Studies on Developmental Screening Assessments
Table 2-f: Research Studies on Developmental Screening Assessments
Table 2-g: Research Studies on Developmental Screening Assessments
Table 2-h: Research Studies on Developmental Screening Assessments
Table 2-i: Research Studies on Developmental Screening Assessments
Table 2-j: Research Studies on Developmental Screening Assessments
Table 2-k: Research Studies on Developmental Screening Assessments
Table 2-l: Research Studies on Developmental Screening Assessments
Table 2-m: Research Studies on Developmental Screening Assessments
Table 2-n: Research Studies on Developmental Screening Assessments
Interrater
Interrater agreement can indicate how consistently a measure works. There were 23 interrater reliability studies in which at least two observers used one or more of the screening tools. Researchers have conducted interrater reliability studies since the 1970s. The studies incorporated a variety of raters, including parents (see Aylward & Verhulst, 2008; Frankenburg, van Doorninck, Liddell, & Dick, 1976; Squires, Bricker, & Potter, 1997), physicians and pediatric nurse practitioners (Rosenbaum, Chua-Lim, Wilhite, & Mankad, 1983), psychologists (Reynolds, 1978), research assistants (Frankenburg et al., 1976), and teachers (Brulle & Ivarie, 1988; Tsai, McClelland, Pratt, & Squires, 2006; VanDerHeyden et al., 2004). Agreement between raters ranged from low to high across these studies.
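As an illustration of how interrater agreement is commonly quantified, the sketch below computes Cohen's kappa (chance-corrected agreement) for two hypothetical raters, say a parent and a teacher, classifying the same children as at risk or not. The ratings are fabricated for illustration, and kappa is only one of several coefficients reported in the studies cited above.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for 10 children made independently by two raters.
parent  = ["risk", "ok", "ok", "risk", "ok", "ok", "risk", "ok", "ok", "ok"]
teacher = ["risk", "ok", "risk", "risk", "ok", "ok", "ok", "ok", "ok", "ok"]
print(f"Cohen's kappa = {cohens_kappa(parent, teacher):.2f}")
```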
Test–retest
To examine the consistency of screening measures across time, 15 studies analyzed test–retest reliability. All of these studies measured child development on at least two occasions. The assumption behind this design is that when a measure is reliable, scores will change little from one administration to the next, because the underlying construct is expected to remain relatively constant over a short interval. Studies varied in the time between administrations, ranging from a week to over a year. Reliability coefficients were reported, and many studies found statistically significant correlations across time (e.g., Foxcroft, 1997; Harper & Wacker, 1983; Rose, Calhoun, & Pendergast, 1990).
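A minimal sketch of the test–retest logic follows: the Pearson correlation between two administrations serves as the reliability coefficient. The scores are hypothetical and chosen only to illustrate the calculation.

```python
import numpy as np

# Hypothetical total screening scores for eight children tested twice, about four weeks apart.
time_1 = np.array([48, 52, 35, 60, 41, 55, 38, 50])
time_2 = np.array([50, 51, 33, 62, 44, 53, 40, 49])

# Pearson correlation between administrations = test-retest reliability coefficient.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest r = {r:.2f}")
```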
VALIDITY
Effectiveness of the screening measures was reported in validity studies. Validity refers to how well a tool measures what it purports to measure (Grisham-Brown & Pretti-Frontczak, 2011; Losardo & Notari-Syverson, 2011). The following five types of validity research were examined: (a) concurrent, (b) construct, (c) criterion, (d) predictive, and (e) sensitivity and specificity.
Concurrent
Concurrent validity examines how well a developmental screening tool correlates with other measures of the same construct administered at the same time. A total of 135 concurrent validity studies compared one or more of the 14 screening instruments with one or more comparable instruments.
Two or more measures were compared, with the majority of studies using one other tool for comparison with one of the 14 screening measures. Some studies compared the screening measure with another screening measure and/or a diagnostic measure. Research findings revealed that some screening measures produce dissimilar results or disagreements (Shevell, 2010; Sices, Stancin, Kirchner, & Bauchner, 2009). None of the studies made comparisons with a commercially available programmatic assessment (e.g., Assessment, Evaluation, and Programming System; HELP; Carolina); however, some studies compared one of the developmental screeners (i.e., Brigance) with curriculum-based measures of academic skills (VanDerHeyden, Broussard, & Cooley, 2006; VanDerHeyden et al., 2004). Language-translated screening assessments were also compared with the original English versions (Bian et al., 2010; Chew & Lang, 1993; Kim & Sung, 2007; Mardell-Czudnowski, Dionne-Simard, & Oullet-Mayrand, 1987).
Another type of study compared two or more screening assessments in diverse settings (e.g., rural communities, reservations, health care, education, social services). Other concurrent validity studies paired a screening measure with other developmental assessments for a specific population of children (e.g., children who are homeless or have low birth weight), an academic area (e.g., reading and mathematics), or a developmental area (e.g., motor). Correlations were often reported in concurrent validity studies.
Construct
Construct validity refers to the extent to which a developmental screening tool measures the concept or trait it intends to measure and aligns with a theoretical concept; for example, scores from a representative sample should agree with the theoretical expectation of a normal curve distribution. There were 30 construct validity studies in this synthesis.
Many of the construct validity studies in this synthesis examined the theory of norms, in which new norms were developed for a screening instrument (Barnes & Stark, 1975; Byrne, Backman, & Bawden, 1995; Duyme, Zorman, Tervo, & Capron, 2011; Mardell-Czudnowski & Goldenberg, 1984). Another construct was delay; for example, one study was designed to measure the extent to which a screening measure could discriminate between children with delays/disabilities and children with typical development (Feldman, Haley, & Coryell, 1990). Different developmental and academic constructs were investigated, including but not limited to adaptive behavior (Macmann & Barnett, 1984), communication/language, mathematics, motor skills (Reeves, 1997), and reading. Several studies examined culture and the cultural relevancy of screening tools (Akaragian & Dewa, 1992; D'Aprano, Carapetis, & Andrews, 2011; Dionne, Squires, Leclerc, Peloquin, & McKinnon, 2006; Heo, Squires, & Yovanoff, 2008; Mishra, 1981; Tsai, McClelland, Pratt, & Squires, 2006).
Criterion
Criterion validity examines how well one or more variables predict or relate to an outcome on another measure. For example, one study in this synthesis used a child's race, gender, and mother's educational attainment to predict performance on the screening measure (Ittenbach & Harrison, 1990). Only six studies examined criterion validity.
Predictive
Predictive validity refers to the extent to which a screening measure can predict, or correlate with, other measures of the same construct administered in the future. There were 51 predictive validity studies in this synthesis. Some of the research focused on using a screening assessment to predict (a) special education status/placement (Mantzicopoulos, 1999b; Mantzicopoulos & Maller, 2002), (b) referrals for diagnostic testing (Eno & Woehlke, 1995; Frankenburg et al., 1976), (c) performance on formal assessments (Feeney & Bernthal, 1996), (d) later academic/school achievement (Diamond, 1987, 1990; Gordon, 1988; Gullo, Clements, & Robertson, 1984; Lindquist, 1982; Reynolds, 1978; Taylor & Ivimey, 1980; Valencia, 1984; VanDerHeyden et al., 2004, 2006), (e) kindergarten school readiness (Cadman et al., 1984), and (f) later grade retention (Wenner, 1995).
Studies were also included in which parents' reports and observations accurately assessed their child's development and classified the child's risk for delay/disability (Colligan, 1977; Henderson & Meisels, 1994; Skellern & O'Callaghan, 1999). Accurate prediction was found to be more difficult for children with environmental risk factors than for children with biological risk factors (Hess, Papas, & Black, 2004).
Sensitivity and specificity
Sensitivity shows how well a measure correctly identifies children with delay/disability, whereas specificity indicates the degree to which a measure correctly identifies children without delay/disability (Macy, Bricker, & Squires, 2005). There were 67 sensitivity and 66 specificity studies of screening measures.
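The arithmetic behind these two indices is straightforward. The sketch below computes both from a hypothetical cross-tabulation of screening decisions against a reference (diagnostic) outcome; the counts are invented for illustration only.

```python
# Hypothetical 2 x 2 counts: screening decision versus reference (diagnostic) outcome.
true_positives = 18    # screened positive; reference confirms a delay
false_negatives = 4    # screened negative; reference confirms a delay (missed)
true_negatives = 150   # screened negative; reference confirms typical development
false_positives = 28   # screened positive; reference shows typical development (overreferral)

sensitivity = true_positives / (true_positives + false_negatives)   # proportion of delays detected
specificity = true_negatives / (true_negatives + false_positives)   # proportion of typical children cleared
print(f"Sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```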
Some studies showed that screening tools failed to detect children with delays, indicating a lack of measurement sensitivity (Borowitz & Glascoe, 1986; Frisk et al., 2009). In contrast, other studies showed developmental screening tools with strong sensitivity that correctly identified young children with developmental delay/risk (Aylward & Verhulst, 2008; Chaffee, Cunningham, Secord-Gilbert, Elbard, & Richards, 1990; Doig, Macias, Saylor, Craver, & Ingram, 1999; Kerstjens et al., 2009).
Many of these same studies had high specificity and accurately identified children who were developing typically and were not at risk or delayed (Doig et al., 1999). Some studies investigated screening tools that demonstrated strong sensitivity but not specificity (e.g., Chaffee et al., 1990; Glascoe et al., 1992), and vice versa (Greer et al., 1989; Schonhaut et al., 2009; Shoemaker, Saylor, & Erickson, 1993). Together, these five types of validity studies address how consistently, stably, and accurately developmental screening measures classify children's risk/delay status.
UTILITY
Utility studies examined the usefulness of developmental screening measures. A total of 61 utility studies are included in this research synthesis. The largest category of utility studies comprised those in which researchers studied cross-cultural relevancy (al-Naquib, Frankenburg, Mirza, Yazdi, & al-Noori, 1999; Anthony & Assel, 2007; Bian et al., 2010; Bryant, Davies, & Newcombe, 1974; Campos, Squires, & Ponte, 2010; Coghlan, Kiing, & Wake, 2003; Dionne et al., 2006; Howard & de Salazar, 1984; Janson, 2003; Janson & Squires, 2004; Kosht-Fedyshin, 2006; Miller, Onotera, & Deinard, 1984; Olade, 1984; Solomons, 1982). Studies also examined the usefulness of screening tools with children with specific conditions, such as autism spectrum disorders (da Cunha & de Melo, 2005; Gücüyener et al., 2006). Other utility investigations included cost analysis (Armstrong & Goldfeld, 2008; Chan & Taylor, 1998; Dobrez et al., 2001; Hix-Small, Marks, Squires, & Nickel, 2007), readability of screening tools (Brothers, Glascoe, & Robertshaw, 2008), referrals (German, Williams, Herzfeld, & Marshall, 1982), ease of implementation (Earls & Hay, 2006), feasibility (Elbers, Macnab, McLeod, & Gagnon, 2008; Jee et al., 2010; Lando, Klamer, Jonsbo, Weiss, & Greisen, 2005; Schonwald, Huntington, Chan, Risko, & Bridgemohan, 2009), parents' written responses on parent-completed tools (Cox, Huntington, Saada, Epee-Bounya, & Shonwald, 2010), communication between parents and practitioners (Sices et al., 2008), practitioner satisfaction with screening practices (Costenbader, Rohrer, & Difonzo, 2000), and training/professional development (Nicol, 2006; Thompson, Tuli, Saliba, DiPietro, & Nackashi, 2010). Table 2 shows reliability, validity, and utility studies for the 14 developmental screening measures.
DISCUSSION
Research utilization is enhanced when consumers are aware of studies (Schiller, Malouf, & Danielson, 1995; Winton, 2006). Knowledge transfer occurs when researchers disseminate their work and consumers are able to relate to and apply the research in context. The Rand Corporation conducted a study on how large-scale, government-funded research findings were used; its major findings pointed to the importance of knowledge transfer in research utilization (Rand Corporation, 2011).
Knowledge of the evidence available on screening tools can help consumers make decisions. It is helpful to know how much research supports an instrument's use, although tools that have been available longer tend to have more studies, which does not necessarily mean they are better or more rigorous in detecting delay/disability. Materials are needed that translate research findings into accessible, consumer-oriented products. Several national centers funded by federal agencies translate research into practice (e.g., Tracking, Referral, and Assessment Center for Excellence). These centers often provide materials in which lengthy scientific studies have been condensed into practical information. Consumers benefit when they are able to utilize research outcomes.
The AAP screening recommendations may partly explain the increased demand for developmental screening tools with evidence of strong reliability, validity, specificity, sensitivity, and utility (AAP, 2006). The research base on developmental screening measures has been growing since the 1970s. As a result, practice and research implications are considered below.
Practice implications
Debate occurred in the past over the extent to which parents and families should be included during screening assessment. It was a commonly held belief that many parents had little understanding of or training in child development and could not reliably provide meaningful developmental information about their child (Bailey, Buysse, Smith, & Elam, 1992; McLean, Bailey, & Wolery, 2004). Articles then began to appear in the professional literature describing the importance of a family-centered approach (Bailey & Blasco, 1990; Bernheimer & Keogh, 1995; Crais, 1993; Dinnebeil & Rule, 1994; Dunst, Johanson, Trivette, & Hamby, 1991; McBride, Brotherson, Joanning, Whiddon, & Demmitt, 1993). In addition, other publications described desirable characteristics of assessment tools and procedures that include families (Bagnato, Pretti-Frontczak, & Neisworth, 2010; National Research Council and Institute of Medicine, 2000; National Research Council, 2008; Neisworth & Bagnato, 2005). Studies supporting the effectiveness of having parents report information about their child's development during the assessment process began to be published in journals (Bricker & Squires, 1989a, 1989b; Diamond & Squires, 1993; Sexton, Thompson, Perez, & Rheams, 1990; Suen, Logan, Neisworth, & Bagnato, 1995). It was not long before professional attitudes and assessment practices became focused on the child and his or her family.
New policies and assessment practices resulted from these discussions/debates and from research that pushed the field toward a more ecological approach to conducting developmental screening assessment. Today, professional organizations and experts endorse meaningful partnerships with parents as a salient feature of the assessment process (Bruder, 2000; Division for Early Childhood, 2007; National Research Council and Institute of Medicine, 2000; National Research Council, 2008; McConnell, 2000). Some developmental screening tools are geared specifically toward parents (Glascoe, 2006; Marks & Glascoe, 2010; Squires, 1996; Watson, Kiekhefer, & Olshansky, 2006), and research is conducted with parents, such as the interrater reliability studies in this synthesis. Practitioners in health, social service, and education fields are increasingly being called to use developmental screening assessment tools that meet evidence-based practice standards (Radecki, Sand-Loud, O'Connor, Sharp, & Olson, 2011; Sand et al., 2005). This synthesis shows that there is a promising body of evidence on developmental screening measures.
Research implications
There are four implications from this synthesis. First, more research is needed on the reliability, validity, and utility of screening tools and practices. Additional studies will help us better understand how scales perform and under which conditions they are most effective. This research synthesis revealed that more investigations are needed on indicators of internal consistency (e.g., interitem, split-half, and coefficient alpha estimates); there were fewer than a dozen such studies, and they offered limited variety in research design. More variety in research designs is recommended, as few studies employed random assignment or quasi-experimental designs (Thompson et al., 2010).
Another gap in the literature is fidelity of implementation research on screening measures and practices. Accuracy of test administration is crucial, and most of the screening assessments in this study have manuals and/or multimedia (e.g., video) tools available to help users with fidelity of implementation. A further area for research is replication studies. Replication research is often expensive, and subsequent studies may not produce results as strong as initial findings. However, replicating experiments can lead to generalizability and further validate a measure (Hoffman, 1982; Sturner, Heller, Funk, & Layton, 1993; Sturner, Funk, & Green, 1996).
Second, caution should be used when interpreting research designs that compared screening tools with reference tests that are not equivalent (Camp, 2007). Several studies in this synthesis compared a screening measure with a diagnostic measure, yet the purpose, procedures, and results of a diagnostic test are very different from those of a developmental screener. Screening measures may work best when they are used as intended (e.g., to determine whether more testing is needed, to track individual children or groups of children at risk, and to conduct early detection, or Child Find, services in communities). Another consideration is that similar instruments (i.e., two screening measures) may produce different results (Shevell, 2010; Sices et al., 2009). Furthermore, comparing different methods of assessment (e.g., direct test by a clinician, parent report measure) may produce dissimilar results (Voigt et al., 2007).
Many studies had small sample sizes, which is especially problematic when estimating specificity and sensitivity because those estimates depend on the total number of children with positive or negative results on a reference test (Camp, 2006, 2007). A general rule of thumb is at least 200 children in each age interval (Altman, 1991), but several of the studies in this synthesis fell far short of that recommendation. One more limitation of the research is that some screening tools had multiple editions; therefore, some research may have been conducted on a previous version of a tool.
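To make the sample size concern concrete, the sketch below computes an approximate 95% confidence interval around an observed sensitivity of .80 at two sample sizes; the values are assumed purely for illustration and are not drawn from any reviewed study.

```python
import math

def approx_ci(p, n, z=1.96):
    """Approximate (Wald) 95% confidence interval for a proportion."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

observed_sensitivity = 0.80  # assumed for illustration
for n_delayed in (20, 200):  # children with a confirmed delay on the reference test
    low, high = approx_ci(observed_sensitivity, n_delayed)
    print(f"n = {n_delayed:3d}: sensitivity 95% CI is roughly ({low:.2f}, {high:.2f})")
```

With only 20 reference-positive children, the interval spans roughly .62 to .98, which is too wide to judge a screener's accuracy; with 200 children, it narrows to roughly .74 to .86.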
Third, researchers and practitioners need to collaborate on investigations that can be field-tested over an extended period of time. Developmental screening tools are widely used in tracking systems for children who perform typically when screened but who have other factors that place them at risk (e.g., low birth weight, abuse/neglect, teen parent, lead exposure) and whose progress a practitioner will need to follow. Yet no studies were found that examined the consistency of screening tools used to track children over an extended period of time. Longitudinal test–retest reliability studies could track children over time to refine the scale, scoring options, needed accommodations, use of the tool within a Response to Intervention model, and more.
One type of concurrent validity study in this review examined screening tools paired with academic screening tools for either reading or mathematics (Schellinger, Beer, & Beer, 1992; VanDerHeyden et al., 2004; VanDerHeyden et al., 2006), a pairing that could be used in future studies with a Response to Intervention approach. Repeated measures designs are likely to help us understand how screening tools perform in a tracking and progress-monitoring system. A couple of utility studies investigated the effects of professional development and training (Nicol, 2006; Thompson et al., 2010). New avenues for professional and paraprofessional training could focus on implementing research-based screening practices and measures.
Last, practitioners need to consider proper placement of cutoff scores (e.g., the number of standard deviations below the mean) to calibrate a tool for accurate performance. Screening tools with low specificity produce a large number of false positives and can result in overreferrals for diagnostic testing (Camp, 2006; Glascoe, 2001). The opposite problem, underreferral, results in failing to refer a child who is at risk or delayed. It was evident in this synthesis that sensitivity and specificity results varied drastically depending on where the researchers placed cutoff scores for their study (Glascoe, 1997b).
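The tradeoff described above can be seen directly by sweeping a cutoff across a hypothetical score distribution: a more lenient cutoff catches more children with delays (higher sensitivity) but refers more typically developing children (lower specificity). The scores and delay statuses below are invented for illustration.

```python
# Hypothetical total scores (lower = more concern) and reference delay status (True = confirmed delay).
scores = [22, 25, 28, 30, 31, 33, 35, 36, 38, 40, 42, 44, 45, 47, 50, 52]
delayed = [True, True, True, False, True, False, True, False,
           False, False, False, False, False, False, False, False]

for cutoff in (27, 32, 37):  # refer any child scoring at or below the cutoff
    referred = [score <= cutoff for score in scores]
    tp = sum(r and d for r, d in zip(referred, delayed))              # delays correctly referred
    fn = sum((not r) and d for r, d in zip(referred, delayed))        # delays missed
    tn = sum((not r) and (not d) for r, d in zip(referred, delayed))  # typical children not referred
    fp = sum(r and (not d) for r, d in zip(referred, delayed))        # typical children overreferred
    print(f"Cutoff {cutoff}: sensitivity = {tp / (tp + fn):.2f}, "
          f"specificity = {tn / (tn + fp):.2f}")
```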
CONCLUSION
Although the evidence base has come a long way, there are still gaps in what we know about screening measures (Marks, Glascoe, & Macias, 2011). The effects of using tools for unintended purposes are unclear. For instance, screening measures are used in an early detection system to determine whether a more comprehensive assessment is needed; however, the effects are unknown when professionals use screening tools beyond their scope to diagnose a disability, determine eligibility for services, make treatment decisions, and/or monitor child progress toward Individualized Education Program goals/objectives. What is the impact when screenings are used in unintended ways?
There is scant research on different procedures for screening, such as differences among parent-completed tools, direct-test tools, and observation tools. Under what conditions does one screening procedure provide optimal results for different children and their families? For instance, parent-report screening tools may perform differently with parents who are working with service providers in a social welfare context because their child was removed from the home due to abuse/neglect.
Another unanswered question in the literature is how screening aligns with other service delivery activities. Do certain tools work better when paired, such as a specific screening measure with a diagnostic or programmatic measure? Some tools had more, and/or more robust, research; does that mean those tools are better, or simply different? Why do some tools have more research? Is it due to commercial interest, entrenched use of a tool because of when it was published and how many people use it, and/or the resources available to conduct research?
Empirical studies help identify strengths and weaknesses of screening instruments. Services for young children with disabilities have been a legal construct in the United States since 1986 with the passage of Public Law 99-457, and EI practices have changed considerably in 25 years (Bagnato, McLean, Macy, & Neisworth, in press). States and local agencies should be required to justify their screening practices, their selection of developmental screening measures, and their efforts to locate eligible children in Child Find programs. Like service delivery practices, research will continue to grow and develop over time. Research needs to continue so that improvements in screening instruments and practices can be made on the basis of empirical findings (Aylward, 2009; Marks, Hix-Small, Clark, & Newman, 2009). Ongoing research allows continued advances in new and updated editions of screening instruments. Research should drive decisions made at all levels, especially at the practitioner and system levels.
REFERENCES
An asterisk indicates screening measures reported in this review.
Altman D. G. (1991). Some common problems in medical research. In Practical statistics for medical research (pp. 396–438). New York, NY: Chapman and Hall.
American Academy of Pediatrics. (2006). Identifying infants and young children with developmental disorders in the medical home: An algorithm for developmental surveillance and screening. Pediatrics, 118(1), 405–420.
Aylward G. P. (2009). Developmental screening and assessment: What are we thinking? Journal of Developmental and Behavioral Pediatrics, 30(2), 169–175.
Bagnato S. J., Macy M., Salaway J., Lehman C. (2007). Research foundations for conventional tests and testing to ensure accurate and representative early intervention eligibility. Pittsburgh, PA: TRACE Center for Excellence in Early Childhood Assessment, Early Childhood Partnerships, Children's Hospital/University of Pittsburgh; US Department of Education, Office of Special Education Programs, and Orelena Hawks Puckett Institute.
Bagnato S. J., McLean M., Macy M., Neisworth J. (in press). Assessment for inclusive instruction in early childhood intervention: Aligning our professional standards & practice-based evidence. Joint publication. Journal of Early Intervention/Topics in Early Childhood Special Education.
Bagnato S. J., Neisworth J. T., Pretti-Frontczak K. L. (2010). LINKing authentic assessment and early childhood intervention: Best measures for best practices (2nd ed.). Baltimore, MD: Brookes.
Bagnato S. J., Suen H., Brickley D., Smith-Jones J., Dettore E. (2002). Child developmental impact of Pittsburgh's Early Childhood Initiative (ECI) in high-risk communities: First-phase authentic evaluation research. Early Childhood Research Quarterly, 17(4), 559–589.
Bailey D. B. Jr., Blasco P. M. (1990). Parents' perspectives on a written survey of family needs. Journal of Early Intervention, 14, 196–203.
Bailey D. B., Buysse V., Smith T., Elam J. (1992). The effects and perceptions of family involvement in program decisions about family-centered practices. Evaluation and Program Planning, 15(1), 23–32.
*. Bayley N. (2006). Bayley scales of infant and toddler development screening test (3rd ed.). San Antonio, TX: Harcourt Assessment, Inc.
Bernheimer L. P., Keogh B. K. (1995). Weaving interventions into the fabric of everyday life: An approach to family assessment. Topics in Early Childhood Special Education, 15, 415–433.
Bricker D., Squires J. (1989a). The effectiveness of parental screening of at-risk infants: The infant monitoring questionnaires. Topics in Early Childhood Special Education, 9(3), 67–85.
Bricker D., Squires J. (1989b). Low cost system using parents to monitor the development of at-risk infants. Journal of Early Intervention, 13(1), 50–60.
*. Brigance A. H., Glascoe F. P. (2002). Brigance infant and toddler. Billerica, MA: Curriculum Associates.
Bruder M. B. (2000). Family-centered early intervention: Clarifying our values for the new millennium. Topics in Early Childhood Special Education, 20(2), 105–116.
Buysse V., Wesley P. (2006). Making sense of evidence-based practice: Reflections and recommendations. In Buysse V., Wesley P.W. (Eds.), Evidence-based practice in early childhood field (pp. 225–244). Washington, DC: ZERO TO THREE.
Camp B. W. (2006). What the clinician really needs to know: Questioning the clinical usefulness of sensitivity and specificity in studies of screening tests. Journal of Developmental and Behavioral Pediatrics, 27(3), 226–230.
Camp B. W. (2007). Evaluating bias in validity studies of developmental/behavioral screening tests. Journal of Developmental and Behavioral Pediatrics, 28(3), 234–240.
Carey W. B., Crocker A. C., Coleman W. L., Elias E. R., Feldman H. M. (Eds.). (2009). Developmental-behavioral pediatrics (4th ed.). Philadelphia, PA: Elsevier.
Carta J. J. (2002). An early childhood special education research agenda in a culture of accountability for results. Journal of Early Intervention, 25(2), 102–104.
Cox J. E., Huntington N., Saada A., Epee-Bounya A., Shonwald A. D. (2010). Developmental screening and parents' written comments: An added dimension to the parents' evaluation of developmental status questionnaire. Pediatrics, 126, 170–176.
Crais E. R. (1993). Families and professionals as collaborators in assessment. Topics in Language Disorders, 14(1), 29–40.
Diamond K. E., Squires J. K. (1993). The role of parental report in the screening and assessment of young children. Journal of Early Intervention, 17(2), 107–115.
Dinnebeil L. A., Rule S. (1994). Variables that influence collaboration between parents and service coordinators. Journal of Early Intervention, 18, 349–361.
Division for Early Childhood. (2007). Promoting positive outcomes for children with disabilities: Recommendations for curriculum, assessment, and program evaluation. Missoula, MT.
Dunst C. J., Johanson C., Trivette C., Hamby D. (1991). Family-oriented early intervention policies and practices: Family-centered or not? Exceptional Children, 58, 115–126.
Elbaum B., Gattamorta K. A., Penfield R. D. (2010). Evaluation of the Battelle Developmental Inventory, 2nd Edition, Screening Test for use in states' child outcomes measurement systems under the Individuals with Disabilities Education Act. Journal of Early Intervention, 32(4), 255–273.
*. Fewell R., Langley M. B. (1984). Developmental activities screening inventory (DASI-II). Austin, TX: PRO-ED.
*. Frankenburg W. K., Dodds J., Archer P., Bresnick B., Maschka P., Edelman N., Shapiro H. (1992). Denver II training manual. Denver, CO: Denver Developmental Materials.
*. Frankenburg W. K., Dodds J., Archer P., Bresnick B., Maschka P., Edelman N., Shapiro H. (1996). Denver II technical training manual. Denver, CO: Denver Developmental Materials.
*. Glascoe F. P. (1997a). Parents' evaluation of developmental status (PEDS). Ellsworth & Vandermeer Press, Ltd.
Glascoe F. P. (1997b). Parents' concerns about children's development: Prescreening technique or screening test? Pediatrics, 99(4), 522–528.
Glascoe F. P. (2001). Are over-referrals on developmental screening tests really a problem? Pediatrics and Adolescent Medicine, 155(1), 1–10.
Glascoe F. P. (2006). If you don't ask, parents may not tell: Noticing problems vs. expressing concerns. Archives of Pediatric & Adolescent Medicine, 160(2), 220–221.
Glascoe F. P., Byrne K. E. (1993). The usefulness of the Battelle Developmental Inventory Screening Test. Clinical Pediatrics, 32, 273–280.
Grisham-Brown J., Pretti-Frontczak P. (2011). Assessing young children in inclusive settings: The blended practices approach. Baltimore, MD: Brookes.
*. Harrison P. L., Kaufman A. S., Kaufman N. L., Bruininks P. H., Rynders J., Ilmer S., Cicchetti D. V. (1990). AGS early screening profiles. Circle Pines, MN: American Guidance Service.
Hoffman L. W. (1982). Methodological issues in follow-up and replication studies. Journal of Social Issues, 38(1), 53–64.
Holtzman N. A. (2003). Expanding newborn screening: How good is the evidence? Journal of the American Medical Association, 290(19), 2606–2608.
*. Hresko W. P., Miguel S. A., Sherbenou R. J., Burton S. D. (1994). Developmental observation checklist system (DOCS). Austin, TX: PRO-ED.
*. Ireton H. R. (1987). Preschool development inventory manual. Minneapolis, MN: Behavior Science Systems.
*. Ireton H. R. (1988). Early childhood development inventory. Minneapolis, MN: Behavior Science Systems, Inc.
*. Ireton H. R. (1994). Child development review manual. Minneapolis, MN: Behavior Science Systems.
Ireton H., Glascoe F. P. (1995). Assessing children's development using parents' reports: The child development inventory. Clinical Pediatrics, 34, 248–255.
Losardo A., Notari-Syverson A. (2011). Alternative approaches to assessing young children (2nd ed.). Baltimore, MD: Brookes.
Macy M., Bagnato S. (2010). Keeping it “R-E-A-L” with authentic assessment. National Head Start Association Dialog, 13(1), 1–21.
Macy M. G., Bricker D. D., Squires J. K. (2005). Validity and reliability of a curriculum-based assessment approach to determine eligibility for part C services. Journal of Early Intervention, 28(1), 1–16.
*. Mardell-Czudnowski C., Goldenberg D. (1975). Developmental indicators for the assessment of learning (DIAL). Edison, NJ: Childcraft Education Corp.
*. Mardell-Czudnowski C., Goldenberg D. (1998). Developmental indicators for the assessment of learning (3rd ed.) (DIAL-3). Circle Pines, MN: American Guidance Service.
Marks K., Hix-Small H., Clark K., Newman J. (2009). Lowering developmental screening thresholds and raising quality improvement for preterm children. Pediatrics, 123, 1516–1523.
Marks K. P., Glascoe F. P. (2010). Helping parents understand developmental-behavioral screening. Contemporary Pediatrics, 27, 54–61.
Marks K. P., Glascoe F. P., Macias M. M. (2011). Enhancing the algorithm for developmental-behavioral surveillance and screening in children 0-5 years. Clinical Pediatrics, 123, 1516–1523.
McBride S. L., Brotherson M. J., Joanning H., Whiddon D., Demmitt A. (1993). Implementation of family-centered services: Perceptions of family and professionals. Journal of Early Intervention, 17, 414–430.
*. McCarthy D. (1978). McCarthy screening test. New York, NY: Psychological Corp.
McConnell S. R. (2000). Assessment in early intervention and early childhood special education: Building on the past to project into our future. Topics in Early Childhood Special Education, 20, 43–48.
McLean M., Wolery M., Bailey D. B. (Eds.). (2004). Assessing infants and preschoolers with special needs. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.
McLean M. E., Snyder P., Smith B. J., Sandall S. R. (2002). The DEC recommended practices in early intervention/early childhood special education: Social validation. Journal of Early Intervention, 25(2), 120–128.
*. Meisels S. J., Marsden D. B., Wiske M. S., Henderson L. W. (2008). Early screening inventory revised (ESI-R). San Antonio, TX: Pearson/Psychological Corp.
Montgomery M. L., Saylor C. F., Bell N. L., Macias M. M., Charles J. M., Katikaneni L. D. P. (1999). Use of the child development inventory to screen high-risk populations. Clinical Pediatrics, 38(9), 535–539.
National Research Council. (2008). Early childhood assessment: Why, what, and how. Committee on Developmental Outcomes and Assessments for Young Children, Board on Children, Youth, and Families, Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.
National Research Council and Institute of Medicine. (2000). From neurons to neighborhoods: The science of early childhood development. Committee on Integrating the Science of Early Childhood Development, Board on Children, Youth and Families, Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academies Press.
*. Nehring A. D., Nehring E. F., Bruni J. R. Jr., Randolph P. L., Sanford A. R., Preminger J. L. (1997). Learning accomplishment profile diagnostic edition. Lewisville, NC: Kaplan Early Learning Company.
Neisworth J. T., Bagnato S. J. (2005). DEC recommended practices: Assessment. In Sandall S., Hemmeter M. L., Smith B. J., McLean M. E. (Eds.), DEC recommended practices: A comprehensive guide for practical application in early intervention/early childhood special education (pp. 45–69). Longmont, CO: Sopris West.
*. Newborg J. (2005). Battelle developmental inventory examiner's manual (2nd ed). Itasca, IL: Riverside Publishing.
Odom S. L., Brantlinger E., Gersten R., Horner R. H., Thompson B., Harris K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137–148.
Radecki L., Sand-Loud N., O'Connor K. G., Sharp S., Olson S. (2011). Trends in the use of standardized tools for developmental screening in early childhood: 2002–2009. Pediatrics, 32(3), 2010–2180.
Rand Corporation. (2011, April). Saving the government money: Examples from RAND's federally funded research and development centers. Santa Monica, CA: Rand Corporation.
Reynolds C. R. (1978). Teacher-psychologist interscorer reliability of the McCarthy drawing tests. Perceptual and Motor Skills, 47, 538.
Ringwalt S. (2008). Developmental screening and assessment instruments with an emphasis on social and emotional development for young children ages birth through five. Chapel Hill, NC: The University of North Carolina, FPG Child Development Institute, National Early Childhood Technical Assistance Center.
Sand N., Silverstein M., Glascoe F. P., Gupta V. B., Tonniges T. P., O'Connor K. G. (2005). Pediatricians' reported practices regarding developmental screening: Do guidelines work? Do they help? Pediatrics, 116(1), 174–179.
Schiller E. P., Malouf D. B., Danielson L. S. (1995). Research utilization: A federal perspective. Remedial and Special Education, 16(6), 372–375.
Sexton D., Thompson B., Perez J., Rheams T. (1990). Maternal versus professional estimates of developmental status of young children with handicaps: An ecological approach. Topics in Early Childhood Special Education, 10(3), 80–95.
Shevell M. (2010). Two developmental screening tests may identify different groups of children. The Journal of Pediatrics, 156(3), 508.
Sices L., Stancin T., Kirchner H. L., Bauchner H. (2009). PEDS and ASQ developmental screening tests may not identify the same children. Pediatrics, 124(4), e640–e647.
Smith B. J., Strain P. S., Snyder P., Sandall S. R., McLean M. E., Ramsey A. B., Sumi W. C. (2002). DEC recommended practices: A review of 9 years of EI/ECSE research literature. Journal of Early Intervention, 25(2), 108–119.
Snyder P. (2006). Best available research evidence: Impact on research in early childhood. In Buysse V., Wesley P. W. (Eds.), Evidence-based practice in early childhood field (pp. 35–70). Washington, DC: Zero to Three.
Snyder P., Lawson S., Thompson B., Stricklin S. (1993). Evaluating the psychometric integrity of instruments used in early intervention research: The Battelle Developmental Inventory. Topics in Early Childhood Special Education, 13(2), 216–232.
Squires J. (1996). Parent-completed developmental questionnaires: A low-cost strategy for child-find and screening. Infants & Young Children, 9(1), 16–28.
*. Squires J., Twombley E., Bricker D., Potter L. (2009). Ages and stages questionnaires (ASQ): A parent-completed child monitoring system (3rd ed.). Baltimore, MD: Brookes.
Sturner R., Heller J. H., Funk S. G., Layton T. L. (1993). The Fluharty preschool speech and language screening test: A population-based validation study using sample-independent decision rules. Journal of Speech & Hearing Research, 36(4), 738–745.
Sturner R., Funk S. G., Green J. A. (1996). Preschool speech and language screening: Further validation of the sentence repetition screening test. Journal of Developmental and Behavioral Pediatrics, 17(6), 405–413.
Suen H. K., Logan C. R., Neisworth J. T., Bagnato S. J. (1995). Professional congruence: Is it necessary? Journal of Early Intervention, 19, 243–252.
Thompson B., Diamond K. E., McWilliam R., Snyder P., Snyder S. (2005). Evaluating the quality of evidence from correlational research for evidence-based practice. Exceptional Children, 71(2), 181–194.
VanDerHeyden A. M., Broussard C., Fabre M., Stanley J., Legendre J., Creppell R. (2004). Development and validation of curriculum-based measures of math performance for preschool children. Journal of Early Intervention, 27(1), 27–41.
Voigt R. G., Llorente A. M., Jensen C. L., Fraley J. K., Barbaresi W. J., Heird W. C. (2007). Comparison of the validity of direct pediatric developmental evaluation versus developmental screening by parent report. Clinical Pediatrics, 46(6), 523–529.
Watson K. C., Kiekhefer G. M., Olshansky E. (2006). Striving for therapeutic relationships: Parent-provider communication in the developmental treatment setting. Qualitative Health Research, 16(5), 647–663.
Winton P. J. (2006). The evidence-based practice movement and its effect on knowledge utilization. In Buysse V., Wesley P. W. (Eds.), Evidence-based practice in the early childhood field (pp. 71–116). Washington, DC: Zero to Three.
Appendix: References From Empirical Studies for 14 Developmental Screening Assessments
ASQ (45 STUDIES): AGES & STAGES QUESTIONNAIRES
1. Bian X., Yao G., Squires J., Wei M., Chen C., & Fang B. (2010). Studies of the norm and psychometric properties of ages and stages questionnaires in Shanghai children. Zhonghua Er Ke Za Zhi. Chinese Journal of Pediatrics, 48(7), 492–496.
2. Bornman S., Jevcik R., Romski M., & Pae H. (2010). Successfully translating language and culture when adapting assessment measures. Journal of Policy and Practice in Intellectual Disabilities, 7(2), 110–118.
3. Campos J., Squires J., & Ponte J. (2011). Universal development screening: Preliminary studies in Galicia, Spain. Early Child Development and Care, 181(4), 475–485.
4. Chan B., & Taylor N. (1998). The follow along program cost analysis in southwest Minnesota. Infants & Young Children, 10(4), 71–79.
5. Chiu S., & DiMarco M. (2010). A pilot study comparing two developmental screening tools for use with homeless children. Journal of Pediatric Health Care: Official Publication of National Association of Pediatric Nurse Associates & Practitioners, 24(2), 73–80.
6. Dionne C., Squires J., Leclerc D., Peloquin J., & McKinnon S. (2006). Cross-cultural comparison of a French Canadian and U.S. developmental screening test. Developmental Disabilities Bulletin, 34(1–2), 43–56.
7. Dobrez D., Sasso A. L., Holl J., Shalowitz M., Leon S., & Budetti P. (2001). Estimating the cost of developmental and behavioral screening of preschool children in general pediatric practice. Pediatrics, 108, 913–922.
8. Earls M., & Hay S. (2006). Setting the stage for success: Implementation of developmental and behavioral screening and surveillance in primary care practice. The North Carolina Assuring Better Child Health and Development (ABCD) Project, 118(1), 183–188.
9. Elbers J., Macnab A., McLeod E., & Gagnon F. (2008). The Ages and Stages Questionnaires: Feasibility of use as a screening tool for children in Canada. Canadian Journal of Rural Medicine, 13(1), 9–14.
10. Frisk V., Montgomery L., Boychyn E., Young R., vanRyn E., McLachlan D., & Neufeld J. (2009). Why screening Canadian preschoolers for language delays is more difficult than it should be. Infants and Young Children, 22(4), 290–308.
11. Gollenberg A. L., Lynch C. D., Jackson L. W., McGuinness B. M., & Msall M. E. (2010). Concurrent validity of the parent-completed Ages and Stages Questionnaires, 2nd ed. with the Bayley Scales of Infant Development II in a low-risk sample. Child Care, Health & Development, 36(4), 485–490.
12. Handal A., Lozoff B., Breilh J., & Harlow S. (2007). Effects of community residence on neurobehavioral development in infants and young children in flower-growing region of Ecuador. Environmental Health Perspectives, 115(1), 128–133.
13. Heo K., Squires J., & Yovanoff P. (2008). Cross-cultural adaptation of a preschool screening instrument: Comparison of Korean and U.S. populations. Journal of Intellectual Disability Research, 52, 195–206.
14. Hix-Small H., Marks K., Squires J., & Nickel R. (2007). Implementing developmental screening at 12 and 24 months in a primary care pediatric office. Pediatrics, 120(2), 1–9.
15. Janson H. (2003). Influences on participation rate in a national Norwegian child development screening questionnaire study. Acta Paediatrica, 92(1), 91–96.
16. Janson H., & Squires J. (2004). Parent-completed developmental screening in a Norwegian population sample: A comparison with U.S. normative data. Acta Paediatrica, 93, 1525–1529.
17. Jee S. H., Szilagyi M., Ovenshire C., Norton A., Conn A., Blumkin A., & Szilagyi P. G. (2010). Improved detection of developmental delays among young children in foster care. Pediatrics, 125(2), 282.
18. Kapci E., Kucuker S., & Uslu R. I. (2010). How applicable are "Ages and Stages Questionnaires" for use with Turkish children? Topics in Early Childhood Special Education, 30(3), 176–188.
19. Kerstjens J., Bos A., ten Vergert E., de Meer G., Butcher P., & Reijneveld S. (2009). Support for the global feasibility of the Ages and Stages Questionnaire as developmental screener. Early Human Development, 85(7), 443–447.
20. Kim E. Y., & Sung K. (2007). The ages and stages questionnaire: Screening for developmental delay in the setting of a pediatric outpatient clinic. Korean Journal of Pediatrics, 50(11), 1061–1066.
21. Klamer A., Lando A., Pinborg A., & Greisen G. (2005). Ages & stages questionnaire used to measure cognitive deficit in children born extremely preterm. Acta Paediatrica, 94, 1327–1329.
22. Lando A., Klamer A., Jonsbo J., Weiss J., & Greisen G. (2005, May). Developmental delay at 12 months in children born extremely preterm. Acta Paediatrica, 94, 1604–1607.
23. Limbos M. M., & Joyce D. P. (2011). Comparison of the ASQ and PEDS in screening for developmental delay in children presenting for primary care. Journal of Developmental & Behavioral Pediatrics, 32(7), 499–511.
24. Lindsay N., Healy G., Colditz P., & Lingwood B. (2008). Use of the ages & stages questionnaire to predict outcome after hypoxic-ischaemic encephalopathy in the neonate. Journal of Paediatrics and Child Health, 44, 590–595.
25. Marks K., Hix-Small H., Clark K., & Newman J. (2009). Lowering developmental screening thresholds and raising quality improvement for preterm children. Pediatrics, 123, 1516–1523.
26. McCoy S., Bowman A., Smith-Blockley J., Sanders K., Megens A., & Harris S. (2009). Harris Infant Neuromotor Test: Comparison of US and Canadian normative data and examination of concurrent validity with the Ages and Stages Questionnaire. Physical Therapy, 89(2), 173–180.
27. Nicol P. (2006). Using the Ages and Stages Questionnaire to teach medical students developmental assessment: A descriptive analysis. BioMed Central Medical Education, 6, 29. Retrieved from http://biomedcentral.com/1472-6920/6/29
28. O'Connor C., Laszewski A., Hammel J., & Durkin M. S. (2011). Using portable computers in home visits: Effects on programs, data quality, home visitors and caregivers. Children and Youth Services Review, 33(7), 1318–1324.
29. Richter J., & Janson H. (2007). A validation study of the Norwegian version of the Ages and Stages Questionnaires. Acta Paediatrica, 96, 748–752.
30. Rydz D., Srour M., Oskoui M., Marget N., Shiller M., Birnbaum R., ... Shevell M. I. (2006). Screening for developmental delay in the setting of a community pediatric clinic: A prospective assessment of parent-report questionnaires. Pediatrics, 118(4), e1178–e1186.
31. Schonhaut L., Salinas P., Armijo I., Schönstedt M., Álvarez J., & Manríquez M. (2009). Validation of a parent-completed developmental screening test. Revista Chilena de Pediatría, 80(6), 513–519.
32. Shevell M. (2010). Two developmental screening tests may identify different groups of children. The Journal of Pediatrics, 156(3), 580.
33. Sices L., Stancin T., Kirchner H., & Bauchner H. (2009). PEDS and ASQ developmental screening tests may not identify the same children. Pediatrics, 124(4), e640–e647.
34. Skellern C. Y., & O'Callaghan M. (1999, October). Parent-completed questionnaires: An effective screening instrument for developmental delay in follow-up of ex-premature infants. Journal of Pediatrics & Child Health, 35(5), A2.
35. Skellern C. Y., Rogers Y., & O'Callaghan M. (2001). A parent-completed developmental questionnaire: Follow up of ex-premature infants. Journal of Paediatrics & Child Health, 37(2), 125–129.
36. Squires J., Bricker D., & Potter L. (1997, June). Revision of a parent-completed developmental screening tool: Ages and Stages Questionnaires. Journal of Pediatric Psychology, 22(3), 313–328.
37. Squires J., Carter A., & Kaplan P. (2003). Developmental monitoring of children conceived by ICSI and IVF. Fertility and Sterility, 79(2), 453–454.
38. Squires J. K., Carter A., & Kaplan P. F. (2001, September). Developmental monitoring of children conceived by ICSI and IVF. Fertility & Sterility, 76(3) (Suppl. 1), S145–S146.
39. Squires J. K., Kaplan P. F., & Carter A. M. (2000, April). Developmental monitoring of ICSI/IVF offspring. Fertility & Sterility, 73(4) (Suppl. 1), 14S.
40. Squires J., Katzev A., & Jenkins F. (2002, June). Early screening for developmental delays: Use of parent-completed questionnaires in Oregon's Healthy Start Program. Early Child Development and Care, 172(3), 275–282.
41. Squires J., Potter L., Bricker D., & Lamorey S. (1998). Parent-completed developmental questionnaires: Effectiveness with low and middle income parents. Early Childhood Research Quarterly, 13(2), 345–354.
42. Thompson L. A., Tuli S. Y., Saliba H., DiPietro M., & Nackashi J. A. (2010). Improving developmental screening in pediatric resident education. Clinical Pediatrics, 49(8), 737–742.
43. Tsai H. A., McClelland M., Pratt C., & Squires J. (2006). Adaptation of the 36 month Ages and Stages Questionnaire in Taiwan. Journal of Early Intervention, 28(3), 213–225.
44. Yao G., Bian X., Squires J., Wei M., & Song W. (2010). Cutoff scores of the Ages and Stages Questionnaire-Chinese for screening infants and toddlers. Zhonghua Er Ke Za Zhi. Chinese Journal of Pediatrics, 48(11), 824–828.
45. Yu L., Hey E., Doyle L., Farrell B., Spark B., Altman D., & Duley L. (2007). Evaluation of the Ages and Stages Questionnaires in identifying children with neurosensory disability in the Magpie Trial follow-up study. Acta Paediatrica, 96, 1803–1808.
BINS (9 STUDIES): BAYLEY INFANT NEURODEVELOPMENTAL SCREENER
1. Aylward G. (2004). Prediction of function from infancy to early childhood: Implications for pediatric psychology. Journal of Pediatric Psychology, 29(7), 555–564.
2. Aylward G. P., & Verhulst S. J. (2000). Predictive utility of the Bayley Infant Neurodevelopmental Screener (BINS) risk status classifications: Clinical interpretation and application. Developmental Medicine & Child Neurology, 42(1), 25–31.
3. Aylward G. P., & Verhulst S. J. (2008). Comparison of caretaker report and hands-on neurodevelopmental screening in high-risk infants. Developmental Neuropsychology, 33(2), 124–136.
4. Dobrez D., Sasso A. L., Holl J., Shalowitz M., Leon S., & Budetti P. (2001). Estimating the cost of developmental and behavioral screening of preschool children in general pediatric practice. Pediatrics, 108, 913–922.
5. Gücüyener K., Ergenekon E., Soysal A., Aktaş A., Derinöz O., Koç E., & Atalay Y. (2006). Use of the Bayley Infant Neurodevelopmental Screener with premature infants. Brain & Development, 28(2), 104–108.
6. Guedes D. Z., Primi R., & Kopelman B. I. (2011). BINS validation—Bayley neurodevelopmental screener in Brazilian preterm children under risk conditions. Infant Behavior & Development, 34(1), 126–135.
7. Hess C., Papas M., & Black M. (2004). Use of the Bayley Infant Neurodevelopmental Screener with an environmental risk group. Journal of Pediatric Psychology, 29(5), 321–330.
8. Leonard C., Piecuch R., & Cooper B. (2001). Use of the Bayley Infant Neurodevelopmental Screener with low birth weight infants. Journal of Pediatric Psychology, 26(1), 33–40.
9. Macias M. M., Saylor C. F., Greer M. K., Charles J. M., Bell N., & Katikaneni L. D. (1998). Infant screening: The usefulness of the Bayley Infant Neurodevelopmental Screener and the Clinical Adaptive Test/Clinical Linguistic Auditory Milestone Scale. Journal of Developmental and Behavioral Pediatrics, 19(3), 155–161.
BDIST (11 STUDIES): BATTELLE DEVELOPMENTAL INVENTORY SCREENING TEST
1. Elbaum B., Gattamorta K. A., & Penfield R. D. (2010). Evaluation of the Battelle Developmental Inventory, 2nd Edition, Screening Test for use in states' child outcomes measurement systems under the Individuals with Disabilities Education Act. Journal of Early Intervention, 32(4), 255–273.
2. Feldman A., Haley S., & Coryell J. (1990). Concurrent and construct validity of the Pediatric Evaluation of Disability Inventory. Physical Therapy, 70(10), 602–610.
3. Frisk V., Montgomery L., Boychyn E., Young R., vanRyn E., McLachlan D., & Neufeld J. (2009). Why screening Canadian preschoolers for language delays is more difficult than it should be. Infants and Young Children, 22(4), 290–308.
4. Glascoe F. P. (2001). Are overreferrals on developmental screening tests really a problem? Archives of Pediatrics & Adolescent Medicine, 155(1), 54–59.
5. Glascoe F. P., & Byrne K. (1993). The usefulness of the Battelle Developmental Inventory Screening Test. Clinical Pediatrics, 32(5), 273–280.
6. Glascoe F. P., & Byrne K. E. (1993). The accuracy of three developmental screening tests. Journal of Early Intervention, 17(4), 368–379.
7. Glascoe F. P., Martin E. D., & Humphrey S. (1990). Comparative review of developmental screening tests. Pediatrics, 86(4), 547.
8. McLean M., McCormick K., Baird S., & Mayfield P. (1987). Concurrent validity of the Battelle Developmental Inventory Screening Test. Diagnostique, 13(1), 10–20.
9. Mirrett P. L., Bailey D. R., Roberts J. E., & Hatton D. D. (2004). Developmental screening and detection of developmental delays in infants and toddlers with fragile X syndrome. Journal of Developmental and Behavioral Pediatrics, 25(1), 21–27.
10. Ottenbacher K. J., Msall M. E., Lyon N., Duffy L. C., Granger C. V., & Braun S. (1999). Measuring developmental and functional status in children with disabilities. Developmental Medicine & Child Neurology, 41(3), 186–194.
11. Ottenbacher K., Msall M., Lyon N., Duffy L., Ziviani J., Granger C., & Braun S. (2000). Functional assessment and care of children with neurodevelopmental disabilities. American Journal of Physical Medicine & Rehabilitation/Association of Academic Physiatrists, 79(2), 114–123.
BRIGANCE (17 STUDIES)
1. Brulle A. R., & Ivarie J. (1988). Teacher checklists: A reliability analysis. Special Services in the Schools, 5(1–2), 67–75.
2. Campbell E., Schellinger T., & Beer J. (1991). Relationships among the Ready or Not parental checklist for school readiness, the Brigance Kindergarten and First Grade Screen, and SRA scores. Perceptual and Motor Skills, 73(3, Pt. 1), 859–862.
3. Costenbader V., Rohrer A. M., & Difonzo N. (2000). Kindergarten screening: A survey of current practice. Psychology in the Schools, 37(4), 323–332.
4. D'Aprano A., Carapetis J., & Andrews R. (2011). Trial of a developmental screening tool in remote Australian Aboriginal communities: A cautionary tale. Journal of Paediatrics and Child Health, 47(1–2), 12–17.
5. Frisk V., Montgomery L., Boychyn E., Young R., vanRyn E., McLachlan D., & Neufeld J. (2009). Why screening Canadian preschoolers for language delays is more difficult than it should be. Infants and Young Children, 22(4), 290–308.
6. Glascoe F. P. (1996). Can the Brigance screens detect children who are gifted and academically talented? Roeper Review, 19(1), 20–24.
7. Glascoe F. P. (1997). Do the Brigance Screens detect developmental and academic problems? Assessment for Effective Intervention, 22(2), 87–103.
8. Glascoe F. (2001). Are overreferrals on developmental screening tests really a problem? Archives of Pediatrics & Adolescent Medicine, 155(1), 54–59.
9. Glascoe F. (2002). The Brigance Infant and Toddler Screen: Standardization and validation. Journal of Developmental and Behavioral Pediatrics, 23(3), 145–150.
10. Gordon R. R. (1988). Increasing efficiency and effectiveness in predicting second-grade achievement using a kindergarten screening battery. Journal of Educational Research, 81(4), 238–244.
11. Mantzicopoulos P. (1999a). Reliability and validity estimates of the Brigance K & 1 screen based on a sample of disadvantaged preschoolers. Psychology in the Schools, 36(1), 11–19.
12. Mantzicopoulos P. (1999b). Risk assessment of head start children with the Brigance K&1 Screen: Differential performance by sex, age, and predictive accuracy for early school achievement and special education placement. Early Childhood Research Quarterly. Special Issue: Pathways to Child Care Quality, 14(3), 383–408.
13. Mantzicopoulos P. (2000). Can the Brigance K&1 screen detect cognitive/academic giftedness when used with preschoolers from economically disadvantaged backgrounds? Roeper Review, 22(3), 185–191.
14. Mantzicopoulos P., & Maller S. J. (2002). The Brigance K & 1 screen: Factor composition with a Head Start sample. Journal of Psychoeducational Assessment, 20(2), 164–182.
15. VanDerHeyden A. M., Broussard C., Fabre M., Stanley J., Legendre J., & Creppell R. (2004). Development and validation of curriculum-based measures of math performance for preschool children. Journal of Early Intervention, 27(1), 27–41.
16. VanDerHeyden A. M., Broussard C., & Cooley A. (2006). Further development of measures of early math performance for preschoolers. Journal of School Psychology, 44(6), 533–553.
17. Wenner G. (1995). Kindergarten screens as tools for the early identification of children at risk for remediation or grade retention. Psychology in the Schools, 32(4), 249–254.
CDI (26 STUDIES): CHILD DEVELOPMENT INVENTORIES
1. Byrne J. M., Backman J. E., & Bawden H. N. (1995). Minnesota Child Development Inventory: A normative study. Canadian Psychology/Psychologie canadienne, 36(2), 115–130.
2. Chaffee C., Cunningham C. E., Secord-Gilbert M., Elbard H., & Richards J. (1990). Screening effectiveness of the Minnesota Child Development Inventory expressive and receptive language scales: Sensitivity, specificity, and predictive value. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 2(1), 80–85.
3. Colligan R. C. (1977). The Minnesota Child Development Inventory as an aid in the assessment of developmental disability. Journal of Clinical Psychology, 33, 162–163.
4. Creighton D. E., & Sauve R. S. (1988). The Minnesota Infant Development Inventory in the developmental screening of high-risk infants at eight months. Canadian Journal of Behavioural Science/Revue canadienne des sciences du comportement, 20(4), 424–433.
5. Dobrez D., Sasso A. L., Holl J., Shalowitz M., Leon S., & Budetti P. (2001). Estimating the cost of developmental and behavioral screening of preschool children in general pediatric practice. Pediatrics, 108, 913–922.
6. Doig K., Macias M., Saylor C., Craver J., & Ingram P. (1999). The Child Development Inventory: A developmental outcome measure for follow-up of the high-risk infant. The Journal of Pediatrics, 135(3), 358–362.
7. Duyme M., Zorman M., Tervo R., & Capron C. (2011). French norms and validation of the Child Development Inventory (CDI): L'Inventaire du Développement de l'Enfant (IDE). Clinical Pediatrics, 50(7), 636–647.
8. Eisert D. C., Spector S., Shankaran S., Faigenbaum D., & Szego E. (1980). Mothers' reports of their low birth weight infants' subsequent development on the Minnesota Child Development Inventory. Journal of Pediatric Psychology, 5, 353–364.
9. Glascoe F. P. (1996). Can the BRIGANCE® Screens detect children who are gifted and academically talented? Roeper Review, 19(1), 20–24.
10. Glascoe F. P. (1997). Do the Brigance Screens detect developmental and academic problems? Assessment for Effective Intervention, 22(2), 87–103.
11. Gottfried A. W., Guerin D., Spencer J. E., & Meyer C. (1983). Concurrent validity of the Minnesota Child Development Inventory in a nonclinical sample. Journal of Consulting and Clinical Psychology, 51(4), 643–644.
12. Gottfried A. W., Guerin D., Spencer J. E., & Meyer C. (1984). Validity of Minnesota Child Development Inventory in screening young children's developmental status. Journal of Pediatric Psychology, 9, 219–229.
13. Guerin D., & Gottfried A. (1987). Minnesota Child Development Inventories: Predictors of intelligence, achievement, and adaptability. Journal of Pediatric Psychology, 12(4), 595–609.
14. Hopchin M., & Erickson D. (1997). Relationships between the Diagnostic Inventory for Screening Children and the Minnesota Child Development Inventory in an early intervention population. Canadian Journal of Rehabilitation, 10(3), 185–191.
15. Ireton H., et al. (1981). Minnesota Preschool Inventory: Identification of children at risk for kindergarten failure. Psychology in the Schools, 18(4), 394–401.
16. Ireton H., & Glascoe F. P. (1995). Assessing children's development using parents' reports: The Child Development Inventory. Clinical Pediatrics, 34(5), 248–255.
17. Ireton H., & Thwing E. (1976). Appraising the development of a preschool child by means of a standardized report prepared by the mother: The Minnesota Child Development Inventory. Clinical Pediatrics, 15, 875–882.
18. Ireton H., Thwing E., & Currier S. K. (1977). Minnesota Child Development Inventory: Identification of children with developmental disorders. Journal of Pediatric Psychology, 2(1), 18–22.
19. Kenny T. J., Hebel J. R., Sexton M. J., & Fox N. L. (1987). Developmental screening using parent report. Journal of Developmental and Behavioral Pediatrics, 8(1), 8–11.
20. Kopparthi R., McDermott C., Sheftel D. N., & Lenke M. C. (1991). The Minnesota Child Development Inventory: Validity and reliability for assessing development in infancy. Journal of Developmental and Behavioral Pediatrics, 12(4), 217–222.
21. Montgomery M. L., Saylor C. F., Bell N. L., Macias M., Charles J. M., & Katikaneni L. (1999). Use of the Child Development Inventory to screen high-risk populations. Clinical Pediatrics, 38(9), 535–539.
22. Rydz D., Srour M., Oskoui M., Marget N., Shiller M., Birnbaum R., ... Shevell M. I. (2006). Screening for developmental delay in the setting of a community pediatric clinic: A prospective assessment of parent-report questionnaires. Pediatrics, 118(4), e1178–e1186.
23. Saylor C. F., & Brandt B. J. (1986). The Minnesota Child Development Inventory: A valid maternal-report form for assessing development in infancy. Journal of Developmental and Behavioral Pediatrics, 7(5), 308–311.
24. Schraeder B. D. (1993). Assessment of measures to detect preschool academic risk in very-low-birth-weight children. Nursing Research, 42(1), 17–21.
25. Shoemaker O. S., Saylor C. F., & Erickson M. T. (1993). Concurrent validity of the Minnesota Child Development Inventory with high-risk infants. Journal of Pediatric Psychology, 18(3), 377–388.
26. Sturner R. A., Funk S. G., Thomas P. D., & Green J. A. (1982). An adaptation of the Minnesota Child Development Inventory for preschool developmental screening. Journal of Pediatric Psychology, 7, 295–306.
DASI-II (2 STUDIES): DEVELOPMENTAL ACTIVITIES SCREENING INVENTORY
1. Fewell R. R., Langley M. B., & Roll A. (1982). Informant versus direct screening: A preliminary comparative study. Diagnostique, 7(3), 163–167.
2. Rose T. L., Calhoun M. L., & Pendergast D. (1990). Interrater reliability and test-retest stability of the Developmental Activities Screening Inventory-II. Diagnostique, 16(1), 3–9.
DENVER (58 STUDIES)
1. Akaragian S., & Dewa C. (1992). Standardization of the Denver developmental screening test for Armenian children. Journal of Pediatric Nursing, 7(2), 106–109.
2. al-Naquib N., Frankenburg W., Mirza H., Yazdi A., & al-Noori S. (1999). The standardization of the Denver developmental screening test on Arab children from the Middle East and North Africa. Le Journal Médical Libanais. The Lebanese Medical Journal, 47(2), 95–106.
3. Appelbaum A. S. (1978). Validity of the revised Denver developmental screening test for referred and nonreferred samples. Psychological Reports, 43(1), 227–233.
4. Barnes K. E., & Stark A. (1975). The Denver developmental screening test: A normative study. American Journal of Public Health, 65(4), 363–369.
5. Borowitz K. C., & Glascoe F. P. (1986). Sensitivity of the Denver developmental screening test in speech and language screening. Pediatrics, 78(6), 1075–1078.
6. Brachlow A., Jordan A. E., & Tervo R. (2001). Developmental screenings in rural settings: A comparison of the child development review and the Denver II developmental screening test. Journal of Rural Health, 17(3), 156–159.
7. Bryant G. M., Davies K. J., & Newcombe R. G. (1974). The Denver developmental screening test: Achievement of test items in the first year of life by Denver and Cardiff infants. Developmental Medicine & Child Neurology, 16(4), 475–484.
8. Burgess D. B., Asher K. N., Doucet H. J., Reardon K., & Daste M. R. (1984). Parent report as a means of administering the prescreening developmental questionnaire: An evaluation study. Journal of Developmental and Behavioral Pediatrics, 5(4), 195–200.
9. Cadman D., Chambers L. W., Walter S. D., Feldman W., Smith K., & Ferguson R. (1984). The usefulness of the Denver developmental screening test to predict kindergarten problems in a general community population. American Journal of Public Health, 74, 1093–1097.
10. Camp B., van Doorninck W., Frankenburg W., & Lampe J. (1977). Preschool developmental testing in prediction of school problems. Studies of 55 children in Denver. Clinical Pediatrics, 16(3), 257–263.
11. Costenbader V., Rohrer A. M., & Difonzo N. (2000). Kindergarten screening: A survey of current practice. Psychology in the Schools, 37(4), 323–332.
12. da Cunha H., & de Melo A. (2005). Assessment of risk to neuro-psychomotor development: Screening using the Test Denver II and identification of maternal risks. Acta Cirúrgica Brasileira/Sociedade Brasileira Para Desenvolvimento Pesquisa Em Cirurgia, 20, 142–146.
13. Diamond K. E. (1987). Predicting school problems from preschool developmental screening: A four-year follow-up of the revised Denver developmental screening test and the role of parent report. Journal of the Division for Early Childhood, 11(3), 247–253.
14. Diamond K. E. (1990). Effectiveness of the revised Denver developmental screening test in identifying children at risk for learning problems. Journal of Educational Research, 83(3), 152–157.
15. Dobrez D., Sasso A. L., Holl J., Shalowitz M., Leon S., & Budetti P. (2001). Estimating the cost of developmental and behavioral screening of preschool children in general pediatric practice. Pediatrics, 108, 913–922.
16. Drachler M., Marshall T., & de Carvalho Leite J. (2007). A continuous-scale measure of child development for population-based epidemiological surveys: A preliminary study using Item Response Theory for the Denver Test. Paediatric & Perinatal Epidemiology, 21(2), 138–153.
17. Epir S., & Yalaz K. (1984). Urban Turkish children's performance on the Denver developmental screening test. Developmental Medicine & Child Neurology, 26(5), 632–643.
18. Feeney J., & Bernthal J. (1996). The efficiency of the revised Denver developmental screening test as a language screening tool. Language, Speech, and Hearing Services in Schools, 27(4), 330–332.
19. Fewell R. R., Langley M. B., & Roll A. (1982). Informant versus direct screening: A preliminary comparative study. Diagnostique, 7(3), 163–167.
20. Frankenburg W. K., Dodds J., Archer P., Shapiro H., & Bresnick B. (1992). The Denver II: A major revision and re-standardization of the Denver developmental screening test. Pediatrics, 89(1), 91.
21. Frankenburg W. K., Camp B. W., & Van Natta P. A. (1971). Validity of the Denver developmental screening test. Child Development, 42, 475–485.
22. Frankenburg W. K., Ker C. Y., Engelke S., Schaefer E. S., & Thornton S. M. (1988). Validation of key Denver Developmental Screening Test items: A preliminary study. The Journal of Pediatrics, 112(4), 560–566.
23. Frankenburg W. K., van Doorninck W. J., Liddell T. N., & Dick N. P. (1976). The Denver prescreening developmental questionnaire (PDQ). Pediatrics, 57(5), 744.
24. German M. L., Williams E., Herzfeld J., & Marshall R. M. (1982). Utility of the revised Denver developmental screening test and the developmental profile II in identifying preschool children with cognitive, language, and motor problems. Education & Training of the Mentally Retarded, 17(4), 319–324.
25. Glascoe F. P., & Borowitz K. C. (1988). Improving the sensitivity of the language sector of the Denver developmental screening test. Diagnostique, 13(2–4), 76–85.
26. Glascoe F. P., Byrne K. E., Ashford L. G., Johnson K. L., Chang B., & Strickland B. (1992). Accuracy of the Denver-II in developmental screening. Pediatrics, 89(6), 1221.
27. Glascoe F. P. (2001). Are overreferrals on developmental screening tests really a problem? Archives of Pediatrics & Adolescent Medicine, 155(1), 54–59.
28. Glascoe F. P., & Byrne K. E. (1993). The accuracy of three developmental screening tests. Journal of Early Intervention, 17(4), 368–379.
29. Glascoe F. P., Martin E. D., & Humphrey S. (1990). Comparative review of developmental screening tests. Pediatrics, 86(4), 547.
30. Greer S., Bauchner H., & Zuckerman B. (1989). The Denver Developmental Screening Test: How good is its predictive validity? Developmental Medicine & Child Neurology, 31(6), 774–781.
31. Hallioglu O., Topaloglu A., Zenciroglu A., Duzovali O., Yilgor E., & Saribas S. (2001). Denver developmental screening test II for early identification of the infants who will develop major neurological deficit as a sequela of hypoxic-ischemic encephalopathy. Pediatrics International: Official Journal of the Japan Pediatric Society, 43(4), 400–404.
32. Harper D. C., & Wacker D. P. (1983). The efficiency of the Denver developmental screening test with rural disadvantaged preschool children. Journal of Pediatric Psychology, 8(3), 273–283.
33. Howard D. P., & de Salazar M. N. (1984). Language and cultural differences in the administration of the Denver developmental screening test. Child Study Journal, 14(1), 1–9.
34. Jaffe M. M., Harel J., Goldberg A., Rudolph-Schnitzer M., & Winter S. T. (1980). The use of the Denver developmental screening test in infant welfare clinics. Developmental Medicine and Child Neurology, 22(1), 55–60.
35. Kapci E., Kucuker S., & Uslu R. I. (2010). How applicable are “ages and stages questionnaires” for use with Turkish children? Topics in Early Childhood Special Education, 30(3), 176–188.
36. Kerfeld C. I., Guthrie M. R., & Stewart K. B. (1997). Evaluation of the Denver II as applied to Alaska native children. Pediatric Physical Therapy, 23, 23–31.
37. Krohn E. J., & Traxler A. J. (1979). Relationship of the McCarthy scales of children's abilities to other measures of preschool cognitive, motor, and perceptual development. Perceptual and Motor Skills, 49(3), 783–790.
38. Lim H., Ho L., Goh L., Ling S., Heng R., & Po G. (1996). The field testing of Denver Developmental Screening Test Singapore: A Singapore version of Denver II Developmental Screening Test. Annals of the Academy of Medicine, Singapore, 25(2), 200–209.
39. Lichtenstein R. (1981). Comparative validity of two preschool screening tests: Correlational and classification approaches. Journal of Learning Disabilities, 14(2), 68–72.
40. Lindquist G. T. (1982). Preschool screening as a means of predicting later reading achievement. Journal of Learning Disabilities, 15(6), 331–332.
41. Luiz D. M., Foxcroft C. D., & Tukulu A. N. (2004). The Denver II Scales and the Griffiths Scales of Mental Development: A correlational study. Journal of Child and Adolescent Mental Health, 16(2), 77–81.
42. McLean M., McCormick K., Baird S., & Mayfield P. (1987). Concurrent validity of the Battelle Developmental Inventory Screening Test. Diagnostique, 13(1), 10–20.
43. Miller L., & Sprong T. A. (1986). Psychometric and qualitative comparison of four preschool screening instruments. Journal of Learning Disabilities, 19(8), 480–484.
44. Miller V., Onotera R. T., & Deinard A. S. (1984). Denver developmental screening test: Cultural variations in Southeast Asian children. Journal of Pediatrics, 104(3), 481–482.
45. Mirrett P. L., Bailey D. R., Roberts J. E., & Hatton D. D. (2004). Developmental screening and detection of developmental delays in infants and toddlers with fragile X syndrome. Journal of Developmental and Behavioral Pediatrics, 25(1), 21–27.
46. Niparko N. (1982). The effect of prematurity on performance on the Denver developmental screening test. Physical & Occupational Therapy in Pediatrics, 2(1), 29–50.
47. Olade R. A. (1984). Evaluation of the Denver developmental screening test as applied to African children. Nursing Research, 33(4), 204–207.
48. Rosenbaum M. S., Chua-Lim C., Wilhite J., & Mankad V. N. (1983). Applicability of the Denver prescreening developmental questionnaire in a low-income population. Pediatrics, 71(3), 359–363.
49. Sabin J. N. (1978). Analysis of the Denver Developmental Screening Test. Farmworker Journal, 1(1), 39–55.
50. Schloon M., Shelhorn B., & Flehmig I. (1974). Reliability of the Denver development test. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 6(1), 39–50.
51. Sciarillo W. (1986). Effectiveness of the Denver Developmental Screening Test with biologically vulnerable infants. Journal of Developmental and Behavioral Pediatrics, 7(2), 77–83.
52. Shapira Y., & Harel S. (1983). Standardization of the Denver developmental screening test for Israeli children. Israel Journal of Medical Sciences, 19(3), 246–251.
53. Solomons H. C. (1982). Standardization of the Denver developmental screening test on infants from Yucatan, Mexico. International Journal of Rehabilitation Research, 5(2), 179–189.
54. Sturner R. A., Horton M., Funk S. G., Barton J., Frothingham T. E., & Cress J. N. (1982). Adaptations of the Denver developmental screening test: A study of preschool screening. Pediatrics, 69(3), 346.
55. Thompson L. A., Tuli S. Y., Saliba H., DiPietro M., & Nackashi J. A. (2010). Improving developmental screening in pediatric resident education. Clinical Pediatrics, 49(8), 737–742.
56. Ueda R. (1978). Standardization of the Denver developmental screening test on Tokyo children. Developmental Medicine and Child Neurology, 20, 647–656.
57. Ware C. J., Sloss C., Chugh C. S., & Budd K. S. (2002). Adaptations of the Denver II scoring system to assess the developmental status of children with medically complex conditions. Children's Health Care, 31(4), 255–272.
58. Williams P. D., & Williams A. R. (1987). Denver developmental screening test norms: A cross-cultural comparison. Journal of Pediatric Psychology, 12(1), 39–59.
DIAL (15 STUDIES): DEVELOPMENTAL INDICATORS FOR THE ASSESSMENT OF LEARNING
1. Anthony J. L., & Assel M. A. (2007). A first look at the validity of the DIAL-3 Spanish Version. Journal of Psychoeducational Assessment, 25(2), 165–179.
2. Barnett D. W., Faust J., & Sarmir M. A. (1988). A validity study of two preschool screening instruments: The LAP–D and DIAL–R. Contemporary Educational Psychology, 13(1), 26–32.
3. Chen T. H., Wang J. J., Mardell-Czudnowski C., Goldenberg D. S., & Elliott C. (2000). The development of the Spanish version of the developmental indicators for the assessment of learning-third edition (DIAL-3). Journal of Psychoeducational Assessment, 18(4), 316–343.
4. Chew A. L., & Lang W. S. (1993). Concurrent validation and regression line comparison of the Spanish edition of the lollipop test (La Prueba Lollipop) on a bilingual population. Educational and Psychological Measurement, 53(1), 173–182.
5. Costenbader V., Rohrer A. M., & Difonzo N. (2000). Kindergarten screening: A survey of current practice. Psychology in the Schools, 37(4), 323–332.
6. Docherty E. M. (1983). The DIAL: Preschool screening for learning problems. Journal of Special Education, 17(2), 195–202.
7. Glascoe F. P., Martin E. D., & Humphrey S. (1990). Comparative review of developmental screening tests. Pediatrics, 86(4), 547.
8. Heo K., Squires J., & Yovanoff P. (2008). Cross-cultural adaptation of a preschool screening instrument: Comparison of Korean and U.S. populations. Journal of Intellectual Disability Research, 52, 195–206.
9. Lichtenstein R. (1981). Comparative validity of two preschool screening tests: Correlational and classification approaches. Journal of Learning Disabilities, 14(2), 68–72.
10. Mardell-Czudnowski C., Dionne-Simard, & Oullet-Mayrand C. (1987). The performance of normal French-Canadian preschool children on DIAL-R and the K-ABC. Canadian Journal for Exceptional Children, 3(3), 82–87.
11. Mardell-Czudnowski C., & Goldenberg D. (1984). Revision and restandardization of a preschool screening test: DIAL becomes DIAL-R. Journal of the Division for Early Childhood, 8(2), 149–156.
12. Miller L., & Sprong T. A. (1986). Psychometric and qualitative comparison of four preschool screening instruments. Journal of Learning Disabilities, 19(8), 480–484.
13. Miller L. J., & Sprong T. A. (1987). A comparison of the Miller assessment for preschoolers and developmental indicators for the assessment of learning–revised. Physical & Occupational Therapy in Pediatrics, 7(1), 57–69.
14. Schellinger T., Beer J., & Beer J. (1992). Relationships between scores on the DIAL-R concepts scale and SRA scores. Psychological Reports, 70(1), 271–274.
15. Suen H. K., Mardell-Czudnowski C., & Goldenberg D. S. (1989). Classification reliability of the DIAL-R preschool screening test. Educational & Psychological Measurement, 49(3), 673–680.
DOCS (2 STUDIES): DEVELOPMENTAL OBSERVATION CHECKLIST SYSTEM
1. Bagnato S. J., Suen H. K., Brickley D., Smith-Jones J., & Dettore E. (2002). Child developmental impact of Pittsburgh's early childhood initiative in high-risk communities: First-phase authentic evaluation research. Early Childhood Research Quarterly, 17(4), 559–580.
2. Baird S. M., Campbell D., Ingram R., & Gomez C. (2001). Young children with Cri-du-Chat: Genetic, developmental, and behavioral profiles. Infant-Toddler Intervention, 11(1), 1–14.
ESI (3 STUDIES): EARLY SCREENING INVENTORY
1. Costenbader V., Rohrer A. M., & Difonzo N. (2000). Kindergarten screening: A survey of current practice. Psychology in the Schools, 37(4), 323–332.
2. Henderson L. W., & Meisels S. J. (1994). Parental involvement in the developmental screening of their young children: A multiple-source perspective. Journal of Early Intervention, 18(2), 141–154.
3. Meisels S. J., Henderson L. W., Liaw F., & Browning K. (1993). New evidence for the effectiveness of the Early Screening Inventory. Early Childhood Research Quarterly, 8(3), 327–346.
ESP (7 STUDIES): EARLY SCREENING PROFILES
1. Eno L., & Woehlke P. (1995). Predicting preschool speech/language referral-status with the lollipop test and the cognitive-language profile of the early screening profiles. Perceptual & Motor Skills, 80(3, Pt. 1), 1025–1026.
2. Frisk V., Montgomery L., Boychyn E., Young R., vanRyn E., McLachlan D., & Neufeld J. (2009). Why screening Canadian preschoolers for language delays is more difficult than it should be. Infants and Young Children, 22(4), 290–308.
3. Ittenbach R. F., & Harrison P. L. (1990). Race, gender, and maternal education differences on three measures of the early screening profiles. Educational and Psychological Measurement, 50(4), 931–942.
4. Lenkarski S., Singer M., Peters M., & McIntosh D. (2001). Utility of the early screening profiles in identifying preschoolers at risk for cognitive delays. Psychology in the Schools, 38(1), 17–24.
5. McIntosh D. E., Gibney L., Quinn K., & Kundert D. (2000). Concurrent validity of the early screening profiles and the differential ability scales with an at-risk preschool sample. Psychology in the Schools, 37(3), 201–207.
6. Reeves L. (1997). Construct validity of the motor profile with preschool children with speech-language delays: Component of the early screening profiles. Perceptual and Motor Skills, 85(1), 335–343.
7. Serna L., Lambros K., Nielsen E., & Forness S. R. (2002). Head Start children at risk for emotional or behavioral disorders: Behavior profiles and clinical implications of a primary prevention program. Behavioral Disorders, 27(2), 137–141.
LAP-D (7 STUDIES): LEARNING ACCOMPLISHMENT PROFILE DIAGNOSTIC
1. Barnett D. W., Faust J., & Sarmir M. A. (1988). A validity study of two preschool screening instruments: The LAP–D and DIAL–R. Contemporary Educational Psychology, 13(1), 26–32.
2. Long C. E., Blackman J. A., Farrell W. J., Smolkin M. E., & Conaway M. R. (2005). A comparison of developmental versus functional assessment in the rehabilitation of young children. Pediatric Rehabilitation, 8(2), 156–161.
3. Macmann G. M., & Barnett D. W. (1984). An analysis of the construct validity of two measures of adaptive behavior. Journal of Psychoeducational Assessment, 2(3), 239–247.
4. Poth R. L., & Barnett D. W. (1988). Establishing the limits of interpretive confidence: A validity study of two preschool developmental scales. School Psychology Review, 17(2), 322–330.
5. Sexton D. (1983). Multisource assessment of young handicapped children: A comparison of a diagnostician, teachers, mothers, and fathers. Diagnostique, 9(1), 3–11.
6. Sexton D., Hall J., & Thomas P. J. (1984). Multisource assessment of young handicapped children: A comparison. Exceptional Children, 50(6), 556–558.
7. Sexton D., Miller J., & Murdock J. (1984). Correlates of parental-professional congruency scores in the assessment of young handicapped children. Journal of the Division for Early Childhood, 8(2), 99–106.
MCCARTHY SCREENING TEST (40 STUDIES)
1. Aylward G. (2004). Prediction of function from infancy to early childhood: Implications for pediatric psychology. Journal of Pediatric Psychology, 29(7), 555–564.
2. Aylward G. P., & Verhulst S. J. (2000). Predictive utility of the Bayley Infant Neurodevelopmental Screener (BINS) risk status classifications: Clinical interpretation and application. Developmental Medicine & Child Neurology, 42(1), 25–31.
3. Blixt S., & Kitson D. L. (1982). Factor structure of the McCarthy Screening Test. Psychology in the Schools, 19(1), 33–38.
4. Bondy A. S., Sheslow D., Norcross J. C., & Constantino R. (1982). Comparison of Slosson and McCarthy scales for minority pre-school children. Perceptual and Motor Skills, 54(2), 356–358.
5. Bondy A. S., Constantino R., Norcross J. C., & Sheslow D. (1984). Comparison of Slosson and McCarthy Scales for exceptional preschool children. Perceptual and Motor Skills, 59(2), 657–658.
6. Chang S., & Bashaw W. L. (1984). The reliability of the McCarthy screening test from the criterion-referenced testing perspective. Journal of Clinical Psychology, 40(3), 791–800.
7. Cronin M. E., Arvin I., & Brown L. (1983). The predictive validity of McCarthy screening test and SEARCH. Diagnostique, 8(4), 244–255.
8. Eisert D. C., Spector S., Shankaran S., Faigenbaum D., & Szego E. (1980). Mothers' reports of their low birth weight infants' subsequent development on the Minnesota Child Development Inventory. Journal of Pediatric Psychology, 5, 353–364.
9. Ferrari M. (1980). Comparisons of the Peabody picture vocabulary test and the McCarthy scales of children's abilities with a sample of autistic children. Psychology in the Schools, 17(4), 466–469.
10. Foxcroft C. D. (1997). Note on reliability and validity of the school-entry group screening measure. Perceptual and Motor Skills, 85(1), 161–162.
11. Gerken K., Hancock K. A., & Wade T. H. (1978). A comparison of the Stanford-Binet intelligence scale and the McCarthy scales of children's abilities with preschool children. Psychology in the Schools, 15, 468–472.
12. Gómez-Benito J., & Forns-Santacana M. (1993). Concurrent validity between the Columbia Mental Maturity Scale and the McCarthy scales. Perceptual & Motor Skills, 76(3, Pt. 2), 1177–1178.
13. Gottfried A. W., Guerin D., Spencer J. E., & Meyer C. (1983). Concurrent validity of the Minnesota Child Development Inventory in a nonclinical sample. Journal of Consulting and Clinical Psychology, 51(4), 643–644.
14. Gottfried A. W., Guerin D., Spencer J. E., & Meyer C. (1984). Validity of Minnesota Child Development Inventory in screening young children's developmental status. Journal of Pediatric Psychology, 9, 219–229.
15. Gullo D. F., & McLoughlin C. S. (1982). Comparison of scores for normal preschool children on Peabody Picture Vocabulary Test-Revised and McCarthy Scales of Children's Abilities. Psychological Reports, 51(2), 623–626.
16. Gullo D. F., Clements D. H., & Robertson L. (1984). Prediction of academic achievement with the McCarthy screening test and metropolitan readiness test. Psychology in the Schools, 21(2), 264–269.
17. Harrington R. G., & Jennings V. (1986). A comparison of three short forms of the McCarthy scales of children's abilities. Contemporary Educational Psychology, 11(2), 109–116.
18. Kaufman A. S. (1975). Factor structure of the McCarthy Scales at five age levels between 2 and 8. Educational & Psychological Measurement, 35, 641–656.
19. Kaufman A. S. (1977). A McCarthy short form for rapid screening of preschool, kindergarten, and first-grade children. Contemporary Educational Psychology, 2(2), 149–157.
20. Kaufman A. S., & Kaufman N. L. (1973a). Sex differences on the McCarthy scales of children's abilities. Journal of Clinical Psychology, 29, 362–365.
21. Kaufman A. S., & Kaufman N. L. (1973b). Black-white differences at ages 2-8 on the McCarthy Scales of Children's Abilities. Journal of School Psychology, 11, 194–204.
22. Kenny T. J., Hebel J. R., Sexton M. J., & Fox N. L. (1987). Developmental screening using parent report. Journal of Developmental and Behavioral Pediatrics, 8(1), 8–11.
23. Krohn E. J., & Traxler A. J. (1979). Relationship of the McCarthy scales of children's abilities to other measures of preschool cognitive, motor, and perceptual development. Perceptual and Motor Skills, 49(3), 783–790.
24. Mishra S. P. (1981). Factor analysis of the McCarthy scales for groups of white and Mexican-American children. Journal of School Psychology, 19(2), 178–182.
25. Moore C. L., & Burns W. J. (1977). Brief screening for developmentally delayed preschoolers. Perceptual and Motor Skills, 45(3, Pt. 2), 1169–1170.
26. Piersel W. C., & Santos L. (1982). Comparison of McCarthy and Goodenough-Harris scoring systems for kindergarten children's human figure drawings. Perceptual & Motor Skills, 55(2), 633–634.
27. Prasse D. P., Siewert J. C., & Ellison P. H. (1983). McCarthy performance and neurological functioning in children born 'at risk'. Journal of Psychoeducational Assessment, 1(3), 273–283.
28. Reynolds C. R. (1979). Objectivity of scoring for the McCarthy drawing tests. Psychology in the Schools, 16(3), 367–368.
29. Reynolds C. R. (1978). Teacher-psychologist interscorer reliability of the McCarthy drawing tests. Perceptual and Motor Skills, 47, 538.
30. Sattler J. M., & Altes L. M. (1984). Performance of bilingual and monolingual Hispanic children on the Peabody Picture Vocabulary Test-Revised and the McCarthy perceptual performance scale. Psychology in the Schools, 21(3), 313–316.
31. Stone B. J., & Gridley B. E. (1991). Test bias of a kindergarten screening battery: Predicting achievement for White and Native American elementary students. School Psychology Review, 20(1), 132–139.
32. Taylor R. L., & Ivimey J. K. (1980). Predicting academic achievement: Preliminary analysis of the McCarthy Scales. Psychological Reports, 46(3, Pt. 2), 1232.
33. Teeter P. (1984). Cross-validation of the factor structure of the McCarthy scales for kindergarten children. Psychology in the Schools, 21(2), 158–164.
34. Umansky W., & Cohen L. R. (1980). Race and sex differences on the McCarthy screening test. Psychology in the Schools, 17(3), 400–404.
35. Umansky W., Paget K. D., & Cohen L. R. (1981). The test-retest reliability of the McCarthy screening test. Journal of Clinical Psychology, 37(3), 650–654.
36. Valencia R. R. (1984). The McCarthy scales and Kaufman's McCarthy short form correlations with the comprehensive test of basic skills. Psychology in the Schools, 21(2), 141–147.
37. Valencia R. R., & Rankin R. J. (1983). Concurrent validity and reliability of the Kaufman version of the McCarthy scales short form for a sample of Mexican-American children. Educational and Psychological Measurement, 43(3), 915–925.
38. Valencia R. R., & Rankin R. J. (1985). Evidence of content bias on the McCarthy Scales with Mexican American children: Implications for test translation and nonbiased assessment. Journal of Educational Psychology, 77(2), 197–207.
39. Vance B., Blixt S. L., & Kitson D. L. (1982). Factor structure of the McCarthy Screening Test. Psychology in the Schools, 19(1), 33–38.
40. Vance B., Kitson D. L., & Singer M. (1983). Comparison of the Peabody Picture Vocabulary Test-Revised and the McCarthy screening test. Psychology in the Schools, 20(1), 21–24.
PEDS (20 STUDIES): PARENTS' EVALUATION OF DEVELOPMENTAL STATUS
1. Armstrong M. F., & Goldfeld S. (2008). Systems of early detection in Australian communities: The use of a developmental concern questionnaire to link services. Australian Journal of Advanced Nursing, 25(3), 36–42.
2. Brothers K. B., Glascoe F., & Robertshaw N. S. (2008). PEDS: Developmental milestones—An accurate brief tool for surveillance and screening. Clinical Pediatrics, 47(3), 271–279.
3. Campos J., Squires J., & Ponte J. (2011). Universal development screening: Preliminary studies in Galicia, Spain. Early Child Development and Care, 181(4), 475–485.
4. Coghlan D., Kiing J., & Wake M. (2003). Parents' evaluation of developmental status in the Australian day-care setting: Developmental concerns of parents and carers. Journal of Paediatrics and Child Health, 39(1), 49–54.
5. Cox J. E., Huntington N., Saada A., Epee-Bounya A., & Schonwald A. D. (2010). Developmental screening and parents' written comments: An added dimension to the parents' evaluation of developmental status questionnaire. Pediatrics, 126(3), S170–S176.
6. Davies S., & Feeney H. (2009). A pilot of the parents' evaluation of developmental status tool. Community Practitioner: The Journal of the Community Practitioners' & Health Visitors' Association, 82(7), 29–31.
7. Dobrez D., Sasso A. L., Holl J., Shalowitz M., Leon S., & Budetti P. (2001). Estimating the cost of developmental and behavioral screening of preschool children in general pediatric practice. Pediatrics, 108, 913–922.
8. Glascoe F. (2001). Are overreferrals on developmental screening tests really a problem? Archives of Pediatrics & Adolescent Medicine, 155(1), 54–59.
9. Glascoe F. (2003). Parents' evaluation of developmental status: How well do parents' concerns identify children with behavioral and emotional problems? Clinical Pediatrics, 42(2), 133.
10. Glascoe F., Macias M. M., Wegner L. M., & Robertshaw N. S. (2007). Can a broadband developmental-behavioral screening test identify children likely to have autism spectrum disorder? Clinical Pediatrics, 46(9), 801–805.
11. Kosht-Fedyshin M. (2006). Translation of the parents' evaluation of developmental status (PEDS) developmental screening tool for identification of developmental delay in children from birth to five years of age in the Karagwe District of Northwestern Tanzania, East Africa: A pilot study. The Internet Journal of Tropical Medicine, 3(1), 3.
12. Limbos M. M., & Joyce D. P. (2011). Comparison of the ASQ and PEDS in screening for developmental delay in children presenting for primary care. Journal of Developmental & Behavioral Pediatrics, 32(7), 499–511.
13. Malhi P., & Singhi P. (2002). Can parental concerns detect children with behavioral problems? Studia Psychologica, 44(4), 359–365.
14. Pritchard M., Colditz P., & Beller E. (2005). Parents' evaluation of developmental status in children born with a birth weight of 1250 g or less. Journal of Paediatrics & Child Health, 41(4), 191–196.
15. Schonwald A., Huntington N., Chan E., Risko W., & Bridgemohan C. (2009). Routine developmental screening implemented in urban primary care settings: More evidence of feasibility and effectiveness. Pediatrics, 123(2), 660–668.
16. Shevell M. (2010). Two developmental screening tests may identify different groups of children. The Journal of Pediatrics, 156(3), 580.
17. Sices L., Drotar D., Keilman A., Kirchner H., Roberts D., & Stancin T. (2008). Communication about child development during well-child visits: Impact of parents' evaluation of developmental status screener with or without an informational video. Pediatrics, 122(5), e1091–e1099.
18. Sices L., Stancin T., Kirchner H., & Bauchner H. (2009). PEDS and ASQ developmental screening tests may not identify the same children. Pediatrics, 124(4), e640–e647.
19. Thompson L. A., Tuli S. Y., Saliba H., DiPietro M., & Nackashi J. A. (2010). Improving developmental screening in pediatric resident education. Clinical Pediatrics, 49(8), 737–742.
20. Voigt R. G., Johnson S. K., Mellon M. W., Hashikawa A. H., Campeau L. J., Williams A. R., & Juhn Y. J. (2009). Relationship between parenting stress and concerns identified by developmental screening and their effects on parental medical care-seeking behavior. Clinical Pediatrics, 48(4), 362–368.