ASSURING the delivery of high-quality and cost-effective services that produce positive functional outcomes for infants and young children with disabilities and their families is a significant challenge for states when implementing Early Intervention Programs (EIPs) under the Individuals with Disabilities Education Act (IDEA). Ensuring all children and families have access to early intervention services appropriate to meet their needs, regardless of where they reside, is an equally daunting task.
Wide variation exists nationwide in the types and amounts of early intervention services provided to young children with developmental disabilities, both within and between individual states. This variation appears to reflect, in part, a lack of agreement about which clinical assessment and intervention approaches are optimal for young children with specific developmental conditions and raises questions about whether some clinical practices may be more effective than others in achieving desired outcomes for children and their families. In response to this problem, several states, including New York, have initiated efforts and used diverse approaches to develop service guidelines for their EIPs to assist families, service providers, and public officials when making decisions related to identification, assessment, and intervention for eligible children.
In 1996, the New York State Department of Health's Early Intervention Program (NYSDOH-EIP) began a multiyear project to develop evidence-based clinical practice guidelines focused on identification, assessment, and intervention for young children with developmental problems likely to require early intervention services. The overall goal of the clinical practice guidelines project was to improve the quality and consistency of care for young children with developmental problems by providing families, service providers, and public officials with recommendations about best practices based on scientific evidence and expert clinical opinion. Four specific objectives of the guidelines were to improve knowledge about care for young children with developmental problems by providing families, professionals, and government officials with accurate background information and evidence-based recommendations on assessment and intervention; enhance communication among all those involved with early intervention services (parents, professionals, and EIP administrators) when deciding upon assessment and intervention approaches, and in monitoring the child's progress; facilitate program evaluation and quality improvement efforts by defining appropriate outcome measures and quality criteria for early intervention services; and promote research by identifying gaps in current knowledge about the care of young children with developmental disabilities.
The NYSDOH-EIP convened multidisciplinary panels of clinicians and parents, assisted by a research and methodology staff, to develop separate clinical practice guidelines for infants and young children with the following developmental conditions: autism/pervasive developmental disorders (PDDs), communication disorders, Down syndrome, hearing loss, motor disorders, and vision impairment.
All 6 clinical practice guidelines have now been completed. The guidelines on autism/PDD and communication disorders were published in 1999 and have been widely disseminated to public early intervention officials, service providers, parents, and others associated with the State's EIP (New York State Department of Health [NYSDOH], 1999a, 1999b). The remaining 4 clinical practice guidelines are currently in press. These 6 clinical practice guidelines provide evidence-based consensus recommendations on the best practices for the vast majority of serious developmental problems seen in children from birth to 3 years of age.
This article describes the development of the 6 NYSDOH-EIP clinical practice guidelines, including a detailed description of the guideline methodology and an overview of the types of recommendations included in the first 2 guidelines to be released (on autism/PDD and communication disorders). The potential advantages, limitations, and special considerations for using evidence-based clinical practice guidelines to improve care for young children in need of early intervention services are also discussed.
Role of NYSDOH-EIP in defining the project parameters
When developing clinical practice guidelines, decisions made in the early stages of conceptualization and definition of the project are often crucial determinants of the ultimate usefulness, credibility, longevity, and overall impact of the guidelines. Therefore, the processes and rationale used in setting the parameters for the project are described in detail below.
The NYSDOH-EIP, with input from its EIP advisory groups, specified the topics, target audiences, goals and objectives, methodology, and general format for the guidelines. Within these parameters, distinct consensus panels of clinicians and parents, with assistance from a methodologist and research staff, developed 6 individual clinical practice guidelines, using a rigorous evidence-based approach.
Selecting guideline topics and defining target audiences
The topics for the 6 guidelines were chosen by the NYSDOH-EIP either because the developmental condition was commonly seen among young children receiving EIP services in New York State or because children with the condition frequently required an intensive level of intervention through the EIP. The primary target audiences for the guidelines were (1) parents and families, (2) professionals (including early intervention service providers, primary healthcare providers, and healthcare specialists), and (3) local EIP officials. It was considered important that a single set of guideline documents be developed for each guideline topic that would meet the needs of, and be understandable to, each target audience.
Selecting the guideline methodology
The development of evidence-based recommendations on the care of young children with developmental problems was recognized as a complex and challenging task. It was considered very important to use an established and well-accepted methodology to ensure that the guidelines would have maximum credibility and impact. It was also important to identify a methodology that would be consistent with, and helpful within, the required steps in the EIP process, including the identification, referral, and evaluation of children potentially in need of early intervention services and the development of individualized family service plans (including the strategies and services to be provided and outcomes to be achieved for eligible children).
Clinical practice guidelines typically focus either on a specific condition or problem (such as autism/PDDs) or, alternatively, on clinical treatments and interventions that may be used to assess or intervene with individuals who have differing conditions or problems. Although both approaches have merit and can be useful to consumers and professionals, a condition-oriented approach was thought to be most consistent with the early intervention process (since this approach to guideline development focuses on the development of recommendations to assist with the identification, evaluation, and diagnosis of a particular problem and interventions that can be effective in improving or ameliorating the problem).
It was also considered essential to use a methodology that would yield recommendations on evaluation and assessment methods that can most accurately assess a child's developmental status and, if appropriate, result in a specific diagnosis. This was considered important for several reasons.
First, a thorough understanding of a child's developmental status and underlying condition(s) affecting a child's development is important to identify appropriate interventions, with respect to both early intervention services and other related services and treatments that may be needed, including healthcare services. The NYSDOH-EIP receives numerous inquiries from local public officials, providers, and parents with respect to the most effective interventions for children with specific developmental problems. While the need for information was perhaps most evident for children with autism/PDDs at the time the project was initiated, requests for information on interventions for children with speech-language delays, Down syndrome, and motor and sensory problems were also frequently received.
Second, many children are referred to the EIP on the basis of the concern of a parent, caregiver, health professional, or other professional about the child's development, without a specific condition or diagnosis. A primary concern of parents upon referral is often to first diagnose, and then understand, their child's developmental problems.
Finally, from a systems perspective, state EIPs must have reliable and accurate information about children's developmental status and/or the existence of a diagnosed condition with a high probability of developmental delay to determine the eligibility of individual children. The accuracy of data on reasons for eligibility is also critical for states when assessing the effectiveness of child find efforts (including the extent to which a state is under- or overserving children in this age group), and for monitoring and operational purposes.
After reviewing several possible approaches, the NYSDOH-EIP selected the methodology used by the US Agency for Health Care Policy and Research (AHCPR) in developing 19 evidence-based clinical practice guidelines, which were released from 1992 to 1996. This agency, which has been renamed the US Agency for Healthcare Research and Quality (AHRQ), is part of the US Public Health Service and is the primary federal agency involved with health services research.
The AHRQ clinical practice guideline methodology was derived from the work of many experts in health services research and incorporated the principles for developing high-quality practice guidelines recommended by the Institute of Medicine (IOM, 1992). The methodology is very similar to that of the US Preventive Services Task Force and in fact was developed by some of the same experts who worked on this Task Force. The AHRQ methodology, which is considered by many in health services research to be the standard for developing evidence-based clinical practice guidelines, has been described in numerous publications (Eddy & Hasselblad, 1995; Holland, 1995; Schriger, 1995; Shekelle et al., 2001; Wolf, 1991, 1995).
Description of the multidisciplinary guideline panels, project staff, and consultants
A defining feature of the AHRQ methodology is the use of a multidisciplinary panel to review all available scientific evidence on the guideline topic and then develop consensus recommendations on the basis of this evidence. Each NYSDOH-EIP guideline was developed by a separate multidisciplinary panel that included topic experts, generalist clinicians, and parents. As a first step in forming the panels, a chair was selected for each panel; the chair worked closely with project staff on all aspects of guideline development and chaired all panel meetings.
The NYSDOH-EIP sought the advice of its advisory committee to identify the types and number of professional disciplines that typically provide assessment and intervention services to children with the conditions addressed by the guidelines and that should therefore be represented on each guideline panel. Panel members were selected from individuals who responded to announcements about the guideline initiative, and a standardized instrument was used to rate their experience with young children with the developmental conditions to be covered by the guideline. Each panel also included several parents who had a child with the developmental problem addressed by that guideline. Table 1 presents the specific composition of each guideline panel.
Project staff included a project director, a methodologist (J.P.H.), research staff (including graduate assistants), facilitators, and medical writers. The project director, methodologist, and senior research associate all had substantial experience with the AHRQ methodology. The project director had served in a similar capacity for a number of guidelines produced by the AHRQ. The project methodologist was a physician with a master's degree in public health who had served as the methodologist for the AHRQ clinical practice guideline on low back pain and is widely published in guideline methodology. The senior research associate had completed graduate work in developmental psychology and had served on research teams for AHRQ guideline panels.
An international expert in developmental psychology and early intervention research served as a consultant on all 6 practice guidelines. The role of this consultant was to assist the project team in adapting the AHRQ methodology to early intervention topics. His participation, particularly on methodological issues, related developmental and disabilities research, and criteria for the selection of evidence, was invaluable in ensuring that the topics addressed by each guideline, and the literature reviewed and used as evidence, were relevant to the field of early intervention.
In addition, an internationally recognized expert on communication disorders and language development served as a consultant for the guideline on communication disorders. She provided valuable input on all aspects of the guideline and, in particular, offered insight that assisted the panel in developing an integrated strategy for assessment and intervention that served as a foundation for specific guideline recommendations.
Description of the multidisciplinary panel process
Each panel developed its guideline in a series of 5 or more multiday panel meetings spread over a 10- to 12-month period. Because of the panels' multidisciplinary composition, not all members had strong backgrounds in evaluation methodology or research design. At the first meeting of each panel, the methodologist and staff conducted an “in-service” training on clinical research methodologies for group design, statistical analysis, and single-subject research methodology. In addition to the methodologist and staff, the training team also typically included panel members with expertise in research methods (eg, in single-subject research methodology). The training was well received by the panel members and extremely effective in ensuring that all panelists had the basic level of knowledge necessary to evaluate and understand the implications of research. We believe this training was key to enabling full-panel participation in the evidence evaluation process and in the use of evidence to develop and support practice recommendations.
Also at its first meeting, each panel was asked to refine the scope of its guideline within the parameters provided by the NYSDOH-EIP. This involved determining the types and severity of developmental conditions to be addressed by the guideline and the specific assessment and intervention methods to be critically evaluated. The research staff then carried out a systematic literature evaluation on the basis of the scope of the guideline. At subsequent meetings, panel members evaluated evidence found in the literature evaluation and used this information to develop evidence-based guideline recommendations, using explicit clinical decision-making rules. Drafts of each guideline underwent an extensive peer review by 30 to 50 national experts and practicing clinicians. Each panel met and made final changes to the guideline documents after considering the peer reviewers' comments.
Systematic literature evaluation
The systematic literature evaluation for each guideline was completed by the research staff under the direction of the methodologist and panel chair. Abstracts obtained from computerized bibliography searches were all reviewed independently by the panel methodologist, the senior research associate, and in some cases by the panel chair or other panel members with research expertise in specific areas. When an abstract was selected for further review by either the staff or a panelist, the article was obtained for more in-depth screening for quality and applicability to the guideline topic.
A standardized screening form was completed for all selected articles to document whether the study (or studies) described met specific criteria for quality (internal validity) and applicability to the topic (external validity). Each study meeting the criteria for adequate evidence was reviewed in depth and systematically abstracted by the research staff onto an evidence table that included information about the design, subject characteristics, and results of the study. The articles and associated evidence tables were independently reviewed for accuracy and consistency by the panel methodologist and at least 1 panel member who was a content expert on the topic. Copies of the articles and associated evidence tables were also provided to all panel members, who were invited to review the articles and offer suggestions to improve the quality or accuracy of information abstracted onto evidence tables.
Using both the evidence tables and the full-text articles, the guideline panels critically evaluated studies that met the criteria for in-depth review and developed and documented conclusions about each study's strengths and limitations and the applicability of the evidence to the guideline topic. The evidence tables allowed panel members to more easily identify the strengths and limitations of an individual study and facilitated comparisons of quality, clinical applicability, and results across different studies addressing similar topics.
Because of limitations of time and resources, not all clinical questions addressed by a guideline could be evaluated by means of an in-depth literature review. Therefore, for the 6 guidelines, a literature search and an in-depth evaluation of the evidence were not conducted for clinical questions that were (1) not a primary focus of the guidelines, (2) not specific to children with the developmental problem being addressed, (3) generally considered to be noncontroversial, or (4) on issues that were not likely to be the subject of scientific study.
Selection of articles for in-depth review was based on both the quality of the scientific study reported in the article and the clinical applicability of the study's findings to the clinical questions addressed by the guideline. The quality of a study is primarily related to study design factors including controls for bias and confounding factors, methods used for statistical analysis, and statistical power. Our confidence in a study's findings becomes even greater when multiple well-designed studies done by independent researchers find similar results with different samples.
The clinical applicability of a study is the extent to which the study's results would also be expected to occur in the particular clinical situation of interest. The clinical applicability of a study's findings is considered to be higher when the subject characteristics, clinical methods, and clinical setting of the study are similar to the clinical situation of interest. Study quality and clinical applicability are independent of each other, but both are important in determining if a study's findings will be useful in answering a specific clinical question.
A comprehensive literature search was done for each topic that was considered a primary focus of a guideline. Computer bibliographic databases (including CINAHL, ERIC, MEDLINE, and PsycINFO) were systematically searched to identify potentially relevant scientific journal articles and book chapters published since 1980. Journals included in these databases were presumed to be peer reviewed; their peer review status was not independently verified.
There were 3 major categories of articles used by the panels in developing the guidelines: articles containing original evidence; systematic reviews or meta-analyses; and general and review articles without original evidence. Journal articles were considered to contain original evidence if they reported original evidence about assessment or intervention methods or systematically gathered descriptive information (eg, descriptive epidemiology or studies that described the characteristics or behaviors of groups of similar subjects). Books published as serials were considered in this category; however, other books, monographs, dissertations, and other published materials not found in journals were not considered to be primary sources of original evidence. If a systematic review or meta-analysis addressed questions of interest and followed a strict evidence-based approach, it was also considered “original evidence” by the panel (eg, Dawson and Osterling's 1997 review of 8 model intervention programs for children with autism, which appeared in Guralnick's edited volume on the effectiveness of early intervention [1997]).
General and review articles, including book chapters, that did not contain original evidence but were useful in understanding issues of importance to the field or particular opinions were also used by the panels and project staff in development of the guidelines. These materials were often used in the development of background chapters but were not considered evidence for the purposes of guideline recommendations.
Established criteria for study quality and clinical applicability were used to select articles found in the literature search for in-depth review. To be selected for in-depth review, articles had to meet all of the general criteria below, as well as the specific criteria for the type of evidence involved. To meet the general criteria for in-depth review, studies had to (1) be published in English in a peer-reviewed scientific publication; (2) describe a study with original data about the topic of interest (or be a systematic review or synthesis of such data from other studies); (3) describe the study design, subject characteristics, and results adequately; (4) have no significant features likely to systematically bias results; and (5) evaluate infants or young children of appropriate age with the condition addressed by the guideline. To be considered adequate evidence about assessment methods for young children with the developmental condition, articles had to (1) evaluate an assessment method currently available to providers in the United States and provide an adequate description of the assessment method; (2) include at least 10 subjects with the condition and 10 without the condition; (3) measure an outcome relevant to the questions addressed by the guideline; and (4) compare results of the assessment method with an acceptable reference standard and provide enough data to calculate the sensitivity and specificity of the test.
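Criterion (4) above requires that a study report enough data to compute the sensitivity and specificity of an assessment method against a reference standard. As a reminder of that arithmetic, the following is a minimal sketch; the counts are hypothetical and not drawn from any study reviewed for the guidelines.

```python
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Compute sensitivity and specificity from 2x2 screening counts.

    sensitivity = TP / (TP + FN): proportion of children who have the
        condition that the assessment correctly identifies.
    specificity = TN / (TN + FP): proportion of children without the
        condition that the assessment correctly rules out.
    """
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity


# Hypothetical counts for illustration only: 18 of 20 children with the
# condition screen positive; 76 of 80 children without it screen negative.
sens, spec = sensitivity_specificity(true_pos=18, false_neg=2,
                                     true_neg=76, false_pos=4)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# → sensitivity = 0.90, specificity = 0.95
```

A study that reports only overall "percent agreement" with the reference standard, without the underlying counts, would not satisfy this criterion.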
To be considered adequate evidence about intervention methods for young children with the developmental condition, articles had to (1) evaluate an intervention method currently available to providers in the United States (not obsolete or experimental); (2) provide an adequate description of the intervention method, or a reference where such a description can be found; (3) evaluate the efficacy of the intervention using functional outcomes important for the child's overall development (not just parent-related outcomes); (4) use identical outcome assessment methods for all subjects; (5) provide a quantitative description of results with appropriate statistical analysis; and (6) meet all criteria for either (a) or (b) as follows: (a) for group comparison studies, (i) evaluate at least 2 groups receiving different interventions (or intervention group(s) and a control group receiving no intervention), (ii) assign subjects to groups randomly, or using some other method not likely to significantly bias study results, and (iii) report functional outcomes for at least 10 subjects in each group; (b) for single-subject design studies, (i) evaluate at least 3 subjects with the developmental condition who are 48 months of age or younger, or have a mean age of 48 months or younger, and (ii) use an acceptable single-subject design methodology (multiple baseline or ABAB design).
To be considered adequate evidence about developmental characteristics for young children with the developmental condition, articles had to (1) compare developmental characteristics of children with the developmental condition to those of (a) typically developing children or (b) children with general developmental delays (but not children with a specific condition such as autism or Williams syndrome); (2) include data on developmental characteristics of the child (not just parent reaction or behavior) using similar assessment methods for all subjects; and (3) report quantitative data on developmental characteristics separately for each group with appropriate statistical analysis of results.
For each guideline, studies that met the quality and clinical applicability criteria were considered by the panel as providing “adequate evidence” about assessment methods, intervention methods, and/or developmental characteristics for young children with the developmental condition addressed by the guideline. All of the 6 guidelines included evidence from studies of intervention methods that used both group comparison designs and single-subject designs.
Single-subject methodology is an approach that determines the effect, if any, of intervention(s) upon an individual. It is contrasted with group research experimental designs in that the focus is upon the factors affecting a given individual, as opposed to a group average. Single-subject methodology is particularly useful for clinicians wishing to evaluate the efficacy of a specific therapeutic intervention as well as for applied researchers who wish to develop and evaluate the efficacy of specific treatment procedures or methods. It is used in many different disciplines and has its origins in the time series statistical approach and in methodologies that focus on evaluation of learning in individuals.
Single-subject methodology utilizes a process of frequently repeated measurement of important characteristics of the individual in objective terms. The pattern of these measurements is used to ascertain the degree to which a specific intervention has a noncoincidental effect. This is demonstrated through within- and between-series designs, which are based upon principles of replication (demonstrating the efficacy of the intervention several times) via controlled application of the intervention, using appropriate control conditions by which to evaluate the degree of change as compared to preintervention behavior variability. Thus, case studies and anecdotal reports are not examples of single-subject methodology, as they simply present “pre-post” information, which does not meet the research standards presented here.
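The replication logic described above can be illustrated with a small sketch of an ABAB (withdrawal) design. The session data below are hypothetical and serve only to show why repeated measurement across alternating phases, rather than a single pre-post comparison, is what demonstrates a noncoincidental effect.

```python
# Hypothetical ABAB single-subject data: a target behavior is counted in
# each session across alternating baseline (A) and intervention (B) phases.
phases = [
    ("A1", [2, 3, 2, 3, 2]),   # baseline
    ("B1", [6, 7, 8, 7, 8]),   # intervention introduced
    ("A2", [3, 2, 3, 2, 3]),   # intervention withdrawn (return to baseline)
    ("B2", [7, 8, 8, 9, 8]),   # intervention reinstated (replication)
]

means = {name: sum(data) / len(data) for name, data in phases}
for name, data in phases:
    print(f"{name}: mean = {means[name]:.1f}")

# The effect counts as replicated only if the behavior changes each time
# the intervention is applied AND reverts each time it is withdrawn; a
# single "pre-post" comparison cannot distinguish this from coincidence.
replicated = (means["B1"] > means["A1"]
              and means["A2"] < means["B1"]
              and means["B2"] > means["A2"])
print("replicated effect:", replicated)
# → replicated effect: True
```

In practice, single-subject studies are judged by visual analysis of level, trend, and variability across phases as well, not by phase means alone; the comparison here is deliberately simplified.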
Single-subject design studies were an important source of evidence for the guidelines. For example, 19 of 45 studies and 17 of 37 studies for the guidelines on autism/PDDs and communication disorders, respectively, were single-subject design studies.
The Down syndrome guideline used a different focus for the literature evaluation than did the other 5 guidelines. Since the diagnosis of Down syndrome is frequently made at or before birth, the Down syndrome guideline, unlike the other 5, did not discuss screening and early detection of the condition. Instead, a systematic literature evaluation was done to find studies that described both quantitative and qualitative differences in the developmental characteristics of young children with Down syndrome compared to typically developing children. This evidence about developmental characteristics was then used as the basis for many of the recommendations about assessment and intervention methods for children with Down syndrome.
Developing evidence-based recommendations
The panel members used formal decision-making rules to develop evidence-based consensus guideline recommendations based on relevant evidence gleaned from these studies.
Consistent with the evidence-based approach of the project, panels were instructed to make best practice recommendations based on only 2 primary considerations: (1) what is in the best interest of the child and the family and (2) what does the applicable high-quality scientific evidence say about the issue? The panels were asked to develop guideline recommendations for the best practices of care, regardless of how these fit with existing administrative or payment policies.
When scientific evidence was available, the panel gave this the most weight in making guideline recommendations. When adequate scientific evidence was not found, or when the topic was not a focus of the evidence review, the guideline recommendations were developed on the basis of the expert opinion of the panels. All guideline recommendations were the consensus of each panel (ie, all members of a panel had to agree to a recommendation before its inclusion in the final guideline).
Recommendations included in a guideline were given a “strength of evidence” rating, indicated by an alphanumeric code (A, B, C, D1, or D2) placed in brackets immediately after the recommendation. Table 2 presents the strength of evidence ratings used for all the NYSDOH-EIP guidelines. The strength of evidence rating indicates the amount, quality, and clinical applicability of the scientific evidence the panel used as the basis for that specific guideline recommendation. However, it is not an indication of the importance of the recommendation or of its direction (ie, whether it is a recommendation for or against the use of a clinical method). For example, if there was strong evidence that an intervention was effective, a recommendation for the use of that method would receive an [A] evidence rating. Similarly, if strong evidence existed that an intervention was not effective, a recommendation against the use of that method would also receive an [A] evidence rating. This method for rating strength of evidence is very similar to that used in the AHCPR guidelines (Holland, 1995).
Peer review process
Each guideline underwent an extensive peer review by between 50 and 60 national and, in some cases, international topic experts, generalist providers, and parents. Peer reviewers were asked to provide general comments about the final draft guidelines, rate them on usefulness and understandability, and identify any relevant research that may have been missed by the panels, that would either lend additional support or provide evidence to modify or refute the recommendations in the draft guidelines. The peer review comments received on all 6 guidelines were extensive and detailed. All comments from peer reviewers were examined by the panels at the final meeting of each panel and carefully considered in making final revisions to each guideline. In particular, the panels thoroughly discussed peer review comments that identified areas of disagreement with a recommendation or recommendations included in the final draft guidelines and suggested either substantive modification or elimination of a recommendation. Final decisions about whether to modify or eliminate recommendations were made on the basis of the strength of evidence provided by the reviewer(s) to substantiate the comment(s) and with the consensus of the entire panel.
Results of the literature evaluation process
A total of 33,217 article abstracts were reviewed in the process of developing the 6 guidelines, and 4509 potentially relevant articles were retrieved and systematically screened to determine if they met the criteria for in-depth review. For the 6 guidelines combined, only 342 articles met the criteria for in-depth review (an average of 57 per guideline), and these provided the evidence used by the panels to develop evidence-based guideline recommendations. The 342 articles selected for in-depth review represented just 7.6% of all the articles retrieved and systematically screened, and only 1.03% of all the abstracts reviewed. Table 3 lists the number of abstracts screened, articles systematically reviewed, and articles meeting criteria as adequate evidence across all 6 guidelines.
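As a quick arithmetic check, the yield percentages follow directly from the counts reported above; nothing here is new data.

```python
# Literature-screening yield for the 6 guidelines, computed from the
# counts reported in the text.
abstracts_reviewed = 33217   # abstracts from bibliographic database searches
articles_screened = 4509     # articles retrieved for systematic screening
adequate_evidence = 342      # articles meeting criteria for in-depth review

print(f"average per guideline:      {adequate_evidence // 6}")
print(f"share of screened articles: {adequate_evidence / articles_screened:.1%}")
print(f"share of all abstracts:     {adequate_evidence / abstracts_reviewed:.2%}")
# → average per guideline:      57
# → share of screened articles: 7.6%
# → share of all abstracts:     1.03%
```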
This level of yield from the systematic literature evaluation is similar to the experience of other evidence-based guideline projects (Holland, 1995). Completing the literature evaluation for evidence-based practice guidelines involves casting a broad net for all potentially relevant literature and then systematically filtering the articles to select only those studies that are of high quality and applicable to the guideline topic. In such a process, the number of articles meeting the criteria for adequate evidence is commonly only a small proportion of the abstracts reviewed and articles screened.
Each of the guideline panels made a significant effort to identify articles that examined the relationship between the frequency, intensity, and duration of interventions and functional outcomes for the child. However, with the exception of the autism guideline, no articles meeting study quality and clinical applicability criteria were found on this important issue.
Finding high-quality scientific studies that focused on children younger than 3 years presented a particular challenge. For some clinical questions, it was possible to find articles in which all subjects were younger than 3 years, but for other clinical questions the only high-quality studies that could be found included children older than 3 years. For all 6 guidelines, some studies that met the study quality criteria for in-depth review were considered to provide acceptable evidence even though some children older than 3 years were included as subjects. However, when making evidence-based guideline recommendations, the panels generally gave less weight to studies that included older children or broader age ranges than to studies in which all subjects were younger than 3 years.
Types of information included in the guidelines
As part of the guideline development process, members of the guideline panel, advisory groups, and peer reviewers provided suggestions on what information should be included in the guideline documents to make them most useful to the target audiences (parents, professionals, and government officials). Parents expressed a need for accurate background information about developmental problems, an understanding of assessment methods and test results, and an evaluation of the effectiveness of different intervention methods to help them better communicate and participate in joint decision making with professionals and EIP officials involved in their child's care. Discussions with professionals from different disciplines revealed that they often had a limited understanding of each other's roles and that important information was not always shared between professionals. In addition, there was a widely held perception that primary healthcare providers (eg, pediatricians) had a limited understanding about certain developmental problems or about specific assessment and intervention methods.
Overview of the guideline recommendations
In the process of developing the first 2 guidelines, on autism/PDD and communication disorders, a standard presentation format was developed and then used for the subsequent 4 guidelines. Each panel was given access to, and encouraged to review, the work of the other guideline panels. In this way, a number of consensus recommendations on best practices in assessment and intervention for all children with developmental problems emerged across all 6 guidelines. However, each guideline also includes many recommendations about best practices that are unique to the developmental condition it addresses.
Two types of guideline statements were used across all 6 guidelines. Recommendations to use a particular assessment or intervention approach were made when the evidence showed the method was efficacious in producing desired functional outcomes and the potential benefits outweighed the potential harms. Recommendations not to use an assessment or intervention approach were made when no scientific evidence could be found, when the available evidence showed the method was not effective, or when the potential harms outweighed the potential benefits.
As a way of illustrating the guideline approach, we present sample recommendations from the communication disorders and autism/PDD guidelines in Tables 4 and 5. A particular challenge for public officials, early interventionists, and families is deciding when to intervene and when to engage in “watchful waiting” with young children with expressive language delays. The communication disorders guideline recommended an enhanced approach to developmental surveillance, as illustrated in Table 4. When intervening with children with autism/PDD, decisions about the intensity of intervention are often difficult. Table 5 summarizes the autism/PDD guideline's evidence-based recommendations on intensity levels for behavioral interventions using applied behavior analysis techniques.
As mentioned previously, the autism/PDD guideline is the only one for which a panel found sufficient evidence to recommend a specific level of intensity for an intervention. This recommendation has been controversial, and questions have been raised about the extent of the research available to support it. The autism/PDD guideline explains the evidence and the panel's deliberations in detail; it is worth noting, however, that 6 studies completed between 1987 and 1998 were used by the panelists in their deliberations to develop the recommendations included in Table 5.
Level of evidence for major questions addressed by each guideline
Each guideline panel addressed an extensive set of major assessment and intervention questions and made numerous recommendations for each question, which can be classified by the level of evidence available to support them. (A detailed table can be obtained by contacting the first author.) For example, the autism guideline panel made a total of 15 recommendations on early identification of children with autism: 27% had strong supporting evidence (a rating of A); for 7%, evidence was sought and none was found, and the recommendations were made by the panel on a consensus basis (D1); and for 67%, a literature search was not conducted and the recommendations were made by panel consensus (D2). The same panel made 58 recommendations on behavioral and educational approaches to intervention with children with autism: 76% had supporting evidence; for 22%, evidence was sought and none was found, and the recommendations were made by panel consensus; and for 2%, a literature search was not conducted and the recommendations were made by panel consensus.
Across all 6 guidelines, a total of 1834 guideline recommendations (assessment and intervention) were made. Twenty-one percent of these recommendations had supporting evidence, and 66% were based on the consensus of the panel with no literature search conducted. For the remaining 13% of the guideline recommendations, a literature search was conducted and no studies meeting the criteria to serve as evidence were found. Intervention recommendations with supporting evidence predominated in the guidelines on autism (55% of all recommendations), communication disorders (55%), and Down syndrome (68%); assessment recommendations with supporting evidence predominated in the motor disorders (24%) and vision impairment (24%) guidelines; and in the hearing loss guideline, an equal percentage of recommendations was based on evidence for assessment and for intervention (12%).
Format of the final guideline documents
Each of the 6 NYSDOH-EIP guidelines will be produced in 3 versions: (1) the Report of the Recommendations, which is the standard guideline report; (2) an abbreviated Quick Reference Guide; and (3) a more detailed Guideline Technical Report that provides documentation of the entire guideline development process, including a comprehensive bibliography, description of the guideline methodology, and evidence tables for all articles used as evidence for the guideline. All versions of each guideline will contain a complete set of the guideline recommendations, as well as information about New York State's EIP policies and procedures relevant to issues addressed by the guideline. The guideline versions vary only in the level of detail provided about the methodology, systematic literature evaluation results, and the rationale for specific guideline recommendations.
The case for evidence-based guidelines for children with developmental disabilities
All professional disciplines today are being called upon to document the effectiveness of their practices for achieving desired outcomes with scientifically valid evidence. As part of this trend, early interventionists and early childhood practitioners are also being asked to consider the science behind their practice and identify the outcomes being achieved for children and families through their efforts. Families need and want to know which assessment approaches will best identify their child's strengths and needs, and which interventions will be most effective in improving their child's functioning, development, and family life. National education policies such as “No Child Left Behind” have embraced the quest for science-based practices. The scientific basis for the effectiveness of early intervention and special education is emerging as a critical theme under the IDEA.
The development and use of evidence-based clinical practice guidelines have been relatively well established in the healthcare arena. When this project was initiated in 1996, a science-oriented, evidence-based approach using an established methodology had never been applied to the development of clinical practice guidelines for young children with disabilities (although there had been efforts to identify best practices in early intervention services using a consensus, opinion-based approach; see the DEC Task Force Report on Recommended Practices, 1993).
Beginning in the late 1990s and continuing into the new century, work on the development and compilation of evidence-based practice recommendations for identifying, evaluating, and delivering services to young children and to children and youth with disabilities has been increasing. Most notable have been the efforts by the National Academy of Sciences to use a science-based approach to review and report on the science of early childhood development (National Research Council and Institute of Medicine, 2000) and the education of children with autism (National Research Council, 2001). Also noteworthy were the efforts of DEC and collaborating universities (supported by federal funding) to use a more rigorous method to update and produce a set of recommended early intervention practices supported by the research literature (Sandall, McLean, & Smith, 2000; Smith et al., 2002). We view these ventures as part of an important and growing trend toward the use of science to promote evidence-based practices in service delivery to young children with developmental problems.
Finding evidence-based answers to specific questions about assessment and intervention for young children with developmental disabilities is challenging. The quality and quantity of well-designed studies on important questions related to identifying, assessing, and intervening with young children with developmental disabilities are relatively limited. Even well-designed studies rarely offer unambiguous answers to questions of interest, such as the accuracy of screening and diagnostic tools, the effectiveness of particular intervention methods, or the degree to which the intensity of services affects a child's developmental progress and functioning. Careful analysis and considerable judgment are needed when using scientific evidence to develop recommendations for effective practice.
All 6 NYSDOH-EIP guideline panels adhered to relatively rigorous criteria for selecting the studies used to develop recommendations for the clinical questions addressed by each guideline. The searches also identified numerous articles that did not meet the criteria for evidence, including case reports, quasi-experimental studies, and theoretical or opinion articles. Although such articles often provide information that may be useful in clinical practice, they cannot be used to provide scientific support for the efficacy of specific clinical assessment or intervention methods.
In our opinion, the most important elements of the evidence-based approach for NYSDOH-EIP guideline development were (1) the focus on the scientific evidence and on what is best for children and families and (2) the use of a well-established methodology that incorporates an unbiased process by a multidisciplinary panel. These factors allowed guideline panel members to move beyond the potential “professional turf” issues that often become obstacles to reaching consensus in multidisciplinary groups.
Evidence-based clinical practice guidelines can be powerful tools to help parents and professionals make the best possible decisions about early intervention services that both meet a child's individualized needs and are consistent with the priorities, concerns, and resources of the family. We hope that the NYSDOH-EIP clinical practice guidelines will make an important contribution by offering evidence-based recommendations that can be used to facilitate high-quality early intervention service delivery.
We also believe the guidelines have helped identify important gaps in knowledge that must be addressed by research to achieve even better outcomes for infants and young children with developmental disabilities. Research questions and topics that should be of high priority to make the next iteration of these and other existing guidelines more useful include improved methods to evaluate and assess developmental problems in children, particularly methods for differentiating children for whom intervention is essential to developmental progress from children experiencing variations in development that will resolve without intervention; better methods for diagnosing specific conditions (such as autism and cerebral palsy) at the earliest possible age; outcomes research to identify the comparative effects of different intervention settings and methods; and, perhaps most important of all, research on the relative effects of differing duration, frequency, and intensity of interventions on children's developmental progress.