
Early Intervention Services Assessment Scale (EISAS)—Conceptualization and Development of a Program Quality Self-Assessment Instrument

Aytch, Lynette S. PsyD; Castro, Dina C. PhD; Selz-Campbell, Laurie MS



FEDERAL and state appropriations for early intervention services have increased substantially since the enactment of P.L. 99-457 (1986, now Part C, IDEA). The annual appropriation for infants and toddlers with disabilities served under Part C has grown from approximately $50 million in 1987 to about $350 million a decade later (NEC*TAS, 1998). This substantial increase in funding has occurred in the context of relative stability in the number of children served over the same period (NEC*TAS, 1998). Concerns among many state policy makers about the rising cost of early intervention services are often counterbalanced by the perception among many stakeholders that rising expenditures are a positive indicator of increased efforts to meet the needs of infants and toddlers with special needs and their families (Tarr & Barnett, 2001).

As in other arenas of public policy, the rising expenditure of funds to support early intervention services has been accompanied by increasing demands for accountability in terms of the efficacy, quality, and outcomes of services. Funding sources, service agency boards, legislators, consumer groups, and administrators want to ensure that resources are used efficiently and result in desired outcomes. Therefore, information gathered through systematic program monitoring and evaluation needs to address questions such as “What services are provided?,” “Who is providing services?,” “Who is receiving services?,” “What is the quality of services?,” “Are services appropriate to the individual needs of children and families?,” and “What are the outcomes?” Answering these questions is essential to making informed decisions; however, the task of answering them is often complex, tedious, and expensive.

In addition to demands for fiscal accountability, early intervention programs engage in self-evaluation of program policies, procedures, and practices to inform internal decisions about program planning and improvement. Issues such as the extent to which policies and practices are consistent with program philosophy and vision, the effectiveness of intervention services in facilitating desired child and family outcomes, and how stakeholders (ie, parents, staff) perceive program quality are important considerations in program self-assessment. An approach to assessment that addresses these process-related issues (ie, how services are provided, quality of services) jointly with compliance issues (ie, the extent to which programs comply with regulations) provides information that is optimally useful (NEC*TAS, 1998). A challenge faced by many programs is the lack of well-established tools or procedures that are appropriate for comprehensive self-assessment of early intervention program practices.

The early care and education field provides a strong conceptual framework for understanding process variables as they relate to aspects of the early learning environment. Process variables focus on quality of the care environment, such as quality of the teacher-child relationship, peer-peer interactions, and quality of the learning/developmental activities. In contrast to structural features of the care environment (ie, teacher-child ratios, group size, teacher training), process variables are not generally amenable to regulation (Phillips & Howes, 1987). A body of child-care research over the last 20 years, using the environment rating scales (Harms & Clifford, 1980; Harms, Clifford, & Cryer, 1998), has established a strong relationship between childcare quality and child development outcomes (Cost, Quality, and Child Outcomes Study Team, 1995, 1999; Goelman & Pence, 1987; Howes, Phillips, & Whitebook, 1992; Kontos, Howes, Shinn, & Galinsky, 1995).

In contrast to the childcare classroom, early intervention is a complex system of services that involves diverse service settings, the participation of multiple disciplines, coordination of services, collaboration across agencies, and services to a heterogeneous population of children and families. Additionally, early intervention programs reflect broad diversity in design and implementation of intervention models (Guralnick, 1997). Aytch, Cryer, Bailey, and Selz (1999) proposed 4 major characteristics of early intervention practice that present significant methodological challenges to program evaluation, particularly as it relates to assessing quality: (1) early intervention represents a broad range of services, (2) services need to be highly individualized based on the specific needs and priorities of the child and family, (3) services seek to address multiple goals with a variety of anticipated outcomes, and (4) many desired features of service (eg, family-centered practice, quality of relationship between provider and family) are highly subjective. These distinct features of early intervention programs make the task of comprehensive evaluation of program quality from the perspective of multiple consumers and stakeholders a daunting endeavor.

Despite these methodological challenges to evaluation, a tool that could be used broadly by programs to assess quality of early intervention services would make a valuable contribution to the field. In an effort to address this need, the Early Intervention Services Assessment Scale (EISAS)* is being developed. The EISAS is a comprehensive program self-assessment instrument designed to assess the quality of services provided to infants and toddlers with disabilities and their families. The purpose of this article is to provide a detailed description of the early phases of conceptualization, development, and design of the EISAS. This description includes (1) the conceptual framework for the EISAS, (2) the process of extensive constituent input into the development process, (3) the core principles and values of recommended practice that are reflected in the instrument's content, and (4) plans for a field study to examine the feasibility and utility of the EISAS as a quality assessment tool. The objectives of this article are to (1) document the systematic and rigorous process of EISAS development, (2) provide sample items of the initial draft, and (3) disseminate information to the field about the instrument development effort.

Conceptual framework of the EISAS

The EISAS focuses on the assessment of program practices across the major domains of early intervention (eg, assessment, intervention planning, service provision, transition planning, administrative practices). Bronfenbrenner's (1977) ecological model provided a theoretical framework for taking into account the influences of a broad spectrum of contextual variables on child and family outcomes in early intervention. For example, contextual variables such as appropriateness of multidisciplinary assessment (Fewell, 2000; McLean, Bailey, & Wolery, 1996), training and experience of personnel (Winton, 2000), individualized intervention activities (Wolery & Gast, 2000), integrated and transdisciplinary services (McWilliam, 2000), and family-centered services (Bruder, 2000; Dunst, Trivette, & Jodry, 1997) have been identified as crucial aspects of early intervention quality. Although these variables differ in their proximity to the child and family's actual experience in early intervention services, there is general consensus within the field regarding the importance of these dimensions of program practice to effective early intervention (Guralnick, 1997).

Three fundamental principles guided the conceptualization of the EISAS: (1) early intervention is effective in minimizing the adverse effects of developmental disabilities, (2) the quality of early intervention experiences is related to child and family outcomes, and (3) parent participation is a critical component in promoting and sustaining early intervention outcomes. Research and clinical experience provide compelling evidence that early intervention does minimize the adverse effects of risk factors associated with developmental delays or developmental disabilities and is effective in facilitating the achievement of desired developmental outcomes (Dunst, Snyder, & Mankinen, 1989; Guralnick, 1998; Ramey, Campbell, Burchinal, Skinner, Gardner, & Ramey, 2000; Simeonsson, Cooper, & Scheiner, 1982). Despite strong evidence supporting the efficacy of early intervention, research still seeks to identify which specific interventions are most effective for which specific populations and to better understand the relationship between variations in quality and child and family outcomes.

As previously discussed, a substantial body of childcare literature supports the principle that program quality is related to better child outcomes in a variety of developmental domains, including cognition, language, and social skills (Burchinal, Roberts, Nabors, & Bryant, 1996; Cost, Quality, and Child Outcomes Study Team, 1995; Howes et al., 1992; Peisner-Feinberg & Burchinal, 1995; Phillips, McCartney, & Scarr, 1987). Although this research focuses primarily on typically developing preschool age children in center-based programs, it provides an empirical foundation for advancing our understanding of the relationship between the quality of early experiences and developmental outcomes.

Parent participation is an essential component of family-centered early intervention practice. The emphasis on family-centered practice evolved from 3 major developments in early childhood theory over the last 30 years. These included a growing realization that the interactions between parent and child play a critical role in early development (Ainsworth & Bell, 1974; Bradley et al., 1989), recognition that positive intervention outcomes are enhanced when programs made concerted efforts to work with parents (Dunst, Trivette, & Deal, 1994), and federal early intervention legislation that mandates provision of services that support families in caring for a child with special needs and promotes parent participation in decisions regarding services (IDEA, 1997).

The EISAS reflects 6 core values that are generally embraced by the field as essential to early intervention quality: (1) the centrality of the family to promote and sustain optimal child development outcomes, (2) parents as primary decision makers in all aspects of early intervention service, to the extent they desire, (3) provision of services in a manner that respects the beliefs, values, and traditions of families that may be influenced by culture, language, and social and economic status, (4) coordination of services to reduce duplication and eliminate gaps, (5) provision of intervention services that are consistent with professional standards and recommended early intervention practice, and (6) transdisciplinary and interagency collaboration to promote effective and efficient use of resources.

Establishing content validity

In an effort to establish content validity of the EISAS, we engaged in a multiphase process to ensure that the program self-assessment and parent survey reflected relevant empirical and clinical knowledge, recommended practices, and practical experiences in early intervention. The initial phase of EISAS development focused on gathering information from multiple sources to broaden our understanding of quality in early childhood care, education, and intervention. This information gathering process involved extensive review of the literature, input from practitioners and parents through a statewide survey, multiple focus groups, and consultation with a team of technical consultants. The Division of Early Childhood (DEC) Recommended Practices (1993, 2000) provided a roadmap for understanding and defining indicators of quality practice for young children with disabilities and their families.

The survey and focus groups were conducted with early intervention professionals and families in North Carolina. The purpose of these activities was to get input from early intervention stakeholders about how they define quality practices and what they believed to be facilitators and barriers to the provision of high quality services. An ongoing constituent advisory board (eg, parents, program administrators, and service providers) and a team of technical consultants (eg, early childhood university faculty and researchers) worked with the project from its conception. The technical consultants provided extensive input regarding conceptualization of the instrument, methodological challenges in program self-assessment, hierarchical arrangement of indicators along the quality continuum, and organizational structure of the EISAS.

The second phase of work focused on getting detailed feedback from program personnel and families. Early intervention staff and parents from 5 programs across North Carolina and programs in Connecticut and Milwaukee participated in this review process. Many of the participating programs in North Carolina were part of multicounty consortia, which meant that the program provided services to a wide geographical area. The review process required that program staff meet with the research team to thoroughly review and critique the instrument. Program staff were asked to address the following questions and issues: (1) Is the instrument sufficiently comprehensive? (2) Are the most relevant indicators/practices included within each subscale? (3) Does this approach to program assessment seem feasible and useful? (4) What domains and indicators should be added or deleted? and (5) What logistical issues would arise in using the instrument (eg, how would the assessment be conducted, and what commitment of time and resources would be needed)? In addition, each participating program organized a focus group of parents and service providers that was facilitated by members of the research team. The focus groups gave feedback on issues similar to those addressed by program staff. This extensive review process resulted in substantive revisions and refinements to the instrument.

Organizational overview of the EISAS

The format of the EISAS is strongly influenced by the widely used environment rating scales—Infant/Toddler Environment Rating Scale (Harms, Cryer, & Clifford, 1995), Early Childhood Environment Rating Scale—Revised (Harms et al., 1998). Like the environment rating scales, the EISAS is composed of multiple subscales that consist of indicators (ie, descriptions of early intervention practice/activity) that are arranged hierarchically to reflect inadequate to excellent quality practice.

The EISAS is composed of 2 major components: the Program Self-Assessment and the Parent Survey. These 2 components of the instrument were designed to be conceptually congruent. That is, the instrument is designed to assess similar dimensions of early intervention practice from the program and parent perspectives. For example, an indicator to be answered by the program states “family has the opportunity to identify questions and concerns they would like for the assessment process to address.” In the Parent Survey, parents indicate the extent of their agreement with the statement “before an assessment was done, we had a chance to talk about our concerns and needs related to our child.” The purpose of this structure was to provide a mechanism for programs to determine whether their perspective was confirmed or not by parent response. It is important to note that the Parent Survey focuses on family experiences and perceptions of services rather than their satisfaction with services.

This initial version of the EISAS Program Self-Assessment consists of 5 subscales and 17 items. Subscales represent major domains of early intervention practice and are composed of Items. For example, Subscale II—Intervention Planning is composed of 2 items—Family participation in the intervention planning process and The intervention planning process. Items represent subsections in a particular subscale and consist of multiple, hierarchically arranged indicators. See Table 1 for the organizational structure of the program self-assessment. Consistent with the program component, the EISAS Parent Survey consists of 5 sections addressing the same intervention domains. See Table 2 for a list of Parent Survey sections. For samples of the EISAS, see Table 3 for the Program Self-Assessment and Table 4 for the Parent Survey.

Table 1: EISAS program self-assessment
Table 2: EISAS Parent Survey
Table 3: EISAS Program version—Subscale 1: Assessment
Table 4: EISAS Parent Survey—Sample of Section I: Assessment

EISAS scoring

The EISAS uses a continuum of “1” to “7” to rate the quality of practices. Indicators listed under “1” describe practices that are inconsistent with recommended practice and represent Inadequate quality. Indicators listed under “3,” “5,” and “7” are consistent with recommended practice but represent increasingly higher levels of quality; that is, Minimal, Good, and Excellent, respectively. The distinguishing feature of excellent quality is the extent to which programs support and mentor families in their efforts to be active participants and primary decision makers in all aspects of early intervention service. How a program scores on the continuum between “1” and “7” is determined by the number of indicators that are checked “true” or “false.” On the Parent Survey, parents respond to each statement by checking “strongly agree,” “agree,” “disagree,” or “strongly disagree.”
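The article does not specify the final scoring algorithm, but the hierarchical 1-to-7 continuum described above mirrors the stop-rule convention of the environment rating scales on which the EISAS format is modeled. The sketch below illustrates that convention; the function name `score_item` and the half-credit thresholds used for even-numbered scores are assumptions for illustration, not part of the EISAS specification.

```python
# Illustrative stop-rule scoring in the style of the environment rating
# scales (Harms et al., 1998). The half-credit rules for even scores are
# assumptions; the EISAS's scoring rules were still under development.

def score_item(ind1, ind3, ind5, ind7):
    """Return a 1-7 quality score from four lists of True/False indicator
    responses at the Inadequate (1), Minimal (3), Good (5), and
    Excellent (7) levels of one item."""
    if any(ind1):
        # Any inadequate-quality practice present caps the item at 1.
        return 1

    def all_met(level):
        return all(level)

    def half_met(level):
        return sum(level) >= len(level) / 2

    if not all_met(ind3):
        return 2 if half_met(ind3) else 1
    if not all_met(ind5):
        return 4 if half_met(ind5) else 3
    if not all_met(ind7):
        return 6 if half_met(ind7) else 5
    return 7
```

Under this rule, for example, a program that satisfies every Minimal and Good indicator but no Excellent indicators would rate a 5 on that item.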

SUMMARY AND DISCUSSION

The Early Intervention Services Assessment Scale is a comprehensive self-assessment instrument for program assessment of quality practices. The overarching rationale for the development of this instrument was to provide a conceptually sound tool based on recommended practice that can be used broadly by early intervention programs to engage in a structured process of quality self-assessment. Practitioners, parents, administrators, and researchers were represented among the diverse group of constituents who contributed to the conceptualization and development of the instrument.

Conceptually, we believe the EISAS has several strengths that enhance its efficacy as an early intervention quality assessment instrument. First, the EISAS reflects core principles and values of early intervention, including the centrality of families in supporting and sustaining intervention outcomes, parents as primary decision makers, services that are responsive to cultural beliefs, values, and traditions of families, effective coordination of services, and interagency collaboration.

Second, the EISAS is composed of a program component and parent component. These 2 components are conceptually congruent to provide a mechanism for programs to assess the extent to which family perceptions and experiences in early intervention are consistent with the program's assessment of practices. This allows programs to obtain feedback that goes beyond issues such as satisfaction with services and feelings about the parent-professional relationship, to detailed information about family experiences and parent participation in early intervention services.

Finally, the EISAS assessment process requires collaboration and cooperation among service providers, parents, administrators, and agencies involved in the provision of services to children and their families. A collaborative process that engages consumers and practitioners in a systematic and comprehensive process of program review is ideal given the nature and operational structure of the early intervention system of services. We believe the potential benefits of the EISAS are that this process of self-assessment requires cooperation and input across all program participants and yields information that is valuable for program planning and improvement. It is the opinion of the authors that the strengths of the EISAS address the methodological challenges to early intervention program assessment discussed earlier. The comprehensiveness of the EISAS addresses the broad range of services provided to children and their families. Extensive parent input and participation provides data that reflects the individual, subjective experiences of families and insight into the extent to which intervention experiences are consistent with desired goals and outcomes.

In contrast to the strengths, we believe the EISAS has 2 major limitations that potentially compromise its effectiveness and reliability as a program assessment instrument. First, self-assessment as a program evaluation strategy has some substantial limitations, particularly as it relates to reliability. Assessment conducted by objective observers of early intervention services across children and families, settings, and services is difficult and often not practical. Parent and professional perceptions about services are often subjective, highly dependent on personal roles, responsibilities, and experiences. Additionally, the tendency toward inflated assessment due to the subjectivity inherent in participant-oriented approaches presents a significant liability (Worthen, Sanders, & Fitzpatrick, 1997). The reliability of assessment results is substantially enhanced by greater objectivity among assessors and by the extent to which practices can be documented. Therefore, it is essential that judgments about current program practice be documented by policy and procedures, program files, child and family records (ie, IFSP, progress notes, reports, portfolios, etc.), observations, interviews, and family report.

The second limitation is related to the considerable length of the program self-assessment and parent survey. In an effort to be as comprehensive as feasible, the EISAS attempts to address major domains of early intervention at a level of detail that is reflective of program practices and family experiences. Much of the feedback we received from reviewers indicated that completing the initial version of the instrument was cumbersome and redundant in sections. Reviewers also indicated that forcing a designation of whether an indicator was “true” or “false” was unrealistic; the issue for programs tended not to be whether practices were implemented, but the extent to which practices were implemented. Also, there was concern about parents' willingness and motivation to complete the entire survey, particularly if multiple sections were completed at one time.

Despite the limitations associated with program self-assessment, we believe that the EISAS has the potential to make a valuable contribution to the field. At present, there is no widely accepted instrument used by early intervention programs to assess the quality of practices or broad-based consensus on what constitutes high quality practice. One advantage of having a quality assessment measure such as the EISAS is that, at a minimum, it provides an impetus for a dialogue about how we define quality practice and strategies for the assessment of quality. Currently, early intervention programs tend to develop their own tools to assess quality, if quality is assessed at all, which limits our ability to talk about constructs of quality practice across programs, states, the nation, and beyond.

As a result of the extensive constituent input and feedback we received in the development of the initial version of the EISAS, we have confidence in the comprehensiveness and content validity of the tool. However, there is still considerable work that needs to be done to examine the utility and feasibility of this instrument. As a next step in the development process, we plan to conduct a field study of the EISAS with a sample of early intervention programs. Important questions to address in a field study include the following: (1) can EISAS ratings discriminate between variations in program quality? (2) to what extent is there variability in ratings of quality between service providers and parents? and (3) do service providers and parents perceive the EISAS to be a meaningful and useful approach to the assessment of quality practices? The outcomes from a pilot study will provide valuable information and direction for refinement of the EISAS as well as insight into program perceptions of quality practices.

REFERENCES

Ainsworth, M. D., & Bell, S. M. (1974). Mother-infant interaction and the development of competence. In K. Connolly & J. Bruner (Eds.), The growth of competence (pp. 97–118). London: Academic Press.
Aytch, L. S., Cryer, D., Bailey, D., & Selz, L. (1999). Defining and assessing quality in early intervention programs for infants and toddlers with disabilities and their families: Challenges and unresolved issues. Early Education and Development, 10(1), 7–23.
Bradley, R. H., Caldwell, B., Rock, S. L., Ramey, C., Barnard, K., Gray, C., et al. (1989). Home environment and cognitive development in the first 3 years of life: A collaborative study involving six sites and three ethnic groups in North America. Developmental Psychology, 25, 217–235.
Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32, 513–531.
Bruder, M. B. (2000). Family-centered early intervention: Clarifying our values for the new millennium. Topics in Early Childhood Special Education, 20(2), 105–115.
Burchinal, M. R., Roberts, J. E., Nabors, L. A., & Bryant, D. M. (1996). Quality of center child care and infant cognitive and language development. Child Development, 67, 606–620.
Cost, Quality, and Child Outcomes Study Team. (1995). Cost, quality, and child outcomes in child care centers: Executive summary. Denver, CO: Economics Department, University of Colorado at Denver.
Cost, Quality and Child Outcomes Study Team. (1999). The children of the cost, quality, outcomes study go to school: Executive summary. Chapel Hill, NC: Frank Porter Graham Child Development Center.
DEC Task Force on Recommended Practices. (1993). DEC recommended practices: Indicators of quality in programs for infants and young children with special needs and their families. Reston, VA: Council for Exceptional Children.
Dunst, C., Snyder, S. W., & Mankinen, M. (1989). Efficacy of early intervention. In M. C. Wang, M. C. Reynolds, & H. J. Walberg (Eds.), Handbook of special education: Research and practice (Vol. 3). New York: Pergamon Press.
Dunst, C., Trivette, C., & Deal, A. (1994). Supporting and strengthening families: Methods, strategies, and practices. Cambridge, MA: Brookline Books.
Dunst, C. J., Trivette, C. M., & Jodry, W. (1997). Influences of social support on children with disabilities and their families. In M. J. Guralnick (Ed.), The effectiveness of early intervention (pp. 499–522). Baltimore: Brookes.
Education of the Handicapped Act Amendments of 1986, PL 99-457, 20 U.S.C. § 1400 et seq.
Fewell, R. R. (2000). Assessment of young children with special needs: Foundations for tomorrow. Topics in Early Childhood Special Education, 20(1), 38–42.
Goelman, H., & Pence, A. (1987). Effects of childcare, family, and individual characteristics on children's language development: The Victoria day care research project. In D. Phillips (Ed.), Quality in childcare: What does the research tell us? (pp. 89–104). Washington, DC: National Association for the Education of Young Children.
Guralnick, M. J. (1997). The effectiveness of early intervention. Baltimore: Paul H. Brookes Publishing Co.
Guralnick, M. J. (1998). Effectiveness of early intervention for vulnerable children: A developmental perspective. American Journal on Mental Retardation, 102, 319–345.
Harms, T., & Clifford, R. M. (1980). Early childhood environment rating scale. New York: Teachers College Press.
Harms, T., Clifford, R. M., & Cryer, D. (1998). Early childhood environment rating scale (Rev. ed.). New York: Teachers College Press.
Harms, T., Cryer, D., & Clifford, R. M. (1995). Infant/toddler environment rating scale. New York: Teachers College Press.
Howes, C., Phillips, D. A., & Whitebook, M. (1992). Thresholds of quality: Implications for the social development of children in center-based child care. Child Development, 63, 449–460.
Individuals with Disabilities Education Act (IDEA) Amendments of 1997, PL 105-17, 20 U.S.C. § 1400 et seq.
Kontos, S., Howes, C., Shinn, M., & Galinsky, E. (1995). Quality in family childcare and relative care. New York: Teachers College Press.
McLean, M., Bailey, D. B., & Wolery, M. (1996). Assessing infants and preschoolers with special needs (2nd ed.). New Jersey: Prentice-Hall.
McWilliam, R. A. (2000). Recommended practices: Interdisciplinary models. In S. Sandall, M. E. McLean, & B. J. Smith (Eds.), DEC recommended practices in early intervention/early childhood special education (pp. 47–52). Denver, CO: DEC/Sopris West.
National Early Childhood Technical Assistance System (NEC*TAS). (1998). Part H updates: Updates on selected aspects of the program for infants and toddlers with disabilities (Part H) of the Individuals with Disabilities Education Act (IDEA).
National Early Childhood Technical Assistance System (NEC*TAS). (1999). Programs for young children with disabilities under IDEA: Excerpts from the twentieth annual report to Congress on the implementation of the Individuals with Disabilities Education Act by the U.S. Department of Education (1998).
Phillips, D. A., & Howes, C. (1987). Indicators of quality in child care: Review of research. In D. A. Phillips (Ed.), Quality in child care: What does research tell us? (pp. 1–19). Washington, DC: National Association for the Education of Young Children.
Phillips, D., McCartney, K., & Scarr, S. (1987). Child care quality and children's social development. Developmental Psychology, 23, 537–543.
Peisner-Feinberg, E., & Burchinal, M. R. (1995). Child care quality and children's developmental outcomes. In S. Helburn (Ed.), Cost, quality, and outcomes in child care centers: Technical report. Denver, CO: Department of Economics, University of Colorado at Denver.
Ramey, C. T., Campbell, F. A., Burchinal, M., Skinner, M., Gardner, D., & Ramey, S. L. (2000). Persistent effects of early intervention on high-risk children and their mothers. Applied Developmental Sciences, 4, 2–14.
Sandall, S., McLean, M. E., & Smith, B. J. (Eds.). (2000). DEC recommended practices in early intervention/early childhood special education. Denver, CO: DEC/Sopris West.
Simeonsson, R. J., Cooper, D. H., & Scheiner, A. P. (1982). A review and analysis of the effectiveness of early intervention programs. Pediatrics, 69, 635–641.
Tarr, J. E., & Barnett, W. S. (2001). A cost analysis of Part C early intervention services in New Jersey. Journal of Early Intervention, 24(1), 45–54.
Winton, P. (2000). Early childhood intervention personnel preparation: Backward mapping for future planning. Topics in Early Childhood Special Education, 20(2), 87–94.
Wolery, M., & Gast, D. L. (2000). Classroom research for young children with disabilities: Assumptions that guided the conduct of research. Topics in Early Childhood Special Education, 20(1), 49–55.
Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and practical guidelines. New York: Longman Publishers.

*Information about the current status of EISAS development can be obtained from the lead author Lynette S. Aytch.

Keywords:

early intervention; program self-assessment; quality practices

© 2004 Lippincott Williams & Wilkins, Inc.