Original Research/Study

Research Informing Practice in Early Childhood Intervention

How Hard Can It Be?

Kemp, Coral PhD

doi: 10.1097/IYC.0000000000000168


THE INTERNATIONAL COMMUNITY recognizes that early childhood intervention (ECI) addresses the development of infants and young children who are at risk of delay because of environmental disadvantage or biological risk as well as the population of children with established disabilities (Guralnick, 2019). Some children with established disabilities will have delays in one developmental domain (e.g., physical or sensory), whereas others will have delays across multiple domains. Children at biological risk (e.g., children with very low birth weight, children exposed to alcohol or other drugs in utero) or those at environmental risk due to unresponsive parenting, families living in poverty, or families subjected to domestic violence may demonstrate delays in cognitive, language, or social–emotional development at some point before entering the school system. For all, there will be a threat to optimal growth and development. Family, specifically carer–infant and carer–child interaction, is recognized as vitally important as children develop (Bailey, Raspa, & Fox, 2012; Guralnick, 2011; Innocenti, Roggman, & Cook, 2013). In addressing the needs of vulnerable infants and young children, therefore, supporting these interactions is important for all who work in the field of early intervention.

When vulnerable children participate in early childhood settings, early interventionists also strive to assist early childhood educators to use their interactions with children to promote child development. Early childhood education and care (ECEC) settings, such as long day care and preschools, offer important educational opportunities for children at environmental risk of delay in situations in which families may have difficulty in supporting their children's development. For children at biological risk or those with established disabilities, early childhood educators can also support families by providing opportunities for peer interaction and additional opportunities for functional practice of skills. In these settings, early childhood educators promote engagement and participation in activities that have the potential to promote development. Respite for families is also an important function of early childhood services.

In this article, I address the following issues: (a) definitions of evidence-based practice (EBP); (b) the contributions of research, clinical practice, and stakeholder values to EBP; (c) incentives for using research-based practice; and (d) barriers to the effective implementation of research-based practice. An argument for researcher–practitioner partnerships is made and examples of successful partnerships in the Australian context provided.


When discussing EBP, it is important to differentiate two levels of practice: The first includes broad approaches (e.g., family-centered practice, embedded practice, inclusive practice, and response to intervention) and manualized programs. The second includes the very specific interventions and strategies used within those approaches/programs—for example, task analysis, time delay, prompting hierarchies, and reinforcement strategies.

For the broad approaches, although there is evidence for the efficacy of family-centered practice and embedded practice, both are multifaceted and it may be difficult to isolate the components of these practices that make them effective. For example, what one person regards as acceptable family-centered practice might be rejected by another as service directed. There have been discrepancies between family and service responses to surveys relating to the implementation of family-centered practice (e.g., Dempsey & Keen, 2008). Embedded practice requires that the interventions/instructional strategies included in this practice have demonstrated effectiveness (McBride & Schwartz, 2003; Vanderheyden, Snyder, Smith, Sevin, & Longwell, 2005). Response to intervention relies on a high-quality universal program and effective interventions for the children who need more support (Jackson, Pretti-Frontczak, Harjusola-Webb, Grisham-Brown, & Romani, 2009; Spencer, Petersen, Slocum, & Allen, 2015). Inclusion works best for both children with and without disabilities in high-quality early childhood programs (Barton & Smith, 2015; Buysse & Hollingsworth, 2009).


The terms “evidence-based” and “research-based” are often used synonymously; however, sources of evidence other than research also contribute to EBP and are, therefore, considered to be important when selecting interventions. Evidence-based practice has its origins in clinical medicine (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996; Snow, 2019), with the suggestion that each of the following be used to guide the selection of programs/approaches or specific interventions/strategies: (a) research evidence, (b) stakeholder values, and (c) practice evidence.

A range of practices/interventions, some of which may have little or no support in the scientific research literature, has been used over the years in services for infants and young children with delays/disabilities or at risk of delay. Of course, not every practice that we implement in our service system will have strong research supporting it. For example, some practices are implemented because they are linked to cultural differences of children and families or other factors unique to the family's circumstances. Embedded practice, which does have research evidence to support it, is also supported by practice evidence. A program implemented once a week (or even more often) by an early intervention professional in a clinical setting will not be as powerful as the practice of skills through the routines and activities in which the child is engaged on a regular basis within the family unit or early childhood center (distributed, functional practice). Similarly, we believe in family-centered practice, again not just because we have research evidence to support it but because we understand that families will be better able to meet the needs of their vulnerable infants and young children if they have their needs for information, guidance, respite, and overall support met. It is clear, also, that families and carers have a deep knowledge of their children and a vested interest in their long-term growth and development. Those who advocate EBP acknowledge the importance of stakeholder values and practitioner expertise. However, they also believe that, where possible, these two sources are underpinned by research evidence.


Research is one of the most overused and, in many cases, misused terms. For example, the term is frequently used to indicate that more information is needed before a decision can be made (Bair & Enomoto, 2013). It is not unusual for people to search the web, find a book on the topic in the local library, or ask a friend or acquaintance. When it comes to interventions for children with disabilities/delays or at risk of delay, this approach can be problematic, considering that friends and acquaintances are likely to have inadequate knowledge and that much of the information on the Internet is unreliable or incomplete, even information provided on professional web pages (Stephenson, Carter, & Kemp, 2012).

So, does every intervention promoted for vulnerable infants and young children have an acceptable evidence base? Clearly, not every intervention will have substantial research evidence to support it. Some interventions will have been shown to be ineffective or even harmful and are best avoided. Other practices might have been the subject of little or no research. Although they may seem to be harmless, there is harm to children's optimal development if these interventions are used instead of interventions that do have research evidence to support them—especially if the interventions supported by research lead to better child outcomes or if the non–research-based interventions interfere with more functional and inclusive interventions.

Some practices might seem to be effective because there are testimonials supporting them and/or because they appear to be medical or pseudomedical (e.g., some therapies and diets). How can there be such glowing testimonials supporting these dubious practices? If you think that you are getting an effective treatment, you are likely to perceive that it is working for you or, in this case, your child. In these circumstances, the measures of change are likely to be parent reports or provider reports.

When searching for evidence to support an approach that interventionists are prepared to recommend to families, weight should be given to (a) quality of research design; (b) number of studies supporting the approach; (c) generalizability of the evidence; and (d) published research findings that are independent and reported in peer-reviewed journals. To be considered research based, researchers agree that a practice or an intervention should be supported by multiple high-quality experimental or quasi-experimental studies demonstrating that the intervention/approach has a meaningful impact on outcomes for the individual (Cook & Odom, 2013; Slocum et al., 2014). The standards that are applied to the quantity and quality of research needed for a practice or program to qualify as research-based vary for different circumstances (Cook & Cook, 2013). For example, researchers have nominated a level of research support needed to demonstrate the efficacy of a practice or intervention for use with children with autism spectrum disorder (ASD; Johnson, Fleury, Ford, Rudolph, & Young, 2018). They suggest that there should be a minimum of two high-quality experimental or quasi-experimental studies conducted by at least two different research groups. If the research uses a single case methodology, these authors recommend that there be a minimum of five high-quality studies conducted by at least three research groups with a total of 20 or more participants, or at least one high-quality group or quasi-experimental design and at least three high-quality single case studies reported by a minimum of two research groups. Other resources, for example, the What Works Clearinghouse and the Cochrane Library, apply different standards.
When evaluating the evidence for interventions it reviews, the What Works Clearinghouse includes studies that use group designs (both randomized controlled trials and quasi-experimental designs with quality design features) as well as single case experimental designs. The site provides standardized information that allows users to compare results. The Cochrane Library provides research reviews covering a range of medical, educational, and therapy interventions. The databases used and the criteria for including studies for review are provided for each intervention, as is an overview of the recommendations.

In Australia, there is a government website that can be accessed by all who are interested in the services available to families and practitioners. The Raising Children Network specifically targets families and provides information of interest to parents who are dealing with the day-to-day challenges of raising children. There is also information relating to children with disabilities, including information about services. Interestingly, there is a guide relating to specific interventions and therapies for children with ASD but not children with other types of disability. Interventions for the population of children with ASD are rated as (a) established, (b) promising, (c) yet to be determined, (d) ineffective or harmful, or (e) unratable. There is no clear information about how these ratings are applied, but the focus appears to be on the expertise of those reviewing the interventions—experts, however, who are not specifically named as providing evidence for individual interventions.

Although not all researchers will employ the same standards when applying a rating system, the take-home message is that one study with limited participants should not be sufficient to convince practitioners to change their practice. There is likely to be general agreement on using the following levels of support, regardless of who applies these standards and how they are applied: (a) strong support from research published in peer-reviewed journals (multiple experimental/quasi-experimental studies with strong design features and findings that can be generalized across populations and settings); (b) emerging support from published research (some experimental/quasi-experimental studies); (c) limited or no support from published research, or no research at all; and (d) evidence of negative or harmful outcomes. Clearly, practitioners will favor the first two levels of evidence. However, it is also important to consider further developments in research. An intervention that currently has limited support may gather more support in time. It is valuable, therefore, for practitioners to keep up to date with the research.


Research-supported approaches/interventions are often identified in controlled, clinical conditions, conditions that may not exist in the environments in which practitioners provide services. This applies to single case experimental research as well as to group designs. Randomized controlled trials are considered to be the gold standard in research and have strong internal validity (Sackett et al., 1996). However, they are sometimes difficult to implement, given the small number and diversity of research participants commonly included in research in the disability area. Nowadays, there are also ethical restrictions that may limit the use of these designs, especially with vulnerable populations. Research in this area also reports a diverse range of outcomes and outcome measures, which may make summarizing, and ultimately evaluating, research findings difficult.

Researchers continue to emphasize the need for intervention effectiveness research rather than efficacy research, that is, evidence that the intervention works in the real world when implemented by practitioners rather than by researchers with extensive resources. This provides us with practice-based or clinical evidence. Ideally, we select practices based on efficacy research and determine whether or not these can work in practice. Researchers (e.g., Cook & Odom, 2013; Fixsen, Blasé, Metz, & Van Dyke, 2013; Strain, 2018) have suggested that in order to have an influence on child outcomes, EBPs need to be implemented and evaluated in natural environments such as ECEC settings, the family home, and the community.

Although early intervention research is sometimes implemented in natural settings, where this occurs, the intervention is commonly implemented by researchers rather than practitioners (Katz & Girolametto, 2013). If we are to have research-based interventions validated through practitioner implementation (practice-based evidence), and for this practice to be sustained, it is important to ensure that practitioners themselves are able to implement the practice/intervention under investigation. Of note, many of the studies included in the review by Johnson et al. (2018) for children with ASD were implemented in naturalistic settings. However, the primary interventionists were generally members of the research team and there were often multiple interventionists. Relatively few studies involved early childhood personnel (4% of studies), caregivers (7%), or private specialists (4%) as the primary interventionist.

Practitioners determine the programs/interventions that are used in their services. They can select/recommend those that come with recommendations from other practitioners or from parents or they can seek out the interventions that have research evidence supporting them and work from there. Not all interventions that have research support will be easily implemented in a service and not all will work for every child and family. However, knowing that an intervention has efficacy data to support it is an important starting point.


In the absence of strong research-based evidence, practitioners can still consider the evidence that is available to them (best available evidence). A valuable source is the data that they collect on the children with whom they are working. These data do not need to be discrete trial events, which can be difficult or inappropriate to collect in family homes or early childhood settings. However, ignoring the importance of data in various forms or determining that collecting data in a systematic way is somehow discriminatory is to deny a valuable way of deciding whether the program selected is meeting the needs of the young children in their care. Slocum et al. (2014) refer to data-based decision making as the “ultimate hedge against the inherent uncertainties of imperfect knowledge derived from research” (p. 50), stating further that as the quality of research evidence decreases, so “the importance of frequent direct measurement of client progress increases” (p. 50). Indeed, practitioner-implemented, data-based decision making is a way of conducting one's own research with the focus being on the individual receiving the intervention.


There are incentives for using research-based approaches/interventions with infants and young children with disabilities/delays or those at risk of delay. Of course, this must be research evidence that is ultimately confirmed in practice. The most important incentive would have to be the desire to make a difference to short- and long-term outcomes for vulnerable children and their families. However, there are also more pragmatic incentives, for example, the need to be accountable for the funding of the service, whether it be government or private funding, and finally the professional credibility to which all interventionists would aspire. Fixsen et al. (2013, p. 214) have devised an impressive formula for ensuring that an intervention makes a difference to outcomes: Effective interventions × Effective implementation = Improved outcomes. If either the measure of effective intervention or that of effective implementation is low, the proposed outcomes will not be achieved.


Failure to identify research-based practice

Practitioners may not have ready access to databases that will allow them to read the relevant research. Even if able to access research literature, they may not have a strong background in reading and interpreting research or may not be willing or able to spend the time to keep abreast of the research. This is not a problem pertaining only to educators or therapists working in the field of ECI or disability. Indeed, Sackett et al. (1996) reported that in order to keep up with advances in research in the medical profession at that time, general practitioners would need to read 19 articles a day, 365 days a year. The number of articles produced more than 20 years later is considerably greater. Although the number of studies that early interventionists would need to read is likely to be fewer, it is probably unrealistic to think that they would be reading the research literature on a daily or even weekly basis. If this is the case, interventionists are likely to rely on practices that they have identified on the basis of professional judgment/opinion and experiences. These practices may be based on theory and/or professional wisdom but may not have been validated more objectively by measuring child outcomes following the implementation of the practice.

Interventions that are difficult to implement in natural settings

Working to improve outcomes that families value is likely to involve coaching and support in the family home and the community (including ECEC settings). Even so, there will be some interventions with strong supportive evidence that will be just too difficult to include in naturalistic environments largely due to lack of funding and available time and/or limited staff qualifications and experience. Indeed, Fixsen et al. (2013) have suggested that insufficient funding needed to support the effective implementation of an intervention may also help explain what they refer to as the “science-to-service gap” (p. 214). Paucity of time may very well be connected to funding. Time is needed to discuss research-based practice with colleagues to determine if/how an intervention can be implemented in the practice of the service.

Limited staff qualifications and experience

Service provision, including the implementation of EBP, will be influenced by the quality of the preservice and in-service education accessed by those supporting infants and young children with disabilities and their families in early childhood settings and other community settings and in the family home. The question is as follows: How well does preservice training prepare early intervention practitioners to implement research-supported interventions? Perhaps they are well prepared for interventions specifically related to their initial area of study, which may be much broader than early childhood or disability. However, ECI crosses many disciplines, involves working collaboratively with families and a range of professions, and should focus on measurable outcomes across all developmental domains for infants and young children as well as measurable outcomes for their family members. In the United States, the establishment of the Early Childhood Personnel Center has led to the identification of core competencies for those working in the ECI field, competencies that cross disciplinary boundaries (Bruder et al., 2019). In Australia, we are a long way from achieving this, as employers of early interventionists rarely ask for specific early intervention qualifications and there are few preservice courses that specifically cover the essential skills required to work in the field.

Early interventionists in Australia who do complete preservice courses in early childhood and allied health and who subsequently work to update their knowledge and skills through in-service training may not access courses that include a strong focus on scientific research or that equip preservice early interventionists with the competencies needed to work in the field. One of the failures of preservice courses is the lack of adequate practicum/internship programs specifically relating to the area of ECI. In-service courses are offered through a variety of sources, often provided by practitioners. For in-service courses, therefore, there is the issue of the quality of content and the approach to adult learning. Who monitors quality? What qualifications are needed for this?


What then do early interventionists rely on for information about EBP? Professional reading is one way of keeping up with advances in the field. Do they/are they able to read the research? Do they know which practices have sufficient evidence to support/refute them? In fact, failure to engage in professional reading of any kind has been identified as a problem, especially among education professionals (Rudland & Kemp, 2004).

Fixsen et al. (2013) refer to three approaches to translating evidence-based programs into practice: (a) Letting it happen: Researchers publish their research and hope that practitioners will read the research and use it in their practice; (b) Helping it happen: Creating manuals and running in-service programs; and (c) Making it happen: Program developers/researchers support practitioners/managers to implement the practice. The latter involves a researcher/practitioner partnership. One might also include the families here and call this researcher/practitioner/family partnerships.

Some service providers in Australia already have very productive partnerships in operation. These services generally employ a research officer who has practical experience in the field combined with knowledge of and interest in research. Some practitioners in these organizations are enrolled in research degrees and have strong ties with one or more universities. The role of the research team/officer (perhaps a research committee) is to (a) filter requests from researchers to approach their clients for inclusion as research participants; (b) agree to direct involvement by the service provider in the research (i.e., research designed outside the service); (c) design research (in partnership with researchers and families) that can be implemented by the practitioners for the service; and (d) monitor/guide the professional reading of service practitioners. There are advantages for each of the partners in the researcher/practitioner partnership approach. The advantages to early intervention services include support in testing the effectiveness of interventions in real-world settings (i.e., practice-based evidence), professional development for program staff, access to expertise from researchers investigating EBP in the field, university funding, and the participation of university interns/practicum students. For the researchers, such a partnership provides opportunities to implement research in partnership with practitioners and families, keep in touch with the practical side of their field, apply for funding in partnership with services, and investigate the social validity of interventions. An added incentive for researchers is the possibility of publications and/or conference presentations. Finally, and most importantly, there are the advantages for children and families: children are likely to benefit from access to interventions that have proven effective in naturalistic environments, and families have access to informed advice when deciding which programs/interventions to choose.

As acknowledged, many services offering early intervention support to infants and young children and their families have research committees within their organizations. Such organizations should be encouraged to more closely link these committees with the day-to-day operation of the organization, that is, the interventions implemented by their staff. Single case research can provide a valuable tool for investigating the effects of specific interventions on priority outcomes in early intervention settings. Involving interventionists in identifying priority outcomes in consultation with families and, where a child is included in an ECEC setting, early childhood educators would be central to this approach.


Single case research projects can illustrate the effectiveness of specific interventions when implemented with children in ECEC services (e.g., Hong & Kemp, 2007; Kemp, Stephenson, Cooper, & Hodge, 2016, 2019). Two studies included program staff in the delivery of the interventions, with a member of the program staff assisting with the collection of data for the third study. Of the three studies, one was implemented in a university-affiliated program; the others were implemented in community programs in partnership with university researchers (see Kemp et al., 2016, 2019). For these studies, the selection of priority goals was made by parents and those working closely with the children, whereas decisions relating to the design of the research were led by university researchers.

The research design, including all components of the studies, was developed in close consultation with service staff, acknowledging that all procedures involved in the research had to be feasible to implement in real-world settings. The Kemp et al. (2016, 2019) studies were conducted in ECEC settings with center educators implementing the interventions. In the 2016 study, families provided information with regard to their children's interests and use of mobile devices, both of which informed the conduct of the research. Educators participating in both studies were coached in the delivery of the program. Coaching in situ has been identified as an effective way of ensuring that EBPs are implemented with a high level of fidelity (Artman-Meeker, Fettig, Barton, Penney, & Zeng, 2015; Ledford et al., 2017).

To illustrate the level of practitioner involvement in the research, the modification to the intervention for a second phase for one of the children in the study by Kemp et al. (2019) was made collaboratively with the special educator and the educator who worked within the center program. Staff from the ECI program, including the research manager, were coauthors in the publication of the 2016 and 2019 studies.


Although a range of types of evidence is needed to guide our practice, research-based evidence is still an essential component. For EBP to be supported, systems and services need to employ staff with knowledge about evidence-based programs in management positions. Fixsen et al. (2013) refer to statewide management. This does not occur in New South Wales or in other Australian states, where the ECI system has generally developed through community programs rather than being a state-organized system implemented through human services and later education. This may be the case in other countries as well. Furthermore, skilled and knowledgeable interventionists (i.e., special educators and therapists) are needed in ECI. To ensure that there is a good supply of qualified and skilled interventionists, the quality of preservice and in-service courses needs to be addressed and opportunities for practitioners to partner with researchers organized so that they can continue to examine research and update their skills and knowledge.

An important bridge in the research-to-practice gap is achieved by encouraging practitioners to use data as part of their decision-making process. In this way, practitioners gather data on proposed outcomes, evaluate the data, and either continue the practice/intervention or make changes to their practice. This data-based decision making can be used to validate the current practice but better still can be used to evaluate the effectiveness of a practice/intervention that has been demonstrated to have efficacy when researched in a controlled setting. The practitioner needs to know how well it can work in his or her setting with his or her children (i.e., the real world).

Research-informing practice in early intervention—How hard can it be? Certainly, it is not easy, but if there is a will to have the very best practice for the infants and young children and their families in need of early intervention, we must find the means by which this can be achieved.


Artman-Meeker K., Fettig A., Barton E. E., Penney A., Zeng S. T. (2015). Applying an evidence-based framework to the early childhood coaching literature. Topics in Early Childhood Special Education, 35, 183–196. doi:10.1177/0271121415595550
Bailey D. B., Raspa M., Fox L. C. (2012). What is the future of family outcomes and family-centered services? Topics in Early Childhood Special Education, 31, 216–223. doi:10.1177/0271121411427077
Bair M. A., Enomoto E. K. (2013). Demystifying research: What's necessary and why administrators need to understand it. NASSP Bulletin, 97, 124–138.
Barton E. E., Smith B. J. (2015). Advancing high-quality preschool inclusion: A discussion and recommendations for the field. Topics in Early Childhood Special Education, 35, 69–78.
Bruder M. B., Catalino T., Chiarello L. A., Mitchell M. C., Deppe J., Gundler D., Ziegler D. (2019). Finding a common lens: Competencies across professional disciplines providing early childhood intervention. Infants & Young Children, 32, 280–293. doi:10.1097/IYC.0000000000000153
Buysse V., Hollingsworth H. L. (2009). Program quality and early childhood inclusion: Recommendations for professional development. Topics in Early Childhood Special Education, 29, 119–128.
Cook B. G., Cook S. C. (2013). Unraveling evidence-based practices in special education. Journal of Special Education, 47, 71–82. doi:10.1177/0022466911420877
Cook B. G., Odom S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79, 135–144. doi:10.1177/001440291307900201
Dempsey I., Keen D. (2008). A review of processes and outcomes in family-centered services for children with a disability. Topics in Early Childhood Special Education, 28, 42–52. doi: 10.1177/0271121408316699
Fixsen D., Blase K., Metz A., Van Dyke M. (2013). Statewide implementation of evidence-based programs. Exceptional Children, 79, 213–230. doi:10.1177/001440291307900206
Guralnick M. J. (2011). Why early intervention works: A systems perspective. Infants & Young Children, 24, 6–28. doi:10.1097/IYC.0b013e3182002cfe
Guralnick M. J. (2019). Effective early intervention: The developmental systems approach. Baltimore, MD: Brookes Publishing Co.
Hong S-J., Kemp C. (2007). Teaching sight word recognition to preschoolers with delays using activity-based intervention and didactic instruction: A comparison study. Australasian Journal of Special Education, 31, 89–107. doi:10.1080/10300110701704986
Innocenti M. S., Roggman L. A., Cook G. A. (2013). Using the PICCOLO with parents of children with a disability. Infant Mental Health Journal, 34, 307–318.
Jackson S., Pretti-Frontczak K., Harjusola-Webb S., Grisham-Brown J., Romani J. M. (2009). Response to intervention: Implications for early childhood professionals. Language, Speech, and Hearing Services in Schools, 40, 424–434. doi:10.1044/0161-1461(2009/08-0027)
Johnson L. D., Fleury D., Ford A., Rudolph B., Young K. (2018). Translating evidence-based practices to usable interventions for young children with autism. Journal of Early Intervention, 40, 158–176.
Katz E., Girolametto L. (2013). Peer-mediated interventions for preschoolers with ASD implemented in early childhood settings. Topics in Early Childhood Special Education, 33, 133–143.
Kemp C., Stephenson J., Cooper M., Hodge K. (2016). Engaging preschool children with severe and multiple disabilities using books and iPad apps. Infants & Young Children, 29, 249–266. doi:10.1097/IYC.0000000000000075
Kemp C., Stephenson J., Cooper M., Hodge K. (2019). The use of peer mediation and educator facilitation to promote turn taking in young children with autism spectrum disorder in inclusive childcare. Infants & Young Children, 32, 151–171. doi:10.1097/IYC.0000000000000146
Ledford J. R., Zimmerman K. N., Chazin K. T., Patel N. M., Morales V. A., Bennett B. P. (2017). Coaching paraprofessionals to promote engagement and social interactions during small group activities. Journal of Behavioral Education, 26, 410–432. doi:10.1007/s10864-017-9273-8
McBride B. J., Schwartz I. S. (2003). Effects of embedding caregiver-implemented teaching strategies in daily routines on children's communication outcomes. Journal of Early Intervention, 26, 175–193.
Rudland N., Kemp C. (2004). The professional reading habits of teachers: Implications for student learning. Australasian Journal of Special Education, 28, 4–17.
Sackett D., Rosenberg W., Gray J., Haynes R., Richardson W. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312 (7023), 71–72. doi:10.1136/bmj.312.7023.71
Slocum T. A., Detrich R., Wilczynski S. M., Spencer T. D., Lewis T., Wolfe K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37, 41–56. doi:10.1007/s40614-014-0005-2
Snow P. (2019, June). What does evidence-based practice mean in education in 2019? InSpEd Insights. Retrieved from
Spencer T. D., Petersen D. B., Slocum T. A., Allen M. M. (2015). Large group narrative intervention in Head Start preschools: Implications for response to intervention. Journal of Early Childhood Research, 13, 196–217. doi:10.1177/1476718X13515419
Stephenson J., Carter M., Kemp C. (2012). Quality of the information on educational and therapy interventions provided on the websites of national autism associations. Research in Autism Spectrum Disorders, 6, 11–18. doi:10.1016/j.rasd.2011.08.002
Strain P. S. (2018). Personal thoughts on early childhood special education research: An historical perspective, threats to relevance, and call to action. Journal of Early Intervention, 40, 107–116. doi:10.1177/1053815117750411
Vanderheyden A. M., Snyder P., Smith A., Sevin B., Longwell J. (2005). Effects of complete learning trials on child engagement. Topics in Early Childhood Special Education, 25, 81–94. doi:10.1177/02711214050250020501

collaborative partnerships; evidence-based practice; professional training

© 2020 Wolters Kluwer Health, Inc. All rights reserved.