Physical Therapy–Related Child Outcomes in School

An Example of Practice-Based Evidence Methodology

Effgen, Susan K. PT, PhD, FAPTA; McCoy, Sarah Westcott PT, PhD, FAPTA; Chiarello, Lisa A. PT, PhD, PCS, FAPTA; Jeffries, Lynn M. PT, PhD, PCS; Bush, Heather PhD

doi: 10.1097/PEP.0000000000000197


The need for scientific data supporting the effectiveness of physical therapy (PT) services for children with disabilities is critical, especially for those who are eligible for and receive school-based PT. Although little empirical data are available, many indications can be found that the amount and type of school-based PT vary widely across the nation. Some school systems rely on consultative or integrated models in which the therapists educate team members who then support the students throughout the school day, whereas others provide more “hands-on” service delivery along with consultation with team members.1–3 Therapists report direct “hands-on” service to be what they consider ideal school-based practice.3 Given regional variation in the employment of school-based physical therapists,4 differing recommendations for frequency of intervention,5 and a dearth of research on school-based practice, a nationwide study designed to describe current services and evaluate the relationship between interventions and student outcomes was warranted.

A “Practice-Based Evidence” (PBE) design has been used in several studies of rehabilitation outcomes6,7 and offers a method for evaluating the natural service provision setting, which in our case is school-based PT practice, and how combinations and amounts of services relate to changes in student outcomes.8–10 The PBE design provides an excellent starting point for the study of complex services that are not well defined, have not been well researched, or lack efficacy data. Although a causal relationship between services and outcomes cannot be confirmed with an observational study design, PBE aims to minimize bias by reducing the plausibility of alternative explanations.8–10 Observational designs such as the PBE design are well suited for holistic research of PT services provided within the school environment, where interventions for the student are multifaceted, occur at the level of activity and/or participation, and are subject to many personal and/or environmental influences.11,12

The purpose of this article is to describe the application of PBE methodology in our study, PT Related Child Outcomes in the Schools (PT COUNTS). The purpose of the PT COUNTS study was to prospectively describe student outcomes in areas that physical therapists believe they affect, to describe the PT services used, and to determine associations between outcomes and services. We investigated malleable factors, such as combinations of service delivery and intervention approaches, that influence or mediate change in student outcomes on individualized and standardized functional performance outcome measures.


Research Design

The PBE methodology includes collection of 3 types of data: (1) participant personal variables, (2) services provided during the episode of care, and (3) appropriate outcomes10 (Table 1). The PBE methodology accounts for participants' covariates and does not involve manipulating the intervention; rather, therapists describe what they actually do during service provision. Given that data come from current practice methods and settings, the research findings are then relatively straightforward to interpret. The PBE design culminates in the identification of best and unproductive practices, which can later inform validation studies or randomized controlled trials.7,12 To compensate for the shortcomings attributed to observational studies, several checks and balances are suggested (Table 1).10 The PBE model we used to study school-based PT practice is shown in Figure 1.

Fig. 1:
PT COUNTS Practice-based evidence. Abbreviation: PT COUNTS = PT Related Child Outcomes in the Schools.
Practice-Based Evidence Methodology


Like most PBE studies with a focus on estimation, this study required a large and diverse sample to examine outcomes.10 A traditional trial would require a sample size of 184 students for detecting moderate correlations of 0.25 with 90% power; a multiple linear regression with 3 explanatory variables and 2 control variables would also require 184 students to detect an R2 of 0.10 with at least 90% power, assuming a 2-sided significance level of 0.05. Given that students were clustered within therapists and therefore considered not to be independent, our desired sample size was inflated by the design effect of 1.3, resulting in a planned sample size of 240 students.13 Recruitment of physical therapists and their students was organized into 4 national regions (Northeast, Northwest, Southeast, and Central), each led by a co-investigator. No attempt was made to have participants in every state, but broad geographical representation was desired. Multiple methods were used to recruit physical therapists, including announcements at American Physical Therapy Association (APTA) meetings, an e-mail to the APTA Section on Pediatrics membership, and personal e-mails to school-based physical therapists across the nation.
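The sample-size inflation for clustered data described above is a simple calculation; a minimal sketch, using the base n of 184 and the design effect of 1.3 stated in the text, is:

```python
# Sketch of the sample-size inflation for clustered observations.
# base_n (184) and design_effect (1.3) are the values reported in the text.
import math

base_n = 184          # students needed if observations were independent
design_effect = 1.3   # inflation for clustering of students within therapists

planned_n = math.ceil(base_n * design_effect)
print(planned_n)  # 240
```

Under the usual formula design effect = 1 + (m - 1) x ICC, with up to 6 students per therapist, a design effect of 1.3 would correspond to an intraclass correlation of about 0.06 (an inference, not a figure stated in the study).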

Figure 2 provides an outline of our procedures and timeline. During the first year of the study, physical therapists were recruited from public school systems with at least 2, but not more than 100, physical therapists. Once the physical therapists indicated interest in participating, we then sought and received school system approval. This process ranged from very informal to complex, at times requiring approval of the school district's research review committee or additional Institutional Review Boards (IRBs) beyond the university IRB approvals already obtained. After district approval, individual school principals had to approve the study prior to student recruitment. Physical therapists in more than 100 school systems were contacted, with approval received from 59 school systems in 28 states.

Fig. 2:
Procedures and timeline of research activities. Abbreviations: GAS = Goal Attainment Scaling; IEP = Individualized Education Program; IRB = Institutional Review Board; SFA = School Function Assessment; S-PTIP = School-Physical Therapy Intervention for Pediatrics data collection system.

Physical Therapist Participants

To participate, physical therapists had to have worked at least 1 year in schools to ensure experience in the setting. Physical therapists completed at least 7 hours of online training, which included the Collaborative Institutional Training Initiative IRB certification as well as readings and narrated PowerPoint presentations on the study outcome measures and the documentation method for PT services. For each outcome measure (Goal Attainment Scaling [GAS] and School Function Assessment [SFA]), therapists completed posttests requiring a score of 80%. For documentation of services (School-Physical Therapy Interventions for Pediatrics [S-PTIP] data form), the physical therapists scored videos of students receiving PT at school and met a minimum criterion of 70% agreement with the investigators. The therapist training was a comprehensive process requiring computer knowledge and considerable time and effort, which probably explains why some therapists did not complete the training or participate in the study.

To approximate random selection of students, the trained physical therapists provided a coded list of all students on their workloads who met the inclusion criteria. If the number of eligible students was 6 or fewer, they were asked to recruit each student. If they had more than 6 eligible students, the study site coordinators randomly selected 6 students to recruit. Therapists or other school system personnel distributed written study information to the selected families. If the families indicated interest, they were sent more information and the consent form. If not enough families indicated an interest in the study, then more families were randomly selected from the therapist's list. This process continued to reach a target of at least 1 student, but not more than 6, for each physical therapist. At the study completion, the physical therapists received an honorarium of $100/student and continuing education credit for the training.
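The selection procedure above (recruit all eligible students when 6 or fewer, otherwise randomly select 6) can be sketched as follows; the function and parameter names are illustrative, not from the study protocol:

```python
# Hypothetical sketch of the site coordinators' random student selection.
import random

def select_students(eligible_ids, k=6, seed=None):
    """Return up to k randomly selected students from a therapist's coded list."""
    if len(eligible_ids) <= k:
        return list(eligible_ids)              # recruit every eligible student
    return random.Random(seed).sample(eligible_ids, k)
```

In the study, additional students were drawn from the same list if too few families consented, which would correspond to repeated sampling from the remaining IDs.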

Student Participants

Students with disabilities from kindergarten through sixth grade (ages 5-12 years) who received special education and the related service of a physical therapist at least monthly were recruited. A student was excluded if the student (1) had a disability of a progressive nature such as muscular dystrophy, for which Individualized Education Program (IEP) goals might be to maintain function rather than to achieve new, higher levels of function; (2) planned to move out of the district before the end of the school year; (3) had major surgery planned that might affect physical performance or limit school attendance; or (4) had a history of low school attendance (absences greater than 30% the previous year). Signed consent forms were received from the parents of 342 students. Assent was not obtained from the students because of concerns that students might misinterpret the assent and think they could refuse PT intervention, when the “research” involved only detailed documentation of usual care. Attrition was 11% (39 students), resulting from either student or physical therapist issues (eg, moving, illness).

Sample Summary

Figure 3 details physical therapist and student recruitment and attrition. Some physical therapists and students were lost at each stage of the study. At the actual start of data collection, 118 physical therapists were participating from the Northeast (25), Northwest (31), Southeast (31), and Central (31) regions, with 109 physical therapists completing final data collection. The majority of physical therapists were female (96%), white (96%), and middle-aged (mean age = 46 years, SD = 9.2 years), and had worked in schools an average of 13.1 years (SD = 9.1 years); 49% had a postbaccalaureate degree, with 8% indicating that they were pediatric clinical specialists. At the study completion, there were 302 students with completed posttests, and 296 students had the full 6 months of intervention data collected. The majority of students were 5–7 years of age (59%) (mean age = 7.3 years, SD = 2.0 years), white (n = 213; 72%), and male (n = 165; 56%), and medical diagnoses were cerebral palsy (n = 103; 34.8%), Down syndrome (n = 46; 15.5%), other genetic disorders (n = 40; 13.5%), global developmental delay (n = 32; 10.8%), autism (n = 21; 7.0%), and other (n = 54; 18.2%). Figure 4 shows the final sample of physical therapists and students by state.

Fig. 3:
Flow chart of study enrollment and participation. Abbreviation: PT = physical therapist.
Fig. 4:
PT COUNTS enrollment map. The number inside each state is the number of participating physical therapists. The legend in the top right corner indicates the number of students participating in each state. Abbreviation: PT COUNTS = PT Related Child Outcomes in the Schools.


Outcome Measures

Data for PT COUNTS were collected during the second year of the study. We used measures of individualized outcomes that captured student-specific functional behaviors and standardized outcomes that captured the same school-related behaviors across the entire sample.

Individualized Outcomes

For students with disabilities, an individualized approach to education is required under the Individuals with Disabilities Education Act (IDEA). The IEP is the road map developed by the IEP team for the student's plan of study and expected outcomes. To monitor achievement of IEP goals related to services provided by the physical therapist, we asked physical therapists to write subgoals using the GAS process. The GAS process is an individualized, criterion-referenced outcome measure of change in performance of a behavior. Goal attainment scaling involves defining a set of specific goals for the student that includes a 5-point possible range of outcomes.14 The merits of GAS include that goals are (1) criterion-referenced, making the measurement responsive to minimal clinically significant changes; (2) written for all levels of functional ability; and (3) given a numeric score for analyzing group performances.14 Goal attainment scaling has been used in many studies of children with disabilities,15–21 including studies in schools.19 It has demonstrated good content validity, reliability, and responsiveness in studies of children with cerebral palsy,16,17 is clinically affordable,17 and has been more responsive to changes in motor performance than standardized measures.21

To apply GAS, the physical therapist selected the student's IEP goals that they would focus on during the following year. Each goal was converted into related, progressively more difficult subgoals using a 5-point scale.19,21,22 The student's present baseline status was anchored at −2; −1 represented a less favorable outcome that was still better than the present level; 0 was the expected performance; and more favorable, predetermined outcomes were given the values of +1 and +2. Table 2 provides an example of a goal using GAS and the criteria for developing and reviewing GAS goals. To improve the validity of the GAS outcome beyond the required training, an iterative process was used between the investigators and therapists. The investigators reviewed the 596 student goals, first checking that the −2 and 0 levels matched the narrative descriptions provided by the therapists of the students' current and expected achievement levels, respectively, and then evaluating the goals for consistency using a standardized GAS Criteria Checklist,14 with feedback and suggested edits provided to the therapists when necessary.

Example of Goal Using Goal Attainment Scaling and the Criteria for the Goal Review Process
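The 5-point GAS structure can be sketched as a simple mapping from level to predetermined criterion; the goal wording below is invented for illustration and is not a goal from the study:

```python
# Hypothetical GAS goal: each level holds a predetermined criterion.
# Goal text is invented for illustration only.
gas_goal = {
    -2: "Walks 10 ft with walker and contact guard",   # baseline (present level)
    -1: "Walks 25 ft with walker and contact guard",   # less than expected
    0:  "Walks 50 ft with walker independently",       # expected outcome
    1:  "Walks 100 ft with walker independently",      # more than expected
    2:  "Walks 150 ft with walker independently",      # much more than expected
}

def gas_score(achieved_level):
    """Validate and return the numeric GAS score for the achieved level."""
    if achieved_level not in gas_goal:
        raise ValueError("GAS levels range from -2 to +2")
    return achieved_level
```

The numeric levels are what make GAS usable for group analysis: each student's achievement, however individualized the goal content, maps onto the same −2 to +2 scale.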

We set a maximum of 4 goals per student, and we aimed to have 1 in each of the following categories: posture/mobility, self-care, recreation/fitness, and academics. We initially asked the physical therapist to categorize the goal, but after reviewing their categorizations, we recategorized the goals through a consensus process so that the content across categories was similar. The physical therapist selected 1 goal as the student's primary goal. At the end of the 6 months of data collection, the physical therapist determined student achievement of the goals. To reduce potential bias, another IEP team member was asked to verify goal achievement. To allow for decreases in function, we also had therapists report regression of skills below the baseline level.

Standardized Outcomes

The SFA23 is a standardized, criterion-referenced tool, specifically developed to examine the functional performance and participation of children with disabilities in schools from kindergarten through grade 6. It is a judgment-based test that is both discriminative and evaluative. The student's levels of activity, required support, and performance in daily school routines are assessed. Psychometric studies suggest that the SFA has high internal consistency (coefficients ranging from 0.92 to 0.98), is comprehensive, and is appropriate for elementary students with disabilities. Test-retest and interrater reliability studies report good stability of the scores (test-retest r values > 0.80).23–25

The SFA has 3 parts: Part I, Participation in 6 major school activity settings; Part II, Task Supports, the assistance and adaptation needed to perform school-related physical tasks; and Part III, Activity Performance for physical tasks and cognitive-behavioral tasks, which examines the student's ability to perform common school activities using a 4-point Likert scale ranging from 1 (does not perform) to 4 (consistent performance). We did not include the cognitive/behavioral tasks. Under physical tasks, for Posture and Mobility we used the subsections Travel, Maintaining and Changing Positions, Manipulation with Movement, and Up/Down Stairs; for Recreation and Fitness, the subsection Recreational Movement; and for Self-care, the subsections Eating and Drinking, Hygiene, and Clothing Management. The chosen subsections related to the adaptive and functional skills on which physical therapists focus. Physical therapists completed the SFA in collaboration with teachers and other related service personnel as needed.

Physical Therapy Services

A process-oriented data collection system was developed specific to the school setting for reliably recording the services provided. The Pediatric Physical Therapy Intervention Activities data collection system,12 which was modeled after a system developed for adults poststroke,7 was modified for students in kindergarten through 12th grade and was called School-Physical Therapy Interventions for Pediatrics (S-PTIP).26 The S-PTIP includes a manual with operational definitions, the data form (see the Appendix),26 and examples illustrating the use of the tool. Therapists recorded services to the student (time spent in various activities and types of service delivery, interventions used, service location, participation by the student in services) and time spent on behalf of the student (eg, consultation, documentation).

Before the start of this study, face and content validity of the S-PTIP were established via review by a panel of experts and field-testing by school-based therapists.27 In addition, 15 physical therapists and 25 students from across the nation participated in an intrarater reliability study. A high degree of consistency (Cronbach α = 0.95) was found in therapists' S-PTIP reports.28

Following student assessment in the beginning of the school year, therapists used the S-PTIP to record service delivery, activities, and interventions for each student during the second year of the study. A hard copy of the S-PTIP data form was completed weekly by the physical therapist for each student and then submitted to the site coordinator. Because of time for pre- and posttesting, S-PTIP data were collected for the middle 6 months of the school year.

The PBE design involves capturing intervention data from multiple therapists across multiple settings over long periods of time; therefore, procedural fidelity is important.29 For PT COUNTS, procedural fidelity was defined as the accuracy and consistency of reported PT services on the S-PTIP form. To evaluate this, the investigators observed 34 physical therapists (approximately 30% from each region) during a service delivery session midway through data collection. During the visits, the physical therapists provided their usual student services while the investigator observed the session. After the session, the physical therapist and the investigator independently completed the S-PTIP form and then discussed any discrepancies. The physical therapists and students observed during the fidelity study reflected the overall population of study participants.

We compared S-PTIPs by calculating the number of discrepancies between the physical therapist and the investigator. A few discrepancies were found for interventions, and those that occurred were in situations where an intervention may not be readily apparent through observation alone. For instance, postural awareness had the most discrepancies (11). This intervention is easily influenced by the therapist's intent and knowledge of the student, of which an observer could not be aware. In contrast, an intervention such as bathroom access (0 discrepancies) can be observed and coded more objectively. Coding of activities showed some discrepancies of a similar nature. For example, when a student was sitting and playing ball with the physical therapist, it was difficult for the observer to determine whether the activity was primarily recreation, physical education, or sitting. The physical therapist understood why the activity was occurring, knowledge an observer would not have but that was required for correct coding. Similarly, when a student transitioned from sit to stand, it was difficult for an observer to determine whether the primary activity was sitting, standing, or transitioning. Therefore, for the procedural fidelity analyses and the later study analyses, we combined the activities of physical education and recreation into 1 category, and the activities of sitting, standing, and transitions into 1 category. With these overlapping areas combined, greater overall consistency was obtained, with a range of 1 to 9 discrepancies in activity coding between the physical therapist and the investigator. For types of service delivery to the student, we found a range of 1 to 4 discrepancies between the physical therapist and the investigator. For the overall minutes of services provided to the student, the physical therapist's and the investigator's reports showed a high, significant correlation (0.98; P < .05).

Data Management and Analysis

To manage the large volume of data and provide appropriate summaries that correctly depict potential relationships and trends, we developed a data management plan in collaboration with our statistical team. The data management plan outlined procedures for multilevel data collection, web-based data entry, quality assurance, and data integrity. Our processes for data management and analysis planning are described as follows.

Each study site coordinator monitored and received data from the physical therapists and then used a secure web-based Research Electronic Data Capture (REDCap) system for data entry. The REDCap system provided an intuitive interface for entering and validating data and an export mechanism to statistical packages; its relational features linked the data for each student and physical therapist. To verify the validity of the entered data, we used programmed edit checks in which illogical or out-of-range values were flagged for confirmation during data entry. We limited the number of study personnel who had administrative rights to change the data, and all changes were tracked through the REDCap audit trail. Given the nature of PBE studies, planning specific comparative analyses is difficult because little is known in advance about what groups will emerge. Consequently, the PT COUNTS analysis plan focused on descriptive statistics and plots for further examination of potential data issues. Analysis strategies for investigating potential relationships or comparisons were not specifically elucidated (as they would be in a comparative trial). This does not mean that our research plan was undefined; rather, it focused on descriptions with the intent of informing dynamic and flexible covariate-adjusted comparisons.
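A programmed edit check of the kind described above simply flags values outside expected ranges; a minimal sketch follows, with hypothetical field names and ranges (not the actual REDCap configuration):

```python
# Hypothetical range-based edit check; field names and bounds are invented,
# except student age, which mirrors the study's 5-12 year inclusion range.
VALID_RANGES = {
    "student_age_years": (5, 12),
    "session_minutes": (0, 480),
    "gas_level": (-2, 2),
}

def flag_out_of_range(record):
    """Return the names of fields whose values fall outside allowed ranges."""
    flagged = []
    for field, (low, high) in VALID_RANGES.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            flagged.append(field)
    return flagged
```

Flagged fields would be held for confirmation at entry time rather than silently accepted, mirroring the confirmation step described above.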

Before any calculated variables were created, this study produced more than 465 data capture fields with multiple options for collapsing and summarizing variables (eg, by week, averages per week, cumulative totals). Hence, prior to comparative analyses, we first considered descriptive tables constructed using both outcome and comparison groups; comparisons were made using chi-square tests of independence for categorical outcomes and ANOVA for continuous outcomes. Review of these tables resulted in a series of data-driven research questions, which were then further investigated through factorial ANOVAs and multiple linear and logistic regressions,30 using SAS (version 9.3 or higher) for all analyses.
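The study used SAS, but the chi-square test of independence named above can be sketched from first principles for a 2x2 table; the counts below are invented for illustration:

```python
# Standard-library sketch of a chi-square test of independence, 2x2 case.
# Counts are hypothetical, not study data.
def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for observed, row, col in ((a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: primary goal achieved (yes/no) by two service models.
stat = chi_square_2x2([(40, 20), (25, 35)])
significant = stat > 3.841  # critical value for df = 1, alpha = .05
```

Each cell contributes (observed − expected)² / expected, where the expected count assumes independence of rows and columns; the statistic is compared with the chi-square critical value for the table's degrees of freedom.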


Summary

The PT COUNTS study used a PBE research design to answer complex research questions related directly to the current delivery of school-based PT services. We recruited 177 physical therapists, of whom 109 completed the training and the study. A total of 342 of those therapists' students were recruited, of whom 296 had the complete data needed for analyses. The physical therapists completed individualized (GAS) and standardized (SFA) outcome measures for students at the beginning and end of the school year and documented the services provided for 6 months. Upon completion of data analysis, the findings of this study will provide guidance to physical therapists on which activities, interventions, and service approaches are associated with positive student outcomes. These results might also drive additional research by guiding choices of interventions to study and thus reduce the cost and time associated with such research. The PBE design should also be considered for studies of PT services in other settings.


Acknowledgments

The authors thank Dr Tracy Stoner, Dr Dianne Rios, and Julia Smarr for assisting in therapist and student recruitment, retention, and training. Thank you to all of the physical therapists who completed the extensive training and assisted in data collection.


References

1. Effgen S, Kaminker MK. The educational environment. In: Campbell SK, Palisano RJ, Orlin MN, eds. Physical Therapy for Children. 4th ed. St Louis, MO: Elsevier Saunders; 2012:978.
2. McEwen I, ed. Providing Physical Therapy Services Under Parts B & C of the Individuals with Disabilities Education Act (IDEA). Alexandria, VA: Section on Pediatrics, APTA; 2009.
3. Effgen SK, Kaminker MK. Nationwide survey of school-based physical therapy practice. Pediatr Phys Ther. 2014;26(4):394–403.
4. Effgen S, Teeters Myers C, Myers D. National distribution of physical and occupational therapists serving children with disabilities in educational environments. Phys Disabil Edu Relat Serv. 2007;XXVI(1):41–61.
5. Kaminker MK, Chiarello LA, Smith JAC. Decision making for physical therapy service delivery in schools: a nationwide analysis by geographic region. Pediatr Phys Ther. 2006;18(3):204–213.
6. Teeter L, Gassaway J, Taylor S, et al. Relationship of physical therapy inpatient rehabilitation interventions and patient characteristics to outcomes following spinal cord injury: the SCIRehab project. J Spinal Cord Med. 2012;35(6):503–526.
7. Jette DU, Latham NK, Smout RJ, Gassaway J, Slavin MD, Horn SD. Physical therapy interventions for patients with stroke in inpatient rehabilitation facilities. Phys Ther. 2005;85(3):238–248.
8. Horn SD, Gassaway J. Practice-based evidence study design for comparative effectiveness research. Med Care. 2007;45(10)(suppl 2):S50–S57.
9. Dejong G, Horn SD, Gassaway JA, Slavin MD, Dijkers MP. Toward a taxonomy of rehabilitation interventions: using an inductive approach to examine the “black box” of rehabilitation. Arch Phys Med Rehabil. 2004;85(4):678–686.
10. Horn SD, DeJong G, Deutscher D. Practice-based evidence research in rehabilitation: an alternative to randomized controlled trials and traditional observational studies. Arch Phys Med Rehabil. 2012;93(8):S127–S137.
11. Bartlett DJ, Macnab J, MacArthur C, et al. Advancing rehabilitation research: an interactionist perspective to guide question and design. Disabil Rehabil. 2006;28:1169–1176.
12. Hashimoto M, McCoy SW. Validation of an activity-based data form developed to reflect interventions used by pediatric physical therapists. Pediatr Phys Ther. 2009;21(1):53–61.
13. Donner A, Klar N. Design and Analysis of Cluster Randomized Trials in Health Research. London, England: Arnold; 2000.
14. McDougal J, King G. Goal Attainment Scaling: Description, Utility, and Applications in Pediatric Therapy Services. 2nd ed. London, Ontario, Canada: Thames Valley Children's Centre; 2007.
15. Law LSH, Dai MO, Siu A. Applicability of goal attainment scaling in the evaluation of gross motor changes in children with cerebral palsy. Hong Kong Physiother J. 2004;22:22–28.
16. Steenbeek D, Meester-Delver A, Becher JG, Lankhorst GJ. The effect of botulinum toxin type A treatment of the lower extremity on the level of functional abilities in children with cerebral palsy: evaluation with goal attainment scaling. Clin Rehabil. 2005;19(3):274–282.
17. Novak I, McIntyre S, Morgan C, et al. A systematic review of interventions for children with cerebral palsy: state of the evidence. Dev Med Child Neurol. 2013;55(10):885–910.
18. Ahl LE, Johansson E, Granat T, Carlberg EB. Functional therapy for children with cerebral palsy: an ecological approach. Dev Med Child Neurol. 2005;47(9):613–619.
19. Brown DA, Effgen S, Palisano R. Performance following ability-focused physical therapy intervention in individuals with severely limited physical and cognitive abilities. Phys Ther. 1998;78(9):934.
20. King GA, McDougall J, Tucker MA, et al. An evaluation of functional, school-based therapy services for children with special needs. Phys Occup Ther Pediatr. 1999;19(2):5–29.
21. Palisano RJ, Haley SM, Brown DA. Goal attainment scaling as a measure of change in infants with motor delays. Phys Ther. 1992;72(6):432–437.
22. King G, McDougall J, Palisano RJ, Gritzan J, Tucker MA. Goal attainment scaling: its use in evaluating pediatric therapy programs. Phys Occup Ther Pediatr. 1999;19(2):31–52.
23. Coster W, Denny T, Haltiwanger J, Haley SM. School Function Assessment. San Antonio, TX: The Psychological Corporation; 1998.
24. Hwang LJ, Davies PL. Brief report: Rasch analysis of the School Function Assessment provides additional evidence for the internal validity of the activity performance scales. Am J Occup Ther. 2009;63:369–373.
25. Davies PL, Soon PL, Young M, Clausen-Yamaki A. Validity and reliability of the School Function Assessment in elementary school students with disabilities. Phys Occup Ther Pediatr. 2004;24(3):23–43.
26. McCoy SW, Jeffries L, Effgen S, et al. School Physical Therapy Interventions for Pediatrics (S-PTIP) Manual and Forms, Version 4. Published February 2014. Accessed December 4, 2014.
27. McCoy SW, Linn M. Validity of the School-Physical Therapy Interventions for Pediatrics data system for use in clinical improvement design studies. Pediatr Phys Ther. 2011;23:121–122.
28. Effgen SK, McCoy S, Jeffries L, et al. Reliability of the School-Physical Therapy Interventions for Pediatrics data system. Pediatr Phys Ther. 2013;26(1):118–119.
29. Wolery M, Garfinkle AN. Measures in intervention research with young children who have autism. J Autism Dev Disord. 2002;32(5):463–478.
30. Bush H. Biostatistics: An Applied Introduction for the Public Health Practitioner. Clifton Park, NY: Delmar; 2012.

child/disabled; evidence-based practice/methods; health services research/methods; observational study; outcomes; physical therapy practice; research design; schools

Copyright © 2016 Academy of Pediatric Physical Therapy of the American Physical Therapy Association