Original Research

Grading the Functional Movement Screen

A Comparison of Manual (Real-Time) and Objective Methods

Whiteside, David1; Deneweth, Jessica M.1; Pohorence, Melissa A.2; Sandoval, Bo2; Russell, Jason R.1; McLean, Scott G.1; Zernicke, Ronald F.1; Goulet, Grant C.1

The Journal of Strength & Conditioning Research: April 2016 - Volume 30 - Issue 4 - p 924-933
doi: 10.1519/JSC.0000000000000654

Introduction

There is an expectation placed on sports practitioners to elicit performance improvements in athletes while simultaneously curtailing musculoskeletal injuries. The latter often involves screening processes proposed to detect functional movement deficiencies that relate to injury (28,32). At the elite level, injury screens are often used to direct personalized remedial strength and/or conditioning programs that are designed to improve function and reduce injury risk. While the broad research base detailing the structural and mechanical contributors to musculoskeletal injury has engendered a number of injury screening protocols, the Functional Movement Screen (FMS) is particularly prominent.

The FMS has garnered widespread attention over the past decade as a standard functional evaluation that purports to provide a straightforward, low-cost assessment of injury risk in athletes. It was developed to reveal functional deficiencies that present as potential injury risk factors, and it has gained several high-profile proponents in the NFL, NBA, NHL, MLB, PGA, and NCAA. The screen involves 7 exercises intended to challenge flexibility, stability, strength, and coordination, purportedly providing suitable evaluations of each (12). A tester is required to observe and assign a score to each exercise (0, 1, 2, or 3) based on a number of functional criteria. The aggregate of these scores, termed the composite score (0–21), provides an indication of performance across the entire screen. As such, neither the individual scores nor the composite score is attributed to a specific functional source, which, intuitively, is critical to a remedial training program (18). The content validity of the FMS also seems questionable, as the screen seemingly neglects several nonmechanical factors related to injury, such as cognition (5), neural pathways (35), and proprioception (8). This may explain why the link between FMS scores and mechanics/injury mechanisms has often been reported as unclear or absent (3,21,30,33). However, other work has reported a relation between FMS scores and injury (10,24), presenting the possibility that the FMS possesses some value as a screening tool. Akin to FMS use in the field, the majority of investigations in this domain have proceeded under the assumption that manual grading (i.e., viewing the exercise live and assigning a grade in real time) is a valid measurement tool. More explicitly, there is an expectation that the tester can sufficiently perceive the mechanics related to each of the grading criteria and accurately grade each exercise.

The accuracy of FMS grading is partly dependent on the grading criteria defined by the developers of the FMS. Explorations into expert interrater reliability in the FMS have revealed mixed results (Table 1), and it is evident that a consensus is yet to be established, although the vague FMS grading criteria may offer an explanation for this (27). In contrast to the research attention that has been afforded to interrater reliability, virtually no investigations have probed the accuracy of FMS grades assigned by a manual tester. The single exception was a study on the deep squat exercise that reported 80% agreement between the scores assigned by a manual tester and those derived from objective inertial measurement units (IMUs) (23). Although that study provided initial support for the criterion validity of manual FMS grading, it is logical to extend this line of inquiry to the remaining exercises. Establishing whether manual grading is a valid measurement tool would help determine the value of the FMS and the significance of previous research. Consequently, the primary objective of this study was to evaluate the accuracy of manual FMS grades using an objective (motion capture) grading system.

Table 1: Interrater reliability in the FMS (using expert raters).*

Methods

Experimental Approach to the Problem

To date, there have been no systematic evaluations of the criterion validity of manual FMS grading. To address this, 11 NCAA Division I athletes were fitted with a full-body IMU system and completed 6 of the 7 FMS exercises. During each exercise, a certified FMS tester graded each athlete according to the standard protocols, while the IMU system recorded full-body kinematics. The kinematic recordings were used to grade each movement objectively, according to discrete kinematic thresholds that were developed to correspond to FMS grading criteria (Table 2). The criterion validity of manual FMS grading was then evaluated by comparing the manual and objective grades.

Table 2: The FMS scoring criteria and their kinematic equivalents developed for this study.*

Subjects

Before subject recruitment, the institutional review board of the university approved the procedures and cohort. Eleven NCAA Division I women's basketball players (mean age: 19.7 ± 1.5 years; height: 1.81 ± 0.07 m; mass: 74.6 ± 11.4 kg) were recruited and provided informed consent to participate in the study. No athletes reported pain during the testing protocol.

Procedure

The athletes were required to attend a single testing session at the team's indoor practice facility. A single FMS tester directed all testing sessions, which were administered in accordance with the standard FMS protocols. These protocols are described in the following sections.

Inertial Measurement System

In recent years, IMUs have been used both in isolation (4,7) and as connected arrays (9,15,41) in the analysis of human motion. These sensors provide logistical and ecological advantages over traditional optical motion capture systems. Kinematic data in this study were measured using a 17-segment inertial measurement system (MVN BIOMECH; Xsens Technologies, Enschede, the Netherlands) operating at 120 Hz. This motion capture system has previously been used to measure kinematics in a variety of movement scenarios (11,22,25,37). A single tester fitted the system to each athlete using a suitably sized Lycra bodysuit that fixed each IMU (mass: 0.03 kg; dimensions: 0.038 × 0.053 × 0.021 m) to a specific location on each body segment (Figure 1). Once the Lycra suit had been fitted, 9 anthropometric measures were taken according to the system's native protocols (described in (26)). Subjects then assumed a neutral standing posture to calibrate the sensor orientations (i.e., to define the "zero" angles at each joint). A subject-specific anatomical model was then constructed from the anthropometric measures and the sensor orientation data collected during the calibration trial (effectively accounting for variations in sensor placement on each segment). Within the model, segments were represented using right-handed reference frames (positive x pointing anteriorly, positive y pointing proximally, and positive z pointing rightward). Constraining the model according to the anthropometric measurements also minimized the effect of soft tissue artifact, whereas the Xsens native Kalman filter was used to eliminate drift (37). No additional filtering techniques were applied to the data. The joint (hip, knee, and lumbar) rotations of interest in this study were expressed using the Euler ZXY decomposition sequence (42) and have been validated previously (20,26,43). The resolution of the 3-dimensional sensor orientations was 0.05°, and the measured accuracy was ±0.5°. To preserve consistency with the FMS grading guidelines, segment orientations were expressed relative to the anatomical planes (frontal, sagittal, or transverse) noted in Table 2, using the acute angle between the relevant anatomical axis and the plane of reference. In the deep squat, for example, alignment of the knees over the feet was assessed by measuring the angular excursion of the long (y) axis of the tibia from the vertical, in the frontal plane.
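To illustrate how a recorded segment orientation can be reduced to the planar angles referenced in Table 2, the following sketch computes the acute angle between a segment's long (y) axis and the vertical in the frontal plane. The rotation-matrix input, the global-frame convention (+Z vertical, +X anterior), and the function name are illustrative assumptions and do not reproduce the authors' processing pipeline.

```python
import numpy as np

def frontal_tilt_from_vertical(r_seg_to_global: np.ndarray) -> float:
    """Acute angle (degrees) between a segment's long (+y) axis and the
    vertical, measured in the frontal plane.

    Assumes a global frame with +Z vertical and +X anterior, so the frontal
    plane is the global Y-Z plane (illustrative conventions only).
    """
    long_axis = r_seg_to_global[:, 1]                      # segment +y axis in global coordinates
    frontal = np.array([0.0, long_axis[1], long_axis[2]])  # project onto the Y-Z (frontal) plane
    frontal = frontal / np.linalg.norm(frontal)
    cos_acute = abs(float(np.dot(frontal, [0.0, 0.0, 1.0])))  # acute angle to the vertical
    return float(np.degrees(np.arccos(np.clip(cos_acute, 0.0, 1.0))))

# Example: a tibia tilted roughly 10 degrees toward the midline in the frontal
# plane would return ~10, which could then be compared against a grading threshold.
```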

Figure 1: Locations of the inertial measurement unit sensors (broken circles represent posteriorly mounted sensors).

Functional Movement Screen

The FMS is a standard, widely used screening tool comprising 7 exercises that are purported to be suitable assessors of function. According to the FMS developers, these exercises were designed to "present practitioners with observable locomotor, manipulative, and stabilizing function," as they supposedly require a balance of mobility and stability. The 7 exercises that make up the FMS are (a) deep squat; (b) hurdle step; (c) in-line lunge; (d) shoulder mobility; (e) active straight-leg raise; (f) trunk stability push-up; and (g) rotary stability (Figure 2); they have been detailed previously (12-14,24,40). This study assessed all but the shoulder mobility exercise, as that exercise already involves an objective measurement modality (a ruler). The standard FMS equipment (dowel, hurdle, and 0.61 × 0.15-m board) was used to perform the exercises.

Figure 2: The Functional Movement Screen exercises appraised in this study. (A) Deep squat; (B) active leg raise; (C) trunk stability push-up; (D) in-line lunge; (E) rotary stability; (F) hurdle step.

A certified FMS tester administered the screen to all athletes. The tester used the scripted instructions provided in the FMS documentation to brief athletes before each exercise. Athletes completed each exercise 3 times, and the tester assigned a score based on observations of the relevant functional criteria. The first repetition of each exercise was recorded by the IMU system and graded according to the objective kinematic criteria. A score of 3 denoted that the athlete achieved all of the relevant criteria, whereas a score of 0 was assigned if the athlete experienced pain while performing the exercise (no instances of pain were noted). The intermediate scores denoted that some criteria were achieved and others were not (see Supplemental Digital Content, http://links.lww.com/JSCR/A7). Because no athlete experienced pain in this study, the FMS scoring was effectively reduced to a 3-point scale (1–3). The composite score was not calculated, as it was not relevant to the research question. The 6 exercises were completed in the order noted above (with shoulder mobility omitted).
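For illustration, the scoring logic described above can be expressed as a simple rule. The distinction between a score of 1 and a score of 2 is exercise-specific in the official FMS guidelines, so the sketch below is a simplified assumption rather than the official rubric.

```python
def fms_score(criteria_met: list, pain: bool = False) -> int:
    """Simplified FMS-style scoring rule (illustrative only; the official
    distinction between a 1 and a 2 is exercise-specific)."""
    if pain:
        return 0          # pain during the exercise always scores 0
    if all(criteria_met):
        return 3          # all grading criteria achieved
    if any(criteria_met):
        return 2          # some criteria achieved, others not
    return 1              # no criteria achieved

# A pain-free athlete meeting 2 of 3 criteria would score 2 under this rule.
print(fms_score([True, True, False]))  # -> 2
```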

Although practitioners observe 3 repetitions, different criteria are graded in each repetition (e.g., knee flexion in the first repetition; trunk tilt in the second; dowel position in the third). Therefore, the tester is not garnering 3 observations of each criterion when they are grading (it would be reasonable to expect 2, at most). For this reason, sampling all 3 trials with the IMU system was deemed inappropriate, as the objective system would be collecting extra observations. It would be equally unsuitable to have the manual tester evaluate a single repetition, as they would be unable to evaluate multiplanar criteria from 1 vantage point. Grading a single repetition with the IMUs and all 3 repetitions with the manual tester was a compromise that attempted to control the number of observations garnered by each grading system and is acknowledged as a limiting factor in this study.

Developing Objective Functional Movement Screen Grading Criteria

Although the grading criteria for the FMS exercises are outlined by its developers (12), they are nonspecific and often do not refer to particular segment orientations and/or joint rotations. Therefore, it was necessary to establish discrete kinematic thresholds corresponding to each of the grading criteria to assess grading accuracy in this study (Table 2).
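As a sketch of how such discrete thresholds can be applied, the example below evaluates a hypothetical set of deep squat criteria against measured peak angles. The criterion names, variable names, and threshold values are placeholders and do not reproduce Table 2.

```python
# Hypothetical deep squat criteria; the threshold values, key names, and the
# criteria themselves are illustrative, not the study's Table 2.
DEEP_SQUAT_CRITERIA = {
    "femur below horizontal":  lambda k: k["peak_hip_flexion_deg"] >= 90.0,
    "knees aligned over feet": lambda k: k["tibia_frontal_tilt_deg"] <= 10.0,
    "torso parallel to tibia": lambda k: abs(k["trunk_sagittal_tilt_deg"]
                                             - k["tibia_sagittal_tilt_deg"]) <= 10.0,
}

def evaluate_criteria(kinematics: dict, criteria: dict) -> dict:
    """Apply each discrete kinematic threshold and return pass/fail per criterion."""
    return {name: check(kinematics) for name, check in criteria.items()}

# Example trial (hypothetical peak values, in degrees).
trial = {"peak_hip_flexion_deg": 94.0,
         "tibia_frontal_tilt_deg": 12.0,
         "trunk_sagittal_tilt_deg": 35.0,
         "tibia_sagittal_tilt_deg": 30.0}
print(evaluate_criteria(trial, DEEP_SQUAT_CRITERIA))
# -> {'femur below horizontal': True, 'knees aligned over feet': False,
#     'torso parallel to tibia': True}
```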

Statistical Analyses

The accuracy of manual grading was assessed using the weighted kappa statistic (κw) (16). This statistic provides a value between −1 and 1 that denotes the level of agreement between 2 ratings: a value of 0 signifies a level of agreement equivalent to that expected from chance alone, whereas 1 represents perfect agreement. To aid interpretation in the context of a relatively small sample, the level of agreement was interpreted using the scales developed by both Altman (1) and Fleiss et al. (16) (Table 3).
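As an illustration of this analysis, the sketch below computes a weighted kappa for hypothetical manual and IMU-derived grades using scikit-learn's cohen_kappa_score. Linear weighting is assumed here; the grades shown are not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical manual and IMU-derived grades for one exercise (n = 11).
manual_grades = [3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 3]
imu_grades    = [2, 2, 1, 3, 2, 2, 2, 1, 2, 2, 3]

# Linear weighting penalizes a 1-vs-3 disagreement twice as heavily as a
# 2-vs-3 disagreement; the weighting scheme used by the authors is not
# specified here, so this is one reasonable choice.
kappa_w = cohen_kappa_score(manual_grades, imu_grades,
                            weights="linear", labels=[1, 2, 3])
print(f"weighted kappa = {kappa_w:.2f}")
```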

Table 3: Corresponding kappa values and levels of agreement on 2 prominent scales.

Results

The levels of agreement between the manual tester and objective (IMU) system were calculated for each exercise (Table 4). Five of the 10 graded movements displayed percentage agreement of 50% or less (deep squat, right hurdle step, right in-line lunge, and both rotary stability movements), and 5 displayed agreement greater than 50% (left hurdle step, left in-line lunge, trunk stability push-up, and both active leg raise movements).

Table 4: Level of grading agreement between the certified tester and IMU system.*

The weighted kappa values revealed that the level of agreement was, statistically, poorest in the rotary stability exercise (overall κw = 0.05; left κw = 0.13; right κw = 0.08). The deep squat (κw = 0.22), in-line lunge (κw = 0.20; L = 0.36; R = 0.23), active leg raise (κw = 0.21; L = 0.38; R = 0.38), and push-up (κw = 0.34) all displayed poor and fair levels of agreement on the Fleiss et al. (16) and Altman (1) scales, respectively. The right hurdle step displayed poor/fair agreement (κw = 0.32), and the left hurdle step displayed fair to good/moderate agreement (κw = 0.52).

Discussion

The objective of this study was to determine the criterion validity of manual FMS grading by evaluating the scores assigned by a certified FMS tester against those measured by an objective motion capture system. The results indicated disparities in the scores assigned by the tester and IMU system on all 6 FMS exercises. The ambiguous grading criteria and need to visually extract numerous measures simultaneously may have contributed to the poor accuracy of the manual grades. These findings extend previous interrater reliability investigations that have shown mixed results, and imply that manual grading may not provide a valid measurement instrument. Establishing explicit grading guidelines and criteria may help standardize the FMS and improve grading accuracy, although the screen's relevance to performance and injury requires scientific scrutiny thereafter.

The considerable grading disparity between the tester and IMU system in all 6 FMS exercises is at odds with preliminary research reporting manual grading to be a valid measurement tool (23). The left hurdle step displayed the highest kappa value of any exercise (0.52), although even this value corresponds only to an intermediate level of agreement on both scales. These findings imply either that the manual tester was unable to accurately perceive the grading criteria or that their interpretations of the grading criteria differed from what is noted in Table 2. Both scenarios present a concern for proponents of the FMS, as inaccurate and/or subjective grading compromises the value of the screen.

Regarding perception, the FMS documentation often directs the tester's attention to specific mechanics (e.g., trunk stability push-up: "make sure the chest and stomach come off the floor simultaneously"). However, the abundant criteria in most exercises require the tester to survey several areas simultaneously. It has been postulated that, beyond a threshold, cue density can interfere with information processing in humans (39). Therefore, the need to simultaneously extract visual cues from several sources may be expected to jeopardize manual grading and partially explain the poor levels of agreement in this study. Reducing testers' attentional focus to fewer, readily identifiable cues could prevent this information overload and improve grading accuracy. In addition, Figure 3 presents an example of how mechanical asymmetries could confound the grading process. Specifically, in the deep squat, athlete A's left femur descended below the horizontal but the right did not, in which case the subject's grade seemingly depends on the tester's vantage point. Furthermore, 7 athletes were within ±5° of the 90° threshold. Given that visual estimates of kinematics involve mean errors on the order of 6.5–12.6° in video-based (36) and 10–15° in live (2) observation of static hip movements, there is a distinct potential for erroneous measurement in these cases. Indeed, some advocate eschewing visual estimation of kinematics altogether in clinical settings for this reason (29).

Although it compromises the speed for which the FMS is renowned, obtaining objective measures provides a solution to the abovementioned issues and also gives access to more relevant mechanics. For example, Figure 4 illustrates how 10 of the 11 athletes were assigned the same (IMU-derived) FMS grade in the active leg raise, despite a clear (22.5°) variation in the peak hip flexion values among these athletes. The peak hip flexion angle would therefore provide a more sensitive indicator of flexibility in the active leg raise than any ordinal scale (Figure 4). Objective measures could also enhance research into FMS and injury associations and aid practitioners by identifying the precise sources of movement deficiencies. However, the development of uniform grading criteria that relate to injury is an obvious precursor to this.

Figure 3: Asymmetry in the recorded femur angle during the deep squat exercise.

Figure 4: Differences in the assigned Functional Movement Screen (FMS) score (squares) and peak hip flexion angle (circles) in the active leg raise exercise.

Another major challenge for graders is the ambiguity of the grading criteria. For example, directives such as "watch for a stable torso" (hurdle step) and "performs a correct repetition" (rotary stability) are open to interpretation. Both Minick et al. (27) and Shultz et al. (38) allude to this ambiguity as a drawback of the FMS, and others have proposed explicating the grading guidelines to yield more reliable scores (31). Without categorical grading thresholds, even objective measurement tools are of little use, as the question of what to measure remains. Explicit grading criteria (Table 2) offer a first step toward uniformity, but future scrutiny of their relevance to performance is essential. It is also worth noting that expert coaches have been shown to possess enhanced perceptual sensitivity to specific kinematic cues (19). Therefore, clarification of the grading criteria may be sufficient to improve the criterion validity of manual FMS grading to acceptable levels.

Although dubious grading presents a concern for FMS users, it is equally relevant to point out that noticeable kinematic discrepancies existed between athletes who were assigned the same objective FMS score. This implies that the ordinal FMS scale may not be sensitive enough to direct individualized training programs (18). The calculation of a "corrected" FMS score, using a ratio scale, offers an alternative. Specifically, each criterion in a given exercise could be scored individually and then averaged, affording each criterion an equal weighting in the calculation of the overall score (as opposed to simply having the lowest score prevail, as is presently the case). For example, an athlete who fails all of the grading criteria in a given exercise is currently assigned the same score as an athlete who fails only one of the criteria. A corrected score would account for this situation (Figure 5) and would also provide the individual breakdown of criterion scores that would allow practitioners and researchers to isolate the source of movement deficiencies. Grading each of the criteria may also combat probable redundancy in the composite score, where several exercises assess similar function (e.g., the deep squat, hurdle step, and in-line lunge all contain a frontal plane stability component).
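A minimal sketch of this corrected score, assuming each criterion is itself graded on the 1–3 scale (the data are hypothetical):

```python
def corrected_fms_score(criterion_scores: list) -> float:
    """Average the individual criterion scores so that each criterion carries
    equal weight, rather than letting the lowest score dictate the grade."""
    return sum(criterion_scores) / len(criterion_scores)

# Under the lowest-score-prevails rule described above, both athletes below
# would receive the same exercise grade; the corrected score separates them.
print(corrected_fms_score([3, 3, 1]))  # ~2.33 (fails one criterion)
print(corrected_fms_score([1, 1, 1]))  # 1.0  (fails all criteria)
```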

Figure 5: Differences between the true (red) and corrected (blue) Functional Movement Screen (FMS) scores in the right-leg in-line lunge exercise.

Finally, although testers are encouraged to score low when in doubt, the manual tester's scores were, on average, 0.54 points higher than those of the IMU system. If the FMS were assumed to credibly predict injury, the possibility of overlooking potentially injurious movement patterns would therefore be a concern among the athletes in this study. However, it is important to reiterate that a consensus is yet to be established regarding the construct validity of the FMS, as it has failed to reliably forecast injuries in past research (3,21,30,33). Perhaps more pertinently, the findings of this study challenge the propriety of such research, as it seems that any reported relation (positive or negative) between FMS performance and injury rates may be a product of specious grading rather than of the screening tool itself. Repeating these studies using uniform grading criteria and objective grading methods would enhance their scientific rigor and may yield different results. Notwithstanding this possibility, the content validity of the FMS is undermined by its failure to evaluate several factors that contribute to musculoskeletal injury; this shortcoming must also be addressed before the FMS can be considered a reliable injury screening tool.

A limitation of this study is that the devised objective kinematic thresholds may not have corresponded to raters' interpretations of the criteria, which could partly explain the differences noted. To address this, post hoc adjustments of the thresholds were undertaken to determine the thresholds that would have yielded "good" levels of agreement (1). It was statistically impossible to reconcile the differences in the deep squat and trunk stability push-up. In the hurdle step and rotary stability exercises, the thresholds would need to be raised to unreasonable levels, where they were deemed to no longer reflect the scoring guidelines (e.g., it would be necessary to concede that a tibia tilted 57° from the vertical was an acceptable level of vertical alignment). The only reasonable exception was the in-line lunge, whose criteria therefore deserve the closest scrutiny in future work. Future research could establish consensus grading criteria by (a) clarifying practitioners' mechanical interpretation of the grading criteria (i.e., determining the subjective thresholds that constitute, for example, "upper torso toward vertical") and (b) evaluating the efficacy of the cues that practitioners rely on to assess these criteria (i.e., establishing whether practitioners can reliably discriminate movements on either side of the threshold). The discrepancy in the number of observations recorded by each testing modality and the appraisal of only 1 manual tester are further limitations of this study.
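For a single criterion, the post hoc adjustment could take a form such as the sketch below, which sweeps candidate thresholds and recomputes the weighted kappa against the manual grades. The data, threshold range, and single-criterion simplification are illustrative assumptions.

```python
from sklearn.metrics import cohen_kappa_score

def sweep_threshold(peak_angles, manual_grades, candidate_thresholds):
    """For each candidate threshold, re-derive a single-criterion objective
    grade and report its weighted kappa against the manual grades."""
    results = {}
    for thr in candidate_thresholds:
        objective = [3 if angle >= thr else 2 for angle in peak_angles]
        results[thr] = cohen_kappa_score(manual_grades, objective,
                                         weights="linear", labels=[1, 2, 3])
    return results

# Hypothetical peak hip flexion angles (deg) and manual grades for 11 athletes.
angles = [95, 88, 102, 85, 91, 78, 99, 90, 87, 93, 83]
manual = [3,  3,  3,   2,  3,  2,  3,  3,  2,  3,  2]
for thr, k in sweep_threshold(angles, manual, range(80, 101, 5)).items():
    print(f"threshold {thr} deg -> weighted kappa {k:.2f}")
```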

Practical Applications

There is a need for clarification of the FMS grading criteria. Ascribing explicit kinematic thresholds to each of the grading criteria would aid testers and improve the uniformity of the FMS. Explicit criteria could also combat subjective grading by facilitating the use of objective grading systems. These findings suggest that manual grading of the FMS is susceptible to error and imply that the screen should be used cautiously to evaluate injury risk and direct strength and/or conditioning programs. Moreover, the questionable assumption that manual grading provides a valid measurement tool not only challenges the integrity of the screen but may also indicate a fundamental limitation of previous studies that have attempted to link FMS scores to performance and/or injury. In those studies, it is plausible that the relationships (positive or negative) reported between the FMS and performance/injury were undermined by an invalid measurement tool and are therefore unrelated to the FMS scoring criteria. The high potential for subjective and/or inaccurate grading implies that standard procedures must be developed before FMS performance and injury rates can be conclusively studied (6,17,31,34).

Acknowledgments

The authors wish to thank Steven Davidson and Shannon Pomeroy for their assistance in this study.

References

1. Altman DG. Some common problems in medical research. In: Practical Statistics for Medical Research (Vol. 1). London, United Kingdom: Chapman & Hall, 1991. pp. 396–403.
2. Banskota B, Lewis J, Hossain M, Irvine A, Jones MW. Estimation of the accuracy of joint mobility assessment in a group of health professionals. Eur J Orthopaedic Surg Traumatol 18: 287–289, 2008.
3. Beach TAC, Frost DM, Callaghan JP. FMS scores and low-back loading during lifting—Whole-body movement screening as an ergonomic tool? Appl Ergon 45: 482–489, 2014.
4. Bergamini E, Picerno P, Pillet H, Natta F, Thoreux P, Camomilla V. Estimation of temporal parameters during sprint running using a trunk-mounted inertial measurement unit. J Biomech 45: 1123–1126, 2012.
5. Besier T, Lloyd D, Ackland T, Cochrane J. Anticipatory effects on knee joint loading during running and cutting maneuvers. Med Sci Sport Exer 33: 1176–1181, 2001.
6. Butler RJ, Plisky PJ, Kiesel KB. Interrater reliability of videotaped performance on the functional movement screen using the 100-point scoring scale. Athl Train Sports Health Care 4: 103–109, 2012.
7. Campolo D, Formica D, Guglielmelli E, Keller F. Kinematic analysis of the human wrist during pointing tasks. Exp Brain Res 201: 561–573, 2010.
8. Caraffa A, Cerulli G, Projetti M, Aisa G, Rizzo A. Prevention of anterior cruciate ligament injuries in soccer. Knee Surg Sports Traumatol Arthrosc 4: 19–21, 1996.
9. Chardonnens J, Favre J, Cuendet F, Gremion G, Aminian K. A system to measure the kinematics during the entire ski jump sequence using inertial sensors. J Biomech 46: 56–62, 2013.
10. Chorba RS, Chorba DJ, Bouillon LE, Overmyer CA, Landis JA. Use of a functional movement screening tool to determine injury risk in female collegiate athletes. N Am J Sports Phys Ther 5: 47–54, 2010.
11. Cloete T, Scheffer C. Repeatability of an Off-the-Shelf, Full Body Inertial Motion Capture System During Clinical Gait Analysis. Buenos Aires, Argentina: IEEE, 2010. pp. 5125–5128.
12. Cook G. Movement: Functional Movement Systems: Screening, Assessment, Corrective Strategies. On Target Publications, 2010.
13. Cook G, Burton L, Hoogenboom B. Pre-participation screening: The use of fundamental movements as an assessment of function—Part 1. N Am J Sports Phys Ther 1: 62–72, 2006.
14. Cook G, Burton L, Hoogenboom B. Pre-participation screening: The use of fundamental movements as an assessment of function—Part 2. N Am J Sports Phys Ther 1: 132–139, 2006.
15. Dowling AV, Favre J, Andriacchi TP. Inertial sensor-based feedback can reduce key risk metrics for anterior cruciate ligament injury during jump landings. Am J Sport Med 40: 1075–1083, 2012.
16. Fleiss JL, Levin B, Paik MC. Statistical Methods for Rates and Proportions. Hoboken, NJ: John Wiley & Sons, 2013.
17. Frohm A, Heijne A, Kowalski J, Svensson P, Myklebust G. A nine-test screening battery for athletes: A reliability study. Scand J Med Sci Sport 22: 306–315, 2012.
18. Frost DM, Beach TA, Callaghan JP, McGill SM. Using the Functional Movement Screen to evaluate the effectiveness of training. J Strength Cond Res 26: 1620–1630, 2012.
19. Giblin GL, Farrow D, Reid M, Ball K, Abernethy B. Keep your eye off the ball: Expertise differences in visual search behavior of tennis coaches. J Sport Exerc Psychol 35: S29, 2013.
20. Ha TH, Saber-Sheikh K, Moore AP, Jones MP. Measurement of lumbar spine range of movement and coupled motion using inertial sensors—A protocol validity study. Man Ther 18: 87–91, 2013.
21. Hoover D, Killian CB, Bourcier B, Lewis S, Thomas J, Willis R. Predictive validity of the Functional Movement Screen in a population of recreational runners training for a half marathon. Med Sci Sport Exer 40: S219, 2008.
22. Jansen SEM, Toet A, Werkhoven PJ. Human locomotion through a multiple obstacle environment: Strategy changes as a result of visual field limitation. Exp Brain Res 212: 449–456, 2011.
23. Jensen U, Weilbrenner F, Rott F, Eskofier B. Sensor-based mobile functional movement screening. In: Wireless Mobile Communication and Healthcare. Berlin, Germany: Springer, 2013. pp. 215–223.
24. Kiesel K, Plisky P, Butler R. Functional movement test scores improve following a standardized off-season intervention program in professional football players. Scand J Med Sci Sport 21: 287–292, 2011.
25. Krüger A, Edelmann-Nusser J. Biomechanical analysis in freestyle snowboarding: Application of a full-body inertial measurement system and a bilateral insole measurement system. Sports Technol 2: 17–23, 2009.
26. Laudanski A, Brouwer B, Li Q. Measurement of lower limb joint kinematics using inertial sensors during stair ascent and descent in healthy older adults and stroke survivors. J Healthc Eng 4: 555–576, 2013.
27. Minick KI, Kiesel KB, Burton L, Taylor A, Plisky P, Butler RJ. Interrater reliability of the functional movement screen. J Strength Cond Res 24: 479–486, 2010.
28. Myer GD, Ford KR, Brent JL, Hewett TE. An integrated approach to change the outcome part I: Neuromuscular screening methods to identify high ACL injury risk athletes. J Strength Cond Res 26: 2265–2271, 2012.
29. Norkin CC, White DJ. Procedures. In: Measurement of Joint Motion: A Guide to Goniometry. Philadelphia, PA: F.A. Davis Company. p. 26, 2009.
30. Okada T, Huxel KC, Nesser TW. Relationship between core stability, functional movement, and performance. J Strength Cond Res 25: 252–261, 2011.
31. Onate JA, Dewey T, Kollock RO, Thomas KS, Van Lunen BL, DeMaio M, Ringleb SI. Real-time intersession and interrater reliability of the functional movement screen. J Strength Cond Res 26: 408–415, 2012.
32. Padua DA, Marshall SW, Boling MC, Thigpen CA, Garrett WE, Beutler AI. The Landing Error Scoring System (LESS) is a valid and reliable clinical assessment tool of jump-landing biomechanics: The JUMP-ACL study. Am J Sport Med 37: 1996–2002, 2009.
33. Parchmann CJ, McBride JM. Relationship between functional movement screen and athletic performance. J Strength Cond Res 25: 3378–3384, 2011.
34. Parenteau-G E, Gaudreault N, Chambers S, Boisvert C, Grenier A, Gagné G, Balg F. Functional movement screen test: A reliable screening test for young elite ice hockey players. Phys Ther Sport 15: 169–175, 2014.
35. Pietrosimone BG, McLeod MM, Lepley AS. A theoretical framework for understanding neuromuscular response to lower extremity joint injury. Sports Health 4: 31–35, 2012.
36. Qu Y, Hwang J, Lee KS, Jung MC. The effect of camera location on observation-based posture estimation. Ergonomics 55: 885–897, 2012.
37. Roetenberg D, Luinge H, Slycke P. Xsens MVN: Full 6DOF Human Motion Tracking Using Miniature Inertial Sensors. Enschede, Netherlands: Xsens Motion Technologies BV, 2009. pp. 1–9.
38. Shultz R, Anderson SC, Matheson GO, Marcello B, Besier T. Test-retest and interrater reliability of the functional movement screen. J Athl Train 48: 331–336, 2013.
39. Srivastava J. Computers in human behavior. Comput Hum Behav 29: 888–895, 2013.
40. Teyhen DS, Shaffer SW, Lorenson CL, Halfpap JP, Donofry DF, Walker MJ, Dugan JL, Childs JD. The functional movement screen: A reliability study. J Orthop Sports Phys Ther 42: 530–540, 2012.
41. van den Noort JC, Ferrari A, Cutti AG, Becher JG, Harlaar J. Gait analysis in children with cerebral palsy via inertial and magnetic sensors. Med Biol Eng Comput 51: 377–386, 2013.
42. Wu G, Siegler S, Allard P, Kirtley C, Leardini A, Rosenbaum D, Whittle M, D'Lima DD, Cristofolini L, Witte H, Schmid O. ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion—part I: Ankle, hip, and spine. J Biomech 35: 543–548, 2002.
43. Zhang JT, Novak AC, Brouwer B, Li Q. Concurrent validation of Xsens MVN measurement of lower limb joint angular kinematics. Physiol Meas 34: N63–N69, 2013.
Keywords:

FMS; injury; screening; IMU

Supplemental Digital Content

Copyright © 2016 by the National Strength & Conditioning Association.