Human standing balance is an unbiased indicator of concussion severity (7,9,25). It has been incorporated into sports-related concussion evaluation and management protocols used to guide clinical decisions such as rehabilitation and return to play (5,11,14). However, the limited on-field access to sophisticated equipment and the moderate-to-low reliability of simple sideline tests undermine the clinical utility of balance assessments performed on field (3,6).
Human standing balance can be assessed in numerous ways, ranging from complex techniques like 3D motion tracking to simple techniques like visually observing individuals as they stand. Optical 3D motion tracking systems and force plates provide precise measurements of the kinematics and kinetics of the human body during standing balance, but are typically restricted to research laboratories due to costs, space requirements, and lengthy setup protocols (1,15,22). Simple balance assessment tools such as the Swaymeter, Berg Balance Scale, and Balance Error Scoring System (BESS) are less precise, but are portable, require minimal training, and allow users to quickly gather information regarding standing balance. The use of these simpler balance assessment tools, however, has been called into question due to their marginal validity, biased/unreliable scores, practice effects, and dependency upon the environment (2,3,6,20,30). As a result, medical professionals currently lack a reliable method to perform accurate balance assessment on the sideline (6).
The BESS is the current standard for assessing standing balance in concussed athletes on the sideline (9,13,28). The simplicity of the BESS has also led researchers to adopt it in studies unrelated to concussion (17,27,31). The BESS is free to use and entails counting the total number of predefined “errors” a subject makes while balancing in three different stances (feet together, one foot, and tandem) on two different surfaces (firm and foam). Human judgment of these balance errors introduces variability between raters due to different interpretations and strictness of the scoring criteria. Although some studies have reported high reliability for the BESS (intraclass correlation (ICC) coefficient = 0.98 (29)), others have reported poorer reliability both within raters grading the same test twice (ICC = 0.74) and between raters (ICC = 0.57 (8)). These latter results suggest that BESS scores must change by almost 50% before the difference can be attributed to changes in balance rather than rater judgment variability (8). Unreliable quantification of balance by the BESS may lead to inappropriate clinical decisions, such as allowing concussed athletes to return to play. The permanent or fatal consequences associated with sustaining a subsequent concussive injury before resolution of the first underscore the need to develop more reliable sideline balance quantification methods that allow users to make more informed decisions (10,16).
A modified version of the BESS (mBESS) using only the three stance conditions on the firm surface is currently included in the Sport Concussion Assessment Tool 3 (SCAT3) and the Official NFL Sideline Tool (12,19). The mBESS, however, has not been clinically validated in the literature, and researchers have suggested that the firm surface alone may not challenge postural stability as much as the foam surface; as a result, the mBESS may not differentiate between concussed and nonconcussed athletes as well as the full BESS (13,28).
The goal of our study was to determine the validity of a simple, objective, on-field balance assessment tool that can provide an accurate, reliable, and affordable alternative to the currently available laboratory and clinical methods. To achieve this goal, we developed an algorithm to calculate objective BESS (oBESS) scores from kinematic data collected by small wireless sensors worn by subjects while they perform a regular BESS protocol. To present a low-cost balance assessment system that is optimized for clinical and/or sideline use, we selected the “best” algorithm as the one requiring the fewest sensors to accurately predict BESS scores. Given the rich information about standing balance provided by kinematic analysis, we hypothesized that this system would predict BESS scores with a level of accuracy associated with good clinical reliability (ICC > 0.75 (23)) when using sensor data from all six BESS conditions. Additionally, the suggestion that the firm surface conditions may not challenge postural stability as much as the foam surface conditions (13,28) led us to hypothesize that oBESS scores produced using data from the foam surface conditions would display high correlation with the total BESS (ICC > 0.75), whereas those produced using data from the firm surface conditions (mBESS) would display poor correlation with the total BESS (ICC < 0.75).
Thirty healthy and physically active subjects (15 female and 15 male) age 20 to 37 yr (25.4 ± 4.2 yr) participated in the study. Exclusion criteria included neurological or musculoskeletal conditions, respiratory or cardiovascular problems, pregnancy, and the inability to provide informed consent. All subjects gave written informed consent and the study was approved by the University of British Columbia Clinical Research Ethics Board and conformed to the Declaration of Helsinki.
Inertial measurement units (IMU) (Shimmer, Realtime Technologies Ltd., Dublin, Ireland) wirelessly collected 6 degree-of-freedom kinematic data (triaxial linear accelerations and triaxial angular velocities) sampled at 102.4 Hz and streamed these data in real time to a desktop computer using a custom LabVIEW program (National Instruments, Austin, TX). IMU were secured using elastic straps to seven different landmarks aimed at objectifying the balance errors characteristic of the BESS: forehead, sternum, anterior waist (below navel), right and left wrist, and right and left shin (Fig. 1).
Subjects were filmed from the front performing the six standard BESS conditions, i.e., three stances (feet together, one foot, and tandem) on two surfaces (firm and foam). The foam pad was medium density and measured 43 cm × 43 cm × 10 cm thick (SunMate foam; Columbia Foam Inc., BC, Canada). Subjects placed their hands on their iliac crests and closed their eyes for all tests (Fig. 1). Each condition was 20 s long, and the conditions were separated by 30-s rest periods to minimize subject fatigue. Video clips were scored by four experienced raters (15, 20, 25, and 150 h of grading experience) from the University of North Carolina at Chapel Hill by counting the number of predefined balance errors subjects made during each condition. The standardized balance errors consisted of the following:
- a. Moving the hands off the hips;
- b. Opening the eyes;
- c. Step, stumble, or fall;
- d. Abduction or flexion of the hip beyond 30°;
- e. Lifting the forefoot or heel off the testing surface;
- f. Remaining out of the proper testing position for greater than 5 s.
The maximum number of errors per condition was limited to 10, and the total BESS score was the sum of errors committed during all six conditions. If a subject did not maintain the proper stance for at least 5 s, or did not otherwise complete the condition, they were given the maximum score of 10. The three stances were performed in order (feet together, one foot, tandem) on the firm surface followed by the foam surface. Before each condition, subjects were instructed on how to perform the stance and verbally given the criteria for each balance error. Once subjects were in the correct stance and comfortably balanced, an auditory tone (750 Hz, 100-ms duration) signaled the start and end of each 20-s condition. This auditory tone was also used to synchronize the IMU data to the video recordings of the balance tests.
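The scoring rules above (a cap of 10 errors per condition, an automatic maximum for incomplete trials, and a total summed over six conditions) can be sketched as a short Python function. This is an illustrative reading of the protocol, not software used in the study:

```python
def bess_total(errors_per_condition, completed):
    """Total BESS score from six per-condition error counts.

    errors_per_condition: raw error counts, one per condition.
    completed: matching booleans; False means the subject could not hold
    the stance for at least 5 s (or otherwise failed to complete the
    condition), which scores the automatic maximum of 10.
    """
    total = 0
    for errors, done in zip(errors_per_condition, completed):
        total += min(errors, 10) if done else 10  # cap each condition at 10
    return total

# Example: clean firm-surface trials, one incomplete foam trial
print(bess_total([0, 2, 1, 3, 12, 4],
                 [True, True, True, True, False, True]))  # → 20
```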
An algorithm was developed to compute oBESS scores from the IMU data. The algorithm was designed to sum the total number of balance errors committed by the subject over the duration of the conditions being analyzed, allowing for easy interpretation by users with previous BESS experience. From a general perspective, the IMU data were first sectioned into windows and then the number of windows in which the data exceeded a specified threshold value was summed to generate an oBESS score. Various window lengths, threshold values, number of IMU, and different combinations of data (linear acceleration + angular velocity, a + ω; linear acceleration only, a; and angular velocity data only, ω) were explored to find combinations that yielded the highest ICC values.
All IMU data were first low-pass filtered (5 Hz, fourth-order Butterworth). For each condition’s 20-s data segment, two resultants were calculated from the triaxial signals (ax, ay, az, ωx, ωy, ωz) to yield the magnitude of the linear acceleration and angular velocity vectors (|a|, |ω|) for each IMU in each condition. This process reduced the dimensionality of the data and created a simpler algorithm. Resultant signals were unbiased by removing their mean and were then split into nonoverlapping windows varying between one window (20-s long) and 40 windows (each 0.5-s long) to encompass a wide range of window sizes. Because balance errors occur over a few seconds or less, models with fewer, longer windows may capture multiple errors within a single window, whereas models with many, shorter windows may lead to a single error influencing multiple windows. Nonoverlapping windows were used to avoid capturing the same error multiple times. Eight thresholds varying from 0.25SD to 2.0SD in increments of 0.25SD were considered to investigate the effect of different magnitudes of balance errors. Models with low thresholds detected very small body movements that may not be actual balance errors, whereas model with high thresholds may discount small balance errors and focus more on larger body movements. Four IMU combinations were investigated to identify the ideal number of IMU to quantify BESS: all seven IMU, five IMU (forehead, chest, waist, R and L wrist only), three IMU (forehead, chest, waist only), and one IMU (forehead only). A raw error score, R, was then defined as the number of windows in which the threshold was exceeded by any IMU included in the analysis during the conditions being analyzed (all, firm only, and foam only). A window’s error score was binary (1 or 0) and was counted only once even if multiple IMU exceeded their thresholds within the window.
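The filtering, resultant, windowing, and thresholding steps above can be illustrated with the Python sketch below (the study itself used MATLAB; the function name, defaults, and synthetic demo are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def raw_error_score(signals, fs=102.4, n_windows=4, thresh_sd=1.5):
    """Illustrative windowed threshold count.

    signals: list of (n_samples, 3) arrays, one triaxial signal per
    included IMU channel (e.g., linear acceleration). Returns R, the
    number of windows in which any included signal exceeds its threshold.
    """
    b, a = butter(4, 5.0 / (fs / 2), btype="low")  # 5 Hz, 4th-order low-pass
    flags = None
    for xyz in signals:
        filt = filtfilt(b, a, xyz, axis=0)
        resultant = np.linalg.norm(filt, axis=1)   # |a| or |w|
        resultant -= resultant.mean()              # remove bias
        thresh = thresh_sd * resultant.std()       # threshold in SD units
        # split into non-overlapping windows; flag windows with any exceedance
        windows = np.array_split(np.abs(resultant) > thresh, n_windows)
        exceeded = np.array([w.any() for w in windows])
        flags = exceeded if flags is None else (flags | exceeded)
    return int(flags.sum())  # each window counted once across all IMU

# Synthetic demo: quiet noise with one large burst in the final window
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.01, (2048, 3))  # ~20 s at 102.4 Hz
sig[1900:1950] += 5.0                   # brief simulated "balance error"
print(raw_error_score([sig]))           # → 1
```

Because a window is scored once regardless of how many IMU exceed their thresholds, passing the same signal twice does not change R.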
When subjects could not maintain the testing stance for a minimum of 5 s, or otherwise could not complete the condition (an automatic 10 in the standard BESS scoring system), a value of 5 was added to the resultant R score for that given condition. Preliminary analyses indicated that adding a value of 10 was not needed since part of the balancing behavior was already incorporated into the data. The oBESS score for a series of conditions was then calculated using the raw error score R and the following equation:
oBESS = c1·R^3 + c2·R^2 + c3·R + c4 (Equation 1)

The coefficients (c1, c2, c3, c4) were calculated using a least squares fit between the mean of the four raters’ BESS scores and the raw error scores R for all included IMU and conditions.
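A minimal sketch of this fitting step (assuming a cubic polynomial in R, consistent with the four coefficients; the function names and synthetic data are illustrative, not the study's MATLAB code):

```python
import numpy as np

def fit_obess_coefficients(raw_scores, mean_rater_scores, degree=3):
    """Least-squares fit of mean rater BESS scores as a polynomial in R.

    raw_scores: array of raw error scores R, one per subject.
    mean_rater_scores: array of mean rater BESS totals, same length.
    Returns coefficients (c1..c4 for degree=3, highest power first).
    """
    return np.polyfit(raw_scores, mean_rater_scores, degree)

def predict_obess(coeffs, raw_score):
    """Evaluate the fitted polynomial to obtain an oBESS score."""
    return float(np.polyval(coeffs, raw_score))

# Illustrative synthetic data: rater scores roughly linear in R
R = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
bess = 1.5 * R + 1.0
c = fit_obess_coefficients(R, bess)
print(round(predict_obess(c, 5.0), 2))  # → 8.5
```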
Interrater reliability was assessed using the ICC coefficient as described by Shrout and Fleiss (26). The optimal model was selected by maximizing the ICC coefficient between the oBESS scores and the mean rater BESS scores for every combination (n = 3840) of the four parameters: number of windows (1–40), eight error thresholds (0.25SD to 2.00SD), three groups of data (a + ω, a, ω), and four combinations of IMU (1,3,5,7). To maximize portability for ease of use in clinical settings, preference was given to models that required fewer IMU to quantify the BESS. This entire process was repeated for each combination of conditions (all, firm only, and foam only).
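Model selection hinged on ICC(3,1); the coefficient can be computed from a two-way ANOVA decomposition following Shrout and Fleiss (26). The sketch below is a Python illustration of that definition, not the study's analysis code:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1) per Shrout and Fleiss (1979): two-way mixed, single rater.

    ratings: (n_subjects, k_raters) array of scores.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    rater_means = x.mean(axis=0)
    # mean squares from a two-way ANOVA without replication
    bms = k * np.sum((subj_means - grand) ** 2) / (n - 1)   # between subjects
    sse = np.sum((x - subj_means[:, None] - rater_means[None, :] + grand) ** 2)
    ems = sse / ((n - 1) * (k - 1))                          # residual
    return (bms - ems) / (bms + (k - 1) * ems)

# Two raters offset by a constant: perfect consistency
print(round(icc_3_1([[1, 2], [3, 4], [5, 6], [7, 8]]), 3))  # → 1.0
```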
The predictive ability of the optimal algorithm using IMU data from all six conditions was then assessed by generating coefficients for equation 1 using data from all subjects but one, and then using these coefficients to predict the missing subject’s oBESS score. This “one-by-one” method was repeated for each subject, and the resultant oBESS scores were compared with mean rater BESS scores using ICC.
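The “one-by-one” procedure is a leave-one-out cross-validation and can be sketched as follows (a hedged Python illustration assuming the cubic fit above; names are not the authors' code):

```python
import numpy as np

def leave_one_out_predictions(raw_scores, mean_rater_scores, degree=3):
    """Fit on all subjects but one, predict the held-out subject, repeat."""
    R = np.asarray(raw_scores, dtype=float)
    y = np.asarray(mean_rater_scores, dtype=float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False                        # hold out subject i
        coeffs = np.polyfit(R[mask], y[mask], degree)
        preds[i] = np.polyval(coeffs, R[i])    # predict held-out subject
    return preds

# Synthetic check: with noiseless linear data, held-out predictions are exact
R = np.arange(1.0, 9.0)          # raw error scores for 8 hypothetical subjects
true = 2.0 * R + 3.0             # synthetic "mean rater" scores
preds = leave_one_out_predictions(R, true)
print(np.allclose(preds, true))  # → True
```

The resulting per-subject predictions would then be compared with the mean rater scores using the ICC, as described above.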
For all ICC analyses, comparisons were considered good if the ICC values were greater than 0.75 and moderate to poor if less than 0.75 (23). All analyses were conducted using MATLAB (Version R2012a; The MathWorks Inc., Natick, MA), and where needed, statistical significance was set to P = 0.05.
Data from one subject were removed from the study because this subject balanced on the incorrect foot during one of the six conditions. Analyses were performed using the remaining 29 subjects. The four raters showed little variance in their total BESS scores across all subjects (ICC3,1 = 0.91); however, they were less consistent when grading conditions performed on the firm surface (ICC3,1 = 0.82; mBESS) than the foam surface (ICC3,1 = 0.95). Subjects committed an average of 9.8 ± 7.1 balance errors: 3.2 ± 3.6 on the firm surface and 6.6 ± 4.1 on the foam surface.
Using all six BESS conditions, many different combinations of parameters produced oBESS scores with a good fit to the mean rater scores (ICC3,1 > 0.75, Fig. 2). The graded pattern of ICC values within each panel in Figure 2 showed that the algorithms were relatively insensitive to small changes in the number of windows and the threshold values used. The similarity in the pattern of ICC values between the different panels of Figure 2 showed that the algorithms were similarly insensitive to the groups of data and number of IMU used.
The algorithm with the best fit to the mean rater scores (ICC3,1 = 0.94) had 11 windows (1.8 s long) and a 0.50SD error threshold and used both linear acceleration and angular velocity data from five IMU (forehead, chest, waist, and left and right wrist). Using only one IMU at the forehead, the algorithm that best fit the mean rater scores (ICC3,1 = 0.92) had four windows (5 s long), a 1.50SD error threshold, and relied on linear acceleration data only (Table 1A). This latter simpler algorithm was able to accurately predict individual BESS scores using the “one-by-one” validation method (ICC3,1 = 0.90, Table 1B).
The best oBESS algorithm for the subset of foam-only conditions accurately fit the mean scores of the experienced raters (ICC3,1 = 0.89), whereas the best oBESS algorithm for the subset of firm-only conditions did not (ICC3,1 = 0.68).
Our results showed that it is possible to objectively predict BESS scores from kinematic information collected from the body while subjects perform the standard BESS test. Reliability values suggest that when using data from all six standard BESS conditions, the oBESS can produce scores that accurately fit mean rater BESS scores (ICC3,1 = 0.92), and also accurately predict individual BESS scores in normal healthy subjects (ICC3,1 = 0.90). These results indicate that oBESS is a valid measure of balance and may offer an objective alternative to the current laboratory and clinical balance assessment methods.
The oBESS required linear acceleration data from only one IMU placed at the forehead to accurately and reliably quantify balance errors. Although the model with the highest ICC value required both linear acceleration and angular velocity data from five IMU placed on the body, the model requiring only one IMU at the forehead displayed similar accuracy (ICC3,1 = 0.94, 0.92, respectively) while providing a more practical and cost-effective solution for use in clinical settings. Sensors at locations other than the forehead did not greatly improve the fit to mean rater scores or predictive ability of the algorithm, suggesting that the additional sensors contained either redundant balance information or information that did not correlate with the BESS scores. From a biomechanics perspective, a single sensor located at the forehead makes sense during standing balance since the head is at the distal end of the kinematic chain and would therefore capture—and perhaps even amplify—the balance error motions of the intermediate body segments.
Models with fewer windows were generally able to produce more accurate oBESS scores (Fig. 2), suggesting that separating the kinematic data into larger windows was a better method to objectively quantify the BESS. This was supported by the optimal model, which required only four windows (each 5 s long). In contrast, the algorithm appeared less sensitive to changes in error threshold. Although certain models using only angular velocity data were able to compute accurate oBESS scores, models using linear acceleration data alone or both linear acceleration and angular velocity data performed better. This finding suggests that linear accelerations are likely sufficient to capture the kinematics associated with the balance errors of the BESS.
Interpretations of ICC values as a measure of test–retest reliability vary considerably in a clinical setting (4). Although some individuals suggest that a value of 0.60 is the minimum acceptable value for a reliable clinical test (21), others argue that values must be greater than 0.90 for use in the sports medicine setting when assessing an athlete’s cognitive status after a sports-related concussion (24). Studies investigating the reliability of the BESS have suggested that 0.90 is probably too stringent for the BESS and recommended an ICC of at least 0.75 be considered reliable (8,23). Further work is needed to evaluate the test–retest reliability of the oBESS algorithm developed here.
The interrater reliability of BESS raters has varied in previous studies. Our four raters showed little variance in their BESS scores (ICC3,1 = 0.91), which differed from a recent study that reported an ICC value between experienced BESS raters of 0.57 (8). This difference between our results and those of Finnoff et al. (8) may be due to the lower number of total balance errors in our study (average = 9.8) compared with this prior work (average = 15.1). Fewer total balance errors may minimize interrater grading differences because raters rely less on subjective judgments about the strictness of the balance error criteria, including the decision to assign a maximum score (10 errors) for conditions the subject cannot properly complete. Although some balance error criteria of the BESS are easy to identify and score, such as the subject opening their eyes during a stance, others are more subjective, such as the subject flexing their hip beyond a 30° angle. In fact, Finnoff et al. (8) have suggested that it may be beneficial to create a simpler BESS test that eliminates subjective errors, thereby increasing the reliability of the test.
The mBESS used in the SCAT3 protocol relies only on the three BESS conditions performed on a firm surface. Our subjects generated fewer balance errors in the three firm surface conditions (BESS score = 3.2) than in the three foam surface conditions (BESS score = 6.6). This finding suggests that the mBESS may not challenge standing balance enough to differentiate balance behavior between subjects (13), and therefore may not be sensitive enough to track recovery after concussion. Based on similar logic, Valovich McLeod et al. (28) suggested that the BESS conditions performed on the foam surface (instead of the firm surface) should be considered for future versions of the mBESS. This consideration is supported by our analysis, which showed that oBESS scores generated using data from the foam trials only fit the mean rater BESS scores better (ICC3,1 = 0.89) than those generated using data from the firm trials only (ICC3,1 = 0.68).
Our study was limited to healthy normal subjects and resulted in relatively low balance error scores (mean = 9.8 ± 7.1). Higher scores have been reported in individuals with sports-related concussion (5.8 ± 6.5 errors above baseline), and therefore additional work is needed to validate the current algorithm using subjects with a higher and wider range of BESS scores (18). In addition, the foam pad used in the present study differed from the thinner and less compliant Airex Balance Pad (Power Systems Inc., Knoxville, TN) on which the BESS is typically performed. This may have led to a disparity in the degree to which subject stability was challenged compared with previous studies, as the increased compliance of the balance pad may have led subjects to commit fewer balance errors. Another potential limitation of our study is the inability of the algorithm to objectively identify when subjects opened their eyes. In our data, this type of balance error often occurred simultaneously with other errors (such as putting the elevated foot down during the one foot stance), and in these instances, the error was likely accounted for in the sensor data. Even though our sensors could not detect eye opening, the good correlation with rater BESS scores suggests that the algorithm may have compensated for this deficiency by relying on more detailed kinematic data that raters would ignore. Finally, our algorithm relied on a manual addition of five error points for noncompletion of a trial. This manual input could be incorporated through a handheld device used to report the results of the balance assessment.
The present study may be relevant to current users of the BESS, specifically athletic trainers, team physicians, or other medical personnel involved in sport, by proposing a novel, objective way to quantify the BESS without using sophisticated laboratory-based equipment. Linear acceleration data collected from the forehead may be sufficient to objectively quantify the BESS and provide the basis for a simple (one sensor), inexpensive (no angular rate sensors required), and portable (wireless capability) balance assessment tool. However, in addition to further testing in populations representative of a wider range of balance errors, there are several other steps required before the oBESS can be widely implemented. Further investigation is required regarding the applicability of the coefficients generated in the present study to different subjects (male, female, short, tall, etc.) and whether population-specific coefficients are required to maintain high reliability. Implementation will also require a better platform to collect test data, as the computer software used in the present study would limit usability in clinical settings. To address this issue, we are pairing the oBESS with Bluetooth-enabled mobile devices such as smartphones to allow for convenient, cost-effective, objective quantification of the BESS on the sideline.
In summary, we have validated an algorithm to objectively measure BESS using a single inertial sensor worn on the forehead. Objectifying the BESS test minimizes the variability introduced by human judgment and generates the same oBESS score regardless of who is administering the test (athletic trainer, team doctor, clinician, coach, or parent). Our findings also suggest that a modified protocol of only three BESS conditions may offer enough information regarding balance to predict total BESS scores. However, current practices using only the firm surface conditions may require further investigation, because results from the present study suggest conditions performed on the foam surface correlate better with total BESS. Further research is required to optimize oBESS for use in clinical populations, but the present results indicate that the oBESS has the potential to replace the current human-scored BESS test.
We would like to thank the BESS raters from the Matthew Gfeller Sport-Related Traumatic Brain Injury Research Center, Department of Exercise and Sport Science, University of North Carolina at Chapel Hill.
This work was supported by the Natural Sciences and Engineering Research Council of Canada grant to J. S. Blouin and G. P. Siegmund. J. S. Blouin received salary support from the Canadian Institutes of Health Research/Canadian Chiropractic Research Foundation and the Michael Smith Foundation for Health Research. G. P. Siegmund owns shares in a consulting company, and both he and the company may derive benefit from being associated with this work.
The results of the present study do not constitute endorsement by the American College of Sports Medicine.
1. Barela JA, Dias JL, Godoi D, Viana AR, de Freitas PB. Postural control and automaticity in dyslexic children: the relationship between visual information and body sway. Res Dev Disabil. 2011;32(5):1814–21.
2. Barlow M, Schlabach D, Peiffer J, Cook C. Differences in change scores and the predictive validity of three commonly used measures following concussion in the middle school and high school aged population. Int J Sports Phys Ther. 2011;6(3):150–7.
3. Bell DR, Guskiewicz KM, Clark MA, Padua DA. Systematic review of the balance error scoring system. Sports Health. 2011;3(3):287–95.
4. Broglio SP, Ferrara MS, Macciocchi SN, Baumgartner TA, Elliott R. Test–retest reliability of computerized concussion assessment programs. J Athl Train. 2007;42(4):509–14.
5. Cavanaugh JT, Guskiewicz KM, Stergiou N. A nonlinear dynamic approach for evaluating postural control: new directions for the management of sport-related cerebral concussion. Sports Med. 2005;35(11):935–50.
6. Clark RA, Bryant AL, Pua Y, McCrory P, Bennell K, Hunt M. Validity and reliability of the Nintendo Wii Balance Board for assessment of standing balance. Gait Posture. 2010;31(3):307–10.
7. Davis GA, Iverson GL, Guskiewicz KM, Ptito A, Johnston KM. Contributions of neuroimaging, balance testing, electrophysiology, and blood markers to the assessment of sport-related concussion. Br J Sports Med. 2009;43:136–45.
8. Finnoff JT, Peterson VJ, Hollman JH, Smith J. Intrarater and interrater reliability of the Balance Error Scoring System (BESS). PM R. 2009;1(1):50–4.
9. Guskiewicz KM. Balance assessment in the management of sport-related concussion. Clin J Sports Med. 2011;30:89–102.
10. Guskiewicz KM, McCrea M, Marshall SW, et al. Cumulative effects associated with recurrent concussion in collegiate football players. JAMA. 2003;290(19):2549–55.
11. Guskiewicz KM, Ross SE, Marshall SW. Postural stability and neuropsychological deficits after concussion in collegiate athletes. J Athl Train. 2001;36(3):263–73.
12. Herring SA, Cantu RC, Guskiewicz KM, Putukian M, Kibler WB. Concussion (mild traumatic brain injury) and the team physician: a consensus statement—2011 update. Med Sci Sports Exerc. 2011;43(12):2412–22.
13. Hunt TN, Ferrara MS, Bornstein RA, Baumgartner TA. The reliability of the modified balance error scoring system. Clin J Sports Med. 2009;19:471–5.
14. Johnson EW, Kegel NE, Collins MW. Neuropsychological assessment of sport-related concussion. Clin J Sports Med. 2011;30:73–88.
15. Lafond D, Duarte M, Prince F. Comparison of three methods to estimate the center of mass during balance assessment. J Biomech. 2004;37:1421–6.
16. Laurer HL, Bareyre FM, Lee VM, et al. Mild head injury increasing the brain’s vulnerability to a second concussive impact. J Neurosurg. 2001;95(5):859–70.
17. Macinnis MJ, Rupert JL, Koehle MS. Evaluation of the Balance Error Scoring System (BESS) in the diagnosis of acute mountain sickness at 4380 m. High Alt Med Biol. 2012;13(2):93–7.
18. McCrea M, Guskiewicz KM, Marshall SW, et al. Acute effects and recovery time following concussion in collegiate football players. JAMA. 2003;290(19):2556–63.
19. McCrory P, Meeuwisse WH, Aubry M, et al. Consensus statement on concussion in sport: the 4th International Conference on Concussion in Sport held in Zurich, November 2012. Br J Sports Med. 2013;47:250–8.
20. Onate JA, Beck BC, Van Lunen BL. On-field testing environment and balance error scoring system performance during preseason screening of healthy collegiate baseball players. J Athl Train. 2007;42(4):445–51.
21. Osborne JW. Best Practices in Quantitative Methods. Thousand Oaks (CA): Sage Publications; 2007. p. 48.
22. Paloski WH, Wood SJ, Feiveson AH, Black FO, Hwang EY, Reschke MF. Destabilization of human balance control by static and dynamic head tilts. Gait Posture. 2006;23(3):315–23.
23. Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. Norwalk (CT): Appleton & Lange; 1993. p. 514.
24. Randolph C, McCrea M, Barr WB. Is neuropsychological testing useful in the management of sport-related concussion? J Athl Train. 2005;40:139–54.
25. Riemann BL, Guskiewicz KM. Effects of mild head injury on postural stability as measured through clinical balance testing. J Athl Train. 2000;35(1):19–25.
26. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420–8.
27. Valovich McLeod TC, Armstrong T, Miller M, Sauers JL. Balance improvements in female high school basketball players after a 6-week neuromuscular-training program. J Sport Rehabil. 2009;18(4):465–81.
28. Valovich McLeod TC, Bay RC, Lam KC, Chhabra A. Representative baseline values on the Sport Concussion Assessment Tool 2 (SCAT2) in adolescent athletes vary by gender, grade, and concussion history. Am J Sports Med. 2012;40(4):927–33.
29. Valovich McLeod TC, Perrin DH, Guskiewicz KM, Shultz SJ, Diamond R, Gansneder BM. Serial administration of clinical concussion assessments and learning effects in healthy young athletes. Clin J Sports Med. 2004;14:287–95.
30. Valovich McLeod TC, Perrin DH, Gansneder BM. Repeat administration elicits a practice effect with the balance error scoring system but not with the standardized assessment of concussion in high school athletes. J Athl Train. 2003;38:51–6.
31. Zammit E, Herrington L. Ultrasound therapy in the management of acute lateral ligament sprains of the ankle joint. Phys Ther Sport. 2005;6(3):116–21.