Introduction
The countermovement jump (CMJ) is one of the most commonly used performance tests in research and in athlete-monitoring programs (5,9,18). Vertical jump testing can be performed using simple tools such as switch mats to calculate jump height (1). Force platforms can also determine jump height, while additionally giving practitioners insight into force-time characteristics to support more informed decision-making (10). Given its widespread use and the depth of information it can provide, the reliability of CMJ testing needs to be explored in various capacities to ensure that the derived data provide coaches and sport scientists with prudent information across competitive and preparatory training periods. Understanding the reliability of these monitoring data is vital, both for insight into the stability of the characteristic being measured and for informing practitioners how to detect meaningful changes in their monitoring programs (6).
Reliability of the CMJ test has been explored previously in laboratory settings (12,14). Markovic et al. (12) observed excellent reliability of CMJ height (CMJH) (intraclass correlation coefficient [ICC] = 0.96) and small within-subject variation (coefficient of variation [CV] = 2.4–4.6%). Similarly, Moir et al. (14) observed excellent intersession reliability of CMJ performance (ICC = 0.87–0.95) and small CV values (4.0–6.6%). Although these studies indicate relatively strong reliability of CMJ performance, the former (12) collected multiple trials in only a single session and the latter (14) collected trials across only 4 weeks. Indeed, research examining the reliability of CMJ performance in an applied athlete-monitoring setting is lacking. More robust data sets collected in such a setting could provide insight into the construct validity of CMJ testing in athlete-monitoring programs and aid in appropriate decision-making from these data. Therefore, the purpose of this study was to assess the intrasession and intersession reliability of vertical jump variables in a cohort of NCAA D-I volleyball athletes while participating in an athlete-monitoring program during a competitive period. A secondary purpose was to determine whether relationships existed between any of these variables throughout the monitoring program.
Methods
Experimental Approach to the Problem
To assess intrasession and intersession reliability of CMJ performance variables in NCAA D-I volleyball athletes, a repeated-measures design was used. Vertical jumps were performed on a twice-weekly basis for 14 weeks on dual-force platforms. Jump height, reactive strength index modified (RSIMOD), peak power relative to body mass, and countermovement depth (CM depth) data were collected on each occasion. These dependent variables were chosen due to their popularity in athlete monitoring using the CMJ. Athlete-monitoring data are often evaluated on an individual basis, rather than as group means. Therefore, our analysis was conducted as a within-subject reliability analysis to more appropriately capture what a sport scientist would be examining in the field of practice.
Subjects
Eleven female volleyball players (mean ± SD; age = 19.8 ± 0.8 years, range 18–21 years, height = 1.75 ± 0.07 m, body mass = 71.6 ± 8.9 kg) volunteered to participate in the study. All 11 athletes were members of the same athletic team in the NCAA D-I system. Athletes qualifying for the analysis had not sustained any major injury during the competitive season that would have prevented them from performing the vertical jump assessments. All athletes were informed of the benefits and risks of the study, read and signed written informed consent, and the procedures were approved by the East Tennessee State University institutional review board.
Procedures
Countermovement Jump Assessment
Countermovement jump data were collected twice per week over the course of a 14-week competitive season. Data collection was performed on the first training day of the week (i.e., Monday), and another session was performed at least 48 hours later (i.e., Wednesday). Each session was conducted at the same time of day, immediately before a strength and conditioning session and after a standardized warm-up. Next, as a specific warm-up, a series of CMJs was completed: 2 repetitions at 50% perceived effort, one at 75% perceived effort, and one at maximal effort.
After the warm-up, each athlete was taken through the CMJ data collection procedures. The athlete was first asked to place a polyvinyl chloride pipe across their back (nestled between the upper trapezius and the seventh cervical vertebra; i.e., the high-bar back squat position) (16). The polyvinyl chloride pipe was used to eliminate arm swing and to ensure standardization across athletes. The athlete was instructed to stand tall on the dual-force platforms, and then to perform a maximal CMJ effort using a self-selected CM depth. Before each jumping trial, each athlete was instructed to hold a 3–5-second period of quiet stance to acquire system weight. A specific countdown to the jump trial was not used; instead, the athlete was told they could jump when ready. In addition, each athlete was instructed to “jump as high as you can” before each trial. Two trials were collected for each athlete, separated by 60 seconds of rest. Data were collected on dual-force platforms sampling at 1,000 Hz (PASCO, Roseville, CA, USA) and were processed using a commercially available software program (NMP Technologies Ltd., London, United Kingdom). Dependent variables assessed across the competitive season were CMJH, RSIMOD, relative peak power (rPP), and CM depth.
Countermovement jump height was calculated from vertical net impulse during the take-off phase of the jump. RSIMOD was calculated as the ratio of jump height to time to take-off (i.e., the duration in seconds of the take-off phase) (16). The take-off phase of the jump was considered from the initiation of the downward phase of the CM until the instant no part of the athlete was touching the force plate. Relative peak power was calculated by dividing peak power output by the athlete's body mass, obtained from the quiet-stance weighing period. Finally, CM depth was calculated through double integration of the force-time data using the trapezoid rule (10).
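The calculations above can be sketched in code. The following is a minimal illustration (not the commercial software's implementation) of how the four dependent variables are typically derived from a vertical ground-reaction-force trace via the impulse-momentum and trapezoidal-integration methods described in the text; the function name and signature are hypothetical.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def cmj_metrics(force, fs, body_mass, takeoff_time):
    """Derive CMJ variables from a vertical force trace (sketch).

    force        : 1-D array of vertical force (N), beginning in quiet stance
    fs           : sampling frequency (Hz); 1,000 Hz in the present study
    body_mass    : system mass (kg) from the quiet-stance weighing period
    takeoff_time : duration (s) from countermovement initiation to take-off
    """
    dt = 1.0 / fs
    net_force = force - body_mass * G              # net vertical force (N)
    accel = net_force / body_mass                  # CM acceleration (m/s^2)
    # Single trapezoidal integration: CM velocity
    vel = np.concatenate(([0.0], np.cumsum((accel[:-1] + accel[1:]) / 2 * dt)))
    # Double integration: CM displacement; its minimum is the CM depth
    disp = np.concatenate(([0.0], np.cumsum((vel[:-1] + vel[1:]) / 2 * dt)))
    cm_depth = disp.min()                          # negative value (m)
    # Jump height from take-off velocity (net-impulse method)
    jump_height = vel[-1] ** 2 / (2 * G)           # m
    rsi_mod = jump_height / takeoff_time           # m/s
    rel_peak_power = (force * vel).max() / body_mass  # W/kg
    return jump_height, rsi_mod, rel_peak_power, cm_depth
```

Note that the force trace passed in would ordinarily end at the instant of take-off, so that `vel[-1]` is the take-off velocity.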
Statistical Analyses
Intrasession reliability of the dependent variables was assessed by comparing trial 1 and trial 2. Intraclass correlation coefficients (ICC [3,1]) were calculated to assess the rank-order relationship and test-retest reliability between trials. Additional calculations of reliability included percent difference, CV, and typical error (TE). Coefficient of variation was calculated as the SD divided by the mean score between trials (multiplied by 100 to express as a percentage). Intersession reliability of the dependent variables was calculated between the averaged trial values from day 1 and day 2. The same reliability analyses were performed to determine intersession reliability (i.e., between Monday and Wednesday testing). Due to the impact that standing weight can have on the calculation of CM depth from force-time data (10), an additional ICC was calculated for body mass. Before performing any parametric statistical approach, the data were screened for normality using the Shapiro-Wilk test. A paired-samples t-test was performed between the trials to determine whether there was systematic bias toward one trial. If any data violated the assumption of normality, a nonparametric test (Wilcoxon signed-rank) was performed in place of the paired-samples t-test. Simple percent difference scores were then calculated between trials. Percent difference scores were determined for each individual testing point before being averaged together for intersession statistics. To examine the interrelatedness of CMJ variables throughout the monitoring program, Pearson product-moment zero-order correlations were calculated. The alpha level was set at p ≤ 0.05. Intraclass correlation coefficient values of less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and above 0.9 were interpreted as poor, moderate, good, and excellent reliability, respectively (8).
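As a worked sketch of the reliability statistics described above, the following computes ICC(3,1) from a two-way mean-squares decomposition, along with within-subject CV, TE, and mean percent difference for two trials per athlete. This is an illustrative implementation under the standard ICC(3,1) formula, not the authors' analysis scripts; the function name is hypothetical.

```python
import numpy as np

def reliability_stats(trial1, trial2):
    """Test-retest reliability between two trials (one value per athlete).

    Returns ICC(3,1), mean within-subject CV (%), typical error, and
    mean absolute percent difference.
    """
    x = np.column_stack([trial1, trial2]).astype(float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between-subjects SS
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between-trials SS
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    bms = ss_rows / (n - 1)                  # between-subjects mean square
    ems = ss_err / ((n - 1) * (k - 1))       # error mean square
    icc31 = (bms - ems) / (bms + (k - 1) * ems)
    # Within-subject CV: SD of each athlete's trials over their mean, x 100
    cv = (x.std(axis=1, ddof=1) / x.mean(axis=1) * 100).mean()
    diffs = x[:, 1] - x[:, 0]
    te = diffs.std(ddof=1) / np.sqrt(2)      # typical error (6)
    pct_diff = (np.abs(diffs) / x.mean(axis=1) * 100).mean()
    return icc31, cv, te, pct_diff
```

A useful sanity check on ICC(3,1) is that a constant offset between trials (pure systematic bias, no random error) still yields an ICC of 1, because the trial effect is removed by the two-way model; such bias is instead detected by the paired-samples t-test described above.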
Results
Normality screening revealed that the dependent variables were not normally distributed (p < 0.05). As such, the nonparametric Wilcoxon signed-rank test was performed. There was a statistically significant difference for rPP both intrasession (p = 0.027) and intersession (p = 0.032). No other dependent variable differed significantly in either the intrasession or intersession comparisons (p > 0.05). Excellent reliability was observed in standing body mass (ICC = 0.97) between trials (Figure 1).
Figure 1. Intraclass correlation coefficients for both intrasession and intersession dependent variables. CMJH = countermovement jump height; RSImod = reactive strength index modified; rPP = relative peak power; CM depth = countermovement depth.
Intrasession reliability revealed excellent reliability values for CMJH (ICC = 0.94) and RSIMOD (ICC = 0.93). Good reliability values were observed for rPP (ICC = 0.79), whereas only moderate reliability was observed for CM depth (ICC = 0.61) (Figure 1). Relatively small percent differences were observed for CMJH (4.0 ± 3.3%) and RSIMOD (6.7 ± 6.6%) in addition to small CV and TE values. Although rPP had a somewhat low average percent difference (8.8 ± 17.6%) and CV (6.1 ± 10.3%), the TE values were considerably higher than for CMJH or RSIMOD. Countermovement depth yielded the largest variability in terms of percent difference (12.6 ± 45.9%), as well as CV and TE (Table 1).
Table 1: Intrasession reliability of countermovement jump characteristics.*†
Intersession reliability followed similar trends, revealing excellent reliability for CMJH (ICC = 0.92) and RSIMOD (ICC = 0.92). However, poor reliability was observed for rPP (ICC = 0.41) and CM depth (ICC = 0.39) (Figure 1). Percent differences were consistent with the low values observed in the intrasession reliability data for CMJH (4.6 ± 3.9%) and RSIMOD (7.8 ± 6.5%). Small CV and TE values were also observed in the intersession data for CMJH and RSIMOD. Conversely, rPP exhibited poor reliability (ICC = 0.41) and a larger percent difference (15.6 ± 27.1%) than was observed in the intrasession data. The larger variability in rPP was supported by the TE values. Countermovement depth also yielded poor reliability (ICC = 0.39), similar to what was observed in the intrasession data (Table 2). In addition, significant negative relationships were observed between CM depth and rPP (r = −0.253) and between CM depth and RSIMOD (r = −0.318), but not between CM depth and CMJH (r = 0.031).
Table 2: Intersession reliability of countermovement jump characteristics.*†
Discussion
The purpose of this study was to examine the reliability of several important CMJ performance variables in a field-based athlete-monitoring program. The novelty of this investigation was that these data were collected in a practical setting; therefore, the results can provide insight into the reliability of vertical jump monitoring data when implemented in the field. The main findings revealed that CMJH and RSIMOD exhibited excellent reliability both intrasession and intersession, whereas rPP and CM depth yielded suspect reliability (Tables 1 and 2). These data therefore suggest that CMJH and RSIMOD are the most consistent field-based variables for decision-making during the athlete-monitoring process. Of note, the greater reliability observed intrasession compared with intersession may indicate that fatigue accumulated throughout the training week was detected in these variables, particularly rPP and CM depth. Because of their consistency, CMJH and RSIMOD are likely more informative tools for assessing physical performance over the long term (rather than for fatigue monitoring).
Perhaps the most surprising result occurred in CM depth, which yielded poor reliability intersession (ICC = 0.39) and only moderate reliability intrasession (ICC = 0.61) (Figure 1). Previous research has suggested that although CM depth has negligible effects on jump height, it greatly influences kinetic and kinematic jump variables (4,11). Furthermore, Mandic et al. (11) suggest that force and power characteristics derived from CMJs should be interpreted with caution due to the effect that CM depth can have on these characteristics. The poor reliability observed in CM depth across the competitive season suggests that either (a) interpretation may be compromised when monitoring kinetic or kinematic data derived from CMJ testing, or (b) an underlying mechanism (e.g., fatigue) may be affecting the CM depth achieved throughout an athlete-monitoring program. The observed lack of reliability may explain, in part, the significant mean difference observed in rPP for both intrasession and intersession conditions. The lower reliability may also be affected by the double numerical integration required to determine displacement from force-time data. Because this mathematical technique is ultimately a tool of approximation, the data may drift due to integration error (10). Although it undergoes only a single integration, the velocity calculated from the force-time data may introduce a similar issue in the rPP outcome variable. The trapezoidal rule was used in the current study for variables requiring integration of raw data (10). Determining CM depth through this method of integration can be influenced by any deviation in the baseline standing weight before each jump. Our analysis showed excellent reliability of standing mass (ICC = 0.97), suggesting that the calculation of CM depth was appropriate. Still, future investigations may consider similar assessments using 3D motion capture or linear position transducers to measure displacement.
RSIMOD has become a popular athlete-monitoring tool in recent years (2,7,16). Originally, RSI was a metric observed during depth jumps, expressing the ratio of jump height to ground contact time (3). The modified version (RSIMOD), however, expresses the ratio of jump height to movement time, or time to take-off, in the CMJ (2). The significant, yet small, negative relationship observed between CM depth and RSIMOD in the current investigation suggests that RSIMOD can be influenced by CM depth. This aligns with previous research questioning the utility of RSI and RSIMOD as measures of plyometric ability in the depth jump and CMJ, respectively (15). The previous evidence, combined with that of the current investigation, provides rationale for raising concerns about the utility of RSIMOD in CMJ testing. More specifically, the poor reliability observed in CM depth, and the potential influence this may have on RSIMOD, suggests that practitioners should be careful in their interpretation of RSIMOD or consider standardizing CM depth during data collection to minimize the potentially adverse effects.
Although the reliability of CM depth and rPP was suspect in this investigation, CMJH remained a strong variable, as demonstrated by its excellent reliability. Previous research agrees with this result, showing excellent ICC values in laboratory tests of CMJH reliability (12). In addition, the observed CVs for intrasession (CV = 2.9 ± 2.4%) and intersession (CV = 3.2 ± 2.8%) CMJH in the current study agree with those observed by Moir et al. (14). CMJH was the only variable in the current study not influenced by CM depth (r = 0.031), which agrees with the findings of Mandic et al. (11). Ultimately, jump height may be the most reliable metric for monitoring changes in CMJ performance across a competitive season when compared with other variables that can be influenced by CM depth. Although plenty of evidence has demonstrated the efficacy of examining the kinetic and kinematic nuances of the CMJ in determining neuromuscular status (5,13), the importance of having a highly repeatable global outcome variable such as CMJH cannot be overstated. Because of its high reliability, a change in CMJH may indicate a genuine change in physiological state. Therefore, CMJH may serve as an indicator of when to adjust trend analysis strategies, such as process control limits or the smallest worthwhile change, to better reflect the current adapted state of the athlete.
Several limitations of the current study are worth noting. First, the 11 volleyball athletes who participated in the athlete-monitoring program represent a relatively homogeneous group. In addition, volleyball athletes are typically very experienced jumpers and often have greater jumping ability than athletes of other sports (17). These factors may have influenced the results of the current study. As such, practitioners and researchers should continue to examine the reliability of various metrics within the monitoring process in other athlete populations to ensure appropriate conclusions can be drawn from the data.
Practical Applications
The results of the current investigation suggest that CMJH and RSIMOD are the most consistent variables obtained from CMJ testing. Although CMJH and RSIMOD may provide the most robust indications of alterations in physical performance, they may not be as sensitive to fatigue as other kinetic or kinematic field-based variables. Perhaps the most vital takeaway from this investigation is the realization that reliability needs to be continually assessed and accounted for throughout athlete-monitoring programs. This study used real athlete-monitoring data and suggests that test-retest reliability may depend on each team's unique circumstances. When implementing a recurring athlete-monitoring program that uses the CMJ, practitioners should exercise caution when interpreting peak power outputs and, to a lesser extent, RSIMOD calculations. This caution is a function of the inherent variability that exists in CM depth across a training program. Furthermore, when using the CMJ test in a monitoring program, practitioners should track CM depth variability along with performance-based metrics.
References
1. Carlock JM, Smith SL, Hartman MJ, et al. The relationship between vertical jump power estimates and weightlifting ability: A field-test approach. J Strength Cond Res 18: 534–539, 2004.
2. Ebben WP, Petushek EJ. Using the reactive strength index modified to evaluate plyometric performance. J Strength Cond Res 24: 1983–1987, 2010.
3. Flanagan EP, Comyns TM. The use of contact time and the reactive strength index to optimize fast stretch-shortening cycle training. Strength Cond J 30: 32–38, 2008.
4. Gajewski J, Michalski R, Buśko K, Mazur-Różycka J, Staniak Z. Countermovement depth—A variable which clarifies the relationship between the maximum power output and height of a vertical jump. Acta Bioeng Biomech 20: 127–134, 2018.
5. Gathercole R, Sporer B, Stellingwerff T. Countermovement jump performance with increased training loads in elite female rugby athletes. Int J Sports Med 36: 722–728, 2015.
6. Hopkins WG. Measures of reliability in sports medicine and science. Sports Med 30: 1–15, 2000.
7. Kipp K, Kiely MT, Geiser CF. Reactive strength index modified is a valid measure of explosiveness in collegiate female volleyball players. J Strength Cond Res 30: 1341–1347, 2016.
8. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med 15: 155–163, 2016.
9. Kraska JM, Ramsey MW, Haff GG, et al. Relationship between strength characteristics and unweighted and weighted vertical jump height. Int J Sports Physiol Perform 4: 461–473, 2009.
10. Linthorne NP. Analysis of standing vertical jumps using a force platform. Am J Phys 69: 1198–1204, 2001.
11. Mandic R, Jakovljevic S, Jaric S. Effects of countermovement depth on kinematic and kinetic patterns of maximum vertical jumps. J Electromyogr Kinesiol 25: 265–272, 2015.
12. Markovic G, Dizdar D, Jukic I, Cardinale M. Reliability and factorial validity of squat and countermovement jump tests. J Strength Cond Res 18: 551–555, 2004.
13. McMahon JJ, Suchomel TJ, Lake JP, Comfort P. Understanding the key phases of the countermovement jump force-time curve. Strength Cond J 40: 96–106, 2018.
14. Moir G, Shastri P, Connaboy C. Intersession reliability of vertical jump height in women and men. J Strength Cond Res 22: 1779–1784, 2008.
15. Snyder BW, Munford SN, Connaboy C, et al. Assessing plyometric ability during vertical jumps performed by adults and adolescents. Sports (Basel) 6: 132, 2018.
16. Suchomel TJ, Bailey CA, Sole CJ, Grazer JL, Beckham GK. Using reactive strength index-modified as an explosive performance measurement tool in Division I athletes. J Strength Cond Res 29: 899–904, 2015.
17. Suchomel TJ, Sole CJ, Bailey CA, Grazer JL, Beckham GK. A comparison of reactive strength index-modified between six U.S. collegiate athletic teams. J Strength Cond Res 29: 1310–1316, 2015.
18. Taylor K, Chapman DW, Cronin JB, Newton MJ, Gill N. Fatigue monitoring in high performance sport: A survey of current trends. J Aust Strength Cond 20: 12–23, 2012.