Traditional resistance training programs have a number of variables manipulated to achieve specific outcomes. Typically, these variables include load, repetitions and sets, whereby the volume or overall workload is calculated from these variables for each session. This is appropriate for the strength endurance and strength phases of the conditioning program where the intention is either to lift heavier loads or increase the number of repetitions lifted at the same load. However, when the phase of the conditioning program moves to power development, other foci may provide better power-specific adaptation. Advances in technology (linear position transducers, rotary encoders, accelerometers, etc.) now enable the direct measurement of many kinematic (e.g., velocity) and kinetic (e.g., power) variables during certain resistance training exercises. Although this type of data is used effectively to assess the effects of resistance training interventions, its major benefit may be the ability to continuously monitor performance and provide feedback during training (5).
The ability to monitor resistance training becomes even more critical with the introduction of periodized training programs, where the manipulation of numerous training variables is seen as vital to achieving a number of training goals and to avoiding overreaching or overtraining (1,4,6,7). Given that specific training goals change according to individual and positional needs and the time of the training year, it follows that performance feedback needs to parallel the specific training focus. This may result in goal-oriented movement tasks in the gymnasium that increase the likelihood of transference to on-field performance or, at the very least, improve the mechanical variable of interest, such as power output. Therefore, what is required is equipment integrated with software that can provide reliable instantaneous feedback on the variable of interest during that training phase, such as velocity of motion or power output. To this end, we developed a system and software able to provide such information. The purpose of this study was to determine the reliability of performance velocity for jump squats under feedback and nonfeedback conditions using this system over 3 consecutive training sessions.
Experimental Approach to the Problem
Twenty subjects performed a total of 3 “jump squat” training sessions. Before completing these sessions, the subjects were randomly allocated to a feedback or nonfeedback group. The feedback group received feedback on “peak bar velocity” after every repetition of the training sessions, whereas the nonfeedback group did not. The percent change in the mean, typical error (TE), and intraclass correlation coefficients (ICCs) were calculated for each session.
Twenty semiprofessional rugby players were randomly assigned to one of 2 groups, feedback (n = 10, age = 23.0 ± 3.6 years, height = 183.5 ± 9.4 cm, weight = 98.0 ± 12.1 kg, training age = 2.6 ± 1.4 years, 1RM squat = 180.1 ± 30.9 kg) and nonfeedback (n = 10, age = 20.9 ± 2.9 years, height = 183.5 ± 5.5 cm, weight = 99.2 ± 11.1 kg, training age = 2.2 ± 0.6 years, 1RM squat = 183.6 ± 38.9 kg). All subjects had a minimum of 2 years' resistance training experience and were currently in the preseason phase of their training program. All testing procedures and risks were fully explained, and participants were asked to provide their written consent before the start of the study. The study was approved by the AUT University Ethics Committee.
All participants completed a familiarization session and 3 separate training sessions. At the beginning of each session, participants were required to complete a standardized warm-up consisting of 5 minutes of cycling followed by 2 sets of 8 body weight vertical jumps. In the training sessions, participants performed 4 sets of 8 concentric squat jumps using a barbell with an absolute load of 40 kg. This movement was regularly used by these athletes as part of their off-season and in-season training. The depth of the squat was set at a knee angle of 90°, and this was controlled using an adjustable rack that the barbell had to make contact with before the commencement of each repetition. Participants were instructed to perform the movement as fast and explosively as possible. Three minutes of rest was given between sets. Participants in group 1 were given real-time feedback on peak velocity of the jump squat at the completion of each repetition using customized software, whereas those in group 2 did not receive any feedback. The same testing procedures were replicated 2 additional times with each session separated by at least 48 hours to minimize the effect of fatigue. All training sessions were completed within 2 weeks of the first session.
A wire from a linear position transducer (Celesco PT5A-150; Chatsworth, CA, USA) was attached to the end of an Olympic barbell. The barbell was loaded with two 10-kg plates for an absolute load of 40 kg. The barbell was placed on an adjustable squat rack, which was adjusted to the height of each individual.
Peak velocity during the concentric phase of each repetition was recorded using a position transducer with accuracy of ±0.18% and repeatability of ±0.02% of full-scale output (3.81 m) (2), together with customized data acquisition and analysis software (Labview, National Instruments, Austin, TX, USA). Velocity was obtained by differentiating the displacement–time data, which were sampled at 500 Hz and low-pass filtered at 10 Hz.
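The signal-processing chain described above (sample displacement at 500 Hz, low-pass filter at 10 Hz, differentiate, take the concentric peak) can be sketched as follows. This is a minimal illustration, not the authors' software: the 4th-order zero-lag Butterworth filter is an assumption, as the paper states only the 10-Hz cutoff, and the synthetic displacement trace is invented for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_concentric_velocity(displacement, fs=500.0, cutoff=10.0):
    """Estimate peak concentric bar velocity (m/s) from transducer data.

    displacement: bar position in metres, sampled at fs Hz.
    The displacement is low-pass filtered (assumed 4th-order Butterworth,
    zero-lag via filtfilt) before numerical differentiation.
    """
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    smoothed = filtfilt(b, a, displacement)
    velocity = np.gradient(smoothed, 1.0 / fs)
    # The concentric (upward) phase corresponds to positive velocity.
    return float(np.max(velocity))

# Hypothetical repetition: a 0.4-m upward push over 0.35 s, then hold.
fs = 500.0
t = np.arange(0.0, 1.0, 1.0 / fs)
disp = np.where(t < 0.35, 0.4 * np.sin(np.pi * t / 0.70), 0.4)
peak_v = peak_concentric_velocity(disp, fs)
```

In a real system the same computation would run once per repetition, with the result displayed to the athlete immediately.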
Change in the means, TEs, ICCs, and 90% confidence limits were used to determine the test–retest consistency of the average set and session peak velocity for both groups (8). T tests were used to determine statistically significant differences, with further analysis undertaken to make inferences about the true value of the effect statistic with regard to practical significance (9,10). The chances that the true value of the effect statistic (change in mean) was practically beneficial, trivial, or harmful were calculated for velocity by assuming the smallest practically important change in velocity was 0.06 m·s−1. This value was chosen because it is the largest variation that may be attributed to technological error (error arising from apparatus). The TE was used as a measure of absolute consistency and represents the random variation in each subject's measurement between tests, after shifts in the mean have been taken into account. The ICCs were used as a measure of relative consistency and relate to the reproducibility of the rank order of subjects on the retest. The chances that the true value of the effect statistic (difference in TEs and ICCs between feedback and nonfeedback groups) was practically positive, trivial, or negative were also calculated. The same threshold value used for the difference in means was also used for the difference in TEs, whereas a threshold value of 0.1 (Cohen's value of the smallest clinically important correlation) was used for the differences in ICCs (9).
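The three consistency statistics above can be computed from two sessions of per-subject means as sketched below. This is a plain reimplementation of the Hopkins (2000) definitions, not the spreadsheet the authors used; the Pearson correlation is used as a stand-in for the ICC, since the paper does not state which ICC form was computed.

```python
import numpy as np

def reliability_stats(session1, session2):
    """Test-retest consistency statistics for paired session means.

    session1, session2: one mean peak velocity per subject for each session.
    Returns (change in the mean, typical error, correlation):
      - change in mean: mean of the pairwise differences,
      - typical error: SD of the differences divided by sqrt(2) (Hopkins, 2000),
      - correlation: Pearson r, assumed here as a proxy for the ICC.
    """
    s1 = np.asarray(session1, dtype=float)
    s2 = np.asarray(session2, dtype=float)
    diff = s2 - s1
    change_in_mean = float(diff.mean())
    typical_error = float(diff.std(ddof=1) / np.sqrt(2.0))
    icc = float(np.corrcoef(s1, s2)[0, 1])
    return change_in_mean, typical_error, icc

# Hypothetical data: every subject improves by exactly 0.07 m/s.
cm, te, r = reliability_stats([2.0, 2.1, 2.2, 2.3],
                              [2.07, 2.17, 2.27, 2.37])
```

With a uniform shift, the change in mean equals the shift, the typical error is zero (no random variation), and the correlation is perfect, which matches the interpretation of each statistic given above.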
Consistency statistics for the between-session feedback and nonfeedback conditions can be observed in Table 1. In terms of the change in the mean between sessions, there was less change in the feedback condition than in the nonfeedback condition between sessions 1–2 (0.07 and 0.13 m·s−1) and sessions 2–3 (0.02 and −0.04 m·s−1), respectively. Although the differences between the changes in the means were not statistically significant (p = 0.287 and p = 0.160, respectively), further analysis, using a threshold value of 0.06 m·s−1, was undertaken to determine the probability that the differences in the mean changes were practically significant. Percent chances that the benefit of feedback during jump squats is practically beneficial (positive) or trivial on the effect statistics can be observed in Table 2. It was found that there was a 48.5% probability that the difference in the change in the means from sessions 1 and 2 was practically beneficial, 49.6% that it was trivial, and 1.9% that it was harmful. Similarly, there was a 53.6% probability that the difference in the change in the means from sessions 2 and 3 was practically beneficial, 45.9% that it was trivial, and 0.5% that it was harmful.
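The "percent chances beneficial/trivial/harmful" figures come from a magnitude-based inference of the Hopkins kind: the true effect is assumed normally distributed around the observed effect, and the probability mass above, within, and below the smallest worthwhile change (±0.06 m·s−1) is reported. A minimal sketch, assuming a normal sampling distribution with a known standard error (the paper computes this via the Hopkins spreadsheets, refs. 9 and 10):

```python
from math import erf, sqrt

def chances(effect, se, threshold=0.06):
    """Percent chances that the true effect is beneficial, trivial, or harmful.

    effect: observed effect statistic (e.g., difference in change of means, m/s).
    se: standard error of the effect (assumed known here).
    threshold: smallest practically important change (0.06 m/s in this study).
    Assumes the sampling distribution of the effect is normal.
    """
    def norm_cdf(x):
        # Standard normal cumulative distribution function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    p_beneficial = 1.0 - norm_cdf((threshold - effect) / se)
    p_harmful = norm_cdf((-threshold - effect) / se)
    p_trivial = 1.0 - p_beneficial - p_harmful
    return tuple(round(100.0 * p, 1)
                 for p in (p_beneficial, p_trivial, p_harmful))

# Hypothetical: observed effect exactly at the threshold, SE of 0.05 m/s.
result = chances(0.06, 0.05)
```

When the observed effect sits exactly on the threshold, the chance of benefit is 50% by construction, which is the shape of the roughly 50–50 split reported above.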
With regard to the TE, there appeared to be less random variation associated with the feedback condition when averaged over sessions 1 and 2 (0.06 vs. 0.10 m·s−1). However, this difference was minimal when comparisons were made between sessions 2 and 3 (0.06 vs. 0.07 m·s−1). Analysis, using the same threshold values as previously used, was undertaken to determine the probability that the differences in TE between groups were practically significant. It was found that there was a 29.9% probability that the difference in TE between feedback and nonfeedback groups for sessions 1 and 2 was practically positive, and 69.3% that it was trivial. With regard to sessions 2 and 3, there was a 6.1% probability that the difference in TE between feedback and nonfeedback groups was practically positive and 92.1% that it was trivial.
The larger ICCs for the feedback condition across both sessions 1 and 2 (ICC = 0.83 vs. 0.53) and sessions 2 and 3 (ICC = 0.87 vs. 0.74) may also indicate that the feedback condition was more consistent than the nonfeedback condition in terms of relative consistency. Analysis, using a threshold value of 0.1, was undertaken to determine the probability that the differences in ICCs between groups were practically significant. It was found that there was a 79.8% probability that the difference in ICC between feedback and nonfeedback groups for sessions 1 and 2 was practically positive and 11.5% that it was trivial. Similarly, there was a 58.3% probability that the difference in ICC between feedback and nonfeedback groups for sessions 2 and 3 was practically positive and 27.6% that it was trivial.
The purpose of this study was to determine the reliability of performance velocity for jump squats under feedback and nonfeedback conditions over 3 consecutive training sessions. Although previous studies have investigated the consistency of jump squat velocity using position transducers (3,11), none have compared consistency of jump squats under feedback and nonfeedback conditions.
The difference in the mean for 2 tests, that is, the change in the mean, arises from random change (sampling error) and systematic change (nonrandom change, e.g., changes in behavior, motivation) (8). If the random change (sampling error) is assumed to be constant for both the feedback and nonfeedback conditions, then a smaller change in the mean would suggest a smaller systematic change (change because of the influence of the feedback condition), therefore implying better stability in the variable of interest (velocity of movement). Similarly, the TE consists of technological error (error arising from apparatus) and biological error (error arising from subject-related factors) (8). If technological error is assumed to be constant for both the feedback and nonfeedback conditions, given that exactly the same equipment was used for each condition, then a smaller TE would suggest smaller biological error, again implying more stability in the variable of interest. If the same criteria are used and it is also assumed that the smallest TE consists solely of technological error (0.06 m·s−1), then this value would represent the smallest worthwhile difference in the velocities, because any difference greater than this would reflect biological error, implying a change because of subject factors. Because the ICCs are used as a measure of relative consistency and relate to the reproducibility of the rank order of subjects on the retest, a larger ICC would also imply more stability in the variable of interest. Cohen's value of the smallest clinically important correlation was used to determine if practical differences in the ICCs existed (9).
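The error partition argued above can be made concrete. A common assumption (consistent with Hopkins, 2000, ref. 8) is that independent error sources add in quadrature, TE² = technological² + biological², so the biological component can be recovered from an observed TE once the apparatus error is fixed at 0.06 m·s−1. A minimal sketch under that assumption:

```python
from math import sqrt

def biological_error(typical_error, tech_error=0.06):
    """Biological component of the typical error (m/s).

    Assumes independent error sources adding in quadrature:
        TE^2 = tech_error^2 + bio_error^2
    tech_error = 0.06 m/s is the apparatus error assumed in the text.
    """
    if typical_error <= tech_error:
        # The TE is fully accounted for by the apparatus.
        return 0.0
    return sqrt(typical_error**2 - tech_error**2)

# Nonfeedback group, sessions 1-2: TE of 0.10 m/s.
bio_nonfeedback = biological_error(0.10)
# Feedback group, sessions 1-2: TE of 0.06 m/s (all technological).
bio_feedback = biological_error(0.06)
```

On these assumptions the nonfeedback TE of 0.10 m·s−1 implies a biological error of 0.08 m·s−1, whereas the feedback TE of 0.06 m·s−1 implies none, which is the sense in which feedback "stabilizes" the subject-related variation.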
In terms of the comparisons between sessions 1 and 2, using the above criteria it appears from both Tables 1 and 2 that feedback provided greater relative and absolute consistency than the nonfeedback condition. The smaller change in mean (0.07 vs. 0.13 m·s−1) indicates a 48.5% probability of feedback being practically beneficial in ensuring stability of velocity of movement. There is a 29.9% chance that the smaller TE (0.06 vs. 0.10 m·s−1) is beneficial, and a 79.8% chance the larger ICC (0.83 vs. 0.53) is beneficial suggesting better stability of performance. It would seem that even in a simple test–retest situation, the provision of feedback will add consistency to performance in the squat jump. Although there are no preset standards for acceptable reliability measures, it has been suggested that ICC values >0.75 may be considered reliable (12).
Similar results are seen when making comparisons between sessions 2 and 3. The smaller absolute change in mean (0.02 vs. 0.04 m·s−1) indicates a 53.6% probability of feedback being practically beneficial in ensuring stability of velocity of movement. The 6.1% chance that the smaller TE (0.06 vs. 0.07 m·s−1) is beneficial (92.1% that it is trivial) and the 58.3% chance that the larger ICC (0.87 vs. 0.74) is beneficial again suggest that feedback can potentially provide greater relative and absolute consistency than the nonfeedback condition across sets and over the entire session.
These results suggest that there is approximately a 50–50 chance that the effect of feedback on the reliability of performance velocity for jump squats will be either beneficial or trivial. It almost certainly will not have a negative effect on training outcomes. Given these probabilities, the strength and conditioning practitioner is now able to decide whether to instrument training equipment to provide such performance feedback.
With advances in technology (linear position transducers, rotary encoders, etc.), it is now possible to continuously monitor specific kinetic and kinematic performance during training, such as the velocity of jump squats, as seen in this study. The likelihood that the provision of feedback is beneficial to the consistency of performance across sessions suggests that this technique may be advantageous in producing a more consistent performance or training stress. Therefore, it is suggested that by providing athletes instantaneous feedback on the velocity of movement after each repetition, improvements in the consistency of performance may result.
In addition to the potential improvement in the consistency of the training stimulus, another possible benefit of the ability to accurately monitor performance during training is the ability to set training performance targets, such as maximum velocity, or the number of repetitions or sets completed above a predetermined performance threshold. This may prove to be very motivational when fatigue sets in, in addition to creating competition in the training environment.
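The threshold-based targets suggested above are straightforward to implement on top of per-repetition velocity feedback. A minimal sketch, in which the target value and the reporting format are hypothetical choices by a coach, not values from the study:

```python
def session_feedback(rep_velocities, target=2.2):
    """Report each repetition against a predetermined velocity target.

    rep_velocities: peak velocities (m/s) for each repetition of a set.
    target: hypothetical coach-set threshold (m/s), not a study value.
    Prints per-rep feedback and returns the count of reps at or above target.
    """
    hits = 0
    for i, v in enumerate(rep_velocities, start=1):
        status = "ABOVE" if v >= target else "below"
        print(f"rep {i}: {v:.2f} m/s ({status} target of {target:.2f} m/s)")
        hits += v >= target
    return hits

# Hypothetical set of 3 repetitions.
reps_on_target = session_feedback([2.30, 2.10, 2.25])
```

A set could then be terminated, or the athlete prompted, once a given number of repetitions falls below the target, turning the feedback into a velocity-based stopping rule.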
It is possible that by optimizing the consistency of training sessions the potential for improving the mechanical variable of interest (jump squat velocity) may also be enhanced. Further research needs to be conducted to investigate the effect of feedback on jump squat performance over consecutive training sessions and on sport-specific performance.
1. Baechle, TR and Earle, RW. Essentials of Strength Training and Conditioning (2nd ed.). Champaign, IL: Human Kinetics, 2000.
2. Celesco Transducer Products Inc. Product Datasheets. Available at: http://www.celesco.com/datasheets/index.htm. Retrieved June 12, 2010.
3. Cormie, P, McBride, JM, and McCaulley, GO. Validation of power measurement techniques in dynamic lower body resistance exercises. J Appl Biomech 23: 103–118, 2007.
4. Day, ML, McGuigan, MR, Brice, G, and Foster, C. Monitoring exercise intensity during resistance training using the session RPE scale. J Strength Cond Res 18: 353–358, 2004.
5. Drinkwater, EJ, Galna, B, McKenna, MJ, Hunt, PH, and Pyne, DB. Validation of an optical encoder during free weight resistance movements and analysis of bench press sticking point power during fatigue. J Strength Cond Res 21: 510–517, 2007.
6. Fleck, SJ and Kraemer, WJ. Designing Resistance Training Programs (3rd ed.). Champaign, IL: Human Kinetics, 2004.
7. Foster, C, Florhaug, JA, Franklin, J, Gottschall, L, Hrovatin, LA, Parker, S, Doleshal, P, and Dodge, C. A new approach to monitoring exercise training. J Strength Cond Res 15: 109–115, 2001.
8. Hopkins, WG. Measures of reliability in sports medicine and science. Sports Med 30: 1–15, 2000.
9. Hopkins, WG. A spreadsheet for combining outcomes from several subject groups. Sportscience 10: 50–53, 2006.
10. Hopkins, WG. A spreadsheet to compare means of two groups. Sportscience 11: 22–23, 2007.
11. Hori, N and Andrews, WA. Reliability of velocity, force and power obtained from the Gymaware optical encoder during countermovement jump with and without external loads. J Austr Strength Cond 17: 12–17, 2009.
12. Walmsley, R and Amell, T. The application and interpretation of intraclass correlations in the assessment of reliability in isokinetic dynamometry. Isokinet Exerc Sci 6: 117–124, 1996.