Strength & Conditioning Journal: December 2013, Volume 35, Issue 6
doi: 10.1519/SSC.0000000000000011
Article

Strength and Power Profiling of Athletes: Selecting Tests and How to Use the Information for Program Design

McGuigan, Michael R. PhD, CSCS*D1; Cormack, Stuart J. PhD2; Gill, Nicholas D. PhD1,3


Author Information

1Sports Performance Research Institute New Zealand, AUT University, Auckland, New Zealand;

2School of Exercise Science, Australian Catholic University, Melbourne, Australia; and

3New Zealand Rugby Union, Wellington, New Zealand

Conflicts of Interest and Source of Funding: The authors report no conflicts of interest and no source of funding.

Michael R. McGuigan is a professor of Strength and Conditioning at Sports Performance Research Institute New Zealand, AUT University.


Stuart J. Cormack is a senior lecturer in the School of Exercise Science, Australian Catholic University.


Nicholas D. Gill is a strength and conditioning coach for the New Zealand All Blacks rugby team.


Abstract

Strength and power diagnosis can provide valuable insights into the different capacities of athletes. The strength and power tests chosen should be reliable and valid and take into account the requirements of the sport and what is a meaningful change in performance. The results of these tests need to be reported in a clear, meaningful, and timely manner for coaches if they are to have maximal impact on training programs. The practitioner can use this evidence-based information in conjunction with the art of coaching to maximize training program effectiveness.


INTRODUCTION

Strength and conditioning professionals have a large number of strength and power tests available to them to assess these particular physical qualities of their athletes. There is also an increasing body of published literature on the assessment of these qualities in athletic populations (1,3,8,11,12). In this article, we provide examples of how strength and power profiling can be applied in athletic assessment and, specifically, how this information can be used to drive programming. Practitioners use strength and power diagnosis for a number of reasons, including monitoring of acute performance in training, measuring the chronic response to training interventions, identifying strengths and weaknesses of an athlete, individualizing training programs, and comparing athletes to normative data. It is our contention that strength and power diagnosis can have the greatest impact on individualizing training programs.


SELECTING STRENGTH AND POWER TESTS

It is well recognized that strength and power are critical components of athlete performance. Like training, assessment of physical capacities must be specific to the athlete cohort the practitioner is working with, so it is important to avoid implementing tests just for the sake of testing. The data generated from testing need to be meaningful for coaches and athletes and should be used to influence athlete preparation and performance in some manner. It is also important to critically examine which tests are used and not to choose tests solely because they have been used previously or because the equipment and expertise are available.

It is critical that practitioners select appropriate tests for assessing the physical capacities of their athletes. In addition, 2 vital factors that need to be considered in developing or selecting assessment protocols are validity and reliability. Validity refers to whether the specific test measures what it is supposed to measure. Reliability refers to how repeatable the performance or test variables are. Reliability is optimally assessed with repeated trials and is important for tracking athletes' performance over time (4). Ideally, the reliability of tests should be determined using the practitioner's own laboratory/testing setup and with similar populations rather than relying on previously published reliability data. This is important because we need to know whether the test protocol can detect changes in the results of the tests we have chosen to implement. If the reliability is poor, then the variation in results is likely to be “too noisy” to interpret (4). It is also important to use information about the movements of the sport, obtained from a needs analysis, to select valid tests. In addition, familiarization sessions, consistency in instructions, standardized warm-ups, and calibration of equipment will enhance the reliability of the tests.

The final consideration, and one that is directly related to the reliability and validity of a test, is “what is a worthwhile change in test performance?” Worthwhile change refers to the ability of a test to detect the smallest practically important change, which can be calculated as 0.2 × the between-subject standard deviation (4). Knowing this value helps determine whether the change seen in a particular test is “worthwhile” and can therefore greatly assist with the interpretation of test results. If the test protocol is to have an impact on the athlete's preparation and performance, knowledge of the worthwhile change is crucial.
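As an illustration of this calculation, the following minimal Python sketch (the squad data and test choice are hypothetical, not taken from the article) computes the smallest worthwhile change as 0.2 × the between-subject standard deviation and checks whether an individual athlete's change exceeds it.

```python
import statistics

# Hypothetical squad 1RM back squat results (kg) from a baseline testing session.
squad_1rm = [150, 165, 142, 170, 158, 161, 148, 175]

# Smallest worthwhile change (SWC) = 0.2 x between-subject standard deviation (4).
swc = 0.2 * statistics.stdev(squad_1rm)

# One athlete's results before and after a training block.
pre, post = 158, 164
change = post - pre

print(f"SWC: {swc:.1f} kg, observed change: {change:.1f} kg")
print("Worthwhile change" if abs(change) > swc else "Change within test noise")
```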

There are a number of physical capacities that can be tested within strength and power diagnosis. These include:

a. Maximal strength: This refers to the maximal force-producing capacity of a muscle during concentric, eccentric, or isometric contractions or the maximal force that can be produced during a specific movement. It is commonly measured using a repetition maximum (RM) test (e.g., 1RM). It is important to note that these different strength components may not be the same for an individual and can be tested separately if needed.

b. Power: This evaluates an athlete's ability to apply force rapidly, thus generating high levels of power. It can be measured under different conditions, for example, jumps performed unloaded or with additional load. Another common approach is to compare jumps performed with a stretch-shortening cycle (SSC) with jumps performed without an SSC (static jump [SJ]).

c. Strength endurance: This refers to the ability to repeatedly develop a high level of force. The term “power endurance” is also used by some practitioners when referring to activities that involve repeated efforts (7).

d. Reactive strength: This is defined as the ability to develop maximal force in minimal time and is demonstrated in movements consisting of a rapid eccentric contraction followed by a concentric muscle action, for example, drop jumps. Reactive strength is sometimes measured as height jumped/contact time (see the code sketch after this list).

e. Rate of force development (RFD): This is calculated from the slope of the force-time curve and provides a measurement of the athlete's rate of developing force against a given load (also illustrated in the sketch after this list).
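To make the reactive strength and RFD definitions concrete, here is a minimal sketch assuming jump height, ground contact time, and a sampled force-time record are available from a contact mat or force platform; all values and the sampling interval are illustrative.

```python
# Reactive strength index: height jumped / ground contact time (drop jump).
jump_height_m = 0.42      # hypothetical jump height (m)
contact_time_s = 0.21     # hypothetical ground contact time (s)
rsi = jump_height_m / contact_time_s
print(f"Reactive strength index: {rsi:.2f} m/s")

# Rate of force development (RFD): slope of the force-time curve.
# Hypothetical force samples (N) taken every 0.01 s from the onset of force production.
dt = 0.01
force_n = [850, 900, 990, 1120, 1300, 1510, 1750, 2000, 2230, 2400]

# Average RFD from onset to the end of the record: change in force / change in time.
rfd = (force_n[-1] - force_n[0]) / (dt * (len(force_n) - 1))
print(f"Average RFD: {rfd:.0f} N/s")
```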

Table 1 outlines these various strength and power qualities and lists some of the common tests that can be used to assess them.

Table 1. Strength and power qualities and common tests used to assess them.

The choice of tests will be dependent on a number of different factors (in addition to reliability and validity). These can include (but are not limited to):

a. The sport: For example, some sports involve dealing with larger external loads (rugby union, American football), so it may be pertinent to include a loaded jump squat, whereas for other sports, body weight-only jump squats may be sufficient as moving loads greater than body weight are not required. The practitioner should conduct a thorough needs analysis of the sport to determine which physical capacities are important and therefore select appropriate tests to assess those.

b. Availability of equipment: Some tests rely on technology such as force platforms, linear position transducers, or accelerometers for assessing qualities such as power. However, even if this type of technology is unavailable, it is possible to conduct a strength and power profile using a maximal strength test, jump heights/distances for various jumps, and a strength endurance test.

c. Number of athletes: The general logistics of the testing session must be taken into account. When testing only a small number of athletes, it may be possible to conduct a wide range of tests, for example, including a loaded power profile with jump squats at various percentages of body weight. However, if conducting testing with a larger group, it may be easier to use absolute loads for a loaded power profile (e.g., with body weight and 40 kg additional load).

d. Level of the athlete: Consideration needs to be given to the training age and experience of the athlete being tested. For example, with developmental athletes, it may be sufficient to just test maximal strength, whereas a more advanced athlete might require a more complete battery of tests that covers a number of physical capacities. However, practitioners should always be mindful of avoiding inclusion of multiple tests that assess highly related strength and power capacities and provide little additional information on the capabilities of the athlete.

e. Individualized testing batteries: Although it is commonplace to provide individual training programs for athletes, the standard approach for testing is to conduct the same battery of tests across a squad. However, given the different positional demands within sports, it may be worth individualizing testing as well. The practitioner may well include standard tests with a squad, for example, maximal strength, but there may be cases where different types of power are assessed (e.g., an unloaded countermovement jump [CMJ] for one athlete versus a more extensive load profile for another).

Finally, when selecting tests, the main criterion should be to ask what information can be obtained from the test and how the data can be used. This allows the information to drive the training prescription process and the assessment of the impact of training interventions. Simply conducting strength and power tests for the sake of testing should be avoided.


REPORTING INFORMATION

It is not enough to have developed a reliable and valid testing battery, collected the data, and determined whether the changes are worthwhile. If the information is not presented to the coach and/or athletes in a way that makes sense to them, then the ability of that information to make a difference to the preparation and performance of the athlete will be reduced. To assist with interpretation of the results, the practitioner needs to make an assessment about the magnitude of the change, taking into account the reliability of the test.

It is important to present the results in a way the coach and athlete can understand. This can be done in a number of different ways and by using a combination of different methods. A good first step is to graph the results in some way; numbers by themselves are typically not very helpful or well understood. By graphing data, it may be possible to identify trends in the results or to visualize large changes in physical capacities. In the practical programming section, we discuss some specific examples of how these can be used with simple tools, such as Microsoft Excel. Figures are often a good way to present data visually to coaches and athletes and can demonstrate clearly where the athlete sits within the group. For example, graphing z-scores ([athlete score − average score]/standard deviation) using radar plots provides a visual representation of the athlete's strengths and weaknesses relative to the group and therefore can be a useful tool for prescribing specific training to target those weaknesses. Figure 1 shows an example of a radar plot of z-scores.

Figure 1. Radar plot of z-scores showing an athlete's strengths and weaknesses relative to the group.
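For practitioners who prefer scripting over a spreadsheet, a minimal Python sketch of the z-score and radar plot approach is shown below; the squad data, test battery, and use of matplotlib are assumptions for illustration, not tools prescribed in the article.

```python
import math
import statistics
import matplotlib.pyplot as plt

# Hypothetical squad results for a battery of tests (one list per test).
squad = {
    "1RM squat (kg)": [150, 165, 142, 170, 158, 161, 148, 175],
    "CMJ power (W/kg)": [55, 62, 58, 65, 60, 57, 63, 59],
    "SJ power (W/kg)": [50, 57, 53, 60, 55, 52, 58, 54],
    "Loaded JS power (W/kg)": [45, 52, 48, 55, 50, 47, 53, 49],
}

# One athlete's results for the same tests.
athlete = {"1RM squat (kg)": 165, "CMJ power (W/kg)": 63,
           "SJ power (W/kg)": 54, "Loaded JS power (W/kg)": 46}

# z-score = (athlete score - squad mean) / squad standard deviation.
z_scores = {test: (athlete[test] - statistics.mean(vals)) / statistics.stdev(vals)
            for test, vals in squad.items()}

# Radar plot: one axis per test, drawn as a closed polygon of z-scores.
labels = list(z_scores)
values = list(z_scores.values())
angles = [2 * math.pi * i / len(labels) for i in range(len(labels))]
angles += angles[:1]
values += values[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_title("Athlete strength and power profile (z-scores)")
plt.show()
```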

This approach can be particularly useful for one-time testing and also allows practitioners to include performance expressions of strength and power. However, an important part of the testing process is retesting and comparing with previous results, and these figures can be used to show results over a period. A problem can arise when testing squads because athletes may not be available owing to injury. With small group sizes, a particularly strong (or weak) athlete in a particular test may have a major effect on means and/or standard deviations. An alternative approach that we have used is modified z-scores, where benchmark means and standard deviations are determined for the various tests. These benchmarks can be set by the practitioner (e.g., it may be decided that the benchmark for relative CMJ power is 65 W/kg) and can be developed from a number of sources, including published literature on a similar population, previous testing data with that population, and feedback from the coach and other practitioners. Once these benchmarks are developed, the modified z-score can be calculated as:

Modified z-score = (athlete score − benchmark mean)/benchmark standard deviation
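A minimal sketch of this calculation, using hypothetical practitioner-set benchmarks rather than squad-derived means:

```python
# Hypothetical practitioner-set benchmarks: (benchmark mean, benchmark standard deviation).
benchmarks = {
    "Relative CMJ power (W/kg)": (65.0, 5.0),
    "Relative 1RM squat (kg/kg BM)": (2.0, 0.2),
}

# Athlete's current test results.
athlete = {"Relative CMJ power (W/kg)": 61.5, "Relative 1RM squat (kg/kg BM)": 2.1}

# Modified z-score = (athlete score - benchmark mean) / benchmark standard deviation.
for test, (bench_mean, bench_sd) in benchmarks.items():
    modified_z = (athlete[test] - bench_mean) / bench_sd
    print(f"{test}: modified z = {modified_z:+.2f}")
```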

Figure 2 shows a sample of modified z-scores for an athlete measured over time using this approach. It is also possible to show other standards in addition to the practitioner's own benchmarks. For example, a practitioner may want to show a younger athlete (e.g., younger than 18 years) the standard of an international performer for the particular tests being conducted.

Figure 2. Modified z-scores for an athlete measured over time.

Practitioners should also consider the turnaround time of reports from testing sessions. For this type of data to have maximal impact, reports need to be returned quickly as well as contain meaningful information. Historically, sports science has not done a great job of providing coaches and high-performance programs with timely testing reports. Providing reports within 3 days of testing will help change the perception that many of these testing sessions are simply opportunities to collect data for research studies.


PRACTICAL PROGRAMMING

An important part of the process of athlete assessment is determining the priorities for intervention. By completing a thorough assessment of the various strength and power capacities, the strength and conditioning professional should be able to target specific qualities (7). After completing the testing, one of the key considerations is which capacities need to be targeted by the training program. A fundamental question needs to be asked: is it more important to focus on weaknesses, to continue to develop strengths, or to attempt a combination of both? The underlying philosophy of the approach discussed in this article is to individualize training programs, so the answer will depend on the individual athlete's training age, competition level, phase of the season, and performance priorities. The practitioner will also need to consider the impact that training certain capacities will have on other physical capacities. Frequent retesting of these physical qualities will assist with this process and provides the practitioner with regular feedback on the effect of the training program and/or specific interventions. In this section, we discuss examples of some of the physical capacities that can be tested within strength and power diagnosis.

From a practical standpoint, the use of radar plots can provide some simple insights into capacities that can be targeted. If we take Figure 1 as an example, there are some issues that can be highlighted by the practitioner. The athlete has clearly reached the standard for the body weight CMJ and SJ tests. However, there seem to be some deficiencies in maximal strength capability, which may also contribute to the lower scores for the loaded jump squat tests. Therefore, it may be concluded that the emphasis of the next training block should be on maximal strength development in conjunction with some higher-load power work, for example, jump squats with external loading.

The sample data presented in Figure 2 depict testing conducted in January and April. This sample shows improvements in maximal strength (the athlete has now reached international standard) with some improvement in unloaded and loaded jumping (note: reporting the improvements in terms of smallest worthwhile change will confirm whether these changes are worthwhile or not). However, reactive strength and RFD components obviously need more work along with SJ power. Therefore, the practitioner may decide that these components will be the focus of the next training block.

Load-power profiles can also provide useful information about the power capabilities of athletes (5). Figure 3 shows 2 sample athletes with differing power profiles in the bench pull. First, this enables the practitioner to determine at which load the respective athletes produce peak power (30 kg for athlete A and 50 kg for athlete B). This has potential implications for what loads to target during training sessions and also for establishing threshold levels for monitoring purposes. Recent research has shown significant benefits of providing this type of feedback to athletes during power training sessions (9). The load profile can also provide insights into how the athletes respond to different loads across the spectrum (10). As can be seen from Figure 3, athlete B is able to maintain power output to a great extent across the spectrum of loads, even though the peak level is less than that of athlete A. Ultimately, whether this is desirable needs to be put into the context of the requirements of the sport.

Figure 3. Load-power profiles of 2 athletes in the bench pull.
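As a simple illustration of reading a load-power profile, the following sketch (with hypothetical bench pull data chosen to mirror the example above) finds the load at which each athlete produces peak power and how well power is maintained across loads.

```python
# Hypothetical bench pull load-power profiles: load (kg) -> peak power (W).
profiles = {
    "Athlete A": {20: 520, 30: 580, 40: 560, 50: 530, 60: 480},
    "Athlete B": {20: 470, 30: 500, 40: 525, 50: 540, 60: 535},
}

for name, profile in profiles.items():
    # Load at which peak power occurs, and the drop-off across the load spectrum.
    best_load = max(profile, key=profile.get)
    drop_off = (max(profile.values()) - min(profile.values())) / max(profile.values())
    print(f"{name}: peak power {profile[best_load]} W at {best_load} kg, "
          f"{drop_off:.0%} drop-off across the load spectrum")
```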

Comparing performance across different loading conditions can highlight what type of training activity and loading the athlete should exploit (5,6,10). Using Figure 4 as an example, the ratio of CMJ:SJ, or eccentric utilization ratio (EUR), is 1.05 for athlete A and 0.95 for athlete B. If this component were deemed important, this would suggest that a primary emphasis for the training of athlete B would be the inclusion of more SSC-specific work, for example, plyometrics. This could also involve some specific RFD training in which the focus is on maximizing the slope of the force-time curve with ballistic types of exercises, such as jump squats. Depending on the phase of the annual plan, the major training focus could be on the most fundamental of the qualities requiring improvement. For example, if athlete B had much lower maximal strength as well as a lower EUR, the primary focus would be on maximal strength.

Figure 4. CMJ and SJ performance (eccentric utilization ratio) for 2 athletes.
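A minimal sketch of the eccentric utilization ratio calculation follows; the jump data are hypothetical and chosen to reproduce the 1.05 and 0.95 ratios in the example, and either jump height or peak power could serve as the performance measure.

```python
def eccentric_utilization_ratio(cmj_score: float, sj_score: float) -> float:
    """EUR = countermovement jump performance / static jump performance."""
    return cmj_score / sj_score

# Hypothetical CMJ and SJ peak power (W/kg) for 2 athletes.
athletes = {"Athlete A": (63.0, 60.0), "Athlete B": (57.0, 60.0)}

for name, (cmj, sj) in athletes.items():
    eur = eccentric_utilization_ratio(cmj, sj)
    focus = "more SSC-specific work (e.g., plyometrics)" if eur < 1.0 else "maintain current SSC emphasis"
    print(f"{name}: EUR = {eur:.2f} -> {focus}")
```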

Figure 5 shows an example of a simple unloaded versus loaded jump squat profile that can be used to determine how well an athlete tolerates external load (2). In this example, athlete A is able to handle the increase in load during the CMJ (ratio 0.98), whereas athlete B has a much larger drop off in relative peak power with the addition of external load (ratio 0.74). From a programming perspective, we could recommend the greater inclusion of loaded jump squats for athlete B and a greater emphasis on maximal strength training. One could also consider conducting a more extensive load profile for athlete B, for example, testing at a range of loads from body weight only, body weight + 20 kg, body weight + 30 kg, body weight + 40 kg, etc., to determine where the significant deflection point in performance drop-off occurs, to make the training more specific in terms of what loads to target during sessions (11). This is an example where testing can be individualized rather than simply applying a battery of tests across a squad of athletes.

Figure 5. Unloaded versus loaded jump squat profile for 2 athletes.
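The load-tolerance comparison described above can be computed as in the sketch below; the relative peak power values are hypothetical and chosen to reproduce the 0.98 and 0.74 ratios, and the decision threshold is illustrative rather than a published cut-off.

```python
# Hypothetical relative peak power (W/kg): (unloaded CMJ, CMJ with external load).
athletes = {"Athlete A": (60.0, 58.8), "Athlete B": (62.0, 45.9)}

for name, (unloaded, loaded) in athletes.items():
    ratio = loaded / unloaded
    print(f"{name}: loaded/unloaded ratio = {ratio:.2f}")
    if ratio < 0.85:  # illustrative threshold, not a published cut-off
        print("  -> consider more loaded jump squats and greater maximal strength emphasis")
```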

Reactive strength capacity is typically assessed using drop jumps. By using this information across a range of drop jump heights and comparing it to the CMJ result, we can obtain useful insights into the athlete's tolerance of stretch load (7). Figure 6 shows an example of 2 athletes who have completed a drop jump profile across 30, 45, and 60 cm. It is clear that athlete B is better able to tolerate the drop jump heights relative to CMJ performance, whereas athlete A produces less jump height with increasing drop height. This information again needs to be put into the context of other testing results to help understand the cause. One explanation could be a lack of eccentric strength, which may be alleviated to some extent by the inclusion of more maximal strength work. Another could be a lack of reactive strength, in which case incorporating reactive strength drills could also be considered.

Figure 6. Drop jump profile across 30, 45, and 60 cm for 2 athletes relative to CMJ performance.
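A simple sketch of how a drop jump profile might be tabulated against the CMJ result is shown below; the jump heights are hypothetical and chosen only to illustrate the contrasting patterns of the 2 athletes.

```python
# Hypothetical jump heights (cm): CMJ plus drop jumps from 30, 45, and 60 cm boxes.
athletes = {
    "Athlete A": {"CMJ": 42, 30: 40, 45: 37, 60: 33},
    "Athlete B": {"CMJ": 40, 30: 40, 45: 41, 60: 42},
}

for name, results in athletes.items():
    cmj = results["CMJ"]
    for drop_height in (30, 45, 60):
        # Ratio near or above 1.0 suggests the athlete tolerates that stretch load well relative to CMJ.
        ratio = results[drop_height] / cmj
        print(f"{name}: drop jump {drop_height} cm / CMJ = {ratio:.2f}")
```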

The majority of strength and power assessments used are bilateral in nature. However, unilateral assessment can provide the practitioner with valuable information on potential imbalances. For example, during single-leg jumping, measures for the right and left legs can be compared, along with comparing the sum of the single legs to the bilateral measure (Figure 7). In the example shown in Figure 7, there is a 19% deficit between the right and left legs for athlete B. Therefore, the subsequent training could focus on including some additional single-leg work, particularly for the left leg. When comparing the sum of the right and left legs to the scores for the bilateral CMJ, there are differences in the “bilateral deficit,” with athlete A producing 25% more power with the sum of the unilateral jumps and athlete B producing only 8% more. This could suggest, depending on sport specificity, that athlete A could focus more on bilateral work in the next training block, whereas athlete B would do more unilateral work.

Figure 7. Unilateral (right and left leg) and bilateral CMJ comparison for 2 athletes.
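The asymmetry and bilateral comparison described above can be computed as in the following sketch; the single-leg and bilateral peak power values are hypothetical and chosen to approximate the 19%, 25%, and 8% figures in the example.

```python
# Hypothetical CMJ peak power (W): (right leg, left leg, bilateral).
athletes = {"Athlete A": (1800, 1750, 2840), "Athlete B": (1900, 1540, 3185)}

for name, (right, left, bilateral) in athletes.items():
    asymmetry = (right - left) / max(right, left)               # between-leg difference
    unilateral_sum = right + left
    bilateral_diff = (unilateral_sum - bilateral) / bilateral   # "bilateral deficit"
    print(f"{name}: right-left asymmetry = {asymmetry:.0%}, "
          f"sum of single legs exceeds bilateral by {bilateral_diff:.0%}")
```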

Again with this example, the practitioner without access to technology that measures variables such as force, power, and velocity could implement a simple approach in which jump height is measured when jumping off each leg and bilaterally. The same principle could be applied to testing a capacity such as strength endurance, conducted using body weight or submaximal loads as the resistance, with the measurement being the maximum number of repetitions performed until failure, the maximum number of repetitions completed within a certain time interval, or the shortest time to complete a certain number of repetitions. Another consideration when using these types of tests is to not only focus on the outcome measures, such as jump height, but also to closely observe the technique of the movement because this provides valuable insights into athlete performance.


PRACTICAL APPLICATIONS

Strength and power diagnosis can provide valuable insights into the different capacities of athletes. A detailed profile allows determination of the underlying performance limitations rather than simply testing a narrow range of qualities. This allows the strength and conditioning practitioner to individualize training programs. The strength and power tests chosen should be reliable and valid and take into account the requirements of the sport and what is a meaningful change in performance. The results of these tests need to be reported in a clear, meaningful, and timely manner for coaches if they are to have maximal impact on training programs. Finally, the practitioner can use this evidence-based information in conjunction with the art of coaching to maximize training program effectiveness.


REFERENCES

1. Argus C, Gill N, Keogh JWL. Characterization of the differences in strength and power between different levels of competition in rugby union athletes. J Strength Cond Res 26: 2698–2704, 2012.

2. Argus CK, Gill ND, Keogh JW, McGuigan MR, Hopkins WG. Effects of two contrast training programs on jump performance in rugby union players during a competition phase. Int J Sports Physiol Perform 7: 68–75, 2012.

3. Baker D. Comparison of upper-body strength and power between professional and college-aged rugby league players. J Strength Cond Res 15: 30–35, 2001.

4. Hopkins W. How to interpret changes in an athletic performance test. Sportscience 8: 1–7, 2004.

5. McGuigan MR, Cormack S, Newton RU. Long-term power performance of elite Australian rules football players. J Strength Cond Res 23: 26–32, 2009.

6. McGuigan MR, Doyle TL, Newton M, Edwards DJ, Nimphius S, Newton RU. Eccentric utilization ratio: Effect of sport and phase of training. J Strength Cond Res 20: 992–995, 2006.

7. Newton R, Dugan E. Application of strength diagnosis. Strength Cond J 24: 50–59, 2002.

8. Nibali M, Chapman D, Robergs R, Drinkwater E. A rationale for assessing the lower-body power profile in team sport athletes. J Strength Cond Res 27: 388–397, 2013.

9. Randell AD, Cronin JB, Keogh JW, Gill ND, Pedersen MC. Reliability of performance velocity for jump squats under feedback and nonfeedback conditions. J Strength Cond Res 25: 3514–3518, 2011.

10. Sheppard JM, Cormack S, Taylor KL, McGuigan MR, Newton RU. Assessing the force-velocity characteristics of the leg extensors in well-trained athletes: The incremental load power profile. J Strength Cond Res 22: 1320–1326, 2008.

11. Sheppard JM, Cronin JB, Gabbett TJ, McGuigan MR, Etxebarria N, Newton RU. Relative importance of strength, power, and anthropometric measures to jump performance of elite volleyball players. J Strength Cond Res 22: 758–765, 2008.

12. Wisloff U, Castagna C, Helgerud J, Jones R, Hoff J. Strong correlation of maximal strength with sprint performance and vertical jump height in elite soccer players. Br J Sports Med 38: 285–288, 2004.

Keywords:

power; strength; testing; monitoring


© 2013 by the National Strength & Conditioning Association
