There is little information available about the variability in top athletes' performance from competition to competition. According to Hopkins et al. (3,5,6), this reliability is the key factor in determining the extent to which a performance enhancement strategy affects an athlete's chances of changing his/her position in a competition. It has been estimated that an enhancement in performance has a substantial effect on a top athlete's competitive results in an event only if the enhancement is at least ∼0.5 of the magnitude of the typical within-athlete variation in performance between events (3,6). Research has shown that the reliability of competitive performance in swimmers, runners, and weightlifters, expressed as a coefficient of variation, ranges from 1.2 to 3.1%, 1.2 to 4.2%, and 2.3 to 2.7%, respectively (7,9,12,13). It could be hypothesized that the subjective assessment of performance during surfing competition is associated with a greater degree of variability between competitions than in time- or load-based sports such as running (7), swimming (12,13), or weightlifting (9). Moreover, it is plausible that the large degree of uncertainty associated with the highly unstable and changing environment in which surfing is contested could affect the reliability of competitive scores (10,11). However, the variability of performance between sporting contests in such subjective, more complex sports as surfing has not been previously investigated.
It is common practice for researchers and sport scientists to use laboratory- or field-based tests that simulate the competitive event as a surrogate for performance in the actual event (3). Setting aside the doubtful practical value and external validity of this practice in most complex sports, it is simply not possible to replicate surfing in a laboratory-based setting. Therefore, to study the effects of training, nutritional, or other interventions on surfing performance, the reliability of surfing performance has to be assessed using the outcomes of actual competitive events. Moreover, the event itself seems to provide the only dependable estimate of performance enhancement (1). Thus, estimates of the variation in performance between competitions are important for surfers, coaches, and sport scientists interested in strategies or factors that might enhance performance in a "real-world" setting. Accordingly, the aim of this investigation was to determine the typical variation in competitive performance of elite surfers.
Experimental Approach to the Problem
Elite surfers usually enter several competitions in the course of a season, so it is possible to estimate the reliability of performance assessment from competition to competition. Performance scores of the best competition surfers worldwide were obtained for every event to monitor variation in competitive performance over a period of 1 season. The study design was a prospective cohort study. It was hypothesized that, because of elements inherent to surfing practice (i.e., nonreplicable competition venues and highly unstable, changing environmental conditions) and to surfing competition assessment (i.e., an externally judged sport), surfers' competitive performance would be highly variable.
We obtained official scores (points) for the 12 competitions (the whole season) of the 2002 World Championship Tour (WCT) from the website of the Association of Surfing Professionals (ASP). The ASP is the leading governing body of professional surfing and is responsible for organizing a competitive calendar around the world. The WCT consists of the top 46 surfers competing throughout the season in the prime surfing locations worldwide (Table 1). The WCT is the highest standard of competition in the world; therefore, surfers at this standard are classified as elite. We analyzed 46 surfers who entered 6 or more such events, held between March and December 2002. All surfers were experienced, elite-level, male professional surfers. To further explore the variability of competitive performance, we also carried out a separate analysis on official performance points for 182 male surfers who competed in 3 consecutive events within the 2002 World Qualifying Series (WQS), held in August 2002 (Table 2). The WQS is the feeder system for the WCT. At the end of each competitive season, the last 16 surfers in the WCT automatically lose their place in the WCT and are replaced for the following season by the top 16 WQS surfers. Therefore, the WQS is the second highest standard of competition in the world. Surfers competing in the WCT are also allowed to compete in the WQS. These data are in the public domain, so we did not seek written consent for their use from individual athletes. The Institutional Review Board for Human Investigation approved all experimental procedures.
Surfing contests were based on rounds and elimination heats, in which surfers received a numerical score based on their final position in each event. A numerical score (points) was awarded according to the final position achieved by the surfers during each of the 11 WCT events (1 contest was cancelled because of poor weather conditions). Surfers competing in the 3 WQS events also received the same number of points for the same final position achieved. Surfers competing in the WQS events were seeded by the event organizers on the basis of the previous season's competitive results; that is, better-seeded surfers entered the competition at higher rounds. New surfers on the circuit were unseeded and therefore started the competition from the first rounds (as in, e.g., tennis tournaments). We performed separate analyses for each group of surfers (i.e., seeded and unseeded).
Because scores in each competition were normalized (i.e., surfers obtained the same score for the same final position achieved in each contest), it was not necessary to use a reliability model with a fixed effect for the difficulty of the event. We examined plots of residuals vs. predicted scores for each analysis to check for nonuniformity of error. Residuals from the raw score points showed a tendency to increase for athletes with higher scores; this was removed by taking the natural log of the raw data. Our measure of variability was the typical error (i.e., within-subject variation), expressed as a Cohen (i.e., standardized) effect size (ES), of the log-transformed final scores obtained by the surfers after finishing each event. Within-subject variation represents the typical variation of an athlete's performance between events, after any changes in the mean have been taken out of consideration (5). The Cohen ES was calculated by dividing the typical error by the between-subject SD of the natural logarithm of the raw points score, obtained for consecutive pairwise reliabilities across the set of 11 events (4). Threshold values for the Cohen ES statistic are 0.2 (small), 0.5 (moderate), and 0.8 (large). Precision of the estimates of the within-subject Cohen ES is shown as 90% likely limits (confidence limits), which represent the limits within which the true value is 90% likely to occur.
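The calculation just described can be sketched in a few lines of code. This is an illustrative simplification with invented scores (not the study's data): it analyzes a single pair of consecutive events and pools both events' log scores for the between-subject SD, whereas the spreadsheet method of Hopkins (4) may differ in detail.

```python
import math
import statistics

# Hypothetical raw point scores for 5 surfers at two consecutive events
# (invented values for illustration only; not the study's data).
event_a = [1200.0, 855.0, 740.0, 500.0, 420.0]
event_b = [855.0, 1200.0, 500.0, 740.0, 330.0]

# Natural log removes the tendency of residuals to grow with score.
log_a = [math.log(x) for x in event_a]
log_b = [math.log(x) for x in event_b]

# Typical error: SD of the pairwise change scores divided by sqrt(2).
diffs = [b - a for a, b in zip(log_a, log_b)]
typical_error = statistics.stdev(diffs) / math.sqrt(2)

# Between-subject SD on the log scale (here simply pooled over both events).
between_sd = statistics.stdev(log_a + log_b)

# Standardized (Cohen) effect size of the within-subject variation.
effect_size = typical_error / between_sd
print(f"typical error = {typical_error:.2f}, Cohen ES = {effect_size:.2f}")
```

In the actual analysis this computation is repeated for every consecutive pair of events, giving the range of ESs reported in the Results.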
The magnitudes of the change scores computed from consecutive pairs of events for WCT and WQS surfers are presented in Tables 3 and 4, respectively. In WCT events, there was high variability in competitive performance assessment between contests, with all effect sizes ranging between 0.72 and 1.01 (n = 46). In WQS events, ES ranged between 0.61 and 1.04. The variability between events for the seeded surfers (0.61-0.68, n = 145) competing in the WQS was smaller than for WQS unseeded surfers (0.89-1.04, n = 37) and WCT surfers. The ES averaged 0.88 across all surfers (i.e., WCT and WQS surfers).
Surfing is a judged sport. As a result, there is an element of subjectivity in terms of what type of performance gets a good score. Results obtained in the present study, examining world elite surfers, clearly show that surfing performance is difficult to predict. These observations are consistent with the popular notion that the predictability of a surfer's performance is low. To our knowledge, this is the first study reporting variability of performance outcomes in a subjective (i.e., externally judged), non-time-based sport such as surfing.
Comparison of the variability of competitive outcomes between surfers and those reported in other sports, such as swimming (12,13), running (7), or weightlifting (9), is difficult because, among other things, surfing is not a time-based sport. The reliability of competitive performance in swimmers and runners, expressed as a coefficient of variation (CV), has been reported to range from 1.2 to 3.1% and from 1.2 to 4.2%, respectively. When the results of our study were converted to CVs, the values for surfers were much higher, ranging from 32.9 to 700.6%. Several factors are likely to explain these differences between surfing and time-based sports. The obvious explanation is that the subjective nature of surfing performance assessment introduces an inherent degree of variability compared with, e.g., the objective (i.e., time-based) performance assessment in running or swimming. However, surfers contesting the ASP circuits, whether WCT or WQS, have to fulfill the same judging criteria to maximize their scoring opportunities (8). Although the complete suppression of the subjective elements of surfing judging seems difficult, this uniformity in the judging criteria should, at least partially, prevent large disparities in judged performance outcomes. Therefore, other factors need to be considered to further explain the low consistency in surfing performance observed in the present study.
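For reference, a typical error on the natural-log scale converts to a CV via CV% = 100 × (e^TE − 1). The short sketch below, with illustrative typical-error values chosen to approximate the ranges quoted above, shows why the surfers' log-scale errors correspond to such large CVs.

```python
import math

def log_te_to_cv(typical_error_log: float) -> float:
    """Convert a typical error on the natural-log scale to a CV (%)."""
    return 100.0 * (math.exp(typical_error_log) - 1.0)

# Small log-scale errors give CVs of roughly 100 * TE, as in running or
# swimming; the much larger errors for surfers translate to huge CVs.
# (Typical-error inputs are illustrative values, not the study's estimates.)
print(log_te_to_cv(0.012))  # roughly 1.2%, as reported for runners/swimmers
print(log_te_to_cv(0.285))  # roughly 33%, near the lower end for surfers
print(log_te_to_cv(2.08))   # roughly 700%, near the upper end for surfers
```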
Competitive success in surfing is believed to depend on the complex interactions between many variables (10). In addition to the complex interrelationships between a surfer's psychological, tactical, cognitive, biomechanical, and physiological capacities, specific to every sport, surfing performance can also be influenced by several external factors such as equipment, wave conditions, level of the opponents and, as previously mentioned, judging (10,11). Among all these factors, wave conditions (i.e., type, shape, and height) are likely the most relevant to explaining the high degree of variability in surfers' performance. Wave conditions can vary drastically from day to day at the same surfing venue (2): swell size, speed, and direction; tides; currents; the characteristics of the shore bottom; kelp; and wind direction and strength all affect wave conditions. Every surf venue has a unique set of variables that will ultimately define the "anatomy" of a particular wave. Thus, wave conditions eventually dictate what manoeuvres will be possible on any given day (2). It is possible that different wave characteristics better fit particular surfers' technical skills. For example, "rights" (i.e., waves that break from the "peak" to the surfer's right) might be better for "regular footer" surfers (i.e., a stance in which the right foot is at the rear of the board) than for "goofy footer" surfers (i.e., a stance in which the left foot is at the rear of the board). This is because regular footer surfers glide over such waves facing the wall of the wave, which is believed to be advantageous for fast and precise surfing. Therefore, we speculate that the low performance predictability observed in top surfers might arise from the multiple combinations of possible competition scenarios that surfing venues may offer. Further research is required to quantify the contribution of these different factors to surfing competitive performance.
For WQS seeded surfers, the variation in performance between events was smaller than for WCT and WQS unseeded surfers. A possible explanation for this somewhat better predictability of competitive performance among WQS seeded surfers is that seeded surfers entered the competition in later rounds than unseeded surfers. That is, seeded surfers started the competition closer to the final and, therefore, contested fewer rounds than unseeded surfers. The greater number of rounds contested by unseeded surfers might be related to the increased variability in their competitive results observed in the present study. Familiarity with competition is another likely factor in the reduced variability among seeded surfers, as surfers are seeded on the basis of the results obtained in the previous season.
To change an elite athlete's chance of improving his/her position in a competition, an intervention has to change that athlete's performance by an amount equivalent to ∼0.5 of the typical variation in that athlete's performance from competition to competition (∼0.2 when expressed in Cohen units) (3,5,6). The variation between competitions for the top male professional surfers ranged from 0.61 to 1.04 Cohen units; that is, the test (i.e., competition) noise is much larger than this smallest worthwhile change (i.e., the signal). Because tests suitable for detecting such small changes in performance need to be less noisy than the typical variation of the athlete between events, assessing changes in competitive surfing performance resulting from short-term (i.e., between consecutive competitions) training, nutritional, or other interventions appears problematic (3,5,6). A practical solution could be to average the scores of several consecutive competitions, because averaging performance over n events reduces the noise by a factor of √n (5). In the present study, if we assume the minimum typical variation of performance of 0.61 Cohen units in this cohort of top-level surfers and average performance across 10 events, the noise becomes 0.61/√10 = 0.19 Cohen units, slightly smaller than the signal of 0.20 Cohen units (5).
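The averaging argument above amounts to a few lines of arithmetic; the sketch below uses the values from the text (noise of 0.61 Cohen units, signal of 0.20 Cohen units) to show how many events must be averaged before the noise drops below the signal.

```python
import math

# Noise: the smallest typical variation observed in this cohort (Cohen units).
# Signal: the smallest worthwhile change in performance (Cohen units).
noise = 0.61
signal = 0.20

# Averaging over n events shrinks the noise by a factor of sqrt(n).
for n in (1, 4, 10):
    averaged_noise = noise / math.sqrt(n)
    print(f"n = {n:2d}: noise = {averaged_noise:.2f} Cohen units, "
          f"below signal: {averaged_noise < signal}")
```

Only at about 10 averaged events does the noise (0.19) fall just below the signal (0.20), which is why single-event comparisons are uninformative here.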
Coaches and scientists are advised to consider these time frames when examining possible performance enhancement interventions and when evaluating the progression in competitive performance of top-level surfers.
In summary, we have demonstrated that performance of professional surfers is difficult to predict. The typical variation (within-subject variation expressed as a Cohen ES) in competitive performance ranged from 0.61 to 1.04. The reasons for this relatively high competitive performance variability could be related to a number of factors, including nonreplicable and unpredictable environmental factors within and between events and the subjective nature of surfing performance assessment. This study provides a framework for examining the significance of changes in competitive performance assessment in top professional surfers.
Changes in competitive performance are the major concern of professional athletes and their support teams. The present study is the first to investigate the variability in competitive performance in an externally judged sport such as surfing. The results suggest that the performance of elite surfers is not very stable (i.e., it is difficult to predict) throughout the competitive season. Information about this variation in performance is also important for those interested in factors that affect competitive performance, such as coaches and sport scientists. Considering the large variability in competitive performance reported in this study, a practitioner monitoring an elite, male, professional surfer will have little hope of detecting small to moderate changes in competitive performance between consecutive events. Thus, testing the effects of acute short-term nutritional, training, or other interventions appears to be impractical in surfers. On the basis of these observations, several consecutive competitions appear to be needed to track the smallest worthwhile change in competition scores in this cohort of surfers as a result of different training interventions. Moreover, this time frame (i.e., several consecutive competitions) is needed to monitor competitive progression in top-level surfers. We also advise coaches and sport scientists to analyze surfers' performance in different wave conditions to determine how best to make such changes in performance.
The excellent statistical guidance provided by Dr. Will G. Hopkins is greatly appreciated.
1. Atkinson, G and Nevill, AM. Selected issues in the design and analysis of sport performance research. J Sports Sci 19: 811-827, 2001.
2. Guisado, R. The Art of Surfing. Guilford, CT: Falcon, 2003.
3. Hopkins, WG. Measures of reliability in sports medicine and science. Sports Med 30: 1-15, 2000.
4. Hopkins, WG. A spreadsheet for analysis of straightforward controlled trials. Sportscience 7, 2003. Available at: http://www.sportsci.org/jour/03/wghtrials.htm. Accessed June 6, 2006.
5. Hopkins, WG. How to interpret changes in an athletic performance test. Sportscience 8: 1-7, 2004.
6. Hopkins, WG, Hawley, JA, and Burke, LM. Design and analysis of research on sport performance enhancement. Med Sci Sports Exerc 31: 472-485, 1999.
7. Hopkins, WG and Hewson, DJ. Variability of competitive performance of distance runners. Med Sci Sports Exerc 33: 1588-1592, 2001.
8. ISA 2003 World Surfing Games. Available at: http://www.isa.org. Accessed March 12, 2003.
9. McGuigan, MR and Kane, MK. Reliability of performance of elite Olympic weightlifters. J Strength Cond Res 18: 650-653, 2004.
10. Mendez-Villanueva, A and Bishop, D. Physiological aspects of surfboard riding performance. Sports Med 35: 55-70, 2005.
11. Mendez-Villanueva, A, Bishop, D, and Hamer, P. Activity profile of world-class professional surfers during competition. J Strength Cond Res 20: 477-482, 2006.
12. Pyne, DB, Trewin, CB, and Hopkins, WG. Progression and variability of competitive performance of Olympic swimmers. J Sports Sci 22: 613-620, 2004.
13. Stewart, AM and Hopkins, WG. Consistency of swimming performance within and between competitions. Med Sci Sports Exerc 32: 997-1001, 2000.