Introduction
The difference between “average” and “fast” is a few tenths of a second in a 40-m sprint. Valid and reliable timing is therefore critical for the effective monitoring of sprinting performance. In published studies of sprinting performance, electronic timing is advisable because of the importance of small variations in timing (5,7). Although theoretically more precise, electronic timing is influenced by differences in timer activation methods and the starting position of athletes. These variables can generate meaningful differences in performance times that may complicate comparison of results from different studies or estimation of the effect magnitude of training interventions.
Because the International Association of Athletics Federations (IAAF) mandated fully electronic timing to the hundredth of a second for running events, timing methods used in international athletics have been considered the “gold standard” for accurately and reliably quantifying sprint performance. Omega's Scan ‘O’ Vision (Swiss Timing, Corgemont, Switzerland) fully automatic photofinish system has been used in international championship and World Cup meetings. However, like many gold standard methods, such timing equipment is expensive and impractical to use for most practitioners.
In theory, recording gun smoke and sprinters passing the finish line with a single video camera should provide enough information for valid sprint time analysis when captured into a computer video analysis program. Although this timing method represents a practical “gold standard” for the validation of other methods, it has so far not been reported in the literature. A review of published studies monitoring speed performance reveals considerable variation in timing methods and hardware manufacturers.
In American football, measuring the 40-yd (36.58-m) dash performance from a 3-point stance is standard practice (2,3,6,8-10). In contrast, most sprint tests in soccer studies use a rocking start or allow leaning back before movement initiation from a standing start (1,11-14). The Brower Timing System (Draper, UT, USA) has been used in the majority of these publications, but specific start procedures and hardware approaches to timer initiation have varied. Only Duthie et al. (4) have so far evaluated the reliability of different starting techniques and their impact on measured performance time.
Therefore, the purpose of this study was to compare different sprint start positions and generate correction factors between popular timing triggering methods on the 40-m/40-yd sprint. This information should facilitate more meaningful comparisons of published sprint performance results.
Methods
Experimental Approach to the Problem
Data were collected in 2 phases. The purpose of phase 1 was to establish the validity of times determined using the (a) Brower Timing System, a popular wireless and portable timing system, with an audio speaker sensor and (b) video recordings analyzed in Dartfish 5.0 software. These systems were compared with the Omega Scan ‘O’ Vision system during national competitions at Bislett stadium in Oslo, an internationally certified athletics venue. Timing was initiated by gunfire, and the athletes ran from the set start block (4-point stance) position. The times of 48 different heat winners (60- to 400-m running events) were determined by using all 3 systems simultaneously.
Phase 2 data collection was designed to assess the impact of several popular combinations of timer triggering and start position in a group of track athletes. In addition, test-retest reliability was evaluated by comparing repeated trials of each of the 3 starting methods. In each series, sprinters performed 3 sprints in randomized order under the following conditions: (a) start from standard sprinter blocks with gunfire, measured using both Brower Timing with the audio speaker sensor and Dartfish video; (b) 3-point start with fingers placed on a timer touch pod at the start line, measured using both Brower and Dartfish; and (c) standing rocking start, leaning back before sprinting, measured using Brower, Dartfish, and a dedicated indoor system used by the Norwegian Olympic Center (NOC) employing an embedded pressure sensor below the track surface. The different timing methods and starting positions compared are summarized in Table 1. Rest between sprints within a series was 6 minutes; the pause between the 2 series was 20 minutes.
Table 1: Timing methods and starting position during phase 2 data collection.*
Subjects
Video-based timing data were validated against the Omega system (phase 1) during different national athletic meets at Bislett Stadium in Oslo. The athletes participated voluntarily in these competitions, in which electronic timing was a standard part of the event. Approval from meet organizers to set up the extra timing systems was obtained in advance. In the second phase of data collection, 25 athletes participated in the study at the NOC. The subjects had all been competing in track and field events (100- to 400-m sprint and hurdles) for at least 2 years and were currently training actively for a minimum of 5 d·wk−1. Written informed consent was obtained from each subject before participation. They were all healthy and free from injuries at the time of testing. Regarding nutrition, hydration, sleep, and physical activity, the athletes were instructed to prepare as they would for a regular competition, including no high-intensity training during the last 3–4 days before testing. All the subjects were familiar with the different starting positions through training sessions with their clubs, relays, and competitions. The characteristics of the subjects participating in phase 2 are presented in Table 2.
Table 2: Characteristics of the subjects used to compare timing systems and start positions.*
Procedures
Starting Positions
For the start block condition, the athletes followed the instructions and commands specified by IAAF competition rules. This method includes reaction time to the starter's pistol. In the 3-point and standing start conditions, no start command was given, and reaction time was not included in the total time. In the 3-point condition, the athletes placed the fingers of their front hand on the pod immediately behind the starting line. During standing starts, timing was initiated in 2 ways: via release of pressure from the front foot on the subsurface triggering plate and via breaking a photocell beam 1 m above the starting line. The athletes were instructed not to lean into the photobeam before the start. After the ready signal was given by the test operator, the athletes started on their own initiative. The starting positions and triggering methods employed are illustrated in Figure 1A–D.
Figure 1: Body position at timer triggering for different start methods compared. (A) Block start, (B) 3-point start with hand release, (C) standing start with photocell trigger, and (D) standing start with floor sensor trigger below the front foot.
Timing Equipment
Omega's Scan ‘O’ Vision photofinish timing system was the “gold standard” for the validation of all other timing systems in this study. Timing is initiated by an electronic gun, which sends a current through an attached wire to a separate console in a control room that triggers all timing devices. Trial shots, so-called “zero shots,” were fired before the start of each competition to ensure exact timing initiation. Scan ‘O’ Vision Star cameras were installed at the finish. They can capture up to 2,000 images per second at a resolution of 2,048 pixels per vertical line. The Omega system splices thousands of scans together, forming a composite image of the contestants. The corresponding time is displayed for each picture, providing the photofinish judge with enough information to estimate the time within ±0.0005 seconds (http://www.swisstiming.com/Athletics.495.0.html).
The Brower Timing System
(Draper) employs 3 different time-initiating devices: (a) An audio sensor can capture gunfire and start the timer, in principle equivalent to athletics timing wherein reaction time is included. This method of time initiation was used in both data collection phases (Bislett Stadium and Norwegian Olympic Center). (b) A small hand pod (12 × 5 cm) placed at the start line was used in the second session to measure athletes' performance starting from a 3-point stance with feet split and 1 hand on the pod. The timer starts when hand pressure against the pod is released. The time difference between the 3-point stance and the block start with gunfire is the net effect of including reaction time and the possible benefits of start blocks. (c) A pair of infrared Brower photocells, model TRD-T175 (Draper), was also used for time triggering in the second data collection. An infrared transmitter with a corresponding reflector was placed on each side of the running course 1 m above the ground at the start line. In this test, the athletes stood with their front foot on the start line and were allowed to lean backward before rolling forward into the timer-initiating infrared beam.
Single-beam infrared photocells registered the 40-m finish for all Brower timing methods. We adjusted the finish-line transmitter and corresponding reflector to head level for each runner instead of the normally used chest level, ensuring that the sensor beam was not broken by a swinging arm 0.03–0.05 seconds before the body triggered it.
Dartfish Video Timing
Video recordings obtained with a Sony HD camera (HDR-HC9E) were analyzed in Dartfish 5.0 to estimate sprint times during both data collection phases. For all block starts, each video clip captured both the gun smoke from the starter's pistol and the athletes passing the finish line. Successive frames in the Dartfish analyzer window are separated by 0.02 seconds. To ensure the best possible accuracy, 2 independent analyzers assessed the size of the smoke plume in the first frame where smoke was visible. For a small smoke plume, the start was set 0.01 seconds back on the timeline (cue in); when a large plume was visible in the first “smoke frame,” the start was set 0.02 seconds back. Similar procedures were used to set the video cue in for 3-point and standing starts: if the hand or foot left the pod or the floor plate between 2 frames, the start time was set between these 2 timeline values. The finish time (cue out) was set the same way; if the athlete passed the finish line between 2 frames, the finish time was set between these 2 timeline values. The time of each athlete's run was calculated by subtracting cue in from cue out, and the mean of the times determined by the 2 independent observers was taken.
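The frame-based timing procedure above can be sketched as follows. This is an illustrative reconstruction, not the authors' software; the timeline values and the second observer's time are hypothetical.

```python
FRAME_INTERVAL = 0.02  # seconds between successive frames on the Dartfish timeline

def midpoint(t_before: float, t_after: float) -> float:
    """Place an event that occurs between 2 frames halfway between their timeline values."""
    return (t_before + t_after) / 2

def sprint_time(cue_in: float, cue_out: float) -> float:
    """Sprint time = finish (cue out) minus start (cue in) on the video timeline."""
    return cue_out - cue_in

# Hypothetical clip: a small smoke plume first appears at 0.10 s, so the start
# is set 0.01 s back; the runner crosses the finish between frames at 5.40 and 5.42 s.
cue_in = 0.10 - 0.01            # small plume: back up one half frame interval
cue_out = midpoint(5.40, 5.42)  # crossing falls between 2 frames
time_observer_1 = sprint_time(cue_in, cue_out)
time_observer_2 = 5.33          # hypothetical value from the second observer
mean_time = (time_observer_1 + time_observer_2) / 2
```

The midpoint rule bounds the interpolation error at half a frame interval (0.01 seconds), consistent with the precision reported for the validation in Table 3.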
Floor Plate Triggering
Besides audio start triggering (AS), photocell start triggering (PS), and hand release (HR) start triggering (all from the Brower timer), a purpose-built foot pressure release system (FR) was also used during the second data collection session. The FR system was custom built and employed a 60 × 60-cm pressure-sensitive floor plate placed under the track surface. The timer is triggered when the pressure of the front foot against the floor plate is removed. Pairs of photocells covered every fifth meter of the running distance. The infrared beam was split to reduce the possibility of arm swings triggering the cells. Transmitters were placed 140 cm above the ground, and reflectors for the split beam were placed 130 and 150 cm above the floor. Both beams had to be interrupted to trigger the timer stop. Electronic times were transferred to a computer running dedicated software developed in MATLAB (BioRun, Biomekanikk AS, Norway). Forty-yard (36.58-m) times were calculated using the formula: time 40 yd = time 35 m + ([time 40 m − time 35 m] × 0.316).
Statistical Analyses
PASW Statistics 18.0 (SPSS, Chicago, IL, USA) was used for all analyses. The timing systems were validated based on the mean difference, Pearson's r correlation, standard error of measurement (SEM), and coefficient of variation (CV). Reliability calculations were based on the mean difference, intraclass correlation, SEM, and CV. Where a systematic bias was detected in the test-retest analysis, SEM and CV were calculated after adjustment for the mean bias. All averaged values are rounded to the nearest 0.01 second. A general linear model with repeated measures, followed by Bonferroni adjustment for multiple comparisons, was used to compare the results from the 3 starting positions. Test-retest differences in performance were compared using the paired-samples t-test. Dartfish video timing provided the basis for the reliability measurements. Alpha was set to ≤0.05 for tests of the null hypothesis. The mean difference, standard error, and 95% confidence interval for the differences between block starts (the reference starting position) and the other starting methods are presented. Finally, correction factors between 40 m and 40 yd were calculated based on the NOC timing system measurements. Residual error estimates for the correction factors were calculated and expressed as typical error.
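The SEM and CV for test-retest pairs can be computed as sketched below. This is one common formulation (SEM as typical error, i.e., the SD of the difference scores divided by √2, and CV as the SEM expressed as a percentage of the grand mean); the paper does not spell out its exact computation, so treat this as an assumption. The sample times are hypothetical.

```python
import math
import statistics

def typical_error(trial_1: list[float], trial_2: list[float]) -> float:
    """SEM as typical error: SD of the test-retest differences divided by sqrt(2).
    Assumes paired observations (same subject at the same list index)."""
    diffs = [b - a for a, b in zip(trial_1, trial_2)]
    return statistics.stdev(diffs) / math.sqrt(2)

def cv_percent(trial_1: list[float], trial_2: list[float]) -> float:
    """CV: typical error expressed as a percentage of the grand mean of all trials."""
    grand_mean = statistics.mean(list(trial_1) + list(trial_2))
    return typical_error(trial_1, trial_2) / grand_mean * 100

# Hypothetical 40-m times (s) for 3 athletes in 2 series
series_1 = [5.00, 5.10, 5.20]
series_2 = [5.04, 5.12, 5.18]
sem = typical_error(series_1, series_2)
cv = cv_percent(series_1, series_2)
```

When a systematic bias exists between trials, the mean difference is subtracted from one series before this calculation, as described above.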
Results
Concurrent measurements of athletic events using the Omega timing system, Brower audio triggering, and Dartfish-based video analysis demonstrated that the 2 latter methods were valid to the limits of precision of the instruments (Table 3). Table 3 also shows that both HR and FR from a subsurface pressure sensor showed a small but significant bias compared with video-based timing. That is, with the hand pod, timer activation actually began about 0.04 seconds before video detection of hand movement; with foot pressure activation, timer activation was delayed by 0.02 seconds compared with video detection of foot movement.
Table 3: Validation of timing systems.*
Test-retest reliability results are presented in Table 4. A small (∼0.04 seconds) but significant performance decline was detected when comparing the best results from series 1 and series 2, separated by only 20 minutes.
Table 4: Test-retest reliability for 40-m sprint testing.*
Table 5 gives the differences in performance time associated with the 3 starting positions compared, all based on Dartfish video analysis. The impact of the starting position on the 40-m performance time was statistically significant and much larger than the typical variation from test to test. Table 6 shows correction factors for the conversion of metric to yard distances.
Table 5: Correction factors between starting positions (block start to gunfire as reference).*†
Table 6: Correction factors for conversion between 40 m and 40 yd .*
Discussion
The analysis of phase 1 data revealed no systematic variation between Omega, video, and Brower speaker sensor timing. Table 3 shows an absolute variation of 0.01 seconds and no mean difference between Omega and the other systems. The SEM was 0.01 seconds for both video and Brower vs. Omega. That is, for practical purposes, the Brower audio sensor/photocell timing system and Dartfish video analysis give identical results to Omega phototiming to a precision of ±0.01 seconds.
Video timing was subsequently used as the validation standard during phase 2 comparisons against Brower finger pod and NOC floor plate. We detected small but significant timing biases when video detection of movement was matched with timer triggering based on sensors placed under the hand or front foot. The instrumental biases were small (0.02–0.04 seconds) and may vary across instruments depending on the calibration of pressure thresholds and other details in construction.
We observed a small (∼0.04 seconds) but statistically significant decline in performance from the first series of 3 sprints to the second series. The 2 series were separated by only 20 minutes. The poorer times in trials 4–6 could be explained by fatigue or a lack of mobilization. Duthie et al. (4) reported a 0.02-second mean improvement in the 10-m time for standing starts with the floor plate between 2 test sessions separated by 7 days. In our own hands, pilot measurements before this investigation revealed 0.05–0.1 second individual time variation in both directions when testing was performed on different days. Therefore, a design with a test-retest within the same day was chosen to minimize this source of variation. Table 4 shows that the CV for test-retest timing using different starting positions ranged from 0.7 to 1.0% after correction for the small systematic timing bias between trials. The block method appeared to be the most reliable in a group of athletes familiar with this start method. Under normal testing conditions, we have observed that approximately 80% of the athletes reach their best 40-m performance within 2 trials.
The key finding of this study is that the starting method and timing system used can combine to generate large absolute differences in “sprint time.” Table 5 shows that 40-m times triggered by hand release from a 3-point stance, breaking a photocell beam from a standing start, and releasing the front foot from the ground during a standing start generate approximately 0.17, 0.27, and 0.69 seconds faster times, respectively, compared with block starts with a timer triggered by gunfire (Figure 1).
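These mean offsets can be applied as additive corrections when comparing results obtained with different start and trigger combinations. A minimal sketch, using the approximate corrections stated above (the dictionary keys are labels introduced here for illustration):

```python
# Approximate mean corrections (s) reported in this study: add to a 40-m time
# measured with the given start/trigger combination to estimate the equivalent
# block-start-with-gunfire time.
CORRECTION_TO_BLOCK_START = {
    "3pt_hand_release": 0.17,
    "standing_photocell": 0.27,
    "standing_foot_release": 0.69,
}

def block_start_equivalent(time_s: float, method: str) -> float:
    """Convert a 40-m time to its block-start (gunfire-triggered) equivalent."""
    return round(time_s + CORRECTION_TO_BLOCK_START[method], 2)
```

For example, a 4.40-second time from a standing start with foot release corresponds to a block-start equivalent of about 5.09 seconds, which is the comparison made in the discussion below.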
These figures are not in accordance with the findings of Duthie et al. (4 ), who reported smaller time differences between the starting positions. These discrepancies can be explained by several factors. First, Duthie et al. used timing equipment made by another manufacturer (Swift Performance, Australia) with a different calibration of pressure threshold and other details in the construction of the foot pod. Second, the standing start procedure in our study allowed leaning backward before rolling forward as opposed to the fixed position used by Duthie et al. Finally, Duthie et al. might have placed the photocells at the start line at a different height, allowing other body parts than the chest to trigger the beam.
At the extreme, a 40-m sprint time of 4.4 seconds measured from a standing start with triggering via floor sensors below the front foot gives a poorer performance than does 5.0 seconds measured from starting blocks with time initiated by a starter's gun. The method of sprint timing used can result in greater differences in sprint time than obtained using several years of a conditioning training program (20). These differences are essentially absolute. Therefore, their impact on the interpretation of shorter sprint distance performances would be even greater.
The young athletes in this study were not all sprint specialists but were experienced with block start conditions and the other starting conditions employed. Although the absolute differences observed might vary somewhat with the experience of the athletes being tested, we believe that the correction factors quantified here provide a reasonable framework for comparing sprint performances across timing methods.
The sources of the time differences detected include the starting device (gun, pod, or photocells), inclusion of reaction time, vertical and horizontal placement of the starting device relative to the start line, body configuration, and center-of-gravity velocity at the triggering point. The 0.17-second difference in performance time between the block start and the 3-point start in this study can mainly be explained by reaction time, as it is identical to the mean reaction time reported by the IAAF from the last athletic championships (http://www.iaaf.org/history/index.html). Based on these considerations, the athletes in this study gained no measurable benefit from using start blocks. However, video recordings from the 3-point starts revealed slight horizontal body movement before finger lift-off, because the athletes tried to delay lifting their hand from the timing sensor. Therefore, it is likely that a small performance advantage is achieved with block starts in experienced performers.
The differences in performance time are larger between block starts and standing start measurements, with the standing condition being consistently faster. A standing start with photocell triggering and a standing start with front-foot release triggering yield 0.1- and 0.52-second better times, respectively, vs. the 3-point start with hand release. These differences can be explained by the body position and the horizontal velocity of the center of gravity at the triggering point. In the case of photocell triggering, the center of gravity is located above the start line (Figure 1C), in contrast to 3-point and block starts, where the center of gravity is about 40–50 cm behind the start line (Figure 1A, B). Because the standing start allows leaning back before running, the athletes have a small horizontal movement at photocell triggering. A disadvantage of the standing photocell start is the raised upper-body position needed to avoid early triggering. Despite this biomechanically poor stance, the horizontal speed generated from the slight flying start and the shorter effective running distance combine to yield a 0.1-second advantage vs. 3-point starts. The benefit is even more pronounced with foot release triggering: here, the center of gravity is about 50–60 cm past the start line, and the horizontal velocity of the center of gravity is considerably higher by the time the front foot releases the triggering plate (Figure 1D). Compared with the “gold standard” block start, this start method eliminates reaction time, reduces the timed running distance by about 1 m, and allows the benefit of a substantial flying start.
American athletes are often timed over 40 yd (36.58 m). Therefore, the relationship between 40-m and 40-yd mean times (n = 50) is shown in Table 6. Based on 50 simultaneous timings of 35- and 40-m splits and interpolation of the 36.58-m time, a simple correction factor of 1.08 (1.084) can be used to convert 40-yd performances to comparable 40-m times. Similarly, a factor of 0.92 (0.923) converts 40-m times to 40-yd equivalents. These correction factors might be useful when, for example, comparing speed performance in American football with that in familiar European team sports.
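The distance conversion reduces to a single multiplication; a sketch using the factors reported in Table 6 (the example times are hypothetical):

```python
YD40_TO_M40 = 1.084  # multiply a 40-yd time to estimate a comparable 40-m time
M40_TO_YD40 = 0.923  # multiply a 40-m time to estimate a comparable 40-yd time

def m40_from_yd40(t_yd: float) -> float:
    """Estimate a 40-m time from a 40-yd time (reported to 0.01 s)."""
    return round(t_yd * YD40_TO_M40, 2)

def yd40_from_m40(t_m: float) -> float:
    """Estimate a 40-yd time from a 40-m time (reported to 0.01 s)."""
    return round(t_m * M40_TO_YD40, 2)

# Example: a hypothetical 4.80-s 40-yd dash corresponds to roughly a 5.20-s 40 m
t_40m = m40_from_yd40(4.80)
t_40yd = yd40_from_m40(5.20)
```

Note that the conversion is multiplicative, so it assumes a roughly proportional relationship between split times over this distance range, and it does not remove the start-method offsets discussed above, which must be corrected separately.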
Practical Applications
The difference between excellent and mediocre in a 40-m sprint is a few tenths of a second. Comparing sprint timing results without considering the specific start configuration and timing method can therefore cause considerable confusion. This study has shown that time differences among commonly used start positions and triggering methods can exceed 0.5 seconds, larger than the typical gains derived from specific training or even the difference between superior and mediocre sprinters. For internal comparisons of performance in a training monitoring setting, changing timing methods is unacceptable. Electronic triggering by hand release from a 3-point stance may represent the most practical start method with minimal momentum or distance-shortening effects. However, timing methods vary from sport to sport and across investigations. Therefore, this investigation provides correction factors that should improve the validity of performance comparisons across research studies and over typical “American” and “European” sprint test distances.
References
1. Aziz, AR, Mukherjee, S, Chia, MY, and Teh, KC. Validity of the running repeated sprint ability test among playing positions and level of competitiveness in trained soccer players. Int J Sports Med 29: 833–838, 2008.
2. Brechue, WF, Mayhew, JL, and Piper, FC. Characteristics of sprint performance in college football players. J Strength Cond Res 24: 1169–1178, 2010.
3. Dupler, TL, Amonette, WE, Coleman, AE, Hoffman, JR, and Wenzel, T. Anthropometric and performance differences among high-school football players. J Strength Cond Res 24: 1975–1982, 2010.
4. Duthie, GM, Pyne, DB, Ross, AA, Livingstone, SG, and Hooper, SL. The reliability of ten-meter sprint time using different starting techniques. J Strength Cond Res 20: 246–251, 2006.
5. Hetzler, RK, Stickley, CD, Lundquist, KM, and Kimura, IF. Reliability and accuracy of handheld stopwatches compared with electronic timing in measuring sprint performance. J Strength Cond Res 22: 1969–1976, 2008.
6. Kuzmits, FE and Adams, AJ. The NFL combine: Does it predict performance in the National Football League? J Strength Cond Res 22: 1721–1727, 2008.
7. Mayhew, JL, Houser, JJ, Briney, BB, Williams, TB, Piper, FC, and Brechue, WF. Comparison between hand and electronic timing of 40-yd dash performance in college football players. J Strength Cond Res 24: 447–451, 2010.
8. Pincivero, DM and Bompa, TO. A physiological review of American football. Sports Med 23: 247–260, 1997.
9. Stodden, DF and Galitski, HM. Longitudinal effects of a collegiate strength and conditioning program in American football. J Strength Cond Res 24: 2300–2308, 2010.
10. Stuempfle, KJ, Katch, FI, and Petrie, DF. Body composition relates poorly to performance tests in NCAA Division III football players. J Strength Cond Res 17: 238–244, 2003.
11. Vescovi, JD, Brown, TD, and Murray, TM. Positional characteristics of physical performance in Division I college female soccer players. J Sports Med Phys Fitness 46: 221–226, 2006.
12. Vescovi, JD, Rupf, R, Brown, TD, and Marques, MC. Physical performance characteristics of high-level female soccer players 12–21 years of age. Scand J Med Sci Sports 2010 (Epub ahead of print).
13. Wisløff, U, Castagna, C, Helgerud, J, Jones, R, and Hoff, J. Strong correlation of maximal squat strength with sprint performance and vertical jump height in elite soccer players. Br J Sports Med 38: 285–288, 2004.
14. Wong, PL, Chaouachi, A, Chamari, K, Dellal, A, and Wisloff, U. Effect of preseason concurrent muscular strength and high-intensity interval training in professional soccer players. J Strength Cond Res 24: 653–660, 2010.