Abstracts

The Journal of Strength & Conditioning Research: February 2019 - Volume 33 - Issue 2 - p e3–e217
doi: 10.1519/JSC.0000000000002990

Research Abstract Submission

Thursday, July 12, 2018, 8:30 AM–8:45 AM


Sex-Based Differences in Androgen and Glucocorticoid Receptor Phosphorylation in Human Skeletal Muscle

J. Nicoll,1 A. Fry,1 E. Mosier,1 and A. Sterczala2

1 University of Kansas; and 2 University of Pittsburgh

Previous research has indicated that males express greater androgen receptor (AR) and lower glucocorticoid receptor (GR) content compared to females. In addition, it is well known that males produce more testosterone than females. Recent evidence suggests the AR and GR can be regulated in the absence of their ligands via phosphorylation (pAR & pGR). Phosphorylation at specific sites has been shown to influence receptor transcriptional activity (pGR ser134 & ser211; pAR ser213 & ser515) and nuclear-cytoplasmic shuttling (pAR ser650 & pGR ser226). However, it is unknown if AR and GR are phosphorylated in human skeletal muscle. Further, it is unknown if there is differential phosphorylation of the AR and GR between sexes similar to reports on their ligands (e.g., testosterone). Purpose: Determine if there is differential expression and phosphorylation of AR and GR in skeletal muscle at rest between males and females. Methods: Ten college-aged males (mean ± SD; age = 22 ± 2.4 years; height = 175 ± 7 cm; body mass = 84.1 ± 11.8 kg) and 10 females (mean ± SD; age = 20 ± 0.9 years; height = 169 ± 7 cm; body mass = 67.1 ± 8.7 kg) reported to the laboratory following an overnight fast. Resting percutaneous vastus lateralis muscle biopsies were collected and immediately frozen in liquid nitrogen. Biopsies were analyzed for pGR (ser134, ser211, and ser226) and pAR (ser81, ser213, ser515, and ser650) via western blotting. Independent samples t-tests were used to compare total and phosphorylated receptor expression between males and females. Pearson correlations between AR and GR variables were used to determine if there were relationships that may regulate the anabolic-catabolic milieu. A phosphorylation index (PI) was calculated to determine phosphorylated receptor expression after accounting for differences in total receptor content. Results are reported as mean ± SE. Significance was determined at an alpha level of p ≤ 0.05. 
Results: Males had more total AR compared to females (+42 ± 4%; p < 0.001). Females had higher pARser81 (+87 ± 11%; p = 0.001) and pARser515 (+55 ± 13%; p = 0.019). However, when the phosphorylated ratios were corrected for differences in total AR expression (i.e., PI), overall phosphorylation at these sites was similar between sexes (ser515: males = 100% vs. females = 92%; ser81: males = 100% vs. females = 107%). pGRser134 was higher in males compared to females (+50 ± 15%; p = 0.016), and there was a trend for higher pGRser211 in men (+34 ± 15%; p = 0.082). In males, there was a relationship between pARser650 and pGRser134 (r = 0.63; p = 0.05), and there tended to be a relationship between total AR and pGRser211 (r = −0.59; p = 0.068). These relationships were not observed in females (p > 0.05). In females, there was a relationship between pARser213 and pGRser226 (r = −0.78; p = 0.007), and there tended to be relationships between pARser81 and both pGRser211 (r = 0.58; p = 0.07) and pGRser226 (r = 0.61; p = 0.059). These relationships were not observed in males (p > 0.05). Conclusions: At rest, ARs and GRs are differentially phosphorylated at some, but not all, sites between males and females. There are several relationships between AR and GR phosphorylation at specific residues, and these relationships appear to be sex specific. Practical Applications: Differential regulation of phosphorylated AR and GR may partially explain how skeletal muscle in both sexes is able to undergo hypertrophy and strength gains despite different circulating hormonal concentrations. Acknowledgments: This project was funded by an NSCA-GNC Nutrition Grant and by an ISSN-MusclePharm Grant.
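The abstract does not give the exact formula for the phosphorylation index, but the underlying idea of normalizing a phosphorylated-receptor signal to total receptor content, then expressing groups relative to the male mean, can be sketched as follows. The densitometry values below are hypothetical, chosen only to illustrate the arithmetic:

```python
def phosphorylation_index(phospho, total):
    """Phosphorylated-receptor signal expressed relative to total receptor content."""
    return phospho / total

# Hypothetical western-blot densitometry values (arbitrary units)
male_pi = phosphorylation_index(1.0, 2.0)    # 0.5
female_pi = phosphorylation_index(0.8, 1.5)

# Express the female group relative to the male mean (males = 100%),
# mirroring how the abstract reports PI comparisons
female_pct = 100 * female_pi / male_pi       # ~107%
```

With this convention, a group can show a higher raw phospho-signal yet a near-identical index once total receptor content is accounted for, which is the pattern the abstract reports for pARser81 and pARser515.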

Thursday, July 12, 2018, 8:45 AM–9:00 AM


Biomarker Changes Correlate With Strength, Endurance, and Body Composition Changes Throughout the Competitive Season in Women's Division I Collegiate Soccer Players

B. McFadden,1 A. Walker,2 D. Sanders,2 B. Bozzini,2 C. Ordway,3 H. Cintineo,2 M. Bello,2 and S. Arent2

1 Center for Health and Human Performance, Rutgers University; 2 Rutgers Center for Health and Human Performance; and 3 Rutgers University

In-season, increased soccer-specific demands and insufficient recovery may result in loss of strength and fat-free mass (FFM). Biomarker monitoring may be a useful tool to detect the impact of these stressors, particularly if related to performance outcomes. Purpose: To identify the relationship between biomarker changes and changes in performance and body composition variables in female athletes over the course of a competitive season. Methods: Women's DI college soccer players (N = 21; Mage = 19.7 ± 1.5 years; Mweight = 66.3 ± 6.2 kg) were monitored throughout their competitive season. Athletes performed a battery of tests pre- and post-season, including vertical jump (VJ), V̇O2max, and 3RM testing for bench press (BP), squat (SQ), and deadlift (DL). Body composition was also assessed via BOD POD prior to the start of the season and at weeks 6, 10, 14, and 17 (post-season). Blood draws were performed prior to the start of preseason and every 4 weeks thereafter. The athletes arrived fasted and euhydrated in the morning 18–36 hours post-game. Total cortisol (TCORT), free cortisol (FCORT), estradiol (E2), growth hormone (GH), insulin-like growth factor-1 (IGF1), and interleukin-6 (IL6) were analyzed. Delta area under the curve (DAUC) was calculated for the biomarkers and body composition variables. Pearson product-moment correlations were used to assess the relationships between biomarker changes and changes in performance and body composition, with significance set at P < 0.05. Results: TCORT was negatively correlated with change in FFM (r = −0.48; P < 0.05). TCORT had a positive correlation with changes in percent body fat (%BF) that approached significance (r = 0.39; P = 0.08). TCORT was positively correlated with change in V̇O2max (r = 0.47; P < 0.05), and its negative correlation with SQ change approached significance (r = −0.45; P = 0.08). 
The negative correlation between TCORT and E2 approached significance (r = −0.36; P = 0.10). GH was positively correlated with changes in IGF1 (r = 0.48; P < 0.05) and changes in DL (r = 0.56; P < 0.05). IGF1 was also positively correlated with changes in DL (r = 0.56; P < 0.05), as well as with E2 (r = 0.49; P < 0.05). IL6 was negatively correlated with change in BP (r = −0.52; P < 0.05). No significant correlations were seen between biomarkers and changes in VJ. Conclusions: A power/endurance tradeoff is often noted in times of increased aerobic activity, as might be seen during soccer-specific training, particularly if coupled with a decrease in time spent resistance training. Greater changes in anabolic hormones (GH and IGF1) from baseline were related to improvements in measures of strength (DL), whereas greater changes in catabolic or inflammatory hormones (TCORT and IL6) were related to smaller improvements in FFM, BP, and SQ. Interestingly, increases in TCORT were correlated with improvements in V̇O2max, which may be a result of aerobic training overload. These findings support a relationship between changes in anabolic and catabolic hormones and changes in performance and body composition. Practical Applications: Findings suggest that biomarker monitoring may be useful for evaluating the impact of training on physiological changes related to player performance. Further, the results of this study support the necessity of an adequate resistance training program throughout the competitive season to offset decrements in strength and FFM. Responses of E2 warrant particular consideration for the female athlete. Acknowledgments: Research supported by Quest Diagnostics.
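The abstract does not specify how delta area under the curve was computed for the serial blood draws; a common convention is a trapezoidal AUC of the change-from-baseline values across the sampling points. A minimal sketch under that assumption, with hypothetical cortisol values:

```python
def delta_auc(times, values):
    """Trapezoidal area under the curve of change-from-baseline values.

    times  -- sampling points (e.g., weeks of the season)
    values -- biomarker concentrations; values[0] is the baseline draw
    """
    baseline = values[0]
    deltas = [v - baseline for v in values]
    area = 0.0
    for i in range(1, len(times)):
        # trapezoid between consecutive draws
        area += (deltas[i] + deltas[i - 1]) / 2.0 * (times[i] - times[i - 1])
    return area

# Hypothetical TCORT values (ug/dL) at weeks 0, 4, 8, 12, 16
weeks = [0, 4, 8, 12, 16]
tcort = [12.0, 14.0, 15.0, 13.0, 16.0]
dauc = delta_auc(weeks, tcort)  # positive: net elevation above baseline
```

Each athlete's DAUC values for the biomarkers would then be correlated (Pearson product-moment) against the corresponding performance and body composition changes.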

Thursday, July 12, 2018, 9:00 AM–9:15 AM


The Effect of Caffeine on Time Perception During Exercise in a Hot, Humid Environment

C. Diehl, R. Maceri, S. Martinez, T. Michael, M. Miller, and N. Hanson

Western Michigan University

Introduction: Previous studies have shown that exercise is a physiological disturbance that can influence the experience of time, and that intensity level can greatly affect temporal perception. Caffeine is an ergogenic aid with reliable, positive effects on performance in thermoneutral environmental conditions; however, some studies have shown no benefit during exercise in the heat. What is not known is how the perception of time is affected by caffeine consumption before exercise in the heat. Purpose: To evaluate the effect of caffeine on time perception during a 10 km run in a hot, humid environment. Methods: Ten (4 females, 6 males) healthy, trained runners participated in this study. Subjects (26.2 ± 8.7 years [mean ± SD]; range 18–42) were asked to complete 4 laboratory visits, with the first visit entailing familiarization and a V̇O2max test. The following 3 visits included self-paced 10 km runs on a laboratory treadmill in an environmental chamber at 30.6 °C and 50% relative humidity. Subjects were told to complete the 10 km run as quickly as they could. Condition order was randomized and consisted of a placebo (PLA), a 3 mg·kg−1 body mass caffeine dose, and a 6 mg·kg−1 caffeine dose. Subjects were blinded to the condition; powdered caffeine was weighed and mixed with an orange drink to mask the flavor. Time estimations of 3, 7, and 20 seconds were taken at the 4 and 8 km distance points. Time estimation ratios (TERs) were calculated by dividing the subject's estimate by the objective time (e.g., 6.4/7 seconds ≈ 0.91); ratios under 1.0 indicate that time felt as though it was passing relatively slowly, whereas ratios above 1.0 indicate the opposite. A 2 (distance) × 3 (caffeine condition) × 3 (time estimation) repeated-measures ANOVA was used to determine the effect of distance and caffeine on time perception, and a one-way ANOVA was used to determine the effect of caffeine condition on 10 km run time. 
Results: There was not a difference between caffeine conditions in 10 km run time (PLA: 53.2 ± 8.0, 3 mg·kg−1: 53.4 ± 8.4, 6 mg·kg−1: 52.7 ± 8.2 minutes; p = 0.575). There was not a main effect of distance on TERs (4 km: 0.86 ± 0.06 [mean ± SE], 8 km: 0.86 ± 0.05; p = 0.820). There was also not a main effect of caffeine condition on TERs (PLA: 0.85 ± 0.06, 3 mg·kg−1: 0.88 ± 0.05, 6 mg·kg−1: 0.85 ± 0.05; p = 0.473), and no significant interactions were found. Conclusions: There was not a significant difference in 10 km completion time between caffeine conditions, indicating that overall relative intensity was the same and that subjects likely used similar pacing strategies regardless of caffeine condition. This corroborates previous research showing that intensity is a major determinant of time perception during exercise, regardless of environmental conditions or caffeine dosage. Practical Applications: This study suggests that caffeine does not positively affect performance in a hot, humid environment; however, our results also show that it does not negatively affect the perception of time. Acknowledgments: This study was funded by an internal university grant.
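The TER computation described in the Methods is simple enough to express directly; the example interval below uses the same 6.4-second estimate of a 7-second interval given in the abstract:

```python
def time_estimation_ratio(estimated_s, objective_s):
    """TER: the subject's verbal estimate divided by the true interval length.

    TER < 1.0 means time felt as though it was passing relatively slowly;
    TER > 1.0 means it felt as though it was passing quickly.
    """
    return estimated_s / objective_s

ter = time_estimation_ratio(6.4, 7.0)  # ~0.91, an underestimate of elapsed time
```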

Thursday, July 12, 2018, 9:15 AM–9:30 AM


The Effects of Resistance Training on Motor Unit Action Potential Amplitudes vs. Recruitment Threshold Relationships During Repetitive Contractions

A. Sterczala,1 J. Miller,2 M. Wray,2 H. Dimmick,2 M. Trevino,3 and T. Herda2

1 University of Pittsburgh; 2 University of Kansas; and 3 Armstrong State University

Purpose: To determine the effects of an 8-week resistance training program on motor unit action potential amplitude (MUAPAMP) vs. recruitment threshold (RT) relationships during repetitive (Rep) contractions. Methods: Nine recreationally active men (20.6 ± 0.9 years; 177.5 ± 7.1 cm; 78.3 ± 9.5 kg) completed 3 lower-body resistance-training sessions per week for 8 weeks. Exercise intensities and volumes were programmed according to a linear periodization model. Pre- and post-training, subjects performed 2 consecutive isometric knee extensions at 40% of their pre-training maximal voluntary contraction (MVC) torque. A 5-pin electromyographic (EMG) sensor array was placed over the vastus lateralis. EMG signals collected during the contractions were decomposed to yield action potentials of single MUs. The RTs and MUAPAMPs were calculated for each MU. Slopes and y-intercepts were calculated for each subject's linear MUAPAMP vs. RT relationships. Potential differences in the slopes and y-intercepts were analyzed via 2 separate 2-way repeated-measures ANOVAs (time [pre vs. post-resistance training] × repetition [Rep 1 vs. Rep 2]) with paired-samples t-test post hoc analyses. Results: For the slopes, there was a significant 2-way interaction (p = 0.009). Post-resistance training, subjects demonstrated significantly smaller slopes during Rep 2 (0.00157 ± 0.00053 mV/%MVC) than Rep 1 (0.00201 ± 0.00070 mV/%MVC) (p = 0.010). Pre-training, subjects demonstrated similar slopes for Rep 1 (0.00187 ± 0.00060 mV/%MVC) and Rep 2 (0.00190 ± 0.00053 mV/%MVC) (p = 0.737). Finally, there were no differences in the slopes between pre- and post-resistance training for Rep 1 (p = 0.534) or Rep 2 (p = 0.202). For the y-intercepts, there was no significant 2-way time × Rep interaction (p = 0.307) and no main effects for time (p = 0.306) or Rep (p = 0.541). Discussion: Since MUAPAMP is correlated with the size of the MU and its constituent muscle fibers, the slope from the MUAPAMP vs. 
RT relationship indicates the size of the MU recruited at a given torque level. The decrease in slope from Rep 1 to Rep 2 observed post-resistance training suggests a delay in MU recruitment such that a MU of a given size was recruited at a higher relative torque during Rep 2. The delayed MU recruitment suggests that, post resistance-training, subjects were able to produce the same force with fewer active MUs during Rep 2. A resistance training-related increase in post-activation potentiation may explain how subjects could produce the same torque with fewer active MUs. Practical Applications: The ability to produce similar torque with fewer active MUs would reduce the neural cost of a muscular contraction, thus increasing resistance to fatigue during repetitive tasks.
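The per-subject slopes and y-intercepts discussed above come from an ordinary least-squares fit of MUAPAMP against recruitment threshold. A sketch of that fit; the motor unit values below are illustrative, not the study's data:

```python
def least_squares_line(x, y):
    """Ordinary least-squares slope and y-intercept for y = slope*x + intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical decomposed motor units from one contraction:
# recruitment thresholds (%MVC) and action potential amplitudes (mV)
rt = [10, 20, 30, 40]
amp = [0.02, 0.04, 0.06, 0.08]
slope, intercept = least_squares_line(rt, amp)  # slope in mV/%MVC
```

A smaller slope for the same contraction, as reported for Rep 2 post-training, implies that a motor unit of a given amplitude (size) sits at a higher recruitment threshold on the fitted line.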

Thursday, July 12, 2018, 9:30 AM–9:45 AM


Kinetic Analysis of Unilateral Landings in Female Volleyball Players After a Dynamic and Combined Dynamic-Static Warm-up

J. Avedesian,1 L. Judge,2 H. Wang,2 and C. Dickin2

1 University of Nevada, Las Vegas; and 2 Ball State University

Introduction: A warm-up is an important period before training or competition to prepare an athlete for the physical demands of subsequent activity. Prior research has extensively focused on the effects of warm-up on various jumping performance attributes; however, limited research has examined the biomechanical nature of landings in volleyball following common warm-up practices. Purpose: To determine whether the addition of static stretching to a dynamic warm-up affects the kinetics of the lower extremity during volleyball-specific dominant and non-dominant unilateral landing tasks in female volleyball players. Methods: Twelve female, collegiate-level volleyball players (age: 19.8 ± 1.2 years; height: 1.72 ± 0.06 m; mass: 70.9 ± 5.4 kg; volleyball playing experience: 9.8 ± 2.6 years) performed unilateral landings on the dominant and non-dominant limb before and after dynamic (DWU) and combined dynamic-static (CDS) warm-ups. Kinetic variables of interest were measured at the hip and knee during the landing phase of a volleyball-simulated jump-landing maneuver. Separate repeated-measures MANOVA tests were performed, with follow-up simple contrasts to determine specific statistical differences. Results: A significant 3-way interaction (warm-up × limb × time) for peak internal knee adduction moment was observed, as this kinetic parameter significantly increased (p = 0.013; d = 0.79) in the non-dominant limb at 1 minute post CDS warm-up. No other warm-up differences were detected; however, a significant limb × time interaction effect was observed for vertical ground reaction force (vGRF). Specifically, dominant limb vGRF decreased from pre warm-up to 15 minutes post warm-up (p = 0.046; d = 0.24). Main effects of limb were determined for dominant limb hip abduction moment (p < 0.001; d = 1.32), dominant knee internal rotation moment (p < 0.001; d = 1.88), and non-dominant knee external rotation moment (p < 0.001; d = 1.86). 
Conclusions: Following a CDS warm-up, female volleyball athletes performing unilateral landings displayed non-dominant frontal plane knee kinetics that may increase the risk of a non-contact anterior cruciate ligament injury compared to a DWU. A reduction in vGRF during dominant limb landings at 15 minutes post warm-up lends support to incorporating jump-landing exercises into a volleyball warm-up. Significant kinetic differences in non-dominant limb landings that may increase the injury risk may be attributed to altered hip and trunk mechanics during the landing maneuver. Practical Applications: While more research regarding the effects of warm-up on landing biomechanics is needed, volleyball coaches and strength and conditioning professionals should incorporate a volleyball-specific DWU prior to activity to reduce lower extremity injury risk. Consideration must also be given to the increased risk associated with unilateral landings, therefore emphasis should be placed on landing technique, trunk/core stability, and posterior chain muscle strengthening to minimize injury during a landing maneuver.

Thursday, July 12, 2018, 9:45 AM–10:00 AM


Changes in Gluteus Maximus and Biceps Femoris Muscle Activation During the Glute-Ham Raise

M. Cuthbert,1 J. McMahon,1 N. Ripley,1 T. Suchomel,2 and P. Comfort1

1 University of Salford; and 2 Carroll University

Introduction: The hip extensors (gluteus maximus [GMax] and hamstrings) and knee flexors (hamstrings) play an important role in enhancing performance and reducing injury risk during athletic tasks that require rapid force development and absorption. It is important, therefore, to accurately determine the level of activation of these muscles, via electromyography (EMG), during resistance training exercises and to examine the changes across different phases of an exercise to inform exercise prescription. Although researchers have previously reported very high normalized EMG values (100–200% maximum voluntary isometric contraction [MVIC]) for the hamstrings during the razor curl, these values may be elevated due to the use of suboptimal normalization techniques. The glute-ham raise (GHR) is an alternative exercise to train the GMax and hamstring muscles; however, no study has compared EMG amplitude across the phases of the exercise. Purpose: To determine the changes in EMG of the GMax and biceps femoris (BF) across the phases (phase 1—knee extension; phase 2—hip flexion; phase 3—hip extension; phase 4—knee flexion) of the GHR. Methods: Subjects (n = 11; age = 23 ± 4 years; height = 175.95 ± 6.9 cm; mass = 75.15 ± 9.65 kg) had EMG electrodes placed on the GMax and BF muscles in accordance with SENIAM guidelines. Subjects performed 3 MVIC trials during knee flexion and hip extension using an isokinetic dynamometer in order to normalize the EMG during the 4 phases of the GHR. EMG data were analyzed in a bespoke Excel spreadsheet, with the 4 phases identified using a threshold of the mean + 2 standard deviations of the EMG acquired during periods of residual EMG. GMax activation was normalized to hip extension MVIC, while BF activation was normalized to hip extension MVIC for phases 2 and 3 and to knee flexion MVIC for phases 1 and 4. 
Data were compared across phases using a one-way ANOVA with Bonferroni post hoc analyses, or the non-parametric equivalent. Cohen's d effect sizes were also calculated to determine the magnitude of differences between phases. An a priori alpha level was set at p < 0.05. Results: The highest peak EMG for the GMax and BF occurred during phase 4, which was significantly greater than all other phases for the BF (p ≤ 0.001) and greater than phases 2 and 3 for the GMax (p ≤ 0.001) (Table 1). Mean GMax EMG was significantly (p < 0.001) lower in phase 2 compared to all other phases (Table 1). Mean BF EMG was greatest during phase 3, although this was only significantly greater (p < 0.001) than during phase 2 (Table 1). Conclusions: The GHR leads to near-maximal activation of the BF (>75% MVIC) in all but phase 2, whereas submaximal activation of the GMax is evident across all phases. Practical Applications: The GHR may be beneficial in maximizing activation of the BF, especially during phase 4, but likely provides a poor training stimulus for the GMax.
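The phase-identification rule described in the Methods (a threshold of mean + 2 standard deviations of residual EMG) can be sketched as a simple onset detector. Whether the SD was a population or sample estimate is not stated, so the population form is assumed here, and the sample values are hypothetical:

```python
def onset_threshold(residual_emg):
    """Activation threshold: mean + 2 population SDs of resting (residual) EMG."""
    n = len(residual_emg)
    mean = sum(residual_emg) / n
    sd = (sum((v - mean) ** 2 for v in residual_emg) / n) ** 0.5
    return mean + 2 * sd

def active_flags(rectified_emg, threshold):
    """Mark samples whose rectified amplitude exceeds the threshold."""
    return [v > threshold for v in rectified_emg]

# Hypothetical residual (resting) EMG samples in mV
thr = onset_threshold([0.0, 2.0, 0.0, 2.0])  # mean 1, SD 1 -> threshold 3
```

Contiguous runs of supra-threshold samples would then be mapped onto the 4 phases of the lift.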

Thursday, July 12, 2018, 10:00 AM–10:15 AM


Effect of Verbal Cue on Muscle Activation of the Lower Trapezius During the “Y” Exercise Performed at Different Angles

W. Jennings,1 D. Arndts,2 Z. Dechant,2 D. Plum,2 E. Valle,2 R. Fritz,2 A. Askow,2 J. Stone,2 and J. Oliver2

1 The Sport Science Center at Texas Christian University (TCU); and 2 Texas Christian University

Overhead athletes are inherently dependent on shoulder health and strength. One of the most common injuries among overhead athletes is subacromial impingement, which is associated with dysfunctional scapular activation via a hindered lower trapezius (LT) activation sequence. Practitioners responsible for training overhead athletes should therefore focus on strengthening and activating the LT to reduce the risk of injury, specifically subacromial impingement. The "Y" exercise has been reported to produce the greatest amount of LT electromyographic (EMG) activity among common scapular exercises, and this activity can be accentuated by altering positions and surfaces. Further, previous studies have reported that verbal cues successfully increase desired muscle activation in a variety of exercises. Purpose: To examine the effect of changing positions during the "Y" exercise, with and without cueing, on muscle activation of the LT. Methods: Twelve (Mean ± SD; age = 20 ± 2 years; mass = 75.1 ± 15.2 kg; height = 174.5 ± 9.3 cm) healthy male (n = 5) and female (n = 7) competitive overhead athletes completed a maximal voluntary isometric contraction (MVIC) of the LT and upper trapezius (UT). Next, athletes performed the "Y" exercise for sets of 5 repetitions in a randomized order of prone positions (table [TAB], incline bench [INC], exercise ball [BAL], and floor [FLO]) and cueing conditions (no cue [NO], internal cue [INT], and external cue [EXT]). Cues were explained prior to the start of each set and delivered verbally during repetitions to identify the start of each phase (ascent, hold, and descent). Muscle activity (EMG) of the upper and lower trapezius was analyzed as the absolute integral of the middle 3 repetitions using repeated-measures analysis of variance. Results: A significant effect of cue was observed (p = 0.004). Both the INT and EXT cues elicited higher activity of the UT and LT compared to no cue (p = 0.009 and p = 0.013, respectively). Further, the difference among prone positions approached significance (p = 0.074). 
The highest LT activity was observed for FLO (Mean ± SE; 72.0 ± 0.0 %MVIC), which produced greater activity than the TAB (65.7 ± 0.1 %MVIC; p = 0.040) and the INC (56.7 ± 0.1 %MVIC). The BAL produced the second highest LT muscle activity (68.1 ± 0.1% MVIC), but this was only greater than the INC (p = 0.008). Conclusions: These data indicate that LT activation is higher when INT and EXT cues are utilized. Further, the greatest LT activity during the “Y” exercise occurs when performed on the FLO and BAL. Practical Applications: Practitioners should incorporate the “Y” exercise in the prone position on the FLO or BAL and should use verbal cues, either INT or EXT, to maximize activation of the LT.

Thursday, July 12, 2018, 10:15 AM–10:30 AM


Effects of Placebo vs. Open-Label Placebo Supplementation on Fatigue Resistance During Repeated Maximal Strength Testing: Preliminary Results

A. Swafford, M. Stock, D. Kwon, R. MacLennan, and D. Fukuda

University of Central Florida

Placebo effects have been well studied in a variety of disciplines, but relatively few investigators have examined their ability to delay neuromuscular fatigue during maximal contractions. In addition, recent clinical trials in patient populations have reported reduced subjective pain ratings following consumption of open-label placebos. This would suggest that deception may not be necessary for placebos to have benefits. Purpose: To compare the effects of placebo vs. open-label placebo supplementation on fatigue resistance during repeated maximal strength testing. Methods: Nine untrained subjects (4 males, 5 females; mean ± SD age = 23 ± 3 years, mass = 63.7 ± 12.8 kg, height = 1.64 ± 0.07 m) participated in this study. The subjects visited the laboratory on 4 separate occasions, the first of which was a familiarization. All visits were at the same time of day, and the time between sessions was ≥48 hours but <1 week. Laboratory conditions were kept constant throughout the study, and subjects were asked to keep their physical activity levels, dietary habits, and caffeine consumption consistent. Upon arrival to the laboratory, the subjects consumed a placebo supplement, an open-label placebo supplement, or no intervention (control). For the placebo trial, the subjects were told that they would be consuming a dietary supplement that previous researchers had deemed to be effective for improving strength and delaying fatigue, and they would “feel” more energetic. For the open-label trial, the subjects were informed that the intervention would have no effects and they should not expect improved performance. Each intervention required the subjects to consume 2 capsules, both of which contained flour. A 15-minute period of quiet rest separated consumption of the capsules and testing. The order of the 3 trials was randomized and counterbalanced. 
Testing required the subjects to perform 20 six-second maximal voluntary isometric contractions (MVICs) of the dominant knee extensors, each separated by 3 seconds of rest (i.e., 6 seconds "on," 3 seconds "off"). Visual feedback of each subject's torque was provided on a computer monitor. Peak torque for each MVIC was normalized to the highest value within the protocol (%). A repeated-measures analysis of variance and effect size statistics were used to examine differences in the linear slope coefficient for the decline in normalized peak torque (%/MVIC #). Results: The mean ± SD slopes were: placebo = −1.11 ± 1.04; open-label = −1.26 ± 1.05; control = −1.40 ± 1.16. There were no mean differences among interventions (F = 0.737, p = 0.457). The partial eta squared was 0.084, suggesting that a moderate amount of variability in the slopes was due to the intervention. Evaluation of Cohen's d statistics revealed: (a) placebo vs. control = 0.27; (b) placebo vs. open-label = 0.14; (c) open-label vs. control = 0.13. Conclusions: Based on these preliminary data, there were no mean differences in fatigue resistance among the placebo, open-label placebo, and control conditions. However, evaluation of the effect size statistics suggests that placebo supplementation may offer a slight advantage. Practical Applications: The role of placebo and open-label placebo supplementation in delaying neuromuscular fatigue is worthy of scientific inquiry. Our data suggest that placebo supplementation may result in a small improvement in fatigue resistance.
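The two statistics reported above, the linear slope coefficient of the torque decline and Cohen's d between conditions, can be sketched as follows. The torque series is hypothetical, and Cohen's d is computed with a pooled standard deviation, which the abstract does not specify:

```python
def fatigue_slope(normalized_torque):
    """Least-squares slope of normalized peak torque (%) vs MVIC number."""
    x = list(range(1, len(normalized_torque) + 1))
    n = len(x)
    mx = sum(x) / n
    my = sum(normalized_torque) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, normalized_torque))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx  # %/MVIC; more negative = faster fatigue

def cohens_d(a, b):
    """Cohen's d between two conditions using the pooled SD (sample variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    pooled = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled

# Hypothetical normalized peak torques (%) over 4 of the 20 MVICs
slope = fatigue_slope([100, 98, 96, 94])  # -2.0 %/MVIC
```

A less negative slope under one intervention than another indicates slower torque decline, i.e., greater fatigue resistance.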

Thursday, July 12, 2018, 12:00 PM–1:30 PM


No Changes in Body Composition or Metabolism Over the Course of a Cross Country Season in Female NCAA Division I Cross Country Runners

D. Hooper,1 J. Mallard,1 K. Beckmann,1 T. Orange,1 K. Coyle,1 K. Conway,1 G. Pujalte,2 and J. Wight1

1 Jacksonville University; and 2 Mayo Clinic

Purpose: Athletes who participate in sports that emphasize leanness have long been known to be prone to low energy availability. Potential consequences of low energy availability include low body mass and body fat percentage (%BF), which long term could lead to symptoms of the female athlete triad, including amenorrhea and low bone density. In addition, as a compensatory measure, the body may reduce metabolic rate in order to help reduce the energy deficit and its negative consequences. The purpose of this study, therefore, was to assess changes in body composition and metabolism of women over the course of a cross-country season, as these athletes are particularly vulnerable to low energy availability. Methods: Seven female NCAA Division I cross-country runners (age: 20.8 ± 1.5 years, height: 169.6 ± 5.7 cm, weight: 58.3 ± 4.1 kg) were assessed at the beginning (PRE) and end (POST) of the cross-country season. Assessments included body composition (%BF) by air displacement plethysmography, body mass by calibrated scale, and resting metabolic rate (RMR) by indirect calorimetry; measured RMR was also compared to predicted RMR from the Harris-Benedict equation (MRMR:PRMR). In addition, a resting blood sample was taken and analyzed for triiodothyronine (T3) via enzyme-linked immunosorbent assay (ELISA). Results: There were no significant differences in body mass from PRE to POST (PRE: 58.3 ± 4.1 vs. POST: 59.1 ± 5.1 kg) or %BF (PRE: 15.7 ± 4.6 vs. POST: 16.4 ± 5.1%). There were also no significant differences from PRE to POST for RMR (PRE: 1,466 ± 123 vs. POST: 1,410 ± 66 kcal·d−1), MRMR:PRMR (PRE: 1.1 ± 0.1 vs. POST: 1.1 ± 0.1), or T3 (PRE: 307.4 ± 112.3 vs. POST: 373.3 ± 114.0 ng·dl−1). In addition, MRMR:PRMR did not drop below 0.9 in any athlete at any point. Conclusions: Despite the rigorous training associated with competing in Division I cross-country, a single season did not induce significant changes in body composition or multiple assessments of metabolic rate. 
Practical Applications: It is possible for female athletes to complete a cross-country season without significant body mass reductions. Thus, these women were able to avoid the potential negative effects of low energy availability on metabolism. Although symptoms of the female athlete triad should be consistently evaluated in distance runners, this study demonstrates that when these variables are closely monitored, it is possible to compete in this sport without inducing a significant energy deficit.
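The MRMR:PRMR ratio used above pairs measured RMR with the Harris-Benedict prediction. A sketch using the original Harris-Benedict coefficients for women and the cohort's mean PRE values; the 0.90 cutoff is the screening threshold the abstract references:

```python
def harris_benedict_female(mass_kg, height_cm, age_yr):
    """Predicted RMR (kcal/day), original Harris-Benedict equation for women."""
    return 655.1 + 9.563 * mass_kg + 1.850 * height_cm - 4.676 * age_yr

def rmr_ratio(measured_kcal, predicted_kcal):
    """MRMR:PRMR; values below ~0.90 may flag metabolic suppression."""
    return measured_kcal / predicted_kcal

# Cohort mean PRE values from the abstract
predicted = harris_benedict_female(58.3, 169.6, 20.8)  # ~1,429 kcal/day
ratio = rmr_ratio(1466, predicted)                     # ~1.03, above the 0.90 cutoff
```

A ratio near or above 1.0, as these runners maintained all season, is consistent with the conclusion that no compensatory metabolic suppression occurred.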

Thursday, July 12, 2018, 12:00 PM–1:30 PM


The Acute Effect of Miniature Trampoline Usage on Muscle Activation During Vertical Jumping

C. Hernandez,1 N. Rhouni,2 M. Reid,1 D. Favela,1 J. Ramirez,1 E. Corella,1 T. Gillum,2 J. Coburn,3 and N. Dabbs1

1 California State University, San Bernardino; 2 California Baptist University; and 3 Biochemistry and Molecular Exercise Physiology Laboratory, Center for Sport Performance, California State University, Fullerton

Introduction: Muscle activation plays an important role in an individual's performance. Muscle fatigue is the decline in the ability of a muscle to generate force. When individuals warm up, they must consider whether the warm-up will be beneficial without causing muscle fatigue. The use of a miniature trampoline for exercise is becoming more popular around the world. Using a trampoline may be a beneficial way to warm up without causing muscular fatigue. Purpose: The purpose of this study was to investigate the acute effects of miniature trampoline usage on muscle activation during vertical jumping in recreationally trained males and females. Methods: Twenty-one recreationally trained individuals (age = 23.3 ± 2.08 years; height = 170.7 ± 9.3 cm; weight = 70.1 ± 11.1 kg) volunteered to participate in 2 testing days. On the first visit (familiarization day), the participants read and signed an IRB-approved informed consent, a PAR-Q, and a health questionnaire. Following anthropometric measurements, participants were randomized into a control group (CG) or trampoline group (TG). Participants then performed a dynamic warm-up and were familiarized with the equipment; those in the TG practiced jumping on the miniature trampoline until they felt comfortable with the 6 trampoline jumps. Following at least 24 hours, participants returned for the second testing day. Electromyography (EMG) sensors were placed on the following muscles of the dominant right leg: vastus lateralis (VL), vastus medialis (VM), biceps femoris (BF), medial gastrocnemius (MG), and tibialis anterior (TA). Participants completed a dynamic warm-up followed by 3 countermovement vertical jumps (CMVJs); muscle activation was recorded and analyzed in a custom LabView program. The TG then performed 6 jumps as high as possible on the miniature trampoline, while the CG rested for 20 seconds. 
Immediately after, participants were reassessed for EMG muscle activity during CMVJs. Muscle activity from the best jump trial was used for pre and post measures, and root mean square (RMS) values were calculated. Percent change scores were calculated for all muscles and used for analysis. Independent t-tests were used to compare percent change scores between TG and CG for each muscle. Results: There was no significant difference between TG and CG for percent change in EMG muscle activation in VL (p = 0.39; %change-Control = 7.2 ± 19.4%; %change-Trampoline = 22.6 ± 54.8%), VM (p = 0.71; %change-Control = 5.9 ± 31.9%; %change-Trampoline = 12.9 ± 59.04%), MG (p = 0.917; %change-Control = 17.8 ± 34.5%; %change-Trampoline = 15.3 ± 51.6%), TA (p = 0.20; %change-Control = −13.5 ± 18.6%; %change-Trampoline = 3.2 ± 37.01%), or BF (p = 0.69; %change-Control = −6.5 ± 26.1%; %change-Trampoline = −1.2 ± 35.2%). Conclusions: The results show that there was no significant difference between the trampoline and control groups in any of the muscles tested. The amount of time the participants jumped on the miniature trampoline may not have been sufficient for the selected muscles to increase muscle activation. Had the participants jumped for longer, muscle activation may have differed between the 2 groups. Practical Applications: Practitioners may use these results with the knowledge that performing 6 jumps on a trampoline will not fatigue the muscles before a jumping activity. This may be a new exercise to include in a warm-up without risking fatigue.
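The RMS, percent-change, and independent t-test analysis described above can be sketched as follows. This is a minimal illustration using standard formulas and hypothetical EMG values, not the study's data:

```python
import math
from statistics import mean, stdev

def rms(signal):
    """Root mean square amplitude of an EMG signal segment."""
    return math.sqrt(sum(v * v for v in signal) / len(signal))

def percent_change(pre, post):
    """Percent change from pre to post measure."""
    return (post - pre) / pre * 100.0

def independent_t(a, b):
    """Student's independent-samples t statistic (pooled variance)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical per-subject percent-change scores (%) for one muscle
control = [7.2, -3.1, 12.4, 5.5]
trampoline = [22.6, 10.3, 31.8, 18.1]
t_stat = independent_t(trampoline, control)
```

The t statistic would then be compared against a t distribution with n1 + n2 − 2 degrees of freedom to obtain the p-values reported above.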

Thursday, July 12, 2018, 12:00 PM–1:30 PM


The Motor Unit Mean Firing Rate vs. Recruitment Threshold Relationship is Unaffected by Strength and Conditioning Participation in Middle-School Boys

R. MacLennan,1 M. Stock,1 J. Mota,2 and B. Thompson3

1 University of Central Florida; 2 University of North Carolina at Chapel Hill; and 3 Utah State University

The relative lack of anabolic hormones observed in young athletes has led many investigators to postulate that improvements in muscle strength during strength and conditioning participation are a result of motor unit adaptations. While intuitive, there is little empirical evidence to support this idea. Purpose: To examine changes in the mean firing rate vs. recruitment threshold relationship following strength and conditioning programming in middle-school boys. Methods: Thirty-six physically active middle-school-aged boys enrolled in this study, but due to attendance requirements and our laboratory's motor unit inclusion/exclusion criteria, data from 14 are presented herein. Nine boys (mean ± SD age = 12 ± 1 year; body mass index [BMI] = 20.8 ± 3.6 kg·m−2) partook in an after-school strength and conditioning program for 16 weeks. Five boys served as controls (age = 13 ± 1 year; BMI = 18.8 ± 4.4 kg·m−2). The training subjects performed 45 minutes of supervised high-intensity plyometric, speed, and agility training, followed by 45 minutes of multi-joint barbell exercises (e.g., back squats, deadlifts), twice per week for 16 weeks. Prior to and following the intervention, the subjects performed maximal voluntary isometric contraction (MVIC) testing and trapezoidal contractions at 50% and 80% MVIC of the right knee extensors. Posttests assessed both the original absolute pretest force levels and percentages based on the new MVIC. Bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis with an array sensor during the submaximal contractions. A surface EMG signal decomposition algorithm was used to decompose the signals into their constituent motor unit action potential trains. Motor units with decomposition accuracy <92.0% were not analyzed. Following the decomposition procedure, the recruitment threshold (% MVIC) and mean firing rate (pulses per second [pps]) of each motor unit were determined.
Linear regression was used to quantify the slope (pps/% MVIC) and y-intercept (pps) of the mean firing rate vs. recruitment threshold relationship for each subject and time point. The data were examined with mixed factorial analyses of variance and effect size statistics. Results: The increase in MVIC force for the training group was small (Cohen's d = 0.23), with no group × time interaction and no main effects for group or time. Despite a thorough familiarization session, we found that many subjects had a difficult time performing the submaximal contractions in accordance with the visual trajectories. One thousand one hundred twenty-five motor units were studied. The mean ± SD number of motor units analyzed for the 50% and 80% MVIC test was 16 ± 5 and 12 ± 4, respectively. Analysis of the slopes and y-intercepts revealed no interactions and no main effects for time, force level, or group. Conclusions: Sixteen weeks of strength and conditioning participation in middle-school boys resulted in only a slight increase in knee extensor MVIC strength and no meaningful changes in submaximal motor unit recruitment and firing rate behavior for the vastus lateralis. Practical Applications: While motor unit adaptations may play a role in increasing muscle strength during youth, our data revealed no such evidence. We believe that the lack of training (dynamic multi-joint) vs. testing (single-joint isometric) specificity may partially explain these findings. Acknowledgments: This study was funded by the NSCA Foundation as part of the Young Investigator Grant program.
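The per-subject linear fit of mean firing rate against recruitment threshold can be computed with ordinary least squares. A minimal sketch with hypothetical motor-unit data (the variable values are illustrative, not from this study):

```python
from statistics import mean

def firing_rate_fit(thresholds, rates):
    """Ordinary least-squares slope (pps/%MVIC) and y-intercept (pps) of the
    mean firing rate vs. recruitment threshold relationship."""
    mx, my = mean(thresholds), mean(rates)
    sxy = sum((x - mx) * (y - my) for x, y in zip(thresholds, rates))
    sxx = sum((x - mx) ** 2 for x in thresholds)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical motor units decomposed from one subject's contraction:
thresholds = [5.0, 12.0, 20.0, 31.0, 42.0, 55.0]  # recruitment threshold, %MVIC
rates = [22.1, 19.4, 17.0, 14.2, 11.8, 9.5]       # mean firing rate, pps
slope, intercept = firing_rate_fit(thresholds, rates)
```

Because higher-threshold motor units typically fire at lower mean rates, the slope is expected to be negative; changes in slope or y-intercept across time points are what the mixed ANOVAs evaluated.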

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Joint Power Contribution in Three Loading Configurations During the Vertical Jump

J. Morgan, A. Jagodinsky, M. Torry, A. Maeda, S. Pollalis, and L. Hileman

Illinois State University

Introduction: Improving power output of the lower extremity is a common goal for many strength and conditioning programs. Specifically, loaded vertical jumps have been shown to increase vertical jump performance in a training environment. There is a lack of research investigating the effects of different loading configurations on joint power output during jump training. Such research is needed to understand how various loading configurations influence the ability to maximize total lower extremity power output during jump training. Purpose: The purpose of this study was to evaluate joint power contributions to peak lower extremity power during 3 loaded vertical jump conditions, compared to an unloaded control condition. Methods: Ten male volunteers (age: 22.1 ± 1.2 years; height: 1.75 ± 0.05 m; weight: 76.0 ± 10.0 kg) with at least 2 years of experience in weight training performed 5 trials in each condition: kettlebell squat jump (KB), barbell back squat jump (BB), dumbbell squat jump (DB), and unloaded vertical jump (C). Ten percent of participants' one repetition maximum back squat was used for each loaded condition. Ten infrared cameras (200 Hz) and an AMTI force plate (1,000 Hz) captured 3-dimensional segment motion and ground reaction force data, respectively. Moments at the ankle, knee, and hip were calculated during the concentric phase of the movement using an inverse dynamics approach. Joint powers were calculated as the product of the joint moment and joint angular velocity, and net power was considered the sum of joint powers at each time point. Percent contribution of each joint power to the net peak power (%peak) was selected as the dependent variable. A 2-way repeated measures ANOVA (α = 0.05) was employed to assess differences in percent contribution across joints and conditions. Results: A significant within-subjects interaction was observed (F[6, 294] = 6.417, p < 0.001, partial η2 = 0.116).
Post-hoc analysis revealed a significant difference in %peak between ankle (49.44 ± 9.83%) and knee (34.35 ± 9.73%) (p < 0.001), ankle and hip (16.21 ± 9.63%) (p < 0.001), and knee and hip (p < 0.001) across conditions. Additionally, post-hoc analysis of ankle %peak between conditions was significant (F[3, 27] = 5.663, p = 0.004, partial η2 = 0.386), yet pairwise comparisons failed to reach significance following adjustments for multiple comparisons. Conclusions: Significant differences in %peak between joints across all conditions suggest that subjects exhibited a proximal-to-distal sequence of power production, resulting in the ankle contributing the most to net peak power during the concentric phase. The overall significance of ankle %peak between conditions suggests that the various loading configurations had an effect on how much the ankle contributed to net peak power; however, the lack of significance following adjustments for multiple comparisons indicates that a larger sample size may be needed to uncover the nature of the condition effect. Future data collection, building on this sample, may establish the significance of the observed patterns across conditions. Practical Applications: The overall aim of this study was to examine the effect of different loading configurations on joint power characteristics during vertical jump training. If a coach chooses to use loaded vertical jumps to train vertical power in athletes, it is important to understand how this will impact the intended training outcome. Based on the current findings, strength coaches looking to improve ankle contribution during vertical jump training should consider the differential effects that various loading configurations may have on ankle power production.
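The %peak computation described in the methods (joint power as moment × angular velocity, summed to net power, then each joint's share at the instant of net peak power) can be sketched as follows, using hypothetical time series rather than the study's data:

```python
def percent_contribution(moments, angular_velocities):
    """Percent contribution of each joint to net peak power.

    moments / angular_velocities: dicts mapping a joint name to a time
    series (N*m and rad/s). Joint power is moment x angular velocity;
    net power is the sum of joint powers at each time point."""
    powers = {j: [m * w for m, w in zip(moments[j], angular_velocities[j])]
              for j in moments}
    n = len(next(iter(powers.values())))
    net = [sum(p[i] for p in powers.values()) for i in range(n)]
    i_peak = max(range(n), key=net.__getitem__)
    return {j: 100.0 * p[i_peak] / net[i_peak] for j, p in powers.items()}

# Hypothetical concentric-phase samples for one jump trial
moments = {"ankle": [50, 120, 160], "knee": [80, 110, 100], "hip": [90, 70, 40]}
velocities = {"ankle": [1.0, 3.0, 5.0], "knee": [2.0, 3.0, 3.5], "hip": [2.0, 2.5, 2.0]}
shares = percent_contribution(moments, velocities)
```

By construction, the shares sum to 100% at the net-peak instant, which is why %peak is a suitable dependent variable for comparing joints within a condition.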

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Coordination Patterns of the Lower Extremity Joints During Squats

G. Aguirre,1 C. Ortega,1 D. Di Lorenzo,1 T. Cushman,2 and S. Flangan1

1 California State University Northridge; and 2 Moorpark College

Introduction: During a lower extremity extension pattern (triple extension of the hip, knee, and ankle), the muscle-tendon complexes about the hip, knee, and ankle generate energy and deliver it to the segments. Typically, this pattern follows a proximal-to-distal sequence, with peak net joint moment power (NJMP) occurring first at the hip, followed by the knee, and culminating at the ankle. It is hypothesized that the opposite occurs during a flexion pattern. However, to our knowledge, this hypothesis has not been tested. Furthermore, it is unknown how this coordination pattern might change when the extension and flexion patterns are performed to a novel depth. Purpose: The purpose of this investigation was to examine the coordination patterns of the lower-extremity joints during a squat task performed to both a self-selected and a prescribed depth. Methods: Ten subjects performed 12 trials of bodyweight squats to both a self-selected depth (SSD) and a prescribed depth (PD). PD was calculated as 50% of each subject's leg length measured from the ASIS to the medial malleolus. No external cues were given to subjects performing the SSD. Reflective markers were placed on anatomical landmarks on the participant's lower extremity to collect kinematic data, and 2 force plates were used to collect kinetic data for each trial. NJMP was calculated using standard inverse dynamics techniques. Differences in timing between the ankle and knee (A-K) and between the knee and hip (K-H) were calculated and expressed as a percentage of total movement time. Differences between joint pairs (A-K, K-H) and phases were assessed using a 2 × 2 factorial ANOVA with repeated measures for each depth. Results: Coordination patterns between phases of the SSD were not significantly different (interaction p = 0.194), while coordination patterns between phases of the PD were significantly different (interaction p = 0.044).
During the eccentric phase of the PD, ankle-knee relative timing was shorter compared to the concentric phase (p = 0.033). Conclusions: Coordination patterns of the ankle, knee, and hip joints mirrored each other between the concentric and eccentric phases of the SSD. However, coordination patterns between phases of the PD did not mirror each other. Our findings confirm the theorized distal-to-proximal coordination pattern during the eccentric phase of a squat. However, altering squat depth disrupts the timing between phases. Alterations in timing may suggest an increased risk of injury during novel tasks. Practical Applications: Squats should be performed to varying depths on a regular basis so that the timing of peak NJMP during the eccentric phase mirrors that of the concentric phase. Acknowledgment: The authors wish to acknowledge the help of Meital Aulker, Daniel Padilla, Ingrid Cassady, Christine Boktor, and Paul Rodriguez. This project was funded through grants from the National Institutes of Health (NIH) Building Infrastructure Leading to Diversity (BUILD) #8TL4GM118977-02.
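The relative-timing measure described in the methods (the difference between two joints' peak-NJMP times, expressed as a percentage of total movement time) can be sketched as follows. The traces and parameter values are hypothetical, and the sketch assumes peaks are located as simple maxima of the NJMP time series:

```python
def peak_time_index(power):
    """Index of the peak net joint moment power in a time series."""
    return max(range(len(power)), key=power.__getitem__)

def relative_timing(power_a, power_b, fs, movement_time):
    """Timing difference between two joints' NJMP peaks (joint B minus
    joint A), expressed as a percentage of total movement time."""
    dt = (peak_time_index(power_b) - peak_time_index(power_a)) / fs
    return 100.0 * dt / movement_time

# Hypothetical ankle and knee NJMP traces sampled at 100 Hz over a 1.5 s phase
ankle = [0, 1, 2, 5, 9, 4, 1]
knee = [0, 3, 7, 4, 2, 1, 0]
a_k = relative_timing(ankle, knee, fs=100, movement_time=1.5)
```

A negative A-K value under this convention means the knee peaked before the ankle, consistent with a proximal-to-distal sequence in the concentric phase.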

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Are Foot Longitudinal Dimensions Related to Jump Ability in Men and Women?

L. Weiss,1 L. Allison,2 H. Daugherty,1 M. Paquette,1 and D. Powell1

1 The University of Memphis; and 2 Wright Medical Technology, Inc

Foot movements through the sagittal plane contribute to vertical jump performance. However, it remains unclear if variations in foot dimensions independently explain a portion of the differences in jumping ability. Purpose: To determine if gross longitudinal foot measures are predictive of vertical jump performance without arm swing. Methods: Subjects were 60 young adults (31 men, 29 women). This report was based on a subset of variables that may be related to countermovement vertical jump (CMVJ) performance. A Vertec was used to measure CMVJ displacement. Non-normalized gross longitudinal foot dimensions included maximum foot length, calcaneus-to-talocrural joint length, and calcaneus-to-metatarsophalangeal length. Data for these variables were collected with the subject in a unilateral weight-bearing position while elevated on a 40 cm platform. The investigator first identified surface landmarks and then performed the designated measurements using a 12-inch sliding digital caliper with an attached spirit level to facilitate horizontal placement. Other than maximum foot length, measurements were obtained on both medial and lateral sides of the foot and then averaged. Measurements were obtained bilaterally during 2 sessions to establish stability reliability and precision for all variables. However, predictive models were based on first-session data. Stability reliability was determined using an intraclass correlation coefficient (2-way random model), and precision using both the standard error of measurement and the coefficient of variation. Heteroscedasticity of the predictors of jump performance was evaluated using a Koenker test. Linear regression with forced entry was then used to produce predictive models. Results: Average CMVJ displacement for all 60 subjects during the first (S1) and second (S2) sessions was 37.85 cm (10.81) and 37.70 cm (10.27), respectively (ICC = 0.97, SEM = 1.73, CV% = 6.6). 
Average maximum foot length for S1 and S2 was 25.74 cm (1.95) and 25.76 cm (1.95) for the left foot (ICC ≈ 1.00, SEM = 0.1, CV% = 0.5) and 25.73 cm (1.99) and 25.70 cm (1.96) for the right foot (ICC ≈ 1.00, SEM = 0.1, CV% = 0.5). Average calcaneus-to-talocrural joint length for S1 and S2 was 5.94 cm (0.64) and 5.92 cm (0.63) for the left foot (ICC = 0.99, SEM = 0.08, CV% = 1.9) and 5.92 cm (0.64) and 5.94 cm (0.65) for the right foot (ICC = 0.98, SEM = 0.09, CV% = 2.2). Average calcaneus-to-metatarsophalangeal joint length for S1 and S2 was 18.10 cm (1.35) and 18.09 cm (1.33) for the left foot (ICC = 0.99, SEM = 0.05, CV% = 0.4) and 18.06 cm (1.36) and 18.08 cm (1.36) for the right foot (ICC ≈ 1.00, SEM = 0.07, CV% = 0.6). Koenker tests based on either left-side (LM = 1.102, p = 0.78) or right-side (LM = 1.846, p = 0.61) data indicated the presence of homoscedasticity. Two linear models based on the 3 left- and right-side foot lengths produced no significant (p > 0.5) predictors and were not independent of each other. Conclusions: Although the 3 potential explanatory variables were bilaterally reliable and precise, and no issues with heteroscedasticity existed, the predictive linear models included no significant predictors of vertical jump displacement. Practical Applications: Maximum foot length, calcaneus-to-talocrural joint length, and calcaneus-to-metatarsophalangeal joint length in young adult men and women appear to be of little use in explaining vertical jump ability.
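The precision statistics reported above can be computed from the two-session data once the ICC is known: SEM as the score SD scaled by sqrt(1 − ICC), and CV% as the mean within-subject coefficient of variation. A minimal sketch with hypothetical session data (these are common formulations, not necessarily the exact ones the authors used):

```python
import math
from statistics import mean, stdev

def sem(scores, icc):
    """Standard error of measurement: SD of the scores x sqrt(1 - ICC)."""
    return stdev(scores) * math.sqrt(1.0 - icc)

def cv_percent(session1, session2):
    """Mean within-subject coefficient of variation (%) across two sessions."""
    cvs = []
    for a, b in zip(session1, session2):
        m = (a + b) / 2.0
        sd = abs(a - b) / math.sqrt(2.0)  # SD of two observations
        cvs.append(100.0 * sd / m)
    return mean(cvs)

# Hypothetical CMVJ displacements (cm) for 4 subjects in 2 sessions
s1 = [37.0, 42.5, 30.1, 41.8]
s2 = [38.2, 41.9, 31.0, 40.9]
cmvj_sem = sem(s1 + s2, icc=0.97)
cmvj_cv = cv_percent(s1, s2)
```

High ICC values shrink the SEM toward zero, which is why the near-1.00 ICCs above coincide with SEMs of only about a millimeter.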

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Immediate Effect of Mini-Trampoline Jumping on Balance Performance

M. Reid,1 N. Rhouni,2 E. Corella,1 D. Favela,1 C. Hernandez,1 J. Ramirez,1 T. Gillum,2 J. Coburn,3 and N. Dabbs1

1 California State University, San Bernardino; 2 California Baptist University; and 3 Biochemistry and Molecular Exercise Physiology Laboratory, Center for Sport Performance, California State University, Fullerton

Introduction: Jump performance and balance are a crucial part of many sports. Jumping performance improvements have been shown following training programs that include repetitive jumping on trampolines. The use of the trampoline for training has been favored over jumping on hard surfaces because of the decreased risk of injury from landing on high-impact surfaces. Trampoline jumping programs have been shown to improve balance over time, which can be advantageous during high-velocity athletic performance. Purpose: Therefore, the purpose of this study was to investigate the immediate effect of mini-trampoline jumping on dynamic balance. Methods: Twenty-one recreationally trained individuals (age: 23.35 ± 2.13 years, height: 169.98 ± 8.94 cm, body mass: 69.15 ± 8.78 kg) volunteered to participate in 2 days of testing. On day one (familiarization), participants read and signed the IRB-approved informed consent and completed the PAR-Q and a health history questionnaire. Participants were then randomly assigned, by random draw, to either the trampoline group (TG) or the control group (CG). Anthropometrics were measured, followed by a dynamic warm-up. Participants were then familiarized with the Biodex Balance System SD: they stood on the unstable platform for three 20-second trials with 10 seconds of rest between trials. Participants in the TG were then familiarized with the mini-trampoline protocol, which consisted of counter-movement vertical jumps (CMVJ) until they felt comfortable performing jumps on the trampoline. Participants were then reminded to maintain their typical diet and hydration and told not to exercise for 24 hours prior to their next testing session. On day 2, the participants completed the dynamic warm-up and the balance protocol. Anterior-posterior (AP), medial-lateral (ML), and overall stability indices were recorded and used for analysis. The TG completed 6 maximal CMVJs on the trampoline and the CG rested for 20 seconds.
Immediately after either jumping or resting, participants' balance was reassessed. A 2 × 2 (time by group) mixed factor ANOVA was used to analyze group (TG vs. CG) and time (pre vs. post) effects. Results: There were no significant (p = 0.28) interactions between time and group for overall dynamic balance (pre-control = 0.86 ± 0.35; post-control = 0.70 ± 0.39; pre-trampoline = 0.72 ± 0.25; post-trampoline = 0.76 ± 0.32). There was no significant time (p = 0.79) or group (p = 0.55) effect for overall dynamic balance. There were no significant (p = 0.63) interactions between time and group for AP dynamic balance. There was no significant time (p = 0.76) or group (p = 0.45) effect for AP dynamic balance. There were no significant (p = 0.097) interactions between time and group for ML dynamic balance. There was no significant time (p = 0.94) or group (p = 0.44) effect for ML dynamic balance. Conclusions: The results show that there were no effects of trampoline jumping on balance performance. The number of jumps performed on the trampoline was not sufficient to cause an increase or decrease in balance performance. Balance in the TG may have decreased had participants jumped on the trampoline for longer than the 6 maximal CMVJs, possibly due to fatigue. Practical Applications: Since there was no significant change in balance, practitioners may use these results to be confident that performing jumps on a trampoline as part of a dynamic warm-up will not hinder balance during athletic performance.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Age-Related Differences in Isometric, Dynamic, and a Stretch-Shortening Cycle Electromechanical Delay Assessment

M. Magrini,1 A. Barrera-Curiel,1 R. Colquhoun,1 M. Ferrell,1 J. Hernandez-Sarabia,1 P. Tomko,1 N. Jenkins,1 R. Thiele,2 and J. DeFreitas1

1 Oklahoma State University; and 2 Kansas State University

Electromechanical delay (EMD) is defined as the time from muscle excitation to the onset of force production and has been shown to be affected by the aging process. While recent research has compared EMD during isometric and concentric-only contractions, little is known with regard to EMD during a stretch-shortening cycle (SSC). Purpose: The purpose of the current investigation was to examine age-related differences in EMD during isometric, isokinetic, and SSC contractions. Methods: Twelve older (OA: mean ± SD: Age = 74 ± 7 years) and 15 young (YA: mean ± SD: Age = 24 ± 4 years) adults participated in this study. During each maximal isometric and isokinetic EMD assessment, participants were instructed to kick out as hard and as fast as possible against the lever arm of a calibrated isokinetic dynamometer. Dynamic EMD was assessed during 3 isokinetic speeds: 60°·s−1 (EMDSLOW), 180°·s−1 (EMDMED), and 300°·s−1 (EMDFAST). For the SSC EMD (EMDSSC), the reactive leg drop assessment was utilized to elicit the unloaded stretch-shortening cycle. The participants were seated with one leg passively raised and supported by an elastic band held by an investigator. Upon release of the elastic band, the leg fell freely and the participants were instructed to kick back up to the starting position as fast as possible. Surface electromyography (sEMG) was recorded from the vastus lateralis during all conditions. For the isometric and isokinetic EMD assessments, EMD was manually measured from the onset of EMG to the onset of torque production. EMDSSC was manually measured from the onset of EMG to the onset of concentric movement, defined as the first positive degree change. Results: A mixed factorial ANOVA showed that EMD in the EMDSSC condition was significantly longer (p < 0.001) than the isometric and isokinetic EMD measurements (shown as † in Figure 1).
Furthermore, the older adults had significantly longer EMD in each condition (EMDISO [p = 0.008, Cohen's d = 0.88], EMDSLOW [p = 0.004, Cohen's d = 1.02], EMDMED [p < 0.001, Cohen's d = 1.19], EMDFAST [p = 0.004, Cohen's d = 1.02] and EMDSSC [p < 0.001, Cohen's d = 1.33]) (shown as * in Figure 1). Conclusions: OA had significantly longer EMD when compared to YA. Additionally, the EMDSSC took significantly longer than the isometric and dynamic EMD measures and had the greatest effect size. Together, these findings suggest that older adults take longer than younger adults to respond to a change in joint angle and reverse the momentum of the limb. The effect sizes also suggest that EMDSSC may be the most age-sensitive of the EMD measures. Practical Applications: EMD is longer in older than in younger adults. The prolonged EMD in older adults may have important implications for fall risk. In addition, clinicians and practitioners may be able to use EMDSSC as an age-sensitive performance outcome.
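The between-group effect sizes reported above follow the standard Cohen's d formulation with a pooled SD. A minimal sketch with hypothetical EMD values (ms), not the study's data:

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled = math.sqrt(((na - 1) * stdev(group_a) ** 2 +
                        (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical EMD values (ms): older vs. young adults
older = [78.0, 85.0, 92.0, 81.0]
young = [60.0, 66.0, 58.0, 63.0]
d = cohens_d(older, young)  # positive when older adults have longer EMD
```

Values of d near 0.8 and above are conventionally interpreted as large effects, which is how the 0.88 to 1.33 range above supports the age-sensitivity conclusion.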

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Influence of Resistance Load and Sex on Force, Velocity, and Power During the Close Grip Bench Press

J. Merrigan,1 J. White,1 J. Oliver,2 J. Fields,1 and M. Jones1

1 George Mason University; and 2 Texas Christian University

Kinetic (force, power) and kinematic (velocity) responses to upper body resistance exercise by sex are of interest and have not been clearly identified. Purpose: Examine sex-related differences in force (AF), velocity (AV), and power (AP) during incremental loads of the close grip bench press (CGB). Methods: Resistance-trained men (n = 15, mean ± SD: 22.0 ± 4.0 years, 177.2 ± 7.8 cm, 87.5 ± 12.8 kg, 16.9 ± 2.9% body fat (BF), training age: 3.60 ± 1.68 years) and women (n = 15: 25.0 ± 5.0 years, 161.4 ± 8.9 cm, 59.3 ± 7.3 kg, 22.4 ± 4.2% BF, training age: 3.27 ± 1.33 years) participated. Body composition was assessed via dual energy X-ray absorptiometry. Kinetic and kinematic data were recorded at 50, 60, 70, 80, 90, and 95% of self-reported one repetition maximum (1RM) as part of CGB 1RM testing (100%). The bench was positioned on a force plate with 4 cable-position transducers attached to the barbell. Signals were filtered through a zero-lag low-pass Butterworth filter with a cutoff frequency of 20 Hz. AF, AV, and AP were determined using custom-built LabVIEW software. Independent t-tests compared sexes on 1RM CGB and the 1RM CGB per body weight ratio (1RM/BW). Two-way ANOVAs were run to examine the effects of sex and load (%1RM) on AF, AV, and AP. Results: Intraclass correlation coefficients for AF, AV, and AP were 0.98, 0.97, and 0.93, respectively. Men had higher 1RM CGB (115.76 ± 19.67 kg vs. 60 ± 11.52 kg, p = 0.001) and 1RM/BW (1.32 ± 0.12 vs. 1.00 ± 0.22, p = 0.001) than women. There were significant sex × load interactions for AF (p < 0.001), AV (p < 0.001), and AP (p < 0.001). Men had higher AF, AV, and AP than women at every load, except for AV at 95 (p = 0.435) and 100% (p = 0.615) of 1RM. The greatest AF for women and men did not differ from 80 to 100%. In women, AV decreased from 60 to 90%, while in men AV decreased from 70 to 95%. Women experienced no change in AP from 50 to 90%, and AP for men did not change from 50 to 80%.
Conclusions: Load affected AF, AV, and AP in both men and women; however, the pattern of response differed by sex, particularly for AV. In addition, women maintained maximal power from 50 to 90% 1RM, while men maintained power from 50 to 80% 1RM. Practical Applications: There is a need for further investigation into sex differences in acute kinetic and kinematic responses to resistance exercise. Sex differences may necessitate different loading schemes for men and women. For example, prior research with other lifts has demonstrated the need for velocity-based training with men and more strength-focused training with women.
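The average force, velocity, and power variables can be derived from force-plate and transducer signals roughly as follows. This is a simplified sketch with hypothetical samples: barbell velocity is taken as a central difference of transducer displacement, power is time-averaged as the instantaneous force-velocity product, and the 20 Hz Butterworth filtering step described above is omitted:

```python
def central_velocity(displacement, fs):
    """Central-difference barbell velocity (m/s) from displacement (m)
    sampled at fs Hz; output is 2 samples shorter than the input."""
    return [(displacement[i + 1] - displacement[i - 1]) * fs / 2.0
            for i in range(1, len(displacement) - 1)]

def average_kinetics(force, velocity):
    """Average force (N), velocity (m/s), and power (W) over the
    concentric phase; power is averaged as force x velocity."""
    n = min(len(force), len(velocity))
    f, v = force[:n], velocity[:n]
    af = sum(f) / n
    av = sum(v) / n
    ap = sum(fi * vi for fi, vi in zip(f, v)) / n
    return af, av, ap

# Hypothetical concentric-phase samples
pos = [0.00, 0.01, 0.04, 0.09, 0.16]   # barbell displacement (m), fs = 100 Hz
vel = central_velocity(pos, fs=100)
grf = [900.0, 950.0, 1000.0]           # vertical ground reaction force (N)
af, av, ap = average_kinetics(grf, vel)
```

Averaging the instantaneous force-velocity product, rather than multiplying the separate averages, is the usual choice because force and velocity co-vary within the lift.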

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Comparisons of Time to Failure in Different Isometric Fatiguing Muscle Actions

W. Miller and X. Ye

University of Mississippi

Introduction: Sustained isometric muscle actions, either maximal or submaximal, eventually result in muscle fatigue. Task failure during a sustained maximal isometric muscle action (MIMA) can be defined as the point where the isometric strength level falls below a certain percentage of the baseline strength, while during a sustained submaximal isometric muscle action (SIMA), task failure is the point where an individual fails to maintain the target force. For a given percentage (e.g., 50% of the maximal voluntary isometric contraction), it has been demonstrated that the time to failure during the MIMA is shorter than that during the SIMA. Previous studies had a common goal in attempting to determine the neural mechanisms related to fatigue and to examine time to failure. However, less is known regarding how the neuromuscular system responds immediately following MIMA- or SIMA-induced muscle fatigue. Purpose: To examine whether time to failure differs for consecutive fatiguing submaximal contractions performed after a bout of MIMA or SIMA. Methods: Nine physically active individuals, 4 males (Age: 22.25 ± 2.2 years; Weight: 81.95 ± 5.45 kg; Height: 71 ± 1.4 in) and 5 females (Age: 22 ± 3.53 years; Weight: 70 ± 23.09 kg; Height: 66.1 ± 1.52 in), participated in this randomized cross-over pilot study. Participants completed the investigation in 3 visits separated by at least 48 hours. Visit 1 included familiarization with the equipment and procedures. Visits 2 and 3 were randomized for the MIMA or SIMA condition and included a maximal voluntary isometric contraction (MVIC), the fatiguing MIMA or SIMA condition, and repeated post-fatigue submaximal trapezoid contractions (STCs) at 50% MVIC (force increased from 0 to 50% MVIC in 5 seconds, held for 10 seconds, and decreased to 0 in 5 seconds).
For the MIMA, participants began at 100% MVIC, and task failure was the point where force dropped below 50% of the pre-determined MVIC for 3 seconds; the SIMA was identical except that failure was the inability to maintain 50% of the pre-determined MVIC for 3 seconds. The failure criteria for the repeated post-fatigue STCs were identical for visits 2 and 3: participants performed repeated STCs until the 50% MVIC level could not be achieved. Separate paired samples t-tests were conducted to compare time to failure in seconds between the MIMA and SIMA, and between post-MIMA STCs and post-SIMA STCs. Results: Paired samples t-tests showed a statistically significant difference in time to failure between the MIMA (mean ± SD: 45.40 ± 10.34 seconds) and SIMA (82.68 ± 23.38 seconds) conditions (p = 0.001, Cohen's d = 2.06). However, no significant difference in time to failure was found between the post-MIMA STCs (56.90 ± 42.69 seconds) and post-SIMA STCs (74.17 ± 64.22 seconds) conditions (p = 0.361, Cohen's d = 0.32). Conclusions: Sustained submaximal isometric exercise induced a longer time to task failure than maximal exercise did. Practical Applications: Strength level during a sporting event or training session is one of the most important performance markers and usually directly influences sports performance. Even though both maximal and submaximal exercises can induce the same level of force deficit, practitioners should realize that all-out maximal exercise tends to impose a greater burden on the neuromuscular system, which may negatively affect subsequent performance to a greater extent.
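The paired comparison of times to failure can be sketched with standard formulas for the paired t statistic and Cohen's d on the difference scores; the per-subject times below are hypothetical, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t_and_d(x, y):
    """Paired-samples t statistic and Cohen's d computed on the
    per-subject difference scores x - y."""
    diffs = [a - b for a, b in zip(x, y)]
    md, sd = mean(diffs), stdev(diffs)
    t = md / (sd / math.sqrt(len(diffs)))
    return t, md / sd

# Hypothetical time-to-failure (s) per subject: SIMA vs. MIMA
sima = [75.0, 90.0, 82.0, 70.0, 95.0]
mima = [40.0, 52.0, 45.0, 38.0, 50.0]
t, d = paired_t_and_d(sima, mima)  # both positive: SIMA lasts longer
```

Using the SD of the difference scores (rather than a pooled group SD) is what makes this the paired, within-subject form of the effect size.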

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Effect of Different Onset Thresholds on Electromyography Variables of the Biceps Femoris During the Glute-Ham Raise Exercise

N. Ripley, J. McMahon, N. Walker, M. Cuthbert, and P. Comfort

University of Salford

Introduction: Electromyography (EMG) has been regularly used to assess muscular activity during dynamic tasks, including resistance exercises. One problem that currently compromises such research is how the onset of activation is identified, with no consistency across studies. Purpose: To examine the effect of using different onset thresholds on EMG variables. Methods: Resistance-trained individuals (n = 13, age: 23 ± 4 years; mass: 75.15 ± 9.65 kg; height: 1.76 ± 0.07 m) participated in this study. Following a standardized skin preparation, Ag-AgCl electrodes and wireless EMG sensors were attached to the biceps femoris in accordance with SENIAM guidelines; electrodes were attached parallel to the orientation of the muscle fibers, in a bipolar configuration, with an inter-electrode distance of 17.5 mm. Raw EMG data were captured at 1,500 Hz, with high- and low-pass filtering between 10 and 1,000 Hz. Onset thresholds were calculated in a custom Excel spreadsheet using: the standard deviation (SD) of a resting baseline plus the mean baseline EMG (1-, 2-, and 3 × SD + mean), the mean baseline EMG plus an arbitrary value (mean + 0.015 mV), and a percentage (10%) of the peak EMG during the task. Mean and SD were determined for peak EMG amplitude, mean EMG task amplitude, and time of activation onset. Within-session reliability was assessed via intraclass correlation coefficients (ICC) and the coefficient of variation (CV). Acceptable reliability was determined with an ICC ≥ 0.8 and CV < 10%. Standardized differences were calculated using Cohen's d effect sizes. Multiple one-way repeated measures analyses of variance with Bonferroni post hoc analyses were used to determine differences in EMG variables between different onset thresholds. An a priori alpha level was set at p ≤ 0.05. Results: Different onset thresholds had no effect on either initial peak or mean EMG amplitudes, which therefore were not taken forward for further analysis (Table 1).
Pairwise comparisons between 1 × SD + mean baseline vs. 10% peak task and between mean baseline + 0.015 mV vs. 10% peak task identified a significant delay in the time of activation when using 10% peak task. All other pairwise comparisons showed no significant differences. Conclusions: High levels of reliability were found when measuring EMG amplitudes. The highest reliability for time of activation was found for mean baseline + 0.015 mV and 10% of peak task, which is understandable because these thresholds do not take the SD into account. Different onset thresholds resulted in no differences between EMG amplitudes and no significant differences in time of activation for all but 2 pairwise comparisons. Practical Applications: The onset threshold used has no effect on task EMG amplitudes; however, it does affect the time of activation. If the time of activation is an important variable, it is advisable to use an onset threshold calculation that produces the greatest reliability, e.g., mean baseline + 0.015 mV or 10% of peak task.
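The competing onset definitions described above can be made concrete with a short sketch. This is a hypothetical Python implementation, not the authors' spreadsheet; the baseline-window length and the assumption of a rectified, smoothed signal are illustrative choices:

```python
import numpy as np

def onset_time(emg, fs, baseline_s=0.5, method="sd", k=3,
               arb_mv=0.015, pct=0.10):
    """Estimate activation onset from a rectified EMG envelope.

    emg: 1-D rectified/smoothed EMG (mV); fs: sampling rate (Hz).
    Thresholds follow the abstract: mean baseline + k x SD,
    mean baseline + 0.015 mV, or 10% of the task peak.
    """
    n_base = int(baseline_s * fs)          # resting-baseline window
    base = emg[:n_base]
    if method == "sd":                     # 1-, 2-, or 3 x SD + mean
        thr = base.mean() + k * base.std()
    elif method == "arbitrary":            # mean + 0.015 mV
        thr = base.mean() + arb_mv
    elif method == "peak":                 # 10% of task peak
        thr = pct * emg.max()
    else:
        raise ValueError("unknown method")
    idx = int(np.argmax(emg > thr))        # first supra-threshold sample
    return idx / fs                        # onset time in seconds
```

With a clean step signal all three definitions agree; the differences the abstract reports arise on noisy baselines, where the SD-based thresholds rise with baseline variability.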

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Effects of Foot Positioning on Muscle Activity During Heel-Raise Exercise

A. Puckett and A. Waldhelm

University of South Alabama

Introduction: The heel raise exercise is important in both performance training and rehabilitation. This exercise can improve strength, power, and muscle size of the triceps surae muscles. Many individuals perform this exercise with varying foot positions, yet little is known about the effects of foot position on activation of the calf muscles when performing the heel raise exercise. Purpose: The objective was to analyze the effect of foot position on muscle activation of the triceps surae muscles when performing the heel raise exercise in sitting and standing positions. Methods: Twelve healthy college students volunteered for the study. Surface electrodes were placed on the right lateral (LG) and medial gastrocnemius (MG) and soleus muscles, and maximum voluntary isometric contraction was used to normalize the measurements. Each participant performed 10 repetitions of the heel raise exercise with their feet positioned in neutral, maximal internal rotation, and maximal external rotation in both sitting and standing. High-speed cameras captured the exercise to differentiate between concentric and eccentric phases, with markers placed on the lateral midline of the knee, lateral malleolus, posterior calcaneus, and dorsal surface of the second toe. Multiple 2 × 2 (foot position × muscle) ANOVAs with p < 0.05 were used to examine differences between and within muscles for each phase and position. Results: In the standing position, for the concentric phase of the heel raise exercise there was not a significant foot position by muscle interaction (p = 0.126), nor were there significant differences for the main effects of foot position (p = 0.983) and muscle (p = 0.287). A significant foot position by muscle interaction was not found in the sitting position for the concentric phase of the exercise (p = 0.189), nor for the foot position main effect (p = 0.168), but a significant muscle main effect was identified (F[2, 30] = 15.9, p = 0.005).
Post hoc tests with Bonferroni correction (p < 0.016) did not show significant differences within each of the 3 muscles: MG (p = 0.255), soleus (p = 0.551), and LG (p = 0.169). Results were similar in the eccentric phase. In standing, there was not a significant foot position by muscle interaction (p = 0.076) for the eccentric phase, and significant main effects for foot position (p = 0.430) and muscle (p = 0.068) were not observed. Last, in sitting there was not a significant foot position by muscle interaction (p = 0.228) or a significant main effect for foot position (p = 0.220), but there was a significant main effect for muscle (F[2, 33] = 4.67, p = 0.037). Post hoc tests with Bonferroni correction (p < 0.016) did not reveal significant within-muscle differences in muscle activation: MG (p = 0.328), soleus (p = 0.745), and LG (p = 0.048). Conclusions: The results indicate that foot position does not significantly affect muscle activation between and within muscles of the triceps surae when performing the heel raise exercise in either sitting or standing. Practical Applications: Changing foot position during the heel raise exercise does not increase or decrease activation of the calf muscles in either sitting or standing. Further research is needed using competitive athletes, injured individuals, and performing the exercise with external resistance.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Differences in Energy Distribution Amongst the Lower Extremity Joints During the Concentric and Eccentric Phase of a Squat

P. Rodriguez,1 I. Cassady,2 T. Cushman,3 and S. Flangan2

1 California State University, Northridge; 2 California State University Northridge; and 3 Moorpark College

Introduction: The muscle-tendon complex can absorb more energy (during the eccentric phase) than it can generate (during the concentric phase); however, injuries are prone to occur during the eccentric phase of a movement. This suggests that the multi-joint mechanics during the eccentric phase do not mirror those of the concentric phase and, as a result, the joints are loaded differently. Purpose: This study explored the energy absorption and generation of the lower-extremity joints during a lifting task. We hypothesized that the energy absorbed by the lower-extremity joints during the eccentric phase would differ from the energy generated during the concentric phase of a squat. Methods: Ten subjects were biomechanically instrumented with reflective markers on their anatomical landmarks, primarily on the lower-extremity joints. They performed 12 bodyweight squats to a prescribed depth of 50% of their leg length while atop 2 force plates in the capture area of a motion analysis system. The energy generated/absorbed by the lower-extremity joints was calculated using standard inverse dynamics techniques and summed bilaterally for a single joint measure for the ankle, knee, and hip. For statistical comparison purposes, the absolute value of the energy absorbed was compared to the energy generated using a 2 × 3 (phase × joint) factorial ANOVA with repeated measures. Results/Conclusions: The ankle generated more energy than it absorbed, and the knee absorbed more energy than it generated (p < 0.05). There were no significant differences between the energy generated and absorbed at the hip. These findings suggest that the energy generated by the ankle shifted to the knee during the eccentric phase of the prescribed-depth squat. If these differences were intensified (with an increase in either load or speed), they may help explain why the knee is susceptible to injury during the eccentric phase of multi-joint movements.
Practical Applications: Strength and conditioning professionals should ensure that the concentric and eccentric phases of a multi-joint movement are mirror images of each other so that the loading on each joint will be similar between the phases. Acknowledgments: Daniel Padilla, Desiree Di Lorenzo, Geovanny Aguirre-Burgos, Meital Aulker, Kirsten George, Gonzalo Lopez, Abdy Lang, Carina Ortega, Christine Boktor.
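The partitioning of joint work into generation and absorption described above can be sketched as follows. This is an illustrative Python fragment, not the authors' pipeline; it assumes the inverse-dynamics outputs (joint moment and angular velocity time series) are already available:

```python
import numpy as np

def joint_energy(moment, ang_vel, fs):
    """Split joint work into generation (concentric, P > 0) and
    absorption (eccentric, P < 0) from inverse-dynamics outputs.

    moment: joint moment (N*m); ang_vel: angular velocity (rad/s);
    fs: sampling rate (Hz). Returns (generated, absorbed) in joules,
    with absorption reported as a positive magnitude.
    """
    power = moment * ang_vel              # instantaneous joint power (W)
    dt = 1.0 / fs                         # rectangular-rule integration
    generated = power[power > 0].sum() * dt
    absorbed = -power[power < 0].sum() * dt
    return generated, absorbed
```

Summing the per-joint results bilaterally, as in the abstract, then amounts to adding the left- and right-side energies for each of the ankle, knee, and hip.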

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Reliability of Motor Unit Firing Rate and Action Potential Amplitude vs. Recruitment Threshold Relationships

M. Wray,1 A. Sterczala,2 J. Miller,1 H. Dimmick,1 and T. Herda1

1 University of Kansas; and 2 University of Pittsburgh

Introduction: Electromyographic decomposition methods have been used for decades to examine various properties of motor units (MUs), such as recruitment thresholds (RTs), mean firing rates (MFRs), and action potential amplitudes (MUAPs) of the vastus lateralis (VL). MUAPs and MFRs regressed against RT can provide valuable insight into exercise training-, aging-, and obesity-related changes in MU recruitment and firing rate behavior of a muscle. However, the reliability of these parameters derived via decomposition methods has yet to be assessed. Purpose: To examine the reliability of MFR vs. RT and MUAP vs. RT relationships of the VL during a moderate-intensity contraction. Methods: Twelve healthy subjects (male = 6, female = 6, age = 20.16 ± 2.69 years, weight = 75.04 ± 21.32 kg, height = 174.08 ± 10.99 cm) volunteered for this investigation. Participants visited the laboratory for one familiarization trial and 2 experimental trials, separated by at least 24 hours. Participants performed 2 isometric maximal voluntary contractions (MVCs) with strong verbal encouragement. The highest muscle force during the MVCs was used to generate templates for trapezoid muscle actions at 40% MVC. Participants were instructed to maintain force output as close as possible to the target force presented digitally in real time on a computer monitor. During the trapezoid muscle actions, surface EMG signals were recorded from the VL using a 5-pin surface array sensor. Action potentials were extracted into single firing events of MUs via decomposition methods. For each contraction, slopes and y-intercepts were calculated for the linear MFR vs. RT and MUAP vs. RT relationships and used for statistical analysis. Intraclass correlation coefficients (ICCs; model "2,1"), standard errors of measurement (SEMs), and repeated measures ANOVAs were used to assess reliability. Results: The ICCs, SEMs, and p values are presented in Table 1.
Conclusions: There were no systematic differences in the mean values for the slopes or y-intercepts from the MFR and MUAP vs. RT relationships, as indicated by the p values. The ICC values for the y-intercepts from the MFR vs. RT relationships and the slopes from the MUAP vs. RT relationships (ICC = 0.715, ICC = 0.818) were deemed good, whereas the slopes from the MFR vs. RT and the y-intercepts from the MUAP vs. RT relationships (ICC = 0.577, ICC = 0.561) were of moderate reliability. Of note, there were negative y-intercepts for 13 of 24 MUAP vs. RT relationships, which are not of physiological relevance. Future analyses should examine whether regression models (linear vs. non-linear) alter the reliability and whether differing recruitment threshold ranges of recorded MUs from trial to trial affect reliability. Practical Applications: These reliability statistics could help investigators interpret the effects of exercise training interventions, such as cycling and resistance training, on motor unit recruitment and firing rate behavior.
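The reliability model named in the abstract, ICC model "2,1" (two-way random effects, absolute agreement, single measurement), can be computed from the subjects × trials mean squares. A minimal sketch, assuming a complete subjects-by-trials table and not reflecting the authors' actual code:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. data: (n subjects x k trials) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)        # per-subject means
    col_means = data.mean(axis=0)        # per-trial means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-trials mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect trial-to-trial agreement yields an ICC of 1; systematic trial differences lower ICC(2,1) because the model penalizes absolute disagreement, not just inconsistency.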

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Effects of Practical Blood Flow Restriction Exercise on Muscle Coactivation

Y. Yamada,1 R. Sunday,1 J. Barnette,1 J. Thistlethwaite,1 and T. Cayot2

1 Ohio Dominican University; and 2 University of Indianapolis

Significant increases in muscular strength and muscle tissue thickness have been reported following practical blood flow restriction (pBFR) resistance training. However, the effects that pBFR resistance exercise has on muscle coactivation (CA) remain elusive. Understanding the CA response during pBFR resistance exercise may help identify joint stiffness trends following exercise and thus aid in the safe implementation of pBFR techniques during resistance training programs. Purpose: To investigate how elbow CA is affected during 4 sets of fatiguing pBFR resistance exercise and to compare the pBFR CA response to the CA responses during high- and low-load fatiguing resistance exercise. Methods: Seventeen healthy, active participants (age = 24 ± 5 years, height = 179 ± 8 cm, weight = 86.6 ± 14.2 kg, 1RM = 50.5 ± 8.8 kg) completed a biceps curl 1RM on session 1. During sessions 2–4, 4 sets of fatiguing biceps curls were performed with either high (65% 1RM, HI), low (30% 1RM, LO), or occluded low (30% 1RM, pBFR) loads. Exercising muscle activation was recorded from the biceps brachii (BI) and triceps brachii (TRI) using surface electromyography (sEMG) techniques. CA was estimated during the last 25% of each exercise set from the normalized concentric sEMG data (CA = (TRI/(BI + TRI)) × 100%). Maximal voluntary isometric force produced at 90° of elbow flexion was recorded before (PRE-MVIF) and after each exercise session to assess muscle fatigue (%Δ). A two-way, repeated measures ANOVA was used to examine the effects of exercise condition and exercise set on CA. A one-way, repeated measures ANOVA was used to examine the effect of exercise condition on PRE-MVIF and muscle fatigue (%Δ). Results: No significant differences in PRE-MVIF were observed between exercise conditions (p > 0.05). pBFR (−54.2 ± 13.7%) resulted in significantly higher muscle fatigue (%Δ, p < 0.05) compared to LO (−44.6 ± 12.1%) or HI (−30.6 ± 10.3%).
LO elicited significantly higher muscle fatigue compared to HI. No significant main effects or interactions were observed for CA across exercise conditions or exercise sets: set 1 (HI = 29.3 ± 12.7%, LO = 27.7 ± 13.0%, pBFR = 30.9 ± 10.6%), set 2 (HI = 28.8 ± 11.9%, LO = 27.9 ± 12.3%, pBFR = 31.2 ± 11.0%), set 3 (HI = 30.4 ± 11.8%, LO = 29.1 ± 12.7%, pBFR = 31.2 ± 10.7%), and set 4 (HI = 30.3 ± 11.9%, LO = 31.0 ± 13.2%, pBFR = 31.4 ± 10.3%). Conclusions: According to our findings, a longer recovery may be necessary for individuals performing pBFR resistance exercise because of increased muscle fatigue rather than changes in the CA response. Practical Applications: Since pBFR resistance exercise performed to volitional fatigue did not affect elbow CA during exercise, pBFR resistance exercise may not affect joint stiffness. Therefore, pBFR resistance exercise may be an attractive resistance training alternative during certain periodization phases (e.g., in-season, active recovery) to help maintain athletic performance throughout a season.
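The coactivation index defined in the methods, CA = (TRI/(BI + TRI)) × 100% over the last 25% of each set, can be sketched directly. This is an illustrative fragment assuming normalized concentric sEMG amplitude series for each muscle, not the authors' processing code:

```python
import numpy as np

def coactivation(bi_rms, tri_rms, last_frac=0.25):
    """Antagonist coactivation (%) over the final `last_frac` of a set,
    per the abstract: CA = TRI / (BI + TRI) * 100, computed from
    normalized concentric sEMG amplitudes (arrays over repetitions)."""
    start = int(len(bi_rms) * (1 - last_frac))  # index of the last 25%
    bi = np.mean(bi_rms[start:])
    tri = np.mean(tri_rms[start:])
    return tri / (bi + tri) * 100.0
```

For example, if the triceps amplitude averages one-quarter of the combined agonist-antagonist amplitude over the final repetitions, CA is 25%.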

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Retrospective Analysis of Ulnar Collateral Ligament Reconstruction in Major League Baseball Pitchers: A Comparison of 'Tall & Fall' vs. 'Drop & Drive' Pitching Techniques

M. Beaudry,1 G. Holland,2 J. Bradley,3 B. Jacobson,1 S. Davis,4 and R. Chetlin1

1 Mercyhurst University, Department of Sports Medicine; 2 The Physical Therapy Institute; 3 University of Pittsburgh Medical Center; and 4 Marshall University, School of Physical Therapy

We previously demonstrated that the "Tall & Fall" (TF) pitching style incurred higher elbow valgus torque vs. the "Drop & Drive" (DD) method in collegiate baseball pitchers. The evidence indicates that TF may expose pitchers to increased risk of ulnar collateral ligament (UCL) injury and, thus, heighten the necessity to repair ligamentous damage via UCL reconstruction (UCLR; aka "Tommy John" surgery). The prevalence of UCLR in Major League Baseball (MLB) pitchers, delineated by pitching technique (TF vs. DD), has not been examined previously. Purpose: To determine the prevalence of TF vs. DD in MLB pitchers who underwent UCLR surgery over a 10-year period (2007–2017). Methods: Two hundred twenty-three MLB pitchers (mean age = 27.5 ± 3.6 years; mean BMI = 27.6 ± 2.2 kg·m−2; mean throwing velocity = 92.9 ± 2.6 mph) experienced UCLR from 2007 to 2017. Thirteen pitchers underwent a second procedure during this time. Subjects were assigned to TF or DD groups based upon kinematic motion capture analysis using the HUDL technique. Pitchers whose lead leg landed in a flexed position at ball release were designated DD (n = 61), while those whose lead leg landed in an extended or hyperextended position at, or prior to, ball release were designated TF (n = 162). Subject demographic, performance, and videographic information was obtained from MLB-provided open databases. Confidence intervals were used to determine UCLR prevalence between the 2 pitching groups; independent t-tests were used to determine biometric and performance differences. Level of significance was set at p ≤ 0.05. Results: UCLR was significantly more prevalent for TF (73%) vs. DD (27%) over the 10-year period (95% confidence interval = 0.67, 0.78 [TF]; 0.22, 0.33 [DD]; p < 0.05). Pitching velocity was not different between groups.
Pitching style was not a determinant of the occurrence of a second surgery; overall, the probability that a pitcher would experience a second surgery was 5.8% (confidence interval = 3.4%, 9.7%; p < 0.05). Conclusions: MLB pitchers employing the TF technique were nearly 3 times more likely to incur a UCLR intervention vs. their DD counterparts, despite no difference in pitching velocity. Though TF pitchers comprised 85% of UCLR recurrences, the small number of repeat procedures likely precluded a statistically significant determination based upon pitching style. The incidence of a second UCLR in MLB pitchers, regardless of pitching style, may be as high as 1-in-10. Practical Applications: Prior investigation revealed higher elbow valgus torque in collegiate baseball pitchers using TF vs. DD. The current results demonstrate a significantly higher prevalence of UCLR intervention for MLB pitchers employing the TF vs. DD method. These findings indicate that TF pitchers may incur a threshold effect for elbow valgus torque, possibly exposing them to increased UCL injury risk requiring UCLR. Future research should investigate a potential valgus torque yield point to determine the magnitude of such danger in MLB pitchers using TF or DD.
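The abstract does not state which interval method was used, but a simple normal-approximation (Wald) interval reproduces the reported TF values (162 of 223 → 0.67, 0.78) closely; a sketch under that assumption:

```python
import math

def prevalence_ci(count, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, as a sketch
    of the kind of prevalence interval reported (e.g., 162/223 TF).
    Returns (point estimate, lower bound, upper bound), clipped to [0, 1].
    """
    p = count / n
    half = z * math.sqrt(p * (1 - p) / n)   # half-width of the interval
    return p, max(0.0, p - half), min(1.0, p + half)
```

For small counts (such as the 13 repeat procedures), an exact or Wilson interval would be preferable, since the Wald approximation degrades near the boundaries.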

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Electromyographic and Mechanomyographic Amplitude Response Patterns During Isometric vs. Concentric Dynamic Constant External Resistance Leg Extension Muscle Actions

A. Miramonti,1 N. Jenkins,2 E. Hill,1 C. Smith,1 T. Housh,1 and J. Bovaird1

1 University of Nebraska—Lincoln; and 2 Oklahoma State University

Many studies have characterized the electromyographic (EMG) and mechanomyographic (MMG) amplitude patterns across the force spectrum during isometric muscle actions (ISOM). However, few studies have compared them to the patterns observed during dynamic constant external resistance (DCER) muscle actions. Purpose: To compare the response patterns for EMG and MMG amplitude during ISOM and DCER leg extension muscle actions across the force spectrum. Methods: Eight women and 3 men (mean ± SD, 22.9 ± 4.3 years) reported to the laboratory for 2 testing visits (ISOM & DCER) in randomized order. During the ISOM visit, subjects performed a maximal voluntary ISOM leg extension muscle action (MVIC) followed by 9 randomly ordered ISOM leg extension muscle actions at 10–90% of MVIC. During the DCER visit, subjects performed a one repetition maximum (1RM) leg extension followed by 9 randomly ordered leg extensions at 10–90% of 1RM. EMG and MMG signals were collected from the vastus lateralis (VL), rectus femoris (RF), and vastus medialis (VM) muscles. Recorded EMG and MMG signals were expressed as root mean square (RMS) amplitude values. Submaximal force was expressed relative to the MVIC and 1RM for ISOM and DCER muscle actions, respectively. Submaximal EMG and MMG amplitudes were expressed relative to those recorded during the MVIC and 1RM for ISOM and DCER muscle actions, respectively. Polynomial regression analyses (linear, quadratic, cubic) were used to examine the composite (i.e., mean) EMG and MMG amplitude vs. force relationships. The F-test for R²-change was used to determine the highest-degree polynomial equation necessary to describe each relationship. Results: The ISOM EMG vs. force relationships (Figure 1A) were best fit by quadratic models (VL: R² = 0.998, RF: R² = 0.995, VM: R² = 0.999) and the ISOM MMG vs. force relationships (Figure 1B) were best fit by cubic models (VL: R² = 0.994, RF: R² = 0.989, VM: R² = 0.918). The DCER EMG vs. force relationships (Figure 1C) were best fit by linear models for the VL (r² = 0.993) and VM (r² = 0.993) and by a quadratic model for the RF (R² = 0.992). The DCER MMG vs. force relationships (Figure 1D) were best fit by a linear model for the VL (r² = 0.761) and a cubic model for the RF (R² = 0.833; p < 0.05), but there was no significant relationship for the VM (p > 0.05). Conclusions: When comparing ISOM to DCER, the composite patterns for the EMG and MMG amplitude vs. force relationships were different for the VL and VM, but similar for the RF. Practical Applications: The different patterns observed during the submaximal to maximal ISOM vs. DCER muscle actions suggest that when EMG and/or MMG are used during DCER testing and/or training, researchers and practitioners cannot directly compare the EMG and MMG amplitude responses with those recorded during ISOM muscle actions. Future studies may wish to examine the inter-individual variability of the response patterns for the EMG and MMG amplitude vs. force relationships during ISOM vs. DCER muscle actions.
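The model-selection rule named above (F-test for R²-change across linear, quadratic, and cubic fits) can be sketched as follows. This is one common form of the test, not necessarily the authors' exact procedure; the tiny-improvement guard is an implementation detail added here for numerical safety:

```python
import numpy as np
from scipy import stats

def best_polynomial(x, y, max_degree=3, alpha=0.05):
    """Pick the lowest-degree polynomial (linear/quadratic/cubic) beyond
    which adding a term no longer significantly increases R^2, using the
    F-test for R^2-change with 1 numerator df."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    sst = ((y - y.mean()) ** 2).sum()

    def r2(deg):
        resid = y - np.polyval(np.polyfit(x, y, deg), x)
        return 1.0 - (resid ** 2).sum() / sst

    chosen, r2_prev = 1, r2(1)
    for deg in range(2, max_degree + 1):
        r2_new = r2(deg)
        if r2_new - r2_prev < 1e-12:          # no real improvement
            break
        df2 = n - deg - 1                     # residual df of larger model
        denom = (1.0 - r2_new) / df2
        f = np.inf if denom == 0 else (r2_new - r2_prev) / denom
        p = 1.0 - stats.f.cdf(f, 1, df2)
        if p > alpha:                          # added term not significant
            break
        chosen, r2_prev = deg, r2_new
    return chosen, r2_prev
```

Applied to each composite amplitude-force relationship, this yields the degree (and R²) of the reported best-fit model.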

Thursday, July 12, 2018, 12:00 PM–1:30 PM


The Relationship Between Cortisol Levels, Positive Mental Affect and Rate of Force Development in Division-I Women's Volleyball Athletes

R. Aldret, M. McDermott, S. Aldret, A. Alwert, H. Corley, G. Hoffpauir, A. Mattox, and D. Bellar

University of Louisiana at Lafayette

Purpose: The purpose of the study was to examine the connections between physical, biological, and psychological markers of stress in a collegiate volleyball team from pre-season to completion of the competitive season. Methods: The study took place over the course of 15 weeks (3 pre-season, 12 in-season), with the athletes (n = 20) reporting to the exercise physiology lab for weekly data collection. During each visit, participants completed 2 repetitions of a counter-movement jump on a force plate, with the average measure of lower body rate of force development being recorded. Participants gave a passive saliva sample into 1.5 ml tubes, which were then centrifuged at room temperature at 2,500 RPM. The saliva was then examined via colorimetric assay to measure cortisol levels. Participants also completed the Positive and Negative Affect Schedule (PANAS), which provided scores of overall positive and negative mental affect for the previous week. Results: Multivariate analysis demonstrated a consistent pattern of a negative correlation between positive mental affect scores rated by the PANAS and cortisol, and a positive correlation between positive mental affect scores rated by the PANAS and rate of force development. Because the PANAS scores were non-parametric, Spearman's correlations were used with an adjusted alpha level of 0.025. Week 12 of the study demonstrated a statistically significant inverse relationship between positive PANAS and cortisol (r = −0.74; r² = 0.55; p = 0.001), and a statistically significant positive correlation between positive PANAS and rate of force development (r = 0.60; r² = 0.36; p = 0.018). Conclusions: As with any athletic team, there are ups and downs associated with the stressors of being a Division-I athlete. Physical and mental stressors from physical activity, academic demands, and socio-familial commitments are abundant.
The positivity of the study participants, as expressed by the positive PANAS score, showed a consistent, though not always statistically significant, relationship with the physical measures of these athletes. As their positive mental attitude declined, stress hormone production increased and rate of force production decreased. In addition, during the 1 week when both correlations became statistically significant, a number of high-stakes events were taking place for the team on and off the court. During week 12, the team was participating in its last home game and senior night events, along with competing for a conference title. It was at the point when the most was demanded of the team physically that their mental affect hampered their ability to perform at a high level. Additional investigation into understanding which stressor type (academic, athletic, social, or family) is most pronounced in the individual will help target interventional therapies to improve athletic performance. This is in step with the mandate of current sport governing bodies to destigmatize mental health issues in sports. Acknowledgment: The authors would like to thank Professor James M. Clemons for his assistance in the statistical analysis of the data.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Androgen and Glucocorticoid Receptor Phosphorylation Following Resistance Exercise and Pre-workout Supplementation

J. Nicoll, A. Fry, and E. Mosier

University of Kansas

Consumption of caffeine or caffeine-containing pre-workout supplements (SUPP) augments steroid hormone responses to resistance exercise (RE), with concomitant increases in cortisol pre-exercise and increased testosterone post-exercise. However, the activation of glucocorticoid (GR) and androgen receptors (AR) following supplementation and RE has not been investigated. Recent evidence suggests the AR and GR can be phosphorylated in the absence of their ligands. However, it is not known if the AR and GR are phosphorylated in human skeletal muscle and if RE and/or supplementation augments this response. Purpose: To determine the influence of a pre-workout supplement on AR and GR phosphorylation following RE. Methods: In a randomized, counter-balanced, double-blind, placebo-controlled, within-subject crossover study, 10 resistance-trained males (mean ± SD, age = 22 ± 2.4 years, height = 175 ± 7 cm, body mass = 84.1 ± 11.8 kg) performed 4 sets of 8 repetitions of barbell back squats at 75% of their 1-repetition maximum (1-RM) with 2 minutes of rest between sets, and a fifth set of barbell back squats at 60% of 1-RM until concentric failure. A SUPP or flavor- and color-matched placebo (PL) was consumed 60 minutes prior to RE. Vastus lateralis muscle biopsies were obtained prior to supplementation at rest (PRE) and 10 minutes post-exercise (POST). Biopsies were analyzed for phosphorylated GR (ser134, ser211, and ser226) and phosphorylated AR (ser81, ser213, ser515, ser650) via western blotting. Wilcoxon signed-rank tests determined differences in phosphorylation from PRE to POST for both conditions and at POST between SUPP and PL. A sequential Holm-Bonferroni correction was utilized for multiple comparisons. Data are presented as median and interquartile range [25th–75th]. Significance was determined at alpha-level p ≤ 0.05. Results: pGRser134 decreased from PRE to POST (SUPP: 0.12 [0.05–0.32] vs. 0.05 [0.03–0.10]; PL: 0.10 [0.07–0.25] vs.
0.04 [0.006–0.098]), and pGRser226 increased following RE (SUPP: 0.63 [0.32–1.24] vs. 5.3 [3.1–9.5]; PL: 0.58 [0.42–1.46] vs. 5.7 [3.0–10.1]; p < 0.05). pGRser211 was unchanged after RE (p > 0.05). pARser515 decreased (SUPP: 0.40 [0.02–0.06] vs. 0.02 [0.01–0.04]; PL: 0.04 [0.02–0.10] vs. 0.02 [0.007–0.03]; p < 0.05), and pARser213 increased (SUPP: 0.008 [0.003–0.02] vs. 0.018 [0.01–0.03]; PL: 0.007 [0.005–0.02] vs. 0.017 [0.007–0.05]) from PRE to POST (p < 0.05). pARser650 decreased after RE in the PL condition only (SUPP: 0.10 [0.05–0.34] vs. 0.11 [0.05–0.27]; PL: 0.13 [0.04–0.34] vs. 0.09 [0.04–0.19]; p < 0.05). Conclusions: RE augments AR and GR phosphorylation, and pre-workout supplementation minimally influences this response in the early recovery period. Practical Applications: RE differentially regulates specific phosphorylation sites on the AR and GR. Improvements in performance in this cohort have been reported previously (Nicoll et al. 2017; J Strength Cond Res 31[S83]). Changes in AR and GR phosphorylation may occur independently of, or permissively relative to, circulating hormonal concentrations. Thus, coaches and athletes may utilize pre-workout supplements to improve RE performance, with minimal detrimental effects on the anabolic-catabolic milieu at the skeletal muscle level in the early recovery period. Acknowledgments: This project was funded by a grant from the ISSN and MusclePharm Corporation.
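The sequential Holm-Bonferroni correction used above can be sketched in a few lines. This is a generic implementation of the procedure, not the authors' analysis script:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Sequential Holm-Bonferroni correction: test p-values from
    smallest to largest against alpha/(m - rank); once one fails,
    all larger p-values fail too. Returns reject-decisions (booleans)
    in the original order of `p_values`."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):   # step-down threshold
            reject[i] = True
        else:
            break                               # all subsequent p fail
    return reject
```

Compared with a plain Bonferroni correction, the step-down thresholds (alpha/m, alpha/(m−1), …, alpha) make Holm uniformly more powerful while still controlling the family-wise error rate.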

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Fiber Type-Specific Activation of AMPK Following Acute High Intensity Interval Exercise

C. Yen, K. Lazauskas, I. Tobias, N. Serrano, J. Siu, G. Seigler, J. Coburn, P. Costa, and A. Galpin

Biochemistry and Molecular Exercise Physiology Laboratory, Center for Sport Performance, California State University, Fullerton

Introduction: AMP-activated protein kinase (AMPK) is an energy-sensing regulator of cellular metabolism that is activated during acute exercise. Previous human skeletal muscle studies analyzed AMPK activation in mixed fiber type (FT) biopsy samples, though recent investigations show that AMPK's properties are dependent on myosin heavy chain (MHC) FT (i.e., slow vs. fast-twitch). Purpose: To assess the phosphorylation of 2 AMPK substrates (markers of AMPK activity), ACC and TBC1D4, in slow (MHC I) vs. fast (MHC IIa) fibers from skeletal muscle of trained men following acute high intensity interval exercise (HIIT). Methods: Six HIIT-trained males (age 31 ± 4 years; height 176 ± 10 cm; mass 81 ± 17 kg) participated in 2 testing visits separated by 7–14 days. During the first visit they performed a cycling test of maximal aerobic power (V̇O2max 52.6 ± 10.6 ml·kg−1·min−1). Participants kept a food log (MyFitnessPal) for 7 days prior to visit 2. Other controls included a 12-hour overnight fast, 48 hours of refraining from exercise, a 24-hour caffeine fast, and a 12-hour food fast (after consuming a standardized last meal). Upon arriving at the lab, hydration status was checked using urine specific gravity (USG < 1.030), and the other controls were verbally confirmed. Participants then rested for 20 minutes before a muscle biopsy was extracted from their right vastus lateralis (VL). Participants then performed a self-administered warm-up. Once ready, they completed a HIIT bout consisting of 6 rounds of the following intervals: 1.5 minutes at 90–100% V̇O2max, then 2.5 minutes at ∼40%. Intensities were confirmed by continuous collection of metabolic gases and were based on maximal performance in visit 1. A second biopsy (left VL) was performed immediately (within 15 seconds) after completion of the final exercise round.
After a minimum of 7 days incubating in solubilizing solutions, single muscle fibers were mechanically isolated and dissolved for MHC analysis via SDS-PAGE to identify FT, then combined into MHC I and MHC IIa pools of 5–8 fibers. FT-specific phosphorylation of ACC and TBC1D4 was quantified via capillary nano-immunoassay (CNIA). A 2-way analysis of variance (ANOVA) with Sidak's multiple comparison test was conducted to determine differences between biopsy time points and FTs. Results: The average FT composition was 42.4 ± 5.2% MHC I, 3.1 ± 0.6% MHC I/IIa, 53.2 ± 4.7% MHC IIa, and 1.3 ± 0.7% MHC IIa/IIx (mean ± SEM). Phosphorylation of ACC was significantly (p < 0.05) different between rest and post-exercise for both MHC I and MHC IIa (6.2- and 9.1-fold higher, respectively). Phosphorylation of TBC1D4 was significantly (p < 0.05) greater in MHC IIa fibers both at rest and 0 minutes post-exercise (2.8- and 2.2-fold higher, respectively). Conclusions: The limited concentration of hybrid fibers (MHC I/IIa and MHC IIa/IIx) in our HIIT-trained participants is in accordance with previous research and confirms a general inverse relationship between training status and hybrid abundance. HIIT exercise induced activation of AMPK in trained men (via the ACC readout), with FT-specific differences detected for the TBC1D4 substrate. This knowledge enhances our understanding of the FT-specific molecular consequences of HIIT. Practical Applications: These findings improve our general understanding of the post-exercise metabolic recovery process and may eventually allow practitioners to develop more effective and/or FT-specific training programs and/or nutrition recommendations. Acknowledgments: This study was funded in part through a donation from Renaissance Periodization.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


The Relationship Between Objective and Subjective Monitoring and Performance in Division I Women’s Soccer Players

H. Cintineo,1 B. McFadden,2 A. Walker,1 M. Bello,1 D. Sanders,1 B. Bozzini,1 C. Ordway,3 R. Curtis,4 R. Huggins,5 D. Casa,4 and S. Arent1

1 Rutgers Center for Health and Human Performance; 2 Center for Health and Human Performance, Rutgers University; 3 Rutgers University; 4 Korey Stringer Institute at the University of Connecticut; and 5 Korey Stringer Institute

Athlete monitoring is critical for ensuring maximal performance during the season, and competitive stress is often more pronounced as teams enter conference play. Both objective and subjective assessments have been promoted as indicators of performance, readiness, and recovery in athletic populations. Purpose: To analyze the relationship between workload and performance outcomes, as well as corresponding changes in biomarkers and self-report questionnaires, over the course of conference play in Division I collegiate female soccer players. Methods: Female collegiate soccer players (n = 21; mean age = 19.7 ± 1.35 years; mean weight = 66.0 ± 5.9 kg) were monitored throughout a competitive season. Training load (TL) and energy expenditure (Kcal·kg−1) were measured using the Polar TeamPro system. Athletes participated in a battery of testing every 4 weeks, including reaction time (RT) and speed (SP) using computer simulation and full-body sensing technology. Athletes completed questionnaires, including the Profile of Mood States (POMS) and Disablement in the Physically Active (DPA), on the same day as performance testing. Athletes arrived fasted and euhydrated to blood draws (BD), which were conducted 2 weeks prior to each performance testing session and approximately 18–36 hours after a match. Biomarkers analyzed included catecholamines (E/NE), free cortisol (CORTF), and total cortisol (CORTT). RM MANOVAs and simple contrasts were conducted with significance set at p < 0.05. Results: TL and Kcal·kg−1 decreased during conference play compared to the preceding non-conference period (p < 0.05). There was a trend for an increase in SP from T1 to T2 (ΔSP = 0.067 ± 0.03 m·s−1, p = 0.062) before stabilizing at T3. There was no significant effect on RT (p = 0.42). There was a trend for increased POMS Anger (p = 0.098), primarily due to an increase from T2 to T3 (ΔAnger = −2.5 ± 1.4; p < 0.10). There were no other significant POMS or DPA changes.
There were significant time main effects for E/NE (p < 0.001), with a decrease from BD1 to BD2 (ΔE/NE = −139.3 ± 31.9 pg·ml−1; p < 0.001) with no further change from BD2 to BD3 (p = 0.33). CORTF had a significant linear increase from BD1 to BD3 (ΔCORTF = 0.63 ± 0.12 μg·dl−1; p < 0.001). CORTT did not significantly change (p = 0.93). Conclusions: A decrease in TL and Kcal·kg−1 corresponded to an improvement in SP and a decrease in E/NE. However, there was a continued rise in CORTF despite the reduction in workload. This would suggest that workload alone does not fully explain athlete stress or that there may be a residual and lasting influence of preseason and early season workload. This may be enhanced by in-conference competition. The lack of significant changes in subjective measures suggests that these may not be as useful for reporting athlete readiness as has previously been suggested. Instead, the changes in E/NE in the weeks prior to the performance assessments mirrored the changes in speed. There may be predictive utility in these assessments. CORTF also appears to be a more sensitive marker than CORTT. Practical Applications: While subjective measures of athlete readiness have practical appeal, the utility of their use heavily relies on both sensitivity and athlete honesty. As objective measures, E/NE appeared to have a more consistent association with workload and performance changes. CORTF may also be used to assess accumulating stress. Study supported by Quest Diagnostics and the NCAA.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Androgen Receptor Content and Phosphorylation in Skeletal Muscle—Sensitivity and Linearity of Western Blot Analyses

A. Fry and J. Nicoll

University of Kansas

The testosterone response to acute and chronic resistance exercise is a highly studied phenomenon, but its role in muscle hypertrophy and strength has recently become controversial (West et al. 2009, 2010). A sometimes-overlooked aspect of basal and exercise-induced endocrine interactions with skeletal muscle is androgen receptor content and phosphorylation status. Study of androgen receptor physiology requires western blot analyses and accurate interpretation of the results. However, concerns have recently been raised as to the proper use of these antibody-based techniques (Murphy and Lamb, J Physiol 2013). Purpose: To determine the linearity, sensitivity, and saturation characteristics of western blot techniques for evaluating androgen receptor phosphorylation status in human skeletal muscle. Methods: One resistance-trained man (age = 26 years, height = 1.74 m, weight = 78.2 kg) served as the subject and provided a muscle biopsy sample from the vastus lateralis muscle. The tissue sample was lysed, analyzed for total protein concentration using Peterson's modification of the micro Lowry assay, and had protein bands separated via 5–15% SDS-PAGE before being transferred to PVDF membranes. Primary antibodies (total androgen receptor, and phosphorylation sites ser81, ser213, ser515, and ser650) and infrared-labelled secondary antibodies were used to determine relative androgen receptor content and relative phosphorylation at the 4 phosphorylation sites. Blots were analyzed with an infrared scanner to determine infrared pixel intensity (PI). Gels were serially loaded with 10, 20, 40, and 50 µg total protein. Linear regression was used to determine linearity (r2), sensitivity (slope), and whether saturation occurred within the protein concentrations loaded. Results: See table below.
Conclusions: These data indicate that western blots using infrared-labelled secondary antibodies are an acceptable method to assess total androgen receptor content and phosphorylation status in human skeletal muscle. The pixel intensity signal was highly linear even when androgen receptor and phosphorylated androgen receptor loads varied 5-fold (i.e., 10–50 µg total protein). No signal saturation was evident within the loading scheme used, as evidenced by the highly linear regressions. Additionally, based on the slopes of the regression lines for pixel intensity, the infrared signal was highly sensitive to small changes in the amounts of the proteins examined. Practical Applications: These data support the use of infrared-labelled western blot techniques for analyzing total and phosphorylated androgen receptor content in human skeletal muscle. These methods permit a more thorough examination of the exercise-induced hormone-muscle interaction for testosterone than has been suggested in recent challenging perspectives on muscle growth and strength. Acknowledgments: This project was supported in part by an ISSN-MusclePharm Grant.
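The linearity, sensitivity, and saturation checks described above amount to an ordinary least-squares fit of pixel intensity against protein load. A hedged sketch with hypothetical intensities (the abstract's table is not reproduced here):

```python
# Ordinary least-squares check of linearity (r^2), sensitivity (slope), and
# saturation for a western blot loading series, as described above. The pixel
# intensities are hypothetical; the abstract's table is not reproduced here.

def fit_line(xs, ys):
    """Return (slope, intercept, r_squared) of a simple linear regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy ** 2 / (sxx * syy)

protein_ug = [10, 20, 40, 50]                   # serial loading scheme
pixel_intensity = [1.1e4, 2.2e4, 4.3e4, 5.4e4]  # hypothetical scanner output

slope, intercept, r2 = fit_line(protein_ug, pixel_intensity)
# A near-1.0 r^2 with no plateau at the top load argues against saturation.
print(f"slope = {slope:.0f} PI/ug, r^2 = {r2:.4f}")
```

A plateau at the highest loads would depress r² and flatten the slope, which is the saturation signature the authors checked for.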

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Muscle Fiber Type Composition of Elite American Male Weightlifters

J. Siu,1 N. Serrano,1 K. Lazauskas,1 C. Yen,1 G. Schumaker,1 I. Tobias,1 R. Lockie,2 P. Costa,1 and A. Galpin1

1 Biochemistry and Molecular Exercise Physiology Laboratory, Center for Sport Performance, California State University, Fullerton; and 2 California State University Fullerton

Introduction: Success in the sport of weightlifting (WL) requires extreme strength and power. In fact, WL movements produce some of the highest power outputs documented in the literature for resistance exercises. This physical capacity is typically associated with a high percentage of fast-twitch muscle fibers. Yet surprisingly, limited research exists on the fiber type (FT) composition of elite WL athletes. Purpose: To assess the FT composition of elite American male weightlifters. Methods: Resting vastus lateralis muscle biopsies were obtained from 6 male athletes (age = 26 ± 2 years, height = 169.0 ± 9.0 cm, body mass = 85.3 ± 26.9 kg) within 48 hours of their participation in the 2017 World Weightlifting Championships or American Open Finals (all were top 5 finishers). The average maximum snatch and clean & jerk for the group were 1.64 ± 0.25 kg/kg-BW and 2.04 ± 0.29 kg/kg-BW, respectively. Together, the athletes accounted for 12 National Championships, 17 American Open Championships, 1 Junior World Championship, 1 Junior National Championship, 1 Junior Pan American Championship, and 1 President's Cup appearance (Russia, 2017). The muscle samples were incubated in skinning solution for a minimum of 7 days before single muscle fibers (96 ± 7 per person, 580 total) were mechanically isolated and analyzed for their myosin heavy chain (MHC) content via SDS-PAGE with silver staining. Results: FT composition was 24 ± 4.6% MHC I, 3 ± 0.7% MHC I/IIa, 63 ± 7.8% MHC IIa, and 9 ± 5.4% MHC IIa/IIx. Only 2 MHC I/IIa/IIx and no pure MHC IIx fibers were found. The heaviest lifters (105 kg and 105+ kg) accounted for 89% of the MHC IIa/IIx content, while 3 of the remaining 4 lifters did not possess any MHC IIa/IIx fibers. Conclusions: The dominance of MHC IIa fibers is consistent with the limited previous research and suggests a relationship exists between WL training and FT.
The results also further support the well-documented conclusion that trained individuals possess few hybrids (MHC I/IIa, MHC IIa/IIx, and MHC I/IIa/IIx) and rarely any pure MHC IIx fibers. Practical Applications: Strength and conditioning professionals could potentially use this information to assess talent or improve program design, but this application requires further research. Future studies should continue to explore the FT composition of high-level strength and power athletes from a variety of sports. Moreover, long-term research could assist in understanding the specific roles of genetic inheritance vs. training in the development of high concentrations of MHC IIa fibers. Acknowledgments: This study was funded in part through a donation from Renaissance Periodization.
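Tallying per-fiber MHC classifications into a composition, as in the single-fiber SDS-PAGE workflow above, is a simple counting exercise. The fiber counts below are hypothetical, not the athletes' data:

```python
# Tallying per-fiber MHC classifications into a fiber-type composition, as in
# the SDS-PAGE workflow above. Fiber counts are hypothetical, not the data.
from collections import Counter

def ft_composition(fiber_calls):
    """Percentage of fibers in each MHC class."""
    counts = Counter(fiber_calls)
    total = len(fiber_calls)
    return {ft: 100 * n / total for ft, n in counts.items()}

fibers = (["MHC IIa"] * 63 + ["MHC I"] * 24
          + ["MHC I/IIa"] * 3 + ["MHC IIa/IIx"] * 10)
comp = ft_composition(fibers)
for ft, pct in comp.items():
    print(f"{ft}: {pct:.1f}%")
```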

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Acute Effects of Eccentric-Enhanced Resistance Training on Anabolic Hormone Responses and Muscle Damage During a Hypertrophy Protocol

J. Ho, W. Yao, C. Liu, and J. Ding

National Taiwan Normal University

Introduction: In recent years, growing evidence has supported the hypothesis that eccentric-enhanced resistance training (i.e., dynamic accentuated external resistance [DAER] training) may produce superior strength and power gains compared with traditional resistance training (i.e., dynamic constant external resistance [DCER] training). However, the acute effects of multi-joint, whole-body DAER training on anabolic hormone responses and muscle damage have received less attention. Purpose: To examine the effects of back squat and bench press DAER training on blood lactate (LAC), growth hormone (GH), testosterone (T), and muscle damage markers. Methods: Twelve resistance-trained male subjects (23.6 ± 1.4 years, 177.2 ± 8.2 cm, 76.2 ± 9.9 kg) performed 3 experimental treatments in a repeated-measures, counterbalanced design. All participants performed 4 sets of 10 repetitions of the squat and bench press exercises with 2 minutes of rest between sets on a Smith machine at eccentric/concentric loadings of 70%/70% (DCER training), 80%/70%, and 90%/70% of concentric 1RM (DAER training). Blood samples were collected before and immediately after exercise and analyzed for LAC, GH, and T concentrations. Moreover, creatine kinase (CK) and maximal voluntary isometric contraction strength (MVIC) were measured before and 24 and 48 hours after resistance exercise. A 2-way repeated-measures ANOVA (treatment × time) with LSD post hoc tests was used to analyze the data. The significance level was set at α = 0.05. Results: LAC, GH, and T levels significantly increased immediately after resistance exercise in all 3 treatments. However, significantly higher GH levels were observed only in the 90%/70% treatment (1.72 ± 1.75 ng·ml−1) when compared with the 80%/70% (0.61 ± 0.56 ng·ml−1) and 70%/70% (0.45 ± 0.54 ng·ml−1) treatments after resistance exercise. No significant differences were observed among the 3 treatments in LAC and T levels after resistance exercise.
Regarding muscle damage, CK significantly increased and MVIC significantly decreased at 24 hours after resistance exercise in all 3 treatments. However, no significant differences were observed among the 3 treatments. Conclusions: DAER training may induce a greater GH response and similar muscle damage compared with DCER training during a hypertrophy protocol. Practical Applications: The results of this investigation suggest that DAER training performed at eccentric/concentric loadings of 90%/70% of concentric 1RM may be recommended to further induce metabolic stress and possibly muscle mass gains after long-term training. However, more studies are needed to confirm the benefits of DAER training on muscle hypertrophy. Acknowledgments: This study was funded by Taiwan's Ministry of Science and Technology.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Muscle Fiber Type Composition of World Championship Caliber American Female Weightlifters

N. Serrano,1 K. Lazauskus,2 J. Siu,1 C. Yen,1 G. Schumaker,1 I. Tobias,1 R. Lockie,3 P. Costa,1 and A. Galpin1

1 Biochemistry and Molecular Exercise Physiology Laboratory, Center for Sport Performance, California State University, Fullerton; 2 Biochemistry and Molecular Exercise Physiology Laboratory; and 3 California State University Fullerton

Introduction: The snatch and clean and jerk (and their derivatives) are commonly used by strength and conditioning professionals because they allow some of the highest power outputs recorded in the literature. Success in the sport of weightlifting thus requires extensive muscle power, yet little is known about the skeletal muscle fiber type (FT) composition of elite competitive weightlifters. Moreover, no study to date has examined the FT of female strength or power athletes. Purpose: To determine the FT composition of elite American female weightlifters. Methods: Resting vastus lateralis biopsies were taken from 6 females (age = 28.2 ± 3.6 years, body mass = 81.2 ± 36.0 kg, height = 1.64 ± 0.11 m) following the 2017 World Weightlifting Championships. The pool included 2 Olympians (Rio, 2016), and the average snatch and clean and jerk (relative to body weight) among the entire group were 1.32 ± 0.31 kg/kg-BW and 1.69 ± 0.40 kg/kg-BW, respectively. The group also collectively held 27 American and 9 Junior American Records, made 23 Pan American Games teams, and won 22 National Championships, 24 American Open Championships, 7 National University Championships, and 11 Junior National Championships. Muscle fibers (96 ± 9 per person, 575 total) were mechanically isolated, incubated in a skinning solution, and analyzed for myosin heavy chain (MHC) content via SDS-PAGE with silver staining. Results: FT composition was 16 ± 4.6% MHC I, 5 ± 1.9% MHC I/IIa, 71 ± 8.1% MHC IIa, and 7 ± 5.7% MHC IIa/IIx. Only one MHC I/IIa/IIx and no pure MHC IIx fibers were found. Four of the athletes had >80% MHC IIa (81%, 81%, 89%, and 85%). The largest athlete (by body mass) accounted for 30% and 90% of all MHC I/IIa and MHC IIa/IIx fibers, respectively. Conclusions: The prevalence of MHC IIa fibers is the highest ever documented in the literature for females and is comparable to previous research in elite strength-trained men.
This could be a result of genetic inheritance or training history (3 out of 6 had a 10+ year history of competing in weightlifting), but is most likely a combination of both. It also further supports the well-documented conclusion that trained individuals possess few hybrids (MHC I/IIa, MHC IIa/IIx, and MHC I/IIa/IIx) and rarely any pure MHC IIx fibers. Practical Applications: The findings support a positive relationship between strength and percentage of fast-twitch fibers. Strength and conditioning professionals could potentially use this information to assess talent or improve program design prescriptions, but this application requires more research. Future studies should continue to explore the FT composition of high-level female athletes from a variety of sports. Moreover, long-term research could assist in understanding the specific roles of genetic inheritance vs. training in the development of high concentrations of MHC IIa fibers. Acknowledgments: This study was funded in part through a donation from Renaissance Periodization.

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Analyzing Fat Free Mass Index in Division IAA Football Players

P. Collopy, M. Lane, L. Doernte, R. Bean, and Z. Owsley

Eastern Kentucky University

Introduction: The body mass index (BMI) is not the best method of predicting body composition in athletes. Athletes typically possess above-average amounts of lean muscle mass, and BMI does not accurately predict body composition in most athletes because it does not account for lean muscle mass in its calculation. Purpose: The purpose of this study was to examine fat-free mass index (FFMI) data from a Division IAA football team to determine whether significant inter-positional differences in FFMI existed. The secondary purpose was to compare the FFMI data of the college football team with FFMI data from 2006–2013 NFL Combine athletes. Methods: FFMI was calculated in 64 Division IAA football players using data derived from air displacement plethysmography (ADP; Cosmed, USA) and dual-energy x-ray absorptiometry (DEXA; GE) scans. ADP measures total body mass with a precise scale and then uses air displacement to derive body volume; body composition is then estimated through body density formulas appropriate for age, gender, and ethnicity. The DEXA uses a low-dose x-ray beam to determine body composition. Height was recorded by stadiometer. Athlete total mass (103.23 ± 24.16 kg), body fat (18.86 ± 8.6%), and height (2.11 ± 2.07 m) were recorded through both measures. FFMI was calculated as fat-free mass divided by height squared: FFMI = [body mass (kg) × (100 − BF%)/100]/height2 (kg·m−2). Additionally, FFMI was calculated for 3,614 athletes from the 2006–2015 NFL Combine; the NFL Combine athletes' information was accessed online through publicly available data. A one-way ANOVA (SPSS) was used to determine whether FFMI differed significantly when comparing position groups on the college football team to the NFL Combine athletes. Results: A post hoc LSD comparison determined that running backs (FFMI = 26.42 ± 1.17) had a significantly different FFMI than DBs, WRs, LBs, and TEs. WRs (FFMI = 21.55 ± 2.19) had a significantly different FFMI from every position except TEs, DBs, and STs. The avg.
college football RB FFMI examined was 26.42 ± 1.17, and the NFL Combine RBs' avg. FFMI was 26.76. The avg. college football OL FFMI examined was 25.12 ± 1.58, and the NFL Combine OLs' avg. was 28.04. Conclusions: The FFMI average for the college football running backs was very similar to the FFMI calculated for NFL Combine running backs. NFL offensive linemen had the highest FFMI average of all position groups across college football and the NFL Combine but did not have the lowest average body fat percentage by position group. Practical Applications: These FFMI findings are preliminary; further investigation must be done to determine if FFMI is a useful predictor of a football player's position-specific muscular development. Additional research should consider factoring in the size of the athlete to determine if FFMI is only a good indicator of muscular development for certain positions, because linemen tend to have a higher average FFMI than other positions, which could be due to their size. Additionally, normative values for the increase in FFMI observed in collegiate athletes over time should be investigated. Acknowledgments: A special thanks to the athletes who participated in the study and to the people of Eastern Kentucky University's Exercise Physiology Lab.
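The FFMI calculation used above (fat-free mass divided by height squared) can be sketched as follows; the example inputs are illustrative, not a specific athlete's values:

```python
# FFMI as fat-free mass (kg) divided by height squared (m^2), the calculation
# used above. The example inputs are illustrative, not a specific athlete's.

def ffmi(body_mass_kg, body_fat_pct, height_m):
    """Fat-free mass index in kg per square meter."""
    fat_free_mass = body_mass_kg * (100 - body_fat_pct) / 100
    return fat_free_mass / height_m ** 2

# A hypothetical 103.2 kg player at 18.9% body fat and 1.87 m tall:
print(round(ffmi(103.2, 18.9, 1.87), 2))  # roughly 23.9
```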

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Visceral Adipose Tissue Norms in Adults Ages 18–75 Years Measured Using Dual Energy X-ray Absorptiometry

K. Hirsch,1 M. Blue,2 E. Trexler,1 K. Anderson,1 A. Pihoker,3 A. Peterjohn,1 and A. Smith-Ryan1

1 University of North Carolina at Chapel Hill; 2 University of North Carolina Chapel Hill; and 3 University of Pittsburgh

In populations such as first responders, retired athletes, or athletes who compete at a higher body fat percentage, visceral adipose tissue (VAT) poses increased risk for metabolic, inflammatory, and endocrine dysfunction. Newer technology, such as dual-energy x-ray absorptiometry (DEXA), automatically estimates VAT, providing a cost-effective and lower-radiation alternative to CT scans. However, there are limited normative data with which to compare results and quantify potential VAT-associated disease risk. Purpose: To examine DEXA-derived VAT values and create normative data stratified by sex and age. Methods: Data from 646 adults (males: n = 348; females: n = 298), 18–75 years of age, who participated in various research studies between 2015 and 2018 were pooled for a cross-sectional analysis (mean ± SD [range]: males: height = 180.9 ± 7.7 cm [160.0–203.2 cm]; weight = 86.9 ± 21.4 kg [50.0–162.5 kg]; BMI = 26.4 ± 5.3 kg·m−2 [17.5–44.2 kg·m−2]; females: height = 164.7 ± 6.7 cm [148.8–189.5 cm]; weight = 70.1 ± 19.3 kg [41.3–152.6 kg]; BMI = 25.8 ± 6.9 kg·m−2 [17.9–54.8 kg·m−2]). From a total body DEXA scan, VAT mass (kg) was quantified from the software-delineated region of interest, defined as 20% of the distance spanning from the top of the iliac crest to the base of the skull. Separate VAT mass percentiles were stratified by age range (18–19 years; 20–24 years; 25–50 years; 50+ years) for males and females. Results: In both males and females, VAT mass was positively associated with age (R = 0.655–0.684; p < 0.0001). In males, VAT mass ranged from 0.0 to 4.5 kg (mean ± SD: 0.8 ± 0.9 kg; interquartile range [IQR] = 0.2–1.1 kg). The 50th percentile for VAT mass in males, stratified by age, was: 18–19 years (0.2 kg; IQR = 0.1–0.3 kg), 20–24 years (0.2 kg; IQR = 0.2–0.4 kg), 25–50 years (0.7 kg; IQR = 0.4–1.4 kg), and 50+ years (1.6 kg; IQR = 1.1–2.2 kg). In females, VAT mass ranged from 0.0 to 3.8 kg (mean ± SD: 0.4 ± 0.7 kg; IQR = 0.03–0.6 kg).
The 50th percentile for VAT mass in females, stratified by age was: 18–19 years (0.04 kg; IQR = 0.01–0.1 kg), 20–24 years (0.06 kg; IQR = 0.01–0.1 kg), 25–50 years (0.5 kg; IQR = 0.2–1.2 kg), and 50+ years (1.2 kg; IQR = 0.7–1.7 kg). Conclusions: Measures of VAT provide important information about metabolic and cardiovascular health risk beyond that of BMI and percent body fat. These normative values provide a reference for VAT mass in adults 18–75 years of age. Practical Applications: Normative data can be used by practitioners to evaluate if amounts of VAT are greater than expected. Based on the normative values presented, a 20-year-old male with 0.5 kg of VAT would be near the 90th percentile for his age, well above an average amount. In contrast, a 40-year-old female with the same amount of VAT would be near the 50th percentile for her age.
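Using the published norms as a lookup, a practitioner could classify a measured VAT mass against the median and IQR for the relevant sex and age band. A sketch using the male values transcribed from the abstract above:

```python
# Classifying a measured VAT mass against the male norms reported above
# (25th/50th/75th percentiles per age band, transcribed from the abstract).

MALE_VAT_NORMS = {  # age band -> (25th, 50th, 75th percentile VAT mass, kg)
    "18-19": (0.1, 0.2, 0.3),
    "20-24": (0.2, 0.2, 0.4),
    "25-50": (0.4, 0.7, 1.4),
    "50+":   (1.1, 1.6, 2.2),
}

def classify_vat(vat_kg, age_band, norms=MALE_VAT_NORMS):
    q25, q50, q75 = norms[age_band]
    if vat_kg < q25:
        return "below the 25th percentile"
    if vat_kg <= q75:
        return "within the interquartile range"
    return "above the 75th percentile"

# The abstract's example: 0.5 kg VAT in a 20-year-old male is well above average.
print(classify_vat(0.5, "20-24"))  # above the 75th percentile
```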

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Evaluating the Impact of Lean Mass and Body Fat Percentage on Broad Jump Performance

A. Bosak, M. Phillips, R. Sanders, J. Feister, H. Nelson, R. Lowell, and B. Ziebell

Liberty University

The broad jump (BJ) test is often utilized to determine how far a person can jump and is a measure of an individual's horizontal power. Prior studies have assessed the impact that anthropometric and body composition values have on vertical jump (VJ) performance in at least averagely fit males and females, but only 2 studies have evaluated the impact of leg lean mass and trunk lean mass on VJ performance. Furthermore, it appears that no study has specifically evaluated the relationship between body fat percentage (BF%), skeletal muscle mass (SMM), leg lean mass (LLM), and trunk lean mass (TLM) and BJ performance in at least averagely fit females. Purpose: To investigate the relationship between BF%, SMM, LLM, TLM, and BMI and BJ performance in at least averagely fit college-age females. Methods: After collecting descriptive data, 33 above-averagely fit college-age females had their BF%, SMM, LLM, and TLM assessed via a body composition analysis system and their BMI calculated. Participants then completed an 8-minute dynamic warm-up, followed by a 4-minute passive recovery (PR) period, and then completed 4 familiarization jumps (i.e., trials) using a BJ measurement device. After another 4-minute PR period, subjects completed one series of 4 jumps with 60 seconds of PR between each jump. Pearson correlations were then computed between BF%, SMM, LLM, TLM, BMI, and BJ (i.e., the farthest of the 4 jumps), with significance set at p ≤ 0.05. Results: There was a moderate negative correlation between BF% and BJ (r = −0.551, p = 0.001) and a low negative correlation between BMI and BJ (r = −0.358, p = 0.041). Also, a low positive correlation occurred between SMM and BJ (r = 0.200, p = 0.265) and TLM and BJ (r = 0.200, p = 0.267), while no relationship occurred between LLM and BJ (r = 0.182, p = 0.312).
Conclusions: BF% appears to have a moderate negative relationship with BJ performance in at least averagely fit females, while SMM, TLM, LLM, and BMI have little to no relationship with BJ performance. Practical Applications: The current study's results suggest that a lower BF% may moderately predict farther jumping performance in at least averagely fit females. Prior VJ studies did suggest a stronger relationship between lower body fat percentage values and higher VJ performance, but it cannot be assumed that an individual with a lower BF% will definitely jump farther during a BJ test or higher during a VJ test. Furthermore, the current study utilized BIA as the body fat percentage measurement tool, so future research may be required to determine if gender, fitness level, sport specificity, or a different type of body fat percentage measurement technique plays a factor in whether BF%, SMM, LLM, TLM, and BMI have a relationship with broad jump performance.
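The Pearson product-moment correlation used throughout these results can be computed directly. The body fat and jump values below are hypothetical, not the study's data:

```python
# Pearson product-moment correlation, the statistic used above. The body fat
# and broad jump values are hypothetical, not the study's data.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

bf_pct = [18.0, 22.5, 25.1, 28.3, 31.0]   # hypothetical BF% values
jump_m = [2.10, 2.02, 1.95, 1.90, 1.82]   # hypothetical broad jumps (m)
print(round(pearson_r(bf_pct, jump_m), 3))  # strongly negative
```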

Thursday, July 12, 2018, 12:00 PM–1:30 PM


Influence of Total Body Water Estimation When Measuring Body Fat Percentage With a Dual Energy X-ray Absorptiometry-Based 4-Compartment Model

Z. Cicone,1 C. Holmes,1 B. Welborn,2 B. Hornikel,2 J. Moon,3 T. Freeborn,2 and M. Esco1

1 University of Alabama; 2 The University of Alabama; and 3 ImpediMed

Introduction: Dual-energy x-ray absorptiometry (DXA) has been shown to be a valid method of determining both total bone mineral (Mo) and body volume (BV) for use in a 4-compartment (4C) model for assessing body fat percentage (BF%). Total body water (TBW) is also a component of the 4C model, but its measurement usually requires the relatively invasive procedure of consuming deuterium oxide (D2O) and measuring its urinary output. However, bioimpedance spectroscopy (BIS) has been shown to provide a valid estimate of TBW when compared to the D2O technique. Therefore, it is possible that similar BF% values could be obtained from a 4C model in which Mo and BV were determined with DXA and TBW was determined either with D2O or BIS. Purpose: The purpose of this study was to determine the agreement between a 4C model when TBW was measured with D2O vs. predicted with BIS, while both Mo and BV were measured with DXA. Additionally, total regional percent fat from the DXA was compared to the D2O-based 4C BF%. Methods: Sixty-eight healthy adults (61.8% female, age = 26.1 ± 9.7 years, height = 170.5 ± 8.5 cm, weight = 72.6 ± 15.0 kg) volunteered to participate in this study. All BF% calculations were made using the 4C model. A calibrated DXA was used to measure Mo and BV. TBW was measured with D2O and estimated with 2 separate BIS devices (BIS1 and BIS2). Total regional percent fat was measured directly from the DXA. Agreement of BIS1, BIS2, and DXA with D2O was assessed using the Bland-Altman method. Results: Values are reported as mean ± standard deviation (SD). Criterion 4C BF% values ranged from 9.43 to 57.54% (31.45 ± 9.81%). There were no significant differences between the three 4C methods (effect sizes ranged from −0.08 to 0.00), and strong correlations were found (r values ranged from 0.92 to 0.94, all p < 0.01). Compared to the 4C method with D2O, the limits of agreement (bias ± 1.96 × SD) were −0.76 ± 7.01% for the 4C model via BIS1 and −0.03 ± 6.66% for the model via BIS2.
Total regional percent fat from the DXA significantly underestimated 4C BF% with D2O (ES = −0.30, p < 0.01) and produced wide limits of agreement (−2.89 ± 7.58%). Conclusions: The results of this study show that when TBW is measured with BIS and used within a 4C model in which DXA measures BV and Mo, the BF% estimates are very similar to those obtained with the criterion D2O method for TBW. The 2 BIS-based models showed no significant differences, strong correlations, and tight limits of agreement. Using total regional percent fat values directly from the DXA resulted in significant underestimation of 4C BF%. Practical Applications: These findings suggest that BIS may be a useful surrogate for D2O dilution to estimate TBW for use in a 4C model. The use of DXA alone for assessment of body fat may result in a significant underestimation, so the use of a 4C model is preferred when body composition is a primary outcome variable. Acknowledgments: This study was funded by ImpediMed, Inc.
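The Bland-Altman statistics reported above reduce to the bias (mean of paired differences) and the limits of agreement (bias ± 1.96 SD of those differences). A sketch with hypothetical BF% pairs, not the study's measurements:

```python
# Bland-Altman agreement statistics as used above: bias and limits of
# agreement (bias +/- 1.96 SD of the paired differences). The BF% pairs
# are hypothetical, not the study's measurements.

def bland_altman(method_a, method_b):
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

bis_4c = [20.1, 25.4, 31.0, 35.2, 28.9]   # hypothetical 4C-with-BIS BF%
d2o_4c = [20.8, 25.1, 31.9, 35.0, 29.6]   # hypothetical 4C-with-D2O BF%
bias, lower, upper = bland_altman(bis_4c, d2o_4c)
print(f"bias = {bias:.2f}%, LoA = ({lower:.2f}%, {upper:.2f}%)")
```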

Thursday, July 12, 2018, 12:00 PM–1:30 PM


A Method of Utilizing Skinfold and Bioelectrical Impedance Analysis for Estimating Body Fat Percentage via Four-Compartment Model

B. Welborn,1 Z. Cicone,2 B. Nickerson,3 and M. Esco2

1 The University of Alabama; 2 University of Alabama; and 3 Texas A&M International University

Accurate measures of body composition, usually expressed as body fat percentage (BF%), require multi-compartment models calculated from multiple laboratory methods. For example, the Wang 4-compartment (4-C) method requires the measurement of body volume (BV) via hydrostatic weighing (UWW), total body water (TBW) via either isotopic measures or bioelectrical impedance spectroscopy (BIS), and bone mineral content (BMC) via dual-energy x-ray absorptiometry (DXA). However, these measurements can be estimated with field measures, such as the skinfold technique (SF) for BV and bioelectrical impedance analysis (BIA) for BMC and TBW. Therefore, it is possible that the components of a 4-C model could be obtained with these simple field techniques. Purpose: The purpose of this study was to determine if the BIA and SF techniques collectively could serve as surrogates for laboratory methods in the estimation of BF% via the 4-C model. Methods: A convenience sample of 141 participants (71 men, 70 women; 22 ± 4.9 years) participated in this study. BF% was determined via the Wang 4-C model: FM (kg) = 2.748(BV) − 0.699(TBW) + 1.129(Mo) − 2.051(BM), where total body bone mineral (Mo) was calculated as Mo = total body BMC (kg) × 1.0436 (23), and BF% = (FM/BM) × 100. The lab 4-C model (4-C Lab) involved DXA for BMC, UWW for BV, and BIS for TBW. The field 4-C model (4-C Field) involved BIA for both BMC and TBW and SF for BV. The BIA method was a single-frequency hand-to-foot device. The SF technique consisted of 7 sites (chest, triceps, subscapular, mid-axillary, suprailiac, abdomen, and thigh). Results: The mean ± SD of BF% was 20.5 ± 7.7% for 4-C Field and 22.0 ± 7.9% for 4-C Lab. The magnitude of the differences was calculated using Cohen's d effect size (ES).
Paired-samples t-test revealed a significant mean difference between the lab and field measures of 4-C BF%, t(140) = −5.979, p < 0.001 (ES = −0.19). Compared to the 4-C Lab, the 4-C Field demonstrated limits of agreement (bias ± 1.96 SD) of −1.49 ± 5.81%. The Pearson product-moment correlation coefficient between the 2 methods was r = 0.93 (p < 0.001). Conclusions: The 4-C Field showed a significant, yet trivial, mean difference when compared to the 4-C Lab. It also provided a near perfect correlation and narrow limits of agreement. Practical Applications: This study demonstrated a novel method for estimating BF% via the Wang 4-C equation by using the field techniques of BIA and SF. Therefore, instead of using expensive and time-consuming laboratory techniques, practitioners should consider using simpler field devices when using a multi-compartment model, especially with participants who may be apprehensive about water submersion or uncomfortable with laboratory testing in general. However, the 4-C Field should not be used as a surrogate for the 4-C Lab in research settings. Instead, it is recommended that researchers continue using the 4-C Lab method.
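
The Wang 4-C calculation quoted above is straightforward to implement. A minimal sketch, using hypothetical inputs (the BV, TBW, BMC, and body mass values below are illustrative, not study data):

```python
def wang_4c_percent_fat(bv_l, tbw_l, bmc_kg, bm_kg):
    """Percent body fat from the Wang 4-compartment model quoted above:
    FM (kg) = 2.748(BV) - 0.699(TBW) + 1.129(Mo) - 2.051(BM),
    with total body bone mineral Mo = BMC (kg) x 1.0436 and
    BF% = (FM/BM) x 100."""
    mo = bmc_kg * 1.0436
    fm = 2.748 * bv_l - 0.699 * tbw_l + 1.129 * mo - 2.051 * bm_kg
    return fm / bm_kg * 100.0

# Illustrative inputs only: 70 kg body mass, 66 L body volume,
# 40 L total body water, 2.8 kg bone mineral content
print(round(wang_4c_percent_fat(bv_l=66.0, tbw_l=40.0, bmc_kg=2.8, bm_kg=70.0), 1))  # → 18.8
```

Whether BV, TBW, and BMC come from the laboratory methods or the field surrogates, the equation itself is unchanged; only the inputs differ.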

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Anthropometric and Proportionality Characteristics in Female Collegiate Athletes Across Sports

L. Wentz,1 J. Arabas,2 L. Jorn,2 and J. Mayhew2

1 Appalachian State University; and 2 Truman State University

Introduction: Talent identification seeks to distinguish physical characteristics that may be beneficial to certain sports. Notably, the characteristics of female athletes are understudied compared to those of their male counterparts. Purpose: To identify sport-specific anthropometric proportionality characteristics in Division II female collegiate athletes that could provide a competitive advantage. Methods: Eighty-four female athletes from 6 sports were included in the analysis: basketball, softball, soccer, volleyball, track (sprinters), and track (throwers). Participants were measured for 8 skinfolds, 7 bone breadths, and 9 muscle circumferences according to the Anthropometric Standardization Reference Manual. Proportionality was analyzed using Phantom Z-scores for measurements that differed significantly between sports in one-way ANOVA. Results: There were significant differences in height, sum of 6 skinfolds, girths, and bone breadth measurements between sports (p < 0.001). Basketball and volleyball players were significantly taller than softball and soccer athletes (p < 0.050). Sum of 6 skinfolds was significantly lower for track sprinters compared to basketball (p = 0.049), softball (p < 0.001), and track throwers (p < 0.001), and significantly lower for soccer players compared to softball (p = 0.004) and throwers (p = 0.001). In muscle girths, basketball players had low neck circumference (Z-score = −2.3) and high calf circumference (Z-score = 5.4). Softball players had high waist circumference (Z-score = 2.6). Throwers had high circumferences for the flexed arm (Z-score = 2.3), forearm (Z-score = 2.3), waist (Z-score = 3.4), hip (Z-score = 2.7), and calf (Z-score = 2.1). Sprinters had small neck circumference (Z-score = −2.6). For bone breadths, softball players had large shoulder breadth (Z-score = 1.3) and hip breadth (Z-score = 1.5). Throwers had large hip breadth (Z-score = 1.6) and knee breadth (Z-score = 1.3).
Volleyball players had small elbow breadth (Z-score = −1.1) and knee breadth (Z-score = −1.4). Small knee breadth was also found in sprinters (Z-score = −1.5), basketball players (Z-score = −1.4), and soccer players (Z-score = −1.2). Conclusions: Female power athletes such as track throwers and softball players had larger muscle girths and bone breadths compared to athletes in speed and agility sports, suggesting that these physical characteristics are advantageous in their sports. Practical Applications: Differences in bone breadths were especially notable because they are genetically determined and may predispose athletes to select specific sports.
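
The Phantom Z-scores used above scale each measurement to a reference "phantom" stature before standardizing. A sketch of the Ross and Wilson phantom stratagem; the phantom mean and SD in the example are assumed placeholder values for illustration, not necessarily the published phantom constants:

```python
def phantom_z(value, height_cm, p, s, d=1.0):
    """Phantom Z-score (Ross & Wilson stratagem): the raw measurement
    is geometrically scaled to the phantom's 170.18 cm stature, then
    standardized against the phantom mean p and SD s; d is the
    dimensional exponent (1 for lengths, girths, and breadths)."""
    return (value * (170.18 / height_cm) ** d - p) / s

# Hypothetical example: a 55 cm thigh girth in a 175 cm athlete,
# with assumed phantom constants p = 55.82 and s = 4.23
z = phantom_z(55.0, 175.0, p=55.82, s=4.23)  # slightly negative
```

A negative Z-score indicates the measurement is proportionally smaller than the phantom reference, which is how the small and large breadths above should be read.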

Thursday, July 12, 2018, 12:00 PM–1:30 PM

The Effects of Overground and Treadmill Sprint Training on Body Composition

S. Dorgo,1 J. Perales,2 and S. Montalvo3

1 The University of Texas at El Paso; 2 El Paso Community College; and 3 University of Texas at El Paso

Recent studies have investigated the effects of sprint training on athletic performance, but little is known about its effects on body composition. Furthermore, it is unclear whether different sprint training modalities have different effects on body composition and lean mass. Purpose: To determine the effects of a 6-week track (TR) and treadmill (TM) sprint training intervention on body composition in males and females in comparison to a non-exercising control group. Methods: Forty-nine recreationally active subjects were recruited for this study (Age ± SD = 23.37 ± 2.83 years; BMI ± SD = 24.50 ± 3.83 kg·m−2) and were randomly assigned to the TR (n = 21; 11 males, 10 females), TM (n = 20; 10 males, 10 females), and control (n = 8; 5 males, 3 females) groups. Training groups performed 2 training sessions weekly with 4 maximal effort sprints per session and 3–4 minutes rest between attempts. The TR group trained exclusively on the track, while the TM group trained exclusively on the treadmill. Sprint attempts were closely replicated between the 2 conditions, with a progressive acceleration to maximal speed and 5–6 seconds of maximal sprint speed maintained before deceleration. TM subjects were suspended in a safety harness in case they lost control during sprint attempts. Body composition was measured using dual-energy X-ray absorptiometry (DXA) at baseline and after the 6-week sprint training intervention. Paired samples t-tests were conducted to assess mean pre- to post-test differences within groups, and independent samples t-tests for between-group differences. Results: There were no significant between-group differences at pre-test for any of the DXA measures (p ≥ 0.142). DXA data were analyzed for total mass, total body tissue, fat mass, lean mass, and leg lean mass. No significant changes in any of these measures were observed for Control subjects (p ≥ 0.191).
In contrast, TR subjects showed a significant decrease in total body fat (15.63 ± 8.07 kg to 14.91 ± 7.48 kg; p = 0.017) and a significant increase in leg lean mass (15.54 ± 3.66 kg to 16.50 ± 3.59 kg; p = 0.002). For TM subjects, significant decreases were observed for total body tissue (66.36 ± 12.10 kg to 65.87 ± 12.07 kg; p = 0.043) and total body fat (17.04 ± 7.02 kg to 16.25 ± 7.42 kg; p = 0.016), and a significant increase for leg lean mass (16.37 ± 4.36 kg to 17.05 ± 5.03 kg; p = 0.012). Analyzing sex differences in body composition adaptations, we observed significant differences between males and females in changes in total body lean mass for the TR and TM groups (p < 0.001), with males showing greater improvements, but no sex differences for Control (p > 0.05). Similar trends were observed for leg lean mass, with significant sex differences in the TR and TM groups (p < 0.0001) but not in the Control group (p > 0.05). Conclusions: Sprint training appears to improve body composition regardless of training modality. Males appear to respond better in lean mass improvements for both treadmill and overground sprint training. Practical Applications: Sprint training with maximal effort repeated sprints may be an effective tool to positively impact the body composition of recreationally trained individuals and elicit improvements in lean mass. To achieve these outcomes, sprint training may be performed either overground or on a high-speed motorized treadmill.
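
The within-group comparisons above rest on paired-samples t-tests. A minimal stdlib sketch of the test statistic, using made-up pre/post values rather than study data:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired-samples t statistic (pre minus post) and degrees of
    freedom, as used for the within-group comparisons above."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical pre/post leg lean mass (kg) -- not study data
t, df = paired_t([15.2, 14.8, 16.1, 15.9, 15.0, 16.4],
                 [16.0, 15.5, 16.9, 16.6, 15.8, 17.2])
# t is large and negative here because every subject gained mass
```

The p-value would then come from the t distribution with n − 1 degrees of freedom.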

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Utility of a 4-Compartment Dual-Energy X-ray Absorptiometry-Derived Body Composition Estimate in Normal-Weight and Overweight Adults

E. Trexler,1 K. Anderson,1 A. Pihoker,2 G. Gerstner,1 K. Hirsch,1 M. Blue,3 A. Peterjohn,1 E. Ryan,1 and A. Smith-Ryan1

1 University of North Carolina at Chapel Hill; 2 University of Pittsburgh; and 3 University of North Carolina Chapel Hill

Monitoring body composition can play a critical role in assessing the health and physical fitness of athletes, in addition to the efficacy of strength and conditioning programs. Many common methods of body composition assessment yield 2- or 3-compartment (3C) models, which assume a constant value for the hydration fraction of fat-free mass. 4-compartment (4C) models aim to enhance the accuracy of estimates by accounting for total body water (TBW). A 4C model using dual-energy x-ray absorptiometry (DEXA) to estimate body volume has been validated in overweight and obese adults, but its use among more heterogeneous samples of adults has yet to be explored. Purpose: To assess the impact of using DEXA-derived 4C body composition estimates in a heterogeneous sample of healthy adults, in comparison to standard 3C DEXA estimates. Methods: Healthy participants (n = 226; 119 male, 107 female; mean ± SD: age, 20.8 ± 3.3 years; height, 172.1 ± 9.8 cm; weight, 67.6 ± 10.7 kg; body mass index [BMI], 22.7 ± 2.6 kg·m−2) between the ages of 18 and 35 years (BMI range: 17.6–36.3 kg·m−2) completed a single laboratory visit to determine body composition. Prior to testing, participants were instructed to fast and avoid caffeine for a minimum of 8 hours, and to abstain from strenuous exercise, alcohol, and tobacco for at least 24 hours. DEXA scans were performed by a trained technician according to manufacturer's instructions, and 3C estimates were computed using the default software. To compute 4C estimates, a previously validated equation was used to estimate body volume from DEXA, and TBW was assessed via bioelectrical impedance spectroscopy. Data were analyzed via t-tests, multiple regression, and segmented regression. Results: Body fat percentage (BF%) was significantly lower using 4C compared to 3C (18.7 ± 7.1 vs. 24.6 ± 6.9%, p < 0.001). As such, 4C yielded lower estimates of fat mass (12.8 ± 5.9 vs. 16.6 ± 5.3 kg, p < 0.001) and higher estimates of fat-free mass (54.8 ± 9.0 vs. 
50.9 ± 9.8 kg, p < 0.001). For BF%, differences between 4C and 3C were larger in females compared to males (−7.1 ± 3.2 vs. −4.9 ± 2.8%, p < 0.001). Differences in BF% were linearly related to BMI (r = 0.40, p < 0.001), with 4C yielding particularly low BF% estimates compared to 3C in individuals with lower BMIs. Break point analysis for segmented regression indicated that the slope of the linear relationship between BF% differences and BMI changed at a BMI value of 25.1 kg·m−2, which closely corresponds with the overweight BMI threshold of 25.0 kg·m−2. The combination of sex and BMI explained 29% of the variance in the differences between 4C and 3C BF% estimates, with significant independent contributions from both predictor variables (p < 0.001). Conclusions: DEXA-derived 4C estimates of BF% were substantially lower than traditional 3C DEXA estimates. The difference between estimates was associated with both sex and BMI; larger differences were observed in females and in individuals with BMIs under 25.1 kg·m−2. To establish validity in individuals with lower BMIs, future research should compare DEXA 4C estimates to a gold standard criterion measurement in this population. Practical Applications: When assessing body composition in athletes, 4C models that derive body volume from DEXA produce highly divergent BF% estimates in leaner individuals and females when compared to DEXA 3C, and should be interpreted cautiously. Acknowledgments: This research was supported by funds provided by Naturex and the National Strength and Conditioning Association Foundation.

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Establishing Normative Fat Free Mass Index Values in Female Athletes

M. Blue,1 K. Hirsch,2 E. Trexler,2 A. Pihoker,3 A. Peterjohn,2 K. Anderson,2 and A. Smith-Ryan2

1 University of North Carolina Chapel Hill; 2 University of North Carolina at Chapel Hill; and 3 University of Pittsburgh

Body size and training vary greatly across sports. Fat-free mass index (FFMI) accounts for height and therefore may provide insight beyond fat-free mass (FFM) alone when assessing body composition in female athletes. Purpose: To investigate FFMI in female athletes in order to establish normative and upper-limit values. Methods: Data presented in this cross-sectional study were collected from 2 separate studies targeting trained females. A subset of 174 trained females (Mean ± SD; Age: 19.6 ± 1.6 years, Height: 165.7 ± 6.9 cm, Weight: 61.6 ± 9.2 kg, %Body fat: 24.8 ± 5.3%) were included in analyses. Dual-energy X-ray absorptiometry was used to measure fat mass, bone mineral content (BMC), and lean mass (LM). Fat-free mass index was calculated using the following equations: (a) FFM = BMC + LM; (b) FFMI = FFM/height², where BMC, LM, and FFM were reported in kilograms and height was measured in meters. Participants were classified by sport: Gymnastics (n = 38), Cross Country (XC, n = 39), Track (n = 22), Swimming and Diving (n = 28), and Resistance-trained (RT, n = 47). Resistance-trained females were included if they had participated in a minimum of 6 months of resistance training prior to study enrollment. All other participants were Division I varsity athletes in their respective sports. Results: For all females, mean ± SD for FFMI was 16.6 ± 1.7 kg·m−2. The minimum and maximum FFMI values were 13.3 kg·m−2 and 25.5 kg·m−2, respectively. The interquartile range (IQR) for all females was 15.5–17.4 kg·m−2, and the 90th percentile was 18.7 kg·m−2. When classified by sport (Figure 1), XC (FFMI = 15.3 ± 0.96 kg·m−2; IQR = 14.7–15.8 kg·m−2) was significantly lower than all other female athletes (p < 0.01).
FFMI values for all other athletes were as follows: Gymnastics: FFMI = 17.3 ± 1.4 kg·m−2, IQR = 16.6–18.0 kg·m−2; Track: FFMI = 17.5 ± 1.4 kg·m−2, IQR = 16.5–18.6 kg·m−2; Swimming: FFMI = 16.9 ± 1.3 kg·m−2, IQR = 16.1–17.9 kg·m−2; RT: FFMI = 16.7 ± 2.2 kg·m−2, IQR = 15.6–17.4 kg·m−2. Conclusions: Establishing sport-specific normative FFMI values for female athletes may benefit athletes, strength and conditioning coaches, and personal trainers by leading to more appropriate body composition goals. Future investigations should examine the relationship between FFMI percentile cutoffs and performance. Practical Applications: Determining normative FFMI values across various sports and modalities of training may allow strength and conditioning professionals to create training regimens based on recommended FFMI, in addition to overall body fat percentage. Based on the results of this study, a female gymnast of average height (160 cm) with a FFMI of 15 kg·m−2 may gain approximately 5.9 kg of FFM in order to match the mean FFMI (17.3 kg·m−2) of her cohort. Identifying how to maximize an athlete's body composition to achieve optimal performance is important as the athlete develops throughout a collegiate and/or professional career. Acknowledgments: This study was supported by funds provided by the National Strength and Conditioning Association Foundation.
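
The two FFMI equations above combine into a one-line computation. A sketch with illustrative inputs (the BMC, lean mass, and height values are made up, chosen to land near the cohort mean):

```python
def ffmi(bmc_kg, lean_kg, height_cm):
    """Fat-free mass index from the abstract's equations:
    FFM = BMC + LM (kg); FFMI = FFM / height(m)^2, in kg/m^2."""
    h_m = height_cm / 100.0
    return (bmc_kg + lean_kg) / h_m ** 2

# Illustrative only: 2.5 kg BMC, 43.0 kg lean mass, 165.7 cm stature
print(round(ffmi(2.5, 43.0, 165.7), 1))  # → 16.6
```

Rearranging the same formula gives the FFM an athlete would need at a target FFMI: FFM = FFMI × height(m)².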

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Relationships Among Muscle Quality and Rowing Performance: A Pilot Trial

O. Nabavizadeh, Y. Koh, and A. Herda

University of Kansas

Muscle quality can be defined as muscular strength relative to muscle mass. Rowing is a popular sport in which muscle mass and strength are critical for providing maximal drive, but which also requires a coordinated upper-body pull on the oar. Purpose: The purpose of this study was to explore the relationships between measurements of muscle quality and 6,000 meter rowing performance in college-aged men. Methods: Seven members of the men's crew team (mean ± SD: age: 22.7 ± 4.6 years; ht: 183.4 ± 4.2 cm; wt: 81.1 ± 11.6 kg) were recruited to participate in this study during their competition season. Participants were asked to refrain from caffeine and to visit the laboratory on a single occasion, euhydrated and rested, for testing. After assessment of body mass and stature using a digital platform scale and wall-mounted stadiometer, respectively, subjects rested supine on a padded table for at least 10 minutes. All participants underwent bioelectrical impedance analysis (BIA) following manufacturer guidelines. Contact electrodes were placed on the wrist in line with the ulnar head and 5 cm distal on the dorsal surface of the hand, and 2 additional electrodes were placed on the ankle between the lateral and medial malleoli and 5 cm distal on the dorsal surface of the foot. Participant height, weight, and age were entered into the device, and the measurement was started with arms and legs not touching the body or each other. Data were stored and recorded for subsequent calculation of segmental fat-free mass (FFM) using the prediction equation of Kaysen et al., 2005. Subjects then completed a 1-repetition maximum (1-RM) test using a plate-loaded 45° hip sled. After 2–3 warm-up sets at increasingly heavier weights, an initial 1-RM was attempted. Each attempt was a single repetition, with 3–5 minutes rest between subsequent attempts. A single rowing time trial of 6,000 meters was completed at maximal effort.
Final time and 500 meter split times were recorded to the nearest 10th of a second. Muscle quality was calculated as the maximal 1-RM in kg divided by leg FFM. Pearson's r correlation coefficients were determined between muscle quality and time trial performance. Results: The average 6,000 meter time trial was 1,392.7 ± 97.0 seconds (range: 1,295.3–1,569.5 seconds) and the average 500 meter split was 116.1 ± 8.0 seconds (range: 107.9–130.8 seconds). The correlation between muscle quality and 6,000 meter time trial performance was r = −0.937, p < 0.05, and the correlation between muscle quality and the average 500 meter split was r = −0.935, p < 0.05. Conclusions: The results of this study indicated a strong negative correlation between muscle quality, calculated from leg strength and lower-body FFM, and rowing time trial performance. As lower-body muscle quality increased, rowing performance on a stationary ergometer improved. Practical Applications: Coaches often monitor strength and performance throughout a season to ensure athletes are improving or maintaining optimal performance. By adding a simple BIA measurement, coaches can also monitor FFM and muscle quality in their athletes. Based on the results of this study, the evaluation of muscle quality can help ensure athletes are comparable when making decisions on boat lineups and teammate arrangements for competition.
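
The muscle quality definition and the reported correlations reduce to two small computations. A sketch using hypothetical rower data (neither the strength values nor the times are from the study):

```python
import math

def muscle_quality(one_rm_kg, leg_ffm_kg):
    """Muscle quality as defined above: hip-sled 1-RM (kg) divided by
    BIA-derived leg fat-free mass (kg)."""
    return one_rm_kg / leg_ffm_kg

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

# Hypothetical rowers -- not study data: higher muscle quality paired
# with faster (smaller) 6,000 m times gives a strong negative r
mq = [14.2, 15.0, 15.8, 16.5, 17.1]
times_s = [1480, 1440, 1410, 1370, 1340]
r = pearson_r(mq, times_s)
```

A negative r here means exactly what the abstract reports: as muscle quality rises, time-to-completion falls.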

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Seasonal and Longitudinal Changes in Body Composition by Sport-Position in NCAA Division I Basketball Athletes

J. Fields, J. Merrigan, J. White, and M. Jones

George Mason University

Body composition (BC) is often a marker of athlete health and sports performance. Generally, excess body fat (BF%) and fat mass (FM) are unfavorable in sport, while increased fat-free mass (FFM) is beneficial. Previous research has examined cross-sectional BC data of collegiate athletes, as well as BC changes from off- to pre-season. However, limited data exist regarding seasonal and longitudinal BC changes for men's and women's collegiate basketball. Purpose: To document changes in BF%, FM, FFM, and body mass (BM) across season, year, and sport-position in a large sample of National Collegiate Athletic Association (NCAA) Division I men and women basketball athletes. Methods: NCAA Division I men's and women's basketball athletes (MBB, n = 127; WBB, n = 196) participated. BF%, FM, and FFM were assessed using air displacement plethysmography. Tests for significant differences across sport-position were performed via independent samples t-tests with a Bonferroni correction, which reduced the level of significance (p < 0.0167). Separate repeated measures analyses of variance (MBB, WBB) were used to assess seasonal (pre-season, in-season, off-season) (MBB, n = 16; WBB, n = 29) and longitudinal (freshman, sophomore, junior) (MBB, n = 14; WBB, n = 8) changes. Results: For both MBB and WBB, guards had lower BF% (MBB: 8.6 ± 3.3% vs. 14.9 ± 4.8%, p < 0.001; WBB: 19.2 ± 6.3% vs. 24.2 ± 5.7%, p < 0.001), FM (MBB: 7.4 ± 3.1 kg vs. 15.9 ± 5.7 kg, p < 0.001; WBB: 13.4 ± 5.4 kg vs. 20.5 ± 7.4 kg, p < 0.001), FFM (MBB: 77.7 ± 6.4 kg vs. 89.4 ± 7.5 kg, p < 0.001; WBB: 54.63 ± 64.4 kg vs. 61.8 ± 6.0 kg, p < 0.001), and BM (MBB: 85.2 ± 7.5 kg vs. 105.3 ± 8.1 kg, p < 0.001; WBB: 68.0 ± 7.4 kg vs. 82.2 ± 12.5 kg) than forwards. While no seasonal differences were observed in MBB, a statistically significant reduction in FFM was observed in WBB during in-season play (58.8 ± 6.5 kg at the beginning of in-season to 55.9 ± 6.7 kg at the beginning of off-season, p < 0.05).
Across years, MBB increased FFM from freshman (82.0 ± 8.4 kg) to sophomore year (83.5 ± 8.4 kg) (p < 0.05), and FFM remained unchanged through junior year (84.1 ± 9.1 kg). No differences were observed in WBB across years for BM, BF%, or FM. FFM did not increase from freshman to sophomore year, but a significant increase occurred from sophomore (56.2 ± 3.9 kg) to junior year (57.7 ± 4.0 kg). Conclusions: Sport-position differences existed for MBB and WBB, such that guards were smaller and leaner than forwards, perhaps due to differences in physiological demands. Guards spend considerably more time sprinting and engaging in specific high intensity movements compared to forwards, suggesting the need for a smaller body size better suited to rapid movement. Although MBB and WBB showed increased FFM across years, WBB lost FFM during the in-season. Practical Applications: Given the known importance of BC and the aforementioned seasonal and longitudinal shifts, strength and conditioning practitioners should periodically assess athletes to ensure preservation of FFM. Coaches and sport dietitians may alter training and nutrition programming when increased BF% and FM or decreased FFM are observed.
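
The Bonferroni correction above divides the family-wise alpha by the number of comparisons; the reported threshold of p < 0.0167 is consistent with 3 comparisons at a family-wise alpha of 0.05:

```python
def bonferroni_alpha(alpha, n_comparisons):
    """Bonferroni-adjusted per-comparison significance level."""
    return alpha / n_comparisons

# 0.05 / 3 reproduces the abstract's threshold of p < 0.0167
print(round(bonferroni_alpha(0.05, 3), 4))  # → 0.0167
```

The trade-off is familiar: the corrected threshold controls the family-wise error rate at the cost of statistical power for each individual comparison.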

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Extracellular Fluid Volume as a Predictor of Normal Weight Obesity Among College Students

A. Peterjohn,1 K. Anderson,1 K. Hirsch,1 M. Blue,2 A. Pihoker,3 E. Trexler,1 and A. Smith-Ryan1

1 University of North Carolina at Chapel Hill; 2 University of North Carolina Chapel Hill; and 3 University of Pittsburgh

Introduction: Obesity has frequently been associated with higher extracellular fluid (ECF) stores in the body compared to normal weight individuals. Elevated ECF is associated with conditions such as edema, hypertension, and congestive heart failure. Individuals are classified as normal weight obese (NWO) if they have a normal body mass index (BMI) with elevated body fat levels. These individuals have been shown to be at higher risk for developing cardiometabolic and endocrine dysfunction compared to normal weight lean (NWL) individuals, and extracellular fluid may be a differentiating factor. Purpose: To compare extracellular fluid volume between NWO and NWL college students. Methods: Ninety-four normal weight (BMI = 18.5–24.9 kg·m−2) college students (Mean ± SD; Age: 19.9 ± 1.4 years; Height: 168.9 ± 3.8 cm; Weight: 62.9 ± 9.7 kg; 64 female, 30 male) volunteered to participate in the current study. All anthropometric and body composition assessments were conducted at the beginning of the Fall 2017 semester. Bioelectrical impedance spectroscopy was used to measure extracellular fluid volume (ECF). Body fat percentage (%BF) was estimated using dual energy X-ray absorptiometry (DEXA). Subjects were classified as either normal weight obese (NWO) or normal weight lean (NWL) according to a 50th percentile cut-off for %BF published in the National Health and Nutrition Examination Survey database. Males were considered NWO at ≥20.7 %BF (18–19 years old) or ≥26.1 %BF (20–25 years old); females were considered NWO at ≥34.0 %BF (18–19 years old) or ≥37.8 %BF (20–25 years old). Any subject with a %BF below these values was considered NWL. Results: No significant difference in ECF was observed between the NWO group (15.6 ± 3.7 L) and the NWL group (14.3 ± 3.0 L, p = 0.210).
BMI was moderately correlated with ECF in NWL subjects (r = 0.427; p < 0.001), while there was no significant correlation between BMI and ECF in NWO subjects (r = 0.070; p = 0.837). %BF was negatively correlated with ECF in both groups (NWO: r = −0.838, p = 0.001; NWL: r = −0.594, p < 0.001). Conclusions: In the present study, ECF volume was not significantly different between NWO and NWL college students. The lack of correlation between BMI and ECF in NWO subjects could be attributed to the higher adiposity levels in these subjects, as BMI in both groups was in the normal range. Practical Applications: Cardiometabolic and endocrine dysfunction may go unnoticed in NWO individuals due to their outwardly healthy appearance. Since differences in ECF volume between NWO and NWL individuals were not detected, other cardiometabolic and body composition factors should be emphasized when assessing health risk in this population.
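
The NWO/NWL classification rule above can be written out directly. A sketch encoding the stated sex- and age-specific %BF cut-offs (the function name and interface are mine, not from the study):

```python
def classify_normal_weight(sex, age, percent_bf):
    """Classify a normal-BMI subject as normal weight obese (NWO) or
    normal weight lean (NWL) using the 50th-percentile %BF cut-offs
    stated above (ages 18-19 vs. 20-25, by sex)."""
    if sex == "M":
        cutoff = 20.7 if age <= 19 else 26.1
    else:
        cutoff = 34.0 if age <= 19 else 37.8
    return "NWO" if percent_bf >= cutoff else "NWL"

print(classify_normal_weight("F", 19, 35.2))  # → NWO
print(classify_normal_weight("M", 21, 24.0))  # → NWL
```

Note that both classes share the same normal BMI range (18.5–24.9 kg·m−2); only %BF separates them.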

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Validity of Bodpod and Bioimpedance Spectroscopy When Used Alone or Together in a Multi-Compartment Model for Estimating Body Fat

C. Holmes,1 J. Moon,2 M. Esco,1 C. Tai,3 K. Crowley,4 and B. Spradley3

1 University of Alabama; 2 ImpediMed; 3 United States Sports Academy; and 4 Cphbusiness Academy

Introduction: Validated laboratory methods for measuring body fat percentage (%BF), such as the Siri 3-compartment (3-C) model using body volume (BV) from hydrostatic weighing (HW) and total body water (TBW) from deuterium oxide (D2O), are impractical for use in field settings. The BodPod (BP) has become an attractive alternative for measuring BV, but it has been shown to have many sources of error when estimating %BF in athletes with the Pace 2-compartment (2-C) model, primarily due to the assumption of a constant hydration of fat-free mass. Bioimpedance spectroscopy (BIS) is a technique that estimates TBW and is less invasive than D2O. Thus, it is possible that BP and BIS could be used together in a Siri 3-C model for assessing %BF that provides values comparable to the laboratory methods of HW and D2O. However, no research is available to validate this postulation. Purpose: The purpose of this study was to determine the accuracy of the BP as a stand-alone measure using the Pace 2-C model (BP2C), as well as with the addition of BIS in the Siri 3-C model (BP3C), in male and female athletes. Both methods were compared to the laboratory 3-C model (LAB3C), which used D2O and HW for determining TBW and BV, respectively. Methods: A total of 21 male (34.0 ± 10.7 years, 175.4 ± 6.2 cm, and 83.6 ± 13.8 kg) and 46 female (24.4 ± 9.3 years, 166.2 ± 5.8 cm, and 63.0 ± 8.5 kg) recreational athletes volunteered to participate in this study. Each participant performed a HW assessment and the D2O technique for laboratory values of BV and TBW, respectively. BP was also used to measure BV, as well as BP2C %BF, and TBW was also estimated with BIS. The Siri 3-C model employed the values of BV and TBW from HW and D2O, respectively, for LAB3C, and from BP and BIS, respectively, for BP3C. Results: Table 1 provides the validity statistics for the BP3C, BP2C, and stand-alone BIS %BF when compared to the criterion LAB3C method.
Conclusions: The results of the study demonstrated that the BP3C was superior to the BP2C when compared to the criterion LAB3C. Additionally, stand-alone BIS provided r values and limits of agreement similar to those of the BP3C. Practical Applications: It appears that BIS is useful as a stand-alone device for estimating %BF, as well as for measuring TBW in a 3-C model when BP is used for assessing BV. Acknowledgments: This study was funded by ImpediMed, Inc.
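
For reference, the Siri 3-C model named above is commonly written as %BF = (2.118/Db − 0.78·(TBW/BM) − 1.354) × 100, where body density Db = body mass / body volume. A sketch with illustrative inputs (not study data), applicable whether BV and TBW come from HW and D2O or from BP and BIS:

```python
def siri_3c_percent_fat(bm_kg, bv_l, tbw_l):
    """Siri 3-compartment (water) model, commonly written as
    %BF = (2.118/Db - 0.78*(TBW/BM) - 1.354) * 100,
    with body density Db = body mass / body volume."""
    db = bm_kg / bv_l
    return (2.118 / db - 0.78 * (tbw_l / bm_kg) - 1.354) * 100.0

# Illustrative inputs only: 70 kg body mass, 66 L body volume, 40 L TBW
print(round(siri_3c_percent_fat(70.0, 66.0, 40.0), 1))  # → 19.7
```

Because TBW enters the equation explicitly, the 3-C model relaxes the constant-hydration assumption that limits the Pace 2-C model.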

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Validation of Total Body Water Measurements Utilizing a Novel Metal Electrode Bioimpedance Spectroscopy Device in Comparison to Deuterium Oxide

B. Hornikel,1 J. Moon,2 M. Esco,3 B. Welborn,1 Z. Cicone,3 C. Holmes,3 and T. Freeborn1

1 The University of Alabama; 2 ImpediMed; and 3 University of Alabama

Introduction: Bioimpedance spectroscopy (BIS) has been used to estimate total body water (TBW) for body composition and health monitoring, offering an alternative to the more expensive and time-consuming criterion isotope method, deuterium oxide (D2O). The new device examined here utilizes metal electrodes and can be used in either a seated or standing position. The validity of these measurements in comparison to the criterion isotope method has not previously been examined. Purpose: To examine the validity of a BIS device utilizing metal electrodes in seated and standing positions for calculating TBW in comparison to the criterion measure, D2O. Methods: Sixty-nine subjects (m = 26, w = 43) participated in the study (26 ± 9.6 years, 170.4 ± 8.4 cm, 72.8 ± 15.0 kg). Measurements were taken standing with metal electrodes (StM) and sitting with metal electrodes (SiM). D2O was used as the criterion method to estimate TBW. Urine samples were collected from all subjects prior to D2O ingestion, and the post sample was collected at the 4-hour equilibration point. All measurements were taken following an 8–12 hour fast. Urine specific gravity values <1.030 were required for all subjects, indicating sufficient hydration. Results: Mean ± SD TBW values were 39.565 ± 8.725 L for SiM, 38.319 ± 8.329 L for StM, and 38.013 ± 8.107 L for D2O. SiM was significantly different from D2O (mean difference = −1.552 L, p = 0.001), while StM showed no significant difference from D2O (mean difference = −0.306 L, p = 0.468). Both SiM (SEE = 3.408 L, r = 0.909) and StM (SEE = 3.371 L, r = 0.911) showed similar standard errors of estimate (SEE) and r values relative to the criterion D2O TBW. Total error (TE) was 3.94 L for SiM and 3.465 L for StM. Wider limits of agreement were found for SiM (−8.695 to 5.591 L) vs. StM (−7.121 to 6.509 L). Conclusions: TBW estimates from SiM and StM were found to correlate strongly with the criterion D2O TBW estimate.
StM showed a stronger correlation, smaller SEE and TE, and tighter limits of agreement. Both SiM and StM appear to be acceptable alternatives to D2O for estimating TBW, with StM being the better alternative.
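
The agreement statistics reported above follow standard formulas. A stdlib sketch of bias, Bland-Altman 95% limits of agreement, and total error on made-up TBW values (differences taken as criterion minus device, matching the sign convention of the reported mean differences):

```python
import math
import statistics

def agreement_stats(device, criterion):
    """Bias (mean difference), Bland-Altman 95% limits of agreement
    (bias +/- 1.96 SD of the differences), and total error
    sqrt(mean squared difference)."""
    d = [c - x for x, c in zip(device, criterion)]
    bias = statistics.mean(d)
    sd = statistics.stdev(d)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    te = math.sqrt(sum(v * v for v in d) / len(d))
    return bias, loa, te

# Hypothetical device vs. criterion TBW values (L) -- not study data
bias, loa, te = agreement_stats([39.0, 41.5, 36.2, 44.0],
                                [38.0, 40.0, 36.5, 42.5])
```

Bias captures systematic over- or under-estimation, the limits of agreement capture the spread of individual errors, and TE combines both into a single figure, which is why StM's smaller TE marks it as the better alternative.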

Thursday, July 12, 2018, 12:00 PM–1:30 PM

The Effect of Continuous Cooling on Heart Rate Recovery Following Heat Stress in Elite Tennis Players

W. Dobbs,1 K. Crew,2 and M. Esco2

1 The University of Alabama; and 2 University of Alabama

Purpose: Heart rate recovery (HRR), a non-invasive reflection of parasympathetic rebound following exercise, has been shown to be delayed following exertion in the heat. However, cooling during athletic activity under heat stress may speed HRR. The primary aim of this investigation was to determine whether a continuous cooling method could improve HRR after an exhaustive bout of simulated tennis play in a hot environment. Methods: Eight male tennis athletes (20 ± 3 years of age, 74.5 ± 12.5 kg, 183.6 ± 6.6 cm) performed 3 randomly assigned, counter-balanced trials consisting of high intensity exercise simulating the physiological demands of a tennis match in temperate (TE) (22° C, rH 31%) or hot (37° C, rH 38%) conditions. During the hot trials, participants were provided with either no cooling (HOT) or continuous cooling with an ice vest (COOL) throughout the trial. Each high intensity intermittent exercise bout occurred on a motorized treadmill inside a portable climate chamber assembled in an indoor tennis facility. Heart rate (HR) was acquired throughout a 2-minute period following the cessation of each trial by continuous monitoring with a heart rate monitor. HRR was calculated as the HR obtained at exercise cessation minus the HR after 2 minutes of passive recovery, expressed in beats per minute. Results: The HRR values (mean ± standard deviation) for each condition were as follows: TE = 70 ± 10 b·min−1, HOT = 53 ± 16 b·min−1, COOL = 67 ± 6 b·min−1. Compared to the HOT trial, the TE (Effect Size [ES] = 1.3, p = 0.037) and COOL (ES = 1.3, p = 0.008) conditions displayed faster HRR. Conclusions: Continuous cooling with an ice vest during simulated high intensity tennis exercise in a hot environment appears to attenuate the effect of heat stress on HRR, yielding recovery similar in magnitude to the TE condition.
Practical Applications: Practitioners should consider using an unobtrusive ice vest as a cooling strategy during prolonged high-intensity bouts of tennis in hot environments. The current results suggest that continuous cooling can significantly improve post-exercise HRR, which may enhance the recovery process.
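The HRR metric and the between-condition effect sizes reported above are simple to compute; a minimal sketch in Python, assuming a pooled-standard-deviation Cohen's d (the abstract does not state which effect-size formulation was used, so that choice is an assumption):

```python
from statistics import mean, stdev

def heart_rate_recovery(hr_end: float, hr_2min: float) -> float:
    """HRR = HR at exercise cessation minus HR after 2 minutes of passive recovery."""
    return hr_end - hr_2min

def cohens_d(group_a, group_b):
    """Effect size using a pooled standard deviation (assumed formulation)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

For example, an athlete finishing at 185 b·min−1 who drops to 118 b·min−1 after 2 minutes has an HRR of 67 b·min−1.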

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Acute Weighted Vest Running Effect on Outside Middle Distance Running Performance

M. Mawyer,1 T. Purdom,2 and L. Kravitz3

1 PartnerMD; 2 Longwood University; and 3 University of New Mexico

Running economy is the energy cost of a given running speed, and it correlates strongly with running mechanics. Weighted vest running (WVR) has been shown to alter running mechanics and therefore may impact running economy. Purpose: To investigate whether a 20-minute self-selected running warm-up with a weighted vest (WV) loaded at an additional 10% of body mass affects subsequent middle-distance running performance without the WV. Methods: Using a repeated measures design, 17 recreationally trained male subjects (mean ± SD: 23.3 ± 4.8 years, 177.7 ± 8.6 cm, 74.1 ± 9.1 kg, 12.5 ± 4.2 %BF, 55.6 ± 7.1 ml·kg−1·min−1) who had been running for a minimum of 3 months participated in the study. Laboratory demographic and V̇O2max tests were completed prior to the 2 trials. After a randomized 20-minute self-selected run with or without the additional 10% body mass (WV) on an outdoor synthetic track surface, participants completed active rest until their heart rate returned to resting levels (∼20 minutes) and then raced the other participants without the WV in a 5-km time trial on the same track surface. Time trials were separated by a minimum of 48 hours. Run durations were recorded to the nearest 0.1 seconds with hand-held chronometers at 400 m intervals and at the 5-km finish. A 2 × 5 repeated measures ANOVA was used to compare the WV condition to control using 400 m splits at pre-determined distances of 400 m, 800 m, 1,600 m, 3,200 m, and 5 km, with the Bonferroni post-hoc test applied when significant differences were found. Paired t-tests were used to compare total run times between the WV and control conditions at the same distances. Results: No significant differences (p > 0.05) were observed in 400 m split times or total run time at any pre-determined distance, despite a >15-second decrease in 5-km time in the WV condition compared to control. 
The 400 m split times decreased by an average of 2.1% (2.3 seconds) up to 3,200 m, after which they increased by 0.3% (0.3 seconds) (Figure 1). Conclusions: Despite the consistent reduction in 400 m split times in the WV condition compared to control, statistical significance was not reached. Our wide inclusion criteria support the conclusion that a large standard deviation is likely the limiting factor, despite meaningful mean differences at the pre-designated run distances. Furthermore, the reduction in 400 m split times in the WV condition was consistent until 3,200 m, which suggests that a fatigue threshold may exist. Further inquiry into WVR prior to middle-distance running with a more homogeneous sample is recommended. Practical Applications: Weighted vest running at a self-selected pace prior to running distances <3,200 m may improve performance in recreationally trained runners. Acknowledgments: Thank you to IRONWEAR© for the generous equipment grant enabling our lab to conduct this research.

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Effects of Continuous Plus Intermittent Cooling on Body Temperatures and the Impact on Shot Accuracy in Elite Tennis Players

K. Crew,1 P. Bishop,2 J. Wingo,1 M. Richardson,1 and M. Esco1

1 University of Alabama; and 2 University of Alabama

Introduction: Elite tennis players are exposed to unique thermal challenges during long matches in hot environments, in addition to the existing homeostatic cardiovascular challenges of athletic performance in the heat. A variety of cooling strategies may be available to mitigate the rise in body temperature and improve performance, while minimizing the risk of heat-related illness, in matches occurring in hot environments. Purpose: To investigate the effects of a cooling strategy combining continuous cooling with intermittent ice-pack applications on body temperatures and performance of elite tennis players subjected to high-intensity exercise in a hot environment. Methods: Nine male tennis athletes (age = 20 ± 3 years, height = 183.6 ± 6.6 cm, weight = 74.5 ± 12.5 kg) performed counter-balanced trials of exhaustive exercise simulating the physiological demands of tennis in a hot (37° C, 38% rH) environment, one while continually wearing a cooling vest with intermittent ice applications to the abdominal wall and thighs (COOLING) and one without (CONTROL). Rectal (Tre) and skin (Tsk) temperatures were recorded before and after the high-intensity exercise bout and at the end of a 2-minute recovery period. Following the recovery period, participants performed a 10-point shot accuracy test on an indoor tennis court where each shot was scored and recorded. Results: There was no significant difference in Tre between the start and the end of the trial in the COOLING condition; however, there was a significant difference in Tre at the same time points in the CONTROL trial. The cooling strategy significantly lowered Tre during the 2-minute recovery phase. Shot accuracy was significantly better in the COOLING condition than the CONTROL condition. 
Practical Applications: Coaches and practitioners working with elite tennis athletes may consider using this method during elite tennis matches played in hot environments, as it appears to be an optimal and unobtrusive way to mitigate the rise in core temperature and improve performance while minimizing the risk of heat related illnesses.

Thursday, July 12, 2018, 12:00 PM–1:30 PM

The Effect of the Annual Training Cycle on Aerobic Capacity in Division I Female Soccer Players

C. McPherson, T. Purdom, K. Levers, J. Giles, L. Brown, P. Martin, and J. Howard

Longwood University

Purpose: To assess the effect of variable training stress throughout the annual training cycle on aerobic fitness in female soccer players. Methods: Eleven Division I female soccer players (mean ± SD: 19.3 ± 1.0 years; 164 ± 6.4 cm; 60.1 ± 5.4 kg; 19.44 ± 3.5 %BF) were tested across the annual training cycle at 6 specific time points: post-season 2016 (B1), detraining (B2), spring-season (B3), pre-season (B4), mid-season (B5), and post-season 2017 (B6). Prior to each testing block, subjects arrived at the lab, where height and weight were measured and body density was obtained using a 3-site skinfold method (triceps, suprailiac, thigh) with handheld skinfold calipers. Body composition was estimated using the Brozek conversion equation. After a 5-minute self-selected run on the treadmill, subjects completed an incremental treadmill test to assess V̇O2max. Aerobic capacity was collected at minute intervals for the entirety of the tests using 15-second averages. A 1 × 6 repeated measures ANOVA was used to analyze aerobic capacity across time points (B1–B6), with LSD post hoc analysis when significant differences were observed between blocks. Results: Statistical analysis revealed a significant main effect of time on aerobic capacity (F1,10, p = 0.001) with an observed power of 1.0. Pairwise comparisons are shown in Table 1. A significant increase (p < 0.01) in aerobic capacity continued across B1–B3 following the 2016 competitive season (B1), after which capacity decreased prior to the 2017 competitive season (B4). Statistically, aerobic capacity remained unchanged across the 2017 competitive season (B4–B6) despite a precipitous 9% drop in peak aerobic fitness after the spring season (B3) (Table 1). 
Conclusions: Following the 2016 competitive season (B1), aerobic capacity progressively increased (p < 0.05) after 9 weeks of detraining (B2) and throughout spring-season training (B3), at which point aerobic capacity peaked (51.81 ml·kg−1·min−1). Aerobic capacity then progressively declined (−7.04%) over the summer training period (B3–B4), as NCAA-stipulated limits on athlete-coach communication resulted in a non-structured training regimen. Aerobic capacity was not significantly different across the 2017 competitive season (B4–B6). Interestingly, the drop in aerobic capacity remained despite 11 days between the completion of the competitive season and post-season testing (B6). Practical Applications: It is recommended to monitor aerobic capacity throughout the annual training cycle as an index of training stress. Periodic aerobic testing can help ensure adequate rest and periodized, programmed training stress so that players are able to maintain peak physiological capacity and therefore a competitive advantage. Additionally, non-structured self-selected training programs were shown to be ineffective for maintaining aerobic capacity in this population. Acknowledgments: A special thank you to Longwood Athletics for their cooperation in assisting us in completing this study.
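The body-density-to-%BF step in the methods uses the Brozek two-compartment equation; a minimal sketch of that conversion (the preceding density estimate from the 3-site skinfold sum is population-specific and not reproduced here):

```python
def brozek_percent_fat(body_density: float) -> float:
    """Brozek conversion: %BF = (4.570 / Db - 4.142) * 100, with Db in g/cm^3."""
    return (4.570 / body_density - 4.142) * 100
```

For example, a body density of 1.070 g/cm^3 converts to roughly 12.9 %BF; lower densities yield higher %BF.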

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Effect of Varying Training Load on Heart Rate Variability and Running Performance Among an Olympic Rugby Sevens Team

A. Flatt1 and D. Howells2

1 Georgia Southern University; and 2 England Rugby Sevens

The usefulness of heart rate variability (HRV) for reflecting responses and adaptation to training among elite rugby sevens players has yet to be investigated. This research is needed because rugby sevens players are exposed to rigorous training and competition schedules which may put them at risk of fatigue accumulation or injury. Purpose: The aims of this study were to evaluate weekly HRV responses to varying training load and to assess whether HRV responses informed on individual training adaptation. Methods: An elite rugby sevens team (n = 12; height = 184.9 ± 7.1 cm; weight = 90.7 ± 7.0 kg) performed daily recordings of the natural logarithm of the root mean square of successive differences (LnRMSSD) in the seated position via smartphone. Daily wellness questionnaires were also completed in which athletes rated their perceived level of sleep, energy, soreness, and mood on a 9-point scale, with higher ratings indicating more positive responses for a given parameter. Session rating of perceived exertion (sRPE) and total high-speed running were recorded from each training session. Data from a 3-week period were retrospectively analyzed. Week 1 consisted of baseline training while weeks 2 and 3 involved a repeated microcycle sequence consisting of peak training loads from the Olympic preparatory period. Maximum aerobic speed (MAS) was evaluated at the beginning of weeks 1 and 3. The weekly mean (LnRMSSDm) and coefficient of variation (LnRMSSDcv) of LnRMSSD and weekly means for psychometric and training load data were compared across weeks with linear mixed models and effect sizes. Relationships between changes in MAS and LnRMSSD-derived variables were quantified after adjusting for baseline MAS. Results: LnRMSSD, psychometric, and training load values and comparison statistics are presented in Table 1. LnRMSSD and psychometric parameters did not change significantly across time despite significant increments in training load. 
Effect size analysis showed a Small increase in LnRMSSDcv after the first week of intensified training followed by a Moderate reduction in week 3. Week 2 LnRMSSDcv was related to changes in MAS (r = −0.74, p = 0.01). Conclusions: Increased training load in week 2 resulted in Small increases in daily LnRMSSD fluctuations (i.e., LnRMSSDcv). In week 3, athletes accomplished greater high-speed running loads with minimal change in sRPE, indicating positive training adaptation. This was accompanied by a Moderate reduction in LnRMSSDcv, reflecting an improved ability to maintain cardiac-autonomic homeostasis during the repeated microcycle. Individuals who exhibited smaller daily fluctuations in LnRMSSD during the initial spike in training load displayed greater improvements in MAS. Practical Applications: Monitoring daily fluctuation in LnRMSSD in response to varying training load may aid in the evaluation of individual training adaptation among elite rugby sevens players. Athletes presenting large day-to-day fluctuations in LnRMSSD in response to increased training load (e.g., LnRMSSDcv >8%) should be monitored closely for fatigue and running performance decrements.
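LnRMSSD and its weekly coefficient of variation, the central metrics above, can be computed from beat-to-beat RR intervals as follows. This is a sketch; the smartphone app used in the study performs equivalent processing with artifact correction that is not shown here:

```python
import math
from statistics import mean, stdev

def ln_rmssd(rr_ms):
    """Natural log of the root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)

def weekly_cv(daily_lnrmssd):
    """Coefficient of variation (%) of daily LnRMSSD values across a week."""
    return stdev(daily_lnrmssd) / mean(daily_lnrmssd) * 100
```

A week of identical daily values gives LnRMSSDcv = 0%; larger day-to-day swings inflate the CV, which is exactly the fluctuation the abstract tracks.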

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Evaluating Heart Rate Deflection Point During Upper Body Ergometry: Effects of Normobaric Hypoxia and Relationship With Respiratory Compensation Point

N. Clark,1 D. Fukuda,2 M. La Monica,2 and T. Starling-Smith2

1 Institute of Exercise Physiology and Wellness; and 2 University of Central Florida

Introduction: Understanding and employing appropriate exercise intensity during hypoxic training has become a focal point in aerobic training design for athletes. Therefore, establishing non-invasive and feasible methods to assess anaerobic thresholds is crucial. However, to date, no studies have analyzed the relationship between the heart rate deflection point (HRDP) and the respiratory compensation point (RCP) during upper body graded exercise tests (GXT) under different environmental conditions. Purpose: To examine HRDP during upper body exercise by determining potential differences caused by changing environmental conditions and evaluating its relationship with a commonly utilized anaerobic threshold (RCP). Furthermore, heart rate (HR)- and power output (PO)-based assessments of these thresholds were compared. Methods: Recreationally active males (N = 16, Age: 21 ± 1.5 years, Weight: 86 ± 11 kg, Height: 1.7 ± 0.1 m) and females (N = 10, Age: 25 ± 3 years, Weight: 69 ± 9 kg, Height: 1.7 ± 0.1 m) completed 2 GXTs with gas exchange analysis on an upper body ergometer under moderate normobaric hypoxia (FiO2: ∼14%) and normoxia (FiO2: ∼20%). HRDP was calculated using the D-max method, while RCP was defined as the second non-linear increase in the VE/V̇CO2 curve. A 3-way (condition × threshold × sex) repeated measures ANOVA was used to compare the HR and PO values at HRDP and RCP under hypoxic and normoxic conditions. Results: No significant interactions were noted for the HR-based assessment of HRDP and RCP (p > 0.05). Heart rate at RCP was significantly lower than at HRDP (p < 0.01; 160.2 ± 12.1 bpm and 166.0 ± 12.8 bpm, respectively). 
For the PO-based assessment of HRDP and RCP, a main effect for condition (p = 0.025; hypoxia < normoxia) was noted, and a sex × threshold interaction (p < 0.01) was found with HRDP (103.8 ± 17.9 W) being lower than RCP (109.4 ± 16.4 W) in men (p < 0.01) and HRDP (87.0 ± 20.0 W) being greater than RCP (73.5 ± 8.8 W) in women (p < 0.01). Conclusions: The evaluation of HRDP differs between HR-based and PO-based assessments during upper body exercise. Practical Applications: While the fatigue threshold response during upper body ergometry under different environmental conditions remains unclear, consideration should be given to the method of threshold evaluation and differences in the response to this exercise modality between men and women.
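The D-max method used above to locate the HRDP can be sketched as follows: draw the straight line joining the first and last points of the HR-workload curve, then take the point with the greatest perpendicular distance from that line. This sketch operates on the raw points; studies often apply the method to a polynomial fit of the curve first, which is omitted here:

```python
import math

def dmax_deflection_point(workload, heart_rate):
    """D-max: the (workload, HR) point on the curve farthest, in perpendicular
    distance, from the line joining the first and last points."""
    x0, y0 = workload[0], heart_rate[0]
    x1, y1 = workload[-1], heart_rate[-1]
    norm = math.hypot(x1 - x0, y1 - y0)

    def dist(x, y):
        # Point-to-line distance for the line through (x0, y0) and (x1, y1).
        return abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / norm

    i = max(range(1, len(workload) - 1),
            key=lambda k: dist(workload[k], heart_rate[k]))
    return workload[i], heart_rate[i]
```

On a plateauing HR curve, the deflection point lands where the curve bows furthest away from the end-to-end chord.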

Thursday, July 12, 2018, 12:00 PM–1:30 PM

Effect of Biofeedback Deception During Cycling Exercise on Heart Rate Variability

N. Hanson,1 S. Martinez,1 E. Kishman,1 C. Diehl,1 C. Scheadler,2 D. Katsavelis,3 and M. Miller1

1 Western Michigan University; 2 Northern Kentucky University; and 3 Creighton University

Introduction: Heart rate variability (HRV) is a useful metric reflecting the action of the parasympathetic and sympathetic nervous systems, the subdivisions of the autonomic nervous system. HRV has been measured during resting conditions in numerous studies; it is generally thought that decreased variability is indicative of overtraining in athletes. Currently, very few studies have examined HRV during exercise. Assessing HRV during dynamic exercise can provide useful information regarding autonomic nervous system control, especially using the frequency domain (spectral analysis) rather than the time or nonlinear domains. Biofeedback deception, in which subjects are deceived about details of some biological variable in an attempt to improve performance, may affect HRV during exercise independently of performance. Purpose: The primary purpose of this study was to examine the effect of biofeedback deception on HRV variables, with specific focus on the frequency domain. Methods: Nine subjects (7 men, 2 women; age: 26.9 ± 9.6 years; body mass index: 23.0 ± 3.5 kg·m−2) volunteered to participate in this study. They were asked to complete 2 functional threshold power (FTP) tests on a Wattbike cycle ergometer. After a warmup period, subjects were asked to hold as high a power output as possible for a 20-minute period. This test is used to determine the upper limits of aerobic energy production in cyclists and provide an indication of fitness and race preparedness. Subjects were told that they were testing new heart rate (HR) monitoring equipment. They were fitted with both a Garmin HR chest strap and an optical sensor attached to the ear lobe. In reality, the HR displayed by the optical sensor was coded through Arduino software to provide, in one condition (DEC), an incorrect HR that was actually 10 b·min−1 lower. In another condition (CON), the correct HR was displayed. 
The goal was to make subjects believe that their HR was low so that they would increase their power output during the test. Subjects were blinded to the Wattbike display and could only view their HR on the optical sensor, which recorded at 500 Hz. The HR data were collected by a laboratory computer and later analyzed with Kubios software. The order of testing was counterbalanced, with at least 48 hours between sessions. Paired-samples t tests were used to compare means between conditions. Results: Average cadence (CON: 96.7 ± 11.9 vs. DEC: 95.6 ± 13.3 rpm; p = 0.340), speed (CON: 37.4 ± 4.5 vs. DEC: 37.5 ± 4.8 km·h−1; p = 0.592), and power (CON: 213.0 ± 71.9 vs. DEC: 215.2 ± 77.9 W; p = 0.484) were all similar between the 2 conditions. The ratio of low-frequency to high-frequency spectral band power (LF:HF) was 0.98 ± 0.71 in the CON condition and 0.66 ± 0.47 in the DEC condition (p = 0.265). There was no difference in low-frequency (p = 0.869) or high-frequency (p = 0.196) band power. There were also no differences in any of the HRV time or nonlinear domain variables (e.g., RR, STD RR, RMSSD, NNxx, ApEn; all p > 0.05). Conclusions: This study showed that biofeedback deception was unable to improve performance in this group of individuals. The deception also did not alter the response of the autonomic nervous system during exercise. It is likely, though, that these 2 factors are closely related. Practical Applications: There may have been differences in HRV variables if the deception had benefited performance. Subjects were not verbally encouraged throughout testing to use the HR to guide their performance. Previous studies suggest that such verbal encouragement combined with biofeedback deception may be necessary to convince subjects and thereby improve endurance performance. This is a strategy that may be used by coaches, perhaps infrequently, once the details are thoroughly researched. 
Acknowledgments: This study was funded by an internal university grant.
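The paired-samples t tests used for the condition comparisons above reduce to a one-sample test on the per-subject differences; a minimal sketch:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired-samples t statistic on per-subject differences:
    t = mean(d) / (sd(d) / sqrt(n)), with df = n - 1."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1
```

Each subject serves as their own control, which is why the test is run on CON-minus-DEC differences rather than on the two condition means independently.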

Thursday, July 12, 2018, 12:00 PM–1:30 PM

The Effects of High-Intensity Heavy Rope Exercise on Pulse Wave Reflection Characteristics in Resistance-trained Individuals

E. Marshall, J. Parks, Y. Tai, A. DeBord, S. Hembree, and J. Kingsley

Kent State University

High-intensity heavy rope exercise (HI-HRE) is suggested to improve muscular strength and increase power. However, the effects of HI-HRE on pulse wave reflection characteristics are unknown. Purpose: To examine alterations in pulse wave reflection characteristics after acute HI-HRE in resistance-trained individuals. Methods: Seven resistance-trained individuals (mean ± SD: age: 24 ± 2 years; height: 1.71 ± 0.07 m; weight: 66.6 ± 11.7 kg; body fat: 13.4 ± 4.3%; years of training: 7 ± 2 years) volunteered to participate in this study. Brachial blood pressure and pulse wave reflection characteristics were collected in the supine position at rest and 15, 30, and 60 minutes following HI-HRE. During the acute HI-HRE, participants performed six 15-second exercise bouts using a double wave pattern, separated by 30-second recovery intervals of seated passive rest. Exercise pace was set with a metronome at 180 bpm. A one-way repeated measures analysis of variance (ANOVA) was used to analyze the effects of HI-HRE across time (rest, recovery 1, recovery 2, and recovery 3), with significant main effects analyzed using paired t-tests. Results: There was a significant main effect of time for heart rate (rest: 62 ± 8 bpm; recovery 1: 83 ± 4 bpm; recovery 2: 77 ± 11 bpm; recovery 3: 69 ± 7 bpm, p < 0.0001) such that it was augmented from rest to recovery 1, 2, and 3. There were no significant main effects of time for brachial systolic or diastolic blood pressures or the augmentation index (AIx) from rest to recovery 1, 2, or 3 (p > 0.05). 
There was a significant main effect of time for the AIx normalized to 75 bpm such that it was augmented from rest to recovery 1 and recovery 2, but was similar to rest at recovery 3 (rest: 5.7 ± 13%; recovery 1: 28.9 ± 14.8%; recovery 2: 20.1 ± 13.7%; recovery 3: 13.7 ± 11%, p = 0.002), and for the time of the reflected wave such that it was shortened from rest to recovery 1, 2, and 3 (rest: 150.7 ± 4 ms; recovery 1: 113.6 ± 3.9 ms; recovery 2: 114.3 ± 2.9 ms; recovery 3: 112.7 ± 4.3 ms, p < 0.0001). There was a significant main effect of time for the subendocardial viability ratio such that it was reduced from rest to recovery 1, 2, and 3 (rest: 153 ± 20.9%; recovery 1: 91.9 ± 10.9%; recovery 2: 104.6 ± 34.3%; recovery 3: 132 ± 22%, p < 0.0001). Conclusions: These data demonstrate that recovery from high-intensity heavy rope exercise has a significant effect on pulse wave reflection characteristics lasting up to 60 minutes post-exercise, despite no alterations in brachial blood pressures. Practical Applications: These data suggest that high-intensity heavy rope exercise should be the last exercise performed in a workout regimen, or that individuals be given at least 30 minutes of rest prior to the start of a new exercise to allow for complete recovery of the vasculature.
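The subendocardial viability ratio reported above is the ratio of the diastolic to the systolic pressure-time integral; a sketch of that calculation from a sampled pressure waveform using simple trapezoidal integration (the waveform segmentation performed by commercial pulse wave analysis devices is not reproduced here):

```python
def pressure_time_integral(t, p):
    """Trapezoidal area under a pressure-time segment (mmHg*s)."""
    return sum((p[i] + p[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

def sevr(t_sys, p_sys, t_dia, p_dia):
    """Subendocardial viability ratio: diastolic PTI / systolic PTI * 100 (%)."""
    return pressure_time_integral(t_dia, p_dia) / pressure_time_integral(t_sys, p_sys) * 100
```

Because the diastolic integral shrinks as heart rate rises (diastole shortens disproportionately), SEVR falls during recovery from intense exercise, as the results above show.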

Thursday, July 12, 2018, 1:00 PM–1:15 PM

Differential Motor Unit Behavior in Children and Adults in Response to Repetitive Contractions of the First Dorsal Interosseous

J. Miller,1 A. Sterczala,2 M. Trevino,3 M. Wray,1 and H. Dimmick1

1 University of Kansas; 2 University of Pittsburgh; and 3 Armstrong State University

Purpose: To examine neuromuscular adaptation to 2 repetitive submaximal contractions of the first dorsal interosseous (FDI) in children (CH) and adults (AD). Methods: Nineteen CH (11 males, age = 9.0 ± 0.7 years; 8 females, age = 8.9 ± 0.9 years) and 13 AD (6 males, age = 21.0 ± 2.53 years; 7 females, age = 24.6 ± 5.9 years) completed 3 maximum voluntary contractions (MVC) and 2 repetitive isometric contractions at 30% MVC held for 40 seconds. During the 30% MVCs, surface EMG was recorded from the FDI and decomposed into action potential trains for individual MUs. MU action potential amplitudes (MUAPAMP), recruitment thresholds (RT), and mean firing rates (MFR) were calculated, and EMG amplitude was normalized (NEMG) to the MVC. Average NEMG and MFRs over 10-second epochs at the beginning of repetitions R1 and R2 were used for further analysis. For each subject, exponential MUAPAMP vs. RT and MFR vs. MUAPAMP relationships were calculated for R1 and R2. Five 2-way mixed factorial ANOVAs (group [CH vs. AD] × repetition [R1 vs. R2]) analyzed the exponent (B) terms, the coefficient (A) terms, and NEMG. An additional t-test investigated the change in predicted MUAPAMP at an RT level of 25% MVC from R1 to R2. Results: NEMG was greater (p = 0.003) for CH (58.05 ± 30.20%) than AD (29.90 ± 8.28%) regardless of repetition. For the MUAPAMP vs. RT relationships, there were no significant interactions or main effects for the A terms and no significant interaction or main effect of repetition for the B terms (p = 0.083–0.345); however, there was a main effect of group (p = 0.003), with AD (0.0846 ± 0.018 mV/%MVC) possessing greater B terms than CH (0.0627 ± 0.016 mV/%MVC) when collapsed across repetitions (Figure 1A). For the MFR vs. MUAPAMP relationships, there were no interactions or main effects of repetition (p = 0.150–0.681); however, there were significant main effects of group for the A (p = 0.001) and B (p = 0.048) terms. 
The A terms were significantly greater for CH (30.08 ± 5.78 pps) than AD (22.96 ± 2.93 pps), and the B terms were significantly less negative for AD (−1.23 ± 0.51 pps·mV−1) than CH (−1.93 ± 1.00 pps·mV−1) when collapsed across repetition (Figure 1B). In addition, the change in MUAPAMP from R1 to R2 at the 25% MVC RT level differed between groups (p = 0.050; CH = −0.07 ± 0.21 mV, AD = 0.14 ± 0.36 mV) (Figure 1A). Conclusions: CH required greater muscle activation to complete the contractions. The MFRs in relation to MUAPAMPs were much greater for CH in R1 and R2. MUAPAMPs were greater for AD in R1, and AD required the recruitment of additional MUs with greater MUAPAMPs to complete R2, whereas CH relied on MUs with smaller MUAPAMPs for R2. Practical Applications: Although speculative, CH appeared to recruit a greater percentage of total MUs during R1, which caused greater potentiation, allowing them to complete R2 without additional MUs. Resistance training prescription for CH should account for greater activation at loads relative to 1RM, which may favor higher volume at moderate loads. Acknowledgments: This study was supported by the Master's Research Grant fund from the National Strength and Conditioning Association, Colorado Springs, CO.
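The per-subject exponential relationships above have the form y = A·e^(B·x); one common way to fit them is a log-linear least-squares transform, sketched below (an assumption, since the abstract does not state the fitting routine used):

```python
import math

def fit_exponential(x, y):
    """Least-squares fit of y = A * exp(B * x) via the log-linear transform
    ln(y) = ln(A) + B * x. Assumes all y > 0."""
    n = len(x)
    ly = [math.log(v) for v in y]
    mx, my = sum(x) / n, sum(ly) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ly)) /
         sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b
```

The fitted A is the curve's value at x = 0 (the "coefficient" term compared across groups) and B is the exponent governing how steeply amplitude grows with recruitment threshold.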

Thursday, July 12, 2018, 1:15 PM–1:30 PM

Co-activation, Estimated Anterior and Posterior Cruciate Ligament Forces, and Motor Unit Activation Strategies During the Time Course of Fatigue

C. Smith, T. Housh, E. Hill, J. Keller, G. Johnson, and R. Schmidt

University of Nebraska—Lincoln

Purpose: This study combined anterior and posterior cruciate ligament force estimations with the motor unit control strategies employed by the agonist and antagonist muscles primarily involved in movement and stabilization of the knee joint, to better understand the potential fatigue-related mechanisms used to avoid injury and to increase or maintain joint stability during the time course of fatiguing leg extensions. Methods: Fourteen male subjects (mean ± SD: age 22 ± 4 years; body mass 76 ± 9 kg; height 171 ± 5 cm) performed 25 maximal concentric isokinetic leg extension muscle actions at 120°·s−1. Simultaneous measurements of the electromyographic (EMG) and mechanomyographic (MMG) signals from the vastus lateralis and biceps femoris, as well as force, were used to quantify co-activation and neuromuscular parameters during the time course of fatigue. Estimated anterior and posterior cruciate ligament forces were calculated for each repetition using a model developed for isokinetic leg extensions that combines force, biomechanical/skeletal measures, and EMG. Results: There were decreases in quadriceps force (21%) with increases in hamstring force (13%) during the fatiguing isokinetic leg extensions. The decreases in quadriceps force were accompanied by decreases in motor unit recruitment (MMG amplitude; 43%) and motor unit action potential conduction velocity (EMG mean power frequency; 22%), but no changes in motor unit firing rates (MMG mean power frequency) from the vastus lateralis. The increases in hamstring force were accompanied by increases in muscle activation (EMG amplitude; 43%), motor unit recruitment (MMG amplitude; 33%), and motor unit firing rates (MMG mean power frequency; 40%) from the biceps femoris. In addition, the posterior cruciate ligament force was greater (∼3.5-fold) than the anterior cruciate ligament force during each maximal isokinetic leg extension. 
Both the posterior (22%) and anterior (23%) cruciate ligament forces, however, decreased during the fatiguing isokinetic leg extensions. Conclusions: The decreases in quadriceps force, the increases in hamstring force, and the accompanying neuromuscular responses suggested that as the quadriceps became fatigued, hamstring muscle activation, motor unit recruitment, and firing rate increased. These responses were accompanied by decreased anterior and posterior cruciate ligament forces, which may reflect a protective mechanism designed to reduce the risk of injury from the effects of exercise and temperature-related laxity during maximal isokinetic leg extensions. Practical Applications: Co-activation may result in a counter-pull force that likely contributes to force loss during fatigue, which may itself be a protective mechanism to avoid injury. Medical and fitness professionals should be aware of this potential protective mechanism and the role of co-activation in fatigued muscle so that rehabilitation and strength programs are developed to address both agonist and antagonist muscle groups for increased joint stability. The inverse relationship between the agonist and antagonist neuromuscular responses may be a useful tool for tracking training adaptations aimed at limiting the effect of the counter-pull force on force production in a fatigued state. Thus, an increase in joint stability may assist in maintaining greater force production in a fatigued state by limiting the counter-pull force placed on the joint.

Thursday, July 12, 2018, 1:30 PM–1:45 PM

Movement Analysis via Motion Capture System Helps Identify Injury At-Risk NCAA D1 Football Players: A Preliminary Study

E. Mosier,1 A. Fry,1 P. Moodie,2 N. Moodie,3 and J. Nicoll1

1 University of Kansas; 2 Dynamic Athletic Research Institute; and 3 Rockhurst University

Motion capture systems can be used to assess an individual's upper- and lower-body motions, both explosive and functional in nature. Advancements in technology and screening protocols may make it possible to project future high-risk, season-ending injuries. Purpose: This study examined the ability of a pre-season performance motion analysis (PMA) screening using a markerless motion capture system to identify NCAA D1 football players who would experience non-contact season-ending injuries. Methods: Sixty-eight Division 1 football players (mean ± SD; n = 68, age = 20.7 ± 1.5 years, height = 187.5 ± 5.3 cm, weight = 107.2 ± 14.6 kg) were screened pre-season using the PMA protocol, consisting of 19 motions. These included shoulder ranges of motion (i.e., shoulder abduction and adduction, horizontal abduction and adduction, internal and external rotation, flexion and extension). Also assessed were trunk rotation, bilateral overhead squat, unilateral squats, forward lunges, single leg balance, bilateral counter-movement vertical jump (CMVJ), unilateral CMVJs, concentric-only VJ, multiple unilateral CMVJs, and depth VJ. A 3-dimensional markerless motion capture system (MCS; DARI, Overland Park, KS) was used to analyze the kinetic and kinematic data, from which 192 variables were calculated. K-means clustering was used to identify 4 different Voronoi cells for which group centroids were determined. Four data clusters were identified sequentially as possible contributors to non-contact season-ending injuries: vulnerability (consisting of neurological control, movement asymmetries, and joint angle differences), MCS composite score (a combination of strength, power, and dysfunction scores), strength-power discrepancy (squat and jump performances), and joint torque differences (lower-body joints during jumps). Results: Of the 68 individuals examined, 30 had vulnerability scores above 60, indicating sub-optimal movement patterns. 
Of these 30 individuals, 14 also exhibited at-risk MCS composite scores. Conclusions: Using the 4 clusters of variables (i.e., vulnerability, MCS composite score, strength-power discrepancy, and torque differences), 5 players were identified as at-risk for season-ending injuries. Although 2 individuals were false positives, there were no false negatives (i.e., no player suffered a season-ending non-contact injury without being identified by the MCS testing). Practical Applications: An MCS such as the one used in the present study can help identify American football athletes at high risk for non-contact season-ending injuries. This may provide the strength and conditioning professional and sports medicine clinician with helpful information when designing training programs and rehabilitation therapies across a season.
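The clustering step described in the Methods can be sketched in miniature. The snippet below is an illustrative pure-Python k-means on invented, standardized screening variables, not the DARI system's actual 192-variable pipeline; the data, variable meanings, and k = 2 are assumptions for demonstration. Each athlete is assigned to its nearest centroid, and the final centroids partition feature space into the Voronoi cells the abstract mentions.

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means: each point joins its nearest centroid; the final
    partition of feature space is the Voronoi tessellation of the centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign every athlete to the nearest centroid
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # recompute each centroid as the mean of its members
        new = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            new.append([sum(c) / len(members) for c in zip(*members)]
                       if members else centroids[j])
        if new == centroids:  # converged
            break
        centroids = new
    return labels, centroids

# Hypothetical screening rows: [vulnerability score, composite score],
# standardized; two athletes with sound movement, two at-risk.
athletes = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
labels, centroids = kmeans(athletes, k=2)
```

With well-separated data like this, the two low-scoring athletes end up in one cluster and the two high-scoring athletes in the other, regardless of initialization.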

Thursday, July 12, 2018, 1:45 PM–2:00 PM

Back to Top | Article Outline

Relationships Between Resting Heart Rate, Heart Rate Variability, and Sleep Phases in Collegiate Cross-Country Runners

Y. Sekiguchi,1 W. Adams,2 C. Benjamin,1 R. Curtis,1 and D. Casa1

1 Korey Stringer Institute at the University of Connecticut; and 2 University of North Carolina at Greensboro

Introduction: Cardiac autonomic regulation, as measured by resting heart rate (RHR) and heart rate variability (HRV), is related to training adaptations and overtraining and can serve as an indicator of the level of recovery following exercise stress. Given the role of sleep in recovery, anabolism, and energy substrate mobilization, research is needed to elucidate the role of sleep in changes in RHR and HRV. Purpose: The purpose of this study was to examine the relationships between RHR, HRV, and sleep characteristics in Division I collegiate female cross-country athletes. Methods: Ten NCAA Division I collegiate female cross-country athletes (mean ± SD: age, 19 ± 1 years; height, 167.6 ± 7.6 cm; body mass, 57.7 ± 10.2 kg; V̇O2max, 53.3 ± 5.9 ml·kg−1·min−1) participated in this study. RHR, HRV, total time spent sleeping (SLhr), and time spent in the various sleep phases (SLpha), including light sleep time percent (LS%), slow wave sleep time percent (SWS%), and rapid eye movement sleep time percent (REM%), were captured using a wrist-worn actigraphy device beginning one week prior to preseason and lasting until the end of the competitive season (91 days over 13 weeks). The mean of each variable across each week was calculated to determine weekly averages per individual athlete. Relationships between RHR, HRV, SLhr, and SLpha were evaluated using mixed effects models while accounting for within-athlete (random factor) variance. The t statistic and degrees of freedom (df) from the mixed model were used to calculate correlation coefficients (r) between each factor. Correlation coefficients with thresholds set at 0.1, 0.3, and 0.5, depicting small, moderate, and large associations, respectively, were used. Significance was set at p < 0.05. Results: A reduction in HRV had a significant association with higher SWS% (Estimate ± SE; −1.7 ± 0.5%; r = 0.26; p = 0.001) and REM% (−2.4 ± 0.56%; r = 0.37; p < 0.001).
However, there were no significant relationships between RHR and SLpha or SLhr, or between HRV and SLhr (p > 0.05). An increase in SLhr was significantly associated with higher SWS% (12.4 ± 2.1%; r = 0.53; p < 0.001), REM% (11.3 ± 2.1%; r = 0.47; p < 0.001), and LS% (12.7 ± 2.1%; r = 0.22; p < 0.001). Conclusions: These data suggest that a reduction in HRV was associated with an increase in SWS% and REM%, which may be indicative of the body seeking the physiologic benefits of regenerative sleep time for recovery. Furthermore, an increased time spent sleeping may also improve physiologic recovery, as indicated by its association with increases in SWS% and REM%. However, future work is needed to examine the associations between these variables and athletes' training. Practical Applications: Athletes should aim to achieve an appropriate amount of sleep (7–9 hours) each night to optimize recovery from daily training and life stressors. This quantity of sleep will provide athletes with an opportunity to achieve a greater quality of sleep by optimizing sleep distribution and promoting regenerative sleep when needed. Thus, creating an environment that promotes proper sleep hygiene may enhance performance during a competitive collegiate cross-country season. Tracking sleep and recovery metrics over the course of the season may also assist coaches and clinicians in making informed decisions regarding training schedules and training load. Acknowledgments: We would like to acknowledge WHOOP, Inc., the sponsor of this study.
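The Methods describe converting a mixed-model t statistic and its degrees of freedom into a correlation effect size. The abstract does not print the formula; the standard conversion is r = √(t²/(t² + df)), sketched below together with the magnitude thresholds the authors state (0.1/0.3/0.5).

```python
import math

def r_from_t(t, df):
    """Correlation effect size from a t statistic: r = sqrt(t^2 / (t^2 + df)).
    This is the standard conversion; the authors' exact code is not given."""
    return math.sqrt(t * t / (t * t + df))

def magnitude(r):
    """Thresholds quoted in the abstract: 0.1 small, 0.3 moderate, 0.5 large."""
    r = abs(r)
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "moderate"
    if r >= 0.1:
        return "small"
    return "trivial"
```

For example, the reported r = 0.53 for SLhr vs. SWS% falls in the "large" band, while r = 0.26 for HRV vs. SWS% is "small."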

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Individual and Combined Effect of Inter-Repetition Rest and Elastic Bands on Jumping Potentiation in Resistance-Trained Men

J. Carrillo,1 S. Narvaez,2 T. Williams,3 K. Park,2 and B. Nickerson2

1 Texas A&M International University; 2 Texas A&M International University; and 3 Samford University

Strength and conditioning practitioners often seek to incorporate training modalities that will maximize athletic performance. A method that has been suggested to enhance acute muscular performance is post-activation potentiation (PAP). A major consideration for practitioners is the fatigue-potentiation interaction that occurs following a heavy resistance exercise designed to induce PAP. Due to less fatigue accumulation, cluster set configurations and elastic band training might result in an earlier and more profound potentiation response than traditional resistance exercise. However, research in this area is insufficient. Purpose: The purpose of this study was to determine the individual and combined effects of cluster sets and elastic bands on jumping potentiation in resistance-trained men. Methods: Twelve resistance-trained men (age: 22 ± 3 years; height: 178.1 ± 9.4 cm; weight: 155.7 ± 26.8 kg) participated in this study. Participants completed, in random order, 1 set of 3 repetitions at 85% of one-repetition maximum for the parallel back squat under 4 conditions: (a) traditional set with continuous repetitions (TS); (b) continuous repetitions with elastic bands (BANDS); (c) cluster set with 30 seconds of rest between each repetition (CS30); (d) cluster set with 30 seconds of rest between each repetition plus elastic bands (CS + BANDS). Participants performed a countermovement jump (CMJ) to determine vertical jump height (JH) and peak power (PP) prior to exercise (baseline) and at 1, 4, 7, and 10 minutes post-exercise. The PAP effect was evaluated using separate 4 (condition) × 5 (time) repeated measures analyses of variance for JH and PP. Results: PP at 10 minutes was significantly higher than at 7 minutes for BANDS (p = 0.035), and PP at 4 and 7 minutes was significantly higher than baseline for CS + BANDS (p = 0.008 and 0.031, respectively). No other significant differences were observed.
There were medium effect sizes (ES) for PP with BANDS (ES = 0.58 at 10 minutes), CS30 (ES = 0.53 and 0.64 at 7 and 10 minutes, respectively), and CS + BANDS (ES = 0.64, 0.78, and 0.66 at 4, 7, and 10 minutes, respectively). All remaining ES for JH and PP were trivial to small. Conclusions: The current study revealed that potentiation occurred earlier and to a greater extent for CS + BANDS. However, BANDS and CS30 also produced PAP effects, as demonstrated by the medium ES. These findings suggest that cluster sets, elastic bands, and both modalities combined are effective methods for enhancing PP during a vertical jump. Practical Applications: The CMJ is commonly employed by athletes in various sporting events (e.g., volleyball, basketball, soccer). For this reason, plyometric training programs are commonly designed by strength and conditioning specialists to maximize kinetics and kinematics during the CMJ. The combined effect of CS configurations and elastic bands (i.e., CS + BANDS) resulted in a more profound potentiation response than the individual effects of either elastic bands or CS30. As a result, strength and conditioning specialists may seek to employ CS + BANDS 4–7 minutes prior to plyometric training to enhance athletes' power output (i.e., PP).
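The abstract reports standardized effect sizes without naming the formula. A common choice for this kind of baseline vs. post-exercise comparison is Cohen's d with a pooled SD; the sketch below uses that convention with hypothetical peak-power values, so neither the formula choice nor the data should be read as the authors'.

```python
import math

def cohens_d(pre, post):
    """Cohen's d between two measurement sets using the pooled SD.
    By Cohen's conventions, ~0.2 is small, ~0.5 medium, ~0.8 large."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):  # sample variance
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    n1, n2 = len(pre), len(post)
    pooled_sd = math.sqrt(((n1 - 1) * var(pre) + (n2 - 1) * var(post))
                          / (n1 + n2 - 2))
    return (mean(post) - mean(pre)) / pooled_sd

# Hypothetical CMJ peak power (W) at baseline and 7 min post-exercise
baseline = [4000, 4100, 3900, 4050]
post7 = [4150, 4250, 4050, 4200]
d = cohens_d(baseline, post7)
```

Identical pre and post sets give d = 0; a uniform upward shift relative to a modest SD gives a large positive d.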

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Practical Blood Flow Restrictive Training as a Proactive Recovery Intervention

K. Brown,1 J. Allen,2 S. Mahoney,1 and R. Topp3

1 Bellarmine University; 2 University of Central Florida; and 3 University of San Diego

Introduction: Low-intensity resistance training coupled with practical blood flow restriction (LI-pBFR) has been shown to increase muscle strength and hypertrophy. Previous research has found that LI-pBFR can induce strength and hypertrophy adaptations to the same degree as traditional high-intensity resistance training (HIT). However, no research has examined the effects of combined HIT and LI-pBFR on muscle strength and hypertrophy compared to traditional HIT when LI-pBFR is used as a proactive recovery intervention. Purpose: To compare the effect of HIT plus LI-pBFR vs. HIT alone on muscle strength and cross-sectional area, specifically left leg cross-sectional area (LLCSA), right leg cross-sectional area (RLCSA), and leg press one-repetition maximum (1RM). Methods: Seventeen male participants (age = 21.09 ± 1.2 years) who had completed regular resistance training twice a week for a minimum of 1 year were recruited. Participants were randomly assigned to one of 2 groups: HIT alone (CON, n = 9) or HIT with LI-pBFR (PRI, n = 8). Both groups completed a 2-week training program utilizing the leg press. The CON group completed 4 HIT workouts with 2 days of rest between each HIT workout. The PRI group completed 4 identical HIT workouts with 2 days of LI-pBFR training between each HIT workout. Results: Repeated measures ANOVAs were calculated for each of the outcome variables with Group (CON vs. PRI), Time (baseline and 2 weeks), and the Group × Time interaction as sources of variance. Significant (p < 0.05) main or interaction effects were further addressed by calculating Tukey's post hoc comparisons. The ANOVA statistics indicated significant Group × Time interactions for LLCSA, RLCSA, and 1RM. At baseline, the PRI group (176.71 ± 15.10, 180.00 ± 15.10, 685.00 ± 67.12) exhibited significantly lower LLCSA, RLCSA, and 1RM than the CON group (201.41 ± 14.24, 204.08 ± 14.28, 754.44 ± 63.28), respectively.
At the end of the trial, the PRI group had increased LLCSA, RLCSA, and 1RM (211.29 ± 15.08, 210.36 ± 14.34, 1,079.06 ± 75.84) to a significantly greater extent than the CON group (201.48 ± 14.22, 203.88 ± 13.52, 1,003.06 ± 71.51), respectively. Conclusions: Based on the findings from this study, HIT with LI-pBFR was more effective than HIT alone in increasing leg press 1RM strength and mid-thigh CSA. Practical Applications: The results of this investigation suggest that individuals or sports teams can make significant gains in both cross-sectional area and 1RM in a short off-season, or during their regular season, by employing HIT with LI-pBFR. Acknowledgments: The Office of Sponsored Projects at Bellarmine University provided a research grant of $500.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Preparing for a National Weightlifting Championship: A Case Study

S. Travis,1 S. Mizuguchi,1 M. Stone,1 W. Sands,2 and C. Bazyler1

1 East Tennessee State University; and 2 United States Ski and Snowboard Association

Introduction: Monitoring an athlete's psychological, physiological, and performance level is important when preparing for a major competition. No study to date has tracked a high-level weightlifter peaking for a major competition all the way up to the day of competition. Assessing performance at a competition is vital to ascertain whether the athlete has reached a peak and whether peak performance will actually be expressed during the competition. Purpose: Therefore, the purpose of this study was to determine when peak jumping performance was achieved and whether psychological or physiological variables explained any jump performance changes in a high-level female weightlifter preparing for a national competition. We hypothesized that jumping performance would peak on competition day, corresponding with improved recovery and stress states and preserved muscle cross-sectional area (CSA) relative to baseline values. Methods: A USA national-level female weightlifter (23.5 years; 54.0 ± 0.6 kg; 155.4 cm) participated in this investigation. Laboratory testing was carried out over a 7-month period as part of an ongoing long-term athlete monitoring program. Beginning 11 weeks out, testing was administered twice a week each week leading up to the competition, at the competition, and upon returning from the competition. Each testing session evaluated body mass, recovery-stress inventories using the short recovery and stress scale (SRSS), and vastus lateralis CSA via ultrasonography, followed by a standardized warm-up preceding unloaded squat jumps (SJ) performed on dual force plates sampling at 1 kHz. Hopkins' effect size (ES) classifications for each data point were used to determine the potential magnitude of change observed for each test relative to baseline values. The smallest worthwhile change was used to determine a meaningful change relative to baseline values.
The typical error and smallest worthwhile change were used to quantify the probability (i.e., precision) of the performance change that took place. Values greater or less than baseline values with precision >95% signified a very likely change for each testing session relative to the competition. Results: Weightlifting performance goals were met for the national championship (snatch = 67 kg, clean and jerk = 92 kg, total = 159 kg). Jumping performance (precision = 99%, ES = 2.7) almost certainly peaked on competition day, with increased recovery (ES = 0.7) and decreased stress scores (ES = 0.5). However, the athlete possibly exhibited a small decrease in muscle CSA (precision = 64.8%; ES = 0.4) the week of competition that corresponded with very large changes in body mass (precision = 99%; ES = 2.8). Conclusions: The training program was effective in ensuring the athlete peaked the day of competition based on jumping performance and recovery-stress scores, despite small decreases in CSA. Thus, weightlifting coaches and sport scientists working with high-level athletes should monitor jumping performance and recovery-stress state to ensure athletes peak at an appropriate time. Practical Applications: SRSS and SJ testing can be used as monitoring tools for high-level weightlifters preparing for important competitions.
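The monitoring statistics named in the Methods (typical error, smallest worthwhile change, and a probability of true change) follow well-known Hopkins-style conventions. The sketch below assumes those defaults (SWC = 0.2 × between-subject SD; normally distributed measurement error); the authors' exact procedures are not printed in the abstract.

```python
import math

def smallest_worthwhile_change(between_subject_sd, factor=0.2):
    """Conventional SWC: 0.2 x the between-subject SD (an assumed default)."""
    return factor * between_subject_sd

def typical_error(test_retest_diffs):
    """TE = SD of test-retest difference scores / sqrt(2)."""
    n = len(test_retest_diffs)
    m = sum(test_retest_diffs) / n
    sd = math.sqrt(sum((d - m) ** 2 for d in test_retest_diffs) / (n - 1))
    return sd / math.sqrt(2)

def chance_true_change_exceeds(observed, swc, te):
    """Probability the true change exceeds the SWC, assuming the observed
    change is normally distributed around the true change with SD = TE."""
    z = (observed - swc) / te
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

An observed change well beyond the SWC relative to the typical error yields a probability approaching 1, matching the "very likely change" language used with precision >95%.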

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Effect of Self-Directed Summer Training on Competitive Season Anaerobic Power Maintenance in Division I Female Soccer Athletes

L. Brown, K. Levers, T. Purdom, P. Martin, J. Giles, C. McPherson, and J. Howard

Longwood University

Seasonal variations in sprint speed, jumping power, and repeated-sprint capacity have been shown to occur throughout the annual training cycle. However, how these anaerobic variables are affected by changes in athlete training supervision, responsibility, and resource accessibility throughout the annual athletic calendar is not currently clear. Purpose: The aim of this study was to investigate how self-directed summer training programs affect competitive season anaerobic power maintenance in Division I female soccer athletes. Methods: Multiple anaerobic power tests were performed on 13 Division I female soccer athletes (mean ± SD: 19.62 ± 1.04 years; 60.75 ± 5.51 kg; 164.21 ± 5.88 cm; 20.44 ± 3.06 %BF; 49.18 ± 4.74 kg FFM) at the beginning of the spring season (PREss), end of the spring season (POSTss), beginning of the competitive season (PREcs), and middle of the competitive season (MIDcs). Subjects arrived having fasted for 4 hours, refrained from caffeine for 12 hours, and avoided alcohol and exercise for 24 hours prior to each testing session. Anaerobic power testing consisted of vertical power (VPWR) analysis using countermovement vertical jump (CMJ) height with the Harman formula and the 35-m running anaerobic sprint test (RAST) measuring average horizontal power (RASTapwr) and peak horizontal power (RASTppwr). All data are represented as absolute power in Watts (W). A 4 × 3 repeated measures ANOVA was used to compare changes in anaerobic power across the 4 time points. The LSD post hoc test was used to evaluate pairwise comparisons when significant interactions occurred. Results: The statistical analyses revealed a significant main effect of time (p < 0.001) with an observed power of 0.996. Pairwise comparisons showed that all performance variables significantly increased from PREss to POSTss (ΔVPWR 19.15%, p < 0.001; ΔRASTapwr 16.89%, p = 0.001; ΔRASTppwr 15.34%, p = 0.007).
From POSTss to PREcs, all performance variables significantly decreased (ΔVPWR −14.66%, p = 0.008; ΔRASTapwr −5.39%, p = 0.011; ΔRASTppwr −6.06%, p = 0.026). From PREcs to MIDcs, RASTapwr and RASTppwr significantly decreased (ΔRASTapwr −8.12%, p = 0.001; ΔRASTppwr −9.94%, p = 0.001). From PREss to MIDcs, no significant changes occurred in any of the performance variables (p > 0.05). Conclusions: Supervised training from PREss to POSTss had a significantly positive effect on all performance variables. Self-directed summer training from POSTss to PREcs significantly decreased competitive season readiness, likely contributing to the significant decrements in RASTapwr and RASTppwr from PREcs to MIDcs. Comparing PREss to MIDcs (a 9-month training duration), no significant differences were observed in any of the performance variables; thus, 9 months of training produced no net gain in anaerobic performance across the annual cycle. Practical Applications: Decrements in anaerobic power during self-directed summer training programs are likely attributable to an unstructured training environment and NCAA-regulated athlete-coach communication restrictions. Due to high competitive season training volumes, Division I female soccer athletes are unable to recover during the competitive season from the lasting consequences of insufficient summer training. Our results imply that a self-directed summer training program is not effective in this population, suggesting the need for alternative summer training approaches to maintain athletic performance and decrease the risk of athlete injury during the competitive season. Acknowledgments: Special thanks to Longwood University athletics for their collaborative efforts in helping us complete this study.
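Both power measures in the Methods are simple arithmetic. The RAST power formula (P = mass × distance² / time³) is the standard one for that test; the Harman jump-power coefficients below are the commonly cited published values and should be verified against the original source, since the abstract does not print them. The example athlete's numbers are hypothetical.

```python
def rast_power(mass_kg, distance_m, time_s):
    """Power (W) for one RAST sprint: P = mass * distance^2 / time^3."""
    return mass_kg * distance_m ** 2 / time_s ** 3

def harman_peak_power(jump_cm, mass_kg):
    """Harman-style CMJ peak power estimate (W). Coefficients are the
    commonly cited ones (61.9, 36.0, 1822), assumed here, not quoted
    from the abstract."""
    return 61.9 * jump_cm + 36.0 * mass_kg + 1822

# Hypothetical athlete: 60.8 kg, one 35-m RAST sprint in 5.2 s, 45-cm CMJ
sprint_p = rast_power(60.8, 35, 5.2)
jump_p = harman_peak_power(45, 60.8)
```

RASTppwr is the largest of the six sprint powers and RASTapwr their mean, so the same function serves both outcomes.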

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Effects of Very Short-Term DCER Training on Strength and Power Production in Untrained Females

M. Byrd, T. Dinyer, and H. Bergstrom

University of Kentucky

The very short-term training (VST) model, utilizing 2–3 training sessions, has been shown to increase muscle strength for isometric and isokinetic modalities. This model has been used to examine early-phase skeletal muscle, neural, and performance adaptations. Thus, this training model has potential implications for examining acute changes in strength and power. No previous studies, however, have applied the VST model to dynamic constant external resistance (DCER) training. In addition, there are limited data examining resistance training in females. Purpose: This study examined changes in strength and power production from the VST model using an upper body DCER exercise, the barbell bench press (BP), in females. Methods: Nine female subjects (mean ± SD: age: 21.7 ± 3.0 years, height: 165.1 ± 5.3 cm, body mass: 70.8 ± 11.2 kg) with no resistance training experience within the last 3 months completed one familiarization visit, one pre-test visit, 3 training visits, and one post-test visit. During the pre-test visit, the subject's BP 1 repetition maximum (1RM) was measured. Mean (BTMP) and peak (BTPP) power production were measured from the barbell bench press throw (BT) test, utilizing 35% of the subject's BP 1RM as resistance. The mean (RelMP) and peak (RelPP) power values were also expressed relative to the subject's body mass. The 3 training visits consisted of 5 sets of 6 repetitions at 65% of the subject's 1RM, with the concentric phase of the BP performed at maximal barbell velocity. The post-test followed the same procedures as the pre-test visit. Statistical analyses included paired-samples t tests to analyze pre- to post-test changes in 1RM, BTMP, BTPP, RelMP, and RelPP. An alpha level of p ≤ 0.05 was considered statistically significant. Results: Table 1 shows the mean (±SD) values for 1RM, BTMP, BTPP, RelMP, and RelPP during the pre-test and post-test.
The paired-samples t tests indicated a significant increase from pre- to post-test for 1RM (p = 0.008), BTMP (p = 0.015), and RelMP (p = 0.012). There were, however, no significant differences for BTPP (p = 0.115) and RelPP (p = 0.108) from pre- to post-test. Conclusions: These findings indicated an increase in both strength and power production as a result of VST with an upper body DCER exercise in untrained females. These strength and performance adaptations were likely related to neuromuscular adaptations, such as an increase in motor unit recruitment of the active muscles and/or decreased co-activation of the antagonist muscles. It is possible that more training sessions are needed to significantly increase peak power in females. Practical Applications: The VST model can be used to observe early skeletal muscle and performance adaptations in females, and has potential implications for rehabilitation, for examining acute changes in strength and power from nutritional interventions, and for athlete in-season strength and power maintenance.
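The pre/post analysis above reduces to a paired-samples t statistic on difference scores. A minimal sketch with hypothetical 1RM data (invented for illustration, not the study's values):

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(diff) / (SD(diff) / sqrt(n)),
    with df = n - 1. Returns (t, df)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    m = sum(diffs) / n
    sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    return m / (sd / math.sqrt(n)), n - 1

# Hypothetical BP 1RM (kg) for 4 subjects before and after VST
pre = [40.0, 45.0, 38.0, 50.0]
post = [43.0, 47.0, 41.0, 53.0]
t, df = paired_t(pre, post)
```

A consistent improvement across subjects (small spread in the difference scores) produces a large t even with few subjects, which is why the VST model can detect early adaptations in a sample of nine.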

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Changes in Motor Unit Behavior During a 12-Week Competition Period in Collegiate Powerlifters

R. Colquhoun, M. Magrini, P. Tomko, R. Eusufzai, M. Ferrell, T. Muddle, and N. Jenkins

Oklahoma State University

Purpose: To examine the changes in motor unit behavior, maximal strength, and muscle morphology across a 12-week competition period. Methods: Eight competitive collegiate male powerlifters (mean ± SD: age = 21 ± 1 years; powerlifting total = 597.2 ± 59.1 kg; Wilks Coefficient = 382.1 ± 33.7) completed 3 testing sessions: before (PRE) and after (OVER) an 8-week period of high-volume, high-intensity training designed to over-reach the athletes, and 4–6 days after a subsequent 3-week tapering and competition period (POST). During each testing session, maximal voluntary isometric contraction (MVIC) strength of the right knee extensors was measured. Participants then completed a 70% MVIC ramp contraction, during which surface electromyographic signals were recorded from the vastus lateralis (VL) using a specialized 5-pin sensor. These signals were later decomposed offline into their constituent motor unit (MU) action potential trains to examine MU firing behavior. Linear regression analyses were run to examine pooled changes in the slopes and intercepts of mean firing rate (FRMEAN) and MU action potential size (MUAPPP) expressed as a function of baseline recruitment threshold (RT; % MVIC). Finally, muscle cross sectional area (CSA) and echo intensity (EI) of the right VL and rectus femoris (RF) were measured and analyzed using dependent samples t tests. Results: Analyses of the pooled MU data revealed a change in the slope of the FRMEAN − RT relationship from PRE to POST (p = 0.01) and in the y-intercept from PRE to OVER, OVER to POST, and PRE to POST (all p < 0.02). A change in the slope of the MUAPPP − RT relationship was seen from PRE to OVER (p < 0.001) and PRE to POST (p = 0.04), and a significant change in y-intercept was seen between each time point (all p < 0.001). Conclusions: An 8-week period of training designed to over-reach collegiate powerlifters resulted in an increase in VL CSA and EI, probably as a result of muscle damage.
These athletes also experienced a decrease in maximal force production and altered MU behavior immediately following the over-reaching period. In the post-competition period, however, MVIC returned to baseline. In addition, an increase in the y-intercept and a decrease in the slope of the FRMEAN − RT relationship were seen from pre-training to post-competition, in conjunction with a decrease in the slope of the MUAPPP − RT relationship. Practical Applications: Our results provide coaches and practitioners with evidence that an appropriately timed taper following a high-volume, high-intensity training period may result in improvements in neuromuscular efficiency in competitive collegiate powerlifters. However, it does not appear that the taper used in this study was sufficient to resolve the muscle damage incurred during the training period, as indicated by elevated CSA and EI. Therefore, future research is needed on how to better manage fatigue and recovery during the pre-competition period to further improve performance.
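The pooled slope/intercept analysis described above is an ordinary least-squares fit of firing rate against recruitment threshold. A minimal sketch with hypothetical motor-unit data (the downward slope mirrors the typical onion-skin pattern of lower firing rates in higher-threshold units; these are not the study's values):

```python
def linreg(x, y):
    """Ordinary least-squares slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical pooled motor units: x = recruitment threshold (% MVIC),
# y = mean firing rate (pulses/s)
rt = [10, 20, 30, 40, 50]
fr = [22, 20, 17, 15, 12]
slope, intercept = linreg(rt, fr)
```

Training-related shifts then appear as changes in the fitted slope (the steepness of the firing rate vs. threshold gradient) and y-intercept (firing rates of the lowest-threshold units), which is exactly how the PRE/OVER/POST comparisons are framed.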

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Performance and Body Composition Changes From 4 Weeks of an APRE Lifting Program in Division III Basketball Players

M. Caro,1 J. Mann,2 M. Lane,3 C. Koons,3 and R. Bean3

1 Emory & Henry College; 2 University of Missouri; and 3 Eastern Kentucky University

Introduction: Collegiate strength and conditioning staffs have a limited number of contact hours with their athletes throughout the year. If the goal of a program is hypertrophy, the most efficient method of achieving it within the given time constraints must be utilized; therefore, different programming models must be compared to one another. Purpose: To compare a 2 adjustment set APRE training model to a 1 adjustment set APRE training model acutely for performance and body composition differences. Methods: Seventeen collegiate basketball athletes (11 women, 6 men; mean ± SD: 1.68 ± 0.07 m, 71.9 ± 11.2 kg, 19.8 ± 1.3 years) at an NCAA Division III institution were enrolled in this study. Height and weight were recorded prior to anthropometric measurements, which were obtained utilizing total-body dual-energy X-ray absorptiometry (DXA) scans, bioelectric impedance analysis (BIA), skinfold calipers, and girth measurements. Athletes were then divided into 2 separate groups and underwent 4 weeks of training using an APRE training program with either 1 adjustment set or 2 adjustment sets. Athletes were then post-tested for changes in predicted one repetition maximum (1RM) and body composition. T tests were performed to test for significant changes in variables from baseline. Results: Overall, the 1 adjustment set group gained 0.65 ± 1.3 kg of total mass, of which 0.45 ± 0.76 kg was fat mass and 0.21 ± 1.19 kg was lean mass. The 2 adjustment set group gained 1.64 ± 0.76 kg, of which 0.08 ± 0.63 kg was fat mass and 1.57 ± 0.63 kg was lean mass. There was a significant increase in lean body mass and a decrease in body fat percentage for the 2 adjustment set group compared to the 1 adjustment set group (p < 0.05). The women in the 2 adjustment set APRE group lost significantly more fat mass than those in the 1 adjustment set group (−0.02 kg and 0.75 kg, respectively; p = 0.05), in both absolute and percentage terms.
Specifically, there were greater regional changes in lean body mass in the trunk in the 2 adjustment set group compared to the 1 adjustment set group (p < 0.05). Conclusions: In an acute training model, there were greater changes in lean body mass with a 2 adjustment set APRE training model than with a 1 adjustment set model. Furthermore, there was greater fat loss in the 2 adjustment set group compared to the 1 adjustment set group. Further research into the longer-term effects of each training model needs to be undertaken. Practical Applications: The results of this study suggest that collegiate strength and conditioning programs with limited athlete contact hours may see greater fat loss and lean body mass gains from a 2 adjustment set APRE training model than from a 1 adjustment set model.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Relationships Between Measures of Absolute and Relative Strength and Power With Linear and Change of Direction Speed in High School Offensive Linemen

K. Collins1 and R. Lockie2

1 California State University, Fullerton; and 2 California State University, Fullerton

Introduction: Strength, power, linear speed, and change-of-direction (COD) speed are all necessary attributes for football players due to the explosive demands of the game. Strength and power are a foundation for linear and COD speed, but the relationships among these qualities in developing high school football players may differ from those in collegiate or professional players. Further, the offensive line (OL) is a distinct position responsible for blocking, which emphasizes application of force over short distances, involves physical engagement of an opponent, and tends to feature players with greater body mass. Little research exists on the relationships of strength and power to linear and COD speed specifically in high school OL players. Purpose: To examine the relationships of body mass, absolute and relative strength, and power measured with jump tests to linear and COD speed in high school OL. Methods: Fifteen male OL football players from the same high school participated in this study. The 0–2.29, 0–4.57, 0–9.14, and 0–36.58 m intervals of a 36.58-m (40-yard) sprint were recorded to measure linear speed. The first and second COD, and total time, of the pro-agility shuttle were recorded to measure COD speed. The COD deficit (the difference between the linear 0–9.14 m time and a COD in the pro-agility shuttle) was derived for both the first (COD deficit 1) and second COD (COD deficit 2) of the pro-agility shuttle. Absolute lower-body strength was assessed with a one-repetition maximum (1RM) back squat, which was divided by body mass for relative strength. Lower-body power was assessed by the standing broad jump (SBJ), vertical jump (VJ), and VJ peak power (calculated by the Sayers equation: 51.9 × VJ + 48.9 × body mass − 2007). Pearson's correlations (significance at p < 0.05) were calculated between the measures of linear and COD speed and body mass, absolute and relative 1RM back squat, SBJ, VJ, and VJ peak power.
Results: Body mass showed relationships to all splits of the 36.58-m sprint (r = 0.592–0.671), the 1RM back squat (r = 0.775), and VJ peak power (r = 0.868). The 1RM back squat had significant relationships with the initial 0–2.29 and 0–4.57 m splits of the 36.58-m sprint (r = 0.590–0.615) and COD deficit 2 (r = −0.662). The SBJ and VJ both showed relationships with the second COD and total time of the pro-agility shuttle (r = −0.703 to −0.885). Additionally, the SBJ related to COD deficit 2 (r = −0.754), and the VJ related to the 0–36.58 m sprint interval (r = −0.668). VJ peak power related to the initial 0–2.29 m sprint interval (r = 0.594) and COD deficit 2 (r = −0.572). Conclusions: Greater leg strength and VJ peak power related to faster times for COD deficit 2, but slower initial linear sprint times. The large body mass typical of the OL position may allow for greater absolute strength and power generation but negatively impact linear speed. Nonetheless, a higher VJ related to a faster terminal 36.58-m linear sprint time in high school OL. Better performance in the SBJ and VJ also related to faster times for the second COD, total pro-agility time, and COD deficit 2. The second COD, and the derived COD deficit 2, of the pro-agility shuttle may require greater braking and propulsive force, which may explain the relationship to greater leg strength and power. Practical Applications: In developing high school OL players, greater body mass is useful for strength and potentially for playing the position, but may negatively affect linear speed. However, greater power appeared to benefit COD performance, measured by the pro-agility shuttle and COD deficit, and speed over 36.58 m. Coaches should ensure high school OL have appropriate strength and power relative to body mass to enhance linear and COD speed.
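The two derived measures in the Methods are straightforward to compute. The Sayers coefficients below are exactly those quoted in the abstract (peak power in W from VJ in cm and body mass in kg); the example lineman's numbers are hypothetical.

```python
def sayers_peak_power(vj_cm, mass_kg):
    """Sayers equation as quoted in the abstract:
    peak power (W) = 51.9 * VJ (cm) + 48.9 * body mass (kg) - 2007."""
    return 51.9 * vj_cm + 48.9 * mass_kg - 2007

def relative_strength(squat_1rm_kg, mass_kg):
    """Relative strength = absolute 1RM back squat / body mass."""
    return squat_1rm_kg / mass_kg

# Hypothetical lineman: 50-cm VJ at 120 kg, 180-kg 1RM back squat
pp = sayers_peak_power(50, 120)
rel = relative_strength(180, 120)
```

Because body mass enters the Sayers equation directly, heavier OL players can post high peak-power values even with modest jump heights, which is consistent with body mass correlating strongly (r = 0.868) with VJ peak power in the results.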

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Mid-Thigh Pull Force-Time Characteristics in Elite and Recreational High-Intensity Functional Training Athletes

C. Almeda, Y. Feito, T. VanDusseldorp, G. Mangine, and T. Esmat

Kennesaw State University

Introduction: High-intensity functional training (HIFT) combines resistance training, gymnastics, and traditional aerobic exercise into single workouts that vary by day to elicit general physical preparedness. Although this strategy encompasses a wide variety of exercises, Olympic barbell movements and ballistic movements commonly appear in some form during training. Thus, it could be hypothesized that individuals who are strong and able to rapidly express their strength (i.e., a high rate of force development [RFD]) from the power position would excel at this sport. However, little is known regarding the differences between elite and recreational athletes in terms of strength and RFD. Purpose: To compare force-time characteristics during a mid-thigh pull in elite and recreational HIFT athletes. Methods: Four male and 3 female elite athletes (EA), who had previously progressed to at least the regional rounds of a popular, international fitness competition featuring HIFT (27.1 ± 4.4 years; 169.6 ± 12.3 cm; 81.6 ± 13.2 kg), along with 4 male and 4 female experienced (>2 years) recreational athletes (RA) who regularly (3–5 days per week) participate in HIFT (33.0 ± 8.2 years; 171.8 ± 13.5 cm; 76.3 ± 19.5 kg), volunteered to participate in this study. All athletes reported to Kennesaw State University's Human Performance Laboratory approximately 2 weeks prior to the commencement of the 2018 competition to complete an isometric mid-thigh pull (IMTP) assessment. Prior to exercise, the athletes were asked to step onto a force plate that was placed within a freestanding rig system and assume their preferred second-pull power position. The height of the barbell was then adjusted to be level with their mid-thigh. Following a standardized warm-up, the athletes completed 3 maximal, 6-second IMTP efforts. Peak force (PF), peak RFD (RFDPeak), average RFD (RFDAVG), and RFD at specific time bands from 0 to 30, 50, 90, 100, 150, 200, and 250 ms were recorded for all analyses. 
Data from the effort that produced the highest RFDPeak were used for all group comparisons. Results: Separate independent samples t-tests revealed no group differences in PF (EA: 1,871 ± 503 N; RA: 1,636 ± 449 N; p = 0.360), RFDPeak (EA: 1,295 ± 709 N·s−1; RA: 810 ± 421 N·s−1; p = 0.146), average RFD (EA: 1,346 ± 1,553 N·s−1; RA: 575 ± 421 N·s−1; p = 0.246), RFD30 (EA: 11,800 ± 10,915 N·s−1; RA: 7,680 ± 7,624 N·s−1; p = 0.422), RFD50 (EA: 9,839 ± 7,733 N·s−1; RA: 6,522 ± 6,223 N·s−1; p = 0.383), RFD90 (EA: 7,867 ± 4,658 N·s−1; RA: 5,465 ± 4,246 N·s−1; p = 0.319), RFD100 (EA: 7,618 ± 4,069 N·s−1; RA: 5,285 ± 3,814 N·s−1; p = 0.276), RFD150 (EA: 7,183 ± 2,961 N·s−1; RA: 4,974 ± 2,785 N·s−1; p = 0.163), RFD200 (EA: 6,643 ± 2,524 N·s−1; RA: 4,837 ± 2,523 N·s−1; p = 0.191), and RFD250 (EA: 5,787 ± 2,028 N·s−1; RA: 4,460 ± 2,109 N·s−1; p = 0.237). Conclusions: Although EA generally outperformed RA in all facets of the IMTP test, no significant differences were observed between groups. It is possible that our findings were limited by the sample size and the large degree of variability observed across groups. Practical Applications: Force production and RFD from the power position may be relevant to HIFT competition performance. However, the force-time characteristics expressed during an isometric mid-thigh pull do not appear to distinguish between elite and recreational level athletes. Based on the results of this investigation, it is recommended that strength and conditioning professionals and athletes consider a more specific test to assess these characteristics in relation to HIFT performance.
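The RFD time-band values reported above are conventionally computed as the change in force from contraction onset divided by the elapsed time. A minimal sketch, assuming force is sampled from onset at a known rate (the names and the 1,000-Hz example are illustrative, not from the study):

```python
def rfd_band(force_n: list, sample_rate_hz: float, band_ms: float) -> float:
    """Average RFD (N/s) over the first band_ms milliseconds from contraction
    onset: (F(t) - F(onset)) / t. Assumes force_n[0] is the onset sample."""
    i = round(band_ms / 1000 * sample_rate_hz)  # index of the sample at band_ms
    t = i / sample_rate_hz                       # elapsed time in seconds
    return (force_n[i] - force_n[0]) / t
```

For example, at 1,000 Hz the 0–50 ms band (RFD50) uses the 50th sample after onset.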

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Effect of Biofeedback on Muscle Activity During Fatiguing Sets of the Back Squat

D. Arndts,1 A. Askow,1 J. Stone,1 A. King,1 S. Goto,2 J. Hannon,2 J. Garrison,2 M. Jones,3 and J. Oliver1

1 Texas Christian University; 2 Texas Health Sports Medicine; and 3 George Mason University

Introduction: Velocity-based training (VBT) may be more accommodating to day-to-day performance fluctuations in athletes than traditional strength and conditioning programming. In particular, VBT allows the identification of an optimal training load and target velocity during a single session of training. Biofeedback (BFB), a form of neuromuscular training designed to enhance training, can be implemented as velocity-based BFB during training, which directs the athlete's attention externally during the resistive exercise. We have previously demonstrated that presentation of real-time velocity-based BFB results in higher velocities during a set of the back squat to volitional exhaustion. However, whether that is attributable to differences in muscle activity is unknown. Purpose: To examine the effect of velocity-based BFB on muscle activity, as measured by electromyography (EMG), during a fatiguing set of the back squat exercise. Methods: Thirteen (n = 13) resistance-trained men (23.8 ± 4.9 years; 85.4 ± 17.3 kg; 14.4 ± 6.4% fat) completed a familiarization session 48 hours prior to one-repetition maximum (1RM) testing. At least 72 hours after 1RM determination, subjects completed the same experimental testing procedures to volitional exhaustion under 4 randomly ordered conditions: 75% 1RM with and without BFB, and 90% 1RM with and without BFB. Prior to each experimental trial, a normalization trial was performed at a prescribed load equivalent to 50% 1RM. At least 72 hours separated each experimental trial. For each trial, subjects were instructed to move the load “as explosively as possible.” A commercially available linear position transducer attached to the right side of the barbell provided real-time feedback in the form of peak velocity achieved during the performance of each repetition in each experimental trial. Surface EMG (sEMG) was used to measure electrical activity of the vastus lateralis (VL), gluteus medius (GM), and biceps femoris (BF) bilaterally. 
The integrated sEMG data from the experimental trials were normalized to the normalization trial for the corresponding experimental condition. A repeated measures ANOVA was used to determine statistical significance of the findings. Results: There was no main effect for condition (BFB or no BFB) for any of the muscles examined (p > 0.05). No significant interaction with condition (BFB or no BFB) was observed for any of the muscles examined (p > 0.05). A main effect for time of repetition (first 3 repetitions, last 3 repetitions) was observed for all 3 muscles, with increased muscle activity through the performance of each trial (VL, p = 0.001; BF, p = 0.027; GM, p = 0.024). Conclusions: Muscle activity as measured by sEMG increases as repetitions are performed to failure in the back squat. In the current study, muscle activity did not differ with the presence of BFB, despite the higher velocities observed with BFB, indicating that another underlying mechanism may contribute to the performance differences. Practical Applications: Despite a lack of difference in muscle activity, practitioners should incorporate BFB when structuring programming for the enhancement of power, as higher velocities are observed with the same absolute load.
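The normalization step described above, expressing each experimental trial's integrated sEMG relative to the 50% 1RM normalization trial, reduces to a per-muscle ratio. A minimal sketch with illustrative names:

```python
def normalize_semg(trial_iemg: dict, norm_iemg: dict) -> dict:
    """Integrated sEMG per muscle, expressed relative to the normalization
    trial (1.0 means activity equal to the 50% 1RM normalization trial)."""
    return {muscle: trial_iemg[muscle] / norm_iemg[muscle] for muscle in trial_iemg}
```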

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Maturity-Related Differences in the Adaptations to Anaerobic Capacity Following Sprint Interval Training in Youth Male Athletes

K. Beyer,1 J. Stout,2 M. Redd,2 K. Baker,2 D. Church,3 H. Bergstrom,4 J. Hoffman,2 and D. Fukuda2

1 Bloomsburg University; 2 University of Central Florida; 3 University of Arkansas for Medical Sciences; and 4 University of Kentucky

Purpose: To assess the maturity-related differences in the adaptations to anaerobic capacity following a 4-week sprint interval training (SIT) program amongst youth male athletes. Methods: Twenty-seven youth male athletes (age: 11–17 years) were assessed for their years from peak height velocity (PHV), an estimation of somatic maturity status, and grouped into PRE (<−1.5 years), PERI (−1.5 to +1.5 years), and POST (>+1.5 years) PHV. During the SIT program, participants completed 8 sessions consisting of 4–7 repeated 20-second “all-out” sprints on a cycle ergometer against a load of 7.5% of body mass with 4-minute rest periods. During the first (SIT1) and last (SIT8) sessions, peak (PP) and mean power (MP), relative to body mass, were recorded for each sprint and averaged for each session. Individual sprint data were assessed via 3-way (group × training × sprint) ANOVA, while session averages were assessed via 2-way (training × group) ANOVA. Level of significance was set at p < 0.05 and trends were determined at p < 0.10. The magnitude of difference between the change scores of each maturity group was determined using Cohen's d coefficients. Results: No significant 3-way interactions existed for PP or MP. Average PP and MP are presented in Table 1. For average PP, there was a trend (p = 0.095) for a 2-way interaction with significant main effects of group (p = 0.030) and training (p < 0.001). For average MP, there were significant main effects of group (p = 0.003) and training (p = 0.042), and a significant 2-way interaction (p = 0.044). Post hoc tests revealed that PRE was significantly less than PERI and POST at SIT1 and SIT8. Furthermore, average MP significantly increased from SIT1 to SIT8 in PERI (p = 0.016) and POST (p = 0.007), with no change in PRE (p = 0.562). When comparing the changes in average MP from SIT1 to SIT8, POST was significantly (p = 0.016) greater than PRE, while a trend (p = 0.053) for a difference existed between PERI and PRE. 
Cohen's d revealed there were large differences in the change in average MP when comparing PRE and PERI (d = 0.874) and comparing PRE and POST (d = 1.073), with a small difference between PERI and POST (d = 0.352). Additionally, Cohen's d coefficients revealed that large differences existed for changes in average PP between PRE and PERI (d = 0.994) and PRE and POST (d = 0.846), with a moderate difference between PERI and POST (d = 0.518). Conclusions: PP appears to improve following SIT regardless of maturity status; however, adaptations to MP appear to be blunted amongst PRE. Furthermore, the greatest adaptations to MP appear to occur in POST, while PP adaptations may be greatest in PERI. Practical Applications: SIT may not be the most appropriate training modality prior to PHV, as adaptations to anaerobic capacity may be limited. Strength and conditioning professionals should consider these maturity-related differences in the adaptations to SIT when designing and implementing long-term athlete development training programs.
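The Cohen's d coefficients above compare the change scores between maturity groups. A minimal pooled-SD implementation follows; the sample data in the usage note are illustrative, not from the study:

```python
from math import sqrt


def cohens_d(group_a: list, group_b: list) -> float:
    """Cohen's d between two groups' change scores, using the pooled
    sample standard deviation (n - 1 denominator)."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance, group A
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)  # sample variance, group B
    pooled_sd = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

For example, `cohens_d([1.0, 2.0, 3.0], [3.0, 4.0, 5.0])` returns −2.0 (a 2-unit mean difference against a pooled SD of 1).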

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Positional Comparisons in Absolute and Relative Performance Measures in National Football League (NFL) Draft Prospects

J. Boone,1 J. Sklaver,2 Y. Feito,1 T. VanDusseldorp,1 R. Wildman,3 and G. Mangine1

1 Kennesaw State University; 2 Human Nutrition Institute, University of Bridgeport; and 3 Texas Womens University

Introduction: Athletes will train for the NFL Combine to improve their technical skills, physiological measures, and their chances of being drafted to a professional team. Typically, skill players will outperform larger and stronger down linemen in the areas of speed and agility. However, down linemen must still be able to block for (i.e., offensive linemen) or catch (i.e., defensive linemen and linebackers) skill players. It is possible that down linemen possess similar speed and agility capabilities when assessed independent of body mass and specific training. Purpose: To examine positional differences in absolute and relative performance measures in NFL Combine trainees. Methods: Fifty-eight football athletes (22.7 ± 1.2 years; 186.3 ± 7.2 cm; 110.9 ± 22.7 kg) completed off-season performance testing prior to initiating a preparatory training program for the NFL Combine. Following a standardized warm-up, the athletes completed assessments of 40-m sprint time (SPR; in seconds), pro-agility time (AGL; in seconds), L-drill time (LD; in seconds), and standing broad jump (SBJ; in meters). Each athlete was allotted 2–3 maximal attempts for each measure with 3–5 minutes of rest between trials. Separate one-way analyses of variance with Tukey's post hoc tests were used to assess differences between defensive backs (DB), light defensive front 7 players (LF7; linebackers and defensive linemen lighter than 112.2 kg), heavy defensive front 7 players (HF7), offensive linemen (OL), offensive backs (OB), and receivers (RC) for all absolute and relative (to body mass) performance measures. Results: Differences (p < 0.001) were observed between positions for all absolute and relative performance measures. DB and RC outperformed (p < 0.05) OL and HF7 in all absolute performance measures. However, OL and HF7 outperformed (p < 0.05) DB and RC when considering body mass for all performance measures except SBJ. Differences between other positions varied. 
Specific position differences are presented in Table 1. Conclusions: In NFL Draft prospects, skill players who are heavily involved in the passing game (i.e., DB and RC) outperformed down linemen (i.e., OL and HF7) in measures of speed, agility, and power. However, relative to body mass, down linemen were faster and more agile. Practical Applications: It is common for skill-position players to outperform down linemen in several measures of speed, agility, and power that occur at the NFL combine. However, our data suggests that down linemen are relatively faster and more agile than skill players. Although absolute measures of performance are known to influence athletic success, the effect of relative ability is less clear. General managers and strength coaches might consider relative ability when assessing athletic talent.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Static Stretching and Preconditioning Exercise Augments Power Output in Recreational Athletes

M. Mason, T. Butterfield, R. Shapiro, and M. Abel

University of Kentucky

Introduction: Power is an important element of performing numerous athletic tasks including sprinting, jumping, and throwing. Acutely enhancing power output may improve the performance of these skills. There are numerous factors that affect power development including muscle mechanics, morphological factors, and neural factors. Previous literature suggests that various neuromuscular exercises may be performed prior to competition to enhance subsequent power performance. Post-activation potentiation (PAP) of the agonist muscle and static stretching of the antagonist muscle have independently demonstrated increased power output. Purpose: To examine the combined and independent effects of performing static stretching on the antagonist musculature and a preconditioning exercise on vertical jump (VJ) performance and electromyographic (EMG) activity in recreational athletes. Methods: A convenience sample of 20 healthy, recreationally trained male rugby players (age: 23.9 ± 3.5 years, height: 180.2 ± 5.3 cm, body mass: 90.1 ± 11.7 kg) participated in this study. All subjects completed a control condition of no treatment, a stretching treatment, a potentiation treatment, and a combined (stretching + potentiation) treatment prior to performing a VJ on a Vertec device. The stretching treatment consisted of a passive stretch of the hip flexors and ankle dorsiflexors (3 repetitions × 30 seconds per stretch). The potentiation treatment consisted of a maximal voluntary isometric contraction (MVIC) performed for all leg extensor muscle groups prior to the VJ test. The MVIC was executed by performing a functional isometric deadlift (3 repetitions × 5 seconds; 20 seconds recovery) with a tow strap. The subject performed this maneuver by standing in an athletic stance on the tow strap and holding the attached handles, simulating a deadlift. The subject then pressed as hard as possible from the deadlift position while gripping the strap firmly to prevent vertical movement. 
The subject performed 3 trials of the VJ with 20–30 seconds of recovery between attempts. Peak EMG activity was obtained from the gluteus maximus (GM), vastus lateralis (VL), medial gastrocnemius (MG), and tibialis anterior (TA). Repeated measures ANOVAs were used to determine differences in VJ height and EMG activity between conditions. Paired-samples t-tests were used for post-hoc analyses. A Bonferroni correction was used to account for the inflation of Type I error; therefore, the level of significance was set at p < 0.008. Results: There was a main effect of condition on VJ height (F(3,17) = 9.125, p = 0.001; effect size = 0.62; power = 0.98) such that the combined treatment of stretching plus potentiation produced a significant increase (1.58 ± 1.42 cm) in VJ height compared to the control condition. Despite strong trends, there were no differences in VJ height in the stretching (p = 0.050) or potentiation (p = 0.012) only treatments. There were no differences in mean EMG activity between conditions. Conclusions: Although the individual exercises did not produce an independent effect, the combined treatment of stretching plus preconditioning exercise did enhance vertical jump performance, suggesting a potential synergistic neuromuscular effect. Practical Applications: These findings suggest that these neuromuscular exercises may be performed prior to short-duration power sports to acutely enhance performance.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Comparison of Power Output During Three Different Bench Press Workouts

A. Bruenger and C. Buckman

University of Central Arkansas

It has been suggested that performing more sets with fewer repetitions (reps) allows for more power production during a weight workout compared to fewer sets with more reps at the same load/intensity. Purpose: To compare the differences in average power and change in power during 3 bench press workouts that differ in sets, reps, and recovery. Methods: Nine Division I female athletes (age: 19.5 ± 1.5 years, height: 175 ± 7 cm, weight: 78 ± 20 kg) volunteered for this study. All participants performed a bench press 5-repetition maximum (5RM; mean: 422 ± 98 N) to determine the weight used during the following workouts. The participants completed 3 bench press workouts in random order over the next 3 weeks: 3 sets of 5 reps with 4 minutes rest between sets (3 × 5 × 4), 5 sets of 3 reps with 4 minutes rest between sets (5 × 3 × 4), and 5 sets of 3 reps with 2 minutes rest between sets (5 × 3 × 2). Each workout was performed with 90% of the participant's bench press 5RM for all sets. No training of the upper extremity was allowed for 48 hours prior to the workouts. Data were collected via video camera streamed into a computer, and the average velocity of the lateral end of the bar during the concentric phase was determined using video analysis software. Power was determined by multiplying the workout weight by the bar velocity. The power outputs of all 15 reps were averaged, and a repeated measures ANOVA was performed to assess differences between workouts. Changes in bench press power output during the workout were assessed using a 2 × 3 (first 3 repetitions/last 3 repetitions × workout) repeated measures ANOVA. Alpha level was set at 0.05. Paired t-tests with an alpha level of 0.01 were used to find differences when significant ANOVA results were found. Results: There was no statistical difference in average bench press power output during the 3 workouts (3 × 5 × 4: 206 ± 39 W, 5 × 3 × 4: 214 ± 43 W, 5 × 3 × 2: 208 ± 33 W, p = 0.519). 
However, there was a significant interaction effect (p = 0.024) when comparing the first and last 3 repetitions among the workouts. Paired t-tests found that the power during 3 × 5 × 4 significantly decreased (p = 0.01), while the power during the 5 × 3 × 2 trended toward a significant decrease (p = 0.024). Conclusions: The current study does not support the initial premise of more power production using more sets with fewer reps, at least for the bench press exercise. However, due to the small sample size and the significant decrease in power output during the 3 × 5 × 4 workout, more research is needed to assess the validity of this premise. Practical Applications: The current study suggests that performing the bench press using a 3 × 5 × 4 or 5 × 3 × 2 protocol would allow for similar overall power output, with less time to complete, compared to the 5 × 3 × 4 protocol.
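The power calculation used in this study (workout weight, expressed as 90% of the 5RM in Newtons, multiplied by the concentric bar velocity) can be sketched as follows; names and example values are illustrative:

```python
def workout_load(five_rm_n: float, pct: float = 0.90) -> float:
    """Workout load (N) as a percentage of the bench press 5RM (90% here)."""
    return five_rm_n * pct


def rep_power(load_n: float, mean_velocity_ms: float) -> float:
    """Concentric power (W): load (N) multiplied by average bar velocity (m/s)."""
    return load_n * mean_velocity_ms
```

At the study's mean 5RM of 422 N, the workout load would be about 380 N, so a rep with a 0.55 m/s average concentric velocity would produce roughly 209 W.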

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Relationships Between 40-m Sprinting and Vertical Jump Kinetics in National Football League (NFL) Draft Prospects

G. Hampton,1 J. Sklaver,2 Y. Feito,1 R. Wildman,3 and G. Mangine1

1 Kennesaw State University; 2 Human Nutrition Institute, University of Bridgeport; and 3 Texas Womens University

Introduction: Vertical jump performance is thought to be indicative of peak sprinting velocity. However, this assumption is based upon relationships found between sprinting time and vertical jump displacement and kinetics. Little information is available that compares sprinting kinetics beyond the acceleration phase and vertical jump kinetics. Purpose: To determine the relationships between 40-m sprinting and vertical jump kinetics in NFL Draft prospects. Methods: Twenty-six NFL draft prospects (22.7 ± 1.0 years; 186.5 ± 7.9 cm; 109.3 ± 23.6 kg) completed off-season assessments of 40-m sprinting and vertical jump (VJ) performance and kinetics. Following a standardized warm-up, each athlete completed one maximal, 40-m sprint trial while tethered to a robotic sprinting device (RSD) at minimal resistance (1-kg), and an additional, untethered (0-kg) trial. Sprinting time was assessed by a laser timing system, while peak (PK) and average (AVG) sprinting velocity (V), force (F), and power (P) were measured by the RSD. Following 40-m sprinting assessments, the athletes completed 3 maximal VJ trials while tethered to a linear position transducer, which measured VJ PPK, PAVG, VPK, and VAVG on each jump. Additionally, average partial power (PPAVG) and force (PFAVG) were calculated from their respective values accumulated from the onset to 50% of each jump. Data from the jump that produced the highest PPK was used for analysis. Each maximal sprint and VJ trial was separated by 3–5 minutes of rest. Pearson's product-moment correlation coefficients were calculated between all sprinting and VJ variables. Results: VJ displacement was negatively related (p < 0.05) to sprinting time (1-kg: r = −0.63; 0-kg: r = −0.47) and positively related (p < 0.001) to sprinting VAVG (r = 0.70), FAVG (r = 0.45), and PAVG (r = 0.62). 
VJ VPK was negatively related (p < 0.05) to sprinting time at 1-kg (r = −0.48) and positively related to sprinting VPK (r = 0.52), VAVG (r = 0.50), FAVG (r = 0.42), and PAVG (r = 0.48). Interestingly, VJ PAVG was positively related to sprinting time at 1-kg (r = 0.63, p < 0.001) and negatively related (p < 0.05) to sprinting VPK (r = −0.44), VAVG (r = −0.58), FAVG (r = −0.39), and PAVG (r = −0.51). Further, VJ PPAVG was positively related to sprinting time at 1-kg (r = 0.46, p = 0.021) and negatively related to PVAVG (r = −0.45, p = 0.021), while VJ PFAVG was positively related (p < 0.05) to sprinting time (1-kg: r = 0.63; 0-kg: r = 0.49) and negatively related (p < 0.05) to sprinting VAVG (r = −0.55) and PAVG (r = −0.43). No other significant relationships were observed. Conclusions: Vertical jump displacement and velocity appear to positively impact 40-m sprinting performance, while force and power produced during the vertical jump may have a negative impact. Practical Applications: Although our findings may seem contradictory, it is worth noting that linear position transducers are specifically designed to measure time and displacement, whereas the reported power is derived from the resultant velocity calculation and the load that is manually entered. Consequently, the accuracy of the power and force values reported by these devices may be lower than that of the reported velocity. When attempting to utilize this technology to relate an athlete's vertical jump performance to their sprinting ability, strength and conditioning professionals should place more weight on vertical jump displacement and velocity.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Unique Contributions of Segmental Lean Body Mass to Peak Force and Rate of Force Development in Women and Men

C. Herring, D. Fukuda, M. Redd, T. Starling-Smith, R. Girts, and J. Hoffman

University of Central Florida

Introduction: The importance of peak force (PF) and rate of force development (RFD) in physical performance has been well documented in the literature. The isometric mid-thigh pull (IMTP) is a valid and reliable assessment tool to measure force-time characteristics, such as PF and RFD. However, the relationship between segmental lean body mass (LBM) and IMTP variables has yet to be explored. Purpose: To evaluate the contributions of segmental lean body mass (LBM), overall LBM, and training experience on IMTP performance. Methods: Thirty-eight women (n = 20; age = 21.7 ± 5.6 years, height = 170 ± 6.8 cm, body mass = 66.5 ± 11.6 kg; training experience: 4.1 ± 4.1 years) and men (n = 18; age = 23.3 ± 3.3 years, height = 170 ± 5.1 cm, body mass = 84.3 ± 17.6 kg; training experience: 5.9 ± 4.0 years) underwent multi-frequency bioelectrical impedance analysis to determine overall LBM, and LBM of the arms, legs, and trunk. Participants also performed an IMTP with a custom-built rack and force plates without straps to determine RFD and absolute PF. Relative PF was later calculated using participants' absolute PF and body mass. Stepwise linear regression was used to determine the relationships between segmental LBM and RFD, absolute PF, and PF relative to body mass for men and women. Independent samples t-tests were used to evaluate sex-based differences in IMTP variables. Pearson correlations were used to compare overall LBM and training experience with RFD, absolute PF, and PF relative to body mass for men and women. Results: There was a significant difference between women and men in RFD (p < 0.03, 95% CI = 358.14–5,180.53 N·s−1), absolute PF (p < 0.01, 95% CI = 346.02–813.57 N), and relative PF (p < 0.01, 95% CI = 1.83–6.34 N·kg−1). For women, leg LBM had the greatest relationship with RFD (r2 = 0.374; p = 0.02) and absolute PF (r2 = 0.693; p < 0.01), while arm LBM was most related to relative PF (r2 = 0.193; p < 0.03). 
Also for women, overall LBM was correlated with RFD (r = 0.525; p = 0.02), absolute PF (r = 0.862; p < 0.01), and relative PF (r = 0.499; p = 0.03), while training experience was correlated with RFD (r = 0.762; p < 0.01) and absolute PF (r = 0.671; p < 0.01). For men, arm LBM had the greatest relationship with absolute PF (r2 = 0.635; p < 0.01); however, no variables were found to be related to RFD or relative PF. Also for men, overall LBM was correlated with only absolute PF (r = 0.769; p < 0.01), while training experience was not correlated with any of the other IMTP variables (p > 0.05). Conclusions: Overall, there are significant differences in RFD, absolute PF, and relative PF in the IMTP between women and men. While IMTP performance appears to be associated with segmental LBM, these relationships differ between men and women. Furthermore, training experience may play a greater role in IMTP performance when evaluating women as compared to men. Practical Applications: The results of this investigation may allow practitioners to better understand the specific influences segmental LBM and training experience have on PF and RFD in women and men.
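The relative PF values above (N·kg−1) follow from a simple normalization of absolute IMTP peak force by body mass; a one-line sketch with illustrative names:

```python
def relative_peak_force(peak_force_n: float, body_mass_kg: float) -> float:
    """Relative peak force (N/kg): absolute IMTP peak force divided by body mass."""
    return peak_force_n / body_mass_kg
```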

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Comparison of Exercise Program Modalities on Their Impact on Fitness and Body Composition Scores in Older Adults

S. Dorgo,1 E. Akehurst,2 D. Scott,3 and A. Hayes2

1 The University of Texas at El Paso; 2 Victoria University; and 3 Monash University

Regular physical activity has been shown to prevent age-related loss of muscle strength. Recent studies have suggested that muscle power declines faster than muscle strength with aging, and power is assumed to be a greater predictor of physical function than strength. However, no systematic study has directly tested this assumption. Purpose: To assess the effects of a strength training program and a power/agility-focused training program on various fitness measures among older adults. Methods: Eighty-five older adults (mean ± SD age: 67.55 ± 6.75 years), 35 males and 50 females, were assigned to 2 training groups using blocked randomization: (a) a strength training group (ST; n = 56) that followed ACSM guidelines for elderly training and performed mostly aerobic and strength exercises; and (b) a group that performed reduced-volume strength training but added power, agility, and mobility exercises (PT; n = 29). Both groups engaged in two 90-minute sessions weekly for 16 weeks. Total training volume was equalized between groups. All subjects were tested before and after the 16-week program on strength, power, balance, speed, and agility, as well as dual-energy x-ray absorptiometry (DXA) for body composition. Data were analyzed by repeated measures ANOVA with alpha level set at p < 0.05. Results: Both groups showed significant improvements (p < 0.05) in strength (measured by handgrip dynamometer), muscular endurance (30-second arm curl and chair stand tests), gait speed (flat ground and uphill maximum walking speed), upper body power (standing and seated medicine ball throws), agility (up-and-go test), and aerobic endurance (6-minute walk). Only the PT group showed significant improvement in lower-body power (vertical jump), while only the ST group improved significantly on the back-leg strength (dynamometer) test. 
No significant differences in fitness improvement were observed between the groups for any measures, except the uphill speed walk test, where the ST group showed greater improvements (p < 0.05). The ST group also demonstrated significant improvements in bone mineral density (BMD) and relative muscle mass (appendicular lean mass/height2) after training, while total lean mass increased for both ST and PT groups (p < 0.05). Conclusions: It appears that both 16-week programs were effective in eliciting fitness performance adaptations in the older adult subjects. We did not observe differences between groups with regard to level of improvement for the majority of the measures. Strength training alone appeared to be effective in eliciting comprehensive fitness improvements in older adults, even in power, speed, and agility measures. Similarly, a power- and agility-based program with reduced strength training volume was also effective in eliciting strength and muscular endurance improvements in older adult subjects. One key difference was observed in the lower body, as strength training elicited greater back-leg strength improvement whereas power training elicited greater vertical jump improvement. Improved BMD and relative muscle mass would contribute to reduced fracture risk and decreased incidence of sarcopenia. Practical Applications: Previously untrained older adults appear to respond well to both strength and power/agility training. A program following ACSM guidelines with aerobic and strength exercises may lead to comprehensive fitness improvements, but added power and agility exercises allow additional improvements, particularly in lower-body power. Acknowledgments: Funded by the NSCA Foundation International Collaborator Grant.
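The relative muscle mass index reported from the DXA scans (appendicular lean mass divided by height squared, in kg/m2) can be computed as follows; names and values are illustrative:

```python
def relative_muscle_mass(appendicular_lean_mass_kg: float, height_m: float) -> float:
    """Relative muscle mass (kg/m^2): appendicular lean mass divided by
    height squared, an index commonly used when screening for sarcopenia."""
    return appendicular_lean_mass_kg / height_m ** 2
```

For example, 20 kg of appendicular lean mass at a height of 2.0 m gives an index of 5.0 kg/m2.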

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Muscle Quality and Muscular Dimensional Changes Between Young and Older Adults

H. Giuliani,1 N. Shea,2 G. Gerstner,1 J. Mota,1 T. Blackburn,3 and E. Ryan1

1 University of North Carolina at Chapel Hill; 2 Georgia Institute of Technology; and 3 University of North Carolina at Chapel Hill

Previous studies have demonstrated that aged muscles have an increased infiltration of non-contractile tissue (i.e., fat and fibrous tissue), and these reductions in muscle quality are linked to poor strength and function. The mechanism by which alterations in muscle quality reduce muscle function is unclear; however, a recent modeling study has suggested that poor muscle quality may cause a reduction in muscular dimensional changes during muscle contractions. Purpose: To determine if altered muscle quality influences muscular dimensional changes during incremental increases in isometric torque production. Methods: Twenty-three young males (mean ± SD: age = 21.7 ± 2.3 years; stature = 177.6 ± 5.9 cm; mass = 70.9 ± 7.6 kg; BMI = 22.4 ± 1.2) and 21 older males (mean ± SD: age = 69.5 ± 2.1 years; stature = 176.8 ± 6.4 cm; mass = 73.1 ± 8.2 kg; BMI = 23.3 ± 1.5) visited the laboratory on 2 occasions. The first visit consisted of resting ultrasonography of the vastus lateralis (VL), rectus femoris (RF), and vastus medialis (VM), followed by familiarization with the isometric strength testing. At the second visit, participants completed the isometric strength testing protocol, during which ultrasonography was used to determine muscular dimensions of the RF muscle. Image analysis software was used to outline the resting VL, RF, and VM muscles individually to determine muscle quality as mean gray-scale, subcutaneous fat-corrected echo intensity (EI) values. A single thigh-specific EI value was calculated as the average of the 3 muscles. The same software was also used to determine RF muscle cross-sectional area (CSA) from the resting and active muscle images. Following 3 submaximal warm-up muscle actions, each participant performed 2 isometric leg extension maximal voluntary contractions (MVC) on a calibrated isokinetic dynamometer. Participants then performed 9 separate submaximal isometric step contractions at 10–90% of their MVC, in increments of 10%. 
The submaximal contractions were performed in random order with a 2-minute recovery period between contractions. All isometric testing was performed at 60° of leg flexion. Independent-samples t-tests were used to examine differences in all descriptive data and thigh EI between groups. The resting and active dimensional changes were examined using a 2 × 11 (group × intensity) mixed factorial ANOVA. All analyses were performed with an alpha level set a priori at p ≤ 0.05. Results: Stature, body mass, and BMI were not different between groups (p ≥ 0.057). Echo intensity was greater in the older adults (95.7 ± 13.4) than the young adults (82.3 ± 11.4, p ≤ 0.001). There was a significant interaction effect (p ≤ 0.001) for changes in CSA. For the young group, muscle CSA was greater at rest when compared to 30–100% MVC (p ≤ 0.045), 10% was greater than 50–100% (p ≤ 0.035), and 20% was greater than 60–100% (p ≤ 0.046). There were no significant changes from rest to 100% MVC for the older group (p ≥ 0.371). Conclusions: Older adults showed poorer muscle quality than young adults. This study showed that young muscle decreased in size with increases in torque production beginning at 30% MVC, while older muscle did not change size from rest to 100% MVC. Practical Applications: These findings demonstrate that muscle quality may influence muscle dimensional changes during contractions. Future studies are needed to determine if altered dimensional changes influence muscle function, and what training and/or nutritional strategies can mitigate these age-related changes. Acknowledgments: This research study was funded by an NSCA Foundation Masters Research Grant.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Change of Direction Ability: Investigating its Relationship With Sprint Performance and Maturation in Youth Soccer Players

S. Phillips and E. Donaldson

University of Edinburgh

Introduction: The 505 agility test is a common test of change of direction (COD) ability in soccer players. However, the 505 test may be heavily influenced by linear sprint speed, which is independent of COD ability. This brings into question the validity of the 505 test for isolating and evaluating COD ability. One-metre directional changes are less influenced by linear sprint speed, suggesting that a “101” test may be a more valid measure of COD ability. In youth, biological maturity influences the development of linear speed through a variety of morphological and neuromuscular mechanisms. Examining the relationship between tests of COD ability, linear speed, and maturation will allow further quantification of the validity of such tests. Purpose: (a) To examine the relationship between 505 and 101 agility test performance and linear sprint performance in youth soccer players; (b) To investigate the influence of maturation on agility and sprint performance. Methods: Anthropometric data (chronological age, body mass, sitting height, leg length) were collected from 28 elite academy soccer players (age 12.9 ± 2.0 years, height 1.62 ± 0.14 m, body mass 49.9 ± 14.7 kg). These data were used to estimate age at peak height velocity and maturity offset. Sprint performance was assessed via 20 and 5 m linear sprint tests. Participants performed 3 sprints at each distance, with the best performance used. On a separate day, participants completed the 505 and 101 agility tests. For each test, 3 attempts were made starting and turning on the right foot (505 R, 101 R), and 3 attempts starting and turning on the left foot (505 L, 101 L). The best performance for each test was used. Participants were habituated to linear sprint and 505 agility testing, and were also familiarised with all tests as part of the study. Pearson's correlation coefficient identified relationships between agility performance and sprint performance. 
Spearman's rank-order correlation coefficient correlated maturation with agility and sprint performance. Results: Moderate to large relationships were found between 505 and 5 m sprint performance (505R r = 0.68, p < 0.001; 505L r = 0.81, p < 0.001), and between 505 and 20 m sprint performance (505R r = 0.54, p = 0.003; 505L r = 0.56, p = 0.002). However, there were no significant relationships between 101 and 5 m (101R r = 0.10, p = 0.60; 101L r = 0.11, p = 0.59) or 20 m (101R r = −0.31, p = 0.113; 101L r = −0.31, p = 0.11) sprint performance. There were moderate relationships between maturation and sprint performance (5 m rho = −0.43, p = 0.02; 20 m rho = −0.63, p < 0.001) and 505 performance (505 R rho = −0.71, p < 0.001; 505L rho = −0.61, p = 0.001). However, there were no significant relationships between maturation and 101 performance (101 R rho = 0.15, p = 0.46; 101 L rho = −0.14, p = 0.49). Conclusions: In elite youth soccer players, the 505 agility test is significantly correlated with linear sprint performance and maturation. However, the 101 agility test is not significantly correlated with either factor. Therefore, the 101 agility test may offer a more valid assessment of COD ability. Practical Applications: These findings suggest that the 505 and 101 agility tests are influenced by different physical factors. For practitioners wishing to employ tests of COD that are not influenced by linear sprint speed or maturation, the 101 agility test is recommended over the 505 agility test.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

The Influence of Aging and Obesity on Muscle Characteristics

G. Gerstner,1 H. Giuliani,1 N. Shea,2 M. Laffan,1 A. Trivisonno,1 J. Mota,1 and E. Ryan1

1 University of North Carolina at Chapel Hill; and 2 Georgia Institute of Technology

Age-related changes in muscle size and quality (i.e., increases in fat and fibrous tissue) are well documented. Recent studies have suggested that the age-related changes in muscle morphology may be exacerbated by obesity. Few studies have investigated the impact of aging and obesity on muscle architecture. Purpose: The purpose of this study was to examine the influence of aging and obesity on muscle morphology and architecture. Methods: Twenty-four young normal weight males (YNW), 21 older normal weight males (ONW), and 16 older obese males (OB) volunteered for this study. Panoramic brightness-mode ultrasound images of the vastus lateralis (VL) were taken to determine anatomical muscle size (CSA), subcutaneous fat-corrected echo intensity (EI) to represent muscle quality, fascicle length (FL), and pennation angle (PA). Participants rested in a supine position with their right leg propped at 50° of flexion. The probe was moved perpendicularly along the transverse plane of the VL for CSA and EI, whereas the FL and PA measurements were obtained along the fascicle plane. All assessments were analyzed with imaging software. To account for differences in CSA due to infiltration of fat or fibrous tissue, CSA was normalized to EI (CSA/EI). Additionally, FL was normalized to femur length. One-way analysis of variance (ANOVA) was used to determine differences between groups. Six separate ANOVAs were performed for BMI, CSA, EI, CSA/EI, FL, and PA, followed by post-hoc analyses. A familywise alpha level of p ≤ 0.05 was used to determine significance. Results: The ANOVAs showed a significant difference between the 3 groups in BMI (p < 0.001), CSA (p < 0.001), EI (p < 0.001), and CSA/EI (p < 0.001). The results from the post-hoc analyses are displayed in Table 1. No differences were seen between groups for FL (p = 0.541). 
PA was greatest in the OB group and smallest in the ONW group, but group differences did not reach statistical significance (p = 0.078). Conclusions: These findings support previous studies suggesting that CSA and EI differ between young, older, and older obese adults. The older men had poorer muscle quality than the young men, with obesity exacerbating the poor muscle quality in older men. When CSA was normalized to EI, differences between the ONW and OB groups were no longer present, possibly due to increased fat infiltration in the OB group. Although muscle architecture was not impacted by aging or obesity in the present study, previous research has indicated PA may differ between groups, with increased fat infiltration causing an increase in pennation angle. Practical Applications: Changes in muscle characteristics have been suggested to contribute to alterations in tissue structure and fiber contractile properties. Further studies are needed to discern the impact of aging vs. obesity and to what extent the subsequent alterations in muscle characteristics can be reduced with exercise and/or nutritional interventions. Acknowledgments: This research study was funded by an NSCA Foundation Masters Research Grant.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Positional Differences in Training Load During a Competitive Season in Male Collegiate Soccer Players

S. Rossi, M. Eisenman, P. Chrysosferidis, S. Henry, and G. Ryan

Georgia Southern University

The physical requirements of a soccer match have previously been demonstrated to vary between playing positions. Previous studies have reported that different positions require varying amounts of sprinting, jogging, and walking during match play. Purpose: The intent of this study was to compare training load, by measuring active training minutes (ATmin) and the percent of practice time spent in 3 defined zones of percent maximum heart rate (%HRHigh, %HRMid, %HRLow), as well as self-reported measures of exertion and recovery, between positions (Forwards [F], Midfielders [M], and Backs [B]) during seasonal training in collegiate soccer players. Methods: Twenty-five Division I collegiate male soccer players (F = 6, M = 10, B = 9) participated in the study. Participants volunteered to wear a bio-harness during each training session of a 14-week competitive season (preseason [PS] = 5 weeks; in-season [IS] = 9 weeks). Heart rate (HR), physiological, and psychological data were continuously collected for each participant throughout each training session. Data were uploaded to a computer post-practice for analysis. All data were presented as position mean ± SD. A multivariate ANOVA was used to test mean differences in each variable between position groups. A Kruskal-Wallis test was run for perceived recovery status (PRS) and rating of perceived exertion (RPE) due to the ordinal nature of the data. Post-hoc Tukey tests were run on any significant omnibus result. Results: No significant differences were present during PS among any of the positions for any of the variables of interest. A significant omnibus difference was noted IS between positions in %HRHigh (F[2, 469] = 10.520; p < 0.05). Conclusions: The results suggest that the M and F positions required higher effort during training when compared to the B position. This positional data may be useful for managing stress and training load for individual athletes due to specific positional demands. 
Practical Applications: The use of wearable monitoring technology during training allows the coaching and strength and conditioning staff to monitor both internal and external training load by position. This information can then be used to assess player and positional demands during training, quantify the quality of training for the individual athlete and position, and provide useful information for planning future training sessions and upcoming phases of training.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Maturity-Related Differences in Muscle Hypertrophy Following Sprint Interval Training in Male Youth Athletes

M. Magee,1 K. Beyer,1 M. Redd,2 K. Baker,2 E. Arroyo,3 D. Church,4 H. Bergstrom,5 J. Hoffman,2 and D. Fukuda2

1 Bloomsburg University; 2 University of Central Florida; 3 Kent State University; 4 University of Arkansas for Medical Sciences; and 5 University of Kentucky

Purpose: To assess how maturity status impacts the changes in muscle thickness (MT) and cross-sectional area (CSA) following a 4-week sprint interval training (SIT) program amongst youth male athletes. Methods: Twenty-seven male youth athletes (age: 11–17 years) were assessed for their years from peak height velocity (PHV), an estimate of somatic maturity status. Based upon this assessment, athletes were placed into one of 3 groups: PRE (<−1.5 years), PERI (−1.5 to +1.5 years), and POST (>+1.5 years) PHV. Before the first (Pre-Training) and last (Post-Training) training session, ultrasound was used to collect MT and CSA of the vastus lateralis from each athlete's dominant leg. Three measurements were taken at Pre-Training and Post-Training for both CSA and MT, and the average of these measurements was recorded. The SIT program comprised 8 sessions, each consisting of 4–7 repeated 20-second maximal effort sprints on a cycle ergometer against a load of 7.5% of body mass, with 4 minutes of rest between sprints. MT and CSA data were assessed using separate group × time ANOVAs. The level of significance was set at p < 0.05. Changes in MT and CSA were calculated for each maturity group. Cohen's d coefficients were calculated to determine the magnitude of between-group differences for the changes in MT and CSA. Results: Table 1 shows the MT and CSA values at Pre-Training and Post-Training and the change from Pre-Training to Post-Training. There was no significant group × time interaction for MT (p = 0.548). Furthermore, there was no main effect of time for MT (p = 0.830); however, there was a significant main effect of group (p = 0.014). Post-hoc analysis revealed that POST was significantly greater than PRE (p = 0.006) and PERI (p = 0.026) regardless of time, with no differences between PRE and PERI (p = 0.399). 
For CSA, there was no significant group × time interaction (p = 0.393); however, there was a significant main effect of time for CSA (p < 0.001) and a significant main effect of group (p < 0.001). Post-hoc analysis revealed that POST was significantly greater than PRE (p < 0.001) and PERI (p = 0.001), while PERI was significantly greater than PRE (p = 0.013) at Pre-Training and Post-Training. Furthermore, CSA significantly increased with training (p < 0.001) regardless of maturity group. The Cohen's d coefficients indicated that differences in the adaptations in MT were small between PRE and PERI (d = 0.336) and moderate between PRE and POST (d = 0.640). For changes in CSA, Cohen's d revealed moderate differences between PRE and PERI (d = 0.680) and between PRE and POST (d = 0.777). Differences between PERI and POST were trivial for MT (d = 0.190) and CSA (d = 0.016). Conclusions: A 4-week SIT program did not elicit significant increases in MT, but did increase CSA in youth male athletes regardless of maturity group. In addition, the changes in MT and CSA appear to be limited prior to PHV. Practical Applications: When designing a long-term athlete development training program for male youth athletes, a strength and conditioning specialist can implement a SIT program to elicit favorable changes in muscle CSA for PERI and POST athletes; however, PRE athletes may not experience the same adaptations to SIT.
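The effect sizes reported above follow the standard Cohen's d formula: the difference between group means divided by the pooled standard deviation. A minimal sketch of that calculation, using illustrative change scores rather than the study's data:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative (made-up) mean change in CSA for two maturity groups
# of 9 athletes each; values are not from the study.
d = cohens_d(mean1=2.1, sd1=1.5, n1=9, mean2=1.1, sd2=1.4, n2=9)
print(round(d, 3))  # → 0.689, a moderate effect by conventional thresholds
```

By convention, d around 0.2 is considered small, 0.5 moderate, and 0.8 large, which is consistent with how the abstract labels its coefficients.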

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Predicting On-Field Contribution Using the National Football League (NFL) Combine Measurables in the 2017 NFL Rookie Class

G. Ryan,1 R. Herron,2 S. Bishop,2 and C. Katica3

1 Georgia Southern University; 2 University of Montevallo; and 3 Pacific Lutheran University

The National Football League (NFL) conducts an annual combine to assess the athletic ability of prospective draftees in preparation for the draft. Following the combine, many of these players, as well as others, are drafted or sign as undrafted free agents to play in the NFL. However, the best performers at the combine do not always experience on-field success. Purpose: The purpose of this study was to determine how well the performance measures of the athletes invited to the 2017 NFL Combine predicted on-field contribution, as measured by snaps taken during the 2017 NFL regular season. Methods: Data from 6 performance tests of 326 athletes were used for analysis. The 6 tests were: 40 yard sprint; 225 pound bench press repetitions; vertical jump; broad jump; 3-cone drill; and 20 yard shuttle. The number of offensive/defensive and special teams snaps for each rookie was averaged over the course of the season. A multiple linear regression was calculated to predict on-field contribution based on the 6 performance tests recorded during the NFL Combine. Results: A significant omnibus regression equation was found (F(6, 165) = 2.927, p = 0.010), with an R² of 0.099. The average number of plays an athlete was involved in was equal to 200.832 + 6.439 (40 yard sprint) + 0.465 (bench press repetitions) + 0.294 (vertical jump) − 0.370 (broad jump) − 10.543 (3-cone drill) − 24.493 (20 yard shuttle), 95% CIs [26.497, 375.167], [−18.109, 30.986], [−0.216, 1.146], [−1.245, 1.834], [−1.142, 0.401], [−31.050, 9.964], and [−53.521, 4.535], respectively. Of the 6 performance tests, 20 yard shuttle performance was most highly related to on-field contribution. 
Additionally, on-field contribution predictions differed among individual position groups: Running Backs (R² = 0.422); Quarterbacks (R² = 0.058); Tight Ends (R² = 0.319); Offensive Linemen (R² = 0.229); Wide Receivers (R² = 0.212); Defensive Linemen (R² = 0.372); Linebackers (R² = 0.614); and Defensive Backs (R² = 0.115). Conclusions: The findings of this study suggest that the performance testing conducted at the 2017 NFL Combine was somewhat effective at predicting on-field contribution of rookie players during the 2017 NFL regular season, though predictive ability (R²) varied widely among individual position groups. Practical Applications: These findings may help teams and scouts to assess performance and determine the potential on-field contribution of draftees and undrafted free agents. However, due to the variable nature of the prediction across position groups, the NFL and its teams may wish to reconsider what is measured at the NFL Combine to improve the evaluation process for selecting and playing athletes.
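The reported regression equation can be applied directly to an athlete's combine scores. The sketch below encodes the published coefficients; the input values and assumed units (seconds, repetitions, inches) are hypothetical, as the abstract does not state units:

```python
def predict_snaps(sprint40, bench_reps, vertical, broad, cone3, shuttle20):
    """Predicted average snaps from the reported combine regression (R² = 0.099).

    Inputs assumed: 40-yd sprint (s), 225-lb bench press reps, vertical jump
    (in), broad jump (in), 3-cone drill (s), 20-yd shuttle (s).
    """
    return (200.832
            + 6.439 * sprint40       # 40 yard sprint
            + 0.465 * bench_reps     # bench press repetitions
            + 0.294 * vertical       # vertical jump
            - 0.370 * broad          # broad jump
            - 10.543 * cone3         # 3-cone drill
            - 24.493 * shuttle20)    # 20 yard shuttle

# Hypothetical athlete (illustrative values only, not from the data set).
print(round(predict_snaps(4.55, 20, 34.0, 120.0, 7.0, 4.3), 2))  # → 25.9
```

Note that, consistent with the abstract, the 20 yard shuttle carries the largest coefficient magnitude, so small changes in shuttle time move the prediction the most.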

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Test-Retest Reliability of Novel Clinical Strength Measures of the Upper Body in an Older Population

H. Legg,1 J. Lanovaz,2 J. Farthing,2 A. Miko,2 F. Dale,2 J. Spindor,2 R. Dziendzielowski,2 S. Sharkey,2 and C. Arnold2

1 St Mary's University, London/University of Saskatchewan; and 2 University of Saskatchewan

Purpose: Strength capacity has been strongly linked to functional ability in an ageing population, and a number of measurement tools are available to assess strength clinically in older adults. While traditional measures such as handgrip strength are well known, some novel tools have recently been developed that can assess strength in more functional movements. An understanding of the reliability of these measures and their relationship to the traditional strength tests is important if they are to be used in research and clinical practice. The purpose of this study was to assess the test-retest reliability and concurrent validity of novel clinical strength measures of the upper body in an older population. Methods: Seventeen older adults (6 males; 11 females; 71 ± 10 years, 1.6 ± 0.1 m, 76.9 ± 13.9 kg) visited the lab on 2 separate occasions, 48 hours apart. Participants performed 3 maximal repetitions of a novel vertical push-off test (POT) and a novel dynamometer-controlled single-arm press for both concentric (CON) and eccentric (ECC) actions. In addition, 3 maximal repetitions of the traditional clinical measures were collected, including hand-grip dynamometry (HG) and isometric hand-held dynamometry for shoulder flexion (SF), shoulder abduction (SA), and elbow extension (EE). Peak values for the dominant (D) and nondominant (N) arms were utilized for analysis. Results: ICC analyses showed strong test-retest reliability between the 2 data collection days (all ICC >0.9, p < 0.001). Pearson's correlations identified significantly strong positive correlations between the novel strength measures and the traditional clinical measures (all r > 0.8, p < 0.001). CV%RMS precision errors for all strength measures were >5% (N% and D%, respectively: POT: 13.7, 16.1; CON: 12.1, 9.3; ECC: 11.5, 8.8; HG: 6.9, 6.6; SF: 7.9, 6.7; SA: 11.3, 13.1; EE: 7.6, 5.6). 
Conclusions: These results indicate high test-retest reliability of novel clinical strength measures of the upper limb in adults over age 60. Novel strength measures demonstrate similar test-retest reliability as the traditional clinical measure of handgrip strength. The increased precision error present in the novel measures may be due to the complexity of the multi-joint movement pattern requiring greater control. Practical Applications: The novel strength measures are a reliable assessment of older adults' upper body strength and provide insight into the profile of multi-joint dynamic arm strength that is lacking during traditional clinical strength measurements. However, higher precision error compared to traditional measures warrants caution when completing comparative clinical assessments over time.
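The CV%RMS precision error reported above is conventionally computed as the root mean square of each participant's within-subject coefficient of variation across repeated test days; the abstract does not spell out its formula, so the sketch below is an assumption, with made-up repeated measures:

```python
import math
import statistics

def cv_rms_percent(trials_by_subject):
    """RMS coefficient of variation (%) across subjects' repeated trials.

    trials_by_subject: sequence of sequences, one inner sequence of repeated
    measurements per subject (e.g., peak force on day 1 and day 2).
    """
    cv_squared = []
    for trials in trials_by_subject:
        mean = statistics.mean(trials)
        sd = statistics.stdev(trials)  # sample SD of this subject's trials
        cv_squared.append((sd / mean) ** 2)
    return 100 * math.sqrt(statistics.mean(cv_squared))

# Made-up grip-strength values (kg) for 3 subjects, 2 test days each.
data = [(30.0, 32.0), (25.0, 24.0), (40.0, 43.0)]
print(round(cv_rms_percent(data), 2))  # → 4.29
```

Under this formulation, a CV%RMS above 5% (as the novel measures showed) means day-to-day variation exceeds 5% of a typical subject's score, which motivates the abstract's caution about comparative assessments over time.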

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Body Composition and Fitness in Female Soccer Players: Effects of Concurrent Strength and Metabolic Conditioning

L. Adlof, L. Cosio Lima, and A. Crawley

University of West Florida

Introduction: Soccer is highly demanding both metabolically and physically, so optimal strength and conditioning programs for female high school soccer players are essential. Resistance training and high intensity interval training are beneficial in young athletes; however, the effect of a concurrent strength and metabolic conditioning program on body composition in female soccer players has yet to be investigated. Purpose: This study examined the effects of an 8-week concurrent strength and metabolic conditioning program on body composition, speed, agility, anaerobic capacity, strength, and power in female soccer players. Methods: Subjects were 14 female high school soccer players (age = 16.14 ± 1.02 years; height = 162.92 ± 6.95 cm; weight = 58.7 ± 6.43 kg). Body composition, performance testing, and strength testing were recorded before and after 8 weeks of concurrent high intensity interval training and periodized resistance training performed 3 days per week. A paired-samples t-test was used to compare mean change from pre- to posttest, and Pearson's correlations between variables were calculated. Results: Significant (p < 0.05) improvements were made in the vertical jump, pro agility test, 40 yd sprint, back squat, shoulder press, bench press, and power clean. Percent body fat (%BF) was significantly (p < 0.05) higher at posttest. There were strong correlations between power, agility, and speed performance, as well as correlations between power and strength. No correlations were observed between BMI and %BF. Conclusions: An 8-week concurrent strength and conditioning program was effective for improving measures of fitness and performance in female soccer players. Overall, power, strength, and speed significantly improved. The increase in %BF did not appear to interfere with improvements in strength and power in female soccer players. 
Practical Applications: Concurrent strength and metabolic conditioning can improve soccer players' explosive strength and performance. Training protocols that use low volume and high loads (3 sets of 5-RM) may improve neural adaptations and avoid muscular hypertrophy, which can minimize the interference effect.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

A Pilot Study of Objective and Subjective Athlete Monitoring: Do Coaches and Players Agree?

J. Bunn, O. Sisson, and C. Johnston

Campbell University

Purpose: Players and coaches frequently disagree about the physiological demands of practice and training sessions. This is particularly true for practice sessions in skill-specific sports that are not easily quantified through typical volume measures like distance run or repetitions completed. Therefore, the purpose of this pilot study was to evaluate agreement between objective training load (heart rate) and subjective assessments from players and coaches during practice sessions. Methods: Division I female lacrosse players (n = 13) and their coaches (n = 3) completed subjective ratings of perceived exertion (RPE) for each practice session (n = 25). Coaches provided RPE before the start of practice and players provided RPE after the completion of practice. The players also wore a heart rate monitor during practice to provide an objective measure of cardiovascular load (CVL). Total CVL (in arbitrary units) was calculated by taking the time spent in each heart rate zone, multiplying it by the zone number (e.g., 5, 4, 3, 2, or 1), and then adding the mean heart rate for the session. Spearman correlations were run to assess agreement between each pair of metrics. Results: The mean CVL for the practices was 146.3 ± 10.1, mean player RPE was 5.89 ± 0.59, and mean coach RPE was 5.67 ± 0.96. Player RPE and coach RPE had a strong correlation (ρ = 0.762, p < 0.001), and player RPE was also strongly correlated with CVL (ρ = 0.672, p = 0.001). Coach RPE and CVL showed a low correlation (ρ = 0.247, p = 0.256). Conclusions: These data show that coaches and players in this specific setting frequently agreed on the difficulty of practice, and that the players' subjective ratings had good agreement with their objective CVL. However, the coaches tended to estimate the practice RPE lower than the players, resulting in very little agreement with player CVL. 
Practical Applications: The results from this pilot study will be useful for the lacrosse coaches to structure their practices in better alignment with CVL. This helps to close the gap between coaches and players regarding the physiological demands of practice. These data, in combination with other objective measures related to mechanical loading and subjective assessments related to recovery and sleep, would further assist coaches and support staff to ensure optimal performance of the athletes. Further, staff may want to address the data by position and/or by drill, as certain practice days may be more difficult for a specific position.
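The CVL metric described above (time in each heart rate zone weighted by zone number, plus the session's mean heart rate) can be sketched as follows; the zone durations and mean HR are illustrative, not the study's data, and the time units for zone occupancy are an assumption since the abstract does not specify them:

```python
def cardiovascular_load(minutes_in_zone, mean_hr):
    """Total CVL (arbitrary units), per the abstract's description:
    sum of (time in zone x zone number) plus session mean heart rate.

    minutes_in_zone: dict mapping zone number (1-5) to time spent in it.
    """
    return sum(zone * minutes for zone, minutes in minutes_in_zone.items()) + mean_hr

# Illustrative practice: 10 min in zone 1, 20 in zone 2, 15 in zone 3,
# 8 in zone 4, 2 in zone 5, with a session mean HR of 148 bpm.
load = cardiovascular_load({1: 10, 2: 20, 3: 15, 4: 8, 5: 2}, mean_hr=148)
print(load)  # → 285
```

Because higher zones are weighted more heavily, two sessions of equal duration can produce very different CVL values, which is the property that lets the metric separate hard and easy practices.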

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Comparison of Traditional vs. Non-Traditional Warm-Up Routines on Strength Development of First Grade Students

R. Bonnette and H. Hardin

Texas A&M University Corpus Christi

Purpose: The purpose of this study was to determine if a creative warm-up routine would yield greater improvements in elementary school children's physical strength than a traditional warm-up. Methods: Subjects (76 males and 56 females, age = 7 ± 1 years) were first-grade students enrolled in a local elementary school. Students enrolled in one class period served as the experimental group (37 males, 29 females), while students enrolled in another period served as the control group (39 males, 27 females). At the beginning of the semester, both groups were pretested on 3 FitnessGram strength variables (curl-up, push-up, and trunk lift). The control group performed a traditional warm-up (e.g., static stretching, exercises, jogging), while the experimental group participated in a non-traditional warm-up (e.g., games, races, competitions). Throughout the semester, each group warmed up prior to class in the same fashion (control: traditional; experimental: non-traditional). Subjects were re-tested at the end of the semester. A 2 (group) × 2 (time) ANOVA with repeated measures was calculated to determine the effects of time and group on the curl-up, push-up, and trunk lift. The alpha level was p ≤ 0.05 for all comparisons. Results: There was no significant interaction effect of time and group on the curl-up (F 1,130 = 0.323; p = 0.571), push-up (F 1,130 = 0.669; p = 0.415), or trunk lift (F 1,130 = 3.813; p = 0.053). Furthermore, there was no main effect of group on the curl-up (F 1,130 = 0.320; p = 0.573), push-up (F 1,130 = 0.902; p = 0.344), or trunk lift (F 1,130 = 1.004; p = 0.318). However, there was a main effect of time on the curl-up (F 1,130 = 63.815; p < 0.001), push-up (F 1,130 = 68.872; p < 0.001), and trunk lift (F 1,130 = 22.352; p < 0.001). Conclusions: The analysis did not reveal significantly greater gains from the non-traditional warm-up compared to the traditional warm-up. However, the means for each exercise were higher for the experimental group. 
Thus, a longer intervention period (e.g., 1 year) may be needed to yield significant differences. Practical Applications: Although there is little research comparing these 2 types of warm-ups, the practically meaningful trends suggest physical education teachers may consider including non-traditional warm-ups in their classes.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Changes in Exercise Data Management Improves Insights Into Individualized Crew Care

R. Buxton1 and K. Kalogera2

1 University of Houston; and 2 KBRwyle

Introduction: The suite of exercise hardware aboard the International Space Station (ISS) generates an immense amount of data. The data, collected from the treadmill, cycle ergometer, and resistance strength training hardware, are basic exercise parameters (time, heart rate, speed, load, etc.). The raw data are processed in the laboratory, and more detailed parameters are calculated from each exercise data file. Purpose: To improve data storage by adding a level of security and increasing accessibility, resulting in more efficient delivery of medical reports. Methods: Consolidating all of the exercise system data in a single repository enables a quick response to both the medical and engineering communities. A SQL server database has been developed and provides a secure location for all of the exercise data from ISS Expedition 1 to the present. Commercial tools were evaluated to help aggregate and visualize data from the SQL database. The database has been structured to update derived metrics automatically, making analysis and reporting available within minutes of dropping the in-flight data into the database. Results: The commercial software currently in use provides a manageable interface, which has improved the laboratory's turnaround time for crew reports by 67%. Conclusions: Questions regarding exercise performance, or how exercise may influence other variables of crew health, frequently arise within the crew health care community. Inquiries regarding the health of the exercise hardware often need quick analysis and response to ensure the exercise system is operable on a continuous basis. Practical Applications: The implementation of a SQL database with a live connection to data visualizations has created a faster turnaround time for reporting to the crewmember's crew surgeon. It is also important to have custom dashboards for other teams who care for crewmembers, to help improve how they interact with the crewmembers. 
Each person who cares for a crewmember or a team athlete will have different data needs, and those should be taken into account. A database and dashboards can help disseminate information faster, from team game stats to individual player performance metrics. From off-season training to injury rehabilitation, quality data storage and analytics can give athletes an edge over the competition.
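As a concrete illustration of the repository pattern described above: the abstract does not publish its schema, so the table, view, and column names below are hypothetical. A minimal sketch with Python's built-in sqlite3 module shows how derived metrics can be exposed through a view so they update automatically as new session files are inserted:

```python
import sqlite3

# Hypothetical schema: the abstract does not name its tables or columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE exercise_session (
    crew_id TEXT, device TEXT, duration_min REAL, avg_hr REAL)""")

# A view recomputes derived metrics (here, total minutes and mean heart
# rate per crew member) automatically whenever new rows are inserted.
conn.execute("""CREATE VIEW weekly_load AS
    SELECT crew_id, SUM(duration_min) AS total_min, AVG(avg_hr) AS mean_hr
    FROM exercise_session GROUP BY crew_id""")

conn.executemany(
    "INSERT INTO exercise_session VALUES (?, ?, ?, ?)",
    [("A", "treadmill", 30.0, 140.0), ("A", "cycle", 45.0, 125.0)])

row = conn.execute("SELECT total_min, mean_hr FROM weekly_load").fetchone()
print(row)  # (75.0, 132.5)
```

Because the derived metrics live in a view rather than a separate table, no batch recomputation step is needed before reporting, which matches the "available within minutes of dropping the in-flight data" behavior described above.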

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Postural Performance, Neurocognition, and Self-Reported Concussion: A Longitudinal Study of Incoming Division II Collegiate Freshmen Football Players

R. Chetlin,1 S. Farbacher,1 C. Paddock,1 R. Riemedio,2 and B. Jacobson1

1 Mercyhurst University, Department of Sports Medicine; and 2 Mercyhurst University

We previously reported an inverse relationship between the number of self-reported concussions and multiple postural stability outcomes in one group of incoming, asymptomatic Division II freshmen football players, while ImPACT and agility scores demonstrated no relationship to the number of prior self-reported concussions. Postural stability deficit following acute and chronic sport-related concussion is well documented. Potential latent effects on postural performance and neurocognitive function in newly matriculating, asymptomatic Division II freshmen collegiate football players have not been examined in longitudinal fashion. Purpose: To determine the longitudinal relationship between agility, postural stability, neurocognitive measures, and self-reported concussion in Division II freshmen football players prior to beginning their collegiate careers, and to examine longitudinal differences in these outcomes by position group. Methods: Seventy-eight Division II freshmen football players (mean age = 18.0 ± 0.5 years; mean height = 182.9 ± 6.9 cm; mean bodyweight = 94.8 ± 18.3 kg), across 3 different seasons (2015–2017), participated in offseason testing: agility was measured using a 20-yard shuttle run; postural stability was assessed with the BIODEX Balance System; and neurocognitive function was evaluated with the ImPACT inventory. Self-reported concussion occurred at least 6 months post-insult. Players were grouped by position for further statistical analysis: linemen/linebackers (L/LB, n = 36); running backs (RB, n = 10); wide receivers/defensive backs (WR/DB, n = 23); and quarterbacks/kickers (QB/K, n = 9). Results: Self-reported concussion number was significantly, inversely correlated to overall postural stability (r = −0.52, p = 0.008), anterior-posterior postural stability (r = −0.50, p = 0.012), and medial-lateral postural stability (r = −0.46, p = 0.021) for 2015 freshmen, but was not related to agility or ImPACT outcomes.
Self-reported concussion number was significantly correlated to ImPACT reaction time composite score (r = 0.49, p = 0.034) for 2016 freshmen, but was not related to agility or postural stability outcomes. Self-reported concussion number was significantly correlated to shuttle run time (r = 0.40, p = 0.05) and ImPACT visual motor speed composite score (r = 0.35, p = 0.047) for 2017 freshmen, but was not related to postural stability measures. Overall, RB scored better in ImPACT verbal memory composite score vs. L/LB (p = 0.03) and QB/K (p = 0.046); ImPACT visual memory composite score vs. L/LB (p = 0.05); and ImPACT impulse control composite score vs. L/LB (p = 0.01), WR/DB (p = 0.001), and QB/K (p = 0.001). Conclusions: The variable relationships between the number of self-reported concussions, postural stability measures, and neurocognitive function are supported by the literature. The findings of this study delineated these associations by year of matriculation and demonstrated that these relationships may not be predictable on an annual basis in freshmen football players beginning their collegiate careers. Previously concussed athletes may develop shifting neuromuscular compensation strategies to account for individual somatosensory deficits, which may manifest in different performance and neurocognitive outcomes. Practical Applications: Long-term concussion management should include a regular, multifaceted approach to consistently evaluate potentially unpredictable neurocognitive and kinesthetic changes in athletes participating in collision sports. Strength and conditioning professionals who work with previously concussed athletes should be aware of such individual differences and utilize all appropriate data to develop specific training programs addressing possibly complex, latent deficits associated with concussive insult.
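The correlational analyses reported above rest on the Pearson product-moment coefficient. A minimal sketch, with purely illustrative data (not the study's), shows the computation; a negative r, as reported for the 2015 cohort, indicates that a higher concussion count coincided with lower postural stability scores:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data only (not the study's): self-reported concussion
# count vs. a postural stability score (higher score = more stable).
concussions = [0, 0, 1, 1, 2, 3]
stability   = [2.1, 1.9, 1.6, 1.5, 1.2, 0.9]
r = pearson_r(concussions, stability)
print(round(r, 2))
```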

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

A Preliminary Analysis of Health and Fitness Characteristics for Custody Assistant Recruits in a Law Enforcement Agency Prior to Academy

K. Cesario,1 M. Moreno,2 A. Munoz,1 J. Dulla,3 M. Stierli,4 A. Bloodgood,2 R. Orr,2 J. Dawes,6 and R. Lockie1

1 California State University Fullerton; 2 California State University, Fullerton; 3 Los Angeles County Sheriff's Department; 4 NSW Police; 5 Bond University; and 6 University of Colorado-Colorado Springs

Introduction: The use of physical ability and fitness testing as an employment qualification is common among law enforcement occupations, due to the physical demands of the job. Most law enforcement agencies (LEAs) require candidates to meet a certain physical fitness level, or standard, as part of their selection process. However, custody assistant (CA) and correctional populations may only use height and body mass measures, which often cannot be strictly enforced, as courts have commonly ruled against height and body mass standards on the grounds that they are not job-related. Nonetheless, the health and physical fitness of a CA recruit prior to academy could influence whether they are capable of successfully completing academy and graduating, in addition to their future job performance or longevity. Purpose: To determine the overall health and fitness characteristics of CA recruits entering academy training relative to population norms. Methods: Retrospective analysis was conducted on 90 CA recruits (males = 49; females = 41; 18–57 years) from 3 LEA academy classes. Physical fitness testing occurred 3 days prior to the start of academy for each class. The health assessments included: height, body mass, body mass index (BMI), and waist girth measurements; resting blood pressure; grip strength measured by a hand dynamometer; maximal push-ups in 60 seconds; and recovery heart rate from the Young Men's Christian Association (YMCA) step test as a measure of aerobic fitness. Data from the CA recruits were compared to age- and sex-related norms established by the American College of Sports Medicine (ACSM). Results: Of the recruits measured, 4.76% had a BMI score of underweight, 46.43% were normal, 39.29% were overweight, and 9.52% were defined as obese. With regards to resting blood pressure, 24.44% of the recruits were considered normal, while 44.44% were pre-hypertensive, 21.11% had Stage 1 hypertension, and 10.00% had Stage 2 hypertension.
When considering disease risk based on BMI and waist circumference, 52.38% of the CA recruits had no increased disease risk, while 34.52% had an increased risk, 9.52% had a high risk, and 3.57% had a very high risk. The grip strength scores (based on the average of the 2 hands) resulted in 69.51% being defined as poor, 20.00% as below average, 7.78% as average, and 8.89% as above average. With regards to the push-up test, 5.56% had a score categorized as needing improvement, 11.11% were fair, 13.33% were good, 16.67% were very good, and 53.33% were excellent. Regarding the recovery heart rate measured after the YMCA step test, 67.42% of the recruits had a score of very poor, 16.85% were poor, 8.99% were below average, 3.37% were average, and 3.37% were above average. Conclusions: The results from this study indicated that, when considering the assessed parameters, the CAs had higher health risk and poorer fitness when compared to general population norms as established by the ACSM and YMCA. For example, the current standards, where height and body mass are used as a physical guide for hiring, resulted in 49% of recruits being above a normal BMI. Although there is no current hiring standard for strength or aerobic fitness, 83% of recruits had a grip score rated as being below average, while 93% of recruits performed below average or worse on the YMCA step test. Practical Applications: The training staff for CA recruits should be aware that the majority of their classes may feature individuals with lesser health and fitness status than the population from which they are drawn. The lesser health and fitness of recruits could influence graduation rates, future job performance, and longevity, and requires further investigation. In addition, recruits should ensure they develop their physical fitness prior to academy to enhance their ability to complete training. Future research should analyze more LEA academy classes to confirm the results of this preliminary analysis.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Effects of Resistance Training on Intraocular Pressure of Glaucoma Patients

M. Conte,1 A. Soares,2 and S. Tamura2

1 Escola Superior de Educação Física de Jundiaí; and 2 Sorocaba Ophthalmological Hospital

Intraocular pressure (IOP) is a concern for glaucoma patients; therefore, activities which increase blood pressure, such as exercise, may be of concern for this population. Previous research has largely been limited to aerobic exercise, and few studies have reported the influence of resistance training (RT) on IOP in glaucoma patients. Review articles have suggested that RT may pose a risk for glaucoma patients (advanced glaucoma or other severe subtypes such as pigment dispersion glaucoma). Purpose: The purpose of this study was to quantify the effects of an RT session on the IOP of patients with Primary Open-Angle Glaucoma (POAG). Methods: After approval by the Sorocaba Ophthalmological Hospital Ethics Committee, we studied 10 POAG patients (56.4 ± 5.3 years old) who were being treated with medication to control IOP at normalized levels. Subjects performed one RT session of 8 exercises comprising 3 sets of 8 repetitions at 80% of 1RM, with a 60-second recovery between sets. IOP was measured in the right eye with a Perkins tonometer before the first set (M1) and 5 minutes after the end of the RT session (M2). Statistical analysis included the Shapiro-Wilk normality test and a homoscedasticity test (Bartlett criterion). All variables showed normal distribution and homoscedasticity. Student's t test was used to compare mean IOP; alpha was set at p ≤ 0.05. Results: Mean IOP was significantly lower after RT than before (13.2 ± 0.9 vs. 11.0 ± 1.58 mm Hg; p = 0.0402). The magnitude of the mean IOP reduction (2.2 mm Hg, or about 16%) was similar to that reported in previous studies of subjects without glaucoma undergoing the same intensity and volume of RT. Conclusions: A single resistance training session promoted an acute reduction of IOP in POAG patients. Practical Applications: These findings suggest this modality of exercise is safe for this population and may be used for individuals with risk factors or with high IOP. Acknowledgments: Sorocaba Ophthalmological Hospital.
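The pre/post comparison above uses a paired-samples t test. A minimal sketch with illustrative IOP values (not the study's raw data) shows how the statistic is formed from within-subject differences:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for
    pre/post measurements on the same subjects."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Illustrative IOP values (mm Hg); these are not the study's data.
pre_iop  = [13.0, 14.0, 12.5, 13.5, 13.0]
post_iop = [11.0, 12.0, 10.5, 11.5, 11.5]
t, df = paired_t(pre_iop, post_iop)
print(round(t, 2), df)
```

A large negative t with these data reflects a consistent within-subject drop in IOP, the same pattern the abstract reports at the group level.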

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Effects of Body Temperature and Sympathetic Activity Upon Repeat Resting Metabolic Rate Measurements: A Pilot Study

G. Davis, M. Lavergne, D. Scott, and D. Bellar

University of Louisiana at Lafayette

Recent research has suggested that a single measurement of resting metabolic rate (RMR) via indirect calorimetry may not be reliable. Repeated measurements of RMR under identical conditions find that values tend to decrease over time; thus, repeated measures of RMR are a more accepted standard of measurement. However, the reason for the decrease in RMR over time is unclear. It is possible that minor changes in body temperature as well as changes in sympathetic activity could account for different RMR measurements. Purpose: To examine core temperature, skin temperature, and heart rate to determine if any of these factors affect repeat RMR measurements. Methods: Ten recreationally active men aged 20–29 years were recruited to participate. Participants reported to the lab following an overnight fast of 10–12 hours and sat quietly for 30 minutes. Participants did not consume any caffeine or alcohol nor engage in any moderate or high intensity physical activity for 24 hours prior to the lab visit. Heart rate was monitored continuously using a validated non-invasive blood pressure monitoring system, core temperature was monitored via an ingested sensor, and skin temperature monitors were attached at the mid-clavicular line, anterior bicep, anterior thigh, and posterior calf. RMR was determined via indirect calorimetry using a canopy system. Data were collected with the participants lying in a supine position, hips and knees flexed to 90°, and lower legs resting on an adjustable-height step. Following the RMR procedure, participants sat quietly for 30 minutes and all procedures were repeated. Results are presented as mean ± SEM. Results: The first RMR measurement was 2,023.52 ± 92.12 kcal·d−1 and the second was 2,005.24 ± 100.52 kcal·d−1. These values were not significantly different as determined by a paired t-test; t = −0.47, p = 0.33. The first and second core temperature values were 36.95 ± 0.17 and 36.84 ± 0.16° C, respectively; t = −1.23, p = 0.11.
Skin temperature measured at the mid-clavicular line did decrease significantly, from 33.18 ± 0.15 to 32.49 ± 0.26° C; t = −2.96, p < 0.01. No other skin temperature changes were significant. Heart rate decreased significantly from 62.11 ± 2.68 to 59.13 ± 2.85 b·min−1; t = −3.31, p < 0.01. Changes in core temperature, skin temperature, and heart rate were not significantly correlated to changes in RMR. Conclusions: Changes in core temperature, skin temperature, and heart rate do not appear to affect repeat RMR measurements in this population. A larger number of participants is needed to determine if changes in these parameters affect repeat RMR values. Practical Applications: When heart rate, core temperature, and skin temperature remain stable, RMR appears stable as well. Furthermore, a single measurement of RMR appears to be effective for healthy, fit young men. This could serve as a time-saving approach when collecting RMR data.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

The Relationship Between Quality of Life and Stair-Climb Performance in Career Firefighters

M. Laffan,1 G. Gerstner,1 C. Kleinberg,2 A. Tweedell,3 H. Giuliani,1 J. Mota,1 A. Trivisonno,1 and E. Ryan1

1 University of North Carolina at Chapel Hill; 2 Under Armour; and 3 Army Research Laboratory

Firefighters play a critical role in public safety, and firefighting is often recognized as a hazardous occupation. Their job demands are not only physically strenuous, but also mentally and emotionally stressful due to the spontaneous nature of the job. Stair-climbing has been shown to be one of the most demanding firefighter activities and is a critical occupational task. With rising health concerns, it is important to consider firefighter quality of life (QOL) in relation to job performance. However, it is unclear whether or not functional performance in tasks such as the stair climb is associated with firefighter QOL. Purpose: The purpose of this study was to examine the relationship between QOL and stair climb performance (SCP) in career firefighters. Methods: Seventy-two male career firefighters (mean ± SD = age: 36.13 ± 16.33 years; stature: 179.94 ± 6.58 cm; mass: 102.28 ± 5.46 kg; BMI: 31.45 ± 5.46 kg·m−2) volunteered to participate in this study. A 100 mm visual analog scale was used to assess QOL. Participants were asked to draw a straight line intersecting a 100 mm line anchored at 0 and 100, representing their perceived "worst imaginable health" and "perfect health," respectively. SCP was assessed by a timed stair ascent and subsequent descent of 26 stairs (20 cm stair height), performed 4 times as fast as possible. The participants were fitted with a weighted vest (22.73 kg) over the shoulders to simulate the load of their personal protective equipment and self-contained breathing apparatus. Performance was measured as time to completion. A Pearson's correlation coefficient was used to examine the relationship between QOL and SCP. To determine if age and/or BMI influenced this relationship, partial correlations were used while adjusting for age and BMI, separately. An alpha of p ≤ 0.05 was used to determine statistical significance.
Results: There was a significant negative correlation (r = −0.445, p < 0.001) between QOL (64.82 ± 18.89 mm) and SCP (83.56 ± 16.33 seconds). This relationship was relatively unaffected when controlling for age (r = −0.423, p ≤ 0.001); however, the relationship was weaker and no longer significant when controlling for BMI (r = −0.213, p = 0.075). Conclusions: Slower SCP was related to a poorer QOL in firefighters. Interestingly, this relationship appeared to be influenced by BMI, but not age. Practical Applications: With less than 30% of U.S. fire departments successfully implementing exercise programs, these findings may provide further justification to partner with strength and conditioning professionals. Future studies are needed to determine if training and/or nutritional strategies that improve body composition may also improve SCP and perceived QOL. Acknowledgments: Supported in part by a grant from the National Institute of Occupational Safety and Health (T42OH008673) and by a Junior Faculty Award from UNC-Chapel Hill.
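The age- and BMI-adjusted results above can be understood through the first-order partial correlation formula, r_xy·z = (r_xy − r_xz·r_yz) / √((1 − r_xz²)(1 − r_yz²)). The sketch below uses the reported QOL-SCP correlation, but the QOL-BMI and SCP-BMI correlations are hypothetical, chosen only to illustrate how adjusting for BMI can weaken the association:

```python
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation between x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# r_xy = -0.445 is the reported QOL-SCP correlation; the QOL-BMI (r_xz)
# and SCP-BMI (r_yz) values below are hypothetical illustrations.
r = partial_r(r_xy=-0.445, r_xz=-0.40, r_yz=0.55)
print(round(r, 3))
```

When both variables share variance with the covariate (here, BMI), the adjusted correlation shrinks toward zero, which is the pattern reported in the abstract.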

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Body Fat Percentage and Stair Climb Performance in Firefighters

M. Haischer,1 R. Flees,1 E. Smith,1 C. Tesch,2 and K. Ebersole3

1 University of Wisconsin-Milwaukee; 2 City of Milwaukee Fire Department; and 3 Integrative Health Care and Performance Unit, University of Wisconsin-Milwaukee

Introduction: Previous research in U.S. Army soldiers has indicated that body fat percentage is negatively associated with performance on tests of aerobic and anaerobic capacity, as well as muscular strength and endurance. Firefighters are a tactical population that also must perform physically demanding occupational tasks such as stair-climbing while concurrently carrying the added load of personal protective equipment (PPE). Previous research has suggested that a decrease in muscle quality may hinder the ability of firefighters to perform occupational tasks. However, the relative contribution of other factors such as aerobic fitness and PPE weight to performance of a stair climbing task is unknown. Purpose: To investigate the predictive capability of PPE weight, body composition, anthropometric measures, and aerobic fitness on stair climb task performance in urban firefighter recruits. Methods: Eighteen firefighter recruits (Age: 20.3 ± 0.49 years; Body Mass: 82.35 ± 10.39 kg) volunteered to participate in the study. A 3-site skinfold was performed and used with the Siri equation to determine percent body fat (BF). In addition, the Forestry submaximal step test was used to estimate V̇O2max. The stair climb (SC) task required each participant to ascend and descend 4 flights of stairs inside a burn tower as quickly as possible while donning PPE and carrying a hose pack, covering a total of 149 stairs across a total distance of 189.52 m. The total weight of the PPE and hose pack was 26.7 kg. Ratios for PPE weight-to-height (PPEHt) and PPE weight-to-height squared (PPE Index) were calculated to examine the potential contributions of PPE weight distribution across body height to stair climb performance. Time to task completion was recorded in seconds and used to represent SC performance.
Bivariate Pearson correlation coefficients were calculated to examine the relationship between SC performance and PPEHt, PPE Index, estimated V̇O2, waist-to-hip ratio (WHR), BF, fat free mass (FFM), BMI, and height (Ht). Variables that were significantly correlated to SC were then entered into a stepwise regression analysis to determine if any of the variables were predictors of SC performance. Results: The results were as follows: SC (127.56 ± 16.73 seconds), PPEHt (14.98 ± 0.45 kg·m−1), PPE Index (8.42 ± 0.50 kg·m−2), V̇O2 (44.64 ± 4.55 ml·kg−1·min−1), WHR (0.83 ± 0.04), BF (14.5 ± 4.5%), FFM (70.45 ± 8.36 kg), BMI (25.94 ± 3.64 kg·m−2), and Ht (1.78 ± 0.05 m). Factors that were significantly correlated with SC and entered into the stepwise regression included PPEHt (r = 0.542, p = 0.01), PPE Index (r = 0.549, p = 0.009), V̇O2 (r = −0.598, p = 0.004), BF (r = 0.605, p = 0.004), and Ht (r = −0.527, p = 0.012). The stepwise regression analysis determined that only BF was a significant predictor of SC time to completion (β = 0.605, t = 3.037, p = 0.008, R2 = 0.366). Conclusions: Although multiple factors were significantly correlated to SC performance, only BF remained in the stepwise regression model. However, BF as a significant predictor only accounted for 37% of the variation in SC performance in this cohort of firefighter recruits. Thus, there are likely other physical performance factors not measured in this study which contribute to SC performance. Practical Applications: The strong relationship between PPEHt and PPE Index and SC performance suggests that the distribution of the PPE load across body height may have implications for performance in functional tasks such as the SC.
In addition, the predictability of SC performance by BF suggests that training strategies designed to improve muscle quality may be important to consider when preparing future firefighters for functional-based firefighter skills in a recruit program. Future research should examine other physical factors that may be related to the SC performance, thereby improving the predictability of performance on the SC task.
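Two of the derived measures above are simple formulas: the Siri equation converts body density (estimated from the 3-site skinfold) to percent body fat, and the PPE ratios normalize external load to stature. A sketch using the abstract's PPE weight (26.7 kg) and mean height (1.78 m), but an illustrative body density value:

```python
def siri_percent_fat(body_density):
    """Siri equation: convert body density (g/cc) to percent body fat."""
    return 495.0 / body_density - 450.0

def ppe_ratios(ppe_kg, height_m):
    """PPE weight-to-height (PPEHt) and weight-to-height-squared (PPE Index)."""
    return ppe_kg / height_m, ppe_kg / height_m**2

pf = siri_percent_fat(1.070)                 # body density is illustrative
ppe_ht, ppe_index = ppe_ratios(26.7, 1.78)   # PPE weight and mean height from the abstract
print(round(pf, 1), round(ppe_ht, 2), round(ppe_index, 2))
```

Note that the computed ratios (about 15.0 kg·m−1 and 8.43 kg·m−2) closely match the cohort means reported above (14.98 and 8.42), since every recruit carried the same 26.7 kg load.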

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Effect of an Academy Training Program on the Fitness Outcomes of Law Enforcement Cadets

G. Martinez and M. Abel

University of Kentucky

Introduction: Law enforcement requires officers to perform vigorous occupational tasks that necessitate adequate levels of various physical fitness outcomes. Law enforcement training academies routinely implement an exercise training program to improve the cadets' physical fitness. Currently, there is limited research on the effectiveness of these training programs to improve a variety of fitness outcomes throughout the duration of an academy. Furthermore, there is a paucity of research evaluating female cadet responses to these training programs. Purpose: To assess the effectiveness of a Basic Law Enforcement Training program to improve physical fitness outcomes in male and female police cadets. Methods: A convenience sample of 138 male (26.8 ± 5.5 years, 178.4 ± 7.8 cm, 88.1 ± 15.7 kg) and 8 female police cadets (24.6 ± 5.9 years, 171.8 ± 7.8 cm, 72.7 ± 10.1 kg) participated in an academy's 22-week Basic Law Enforcement Training program. Data were retrospectively collected from a state's Department of Criminal Justice Training Academy (DOCJT). Specifically, demographic, anthropometric, and physical fitness data were obtained from 5 consecutive academy classes held from October 2016 through June 2017. The physical fitness tests and assessment standards used by the DOCJT were adapted from the Cooper Fitness Institute and included the 1.5 mile (1.5 mi) run time, number of sit-ups (SU) completed in one minute, number of push-ups (PU) completed in 2 minutes, one-repetition maximum bench press (BP), and 300 m run time. Repeated measures ANOVA was used to compare physical fitness outcomes between the 3 time points in the entire cohort. Paired-sample t-tests were used for post-hoc analysis. The limited number of female cadets prohibited inter-sex comparisons with parametric statistics. The change in fitness outcomes across the time points was calculated relative to the baseline value: % Difference = ((posttest value − entrance value)/entrance value) × 100.
The level of significance was set at p < 0.05 for all statistical analyses. Results: Regarding the entire cohort, there were significant improvements in all fitness outcomes throughout the Academy's training program (BP: F(2,144) = 111.3, p < 0.05, % difference: 13.8%; SU: F(2,144) = 149.2, p < 0.05). Female cadets tended to show greater relative improvements than male cadets (female vs. male) from entrance to mid-point (BP: 14.2 vs. 9.5%; PU: 48.5 vs. 34.9%; SU: 10.6 vs. 22.8%; 300 m: −6.5 vs. −4.3%; 1.5 mi: −9.5 vs. −7.6%) and from mid-point to exit assessment (BP: 4.6 vs. 3.5%; PU: 15.8 vs. 15.4%; SU: 6.2 vs. 5.9%; 300 m: −3.2 vs. −1.1%; 1.5 mi: −7.2 vs. −3.9%). Conclusions: Overall, the Academy's training program improved a variety of physical fitness outcomes, with greater improvements occurring earlier in the program. In addition, there were greater relative improvements in muscular endurance outcomes compared to maximal strength, anaerobic, and aerobic endurance outcomes. The female cadets tended to experience greater improvement in most fitness tests. Practical Applications: The results of this investigation indicate that the Academy's current training program is adequate for improving muscular endurance; however, a periodized strength and conditioning program should be implemented for all cadets, with an emphasis on the development of upper body strength and anaerobic and aerobic endurance, to optimize these fitness attributes throughout the duration of the Academy.
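The relative-change calculation defined in the Methods is straightforward arithmetic; a sketch with illustrative values (a hypothetical cadet's 300 m time) shows why faster run times yield negative percent differences:

```python
def percent_difference(entrance, posttest):
    """Relative change from academy entrance, as defined in the abstract:
    ((posttest value - entrance value) / entrance value) * 100."""
    return (posttest - entrance) / entrance * 100.0

# Illustrative: a hypothetical cadet's 300 m run improves from 62 s to 58 s.
change = percent_difference(62.0, 58.0)
print(round(change, 1))  # negative values indicate a faster (improved) time
```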

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Differences in Performance on an Occupationally Specific Physical Ability Test Are Explained by Fitness

J. Dawes,1 K. Lindsay,2 R. Lockie,3 C. Kornhauser,4 and R. Holmes4

1 University of Colorado-Colorado Springs; 2 UCCS; 3 California State University Fullerton; and 4 Colorado State Patrol

Purpose: Physical ability tests (PATs) are frequently used by law enforcement agencies to evaluate occupational readiness. These tests are designed to replicate essential occupational tasks performed by officers in the field. The purpose of this study was to determine whether significant differences in fitness exist between officers in the high, average, and low performer categories on an occupationally specific PAT. Methods: Archival data consisting of PAT and field-based fitness test scores for 275 (males, n = 256; females, n = 19) highway patrol officers were utilized for this analysis. These data were collected as part of the agency's annual fitness evaluations and included age, anthropometrics (height, weight, BF%), as well as fitness scores for the vertical jump (VJ), 1-minute push-up (PU) and sit-up (SU) tests, sit and reach test (SR), and 2.4 km run (2.4R). A principal component analysis was utilized to determine if significant differences in fitness existed between high, average, and low performers on the PAT. Results: The principal component analysis revealed that a lack of dynamic fitness (demonstrated by performance in the VJ, PU, SU, and 2.4R), in addition to BF, explained 50% of the variance in performance on the PAT, with flexibility explaining an additional 15% of this variance. Overall, the 2.4R predicted PAT performance in both sexes, whereas the SU, SR, 2.4R, and age best predicted male PAT performance. Furthermore, a cluster analysis revealed that high male performers scored significantly better in the SU, PU, SR, and 2.4R when compared with average and lower performers on the PAT. Average male performers also scored significantly better in these measures when compared to low performers. Conclusions: Dynamic fitness as well as flexibility appear to have a significant impact on performance in an occupationally specific PAT.
Practical Applications: Tactical Strength and Conditioning Facilitators should focus on developing aerobic capacity, muscular endurance, flexibility and anaerobic power to improve occupational performance among police officers.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Influence of Age on Firefighters' Occupational Performance and Exercise Training Habits

A. Saari,1 G. Renz,1 P. Davis,2 and M. Abel1

1 University of Kentucky; and 2 First Responder Institute

Introduction: Firefighting is a strenuous occupation. Greater physical fitness levels promote occupational readiness and may protect firefighters from injuries. Due to behavioral and physiological factors, physical fitness levels decrease with age, which also affects firefighters' occupational performance. It is important that firefighters maintain adequate levels of physical fitness across the career span. Currently, there is limited evidence indicating whether the decline in firefighters' occupational performance is due to changes in exercise training habits or physiological factors associated with aging. Purpose: To evaluate the influence of age on occupational performance and physical training behaviors in male structural firefighters. Methods: Sixty-two male firefighters participated in this cross-sectional study and were stratified into younger (<37 years; n = 29; Age: 31.8 ± 3.5 years) and older cohorts (≥37 years; n = 33; Age: 44.7 ± 5.3 years) based on the median age of the overall cohort. The subjects were competitors in the Scott Firefighter World Combat Challenge, in which each firefighter performed a timed simulated fire ground test (SFGT) to evaluate occupational performance and completed a survey to assess exercise habits. The SFGT included the following occupational tasks: stair climb with highrise hose pack, hose hoist, forcible entry, charged hoseline advance, and victim rescue. The SFGT was performed while wearing personal protective equipment (PPE) and using a self-contained breathing apparatus. The survey included questions regarding the firefighters' weekly training intensity (i.e., typical exercise session rating of perceived exertion) and volume (i.e., typical exercise session duration multiplied by typical weekly training frequency) for resistance training (RT) and cardiovascular training (CT) modalities. Training volume and intensity variables were multiplied to quantify a weekly training stress variable for both exercise modalities.
A Pearson product moment correlation was used to assess the relationship between the overall cohort's age and SFGT time. Independent samples t tests were used to compare SFGT time and training behavior outcomes between older and younger cohorts. The level of significance was set at p < 0.05 for all statistical analyses. Results: A significant positive correlation was identified between age and SFGT time (r = 0.28, p = 0.026). Older firefighters completed the SFGT 8.8% slower than the younger firefighters (younger: 105.5 ± 16.5 vs. older: 115.7 ± 19.6 seconds; p = 0.042). There was no significant difference between age groups for RT or CT volume (p ≥ 0.75), intensity (p ≥ 0.27), or training stress (p ≥ 0.66). Conclusions: Age was significantly correlated with firefighters' occupational performance. The occupational performance of older firefighters was lower compared to their younger counterparts. Given the similarities in self-reported training parameters between firefighter cohorts, the lower occupational performance of older firefighters may be attributed to physiological factors associated with age. Practical Applications: Older firefighters may experience some decline in occupational performance despite utilizing a relatively high training load. Although similar training loads did not produce an equal occupational performance outcome, older firefighters were still able to effectively perform strenuous occupational tasks. Tactical strength and conditioning practitioners should assist firefighters in designing appropriately periodized training programs to enhance occupational readiness across the career span. Acknowledgments: We would like to thank the Scott Firefighter Combat Challenge organization and the competitors for allowing us to collect data during the Scott Firefighter World Challenge competition. There are no sources of funding or conflicts of interest to be reported.
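The weekly training stress variable described in the Methods is the product of intensity and volume, where volume is session duration multiplied by weekly frequency. A sketch with illustrative values (hypothetical RPE, duration, and frequency; the abstract does not report individual values):

```python
def weekly_training_stress(session_rpe, session_min, sessions_per_week):
    """Weekly training stress = intensity x volume, where volume is
    typical session duration x weekly frequency (as defined in the abstract)."""
    return session_rpe * (session_min * sessions_per_week)

# Hypothetical firefighter: RPE-7 sessions of 45 minutes, 3 times per week.
stress = weekly_training_stress(7, 45, 3)
print(stress)  # 945 arbitrary units
```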

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Factors Contributing to Self-Reported Lower Extremity Pain in Probation Officers

J. Mota, G. Gerstner, M. Laffan, A. Trivisonno, K. Newman, L. Freile, H. Giuliani, and E. Ryan

University of North Carolina at Chapel Hill

The ability to manage pain is critical for maintaining high levels of overall health and work performance. Unfortunately, many law enforcement officers (LEOs) suffer from chronic and acute musculoskeletal pain of the upper and lower body. Additionally, many LEOs are reportedly overweight or obese and have a decreased ability to produce lower body power as they age. As such, it is imperative to understand risk factors potentially related to self-reported musculoskeletal pain in LEOs. Purpose: The purpose of the present investigation was to examine the influence of age, percent body fat (%fat), and vertical jump power (VJP) on self-reported lower body pain in LEOs. Methods: Data were collected from 36 probation officers (mean ± SD [range] age, 37.78 ± 9.30 [25–55] years; %fat, 34.61 ± 6.78 [19.7–48.3] %) at their workplace. Percent fat was estimated using a bio-electrical impedance device following a 4-hour fast. Lower body mean power was assessed with a linear transducer attached to the waist of each participant during 3 maximal countermovement vertical jumps. The highest of the 3 mean power outputs was used for analysis. A visual analog scale was used to determine participants' self-reported lower body pain. Specifically, participants were asked to select a number between 0–10 in increments of 0.5, with 0 being "no pain" experienced in the last 7 days and 10 being "pain as bad as it could be" in the last 7 days. A stepwise multiple regression analysis was utilized to determine the relative contributions of age, %fat, and VJP to self-reported pain. An alpha level of p ≤ 0.05 was used to determine statistical significance. Results: Results from the stepwise multiple regression procedure are detailed in Table 1. Briefly, the results indicated that both %fat and age contributed significantly to the prediction of self-reported lower body pain, whereas VJP did not contribute significantly to the model. 
Conclusions: The results of the present investigation suggest that probation officers who are older and have greater %fat may experience increased levels of lower body pain, whereas lower body dynamic performance did not appear to influence perceived pain. Practical Applications: Tactical strength and conditioning facilitators may wish to consider implementing training programs designed to decrease %fat, especially in LEOs of advanced age, in an effort to alleviate perceived pain. Additionally, practitioners should understand that although assessing VJP may be important for occupational performance, it may have little value in predicting perceived lower body pain. Acknowledgments: This project was supported by a North Carolina Occupational Safety and Health Education and Research Center Pilot Award (National Institute of Occupational Safety and Health T42OH008673).

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Arterial Stiffness in Law Enforcement Officers

J. Keeler,1 M. Abel,2 B. Fleenor,3 J. Clasey,2 and A. Stromberg2

1 Kentucky State University; 2 University of Kentucky; and 3 Ball State University

The prevalence of cardiovascular disease (CVD) among law enforcement officers (LEOs) is slightly higher than in the general population at younger ages and doubles following retirement. Arterial stiffness serves as an independent risk factor with prognostic value for future incidence of CVD. However, there is limited research on lifestyle, occupational, and demographic factors that may be associated with increased arterial stiffness in LEOs. Identifying predictors of arterial stiffness among LEOs will allow occupationally specific interventions to be developed and implemented. Purpose: The purpose of this investigation was to compare the level of arterial stiffness among LEOs vs. the general population and to identify lifestyle, occupational, and demographic predictors of arterial stiffness in LEOs. Methods: Seventy male career LEOs between the ages of 24–54 years from Kentucky and southwest Ohio participated in this study. LEOs completed a variety of questionnaires related to health/occupational history, occupational stress, and diet. LEOs' body composition (bioelectrical impedance), central and brachial blood pressures, and physical activity (triaxial accelerometry) were assessed. The dependent variable of arterial stiffness was measured by carotid-femoral pulse wave velocity (cfPWV). One sample t tests were utilized to compare the sample's cfPWV to values reported in the general population. Pearson product moment correlations and multiple linear regression were used to identify correlates of cfPWV. Significance was set at p < 0.05. Results: Compared to the general population [1], cfPWV was lower among LEOs under 30 years of age (n = 15; mean difference = −0.6 m·s−1) and higher among middle-aged LEOs; relative body fat, blood pressure, and age were retained in the final regression model (R2 = 0.56, p < 0.001). 
Conclusions: The primary findings of this investigation demonstrate that arterial stiffness may progress more rapidly in middle-aged LEOs compared to the general population, and that LEOs should focus on maintaining appropriate levels of relative body fat and blood pressure to regulate arterial stiffness and risk of CVD. Practical Applications: Collectively, these findings indicate that LEOs are at greater risk of accelerated arterial stiffness, which increases the risk of developing CVD. The practical measures of relative body fat, blood pressure, and age in the regression model along with a normative PWV table may enhance risk assessment for at-risk LEOs, when arterial stiffness measures are not available. This could lead to primary care interventions to protect LEOs rather than tertiary treatments. Acknowledgments: Funding was received from the University of Kentucky College of Education, through the Turner Thacker Research Fund Award.

Reference

1. Determinants of pulse wave velocity in healthy people and in the presence of cardiovascular risk factors: “Establishing normal and reference values.” Eur Heart J 2010;31(19).

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

The Effect of Aerobic Fitness on Psychological Stress as Measured by Heart Rate Response During Academy Training in a Custody Assistant Recruit Population

M. Moreno,1 K. Cesario,2 J. Dawes,3 R. Orr,4 M. Stierli,5 A. Bloodgood,1 J. Dulla,6 and R. Lockie2

1 California State University, Fullerton; 2 California State University Fullerton; 3 University of Colorado-Colorado Springs; 4 Bond University; 5 NSW Police; and 6 Los Angeles County Sheriff's Department

Introduction: Custody Assistants (CAs) have a job that often subjects them to high levels of psychological stress. During performance of daily job tasks, CAs may encounter high-anxiety situations and may need to make effective decisions under stressful conditions. One of the goals of academy training is to prepare CA recruits for stressful situations by subjecting them to high levels of psychological stress. Previous research has shown that aerobic fitness can potentially moderate the effects of high-anxiety and stressful situations. Given the importance of decision making and stress tolerance in this population, research is needed to determine the physiological response to situations of high stress. Purpose: To determine the effect of aerobic fitness on the physiological response of CAs to a high-stress situation on the first day of academy training using heart rate (HR) data. Methods: Retrospective analysis was performed on data from one CA class of 26 recruits (15 males, 11 females). The session was designed to elicit an elevated stress response via verbal commands from training staff, with limited physical activity. HR data were gathered using HR monitors and categorized (relative to age-predicted maximum HR; HRmax) according to American College of Sports Medicine (ACSM) guidelines (very light: <57% HRmax; light: 57–63% HRmax; moderate: 64–76% HRmax; vigorous: 77–95% HRmax; very vigorous: >95% HRmax). Recruits were grouped into fitness ability levels based on their estimated maximal aerobic capacity from a 2.4-km run relative to ACSM general population age norms (Superior, Excellent, Good, Fair, Poor, Very Poor). The Superior and Excellent categories were collapsed into High Fit (HF; n = 4); Good and Fair were combined into Moderate Fit (MF; n = 8); and Poor and Very Poor were considered Low Fit (LF; n = 14). A one-way ANOVA (p < 0.05) was used to assess differences in time spent in the various HR zones between the 3 fitness groups. 
Results: The total time for the session was 75 minutes. There were no significant between-group differences in the time spent in the different HR zones or the percentage of total time spent in the different zones. Collectively, the 3 groups spent the largest percentage of total training time in the vigorous zone (HF = ∼61.37%; MF = ∼58.81%; LF = ∼50.99%). This equated to 45, 44, and 33 minutes spent at a vigorous intensity for the HF, MF, and LF groups, respectively. Conclusions: These data suggest that a psychological stress session provided an intensity similar to a vigorous aerobic training session (as defined by ACSM). Contrary to previous research, aerobic fitness did not appear to significantly attenuate the physiological response to stress in this CA class. One potential reason is that 14 of 26 recruits (53%) were classified as having poor or very poor aerobic fitness. Individual recruits are seldom the sole recipients of consequences for mistakes made within the group, which may have meant that errors by less-fit recruits impacted the HR response of HF recruits. Practical Applications: Law enforcement agencies should be aware of the aerobic fitness of their CA recruits, which was generally poor in this class. Further research is needed with a larger sample, as the current class was relatively homogeneous in its physical ability. In addition, more research is necessary to analyze specific decisions made under stress in relation to physical fitness.
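The zone categorization described in the Methods can be sketched as below. The 220 − age HRmax estimate is a common convention and an assumption here, as the abstract does not state which age-predicted equation was used; the example heart rates are hypothetical.

```python
def age_predicted_hrmax(age):
    # Common 220 - age estimate (assumed; the abstract does not name a formula)
    return 220 - age

def acsm_zone(hr, hrmax):
    """Classify a heart rate into the ACSM intensity zones listed above."""
    pct = 100 * hr / hrmax
    if pct < 57:
        return "very light"
    if pct <= 63:
        return "light"
    if pct <= 76:
        return "moderate"
    if pct <= 95:
        return "vigorous"
    return "very vigorous"

hrmax = age_predicted_hrmax(25)   # 195 b/min for a 25-year-old recruit
print(acsm_zone(160, hrmax))      # vigorous (~82% HRmax)
```

Time in each zone per recruit would then be tallied from the monitor's HR samples before the between-group ANOVA.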

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Training Stress on Lower Body Mobility, Joint Symmetry and Anaerobic Power in Division I Female Soccer Players Across the Competitive Season

J. Giles, T. Purdom, K. Levers, C. McPherson, J. Howard, L. Brown, and P. Martin

Longwood University

Purpose: The purpose of this study was to observe the effect of training stress on mobility, stability, joint symmetry, and anaerobic power throughout the competitive season in Division I female soccer players. Methods: Fifteen Division I female soccer players (mean ± SD: 20.0 ± 1.0 years, 60.3 ± 5.2 kg, 166.1 ± 5.2 cm, 19.0 ± 3.0 %BF) were tested for mobility and stability imbalances, joint symmetry, and vertical power across the competitive season; specifically, prior to the competitive season (PRE), at mid-season (MID), and post-season (POST). Inclusion criteria required subjects to have competed a minimum of 700 in-game minutes throughout the competitive season. Maximal vertical jump (MVJ) height was measured using a countermovement style jump and a Vertec, and the Harman formula was used to convert changes in jump height to power. Participants' leg lengths (LLSYM) were measured from the anterior superior iliac spine to the medial malleolus prior to completion of a Y-Balance Test (YBT). The YBT measures the anterior (ANT), posteromedial (PTM), and posterolateral (PTL) reach achieved by the reaching, non-stance foot while in single-leg stance. Statistical analysis included normalized composite scores (NCOMP), calculated for both the left and right limbs as the sum of all 3 reach directions divided by 3 times the limb length {[(ANT + PTL + PTM)/(limb length × 3)] × 100}, as well as right and left normalized ANT reach scores (NANT) [(ANT/limb length) × 100] for all 3 testing blocks. A 3 × 4 ANOVA was used to analyze raw differences between NCOMP, NANT, LLSYM, and MVJ across the 3 testing blocks. The LSD post hoc test was used to evaluate pairwise comparisons when statistical significance was achieved. Results: A significant main effect was observed (F1,14 = 62.92, p < 0.001) for NCOMP difference by testing block. Pairwise comparisons revealed a significant increase in the NCOMP left-right difference between MID and POST (mean ± SD: MID: 2.7 ± 1.9%, POST: 4.6 ± 3.4%; p = 0.050). 
At mid-season (MID), two-thirds of the subject pool had total NCOMP scores below the YBT reach standard (≤94% of 3× limb length), indicating a significant risk of injury within the population. Significant differences were not observed in NANT despite a 19.3% relative increase in the left-right difference across the competitive season (PRE-POST). No significant differences in LLSYM or MVJ were observed throughout the competitive season. Conclusions: The increase in the left-right NCOMP difference from MID to POST season indicates a limitation in stability, mobility, and muscular activation with increased training stress. The lack of NANT and LLSYM differences suggests that hip and ankle joint symmetry were maintained from MID to POST and that the NCOMP differences were primarily attributable to PTL and PTM discrepancies. Interestingly, MVJ did not change throughout the competitive season despite compromised stability and mobility (NCOMP). Subjects could be at heightened risk of injury due to their ability to maintain maximal power despite their movement-related asymmetries. Therefore, we conclude that implementing the YBT to monitor left-right symmetry, along with lower body maximal power testing, across the competitive season is necessary to limit injury risk. Practical Applications: It is recommended to periodically record YBT and LLSYM measures throughout an athlete's annual training cycle to assess the impact of training stress on movement quality in order to minimize injury risk and improve performance. Acknowledgments: Special thanks to Longwood Athletics for helping us complete this study.
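The YBT normalization described in the Methods can be computed as follows; the reach and limb-length values are hypothetical examples, not study data.

```python
def ybt_ncomp(ant, ptm, ptl, limb_length_cm):
    """Normalized composite score: sum of the 3 reaches / (3 x limb length) x 100."""
    return (ant + ptm + ptl) / (3 * limb_length_cm) * 100

def ybt_nant(ant, limb_length_cm):
    """Normalized anterior reach score: ANT / limb length x 100."""
    return ant / limb_length_cm * 100

# Hypothetical left-limb reaches (cm) on a 90-cm limb
ncomp = ybt_ncomp(ant=65, ptm=100, ptl=95, limb_length_cm=90)
print(round(ncomp, 1))  # 96.3 -> meets the 94% reach standard noted above
```

The left-right NCOMP difference tracked across PRE, MID, and POST is simply the absolute difference between the two limbs' composite scores.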

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

An Investigation of Different Warm-Up Methods in Female Collegiate Volleyball Players

B. Church and L. Cline

Arkansas State University

Purpose: Volleyball is a sport requiring maximal jumping performance for success. It is well established that an adequate warm-up is an essential component of preperformance activity. Self-myofascial release and massage are techniques that have been used primarily as forms of recovery from exercise but have not been investigated adequately as forms of preperformance warm-up. The purpose of this research was to investigate the effects of different warm-up techniques on jumping performance in female volleyball players. Methods: Six NCAA Division I volleyball athletes (middle and outside hitters) volunteered to participate in this investigation (height: 185.4 ± 6.2 cm, body mass: 82.5 ± 11.2 kg). The participants engaged in 3 different types of warm-up activities prior to practice: (a) a standardized dynamic warm-up, or control (CON); (b) a dynamic warm-up plus self-myofascial release (SMR), commonly known as foam rolling, of the legs; and (c) a standardized dynamic warm-up plus massage (MASS) of the legs by a licensed massage therapist. Following each warm-up, the participants performed 3 maximum vertical jumps on a force plate. Maximum vertical jump height (VJH) and average jumping power (VJP) were recorded. In addition, the participants performed 3 trials of a modified sit-and-reach (SR) protocol to measure lower back and hamstring flexibility. The participants then engaged in their normally scheduled volleyball practice. During practice, the participants wore a Vert Belt device, which recorded maximum and average jumping height. These values were recorded for approximately 50 jumps each day of testing. Following the 50 jumps, the participants again performed 3 maximum vertical jumps on the force plate. A repeated measures analysis of variance was conducted to determine whether there were significant differences in prepractice jumping performance, sit-and-reach, in-practice average and maximum vertical jump, and postpractice jumping performance between the 3 conditions. 
Results: There were no significant differences in VJH (p = 0.459), VJP (p = 0.805), or SR (p = 0.609) during prepractice testing. In addition, there were no significant differences in in-practice maximum (p = 0.334) or average (p = 0.319) jumping performance for the 50 jumps. Lastly, there was no significant difference in postpractice VJH (p = 0.302) or VJP (p = 0.511). See Table 1. Conclusions: These results showed no effect of warm-up type on prepractice, in-practice, or postpractice jumping ability. SMR and MASS appear to be viable methods for preperformance warm-up, although they were no more effective in improving jumping performance than a dynamic warm-up alone. Practical Applications: In this investigation, SMR and MASS neither diminished nor enhanced jumping performance compared to the control. Knowledge of effective warm-up techniques may help professionals design adequate warm-up routines while minimizing the time required for warm-up.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Passive Static Stretching Alters the Characteristics of the Force-Velocity Curvature

J. Xu

Louisiana State University

Introduction: A. V. Hill in 1938 described the relationship between force and velocity as part of a rectangular hyperbola: (P + a) × (V + b) = C. Studies have shown that a more curved hyperbola has a smaller value of b/Vmax (i.e., a lower maximal power output per % of maximal force), whereas a less curved hyperbola has a larger value of b/Vmax and lies farther from the origin of the coordinate system. Thus, faster muscles have a greater b value than slower muscles. Static stretching has been shown to negatively affect muscle performance. However, no studies to date have investigated stretching's effects on the force-velocity curves of different muscle types, namely its impact on the hyperbola's constant b. Purpose: The purpose was twofold: (a) to distinguish faster and slower muscle groups among the subjects by calculating the b value of Hill's equation; and (b) to investigate the effects of static stretching on each group. Methods: Sixty-five physically active college students (23 males, 42 females; age: 21.4 ± 2.0 years; height: 168.5 ± 10.2 cm; mass: 68.9 ± 10.4 kg) participated in the study at the biomechanics laboratory of Louisiana State University. Students came to the laboratory 3 times, for one familiarization trial and 2 testing trials. In the testing trials, each student was tested for isokinetic leg extension peak torque at 5 speeds (0.52, 1.57, 3.14, 4.19, and 5.24 rad·s−1) following either a stretch or no-stretch intervention, randomly assigned across day 1 or day 2. The stretching intervention, consisting of 3 sets of four 30-second repetitions with 15 seconds of rest for 2 different assisted static quadriceps stretches of the subject's right leg, was applied before the peak torque test. The no-stretch intervention, also applied before the peak torque test, was quiet sitting for 12 minutes. 
Using the equations published by Wohlfart and Edman (Exp Physiol 1994;79(2):235–9), the constant b of Hill's hyperbolic equation was calculated for both the stretched and non-stretched conditions for each subject. To obtain groups of individuals whose b was lesser or greater than the norm, a Z score was computed for each no-stretch b value. A Z score more than one standard deviation above or below the mean was used to assign subjects to the lesser-curvature (faster) or greater-curvature (slower) group, respectively. To determine the effects of stretching, a paired t test was performed separately for each of the 2 groups. Significance was set at p ≤ 0.05. Results: There were 8 subjects in the greater-curvature (slower) group and 6 subjects in the lesser-curvature (faster) group. Static stretching of the quadriceps significantly affected both groups: negatively for the faster group (p = 0.00051) and positively for the slower group (p = 0.00612). Conclusions: These data suggest that static stretching negatively affects faster muscles, moving their force-velocity curve toward the origin of the coordinate system and producing less power and force. Conversely, it pulls the curve of slower muscles away from the origin, producing more power and force. Practical Applications: Static stretching should be applied selectively. Caution should be applied to static stretching in velocity- and power-dominant sports. On the other hand, static stretching could be applied to slower-muscle populations to improve their power performance.
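The grouping logic described above can be sketched as follows, assuming a b value has already been fitted per subject (the actual fitting followed Wohlfart and Edman); the b values below are synthetic illustrations, not study data.

```python
import statistics

def hill_force(v, a, b, c):
    """Hill (1938) hyperbola, (P + a)(V + b) = C, solved for force P at velocity V."""
    return c / (v + b) - a

def curvature_groups(b_values, z_cut=1.0):
    """Split subjects into lesser-curvature (faster, z > +1 SD) and
    greater-curvature (slower, z < -1 SD) groups by b-value z score."""
    mean = statistics.mean(b_values)
    sd = statistics.stdev(b_values)
    faster = [b for b in b_values if (b - mean) / sd > z_cut]
    slower = [b for b in b_values if (b - mean) / sd < -z_cut]
    return faster, slower

# Synthetic b values: most near 1.0, one fast outlier (2.0), one slow (0.2)
faster, slower = curvature_groups([0.9, 1.0, 1.1, 1.0, 2.0, 0.2])
print(faster, slower)  # [2.0] [0.2]
```

Subjects whose z score falls within ±1 SD belong to neither group, which is why only 14 of the 65 subjects entered the two comparison groups.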

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Acute Effects of Neural Gliding on Hamstring Flexibility and Athletic Performance in College Basketball Players

K. Smith and A. Waldhelm

University of South Alabama

Introduction: The pre-activity warm-up is a very important aspect of preparation for competitive sports. Prior research has demonstrated the ability of a warm-up to influence flexibility, strength, and power. Neural gliding is a fairly new intervention used in rehabilitation to improve muscle flexibility and joint mobility in individuals with musculoskeletal and neuromuscular injuries, but evidence on the use of neural gliding in sports performance is limited. Purpose: The objective of this study was to compare the effects of neural gliding and dynamic stretching exercises on hamstring flexibility and athletic performance in collegiate basketball players. Methods: Eighteen NCAA Division II basketball players (8 males, 10 females; age: 18.1 ± 0.24 years; height: 1.78 ± 0.09 m; weight: 60.0 ± 17.2 kg) volunteered for the study. Data were collected during a single session, and block assignment was used with 9 individuals (4 males, 5 females) in each group: neural gliding and dynamic stretching. Before testing, each subject performed the same 5-minute warm-up, which included jogging, running, and sprinting. Pre- and post-intervention testing included bilateral hamstring flexibility using the active straight leg test, 10- and 40-yard dashes, countermovement vertical jump, and 20-yard shuttle run. Between tests, the participants performed a 5-minute exercise protocol consisting of either bilateral sciatic nerve gliding or a dynamic lower extremity stretching program. Multiple 2 × 2 (time by group) repeated measures ANOVAs with p ≤ 0.05 were used to examine differences. Results: The results did not show significant time by group interactions for any of the 6 measurements, and all main effects were nonsignificant except for the countermovement vertical jump (F = 15.0, p = 0.005). Post-hoc paired t tests with Bonferroni correction (p ≤ 0.025) did not show a significant difference in countermovement vertical jump performance in either the dynamic stretching (p = 0.250) or the neural gliding group (p = 0.107). 
Conclusions: The results demonstrate that neither neural gliding nor dynamic stretching exercises had a significant effect on hamstring flexibility or the 4 athletic performance tests. Practical Applications: Neural gliding can be used as part of the pre-participation warm-up without a negative effect on athletic performance, but more research is needed to determine whether neural gliding should be part of a warm-up.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Examination of Stretching Practices and Perceptions of Division I and Division III College Soccer Programs in the United States

L. Judge,1 D. Bellar,2 N. Nordmann,1 J. Avedesian,3 C. Dickin,1 J. Langley,4 D. Hoover,5 and B. Craig1

1 Ball State University; 2 University of Louisiana at Lafayette; 3 University of Nevada, Las Vegas; 4 University of Southern Indiana; and 5 Western Michigan University

Current research-based pre-activity stretching guidelines have evolved to optimize performance. This research seeks to add to the available knowledge of warm-up and cool-down practices by examining the impact of pre- and post-activity stretching practices on performance in collegiate soccer programs. Purpose: The aim of this study was to determine whether NCAA Division I and Division III soccer coaches' perceptions and practices of pre- and post-activity stretching are aligned with current scientific recommendations. Methods: A total of 276 questionnaires were distributed via email to collegiate soccer coaches from NCAA Division I and III universities. The questionnaire was designed to gather demographic, professional, and educational information, as well as the specific pre- and post-activity stretching practices used by coaches. The responses were examined by computing frequency counts, percentages, and means where applicable. Statistical analysis was performed using Pearson's Chi-square tests to assess potential differences. Results: Respondents represented coaches from 73 conferences of D1 (41.7%) and D3 (58.3%) soccer programs. Of the 209 respondents, the majority were older than 36 years of age (68.9%) and predominantly male (76.5%). The largest share of coaches (48.8%) had 2–12 years of relevant experience. The Chi-square analysis failed to reveal significant differences in stretching warm-up practices across 5 categories: years of coaching experience (χ2 = 24.520, p = 0.432), D1 vs. D3 (χ2 = 6.034, p = 0.419), certification level (χ2 = 19.568, p = 0.721), coaches' age (χ2 = 24.850, p = 0.414), and coaches' sex (χ2 = 9.754, p = 0.135). Of the 209 respondents, 84.9% responded above the midpoint of a 7-point Likert scale on the importance of stretching prior to activity. Coaches typically prescribed dynamic stretching activities (62.7%) or a combination of static and ballistic stretching activities (16.5%) prior to athletic practices and events. 
In examining sources of information, 31.2% of coaches received their pre-activity stretching knowledge from strength and conditioning coaches, 18.2% from another soccer coach, 20.5% from coaching education programs, and 29.9% from other sources. In addition, 85.2% of coaches reported holding one or more USSF, UEFA, or NSCAA certifications, with only 31 respondents (14.8%) reporting no certification. Conclusions: When comparing coaching demographic information and pre-activity stretching practices to current guidelines, the majority of soccer coaches adhere to research-based recommendations. Practical Applications: This study indicates that it is important for soccer coaches to evaluate their own practices against current scientific recommendations and ongoing research, perhaps cross-checking them with the practices of their peers.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Examining the Acute and Residual Effects of a Static Hip Flexor Stretch on Vertical Jump Performance

D. Kivi, K. Migliazza, C. Zerpa, and P. Sanzo

Lakehead University

The vertical jump is frequently used as a test to assess lower extremity power, and vertical jump height is a factor that often determines success in many sports. One method of increasing vertical jump performance that has recently gained attention is the completion of static stretching of the hip flexor muscles immediately before jumping, although the research in this area is limited. In addition, there has been no research examining whether this type of stretching has any residual effect on vertical jump performance. Purpose: The purpose of this study was to examine the acute and residual effects of performing a static stretch of the hip flexor muscles on vertical jump height during a countermovement jump. Methods: Twenty male participants (age = 21.0 ± 1.8 years; height = 1.80 ± 0.10 m; mass = 80.9 ± 6.2 kg) were recruited for this study. All participants were experienced in performing the vertical jump, having played competitively in sports involving jumping. Participants signed an informed consent form prior to participating in the study, which was approved by the Institutional Research Ethics Board. After completing a dynamic warm-up, participants completed 3 maximal effort countermovement jumps on an AMTI force platform, using a Vertec device to provide a visual target. A static stretch of the hip flexors was then performed, consisting of 3 repetitions of a lunge stretch held for 30 seconds per hip. This stretching was immediately followed by 3 additional maximal effort countermovement jumps. A 5-minute active rest period was then provided, after which a final set of 3 maximal effort countermovement jumps was completed. One minute of rest was provided between trials within each set of jumps, and the best jump height in each set, as determined from the ground reaction force data, was included in the analysis. 
Descriptive statistics (mean and SD) were calculated for all participants under each jump condition (pre-stretch, post-stretch, post-rest). A repeated measures ANOVA was used to compare mean maximal vertical jump height across the 3 jump conditions. Where significant differences were found, a Bonferroni post-hoc analysis was performed to determine differences between pairs of jump conditions. The alpha level was set at p ≤ 0.05. Results: Mean maximal jump heights were as follows (mean ± SD): pre-stretch = 37.6 ± 5.7 cm; post-stretch = 38.8 ± 6.0 cm; post-rest = 38.5 ± 6.2 cm. A significant difference was seen in vertical jump height across the 3 jump conditions (p < 0.001). Post-hoc analysis revealed significantly greater jump heights in the post-stretch (p = 0.001) and post-rest (p = 0.01) conditions compared to the pre-stretch condition, with no significant difference between the post-stretch and post-rest conditions. Conclusions: Performing a static stretch of the hip flexor muscles immediately before jumping resulted in a 2.9% increase in vertical jump height, with a 2.2% increase still evident after a 5-minute active rest period. Practical Applications: The results of this study suggest that performing a static stretch of the hip flexor muscles can help experienced jumping athletes improve their vertical jump performance. A static hip flexor stretch included at the end of a dynamic warm-up, before testing or competition, is recommended.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Back to Top | Article Outline

Self-Myofascial Release vs. Static Stretching: The Effects on Hamstring Range of Motion

K. Madden and B. Church

Arkansas State University, Human Performance Laboratory, Jonesboro, AR

Introduction: Physically active individuals seek the best warm-up techniques prior to activity in order to prepare their muscles for subsequent work. Static stretching (SS), or holding limbs in extended positions in order to lengthen a muscle, is a common form of pre-exercise warm-up. Self-myofascial release (SMR), also known as foam rolling, has more recently become popular as a warm-up method. In SMR, individuals position the muscle they intend to warm up on a rigid foam cylinder while rolling back and forth; the rolling pressure is purported to relax muscles. Purpose: The purpose of this research was to compare the effects of self-myofascial release (SMR) and static stretching (SS) on hamstring range of motion. Methods: Twenty recreationally active men (age: 21.4 ± 1.2 years; height: 178.9 ± 10.7 cm; mass: 81.2 ± 18.7 kg) volunteered to serve as participants by attending 2 testing sessions in which hamstring range of motion (ROM) of the dominant leg was measured using a modified sit-and-reach box before and after either SS or SMR. Prior to the SS and SMR sessions, participants completed a standardized warm-up on a treadmill at a speed eliciting a jog, or just faster than walking pace, for at least 6 minutes. In the SMR procedure, participants crossed their nondominant leg over their dominant leg, which was placed on the roller. Rolling was performed from the ischial tuberosity to the popliteal fossa for 6 sets of 20 seconds with 10 seconds of rest between sets, for a total of 2 minutes of rolling. On a separate day, participants performed a seated static stretch with the dominant leg extended and the nondominant leg bent, in what is commonly known as an inverted hurdle stretch. Both SS and SMR were followed by a post-treatment ROM measure, and the order of treatments was counterbalanced. Warm-up type (SS vs. SMR) and time interval (pre vs. post) data were analyzed using a mixed design repeated measures ANOVA. 
Results: The results indicated an overall effect of warm-up on hamstring ROM (p = 0.014) but no interaction for type of warm-up (p = 0.219) (SSPRE = 40.6 ± 4.7 cm; SSPOST = 42.6 ± 4.9 cm; SMRPRE = 40.1 ± 5.0 cm; SMRPOST = 41.1 ± 4.6 cm). Conclusions: These results demonstrated that a warm-up was effective in increasing ROM, although the type of warm-up did not produce significantly different results. Practical Applications: Active people who engage in pre-exercise warm-up can expect SMR to be as effective as SS for increasing ROM of the hamstring muscles. Acknowledgments: This research was funded by a Student Undergraduate Research Fellowship.
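For a fully within-subjects 2 × 2 design like this one (every participant performed both warm-up types), the time main effect and the warm-up × time interaction can be sketched with paired tests on change scores, which is equivalent for a 2 × 2 within design. A minimal illustration; all ROM values below are invented for demonstration, not the study's data:

```python
from scipy.stats import ttest_rel

# hypothetical pre/post ROM (cm) per participant under each warm-up type
ss_pre = [40.1, 41.3, 39.8, 42.0, 40.5]
ss_post = [42.0, 43.1, 41.5, 44.2, 42.8]
smr_pre = [39.9, 40.8, 40.2, 41.5, 40.0]
smr_post = [41.0, 41.7, 41.1, 42.6, 41.2]

# main effect of time: average of the two pre measures vs. the two post measures
pre_mean = [(a + b) / 2 for a, b in zip(ss_pre, smr_pre)]
post_mean = [(a + b) / 2 for a, b in zip(ss_post, smr_post)]
t_time, p_time = ttest_rel(pre_mean, post_mean)

# warm-up x time interaction: compare change scores between the two conditions
ss_change = [b - a for a, b in zip(ss_pre, ss_post)]
smr_change = [b - a for a, b in zip(smr_pre, smr_post)]
t_int, p_int = ttest_rel(ss_change, smr_change)
```

A full repeated measures ANOVA would report the same two effects; the change-score form simply makes the interaction explicit.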

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Influence of Movement Quality on Anaerobic Performance Throughout the Competitive Season in Division I Female Soccer Players

P. Martin, K. Levers, T. Purdom, L. Brown, J. Giles, C. McPherson, and J. Howard

Longwood University

It is widely known that soccer performance is greatly influenced by the production of anaerobic power. However, how movement quality affects anaerobic performance over time is unclear. Purpose: The purpose of this study was to determine whether changes in movement quality during the competitive season influence anaerobic performance in female soccer athletes. Methods: Twenty-three Division I female soccer athletes (Mean ± SD: 19.1 ± 1.2 years; 61.13 ± 4.81 kg; 164.57 ± 4.67 cm; 19.50 ± 3.75 %BF; 49.11 ± 3.10 kg FFM) completed the functional movement overhead deep squat assessment (ODSA) and anaerobic performance testing prior to (PRE) and mid-competitive season (MID) (45-day duration). Anaerobic performance testing included vertical power (VPWR), using countermovement vertical jump (CMJ) height converted to power via the Harman formula; the agility T-test (TTEST); and the running anaerobic sprint test (RAST) to capture fatigue index (RASTfindx) and peak (RASTppwr) and average (RASTapwr) anaerobic power. Performance measures were defined by completion time (seconds) and absolute power (W). Movement quality was evaluated using 7 standardized evaluation parameters formulated from commonalities between the Landing Error Scoring System (LESS) appraisal characteristics and Functional Movement Screen (FMS) ODSA scoring guidelines. Participants arrived for testing having fasted for a minimum of 4 hours, abstained from caffeine for 12 hours, and refrained from exercise and alcohol for 24 hours. A 2 × 8 repeated measures ANOVA was used to assess movement quality (ODSA characteristics) over the competitive season. A separate 2 × 5 repeated measures ANOVA was used to assess changes in performance variables over the competitive season. Post hoc analysis using the LSD test was used to evaluate pairwise comparisons when significant interactions were observed.
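The jump-height-to-power conversion referred to here is presumably the Harman et al. peak power estimate; a sketch under that assumption, with the coefficients as commonly reported in the strength and conditioning literature:

```python
def harman_peak_power(jump_height_cm, body_mass_kg):
    """Estimated peak power (W) from CMJ height and body mass,
    per the commonly cited Harman et al. equation (assumed here)."""
    return 61.9 * jump_height_cm + 36.0 * body_mass_kg + 1822.0

# e.g., an athlete at the cohort's mean mass (61.13 kg) with a 40 cm jump
power_w = harman_peak_power(40.0, 61.13)
```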
Results: Statistical analysis revealed no overall main time effect within the 2 × 8 repeated measures ANOVA model (p = 0.321, observed power = 0.689), but the 2 × 5 repeated measures ANOVA model revealed a significant main time effect (p = 0.03, observed power = 0.976). Pairwise comparisons revealed that from PRE to MID, the prevalence of ODSA excessive spinal flexion significantly decreased by 68.64% (PRE 0.48; MID 0.17, p = 0.016) and the overall ODSA ordinal FMS score significantly increased by 17.50% (PRE 1.74; MID 2.04, p = 0.016). No other significant changes in the 7 ODSA evaluation parameters were observed over the competitive season. From PRE to MID, linear power and change of direction (COD) performance significantly declined (ΔRASTppwr −7.28%, p = 0.003; ΔTTEST 2.75%, p = 0.007), while anaerobic capacity improved (ΔRASTfindx −16.74%, p = 0.009). No changes in VPWR or RASTapwr were observed. Conclusions: Improvements in the ODSA ordinal score and the reduced incidence of excessive spinal flexion suggest improved movement quality from PRE to MID. Despite these movement quality improvements, linear power and COD ability decreased, vertical power was maintained, and anaerobic capacity improved. The lower RASTppwr with no change in RASTapwr explains the improved anaerobic capacity (RASTfindx). In this population, monitoring movement quality via the ODSA ordinal score and standardized evaluation parameters linked to LESS and FMS appraisal characteristics does not align with changes in anaerobic performance. Practical Applications: The ODSA and associated evaluation parameters alone do not determine changes in anaerobic performance in Division I female soccer athletes. A more comprehensive assessment of functional movement in this population is likely required to use movement quality as a predictor of anaerobic performance during the competitive season.
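The RAST outputs cited above are conventionally derived from six 35 m sprint times; a sketch of the standard calculations (the sprint times below are hypothetical, not the study's data):

```python
def rast_metrics(mass_kg, sprint_times_s, distance_m=35.0):
    """Standard RAST outputs: power per sprint = mass * distance^2 / time^3;
    fatigue index = (peak power - minimum power) / total sprint time."""
    powers = [mass_kg * distance_m ** 2 / t ** 3 for t in sprint_times_s]
    peak = max(powers)
    average = sum(powers) / len(powers)
    fatigue_index = (peak - min(powers)) / sum(sprint_times_s)
    return peak, average, fatigue_index

# hypothetical 6 x 35 m sprint times for a 61 kg athlete
peak_w, avg_w, findx = rast_metrics(61.0, [5.0, 5.2, 5.4, 5.6, 5.8, 6.0])
```

A lower fatigue index across the season (as reported for RASTfindx) indicates a smaller power drop-off per second of sprinting, i.e., improved anaerobic capacity.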
Acknowledgments: Special thanks to Longwood Athletics and my faculty mentors for their help in this study.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Self-Administered Unilateral Foam Rolling Exercise Improves Contralateral Hamstring Flexibility

X. Ye,1 B. Killen,2 and J. Carr3

1 University of Mississippi; 2 University of Mississippi Medical Center; and 3 University of Oklahoma

Self-administered foam rolling (SAFR) is an effective exercise technique often used in sports and rehabilitation fields for the purpose of myofascial release. However, its effects on the non-intervened contralateral limb's performance (flexibility, strength, etc.) are not well understood. Purpose: To examine the potential crossover effects of unilateral hamstring SAFR on contralateral hip flexion passive ROM and strength performance. Methods: Thirteen men (mean ± SD; age = 26 ± 3 years; height = 176.9 ± 6.6 cm; body weight = 84.2 ± 12.5 kg) and 10 women (age = 27 ± 2 years; height = 164.1 ± 3.5 cm; body weight = 59.3 ± 11.4 kg) participated in this investigation, which consisted of a familiarization visit and an experimental visit. At least 24 hours after familiarization, the subjects returned to the laboratory for the experimental visit, during which they performed 10 sets of 30-second SAFR on their dominant hamstring muscles, with 30 seconds of rest between sets. Specifically, the cadence for the foam rolling exercise was set at one second up (roll to the ischial tuberosity) and one second down (roll to the popliteal fossa). Before (Pre-) and immediately after (Post-) the SAFR intervention, contralateral hip flexion passive ROM and the isometric strength of the contralateral knee flexors, along with the surface electromyography (EMG) amplitude of the biceps femoris and semitendinosus muscles, were measured. Separate 2-way mixed factorial (time [Pre vs. Post] × sex [Men vs. Women]) analyses of variance (ANOVAs) were used to examine potential changes in the dependent variables. Results: There were no 2-way time × sex interactions for any of the dependent variables. However, there was a main effect of time for contralateral hip flexion passive ROM (marginal mean ± SE: Pre = 66.9 ± 4.0° vs. Post = 71.1 ± 4.4°, p < 0.001).
Conclusions: The SAFR intervention improved contralateral hip flexion passive ROM but did not induce any changes in the strength performance or EMG activity of the muscle group of interest. Practical Applications: The current findings may provide beneficial information for individuals in the early stages of hamstring/knee injury rehabilitation. Because the injured limb may still be immobilized, and/or patients may not be able to tolerate ROM interventions, applying SAFR to the contralateral healthy limb may serve as an alternative means of facilitating post-injury rehabilitation.

Thursday, July 12, 2018, 2:00 PM–3:30 PM

Effectiveness of Four Common Methods of Prescribing Intensity on Maximal Strength Development: A Systematic Review

S. Thompson, D. Rogerson, A. Barnes, and A. Ruddock

Sheffield Hallam University

Purpose: Several strategies are available to strength and conditioning coaches to aid the development of maximal strength. The 4 most utilised methods in practice are percentage of 1 repetition maximum (%1RM), repetitions to fatigue against an absolute load (RTF), rating of perceived exertion (RPE) and velocity-based assessments. However, it is unclear which method aids the greatest improvement in maximal strength. The aim of this study was to examine the effects of these training-intensity prescriptions on maximal strength. Methods: A systematic search was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Four electronic databases (SPORTDiscus, Scopus, MEDLINE and CINAHL Complete) were searched using medical subject headings, indexing terms, keywords, titles and abstracts. The search strategy was limited to the prescriptive methods most common in practice (%1RM, RPE, velocity-based assessments and RTF). Studies were included if participants were healthy; participants were between the ages of 18–40; a training intervention ≥4 weeks was used; strength was assessed using 1RM; the training programme was reported in full; and the comparator was a control group that did no resistance training. All studies analysed within- and between-group differences using null hypothesis-based significance testing (p < 0.05). Percentage changes (SD) were calculated for training and control groups along with 95% confidence intervals. Results: Thirty-two studies with a total of 1,003 participants (801 male and 202 female) met the inclusion criteria. All studies prescribed intensity using %1RM; no studies using the other prescriptive methods (RPE, velocity-based assessments, RTF) met the inclusion criteria. Fifteen studies prescribed intensity using true percentage-based methods (e.g., 90% of 1RM), whilst 17 used repetition maximum targets (e.g., 3RM) based on an initial 1RM test.
All 32 studies demonstrated statistically significant improvements in maximal strength in the training group from pre- to post-intervention and compared to the control group. Across all studies, maximal strength improved by 23.2% (±16.4%) (95% CI: 21.78–24.62%) and 2.92% (±4.49%) (95% CI: 2.46–3.37%) for the training and control groups, respectively. Conclusions: Prescribing intensity using %1RM is an effective method for developing maximal strength. The effectiveness of the other 3 methods could not be assessed due to the lack of appropriate, well-controlled research. Future research is needed to investigate the efficacy of RPE, velocity-based assessment and RTF methods in developing maximal strength, and should include more robust methodological approaches utilising control groups, progressive and accurate prescriptions, and a wide range of exercises and periodised approaches to ensure sufficient quality. Practical Applications: Due to a lack of sound methodological approaches in studies that have prescribed intensity using RPE, velocity-based assessments and RTF, coaches should be cautious when reviewing scientific literature using these methods as well as when applying them in practice. Despite our findings, we must highlight that certain practical issues, such as fatigue effects and acute/chronic changes to maximal strength, can make prescribing training intensity via %1RM problematic. Better-controlled studies are required to assess the effectiveness of more practical approaches such as RPE and velocity-based assessments (which have the potential to overcome these limitations) before a definitive conclusion can be made regarding the suitability of all methods for prescribing training intensity.
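The summary statistics above follow the usual normal-approximation form: mean percent change with a 95% CI of mean ± 1.96 · SD/√n. A minimal sketch (the pre/post values below are invented for illustration):

```python
import math

def pct_change_summary(pre, post):
    """Mean percent change, its SD, and a normal-approximation 95% CI."""
    changes = [100.0 * (b - a) / a for a, b in zip(pre, post)]
    n = len(changes)
    mean = sum(changes) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in changes) / (n - 1))
    half_width = 1.96 * sd / math.sqrt(n)
    return mean, sd, (mean - half_width, mean + half_width)

# invented pre/post 1RM values (kg) for a small training group
mean_chg, sd_chg, ci = pct_change_summary([100, 100, 100, 100],
                                          [110, 120, 130, 140])
```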

Effects of Early Morning Training on Sleep in NCAA Division 1 Female Cross-Country Runners

C. Benjamin,1 W. Adams,2 Y. Sekiguchi,1 and D. Casa1

1 Korey Stringer Institute at the University of Connecticut; and 2 University of North Carolina at Greensboro

Purpose: To investigate the influence of training start time on sleep metrics in collegiate female cross-country athletes. Methods: Eleven Division I female collegiate cross-country runners (mean ± SD; age, 19 ± 1 years; body mass, 58.8 ± 9.6 kg; height, 168.4 ± 7.7 cm; V̇O2max, 53.6 ± 5.6 ml·kg−1·min−1) participated in this study, which took place during the 2016 NCAA cross-country season. Sleep characteristics were captured using a wrist-worn actigraphy device. Metrics captured included: Hours of Sleep (HoS), Wake Time (W), Wake Time Percent (W%), Light Sleep Time (LS), Light Sleep Time Percent (LS%), Slow Wave Sleep Time (SWS), Slow Wave Sleep Time Percent (SWS%), Rapid Eye Movement Sleep Time (REM), Rapid Eye Movement Sleep Time Percent (REM%), Time in Bed (TiB), and Sleep Efficiency (SE). Training start time was differentiated as morning (AM), defined as training starting between the hours of 05:00–08:00, and non-morning (NAM), defined as all other nights throughout the season (including days of afternoon training, races, and rest). Linear mixed effect models were used to assess differences in sleep metrics between AM and NAM (fixed factor) while accounting for within-athlete (random factor) variance. The total number of observations and the number of athletes included in each night type were recorded (AM: 180 observations, 11 athletes; NAM: 424 observations, 11 athletes). Significance was set a priori at 0.05. Results: All results are presented in Table 1 as mean ± SD with associated statistical outcomes. Conclusions: Early morning practices resulted in athletes spending less time in bed and accumulating less total sleep time than on all other nights. Quality of sleep also suffered on AM nights, as demonstrated by greater REM sleep time, REM%, and SE, and lower W%, on NAM nights.
Although these results indicate that AM practices led to less time spent in REM sleep, which may have a detrimental effect on memory consolidation, there were no differences in time spent in SWS. Regardless of early morning practice, female collegiate cross-country athletes did not meet the recommended quantity of sleep, 7–9 hours per night, throughout the entire competitive season. Practical Applications: Collegiate athletes are a unique population who must balance training, competition, academics, and social life. Coaches and sports medicine staff responsible for scheduling training should be cognizant of the impact early training sessions may have on sleep quantity and certain aspects of sleep quality. These individuals can help athletes understand the compounding effects of sleep loss on performance and recovery potential. Additionally, educating athletes on sleep hygiene practices and the importance of planning for early morning training sessions could potentially mitigate some of the sleep decrements seen in the current study. Acknowledgments: We would like to acknowledge WHOOP, Inc., the sponsor of this study.
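The night-type comparison described in the Methods (fixed effect of AM vs. NAM with a per-athlete random intercept) can be sketched with statsmodels' `mixedlm`; the data below are synthetic stand-ins, not the study's observations:

```python
import pandas as pd
import statsmodels.formula.api as smf

# synthetic long-format data: one row per athlete-night,
# NAM nights built with ~1 extra hour of sleep for illustration
rows = []
for athlete in range(1, 12):                       # 11 athletes, as in the study
    for night, base in (("AM", 6.0), ("NAM", 7.0)):
        for rep in range(3):                       # a few nights of each type
            rows.append({"athlete": athlete,
                         "night": night,
                         "hos": base + 0.1 * rep + 0.05 * athlete})
df = pd.DataFrame(rows)

# random intercept per athlete; fixed effect of night type on hours of sleep
fit = smf.mixedlm("hos ~ night", df, groups=df["athlete"]).fit()
nam_effect = fit.params["night[T.NAM]"]            # estimated AM-to-NAM difference
```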

Friday, July 13, 2018, 8:45 AM–9:00 AM

Rapid-Phase Excess Post-Exercise Oxygen Consumption Following a Damaging Plyometric Exercise Bout in Resistance-Trained Males

P. Harty, H. Zabriskie, R. Stecker, and C. Kerksick

Lindenwood University

Excess post-exercise oxygen consumption (EPOC) is a transient elevation of oxygen consumption above baseline following the completion of exercise. EPOC manifests as both a rapid phase present during the first hour after exercise and a prolonged component that may last up to 72 hours. While the EPOC responses resulting from resistance training, aerobic exercise, and circuit training are well quantified, no study to date has examined rapid-phase EPOC after damaging plyometric exercise. Purpose: To assess the effect of a damaging plyometric exercise bout on rapid-phase EPOC and to determine the presence of any relationship between lean body mass and EPOC. Methods: Thirteen healthy resistance-trained males (Mean ± SD; Age: 21.6 ± 1.7 years; Height: 178.1 ± 4.3 cm; Mass: 84.3 ± 6.3 kg; Percent Body Fat: 18.4 ± 5.0%) participated in this study. Following anthropometric assessments and body composition analysis via DEXA, resting metabolic rate (RMR) was measured to establish baseline oxygen consumption. Subjects then performed 5 sets of 20 drop jumps from a height of 0.6 m with 10 seconds between jumps and 2 minutes of rest between sets. After exercise, EPOC was assessed via indirect calorimetry for 1 hour or until baseline oxygen consumption was reached. Repeated measures factorial ANOVA with a Bonferroni adjustment was used to identify time points where oxygen consumption and RER were elevated above resting values. Pearson product-moment correlations were used to assess relationships between body composition variables, such as lean mass, and net EPOC. Results: Net EPOC (2.78 ± 0.99 L O2) was significantly elevated (p < 0.05) compared to baseline during minutes 1–5, 7–8, and 10–30 post-exercise (see Figure below), while RER was significantly elevated (p < 0.05) during minutes 1–20 following exercise. A significant positive correlation was identified between RMR and lean body mass (r = 0.848, p < 0.001).
No significant relationships were found between net EPOC and lean mass or between net EPOC and self-reported weekly resistance training time. Conclusions: A bout of damaging plyometric exercise is of sufficient intensity to elicit rapid-phase EPOC for up to 30 minutes following cessation of exercise. No significant relationships exist between net EPOC and body composition variables. Practical Applications: The results of this investigation suggest that high-volume plyometric exercise has the potential to significantly disrupt metabolic homeostasis for at least 30 minutes following cessation of exercise. Strength and conditioning professionals should be cognizant of the effect of EPOC on subsequent athletic performance and plan training sessions accordingly to maximize their athletes' recovery.
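Net EPOC in liters of O2, as reported above, is conventionally obtained by integrating post-exercise V̇O2 over time after subtracting the resting baseline; a trapezoidal-rule sketch (the sampled values below are hypothetical):

```python
def net_epoc(times_min, vo2_l_per_min, baseline_l_per_min):
    """Trapezoidal integration of post-exercise VO2 above resting baseline.
    Returns net EPOC in liters of O2."""
    total = 0.0
    for i in range(1, len(times_min)):
        excess0 = max(vo2_l_per_min[i - 1] - baseline_l_per_min, 0.0)
        excess1 = max(vo2_l_per_min[i] - baseline_l_per_min, 0.0)
        total += 0.5 * (excess0 + excess1) * (times_min[i] - times_min[i - 1])
    return total

# hypothetical 1-minute samples decaying back to a 0.4 L/min resting baseline
epoc_l = net_epoc([0, 1, 2], [0.8, 0.6, 0.4], 0.4)
```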

Friday, July 13, 2018, 9:00 AM–9:15 AM

Sleep Distribution and Heart Rate-Derived Autonomic Nervous System Responses to Acute Training Load Changes in Collegiate Soccer Players

R. Curtis,1 W. Adams,2 C. Benjamin,1 Y. Sekiguchi,1 R. Huggins,3 and D. Casa1

1 Korey Stringer Institute at the University of Connecticut; 2 University of North Carolina at Greensboro; and 3 Korey Stringer Institute

Purpose: The aim of this investigation was to quantify the responsiveness of heart rate-derived autonomic nervous system (HR-ANS) status and sleep stage distribution to acute (previous day) changes in training load (TL). Methods: Twenty-two collegiate male soccer athletes (mean ± SD; age, 20 ± 2 years; height, 181.2 ± 6.5 cm; weight, 79.4 ± 6.9 kg; body fat, 11.8 ± 2.4%; V̇O2max, 51.2 ± 4.1 ml·kg−1·min−1) were monitored during the full 2016 collegiate soccer season. TL was collected using GPS-enabled player tracking devices, while sleep and HR-ANS metrics were collected via a wrist-worn motion and optical sensing device. Within-athlete standardized difference scores from a chronic (28-day moving average) baseline were calculated for resting heart rate (RHR), heart rate variability (HRV), slow wave sleep (SWS%), and rapid eye movement sleep (REM%). Standardized differences in previous day TL (total distance, n = 1,169) from the 28-day moving baseline were grouped according to the following thresholds: 0.2–0.6 = small, 0.7–1.1 = moderate, or >1.2 = large. Linear mixed effects models with Tukey post hoc testing were utilized to assess mean differences. Differences were considered practically important where there was >75% likelihood of exceeding the smallest important effect (0.2) and were classified as 75–94%, likely; 95–99%, very likely; and >99%, almost certainly. Effect sizes (ES ± 90% CI) and magnitude-based inferences were used to assist in interpretation. Results: Figure 1 displays mean values for each magnitude of TL change, with a predominant increase in RHR, decrease in HRV, and decrease in REM% as TL changes progressed from large decreases to large increases from baseline. Practically important differences in RHR were found between TL increases and TL decreases (ES = 0.46–1.04, likely-almost certainly) and in HRV between large TL increases and large to moderate decreases (ES = 0.46–0.35, likely-very likely).
SWS% showed likely differences between large increases in TL and moderate to small decreases (ES = 0.32–0.30, likely). REM% showed differences between large increases in TL and large to small decreases (ES = 0.41–0.36, likely). All other differences across variables were trivial or unclear. Conclusions: HR-ANS markers and sleep distribution were responsive to acute changes in TL when expressed as within-athlete changes from a moving, chronic baseline. While dose-response relationships were evident for HR-ANS and sleep markers, RHR appeared most responsive to changes in TL. Interestingly, REM%, which shares a known relationship with sleep duration, showed greater responsiveness to TL changes than SWS%, which is important for physiological regeneration. Practical Applications: Practitioners and scientists monitoring responses to TL may find utility in monitoring within-athlete physiological changes from a moving baseline, particularly RHR. Large deviations in TL may result in small to moderate, though practically meaningful, differences in HR-ANS markers and sleep distribution. Acknowledgments: The authors report no conflicts of interest. The authors alone were responsible for the content and writing of this article. They are thankful for the financial support for this research provided by WHOOP Inc.
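The within-athlete standardized difference scores described in the Methods can be sketched in pandas as a z-score against a trailing 28-day baseline; this is a hypothetical reconstruction of the approach, not the authors' code:

```python
import pandas as pd

def standardized_diff(daily_values: pd.Series, window: int = 28) -> pd.Series:
    """Each day's value expressed as a z-score against the mean and SD of
    the preceding `window` days (the current day is excluded via shift)."""
    baseline_mean = daily_values.rolling(window, min_periods=window).mean().shift(1)
    baseline_sd = daily_values.rolling(window, min_periods=window).std().shift(1)
    return (daily_values - baseline_mean) / baseline_sd
```

Applied per athlete to daily RHR, HRV, SWS%, REM%, or total distance, the resulting scores can then be binned with the 0.2/0.7/1.2 thresholds used in the abstract.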

Friday, July 13, 2018, 9:15 AM–9:30 AM

Comparison of Weekly HRV Measures Collected From 2 Different Recording Times and Their Relation to Performance in Collegiate Female Rowers

S. Sherman,1 C. Holmes,2 B. Hornikel,1 M. Fedewa,3 H. MacDonald,3 and M. Esco2

1 The University of Alabama; 2 University of Alabama; and 3 Department of Kinesiology, The University of Alabama

Introduction: The root-mean-square of successive differences between normal-to-normal RR intervals (RMSSD) is a common heart rate variability (HRV) metric used in athletic monitoring. Time constraints in a collegiate sport environment and irregular practice hours are typical challenges that make obtaining the mean value (RMSSDM) and coefficient of variation (RMSSDCV) of daily RMSSD assessments difficult. Currently, it is unclear whether the time of day (i.e., measured immediately upon waking vs. immediately prior to morning practice) influences these metrics and their relationships to performance. Purpose: To compare HRV values recorded immediately upon waking to values recorded later in the morning prior to practice, and to determine the associations of these HRV measures with performance outcomes in competitive female rowers. Methods: Thirty-one NCAA Division I rowers (19.74 ± 1.2 years, 67.91 ± 2.0 in, 78.1 ± 12.6 kg) from the same varsity team were monitored for 6 consecutive days. Two seated RMSSD measurements were taken by the rowers on at least 3 mornings using a photoplethysmography smartphone application. Each RMSSD measure was recorded during a 1-minute timeframe that followed a 1-minute stabilization period. The first RMSSD measurement (T1) occurred at the athlete's home following waking and elimination, the second (T2) upon arrival at the team's boathouse immediately before practice. From the daily measures, RMSSDM1, RMSSDCV1, RMSSDM2 and RMSSDCV2 were calculated. Rank was determined by the coaches based on performance for that week. Two objective performance assessments were conducted on an indoor rowing ergometer on separate days: timed 2,000 m and distance covered in 30 minutes. Paired-samples t-tests were used to assess potential differences between T1 and T2. Agreement between the recording times was assessed using intraclass correlation coefficients (ICC). Statistical significance was assessed using an α-level of 0.05.
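RMSSD and its weekly coefficient of variation follow directly from their definitions; a minimal sketch (the RR intervals and daily values below are invented):

```python
import math

def rmssd(rr_ms):
    """Root-mean-square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def rmssd_cv(daily_rmssd):
    """Coefficient of variation (%) across a week of daily RMSSD values."""
    n = len(daily_rmssd)
    mean = sum(daily_rmssd) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in daily_rmssd) / (n - 1))
    return 100.0 * sd / mean

# invented RR intervals (ms) from a 1-minute seated recording
value = rmssd([800, 810, 790, 805])
```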
Results: No differences in RMSSDM or RMSSDCV were observed between T1 and T2 (p = 0.73 and 0.66, respectively). Significant intraclass correlations were found between RMSSDM1 and RMSSDM2 (ICC = 0.82, 95% CI = 0.63–0.92), as well as between RMSSDCV1 and RMSSDCV2 (ICC = 0.75, 95% CI = 0.48–0.88) (both p < 0.01). Conclusions: Based on the strong ICCs, ultra-short RMSSD can be measured either immediately upon waking or prior to practice; however, assessing HRV immediately upon waking appears to show a stronger correlation with athletic performance. Practical Applications: Though it is preferred to collect measures immediately upon awakening, practitioners should be aware of the potentially decreased sensitivity of later-morning measures when attempting to monitor athletes using HRV. Acknowledgments: The University of Alabama Women's Rowing Team.

Friday, July 13, 2018, 9:30 AM–9:45 AM

Performance, Inflammatory and Perceptual Responses to Post Resistance Exercise Hydrotherapy in Junior International Male Volleyball Athletes

B. Horgan,1 C. Colomer,2 C. Fonda,3 J. Tatham,4 N. Tee,3 J. Broatch,5 M. Caine,3 N. West,6 S. Halson,3 E. Drinkwater,7 D. Chapman,3 and G. Haff8

1 Edith Cowan University, Australian Institute of Sport, Brumbies Rugby; 2 Brumbies Rugby, University of Canberra; 3 Australian Institute of Sport; 4 Volleyball Australia, Australian Institute of Sport; 5 Victoria University, Australian Institute of Sport; 6 Griffith University; 7 Deakin University; and 8 Edith Cowan University

Introduction: Hydrotherapy strategies such as cold, hot and contrast water immersion are used by athletes on completion of, and in preparation for, training and competition. Previous research on hydrotherapy as a post-exercise recovery strategy is inconsistent, with both negative and positive effects reported on subsequent performance, inflammatory and perceptual responses. Purpose: We investigated the acute (≤36 hours) effects of post-exercise hydrotherapy on performance, inflammatory and perceptual responses following a whole-body resistance exercise bout. Methods: Junior international and sub-elite semi-professional male volleyball athletes (n = 18, 19.9 ± 3.4 years, 195.2 ± 9.7 cm, 84.4 ± 9.2 kg) were assessed for performance: squat jump (SJ) and counter-movement jump (CMJ) height; inflammatory: creatine kinase (CK), tumor necrosis factor alpha (TNFa), thigh (TG) and calf girth (CG); and perceptual: perceived muscle soreness (MS) and recovery status (RS) responses before and after a single whole-body resistance exercise bout and a post-exercise hydrotherapy session. Over a 4-week period, using a within-athlete randomized crossover design, athletes completed 1 of four 15-minute recovery strategies (control [CON]: passive seated rest at 23.7 ± 0.4° C; cold water immersion [CWI]: 14.8 ± 0.2° C; contrast water therapy [CWT]: 7 × 2 minutes, alternating hot 39.1 ± 0.5° C and cold 14.8 ± 0.2° C; and hot water immersion [HWI]: 39.1 ± 0.5° C), with >96 hours between trials. Performance and inflammatory markers plus MS were assessed pre (−8 hours) and at 0, 2, 4, 12 and 36 hours post-exercise. Percent change (%Δ ± SD) was calculated using the post-exercise (0 hours) baseline. RS was assessed pre (−8 hours) and at 12 and 36 hours post-exercise. A repeated-measures ANOVA was used to analyze main and interaction effects with significance accepted at p < 0.10. Fisher's LSD was used for post-hoc analysis.
The magnitude of change was interpreted using partial eta squared (η2) and Cohen's d effect size statistics. Results: A significant interaction effect was observed for CG%Δ (p = 0.004, η2 = 0.105), with CG significantly lower in CWI vs. CON at 2 hours (−0.50 ± 0.49 vs. 0.39 ± 0.83%, p < 0.001, d = 1.07), 4 hours (0.27 ± 0.52 vs. 0.81 ± 0.62%, p = 0.001, d = 0.87) and 12 hours (−0.67 ± 0.92 vs. −0.25 ± 0.74%, p = 0.099, d = 0.58) post-exercise. No significant interaction effects were observed for performance (SJ%Δ [p = 0.655, η2 = 0.033]; CMJ%Δ [p = 0.702, η2 = 0.029]), inflammatory (CK%Δ [p = 0.485, η2 = 0.036]; TNFa%Δ [p = 0.602, η2 = 0.034]; TG%Δ [p = 0.943, η2 = 0.019]) or perceptual (MS%Δ [p = 0.833, η2 = 0.017]; RS [p = 0.856, η2 = 0.018]) variables. Conclusions: This research suggests that hydrotherapy strategies do not influence acute changes in SJ, CMJ, CK, TNFa, TG, MS and RS for ≤36 hours following a single bout of resistance exercise. However, CWI reduced calf girth compared with CON at 2, 4 and 12 hours post resistance exercise. Despite this significant finding, practitioners should be cognizant of the inter- and intra-tester variability associated with anthropometric girth measurement, as well as the effects that diurnal variation may have on muscular performance, inflammatory and perceptual responses to resistance exercise. Practical Applications: Choice of hydrotherapy strategy does not appear to influence acute muscular performance, inflammatory and perceptual responses for ≤36 hours post resistance training. These data suggest that practitioners working with high-performance athletes should prioritize other recovery strategies for ≤36 hours following a bout of resistance training.
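The magnitude statistics used here follow their standard definitions: Cohen's d from the pooled SD of two groups (or conditions), and partial η², which can be recovered from an F value and its degrees of freedom. A sketch:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def partial_eta_sq(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)
```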

Friday, July 13, 2018, 9:45 AM–10:00 AM

Neuromuscular and Hypertrophic Adaptations to Low-Intensity Blood Flow Restriction Training

E. Hill, T. Housh, J. Keller, C. Smith, R. Schmidt, and G. Johnson

University of Nebraska, Lincoln

Purpose: Low-intensity blood flow restriction training has been demonstrated to elicit increases in muscle strength and size comparable to training at high intensities without blood flow restriction. Compared to concentric muscle actions, however, eccentric muscle actions are a more potent stimulus for inducing favorable adaptations in muscle. The purpose of this investigation, therefore, was to examine the effects of early-phase eccentric blood flow restriction (Ecc-BFR) vs. concentric BFR (Con-BFR) training on muscle strength, neuromuscular, and hypertrophic adaptations. Methods: Thirty-six women volunteered to participate in this investigation and were randomly assigned to the Ecc-BFR (n = 12), Con-BFR (n = 12), or control (n = 12) group. The Ecc-BFR group trained at 30% of eccentric peak torque (PT) and the Con-BFR group at 30% of concentric PT. Training was performed 3 times per week for 4 weeks and consisted of 75 repetitions each session performed over 4 sets (1 × 30, 3 × 15). Each set was separated by 30 seconds of rest. All training and testing procedures were performed on a calibrated isokinetic dynamometer at a velocity of 120°·s−1. At baseline (week 0) and after 2 and 4 weeks of training, indices of muscle strength (eccentric PT, concentric PT, and maximal voluntary isometric contraction), neuromuscular adaptations (efficiency of electrical activity via surface electromyography), and muscle size (muscle thickness via ultrasound) were assessed. Results: Muscle strength (Figure 1) increased similarly as a result of Ecc-BFR and Con-BFR, but there were no changes in muscle strength for the control group. In addition, muscle thickness increased similarly at 2 weeks (13.2 and 10.8%) and at 4 weeks (14.6 and 11.7%) for Ecc-BFR and Con-BFR, respectively, but there were no changes for the control group.
The 4-week increases in muscle strength as a result of Ecc-BFR and Con-BFR were associated with similar neuromuscular (36.3 and 41.4%, respectively) and hypertrophic (63.7 and 58.6%, respectively) adaptations. Conclusions: Ecc-BFR and Con-BFR low-intensity training induced comparable, positive adaptations in skeletal muscle strength and size. Thus, Ecc-BFR or Con-BFR may be used to promote early-phase increases in strength and skeletal muscle mass. Practical Applications: These findings contribute to the growing body of evidence that BFR training is sufficient to elicit increases in strength- and hypertrophy-related outcomes, and they indicate that these adaptations were not affected by the mode of training. Thus, coaches and practitioners can utilize low-intensity Ecc-BFR or Con-BFR as a training intervention to elicit positive adaptations in skeletal muscle. Acknowledgments: This research was supported by the National Strength and Conditioning Association Doctoral Research Grant and NASA Nebraska Space Grant.

Friday, July 13, 2018, 10:00 AM–10:15 AM


Is the Magnitude of Cross-Education Dependent on Initial Strength Levels?

J. Carr,1 X. Ye,2 M. Stock,3 N. Wages,4 and J. DeFreitas5

1 University of Oklahoma; 2 University of Mississippi; 3 University of Central Florida; 4 Ohio University; and 5 Oklahoma State University

Introduction: Unilateral strength training has been shown to improve contralateral limb strength, an adaptation often called cross-education. Cross-education may therefore provide an effective exercise intervention to attenuate strength loss and muscle atrophy during asymmetrical orthopedic injuries. Yet, the factors that influence the magnitude of cross-education are poorly understood. Purpose: To examine the influence of initial strength levels on the magnitude of interlimb strength transfer. Methods: Ten healthy participants (3 female; age = 23.0 ± 2.0 years; stature = 175.9 ± 10.2 cm; mass = 74.3 ± 10.1 kg) completed this strength training study. The participants had not engaged in programmed strength training for at least 3 months prior to enrollment. The participants completed a total of 11 unilateral isometric strength training sessions of the non-dominant elbow flexors across 4 weeks. Isometric MVC values were determined for the dominant, untrained arm before and after the training intervention. For each training session, the intensity was set at 80% of isometric MVC. The training required the participants to complete 5 sets of five 5-second contractions. The participants received visual force feedback during training and were instructed to match their force output as closely as possible to the force template. Recovery intervals were set at 10 and 90 seconds between contractions and sets, respectively. Bivariate regression was used to examine the association between the baseline strength values and the percent change in isometric MVC for the untrained arm. Results: The regression analysis showed a non-significant association between baseline strength values and the relative magnitude of cross-education (R² = 0.0015, p = 0.915). Conclusions: These data suggest that initial strength levels do not moderate the degree of contralateral strength gains following short-term unilateral isometric training. 
These data indicate that factors other than initial strength are responsible for the magnitude of strength transfer. Practical Applications: These findings show that, in a small sample of healthy individuals, initial strength levels were not associated with the degree of strength gain in the untrained arm. These observations have important return-to-play implications for athletes recovering from unilateral limb injuries. Coaches and trainers may use unilateral strength training protocols to mitigate muscle strength loss for the injured athlete.
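The bivariate regression described above can be reproduced in a few lines of NumPy. The sketch below computes R² for the percent change in untrained-arm MVC regressed on baseline strength; the numeric values are purely illustrative placeholders, not the study's data.

```python
import numpy as np

def cross_education_r2(baseline_mvc, pct_change):
    """R^2 from a simple bivariate (least-squares) regression of
    percent change in untrained-arm MVC on baseline strength."""
    slope, intercept = np.polyfit(baseline_mvc, pct_change, 1)
    predicted = slope * np.asarray(baseline_mvc) + intercept
    residual = np.asarray(pct_change) - predicted
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((np.asarray(pct_change) - np.mean(pct_change)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative (fabricated) values only, not the study's data
baseline = [180, 210, 195, 240, 160, 205, 175, 220, 230, 190]  # baseline MVC, N
change = [8.1, 6.5, 9.2, 7.0, 5.8, 8.8, 7.4, 6.1, 9.5, 7.9]    # % gain, untrained arm
r2 = cross_education_r2(baseline, change)
```

An R² near zero, as the study reports (R² = 0.0015), means baseline strength explains essentially none of the variance in the transfer magnitude.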

Friday, July 13, 2018, 10:15 AM–10:30 AM


Low-Intensity vs. High-Intensity Resistance Training to Failure on 1-Repetition Maximum Strength in Untrained Females

T. Dinyer,1 M. Byrd,1 M. Garver,2 A. Rickard,3 W. Miller,4 S. Burns,2 and H. Bergstrom1

1 University of Kentucky; 2 University of Central Missouri; 3 United States Air Force; and 4 University of Mississippi

Previous studies have examined the effects of resistance training (RT) to failure at low-intensity vs. high-intensity training loads in both trained and untrained males. Significant increases in one-repetition maximum (1RM) strength have been observed when repetitions were taken to failure, regardless of the intensity lifted or the training status of the individuals. At the time of the present study, the authors found no evidence regarding the efficacy of this training modality in untrained females. Purpose: The purpose of this study was to examine the effects of RT to failure at a low (30% of 1RM) vs. a high (80% of 1RM) intensity on 1RM strength in untrained females. Methods: Twenty-three females (Age: 21 ± 2 years; Height: 167.1 ± 5.66 cm; Weight: 62.28 ± 16.16 kg) with no prior RT experience completed 9 weeks of RT to failure at either 30% or 80% of 1RM on 4 exercise machines: leg extension (LE), seated military press (SMP), leg curl (LC), and lat pull-down (LPD). Pre-, mid-, and post-1RM testing took place during weeks 1, 5, and 12, respectively, with no RT sessions performed during 1RM testing weeks. RT sessions consisted of 2 sets to failure during weeks 2–4 and 6–7, and 3 sets to failure during weeks 8–11, on all exercises. Training progression included adjustment of intensity for weeks 6–11 using the 1RM recorded during week 5. Statistical analyses of 1RM strength included a 2 (group: 30% and 80% 1RM) × 3 (time: pre-, mid-, and post-training) × 4 (exercise: LE, SMP, LC, and LPD) mixed factorial ANOVA at an alpha level of p ≤ 0.05, with follow-up 2-way and 1-way repeated measures ANOVAs and Bonferroni-corrected pairwise comparisons. Results: There was no 3-way interaction (p = 0.351), but there was a significant 2-way interaction for time × exercise (p < 0.001). 
The follow-up analyses revealed significant increases in 1RM strength (collapsed across groups) in all exercises from pre- to post-testing (LE: 31.7 ± 23.6%; SMP: 16.7 ± 13.8%; LC: 22.9 ± 25.5%; LPD: 25.4 ± 12.8%); from pre- to mid-testing (LE: 17.8 ± 15.6%; SMP: 9.29 ± 11.4%; LC: 12.2 ± 21.8%; LPD: 13.4 ± 9.10%); and from mid- to post-testing (LE: 11.4 ± 9.06%; SMP: 7.05 ± 9.04%; LC: 9.59 ± 7.29%; LPD: 10.9 ± 11.2%). Conclusions: These data suggested that RT to failure, regardless of the intensity (30 vs. 80%), may result in increased 1RM upper- and lower-body strength in previously untrained females. Although both intensities led to significant increases in 1RM strength, the mechanisms underlying these adaptations may differ at low (30% 1RM) vs. high (80% 1RM) intensities. Specifically, the increased 1RM strength in the 80% group was likely related to greater mechanical tension compared to the 30% group, while the greater metabolic stress from an increased volume of exercise and time under tension may explain the increased 1RM strength in the 30% group. Practical Applications: The results of the current study suggested that untrained females may be able to increase 1RM strength when training at a recommended intensity (80% 1RM) or a lower-than-recommended intensity (30% 1RM), if the repetitions are taken to failure during training sessions. Females tend to self-select intensities below those recommended by governing bodies, and the present research provides rationale that strength increases may still occur when training to failure at 30% 1RM.

Friday, July 13, 2018, 10:30 AM–10:45 AM


The Effects of a 6 Week Velocity Based Resistance Training Intervention on Maximal Strength in Trained Males

H. Dorrell, M. Smith, and T. Gee

University of Lincoln

Purpose: To assess the effects of 6 weeks of velocity-based resistance training on free-weight back squat, bench press, and conventional deadlift maximal strength in resistance-trained males. Methods: Sixteen resistance-trained males (mean ± SD; age: 22.8 ± 4.5 years; stature: 180.2 ± 6.4 cm; body mass: 90.8 ± 17.2 kg) were recruited. Following familiarisation, participants completed a back squat, bench press, and deadlift 1 repetition maximum (1RM; 140.2 ± 26.0 kg, 102.7 ± 18.2 kg, and 176.6 ± 27.2 kg, respectively), with mean concentric velocity (MCV) monitored via a linear position transducer. Participants were then assigned to either the velocity-based training (VBT) or the percentage-based training (PBT) group. For both groups, relative training loads (% 1RM), number of sets and repetitions, and inter-set rest time were equated. The VBT group's load and repetitions were dictated via real-time MCV monitoring, while the PBT group's programme was designed utilising pre-testing 1RM data. Participants completed 2 sessions per week for 6 weeks, with each session comprising 3 sets of a given number of repetitions of each movement, periodised in a wave-like structure. At the end of the 6 weeks, participants retested 1RM. Independent-samples t-tests were completed to examine pre-training inter-group differences, as well as post-training total volume. A 2-way mixed ANOVA, using one inter-factor (VBT vs. PBT) and one intra-factor (pre- vs. post-training), was conducted to examine the pre- to post-training between-group differences. Inferential statistics based on the magnitude of effects were calculated using a custom-built spreadsheet. Results: No significant differences (p > 0.05) were present between groups for pre-testing data. Training resulted in significant increases in maximal strength for the back squat, bench press, and deadlift for the VBT group (9, 8, and 6%, respectively), and for the back squat and bench press only for the PBT group (8 and 4%, respectively). 
No significant interaction effect was observed between training groups for the back squat or deadlift; however, for the back squat a significant difference was present in total volume lifted, with the VBT group lifting 9% less. A significant interaction was recorded between groups for the bench press, with the VBT group lifting significantly less total volume (−6%). Inferential statistics revealed the VBT intervention to be "most likely" beneficial for the back squat and bench press, and "very likely" for the deadlift, as opposed to "most likely," "likely," and "possibly" for the PBT group, respectively. Conclusions: The VBT intervention induced favourable adaptations in back squat, bench press, and deadlift maximal strength in a resistance-trained population. This finding is strengthened when considering the significant reduction in total volume completed by the VBT group across both the back squat and bench press movements. Practical Applications: The data presented provide sufficient evidence to support the use of VBT interventions within a resistance-trained population for eliciting favourable adaptations in maximal strength. While no interaction was observed between groups for the back squat or deadlift, the VBT group achieved the same percentage increase for the back squat (9 vs. 8%) and a greater percentage increase for the deadlift (6 vs. 3%, respectively), with less training volume. Within the applied setting, the ability to produce the same, or significantly greater, adaptations with lower total training volume is essential for a competitive athlete. A reduction in total volume may link directly to a decreased risk of injury and a reduction in training-induced fatigue.
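The abstract does not specify the exact rule by which real-time MCV monitoring dictated the VBT group's load and repetitions. As a minimal sketch of one common velocity-based approach, the set below is terminated once mean concentric velocity falls under a cutoff; the cutoff value and per-rep velocities are hypothetical, not the authors' protocol.

```python
def velocity_based_set(rep_velocities, cutoff=0.30, max_reps=10):
    """Hypothetical velocity-based stopping rule: count repetitions until
    mean concentric velocity (m/s) drops below `cutoff` or `max_reps`
    is reached. Returns the number of repetitions completed."""
    completed = 0
    for v in rep_velocities:
        if completed >= max_reps or v < cutoff:
            break
        completed += 1
    return completed

# Simulated per-rep MCVs for one set (m/s), slowing as fatigue accumulates
set_velocities = [0.52, 0.49, 0.45, 0.40, 0.34, 0.29, 0.25]
reps = velocity_based_set(set_velocities)  # stops before the 6th rep
```

A rule of this shape naturally trims repetitions on days when velocity decays quickly, which is one plausible explanation for the lower total volume observed in the VBT group.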

Friday, July 13, 2018, 10:30 AM–12:00 PM


Back Squat Assessment in Pre- and Post-Peak Height Velocity Male Cricketers Following 4-Weeks of Neuromuscular Training

I. Dobbs, M. Wong, I. Moore, J. Oliver, and R. Lloyd

Cardiff Metropolitan University

Teaching the back squat to youth athletes can have a positive impact on performance by improving motor skills and decreasing injury risk. The back squat assessment (BSA) evaluates an athlete's movement competency and identifies technical deficits during the squat pattern (1). Neuromuscular training can enhance the physical abilities needed in sport through the development of a variety of fundamental movements (2); however, the effects of a short-term intervention on movement competency in the BSA remain unknown. Purpose: To examine the effects of 4 weeks of neuromuscular training on movement competency in the BSA in pre- and post-peak height velocity (PHV) male cricketers. Methods: Fifteen pre-PHV (age = 11.2 ± 0.7 years; height = 147.8 ± 5.7 cm; mass = 39.8 ± 8.9 kg; maturity offset = −2.2 ± 0.6 years) and eleven post-PHV (age = 15.4 ± 0.9 years; height = 171.9 ± 8.4 cm; mass = 64.4 ± 9.2 kg; maturity offset = 1.3 ± 1.3 years) male youth cricketers participated in the study. Participants were instructed to perform the BSA according to guidelines outlined previously (1). The BSA is scored against a 10-point list of criteria, with one point given for each fault, so a lower total score reflects more favourable squat technique. For pre- and post-testing, participants performed 10 continuous repetitions with a dowel on their back, with feet positioned slightly wider than hip-width, and were told to descend until the thighs were parallel to the ground. All 10 repetitions were recorded using two 2D cameras placed 5 m from the participant and 1 m high, in the frontal and sagittal planes, with BSA scoring conducted retrospectively. After pre-testing, both groups underwent 4 weeks of neuromuscular training, 2 days per week for 1 hour each session. Results: A Mann-Whitney U-test revealed no difference between groups for pre-testing (p > 0.05) and a significant difference between groups in post-testing BSA scores (p < 0.05). 
Mann-Whitney U-test also revealed no difference in change score between groups (p > 0.05). Wilcoxon signed rank test revealed a significant within-group difference (p < 0.05) in median BSA score within the pre-PHV cohort (5.0 to 3.0). Median BSA score was also significantly different (p < 0.05) in the post-PHV group from pre-to post-testing (2.0 to 1.0). Conclusions: These findings suggest that 4 weeks of neuromuscular training may be enough to improve movement competency in the back squat for both pre- and post-PHV male cricketers. Changes in BSA scores were similar between groups, which suggests improvement of movement competency is not dependent on maturational status. Practical Applications: Movement competency during the back-squat technique can be improved in youth athletes of different maturational status following relatively short-term exposure to neuromuscular training.

Friday, July 13, 2018, 10:30 AM–12:00 PM


The Influence of Loading Intensity on EMG Frequency Spectrum During Fatiguing Contractions

A. Stranieri,1 D. Hatfield,2 and J. Earp1

1 University of Rhode Island; and 2 The University of Rhode Island

Introduction: Previous studies have shown that when exercise is performed to failure, the magnitude of hypertrophy appears to be independent of training load. However, as motor unit recruitment is load dependent, it is unclear if low-load training can similarly activate high-frequency type 2 motor units. Spectral analysis of EMG can provide vital information as to the activity of high-frequency (type 2) and low-frequency (type 1) motor units during fatiguing exercise at different intensities. Purpose: To compare EMG frequency spectrum characteristics between low- and moderate-intensity contractions performed to failure. Methods: Sixteen subjects (Age: 23.8 ± 4.4 years; Height: 169 ± 6.7 cm; Mass: 75.8 ± 15.6 kg) completed 2 testing sessions. In the first session, a maximal voluntary isometric contraction (MVIC) knee extension was performed to assess strength. In the second session, subjects performed isometric knee extensions to failure at both 50 and 75% of MVIC, in random order, with 5 minutes between intensities. During the contractions, vastus lateralis EMG was measured, and signals were decomposed using a fast Fourier transform and sectioned into very low (VL: 8–16 Hz), low (L: 16–31 Hz), moderate (M: 31–63 Hz), high (H: 63–125 Hz), and very high (VH: 125–250 Hz) frequency bands previously established in the literature. The percentage of total EMG signal power (%P) in each frequency band was first compared between intensities. The duration of each contraction was then divided into quarters and compared between intensities using a 4 × 2 (Time × Intensity) repeated measures MANOVA with Bonferroni post-hoc corrections. Results: Overall muscle activity was greater at 75% than at 50% MVIC, as indicated by a larger root mean square (p = 0.047). Greater %P in the VL and L bands was observed at 50% than at 75% MVIC, while greater %P in the H band was observed at 75% than at 50% MVIC (p < 0.05). 
However, %P in the M (p = 0.12) and VH (p = 0.86) bands was similar between intensities. Additionally, over time, %P in the VL, L, and M bands increased while %P in the H and VH bands decreased in both loading conditions (p < 0.001 to p = 0.003). Conclusions: During the 50% MVIC contraction, lower frequency motor units contributed more to overall muscle activation than higher frequency motor units. Conversely, during the 75% MVIC contraction, higher frequency motor units had greater contributions to overall muscle activation than lower frequency motor units. It was also observed that over the duration of exercise at both 50 and 75% MVIC, activity in lower frequency motor units increased, while activity in higher frequency motor units decreased. Thus, there was no evidence of increased activation of high-frequency motor units when performing low-intensity exercise to failure. Practical Applications: The results of this study suggest that training at lower intensities with the goal of inducing hypertrophy, even to failure, elicits greater responses from lower frequency motor units than from higher frequency motor units. This suggests that previously observed hypertrophy when training to failure at lower intensities may reflect selective hypertrophy of lower frequency motor units, whereas training to failure at even a moderate-intensity load may be advantageous in recruiting and inducing a hypertrophic response in higher frequency motor units.
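The band-percentage calculation described above can be sketched in a few lines of NumPy. This is a minimal periodogram-style illustration of computing %P per band, not the authors' processing pipeline; the synthetic test signal and 1 kHz sampling rate are assumptions.

```python
import numpy as np

# Frequency bands (Hz) as defined in the abstract
BANDS = {"VL": (8, 16), "L": (16, 31), "M": (31, 63),
         "H": (63, 125), "VH": (125, 250)}

def band_power_percent(emg, fs=1000.0):
    """Percentage of total EMG spectral power (within the analysed bands)
    falling in each band, via a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(emg)) ** 2
    total = sum(power[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in BANDS.values())
    return {name: 100.0 * power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic 1-second signal: a strong 20 Hz and a weaker 100 Hz component
t = np.arange(0, 1.0, 1.0 / 1000.0)
sig = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 100 * t)
pp = band_power_percent(sig)  # most power lands in the L band, less in H
```

In practice an EMG pipeline would window the signal and average spectra (e.g., Welch's method), but the band-summing step shown here is the core of the %P measure.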

Friday, July 13, 2018, 10:30 AM–12:00 PM


Neuromuscular Responses During a Sustained, Submaximal Isometric Leg Extension Muscle Action at a Constant Perception of Effort

J. Keller, T. Housh, C. Smith, E. Hill, R. Schmidt, and G. Johnson

University of Nebraska, Lincoln

Purpose: Anchoring exercise intensity with ratings of perceived exertion (RPE) can be used to examine the mechanisms underlying the perception of effort. The purpose of the present study was to examine the fatigue-related patterns of responses for electromyography (EMG) and mechanomyography (MMG), as well as force production, during a sustained, submaximal isometric leg extension muscle action anchored by RPE. Methods: Ten women (mean ± SD: 23.1 ± 2.3 years) performed a sustained, submaximal isometric leg extension muscle action at RPE = 5 (OMNI-RES 10-point scale) to volitional exhaustion or a maximal time limit of 5 minutes. Two maximal voluntary isometric contractions (MVIC) were performed prior to (pretest) and following (posttest) the sustained isometric muscle action. Both EMG and MMG signals were recorded from the vastus lateralis muscle of the dominant leg during the sustained muscle action. The mean normalized EMG amplitude (AMP), EMG mean power frequency (MPF), MMG AMP, MMG MPF, and force values were calculated every 5% of the time to exhaustion (TTE). Polynomial regression (linear and quadratic) analyses were used to examine the neuromuscular parameter and force vs. TTE relationships during the sustained, submaximal isometric muscle action. A paired t-test was used to compare mean differences between the pretest and posttest MVIC. Results: The mean pretest MVIC (mean ± SD: 46.9 ± 8.9 kg) was significantly (p = 0.003) greater than the mean posttest MVIC (36.4 ± 5.3 kg). Three of the 10 subjects reached the 5-minute (300-second) time limit, and the mean TTE was 180 ± 90.9 seconds (range: 84.8–300 seconds). Furthermore, the mean percent decline in force production was 34.7 ± 17.1%, and there was a significant negative, quadratic force vs. TTE relationship (p < 0.001; R = −0.983). In addition, there was a significant positive, quadratic MMG AMP vs. TTE relationship (p < 0.001; R = 0.852) for the sustained, submaximal isometric leg extension. 
There were no other significant neuromuscular parameter (EMG AMP, EMG MPF, or MMG MPF) vs. TTE relationships. Conclusions: All subjects reduced force to maintain a constant RPE = 5 during the sustained, submaximal isometric leg extension, but muscle activation (EMG AMP) did not change. EMG AMP is influenced by motor unit recruitment, firing rate, and synchronization. The current findings suggested that motor unit recruitment (as reflected in MMG AMP) increased but firing rate (MMG MPF) did not change. Thus, the lack of fatigue-induced change in EMG AMP may have reflected a decrease in synchronization that offset the increase in motor unit recruitment and tracked the decrease in force. In addition, the lack of change in EMG MPF suggested there were no changes in muscle fiber action potential conduction velocity. Collectively, the current findings suggested that muscle activation and motor unit firing rate were most closely associated with RPE and tracked the perception of effort during the sustained, submaximal isometric muscle action, while motor unit recruitment did not. Practical Applications: These findings provide insight into the neuromuscular mechanisms associated with RPE during sustained, fatiguing isometric muscle actions of the leg extensors. Furthermore, this study supported the use of intensity anchored by RPE during a sustained muscle action to examine the responses of neuromuscular parameters.
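The quadratic parameter vs. TTE fits can be illustrated with NumPy's `polyfit`. In the sketch below, R is computed as the correlation between observed and fitted values (the negative sign reported in the abstract conveys the downward direction of the trend); the force values here are hypothetical, generated from an assumed declining curve, not the study's data.

```python
import numpy as np

def quadratic_fit_r(x, y):
    """Fit y = a*x^2 + b*x + c and return the correlation between
    the fitted and observed values."""
    coeffs = np.polyfit(x, y, 2)
    y_hat = np.polyval(coeffs, x)
    return np.corrcoef(y, y_hat)[0, 1]

# Hypothetical force readings sampled at every 5% of time to exhaustion
tte = np.linspace(5, 100, 20)                 # % of TTE
force = 50 - 0.15 * tte - 0.0015 * tte ** 2   # declining, curvilinear force
r = quadratic_fit_r(tte, force)
```

Because the synthetic data are exactly quadratic, the fit here is near-perfect; real force traces with measurement noise would yield |R| values closer to the 0.983 the study reports.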

Friday, July 13, 2018, 10:30 AM–12:00 PM


Relationship Between the Athletic Ability Assessment and Performance Tests in Division II Collegiate Rugby Athletes

M. Tracey

Roanoke College

Introduction: Strength and conditioning professionals use movement screenings to assess common movement patterns in athletes at all levels. While there are many movement screenings used to assess athletes, most fail to address the unique requirements of athletes. A movement screening for athletes should assess movement under a higher load than screenings of non-athletes due to the stressful demands faced during competition. Typical stresses athletes face include rapid rates of acceleration (both positive and negative), cutting at high velocity, and large forces produced by the musculoskeletal system. The Athletic Ability Assessment (AAA) analyzes athlete motion under conditions of greater physical stress than other movement screenings and therefore may correlate better with traditional measures of performance. Purpose: To examine the relationship between Athletic Ability Assessment scores and traditional performance measures, including the 40-m sprint, pro agility shuttle, vertical jump, and back squat 1RM. Methods: Twelve Division II college club rugby players with no current injuries participated in this study. The movement screening was performed on a single day and consisted of 9 different movements. These movements (front plank, side plank, overhead squat with a 20-pound bar, single-leg squat off a box, forward lunge with a 45-pound bar on the back, single-leg forward hop, lateral bound, push-ups, and neutral-grip pull-ups) were performed in the order listed. Performance tests took place across 2 separate days and consisted of a 40-m sprint, pro agility shuttle, vertical jump, and relative back squat. Results: All 12 participants were able to complete the AAA successfully, with scores ranging from 72 to 99 (out of a possible 126) and an average score of 87.5. 
One participant was not able to complete the performance tests due to an injury suffered after completing the AAA (this individual had the second-lowest AAA score of 73), and another individual was not able to participate in the vertical jump and back squat measurements. The 40-m sprint times (R² = 0.71) and pro agility times (R² = 0.66) showed the strongest correlations with participants' AAA total scores. Vertical jump height (R² = 0.60) showed a correlation with the 10 participants' AAA total scores, while the estimated relative 1RM back squat (R² = 0.33) showed a relatively weak correlation with the 10 participants' AAA total scores. Conclusions: The 40-m sprint times, pro agility times, and vertical jump heights all had strong correlations with the participants' AAA total scores. The estimated 1RM back squat, relative to the participants' bodyweight, had a weak correlation with participants' AAA total scores. Practical Applications: This study shows that the AAA movement screening is a good predictor of performance tasks present in most sports (speed, acceleration, change of direction, force production, etc.). This screening can be a useful tool for strength and conditioning professionals to measure movement patterns that translate well to movements observed during competition, particularly those with higher physical stress. The AAA may be a useful tool for customizing resistance training programs that focus on weak areas of performance and movement control and/or for assessing return to play for athletes who are toward the end of the injury rehabilitation process.

Friday, July 13, 2018, 10:30 AM–12:00 PM


Force Production Asymmetry Is Task Dependent in Collegiate Baseball Players

C. Bailey,1 T. McInnis,1 K. Nilson,1 J. Batcher,2 and T. North1

1 LaGrange College; and 2 Toronto Blue Jays

Introduction: Research evaluating strength and force production asymmetry has demonstrated that it may be a detriment to performance of a given task. Research has also shown a lack of association between bilateral isometric and dynamic performance symmetry measures, such as the isometric mid-thigh pull and countermovement jump (CMJ). To the current authors' knowledge, the association between asymmetry measures of bilateral movements and actual sport tasks has not been evaluated. Purpose: The purpose of this study was to evaluate the association of asymmetry direction and magnitude between jumping and batting performance in collegiate baseball players. Methods: Thirteen collegiate baseball players volunteered for this study (19.9 ± 1.3 years, 82.2 ± 10.9 kg). All athletes participated in a dynamic warm-up focusing on all major muscle groups prior to CMJ and bat swing testing. CMJ and bat swing testing was completed on 2 PASCO 2142 force plates (Roseville, CA) sampling at 1,000 Hz. Force-time curve analyses were completed with a custom program coded in R to evaluate peak force (PF), rate of force development (RFD), and impulse (Imp) for both the CMJ and the bat swing. Asymmetry magnitude was evaluated with the Symmetry Index (SI) score ([left value − right value]/[sum of values] × 100), where positive values indicate a left-side asymmetry, negative values indicate a right-side asymmetry, and the distance from zero indicates the asymmetry magnitude as a percentage. Associations between CMJ SI and bat swing SI variables were evaluated via Pearson's bivariate correlation coefficients. Results: No statistical or practical relationships were found between CMJ SI and counterpart bat swing SI variables (PF SI: r = −0.240; RFD SI: r = 0.024; Imp SI: r = −0.002). A statistically significant relationship between CMJ Imp SI and bat swing RFD SI was observed, but it was in the opposite direction (r = −0.518). 
Statistically and practically significant relationships between SI variables of the same assessment were also present. Conclusions: Baseball is a sport that produces many repetitive movements that likely result in movement and strength asymmetries. That being said, it seems that some of this asymmetry may be task dependent. The current study did not find any statistical or practical association of force production asymmetry between CMJ and bat swing performance. In fact, some of the asymmetry measures switched direction from the CMJ to the bat swing. Practical Applications: While force production asymmetry may be detrimental to performance of some tasks, it is important to note that asymmetry in one task may not be indicative of equal asymmetry in another. Strength and conditioning professionals should use caution if programming based upon asymmetry results of a single task.
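The authors computed these values with a custom R program; as a minimal Python illustration, the Symmetry Index formula given above can be sketched as follows (the force values are hypothetical):

```python
def symmetry_index(left, right):
    """Symmetry Index as defined in the abstract:
    (left - right) / (left + right) * 100.
    Positive -> left-side asymmetry; negative -> right-side asymmetry;
    the distance from zero is the asymmetry magnitude as a percentage."""
    return (left - right) / (left + right) * 100.0

# Hypothetical CMJ peak forces (N): right leg produces more force,
# so the SI is negative (right-side asymmetry of about 5.7%)
si = symmetry_index(1250.0, 1400.0)
```

Because the sign of the SI encodes direction, correlating SI values across tasks (as done here) tests whether both the side and the size of the asymmetry carry over from one movement to another.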

Friday, July 13, 2018, 10:30 AM–12:00 PM


Muscle Activation Patterns Change During Repeated Runs to Exhaustion Measured With Sports Performance Wearables

K. Balfany,1 D. Feeney,2 and S. Lynn1

1 California State University, Fullerton; and 2 ATHOS (MAD Apparel Inc.)

Introduction: Fatigue is reported when individuals experience a reduction in their ability to produce force and perform a desired movement. Research into the etiology of fatigue has mainly focused on isometric contractions, which limits applicability to athletic performance. Sports require dynamic movements in which activation patterns differ from isometric contractions. It is well understood that the amplitude of the electromyography (EMG) signal increases during an isometric contraction performed to failure. With the use of sports performance wearable technology (SPW), it is possible to quantify EMG activity during sport-specific activities, making dynamic muscle contractions much easier to measure. Purpose: The purpose of this study is to explore how muscle activation patterns respond to exercise performed to failure with the use of SPW. Methods: Four moderately trained males performed repeated 400 m runs to exhaustion. Each athlete was fitted with compression shorts embedded with EMG sensors. Following a dynamic warm-up, athletes performed a 400 m run at an RPE of 8 (scale 1–10). Following 60 seconds of rest, athletes performed at least 5 more 400 m runs, each within 2 seconds of the first run's time. Between 6 and 9 runs were completed by the athletes, but subsequent analysis was performed on only the first 6. EMG activity from the quadriceps, biceps femoris, and gluteus maximus was sampled at 1 kHz, band-pass filtered, and normalized to MVC. The primary outcome was the sum of the normalized EMG signal for each muscle group during each interval. 
Results: A paired-samples t-test was conducted to compare muscle activation (% of total EMG activity during each interval) of the muscle groups measured (quadriceps, gluteus maximus, hamstrings) in the first run (r1) and the last run (r6) of the workout. There was no statistically significant difference from r1 to r6 in any muscle group (p > 0.05). A visual representation of the results is displayed in Figure 1, indicating the upward and downward trends of hamstring and glute contributions, respectively. Conclusions: The statistical analysis was underpowered, but consistent trends emerged. Glute contribution decreased, hamstring contribution increased, and quadriceps contribution stayed constant for each athlete across the 6 intervals. This suggests that as glute contribution decreased, hamstring contribution increased to maintain the same output. Across the 6 runs, muscle contribution changed to allow athletes to perform the run in relatively the same duration while compensating with weaker muscles. Practical Applications: SPW may be used to assess muscle activity and biomechanical changes during dynamic movements and sport. To establish statistical significance, further research needs to be conducted. This case study provides encouraging data suggesting that increases in EMG amplitude may be a less significant indicator of fatigue during dynamic movements; rather, muscle contribution across the measured muscles shifts to complete a movement under the same output constraints. Acknowledgments: Supported by a grant through the Foundation to Eradicate Duchenne (FED).

Friday, July 13, 2018, 10:30 AM–12:00 PM


Comparing Muscle Activity at Parallel and Staggered Stance During Biceps Curl Exercise

J. Blandino and M. Krackow

Virginia Military Institute

Introduction: Control of upright posture during physical activity requires a coordinated effort of multiple body segments. Performing a front-loaded movement such as the biceps curl, or lifting objects in front of the body, puts stress on the spine and lower back muscles. Studies have shown that front-loaded movement shifts the center of mass of the body forward. To maintain balance despite the changing center of mass, coordinated muscular activity between the lower back and lower extremities is employed. Studies have also shown that an increased base of support, such as foot width and a staggered stance, affects the postural control mechanism. Purpose: To compare lower back and lower extremity muscle activity during the biceps curl exercise between parallel and staggered stances. Methods: Eighteen healthy college-aged students (average age 20 ± 1.2 years) volunteered to participate in the study. Subjects' average body mass was 81.0 ± 15.0 kg, and they had 0–11 years of resistance training experience. Participants performed 3 repetitions of the biceps curl exercise using a barbell load of 33.5 ± 2.6% of each subject's body mass in 3 stances: parallel, staggered left foot forward, and staggered left foot backward. In the parallel stance, subjects stood with feet at shoulder width. For both staggered stances, the subject's right foot remained in place while the left foot was placed one foot-length forward or backward from the right foot while maintaining the same shoulder-width distance between the 2 feet. EMG electrodes were placed on the left and right external obliques, lumbar erector spinae, tibialis anterior, and peroneus longus. The maximum muscle activity for each muscle was compared among the 3 stances using repeated measures ANOVA with alpha set at 0.05. Post hoc pairwise comparisons with Bonferroni adjustment were used to determine which stances differed. 
Results: The results demonstrated no statistically significant difference in maximum muscle activity in the external oblique and lumbar erector spinae among the 3 stances. There was also no statistically significant difference in maximum muscle activity in the tibialis anterior and peroneus longus among the 3 stances. However, muscle activity of the lumbar erector spinae, tibialis anterior, and peroneus longus on the left side was lower than on the right side for all 3 stances. Conclusions: Lifting or performing front-loaded exercises shifts the center of mass anteriorly. The results of this study showed that muscle activity in the lower back (external oblique and lumbar erector spinae) and in the 2 muscles of the lower extremities (tibialis anterior and peroneus longus) was not affected by stance position while carrying out the biceps curl exercise. Interestingly, muscle activity on the left side was lower than on the right side for all 3 stances. This may indicate an unconscious adjustment in postural control. Practical Applications: When people perform front-loaded exercises such as the biceps curl, they can employ whichever stance (parallel, or staggered with the left foot forward or backward) they find most comfortable without adversely affecting muscle activity in the lower back and lower extremities. Acknowledgments: This study was supported by funding from the VMI Center for Undergraduate Research.

Friday, July 13, 2018, 10:30 AM–12:00 PM

Rate of Force Development and Reactive Strength Responses to a Composite Training Session

P. Byrne,1 J. Moody,2 S. Cooper,2 and S. Kinsella1

1 Institute of Technology Carlow; and 2 Cardiff Metropolitan University

Introduction: Previous research has provided evidence that plyometric exercises can elicit post-activation potentiation (PAP), leading to acute enhancement in sprinting. To date, no investigation has examined the responses of average eccentric rate of force development (ECC-RFD) and the reactive strength index (RSI) to a plyometric-sprint PAP protocol performed within a session. To describe this approach, the authors adopted the term composite training, defined as the combination of a plyometric exercise with an explosive activity, such as a sprint run, performed in the same set and session. Purpose: The purpose of this study was to investigate the ECC-RFD and RSI responses to a composite training session and to determine whether a new performance level could be induced after a period of recovery. Methods: Eight hurling players (mean ± SD: age = 20.0 ± 2.1 years; mass = 85.0 ± 5.8 kg; height = 185.3 ± 3.1 cm; 3RM back squat = 125.0 ± 2.3 kg; countermovement jump [CMJ] height = 40.2 ± 1.5 cm) volunteered to participate by first completing a bounce drop-jump (BDJ) test to identify individual BDJ drop height. This was followed 72 hours later by a composite training session. ECC-RFD was derived from the CMJ test and RSI from the BDJ test. These measures were tested 10 minutes pre-session, immediately post-session, and 168 hours post-session. Once players had warmed up and completed the pre-test, 6 repetitions of the composite training protocol were performed with a 4-minute inter-repetition recovery period. The protocol consisted of 3 BDJs followed, after a 15-second rest, by a 20 m sprint. A 1-way repeated measures ANOVA was performed with post-hoc pairwise comparisons using a Dunn-Sidak adjustment to identify where significant differences existed between pre-, post-, and post-168 hours measures. Cohen's d was used to determine effect size (ES).
Results: The ANOVA revealed a decrease in absolute (−13.2%; p = 0.03; ES = −0.56) and relative (−14.5%; p = 0.04; ES = −0.54) ECC-RFD from pre- to post-session. An increase from post-session to post-168 hours was found for relative ECC-RFD (23.7%; p = 0.04; ES = 0.59) and RSI (11.1%; p = 0.05; ES = 0.57). Furthermore, absolute ECC-RFD increased from post-session to post-168 hours and surpassed the pre-session score (27.1%; p = 0.07; ES = 0.79). A correlation was observed between the post-168 hours scores for absolute ECC-RFD and RSI (r(6) = 0.77; p = 0.02). Conclusions: These data indicate that composite training results in an immediate decline in ECC-RFD and RSI post-session, most likely related to metabolic by-products that negatively affect the muscle fiber contractile mechanism. Furthermore, relative ECC-RFD and RSI returned to pre-session levels; however, absolute ECC-RFD exhibited enhanced performance after 168 hours of recovery. Further research is required to examine adaptations to a short-term composite training program. Practical Applications: Composite training is a novel training approach that results in an immediate decline in ECC-RFD and RSI post-session, possibly due to metabolic responses. Furthermore, after 168 hours of recovery, composite training results in the enhancement of ECC-RFD, possibly due to improved neural drive, phosphorylation, and adaptations in muscle architecture, leading to enhancements in explosive activities such as sprinting. Acknowledgments: The authors express their gratitude to the players who participated in this study, especially as they had to perform no training for a 7-day period. We would like to thank the Institute of Technology Carlow for providing funding to make this study possible.
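
The two outcome measures can be sketched numerically. This is a minimal illustration only: the abstract does not give the exact formulas used, so both definitions below (RSI as flight time over contact time, and average ECC-RFD as the force change over the eccentric-phase duration) are common conventions assumed here, not the authors' confirmed methods.

```python
def reactive_strength_index(flight_time_s: float, contact_time_s: float) -> float:
    """RSI from a bounce drop jump: flight time / ground contact time.

    One common field definition; the abstract does not state which
    RSI variant the authors computed.
    """
    if contact_time_s <= 0:
        raise ValueError("contact time must be positive")
    return flight_time_s / contact_time_s


def average_rfd(onset_force_n: float, peak_force_n: float, phase_duration_s: float) -> float:
    """Average rate of force development (N/s) over a movement phase,
    e.g., the eccentric phase of a countermovement jump."""
    if phase_duration_s <= 0:
        raise ValueError("phase duration must be positive")
    return (peak_force_n - onset_force_n) / phase_duration_s


# Example: 0.50 s of flight over a 0.20 s contact gives RSI = 2.5;
# a rise from 800 N to 1,800 N over 0.25 s gives 4,000 N/s.
```

Relative ECC-RFD, as reported in the abstract, would then simply divide the absolute value by body mass.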

Friday, July 13, 2018, 10:30 AM–12:00 PM

A Comparison of Normalization Methods for Electromyography Data of the Biceps Femoris During the Glute-Ham Raise Exercise

M. Cuthbert,1 N. Walker,1 N. Ripley,1 J. McMahon,1 T. Suchomel,2 and P. Comfort1

1 University of Salford; and 2 Carroll University

Introduction: Accurate assessment of muscle activation, via electromyography (EMG), is important when comparing the relative contributions of different muscles between activities or exercises. To achieve this, EMG is usually normalized to a percentage of maximal voluntary isometric contraction (MVIC). However, numerous single-joint methods have been used to normalize EMG, including manual muscle testing, isometric dynamometry, and strain gauges. Such assessments can be time consuming and may not represent the resultant movement (e.g., knee flexion vs. hip extension) during resistance training, especially in relation to bi-articular muscles such as the biceps femoris (BF). It is therefore important not only to determine whether normalization to MVIC during single-joint or multi-joint exercise is effective, but also to determine the effect of normalization procedures for bi-articular muscles. Purpose: To compare 3 different methods of EMG normalization for the BF and identify the most suitable method during each of the 4 phases (Phase 1—knee extension; Phase 2—hip flexion; Phase 3—hip extension; Phase 4—knee flexion) of a glute-ham raise. Methods: Subjects (n = 11; age = 23 ± 4 years; height = 175.95 ± 6.9 cm; mass = 75.15 ± 9.65 kg) had EMG electrodes placed on the BF muscles in accordance with SENIAM guidelines. Subjects performed knee flexion (KF) and hip extension (HE) MVIC trials on an isokinetic dynamometer, and an isometric Romanian deadlift (RDL), in order to normalize the EMG data collected during the phases of a glute-ham raise. EMG data were analysed in a bespoke Excel spreadsheet, with the 4 phases identified based on thresholds of >2 SD + mean EMG acquired during periods of residual EMG. A series of 1-way ANOVAs with Bonferroni post-hoc analyses, or non-parametric equivalents when appropriate, were performed to identify differences in normalized values between methods for each phase of the exercise. Cohen's d effect sizes were also calculated.
An a priori alpha level was set at p < 0.05. Results: Normalization using the RDL consistently resulted in large (d = 1.204–2.204) and significantly greater (p < 0.001) normalized EMG values across all phases, compared to both HE and KF (Table 1). In addition, KF normalization resulted in trivial to moderate (d = 0.182–0.546), significantly (p ≤ 0.036) lower normalized EMG values compared to HE normalization (Table 1). Conclusions: Normalization of BF EMG using an RDL should not be used, to prevent erroneous interpretation of the percentage of maximal activation, which is clearly inflated by this process (Table 1). Only trivial to small differences were noted between KF and HE, with HE resulting in consistently higher values. Practical Applications: Bi-articular muscles should be normalized using a technique specific to the movement (KF or HE) being assessed during the dynamic task in question.
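
Normalization to a reference contraction can be sketched as follows. This is a minimal illustration, not the authors' spreadsheet pipeline: the moving-RMS envelope, the 100-sample window, and the peak-of-envelope convention are all assumptions introduced here.

```python
import numpy as np


def moving_rms(signal: np.ndarray, window: int) -> np.ndarray:
    """Moving root-mean-square envelope of an EMG signal."""
    squared = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="valid"))


def percent_mvic(task_emg: np.ndarray, mvic_emg: np.ndarray, window: int = 100) -> float:
    """Peak task-EMG envelope expressed as a percentage of the peak
    envelope from the reference maximal (MVIC) trial."""
    task_peak = moving_rms(task_emg, window).max()
    mvic_peak = moving_rms(mvic_emg, window).max()
    return 100.0 * task_peak / mvic_peak
```

One design point this makes visible: the reference trial sits in the denominator, so a reference contraction that elicits relatively low BF activation shrinks the denominator and inflates the resulting percentage, which may be consistent with the inflation the authors report for RDL-based normalization.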

Friday, July 13, 2018, 10:30 AM–12:00 PM

The Concurrent Validity of Vertical Jump Measuring Devices

S. Montalvo,1 S. Dorgo,2 J. Sanchez,2 M. Gonzalez,2 and C. Tune2

1 University of Texas at El Paso; and 2 The University of Texas at El Paso

The vertical jump has been regarded as a valid measure of explosiveness in sports. Several devices for field testing have been introduced to the market in recent years as alternatives to costly laboratory testing. Previous reports have examined the validation of different accelerometers, motion capture systems, video applications, and photoelectric cells for vertical jump measurement. However, validation of some of these new devices is lacking, particularly the concurrent validation of multiple devices within the same study session. Purpose: The aim of this study was to provide a concurrent validation of 5 different vertical jump measuring devices. Methods: Fifty physically active subjects (males = 26, females = 24; mean ± SD age = 23.5 ± 3.78 years) were recruited for this study. Upon arrival at our laboratory, subjects performed a 5-minute general warm up, followed by an 8–10 minute dynamic warm up and a few repetitions of countermovement jumps (CMJ) and squat jumps (SQJ) for movement familiarization purposes. The validation testing protocol included 3 repetitions of the CMJ and SQJ with 1-minute rest between repetitions. The devices used in this project were set up to operate synchronously, so that each jump was captured by all devices. These devices included: force plates, motion capture, an accelerometer, a photoelectric cells device, and 2 different video applications. Data were analyzed in SPSS 23, and intra-class correlations were used to determine the correlations between the devices. Results: For the CMJ, compared to the force plates, the first video app showed a strong correlation (r = 0.976). The motion capture device, the second video app, and the accelerometer-based app showed moderately strong correlations to the force plates (r = 0.768, r = 0.724, and r = 0.641, respectively). Lastly, the photoelectric cells device showed a moderate correlation of r = 0.462.
Compared to the motion capture, the force plates, the first video app, and the second video app showed moderately strong correlations (r = 0.768, r = 0.776, and r = 0.599, respectively). Lastly, the accelerometer-based app (r = 0.362) and the photoelectric cell system (r = 0.331) showed moderately low correlations to the motion capture in the CMJ. In the SQJ, we observed moderately strong correlations of the accelerometer, the photoelectric cells device, and the second video app (r = 0.805, r = 0.849, and r = 0.779, respectively) to the force plates. Finally, we observed strong correlations between the force plates and both the motion capture (r = 0.936) and the first video app (r = 0.979). Compared to the motion capture, the first video app, the photoelectric cells, and the second video app showed moderately strong correlations (r = 0.817, r = 0.817, and r = 0.570, respectively). Lastly, the force plates and first video app were highly correlated to the motion capture (r = 0.936 and r = 0.929, respectively) in the SQJ. Conclusions: The findings from this study show that video applications are viable and affordable alternatives to costly laboratory devices. Practical Applications: Video applications present a cost-effective alternative to costly laboratory devices and offer a practical way to analyze the CMJ and SQJ in the field. Strength and conditioning professionals may consider video applications as reliable assessment devices for field settings when lacking access to laboratory-based force plate or motion capture assessment protocols.
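
The agreement statistics above are correlation coefficients between paired jump measurements. As a minimal sketch of how one such coefficient could be computed (the variable names and sample heights below are hypothetical, not study data):

```python
import numpy as np


def pearson_r(x, y) -> float:
    """Pearson product-moment correlation between two paired samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))


# Hypothetical paired jump heights (cm): force plate vs. a video app.
force_plate = [38.1, 42.5, 35.0, 47.2, 40.3]
video_app = [37.8, 42.9, 34.6, 46.5, 40.1]
```

Note that a high correlation shows that two devices rank jumps similarly; it does not by itself show that their absolute heights agree, which is why agreement statistics such as the intraclass correlation or Bland-Altman limits are often reported alongside it.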

Friday, July 13, 2018, 10:30 AM–12:00 PM

Relationship of Countermovement Vertical Jump Performance and Pitching Velocity

P. Donahue,1 E. Beiser,2 C. Williams,1 S. Wilson,1 C. Hill,1 L. Luginsland,1 and J. Garner3

1 University of Mississippi; 2 Minnesota Twins Baseball Club; and 3 Troy University

Purpose: Pitching a baseball requires highly coordinated, sequential movements of the whole body demanding both high velocity and accuracy. This whole-body movement is initiated by the ground reaction forces (GRF) transferred up the kinetic chain and through the upper extremity to the baseball itself. The countermovement vertical jump (CMVJ) is commonly used as a testing measure in athletic performance and has shown varying degrees of relationship to athletic performance. Lower body power output is a highly emphasized aspect of training baseball pitchers, as it initiates force and power generation in the throwing motion. Thus, the purpose of this study was to investigate the relationship between CMVJ performance and pitching velocity in professional pitchers. Methods: Twenty-eight professional baseball pitchers (age 23.31 ± 2.41 years, mass 97.09 ± 10.36 kg, height 189.63 ± 4.27 cm, body fat 11.87 ± 3.69%) were recruited for this investigation. All testing occurred as part of the minor league spring training physical assessment protocol. Variables of interest in the CMVJ included mean concentric force (MCF), peak power (PP), mean power (MP), normalized peak power (PPKG), normalized mean power (MPKG), peak velocity (PV), and mean velocity (MV). Each subject performed a standardized dynamic warm up prior to testing. Each subject then performed 3 CMVJs with a wooden dowel placed across the upper back in a high bar squat position. Subjects were allowed to self-select stance width and countermovement depth. A linear position transducer was attached to the right end of the dowel. The mean of the 3 jumps was calculated for each variable of interest and used in the analysis. Pitching velocity (peak and mean) was measured during minor league spring training games with the use of a hand-held radar device. Results: Pearson product correlations were calculated between all variables.
No significant relationship was found between CMVJ variables and either peak or mean pitching velocity. Conclusions: This investigation showed that CMVJ performance and pitching velocity do not share the statistically significant relationship seen between the CMVJ and other athletic performance measures. Field-based tests such as the CMVJ may not be best suited as predictors of throwing velocity. These results may not be generalizable to pitchers at all levels, but only to those in the professional ranks. Practical Applications: Strength and conditioning professionals should still use the CMVJ as a method of assessing lower body power output. Results from CMVJ testing can be used to assist in the design and implementation of total-body training protocols to improve strength and power in professional baseball players. Caution should be taken in assuming that increases in CMVJ performance will impact pitching velocity at the professional level.

Friday, July 13, 2018, 10:30 AM–12:00 PM

The Role of Strength and Power in Football Striking

W. Ebben, C. Takahashi, D. Janeshek, M. Neal, and J. Reisimer

Lakeland University

Football players use a variety of striking techniques. The role of strength and power when performing football striking techniques, and how the forces produced during these techniques compare to each other, have not been investigated. Purpose: This study assessed the relationship between select measures of lower and upper body strength and power and the force produced during football striking techniques. This study also assessed the differences in forces produced during 3 football striking techniques. Methods: Fifteen men (age = 20.6 ± 2.4 years; height = 178.31 ± 9.12 cm; weight = 103.68 ± 15.84 kg; collegiate football experience = 1.4 ± 0.99 years) served as subjects for this study. Subjects provided written informed consent prior to participating in the study, which was approved by the Institutional Review Board. Subjects participated in a general and specific warm-up, a habituation session, and a testing session. The habituation session included tests of upper and lower body strength and power, and a demonstration of the football striking techniques used in the study. Strength tests included the 5 repetition maximum (RM) back squat (5RM-BS) and the 5RM bench press (5RM-BP). Power tests included the countermovement jump (CMJ), performed on a force platform, and a 6.80 kg medicine ball throw (MBT), performed on a wall-mounted force platform. Football striking techniques included the 2-hand shiver, shoulder strike, and forearm shiver. These techniques were demonstrated and subjects practiced each. Subjects returned 3–5 days after the habituation session to test the football striking techniques, which were performed on a wall-mounted force platform. Subjects performed 3 sets of 1 repetition of each. Peak ground reaction forces (GRF) were assessed. Pearson's correlation coefficients were used to determine the relationships between 5RM-BS strength, 5RM-BP strength, CMJ-GRF, and MBT-GRF and the GRF produced during the 3 football striking techniques.
Kinetic differences between the football striking techniques were determined using a repeated measures ANOVA. The trial-to-trial reliability of all outcome variables was determined using intraclass correlation coefficients. Results: Results reveal that the 2-hand shiver correlated with the CMJ-GRF (r = 0.55, p = 0.046), MBT-GRF (r = 0.56, p = 0.029), and the combined CMJ-GRF and MBT-GRF value (r = 0.70, p = 0.004). The shoulder strike GRF was correlated with the MBT-GRF (r = 0.51, p = 0.46). All 3 striking technique GRFs were correlated with each other (r = 0.51–0.70, p = 0.004–0.054). Shoulder striking produced approximately 45 and 49% more force than forearm and 2-hand shiver striking, respectively (p ≤ 0.001). Intraclass correlation coefficients for the test exercises and all dependent variables ranged from 0.73 to 0.98. Conclusions: Total body power and lower body power may be the most important determinants of force development during simulated football striking techniques. Of these techniques, the shoulder strike produced the most force, potentially due to the involvement of more of the subject's mass as well as total body inertia. Strength and conditioning coaches should consider prioritizing total body power and lower body power exercises to increase the force production of football players. Acknowledgments: We thank the Lakeland University football coaches, including head coach Colin Bruton, as well as Eric Treske and Tyler Wellman for their assistance during this study.

Friday, July 13, 2018, 10:30 AM–12:00 PM

The Effects of High-Intensity Exercise on Isometric Strength Parameters

T. Farney,1 M. MacLellan,2 P. Parra,2 A. Gonzalez,3 C. Hearon,3 and A. Nelson2

1 Texas A&M University, Kingsville; 2 Louisiana State University; and 3 Texas A&M University, Kingsville

Purpose: The focus of the investigation was to examine the effects of an increased total time of work on muscle activation (EMG), peak force production, and the rate of force development at different time points along the force-time curve. Methods: Eleven apparently healthy males (age: 22.1 ± 1.6 years; height: 175 ± 5 cm; weight: 79.8 ± 8.4 kg) participated in this protocol. The testing sessions consisted of performing one high-intensity exercise session along with pre- and post-exercise (PRE/POST) isometric mid-thigh pull tests to measure peak force production (PF), rate of force development (RFD), and EMG amplitude and median frequency shifts (MDF). The exercise session consisted of 4 exercises performed in this order: barbell thrusters (with a 45 lb barbell), squat jumps, lunge jumps, and forward jumps. The completion of all 4 exercises, with the designated rest between each, constituted a round. A total of 3 rounds were completed during testing. All exercises were performed for 20 seconds, with participants completing as many repetitions as possible during that time. Rest periods were 30 seconds between exercises and 1 minute between rounds. Upon completion of the 3 rounds, the post-exercise isometric mid-thigh pull test was performed. Beginning with the onset of contraction, rate of force development was measured in 50-millisecond time segments (TIME): 0–50, 51–100, 101–150, 151–200, and 201–250 milliseconds. Electromyography was recorded to measure changes in amplitude and MDF in the vastus lateralis (VL), rectus femoris (RF), and vastus medialis (VM). Results: For RFD, there was a main effect for TIME (p = 0.0040): when pooled across PRE/POST, the 0–50 milliseconds segment (4,102 ± 2,659 N·s−1) was lower than the other 4 time segments (8,658 ± 5,044; 8,636 ± 3,504; 10,079 ± 4,949; and 8,553 ± 4,522 N·s−1, respectively).
There was also a trend toward a significant PRE/POST effect (p = 0.059): when pooled across TIME, RFD tended to be higher pre-exercise (8,314 ± 3,072 N·s−1) than post-exercise (7,697 ± 2,982 N·s−1). No significant PRE/POST × TIME interaction was detected for RFD (p > 0.05). Post-exercise PF (3,498 ± 1,164 N) was significantly lower (p = 0.010) than pre-exercise PF (3,793 ± 1,323 N). There was no statistically significant difference (p > 0.05) between pre- and post-exercise EMG amplitude in the VL, RF, or VM. Additionally, there was no statistically significant difference (p > 0.05) between pre- and post-exercise MDF in the RF or VM. However, there was a trend toward significance (p = 0.066) for MDF of the VL from pre- to post-exercise (107.1 ± 15.46 vs. 84.65 ± 22.26 Hz). Conclusions: Fatigue was evident within this protocol, with peak force production being reduced following exercise. However, the EMG data demonstrate that the nervous system is highly resistant to change following a high-intensity exercise session. Practical Applications: The rationale of this study was to gain a better understanding of neural changes following a common exercise regimen used both by strength and conditioning coaches and in the general fitness arena. Additionally, the isometric mid-thigh pull test remains a good way to measure performance variables, given the strong reliability and correlations to strength reported throughout the literature. Future research should continue to investigate how the nervous system adapts in fatiguing situations to ensure performance is maintained.
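
The windowed RFD computation described in the Methods can be sketched as follows, assuming a force signal sampled at 1,000 Hz with contraction onset already aligned to index 0 (the onset-detection step, which the abstract does not detail, is omitted):

```python
import numpy as np


def rfd_by_segment(force_n, fs_hz: int = 1000,
                   seg_ms: int = 50, n_segments: int = 5) -> list:
    """Average RFD (N/s) in consecutive windows measured from
    contraction onset, e.g., 0-50, 51-100, ..., 201-250 ms."""
    samples = fs_hz * seg_ms // 1000      # samples per window
    seg_s = seg_ms / 1000.0               # window duration in seconds
    force_n = np.asarray(force_n, dtype=float)
    return [
        (force_n[(i + 1) * samples] - force_n[i * samples]) / seg_s
        for i in range(n_segments)
    ]


# A linear 1 N/ms force ramp should yield 1,000 N/s in every window.
```

Whether the study used this endpoint-difference definition or the mean slope within each window is an assumption here; both are common in the isometric-testing literature.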

Friday, July 13, 2018, 10:30 AM–12:00 PM

Relationships Between Various Isometric Force Measures

P. Fullmer,1 J. DeWitt,2 E. Hwang,2 J. Ryder,2 and K. English3

1 JES-Tech; 2 KBRWyle; and 3 University of Houston Clear Lake

Introduction: Isometric strength measures are a popular and well-documented method of obtaining peak force in a variety of positions. Of particular interest to the human spaceflight community is the isometric mid-thigh pull, as it is believed to be a valid and reliable means of assessing whole body strength and is being developed as a potential strength measure for monitoring crewmember fitness status during spaceflight. The relationship between the isometric mid-thigh pull and other possible isometric strength measures is still not well understood. Purpose: To examine the correlations between a variety of upper and lower body isometric strength measures. Methods: As part of an existing spaceflight study, 8 resistance-trained subjects (4 males and 4 females; age: 45.1 ± 9.1 years; mass: 82.9 ± 12.5 kg) were recruited. In 1 data collection session, subjects completed a battery of isometric strength measures to establish their peak force output in various upper and lower body movements. Peak force measures were obtained for the following isometric tests: mid-thigh pull (IMTP); leg press with a 90° knee angle (LP); bench press with the barbell set approximately 5 cm anterior to the chest (BP); and a 3-position overhead pull on a force gauge affixed to a bar with the elbows fully extended (IOPH), the elbows flexed at 90° (IOPM), and the bar set just inferior to the chin (IOPL). The IOP positions were assigned in a balanced, randomized order, while the other measures were performed in the order LP, IOP, BP, and IMTP. Pearson correlation coefficients with 95% confidence intervals were used to establish the relationships between the peak force measures. Results: The statistical comparisons demonstrated very strong correlations between the measures as shown below, with the exception of the IOPL and LP (r = 0.44, p = 0.28). Conclusions: Significant correlations exist between most isometric strength measures performed for this limited data set.
A larger dataset of 60 subjects will become available within the year that may provide additional insight into the relationship between the isometric force measures being measured for human space flight research studies. Practical Applications: The results of the study suggest that the many isometric measures used are very strongly correlated. The IMTP appears to be an efficient method for assessing peak isometric force and should be considered in evaluating maximal strength for future human space flight operations.

Friday, July 13, 2018, 10:30 AM–12:00 PM

Performance Differences Among Skilled Soccer Players of Different Playing Positions During the Standing Long Jump and Standing Long Jump Landing

J. Harry, M. Gonzalez, and B. Palmer

Texas Tech University

Introduction: Jumping ability is strongly correlated with athletic performance during competitive sports such as soccer. Although the vertical jump is primarily emphasized in contemporary research, the standing long jump (SLJ) appears to be more strongly related to acceleration performance in trained athletes. Thus, a larger body of evidence regarding SLJ performance in skilled soccer athletes is warranted. While vertical jumping and landing performance and mechanics differ among skilled soccer players of different playing positions, it remains unknown whether such differences also characterize SLJ performance. Purpose: To examine differences among skilled soccer players of different playing positions during maximum effort SLJs and SLJ landings. Methods: Fifteen NCAA Division I male soccer players (180.8 ± 9.4 cm, 80.3 ± 22.4 kg, 19 ± 1 years) were stratified into the following groups according to playing position: attackers (ATK; n = 2), midfielders (MID; n = 6), defenders (DEF; n = 3), and goalkeepers (GK; n = 4). Participants performed 3 maximum effort SLJs and 3 maximum effort SLJ landings (SLJ landings performed from the distance jumped during the SLJ) while vertical and anterior/posterior ground reaction force (vGRF; yGRF) data were obtained (1,000 Hz). For the SLJ, the following variables were compared across groups using Cohen's d effect sizes (ES; large ≥1.2): the amount of body weight unloaded in the vertical (unload vGRF) and anterior/posterior (unload yGRF) axes, the time to reach the unload vGRF and yGRF magnitudes, peak vGRF and peak yGRF magnitudes, the time to the peak vGRF (vGRFt) and yGRF (yGRFt), and the vertical and anterior/posterior rates of force development (vRFD & yRFD). For the SLJ landing, the following variables were compared across groups using Cohen's d effect sizes: peak vGRF and peak yGRF magnitudes, vGRFt, yGRFt, and the vertical and anterior/posterior rates of force attenuation (vRFA & yRFA).
All variables were averaged across trials and normalized to body mass for analysis. Results: The highly skilled soccer players exhibited position-specific SLJ and SLJ landing characteristics. Attackers showed greater vRFD and yRFD than defenders during the SLJ (ES = 1.73, ES = 1.54). Attackers also showed a greater unload yGRF during the SLJ than both defenders and midfielders (ES = 2.29, ES = 2.20). Midfielders showed a greater unload vGRF time and a greater peak yGRF during the SLJ compared to goalkeepers (ES = 1.67, ES = 1.43). During SLJ landings, midfielders showed greater peak yGRF and yRFA magnitudes than goalkeepers (ES = 1.48, ES = 1.49). Conclusions: The results of the study revealed differences in the SLJ and SLJ landing among position groups. It may be that the demands of the different playing positions contributed to the differences observed. For instance, the ATK and MID positions require creative jumping and landing movements, while DEF are required to perform reactive jumping and landing movements. This appears to explain the greater vRFD and yRFD observed in ATK compared to DEF. The greater peak yGRF in MID compared to GK during the SLJ landing might indicate a need for MID to prioritize landing, as a greater peak impact force might expose MID to greater overuse injury risk. Practical Applications: Strength and conditioning coaches might consider position-based training programs when targeting SLJ and SLJ landing abilities. Thus, coaches could more aptly target improvements in specific abilities related to certain position groups.
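
With only 2–6 players per position, the comparisons above rely on effect sizes rather than inferential tests. A minimal sketch of a pooled-SD Cohen's d of the kind presumably underlying those values (whether the authors used this exact pooled-variance form is an assumption):

```python
import math


def cohens_d(group_a, group_b) -> float:
    """Cohen's d between two independent groups using a pooled,
    Bessel-corrected standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd
```

With group sizes this small, d estimates are noisy, and some authors apply the Hedges' g small-sample correction; the abstract does not state whether that was done here.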

Friday, July 13, 2018, 10:30 AM–12:00 PM

Relationship of Ponderal Somatogram to One-Repetition Maximum Bench Press and Squat in Division II Football Players

M. Hunter,1 R. Schumacher,1 L. Wentz,2 J. Mayhew,1 and W. Brechue3

1 Truman State University; 2 Appalachian State University; and 3 A.T. Still University

American football requires unique body types that vary dramatically by playing position. Limited studies have described the differences in body types among the basic positions. Purpose: To describe the ponderal somatogram in Division-II football players and assess its relationships with body composition components and maximal strength. Methods: Players (n = 64) from a Division-II team were divided into 5 positions: offensive line (OL), defensive line (DL), offensive backs (OB), defensive backs (DB), and linebackers/tight-ends (LB/TE). Each player was assessed for 6 muscular and 5 non-muscular circumferences, which were converted to ponderal equivalents for muscular (PE-M) and non-muscular (PE-NM) components. In addition, body composition (lean mass and fat mass) was assessed by dual-energy x-ray absorptiometry (DXA). One-repetition maximums (1RM) for the bench press (BP) and squat (SQ) were also determined during the same week. Results: Collectively as a team, players had an exaggerated PE-M (98.4 ± 14.8) compared to PE-NM (86.4 ± 14.5), which yielded a high PE-M/PE-NM ratio (1.145 ± 0.074). Comparison of PE-M to body mass indicated that the shoulders and chest were appropriately developed relative to body mass, while the biceps and forearm were overdeveloped. Comparison of PE-NM revealed that the abdomen was appropriately developed for a given body mass, while the hips, knee, ankle, and wrist were underdeveloped. By position, OL and DL were significantly higher than OB, DB, and LB/TE on PE-M (shoulder, chest, thigh, and calf) and PE-NM (abdomen, hip, knee, ankle, and wrist). The PE-M/PE-NM ratio was not significantly different (p = 0.07) among positions. The correlation between fat mass from DXA and PE-NM (r = 0.91) was significantly higher (p < 0.01) than the correlation between lean mass and PE-M (r = 0.78).
Correlations of lean mass to BP (r = 0.58) and SQ (r = 0.59) were similar and nonsignificantly higher than those for PE-M (r = 0.53 and 0.44, respectively). The correlations of PE-M/PE-NM to BP (r = 0.07) and SQ (r = 0.17) were not significant. Conclusions: The findings of this study are consistent with previous work noting exaggerated muscular development relative to body mass in football players. Comparison with a more elaborate body composition analysis supports the use of the ponderal somatogram for evaluating player muscular development and perhaps muscular strength, as its relationship to 1RM bench press and squat strength was similar to that of lean mass. Practical Applications: PE-M is highly related to lean mass, and both lean mass and PE-M are similarly related to upper- and lower-body strength, such that PE-M could be used to track changes in muscular components and strength with training to enhance player assessment.

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

Associations Among Body Circumferences, BMI, %Fat, and Grip Strength in College-Aged Men

G. Leahy,1 J. Mayhew,2 T. Crowder,3 and A. Smith-Ryan4

1 Kirtland Air Force Base; 2 Truman State University; 3 United States Military Academy; and 4 University of North Carolina at Chapel Hill

The military continues to search for accurate and unbiased methods for estimating fitness for service. Recent changes proposed for functional field testing to evaluate fitness for combat operations include isometric grip strength. Limited research is available on the association between grip strength and body composition parameters in military-age men. Purpose: To evaluate the associations between selected anthropometric dimensions, body fat estimations, and isometric grip strength in college-aged men. Methods: Untrained college men (n = 930, age = 20.3 ± 1.9 years, height = 178.8 ± 6.7 cm, weight = 77.3 ± 13.2 kg) were evaluated for neck, waist, and hip circumferences, BMI, selected skinfolds, and 4 isometric strength measures. Skinfolds included the chest, abdomen, and thigh, which allowed prediction of body fat (%fat) using a gender-specific equation (GSE). Body fat was also estimated from the Army tape test (ATT) and 3 BMI equations derived from large population samples. A waist-to-height ratio (WC/Ht2) was also calculated. Two trials of right and left grip strength were performed, with the better trial used for analysis. Total grip strength (TG) was calculated as the sum of right and left grips. Results: All %fat estimations were significantly different from each other but highly correlated (r = 0.70–0.99). BMI-estimated %fat values were significantly (p < 0.001) related to TG (r = 0.34–0.35), although they accounted for <12% of the variance in TG. The ATT and GSE were also significantly correlated (r < 0.13) with TG but accounted for <1% of the variance in TG. BMI was significantly correlated with WC (r = 0.82) and had a higher correlation with TG (r = 0.34) than did WC/Ht2 (r = 0.12). TG was significantly correlated with back strength (r = 0.53) and leg strength (r = 0.26) but accounted for <28% of their variance. 
When a random sample (n = 739) was used to construct a prediction equation to estimate TG, the best variables selected were weight and waist and neck circumferences (R = 0.42, SEE = 15.4 kg, CV = 14.6%). Cross-validation (n = 191) produced a correlation of r = 0.41 (p < 0.001) and a nonsignificant difference (p = 0.67) between predicted and actual TG, with 48% of the group having predicted TG within ±10% of actual TG. Conclusions: These results suggest that body size measurements can be used with limited accuracy to estimate isometric strength in military-age men. Practical Applications: Additional research should be done to determine the accuracy of body dimensions and composition for estimating performance on other tasks associated with battlefield demands.
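The derivation/cross-validation workflow above can be sketched as follows. The simulated data, coefficients, and single-predictor (weight-only) model are illustrative assumptions, not the authors' equation; only the sample sizes (739 derivation, 191 cross-validation) come from the abstract.

```python
import math
import random

random.seed(2)

def pearson(xs, ys):
    # Pearson product-moment correlation
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

def simulate(n):
    # Hypothetical (weight kg, total grip kg) pairs with a modest association
    return [(w, 0.35 * w + random.gauss(78, 14))
            for w in (random.gauss(77.3, 13.2) for _ in range(n))]

def fit_ols(data):
    # Least-squares line TG = a + b * weight, fit on the derivation sample
    xs, ys = [d[0] for d in data], [d[1] for d in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

derivation, holdout = simulate(739), simulate(191)   # sample sizes from the abstract
a, b = fit_ols(derivation)
pred = [a + b * w for w, _ in holdout]
actual = [tg for _, tg in holdout]
r_cv = pearson(pred, actual)                         # cross-validation correlation
within = sum(abs(p - t) <= 0.10 * t for p, t in zip(pred, actual)) / len(holdout)
```

Fitting on one sample and correlating predicted against actual values in an untouched holdout, as here, is what guards against the inflated accuracy a single-sample fit would report.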

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

Body Composition and Body Composition Relative to Height in Collegiate Softball Players

M. Lane,1 R. Bean,1 K. Grassenberger,1 L. Doernte,1 R. Hartsell,1 J. Wagganer,2 and J. Barnes2

1 Eastern Kentucky University; and 2 Southeast Missouri State University

Introduction: Softball requires a wide variety of skills and abilities. Softball players who are faster and more powerful tend to be more successful on the field. Coaches might pressure athletes to be leaner and more muscular than is feasible. Information on typical body composition in softball, specifically relative to player height, must be investigated. Purpose: To identify body composition and lean body mass relative to athlete height in collegiate softball players. Methods: Seventy-seven athletes on collegiate varsity softball teams (1.68 ± 0.07 m, 71.9 ± 11.2 kg, 19.8 ± 1.3 years; mean ± SD) at multiple NCAA Division I institutions were enrolled in this study. Height and weight were recorded prior to body composition assessment, which was performed using a total body dual-energy x-ray absorptiometry (DXA) scan. Lean body mass (LBM) was then expressed relative to the height of the athlete to calculate the lean body mass body mass index (LBMBMI) of the athlete utilizing standard BMI calculation methodology. Athletes were divided into position groups: infielder, outfielder, pitcher, or catcher, based upon the primary defensive role. ANOVA tests were conducted between the groups with least significant difference post hoc analysis. Results: Overall BF% was 30.1 ± 7.6, with position values: infielders = 31.5 ± 7.5, outfielders = 24.19 ± 5.7, catchers = 36.5 ± 8.2, and pitchers = 32.1 ± 6.4. Outfielders were significantly leaner than the other position groups (p < 0.01). Lean body mass values were: infielders = 110.4 ± 14.9, outfielders = 106.7 ± 13.3, catchers = 103.8 ± 14.8, and pitchers = 109.7 ± 9.9, with no significant differences between the groups. Overall, lean body mass index values were 17.7 ± 1.7, with position values: infielders = 18.1 ± 1.8, outfielders = 17.8 ± 1.4, catchers = 16.8 ± 2.0, and pitchers = 17.2 ± 1.3, with no significant differences between the groups. 
Pitchers were significantly taller than infielders (p < 0.05), and outfielders were significantly lighter than infielders and pitchers (p < 0.01). Conclusions: Overall, there were significant differences in body fat percentage among positional roles; however, lean body mass, both total and relative to height, showed no significant differences. These data are useful in informing coaches of the body composition values and ranges found in the sport by position. Practical Applications: Overall, there seems to be little difference in the LBMBMI observed in softball players regardless of positional grouping. Perhaps athletes in this sport gravitate toward carrying a certain amount of lean mass relative to their body size regardless of position. Further research should be performed to elucidate any relationship between athletic success and the LBMBMI observed in athletes relative to others in the same positional role. Acknowledgments: The authors would like to thank the athletes for participating in this study.
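The LBMBMI described above is simply the standard BMI formula with lean mass substituted for total body mass. A minimal sketch, assuming SI units (kg, m); the example player is hypothetical:

```python
def lbm_bmi(lean_mass_kg, height_m):
    # Lean body mass index: BMI formula with LBM in place of total body mass
    return lean_mass_kg / height_m ** 2

# Hypothetical player: 50 kg of lean mass at the sample's mean height of 1.68 m
example = lbm_bmi(50.0, 1.68)   # ≈ 17.7, in line with the reported team mean
```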

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

Evaluation of Body Composition via Motion Capture System

P. Moodie,1 E. Mosier,2 A. Fry,2 J. Nicoll,2 and N. Moodie3

1 Dynamic Athletic Research Institute; 2 University of Kansas; and 3 Rockhurst University

Understanding and controlling body composition has been shown to impact sport performance. Specifically, alterations in body composition manifest changes in an athlete's development over time; therefore, tracking body composition consistently is a valuable data point in assessing an athlete's readiness. However, validated body composition tests are time consuming, and on a team scale it becomes unmanageable to perform such evaluations as regularly as needed. Purpose: To test a previously validated method of estimating body composition with an existing technology within current athlete performance testing, reducing testing time while maximizing available data points for player tracking. Methods: Two hundred eighteen subjects were tested using a motion capture system (MCS; DARI, Overland Park, KS) and an electrical impedance device (InBody 770; Cerritos, CA). A previously validated method for calculating body composition, which relies on girth measures, was used to calculate body composition from the MCS; the girth measurements needed as inputs for the formula were derived from the motion capture technology (DARI). The results from that formula were compared for each subject to a gold standard in body composition, the InBody 770. The calculated whole body fat mass (%) and lean mass (%) were statistically compared between the 2 systems. Results: The 2 methods showed no statistically significant difference when calculating whole body fat mass and lean mass percentages. Bland-Altman analysis showed a repeatability range of ±3.2%, which is within the currently acceptable range for body composition testing utilized in athletic settings. Discussion: This study demonstrates that an MCS can provide a valid way of collecting body composition data, which would result in reduced testing time and more testing opportunities for athletes. 
Additionally, support staff would continue to gather insightful information to better prepare athletes. Further research is needed on other predictive body composition models using the same technology. Practical Applications: An MCS as used in the present study can help identify whole body fat mass and lean mass percentages. This may provide the strength and conditioning professional and sports medicine clinician with helpful, time-sensitive information when monitoring athletes across a season and career.

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

Agreement Between Bioelectrical Impedance Analysis and Dual-Energy X-Ray Absorptiometry in Assessing Bone Mineral Content in Adults With Down Syndrome

A. Russell,1 M. Richardson,2 M. Fedewa,3 F. Conners,4 M. Stran,4 and M. Esco2

1 Auburn University at Montgomery; 2 University of Alabama; 3 Department of Kinesiology, The University of Alabama; and 4 The University of Alabama

Individuals with Down syndrome (DS) are at greater risk of osteoporosis compared to individuals in the general population. Providing accurate measures of bone mineral content (BMC) to those with DS can help these individuals to manage their health. Although dual-energy x-ray absorptiometry (DXA) can assess BMC with high precision, it is often unavailable and can induce anxiety in people with DS. Recently, bioelectrical impedance analysis (BIA) has been used to assess BMC in the general population, but its agreement with DXA in people with DS has not been examined. Purpose: The purpose of this study was to examine the agreement between BIA and DXA in measuring BMC in adults with DS. Methods: Twenty-one adults (8 men, 13 women) over age 23 with DS completed the study. Twenty-one healthy adults (8 men, 13 women) over age 23 without DS served as a control group. For each group, BMC was assessed with both BIA and DXA. BMC values from BIA and DXA were compared using dependent t-tests. Pearson correlation was also used, and 95% limits of agreement were determined using the method of Bland and Altman. Results: In adults with DS, BIA overestimated BMC compared to DXA when men and women were analyzed together (t = −5.237, df = 20, p < 0.001). There were no differences in mean BMC between BIA and DXA in males with DS (t = −1.116, df = 7, p = 0.301), but individual data points suggest that agreement differs for individuals with high and low BMC. In females with DS, BMC from DXA was significantly lower than BMC from BIA (t = −7.978, df = 12, p < 0.001). BIA underestimated BMC compared to DXA in control males (t = 5.641, df = 7, p = 0.001), but was not significantly different from DXA in control females (t = 0.879, df = 12, p = 0.397). Conclusions: The BIA method tested in the present study provides reasonable estimates of BMC for adult females in the general population. 
However, additional population-specific equations should be developed for adult males in the general population and for adult males and females with DS. Practical Applications: BIA shows promise as an alternative method to DXA for assessing BMC in the general population. It is more widely available than DXA, does not involve radiation, and is better tolerated by adults with DS. Further study is warranted to develop a population-specific equation for determining BMC from BIA in adults with DS.
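The 95% limits of agreement used above are computed as the mean difference between methods ±1.96 SD of the paired differences. A minimal sketch; the paired BMC values below are hypothetical, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    # Bias (mean difference) and 95% limits of agreement between paired methods
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired BMC values (kg) from BIA and DXA
bia = [2.9, 3.1, 2.7, 3.4, 3.0, 2.8]
dxa = [2.7, 2.9, 2.6, 3.1, 2.9, 2.7]
bias, (lower, upper) = bland_altman(bia, dxa)
```

A positive bias here would indicate BIA reading higher than DXA on average, mirroring the overestimation the abstract reports in adults with DS.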

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

Effects on BMI After 8 Weeks of Twice-Weekly Calisthenics Training in Undergraduate College Students

M. Silva 1 and L. Silva 2

1 University of Puerto Rico, Mayaguez; and 2 Albizu University

Exercise can be an effective way to prevent weight gain, lose weight, or simply maintain a desirable body weight. Although there are different ways to diagnose the body fatness of an individual, calculation of the body mass index (BMI) requires only height and weight; therefore, it can be used as a practical screening tool for the assessment of overweight and obesity, especially among adult non-athletes. Exercise frequency and intensity are determined by different factors, among the most relevant being the goals, age, and condition of the individual. Previous studies have found that while attending college, students tend to gain weight; however, it is possible they might gain height too. Purpose: To evaluate the effectiveness of an 8-week, twice-weekly calisthenics training program in male and female undergraduate college students. Methods: A total of 46 undergraduate college students (n = 23 males and n = 23 females; mean ± SD: age = 18.71 ± 1.08 years, height = 166.86 ± 8.63 cm, and weight = 68.17 ± 17.10 kg) served as subjects for this study. During the first 4 weeks of the program, each participant performed a total of 40 minutes of different upper and lower body calisthenics exercises, including core training; during the last 4 weeks, the exercise session was increased to 50 minutes total. Before and after the 8-week program, height (centimeters) and weight (kilograms) were measured to calculate BMI. The SPSS 22.0 statistical software package was used to analyze the collected data. A paired samples t-test was used to determine whether any significant difference (p ≤ 0.05) existed before and after the 8-week, twice-weekly calisthenics training program. Results: No significant differences were observed for weight (p = 0.90), height (p = 0.323), or BMI (p = 0.090); however, there were significant increases in the number of squats (50%), push-ups (75%), and sit-ups (60%). 
Conclusions: These results show that calisthenics training twice a week for 8 weeks was not effective in improving BMI; however, it was effective for improving muscular endurance. Practical Applications: The findings of the present study indicate calisthenics training twice a week can help improve muscular endurance but has no significant effect on BMI. These findings should be taken with caution because the academic workload and lifestyle of the sample analyzed in the present study might differ for students at other colleges or universities, which could affect their body composition.
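The paired samples t-test used above reduces to the mean pre/post difference divided by its standard error. A stdlib-only sketch; the pre/post BMI values below are hypothetical:

```python
import math
import statistics

def paired_t(pre, post):
    # Paired t statistic: mean difference over the standard error of differences
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1   # t statistic and degrees of freedom

# Hypothetical pre/post BMI values for six participants
pre = [24.1, 22.8, 26.0, 23.5, 25.2, 21.9]
post = [24.0, 22.9, 25.8, 23.6, 25.1, 21.8]
t, df = paired_t(pre, post)
```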

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

The Effects of a Ketogenic Diet on Body Composition in Resistance-Trained Females

K. Skemp,1 D. Baumann,2 and M. Stehly2

1 University of Wisconsin-La Crosse; and 2 University of Wisconsin-La Crosse

Introduction: Very low carbohydrate, high fat ketogenic diets have been shown to decrease fat mass while preserving lean body mass. This diet is often targeted toward the overweight population to promote fat loss, or toward endurance-trained male athletes to increase exercise capacity; however, this dietary pattern in resistance-trained athletes, particularly female fitness competitors, has not been extensively studied. Purpose: Since ketogenic resistance training research is limited for the female population, the purpose of this study was to examine whether the ketogenic diet would produce a favorable impact on body composition by producing high fat loss while maintaining muscle. Participants adhering to a ketogenic diet or a non-ketogenic diet were compared on overall weight loss, fat loss, and loss of lean body mass. Methods: A sample of 20 women (mean age = 20.27 years, SD = 1.60) was assigned to either a ketogenic group (n = 10), which followed a ketogenic diet of 10% carbohydrate, 70% fat, and 20% protein, or a control (non-ketogenic) group (n = 10), which followed their usual standard diet. Both groups participated in resistance training at least 3 times per week for a duration of 4 weeks. Those in the ketogenic diet group were not given a calorie target; instead, they were asked to meet their macronutrient goals daily. Body fat and lean body mass were measured using air displacement plethysmography. Ketone and glucose values were determined using urinary analysis strips. All measurements were taken at week 0 and at the end of week 4. Results: Across both groups, participants lost overall body mass (2.35 ± 3.67 lbs) and fat mass (1.06 ± 2.97 lbs). Ketogenic group participants lost more overall body mass (4.36 ± 3.59 lbs) than control participants (0.34 ± 2.57 lbs) (p = 0.005). 
Fat mass decreased to a greater extent in the ketogenic group (2.15 ± 2.46 lbs) compared to the control group (0.17 ± 3.08 lbs) (p = 0.03). Those in the ketogenic group did not lose more lean muscle mass than the control group. Conclusions: The present study found that participants following a ketogenic diet lost more overall body mass and fat mass, and maintained lean body mass to a greater extent, than control subjects. Thus, the ketogenic diet had a favorable impact on the body composition of lean muscle mass and fat mass in female resistance-trained athletes. Practical Applications: While the application of ketogenic diets is relatively new, little to no research has been done on female resistance-trained athletes. The results of the present study suggest that this diet may have a favorable impact on achieving weight loss goals and, more importantly, on preserving lean body mass among this population, and may be an effective adjunct to reaching desired training and body composition goals.
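The 10/70/20 macronutrient split above can be converted into daily gram targets using the standard 4/9/4 kcal-per-gram values for carbohydrate, fat, and protein. The 2,000 kcal intake below is an illustrative assumption, since participants were not given a calorie target:

```python
def macro_grams(kcal, cho_pct=0.10, fat_pct=0.70, pro_pct=0.20):
    # Convert a kcal budget and percentage split into daily gram targets
    # (carbohydrate 4 kcal/g, fat 9 kcal/g, protein 4 kcal/g)
    return {
        "cho_g": kcal * cho_pct / 4,
        "fat_g": kcal * fat_pct / 9,
        "pro_g": kcal * pro_pct / 4,
    }

targets = macro_grams(2000)   # e.g. 50 g CHO, ~156 g fat, 100 g protein
```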

Friday, July 13, 2018, 10:30 AM–12:00 PM

Back to Top | Article Outline

Comparison of Air Displacement Plethysmography and Bioelectrical Impedance Measurements on NCAA Division I Female Collegiate Athletes