Valid but Invalid: Suboptimal ImPACT Baseline Performance in University Athletes



Medicine & Science in Sports & Exercise 50(7):p 1377-1384, July 2018. | DOI: 10.1249/MSS.0000000000001592



Purpose: This study aimed to investigate the frequency of valid yet suboptimal Immediate Postconcussion Assessment and Cognitive Test (ImPACT) performance in university athletes and to explore the benefit of subsequent ImPACT administrations.


Methods: This descriptive laboratory study involved baseline administration of ImPACT to 769 university athletes per the institution’s concussion management protocol. Testing was proctored in groups of ≤2 participants. Participants who scored below the 16th percentile according to ImPACT normative data were readministered the ImPACT test up to two additional times because these scores were thought to be potentially indicative of suboptimal effort or poor understanding of instructions. Descriptive analyses were used to examine validity indicators and individual Verbal and Visual Memory, Visual Motor Speed, and Reaction Time ImPACT composite scores in initial and subsequent administrations.


Results: On the basis of ImPACT’s validity criteria, 1% (9/769) of administrations were invalid and 14.6% (112/769) had one or more composite score of <16th percentile but were considered valid. After one readministration, 71.4% (80/112) achieved scores of ≥16th percentile, and an additional 18 of 32 scored ≥16th percentile after a third administration. Verbal Memory was most commonly <16th percentile on the first administration (43%), Verbal Memory and Visual Motor Speed on the second administration (44% each), and Visual Motor Speed alone on the third administration (50%).


Conclusions: Approximately 16% of ImPACT records were flagged as invalid or had one or more composite scores of <16th percentile, potentially indicative of suboptimal performance. Upon readministration, 88% of those participants scored ≥16th percentile. Clinicians must be aware of suboptimal ImPACT performance as it limits the clinical utility of the baseline assessment. Further research is needed to address factors leading to “valid” but invalid baseline performance.

Computerized neurocognitive tests (CNT) have become widely used for the assessment of sport-related concussion (SRC) (1–4). On the basis of several surveys of certified athletic trainers (AT), the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) program is the most commonly used CNT to assist in the management of SRC (2–4). The ImPACT has demonstrated variable evidence of test–retest reliability over varying time points, with correlation values (Pearson r or intraclass correlation coefficient (ICC)) ranging from 0.12 to 0.91 and time between administrations spanning between 1 d and 2 yr (5–13). ImPACT has also demonstrated a wide range of values for sensitivity (53.8%–97.3%) and a narrower range for specificity (69.1%–97.3%) (12,14–18). These variable measurement properties complicate the clinical interpretation of both preinjury baseline and postinjury assessments (19–21).

Baseline testing is purported to add value to the postinjury evaluation, as it provides the ability for clinicians to compare injured values to premorbid neurocognitive functioning and may be particularly important in cases with comorbid conditions (e.g., attention or learning difficulties); however, limited empirical evidence exists to support the practice of conducting baseline assessments (19–22). A few studies have shown that most CNT users administer baseline tests to their patient population (1,2,4). For those clinicians who later readminister ImPACT after an injury, it is imperative that the baseline assessment results are as close as possible to the athlete’s true ability by ensuring the validity of test results and controlling for extraneous sources of error where possible. By doing this, the practicing clinician can help to optimize postinjury clinical decision making.

The ImPACT has automated validity criteria for identifying whether a baseline assessment is “valid” and interpretable, at least at a basic level. The ImPACT produces both raw scores and normative values for each of its composite scores: Verbal Memory (VEM), Visual Memory (VIM), Visual Motor Speed (VMS), and Reaction Time (RT) (23). A list of ImPACT’s subtests and related validity criteria may be found in Table 1, and it should be noted that the validity of an assessment is based on individual subtest outcomes rather than on the provided composite scores (23). As such, test takers may score in the very bottom percentiles on these outcomes relative to normative data without the test results being flagged as invalid. Possible explanations for a suboptimal baseline test include an intentionally poor performance or “sandbagging,” poor motivation, presence of learning disability and/or attention-deficit/hyperactivity disorder (ADHD), sleep quantity, psychological distress, test group size, and testing environment (24–35). Suboptimal baseline performance impedes the postinjury clinical decision-making process, because those results cannot serve as a basis for interpretation of cognitive impairment.

TABLE 1. ImPACT’s subtests and corresponding invalidity criteria.

The importance of making good clinical decisions cannot be overstated; therefore, the value of CNT programs like ImPACT as a part of the multimodal evaluation rests on the production of reliable and valid data. Traditional neuropsychological assessments have used the 16th percentile compared with normative values as a criterion score to indicate statistically abnormal (and potentially impaired) performance (20,36). The 16th percentile represents a performance that is 1 SD below the mean (50th percentile) on a normal distribution curve for that particular outcome. Some performances below the 16th percentile may represent an individual’s true ability; however, some may represent cases of suboptimal performance or “Valid but Invalid” (VBI) composite scores. These VBI composite scores may severely limit the clinical utility of the postinjury comparison to the baseline, as athletes may outperform their baseline scores after an injury. To date, the frequency of VBI ImPACT results has not been investigated at any level of sport. The purpose of our study was to examine the frequency of VBI performances in ImPACT records at the university level of sport. In addition, we sought to examine the benefit of subsequent administration of the ImPACT baseline assessment for those student-athletes with potentially VBI performances, to determine whether those performances were truly indicative of each test taker’s ability on those measures.
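The relationship between the 16th-percentile cutoff and a z-score of −1 follows directly from the standard normal distribution and can be verified in a few lines of Python (illustrative only; ImPACT’s own normative tables are the manufacturer’s):

```python
from math import erf, sqrt

def normal_percentile(z):
    """Standard normal CDF: the percentile rank of a z-score."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# A score 1 SD below the mean (z = -1) sits at roughly the 16th
# percentile, the conventional cutoff for statistically abnormal
# performance on a normally distributed outcome.
print(f"Percentile at z = -1: {normal_percentile(-1.0):.1%}")  # ~15.9%
```

This is why "below the 16th percentile" and "more than 1 SD below the normative mean" are interchangeable descriptions of the criterion score.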



Methods

Participants consisted of National Collegiate Athletic Association Division I student-athletes from an urban-based university. All subjects completed preinjury baseline ImPACT assessments as part of their institution’s SRC management protocol. This study was approved by the institutional review board, and written informed consent was obtained from each participant before inclusion in this study.


After providing consent, student-athletes were administered ImPACT (Version 2.1; ImPACT Applications, Inc., Pittsburgh, PA) while seated at a desktop or laptop computer, each equipped with an external keyboard and mouse. Computers were running the Windows 7 operating system. Participants were assessed individually or in pairs in a quiet room with limited environmental distractions. If two test takers were tested simultaneously, they used computers that faced away from each other, so as to reduce distractions. Subjects were asked if they wanted to read the instructions provided by ImPACT, if they wanted supplemental verbal instruction by the investigators, or both. Participants were allowed to ask questions between test sections to clarify instructions. Supplemental instruction did not deviate from the written instruction. In the absence of the investigators, an AT trained in ImPACT administration administered the test and addressed questions when necessary. ImPACT administration took approximately 25 min to complete.

After test administration, each participant’s results were examined to determine ImPACT-rated validity (“baseline ++” vs valid baseline), as well as the presence of composite scores below the 16th percentile compared with the ImPACT report-generated normative values. If a subject’s results had one or more composite scores below the 16th percentile, they were asked to repeat the ImPACT no earlier than 7 d after their initial baseline test. If that individual’s second report yielded one or more composite scores below the 16th percentile, they were asked to complete the ImPACT baseline assessment a third time. All subsequent administrations of ImPACT were done using the same methodology as the initial baseline test, and all assessments were performed using the ImPACT’s “Baseline” form. Composite scores that were below the 16th percentile in one assessment but improved to at or above the 16th percentile on a later assessment were deemed to be VBI. After the third administration, those participants who still had one or more composite scores below the 16th percentile had their case reviewed by a neuropsychologist.
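The retesting rule above can be sketched as a small decision procedure. This is a minimal illustration of the study’s protocol, not ImPACT software; the function and field names are hypothetical:

```python
# Sketch of the study's retesting rule. An administration flagged by
# ImPACT's automated criteria is invalid outright; a valid administration
# with any composite below the 16th percentile is treated as potentially
# "valid but invalid" (VBI) and retested, up to three total administrations.
PERCENTILE_CUTOFF = 16
MAX_ADMINISTRATIONS = 3

def classify_baseline(composite_percentiles, impact_flagged_valid):
    """Classify one administration as 'invalid', 'vbi', or 'valid'.

    composite_percentiles: dict of composite name -> normative percentile,
    e.g., {"VEM": 40, "VIM": 12, "VMS": 50, "RT": 30}.
    """
    if not impact_flagged_valid:
        return "invalid"  # caught by ImPACT's automated criteria
    if any(p < PERCENTILE_CUTOFF for p in composite_percentiles.values()):
        return "vbi"      # valid per ImPACT, but suspected suboptimal
    return "valid"

def needs_retest(attempt, classification):
    """Retest after a VBI result, up to three total administrations;
    after the third, the case goes to neuropsychologist review."""
    return classification == "vbi" and attempt < MAX_ADMINISTRATIONS
```

For example, a valid administration with VIM at the 12th percentile would be classified as VBI and scheduled for readministration no earlier than 7 d later.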

Data analysis

Descriptive statistics were used to analyze demographic data of all participants. These data included the following: sex, sport participation, concussion history, age at the time of the first baseline assessment, and diagnoses of ADHD, learning disorder (LD), and treatment for depression/anxiety. Each demographic variable was self-reported by participants within the ImPACT battery. Frequencies of composite scores below the 16th percentile were calculated for the first (B1), second (B2), and third (B3) baseline assessments. Participants were categorized into the VBI group at each time point if they had one or more ImPACT composite scores below the 16th percentile. Participants who achieved all composite scores at or above the 16th percentile were categorized into the valid group. Independent t-tests were used to compare composite scores from the initial baseline (VEM, VIM, VMS, RT), age, total symptom score, hours of sleep, and years of experience in sport between the VBI and valid groups. Paired-samples t-tests were performed separately for B1 vs B2, B2 vs B3, and B1 vs B3 in the VBI group because sample sizes varied between time points as the number of scores below the 16th percentile decreased over time. Mann–Whitney U tests were used to compare concussion injury history between groups. Additional analyses consisted of a comparison of the frequencies of potentially confounding factors such as the presence of depression/anxiety, LD, and ADHD in each group as well as the use of psychostimulant medications in those who endorsed being diagnosed with ADHD. All analyses were performed with α = 0.05 using SPSS Version 23.0 (IBM Corp., Armonk, NY).
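When an independent-samples comparison is run with equal variances not assumed, the statistic computed is Welch’s t. A minimal stdlib sketch of that statistic, for illustration only (the study used SPSS):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples (equal variances NOT assumed)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)  # SE^2 per group
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

Because the group variances enter separately rather than being pooled, the comparison remains valid when Levene’s test indicates unequal variances, at the cost of (usually fractional) degrees of freedom.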


Results

A total of 769 student-athletes who had completed the institution’s baseline testing battery at the time of data analysis were included. Nine (1.17%) of the student-athletes attained ImPACT profiles that were flagged as invalid by the ImPACT validity criteria. Seven had invalid assessments on their first attempt, one on their second attempt, and one on their third attempt. Five of the student-athletes who had invalid assessments on their first attempt achieved composite scores at or above the 16th percentile upon their second attempt, whereas one reached this criterion on their third assessment and the other was not tested a third time as they had completed sport participation after their second attempt. The other two student-athletes had at least one composite score below the 16th percentile through their third attempt and were referred to the neuropsychologist for consultation. Eight of the nine invalid assessments met only one of ImPACT’s invalidity criteria (Design Memory Learning Percent Correct < 50% = 5; Three Letters Total Letters Correct < 8 = 2; and Word Memory Learning Percent Correct < 69% = 1), and one student-athlete met two of the criteria (Design Memory Learning Percent Correct < 50% and Three Letters Total Letters Correct < 8). Only test takers with assessments that met ImPACT’s automated validity criteria were included in subsequent analyses. Therefore, a total of 760 student-athletes in the sample were included in our analyses. Demographic information may be found in Table 2.

TABLE 2. Participant demographics and group comparisons, n (%).

Group differences

There were no differences between those in the VBI group and those in the valid group in terms of concussion history, age, total symptom score, hours of sleep, or the endorsement of having treatment for depression/anxiety. However, those participants in the VBI group self-reported significantly more years of participation in their sport at the university level compared with those in the valid group (P = 0.02). The VBI group achieved significantly lower scores on the VEM, VIM, and VMS composite scores and significantly higher values on the RT composite score (P < 0.01), which indicated worse performance compared with the valid group for all composite scores (Table 2). Levene’s test for equality of variances was significant for the years of participation, VEM, VIM, and RT outcomes, so these comparisons were made with equal variances not assumed and remained statistically significant.

Regarding sex, 16.3% (72/442) of male subjects produced VBI assessments at B1 compared with 12.6% (40/318) of female subjects. Those in the VBI group were more likely to have been diagnosed with LD or ADHD, or self-reported attending special education courses in school (30/112 (26.8%) vs 73/648 (11.3%)). Specifically, a higher percentage of student-athletes in the VBI group endorsed having ADHD than those in the valid group (17/112 (15.2%) vs 47/648 (7.3%)). Among subjects with ADHD, those who took prescribed psychostimulant medication at the time of their initial baseline were observed to have a lower rate of VBI performance (4/29; 13.8%) compared with those who did not take medication (12/34; 35.3%). Of the male subjects with ADHD, 20 (43.5%) of 46 took psychostimulant medication at B1, whereas 9 (50.0%) of 18 female subjects with ADHD took such medication.

Suboptimal performance frequencies

After the exclusion of those files that were determined to be invalid by ImPACT, 24 (0.8%) of 3040 individual composite scores fell at or below the 3rd percentile when compared with the ImPACT normative data (6 VEM, 5 VIM, 11 VMS, and 2 RT), which would be deemed “impaired” by the ImPACT user manual; 69 (2.3%) of 3040 composite scores fell in the 4th through 9th percentiles (23 VEM, 22 VIM, 13 VMS, and 11 RT), which would be deemed “borderline” impairment; and an additional 234 (7.7%) of 3040 composite scores fell in the 10th through 24th percentiles (49 VEM, 67 VIM, 56 VMS, and 62 RT), which would be considered “low average” (37). Overall, approximately 14.7% (112/760) of the subjects had one or more scores below the 16th percentile and were subsequently assigned to the VBI group. The remainder (648/760) were assigned to the valid group. The most common ImPACT composite score below the 16th percentile at B1 was VEM (48/112), followed by VIM (40/112), RT (30/112), and VMS (29/112). Those with scores below the 16th percentile were readministered the baseline ImPACT test. The average time between these tests was 60.2 ± 63.6 d. After the second baseline assessment, 71.4% (80/112) of the sample achieved scores at or above the 16th percentile. The most common composite scores below the 16th percentile at B2 were VEM and VMS (14/32 each), followed by VIM (8/32) and RT (6/32). The remaining 32 participants with scores below the 16th percentile underwent a third administration of ImPACT. The average time between the second and third assessments was 74.8 ± 97.5 d. Of the 32 subjects assessed a third time, 56.3% (18/32) scored at or above the 16th percentile on all composite scores in the third assessment. The most common composite score below the 16th percentile at B3 was VMS (7/14), followed by VIM (6/14), and lastly by VEM and RT (3/14 each). A total of 14 (1.84%) of all student-athletes had at least one score below the 16th percentile after the third baseline assessment but were not retested a fourth time (Fig. 1).

FIGURE 1. Baseline assessment invalid, valid, and VBI frequencies.

After one or two readministrations, more than 98% of all student-athletes were able to achieve scores at or above the 16th percentile, compared with 85.4% after the initial baseline alone. Altogether, 13% of participants (98/760) achieved VBI scores on their initial baseline assessment, as defined by outcome score performances below the 16th percentile that improved with subsequent testing. Paired-sample t-tests (Table 3) revealed improvements in VEM, VIM, VMS, and RT composite scores between assessments B1 and B2 as well as between B1 and B3 (P < 0.05). Only VEM and VMS showed significant differences between assessments B2 and B3 (P < 0.05); VIM and RT showed nonsignificant improvement over that interval.

TABLE 3. Reliable change between baseline assessments in the valid but invalid (VBI) group.

As an exploratory analysis, we also examined the proportion of outcome scores that were deemed to have reliably changed according to the ImPACT’s built-in reliable change index (Table 3) (37). Between the first two assessments (n = 112), the rate of reliable improvement ranged between 30% and 41%, three to four times what would be expected by chance for each composite score (20). According to the ImPACT’s 80% reliable change confidence interval, 10% of performances on each composite score would theoretically be expected to reliably improve, and 10% to reliably decline, by chance alone (20). Between the second and third assessments (n = 32), as well as between the first and third assessments (n = 32), reliable improvement was near or below expected values (≤13%). Reliable decline in performance was below expected rates between all assessment time points for all composite scores (0%–8%).
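The logic of a reliable change index can be sketched in the Jacobson–Truax style shown below. This is a generic illustration, not ImPACT’s built-in index: the SDs, test–retest reliabilities, and interval width ImPACT uses are the manufacturer’s, so the parameters here are placeholders, and the sketch assumes a composite where higher scores are better:

```python
from math import sqrt

# z for an 80% two-sided confidence interval: by chance alone, ~10% of
# retests would be expected to "reliably improve" and ~10% to "reliably
# decline," matching the expected rates discussed in the text.
Z_80 = 1.282

def reliable_change(score1, score2, sd, retest_r):
    """Jacobson-Truax style reliable change classification.

    sd:       normative standard deviation of the composite score
    retest_r: test-retest reliability (e.g., an ICC from the literature)
    """
    sem = sd * sqrt(1.0 - retest_r)  # standard error of measurement
    se_diff = sqrt(2.0) * sem        # SE of the difference score
    rci = (score2 - score1) / se_diff
    if rci > Z_80:
        return "reliable improvement"
    if rci < -Z_80:
        return "reliable decline"
    return "no reliable change"
```

Note how heavily the classification depends on `retest_r`: the lower the test–retest reliability, the wider `se_diff` becomes and the larger a raw change must be before it counts as reliable, which is why the variable ICCs reported for ImPACT matter for interpretation.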


Discussion

The primary purposes of our study were to examine the frequency of potentially suboptimal ImPACT results in university athletes, as well as to examine the benefit of subsequent ImPACT administration. Retesting was performed in an effort to obtain ImPACT values reflective of a student-athlete’s true ability and inherently increase the clinical utility of the test results. Our findings suggest that 13% of these college-age student-athletes demonstrated evidence of suboptimal performance on their initial ImPACT baseline test, which was not captured by ImPACT’s automated validity criteria. However, within two subsequent administrations, these same individuals were able to achieve higher scores, which may better represent their true ability. Previous work regarding multivariate base rates posits that we should have expected approximately 40% of our student-athletes to have at least one composite score below the 16th percentile (20). Our findings call into question whether the current institution’s normative ImPACT composite score data are similar to the normative data provided by the ImPACT manufacturer. A manuscript in preparation, which includes all of the participants from the current study, will expound on this topic further.

Each assessment was performed using the ImPACT’s “Baseline” form. For many of the subtests in this form, the stimuli do not differ from administration to administration; however, the order in which they were presented did change. The time intervals between B1 and B2 and between B2 and B3 were on average 60 and 75 d, respectively. Test–retest reliability has not been measured over the specific time frames seen in the current study but has been previously reported using ICC over test–retest periods of 45 and 50 d for VEM (ICC = 0.19–0.78), VIM (ICC = 0.32–0.74), VMS (ICC = 0.38–0.91), and RT (ICC = 0.39–0.80) (5,10,11,13). Test–retest reliability is important because it is an essential ingredient in the calculation of reliable change indices (20). In this study, the rate of reliable improvement between B1 and B2 was observed to be three to four times higher than would have been expected by chance alone, yet was near or below expected rates between B2 and B3. The observed pattern of reliable change, in conjunction with the time between each assessment time point, makes it reasonable to believe that the observed improvement was more than simply regression to the mean. The average time between assessments also implies that improvements seen between tests were not necessarily due to a learning effect from repeated exposure to the ImPACT alone but were also subject to other factors, which may have included increased effort, increased attention, and/or a better understanding of the test instructions. This finding is an important one in the context of the postinjury assessment: if one of these student-athletes attained suboptimal performance at baseline and then went on to significantly improve after an injury, the information gained from neurocognitive testing would be of no clinical value.

The occurrence of suboptimal ImPACT performance may be due to one of the many factors discussed previously, including but not limited to the following: sandbagging, poor motivation, testing environment/distractions, test group size, insufficient sleep, psychological distress such as depression or anxiety, the presence of LD, and/or ADHD. The authors did not administer stand-alone measures of performance validity (effort) to provide an external measure of sandbagging or reduced motivation in the present study; therefore, an objective assessment of each participant’s best effort cannot be made, although each assessment was closely proctored. However, the authors did control the testing environment (38), tested no more than two participants at one time (29), and ensured the clarity of written and verbal instructions by addressing questions and providing supplemental instruction if requested or if the proctors observed any visible signs of confusion (e.g., not responding to stimuli appropriately via keyboard or mouse).

Participants with a self-reported history of ADHD/LD were observed to be more likely to be in the VBI group, which is supported by previous literature (25,32,33,39). Those with self-reported LD or ADHD were respectively 3.0 times and 2.1 times more likely to have performances below the 16th percentile on their initial administration of ImPACT, compared with participants without a self-reported diagnosis. The pattern of psychostimulant use being associated with better test performance is also consistent with related reports (Fig. 2) (25,26).

FIGURE 2. Use of medication by student-athletes diagnosed with ADHD. There were 17 student-athletes who were diagnosed with ADHD in the VBI group at the initial assessment and second assessment time points. There were only 9 at the third assessment time point. *One student-athlete in the VBI group did not indicate whether or not he/she was taking medication at the time of the initial assessment.

No differences were observed between groups regarding endorsement of psychological distress–related comorbidities (depression and/or anxiety). In fact, 5.4% of both the VBI and valid groups endorsed a history of treatment for depression/anxiety (Table 2). There was also no significant difference observed in the number of hours slept the night before the initial baseline assessment between the VBI (7.21 ± 1.24 h) and valid (7.21 ± 1.31 h) groups, which is similar to findings from previous research (40). Another interesting finding of the current study was that those in the VBI group had significantly more experience at the university level of their sport than those in the valid group (1.5 ± 1.69 yr vs 1.2 ± 1.40 yr). This could be a case of statistical significance with no real clinical implication, although it also may be representative of unknown and unmeasured factors influencing performance, such as differences in knowledge or culture regarding SRC. For example, the majority of more experienced athletes did not previously take part in a testing paradigm which included follow-up baseline testing, as was the case in the current paradigm. As such, these more experienced athletes could have approached the testing as more of a routine annual event, as opposed to the incoming athletes who might have viewed the testing as a more formal event. In addition, we did not collect information on each student-athlete’s exposure to the ImPACT at previous institutions and therefore cannot account for the influence of previous exposure on outcomes within this study.

Covassin et al. (1) have reported that only approximately half (55%) of all AT who administer baseline ImPACT to their student-athletes examine the results for validity, and it is unclear how often other health care professionals using ImPACT do so. Our data suggest that additional steps to examine the validity of ImPACT data are necessary to maximize the number of valid baseline assessments. Although this may be more time- and cost-intensive, the burden of administering baseline assessments in a controlled environment and reviewing baseline files for validity is preferable to the alternative of patients outperforming their own baseline assessments after injury, which would leave the clinician with clinically irrelevant data for return-to-play decision making. Although research exists analyzing the use of postinjury ImPACT outcome comparisons to normative data in lieu of a baseline assessment (thereby removing the burden of baseline testing), there are risks of false-positive and false-negative findings that could be deleterious to patient health (22). Furthermore, it is unknown how each individual institution’s ImPACT outcome distributions compare with the normative data provided by the ImPACT manufacturer.


Limitations

A limitation of the current study was the lack of a stand-alone measure of effort during the assessment process. Although the measurement of effort is not a typical part of most baseline testing protocols, it may have provided valuable information to aid in the interpretation of invalid and VBI ImPACT results. That being said, each test was administered individually or in pairs and was proctored by trained AT. Many of the second and third baseline administrations were not performed before the start of each athlete’s respective athletic season or off-season training. Much of the testing was performed between the months of June and October, and therefore most, if not all, of the athletes were participating in regular training for their respective sports for one or more of their follow-up assessments. Another limitation of this study includes the lack of knowledge regarding each student-athlete’s prior exposure to ImPACT. The number of previous ImPACT assessments at previous institutions, as well as the time since the most recent exposure for each student-athlete, was not measured but may have added value to the analyses performed in this study, especially with regard to improvement over multiple exposures at our institution. Similarly, although the average time between tests was approximately 2 months, a practice effect may have been present. Future research should investigate changes in outcomes after retesting using these same time intervals for those who did not have a score below the 16th percentile. This information would serve two purposes: first, to aid in determining whether or not a practice effect is present, and second, to observe whether or not the use of the 16th percentile as a cutoff score is applicable to all levels of student abilities. In the current study, those who had composite scores above the 16th percentile may still have exhibited suboptimal performance and/or effort that was not detected. However, our methodology identified the 13% (98/760) of participants who scored below our criterion score and later achieved substantially improved values.

Clinical application

ImPACT is one of the most commonly used CNT programs to assess athletes before and after SRC at all levels of sport. Our results support the need for clinicians to be aware of the test’s limitations to improve its clinical utility. Similarly, clinicians should be aware of potential patient-specific factors that may affect the ability to obtain the most clinically relevant information from baseline assessments.

  • Those with ADHD who have been prescribed psychostimulant medication should use that medication for ImPACT testing (25,26).
  • To avoid outperforming a baseline assessment during postinjury testing, a valid preinjury baseline should be obtained. Care should be given to the testing environment and administration procedures to eliminate extraneous sources of error, which may result in suboptimal performance (19,38,39).
  • Similarly, proctoring the test may improve understanding of test instructions and/or attention given to the test (38,39).
  • Our findings suggest that repeated baseline assessments in cases where performance is suspected to be suboptimal may result in important improvements in performance which might then increase the sensitivity of postconcussion assessments in recognizing impairment due to the concussion.
  • Although the ImPACT’s built-in invalidity criteria flagged 1.17% of the student-athletes in the current study, an additional 15% of the student-athletes were identified as performing below the 16th percentile on at least one composite score, which raised suspicions of suboptimal performance (36).


Conclusions

Overall, 15% of our sample showed evidence of potentially suboptimal performance on their initial ImPACT baseline assessment despite a highly controlled administration environment. Upon readministration of the ImPACT, 88% (98/112) of those test takers achieved scores at or above the 16th percentile on all composite scores. Subjects with self-reported LD or ADHD and those individuals with a greater number of years of participation at the university level were more likely to be categorized into the VBI group. Our results demonstrate that subsequent testing for those with a suboptimal baseline performance may increase the quality of baseline data, thereby inherently increasing the clinical utility of the test. Future research should more closely examine the incidence and modifying factors associated with suboptimal performance, investigate the sensitivity and specificity of ImPACT using the VBI retesting methodology, and determine how institutionally based normative values compare with national and manufacturer-provided normative values with and without using the VBI retesting methodology.

The authors would like to thank all of the contributing student-athletes and athletics staff who participated in and/or have supported our work in one way or another.

No authors received funding for this study.

The authors have no conflicts of interest to report. The results of the current study do not constitute endorsement by the American College of Sports Medicine. The results of this study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation.


References

1. Covassin T, Elbin RJ III, Stiller-Ostrowski JL, Kontos AP. Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) practices of sports medicine professionals. J Athl Train. 2009;44(6):639–44.
2. Meehan WP 3rd, d’Hemecourt P, Collins CL, Taylor AM, Comstock RD. Computerized neurocognitive testing for the management of sport-related concussions. Pediatrics. 2012;129(1):38–44.
3. Lynall RC, Laudner KG, Mihalik JP, Stanek JM. Concussion-assessment and -management techniques used by athletic trainers. J Athl Train. 2013;48(6):844–50.
4. Buckley TA, Burdette G, Kelly K. Concussion-management practice patterns of National Collegiate Athletic Association Division II and III athletic trainers: how the other half lives. J Athl Train. 2015;50(8):879–88.
5. Broglio SP, Ferrara MS, Macciocchi SN, Baumgartner TA, Elliott R. Test–retest reliability of computerized concussion assessment programs. J Athl Train. 2007;42(4):509–14.
6. Elbin RJ, Schatz P, Covassin T. One-year test–retest reliability of the online version of ImPACT in high school athletes. Am J Sports Med. 2011;39(11):2319–24.
7. Schatz P. Long-term test–retest reliability of baseline cognitive assessments using ImPACT. Am J Sports Med. 2010;38(1):47–53.
8. Schatz P, Ferris CS. One-month test–retest reliability of the ImPACT test battery. Arch Clin Neuropsychol. 2013;28(5):499–504.
9. Register-Mihalik JK, Kontos DL, Guskiewicz KM, Mihalik JP, Conder R, Shields EW. Age-related differences and reliability on computerized and paper-and-pencil neurocognitive assessment batteries. J Athl Train. 2012;47(3):297–305.
10. Resch J, Driscoll A, McCaffrey N, et al. ImPact test–retest reliability: reliably unreliable? J Athl Train. 2013;48(4):506–11.
11. Nakayama Y, Covassin T, Schatz P, Nogle S, Kovan J. Examination of the test–retest reliability of a computerized neurocognitive test battery. Am J Sports Med. 2014;42(8):2000–5.
12. Nelson LD, LaRoche AA, Pfaller AY, et al. Prospective, head-to-head study of three computerized neurocognitive assessment tools (CNTs): reliability and validity for the assessment of sport-related concussion. J Int Neuropsychol Soc. 2016;22(1):24–37.
13. Resch JE, Schneider M, Munro Cullum C. The test–retest reliability of three computerized neurocognitive tests used in the assessment of sport concussion. Int J Psychophysiol. 2017; [Epub ahead of print]. doi:10.1016/j.ijpsycho.2017.09.011.
14. Schatz P, Pardini JE, Lovell MR, Collins MW, Podell K. Sensitivity and specificity of the ImPACT Test Battery for concussion in athletes. Arch Clin Neuropsychol. 2006;21(1):91–9.
15. Schatz P, Sandel N. Sensitivity and specificity of the online version of ImPACT in high school and collegiate athletes. Am J Sports Med. 2013;41(2):321–6.
16. Van Kampen DA, Lovell MR, Pardini JE, Collins MW, Fu FH. The “value added” of neurocognitive testing after sports-related concussion. Am J Sports Med. 2006;34(10):1630–5.
17. Broglio SP, Macciocchi SN, Ferrara MS. Sensitivity of the concussion assessment battery. Neurosurgery. 2007;60(6):1050–7; discussion 7–8.
18. Resch JE, Brown CN, Schmidt J, et al. The sensitivity and specificity of clinical measures of sport concussion: three tests are better than one. BMJ Open Sport Exerc Med. 2016;2(1):e000012.
19. Resch JE, McCrea MA, Cullum CM. Computerized neurocognitive testing in the management of sport-related concussion: an update. Neuropsychol Rev. 2013;23(4):335–49.
20. Iverson GL, Schatz P. Advanced topics in neuropsychological assessment following sport-related concussion. Brain Inj. 2015;29(2):263–75.
21. Broglio SP, Cantu RC, Gioia GA, et al. National Athletic Trainers’ Association position statement: management of sport concussion. J Athl Train. 2014;49(2):245–65.
22. Schatz P, Robertshaw S. Comparing post-concussive neurocognitive test data to normative data presents risks for under-classifying “above average” athletes. Arch Clin Neuropsychol. 2014;29(7):625–32.
23. Lovell MR. ImPACT Administration and Interpretation Manual. 2016. Accessed March 9, 2018.
24. Cottle JE, Hall EE, Patel K, Barnes KP, Ketcham CJ. Concussion baseline testing: preexisting factors, symptoms, and neurocognitive performance. J Athl Train. 2017;52(2):77–81.
25. Littleton AC, Schmidt JD, Register-Mihalik JK, et al. Effects of attention deficit hyperactivity disorder and stimulant medication on concussion symptom reporting and computerized neurocognitive test performance. Arch Clin Neuropsychol. 2015;30(7):683–93.
26. Gardner RM, Yengo-Kahn A, Bonfield CM, Solomon GS. Comparison of baseline and post-concussion ImPACT test scores in young athletes with stimulant-treated and untreated ADHD. Phys Sportsmed. 2017;45(1):1–10.
27. Rabinowitz AR, Merritt VC, Arnett PA. The return-to-play incentive and the effect of motivation on neuropsychological test-performance: implications for baseline concussion testing. Dev Neuropsychol. 2015;40(1):29–33.
28. Sufrinko A, Johnson EW, Henry LC. The influence of sleep duration and sleep-related symptoms on baseline neurocognitive performance among male and female high school athletes. Neuropsychology. 2016;30(4):484–91.
29. Moser RS, Schatz P, Neidzwski K, Ott SD. Group versus individual administration affects baseline neurocognitive test performance. Am J Sports Med. 2011;39(11):2325–30.
30. Erdal K. Neuropsychological testing for sports-related concussion: how athletes can sandbag their baseline testing without detection. Arch Clin Neuropsychol. 2012;27(5):473–9.
31. Kuhn AW, Solomon GS. Supervision and computerized neurocognitive baseline test performance in high school athletes: an initial investigation. J Athl Train. 2014;49(6):800–5.
32. Elbin RJ, Kontos AP, Kegel N, Johnson E, Burkhart S, Schatz P. Individual and combined effects of LD and ADHD on computerized neurocognitive concussion test performance: evidence for separate norms. Arch Clin Neuropsychol. 2013;28(5):476–84.
33. Zuckerman SL, Lee YM, Odom MJ, Solomon GS, Sills AK. Baseline neurocognitive scores in athletes with attention deficit-spectrum disorders and/or learning disability. J Neurosurg Pediatr. 2013;12(2):103–9.
34. Bailey CM, Samples HL, Broshek DK, Freeman JR, Barth JT. The relationship between psychological distress and baseline sports-related concussion testing. Clin J Sport Med. 2010;20:272–7.
35. Hunt TN, Ferrara MS, Miller LS, Macciocchi S. The effect of effort on baseline neuropsychological test scores in high school football athletes. Arch Clin Neuropsychol. 2007;22(5):615–21.
36. Heaton RK, Miller SW, Taylor MJ, Grant I. Revised Comprehensive Norms for an Expanded Halstead–Reitan Battery: Demographically Adjusted Neuropsychological Norms for African American and Caucasian Adults. Lutz (FL): Psychological Assessment Resources; 2004.
37. Lovell MR. ImPACT User Manual. Hilton Head, SC: ImPACT Applications Inc; 2003.
38. Rahman-Filipiak AA, Woodard JL. Administration and environment considerations in computer-based sports-concussion assessment. Neuropsychol Rev. 2013;23(4):314–34.
39. Schatz P, Moser RS, Solomon GS, Ott SD, Karpf R. Prevalence of invalid computerized baseline neurocognitive test results in high school and collegiate athletes. J Athl Train. 2012;47(3):289–96.
40. Silverberg ND, Berkner PD, Atkins JE, Zafonte R, Iverson GL. Relationship between short sleep duration and preseason concussion testing. Clin J Sport Med. 2016;26:226–31.


Copyright © 2018 by the American College of Sports Medicine