
SPECIAL COMMUNICATIONS: Letters to the Editor-in-Chief

Response

WALTON, SAMUEL; BROSHEK, DONNA; FREEMAN, JASON; CULLUM, C. MUNRO; RESCH, JACOB E.

Medicine & Science in Sports & Exercise: October 2018 - Volume 50 - Issue 10 - p 2178-2179
doi: 10.1249/MSS.0000000000001665

Dear Editor-in-Chief,

Thank you for the opportunity to address Mr. Asken’s comments (1) regarding our recent study highlighting “low” scores on computerized neurocognitive testing (i.e., ImPACT) that may be interpreted as valid, but also may represent suboptimal performance. In response, we will address our criteria for determining potentially suboptimal baseline performance, the role of effort in suboptimal performance, and the implications of retesting those with suboptimal baseline performance. Our investigation identified student-athletes scoring below an evidence-based criterion score (<16th percentile) that often defines impairment (2). Upon retesting, those subjects performed significantly better, indicating that the initially impaired scores were not valid representations of those individuals’ actual cognitive capacities. Only neurocognitive outcome scores that showed improvement with retesting were deemed “valid but invalid” (VBI), whereas those that did not improve were deemed to reflect legitimate ability.

The ImPACT test was designed for use by many types of healthcare professionals, although it is well documented that caution is warranted when interpreting ImPACT scores without input from a clinical neuropsychologist (3,4). Using the 16th percentile cutoff is “normal and customary” when interpreting individual test scores and in multivariate base rate analyses (5). Administering a greater number of tests increases the likelihood of obtaining at least one score < 16th percentile, assuming the sample fits manufacturer-provided normative values. Although student-athletes scoring < 16th percentile on any ImPACT score may be “broadly normal,” our hypotheses were predicated on the assumption that our sample would exceed “broadly normal” values given the academic rigor of the institution (5,6). Therefore, participant values < 16th percentile could reflect suboptimal performance, particularly when they normalized upon retest in this uninjured sample.
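The multivariate base rate point above can be illustrated with a toy calculation: if each of k outcome scores independently had a 16% chance of falling below the 16th percentile, the probability of at least one “low” score grows quickly with k. The independence assumption here is ours for illustration only; real ImPACT composites are correlated, so actual multivariate base rates (as tabulated in sources such as reference 5) are lower than this sketch suggests.

```python
# Toy illustration (our assumption, not an ImPACT algorithm): probability of
# at least one score below the 16th percentile when k scores are treated as
# independent. Correlated composites would yield a lower true rate.

def p_at_least_one_low(k: int, p_low: float = 0.16) -> float:
    """P(at least one of k independent scores < 16th percentile)."""
    return 1.0 - (1.0 - p_low) ** k

for k in (1, 2, 4, 5):
    print(f"{k} score(s): {p_at_least_one_low(k):.2f}")
```

Under these assumptions, a battery of five outcome scores would produce at least one isolated “low” score in over half of healthy examinees, which is why a single low score alone cannot define impairment and why retest normalization adds interpretive value.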

In keeping with our hypothesis, neurocognitive outcome scores improved markedly upon readministration. Table 3 in our article demonstrates that the rate of improved performance exceeded chance (i.e., regression to the mean), whereas declining performance was rare and occurred below chance (5). Although questionable effort was posited as a potential reason for suboptimal performance, effort was not objectively measured beyond ImPACT’s automated validity criteria and was only one potential explanation for VBI performance. “Sandbagging” may go undetected in up to 30% of high-achieving ImPACT participants and may reflect intentional suboptimal performance (6,7). Our results represent an important additional step in identifying ImPACT reports that have little to no clinical utility as postinjury comparators.

Regarding the question of the cost and time burden of repeated testing, the worth of a baseline assessment lies in its representation of an athlete’s true premorbid ability for postinjury comparison as part of the comprehensive post-concussion evaluation. In the event that a premature return-to-play decision is made, the financial burden of potential morbidity, mortality, and litigation associated with poor outcomes far outweighs these costs. Our study also emphasizes the necessity of a multimodal assessment (8) and of best practices when administering computerized neurocognitive tests (9), including using all available evidence to identify suboptimal performance so as to obtain valid baselines that have true utility in the return-to-play process.

Samuel Walton

Department of Kinesiology

University of Virginia

Charlottesville, VA

Donna Broshek

Department of Psychiatry and

Neurobehavioral Sciences

University of Virginia

Charlottesville, VA

Jason Freeman

University of Virginia Athletics

Charlottesville, VA

C. Munro Cullum

Neuropsychology Section

University of Texas

Southwestern Medical Center

Dallas, TX

Jacob E. Resch

Department of Kinesiology

University of Virginia

Charlottesville, VA

REFERENCES

1. Asken B. Isolated “low” test scores are often normal and valid. Med Sci Sports Exerc. 2018;50(10):2177.
2. Heaton RK, Miller SW, Taylor MJ, Grant I. Revised Comprehensive Norms for an Expanded Halstead-Reitan Battery: Demographically Adjusted Neuropsychological Norms for African American and Caucasian Adults. Lutz, FL: Psychological Assessment Resources; 2004.
3. Lovell MR. ImPACT Administration and Interpretation Manual. 2016.
4. Ott SD, Bailey CM, Broshek DK. An interdisciplinary approach to sports concussion evaluation and management: the role of a neuropsychologist. Arch Clin Neuropsychol. 2018;33:319–29.
5. Iverson GL, Schatz P. Advanced topics in neuropsychological assessment following sport-related concussion. Brain Inj. 2015;29(2):263–75.
6. Brown CN, Guskiewicz KM, Bleiberg J. Athlete characteristics and outcome scores for computerized neuropsychological assessment: a preliminary analysis. J Athl Train. 2007;42(4):515–23.
7. Schatz P, Glatts C. “Sandbagging” baseline test performance on ImPACT, without detection, is more difficult than it appears. Arch Clin Neuropsychol. 2013;28:236–44.
8. McCrory P, Meeuwisse W, Dvorak J, et al. Consensus statement on concussion in sport—the 5th international conference on concussion in sport held in Berlin, October 2016. Br J Sports Med. 2017;51(11):838–47.
9. Moser RS, Schatz P, Lichtenstein JD. The importance of proper administration and interpretation of neuropsychological baseline and postconcussion computerized testing. Appl Neuropsychol Child. 2015;4(1):41–8.
© 2018 American College of Sports Medicine