


Identifying Impairments after Concussion

Normative Data versus Individualized Baselines


Medicine & Science in Sports & Exercise 44(9):p 1621-1628, September 2012. | DOI: 10.1249/MSS.0b013e318258a9fb


As many as 3.8 million sport-related traumatic brain injuries occur annually in the United States, with evidence that many go unrecognized, unreported, and untreated (20,36). Sports medicine professionals are faced with the challenging task of evaluating and managing sport-related concussion. Evaluation of concussion should involve a multifaceted approach including a thorough clinical evaluation, a self-reported symptom checklist, postural control assessment, and computerized neurocognitive testing (13,25). Recent concussion consensus statements urge clinicians to establish preinjury baseline scores for each athlete (25). Baseline scores are thought to account for individual preinjury differences in neurocognition, symptoms, and postural control abilities, thereby providing a valid comparison for postconcussion outcomes. Despite a strong theoretical rationale for using baseline measures, there are several concerns regarding the application of baseline testing.

Completing a comprehensive baseline testing battery on every athlete can be very time-intensive and cost-prohibitive. Sports medicine professionals who have limited time to complete baseline testing may be forced to test multiple athletes at once. Environmental distractions such as talking, loud typing, or movement could negatively affect test performance, resulting in invalid representations of the athletes’ true capabilities (27,33). Baseline testing provides a single cross-sectional representation of an individual athlete’s state at the time of testing, which can easily be influenced by external factors such as the previous night’s sleep (8), temporary states of psychological distress (1), and effort put forth during testing (16). Athletes aware that their baseline scores will be used for postconcussion comparisons may intentionally choose to “throw” their baseline by exerting less-than-maximal effort (16). Invalid baselines are a concern, as a recent study suggests that only 52% of athletic trainers verify that baseline neurocognitive scores are a valid representation of each athlete’s individualized performance (10). These results carry a significant clinical implication because comparing postconcussion scores to invalid baseline scores could cause a clinician to make a premature decision to return an athlete to play. Although the risks of premature return to play are not fully understood (26,29), recent guidelines suggest that athletes refrain from physical and cognitive activity until they have fully recovered (25).

Recent reports suggest that many concussion assessment tools do not meet the diagnostic criteria needed to properly track an athlete’s recovery after concussion (29,30). The most widely used neurocognitive (30), postural control (12), and symptom assessment tools (21) have limited research regarding sensitivity, reliability, validity, and reliable change algorithms for identifying clinical impairment. Some studies suggest that these tools may have poor reliability, thereby limiting their clinical applicability (5–7,35,37). The combined use of symptom scores and neurocognitive values increases the sensitivity of concussion diagnoses but may simultaneously increase the rate of false-positive diagnoses in athletes without concussion (39). Using assessment tools with poor sensitivity and low reliability may yield unreliable and potentially invalid data for making return-to-play decisions.

The psychometric properties of neurocognitive scores, and the cognitive domains that they represent, can be difficult to understand and interpret without proper training. Most athletic trainers and team physicians do not receive formal training in neurocognitive score interpretation as part of their educational or medical training. There are no standard qualifications that clinicians must maintain to assess an athlete with concussion. Although educational workshops are offered for most neurocognitive test platforms, most clinicians who use the test batteries choose not to attend (10). Clinicians who have never received formal education regarding score interpretation should not be in charge of determining whether declines or improvements in postconcussion scores represent a true clinical change. Uncertainty and misunderstanding of postconcussion scores could cause a sports medicine professional to either act conservatively and unnecessarily hold a recovered athlete from returning to play or, worse yet, act hastily and prematurely return an athlete to play.

Many medical professionals rely on normative values to diagnose a wide variety of pathological conditions because individualized baseline values are not available for their patient populations. Normative values are derived by administering a test to a specific group (or groups) of individuals, and the resulting normative sample provides a standard against which an individual’s performance can be compared. Normative data are available for the most commonly used concussion assessment tools and may be useful in clinical scenarios where baseline testing is not feasible (8,21,31,34). Using normative neurocognitive, postural control, and symptom severity scores would allow clinicians to bypass the lengthy process of baseline testing while ensuring that valid scores were used for postconcussion comparison. If the two comparison methods, baseline comparison and normative comparison, identify the same postconcussion impairments, clinicians would be better served to use normative values.

We aimed to determine whether agreement existed in identifying immediate impairments after concussion using two comparison methods: 1) comparing postconcussion scores to individualized baseline scores and 2) comparing postconcussion scores to gender-specific normative means. We hypothesized that there would be strong agreement between the two comparison methods in identifying impairment.



Methods

Between 2001 and 2010, 1060 Division I male and female collegiate student-athletes completed preseason baseline testing at the University of North Carolina at Chapel Hill as part of an ongoing clinical program. We computed gender-specific normative means from preseason baseline measures collected in a subsample of 673 athletes with no history of self-reported concussions, learning disabilities, or attention-deficit disorders. The normative sample consisted of incoming freshmen and transferees, approximately the same age, completing baseline testing for the first time at our institution (Table 1). Two hundred fifty-eight student-athletes (males = 182, females = 76) were later diagnosed with concussion. Concussion was defined as an injury resulting from a direct or an indirect (i.e., impulsive) blow to the head that resulted in an alteration of mental status and any of the following symptoms: headache, nausea, vomiting, poor balance, sensitivity to noise, sensitivity to light, blurred vision, difficulty concentrating, difficulty remembering, trouble falling asleep, drowsiness, fatigue, sadness, and/or irritability (14). Athletes were evaluated for possible concussion if our medical staff observed demonstrable signs of concussion or if an athlete reported symptoms consistent with concussion to our medical staff. Data from athletes with concussion were obtained from two separate concussion assessment programs. Of the 258 athletes with concussion, 175 were baseline tested and later evaluated at the University of North Carolina at Chapel Hill. An additional 83 athletes were evaluated as part of a separate multisite assessment program. All athletes completed their first postconcussion assessment within 10 d after injury (2.66 ± 2.35 d after injury). We excluded athletes with concussion who were not evaluated during the acute stage (within 10 d) of their recovery (15,23,24). All athletes read and signed an informed consent form approved by the university’s institutional review board. Demographic data for our normative and concussed samples are presented in Table 1.

Demographics by gender for the normative and concussed samples at baseline.

Multifaceted baseline and postconcussion test battery

The multifaceted concussion baseline test battery consisted of a computerized neurocognitive test (Automated Neuropsychological Assessment Metrics (ANAM)), postural control assessment (Sensory Organization Test (SOT)), and a 15-item graded symptom checklist. Athletes completed baseline testing in groups no larger than four people. This same test battery was repeated for all athletes diagnosed with a concussion as part of an ongoing clinical program. Although most participants had complete neurocognitive, postural control, and symptom severity data, the clinical nature of this study resulted in a small number of missing outcome measures for individual athletes. Unequal sample sizes across analyses resulted from physician-driven clinical decisions to only use some portions of the battery, participant error, and minor research-driven changes in testing procedures as more literature became available over the 10 yr of data collection. Because of the slight differences in the evaluation tools between sites, athletes who were not evaluated at the University of North Carolina at Chapel Hill did not complete the Procedural Reaction Time and Code Substitution subtests of ANAM and the SOT.

Computerized neurocognitive assessment

The ANAM test battery consists of a series of subtests designed to examine several neurocognitive domains. The ANAM test battery has been observed to be both reliable and valid (4,11). The following subtests (and their cognitive domains) were included in our ANAM test battery: Simple Reaction Time Test 1 (reaction time), Simple Reaction Time Test 2 (reaction time), Mathematical Processing (concentration and working memory), Sternberg Memory Search (working memory), Match to Sample (visual memory), Procedural Reaction Time (reaction time and working memory), and Code Substitution (delayed memory). The Simple Reaction Time subtest was completed twice, once at the beginning of the test battery and again at the end to measure reaction time before and after a period of cognitive exertion. During baseline and postconcussion testing, athletes were given instructions before completing each ANAM subtest displayed on the computer monitor. As stimuli appeared on the screen, athletes were required to respond as quickly and accurately as possible by clicking either the right or the left mouse button. A variable interstimulus interval (time between consecutive stimuli) was used throughout all subtests to decrease anticipatory responses.

Data were collected, processed, and stored on a personal computer as the ANAM battery was completed. Throughput scores were calculated as the product of speed (mean reaction time) and accuracy (percentage of correct responses) to represent the overall efficiency for each subtest (3,8). A higher throughput score is indicative of a better performance for all the ANAM subtests.
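As an illustration, the throughput calculation described above can be sketched in Python. The text defines throughput as the product of speed and accuracy but does not specify the scaling, so the correct-responses-per-minute convention below is an assumption:

```python
def throughput(mean_rt_ms, pct_correct):
    """Subtest throughput as the product of speed and accuracy.

    Expressed here as correct responses per minute of response time;
    the exact ANAM scaling is an assumption, not taken from the text.
    """
    speed = 60000.0 / mean_rt_ms     # responses per minute, from mean RT in ms
    accuracy = pct_correct / 100.0   # proportion of correct responses
    return speed * accuracy
```

Under this convention, either a faster mean reaction time or a higher percentage of correct responses raises the throughput score, consistent with higher scores indicating better performance.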

Postural control assessment

Student-athletes at the University of North Carolina at Chapel Hill completed postural control testing using the SOT on the SMART Balance Master (NeuroCom International, Clackamas, OR). Shoeless athletes were positioned with a standardized foot placement relative to their height, and instructed to stand with their arms relaxed at their sides, looking straight forward, and standing as still as possible. Athletes stood on two 9 × 18-inch force plates connected by a pin joint. Both the support surface and the visual surround rotate in the anterior–posterior plane referenced to the athlete’s sway and sway velocity. Center-of-pressure data were sampled at 100 Hz.

The SOT consists of six sensory conditions repeated three times for a total of eighteen 20-s trials. Each athlete was acclimated to the test by completing the first six trials in the following order: eyes open with a stationary support surface (condition 1), eyes closed with a stationary support surface (condition 2), sway-referenced visual input with a stationary support surface (condition 3), eyes open with a sway-referenced support surface (condition 4), eyes closed with a sway-referenced support surface (condition 5), and eyes open with sway-referenced visual surround and support surface (condition 6). The next six trials were randomized across the sensory conditions, and the final six trials were completed in a second random order.

For each of the 18 trials, an equilibrium score was generated based on an algorithm developed for the SMART Balance Master. Percentages were computed expressing the angular differences between each athlete’s displacement of his or her center of pressure in the sagittal plane and his or her theoretical limit of stability (approximately 12.5° in the sagittal plane). Less postural sway in the anterior–posterior directions results in a higher equilibrium score and, thus, indicates greater postural control. An overall composite score was computed by averaging the following 14 equilibrium scores: the mean of all condition 1 trials, the mean of all condition 2 trials, and the individual trial equilibrium scores for conditions 3–6. A higher composite score is indicative of better postural control.
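The scoring described above can be sketched as follows. The SMART Balance Master algorithm itself is proprietary, so the equilibrium-score formula below is the commonly cited approximation based on the 12.5° theoretical limit of stability, and the function names are illustrative:

```python
def equilibrium_score(peak_to_peak_sway_deg, limit_deg=12.5):
    """Equilibrium score for one trial: 100 when the athlete does not sway,
    0 when sagittal-plane sway reaches the theoretical limit of stability."""
    return 100.0 * (1.0 - peak_to_peak_sway_deg / limit_deg)

def sot_composite(scores):
    """SOT composite: average of 14 equilibrium scores -- the mean of the
    condition 1 trials, the mean of the condition 2 trials, and the 12
    individual trial scores from conditions 3-6.

    `scores` maps condition number (1-6) to its list of three trial scores.
    """
    values = [sum(scores[1]) / len(scores[1]),
              sum(scores[2]) / len(scores[2])]
    for condition in range(3, 7):
        values.extend(scores[condition])
    return sum(values) / len(values)   # average of the 14 values
```

Because conditions 3-6 contribute all 12 of their individual trials while conditions 1 and 2 contribute one mean apiece, the composite weights the sway-referenced (more challenging) conditions more heavily.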

Graded symptom checklist

The graded symptom checklist is a self-report symptom scale that assesses the presence and severity of 15 concussion-related symptoms using a seven-point Likert scale. Each athlete was asked to rate his or her symptoms at baseline and after concussion by indicating which of the following numbers best described the severity: not experiencing = 0, mild = 1–2, moderate = 3–4, and severe = 5–6. During the preseason baseline evaluation, athletes were instructed to rate the severity of concussion-related symptoms they regularly experienced at least three times per week. During postconcussion test sessions, athletes were asked to rate the severity of their symptoms based on how they felt at the time of testing. The graded symptom checklist has been published previously (22). The total symptom severity score for this study is the sum of the 15 severity scores for headache, nausea, vomiting, poor balance, sensitivity to noise, sensitivity to light, blurred vision, difficulty concentrating, difficulty remembering, trouble falling asleep, drowsiness, fatigue, sadness, irritability, and neck pain.

Identification of impairments

Impairments for each injured athlete were determined by computing two postconcussion difference scores for each outcome measure as follows:

  • baseline difference = postconcussion score − athlete’s individualized baseline score
  • normative difference = postconcussion score − gender-specific normative mean

The baseline difference score compared each athlete’s postconcussion outcome to the athlete’s individualized baseline score, whereas the normative difference scores compared postconcussion outcomes to gender-specific normative means.

Reliable change parameters were computed to provide a point range for which normal variation may occur while accounting for practice effect. We used data derived from a subset of the athletes with concussion (n = 132) included in this study and a second sample of healthy control athletes (n = 38) who completed the test battery twice, at least 2 wk apart and no more than 4 months apart. We used the standard error of the measurement from baseline (SEM1) and session 2 (SEM2) and the z-score associated with an 80% confidence interval (z = 1.282) to compute the predictive cutoff values for each outcome measure. This methodology was chosen because using the SE of the measurement to compute predictive cutoff values allows for some generalization across samples, it accounts for some random measurement error, and it expresses reliable change parameters in the same units as the measure (40).

80% confidence interval predictive cutoff value = 1.282 × Sdiff, where Sdiff = √(SEM1² + SEM2²).

An 80% confidence interval predictive cutoff was used because it provides the most clinically conservative method for identifying clinically meaningful declines in performance. Reliable change parameters are presented in Table 2. Significant practice effects were observed for Simple Reaction Time Test 1, Mathematical Processing, Sternberg Memory Search, Match to Sample, Procedural Reaction Time, and Code Substitution.

Gender-specific normative values and reliable change parameters for each outcome measure.

Athletes’ postconcussion scores were then categorized as either “impaired” or “unimpaired” relative to their baseline score and then relative to the normative scores for each outcome measure. Athletes were identified as impaired if the difference score exceeded the reliable change parameter and unimpaired if the difference score did not exceed the reliable change score.
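Assuming the standard reliable change methodology, in which Sdiff combines the standard errors of measurement from the two sessions, the classification rule above can be sketched as:

```python
import math

Z_80 = 1.282  # z-score associated with an 80% confidence interval

def rc_cutoff(sem1, sem2, z=Z_80):
    """Predictive cutoff = z * Sdiff.

    Sdiff = sqrt(SEM1^2 + SEM2^2) is the standard reliable change formula;
    treating it as the exact computation used here is an assumption.
    """
    s_diff = math.sqrt(sem1 ** 2 + sem2 ** 2)
    return z * s_diff

def classify(post_score, reference, cutoff, higher_is_better=True):
    """Return "impaired" when the change beyond `reference` (the athlete's
    individualized baseline or the gender-specific normative mean) exceeds
    the reliable change cutoff, else "unimpaired"."""
    decline = (reference - post_score) if higher_is_better else (post_score - reference)
    return "impaired" if decline > cutoff else "unimpaired"
```

For throughput and SOT composite scores higher values are better; for total symptom severity the direction reverses, so `higher_is_better=False` would apply.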

Statistical analyses

Nine separate 2 × 2 McNemar tests for paired proportions were used to assess agreement on impairment status (impaired or unimpaired) between comparison methods for each of the ANAM throughput scores, the SOT composite score, and total symptom severity score. Results were considered significant at an a priori α level of 0.05. The McNemar test is a form of the χ2 statistic that examines the differences between marginal proportions for matched pairs of data. A significant result indicates that the two marginal proportions are significantly different from each other and thus do not agree.
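The exact form of the McNemar test operates only on the two discordant cells of each 2 × 2 table (athletes flagged as impaired by one comparison method but not the other). A minimal sketch of the exact binomial version, useful when discordant counts are small:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Two-sided exact McNemar p-value from the discordant cell counts:
    b = impaired by one comparison method only, c = impaired by the other
    method only. Under the null hypothesis the b + c discordant pairs
    split as Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no disagreement at all
    k = min(b, c)
    # lower tail of Binomial(n, 0.5), doubled for a two-sided test
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

Concordant pairs (impaired by both methods or by neither) do not enter the statistic; a small p-value indicates the two marginal proportions, and hence the two comparison methods, disagree.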


Results

Gender-specific normative means used for comparison are presented in Table 2. The baseline comparison method identified 2.6 times more impairments than the normative comparison method for Simple Reaction Time Test 1 (P = 0.043). However, the normative comparison method identified 7.6 times more impairments than the baseline comparison method for Mathematical Processing (P < 0.001). Disagreements between baseline and normative comparison methods were not observed for any of the other ANAM throughput scores. Likewise, no disagreements were observed for the SOT composite score and total symptom severity score. Cell frequencies and total percentages for all analyses are presented in Table 3.

Acute impairment disagreement and agreement frequencies (percentages of total sample) for each outcome measure.


Discussion

To our knowledge, our study is the first to compare the agreement between normative and baseline comparison evaluation methods. Our results indicate that, for most postconcussion outcomes, baseline and normative comparison methods identify the same impairments.

Neurocognitive impairments.

We observed two contradictory results: Simple Reaction Time Test 1 favored the baseline comparison, whereas Mathematical Processing favored the normative comparison. Although we observed significant disagreement between the baseline and normative comparison methods when identifying impairments for Simple Reaction Time Test 1, only a small overall percentage of impairments occurred for this neurocognitive domain. We speculate that most of these observed impairments are false-positives. The baseline comparison method identified 18 impairments in simple reaction time that the normative comparison did not detect. These 18 athletes achieved a mean Simple Reaction Time Test 1 throughput score of 271.3 at baseline, well above the normative mean. In contrast, the normative comparison method identified a significantly greater number of impairments than the individualized baseline comparison method for the Mathematical Processing subtest. Unlike Simple Reaction Time Test 1, the normative comparison method identified a large overall percentage of impairments for Mathematical Processing. Because people differ in their neurocognitive capabilities, some individuals may never be able to perform at a “normative” level. Some preexisting conditions have been shown to negatively influence neurocognitive scores (2,9), and the normative mean may have been higher than some injured athletes were able to achieve at their baseline. These athletes would always be considered impaired relative to normative values, causing clinicians to unnecessarily hold them from play when they may have truly recovered. Therefore, using a normative comparison method may lead to a more conservative, and at times unnecessarily restrictive, management of an injured athlete. Normative values are available for most commonly used concussion assessment tools and are often specific to gender, age, and sometimes sport (8,18,21,31). Clinicians who choose to use normative data should be certain to use both gender- and age-matched values to ensure a valid point of comparison. Age-specific normative means were not used in this study because our normative and concussed samples did not differ meaningfully in age.

The results of this study suggest that sports medicine professionals without adequate resources and time can largely identify the same impairments using normative neurocognitive values. Using normative values would allow clinicians to bypass the lengthy process of establishing individualized baseline measures as part of a multifaceted concussion evaluation program. Because this is the first known study to explore these two comparison methods, additional research is necessary to further explore the utility of baseline testing. For many sports medicine professionals, computerized neurocognitive testing is the most time-costly and expensive aspect of baseline testing. Sports medicine professionals who do implement baseline testing should ensure that environmental distractions are minimized (27), athletes get adequate sleep the night before testing (38), and maximal effort is encouraged (1). At a minimum, individual scores derived from baseline testing should be closely evaluated for validity. All clinicians, regardless of whether they use a neurocognitive test battery at baseline or only after injury, must make an effort to understand and stay familiar with the psychometric properties of the neurocognitive test battery that they use to ensure proper interpretation of postconcussion scores. Sound clinical judgment must be used when interpreting neurocognitive scores, regardless of which comparison method is used.

Our results may have been influenced by our decision to use a true normative sample by excluding athletes with a history of self-reported concussion, learning disabilities, or attention-deficit disorders from our normative sample but not from our concussed sample. Our concussed sample consisted of athletes with various preexisting conditions that may have influenced both their baseline and postconcussion scores. That said, we believe our sample is representative of a typical college varsity athletic population. Sports medicine professionals should identify athletes with diagnosed preexisting conditions known to negatively or positively affect neurocognitive and postural control scores and either obtain individualized preseason baselines for these individuals or compare their results to condition- or group-specific norms if available.

Postural control impairment.

We did not observe disagreement between comparison methods for measures of postural control on the SOT. These findings suggest that comparing the SOT composite score to normative values is an appropriate evaluation technique for identifying postural control impairments after concussion. The SOT used in our study is a sophisticated measure of postural control that is often unavailable to sports medicine professionals; however, similar postconcussion deficits have been identified on more sideline-friendly clinical tests such as the Balance Error Scoring System (BESS) (15,32). Future research is necessary to determine whether comparing postconcussion postural control measures from other clinical balance measures, such as the BESS, properly identifies impairments.

Graded symptom checklist impairment.

In this study, agreement existed between the normative and the baseline comparison methods for identifying athletes who were symptomatic immediately after injury. Although most athletes present with total symptom severity scores close to the normative mean, we maintain that sports medicine professionals should continue to complete the graded symptom checklist as part of a preseason baseline screening, if resources permit (21). The graded symptom checklist is easily administered, neither time-consuming nor expensive, and provides an individualized measure of self-reported symptoms. In addition, total symptom severity scores are not influenced by group administration (27), could aid clinicians in identifying other preexisting pathologic conditions, and could easily be incorporated into a standard preparticipation examination. Thus, the demand placed on sports medicine professionals is minimal. Comparing postconcussion symptom scores to normative values may be difficult if an athlete states that he or she also experienced concussion-related symptoms before injury. Interviewing an athlete about his or her preconcussion symptom severity may result in underreporting (17). Given the subjective nature of these data, we maintain that clinicians should administer a baseline graded symptom checklist to all athletes.

Practice effects.

The goal of baseline testing is to allow participants to serve as their own postconcussion controls. However, practice effects may cause score inflation during serial administration of some concussion assessment tools (19,28,35,37). Previous studies suggest that two administrations of ANAM (19), two administrations of the SOT (12), and three administrations of the BESS (7) may be necessary to offset practice effects and derive a stable baseline measure to be used during concussion assessment. Although we recognize the necessity of obtaining a valid comparison, these suggestions intensify the demand already placed on sports medicine professionals who are challenged to complete baseline testing. Although comparison to normative values collected during one period presents a similar problem, an arithmetic mean of a sample’s performance provides a more stable measure. Normative values derived from samples that have undergone multiple administrations of the test battery may act as the most stable comparison points for athletes with concussion who undergo serial evaluations. Using the normative comparison method in conjunction with reliable change parameters may provide the most feasible model for concussion evaluation. The purpose of reliable change parameters is to provide a point range within which normal variation may occur while accounting for practice effects. Clinicians can conclude with a given probability that a decline beyond this range is due to something other than chance (e.g., concussion). Using reliable change parameters in this study allowed us to identify impairments while accounting for practice effects.


The present study used only a computerized neurocognitive exam (ANAM), the SOT, and a 15-item graded symptom checklist. Future research is necessary to determine whether these same results apply to other neurocognitive, postural control, and symptom severity assessment tools. Agreement analyses for some outcome measures yielded low cell counts for disagreement between comparison methods. Although we accounted for this by using an exact McNemar test, the low counts may explain why these results were not statistically significant. Our data included postconcussion scores from 258 athletes with concussion collected during a 10-yr period. Most athletes were identified as unimpaired by both normative and baseline comparison methods for all outcome measures. Among the athletes who were impaired, we mostly observed agreement between normative and baseline comparison methods. These two factors contributed to low cell counts for disagreement because there were few overall impairments and few disagreements on those impairments. Because we observed two contradictory results (Simple Reaction Time Test 1 favoring baseline comparison and Mathematical Processing favoring normative comparison), we chose not to emphasize one comparison method as the gold standard. Future studies that seek to determine which method is superior might consider using multivariate classifiers to determine which comparison method best identifies lingering impairments after concussion.

These results only apply to those athletes who are evaluated during the immediate stage after their concussion. Future research is necessary to determine whether the normative and baseline comparison methods identify the same impairments using different concussion evaluation tools and during evaluations that take place after the immediate stage of injury. Both our normative and concussed samples consisted of male and female collegiate athletes. It is possible that comparing postconcussion scores to normative values derived from a different sample could influence impairment identification. Future research is necessary to determine how normative and baseline comparison methods agree for high school, professional, recreational, and other collegiate athletes.


Conclusions

Comparing postconcussion scores to normative values can be used after injury as part of a multifaceted evaluation for identifying acute neurocognitive and postural control impairments. Although previous emphasis has been placed on obtaining individual baseline measurements, our data suggest that, when using these concussion assessment tools, comparing postconcussion scores to normative values provides an appropriate and feasible evaluation approach. Clinicians should recognize that, regardless of which evaluation method they may use, return-to-play decisions should never be based solely on results from concussion assessment tools. A thorough clinical evaluation is paramount to safely managing concussion.

The authors declare no conflicts of interest. This study was not funded.

Results of the present study do not constitute endorsement by the American College of Sports Medicine.


1. Bailey CM, Samples HL, Broshek DK, Freeman JR, Barth JT. The relationship between psychological distress and baseline sports-related concussion testing. Clin J Sport Med. 2010; 20 (4): 272–7.
2. Balint S, Czobor P, Komlosi S, Meszaros A, Simon V, Bitter I. Attention deficit hyperactivity disorder (ADHD): gender- and age-related differences in neurocognition. Psychol Med. 2009; 39 (8): 1337–45.
3. Bleiberg J, Cernich AN, Cameron K, et al. Duration of cognitive impairment after sports concussion. Neurosurgery. 2004; 54 (5): 1073–8; discussion 1078–80.
4. Bleiberg J, Garmoe WS, Halpern EL, Reeves DL, Nadler JD. Consistency of within-day and across-day performance after mild brain injury. Neuropsychiatry Neuropsychol Behav Neurol. 1997; 10 (4): 247–53.
5. Broglio SP, Ferrara MS, Macciocchi SN, Baumgartner TA, Elliott R. Test–retest reliability of computerized concussion assessment programs. J Athl Train. 2007; 42 (4): 509–14.
6. Broglio SP, Ferrara MS, Sopiarz K, Kelly MS. Reliable change of the sensory organization test. Clin J Sport Med. 2008; 18 (2): 148–54.
7. Broglio SP, Zhu W, Sopiarz K, Park Y. Generalizability theory analysis of balance error scoring system reliability in healthy young adults. J Athl Train. 2009; 44 (5): 497–502.
8. Brown CN, Guskiewicz KM, Bleiberg J. Athlete characteristics and outcome scores for computerized neuropsychological assessment: a preliminary analysis. J Athl Train. 2007; 42 (4): 515–23.
9. Collins MW, Grindel SH, Lovell MR, et al. Relationship between concussion and neuropsychological performance in college football players. JAMA. 1999; 282 (10): 964–70.
10. Covassin T, Elbin RJ 3rd, Stiller-Ostrowski JL, Kontos AP. Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) practices of sports medicine professionals. J Athl Train. 2009; 44 (6): 639–44.
11. Daniel JC, Olesniewicz MH, Reeves DL, et al. Repeated measures of cognitive processing efficiency in adolescent athletes: implications for monitoring recovery from concussion. Neuropsychiatry Neuropsychol Behav Neurol. 1999; 12 (3): 167–9.
12. Dickin DC. Obtaining reliable performance measures on the sensory organization test: altered testing sequences in young adults. Clin J Sport Med. 2010; 20 (4): 278–85.
13. Guskiewicz KM, Bruce SL, Cantu RC, et al. National Athletic Trainers’ Association position statement: management of sport-related concussion. J Athl Train. 2004; 39 (3): 280–97.
14. Guskiewicz KM, McCrea M, Marshall SW, et al. Cumulative effects associated with recurrent concussion in collegiate football players: the NCAA Concussion Study. JAMA. 2003; 290 (19): 2549–55.
15. Guskiewicz KM, Ross SE, Marshall SW. Postural stability and neuropsychological deficits after concussion in collegiate athletes. J Athl Train. 2001; 36 (3): 263–73.
16. Hunt TN, Ferrara MS, Miller LS, Macciocchi S. The effect of effort on baseline neuropsychological test scores in high school football athletes. Arch Clin Neuropsychol. 2007; 22 (5): 615–21.
17. Iverson GL, Brooks BL, Ashton VL, Lange RT. Interview versus questionnaire symptom reporting in people with the postconcussion syndrome. J Head Trauma Rehabil. 2010; 25 (1): 23–30.
18. Iverson GL, Kaarto ML, Koehle MS. Normative data for the balance error scoring system: implications for brain injury evaluations. Brain Inj. 2008; 22 (2): 147–52.
19. Kaminski TW, Groff RM, Glutting JJ. Examining the stability of Automated Neuropsychological Assessment Metric (ANAM) baseline test scores. J Clin Exp Neuropsychol. 2009; 31 (6): 689–97.
20. Langlois JA, Rutland-Brown W, Wald MM. The epidemiology and impact of traumatic brain injury: a brief overview. J Head Trauma Rehabil. 2006; 21 (5): 375–8.
21. Lovell MR, Iverson GL, Collins MW, et al. Measurement of symptoms following sports-related concussion: reliability and normative data for the post-concussion scale. Appl Neuropsychol. 2006; 13 (3): 166–74.
22. McCaffrey MA, Mihalik JP, Crowell DH, Shields EW, Guskiewicz KM. Measurement of head impacts in collegiate football players: clinical measures of concussion after high- and low-magnitude impacts. Neurosurgery. 2007; 61 (6): 1236–43.
23. McCrea M, Barr WB, Guskiewicz K, et al. Standard regression–based methods for measuring recovery after sport-related concussion. J Int Neuropsychol Soc. 2005; 11 (1): 58–69.
24. McCrea M, Guskiewicz KM, Marshall SW, et al. Acute effects and recovery time following concussion in collegiate football players: the NCAA Concussion Study. JAMA. 2003; 290 (19): 2556–63.
25. McCrory P, Meeuwisse W, Johnston K, et al. Consensus statement on concussion in sport—The 3rd International Conference on Concussion in Sport held in Zurich, November 2008. PM R. 2009; 1 (5): 406–20.
26. McCrory PR, Berkovic SF. Second impact syndrome. Neurology. 1998; 50 (3): 677–83.
27. Moser RS, Schatz P, Neidzwski K, Ott SD. Group versus individual administration affects baseline neurocognitive test performance. Am J Sports Med. 2011; 39 (11): 2325–30.
28. Peterson CL, Ferrara MS, Mrazik M, Piland S, Elliott R. Evaluation of neuropsychological domain scores and postural stability following cerebral concussion in sports. Clin J Sport Med. 2003; 13 (4): 230–7.
29. Randolph C. Baseline neuropsychological testing in managing sport-related concussion: does it modify risk? Curr Sports Med Rep. 2011; 10 (1): 21–6.
30. Randolph C, McCrea M, Barr WB. Is neuropsychological testing useful in the management of sport-related concussion? J Athl Train. 2005; 40 (3): 139–52.
31. Reeves DL, Bleiberg J, Roebuck-Spencer T, et al. Reference values for performance on the Automated Neuropsychological Assessment Metrics v3.0 in an active duty military sample. Mil Med. 2006; 171 (10): 982–94.
32. Riemann BL, Guskiewicz KM. Effects of mild head injury on postural stability as measured through clinical balance testing. J Athl Train. 2000; 35 (1): 19–25.
33. Schatz P, Neidzwski K, Moser RS, Karpf R. Relationship between subjective test feedback provided by high-school athletes during computer-based assessment of baseline cognitive functioning and self-reported symptoms. Arch Clin Neuropsychol. 2010; 25 (4): 285–92.
34. Solomon GS, Haase RF. Biopsychosocial characteristics and neurocognitive test performance in National Football League players: an initial assessment. Arch Clin Neuropsychol. 2008; 23 (5): 563–77.
35. Valovich McLeod TC, Perrin DH, Guskiewicz KM, Shultz SJ, Diamond R, Gansneder BM. Serial administration of clinical concussion assessments and learning effects in healthy young athletes. Clin J Sport Med. 2004; 14 (5): 287–95.
36. Valovich McLeod TC, Schwartz C, Bay RC. Sport-related concussion misunderstandings among youth coaches. Clin J Sport Med. 2007; 17 (2): 140–2.
37. Valovich TC, Perrin DH, Gansneder BM. Repeat administration elicits a practice effect with the balance error scoring system but not with the standardized assessment of concussion in high school athletes. J Athl Train. 2003; 38 (1): 51–6.
38. Van Der Werf YD, Altena E, Vis JC, Koene T, Van Someren EJ. Reduction of nocturnal slow-wave activity affects daytime vigilance lapses and memory encoding but not reaction time or implicit learning. Prog Brain Res. 2011; 193: 245–55.
39. Van Kampen DA, Lovell MR, Pardini JE, Collins MW, Fu FH. The “value added” of neurocognitive testing after sports-related concussion. Am J Sports Med. 2006; 34 (10): 1630–5.
40. Wyrwich KW, Wolinsky FD. Identifying meaningful intra-individual change standards for health-related quality of life measures. J Eval Clin Pract. 2000; 6 (1): 39–49.


© 2012 The American College of Sports Medicine