Still burdened by a need for meaningful clinical data, by a lack of widely disseminated public education, and by inconsistent clinical care, the world of sports concussion has become even more complicated of late. It is now, most definitely, a business as well. As evidence, with the marketplace flooded with products claiming to diagnose and manage concussion, the U.S. Food and Drug Administration (FDA) held a workshop in June to define the scientific criteria needed for successful FDA approval in this arena.
Central to the discourse at this workshop were many of the issues that Dr. Randolph brought to light in his original publication. There was strong consensus that any neuropsychological tool must meet the basic statistical criteria of validity, stability, and reliable change. Obviously, any tool should be acceptably sensitive to the cognitive effects of concussion. Of equal import, however, was the idea that any tool specifically claiming to diagnose concussion or to be useful in concussion management also should be shown to be acceptably specific. If a software product, for example, can reliably measure cognitive change but cannot differentiate between possible causes of that change, it should claim not to be measuring the effects of concussion but to be measuring cognitive change. Finally, there was clear agreement that, prior to commercial release, these tools be shown to perform reliably within a model that accounts for some of the common causes of variability seen in the real world. The effects of environmental distraction, motivation, and serial administration, for example, were all thought to be issues that warranted specific consideration.
These issues aside, the complexity and individual variability of brain function underscore the clear, albeit still potential, value of baseline testing. Unfortunately, I know of no data demonstrating that any baseline test affects clinical outcomes. That being said, I do not believe that all baseline tests are devoid of value for every clinician in every situation. This value is only realized, however, when the test in question is reliable and measures clinically relevant phenomena.
While I do agree with the basic tenets of Dr. Randolph's view, I find his target to be conspicuously narrow. Many readers, I fear, may come away with the idea that all baseline tests are the same and that arguments applied to one product, in this case ImPACT, can be easily applied to others. Certainly, the basic hurdles of validity, stability, and reliable change must be cleared by any baseline test. Whether they have been cleared by ImPACT or not, I will leave to others to debate. The next hurdle, however, is accounting for the noise introduced into test results by less tangible, and thus less easily studied, variables such as motivation and fatigue.
I urge the sports clinician who is considering the value of baseline testing to consider the true complexity of these issues, to appreciate that not all tests are created equal, and to be as leery of fool's gold as of dirty bath water.
Jeffrey S. Kutcher, MD
University of Michigan
Ann Arbor, MI