

Evidence-based practice parameters are all the rage nowadays. In May alone, both the AANews and Neurology Today featured stories about how such parameters don't compromise physician autonomy, how some drugs they recommend for epilepsy don't have FDA approval, and how residents can help write them. If some of these sound suspiciously like public relations bites – announcing trivial accomplishments without considering the many problems such guidelines present – you might have a point. What's missing from the picture is a discussion of how to ensure that practice parameters truly reflect our best medical practices, and whether they should be evidence-based, or based on the best available evidence.


According to evidence-based guidelines, no procedures or treatments can be recommended unless first tested and proven in blinded randomized trials. That may sound reasonable in theory, but in practice, that often doesn't work. In fact, there are many situations where application of evidence-based criteria would lead to substandard care.

For example, would you recommend doing a biopsy to diagnose vasculitic neuropathy, or giving B12 to a patient with pernicious anemia and myelopathy? The correct answer in both cases is yes, but not according to evidence-based criteria, because in neither case was the intervention proven in a randomized trial. Some of the confusion results from the term “evidence-based” itself, which in fact restricts the use of evidence, as it excludes all information from peer-reviewed publications of interventions that have not been tested in randomized trials.

The unstated presumption in evidence-based medicine is that the only way to know – with sufficient certainty – that a diagnosis, procedure, or treatment is reliable or valid is through randomized trials, and that other types of evidence are anecdotal, and thus suspect.

No one doubts that randomized trials are useful, particularly where bias is an issue, or where the benefits are sufficiently small or delayed so as not to be obvious. However, the question is: are trials necessary in all situations? For example, do we need trials to know with sufficient certainty that the sun will rise tomorrow, that a bicycle rides better on round rather than square wheels, or that a broken bone will mend better if the ends are brought together? There is no easy formula for knowing, and the evidence needs to fit the question. In some instances, controlled trials are needed, whereas in others, common sense or simple outcome studies would do.

Evidence-based medicine may be of academic interest, but it is not particularly suitable for deciding practice guidelines, for various reasons. Controlled clinical trials are not designed to decide management in any particular case, but only to compare one procedure or treatment to another. For many clinical situations, there are no controlled trials on which to base a decision. Medical decision-making is complex, involving multiple variables such as disease severity, speed of progression, age, gender, co-existing medical conditions, other medications, and genetic or biological differences; it would be impossible to design trials that compare all the options (J Neurol Neurosurg Psychiat 2001;71:569–576).

Patients in clinical trials require fulfillment of strict inclusion criteria that would not be applicable to the broader patient population. In addition, requiring clinical trials would be unethical in many situations. It is ironic that evidence-based practice guidelines themselves have never been subjected to any sort of trials, or compared to other types of guidelines, to show that they improve outcome.

In particular, the need to subject established practices to randomized trials – so that they can be recommended – should be questioned. Blinded randomized trials are a newcomer to medicine, whereas most current practices were developed and proven through careful observation, reproducibility, and predictability over many decades – without randomization. This type of “anecdotal” evidence is responsible for most of the great scientific and medical advances to date, including the discovery of gravity, the rotation of the planets, the neurological exam, EMG and nerve conduction studies, penicillin, and the Babinski reflex, among much else. It would be unreasonable to require that these advances now undergo randomized trials. Such an exercise would be unlikely to yield important new information; it would waste valuable resources, and require withholding established treatments from control patients. There is no reason to abandon what we already know as we add newly acquired information.


In practice, a physician is often faced with deciding care on the basis of incomplete or imperfect evidence – for example, when the diagnosis is not entirely certain, or standard therapy fails or is contraindicated. In these situations, physicians use the best available evidence, including information from non-randomized trials, in addition to their own experience and clinical judgment, to decide between alternatives and on the best possible treatment. Practice parameters need to help the physician through that process, and recommend therapies that have been reported to be beneficial, even if they have not been subjected to randomized trials.


Ideally, practice parameters should represent the best current practices. As such, they need to consider all the evidence, particularly from randomized trials, but also from peer-reviewed publications of uncontrolled trials, retrospective reviews, and case series and reports, that represent our collective experience. This process is more complex than is required for evidence-based guidelines, but more closely reflects the realities of medical practice.

That information should be evaluated and synthesized by experts, who bring to bear not just their knowledge of the literature, but also their vast experience and respected clinical judgment, to formulate the guidelines. The method by which consensus can best be reached has been the subject of some discussion and analyses (Psychopharmacol Bull 1997;33:631–639), but newer procedures for quantifying expert opinion and minimizing bias can facilitate the process, as has been demonstrated in developing guidelines for epilepsy treatment (Epilepsy Behav 2001;2:A1–A50).


The AAN requirement that practice guidelines be evidence-based can have the undesirable effect of limiting options. For example, it would prevent the AAN from issuing clinical guidelines for the diagnosis of chronic inflammatory demyelinating polyneuropathy – even as insurance companies use the more restrictive research guidelines to deny treatment, because there are no controlled trials on which to base a recommendation. Similar issues arise in deciding Medicare coverage for diagnostic testing, as most existing tests were not developed through a process of randomized trials. At a time when we need a strong national organization to provide leadership and representation, the AAN is shackled by rules that impair its ability to take effective action. At the least, these issues need to be further examined, and opened to general discussion.


Dr. Norman Latov: “According to evidence-based guidelines, no procedures or treatments can be recommended, unless first tested and proven in blinded randomized trials. That may sound reasonable in theory, but in practice, that often doesn't work.”


• Caplan LR. Evidence based medicine: Concerns of a clinical neurologist. J Neurol Neurosurg Psychiat 2001;71:569–576.
• Kahn DA, Docherty JP, Carpenter D, Frances A. Consensus methods in practice guideline development: a review and description of a new method. Psychopharmacol Bull 1997;33:631–639.
• Karceski S, Morrell M, Carpenter D. The expert consensus guideline series. Treatment of epilepsy. Epilepsy Behav 2001;2:A1–A50.