Every medical school and residency program spends a great deal of time and effort teaching our doctors-in-training to take care of patients using the principle of “evidence-based medicine.” This means that the evaluation of every patient's medical problem, and all treatment recommendations, are based on knowledge that comes from clinical research. Such a rigorous approach, based on the best possible evidence, is the only way to ensure the highest quality care. It allows us to decide which tests are likely to help with a diagnosis, which treatments are most likely to be helpful—and the risk-benefit ratio for each.
So how do we get the evidence we need to make these often very difficult decisions? The answer is that the evidence comes from well-done, rigorous clinical research studies. When we think of clinical research, we often think of studies testing how effective a new treatment might be for a particular medical problem. This is one very important type of clinical research. From these studies, we learn about the effectiveness of new therapies. We also learn about potential side effects, whether there is a particular type of patient who might benefit more or less from the new therapy, and information about whether the new therapy has interactions with other medications the patient might be taking.
Other types of clinical research studies provide information about how common a particular condition is, what the risk factors might be, and whether there are any environmental or genetic factors that are important for causing the condition or making it worse.
In a recent “From the Editor,” I talked about the importance of animal research in ensuring continued advances in knowledge and therapeutic options for people with neurologic disease. This basic knowledge forms an important foundation, but clinical research is often the crucial next step needed to take these discoveries into the clinic.
Clinical research is broadly defined as research that involves human beings or human tissues. It follows logically, then, that human beings must be willing to volunteer to take part in clinical research, or the discovery process comes to a screeching halt. In this issue of Neurology Now, we have re-introduced a department called “Clinical Trial Watch,” in which we describe an ongoing clinical study and interview study personnel and volunteers. Although the article in this issue challenges the way one particular kind of clinical trial is carried out, the author's basic assumption is that clinical trials are crucial to the advancement of science. We hope that this information will encourage some of our readers to volunteer for the study described if they are eligible, or think about volunteering for another study.
As I talk with my patients in the clinic and hospital, I find that they are asking very sophisticated and important questions about the need for certain tests and the benefits and risks of therapy. These are some of the questions I have been asked recently: “What are my chances of having another stroke if I take aspirin every day?” “Is it better to take the new oral medication for multiple sclerosis, or should I just stick with the medication I am taking now?” “What are the chances that I will have memory problems if I have epilepsy surgery?” When these questions are asked, my patients expect answers that are based on real knowledge, not my best guess based on experience with the last several patients with similar neurologic problems. Without clinical research, and the volunteers in clinical research studies, none of us would have the kind of answers that I need and my patients demand.
If you have volunteered for a clinical research study, please tell us about it.
Take good care,
Robin L. Brey, M.D.