Podcasting is the new blogging — it's easy, virtually free, and anyone can do it. That does not mean, of course, that anyone will actually listen. Since I am as much of a sucker for a fad as the next guy, I have decided to join the podcasting movement. But I am also savvy enough to just go ahead and skip the part where no one listens. So while you can choose to listen [http://bit.ly/1MlOJ58], this pod is going to print no matter what. Here goes....
DB: Welcome to The Medically Clear Podcast. You are fortunate enough to join us for our inaugural broadcast. And depending on how this goes, perhaps you will be joining us for our final one as well. I am Dustin Ballard, an emergency physician, writer, and researcher, and joining me is a co-host known quite simply as the Fabulous “Creature” of the Literature.
Each episode I will be asking the “Creature” to pick a notable recent addition to the literature, and then flesh it out and suss it out for us. We'll get to our inaugural article in a moment, but first, Creature, can you tell us a little bit more about yourself and your prolific podcasting resume?
CREATURE: Well, thanks, Dustin. You are too kind. I must admit that today does represent a milestone in my podcasting career. It's, in fact, my very first podcast.
DB: Excellent, and welcome to the world of podcasting. So Creature, our featured science this week is from the British Medical Journal (2014;349:g7346), and addresses how medical television media package evidence for health consumers. OK, let's start our Knowledge Translation segment and ask you to drop a Synopsis Bomb regarding this study.
CREATURE: A synopsis bomb? That sounds explosive. How about instead I just skim the paper and point out any methodological land mines I come across?
DB: That'll do.
CREATURE: Well, then, here goes. This study from a team out of Alberta, Canada assessed the accuracy of recommendations from two popular syndicated medical television talk shows: “The Dr. Oz Show” and “The Doctors.” The study team reviewed 40 episodes of each show from 2013 (poor bastards!), identified the medical recommendations made on each show and the “whys” behind them. Examples of recommendations included “Vitamin E improves brainpower” and “Sneezing into your elbow prevents the spread of germs.” The team then combed through the scientific literature to look for supporting evidence. Out of 160 total recommendations, 80 from each show, they found that believable (or somewhat believable) evidence supported 33 percent of the recommendations from Dr. Oz and 53 percent from “The Doctors.” A sizeable minority of recommendations — 15 percent for Oz, 14 percent for the Docs — actually had evidence contradicting them. So, the authors conclude, and I quote:
“Consumers should be skeptical about any recommendations provided on television medical talk shows, as details are limited and only a third to one-half of recommendations are based on believable or somewhat believable evidence.”
DB: Well, that is far from medicine's version of must-see-TV. But, Creature, I am having a hard time believing this. That Travis Stork fella from “The Doctors” seems so likeable ... so handsome and wickedly charismatic. Are you telling me that I shouldn't believe him?
CREATURE: Yes, that's exactly what I'm telling you. Remember, looks can be deceiving. Just because you're a looker doesn't mean you're a booker.
DB: Hmm. Deep thoughts. So who should we believe then? The tobacco industry? Big pharma? The Doctor Julius Erving?
CREATURE: Great question, but before I answer, aren't you forgetting something? I know this is my first podcast, but I swear you told me that I'd have a chance to evaluate the evidence at hand.
DB: Of course. You are referring to our S&M segment. Science & Methods. This is where you, like a highly trained and overpaid structural engineer, probe for weaknesses in the study's foundational structure.
CREATURE: Exactly. Well, I have to tell you, Dustin, that if our gold standard of medical investigation is the randomized double-blind trial, well, then this study is really stretching to make bronze.
DB: I'm listening.
CREATURE: The authors call this a prospective observational study, and if that were actually true, that might put it in the bronze-to-silver category, but it's really a well-structured fact-checking exercise. Facts are important, don't get me wrong, but this type of study design is surely not a shining beacon in the history of evidence-based medicine.
DB: Got it. Well, we could dwell here. And let me just note that the authors do, in fact, have a data analysis supplement, but I think we'll have more fun getting to the broad perspective here. So, this feels like the right time for us to turn to the Citation Station, where the Creature reviews relevant evidence from other sources. Personally, I really love the jazz that the Citation Station brings to this discussion. But, first, let's review the ground rules. Creature, we know you are a literature expert, but we also respect our listeners' time. So, here's the deal. Three citations, and then we take a cognitive break. Got it?
CREATURE: Yep. Got it. I am all for limiting cognitive fatigue.
DB: OK, well, let's get 'er going then.
CREATURE: Let's start with another publication from BMJ. (2014;349:g7015.) This is from Sumner and colleagues from Wales, the UK, and the University of Wollongong in Australia.
DB: Always wanted to visit Wollongong.
CREATURE: You are not alone there, Dustin. So, the title of this pub is “The Association Between Exaggeration in Health-Related Science News and Academic Press Releases: Retrospective Observational Study.” Essentially, what this study did was examine the link between the press releases that academic institutions put out regarding medical studies on animals and the type of lay news coverage those studies received. Now, we both know from our own clinical experiences as well as intuitively that people tend to be impressed by expertise — or shall I say, the perception of expertise. This is why nutritional supplements with complex-sounding biochemical activity may sell better. So, it's not surprising that in this case, exaggerated findings in academic news releases frequently got translated into the lay press. In particular, these researchers found that there was a lot of exaggeration when it came to taking the results of animal studies and then making inferences as to how those results might apply to humans.
DB: Let's give an example here, Creature. So, for instance, if giving ibuprofen to a worm helped it to live twice as long, that doesn't necessarily mean ibuprofen will help humans live to age 200. Worms are not people!
CREATURE: You got it. And, really, this wasn't a minor effect the researchers were seeing. They found that if academic press releases exaggerated animal-to-human inferences, the lay press was 56 times more likely to do the same.
DB: Fifty-six times? The cardiology literature would die for an odds ratio one tenth of that!
CREATURE: Fifty-six. And, not only that, they saw a similar effect with regard to exaggerated causal claims (vaccines cause autism, that sort of thing) and exaggerated advice as well.
DB: Oh, my. So how the “experts” frame something makes a big difference whether they are on TV or at an academic institution. OK, good stuff. You are rocking it. And while I know you could go S&M on this study, we've got to keep moving. Let's discuss Citation 2.
CREATURE: Well, sure. Citation 2 is from PLOS ONE, and was written by a group of neuroscientists from Europe. (Aug. 12, 2014; http://bit.ly/1EW9KBf.) They studied how accurately the lay press reported neuroscience research results.
DB: People do love their brain studies. Let me ask you, Creature, do you do neurobics regularly? You know, walk backwards on the treadmill, write with your off hand?
CREATURE: Dustin, I do believe you are straying off topic and into an area of exaggerated benefit.
DB: True. Apologies.
CREATURE: OK, so these researchers also trawled through the press and found a pretty low rate of accurate reporting and rather high rates of overly optimistic interpretation.
DB: More fact checking.
CREATURE: Yes. And in over 1,000 articles published between 2008 and 2012 — so, a pretty good-sized sample — they found that accuracy as well as optimism was dependent on a few key variables. One being whether the news being reported was part of a, I quote, “neuroscience news wave” (defined as a period of six consecutive days when there was a statistically significant burst of articles on neuroscience), as well as the type of publication the article appeared in: free, popular, or quality. When articles were written during a “news wave,” they were more likely to be optimistic in tone. And, when written in a quality publication, they were more likely to be accurate. Overall accuracy, however, was quite low, about half as accurate as Dr. Oz and “The Doctors.” Furthermore, they found that only 13 percent of articles were balanced in that they discussed both the strengths and limitations of the work.
DB: Fair and balanced, Creature. Not everyone can do it. We can't all be Fox News.
CREATURE: If only we could.
DB: But let's try. Give us the quick S&M on this study.
CREATURE: I give them a bronze medal. Not bad for a descriptive fact-checking study.
DB: Got it. OK, Citation 3?
CREATURE: Well, it's really a recommendation. Dr. Ben Goldacre's book Bad Science. It's a few years old now but still very relevant, especially given our recent Mickey Mouse measles outbreak here in California. His chapter on how the British media let the Dr. Wakefield MMR scare get out of control should really be mandatory reading. And, to be quite honest, he really captures the essence of today's topic.
DB: Which is?
CREATURE: Misinformation and misinterpretation are everywhere around us, sometimes deliberate, sometimes not. Unfortunately, it seems that when it comes to evidence-based medicine (which is already a complex model), evidence is even more likely to be abused or misapplied, and conclusions are more likely to be warped. This is happening exactly in the field where we probably least want it to happen!
DB: So, what you are saying is that we shouldn't be surprised that medical recommendations on TV might not be supported by legitimate evidence and that the way the news media reports scientific or medical research may not be accurate or adequately balanced. So, what can we do? What can our listeners, especially those without training in how to evaluate scientific research, do to protect themselves?
CREATURE: Well, here I agree with Goldacre. Your best protection is education. And it's not that hard. If you can learn how to assess evidence, you can decide for yourself whether it is worth paying attention to.
DB: Creature, I couldn't agree more, and one of the goals of this podcast will be to help readers do exactly that. And, with that, I think we are just about ready to wrap up this inaugural podcast, but before we go, can you Take It Home by ranking this BMJ article? Should our listeners do which of the following? Frame it on the desktop and refer to it to change their daily life practice, file it and refer to it when feeling skeptical, or trash it along with all the other Annals of the Asinine. Those are the choices: frame it, file it, or trash it. How would you deal with this paper and why?
CREATURE: “Frame it,” Dustin. Not because it's good science; it really isn't. But because it helps illustrate a truism in evidence-based medicine: evidence is in the eye of the beholder. Don't be blind to the potential for misinterpretation or misuse. And, of course, savvy listeners will recognize that this advice applies to how they perceive this podcast as well.
DB: Well, thanks, Creature, and thanks to our at least two listeners — Mrs. Dustin and Mr. Creature. We are grateful for your loyal support and know you help keep us honest. Until next time, we hope that y'all live your lives based on medical clarity.