Recently, two research articles were published that remind us of both the awesome power and the frailty of meta-analyses.
The first, published in Lancet Oncology by Sipahi and colleagues,1 reports on the association between angiotensin-receptor blockers (ARBs), a class of drugs used to treat hypertension, congestive heart failure, and diabetic nephropathy, and for cardiovascular risk reduction, and subsequent cancer occurrence. For those of us who haven't taken our medical boards for some time, the seven currently available ARBs are losartan, candesartan, valsartan, irbesartan, eprosartan, telmisartan, and olmesartan. Apparently, we have our -mibs and -mabs, and the cardiologists have their -sartans.
In this article, the authors focused their analyses on large, randomized controlled trials, from which they identified over 60,000 patients, 35,000 of whom had received an ARB. Control groups received placebo, a beta-blocker, or an angiotensin-converting enzyme (ACE) inhibitor.
The authors found that patients receiving an ARB had a risk of developing a new cancer that was 1.2 percentage points higher than for patients in the control group, a statistically significant difference. These new cancers included, for ARB-treated patients vs controls, lung cancer (diagnosed in 0.9% vs 0.7%); prostate cancer (1.7% vs 1.3%); and breast cancer (1.2% vs 1.1%). The authors conclude that ARBs are associated with a modest increase in the risk of new cancer occurrence, and that these findings warrant further investigation.
The second study, published by Jafri and colleagues in the Journal of the American College of Cardiology,2 examines the relationship between high-density lipoprotein (HDL) levels and the risk of cancer development in large, randomized controlled trials of lipid-altering interventions. This time, instead of comparing a treatment, such as a cholesterol-lowering medicine, with placebo, the authors looked at baseline HDL and low-density lipoprotein (LDL) levels in both treated and untreated patients, focusing on a test result rather than a medication.
With 625,000 person-years of follow-up (if you follow one person for one year, that equals one person-year of follow-up; three people for two years equals six person-years; etc.), the authors found that, for every 10 mg/dL drop in HDL level, the risk of developing cancer increases by 36%. Cancer risk also increases with higher LDL levels. This was irrespective of therapy; in other words, the association held whether a patient's HDL level was high on or off a cholesterol-lowering medication.
Now, the results of these studies occupied about a millisecond of national attention in the lay press, quickly dwarfed by seagulls dripping with oil on the Gulf Coast, an earthquake in Ontario, Canada, and the resignation of General McChrystal from leading our troops in Afghanistan, roughly in that order. Unless, of course, you take one of these medications.
“One more question for you, doc. I'm on this medication for my blood pressure, telmisartan. Should I stop it because of the cancer risk? And if I do, should I drink a lot of red wine, so I raise my HDL and further reduce the chance I'll get cancer?”
Either Love or Hate
People either love meta-analyses, or they hate them. One of the beauties of these studies is that they allow us to ask questions that might be impossible in real-time, with smaller studies. The numbers of patients included are staggering.
In the first study, enough subjects were included to fill Progressive Field, home to the Cleveland Indians baseball team. Can you imagine the feasibility of asking a stadium full of people to take a blood pressure medication for at least one year, just to determine whether or not they have a higher rate of developing cancer? In the second, you would have to identify 8300 people and follow them for the duration of their natural lives to reach similar conclusions.
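That 8300 figure can be reproduced from the second study's 625,000 person-years of follow-up. Here is a minimal back-of-the-envelope check, assuming an average lifespan of roughly 75 years (my assumption for illustration; neither study states this figure):

```python
# Back-of-the-envelope check: how many people, each followed for a full
# lifetime, would accumulate 625,000 person-years of follow-up?
# The 75-year average lifespan is an illustrative assumption.

person_years = 625_000
assumed_lifespan_years = 75

people_needed = person_years / assumed_lifespan_years
print(round(people_needed))  # 8333, close to the 8300 cited above
```

A shorter assumed lifespan would push the required head count higher, but the order of magnitude, thousands of people followed for decades, is the point.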
Meta-analyses allow us to take information from large studies that have been conducted already and pool all of the data, to assess even small increases in cancer risk. In the first study, for example, the included information was published over only a six-year period.
Thus, meta-analyses are also efficient—we can reach conclusions about specific questions much faster than we could by asking these questions prospectively, and for much less money. What would it cost to follow 8300 people for their entire lives to assess their cancer risk? More or less than what BP put in reserve for legal claims resulting from its oil catastrophe?
We can also ask multiple questions from the same study. Remember, none of the trials included in either of the meta-analyses described above had “development of cancer” as a primary outcome—the first collection of studies was focused on lowering blood pressure or reducing cardiovascular morbidity, while the second was eyeing HDL and LDL cholesterol levels. But all of the studies happened to collect information about cancer, as it was considered an adverse event of special interest, and all potential drug toxicities must be collected in treatment trials. We could use the same studies to ask a question about the risk of developing, say, Crohn's disease.
Only as Good as Studies Included
Meta-analyses also have their foibles, and they may be substantial. As detractors like to say, “crap in leads to crap out.” Meta-analyses are only as good as the studies they include. Unfortunately, there is a well-known publication bias favoring positive studies—those that show a beneficial effect of an intervention.
Because of this bias, meta-analyses will tend to include such studies, and thus will themselves tend to be positive. So authors of meta-analyses must be careful to scour the published literature, along with abstracts and even unpublished data, in an attempt to paint a balanced picture.
By focusing on non-primary endpoints, meta-analyses may also identify significant associations that are, in fact, completely spurious.
We routinely accept that an association is significant at a p-value of 0.05; in other words, we are willing to accept a 5% chance that, although the numbers report that an association exists (say, between use of an ARB and a beneficial cardiovascular endpoint), in fact it really doesn't.
In real terms, this means that, if we look at 20 possible associations in the same study, on average one will appear significant by chance alone, when in fact no true association exists. When meta-analyses explore non-predefined endpoints, such as cancer risk in studies of blood pressure medications, they add to the chance of erroneously identifying an association. How could this happen? What if patients who are treated with ARBs live longer? People who live longer have a greater chance of developing cancer, simply because they are alive, and at risk, for a longer period of time. While the authors of this particular study attempted to control for this, not all meta-analyses do.
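The multiple-comparisons arithmetic behind that claim can be sketched in a few lines, assuming for illustration 20 independent tests, each run at the conventional 0.05 threshold (real trial endpoints are rarely fully independent, so treat this as a rough sketch):

```python
# Illustration of the multiple-comparisons problem described above.
# Assumes 20 independent looks at the data with no true associations,
# each tested at alpha = 0.05; both figures come from the text.

alpha = 0.05      # conventional significance threshold
n_tests = 20      # number of associations examined

# Expected number of spurious "significant" findings among 20 true nulls
expected_false_positives = alpha * n_tests

# Probability that at least one of the 20 comes up falsely significant
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(expected_false_positives)   # 1.0
print(round(p_at_least_one, 2))   # 0.64
```

Under these assumptions a spurious “hit” is not merely possible but expected, and there is roughly a two-in-three chance of at least one false positive; correlated endpoints change the exact probability, but not the qualitative point.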
Finally, in interpreting results of meta-analyses, we have to determine in our own minds if statistically significant associations are truly clinically significant. Is a 1.2% increase in cancer risk from a meta-analysis meaningful? What about when balanced against the risk reduction of cardiovascular disease?
Meta-analyses are provocative, and should spur debate and future analyses of cancer risks. But they should not be viewed as conclusive in their own right, and only rarely should they drive us to alter clinical management of an individual patient.