Defining Evidence for Precision Medicine

A Patient Is More Than a Set of Covariates

Janssens, A. Cecile J.W.

doi: 10.1097/EDE.0000000000000992
Methods

From the Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA.

Editor’s Note: A related article appears on p. 334.

The author reports no conflicts of interest.

Correspondence: A. Cecile J.W. Janssens, Department of Epidemiology, Rollins School of Public Health, Emory University, 1518 Clifton Road NE, Atlanta, GA 30322. E-mail: cecile.janssens@emory.edu.

Precision medicine is the envisioned future model of health care in which treatments and care are tailored to the individual patient. The precision medicine slogan “One size does not fit all” stems from the fact that heterogeneity in treatment effects is common. Precision medicine research aims to find the best treatment for each patient through a better understanding of the underlying biology, finding novel treatments, and developing novel methods to better identify subgroups for whom treatments will work.

The biggest challenge in precision medicine is to make it evidence based. Tailoring treatments to each individual patient seems the only right thing to do, but how do we know how well a treatment will work for an individual patient? And, when multiple treatments are available, how do we know which one will work best for an individual patient? The answer is simple: we do not. And it is not likely that we ever will. However, we may be able to do better in identifying those groups of patients who are more likely to benefit.

In this issue, VanderWeele and colleagues propose a new method for assigning a treatment based on the expected benefit for individual patients, which is estimated using data from a randomized controlled trial (RCT).1 The expected benefit, the difference in the patient's prognostic outcome with and without treatment, cannot be directly obtained from RCT data, as a patient is in either the treatment or the control group, not both. For each patient in the trial, we do not know what their outcome would have been if they had been in the other arm of the trial. But that outcome is not entirely unknown either.

The outcome that a patient would have experienced in the other arm of the trial is expected to be similar to the outcomes of people in that arm who share relevant characteristics (covariates) with the patient. Based on these covariates, we can predict what the patient's outcome would have been in the other arm. Similarly, for patients who did not participate in the trial, we can predict the outcomes for both arms of the trial and calculate the difference as the expected treatment benefit.

The method of VanderWeele et al. fits two regression models: one to predict the outcome in patients who received treatment and one in those who did not. The first model estimates the probability of the outcome if the patient is treated, and the second estimates the same probability in the absence of treatment. The difference between the two probabilities, the increase in the probability of the outcome, is the expected treatment benefit for an individual patient based on their covariates.
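A minimal sketch of this two-model idea, on synthetic data; the covariates, effect sizes, and variable names here are invented for illustration and are not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic RCT: X = covariates, a = arm (1 = treated), y = binary outcome.
# The treatment effect is made to vary with the second covariate.
n = 2000
X = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)
logit = -0.5 + 0.8 * X[:, 0] + a * (0.7 + 1.0 * X[:, 1])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# One outcome model per arm: P(outcome | covariates, treated) and
# P(outcome | covariates, untreated).
model_treated = LogisticRegression().fit(X[a == 1], y[a == 1])
model_control = LogisticRegression().fit(X[a == 0], y[a == 0])

# Expected benefit per patient: the difference of the two predicted probabilities.
benefit = model_treated.predict_proba(X)[:, 1] - model_control.predict_proba(X)[:, 1]
```

With the two fitted models in hand, `benefit` can also be computed for patients outside the trial, as the commentary notes, provided their covariates were measured.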

In the next step, the authors present four scenarios that show how the method can be used to assign patients to treatment. When resources are not constrained, treatment can be given to everyone who is expected to benefit from it; when resources are constrained, treatment may, for example, be restricted to the top q% of people with the highest expected treatment benefit. The authors show how the threshold k that restricts treatment to the top q% can be derived by computing and comparing the total treatment benefit of a population for various values of k. They also show how the optimization works when the expected benefit needs to exceed some threshold δ, and when the difference in total benefit between treated and untreated patients needs to be maximized.
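Two of these assignment rules (the resource-constrained top-q% rule and the δ-threshold rule) can be sketched as follows; the function names are my own, and the benefit values are a toy example, not the authors' data:

```python
import numpy as np

def assign_top_q(benefit, q):
    """Resource-constrained rule: treat the top q% by expected benefit.
    The implied threshold k is the (100 - q)th percentile of the benefits."""
    k = np.quantile(benefit, 1 - q / 100)
    return benefit >= k

def assign_above_delta(benefit, delta):
    """Threshold rule: treat everyone whose expected benefit exceeds delta."""
    return benefit > delta

benefit = np.linspace(0.0, 1.0, 100)       # toy expected-benefit estimates
top20 = assign_top_q(benefit, 20)          # treats the 20 highest-benefit patients
worthwhile = assign_above_delta(benefit, 0.5)
```

In the paper itself, k is found by comparing the total population benefit across candidate values of k; the percentile shortcut above simply picks the k that exhausts a fixed treatment budget.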

The method reminds me of one that we developed, a while ago, for evaluating the performance of diagnostic tests.2 Like the benefits of treatment, the performance of a diagnostic test is not the same for all patients; it varies with patients' characteristics and prior screening results.

In our method, we also fitted two regression models, both with covariates, but one with and one without the diagnostic test. VanderWeele et al. explain that if one wants to know which covariates seem most responsible for the variation in the treatment effect, one can simply take the difference between the regression coefficients of the two models. That is what we did. We constructed a new regression model by taking the differences in the regression coefficients of all covariates. We demonstrated that this new model estimates the likelihood ratio of a positive and a negative diagnostic test result based on covariates, from which the sensitivity and specificity of the test can be calculated.
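The coefficient-difference idea can be illustrated on the same kind of synthetic two-arm data. Here the effect modification is deliberately placed on the second covariate, so its coefficient difference should stand out; all data and effect sizes are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic two-arm data in which only the second covariate modifies the
# treatment effect (interaction coefficient 1.0; the others have none).
n = 4000
X = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)
logit = -0.5 + 0.8 * X[:, 0] + a * (0.7 + 1.0 * X[:, 1])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

m1 = LogisticRegression().fit(X[a == 1], y[a == 1])  # treated arm
m0 = LogisticRegression().fit(X[a == 0], y[a == 0])  # control arm

# Covariates whose coefficients differ most between the two models are the
# ones most responsible for variation in the treatment effect.
coef_diff = (m1.coef_ - m0.coef_).ravel()
```

Ranking covariates by the absolute value of `coef_diff` then points to the effect modifiers, which is the diagnostic-test analogue we exploited in our earlier work.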

When we applied the method to a clinical example, the results showed exactly what we expected: we observed substantial differences in the sensitivity and specificity of the diagnostic test for different combinations of the covariate values. However, the clinical example also taught us several lessons for the practical application of the method that are also relevant here.

First, the sample size needs to be sufficiently large to fit the regression models. The method of the authors fits a regression model in each arm of an RCT, which might work well when each arm has 1500 patients,3 but not when a trial randomizes only 120 patients.4 Given that the sample size of a trial is inversely related to the expected treatment effect, the method could help identify a small subgroup that benefits from a drug that has only a minor effect in the population at large, provided that such a subgroup is plausible and determined by a limited set of covariates.

Second, the sample needs to reflect the target population in which the treatment is going to be applied. RCTs often have a substantial number of exclusions that restrict the heterogeneity of the patient population. A systematic review of 319 trials targeting chronic diseases reported that 79% of the trials excluded patients with concomitant chronic conditions.5 Exclusion of patients may change regression coefficients and, with that, the expected treatment benefits and their ranking when the models are applied in the target population. The models will only apply to the patient groups included in the trial; the treatment benefits of those excluded remain unknown.

Third, the covariates need to be associated with the treatment benefit, and these may not be the same as those that predict the outcome. In our clinical example, we found that the covariate that had the strongest impact on the sensitivity and specificity of the diagnostic test was one that was not associated with the risk of the disease that had to be diagnosed. Our example is no exception; it may even be the rule in pharmacogenomics where genes that impact the efficacy of drugs are mostly unrelated to the risk of the outcomes themselves.6

When the covariates mainly impact the treatment benefit and not the prognostic risk, then there will be limited variation in the expected outcomes in the control arm and the regression model of the treatment arm will determine the variation in the treatment benefit. In that scenario, the method may not be able to distinguish whether a patient moves from a 0% to 20% probability of a better prognostic outcome or from 60% to 80%. Additional criteria will be needed to assign treatments if this is deemed relevant.

Fourth, as the authors acknowledge, the value of the method will largely be determined by how well the regression models can predict expected outcomes. Further studies should determine how high the predictive ability needs to be to justify the use of this method for treatment decisions. When their predictive ability is limited, there will be minimal variation in the expected treatment benefits, which, given the small sample and population selection, may not be a fair basis for making treatment decisions.

Finally, and most importantly, the method assumes that patients and doctors agree that only one consideration matters in treatment decisions, namely the expected improvement in a specific prognostic outcome. But this is far from evident. What if the highest treatment benefit also comes with the most severe side effects? What if a treatment positively impacts multiple outcomes and the ranking of patients differs with the expected benefits of each of these outcomes? If the goal is personalized medicine, should not the patient be the one who decides what outcome matters? The variability in patient preferences, and how patients weigh the various benefits and harms of one or more treatments, is impossible to predict and incorporate into an algorithm.

The authors assert that subgroup analyses have dominated the assessment of treatment benefits and that it would be more desirable to make such decisions taking multiple covariates into account. I expect that doctors will agree. I expect that many, if not all, will say that this is what they always do, and that taking unique patient characteristics into account is the reason why their decisions often diverge from what would follow from using prediction models.7

Patients are unique and may have other signs, symptoms, comorbidities, and patient characteristics that prompt doctors to make decisions other than those predicted by models and rules. One size does not fit all because every patient is unique, but this uniqueness is hard to model, at least with current prediction methods. Predictors only end up in a model when they are not too rare and their effects on the outcome not too small. Prediction models, by design, predict the rule, not the exceptions. Yet, for precision medicine, taken literally, every patient is an exception. That is why we may need to accept that for many treatments, one size may just have to fit all, even if it does not.

ABOUT THE AUTHOR

A. CECILE J.W. JANSSENS is a professor of epidemiology at the Rollins School of Public Health of Emory University. Her research focuses on the predictive ability and utility of genetic testing for complex diseases and traits. Her research spans methodologic, statistical, psychological, ethical, and societal aspects of genetic testing. She teaches critical thinking, quality of evidence, and research ethics in graduate and doctoral programs.

REFERENCES

1. VanderWeele TJ, Luedtke AR, van der Laan MJ, Kessler RC. Selecting optimal subgroups for treatment using many covariates. Epidemiology. 2019;30:377–384.
2. Janssens AC, Deng Y, Borsboom GJ, Eijkemans MJ, Habbema JD, Steyerberg EW. A new logistic regression approach for the evaluation of diagnostic test results. Med Decis Making. 2005;25:168–177.
3. FOCUS Trial Collaboration. Effects of fluoxetine on functional outcomes after acute stroke (FOCUS): a pragmatic, double-blind, randomised, controlled trial. Lancet. 2019;393:265–274.
4. Basch EM, Scholz M, de Bono JS, et al. Cabozantinib versus mitoxantrone-prednisone in symptomatic metastatic castration-resistant prostate cancer: a randomized phase 3 trial with a primary pain endpoint. Eur Urol. 2018.
5. Buffel du Vaure C, Dechartres A, Battin C, et al. Exclusion of patients with concomitant chronic conditions in ongoing randomised controlled trials targeting 10 common chronic conditions and registered at ClinicalTrials.gov: a systematic review of registration details. BMJ Open 2016;6:e012265.
6. Giudicessi JR, Kullo IJ, Ackerman MJ. Precision cardiovascular medicine: state of genetic testing. Mayo Clin Proc. 2017;92:642–662.
7. Kappen TH, van Klei WA, van Wolfswinkel L, et al. Evaluating the impact of prediction models: lessons learned, challenges, and recommendations. BMC Diagn Progn Res. 2018;2:11.
Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.