# Compound Treatments, Transportability, and the Structural Causal Model: The Power and Simplicity of Causal Graphs

From the Divisions of Biostatistics and Epidemiology, School of Public Health, University of California, Berkeley, CA.

Correspondence: Maya Petersen, University of California, Berkeley, 101 Haviland Hall, Berkeley, CA 94720–7358. E-mail: mayaliv@berkeley.edu.

In “Compound treatments and transportability of causal inference” in this issue of Epidemiology, Hernán and VanderWeele (H&V)^{1} argue that causal analyses often fail to specify a causal question that adequately addresses the motivating policy issue. H&V highlight the issue of “compound treatments,” or treatments with multiple versions. An example used by H&V is an intervention to prevent obesity. They discuss how compound treatments can result in ill-defined counterfactuals, complicate the articulation and evaluation of the consistency assumption, and threaten the relevance of analyses for policy decision-making. In this commentary, I sketch how the issues raised by H&V can be addressed through standard application of the causal graph or structural causal model framework without new notation or assumptions.^{2} I focus on 2 distinct issues: specification of the causal model and query, and evaluation of transportability.

## CONSISTENCY WITHIN THE FRAMEWORK OF STRUCTURAL CAUSAL MODELS

A causal model encodes background knowledge and assumptions about the causal system of interest. In structural causal models, this knowledge is represented as a directed acyclic graph or, equivalently, as a system of nonparametric structural equations.^{2} A well-specified causal model can be used to define counterfactuals indexed by interventions on any one or more nodes in the graph. For example, under the causal model in Figure A (taken from H&V Fig. 4), *Y_r*, the counterfactual value of *Y* under treatment *r*, is defined as the value of *Y* generated by a modified version of the graph in which *R* is deterministically set equal to *r*, thus removing any other influences on *R* (Fig. B).

Counterfactuals indexed by a conjunction of variable values (for example, an intervention to set exercise duration *A* to 36 minutes and vitamin dose *W* to 5 mg) are always well defined for any set of nodes in the graph. Counterfactuals indexed by a disjunction of variable values (for example, an intervention to set *A* to 30 minutes or more) are likewise well defined, but require that we add to the graph an additional node *R* that represents the restriction that the intervention imposes on its versions. For example, the intervention "exercise at least 30 minutes" can be represented (following H&V) by including an *R* node that restricts the set of possible values that *A* can take: if *R* is set equal to "at least 30 minutes," *A* can take only values greater than or equal to 30 minutes. The values that *A* actually takes in individual *i* (namely, the function *A_i(r)* of H&V) emerge from the model, determined by the other factors that influence subject *i*.
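In structural-equation notation, this construction can be sketched as follows. The functional form is hypothetical (*f_A* stands in for *A*'s unknown structural function, Pa for *A*'s other parents in the graph, and *U_A* for unmeasured influences); only the restriction logic is taken from H&V:

```latex
% Hypothetical structural equation for the version A under a
% restriction node R: f_A, Pa, and U_A are generic placeholders.
A_i = f_A(R_i, \mathrm{Pa}_i, U_{A,i}), \qquad
f_A(r, \cdot, \cdot) \in \{ a : a \text{ is compatible with } r \}.
% E.g., if r = "at least 30 minutes," then f_A(r, \cdot, \cdot) \geq 30,
% and H&V's A_i(r) is recovered as f_A(r, Pa_i, U_{A,i}).
```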

Linking counterfactuals to the observed data requires the assumption that the data in our study were generated by a causal system that is accurately represented by the graph. This assumption guarantees the consistency of any observed variable with its counterfactual value under the observed level of intervention, including treatments with multiple versions.^{3} Once we apply a conventional structural causal model, we are no longer confronted with the concern that "when the version ... of treatment ... individual *i* receives is unknown, the consistency condition cannot be articulated" (H&V). In conventional notation, this situation corresponds to the variable *A*, representing versions of treatment, remaining unmeasured (at least for some individuals). The effect of *R* on *Y* is defined and the consistency of *Y_R* with *Y* is ensured, regardless of whether the versions *A* of *R* are explicitly articulated or measured. Depending on the causal model, unmeasured *A* may or may not impede identification of the effect of *R* on *Y*, just as it may or may not impede transportability of that effect. The graph framework provides a systematic approach for evaluating both concerns.
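Stated formally, consistency then follows as a property of the model rather than as a separate assumption to be evaluated:

```latex
% Consistency as a theorem of the structural model: if individual i's
% observed treatment equals r, the observed outcome equals the
% counterfactual outcome under the intervention do(R = r),
R_i = r \;\Longrightarrow\; Y_i = Y_i(r),
% and this holds whether or not the versions A_i are measured.
```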

### Specifying a Causal Model and Question

H&V acknowledge that within the framework of structural causal models, compound treatments pose no challenge to the articulation of consistency or the definition of counterfactuals. They argue, however, that evaluation of consistency as “a substantive assumption that needs to be evaluated” is “the most relevant [approach] for policy and decision making” because it guards against ill-posed causal questions. Well-posed causal questions are clearly crucial to informative analyses. Here again, however, the conventional causal graph framework provides the tools needed.

Specification of a good causal question requires the researcher to do 2 things well. First, she must accurately represent knowledge about the data-generating process (and limitations of that knowledge). Although this is undeniably a challenging task, it is challenging in any causal paradigm, regardless of whether treatments are compound or simple. Causal graphs provide a rigorous, accessible, and transparent language to facilitate this translation process.

Assume, as H&V do for much of their article, that *R* (treatment) causes *A* (versions of that treatment). The graph shows us that we can treat the multiple versions *A* just as we treat any descendant of *R* (and possibly an intermediate between *R* and *Y*). (Compound treatments similarly require no special handling when *A* causes *R*; in the interests of space, we defer further discussion of this case.) Just as any 2 nodes can have innumerable unmeasured intermediates, any treatment (as H&V point out) can be considered a compound treatment. The questions then arise as to whether (1) it is necessary or helpful to represent the versions of treatment explicitly as a separate node on the graph and (2) it is necessary or helpful to elaborate and measure the versions of treatment. Again, the graph provides a clear answer. In Figure C, if *U* is unmeasured, then measurement of *A* is needed for identifiability of the effect of *R* on *Y* (via the front-door criterion). In Figure A, elaboration and measurement of *A* are not needed for identifiability of this effect. Indeed, Figure A could be specified as Figure D, omitting the versions of treatment altogether; inspection of both graphs quickly reveals that *W* and *L* satisfy the back-door criterion.
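The point that *A* need not be measured for identification in Figure A can be checked numerically. The sketch below simulates a hypothetical data-generating process loosely patterned on Figure A (all functional forms and coefficients are invented for illustration): *W* and *L* confound the effect of *R* on *Y*, and *A* mediates it. Back-door adjustment for *W* and *L* alone recovers the effect of *R* on *Y*; the versions *A* never enter the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def simulate(do_r=None):
    """Draw from a hypothetical SCM in the spirit of Figure A.

    W, L: measured common causes of R and Y (back-door set).
    R:    decision to exercise at least 30 minutes (treatment).
    A:    version of treatment actually taken (mediates R -> Y).
    """
    W = rng.binomial(1, 0.5, N)
    L = rng.binomial(1, 0.3, N)
    if do_r is None:
        R = rng.binomial(1, 0.2 + 0.3 * W + 0.3 * L)  # observational R
    else:
        R = np.full(N, do_r)                          # intervention do(R = r)
    A = rng.binomial(1, 0.1 + 0.6 * R)                # version of treatment
    Y = rng.binomial(1, 0.1 + 0.2 * A + 0.15 * W + 0.15 * L)
    return W, L, R, A, Y

def backdoor_estimate(W, L, R, Y):
    """Sum over strata (w, l): [E(Y|R=1,w,l) - E(Y|R=0,w,l)] * P(w, l).

    Note that A is never used: W and L suffice under this graph.
    """
    effect = 0.0
    for w in (0, 1):
        for l in (0, 1):
            s = (W == w) & (L == l)
            effect += (Y[s & (R == 1)].mean() - Y[s & (R == 0)].mean()) * s.mean()
    return effect

W, L, R, A, Y = simulate()
naive = Y[R == 1].mean() - Y[R == 0].mean()   # confounded contrast
adjusted = backdoor_estimate(W, L, R, Y)      # back-door adjustment, ignoring A

# "Truth" obtained by simulating the intervened graph (as in Fig. B):
truth = simulate(do_r=1)[4].mean() - simulate(do_r=0)[4].mean()
print(f"naive={naive:.3f}  adjusted={adjusted:.3f}  truth={truth:.3f}")
```

The unadjusted contrast is biased by confounding through *W* and *L*, whereas the back-door estimate agrees with the "truth" obtained by simulating the intervened graph, even though *A* is ignored.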

Second, the researcher must specify a counterfactual intervention that accurately reflects the policy being considered. This requires that counterfactuals be defined using the relevant type of intervention on the relevant variables. For example, if a researcher wishes to estimate the effect of an obesity prevention program that will “eliminate low physical activity and high caloric intake” (H&V), counterfactuals should be indexed by an intervention on these 2 variables, rather than an intervention on BMI. Similarly, a policy that increases exercise duration to 30 minutes only among those individuals who are not currently exercising at least 30 minutes is not accurately captured by counterfactuals indexed by an intervention to set exercise duration to 30 minutes for all subjects; when defining counterfactuals indexed by such dynamic interventions, a standard structural causal model again suffices.^{2,4}

### Transporting Causal Effect Estimates

H&V next address whether a counterfactual parameter estimated in the study population can be interpreted as the effect of a proposed policy in a new population. Answering this question requires additional assumptions regarding the ways in which the target population may differ from the study population. The structural causal model paradigm provides a language for expressing these assumptions transparently and for evaluating whether they are sufficient for transportability. Recently, Pearl and Bareinboim^{5} demonstrated the use of causal graphs for this purpose; here, I outline their proposed approach.

The causal graph can be augmented to specify the ways in which the data-generating process may differ between the study and target populations. Consider the example where *R* represents the decision to exercise at least 30 minutes, *A* is actual duration of exercise, and Figure A (taken from H&V) represents a correctly specified causal model for the study population. One now wishes to transport an estimate of the effect of *R* on *Y* to a new population. How might the target population differ from the study population? One possibility (among many) is that the process by which duration of exercise is determined may change. For example, perhaps the target population has a media campaign that encourages 45 minutes of exercise per day. In the target population, a subject who decides to exercise at least 30 minutes may be more likely to exercise at least 45 minutes than would an equivalent subject in the study population.

This potential disparity in the process that determines duration of exercise can be represented on the causal graph using an auxiliary node, *S* (Fig. E).^{2,5,6} The variable *S* functions like a switch. If *S* is “off,” the variable affected by *S* is generated as it was in the original causal model for the study population. If *S* is “on,” the same variable is generated according to the possibly distinct process operating in the target population. Note that assumptions regarding uniformity of the data-generating process between the 2 populations are encoded in the absence of *S* nodes affecting some variables. For example, Figure E assumes that the processes by which *Y*, *R*, *L*, and *W* are determined remain constant between the 2 populations.

In addition to providing a language for expressing assumptions about potential disparities between the target and study populations, the graph also provides a tool for visually assessing (1) whether these assumptions allow the initial effect estimate to be interpreted without adjustment as an estimate of a policy's impact in the target population (a property referred to as "direct transportability" or "external validity") and (2) when and how data from the target population can be combined with data from the study population to provide an estimate of the policy's impact in the target population. I focus here on question 1; clear graphical criteria and resulting transportability formulas have also been developed for question 2.^{5}

An initial effect estimate can be directly transported to a new target population when *R* blocks all paths from *S* to *Y* after deleting from the graph all arrows into *R*.^{5} The intuitive motivation for this result is clear—we wish to assess whether the counterfactual distribution of the outcome is the same in the 2 populations. As shown in Figure B, an intervention to generate the counterfactual outcome *Yr* removes all arrows into *R*. In this modified graph, we can use standard graphical criteria to assess whether *R* blocks all potential sources of dependence between the population-specific data-generating process *S* and the outcome. In other words, we can assess directly from the graph if, after we intervene on *R*, the distribution of the counterfactual outcomes will be the same regardless of whether the data are generated with *S* switched “on” or “off.”
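In Pearl and Bareinboim's notation, with *P*\* denoting the target-population distribution and the subscripted graph denoting the graph with arrows into *R* deleted, this criterion can be written as:

```latex
% Direct transportability: if R d-separates S from Y in the graph with
% all arrows into R removed, the causal effect is the same in both
% populations (P* denotes the target-population distribution).
(S \perp\!\!\!\perp Y \mid R)_{G_{\bar{R}}}
\;\Longrightarrow\;
P^{*}(y \mid do(r)) = P(y \mid do(r)).
```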

This graphical criterion fails in Figure E, implying that the initial effect estimate cannot be directly transported to the target population (without additional assumptions). In contrast, Figure F illustrates a case where direct transportability holds; as is obvious from inspecting the graph, changing how the versions of treatment are generated will not affect *Y*. (This particular example assumes that *R* has no effect on *Y*; however, the same approach applies to examples where this assumption is not made.) Note that the causal model in Figure F represents the assumption of "Treatment Variation Irrelevance."^{7} The graph makes clear what this assumption means, why it is useful, and whether it is likely to hold for a given applied problem.

Graphs can clarify when elaboration of the versions of treatment is needed for transportability, just as they clarify when elaboration of the versions of treatment is relevant for identifiability. If Figure A is represented without making the versions of treatment explicit (Fig. D), the resulting graph can still be augmented to show disparity in the process that generates *A*; this disparity would result in disparity in the process that generates *Y* (Fig. G). Inspection of a graph with or without the versions of treatment explicitly included (Fig. E or G) leads to the same conclusion—the initial estimate is not directly transportable. Interestingly, H&V use this specific causal model to motivate their argument for the importance of elaborating and measuring versions of treatment. However, the graph makes clear that under this causal model, neither elaboration of the versions of treatment nor their measurement in the target population is needed to estimate the impact of a policy in the target population—simply measuring *R*, *W*, *L*, and *Y* in the target population would suffice. Pearl and Bareinboim^{5} point out alternative causal structures in which measurement of an intermediate variable (such as that created by a compound treatment) does enable transportability through careful selection and combination of information from both populations.
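As one illustration of such a structure: when the only difference between the populations is the mechanism generating the versions *A* (an *S* node pointing into *A* alone), Pearl and Bareinboim's results yield a transport formula that reweights the version-specific effect from the study population by the target population's distribution of versions. The form below assumes exactly this *S*-node placement:

```latex
% Transport formula when S points only into the intermediate A:
% combine the A-specific effect estimated in the study population with
% the distribution of versions A observed in the target population.
P^{*}(y \mid do(r)) \;=\; \sum_{a} P(y \mid do(r), a)\, P^{*}(a \mid r).
```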

## DISCUSSION

The current article and commentary are not a debate between competing approaches to causal inference. The potential outcomes and causal graph frameworks are logically equivalent, and there is no controversy surrounding the validity of either.^{2,4} The question is simply: Which of these equally valid languages is most helpful in facilitating clear thinking and good practice? H&V use causal graphs throughout their paper to motivate arguments and illustrate examples, implicitly acknowledging the conceptual clarity that graphs provide for posing the right questions and articulating substantive knowledge. H&V raise 2 distinct and important issues in their article: (1) when is it important to explicitly consider the multiple versions of a treatment and (2) when can an effect estimate from one population be transported to a new population. Their paper makes clear the difficulty of thinking through these issues without unleashing the full power of causal graphs. Why not take full advantage of this power?

## ABOUT THE AUTHOR

MAYA PETERSEN is an Assistant Professor of Biostatistics and Epidemiology at the University of California, Berkeley School of Public Health. Her research focuses on the development of causal inference methods and their application to improve treatment and prevention of HIV. Current work includes investigation of resource-efficient approaches to patient monitoring and impact evaluation for community-based interventions.

## ACKNOWLEDGMENTS

I gratefully acknowledge Judea Pearl's extensive insights and suggestions.

## REFERENCES

1. Hernán MA, VanderWeele TJ. Compound treatments and transportability of causal inference. *Epidemiology*. 2011;22:368–377.
2. Pearl J. *Causality: Models, Reasoning, and Inference*. 2nd ed. New York: Cambridge University Press; 2009.
3. Pearl J. On the consistency rule in causal inference. *Epidemiology*. 2010;21:872–875.
4. *Longitudinal Data Analysis*. Handbooks of Modern Statistical Methods. Boca Raton, FL: Chapman and Hall/CRC; 2009:566–568.
5. Pearl J, Bareinboim E. *Transportability Across Studies: A Formal Approach*. Technical Report. Los Angeles: Computer Science Department, University of California; 2010.
6. Dawid AP. Influence diagrams for causal modelling and inference. *Int Stat Rev*. 2002;70:161–189.
7. VanderWeele TJ. Concerning the consistency assumption in causal inference. *Epidemiology*. 2009;20:880–883.