import { Callout } from 'nextra/components';
# Sources of Causal Information
## Causal Inference by Experiment
Randomized experiments have long been the gold standard for establishing causal effects. For instance, in the drug approval process, controlled experiments are mandatory: without first having established and quantified the treatment effect, along with any associated side effects, no new drug can win approval from the Food and Drug Administration (FDA).
## Causal Inference from Observational Data and Theory
However, in many other domains, experiments are not feasible, be it for ethical, economic, or practical reasons. For example, it is clear that a government could not create two different tax regimes to evaluate their respective impact on economic growth. Neither would it be possible to experiment with two different levels of carbon emissions to measure a warming effect on the global climate.
“So, what does our existing data say?” would be an obvious question from policymakers, especially given today’s high expectations concerning Big Data. Indeed, in lieu of experiments, we can attempt to find instances in which the proposed policy already applies (by some assignment mechanism) and compare those to other instances in which the policy does not apply.
However, as we will see in this chapter, performing causal inference on the basis of observational data requires an extensive range of assumptions, which can only come from theory, i.e., domain-specific knowledge. Despite all the wonderful advances in analytics in recent years, data alone, even Big Data, cannot prove the existence of causal effects.
## Historical Context
Today, we can openly discuss how to perform causal inference from observational data. For the better part of the 20th century, however, the prevailing opinion was that speaking of causality without experiments is unscientific. Only towards the end of the century did this opposition slowly erode (Rubin 1974, Holland 1986), which subsequently led to numerous research efforts spanning philosophy, statistics, computer science, information theory, and other fields. The Potential Outcomes Framework has played an important role in this evolution of thought.
## Potential Outcomes Framework
Although there is no question about the common-sense meaning of “cause and effect,” for formal analysis, we require a precise mathematical definition. In the fields of social science and biostatistics, the potential outcomes framework is a widely accepted formalism for studying causal effects (the potential outcomes framework is also known as the counterfactual model, the Rubin model, or the Neyman-Rubin model). Rubin (1974) defines "causal effect" as follows:
“Intuitively, the causal effect of one treatment, $T = 1$, over another, $T = 0$, for a particular unit and an interval of time from $t_1$ to $t_2$ is the difference between what would have happened at time $t_2$ if the unit had been exposed to $T = 1$ initiated at $t_1$ and what would have happened at $t_2$ if the unit had been exposed to $T = 0$ initiated at $t_1$: ‘If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,’ or ‘because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.’ Our definition of the causal effect of $T = 1$ versus $T = 0$ treatment will reflect this intuitive meaning.”
In this quote, we altered the original variable names, $E$ and $C$, to $T = 1$ and $T = 0$ in order to be consistent with the nomenclature in the remainder of this chapter. $T$ is commonly used in the literature to denote the treatment condition.
- $Y_{1,i}$: Potential outcome of individual $i$ given treatment $T = 1$ (e.g., taking two aspirins)
- $Y_{0,i}$: Potential outcome of individual $i$ given treatment $T = 0$ (e.g., drinking a glass of water)
The individual-level causal effect (ICE) is defined as the difference between the individual’s two potential outcomes, i.e.,

$$
ICE_i = Y_{1,i} - Y_{0,i}
$$
Given that we cannot rule out differences between individuals (effect heterogeneity), we define the average causal effect (ACE) as the unweighted arithmetic mean of the individual-level causal effects:

$$
ACE = E[ICE_i] = E[Y_1 - Y_0] = E[Y_1] - E[Y_0]
$$

$E[\cdot]$ denotes the expected value, i.e., the unweighted arithmetic mean.
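To make these definitions concrete, here is a minimal NumPy sketch with made-up numbers (not data from any study). It constructs both potential outcomes for a few individuals and computes the individual-level and average causal effects; this is only possible because we simulate the data ourselves.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical potential outcomes for a handful of individuals:
# y1 = headache intensity an hour after two aspirins,
# y0 = headache intensity an hour after only a glass of water.
n = 5
y0 = rng.normal(loc=6.0, scale=1.0, size=n)        # outcome without treatment
y1 = y0 - rng.normal(loc=2.0, scale=0.5, size=n)   # outcome with treatment (effect varies by person)

ice = y1 - y0        # individual-level causal effects: ICE_i = Y_{1,i} - Y_{0,i}
ace = ice.mean()     # average causal effect: ACE = E[Y_1 - Y_0]

print("ICE per individual:", np.round(ice, 2))
print("ACE:", round(ace, 2))
```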
The challenge is that $Y_1$ (treatment) and $Y_0$ (non-treatment) can never both be observed for the same individual at the same time. We can only observe treatment or non-treatment, but not both.
So, where does this leave us? What we can produce easily is the “naive” estimator of association between the “treated” and the “untreated” sub-populations:

$$
S = E[Y|T=1] - E[Y|T=0]
$$

Here, $Y$ denotes the observed outcome.
For notational convenience, we omit the index $i$ because we are now referring to sub-populations and not to an individual.
Because the treated and untreated sub-populations contain different individuals, $S$ is not necessarily a measure of causation, in contrast to $ACE$.
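The following sketch (again purely hypothetical numbers, assuming NumPy) illustrates why $S$ can diverge from $ACE$: a confounder $X$ raises both the chance of being treated and the baseline outcome, so the two sub-populations are not comparable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder X (e.g., symptom severity) drives both the
# treatment decision and the outcome.
x = rng.binomial(1, 0.5, size=n)           # severity indicator
t = rng.binomial(1, 0.2 + 0.6 * x)         # severe cases get treated far more often
y0 = 2.0 + 3.0 * x + rng.normal(size=n)    # potential outcome without treatment
y1 = y0 + 1.0                              # constant ICE of 1, so the true ACE = 1
y = np.where(t == 1, y1, y0)               # only one potential outcome is ever observed

ace = (y1 - y0).mean()                     # knowable only because this is a simulation
s = y[t == 1].mean() - y[t == 0].mean()    # naive estimator of association

print(f"true ACE = {ace:.2f}, naive S = {s:.2f}")   # S is far from the ACE
```

In this simulation, the naive estimator overstates the true effect considerably, simply because the treated group contains far more high-severity cases.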
The question is, how can we move from what we can measure, i.e., the naive association, to the quantity of interest, i.e., the causal effect? Determining whether we can measure causation from association is known as identification analysis.
We must check whether there are any conditions under which the measure of association, $S$, equals the measure of causation, $ACE$. As a matter of fact, this would be the case if the sub-populations were comparable with respect to all confounders, i.e., the factors that could also influence the outcome.
## Ignorability
Remarkably, the conditions under which we can measure causal effects from observational data are very similar to those that justify causal inference in randomized experiments. A purely random selection of treated and untreated individuals does indeed remove any potential selection bias and leaves the confounding factor distributions identical in the sub-populations, thus allowing the estimation of the effect of the treatment alone. This condition is known as “ignorability,” which can be formally written as:

$$
(Y_1, Y_0) \perp T
$$
This means that the potential outcomes, $Y_1$ and $Y_0$, must jointly be independent of the treatment assignment, $T$. This condition of ignorability holds in an ideal experiment. Unfortunately, it is very rarely met in observational studies. However, conditional ignorability may hold, which refers to ignorability within subgroups of the domain defined by the values of $X$ (note that $X$ can be a vector):

$$
(Y_1, Y_0) \perp T \mid X
$$
In words, conditional on the variables $X$, $Y_1$ and $Y_0$ are jointly independent of $T$, the assignment mechanism. If conditional ignorability holds, we can utilize the estimator $S|X$ to estimate the average causal effect $ACE|X$:
$$
\begin{aligned}
ACE|X &= E[Y_1|X] - E[Y_0|X] \\
      &= E[Y_1|T=1,X] - E[Y_0|T=0,X] \\
      &= E[Y|T=1,X] - E[Y|T=0,X] \\
      &= S|X
\end{aligned}
$$
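Continuing the same hypothetical simulation as above, we can check this derivation numerically: within each stratum of $X$, treated and untreated individuals are comparable, so the stratum-wise difference in observed means ($S|X$) recovers $ACE|X$, and weighting by the distribution of $X$ recovers the overall $ACE$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Same hypothetical setup as before: X confounds treatment and outcome,
# but given X the assignment is independent of the potential outcomes.
x = rng.binomial(1, 0.5, size=n)
t = rng.binomial(1, 0.2 + 0.6 * x)
y0 = 2.0 + 3.0 * x + rng.normal(size=n)
y1 = y0 + 1.0                               # true ACE = 1
y = np.where(t == 1, y1, y0)

# Conditional ignorability holds by construction, so S|X identifies ACE|X.
ace_hat = 0.0
for value in (0, 1):
    stratum = x == value
    s_x = y[stratum & (t == 1)].mean() - y[stratum & (t == 0)].mean()
    ace_hat += s_x * stratum.mean()         # weight each stratum by P(X = value)
    print(f"X = {value}: S|X = {s_x:.2f}")  # close to 1 in every stratum

print(f"stratified estimate of ACE: {ace_hat:.2f}")  # close to 1, unlike the naive S
```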
How can we select the correct set of variables $X$ among all the variables in a system? How do we know whether such variables are observed, or even exist, in a given domain? This is what makes the concept of ignorability highly problematic in practice. Pearl (2009) states:
“The difficulty that most investigators experience in comprehending what “ignorability” means, and what judgment it summons them to exercise, has tempted them to assume that it is automatically satisfied, or at least is likely to be satisfied if one includes in the analysis as many covariates as possible. The prevailing attitude is that adding more covariates can cause no harm (Rosenbaum 2002, p. 76) and can absolve one from thinking about the causal relationships among those covariates, the treatment, the outcome, and, most importantly, the confounders left unmeasured (Rubin 2009).”
The absence of hard-and-fast criteria makes ignorability a potentially dangerous concept for practitioners.