Why weight? Weighting approaches for causal inference with panel and cross-sectional data
eScholarship, Open Access Publications from the University of California
UC Berkeley Electronic Theses and Dissertations

  • Author(s): Ben-Michael, Elijahu
  • Advisor(s): Feller, Avi; Ding, Peng; et al.
Abstract

In observational studies, researchers wish to study the effect of a treatment without directly controlling treatment assignment. These studies are particularly useful when it is uneconomical, unethical, or infeasible for researchers to manipulate treatment in a controlled setting. They also offer insight into how treatment affects large, naturally occurring populations, and so they are indispensable counterparts to randomized trials, which are typically conducted on smaller, unrepresentative study samples. A key feature of randomized trials is that researchers can use randomized treatment assignment to ensure that the treated and control samples do not differ in observed and unobserved characteristics, on average. Observational studies have no such guarantee, as treatment assignment occurs through some unknown process. In practice, this unknown process often results in substantial differences between the treated and control samples, biasing naive comparisons between the two groups and confounding the relationship between the outcome and treatment.

There are many methods that attempt to overcome this bias by searching for ways in which the treatment and control groups are comparable, e.g. by restricting the sample to the region around a discontinuity in treatment assignment, or matching treated and control units based on baseline characteristics. In this thesis we take a similar approach, using weighting estimators that take a weighted average of treated and control outcomes, with weights that make the two groups directly comparable. If the treated and control groups differ on pre-treatment characteristics that are highly correlated with the outcome, then comparisons between the two groups will be highly biased. However, if we can find weights so that the two groups are balanced on these pre-treatment characteristics after weighting, then the bias will be negligible. Therefore, in this thesis we address the problem of confounding by addressing the problem of imbalance, finding weights that directly optimize for balance between the weighted treated and control samples.
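The core idea of weights that directly optimize for balance can be illustrated with a small sketch. The following is a hedged, simplified example (all function and variable names are hypothetical, and this is not the estimator developed in the thesis): it uses a Frank-Wolfe loop over the simplex to minimize the squared gap between the treated covariate means and the weighted control covariate means.

```python
import numpy as np

def balancing_weights(X_treated, X_control, n_iter=2000):
    """Minimal sketch: find simplex weights w over control units that
    minimize || X_control^T w - mean(X_treated) ||^2, i.e. the squared
    covariate imbalance after weighting. Uses Frank-Wolfe, which keeps
    the iterates on the simplex (non-negative, summing to one)."""
    target = X_treated.mean(axis=0)          # treated covariate means to match
    n = X_control.shape[0]
    w = np.full(n, 1.0 / n)                  # start at uniform weights
    for k in range(n_iter):
        # gradient of the squared-imbalance objective with respect to w
        grad = 2.0 * X_control @ (X_control.T @ w - target)
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # best simplex vertex this step
        w += 2.0 / (k + 2) * (s - w)         # classic Frank-Wolfe step size
    return w
```

After weighting, the control group's covariate means should sit much closer to the treated group's, which is exactly the sense of "balance" the abstract describes.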

Each of the chapters in this thesis follows a common “recipe”. First, we write the estimation error of a weighting estimator explicitly in terms of balance. This informs what aspects of the pre-treatment characteristics we should balance. Then we show how to achieve balance, constructing a convex optimization problem that directly controls the balance, with a tradeoff between better balance and lower variance. Finally, in some settings we cannot find weights that achieve a sufficient level of balance. In these cases, we can account for any remaining imbalance by combining the weighting estimator with a predictive model of the outcome. Chapter 1 briefly covers the broad strokes of this general recipe in a simplified observational study setting, where the goal is to estimate the average treatment effect for the treated population. The subsequent chapters apply this recipe to answer questions in the social sciences by developing weighting approaches to estimate the treatment effect on the treated in three different settings.

Chapter 2 considers estimating treatment effects in comparative case studies, where a single unit is treated and there is access to a long series of pre-intervention outcomes. In this setting, variants of weighting estimators that ensure balance on pre-intervention outcomes are known as the synthetic control method (SCM), where the “synthetic control” is a weighted average of comparison units. By inspecting the estimation error we see that an important feature of the original SCM proposal is to use it only when the weights achieve excellent balance on pre-intervention outcomes. This chapter primarily focuses on the final step of the recipe, proposing Augmented SCM as an extension of SCM for settings where it is not possible to achieve good-enough pre-treatment fit. The main proposal is to use ridge regression to de-bias the original SCM estimate; we show that this estimator can itself be written as a modified synthetic controls problem, allowing for limited extrapolation in order to improve pre-treatment fit. We then use this framework to inspect the impact of an aggressive tax cut in Kansas in 2012, finding evidence that the tax cuts hindered economic growth. We implement this estimation procedure in a new R package, augsynth.
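The ridge de-biasing idea can be sketched in a few lines. This is a stylized illustration under simplifying assumptions (noiseless linear outcomes, hypothetical names), not the augsynth implementation: the augmented estimate takes the plain SCM counterfactual and adds a ridge-regression correction for whatever pre-period imbalance the SCM weights leave behind.

```python
import numpy as np

def augmented_scm(y_pre_treated, Y_pre_donors, y_post_donors, w_scm, lam=1.0):
    """Sketch of a ridge-augmented SCM estimate of the treated unit's
    untreated post-period outcome.
    y_pre_treated: (T_pre,) pre-period outcomes of the treated unit;
    Y_pre_donors: (n_donors, T_pre) pre-period outcomes of donors;
    y_post_donors: (n_donors,) one post-period outcome per donor;
    w_scm: (n_donors,) SCM weights (assumed given)."""
    # ridge regression of post-period donor outcomes on pre-period outcomes:
    # beta = (X'X + lam I)^{-1} X'y
    X, y = Y_pre_donors, y_post_donors
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    scm_estimate = w_scm @ y_post_donors                # plain SCM counterfactual
    imbalance = y_pre_treated - w_scm @ Y_pre_donors    # pre-period gap left by SCM
    return scm_estimate + imbalance @ beta              # bias-corrected estimate
```

When the SCM weights already balance the pre-period outcomes perfectly, the correction term vanishes and the estimate reduces to plain SCM; when they do not, the ridge model extrapolates the remaining gap, which is the limited extrapolation the chapter describes.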

Chapter 3 builds on Chapter 2 to adapt the synthetic control method to settings with staggered adoption, where different units adopt treatment at different times. Current practice is to fit SCM separately for each treated unit and average the resulting estimates. Following the recipe above, we show that the estimation error depends on both the average imbalance across the synthetic controls and the imbalance of the average of the synthetic controls. We propose finding “partially pooled” SCM weights that minimize both the pooled imbalance and the treated-unit-specific imbalances. Finally, we combine these weights with a fixed effects estimate of the outcomes. We then apply this method to measure the impact of teacher collective bargaining laws on school spending, finding minimal impacts. As in Chapter 2, we implement this procedure in the augsynth R package.
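To make the two imbalance terms concrete, here is a small sketch (hypothetical names, not the augsynth code) of a partially pooled criterion: a convex combination of the imbalance of the average synthetic control and the average unit-level imbalance. Fitting each treated unit separately targets only the second term; fully pooling targets only the first.

```python
import numpy as np

def partially_pooled_objective(W, Y_pre_treated, Y_pre_donors, nu=0.5):
    """Sketch of a 'partially pooled' SCM criterion.
    W: (n_treated, n_donors) rows of donor weights, one row per treated unit;
    Y_pre_treated: (n_treated, T_pre) pre-period outcomes of treated units;
    Y_pre_donors: (n_donors, T_pre) pre-period outcomes of donors;
    nu: mixing weight between pooled and unit-level imbalance."""
    gaps = Y_pre_treated - W @ Y_pre_donors          # per-unit pre-period gaps
    pooled = np.sum(gaps.mean(axis=0) ** 2)          # imbalance of the average
    separate = np.mean(np.sum(gaps ** 2, axis=1))    # average unit-level imbalance
    return nu * pooled + (1 - nu) * separate
```

Note that the pooled term can be zero even when every unit-level fit is poor, because gaps of opposite sign cancel in the average; this is why the chapter argues for controlling both terms at once.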

Finally, Chapter 4 focuses on estimating treatment effects for subgroups in observational studies with cross-sectional data, analyzing a pilot study on letters of recommendation in UC Berkeley undergraduate admissions. Here, we are interested in understanding how the effect of submitting a letter of recommendation varies for under-represented students and for applicants with different a priori probabilities of admission. Again following the general recipe, we build on results in Chapter 3 to see that the estimation error for a subgroup depends on the “local balance” within the subgroup. Using this, we develop balancing weights that solve a convex optimization problem to directly optimize for the local balance within subgroups while maintaining global covariate balance between the overall treated and control samples. We then show that this approach has a dual representation as inverse propensity score weighting with a hierarchical propensity score model and use a random forest to de-bias the weighting estimator. Overall, we find that the impact of letters of recommendation is higher for applicants with a higher predicted probability of admission, and find mixed evidence of differences for under-represented minority applicants.
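A stylized version of the local-plus-global balance criterion might look like the following. This is a hedged sketch with hypothetical names, assuming the control weights sum to one within each subgroup; the dissertation instead solves a full convex optimization problem with a dual representation as hierarchical inverse propensity score weighting.

```python
import numpy as np

def local_global_imbalance(w, X, treated, group, lam=1.0):
    """Sketch of the Chapter-4-style criterion: squared covariate imbalance
    *within each subgroup* plus a lam-weighted penalty on the *global*
    treated-vs-weighted-control imbalance.
    w: weights over control units (assumed to sum to 1 within each subgroup);
    X: (n, d) covariates; treated: boolean array; group: integer labels."""
    Xt, Xc = X[treated], X[~treated]
    gt, gc = group[treated], group[~treated]
    local = 0.0
    for g in np.unique(group):
        # gap between subgroup treated means and weighted subgroup controls
        gap = Xt[gt == g].mean(axis=0) - w[gc == g] @ Xc[gc == g]
        local += np.sum(gap ** 2)
    # overall treated means vs (renormalized) weighted control means
    global_gap = Xt.mean(axis=0) - (w / w.sum()) @ Xc
    return local + lam * np.sum(global_gap ** 2)
```

Minimizing a criterion of this shape pushes each subgroup's weighted comparison toward its own treated units (the "local balance" that drives subgroup estimation error) without letting the overall treated and control samples drift apart.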
