eScholarship
Open Access Publications from the University of California

UC Berkeley Electronic Theses and Dissertations

Assessing and planning for unmeasured confounding in weighted observational studies

Abstract

The ability to compare similar groups is central to causal inference. If two groups are the same except that one group received a treatment and the other group did not, we can attribute the difference in an outcome of interest to the treatment (Cochran, 1965). For this reason, randomized experiments are often considered to be the "gold standard" for estimating causal effects: when the treatment is randomly assigned, the treatment and control groups are comparable on average. In many settings, it might be unethical or otherwise infeasible for a researcher to randomly assign treatment. In these cases, researchers must rely on observational data to investigate their causal hypotheses.

Aside from being feasible in settings where experiments are not, observational studies offer a few possible benefits over randomized experiments: they typically consist of larger, naturally occurring samples that more closely resemble a target population. However, there is no guarantee that the treatment and control groups in an observational study are comparable, since units can select into a group. For example, in a study evaluating the effectiveness of a medication on a health outcome of interest, patients who are sicker to begin with might be more likely to take the treatment, biasing direct comparisons of the treatment and control groups. A common strategy to mitigate this bias is to adjust for observed covariates so that the adjusted treatment and control groups are comparable in terms of those covariates.
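As an illustration of this adjustment strategy, the following sketch simulates the sicker-patient scenario with a single binary confounder and corrects the naive comparison via inverse-propensity weighting. All names, numbers, and data here are hypothetical; this is a generic textbook-style example, not an analysis from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: a binary covariate ("sicker") drives both treatment
# uptake and the outcome, so the naive comparison is confounded.
sicker = rng.binomial(1, 0.4, n)
treat = rng.binomial(1, np.where(sicker == 1, 0.8, 0.2))    # sicker units select into treatment
outcome = 1.0 * treat - 2.0 * sicker + rng.normal(0, 1, n)  # true treatment effect is +1.0

# Naive difference in means: biased, because the treated arm is sicker.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Inverse-propensity weighting: estimate e(x) = P(T = 1 | x) from the data,
# then weight treated units by 1/e(x) and controls by 1/(1 - e(x)).
e_hat = np.array([treat[sicker == s].mean() for s in (0, 1)])[sicker]
w = np.where(treat == 1, 1 / e_hat, 1 / (1 - e_hat))
ipw = (np.average(outcome[treat == 1], weights=w[treat == 1])
       - np.average(outcome[treat == 0], weights=w[treat == 0]))

print(f"naive difference: {naive:.2f}")
print(f"IPW estimate:     {ipw:.2f}")  # recovers roughly the true effect of 1.0
```

The adjustment works here only because the lone confounder is observed; the rest of the abstract concerns what happens when it is not.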

These adjustment methods rely on the key assumption that there are no unmeasured confounders that simultaneously affect the treatment and the outcome, often referred to as ignorability or unconfoundedness. However, this assumption cannot be verified from observed data and rarely, if ever, holds exactly in real-world settings. Because we are nonetheless interested in studying causal relationships from observational data, the ignorability assumption is at the core of this thesis. First, we develop a framework to evaluate how robust causal effect estimates are to violations of the ignorability assumption. Then, we investigate how to design observational studies to improve robustness to unmeasured confounding, rather than selecting designs that are optimal under the ignorability assumption. Chapter 1 briefly reviews these topics, and the following chapters detail our proposed frameworks.

Chapter 2 focuses on assessing the robustness of weighted observational studies to violations of the ignorability assumption. We develop a sensitivity analysis framework for a broad class of weighting estimators that allows for specified levels of unmeasured confounding, yielding a range of possible effect estimates rather than a single point estimate. We prove that the percentile bootstrap procedure can yield valid confidence intervals for causal effects under our sensitivity analysis framework. We also propose an amplification, a mapping from a one-dimensional sensitivity analysis to a higher-dimensional one, to enhance the interpretability of the results, aiding researchers in reasoning about plausible levels of confounding in particular observational studies. We illustrate our sensitivity analysis procedure through real-data examples.
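To make the idea of a range of effect estimates concrete, here is a simplified sketch in the spirit of marginal sensitivity models: each unit's weight is allowed to drift by a multiplicative factor in [1/Λ, Λ], and the extreme values of the weighted mean are found by scanning thresholds over the sorted outcomes. This is an illustrative assumption, not the thesis's exact estimator; the function name and example data are invented for this sketch.

```python
import numpy as np

def weighted_mean_bounds(y, w, lam):
    """Range of the weighted mean of y when each unit's base weight w may be
    perturbed by a multiplicative factor in [1/lam, lam]. The extrema follow
    a threshold rule in sorted-outcome order, so we scan every threshold."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    n = len(y)
    lo, hi = np.inf, -np.inf
    for k in range(n + 1):
        # upper bound: inflate the k largest outcomes' weights, deflate the rest
        v = np.concatenate([np.full(n - k, 1 / lam), np.full(k, lam)])
        hi = max(hi, np.average(y, weights=v * w))
        # lower bound: mirror image, inflating the k smallest outcomes
        v = np.concatenate([np.full(k, lam), np.full(n - k, 1 / lam)])
        lo = min(lo, np.average(y, weights=v * w))
    return lo, hi

# At lam = 1 the interval collapses to the usual weighted point estimate;
# larger lam admits more unmeasured confounding and widens the range.
y = np.array([0.8, 1.2, -0.1, 2.0, 0.5])   # hypothetical outcomes
w = np.ones_like(y)                        # stand-in for estimated weights
for lam in (1.0, 1.5, 2.0):
    print(lam, weighted_mean_bounds(y, w, lam))
```

Repeating such a computation on bootstrap resamples and taking percentiles of the endpoint estimates gives interval estimates in the spirit of the percentile-bootstrap approach described above.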

Chapter 3 builds on Chapter 2 by focusing on how to design observational studies so that they are robust to unmeasured confounding, rather than merely optimal under ignorability. Specifically, we introduce a measure called design sensitivity for weighting estimators, which describes the asymptotic power of a sensitivity analysis. By comparing design sensitivities, we assess how different design decisions affect sensitivity to unmeasured confounding. While sensitivity analysis is conducted post hoc as a secondary analysis, design sensitivity enables researchers to plan ahead and optimize for robustness at the design stage. We illustrate our proposed framework on data evaluating the drivers of support for the 2016 Colombian peace agreement.
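A toy simulation can illustrate what "power of a sensitivity analysis" means here: for a hypothetical study with a true positive effect, we record how often the worst-case bound at level Λ still shows a positive effect. Both functions, the perturbation model, and all parameter values below are illustrative assumptions, not the thesis's definitions.

```python
import numpy as np

def mean_lower_bound(y, lam):
    """Worst-case (smallest) weighted mean of y when each unit's weight may
    be perturbed by a factor in [1/lam, lam]; the minimum follows a threshold
    rule over the sorted outcomes, so every threshold is checked at once."""
    y = np.sort(y)
    k = np.arange(len(y) + 1)                   # number of smallest outcomes inflated
    cs = np.concatenate([[0.0], np.cumsum(y)])
    num = lam * cs + (1 / lam) * (cs[-1] - cs)  # lam on the k smallest, 1/lam on the rest
    den = lam * k + (1 / lam) * (len(y) - k)
    return (num / den).min()

def sensitivity_power(n, lam, effect=0.5, sims=200, seed=0):
    """Fraction of simulated two-arm studies whose worst-case bound at level
    lam still shows a positive treated-minus-control difference."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        treated = rng.normal(effect, 1.0, n)
        control = rng.normal(0.0, 1.0, n)
        upper_control = -mean_lower_bound(-control, lam)  # worst-case largest control mean
        hits += (mean_lower_bound(treated, lam) - upper_control) > 0
    return hits / sims

# Power rises with the sample size when lam is below the design sensitivity
# of this (hypothetical) design, and collapses once lam exceeds it.
for n in (20, 50, 200):
    print(n, sensitivity_power(n, lam=1.1))
```

The largest Λ at which this power still tends to one as the sample grows is, informally, the design sensitivity; comparing that value across candidate designs is what allows robustness to be optimized before any outcome analysis.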
