This paper concerns the assessment of the effects of actions or policy interventions from a combination of: (i) nonexperimental data, and (ii) substantive assumptions. The assumptions are encoded in the form of a directed acyclic graph, also called “causal graph”, in which some variables are presumed to be unobserved. The paper establishes a necessary and sufficient criterion for the identifiability of the causal effects of a singleton variable on all other variables in the model, and a powerful sufficient criterion for the effects of a singleton variable on any set of variables.
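A minimal numeric illustration (my own, not the paper's criterion) of what "identifiable" means here: when an observed covariate Z blocks all back-door paths from X to Y, the adjustment formula P(y | do(x)) = Σ_z P(y | x, z) P(z) recovers the causal effect from nonexperimental quantities alone. All probabilities below are hypothetical.

```python
# Hypothetical observational quantities for binary X, Y, Z,
# with Z assumed to block every back-door path from X to Y.
p_z = {0: 0.7, 1: 0.3}                      # P(Z)
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.4,  # P(Y=1 | X=x, Z=z)
                 (1, 0): 0.3, (1, 1): 0.9}

def p_y1_do_x(x):
    """Adjustment formula: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in p_z)

# Causal risk difference, computed without any experimental data.
effect = p_y1_do_x(1) - p_y1_do_x(0)
print(effect)
```

When no such adjustment set exists, the effect may still be identifiable by other means; deciding exactly when is what the paper's criterion settles.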


## Scholarly Works (7 results)

This paper deals with the problem of estimating the probability that one event was the cause of another in a given scenario. Using structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to optimally bound these quantities from data obtained in experimental and observational studies, given various assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by weakening the data-generation assumptions and deriving theoretically sharp bounds on the probabilities of causation. These results delineate precisely the assumptions that must be made before statistical measures (such as the excess-risk-ratio) could be used for assessing attributional quantities (such as the probability of causation).
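A sketch of the kind of bound involved, assuming the Tian–Pearl tight bounds on the probability of necessity (PN) when observational and experimental data are combined; the numeric inputs below are hypothetical, not from the paper.

```python
def pn_bounds(p_y, p_xy, p_y_do_xprime, p_xprime_yprime):
    """Tight bounds on PN = P(Y_{x'} = y' | x, y), combining data sources.

    p_y             : P(y)            observational
    p_xy            : P(x, y)         observational
    p_y_do_xprime   : P(y | do(x'))   experimental
    p_xprime_yprime : P(x', y')       observational
    """
    lower = max(0.0, (p_y - p_y_do_xprime) / p_xy)
    upper = min(1.0, ((1.0 - p_y_do_xprime) - p_xprime_yprime) / p_xy)
    return lower, upper

# Hypothetical inputs: the bounds narrow PN to [0.5, 1.0] here,
# whereas observational data alone would leave it far looser.
lo, hi = pn_bounds(p_y=0.4, p_xy=0.3, p_y_do_xprime=0.25, p_xprime_yprime=0.45)
print(lo, hi)
```

The excess-risk-ratio equals PN only under extra assumptions (e.g., exogeneity and monotonicity); the bounds quantify what can be said when those assumptions fail.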

The validity of a causal model can be tested only if the model imposes constraints on the probability distribution that governs the generated data. In the presence of unmeasured variables, causal models may impose two types of constraints: conditional independencies, as read through the d-separation criterion, and functional constraints, for which no general criterion is available. This paper offers a systematic way of identifying functional constraints and, thus, facilitates the task of testing causal models as well as inferring such models from data.
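A small simulation (my illustration, not the paper's method) of how a d-separation claim becomes a testable constraint: in the chain X → Z → Y, the graph implies X ⊥ Y | Z, which for linear-Gaussian data means a vanishing partial correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)   # Z depends on X
y = 0.7 * z + rng.normal(size=n)   # Y depends only on Z

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c from each."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

marginal = abs(np.corrcoef(x, y)[0, 1])   # clearly nonzero: X and Y covary
conditional = abs(partial_corr(x, y, z))  # near zero, as d-separation predicts
print(marginal, conditional)
```

Functional constraints, the paper's subject, are equalities of this testable kind that hold even when no conditional independence is visible among the observed variables.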

We offer a complete characterization of the set of distributions that could be induced by local interventions on variables governed by a causal Bayesian network. We show that such distributions must adhere to three norms of coherence, and we demonstrate the use of these norms as inferential tools in tasks of learning and identification. Testable coherence norms are subsequently derived for networks containing unmeasured variables.
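A minimal sketch (my own example) of the machinery behind such characterizations: under a local intervention do(X = x), a causal Bayesian network's joint distribution follows the truncated factorization, which drops X's own factor and clamps X. The two-node network and its conditional probability tables below are hypothetical.

```python
from itertools import product

# Hypothetical network X -> Y over binary variables.
p_x = {0: 0.6, 1: 0.4}                       # P(X)
p_y_given_x = {0: {0: 0.9, 1: 0.1},          # P(Y | X=0)
               1: {0: 0.2, 1: 0.8}}          # P(Y | X=1)

def p_do_x(x_val):
    """Joint over (X, Y) under do(X = x_val): drop P(X), clamp X, keep P(Y|X)."""
    dist = {}
    for x, y in product((0, 1), repeat=2):
        dist[(x, y)] = (1.0 if x == x_val else 0.0) * p_y_given_x[x][y]
    return dist

dist = p_do_x(1)
# One coherence requirement ("effectiveness"): intervention fixes X with certainty.
print(dist[(1, 1)], sum(v for (x, _), v in dist.items() if x == 1))
```

Distributions violating such norms cannot arise from any local intervention on any causal Bayesian network, which is what makes the norms usable for learning and identification.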

We offer a complete characterization of the set of distributions that could be induced by local interventions on variables governed by a causal Bayesian network of unknown structure, in which some of the variables remain unmeasured. We show that such distributions are constrained by a simply formulated set of inequalities, from which bounds can be derived on causal effects that are not directly measured in randomized experiments.
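One well-known instance of an inequality of this kind (my choice of example, not necessarily the paper's) is Pearl's instrumental inequality for the model Z → X → Y with an unmeasured confounder of X and Y: for every value x, Σ_y max_z P(x, y | z) ≤ 1. Distributions violating it cannot have been generated by that model. The distribution below is hypothetical.

```python
# Hypothetical conditional distributions P(X, Y | Z=z) over binary X, Y.
p_xy_given_z = {
    0: {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3},
    1: {(0, 0): 0.2, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.5},
}

def instrumental_inequality_holds(p):
    """Check Pearl's instrumental inequality: sum_y max_z P(x, y | z) <= 1."""
    zs = list(p)
    for x in (0, 1):
        total = sum(max(p[z][(x, y)] for z in zs) for y in (0, 1))
        if total > 1.0:
            return False   # incompatible with the instrumental-variable model
    return True

print(instrumental_inequality_holds(p_xy_given_z))
```

Such inequalities also translate directly into bounds on unmeasured causal effects, in the spirit the abstract describes.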

This article considers the problem of estimating the average controlled direct effect (ACDE) of a treatment on an outcome, in the presence of unmeasured confounders between an intermediate variable and the outcome. Such confounders render the direct effect unidentifiable even in cases where the total effect is unconfounded (hence identifiable). Kaufman et al. (2005, Statistics in Medicine 24, 1683–1702) applied linear programming software to find the minimum and maximum possible values of the ACDE for specific numerical data. In this article, we apply the symbolic Balke–Pearl (1997, Journal of the American Statistical Association 92, 1171–1176) linear programming method to derive closed-form formulas for the upper and lower bounds on the ACDE under various assumptions of monotonicity. These universal bounds enable clinical experimenters to assess the direct effect of treatment from observed data with minimum computational effort, and they further shed light on the sign of the direct effect and the accuracy of the assessments.
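A simplified sketch of the closed-form-bounds idea, using the classic Manski-style worst-case bounds on the average treatment effect rather than the article's ACDE formulas: with binary treatment X and outcome Y and no assumptions, the unobserved counterfactuals may take any value, yielding bounds of width exactly one. The inputs below are hypothetical.

```python
def ate_bounds(p_y1_x1, p_y1_x0, p_x1):
    """Worst-case (assumption-free) bounds on E[Y(1)] - E[Y(0)].

    p_y1_x1 : P(Y=1, X=1)   p_y1_x0 : P(Y=1, X=0)   p_x1 : P(X=1)
    """
    p_x0 = 1.0 - p_x1
    # Lower bound: untreated counterfactuals of the treated all equal 1, etc.
    lower = p_y1_x1 - (p_y1_x0 + p_x1)
    # Upper bound: the opposite worst case for the missing counterfactuals.
    upper = (p_y1_x1 + p_x0) - p_y1_x0
    return lower, upper

lo, hi = ate_bounds(p_y1_x1=0.3, p_y1_x0=0.2, p_x1=0.5)
print(lo, hi)   # the width hi - lo is always 1 without further assumptions
```

Monotonicity assumptions, as in the article, tighten such bounds and can pin down the sign of the effect even when its magnitude stays unidentified.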