Sparse Causal Network Estimation with Experimental Intervention
Causal Bayesian networks are graphically represented by directed acyclic graphs (DAGs). Learning causal Bayesian networks from data is a challenging problem due to the size of the space of DAGs, the acyclicity constraint placed on the graphical structures, and the presence of equivalence classes. Most existing methods for learning Bayesian networks are either constraint-based or score-based. In this dissertation, we develop new techniques for learning sparse causal Bayesian networks via regularization.
In the first part of the dissertation, we develop an L1-penalized likelihood approach with the adaptive lasso penalty to estimate the structure of causal Gaussian networks. An efficient blockwise coordinate descent algorithm, which takes advantage of the acyclic constraint, is proposed for seeking a local maximizer of the penalized likelihood. We establish that model selection consistency for causal network structures can be achieved with the adaptive lasso penalty and sufficient experimental interventions. Simulations are used to demonstrate the effectiveness of our method. In particular, our method shows satisfactory performance for DAGs with 200 nodes which have about 20,000 free parameters.
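To make the penalized-likelihood idea concrete, the following is a minimal sketch of a coordinate descent update with the adaptive lasso penalty for a single node's regression on its candidate parents. This is an illustration only, not the dissertation's algorithm: the function name, the ordinary-least-squares initialization of the adaptive weights, and the assumption of a fixed candidate parent set are all choices made here for exposition; the actual blockwise algorithm cycles over all nodes while enforcing acyclicity.

```python
import numpy as np

def adaptive_lasso_cd(X, y, lam, gamma=1.0, n_iter=100):
    """Coordinate descent for adaptive-lasso regression of one node on its
    candidate parents (an illustrative sketch; acyclicity enforcement across
    the whole DAG is not shown).

    X : (n, p) design matrix of candidate-parent observations
    y : (n,) observations of the child node
    lam : overall penalty level; gamma : adaptive-weight exponent
    """
    n, p = X.shape
    # Adaptive weights from an initial least-squares estimate: large initial
    # coefficients receive small penalties, small ones large penalties.
    beta_init = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j.
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r
            # Soft-thresholding with the coordinate-specific weight w[j];
            # coefficients whose signal falls below lam * w[j] are set to 0.
            beta[j] = np.sign(z) * max(abs(z) - lam * w[j], 0.0) / col_sq[j]
    return beta
```

On simulated data with a sparse true coefficient vector, the weighted soft-thresholding step zeroes out the spurious parents exactly while leaving the true edge weights nearly unbiased, which is the behavior the adaptive penalty is designed to deliver.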
In the second part, we perform a principled generalization of the methodology developed for Gaussian variables to discrete data types by replacing the linear model with the multi-logit model. We employ the adaptive group lasso penalty, which encourages a sparsity pattern at the factor level. Another blockwise coordinate descent algorithm is proposed to solve the corresponding optimization problem, and asymptotic theory parallel to that developed for Gaussian Bayesian networks is established.
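The key building block of such a blockwise update is the group-wise soft-thresholding operator, sketched below under the assumption of a quadratic approximation to the multi-logit likelihood (the function name and interface are illustrative, not taken from the dissertation). It shrinks an entire block of coefficients belonging to one factor toward zero, and sets the whole block exactly to zero when its norm falls below the penalty threshold, which is what produces sparsity at the factor rather than the individual-coefficient level.

```python
import numpy as np

def group_soft_threshold(z, thresh):
    """Group-wise soft-thresholding: shrink the coefficient block z by the
    factor (1 - thresh / ||z||), or zero it out entirely when ||z|| <= thresh.
    In an adaptive group lasso, thresh would carry a group-specific weight."""
    norm = np.linalg.norm(z)
    if norm <= thresh:
        return np.zeros_like(z)
    return (1.0 - thresh / norm) * z
```

For example, a block with norm 5 and threshold 1 is shrunk by the factor 0.8, while a block with norm below the threshold is removed outright, deleting that factor's edge from the network.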
Finally, we illustrate a real-world application of our penalized likelihood framework using a flow cytometry data set generated from a signaling network in human immune system cells.