eScholarship
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Advancing Neural Granger Causality and Penalization Techniques

Abstract

Thanks to its interpretability and the fact that it requires no experimental (interventional) data, Granger causality has become one of the most widely used tools in causal discovery. Focusing on temporal data, Granger causality evaluates whether one time series is predictive (Granger-causal) of another. However, traditional Granger causality rests on several assumptions: no unobserved confounders, stationarity, and linearity. Linearity is especially restrictive, since real-world time processes are rarely linear. Unlike nonstationarity, which can often be handled with established techniques such as the logarithmic transformation, relaxing linearity is a much more difficult task. To capture these complex relationships, researchers have proposed many methods, including additive nonlinear models and kernel-based approaches. Among them, structured neural networks are particularly powerful for preserving interpretability. A crucial component of these methods is the regularization (penalty), as it drives lag selection and ensures that related features are treated together. This thesis first reviews a number of nonlinear Granger causality methods (additive nonlinear Granger causality, independent innovation analysis (IIA), kernel Granger causality, and nonlinear permuted Granger causality) and then focuses on neural Granger causality. It also explores different penalties, including the hierarchical lasso, the group sparse group lasso, and the elastic net, which we illustrate on a data set of missing migrants.
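To give a concrete sense of how such penalties induce Granger-causal structure, the sketch below applies the proximal operator of a plain group lasso to the first-layer weight matrix of a per-target neural network, grouping all weights (across hidden units and lags) that belong to one input series. This is a minimal illustration, not the thesis's implementation: the column layout (series-major, lags contiguous), the function name, and the use of a single group-lasso step rather than the hierarchical or group sparse group lasso variants are all assumptions made for exposition.

```python
import numpy as np

def group_lasso_prox(W, lam, n_series, lag):
    """Proximal (group soft-thresholding) step for the group lasso penalty
    on a first-layer weight matrix W of shape (hidden_units, n_series * lag).

    Columns are assumed to be ordered series-major: all `lag` columns of
    series 0, then series 1, and so on. A group collects every weight
    attached to one input series; if the group's Frobenius norm falls
    below lam, the whole group is zeroed, which corresponds to declaring
    that series non-Granger-causal for the target.
    """
    W = W.copy()                         # do not mutate the caller's weights
    H = W.shape[0]                       # number of hidden units
    G = W.reshape(H, n_series, lag)      # split columns into per-series groups
    for j in range(n_series):
        norm = np.linalg.norm(G[:, j, :])
        if norm <= lam:
            G[:, j, :] = 0.0             # entire series pruned
        else:
            G[:, j, :] *= 1.0 - lam / norm   # shrink the surviving group
    return G.reshape(H, n_series * lag)

# Usage: two input series, three lags, two hidden units. Series 0 has
# strong weights and survives (shrunken); series 1 is weak and is zeroed.
W = np.zeros((2, 6))
W[:, :3] = 1.0       # series 0 weights
W[:, 3:] = 0.01      # series 1 weights
W_new = group_lasso_prox(W, lam=0.1, n_series=2, lag=3)
```

In the neural Granger causality setting this step would be interleaved with gradient updates (proximal gradient descent); the hierarchical lasso refines it further by nesting lag-level groups inside each series-level group, so that higher lags are zeroed before lower ones.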
