Non-asymptotic Analysis of Learning Long-range Autoregressive Generalized Linear Models for Discrete High-dimensional Data

Abstract

Fitting multivariate autoregressive (AR) models is fundamental to the analysis of time-series data in a wide range of applications in science, engineering, econometrics, signal processing, and data science. This dissertation considers the problem of learning a $p$-lag multivariate AR generalized linear model (GLM). In this model, the state of the time series at each time step, conditioned on the history, is drawn from an exponential family distribution whose mean parameter depends on a linear combination of the last $p$ states. The problem is to learn the linear connectivity tensor from a single observed trajectory of the time series. We provide non-asymptotic error bounds on the regularized maximum likelihood estimator in high dimensions.
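To fix notation, one plausible formalization of this model (illustrative only; it assumes states $x_t \in \mathbb{R}^d$ and a known link function $f$, neither of which is specified in the abstract) is
$$x_t \mid x_{t-1}, \dots, x_{t-p} \;\sim\; \mathrm{ExpFam}(\mu_t), \qquad \mu_t = f\!\Big(\sum_{k=1}^{p} A_k \, x_{t-k}\Big),$$
where the lag matrices $A_1, \dots, A_p \in \mathbb{R}^{d \times d}$ together form the connectivity tensor to be estimated.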

We focus on the sparse tensor setting, which arises in applications where only a limited number of direct connections exist between variables. For such problems, $\ell_1$-regularized maximum likelihood estimation (or, more generally, M-estimation) is straightforward to apply and works well in practice. The M-estimator can be posed as a convex optimization problem and hence solved efficiently; a sketch of one such estimator follows.
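As a concrete, purely illustrative instance, the sketch below fits a Poisson AR($p$) GLM with exponential link by proximal gradient descent on the $\ell_1$-regularized negative log-likelihood. The choice of distribution, link, step size, and all function names here are assumptions for illustration, not the dissertation's prescribed method.

```python
import numpy as np

def fit_sparse_ar_glm(X, p, lam, lr=1e-3, n_iters=2000):
    """l1-regularized MLE for a Poisson AR(p) GLM via proximal gradient descent.

    X   : (T, d) array, one observed trajectory of d-dimensional counts
    p   : number of lags
    lam : l1 regularization weight
    Returns the estimated connectivity tensor A of shape (p, d, d).
    """
    T, d = X.shape
    A = np.zeros((p, d, d))
    for _ in range(n_iters):
        grad = np.zeros_like(A)
        for t in range(p, T):
            hist = X[t - p:t][::-1]                 # hist[k] = x_{t-1-k}
            eta = np.einsum('kij,kj->i', A, hist)   # linear predictor
            mu = np.exp(eta)                        # Poisson mean (exponential link)
            resid = mu - X[t]                       # d(NLL)/d(eta) for Poisson
            grad += np.einsum('i,kj->kij', resid, hist)
        A = A - lr * grad / (T - p)                 # gradient step on the smooth part
        A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # soft-threshold: prox of l1
    return A
```

For example, `A_hat = fit_sparse_ar_glm(X, p=2, lam=0.1)` produces a sparse estimate from a single trajectory `X`. The soft-thresholding step is the proximal operator of the $\ell_1$ penalty and is what drives entries of the estimated tensor exactly to zero.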

However, the statistical analysis of such methods is difficult due to the feedback in the state dynamics and the presence of a non-linear link function, especially when the underlying process is non-Gaussian. Our main result in Chapter 3 provides a bound on the mean-squared error of the estimated connectivity tensor as a function of the sparsity and the number of samples, for a class of discrete multivariate AR($p$) GLMs in the high-dimensional regime. Importantly, the bound shows that, with sufficient sparsity, consistent estimation is possible even when the number of samples is significantly smaller than the total number of free parameters. Towards proving the main result, we present a general framework for establishing the Restricted Strong Convexity (RSC) property of the time-averaged loss functions that arise in time-series analysis. We also derive new concentration inequalities for functions of discrete non-Markovian random variables. These intermediate results may be of independent interest.
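For orientation, RSC takes the following standard form in the high-dimensional M-estimation literature (following Negahban et al., 2012; the curvature and tolerance terms established in the dissertation may differ): a loss $\mathcal{L}$ satisfies RSC at the true parameter $\theta^*$ if, for all perturbations $\Delta$ in a suitable restricted set,
$$\mathcal{L}(\theta^* + \Delta) - \mathcal{L}(\theta^*) - \langle \nabla \mathcal{L}(\theta^*), \Delta \rangle \;\geq\; \kappa \|\Delta\|_2^2 - \tau(\theta^*),$$
where $\kappa > 0$ is a curvature constant and $\tau(\theta^*) \geq 0$ a tolerance term. Establishing such a lower bound is the key step that converts optimality of the convex M-estimator into an error bound on $\hat{\theta} - \theta^*$.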
