Scalable Quantile Learning
- Pan, Xiaoou
- Advisor(s): Zhou, Wenxin
Abstract
Quantile regression (QR) is a powerful tool for learning the relationship between a continuous outcome and a set of covariates while exploring heterogeneous effects. This dissertation focuses on statistical learning (estimation and inference) in the increasing-dimensional regime with random designs, where the outcome may be subject to random censoring. We provide a comprehensive analysis of three problems: (i) classical QR and multiplier bootstrap inference; (ii) QR via a convolution-based smoothing approach that achieves an adequate approximation while enabling fast computation and inference; (iii) censored QR via smoothed martingale-based sequential estimating equations, together with its regularized counterpart in the high-dimensional regime. The unifying principle of these methods is to turn the non-differentiable check function into a twice-differentiable, globally convex and locally strongly convex surrogate, which admits fast and scalable gradient-based optimization algorithms. For all the aforementioned tasks, we establish explicit non-asymptotic bounds on estimation and Bahadur–Kiefer linearization errors, from which we show that asymptotic normality holds when the covariate dimension grows with the sample size at a sublinear rate. In particular, a uniform convergence rate (over a range of quantile indices) and weak convergence are established for the censored quantile regression process. The multiplier bootstrap inference, as a companion, is also rigorously justified for all the problems. Extensive numerical experiments confirm the computational scalability and reliability of our methods on large-scale data, and demonstrate their advantages over existing approaches.
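To make the smoothing idea concrete, here is a minimal sketch of convolution-smoothed quantile regression with a Gaussian kernel, fitted by plain gradient descent. The function name `smoothed_qr`, the bandwidth heuristic, the step size, and the simulated data are all illustrative assumptions, not the dissertation's exact algorithm or code.

```python
# Sketch of convolution-smoothed quantile regression (Gaussian kernel).
# Smoothing the check loss rho_tau(u) = u * (tau - 1{u < 0}) with a Gaussian
# kernel of bandwidth h gives the twice-differentiable convex surrogate
#   l_h(u) = u * (tau - Phi(-u/h)) + h * phi(u/h),
# whose derivative is tau - Phi(-u/h), so gradient descent applies directly.
import numpy as np
from scipy.stats import norm

def smoothed_qr(X, y, tau=0.5, h=None, lr=0.5, n_iter=500):
    """Minimize (1/n) * sum_i l_h(y_i - x_i' beta) by gradient descent."""
    n, p = X.shape
    if h is None:
        # Heuristic bandwidth shrinking with n (an assumption, not the
        # dissertation's prescribed rule).
        h = max(0.05, np.sqrt(tau * (1 - tau)) * (p / n) ** 0.25)
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # Gradient of the smoothed loss: -(1/n) * X' (tau - Phi(-r/h)).
        grad = -X.T @ (tau - norm.cdf(-r / h)) / n
        beta -= lr * grad
    return beta

# Illustrative usage on simulated data: at tau = 0.5 with symmetric noise,
# the estimate should recover the true coefficients.
rng = np.random.default_rng(0)
n, p = 5000, 10
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = rng.uniform(-1, 1, p)
y = X @ beta_true + rng.standard_normal(n)
print(np.round(smoothed_qr(X, y, tau=0.5) - beta_true, 2))  # entries near 0
```

Because the surrogate is globally convex and smooth, the same gradient loop scales to much larger n and p, which is the computational point the abstract makes; the dissertation's actual algorithms and bandwidth choices may differ.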