
UC Irvine Electronic Theses and Dissertations

Randomized Fast Solvers for Linear and Nonlinear Problems in Data Science

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

We construct a preconditioner for solving linear least squares problems, which are among the simplest and most widely used problems arising in data fitting, image processing, and high-dimensional data analysis. Existing methods for solving least squares problems either incur a large computational cost or depend strongly on the condition number of the matrix. Recently, there has been a surge of interest in developing randomized algorithms for solving least squares problems for the sake of efficiency and scalability. We construct a new preconditioner, equipped with a sampling procedure to reduce computational complexity, and apply Gauss-Seidel iterations to capture the high-frequency components of the solution, which reduces the dependence of performance on the condition number. Experimental studies comparing with the Conjugate Gradient (CG) method are presented on six different simulations, including a dense Gaussian matrix, a 'semi-Gaussian' matrix, a sparse random matrix, a 'UDV' matrix, a graph Laplacian matrix, and a non-negative constrained problem, to show the effectiveness of the proposed preconditioner.
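
To make the setting concrete, the following is a minimal sketch, not the thesis's construction: it solves the normal equations $A^T A x = A^T b$ with SciPy's conjugate gradient solver, using one symmetric Gauss-Seidel sweep as the preconditioner. The sampling procedure of the proposed preconditioner is omitted, and the matrix sizes are purely illustrative.

import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, cg

def symmetric_gauss_seidel_preconditioner(AtA):
    # One symmetric Gauss-Seidel sweep: M = (D + L) D^{-1} (D + U), applied as
    # M^{-1} r = (D + U)^{-1} D (D + L)^{-1} r via two triangular solves.
    L = np.tril(AtA)                       # D plus strict lower triangle
    U = np.triu(AtA)                       # D plus strict upper triangle
    d = np.diag(AtA)

    def apply(r):
        y = solve_triangular(L, r, lower=True)
        y = d * y
        return solve_triangular(U, y, lower=False)

    n = AtA.shape[0]
    return LinearOperator((n, n), matvec=apply)

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100))        # illustrative overdetermined system
b = rng.standard_normal(500)

AtA, Atb = A.T @ A, A.T @ b
M = symmetric_gauss_seidel_preconditioner(AtA)
x, info = cg(AtA, Atb, M=M)                # preconditioned conjugate gradient
print(info, np.linalg.norm(AtA @ x - Atb)) # info is 0 on success; residual norm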

A general scheme for solving unconstrained convex and smooth minimization problems $\min_{x \in \mathcal V} f(x)$ is developed in this thesis. The scheme performs gradient descent on each subspace of a stable space decomposition of $\mathcal V$. Under assumptions of Lipschitz continuity of the gradient on $\mathcal V$ and its subspaces, together with convexity or strong convexity on $\mathcal V$, we prove linear convergence for strongly convex objective functions under both non-uniform and uniform sampling. For non-uniform sampling, the convergence rate depends on the expected condition number, and for uniform sampling, it depends on the supremum of the condition numbers. Moreover, we also show sublinear convergence for convex functions. Numerical examples on Nesterov's worst-case function and on linear regression both outperform the randomized coordinate descent method. We conclude that our scheme generalizes gradient descent methods, randomized (block) coordinate descent methods, and the full approximation scheme.
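
As an illustration of this structure, here is a minimal sketch, not the thesis's exact method: randomized block coordinate gradient descent for a least squares objective $f(x) = \frac{1}{2}\|Ax - b\|^2$, where the space decomposition of $\mathcal V = \mathbb{R}^n$ is a simple partition into coordinate blocks and blocks are sampled non-uniformly in proportion to their block-wise Lipschitz constants $L_i = \|A_i\|_2^2$ (choices made here only for illustration).

import numpy as np

rng = np.random.default_rng(1)
m, n, k = 300, 60, 6                       # problem size and number of blocks
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

blocks = np.array_split(np.arange(n), k)   # space decomposition: coordinate blocks
L_blocks = [np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]
probs = np.array(L_blocks) / sum(L_blocks) # non-uniform sampling proportional to L_i

x = np.zeros(n)
for it in range(5000):
    i = rng.choice(k, p=probs)             # sample one subspace (block)
    blk = blocks[i]
    grad_blk = A[:, blk].T @ (A @ x - b)   # gradient restricted to that subspace
    x[blk] -= grad_blk / L_blocks[i]       # gradient descent step on the subspace

print(np.linalg.norm(A.T @ (A @ x - b)))   # residual of the normal equations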
