First-Order Methods for Trace Norm Minimization
Minimizing the trace norm (the sum of the singular values) of a matrix has become a popular convex heuristic for computing low-rank approximations, with numerous applications in control, system theory, statistics, and machine learning. However, these matrix optimization problems are usually too expensive to solve with second-order methods, such as interior-point methods, because the problems are relatively large in scale. In this thesis, we compare several first-order methods for nondifferentiable convex optimization based on splitting algorithms, applied to primal, dual, or primal-dual optimality conditions. The implementation aspects of the algorithms are discussed in detail, and their performance is compared in experiments with randomly generated data as well as system identification test sets. We show that on large-scale problems, several of the first-order methods reach a modest accuracy in less time than interior-point solvers. Based on the experiments, the most promising methods are the alternating direction method of multipliers (ADMM) and the Pock-Chambolle semi-implicit algorithm.
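As a minimal illustration of the two ingredients named above, the sketch below computes the trace (nuclear) norm of a matrix from its singular values, and implements singular value thresholding, the proximal operator of the trace norm that serves as the basic building block of splitting methods such as ADMM. This is an illustrative NumPy sketch, not the thesis's implementation; the function names are chosen here for exposition.

```python
import numpy as np

def trace_norm(A):
    """Trace (nuclear) norm: the sum of the singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()

def svt(A, tau):
    """Singular value thresholding: prox of tau * trace norm.

    Soft-thresholds the singular values of A by tau, which is the
    per-iteration subproblem solved inside splitting algorithms
    (e.g., ADMM) applied to trace norm minimization.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Example: shrinking a diagonal matrix shrinks each singular value.
A = np.diag([3.0, 1.0, 0.2])
print(trace_norm(A))      # 4.2
print(svt(A, 0.5))        # diag(2.5, 0.5, 0.0)
```

Note that each call to `svt` requires a full singular value decomposition, which is why the per-iteration cost of these first-order methods is dominated by the SVD rather than by a linear system solve as in interior-point methods.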