eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Decomposition methods for semidefinite optimization

Abstract

Semidefinite optimization problems (SDPs) arise in many applications, including combinatorial optimization, control and signal processing, structural optimization, statistics, and machine learning. Currently, most SDPs are solved using interior-point methods. These methods are typically robust and accurate, and converge in a small number of iterations. However, their per-iteration cost can be high: at each iteration, an interior-point method solves a large and generally dense system of linear equations, and this limits the scalability of these methods. In contrast, first-order methods may require many iterations to converge and typically reach much lower accuracy, but their low per-iteration complexity and modest memory requirements allow them to scale to much larger problems. However, both interior-point and first-order methods have difficulty exploiting sparsity in SDPs, because the matrix inequality constraint introduces a nonlinear coupling between all the entries of the matrix variable.

In this thesis, we present decomposition methods for sparse semidefinite optimization. The techniques exploit partial separability properties of cones of matrices with a chordal sparsity pattern and a positive semidefinite completion; they can be applied to general sparsity patterns via chordal extensions. Partial separability allows us to break the large semidefinite constraint into a set of smaller constraints that can be handled by decomposition and splitting algorithms. We first use these methods to solve large sparse matrix nearness problems, in which a large symmetric matrix is projected onto the set of sparse matrices with a positive semidefinite or Euclidean distance matrix completion. Second, we present a method that combines a proximal splitting method with an interior-point method to solve large linear SDPs. In both cases, the decomposition techniques are shown to exploit sparsity effectively in very large problems and to offer a significant reduction in memory and runtime over existing methods.
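As a brief sketch of the underlying idea (in notation assumed here, not quoted from the thesis), the decomposition rests on the classical chordal completion theorem of Grone, Johnson, Sá, and Wolkowicz: a symmetric matrix whose entries are specified on a chordal sparsity pattern has a positive semidefinite completion if and only if every principal submatrix indexed by a maximal clique of the sparsity graph is positive semidefinite.

% Assumed notation: E is a chordal sparsity pattern with maximal cliques
% gamma_1, ..., gamma_l, and Pi_E(S^n_+) is the cone of matrices with
% pattern E that have a positive semidefinite completion.
\[
X \in \Pi_E\bigl(\mathbf{S}^n_+\bigr)
\quad\Longleftrightarrow\quad
X_{\gamma_k \gamma_k} \succeq 0, \qquad k = 1, \dots, l.
\]

Each constraint on the right involves only a small dense clique submatrix, which is what makes the large semidefinite constraint partially separable and allows splitting and decomposition algorithms to handle the cliques independently.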
