This thesis is concerned with the design and analysis of algorithms that solve nonsmooth convex optimization problems in (possibly infinite-dimensional) Hilbert spaces. There are many algorithms available to solve such problems, but the methods detailed in this thesis are particularly well-suited for solving complicated problems that are built from many simpler pieces. There is a wealth of applications in which such structure is present, and this has driven the recent resurgence of interest in so-called operator-splitting methods. These splitting methods completely disentangle complex problem structure and give rise to algorithms that repeatedly solve a series of simpler subproblems, either sequentially or in parallel. Such algorithms are easy to implement, they have low per-iteration cost, and in practice they are observed to converge quickly to solutions of modest accuracy. These qualities make splitting methods attractive options for solving large-scale problems in machine learning and signal processing.
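To make the pattern concrete, the following is a minimal illustrative sketch (not a method developed in this thesis) of one classical splitting scheme, forward-backward splitting, applied to the lasso problem min_x (1/2)||Ax - b||^2 + lam*||x||_1. Each iteration alternates a gradient step on the smooth term with a proximal step on the nonsmooth term; all variable names and parameter values here are hypothetical choices for illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, step, n_iters=500):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    The iteration splits the objective into two simple pieces:
    a forward (gradient) step on the smooth least-squares term,
    then a backward (proximal) step on the nonsmooth l1 term.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x

# Illustrative data: a sparse signal recovered from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/L ensures convergence
x_hat = proximal_gradient_lasso(A, b, lam=0.1, step=step)
```

Note that each subproblem (a gradient evaluation and a componentwise shrinkage) is trivial on its own; the splitting framework is what assembles them into a convergent method for the composite problem.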
Although splitting algorithms are known to converge, a general theoretical analysis of their convergence rates has remained elusive since their inception in the 1950s. Furthermore, since 2000, no new splitting algorithms have been developed that do not reduce to one of three general methods for solving monotone inclusion problems. The purpose of this thesis is to address these theoretical challenges by deriving sharp convergence rates for existing splitting algorithms and by developing a new splitting method that does not appear to reduce to any existing method. The analysis presented is notably simple and applies in a wide range of settings.