Optimization is among the richest modeling languages in science. In statistics and machine learning, for instance, inference is typically posed as an optimization problem. While there are many algorithms designed to solve optimization problems, and a seemingly greater number of convergence proofs, essentially all proofs follow a classical approach from dynamical systems theory: they present a Lyapunov function and show it decreases. The primary goal of this thesis is to demonstrate that making the Lyapunov argument explicit greatly simplifies, clarifies, and, to a certain extent, unifies convergence theory for optimization.
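To fix ideas, here is a minimal instance of such an argument, using gradient flow for a differentiable convex function $f$ with minimizer $x^\ast$; it is a standard illustration rather than one of the constructions developed later. Along the trajectory $\dot{X}_t = -\nabla f(X_t)$, consider the candidate Lyapunov function
\[
\mathcal{E}_t = t\bigl(f(X_t) - f(x^\ast)\bigr) + \tfrac{1}{2}\|X_t - x^\ast\|^2 .
\]
Differentiating along the trajectory and using convexity, $f(X_t) - f(x^\ast) \le \langle \nabla f(X_t), X_t - x^\ast \rangle$, gives
\[
\frac{d}{dt}\mathcal{E}_t
= f(X_t) - f(x^\ast) - t\|\nabla f(X_t)\|^2 - \langle \nabla f(X_t), X_t - x^\ast \rangle
\le -t\|\nabla f(X_t)\|^2 \le 0 ,
\]
so $\mathcal{E}_t \le \mathcal{E}_0$ and the convergence rate $f(X_t) - f(x^\ast) \le \|X_0 - x^\ast\|^2/(2t)$ follows immediately.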
The central contributions of this thesis are the following. We
• present several variational principles whereby we obtain continuous-time dynamical systems useful for optimization;
• introduce Lyapunov functions for both the continuous-time dynamical systems and the discrete-time algorithms, and demonstrate how to pass between these Lyapunov functions (an elementary instance is sketched after this list);
• utilize the Lyapunov framework, together with techniques from numerical analysis and numerical integration, to obtain upper bounds on the convergence rates of several novel discrete-time optimization methods; for a few of these, the upper bounds are matched by lower bounds.
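As an elementary illustration of the second item, consider gradient descent $x_{k+1} = x_k - s\nabla f(x_k)$, the forward-Euler discretization of gradient flow, applied to a convex $L$-smooth function $f$ with step size $s \le 1/L$; the Lyapunov function and rate below are standard and are meant only to indicate the pattern, not the specific methods analyzed in this thesis. The discrete analogue of the continuous-time Lyapunov function above,
\[
E_k = ks\bigl(f(x_k) - f(x^\ast)\bigr) + \tfrac{1}{2}\|x_k - x^\ast\|^2 ,
\]
satisfies $E_{k+1} \le E_k$ (by the descent lemma and convexity), and therefore
\[
f(x_k) - f(x^\ast) \le \frac{\|x_0 - x^\ast\|^2}{2ks} .
\]
Translating a continuous-time Lyapunov function and rate into a discrete statement of this form is the kind of passage between Lyapunov functions described in the second item.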