Hybrid Methods for Optimization with High Performance and Robustness
- Hustig-Schultz, Dawn
- Advisor(s): Sanfelice, Ricardo G.
Abstract
Optimization has valuable applications in many areas of technology, including smart grids, transportation systems, multiagent systems, wireless sensor and communications networks, and deep learning. This dissertation focuses on developing hybrid algorithms for accelerated gradient descent, for both convex and nonconvex objective functions, with fast convergence, stability, and robustness.

The first two algorithms, developed using hybrid system tools, feature a uniting control strategy in which two standard heavy ball algorithms, one used globally and another used locally, with properly designed gravity and friction parameters, are employed. The proposed hybrid control strategy switches the parameters to converge quickly, and without oscillations, to the minimizer of a nonstrongly convex objective function. One algorithm implements a switching strategy that measures both the objective function and its gradient; the other measures only the gradient. Key properties of the resulting closed-loop systems, including existence of solutions, asymptotic stability, and convergence rate, are analyzed.

The next two algorithms, one for strongly convex objective functions and the other for nonstrongly convex objective functions, also employ a uniting control strategy, in which Nesterov's accelerated gradient descent is used ``globally'' and the heavy ball method is used ``locally,'' relative to the minimizer. The proposed hybrid control strategy switches between these accelerated methods to ensure convergence to the set of minimizers without oscillations, with a (hybrid) convergence rate that preserves the convergence rates of the individual optimization algorithms. We analyze key properties of the resulting closed-loop system, including existence of solutions, uniform global asymptotic stability, and convergence rate.

Building on these uniting algorithms, a general uniting framework is designed that allows any accelerated gradient method to be used as the global or local algorithm. Sufficient conditions are determined that yield general results on well-posedness, existence of solutions, and uniform global asymptotic stability for the hybrid closed-loop framework.

We then propose a hybrid algorithm for optimization that ensures convergence to a local minimizer of a nonconvex Morse objective function $L$ with a single, scalar argument. Developed using hybrid system tools and based on the heavy ball method, the algorithm features switching strategies that detect whether the state is near a critical point and enable escape from local maximizers, using measurements of the gradient of $L$. Key properties of the resulting closed-loop system, including existence of solutions and practical global attractivity, are established.

In addition, we propose a totally asynchronous multiagent algorithm, based on the heavy ball method, that guarantees fast convergence to the minimizer of a $\mathcal{C}^2$, convex objective function. The algorithm is parallelized in the sense that the decision variable is partitioned into blocks, each of which is updated only by a single agent. We consider two types of asynchrony: in agents' computations and in communication between agents. We show that, for certain parameter values, the heavy ball algorithm converges monotonically to a minimizer, even under asynchrony. Key properties of the algorithm, including existence of solutions, convergence rate, and asymptotic stability, are analyzed. Numerical results validate the findings.
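To make the uniting idea concrete, the following is a minimal discrete-time sketch of gradient-norm-based switching between two heavy ball parameter tunings. It is an illustration only: the dissertation's algorithms are formulated in continuous time using hybrid system tools, and every name and value here (`lam_global`, `beta_global`, the threshold `eps`, and the test function) is a hypothetical choice, not the author's notation or design conditions.

```python
import numpy as np

def uniting_heavy_ball(grad, x0, lam_global=0.05, beta_global=0.7,
                       lam_local=0.1, beta_local=0.1,
                       eps=1e-2, tol=1e-4, max_iter=20_000):
    """Heavy ball with a hypothetical gradient-norm switching rule:
    low-friction 'global' parameters far from the minimizer for speed,
    high-friction 'local' parameters near it to suppress oscillations."""
    x_prev, x = x0, x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Switch using only gradient measurements, as in the second
        # algorithm summarized above (illustrative threshold rule).
        if np.linalg.norm(g) > eps:
            lam, beta = lam_global, beta_global
        else:
            lam, beta = lam_local, beta_local
        # Heavy ball update: gradient step plus momentum (inertia) term.
        x_prev, x = x, x - lam * g + beta * (x - x_prev)
    return x

# Example on a nonstrongly convex function, f(x) = x^4:
x_star = uniting_heavy_ball(lambda x: 4.0 * x**3, np.array([1.0]))
```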
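The multiagent algorithm can be pictured in a similarly simplified way. The sketch below is again a hypothetical discrete-time caricature rather than the dissertation's update law: the decision variable is partitioned into blocks, and each agent skips computations at random to mimic asynchrony in computation (communication delays between agents are not modeled).

```python
import numpy as np

def async_block_heavy_ball(grad, x0, blocks, lam=0.1, beta=0.5,
                           update_prob=0.7, iters=5000, seed=0):
    """Block-partitioned heavy ball: each agent owns one block and
    updates it only on a random subset of iterations (toy asynchrony)."""
    rng = np.random.default_rng(seed)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        g = grad(x)
        x_next = x.copy()
        for blk in blocks:
            if rng.random() < update_prob:  # agent may be "busy" this round
                x_next[blk] = (x[blk] - lam * g[blk]
                               + beta * (x[blk] - x_prev[blk]))
        x_prev, x = x, x_next
    return x

# Example: two agents, each owning one coordinate of a convex quadratic.
Q = np.diag([1.0, 4.0])
blocks = [slice(0, 1), slice(1, 2)]
x_min = async_block_heavy_ball(lambda x: Q @ x, np.array([3.0, -2.0]), blocks)
```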