eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Learning to Optimize with Guarantees

Abstract

Machine learning (ML) has evolved dramatically over recent decades, from relative infancy to a practical technology in widespread commercial use. Over the same period, applications were increasingly modeled as large-scale optimization problems, reviving interest in first-order optimization methods due to their low per-iteration cost and memory requirements. ML enables computers to improve automatically through experience. At first glance, this may appear to signal a pending end to hand-crafted optimization modeling. Yet optimization can provide intuition for effective ML model design. Moreover, some ML models cannot be considered trustworthy without guarantees (e.g., satisfaction of provided constraints) and/or explanations of their behavior. Design intuition from optimization led to experiments unrolling optimization algorithms into ML model architectures via what is now known as the "learn to optimize" (L2O) paradigm. L2O models generalize hand-crafted optimization for use with big data and provide promising numerical results.
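The unrolling idea mentioned above can be made concrete with a small, generic sketch (not code from the dissertation): a few gradient-descent iterations on a least-squares problem are written out as "layers," where the per-layer step sizes stand in for parameters that an L2O model would learn from data. The objective, problem dimensions, and step-size choice below are illustrative assumptions only.

```python
# Minimal sketch of unrolling an optimization algorithm into network-like layers.
# Assumed setup (illustrative, not from the dissertation):
#     minimize over x:  0.5 * ||A x - b||^2
# Each unrolled gradient step plays the role of one layer; the per-layer step
# sizes `theta` are the quantities an L2O model would learn (e.g., by
# backpropagating a loss through the unrolled iterations).

import numpy as np

def unrolled_gradient_descent(A, b, theta, x0=None):
    """Apply len(theta) unrolled gradient steps; each step is one 'layer'."""
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for step_size in theta:              # one loop iteration == one layer
        grad = A.T @ (A @ x - b)         # gradient of 0.5 * ||A x - b||^2
        x = x - step_size * grad         # classic gradient-descent update
    return x

# Tiny synthetic instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

# Fixed step sizes stand in for learned parameters; 1/L with L = ||A||_2^2
# guarantees the hand-crafted baseline behaves sensibly.
theta = [1.0 / np.linalg.norm(A, ord=2) ** 2] * 10
x_hat = unrolled_gradient_descent(A, b, theta)
print("residual norm:", np.linalg.norm(A @ x_hat - b))
```

In an actual L2O model, the loop above would be implemented in an automatic-differentiation framework so that `theta` (and possibly richer per-layer operators) can be trained end to end on a distribution of problem instances.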

This work investigates L2O theory and implementations. Chiefly, we study L2O convergence guarantees, the training of L2O models, how to interweave prior and data-driven knowledge, and how to certify that L2O inferences comply with desired specifications. Thus, our core contribution is a fusion of optimization and machine learning, merging desirable properties from each field: model intuition, guarantees, practical performance, and the ability to leverage big data. A progression of novel frameworks and theory is developed herein for L2O models, extending prior work in multiple directions.
