Learning Robust Models for Control: Tradeoffs, Fundamental Insights, and Benchmarking Control Design

License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Abstract

Optimizing the performance of machine learning models while maintaining robustness against perturbations is a fundamental challenge in machine learning. The performance of a model refers to its capacity to execute a desired task, such as classification, prediction, or generation. Its robustness refers to its capacity to maintain consistent and reliable performance when encountering perturbed data or data generated under unforeseen conditions.

This thesis investigates the inherent tradeoff between performance and robustness in both classification and control learning problems. Our contribution is threefold.

First, we formally show that machine learning models optimized for performance tend to exhibit reduced robustness against adversarial manipulation of the data. Our results suggest that this tradeoff is fundamental in nature: it is rooted in the way the data is drawn and does not depend on the complexity of the learning model itself.

Second, we leverage the insights from this characterization of the tradeoff to establish a benchmark for learning controllers. In particular, we introduce a robust feedback control policy learning framework based on Lipschitz-constrained loss minimization, where the feedback policies are learned directly from expert demonstrations. Our work integrates robust learning, optimal control, and robust stability into a unified framework, enabling the learning of controllers that prioritize both performance and robustness.

Finally, we revisit the linear quadratic Gaussian (LQG) optimal control problem from the perspective of input-output behaviors, deriving a direct data-driven expression for the optimal LQG controller using a dataset of input, state, and output trajectories. We show that our data-driven expression is consistent, in that it converges as the number of experimental trajectories increases; we characterize its convergence rate and quantify its error as a function of the system and data properties. This analysis highlights the limitations and challenges posed by noisy data and unknown system dynamics in learning control problems.
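
To make the Lipschitz-constrained imitation-learning idea concrete, below is a minimal sketch, not the dissertation's implementation, of behavior cloning with a Lipschitz-bounded policy in PyTorch. Spectral normalization caps the spectral norm of each linear layer at one, so the end-to-end Lipschitz constant of the ReLU network is at most one. The dimensions and the synthetic expert data are illustrative placeholders.

```python
# Hypothetical sketch: behavior cloning with a Lipschitz-constrained policy.
# Spectral normalization bounds each layer's spectral norm by 1, so the
# composed ReLU network is 1-Lipschitz from states to actions.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

state_dim, action_dim = 4, 1  # assumed dimensions for illustration

policy = nn.Sequential(
    spectral_norm(nn.Linear(state_dim, 64)),
    nn.ReLU(),
    spectral_norm(nn.Linear(64, action_dim)),
)

# Synthetic stand-in for expert demonstrations (state, action) pairs.
states = torch.randn(256, state_dim)
expert_actions = torch.randn(256, action_dim)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(policy(states), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Bounding the policy's Lipschitz constant limits how much the commanded action can change under input perturbations, which is one standard route to trading a small amount of imitation accuracy for robustness of the closed loop.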
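
For the LQG contribution, the dissertation derives a direct data-driven expression for the controller, which is not reproduced here. As a point of reference, the sketch below shows the standard indirect (certainty-equivalence) pipeline such expressions are typically contrasted with: fit (A, B, C) by least squares from an input-state-output trajectory, then solve the two dual discrete algebraic Riccati equations for the LQR gain and the steady-state Kalman gain. All system matrices, noise levels, and cost weights are illustrative assumptions.

```python
# Hypothetical sketch: certainty-equivalence LQG from trajectory data.
# This is the standard two-step baseline (least-squares identification,
# then Riccati), NOT the dissertation's direct data-driven expression.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# "Unknown" true system, used here only to generate the toy dataset.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Collect one input-state-output trajectory with exploratory inputs.
T = 500
x = np.zeros((T + 1, 2))
u = rng.standard_normal((T, 1))
y = np.zeros((T, 1))
for t in range(T):
    y[t] = C @ x[t] + 0.01 * rng.standard_normal(1)
    x[t + 1] = A @ x[t] + B @ u[t] + 0.01 * rng.standard_normal(2)

# Least-squares estimates: x_{t+1} = A x_t + B u_t and y_t = C x_t.
Z = np.hstack([x[:T], u])                      # regressors [x_t, u_t]
AB, *_ = np.linalg.lstsq(Z, x[1:T + 1], rcond=None)
A_hat, B_hat = AB.T[:, :2], AB.T[:, 2:]
C_hat, *_ = np.linalg.lstsq(x[:T], y, rcond=None)
C_hat = C_hat.T

# Certainty-equivalent LQG: LQR gain K and steady-state Kalman gain L
# from the two dual discrete algebraic Riccati equations.
Q, R = np.eye(2), np.eye(1)                    # assumed cost weights
W, V = 1e-4 * np.eye(2), 1e-4 * np.eye(1)      # assumed noise covariances
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
S = solve_discrete_are(A_hat.T, C_hat.T, W, V)
L = A_hat @ S @ C_hat.T @ np.linalg.inv(C_hat @ S @ C_hat.T + V)
# The resulting controller is the observer x_hat' = A x_hat + B u + L (y - C x_hat)
# in feedback with u = -K x_hat.
```

A direct data-driven expression bypasses the explicit identification step, and the abstract's consistency and convergence-rate results quantify how its error shrinks as more experimental trajectories become available.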
