UCLA Electronic Theses and Dissertations

Provably correct optimization and estimation: continuous, discrete, and dynamical

Abstract

Many engineering tasks are, at their core, a series of large, high-stakes decision-making problems. We are faced with some issue to resolve, then asked to incorporate all of the relevant information, context, and data to produce the best solution possible. Often we can only determine in hindsight whether we made an optimal choice, but we take comfort in knowing that every decision we have made so far is the best possible given all of our current information.

When synthesizing the information, context, and data into a decision-making problem, we are implicitly creating an optimization problem, asking ourselves to determine the option that satisfies our specifications while performing best according to some chosen metric. While many tasks may be framed this way, the resulting optimization problems are not necessarily computationally tractable.

The first part of this thesis considers one such class of typically intractable optimization problems: problems with joint continuous and discrete decision variables. This class of problems is NP-hard to solve in general, but we show that by leveraging submodularity, a property of functions over partially-ordered sets, we can identify a new special subclass for which we provide provably exact algorithms that run in polynomial time. In the larger decision-making context, this result should give the trepidatious engineer confidence that, given all the data, constraints, and information they have at the current moment, they have selected the best possible solution.
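
To make the submodularity property concrete, the following minimal sketch, written for the classic set-function special case (the thesis works over more general partially-ordered sets), checks the diminishing-returns inequality on a small, hypothetical coverage objective; the ground set, items, and objective are illustrative assumptions, not examples from the thesis.

```python
# Diminishing-returns check for a toy coverage objective (illustrative only;
# the thesis's setting over general partially-ordered sets is not captured here).
from itertools import combinations

GROUND_SETS = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}  # hypothetical items

def f(S):
    """Coverage value: number of ground elements covered by the items in S."""
    covered = set()
    for item in S:
        covered |= GROUND_SETS[item]
    return len(covered)

def is_submodular(items):
    """Check f(A + x) - f(A) >= f(B + x) - f(B) for all A <= B and x not in B."""
    items = set(items)
    subsets = [set(c) for r in range(len(items) + 1)
               for c in combinations(sorted(items), r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for x in items - B:
                if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                    return False
    return True

print(is_submodular(GROUND_SETS))  # True: coverage functions are submodular
```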

Many optimization problems still fall outside of this special subset, where the functions involved are not submodular. We address part of this issue by showing how some problems outside this class--in particular, quadratic optimization problems with combinatorial regularizers--may be approximately solved by instead solving a suitably chosen surrogate problem from within our previously identified subset. The suboptimality of this approach is then naturally bounded by the distance between the original optimization problem and our class of submodular ones.
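
A generic version of this distance-to-surrogate argument, shown only as a sketch (the symbols f, \tilde f, \hat{x}, x^*, and the uniform norm are illustrative assumptions and may differ from the thesis's precise statement), is:

```latex
% Sketch: if \hat{x} exactly minimizes the submodular surrogate \tilde f while
% x^* minimizes the original objective f, then (all symbols illustrative)
f(\hat{x}) - f(x^*)
  = \underbrace{f(\hat{x}) - \tilde f(\hat{x})}_{\le \, \|f - \tilde f\|_\infty}
  + \underbrace{\tilde f(\hat{x}) - \tilde f(x^*)}_{\le \, 0}
  + \underbrace{\tilde f(x^*) - f(x^*)}_{\le \, \|f - \tilde f\|_\infty}
  \;\le\; 2 \, \| f - \tilde f \|_\infty .
```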

The second part of this thesis considers nonlinear state estimation. In this scenario, we collect measurements from a nonlinear system (e.g., a mobile robot) and, using knowledge of the system's dynamics together with these measurements, are asked to estimate the system's state as accurately as possible. In this dynamic estimation context, we face a sequence of these optimization problems (select the best choice of state), each closely related to the previous one.
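
For concreteness, a generic version of this setup and of one such estimation step might be written as follows; the dynamics f, output map h, noise v_k, window length N, and the predicted output \hat{y}_j(x) are placeholders, not the systems or formulation studied in the thesis.

```latex
% Generic sampled-measurement setup for a nonlinear system (placeholders only).
\dot{x}(t) = f\big(x(t)\big), \qquad y_k = h\big(x(t_k)\big) + v_k .
% One common way to phrase a single estimation step: pick the state that best
% explains the most recent N measurements.
\hat{x}_k \in \operatorname*{arg\,min}_{x} \;
  \sum_{j = k - N + 1}^{k} \big\| y_j - \hat{y}_j(x) \big\|^2 .
```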

Feedback control typically relies on such an estimate of the system state provided by an estimation scheme. These estimates, however, are always affected by errors that have non-negligible impacts on control performance. Various stabilizing and safety-critical control frameworks address this issue, but all require some characterization of the current estimation error to determine when to apply more or less conservative control inputs. Current methods of bounding these errors either settle for a very coarse worst-case bound or employ computationally expensive time-varying set-valued techniques.

To tackle this problem, we turn to a state estimation scheme based on polynomial least-squares, termed Savitzky-Golay filtering. This scheme relies on approximating the output of the system and its derivatives via polynomial least-squares, then using information about the system dynamics to convert these derivatives into an estimate of the system state. Our analysis presents a new, online error bound that highlights the connection between the suboptimality of the optimization problem's solution and the quality of the state estimate. We also show several intuitive properties of these bounds, the main intuition being that when the system dynamics are well-behaved and the measurements are noiseless, the function approximation task becomes easier and the guarantees tighten.
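
As a concrete sketch of the polynomial least-squares step, the snippet below fits a cubic to a sliding window of noisy output samples and reads off the smoothed output and its derivative at the newest sample; the toy signal, window length, polynomial order, and the final "stack output and derivative into a state" step are illustrative assumptions, not the thesis's choices.

```python
# Savitzky-Golay-style sketch: least-squares polynomial fit on a sliding window
# of noisy output samples, evaluated at the newest sample to recover the output
# and its first derivative.  All modeling choices below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 2.0, dt)
y_true = np.sin(2.0 * np.pi * t)                       # toy true output
y_meas = y_true + 0.02 * rng.standard_normal(t.size)   # noisy measurements

window, order = 21, 3                                  # window length, cubic fit
y_hat = np.zeros(t.size)                               # smoothed output
dy_hat = np.zeros(t.size)                              # estimated derivative

for k in range(window - 1, t.size):                    # first entries left at zero
    # Local time axis centered at the newest sample in the window.
    tk = t[k - window + 1 : k + 1] - t[k]
    yk = y_meas[k - window + 1 : k + 1]
    coeffs = np.polyfit(tk, yk, order)                 # least-squares polynomial fit
    y_hat[k] = np.polyval(coeffs, 0.0)                 # output estimate at t[k]
    dy_hat[k] = np.polyval(np.polyder(coeffs), 0.0)    # derivative estimate at t[k]

# For a hypothetical system whose state is (output, output-derivative), e.g. a
# toy double integrator, the state estimate would simply stack the two signals.
x_hat = np.stack([y_hat, dy_hat], axis=1)
print(x_hat[-1])  # estimate of [y, dy/dt] at the final sample
```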

Further, these error bounds provide an online, deterministic measure of uncertainty, which a downstream control algorithm can use to adapt its level of robustness in real time. This particular interaction appeals to the simple intuition that a robot should only make aggressive maneuvers when it is highly confident in its current position. The frameworks of measurement-robust control barrier functions and robust control Lyapunov functions in particular are immediate candidates for this type of interface, as they would naturally accommodate the estimation error while maintaining safety and stability guarantees.
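
To illustrate the intended interface only (this is not the thesis's formulation), a measurement-robust safety condition could consume an online error bound \epsilon_k satisfying \|x_k - \hat{x}_k\| \le \epsilon_k roughly as below; the barrier function h, dynamics f and g, and class-K function \alpha are generic placeholders.

```latex
% Generic robust-CBF-style condition: require safety for every state consistent
% with the estimate \hat{x}_k and the online error bound \epsilon_k (placeholders).
\nabla h(x)^\top \big( f(x) + g(x)\, u_k \big) \ge -\alpha\big( h(x) \big)
  \quad \text{for all } x \text{ with } \|x - \hat{x}_k\| \le \epsilon_k ,
% which, under Lipschitz assumptions, can be enforced at \hat{x}_k with a
% conservatism margin that shrinks as \epsilon_k \to 0.
```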
