UC Berkeley Electronic Theses and Dissertations
Robust Model Predictive Control with Data-Driven Learning

Abstract

In the design of robust Model Predictive Control (MPC) algorithms, data can be used for two primary purposes: (A) shrinking the feasible domain of the system uncertainty, and (B) enlarging the safe operating region of the system. In the modern literature, (A) is often referred to as model learning or model adaptation, and (B) can be interpreted as using data to learn models of the surrounding agents in the environment, or to learn environment constraints. Both (A) and (B) can enlarge the region of attraction of the MPC policy and improve its performance as measured by the closed-loop cost. However, the majority of existing MPC algorithms that tackle (A) and (B) suffer from at least one of the following deficiencies: (i) they do not provide closed-loop guarantees of feasibility and stability, (ii) they behave conservatively as a result of over-approximating the system uncertainty, (iii) they are computationally expensive during online control synthesis, or (iv) they cannot simultaneously handle system and environment constraint uncertainty for safe policy design.

In this dissertation, we present a unified framework for systematically incorporating data-driven learning into robust MPC design for linear dynamical systems. The proposed algorithms provide closed-loop guarantees, reduce conservatism in the control design, and are computationally efficient and amenable to real-time implementation. The dissertation is divided into three parts, each focusing on one aspect of learning during control design: model learning, disturbance distribution support learning, and environment constraint learning. Model learning and disturbance distribution support learning are instances of problem type (A); environment constraint learning is an instance of problem type (B).
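
As a schematic reference for the three parts, the generic robust MPC problem underlying the framework can be sketched as follows. The notation (the parameter set Θ, disturbance set W, and constraint sets X, U) is illustrative rather than the dissertation's exact formulation:

```latex
% Schematic robust MPC problem (illustrative notation):
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \ell(\bar{x}_k, u_k) + V_f(\bar{x}_N)
\quad \text{s.t.} \quad
\begin{cases}
x_{k+1} = A(\theta)\, x_k + B(\theta)\, u_k + w_k, \\
x_k \in \mathcal{X},\quad u_k \in \mathcal{U},\quad x_N \in \mathcal{X}_N, \\
\forall\, \theta \in \Theta,\quad \forall\, w_k \in \mathcal{W},\quad k = 0,\dots,N-1.
\end{cases}
```

In this picture, purpose (A) uses data to shrink Θ or W, so the constraints need less tightening, while purpose (B) uses data to enlarge the learned safe set X.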

In the first part of the dissertation, we consider model learning for linear time-invariant (LTI) and linear parameter-varying (LPV) systems, where the controller's conservatism is reduced by coupling novel ways of incorporating model learning into MPC with novel ways of robustifying the imposed constraints. We consider both parametric and non-parametric representations of the model uncertainty and present adaptive MPC algorithms that ensure robust satisfaction of the imposed state and input constraints.
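
To illustrate the parametric case, the sketch below shows a minimal set-membership parameter update for a scalar system; the scalar model and interval parameter set are simplifying assumptions for illustration, not the dissertation's algorithm:

```python
import numpy as np

# Minimal set-membership update for a scalar uncertain model
#   x[k+1] = theta * x[k] + u[k] + w[k],   |w[k]| <= w_max,
# a simplified stand-in for the LTI/LPV models discussed above.
# Each measured transition yields the constraint
#   |x[k+1] - u[k] - theta * x[k]| <= w_max,
# i.e. an interval of consistent theta values that we intersect
# with the current parameter set, shrinking the uncertainty (purpose A).

def update_theta_interval(theta_lo, theta_hi, x, u, x_next, w_max):
    """Intersect [theta_lo, theta_hi] with the parameters consistent
    with one observed transition (x, u) -> x_next."""
    if abs(x) < 1e-9:          # transition carries no information on theta
        return theta_lo, theta_hi
    lo = (x_next - u - w_max) / x
    hi = (x_next - u + w_max) / x
    if x < 0:                  # dividing by a negative flips the interval
        lo, hi = hi, lo
    return max(theta_lo, lo), min(theta_hi, hi)

# Example: true theta = 0.8, prior set [0.5, 1.2]; the set shrinks with data.
rng = np.random.default_rng(0)
theta_true, w_max = 0.8, 0.1
theta_lo, theta_hi = 0.5, 1.2
x = 1.0
for k in range(50):
    u = rng.uniform(-1.0, 1.0)
    x_next = theta_true * x + u + rng.uniform(-w_max, w_max)
    theta_lo, theta_hi = update_theta_interval(theta_lo, theta_hi, x, u, x_next, w_max)
    x = x_next
print(f"learned parameter set: [{theta_lo:.3f}, {theta_hi:.3f}]")
```

An adaptive MPC would robustify its constraints against the current interval at every time step, so the tightening relaxes as the interval shrinks.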

In the second part, we focus on learning the support of an additive disturbance's distribution. We consider the case where the disturbance belongs to a class of parametric distributions, and we construct estimates of its unknown support via confidence intervals on the underlying parameters. Robust MPC design with these learned supports ensures satisfaction of the imposed constraints with any user-specified probability, while lowering conservatism by avoiding large outer approximations of the true support.
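
A minimal sketch of this idea follows, assuming (for illustration only, not the dissertation's setting) that the disturbance is uniform on [-b, b] with unknown half-width b:

```python
import numpy as np

# Support learning for a parametric disturbance, illustrative version.
# Assumption (ours): w ~ Uniform[-b, b] with unknown b. Then
# |w| ~ Uniform[0, b] and P(max_i |w_i| <= t) = (t / b)**n, which yields
# the exact (1 - delta) upper confidence bound
#   b <= max_i |w_i| / delta**(1/n).
# A robust MPC tightened with the learned set [-b_ucb, b_ucb] then meets
# its constraints with probability at least 1 - delta, without resorting
# to a large fixed outer bound on the support.

def support_ucb(samples, delta):
    """(1 - delta) upper confidence bound on the half-width b."""
    n = len(samples)
    return np.max(np.abs(samples)) / delta ** (1.0 / n)

rng = np.random.default_rng(1)
b_true = 0.3
w = rng.uniform(-b_true, b_true, size=200)

for delta in (0.1, 0.01):
    print(f"delta = {delta}:  b_ucb = {support_ucb(w, delta):.4f}  (true b = {b_true})")
```

The bound tightens as more samples arrive, which is what lets the user trade the constraint-satisfaction probability 1 - delta against conservatism.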

Finally, in the third part, we focus on learning unknown environment constraints imposed in the MPC optimization problem. We present a machine-learning-based algorithm that learns approximate constraint sets and validates their safety with samples of trajectory data. We prove that satisfying these approximate constraints with a robust MPC guarantees probabilistic satisfaction of the actual constraints in closed loop. The value of this probability can be chosen based on the desired trade-off between the safety and performance of the controller.
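
The learn-then-validate pattern can be sketched as below; the disk-shaped true constraint, the box-shaped learned set, and the specific validation bound are our illustrative assumptions, not the dissertation's construction:

```python
import numpy as np

# Constraint learning with sample-based validation, illustrative version.
# Assumptions (ours): the true safe set is the unit disk, accessible only
# through the labeling oracle is_safe (standing in for trajectory data);
# the learned set is an axis-aligned box.

def is_safe(x):
    """True constraint, unknown to the controller: the unit disk."""
    return np.linalg.norm(x) <= 1.0

rng = np.random.default_rng(2)

# Step 1: learn a candidate set from labeled samples -- here, the bounding
# box of the safe samples, shrunk toward the origin as a crude inner
# approximation (the example's safe set is centered at the origin).
pts = rng.uniform(-1.5, 1.5, size=(2000, 2))
safe_pts = pts[[is_safe(p) for p in pts]]
lo = 0.7 * safe_pts.min(axis=0)
hi = 0.7 * safe_pts.max(axis=0)

# Step 2: validate with m fresh samples drawn from the learned set.
# If the true violation probability inside the set exceeded eps, the chance
# of seeing zero violations in m i.i.d. samples would be below (1 - eps)**m.
# So zero observed violations certifies P(violation) <= eps with confidence
# 1 - (1 - eps)**m.
m, eps = 1000, 0.01
val = rng.uniform(lo, hi, size=(m, 2))
n_bad = sum(not is_safe(v) for v in val)
conf = 1.0 - (1.0 - eps) ** m
print(f"learned box: lo = {lo.round(2)}, hi = {hi.round(2)}, violations = {n_bad}/{m}")
if n_bad == 0:
    print(f"accept: P(violation) <= {eps} with confidence {conf:.4f}")
else:
    print("reject: shrink the learned set and re-validate")
```

Imposing the validated set as a constraint in a robust MPC then yields the kind of probabilistic closed-loop constraint satisfaction described above, with eps setting the safety/performance trade-off.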

We conclude the dissertation by presenting two applications in which the proposed theory has been successfully tested. The first is a robotic manipulator learning to play the cup-and-ball game: we learn the support of a position-measuring camera's noise distribution from data, enabling the manipulator to play the game successfully using noisy camera feedback. For this case we present both high-fidelity simulation and experimental validation. The second application is in collaborative robotics, where we apply the concept of constraint learning in a decentralized collaborative robotic transportation scenario with partially known environment information to develop an obstacle-avoidance algorithm. The algorithm allows the robots to adaptively assume leader-follower roles in the task while learning and avoiding unknown obstacles in their proximity.
