The topic of learning in control has garnered much attention in recent years, with many researchers proposing methods for combining data-driven learning with more traditional control design. For systems repeatedly performing a single task, iterative learning controllers provide a structured, model-based way of using collected data to improve performance on that task over successive iterations while guaranteeing constraint satisfaction during the learning process. However, it remains difficult to design model-based learning controllers that both perform well and act safely in a variety of changing or unknown environments.
This dissertation considers a particular problem: how to use stored trajectory data from a system solving an initial set of tasks in order to design a controller that performs a related task in a new environment both safely (satisfying all new constraints) and effectively (maximizing a desired objective). We approach this question from a Model Predictive Control (MPC) perspective. Fundamentally, we ask how the traditionally model-based terminal sets and cost functions of the MPC may be replaced with data-driven counterparts while maintaining the feasibility guarantees that classical MPC theory offers.

We consider various instantiations of the changing-environment problem, including known or unknown task environments with time-invariant or time-varying constraints. For each scenario, we propose approaches for safe and effective control design. Using tools from MPC theory, optimization, and statistics, we establish safety guarantees, which hold with high probability, for each proposed control scheme. The proposed control approaches are validated in simulations and experiments across a variety of applications, including autonomous racing, robotic manipulation, and computer game tasks. The evaluations demonstrate the potential for safely integrating data into model-based control design.
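The core idea above, replacing the MPC terminal set and terminal cost with quantities estimated from stored trajectories, can be sketched in a toy setting. The snippet below is a schematic illustration only, not the dissertation's actual algorithms: all names, the scalar dynamics `f(x, u) = x + u`, the stage cost, and the grid of candidate inputs are invented for this example. Stored trajectories define a sampled safe set with an associated cost-to-go table, and a one-step controller restricts its successor state to that set, in the spirit of learning MPC.

```python
import numpy as np

def build_safe_set(trajectories, stage_cost):
    """Sampled safe set: every visited state, paired with its realized
    cost-to-go along the stored trajectory (a data-driven terminal cost)."""
    safe_states, cost_to_go = [], []
    for traj in trajectories:          # traj: list of (state, input) pairs
        costs = [stage_cost(x, u) for x, u in traj]
        ctg = np.cumsum(costs[::-1])[::-1]   # tail sums: cost from each state to the end
        for (x, _), q in zip(traj, ctg):
            safe_states.append(x)
            cost_to_go.append(q)
    return np.array(safe_states), np.array(cost_to_go)

def one_step_lmpc(x, f, stage_cost, safe_states, cost_to_go, u_grid):
    """One-step MPC: pick the input whose successor lands in the sampled
    safe set (terminal constraint), minimizing stage cost plus the
    learned terminal cost. Returns (total cost, input) or None."""
    best = None
    for u in u_grid:
        xn = f(x, u)
        d = np.abs(safe_states - xn)
        i = int(np.argmin(d))
        if d[i] > 1e-6:                # successor outside the safe set: infeasible
            continue
        val = stage_cost(x, u) + cost_to_go[i]
        if best is None or val < best[0]:
            best = (val, u)
    return best

# Hypothetical scalar system x+ = x + u with stage cost |x| + |u|.
f = lambda x, u: x + u
stage_cost = lambda x, u: abs(x) + abs(u)

# One stored trajectory driving x = 3 to the origin with unit steps.
stored = [[(3.0, -1.0), (2.0, -1.0), (1.0, -1.0), (0.0, 0.0)]]
safe_states, cost_to_go = build_safe_set(stored, stage_cost)

# From x = 3, a larger step u = -2 reaches the safe set at x = 1 and
# beats simply replaying the stored input u = -1.
result = one_step_lmpc(3.0, f, stage_cost, safe_states, cost_to_go,
                       u_grid=[-2.0, -1.0, 0.0, 1.0])
print(result)   # → (7.0, -2.0)
```

The example illustrates why stored data can both enforce safety (the terminal constraint only admits states the system has already driven to completion) and improve performance (the controller may find a cheaper route into the safe set than any stored trajectory took). The dissertation's actual formulations are optimization-based rather than this brute-force enumeration.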