Open Access Publications from the University of California

UC Berkeley

UC Berkeley Electronic Theses and Dissertations

Efficient Uncertainty Propagation for Stochastic Model Predictive Control

  • Author(s): Buehler, Eddie
  • Advisor(s): Mesbah, Ali

As the complexity and scale of chemical processes have increased, engineers have sought process control strategies that can successfully and safely regulate modern processes. These processes are often subject to random noise and to uncertainties in parameters or model structure, which make regulation and control difficult. Stochastic model predictive control (SMPC) is an advanced process control strategy that can systematically seek trade-offs between multiple (possibly competing) control objectives in the presence of constraints for multi-input multi-output systems. Furthermore, SMPC explicitly considers the stochasticity of system states and parameters, allowing for a systematic trade-off between robustness and performance. Another key advantage of SMPC is the ability to impose chance constraints, which may be violated with a specified probability, instead of hard constraints, which must be satisfied under all conditions. Because SMPC employs a probabilistic description of states and parameters, uncertainty propagation techniques are needed to determine the time evolution of these probability distributions. Monte Carlo methods are a widely used uncertainty propagation technique, but they are too computationally expensive for real-time optimization and control.

Three approaches to efficient SMPC implementations are investigated in this dissertation. An SMPC algorithm with hard input constraints and joint chance constraints in the presence of (possibly) unbounded noise is presented. By recasting the nonlinear SMPC optimal control problem as an iterative convex deterministic program, the computational cost of determining the optimal control policy is significantly decreased. Two uncertainty propagation methods, the Fokker-Planck equation and adaptive polynomial chaos, are presented in the context of optimal control. The Fokker-Planck equation is a partial differential equation that propagates the full probability distribution of the uncertainties.
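To make the Fokker-Planck idea concrete, here is a minimal sketch (not taken from the dissertation) of propagating a full probability density through time with an explicit finite-difference scheme. The one-dimensional Ornstein-Uhlenbeck drift f(x) = -x, the diffusion coefficient, the grid, and all numerical settings are illustrative assumptions chosen so the stationary density is a Gaussian with variance D.

```python
import numpy as np

def fokker_planck_ou(x0=1.0, std0=0.2, D=0.05, t_final=4.0,
                     n_grid=201, dt=0.002):
    """Explicit finite-difference solution of the 1-D Fokker-Planck
    equation dp/dt = -d/dx(f(x) p) + D d^2p/dx^2 for the illustrative
    Ornstein-Uhlenbeck drift f(x) = -x (stationary variance = D)."""
    x = np.linspace(-2.0, 2.0, n_grid)
    dx = x[1] - x[0]
    # Initial pdf: a Gaussian centered at x0 with standard deviation std0.
    p = np.exp(-(x - x0) ** 2 / (2 * std0 ** 2))
    p /= p.sum() * dx
    f = -x
    for _ in range(int(round(t_final / dt))):
        adv = np.gradient(f * p, dx)               # d/dx(f p), central differences
        lap = np.zeros_like(p)
        lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx ** 2
        p = p + dt * (-adv + D * lap)
        p = np.clip(p, 0.0, None)                  # suppress small negative overshoot
        p /= p.sum() * dx                          # keep total probability equal to 1
    return x, p
```

Because the whole density is available at every time step, a chance-constraint violation probability can be read off directly as the integral of p over the infeasible region, rather than bounded conservatively from a few moments.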
This allows the probability distribution function of the states to be shaped toward desired distributions in an open-loop sense, and it eliminates the need for conservative approximations when evaluating chance constraint violation. Closed-loop simulations of an SMPC controller regulating a bioreactor demonstrate the ability to shape the product probability distribution function when the Fokker-Planck equation is used as the uncertainty propagation technique. Adaptive polynomial chaos (aPC) is an efficient technique that propagates the moments of (possibly) cross-correlated distributions. Unlike similar techniques, aPC requires only knowledge of the statistical moments of the uncertain states and parameters, which can be readily estimated from experimental data. The error convergence properties of aPC are investigated on a continuous stirred-tank reactor (CSTR) with correlated reaction kinetic parameters. All three approaches to uncertainty propagation and constraint handling in the context of SMPC show promise for decreasing computational cost and reducing reliance on conservative approximations.
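As a rough illustration of the moments-only idea behind aPC (a sketch, not the dissertation's implementation), the code below builds a degree-2 polynomial basis that is orthogonal with respect to the empirical measure of a set of parameter samples, using only raw statistical moments, then estimates the output mean and variance from the resulting polynomial-chaos coefficients. The sample distribution and the quadratic model are hypothetical.

```python
import numpy as np

def apc_basis(samples):
    """Monic polynomials of degree 0..2, orthogonal with respect to the
    empirical measure of `samples`; the Gram-Schmidt step needs only the
    raw moments m1, m2, m3 (the key idea behind arbitrary/adaptive PC)."""
    m1 = samples.mean()
    m2 = (samples ** 2).mean()
    m3 = (samples ** 3).mean()
    p0 = lambda x: np.ones_like(x)
    p1 = lambda x: x - m1
    # Orthogonalize x^2 against p0 and p1 in moment form.
    a = (m3 - m1 * m2) / (m2 - m1 ** 2)
    p2 = lambda x: x ** 2 - a * (x - m1) - m2
    return [p0, p1, p2]

def apc_mean_var(samples, outputs):
    """Least-squares polynomial-chaos expansion of `outputs` in the aPC
    basis; returns the PCE estimate of the output mean and variance."""
    basis = apc_basis(samples)
    Phi = np.column_stack([p(samples) for p in basis])
    coef, *_ = np.linalg.lstsq(Phi, outputs, rcond=None)
    norms = (Phi ** 2).mean(axis=0)        # empirical <P_k, P_k>
    mean = coef[0]                          # since <P_0, P_0> = 1
    var = sum(coef[k] ** 2 * norms[k] for k in range(1, 3))
    return mean, var
```

Because the basis is orthogonal under the empirical measure, the zeroth coefficient is the output mean and the variance is a weighted sum of squared higher-order coefficients; no assumption about the parameter's distribution family is needed, only its moments.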
