eScholarship
Open Access Publications from the University of California

Healthcare Decision Making and Stochastic Model Predictive Control: Output-Feedback, Optimality, and Duality

  • Author(s): Sehr, Martin Arno
  • Advisor(s): Bitmead, Robert R
Abstract

Model Predictive Control has become a prevailing technique in practice by virtue of its natural inclusion of constraint enforcement in sub-optimal feedback design through repeated solution of finite-horizon, open-loop control problems. However, many approaches lack proper accommodation of output feedback from imperfect measurements, as is normally required in practice. The conventional workaround for this disconnect between control theory and practice is the use of certainty equivalent control laws, which substitute the best available state estimates for the true system state in order to salvage methods developed for state-feedback Model Predictive Control.

This dissertation explores Stochastic Model Predictive Control in the general, nonlinear output-feedback setting. Starting the receding-horizon development from Stochastic Optimal Control, we attain inherent accommodation of imperfect measurement data through propagation of the conditional state density, the information state. This setup further endows the control signals with a dual, probing nature: the control balances the typically antagonistic requirements of regulation and exploration. However, these conflicting tasks inherent to Stochastic Optimal Control are also the source of its computational intractability. While properties such as optimal probing and numerical performance bounds on the infinite time horizon require solving Stochastic Optimal Control problems, obtaining these solutions is typically not possible in practice because of their exorbitant computational demands.
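The information-state propagation described above can be written as the standard Bayesian filtering recursion. The notation below (state x_k, input u_k, measurement y_k, conditional density pi_k) is chosen here for illustration and need not match the dissertation's:

```latex
\pi_{k+1}(x_{k+1}) \;=\;
\frac{p(y_{k+1}\mid x_{k+1})\,\displaystyle\int p(x_{k+1}\mid x_k,u_k)\,\pi_k(x_k)\,dx_k}
     {\displaystyle\int p(y_{k+1}\mid x)\int p(x\mid x_k,u_k)\,\pi_k(x_k)\,dx_k\,dx}
```

The numerator combines a prediction step (integrating the prior density through the controlled dynamics) with a measurement-likelihood correction; the denominator normalizes the result to a density. The integrals are what make exact propagation intractable for general nonlinear systems.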

We suggest two methods for tractable Stochastic Model Predictive Control. First, we propose approximating the information-state update by a Particle Filter, which merges naturally with scenario optimization to generate control laws. While computationally tractable, this method does not maintain duality without additional measures. Alternatively, the nonlinear output-feedback problem can be approximated, or in some cases cast exactly, as a Partially Observable Markov Decision Process, a special class of systems for which Stochastic Optimal Control is numerically tractable at moderate problem sizes, enabling dual optimal control with provable infinite-horizon properties.
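A minimal sketch of one step of the particle-filter approximation of the information-state update, assuming generic transition and measurement maps f and h with additive Gaussian noise. All names, models, and parameters here are illustrative, not taken from the dissertation:

```python
import numpy as np

def particle_filter_step(particles, weights, u, y, f, h, q_std, r_std, rng):
    """One bootstrap particle-filter update of the information state.

    particles, weights: current weighted-sample approximation of the
    conditional state density; f, h: state-transition and measurement
    maps; q_std, r_std: process- and measurement-noise standard
    deviations (hypothetical scalar models, for illustration only).
    """
    # Prediction: propagate each particle through the controlled dynamics.
    particles = f(particles, u) + rng.normal(0.0, q_std, size=particles.shape)
    # Correction: reweight by the Gaussian measurement likelihood p(y | x).
    residuals = y - h(particles)
    weights = weights * np.exp(-0.5 * (residuals / r_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

The resulting weighted samples stand in for the conditional state density inside the receding-horizon optimization; scenario optimization can then draw its scenarios directly from these particles.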

Throughout this dissertation, we examine two classes of examples from healthcare: individualized appointment scheduling, a problem that does not require duality; and medical treatment decision making, where dual control decisions are often required to optimally balance when to order diagnostic tests and when to apply medical intervention.
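As a toy illustration of the treatment-decision setting, the belief over a two-state "healthy/sick" model can be updated after a noisy diagnostic test by Bayes' rule; ordering a test sharpens this belief (the exploratory side of dual control), while treating acts on it (the regulatory side). The sensitivity and specificity values used here are hypothetical:

```python
def belief_update(b_sick, sens, spec, positive):
    """Posterior probability of the 'sick' state after a diagnostic test.

    b_sick: prior belief that the patient is sick; sens/spec: test
    sensitivity and specificity (illustrative parameters only).
    """
    if positive:
        num = sens * b_sick                      # true positive
        den = num + (1.0 - spec) * (1.0 - b_sick)  # + false positive
    else:
        num = (1.0 - sens) * b_sick              # false negative
        den = num + spec * (1.0 - b_sick)        # + true negative
    return num / den
```

For example, with a 50% prior and a test of 90% sensitivity and specificity, a positive result raises the belief to 0.9, while a negative result lowers it to 0.1; a POMDP controller weighs this information gain against the cost of testing and the value of immediate intervention.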
