eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

State Estimation for Control

Abstract

Deterministic control theory is built on the presumptive luxury of complete access to the states. This premise is false in many applications, rendering the validity of the implemented control algorithms questionable. Acknowledging the lack of, or only partial, access to the states is, on the other hand, an invitation to the realm of stochastic optimal control: a field of many platonic objects \cite{penrose2006road} and folk definitions, whose hopes of advancement are harshly met with computational intractability.

This dissertation focuses on finding suboptimal solutions to the stochastic control problem by utilizing the machinery developed for deterministic control. This is done by reducing the infinite-dimensional information state, represented by the filtered state density, to a single state value of (small) finite dimension. The reduction is guided by two arguments, which form the bases of Chapters~2 and 3.

The first approach is built on statistical grounds: a point estimate, like the Maximum Likelihood Estimate, may have a stronger claim to certainty equivalence in some applications than the typically used conditional mean. We derive a Maximum Likelihood recursive state estimator for nonlinear state-space models by combining the Expectation Maximization algorithm with a particle filter. We prove that for nonlinear state-space systems with linear measurements and additive Gaussian noise, our formulation reduces to a gradient-free optimization in the form of a fixed-point iteration. The convergence properties of the sequences generated by these iterations are inherited from the Expectation Maximization algorithm.
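
To make the construction concrete, the following is a minimal illustrative sketch, not the dissertation's derivation, of an EM-style fixed-point iteration for a maximum-likelihood state estimate in the special case described above: a nonlinear transition f with additive Gaussian noise of covariance Q, a linear measurement y = Hx + v with Gaussian noise of covariance R, and a particle approximation of the previous filtered density. All names here (f, H, Q, R, the particle arrays) are assumptions introduced for illustration.

\begin{verbatim}
import numpy as np

def ml_state_fixed_point(particles, pf_weights, f, H, Q, R, y,
                         n_iters=50, tol=1e-8):
    # particles : (N, n) particles approximating the previous filtered density
    # pf_weights: (N,) normalized particle weights
    # f         : transition function, x_k = f(x_{k-1}) + w,  w ~ N(0, Q)
    # H, R      : linear measurement model, y = H x_k + v,    v ~ N(0, R)
    # Returns an approximate ML estimate of x_k via an EM fixed-point iteration.
    pred = np.array([f(p) for p in particles])     # predicted particles f(x_{k-1}^i)
    Qinv, Rinv = np.linalg.inv(Q), np.linalg.inv(R)
    M = np.linalg.inv(H.T @ Rinv @ H + Qinv)       # constant matrix of the M-step
    x = pf_weights @ pred                          # initialize at the predicted mean
    for _ in range(n_iters):
        # E-step: reweight particles by how well they explain the current iterate
        d = x - pred
        loglik = -0.5 * np.einsum('ij,jk,ik->i', d, Qinv, d)
        w = pf_weights * np.exp(loglik - loglik.max())
        w /= w.sum()
        # M-step: closed-form, gradient-free update of the state iterate
        x_new = M @ (H.T @ Rinv @ y + Qinv @ (w @ pred))
        step = np.linalg.norm(x_new - x)
        x = x_new
        if step < tol:
            break
    return x
\end{verbatim}

Each pass of this sketch reweights the particles around the current iterate (E-step) and then updates the iterate in closed form (M-step), so no gradients are required; this mirrors, but does not reproduce, the fixed-point structure described above.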

The second method engages directly with the control objective to produce a state value, or ``estimate'', that helps achieve that objective. The State Selection Algorithm, presented in Chapter~3, compiles the information about state statistics, dynamics, constraints, and a given controller, and returns the state value that optimizes a prescribed finite-horizon performance function. The set of candidate states is provided by a particle filter. In the linear quadratic problem with polyhedral constraints, we show that the algorithm reduces to a quadratic program for the state value.
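
A hedged sketch of how such a selection can be posed as a quadratic program in the linear quadratic case is given below. The controller structure (a given feedback gain rolled forward open-loop from the selected value), the horizon, and all variable names are assumptions made for illustration; this is not the algorithm of Chapter~3.

\begin{verbatim}
import numpy as np
import cvxpy as cp

def select_state(particles, weights, A, B, K, Qc, Rc, G, h, T=10):
    # particles, weights : particle approximation of the current filtered density
    # A, B               : linear dynamics  x_{t+1} = A x_t + B u_t
    # K                  : given feedback gain, u_t = -K x_hat_t
    # Qc, Rc             : PSD stage-cost weights of the finite-horizon LQ cost
    # G, h               : polyhedral input constraints  G u_t <= h
    n = A.shape[0]
    s = cp.Variable(n)              # the state value ("estimate") to be selected
    # The controller's internal prediction rolls forward open-loop from s,
    # so the input sequence is linear in s and shared by all particles.
    x_hat, inputs = s, []
    for _ in range(T):
        inputs.append(-K @ x_hat)
        x_hat = (A - B @ K) @ x_hat
    constraints = [G @ u <= h for u in inputs]
    # Particle-averaged closed-loop cost; quadratic in s.
    cost = 0
    for x0, w in zip(particles, weights):
        x = x0                      # true state hypothesized by this particle
        for u in inputs:
            cost += w * (cp.quad_form(x, Qc) + cp.quad_form(u, Rc))
            x = A @ x + B @ u
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return s.value
\end{verbatim}

Because the inputs are linear in the selected value s and the averaged cost is quadratic, this sketch is a convex quadratic program whenever the weighting matrices are positive semidefinite.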
