UC San Diego Electronic Theses and Dissertations

Uncertainty Estimation in Continuous Models applied to Reinforcement Learning

Abstract

We consider the model-based reinforcement learning framework, in which we are interested in learning both a dynamics model and a control policy for a given objective. We model the dynamics of an environment using either Gaussian processes or a Bayesian neural network. For Bayesian neural networks, we must define how to estimate uncertainty through the network and how to propagate distributions through time. Once we have a continuous model, we can apply standard optimal control techniques to learn a policy. We take the policy to be a radial basis function policy and compare its performance under the different models on a pendulum environment.
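
As a rough illustration of the setup described in the abstract, the sketch below fits Gaussian process models to pendulum transition data and propagates a state distribution through time under a radial basis function policy. It is a minimal, hypothetical example: the simulator, the scikit-learn GP models, the policy parameterization, and the Monte Carlo propagation of distributions are all assumptions made here for illustration, not the specific approach taken in the dissertation.

```python
# Hypothetical sketch: GP dynamics model for a pendulum, with Monte Carlo
# propagation of state uncertainty under a radial-basis-function policy.
# All names and hyperparameters are illustrative, not from the thesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def pendulum_step(state, action, dt=0.05, g=9.81, l=1.0, m=1.0):
    """Simple pendulum dynamics; state = (theta, theta_dot)."""
    theta, theta_dot = state
    theta_ddot = (-g / l) * np.sin(theta) + action / (m * l ** 2)
    return np.array([theta + dt * theta_dot, theta_dot + dt * theta_ddot])


def rbf_policy(state, centers, weights, lengthscale=1.0):
    """Radial-basis-function policy: a weighted sum of Gaussian basis functions."""
    dists = np.sum((centers - state) ** 2, axis=1)
    feats = np.exp(-dists / (2 * lengthscale ** 2))
    return float(feats @ weights)


# Collect interaction data under random actions:
# inputs are (state, action), targets are state deltas.
rng = np.random.default_rng(0)
X, Y = [], []
state = np.array([np.pi, 0.0])
for _ in range(200):
    action = rng.uniform(-2.0, 2.0)
    nxt = pendulum_step(state, action)
    X.append(np.r_[state, action])
    Y.append(nxt - state)
    state = nxt
X, Y = np.array(X), np.array(Y)

# One independent GP per state dimension models the dynamics and its uncertainty.
kernel = RBF(length_scale=np.ones(3)) + WhiteKernel(noise_level=1e-3)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, d])
       for d in range(Y.shape[1])]

# Propagate a state distribution through time with Monte Carlo particles
# (a simple stand-in for analytic moment matching).
centers = rng.normal(size=(10, 2))
weights = rng.normal(size=10) * 0.1
particles = rng.normal(loc=[np.pi, 0.0], scale=0.05, size=(50, 2))
for t in range(25):
    new_particles = []
    for s in particles:
        a = rbf_policy(s, centers, weights)
        x = np.r_[s, a].reshape(1, -1)
        preds = [gp.predict(x, return_std=True) for gp in gps]
        mean = np.array([m[0] for m, _ in preds])
        std = np.array([sd[0] for _, sd in preds])
        new_particles.append(s + rng.normal(mean, std))  # sample a state delta
    particles = np.array(new_particles)

print("predicted state mean after rollout:", particles.mean(axis=0))
print("predicted state std  after rollout:", particles.std(axis=0))
```

In this kind of setup, a policy search would evaluate an expected cost over the propagated state distribution and adjust the RBF policy parameters accordingly; the sketch only shows the model learning and uncertainty propagation steps.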
