eScholarship
Open Access Publications from the University of California

Uncertainty Estimation in Continuous Models applied to Reinforcement Learning

  • Author(s): Akbar, Ibrahim
  • Advisor(s): Atanasov, Nikolay
Abstract

We consider the model-based reinforcement learning framework, in which we are interested in learning both a model of the environment and a control policy for a given objective. We model the dynamics of the environment using either Gaussian processes or a Bayesian neural network. For Bayesian neural networks we must define how to estimate uncertainty through the network and how to propagate distributions through time. Once we have a continuous model, we can apply standard optimal control techniques to learn a policy. We take the policy to be a radial basis function policy and compare its performance under the different models on a pendulum environment.
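
The Gaussian-process dynamics model mentioned above can be illustrated with a minimal sketch. The code below is not the thesis implementation; it is a standard GP regression posterior (squared-exponential kernel, Cholesky solve) applied to a hypothetical one-step pendulum-style dynamics target, where the inputs and the target function are invented for illustration. The key point is that the model returns both a predictive mean and a predictive variance, which is the uncertainty estimate a model-based RL loop can propagate.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    # Standard GP regression equations:
    #   mean = K_*^T (K + sigma^2 I)^{-1} y
    #   var  = diag(K_** - K_*^T (K + sigma^2 I)^{-1} K_*)
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, var

# Hypothetical one-step dynamics data: input (theta, theta_dot, torque),
# target is an invented smooth function standing in for the next angular velocity.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 3))
y = np.sin(X[:, 0]) + 0.1 * X[:, 2]

mean, var = gp_posterior(X, y, X)                   # query at the training inputs
_, far_var = gp_posterior(X, y, np.array([[5.0, 5.0, 5.0]]))  # query far from the data
```

Near the training data the predictive variance collapses toward the noise level, while far from the data it reverts to the prior variance; it is exactly this behavior that makes GPs attractive for model-based RL, since the planner can be penalized for visiting state-action regions the model has never seen.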
