eScholarship
Open Access Publications from the University of California


UC Berkeley Electronic Theses and Dissertations

Mobile Robot Learning

Abstract

In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, many challenges impede mobile robots from autonomously learning to act in the real world, in particular: (1) sample efficiency: how can the robot learn from a limited amount of data? (2) supervision: how do we tell the robot what to do? and (3) safety: how do we ensure the robot and its environment are not damaged or destroyed during learning?

In this thesis, we will present deep reinforcement learning methods for addressing these real-world mobile robot learning challenges. At the core of these methods is a predictive model, which takes as input the robot's current sensor observations and predicts future navigational outcomes; this predictive model can then be used for planning and control. We will show how this framework can address the challenges of sample efficiency, supervision, and safety, enabling ground and aerial robots to navigate complex indoor and outdoor environments.
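The planning-and-control loop described above can be sketched as a simple sampling-based model-predictive controller. Everything in this sketch is illustrative, not the thesis's actual implementation: `predict_outcomes` is a hypothetical stand-in for the learned predictive model (in practice a neural network trained on real-world experience), and the toy cost function, horizon, and candidate count are arbitrary choices.

```python
import numpy as np

def predict_outcomes(observation, action_sequence, rng):
    """Hypothetical stand-in for the learned predictive model: given the
    current observation and a candidate action sequence, return predicted
    per-step costs (e.g., collision probabilities). Here, a toy proxy:
    cost grows with how aggressively the robot steers."""
    return np.abs(action_sequence) + 0.01 * rng.standard_normal(len(action_sequence))

def plan_action(observation, horizon=10, num_candidates=128, seed=0):
    """Random-shooting MPC: sample candidate action sequences, score each
    with the predictive model, and return the first action of the best
    sequence. The robot then executes that action and replans."""
    rng = np.random.default_rng(seed)
    # Candidate steering commands in [-1, 1], one sequence per row.
    candidates = rng.uniform(-1.0, 1.0, size=(num_candidates, horizon))
    costs = np.array([predict_outcomes(observation, seq, rng).sum()
                      for seq in candidates])
    best = candidates[np.argmin(costs)]
    return best[0]  # execute only the first action, then replan

action = plan_action(observation=None)
```

Replanning after each executed action (receding-horizon control) is what lets the predicted outcomes be corrected against what the robot actually observes.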
