
UC Berkeley Electronic Theses and Dissertations

Learning, Planning, and Acting with Models

Abstract

While the classical approach to planning and control has enabled robots to achieve various challenging control tasks, it requires domain experts to specify transition dynamics and to infer hand-designed symbolic states from raw observations. Bringing such methods into diverse, unstructured environments therefore remains a grand challenge. Recent successes in computer vision and natural language processing have shed light on how robot learning could be pivotal in tackling such complexity. However, deploying learning-based systems poses several challenges: (1) data efficiency -- how to minimize the amount of training data required, (2) generalization -- how to handle tasks that the robots are not explicitly trained on, and (3) long-horizon tasks -- how to reduce the optimization complexity of temporally extended tasks.

In this thesis, we present learning-and-planning methods that use deep neural networks to model the environment in different forms in order to facilitate planning and acting. We validate the efficacy of our methods with respect to data efficiency, generalization, and long-horizon tasks on simulated locomotion benchmarks, navigation tasks, and real-robot manipulation tasks.
