eScholarship: Open Access Publications from the University of California
UC Santa Barbara Electronic Theses and Dissertations

Analyses and Robustness Quantification of Underactuated Biped Robot Locomotion

Abstract

Humanoid locomotion control is challenging due to the presence of underactuated dynamics: constraints at the ground-foot contact impose dynamic limitations on feasible motions. At the same time, deliberate underactuation in bipeds can potentially provide more energy-efficient locomotion, making the trade-off between efficiency and stability a particularly interesting problem in biped control. Two approaches have recently become prominent for the control of legged locomotion. Model-based trajectory optimization has shown impressive results, for example in its application within the DARPA Robotics Challenge. In addition, with the advent of improved computational capabilities, deep reinforcement learning (DRL) is now being successfully applied to generate control policies for complicated systems such as humanoids.

In the first part of this dissertation, we use trajectory optimization methods to generate trajectories for a 5-link planar biped walker and track them with a controller based on partial feedback linearization. We perform experiments to demonstrate the importance of (a) considering and quantifying not only the energy efficiency but also the robustness of gaits, and (b) optimizing not only nominal motion trajectories but also robot design parameters and feedback control policies.
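
To make the control structure concrete, below is a minimal sketch of collocated partial feedback linearization for a system with a single passive joint, such as the stance ankle of a 5-link biped. This is an illustration, not the dissertation's implementation; the function name, the index arguments, and the assumption that the input map B has zero rows at the passive joint are ours.

    import numpy as np

    def collocated_pfl(M, h, B, qdd_des_act, act_idx, pas_idx):
        """Collocated partial feedback linearization (illustrative sketch).

        Dynamics: M(q) qdd + h(q, qd) = B u, with the rows of B at the
        passive joint assumed zero (no torque at, e.g., the stance ankle).
        Solves for the torque u that realizes a desired acceleration on
        the actuated coordinates while respecting the passive dynamics.
        """
        # Partition the mass matrix and bias vector into passive/actuated blocks.
        M11 = M[np.ix_(pas_idx, pas_idx)]
        M12 = M[np.ix_(pas_idx, act_idx)]
        M21 = M[np.ix_(act_idx, pas_idx)]
        M22 = M[np.ix_(act_idx, act_idx)]
        h1, h2 = h[pas_idx], h[act_idx]

        # The passive rows of the dynamics constrain the unactuated
        # acceleration: M11 qdd_pas + M12 qdd_des_act + h1 = 0.
        qdd_pas = np.linalg.solve(M11, -(M12 @ qdd_des_act + h1))

        # Solve the actuated rows for the torque realizing qdd_des_act.
        Ba = B[act_idx, :]  # actuated rows of the input map (square, invertible)
        u = np.linalg.solve(Ba, M21 @ qdd_pas + M22 @ qdd_des_act + h2)
        return u, qdd_pas

In trajectory tracking, qdd_des_act would typically come from a PD law around the optimized reference, e.g. qdd_ref + Kd (qd_ref - qd) + Kp (q_ref - q) evaluated on the actuated coordinates.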

In the second part, we apply meshing tools to analyze and improve the performance of a 5-link planar biped model under random push perturbations. Creating a mesh for a 14-dimensional state space would typically be infeasible. However, as we show in this dissertation, low-level controllers can restrict the reachable space of the system to a much lower-dimensional manifold, which makes it possible to apply our tools to improve performance. We demonstrate the effectiveness of our tools through simulations on both trajectories generated via optimization and policies trained with deep reinforcement learning.
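
As one concrete example of a mesh-based robustness metric, the sketch below abstracts the step-to-step dynamics under random pushes as a finite Markov chain over mesh points and computes the expected number of steps before falling (mean first-passage time). The abstract does not name a specific metric, so the formulation, the function name, and the absorbing fallen state are assumptions for illustration.

    import numpy as np

    def mean_first_passage_to_fall(T, fall_idx):
        """Expected number of steps before falling (illustrative sketch).

        T is an (n x n) row-stochastic transition matrix over meshed
        states on the low-dimensional reachable manifold, estimated by
        simulating one walking step under a random push from each mesh
        point. fall_idx lists the absorbing "fallen" mesh state(s).
        """
        n = T.shape[0]
        alive = np.setdiff1d(np.arange(n), fall_idx)
        Q = T[np.ix_(alive, alive)]  # transitions among non-fallen states
        # For an absorbing Markov chain, the expected steps to absorption
        # from each transient state solve (I - Q) m = 1.
        m = np.linalg.solve(np.eye(len(alive)) - Q, np.ones(len(alive)))
        mfpt = np.zeros(n)
        mfpt[alive] = m  # fallen states have zero remaining steps
        return mfpt

A larger mean first-passage time indicates a gait or policy that tolerates larger or more frequent pushes before failure, which makes such a metric a natural objective when comparing controllers.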
