eScholarship
Open Access Publications from the University of California

UCLA

UCLA Electronic Theses and Dissertations

A Virtual Kinematic Chain Perspective for Robot Task and Motion Planning

Abstract

This dissertation presents a novel Virtual Kinematic Chain (VKC) perspective for modeling and planning sequential manipulation tasks for mobile robots and proposes a unified graph-based representation that bridges robot Task and Motion Planning (TAMP) and robot perception. Existing manipulation planning methods designed for mobile robots often treat the mobile base and arm separately and aim to tackle a specific task setup (e.g., object relocation or opening doors and drawers). In contrast, we propose to construct a Virtual Kinematic Chain (VKC) that readily consolidates the kinematics of the mobile base, the arm, and the object to be manipulated as a whole, without losing versatility across a variety of manipulation tasks. Consequently, a manipulation task is treated as altering the structure and state of the constructed VKCs, which can be formulated as a standard motion planning problem for robot execution. From this novel VKC perspective, a mobile robot can produce well-coordinated base-arm trajectories in challenging real-world settings such as confined household environments. For more complex manipulation skills such as robotic tool use, the constructed VKC can be augmented with external tools and can incorporate physical understanding learned from human demonstrations to produce sophisticated physical effects. Moreover, the proposed VKC perspective generalizes to a variety of manipulation tasks and simplifies conventional symbolic planning domains (e.g., the Planning Domain Definition Language (PDDL)) for long-horizon sequential manipulation planning problems while preserving task and motion correspondences. To closely bridge robot TAMP with robot perception, we further devise a 3D scene graph representation as an alternative to conventional symbolic planning languages.
This graph-based representation abstracts scene layouts using geometric information obtained from the robot's perception module and retains predicate-like attributes similar to those of conventional representations while ensuring robot TAMP feasibility from the VKC perspective. Throughout this dissertation, we show that having the VKC serve as a unified representation benefits complex manipulation task modeling and planning, improves computational efficiency in long-horizon tasks, and naturally facilitates robot perception for robot planning.
