eScholarship
Open Access Publications from the University of California

Robots learning to manipulate: real-time application-oriented algorithms using feature-based and machine learning techniques

  • Author(s): Balaguer, Benjamin Daniel
  • Advisor(s): Carpin, Stefano
Abstract

In this dissertation, we present four application-driven robotic manipulation tasks that are solved using a combination of feature-based, machine learning, dimensionality reduction, and optimization techniques. First, we study a previously published image processing algorithm that learns to classify pixels in an image as good or bad grasping points. Exploiting the ideas behind dimensionality reduction in general and principal component analysis in particular, we formulate feature selection and search space reduction hypotheses that reduce the algorithm's computation time by up to 98% while retaining its classification accuracy. Second, we incorporate the image processing technique into a new method that computes valid end-effector orientations for grasping tasks; the combination yields a unimanual rigid-object grasp planner. Specifically, a fast and accurate three-layered hierarchical supervised machine learning framework is developed, in which the robot is kinesthetically taught a set of valid end-effector orientations by a human in the loop. Third, we solve the challenge of bimanual regrasping, where a pick-and-place operation requires transferring an object from one manipulator to another, by casting it as an optimization problem whose objective is to minimize execution time. The optimization problem is supplemented by the image processing and unimanual grasping algorithms, which jointly identify two good grasping points on the object and the proper orientation for each end-effector. Fourth, we target deformable objects by using cooperative manipulators to perform towel-folding tasks. We solve this problem with a new learning algorithm that combines imitation and reinforcement learning in such a way that human demonstrations reduce the search space of the reinforcement learning algorithm, resulting in quick convergence and fast learning capabilities.
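To make the dimensionality-reduction idea concrete, the following is a minimal, self-contained sketch (not the dissertation's algorithm): per-pixel grasp features are stand-in synthetic data, PCA is computed via SVD to shrink the feature space, and a simple nearest-centroid rule classifies points in the reduced space. The feature dimensions, sample counts, and classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel grasp features: 500 samples, 64 dims.
# (Hypothetical data; the dissertation derives features from images.)
X = rng.normal(size=(500, 64))
y = rng.integers(0, 2, size=500)    # 1 = "good" grasping point, 0 = "bad"
X[y == 1, :4] += 3.0                # only a few dimensions carry signal

# PCA via SVD on centered data: keep the k leading principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
Z = Xc @ Vt[:k].T                   # reduced features: 64 -> 4 dims

# Nearest-centroid classification in the reduced space.
c0 = Z[y == 0].mean(axis=0)
c1 = Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1)
        < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

Because classification now runs in 4 dimensions instead of 64, per-pixel cost drops sharply, which is the mechanism behind the reported computation-time savings; accuracy is preserved as long as the discarded components carry little class-relevant variance.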
Collectively, the tasks solved in this dissertation establish application-oriented feature-based and machine learning techniques in robotics. Although the tasks differ from one another, ranging from unimanual to bimanual manipulation and handling both rigid and deformable objects, the mathematical frameworks and design principles behind their implementations are similar. In addition to their common use of features, machine learning, and dimensionality reduction, the tasks are commonly designed to be general, efficient, modular, anthropomorphic, and manipulator-, end-effector-, and sensor-independent. These properties not only affect choices made during the algorithms' development but also alleviate the problem of sharing contributions amongst roboticists, each with their own sensors, hardware platforms, and research agendas. With all of these considerations, the algorithms are experimentally validated in offline and online scenarios, respectively consisting of synthetic and real data. The real scenarios are executed on a dual-manipulator torso equipped with two Barrett WAM manipulators, two Barrett Hands, and a single stereo camera. Furthermore, the algorithms presented were all successfully executed and validated on the real robot under numerous differing conditions. This essential element of the dissertation bridges the gap between the algorithms' theory and applicability.
