Bridging Safety and Learning in Human-Robot Interaction
- Bajcsy, Andrea
- Advisor(s): Dragan, Anca; Tomlin, Claire
Abstract
From autonomous cars to systems operating in people's homes, robots must interact with humans. What makes this hard is that human behavior, especially when interacting with other agents, is vastly complex, varying between individuals, across environments, and over time. A modern approach to this problem is to rely on data and machine learning, throughout both design and deployment, to build and refine models of humans. However, by blindly trusting their data-driven human models, robots might confidently plan unsafe behaviors around people, resulting in anything from miscoordination to potentially dangerous collisions.
This dissertation aims to lay the foundations for formalizing and ensuring safety in human-robot interaction, particularly when robots learn from and about people. It discusses how treating robot learning algorithms as dynamical systems driven by human data enables safe human-robot interaction. We first introduce a Bayesian monitor which infers online whether the robot's learned human model can evolve to explain observed human data well. We then discuss how a novel, control-theoretic problem formulation enables us to formally quantify what the robot could learn online from human data and how quickly this learning could be achieved. Coupling these ideas with robot motion planning algorithms, we demonstrate how robots can safely and automatically adapt their behavior based on how trustworthy their learned human models are. This thesis ends by taking a step back to ask, "What is the 'right' notion of safety when robots interact with people?", and discusses how rethinking our notions of safety can capture more subtle aspects of human-robot interaction.
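To make the online Bayesian monitor concrete, below is a minimal sketch of one way such a monitor could work, assuming a Boltzmann-rational human model with a scalar confidence parameter beta maintained over a discrete grid. The candidate values in BETAS, the modeled action values q_values, and the update_belief helper are all illustrative assumptions for this sketch, not the dissertation's exact formulation.

```python
import numpy as np

# Hypothetical discrete candidate values for the model-confidence
# parameter beta (higher beta = human behavior better explained
# by the robot's learned model).
BETAS = np.array([0.1, 1.0, 10.0])

def action_likelihood(q_values, action_idx, beta):
    """P(a | s, beta) under a Boltzmann-rational human model:
    actions with higher modeled value Q(s, a) are exponentially
    more likely, tempered by the confidence parameter beta."""
    logits = beta * q_values
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return probs[action_idx]

def update_belief(belief, q_values, observed_action):
    """One monitor step: Bayesian re-weighting of each candidate
    beta by how well it explains the human action just observed."""
    likelihoods = np.array(
        [action_likelihood(q_values, observed_action, b) for b in BETAS]
    )
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Illustrative usage: modeled action values at the current state,
# and an observed human action the model considers suboptimal.
belief = np.ones(len(BETAS)) / len(BETAS)   # uniform prior over beta
q_values = np.array([1.0, 0.2, 0.0])        # modeled Q(s, a) per action
belief = update_belief(belief, q_values, observed_action=2)
print(dict(zip(BETAS, belief)))  # mass shifts toward low confidence
```

When observed actions are poorly explained by the learned model, posterior mass shifts toward low beta; a planner can treat this as a signal that the model is currently untrustworthy and fall back on more conservative predictions of human behavior.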