Human-Robot Teaming in Safety-Critical Environments: Perception of and Interaction with Groups

Abstract


Angelique Taylor, Ph.D. Candidate, Computer Science and Engineering, UC San Diego, amt062@eng.ucsd.edu

The field of robotics is growing at a rapid pace, with robot deployments in everyday environments such as hospitals, schools, and retail settings. On average, 70% of people in these environments are in groups: they walk, work, and interact together. Recent work in the field has highlighted the importance of designing robots that can interact with groups. To enable robots to fluently assist and interact with groups, they need a high-level understanding of team dynamics, including how to sense groups and engage in intelligent decision-making to support them. However, the human-robot interaction (HRI) field has largely focused on dyadic interaction (i.e., one human and one robot), which does not represent real-world situations where robots might interact with any number of people at a given time.

The goal of my Ph.D. research is to design robotic systems that enable robots to work seamlessly in teams in real-world, safety-critical settings. In this dissertation, I discuss four main contributions of my work. First, I designed the Robot-Centric Group Estimation model (RoboGEM), which enables robots to detect human groups in complex, real-world environments. Prior group perception work tends to: (1) focus on exo-centric perspective approaches, (2) use data captured in well-controlled environments that do not reflect real-world operating scenarios, and (3) use supervised learning methods that may fail when robots encounter new situations. In contrast, RoboGEM is unsupervised and works well on ego-centric, real-world data, where both pedestrians and the robot are in motion at the same time. RoboGEM outperforms the current top-performing method by 10% in accuracy and 50% in recall, and it can be used in real-world environments to enable robots to work in teams.
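To make the idea concrete, the core of unsupervised group detection, clustering pedestrians by spatial proximity and motion similarity, can be sketched with an off-the-shelf clustering step. This is a minimal illustration of the concept, not RoboGEM itself; the DBSCAN parameters, velocity weighting, and function name are assumptions made for this sketch.

    # Minimal sketch of unsupervised group detection: cluster pedestrians on
    # position and velocity similarity. The feature scaling and eps threshold
    # are illustrative assumptions, not RoboGEM's actual parameters.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def detect_groups(positions, velocities, eps=1.2, min_samples=2, vel_weight=0.5):
        """positions, velocities: (N, 2) arrays of ground-plane coordinates (m)
        and velocities (m/s) for N detected pedestrians. Returns one cluster
        label per pedestrian; label -1 marks a pedestrian walking alone."""
        # Stack spatial and motion cues so that nearby pedestrians moving the
        # same way fall into the same cluster.
        features = np.hstack([positions, vel_weight * velocities])
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

    # Example: two pedestrians walking together, one heading the other way.
    pos = np.array([[0.0, 0.0], [0.6, 0.1], [5.0, 4.0]])
    vel = np.array([[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]])
    print(detect_groups(pos, vel))  # e.g., [0, 0, -1]

Because nothing here is learned from labeled group examples, this style of approach keeps working when the robot encounters pedestrians and scenes it has never seen before, which is the motivation for avoiding supervised methods.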

Second, I expanded the scope of RoboGEM to design RoboGEM 2.0, which enables a robot to track groups over time in crowded environments. RoboGEM 2.0 is based on the intuition that pedestrians are most likely in a group when they have similar trajectories, share ground-plane coordinates, and stay in close proximity. RoboGEM 2.0 leverages deep learning techniques for group data association, which enables robots to track groups when ego-motion uncertainty is high. It includes new methods for group tracking that employ Convolutional Neural Network (CNN) feature maps for group data association and Kalman filtering to track group states over time.
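The tracking stage can be sketched in the same spirit. Below, a constant-velocity Kalman filter maintains a group centroid state over time; the state layout and noise magnitudes are illustrative assumptions, and the CNN-feature-based data association that supplies each matched measurement is abstracted into the update call.

    # Sketch of tracking one group's centroid with a constant-velocity Kalman
    # filter. State layout and noise values are illustrative assumptions.
    import numpy as np

    class GroupTrack:
        def __init__(self, centroid, dt=0.1):
            # State: [x, y, vx, vy] -- centroid position (m) and velocity (m/s).
            self.x = np.array([centroid[0], centroid[1], 0.0, 0.0])
            self.P = np.eye(4)                        # state covariance
            self.F = np.eye(4)                        # constant-velocity motion model
            self.F[0, 2] = self.F[1, 3] = dt
            self.H = np.eye(2, 4)                     # we observe position only
            self.Q = 0.01 * np.eye(4)                 # process noise (assumed)
            self.R = 0.10 * np.eye(2)                 # measurement noise (assumed)

        def predict(self):
            # Propagate the group state forward one frame.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            # z: the group centroid matched to this track in the current frame
            # (in RoboGEM 2.0, matching uses CNN feature maps; abstracted here).
            y = z - self.H @ self.x                   # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

The filter's predict step is what carries a group through frames where ego-motion makes detections unreliable; the update step then corrects the state whenever data association finds a confident match.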

I compared RoboGEM 2.0 to three state-of-the-art methods and showed that it outperforms them in terms of precision, recall, and tracking accuracy. Unlike prior methods that require multiple sensors and substantial computational resources, RoboGEM 2.0 enables robots to detect and track groups of people in real-time from a moving platform using a single RGB-D sensor.

Third, I explored using RoboGEM within a real-world application: teaming in healthcare. This is a dynamic setting in which teams experience coordination, communication, and decision-making challenges, rendering it a well-suited application domain for my work. I was interested in how robots might be used to reduce preventable patient harm, which in the US kills over 400,000 patients and injures 5 million annually in hospitals alone. Here, nurses are the primary advocates for patients and thus are uniquely positioned to identify and prevent patient harm. However, strict hierarchical structures and asymmetrical power dynamics between physicians and nurses often result in penalties for nurses who speak up to “stop the line” of behavior that causes medical errors.

This inspired my work, which involved collaborating with nurses to envision how robots might empower and support them in clinical teams. For example, our study revealed that nurses want robots to assist with team decision-making, supply delivery, and team “choreography” during surgery and resuscitation procedures. This work provided exciting design concepts for future robot technology in acute care settings, which inspired later work in my Ph.D.

Fourth, I continued my investigation into how robots can support clinical teams by exploring the use of robots in the Emergency Department (ED). The ED is a safety-critical environment in which providers are overburdened, overworked, and have limited resources to do their jobs. To operate safely and effectively in these complex spaces, robots need to understand many features of the environment, including patient acuity, so that they do not interrupt treatment.

To address this, I developed the Safety-Critical Deep Q-Network (SafeDQN), a new reinforcement learning system that enables robots to navigate socially while taking patients' level of acuity into account. The main contribution of this work is a new computational model of patient acuity that enables robots to socially navigate in the ED. I compared SafeDQN to three classic navigation methods and found that SafeDQN generates the safest, quickest path in a simulated ED environment. Using SafeDQN, mobile robots can fetch and deliver supplies to ED staff in a manner that does not interrupt patient care and is thereby less likely to cause patient harm.
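To illustrate how patient acuity might enter a reinforcement-learning formulation, the sketch below shapes a navigation reward so that a Q-learning agent is penalized for passing close to high-acuity patients. All reward terms, weights, and the function name are assumptions made for illustration; they are not SafeDQN's actual formulation.

    # Sketch of an acuity-penalized navigation reward (illustrative only,
    # not SafeDQN's actual reward function).
    import numpy as np

    def navigation_reward(robot_pos, goal_pos, patients, step_cost=-0.01,
                          goal_bonus=1.0, acuity_weight=0.5, care_radius=2.0):
        """patients: list of (position, acuity) pairs with acuity in [0, 1]."""
        reward = step_cost                        # encourage short paths
        if np.linalg.norm(robot_pos - goal_pos) < 0.5:
            reward += goal_bonus                  # reached the delivery goal
        for patient_pos, acuity in patients:
            d = np.linalg.norm(robot_pos - patient_pos)
            if d < care_radius:                   # inside a patient's care zone
                # Penalty grows with acuity and with proximity to the patient.
                reward -= acuity_weight * acuity * (care_radius - d) / care_radius
        return reward

    # Example: passing 1 m from a high-acuity patient while en route.
    r = navigation_reward(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                          [(np.array([1.0, 0.0]), 0.9)])
    print(r)  # step cost plus an acuity penalty

Trained with a reward of this shape via the standard Q-learning target r + γ·max Q(s′, a′), an agent learns to trade path length against proximity to active treatment, which is the behavior the abstract describes.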

My Ph.D. research contributes to building real-time robotic systems that can work alongside people in real-world environments. My work enables robots to effectively identify groups, track them over time, and navigate and interact among them in safety-critical, real-world settings. This work will enable more robust, realistic HRI, and support safe operation of mobile robots in human-centered environments.
