Find it like a dog: Using Gesture to Improve Object Search
Abstract
Pointing is an intuitive and commonplace communication modality. In human-robot collaborative tasks, human pointing has been modeled using a variety of approaches, such as the forearm vector or the vector from eye to hand. However, models of the human pointing vector have not been uniformly or comprehensively evaluated. We performed a user study to compare five different representations of the pointing vector and their accuracy in identifying the human's intended target in an object selection task. We also compared the vectors' performance to that of domestic dogs, a non-human baseline known to be successful at following human points. Additionally, we developed an observation model that transforms the pointing vector into a probability map for object search. We implemented our system on our robot, enabling it to locate and fetch the user's desired objects efficiently and accurately.
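The abstract mentions converting a pointing vector into a probability map over candidate objects, but the paper's actual observation model is not reproduced here. The snippet below is a minimal illustrative sketch only: it assumes the forearm (elbow-to-wrist) representation of the pointing vector and a von Mises-style angular weighting, neither of which is confirmed by the text above; all function names and parameters are hypothetical.

```python
import numpy as np

def pointing_vector(elbow, wrist):
    """Unit vector along the forearm (one possible pointing representation)."""
    v = wrist - elbow
    return v / np.linalg.norm(v)

def target_probabilities(origin, direction, object_positions, kappa=10.0):
    """Score candidate objects by angular agreement with the pointing ray.

    Illustrative only: weights each object by exp(kappa * cos(theta)), where
    theta is the angle between the pointing direction and the ray from the
    gesture origin to the object, then normalizes into a probability map.
    """
    scores = []
    for obj in object_positions:
        to_obj = obj - origin
        to_obj = to_obj / np.linalg.norm(to_obj)
        cos_theta = np.clip(np.dot(direction, to_obj), -1.0, 1.0)
        scores.append(np.exp(kappa * cos_theta))
    scores = np.array(scores)
    return scores / scores.sum()

# Example: three candidate objects, gesture pointing roughly at the first.
elbow = np.array([0.0, 0.0, 1.0])
wrist = np.array([0.3, 0.0, 0.9])
objects = [np.array([1.5, 0.0, 0.5]),
           np.array([1.5, 0.8, 0.5]),
           np.array([1.5, -0.8, 0.5])]
direction = pointing_vector(elbow, wrist)
print(target_probabilities(wrist, direction, objects))
```

A search policy could then visit objects in order of decreasing probability, which is one simple way such a map might drive an object fetch.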