eScholarship
Open Access Publications from the University of California
UC San Diego Electronic Theses and Dissertations
Behavior Prediction of Intelligent Agents in and Around Safe Autonomous Vehicles

Abstract

Autonomous vehicles operate in highly interactive environments. They share the road with humans and human-driven vehicles, and they may share control with humans in the cabin. Consider scenarios such as freeway merges, unsignalized intersections, or unprotected turns: these require cooperating with other on-road agents and predicting their intent and future motion. Similarly, consider partially or conditionally autonomous driving, where control must be transferred to a human driver; it is critical to predict the driver's takeover readiness and reaction times to ensure a safe transfer of control.

The goal of this dissertation is to develop models for predicting agent behavior in and around autonomous vehicles. We present our contributions in two parts. In part I, we address the task of trajectory prediction for surrounding agents. We propose models that incorporate multi-agent behavior, generalize to novel scene layouts, and output a multimodal distribution over future trajectories. Concretely, our contributions are: (i) a unified framework for maneuver classification and trajectory prediction in highway traffic; (ii) an LSTM encoder-decoder with convolutional social pooling for modeling agent-agent interaction, and maneuver-conditioned decoders for predicting a multimodal distribution; (iii) P2T, a model that infers the goals and path preferences of agents in novel scenes using a discrete, grid-based policy and predicts scene-compliant trajectories; and (iv) PGP, a model that predicts trajectories conditioned on paths traversed by a discrete policy over a graph representation of the scene, improving both computational efficiency and accuracy.
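The core idea behind convolutional social pooling is to scatter per-agent motion encodings into a spatial grid around the target vehicle, producing a tensor that convolutional layers can consume to capture interaction. The sketch below illustrates only that grid-construction step; the grid dimensions, cell sizes, and encoding width are illustrative assumptions, not values taken from the dissertation.

```python
import numpy as np

# Illustrative sketch of the social-pooling grid: neighbor encodings
# (e.g. LSTM hidden states) are placed into cells of a grid centered on
# the target vehicle. All sizes below are assumed, not from the source.
GRID_ROWS, GRID_COLS = 13, 3     # cells along and across the highway
CELL_LEN, CELL_WID = 15.0, 3.5   # metres per cell (assumed)
ENC_DIM = 32                     # per-agent encoding size (assumed)

def social_tensor(neighbors, encodings):
    """Scatter neighbor encodings into a (rows, cols, enc) grid.

    neighbors: (N, 2) array of (x, y) offsets from the target vehicle,
               x along the road, y across lanes.
    encodings: (N, ENC_DIM) array of per-agent encodings.
    """
    grid = np.zeros((GRID_ROWS, GRID_COLS, ENC_DIM))
    for (x, y), enc in zip(neighbors, encodings):
        r = int(np.floor(x / CELL_LEN)) + GRID_ROWS // 2
        c = int(np.floor(y / CELL_WID)) + GRID_COLS // 2
        if 0 <= r < GRID_ROWS and 0 <= c < GRID_COLS:
            grid[r, c] = enc  # one agent per cell in this sketch
    return grid

# Example: one neighbor ahead in the same lane, one behind in the left lane.
nbrs = np.array([[20.0, 0.0], [-10.0, -3.5]])
encs = np.ones((2, ENC_DIM))
g = social_tensor(nbrs, encs)
```

In the full model, a stack of convolution and pooling layers over this tensor would summarize the spatial configuration of traffic before decoding maneuver-conditioned trajectories.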

In part II, we focus on agents inside the autonomous vehicle. We address control transitions, in which the vehicle hands control back to a human driver via a takeover request. Given the driver's gaze, hand, and foot activity prior to the takeover request, we predict their readiness and reaction times during takeovers. Our contributions are: (i) a metric for the driver's takeover readiness based purely on observable cues, along with a model to estimate it; and (ii) a model for predicting takeover time, analyzed on a real-world dataset of control transitions.
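To make the idea of a cue-based readiness metric concrete, here is a toy illustration, not the dissertation's model: observable cue ratios from a window before the takeover request are combined into a scalar score in [0, 1]. The feature set and weights are invented for this sketch.

```python
import numpy as np

def readiness_score(gaze_on_road, hands_on_wheel, foot_near_pedal):
    """Toy readiness score from three cue ratios, each in [0, 1].

    Weights are assumed for illustration; a learned model would fit
    them (and richer temporal features) from annotated takeover data.
    """
    w = np.array([0.5, 0.3, 0.2])            # assumed relative importance
    x = np.array([gaze_on_road, hands_on_wheel, foot_near_pedal])
    z = float(w @ x)                          # weighted cue summary
    return 1.0 / (1.0 + np.exp(-6.0 * (z - 0.5)))  # squash to (0, 1)

# A driver watching the road with hands on the wheel scores higher than
# one looking away with hands off.
ready = readiness_score(0.9, 1.0, 0.8)
not_ready = readiness_score(0.1, 0.0, 0.2)
```

The point of the sketch is only the interface: readiness is estimated from observable cues alone, with no self-reported input from the driver.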
