UC Santa Cruz Electronic Theses and Dissertations

A Framework for Generating Dangerous Scenes: Towards Explaining Realistic Driving Trajectories

Creative Commons BY-SA 4.0 license
Abstract

Deep neural networks are black-box models that are hard for humans to interpret. However, organizations developing AI models must ensure transparency and accountability by giving the public a comprehensive understanding of model functionality. We suggest integrating explainability information as feedback during the development, verification, and testing of models. Our testing framework provides the following insight during neural network training: is the model equally effective under minor variations of the input data? In this thesis, we show the differences in explainability by comparing original and altered autonomous driving datasets used for neural network training and explanation. We propose a framework for perturbing autonomous vehicle datasets, the DANGER framework, which generates edge-case images on top of existing autonomous driving datasets. The inputs to DANGER are photorealistic datasets captured from real driving scenarios. We present the DANGER algorithm for vehicle position manipulation and its interface to the renderer module, and we describe the generation of five scenario-level dangerous primitives applied to the Virtual KITTI, Virtual KITTI 2, and Waymo datasets. Our study makes two contributions: (a) our experiments demonstrate that DANGER can serve as a framework for expanding existing datasets to cover generated yet realistic and anomalous corner cases; (b) we test the feasibility of providing interpretable feedback during generic deep neural network training by reporting the Grad-CAM instability level.
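To make the Grad-CAM instability idea concrete, the following is a minimal, hypothetical sketch in PyTorch. It is not the thesis implementation: the TinyCNN model, the grad_cam and instability function names, and the choice of 1 minus cosine similarity between normalized Grad-CAM heatmaps of an original and a perturbed frame are all assumptions introduced here for illustration; the thesis does not specify these details in the abstract.

```python
# Hypothetical sketch: a Grad-CAM "instability" score between an original frame
# and a DANGER-style perturbed frame. The model, metric, and names are
# illustrative assumptions, not the thesis implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyCNN(nn.Module):
    """Placeholder classifier; the thesis trains its own driving models."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        a = self.features(x)                              # last conv activations
        pooled = F.adaptive_avg_pool2d(a, 1).flatten(1)
        return self.head(pooled), a


def grad_cam(model, image, target_class):
    """Return a normalized Grad-CAM heatmap for one image tensor (C, H, W)."""
    model.zero_grad()
    logits, activations = model(image.unsqueeze(0))
    activations.retain_grad()                             # keep grads of non-leaf
    logits[0, target_class].backward()
    weights = activations.grad.mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * activations).sum(dim=1)).squeeze(0)
    return (cam / (cam.max() + 1e-8)).detach()            # scale to [0, 1]


def instability(model, original, perturbed, target_class):
    """1 - cosine similarity of the two heatmaps; 0 means identical focus."""
    cam_a = grad_cam(model, original, target_class).flatten()
    cam_b = grad_cam(model, perturbed, target_class).flatten()
    return 1.0 - F.cosine_similarity(cam_a, cam_b, dim=0).item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCNN()
    frame = torch.rand(3, 64, 64)                         # stand-in dataset frame
    danger_frame = frame + 0.05 * torch.rand(3, 64, 64)   # stand-in perturbation
    print(f"Grad-CAM instability: {instability(model, frame, danger_frame, 0):.4f}")
```

A higher score under this assumed metric would indicate that a small input perturbation shifts the model's attention, which is the kind of interpretable feedback the abstract proposes surfacing during training.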
