Safe Learning and Verification of Neural Network Controllers for Autonomous Systems
eScholarship
Open Access Publications from the University of California

UC Irvine

UC Irvine Electronic Theses and Dissertations


Abstract

The last decade has witnessed tremendous success in using machine learning (ML) to control physical systems, such as autonomous vehicles, drones, and smart cities. On the one hand, learning-based controller synthesis enjoys the scalability and flexibility of purely data-driven architectures. Nevertheless, these end-to-end learning approaches lack safety, reliability, and generalization guarantees. On the other hand, control-theoretic and formal-methods techniques provide guarantees of satisfying high-level specifications. However, these algorithms require an explicit model of the dynamical system and suffer from high computational complexity whenever the dynamical model is highly nonlinear and complex. The objective of this dissertation is to develop learning algorithms and verification tools that bridge ideas from symbolic control and reasoning techniques to design ML-controlled autonomous systems with certifiable trust and assurance. The contributions of this dissertation are five-fold. (1) We propose a neurosymbolic framework that integrates machine learning and symbolic techniques to train neural network (NN) controllers for robotic systems that satisfy temporal logic specifications. In particular, the trained NN controllers enjoy strong correctness guarantees when applied to unseen tasks, i.e., tasks whose exact details (including the environment, the specifications, and the dynamic constraints of the robot) are unknown during the training of the NNs. (2) We introduce the first framework to formally reason about the safety of autonomous systems equipped with a neural network controller that processes LiDAR images to produce control actions. Given an NN-controlled autonomous system that perceives its environment through a LiDAR sensor, our framework computes a set of safe initial states such that the system is guaranteed to remain safe when starting from any of these states.
(3) We propose a novel approach, called NNSynth, that uses machine learning techniques to guide the design of abstraction-based controllers. Thanks to the use of ML, NNSynth achieves significant performance improvements over traditional controller synthesis while maintaining probabilistic guarantees. (4) We consider the problem of automatically designing neural network architectures and present a systematic methodology for choosing NN architectures that are guaranteed to be capable of implementing a controller satisfying a given high-level specification. (5) Finally, we present an efficient multi-robot motion planning algorithm for missions captured by temporal logic specifications in the presence of bounded disturbances and denial-of-service (DoS) attacks.
