Formal Verification of Neural Networks
- Khedr, Haitham
- Advisor(s): Shoukry, Yasser
Abstract
Neural networks (NNs) have been widely used over the past decade at the core of intelligent systems, from sensing modules to learning-based controllers. They have also been deployed in different safety-critical domains, including healthcare and transportation. However, recent work has shown that NNs are fragile and can make dangerous mistakes that are either unintentional or adversarial. As a consequence, formal verification of NNs holds the promise of providing safety guarantees on the behaviour of such systems. We focus our work on ReLU networks, as ReLU is the most widely used activation function. Exact formal verification of ReLU NNs has been proven NP-hard due to the combinatorial nature of the problem; therefore, all current verification methods use some relaxation of the problem. We propose a novel framework for the formal verification of ReLU neural networks that ensures they satisfy polytopic specifications on the input and the output of the network. Our approach uses a relaxed convex program to mitigate the combinatorial complexity of the problem, together with optimization heuristics, to efficiently verify the satisfaction of the specification on a given network. We have implemented our algorithm in a toolkit, PeregriNN. To test PeregriNN, we run it on two benchmarks in different domains. First, we achieve state-of-the-art results on verifying the adversarial robustness of different networks on the MNIST dataset. Second, we verify the safety of a neural-network-controlled autonomous robot in a structured environment.
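To make the convex-relaxation idea concrete, the sketch below illustrates the standard "triangle" LP relaxation of a single ReLU neuron; it is not PeregriNN's actual algorithm, and all weights, bounds, and the output threshold are hypothetical. Given a box over the inputs, it bounds the pre-activation by interval arithmetic, relaxes y = max(0, z) to three linear constraints, and maximizes the output with an LP. Because the relaxation over-approximates the network's behaviour, an answer of "verified" is sound, while a bound above the threshold is merely inconclusive and would require refinement.

```python
# Minimal sketch (not PeregriNN's implementation) of the triangle LP
# relaxation for one ReLU neuron y = max(0, w.x + b). All numbers are
# illustrative; the sketch assumes the pre-activation straddles zero.
import numpy as np
from scipy.optimize import linprog

w = np.array([1.0, -2.0])     # hypothetical weights
b = 0.5                       # hypothetical bias
lx = np.array([-1.0, -1.0])   # input box, lower bounds
ux = np.array([1.0, 1.0])     # input box, upper bounds

# Pre-activation bounds z in [lz, uz] via interval arithmetic.
lz = b + np.sum(np.minimum(w * lx, w * ux))
uz = b + np.sum(np.maximum(w * lx, w * ux))
assert lz < 0.0 < uz, "triangle relaxation applies to unstable neurons"

# Decision variables v = [x1, x2, z, y]; maximize y (linprog minimizes).
c = np.array([0.0, 0.0, 0.0, -1.0])

# Network equality: w.x + b = z  ->  w1*x1 + w2*x2 - z = -b.
A_eq = np.array([[w[0], w[1], -1.0, 0.0]])
b_eq = np.array([-b])

# Triangle relaxation of y = relu(z) on [lz, uz]:
#   y >= z           ->  z - y <= 0
#   y >= 0           ->  encoded in the variable bounds below
#   y <= s*(z - lz)  ->  y - s*z <= -s*lz, slope s = uz / (uz - lz)
s = uz / (uz - lz)
A_ub = np.array([
    [0.0, 0.0, 1.0, -1.0],
    [0.0, 0.0, -s, 1.0],
])
b_ub = np.array([0.0, -s * lz])

bounds = [(lx[0], ux[0]), (lx[1], ux[1]), (lz, uz), (0.0, max(uz, 0.0))]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
y_max = -res.fun  # relaxed (sound) upper bound on the neuron's output

threshold = 4.0   # hypothetical polytopic output specification: y <= 4
if y_max <= threshold:
    print(f"verified: relaxed max output {y_max:.3f} <= {threshold}")
else:
    print(f"inconclusive: relaxed bound {y_max:.3f} exceeds {threshold}")
```

With the hypothetical numbers above, the LP returns an upper bound of 3.5, so the specification y <= 4 is verified for the whole input box without enumerating ReLU activation patterns, which is exactly the complexity the abstract's relaxation is meant to avoid.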