Assured Autonomy for Safety-Critical and Learning-Enabled Systems
- Author(s): Rubies Royo, Vicenc
- Advisor(s): Tomlin, Claire
Abstract
Autonomous systems are becoming ever more complex. This growth in complexity stems primarily from continual improvements in computational power, which have enabled, among other things, the use of more sophisticated high-dimensional dynamical models and of deep neural networks for perception and decision-making. Unfortunately, this increase in complexity is coupled with an increase in uncertainty about how these systems might behave in safety-critical settings, where guarantees of performance are needed.
In this dissertation, we first address the challenges involved in computing safety certificates for high-dimensional safety-critical systems and show how machine learning, in particular artificial neural networks, can provide scalable approximate solutions that work well in practice. However, reliance on neural networks for autonomy poses a challenge of its own, since these function approximators can produce erroneous behaviors when exposed, for example, to noise or adversarial attacks. With this in mind, the second half of the dissertation addresses the challenges involved in the verification of neural networks, and in particular how to assess whether deep feedforward neural networks adhere to safety specifications.