eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Security and Privacy in Dynamical Systems

Abstract

Dynamical systems have found applications in many domains, most prominently control and optimization. Physical processes in nature can be modeled as dynamical systems, and control theory studies such systems in order to design mechanisms that produce desired behaviors. Optimization algorithms, on the other hand, are inherently recursive and can therefore also be modeled as dynamical systems. Because these systems underpin such an abundance of applications, addressing their unreliability is important. In this dissertation, we focus on challenges arising from two kinds of vulnerability: active attacks that manipulate physical components, and passive attacks that infer sensitive information. We take steps toward understanding these challenges and toward building robust systems.
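
To make the optimization-as-dynamics connection concrete, gradient descent on a cost f is the iterated map x_{k+1} = x_k - eta * grad f(x_k), a discrete-time dynamical system whose fixed points are the stationary points of f. The minimal sketch below illustrates this on a quadratic cost; the cost, step size, and dimensions are illustrative choices, not material from the dissertation.

import numpy as np

# Gradient descent as a discrete-time dynamical system:
#   x_{k+1} = x_k - eta * grad_f(x_k)
# Illustrative quadratic cost f(x) = 0.5 * x^T Q x - b^T x,
# whose unique minimizer (the map's fixed point) solves Q x = b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
b = np.array([1.0, 1.0])

def grad_f(x):
    return Q @ x - b

eta = 0.1          # step size; stability requires eta < 2 / lambda_max(Q)
x = np.zeros(2)    # initial state of the dynamical system
for k in range(200):
    x = x - eta * grad_f(x)   # iterate the map; the trajectory converges

print(x, np.linalg.solve(Q, b))   # iterate vs. exact minimizer

Viewed this way, convergence of the algorithm is exactly stability of the dynamical system's fixed point.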

Many control systems are cyber-physical in nature: there is a tight interaction between the cyber (computation and communication) and physical (sensing and actuation) components of the system. Cyber-Physical Systems (CPS) have enabled numerous applications in which decisions must be made based on the environment and sensory information. However, addressing the unreliability that may stem from communication, software security, and physical vulnerabilities remains a fundamental challenge. In the first part of this dissertation, we focus on the physical vulnerabilities of sensing and actuation modules, in which an adversary manipulates these components. In particular, we analyze two problems, "state estimation" and "system identification", in an adversarial environment. To make the system robust against such attacks, we propose several schemes that mitigate the adversary's impact.
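
As a toy illustration of the sensing-attack setting (not a scheme from the dissertation), suppose a scalar state is measured by several redundant sensors and an adversary can arbitrarily corrupt a minority of them. A median-based estimate stays close to the true state while a naive average is ruined; the sensor count, noise level, and attacked set below are hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

x_true = 4.2                                  # scalar state to estimate
m = 7                                         # redundant sensors
y = x_true + 0.01 * rng.standard_normal(m)    # nominal measurements

attacked = [1, 5]       # adversary corrupts a minority of the sensors
y[attacked] += 50.0     # arbitrary (unbounded) attack injection

x_mean = y.mean()       # naive estimate: ruined by the attack
x_med = np.median(y)    # robust while fewer than half the sensors are attacked

print(f"mean estimate:   {x_mean:.3f}")
print(f"median estimate: {x_med:.3f}  (true state: {x_true})")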

In recent years, personal data from domains such as health care and finance have become available, enabling the learning of high-complexity models for applications ranging from medical diagnosis to financial portfolio design. The common paradigm for learning such models is to optimize a cost function involving the model parameters and the data. Acquiring data from individuals and publishing models based on that data compromises users' privacy against a passive adversary who observes the training procedure. Addressing this vulnerability is crucial in the increasingly common scenario where models are built from sensitive data; privacy concerns are, for instance, a major roadblock to the large-scale use of personal data in health care. In the second part of this dissertation, we investigate two problems in this area: "private linear regression" and "private distributed optimization". We develop and analyze private learning mechanisms that guarantee utility while ensuring a given privacy level.
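
One standard mechanism for private learning of this kind, shown here purely as an illustration rather than as the dissertation's method, is gradient perturbation: each optimizer update is released only after per-sample gradients are clipped and calibrated Gaussian noise is added, limiting what a passive observer of the training procedure can infer about any single record. The clipping bound, noise scale, and synthetic data below are placeholder choices.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X w* + noise (illustrative only).
n, d = 200, 3
X = rng.standard_normal((n, d))
w_star = np.array([1.0, -2.0, 0.5])
y = X @ w_star + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
eta, C, sigma = 0.05, 1.0, 0.5    # step size, clipping bound, noise scale

for _ in range(500):
    # Per-sample gradients of the squared loss, clipped to norm C so that
    # any single record's influence on the released update is bounded.
    g = (X @ w - y)[:, None] * X                       # shape (n, d)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / C)
    # Average the clipped gradients and add Gaussian noise; the noise scale
    # relative to C (together with the number of iterations, via composition)
    # determines the differential-privacy guarantee.
    noisy_grad = g.mean(axis=0) + (sigma * C / n) * rng.standard_normal(d)
    w = w - eta * noisy_grad

print("private estimate:", w, " true:", w_star)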
