UCLA Electronic Theses and Dissertations

Privacy in Control over the Cloud and Learning to Control From Expert Demonstrations

Abstract

In this thesis, we consider two problems relevant to the control of complex closed-loop systems. In the first chapter, we focus on the implications that control over the cloud has for the privacy of control systems and propose a method that protects privacy without sacrificing control performance. In the second chapter, we revisit the problem of learning a controller from a finite number of demonstrations while guaranteeing stability.

The first chapter considers the following question: "Given the need to offload control of a system to a third party (i.e., a cloud), can we still guarantee the privacy of information about that system and its control objective?" Cloud computing platforms are increasingly being used for closing feedback control loops, especially when computationally expensive algorithms, such as model-predictive control, are used to optimize performance. Outsourcing a control algorithm entails an exchange of data between the control system and the cloud and naturally raises concerns about the privacy of the control system's data (e.g., its state trajectory and control objective). Moreover, any attempt at enforcing privacy needs to add minimal computational overhead to avoid degrading control performance. We propose several transformation-based methods for enforcing data privacy. We also quantify the amount of privacy provided and discuss how much privacy is lost when the adversary has access to side knowledge. We address three scenarios: a) the cloud has no knowledge about the system being controlled; b) the cloud knows which sensors and actuators the system employs but not the system dynamics; c) the cloud knows the system dynamics, its sensors, and its actuators. In all three scenarios, the proposed methods allow control over the cloud without compromising private information (which information is considered private depends on the scenario).
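
To make the transformation-based idea concrete, the following is a minimal, illustrative sketch rather than the scheme analyzed in the thesis: the control system keeps secret invertible matrices T and S, sends the cloud only a transformed model and transformed states z = T x, and locally undoes the input-side transformation on whatever the cloud returns. The dimensions, the random matrices, and the placeholder cloud_controller (a stand-in for an outsourced solver such as MPC) are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative plant x+ = A x + B u (dimensions chosen arbitrarily).
    n, m = 4, 2
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))

    # Secret transformations held by the control system, never sent to the cloud.
    # Random Gaussian matrices are invertible with probability one.
    T = rng.standard_normal((n, n))      # state-side change of coordinates, z = T x
    S = rng.standard_normal((m, m))      # input-side transformation, u_tilde = S u
    T_inv = np.linalg.inv(T)
    S_inv = np.linalg.inv(S)

    # What the cloud sees: the transformed model only.
    A_tilde = T @ A @ T_inv
    B_tilde = T @ B @ S_inv

    def cloud_controller(A_t, B_t, z):
        """Stand-in for the outsourced computation on transformed data only."""
        K_t = np.linalg.pinv(B_t) @ A_t   # naive illustrative gain, not a real MPC solve
        return -K_t @ z

    # One control step as executed by the (trusted) control system.
    x = rng.standard_normal(n)            # true state, kept private
    z = T @ x                             # transformed state sent to the cloud
    u_tilde = cloud_controller(A_tilde, B_tilde, z)
    u = S_inv @ u_tilde                   # undo the input-side transformation locally
    x_next = A @ x + B @ u

In this sketch the cloud only ever operates on (A_tilde, B_tilde, z), so it never observes the true dynamics or state trajectory; how much such a transformation actually hides, and what an adversary with side knowledge can still infer, is what the chapter quantifies.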

The second chapter addresses the problem of learning control from expert demonstrations. Learning control from expert demonstrations is useful for control tasks where providing examples of the desired behaviour is easier than defining such behaviour formally (e.g., driving a car comfortably). This problem has been addressed in the literature using tools from statistical machine learning; however, many of the proposed methods lack formal guarantees on stability and safety. Using tools from control theory, and focusing first on feedback linearizable systems, we show how to combine expert demonstrations into a stabilizing controller, provided that the demonstrations are sufficiently long and there are at least n+1 of them, where n is the number of states of the system being controlled. When more than n+1 demonstrations are available, we discuss how to optimally choose the best n+1 demonstrations for constructing the stabilizing controller. We then extend these results to a class of systems that can be embedded into a higher-dimensional system containing a chain of integrators. The feasibility of the proposed algorithm is demonstrated by applying it to a CrazyFlie 2.0 quadrotor.
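
One way to read "combining demonstrations" is as interpolation: with n states, a query state can generically be written as an affine combination of n+1 demonstration states (n equations plus the constraint that the weights sum to one), and the same weights can then be applied to the experts' inputs. The sketch below illustrates only this counting argument; it is not the thesis's algorithm, and the helper names (affine_weights, interpolated_input) are invented for the example.

    import numpy as np

    def affine_weights(demo_states, x):
        """Find weights w with demo_states @ w == x and sum(w) == 1.

        demo_states: n x (n+1) array whose columns are the demonstrators' states.
        The augmented system is square, so it is solvable for generic demonstrations.
        """
        n, k = demo_states.shape
        A_aug = np.vstack([demo_states, np.ones((1, k))])
        b_aug = np.append(x, 1.0)
        return np.linalg.solve(A_aug, b_aug)

    def interpolated_input(demo_states, demo_inputs, x):
        """Apply the affine weights that reproduce x to the experts' inputs."""
        w = affine_weights(demo_states, x)
        return demo_inputs @ w

    # Tiny check on a double integrator (n = 2, so three demonstration states):
    # if every expert applies the same linear law u = K x, the interpolation
    # reproduces that law at any query state, up to numerical precision.
    K = np.array([-1.0, -1.5])
    demo_states = np.array([[0.0, 1.0, 0.0],
                            [0.0, 0.0, 1.0]])
    demo_inputs = K @ demo_states          # inputs the experts applied at those states
    x_query = np.array([0.4, -0.2])
    u_query = interpolated_input(demo_states, demo_inputs, x_query)
    assert np.isclose(u_query, K @ x_query)

The check also suggests why sufficiently long and sufficiently rich demonstrations matter: the interpolation is only as good as the region of the state space that the n+1 demonstrations cover.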
