Learning-enabled Cyber-Physical Systems: Challenges and Strategies
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations



Cyber-physical systems (CPS) increasingly adopt learning-enabled components with deep neural networks in their decision-making pipelines. Deep neural networks promise to simplify CPS pipelines for high-dimensional sensors: they require little pre-processing of data and are shown to be more accurate than their traditional counterparts. However, integrating neural networks into the sense-infer-actuate pipeline of CPS faces several challenges. In this dissertation, we study the following challenges in the context of learning-enabled CPS and propose new algorithms and system design strategies to address them.

First, we study the challenge of characterizing uncertainty in sensor data timestamps and its impact on multimodal fusion applications. Motivated by smartphones' integration into several CPS applications, we quantify data timestamp uncertainty across modern smartphone devices. To our surprise, we find drastic timestamping errors, ranging up to multiple seconds, on Android devices. We then explore whether these timing errors are significant enough to affect neural network performance. Our evaluation shows that the observed timing errors can cripple deep neural networks performing multimodal fusion due to data misalignment. This finding signifies the need to rethink the shared notion of time on smartphones. To mitigate timestamp errors, we introduce approaches to synchronize time across smartphones that achieve timing accuracy of up to 200 microseconds. We also propose a novel time-shift data augmentation technique to train time-resilient neural networks that are robust to inevitable timing errors and degrade gracefully when such errors occur.
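The time-shift augmentation idea can be sketched as follows; the function name, the two-modality layout, and the edge-padding strategy are illustrative assumptions, not the dissertation's actual implementation:

```python
import random

def time_shift_augment(modality_a, modality_b, max_shift, seed=None):
    """Illustrative time-shift augmentation for a two-modality sample.

    Shifts modality_b relative to modality_a by a random number of
    samples in [-max_shift, max_shift], simulating timestamp error,
    so a fusion network trained on such samples learns to tolerate
    misalignment between modalities.
    """
    rng = random.Random(seed)
    shift = rng.randint(-max_shift, max_shift)
    n = len(modality_b)
    if shift > 0:
        # modality_b arrives late: pad the front with its first value
        shifted = [modality_b[0]] * shift + modality_b[:n - shift]
    elif shift < 0:
        # modality_b arrives early: pad the end with its last value
        shifted = modality_b[-shift:] + [modality_b[-1]] * (-shift)
    else:
        shifted = list(modality_b)
    return modality_a, shifted, shift
```

Applied per sample during training, this exposes the network to the same misalignments it will encounter at deployment, so accuracy degrades gracefully rather than collapsing when timestamps drift.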

As a second challenge, we explore the impact of variable delays on emerging deep reinforcement learning (RL) controllers, which are preferred for their capability to handle high-dimensional data. Conventional controllers can model and account for delay variations in their design. However, handling variable delays in deep RL is challenging because a black-box neural network represents the controller policy. Researchers currently use domain randomization and worst-case delay modeling to train deep-RL policies over a spread of expected delay variations. We demonstrate significant performance degradation in applications even when using the state-of-the-art domain randomization approach. To address this, we propose Time-in-State RL, a delay-aware deep-RL approach that augments the agent's state with temporal properties (sampling interval and execution latency). Time-in-State RL trains policies that achieve superior performance by adapting to variable timing characteristics at runtime. We further show that Time-in-State RL outperforms worst-case delay controllers when worst-case delays are significant. We demonstrate the efficacy of Time-in-State RL on HalfCheetah, Ant, and a car in simulation, and on a real scaled car robot.
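The core state-augmentation step can be sketched with a small wrapper; the class name, interface, and the placement of the two timing features at the end of the observation are assumptions for illustration, not the dissertation's implementation:

```python
import time

class TimeInStateWrapper:
    """Illustrative sketch of the Time-in-State idea: append the
    measured sampling interval and the previous action-computation
    latency to the raw observation, so a delay-aware policy can
    observe and adapt to timing variations at runtime.
    """

    def __init__(self, policy):
        self.policy = policy          # callable: augmented obs -> action
        self.last_obs_time = None
        self.last_latency = 0.0

    def act(self, observation):
        now = time.monotonic()
        # Sampling interval: elapsed time since the previous observation.
        interval = 0.0 if self.last_obs_time is None else now - self.last_obs_time
        self.last_obs_time = now
        # Augment the state with the two temporal properties.
        augmented = list(observation) + [interval, self.last_latency]
        start = time.monotonic()
        action = self.policy(augmented)
        # Execution latency of this step, fed into the next state.
        self.last_latency = time.monotonic() - start
        return action
```

Because the timing features are part of the state during training, the learned policy can condition its actions on the delays it actually experiences, rather than on a randomized or worst-case assumption.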

Third, we study the challenge of modeling the CPS environment to train end-to-end controllers using deep RL for closed-loop systems. We specifically consider the example of autonomous pan-tilt-zoom (PTZ) controllers. Existing autonomous PTZ controllers have multiple stages: detecting objects of interest, short-term tracking, and controlling pan, tilt, and zoom parameters to keep objects in the field of view. These multi-stage pipelines suffer from performance bottlenecks because each stage is difficult to optimize independently. Further, the multiple stages are too computationally intensive to run in real time on embedded camera platforms. Despite these shortcomings, developers adopt existing multi-stage solutions due to the lack of simulators needed to develop end-to-end controller policies. We propose Eagle, an end-to-end deep-RL approach that uses raw images to control a PTZ camera. To enable successful training of Eagle, we also introduce EagleSim, a simulation framework for studying PTZ cameras in photo-realistic virtual worlds. Our evaluation across a suite of PTZ tracking scenarios shows that Eagle outperforms current multi-stage approaches with superior tracking performance. Further, Eagle policies transfer to real-scene videos and are lightweight enough for real-time deployment on Raspberry Pi and Jetson Nano class devices.
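The action side of such an end-to-end controller can be illustrated with a simple mapping from a normalized policy output to camera commands; the function name and the pan/tilt/zoom ranges below are hypothetical, chosen only to show the interface, not Eagle's actual command space:

```python
def ptz_command(policy_output, pan_range=(-90.0, 90.0),
                tilt_range=(-30.0, 30.0), zoom_range=(1.0, 10.0)):
    """Illustrative mapping from a raw policy output in [-1, 1]^3
    to absolute pan/tilt/zoom camera commands.
    """
    def scale(x, lo, hi):
        x = max(-1.0, min(1.0, x))               # clip to the policy's range
        return lo + (x + 1.0) * (hi - lo) / 2.0  # affine map onto [lo, hi]

    pan, tilt, zoom = policy_output
    return (scale(pan, *pan_range),
            scale(tilt, *tilt_range),
            scale(zoom, *zoom_range))
```

A single learned mapping from raw pixels to such a command replaces the detect-track-control stages, which is what makes real-time execution feasible on embedded camera platforms.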

Finally, we study the challenge of developing machine learning classifiers with optimal accuracy within the desired resource budget of CPS applications. Selecting an optimal classifier is becoming increasingly complex, given the many choices of classifiers and their rich hyperparameter spaces. Although several hyperparameter tuning frameworks exist, their practical adoption is hindered by inferior search algorithms, inflexible architectures, software dependencies, or closed-source nature. As a solution, we propose a lightweight library with a flexible architecture and state-of-the-art parallel optimization algorithms. We present Mango, a parallel hyperparameter tuning library that realizes the proposed design. Mango has been used in production at Arm for more than 30 months and is available open source. We evaluate Mango on several benchmarks to highlight its superior performance. We discuss production use cases of Mango in an AutoML framework and a commercial CPU design pipeline. We also showcase another advantage of Mango: enabling hardware-aware neural architecture search to transfer deep neural networks to TinyML platforms (microcontroller-class devices) used by CPS/IoT applications.
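The shape of such a tuner's interface can be sketched with a minimal stand-in; note this is plain random search over a discrete space for illustration only — Mango itself uses parallel Bayesian optimization, and the function and parameter names here are assumptions, not Mango's API:

```python
import random

def tune(objective, param_space, n_iter=20, seed=0):
    """Minimal random-search sketch of a hyperparameter tuner's core
    loop: sample configurations from a declarative search space,
    score each with the user's objective, and keep the best.

    param_space maps each hyperparameter name to a list of
    candidate values; objective takes them as keyword arguments.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        # Sample one configuration from the search space.
        params = {name: rng.choice(values)
                  for name, values in param_space.items()}
        score = objective(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

A real tuner improves on this loop in two ways that matter for the use cases above: it proposes batches of promising configurations (via a surrogate model) instead of sampling blindly, and it evaluates those batches in parallel across local or distributed workers.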
