Open Access Publications from the University of California

Reinforcement Learning for Power Management of Batteryless Sensing Systems

  • Author(s): Fraternali, Francesco
  • Advisor(s): Gupta, Rajesh
  • et al.

Edge devices are embedded sensing or actuation devices accessible via wireless sensor networks in applications such as monitoring structural health or environmental conditions in buildings. To avoid retrofitting costs and ease deployment, these devices are often battery-powered, thus requiring manual battery replacement to maintain their operation over time. Yet, as a sensor network scales up to thousands of sensors, maintenance becomes a time-consuming and labor-intensive task. Energy harvesting is often used to extend the lifetime of sensor nodes and avoid battery replacement.

In this dissertation, we present techniques that combine hardware, software, and artificial intelligence to extend the lifetime of sensor nodes for decades without sacrificing application performance, even in environments with low energy availability.

As a hardware solution, we present a sensing platform that can be deployed anywhere inside a building and monitor a wide range of parameters without the periodic battery replacement typical of current solutions. Instead of a rechargeable battery, it uses a supercapacitor to store the energy harvested from the environment. To facilitate deployment and integration with existing buildings, the platform uses Bluetooth Low Energy (BLE) to relay data.
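The energy budget of such a supercapacitor-backed node follows from E = ½CV²: the usable energy is the difference between the stored energy at full charge and at the minimum voltage the node's regulator accepts. A minimal sketch with hypothetical component values (not the dissertation's actual platform):

```python
def usable_energy_j(capacitance_f, v_max, v_min):
    """Usable energy (joules) in a supercapacitor discharged from
    v_max down to v_min: E = 1/2 * C * (v_max^2 - v_min^2)."""
    return 0.5 * capacitance_f * (v_max ** 2 - v_min ** 2)

# Hypothetical 1 F supercapacitor, charged to 3.0 V, usable down to 1.0 V.
print(usable_energy_j(1.0, 3.0, 1.0))  # → 4.0 (joules)
```

Dividing this figure by the average power drawn per sensing cycle gives a rough estimate of how long the node can run without any harvesting.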

Since the amount of energy harvested varies across sensor node locations, and applications can have different energy requirements over time, we present a learning-based method that extends the operating lifetime of network-connected edge devices while improving application performance under the available energy. We describe design choices that enable an indoor environmental sensing device to exploit reinforcement learning for periodic and event-driven sensing with ambient light energy harvesting. Using simulations and real deployments, we show that our sensor nodes adapt to ambient lighting conditions and send measurements and events continuously, without interruption, through nights and weekends. We use real-world deployment data to continually adapt sensing to changing environmental patterns, and we use transfer learning to reduce training time.
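To illustrate the kind of learning involved, the following is a minimal tabular Q-learning sketch, with hypothetical states, actions, reward, and environment dynamics (not the dissertation's actual formulation): an agent picks a sensing period from the discretized supercapacitor charge and ambient light level, and learns to back off when energy is scarce.

```python
import random

# Hypothetical discretization: supercapacitor charge and ambient light
# are each binned into three levels (0 = low, 2 = high); actions are
# candidate sensing periods in seconds.
ACTIONS = [30, 60, 300]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table mapping (charge, light) -> one value per action.
q = {(c, l): [0.0] * len(ACTIONS) for c in range(3) for l in range(3)}

def choose_action(state, explore=True):
    """Epsilon-greedy selection over sensing periods."""
    if explore and random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = q[state]
    return values.index(max(values))

def update(state, action, r, next_state):
    """Standard tabular Q-learning update."""
    q[state][action] += ALPHA * (r + GAMMA * max(q[next_state]) - q[state][action])

# Toy environment: bright light recharges the capacitor, frequent
# sensing drains it, and frequent sensing on an empty capacitor is
# heavily penalized (it would brown the node out).
random.seed(0)
charge, light = 2, 2
for _ in range(5000):
    state = (charge, light)
    a = choose_action(state)
    drain = 1 if ACTIONS[a] < 300 else 0
    charge = max(0, min(2, charge + (1 if light == 2 else 0) - drain))
    r = -2.0 if (charge == 0 and drain) else 1.0 / ACTIONS[a]
    light = random.choice([0, 1, 2])  # ambient light varies over time
    update(state, a, r, (charge, light))

# After training, the greedy policy backs off to the longest sensing
# period when both stored energy and ambient light are at their lowest.
print(ACTIONS[choose_action((0, 0), explore=False)])
```

The reward trades off measurement frequency against the risk of depleting the energy store, which is the essential tension an energy-harvesting sensing policy must resolve.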

To be effective, these techniques require prior knowledge of the environment in which the sensor nodes are deployed; in the absence of historical data, application performance deteriorates. To address this problem, we present an approach that leverages meta reinforcement learning to increase the application performance of newly deployed batteryless sensor nodes that lack historical data. Our method exploits information from other sensor node locations to expedite learning on newly deployed sensor nodes and improves application performance within a few days of deployment.
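The intuition of reusing experience from other locations can be illustrated with a simple warm start — a hypothetical sketch only, since the dissertation's actual method is meta reinforcement learning rather than plain averaging: a new node's value table is initialized from the tables learned at already-deployed locations, so it starts from sensible behavior instead of from scratch.

```python
def warm_start(q_tables):
    """Initialize a new node's Q-table as the element-wise average of
    Q-tables learned at other sensor-node locations."""
    n = len(q_tables)
    return {
        s: [sum(t[s][a] for t in q_tables) / n
            for a in range(len(q_tables[0][s]))]
        for s in q_tables[0]
    }

# Two hypothetical deployed nodes that each learned to prefer the
# conservative action (index 1) when energy is scarce (state 0).
node_a = {0: [-0.5, 0.25], 1: [0.5, 0.125]}
node_b = {0: [-0.25, 0.75], 1: [0.75, 0.0]}
fresh = warm_start([node_a, node_b])
print(fresh)  # → {0: [-0.375, 0.5], 1: [0.625, 0.0625]}
```

The new node then fine-tunes this table with its own observations, which is why learning at a fresh location can converge in days rather than weeks.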
