
UCLA Electronic Theses and Dissertations

Robust and Efficient Neural Inertial Localization and Complex Activity Recognition

Abstract

Inertial complex activity recognition and neural inertial navigation are challenging due to missing samples, misaligned data timestamps across sensor channels, variations in sampling rates, and high model deployment costs. In this thesis, we introduce a robust training pipeline for complex activity detection that handles sampling-rate variability, missing data, and misaligned data timestamps using intelligent data augmentation techniques. Specifically, we apply controlled jitter to the window length and add artificial misalignments in data timestamps between sensors, along with masking representations of missing data. In addition, we exploit end-to-end sequential learning, alpha-beta filters, Madgwick filters, hardware- and quantization-aware Bayesian neural architecture search, and a temporal convolutional neural network backbone to form the basis of scalable, real-time, sub-meter, GPS-free inertial localization on a wide spectrum of resource-constrained target hardware. We also provide a compact, ultra-low-power, environmentally resilient, and modular sensor-tag configuration that pushes the state of the art in inertial odometry hardware. On average, the network found via our efficient pipeline provided 3x peak-activation and 6x memory savings over state-of-the-art neural inertial algorithms, while taking at most 24 hours to train and search for Pareto-optimal models in the backbone search space. Moreover, we evaluate the complex activity pipeline on a state-of-the-art complex activity recognition dataset, achieving test accuracies of 88% and 72% for coarse and granular activity classification, respectively, and ranking 3rd out of 78 submissions in the 2020 Cooking Activity Recognition Challenge.
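
To make the data-augmentation strategy concrete, the sketch below illustrates the three ideas named in the abstract: controlled jitter in window length, artificial timestamp misalignment between sensors, and masking of missing samples. It is a minimal NumPy sketch; the function names, parameter values, and channel layout are illustrative assumptions, not the thesis implementation.

import numpy as np

def jitter_window_length(window, max_jitter=0.1):
    """Randomly shrink/stretch a (T, C) window and resample it back to T samples,
    emulating sampling-rate variability. max_jitter is an assumed fraction."""
    t, c = window.shape
    scale = 1.0 + np.random.uniform(-max_jitter, max_jitter)
    new_t = max(2, int(round(t * scale)))
    src = np.linspace(0.0, 1.0, new_t)
    dst = np.linspace(0.0, 1.0, t)
    # Resample each channel to the jittered length, then back to the original length.
    stretched = np.stack([np.interp(src, dst, window[:, i]) for i in range(c)], axis=1)
    return np.stack([np.interp(dst, src, stretched[:, i]) for i in range(c)], axis=1)

def misalign_sensors(window, sensor_slices, max_shift=5):
    """Circularly shift each sensor's block of channels by a small random offset,
    simulating misaligned timestamps between sensors."""
    out = window.copy()
    for sl in sensor_slices:
        shift = np.random.randint(-max_shift, max_shift + 1)
        out[:, sl] = np.roll(window[:, sl], shift, axis=0)
    return out

def mask_missing(window, drop_prob=0.05):
    """Zero out random samples and append a binary mask channel marking valid data."""
    t, _ = window.shape
    keep = (np.random.rand(t) > drop_prob).astype(window.dtype)
    return np.concatenate([window * keep[:, None], keep[:, None]], axis=1)

# Example: a 200-sample window with 3 accelerometer + 3 gyroscope channels.
win = np.random.randn(200, 6).astype(np.float32)
aug = mask_missing(misalign_sensors(jitter_window_length(win), [slice(0, 3), slice(3, 6)]))

Similarly, the alpha-beta filter cited alongside the Madgwick filter is a classical fixed-gain tracker; a minimal one-dimensional version is sketched below. The gains and time step are illustrative values, not ones taken from the thesis.

def alpha_beta_filter(measurements, dt=0.01, alpha=0.85, beta=0.005):
    """Smooth a sequence of noisy 1-D position measurements with fixed gains."""
    x, v = measurements[0], 0.0          # initial position and velocity estimates
    filtered = []
    for z in measurements:
        x_pred = x + v * dt              # predict forward with current velocity
        residual = z - x_pred
        x = x_pred + alpha * residual    # correct position
        v = v + (beta / dt) * residual   # correct velocity
        filtered.append(x)
    return filtered
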
