Machine Learning in IoT Systems: From Deep Learning to Hyperdimensional Computing
With the emergence of the Internet of Things (IoT), devices are generating massive amounts of data. Running machine learning algorithms on IoT devices poses substantial technical challenges due to their limited resources. The focus of this dissertation is to dramatically increase the computing efficiency and learning capability of today's IoT systems by accelerating existing algorithms in hardware and by designing new classes of lightweight machine learning algorithms. Our design modifies storage-class memory to support search-based and vector-based computation in memory. We show how this architecture can accelerate deep neural networks in both the training and inference phases, yielding 303× faster and 48× more energy-efficient training than a state-of-the-art GPU.
Hardware acceleration alone does not provide all the efficiency and robustness that we need. Therefore, we present Hyperdimensional (HD) computing, an alternative method of learning that implements key principles of brain functionality: (i) fast learning, (ii) robustness to noise and error, and (iii) intertwined memory and logic. These features make HD computing a promising solution both for today's resource-limited embedded devices and for future computing systems built in deeply scaled technologies, which suffer from high noise and variability. We exploit emerging memory technologies to enable processing in-memory, which supports highly parallel computation and reduces data movement. Our evaluations show that HD computing is 39× faster and 56× more energy efficient than a state-of-the-art deep learning accelerator.
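To make the HD computing model concrete, the following is a minimal sketch of the core idea, not the dissertation's implementation: classes are represented as high-dimensional bipolar hypervectors, training is a single-pass bundling (elementwise majority) of samples, and classification is a nearest-neighbor similarity search. All names and parameters here (dimensionality, noise levels) are illustrative assumptions.

```python
import random

D = 10000  # hypervector dimensionality (illustrative); high D gives robustness

def rand_hv():
    """Random bipolar hypervector; two such vectors are nearly orthogonal at high D."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bundle(hvs):
    """Bundling (elementwise majority vote): one-pass training of a class prototype."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    """Normalized dot product in [-1, 1]; the associative-memory search step."""
    return sum(x * y for x, y in zip(a, b)) / D

def noisy(hv, flips):
    """Flip `flips` random components, modeling noise/error in data or hardware."""
    out = hv[:]
    for i in random.sample(range(D), flips):
        out[i] = -out[i]
    return out

random.seed(0)
base_a, base_b = rand_hv(), rand_hv()  # underlying patterns of two classes

# "Fast learning": prototypes built from just five noisy samples each, one pass
proto_a = bundle([noisy(base_a, 500) for _ in range(5)])
proto_b = bundle([noisy(base_b, 500) for _ in range(5)])

# Classify a heavily corrupted query (30% of components flipped):
# the correct prototype still wins by a wide statistical margin
query = noisy(base_a, 3000)
print(similarity(query, proto_a) > similarity(query, proto_b))
```

Because each bit of a hypervector carries an equal, tiny share of the information, flipping even thousands of components barely moves the similarity ranking, which is the noise robustness the abstract refers to; the elementwise and dot-product operations also map naturally onto in-memory parallel hardware.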