High-Performance Training Algorithms and Architectural Optimization of Spiking Neural Networks
- Zhang, Wenrui
- Advisor(s): Li, Peng
Abstract
The spiking neural network (SNN) is an emerging brain-inspired computing paradigm built on more biologically realistic spiking neuron models. As the third generation of artificial neural networks (ANNs), SNNs are theoretically shown to possess greater computational power than conventional non-spiking ANNs and are well suited for spatio-temporal information processing and for implementation on ultra-low-power, event-driven neuromorphic hardware. This dissertation aims to usher SNNs into mainstream practice by addressing two key roadblocks: the lack of high-performance training algorithms and the lack of systematic exploration of computationally powerful recurrent SNNs.
First, existing SNN training algorithms suffer from major limitations in learning performance and efficiency. To address these challenges, we proposed a comprehensive set of solutions based on synaptic plasticity (SP) and intrinsic plasticity (IP) for building energy-efficient, high-performance SNNs. On the SP side, we developed two innovative backpropagation (BP) methods to boost the performance of SNNs. We proposed the Spike-Train level RSNNs Backpropagation (ST-RSBP) algorithm for training deep recurrent SNNs (RSNNs), which addresses the training difficulty introduced by the non-differentiability of the spiking activation function and improves training efficiency by operating at the spike-train level. To enable learning of temporal sequences with precise timing, we proposed a BP method called Temporal Spike Sequence Learning Backpropagation (TSSL-BP), which breaks error backpropagation down across inter-neuron and intra-neuron dependencies and precisely captures temporal dependencies with ultra-low latency. On the IP side, we proposed a method called SpiKL-IP, based on a rigorous information-theoretic approach, for maintaining homeostasis and shaping the dynamics of neural circuits.
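To make the non-differentiability issue concrete, the sketch below shows a discrete-time leaky integrate-and-fire (LIF) layer whose threshold function is replaced, in the backward pass only, by a generic rectangular surrogate gradient. This is a minimal illustration of the problem that SNN BP methods must solve; it is not the dissertation's ST-RSBP or TSSL-BP derivation, and the names (`SurrogateSpike`, `LIFLayer`) and constants (decay 0.9, threshold 1.0, surrogate width 0.5) are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a smooth surrogate gradient.

    The forward pass is the usual non-differentiable threshold; the
    backward pass substitutes a rectangular pseudo-derivative so that
    errors can flow through spike events at all. This is a generic
    workaround, not the dissertation's spike-train-level formulation.
    """
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold.
        surrogate = ((v - ctx.threshold).abs() < 0.5).float()
        return grad_output * surrogate, None


class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer unrolled over discrete time steps."""
    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay = decay
        self.threshold = threshold

    def forward(self, x):                      # x: (time, batch, in_features)
        T, B, _ = x.shape
        v = torch.zeros(B, self.fc.out_features, device=x.device)
        spikes = []
        for t in range(T):
            # Leaky integration of synaptic input, reset by subtraction.
            v = self.decay * v + self.fc(x[t])
            s = SurrogateSpike.apply(v, self.threshold)
            v = v - s * self.threshold
            spikes.append(s)
        return torch.stack(spikes)             # (time, batch, out_features)
```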
While recurrence is prevalent in the brain, designing practical recurrent spiking neural networks (RSNNs) is challenging due to the intricacy introduced by recurrent connections in both time and space. In current practice, RSNNs are often randomly generated without optimization, which fails to fully exploit their computational potential. We explored and proposed a family of RSNN architectures aimed at building scalable, high-performance large-scale RSNNs. We first demonstrated a new type of RSNN called the Skip-Connected Self-Recurrent SNN (ScSr-SNN), which contains self-recurrent connections in each recurrent layer and skip connections across non-adjacent layers, and which achieves improved performance over existing randomly generated RSNNs. Inspired by the potential of self-recurrent connectivity, we proposed another novel structure called the Laterally-Inhibited Self-Recurrent Unit (LISR), which consists of one excitatory neuron with a self-recurrent connection wired together with an inhibitory neuron through excitatory and inhibitory synapses. SNNs leveraging the LISR as a basic building block significantly improve performance over feedforward SNNs trained by the BP method with similar computational costs. Finally, we developed a systematic optimization-based neural architecture search framework to synthesize high-performance globally-feedforward, locally-recurrent multi-layer RSNNs.
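As an illustration of the self-recurrent connectivity described above, the following sketch (reusing the `SurrogateSpike` helper from the previous block) restricts the recurrent weights of a layer to per-neuron self-connections and accepts an optional skip input from a non-adjacent earlier layer. The class name `SelfRecurrentLIFLayer`, the diagonal parameter `self_rec`, and the assumption that the skip input is already projected to the layer width are all illustrative choices, not the dissertation's ScSr-SNN or LISR implementation.

```python
import torch
import torch.nn as nn

class SelfRecurrentLIFLayer(nn.Module):
    """LIF layer whose only recurrent weights are self-connections.

    Illustrates the idea of replacing a dense recurrent weight matrix
    with a per-neuron (diagonal) self-recurrent weight, keeping the
    layer recurrent in time while remaining cheap to train.
    """
    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        # One self-recurrent weight per neuron (diagonal recurrence).
        self.self_rec = nn.Parameter(0.5 * torch.ones(out_features))
        self.decay = decay
        self.threshold = threshold

    def forward(self, x, skip=None):
        # x: (time, batch, in_features)
        # skip: optional (time, batch, out_features), assumed to be the
        # output of a non-adjacent earlier layer already at this width.
        T, B, _ = x.shape
        v = torch.zeros(B, self.fc.out_features, device=x.device)
        s_prev = torch.zeros_like(v)
        spikes = []
        for t in range(T):
            i_t = self.fc(x[t])
            if skip is not None:
                i_t = i_t + skip[t]            # skip connection input
            # Feedforward input plus each neuron's own previous spike.
            v = self.decay * v + i_t + self.self_rec * s_prev
            s_prev = SurrogateSpike.apply(v, self.threshold)
            v = v - s_prev * self.threshold
            spikes.append(s_prev)
        return torch.stack(spikes)             # (time, batch, out_features)
```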
The proposed work achieves state-of-the-art performance on various image and speech datasets, such as MNIST, FashionMNIST, CIFAR10, and TI46, as well as common neuromorphic datasets, including NMNIST, NTIDIGITS, and DVS-Gesture.