In recent years, the field of low-power neuromorphic systems, which consume orders of magnitude less power than conventional computing architectures, has gained significant momentum. However, their
wider use is still hindered by the lack of algorithms that can harness the
strengths of such architectures. While neuromorphic adaptations of
representation learning algorithms are now emerging, the efficient processing of
temporal sequences or variable-length inputs remains difficult. Recurrent neural
networks (RNNs) are widely used in machine learning to solve a variety of
sequence learning tasks. In this work we present a train-and-constrain
methodology that enables the mapping of machine-learned (Elman) RNNs onto a
substrate of spiking neurons, while remaining compatible with the capabilities of
current and near-future neuromorphic systems. This "train-and-constrain" method
consists of first training RNNs using backpropagation through time, then
discretizing the weights, and finally converting the trained networks to spiking
RNNs by matching the responses of the artificial neurons with those of the
spiking neurons.
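
As a rough illustration of the discretization step, a minimal NumPy sketch (our own names and quantization scheme, not necessarily the authors') that maps a trained weight matrix onto a small set of uniformly spaced values might look like this:

    import numpy as np

    def discretize_weights(w, levels=16):
        """Quantize a trained weight matrix to `levels` uniformly spaced
        values (here 16, matching the TrueNorth constraint described
        below). Hypothetical sketch; the paper's exact scheme may differ."""
        step = np.abs(w).max() / (levels // 2)           # spacing between levels
        q = np.clip(np.round(w / step), -(levels // 2), levels // 2 - 1)
        return q * step                                   # back to real-valued weights

    # Example: a random recurrent weight matrix ends up with at most 16 distinct values
    w_rec = 0.1 * np.random.randn(64, 64)
    assert np.unique(discretize_weights(w_rec)).size <= 16
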
We demonstrate our approach on a natural language processing task (question
classification), walking through the entire process of mapping the recurrent
layer of the network onto IBM's Neurosynaptic System "TrueNorth", a
spike-based digital neuromorphic hardware architecture. TrueNorth imposes
specific constraints on connectivity and on neural and synaptic parameters. To
satisfy these constraints, it was necessary to discretize the synaptic weights
and neural activities to 16 levels and to limit the fan-in to 64 inputs per neuron.
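
For concreteness, a hypothetical helper (all names are ours) for verifying the fan-in constraint on a weight matrix, with rows as postsynaptic neurons, could read:

    import numpy as np

    def fan_in_ok(w, max_fan_in=64):
        """Return True if every postsynaptic neuron (row of `w`) receives
        at most `max_fan_in` nonzero synapses, as required here for the
        TrueNorth mapping. Illustrative check only."""
        return bool(((w != 0).sum(axis=1) <= max_fan_in).all())
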
We find that short synaptic delays are sufficient to implement the dynamical
(temporal) aspect of the RNN in the question classification task.
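
This works because the Elman update h_t = f(W_x x_t + W_h h_{t-1} + b) only ever looks one step back, so delaying the recurrent spikes by a single step reproduces the dynamics; a minimal discrete-time sketch in our own notation:

    import numpy as np

    def run_elman(xs, w_x, w_h, b, f=np.tanh):
        """Unroll an Elman RNN over an input sequence. The recurrent term
        uses only the previous hidden state, which is why a short (one-step)
        synaptic delay on the recurrent connections suffices to capture the
        temporal dynamics in a spiking implementation."""
        h = np.zeros(w_h.shape[0])
        states = []
        for x in xs:
            h = f(w_x @ x + w_h @ h + b)   # `h` on the right is h_{t-1}
            states.append(h)
        return np.stack(states)
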
The hardware-constrained model achieved 74% accuracy on the question
classification task while using less than 0.025% of the cores on one TrueNorth
chip, resulting in an estimated power consumption of ~17 µW.