In this thesis, we assess the role of short-term synaptic plasticity in an artificial neural
network constructed to emulate two important brain functions: self-sustained activity and
signal propagation. We employ a widely used model of short-term synaptic plasticity (STP) in a symbiotic network, in which two subnetworks with differently tuned STP behaviors are weakly coupled. This enables both self-sustained global network activity, generated by one of the subnetworks, and faithful signal propagation within subcircuits of the other subnetwork. Finding the parameters for a properly tuned STP network is difficult. We provide a theoretical argument for a method that boosts the probability of finding these elusive STP parameters by two orders of magnitude, as demonstrated in tests.
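For concreteness, a common phenomenological formulation of STP, and a plausible candidate for the "widely used" model referenced above, is the Tsodyks–Markram model; the notation below is an illustrative choice, not necessarily that of the thesis. Each synapse carries a facilitation variable $u$ and a resource (depression) variable $x$,

$$\frac{du}{dt} = -\frac{u}{\tau_f} + U\,(1-u)\,\delta(t - t_{\mathrm{sp}}), \qquad \frac{dx}{dt} = \frac{1-x}{\tau_d} - u\,x\,\delta(t - t_{\mathrm{sp}}),$$

where $t_{\mathrm{sp}}$ are presynaptic spike times, $\tau_f$ and $\tau_d$ set the facilitation and depression time scales, $U$ is the baseline utilization, and the efficacy of each spike is proportional to the product $u\,x$. The balance of $\tau_f$ against $\tau_d$ shifts a synapse between facilitating and depressing regimes, which suggests how two subnetworks could be given differently tuned STP behaviors.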
We then combine STP with a novel critic-like synaptic learning algorithm, which we call ARG-STDP, for attenuated-reward-gating of STDP. STDP refers to spike-timing-dependent plasticity, a commonly used model of long-term synaptic plasticity. With ARG-STDP, we are able to learn multiple distal rewards simultaneously, improving on the previous reward-modulated STDP (R-STDP), which could learn only a single distal reward. However, we also provide a theoretical upper bound on the number of distal rewards that can be learned using ARG-STDP.
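For context, the reward-modulated scheme that ARG-STDP builds on is usually formulated with a per-synapse eligibility trace: raw STDP updates do not change the weight directly but are low-pass filtered into a trace $c$, and the weight moves only when a (possibly delayed) reward signal $r(t)$ arrives. In one standard formulation (the symbols below are illustrative assumptions, not the thesis's notation),

$$\frac{dc}{dt} = -\frac{c}{\tau_c} + \mathrm{STDP}(\Delta t)\,\delta(t - t_{\mathrm{pre/post}}), \qquad \frac{dw}{dt} = c(t)\,r(t),$$

where $\tau_c$ sets how long a spike pairing remains eligible for reinforcement. The trace is what allows R-STDP to bridge the delay to a single distal reward; as its name indicates, ARG-STDP modifies how the reward gates these traces, attenuating that gating so that several distal rewards can be learned at once, up to the bound mentioned above.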
We also consider the problem of simulating large spiking neural networks, and we describe an architecture for simulating such networks efficiently. The architecture is suitable for implementation on a cluster of general-purpose graphics processing units (GPGPUs). Novel aspects of the architecture are described, and its performance is benchmarked on a GPGPU cluster. With the advent of inexpensive GPGPU cards and their growing compute power, the described architecture offers an affordable and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
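To make the parallelization concrete, below is a minimal, generic sketch of the per-time-step neuron update that clock-driven GPGPU simulators typically parallelize, one thread per neuron. It assumes leaky integrate-and-fire dynamics, and all names and parameters are illustrative; none are taken from the architecture described in the thesis.

    // Generic sketch: one Euler step of LIF dynamics, one thread per neuron.
    // v: membrane potentials, i_syn: summed synaptic input, spiked: output flags.
    __global__ void lif_step(float *v, const float *i_syn, unsigned char *spiked,
                             int n, float dt, float tau_m, float v_thresh,
                             float v_reset) {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx >= n) return;
        // Membrane equation: dv/dt = (-v + i_syn) / tau_m
        float v_new = v[idx] + dt * (-v[idx] + i_syn[idx]) / tau_m;
        if (v_new >= v_thresh) {   // threshold crossing: emit spike
            spiked[idx] = 1;
            v_new = v_reset;       // reset membrane potential
        } else {
            spiked[idx] = 0;
        }
        v[idx] = v_new;
    }

In a cluster deployment, each node would advance its local partition of neurons with a kernel of this kind and exchange the resulting spike flags with peer nodes between time steps; that inter-node spike exchange is typically the scalability bottleneck such architectures must be designed around.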