Learning in the machine: Recirculation is random backpropagation
Published Web Location
https://doi.org/10.1016/j.neunet.2018.09.006

Abstract
Learning in physical neural systems must rely on learning rules that are local in both space and time. Optimal learning in deep neural architectures requires that non-local information be available to the deep synapses. Thus, in general, optimal learning in physical neural systems requires the presence of a deep learning channel to communicate non-local information to deep synapses, in a direction opposite to the forward propagation of the activities. Theoretical arguments suggest that for circular autoencoders, an important class of neural architectures where the output layer is identical to the input layer, alternative algorithms may exist that enable local learning without the need for additional learning channels, by using the forward activation channel as the deep learning channel. Here we systematically identify, classify, and study several such local learning algorithms, based on the general idea of recirculating information from the output layer to the hidden layers. We show through simulations and mathematical derivations that these algorithms are robust and converge to critical points of the global error function. In most cases, we show that these recirculation algorithms are very similar to an adaptive form of random backpropagation, where each hidden layer receives a linearly transformed, slowly varying version of the output error.
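For concreteness, the sketch below illustrates the classic two-layer recirculation rule (Hinton and McClelland, 1988) that this family of algorithms generalizes: the network runs a clamped pass and a recirculated pass, and each weight is updated using only its pre- and post-synaptic activities. The layer sizes, logistic activation, learning rate, and regression coefficient lambda are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of two-layer recirculation in a circular autoencoder.
# Illustrative only: hyperparameters and shapes are assumptions, not the
# paper's experimental configuration.
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid = 8, 4                        # assumed layer sizes
W = rng.normal(0, 0.1, (n_hid, n_vis))     # visible -> hidden weights
V = rng.normal(0, 0.1, (n_vis, n_hid))     # hidden -> visible weights
lr, lam = 0.1, 0.75                        # learning rate, regression coefficient

def f(a):
    # Logistic activation (an assumption; any smooth squashing function works).
    return 1.0 / (1.0 + np.exp(-a))

for step in range(5000):
    x0 = rng.random(n_vis)                 # visible layer clamped to an input
    h1 = f(W @ x0)                         # first hidden pass
    x2 = lam * x0 + (1 - lam) * f(V @ h1)  # partially recirculated visible state
    h3 = f(W @ x2)                         # second hidden pass

    # Both updates are local in space and time: each synapse sees only the
    # activities of the two units it connects, at two successive time steps.
    V += lr * np.outer(x0 - x2, h1)        # reconstruction error at the visible layer
    W += lr * np.outer(h1 - h3, x2)        # temporal difference at the hidden layer
```

Note that no separate backward channel appears in the sketch: the output error reaches the hidden weights only through the recirculated activities, which is the sense in which the forward activation channel doubles as the deep learning channel.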