
Learning input-output mappings with sparse random neural networks

Abstract

In many parts of the brain, neurons appear to have sparse random connectivity, as suggested in part by the apparently random branching of axons and dendrites. Here I advance the thesis that randomly connected networks of neurons equipped with biological synaptic-enhancement mechanisms such as long-term potentiation (LTP) can, in the limit of large numbers of cells and synapses per cell, learn by example to perform either pattern classification or nonparametric regression, depending on circuitry assumptions. That is, the neural circuits learn input-output mappings. One class of networks studied approximates Bayes classifiers via Parzen's method. Other networks explored approximate the Nadaraya-Watson and k-nearest-neighbor estimators used for nonparametric regression. In addition, I have designed, fabricated, and tested CMOS chips implementing winner-take-all and classification algorithms inspired by these biological networks.
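For context, the estimators named above have standard textbook forms; the notation below is mine, not the thesis's. Given training pairs (x_i, y_i), a kernel K, bandwidth h, input dimension d, and N_c examples in class c, Parzen's method plugs a kernel density estimate into the Bayes rule, while the Nadaraya-Watson and k-nearest-neighbor estimators smooth the training outputs directly:

\[
\hat{p}(x \mid c) = \frac{1}{N_c h^d} \sum_{i:\, y_i = c} K\!\left(\frac{x - x_i}{h}\right),
\qquad
\hat{y}(x) = \arg\max_{c}\, \hat{P}(c)\, \hat{p}(x \mid c),
\]
\[
\hat{m}_{\mathrm{NW}}(x) = \frac{\sum_{i=1}^{N} K\!\left(\frac{x - x_i}{h}\right) y_i}{\sum_{i=1}^{N} K\!\left(\frac{x - x_i}{h}\right)},
\qquad
\hat{m}_{k\text{-NN}}(x) = \frac{1}{k} \sum_{i \in \mathcal{N}_k(x)} y_i,
\]

where \(\hat{P}(c)\) is the empirical class prior and \(\mathcal{N}_k(x)\) indexes the k training points nearest to x.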
