Noise, Quantization, and Priors in Neural Networks
Abstract
A fundamental task for both biological perceptual systems and human-engineered agents is to infer underlying causes from incoming measurements. Approximate inference in a probabilistic graphical model is one way to derive algorithms that solve this problem. Approximate inference not only reduces the exponential computation required by a naive application of Bayes' theorem; the approximations can also be tailored to varied hardware constraints.
The first chapter explores the computations the brain must perform to decode signals from a moving retina and achieve high-acuity vision. The problem is formulated as a probabilistic generative model, and the decoding computations follow from approximate Bayesian inference in that model. The parameters of the inference computation are derived from the parameters of the generative model rather than learned.
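The idea of deriving (rather than learning) inference parameters from a generative model can be seen in a toy linear-Gaussian example. This is purely illustrative and assumed for exposition, not the retinal model from the chapter: a latent cause produces a noisy measurement, and the exact posterior-mean decoder's gain falls out of the generative parameters in closed form.

```python
import math
import random

# Toy linear-Gaussian generative model (illustrative only, not the
# thesis model): a latent cause x ~ N(0, sx2) produces a noisy
# measurement y = x + n, with n ~ N(0, sn2).
sx2, sn2 = 4.0, 1.0  # assumed prior and noise variances

# For this model the exact posterior mean is a linear decoder whose
# gain is *derived* from the generative parameters, not learned:
#   E[x | y] = sx2 / (sx2 + sn2) * y
gain = sx2 / (sx2 + sn2)

def decode(y):
    """Posterior-mean estimate of the latent cause given measurement y."""
    return gain * y

random.seed(0)
x = random.gauss(0.0, math.sqrt(sx2))      # sample a latent cause
y = x + random.gauss(0.0, math.sqrt(sn2))  # noisy measurement
print(decode(y))  # shrinks y toward the prior mean of 0
```

The decoder uses no training data at all; its single parameter is a deterministic function of the generative model, which mirrors the chapter's approach at a much smaller scale.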
The second chapter explores the prediction of noisy temporal sequences. Recurrent neural networks, which excel at learning nonlinear temporal dependencies, are combined with Kalman filters, which handle noise and uncertainty in a principled way. While the resulting computations could in principle be approximated by a neural network alone, our hybrid model predicts eye movements more accurately than a simple neural network.
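The Kalman half of that combination can be sketched in its simplest form. This is a minimal scalar filter tracking a constant latent value from noisy measurements, an illustrative sketch and not the hybrid RNN/Kalman model from the chapter; the variable names and noise settings are assumptions.

```python
import random

def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.0, r=0.5):
    """Filter noisy measurements of a (nearly) constant latent value.

    x0, p0: prior mean and variance; q: process noise; r: measurement noise.
    Returns the final (estimate, variance).
    """
    x, p = x0, p0
    for z in measurements:
        p += q                 # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain: trust measurement vs. prior
        x += k * (z - x)       # update: move estimate toward the measurement
        p *= (1.0 - k)         # uncertainty shrinks after each update
    return x, p

random.seed(0)
true_value = 1.0
zs = [true_value + random.gauss(0.0, 0.5 ** 0.5) for _ in range(50)]
est, var = kalman_1d(zs)
print(est, var)  # estimate near 1.0; variance far below the prior's 1.0
```

The explicit variance `p` is exactly what a plain recurrent network lacks: the filter knows how uncertain it is, which is why pairing it with an RNN helps on noisy sequences.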
The third chapter explores neural networks with binary weights and activations. Such networks require a modified learning algorithm to cope with the binary constraint, but they consume less power and run faster. This work develops a theory, based on high-dimensional geometry, that explains why binary neural networks perform nearly as well as their 32-bit floating-point counterparts.
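The geometric intuition can be checked numerically. The sketch below (illustrative, not the chapter's full theory) shows that binarizing a random high-dimensional weight vector with the sign function barely changes its direction: for Gaussian weights, the cosine between w and sign(w) concentrates around sqrt(2/pi), roughly 0.80, so dot products with activations are approximately preserved up to a scale factor.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(0)
d = 10000  # high dimension is what makes the angle concentrate
w = [random.gauss(0.0, 1.0) for _ in range(d)]
w_bin = [1.0 if wi >= 0 else -1.0 for wi in w]  # sign() binarization

print(cosine(w, w_bin))  # close to sqrt(2/pi), about 0.798
```

In low dimensions the cosine fluctuates widely from sample to sample; it is only in high dimensions that binarization reliably preserves direction, which is the core of the geometric explanation.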