A Study of Different Aspects of Neural Networks: Neural Representations, Connectivity and Computation

No data is associated with this publication.
Abstract

This dissertation is divided into three parts. In the first part (Chapter 2), we study the shapes, or manifolds, on which neural activity lies over time. In a neural state space, where each axis represents a neuron, neural activity over time forms a point cloud. This point cloud often occupies a small region of the space of all possible activity patterns, thus revealing structure in the data. We consider point clouds generated by common population codes known as “tuning curve models”. In these models, the firing rate of each neuron is a function of a latent variable, which may be a stimulus variable or a variable related to an internal state, and of a tuning curve parameter that labels each neuron. We address the question: how close are the point clouds formed by such models to a linear subspace? To answer this, we define the linear dimension of the data as the number of dimensions needed to capture a very high fraction of the variance, for example 95%. We show that the linear dimension grows exponentially with the number of latent variables encoded by the population. The manifolds formed by the neural activity in these models are therefore extremely non-linear, and the linear dimension is not a good measure of the intrinsic dimension of the manifold on which the point cloud lies.

In the second part (Chapter 3), we model connections between distant brain regions as sparse random connections. We begin by observing that such a network has a special property known as the expander property. Using this property, we show that information can be transmitted efficiently from a source region to a target region even when the target region has fewer neurons than the source region. We also ask whether the compressed patterns in the target region can be re-coded, or expanded, to support further computation. We show that the compressed patterns can be re-expanded by Locally Competitive Algorithms (LCAs), and that the re-expanded patterns can be separated by a downstream neuron into arbitrarily defined classes. We next consider whether long-range reciprocal connections between two regions can be used to maintain persistent activity in both regions; such activity is thought to be a substrate of working memory, the ability to hold things in mind. We show that the network can indeed maintain sparse patterns of activity through simple network dynamics. We conclude that sparse random connections can transmit information effectively and improve the performance of certain computations compared with dense random connections.

In the last part (Chapter 4), we build a computational rate model of the pre-cortical neural circuit responsible for the localization of sound in the vertical plane. The interaction of incoming sound waves with the outer ear filters out energy from specific frequency bands in the spectrum of the incoming sound; frequency bands with zero or reduced power are known as notches. The position of the notches is a function of the elevation angle of the sound source. A dedicated set of neurons in the auditory pathway is sensitive to the position of these notches and is therefore thought to be responsible for the localization of sound in the vertical plane. These neurons show different levels of excitation or inhibition, above or below their spontaneous rates, for different combinations of sound frequency and intensity.
We use this model to probe how this complex set of responses arises from the interactions between the various populations of neurons in the auditory pathway.
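As a concrete illustration of the linear-dimension measure described for Chapter 2, the following minimal sketch (written here in Python; it is not code from the dissertation) generates a point cloud from a one-dimensional Gaussian tuning-curve model and counts the principal components needed to capture 95% of the variance. The population size, tuning width, and variance threshold are illustrative assumptions.

# Minimal sketch (illustrative, not the dissertation's code): linear dimension
# of a point cloud generated by a 1-D Gaussian tuning-curve model. Population
# size, tuning width, and the 95% threshold are assumed values.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples, sigma = 200, 2000, 0.05

centers = np.linspace(0.0, 1.0, n_neurons)   # tuning-curve parameter per neuron
latents = rng.uniform(0.0, 1.0, n_samples)   # samples of the latent variable

# Firing rate of each neuron is a Gaussian function of the latent variable.
rates = np.exp(-(latents[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# Linear dimension: number of principal components capturing 95% of variance.
sv = np.linalg.svd(rates - rates.mean(axis=0), compute_uv=False)
var = sv ** 2
cumfrac = np.cumsum(var) / var.sum()
print("linear dimension (95% variance):", int(np.searchsorted(cumfrac, 0.95) + 1))

Even though the latent variable here is one-dimensional, many components are typically needed to reach the 95% threshold, which is the sense in which the point cloud is far from a low-dimensional linear subspace.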
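For the compression and re-expansion idea of Chapter 3, the following toy sketch (again Python, not the dissertation's implementation) compresses a sparse source pattern through sparse random connections into a smaller target region and re-expands it with a simple soft-threshold Locally Competitive Algorithm. The region sizes, connection probability, threshold, and integration step are assumed values.

# Minimal sketch (illustrative, not the dissertation's code): compress a sparse
# activity pattern through sparse random connections and re-expand it with a
# soft-threshold Locally Competitive Algorithm (LCA). All parameters assumed.
import numpy as np

rng = np.random.default_rng(1)
n_source, n_target, k_active = 400, 100, 10

# Sparse random connectivity from source to target, with normalized columns.
W = (rng.random((n_target, n_source)) < 0.05).astype(float)
W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12

# Sparse source pattern and its compressed representation in the target region.
x = np.zeros(n_source)
x[rng.choice(n_source, k_active, replace=False)] = 1.0
y = W @ x

# LCA dynamics: units are driven toward W^T y while active units inhibit each other.
lam, dt, n_steps = 0.1, 0.05, 400
gram = W.T @ W - np.eye(n_source)
u = np.zeros(n_source)
for _ in range(n_steps):
    a = np.maximum(u - lam, 0.0)          # thresholded, re-expanded activity
    u += dt * (W.T @ y - u - gram @ a)
a = np.maximum(u - lam, 0.0)

print("true active units:     ", np.flatnonzero(x))
print("recovered active units:", np.flatnonzero(a > 0.5 * a.max()))

In this toy setting the target region has a quarter as many neurons as the source region, yet the sparse pattern can usually be recovered, which is the flavor of the expander-based argument summarized in the abstract.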

Main Content

This item is under embargo until August 1, 2024.