UC Berkeley Electronic Theses and Dissertations

Modeling Visual Cortical Development

Abstract

Representation is a critical component of visual neuroscience. While there is an extensive body of literature on the nature of visual representations, we lack a set of guiding principles for understanding how representations are learned during development. Our analysis focuses on this question at a computational level. The first set of results addresses how representations are learned under the assumption of a sparse prior on the data. It is well known that sparse coding models trained on natural images learn basis functions whose shapes resemble the receptive fields (RFs) of simple cells in the primary visual cortex (V1). However, it is unclear whether certain types of basis functions emerge more quickly than others or whether they all develop simultaneously. We train an overcomplete sparse coding model (Sparsenet) on natural images and find a spectral bias in the order in which its basis functions develop: basis functions tuned to lower spatial frequencies emerge earlier than those tuned to higher spatial frequencies. We observe the same trend in a biologically plausible sparse coding model (SAILnet) that uses leaky integrate-and-fire neurons and synaptically local learning rules, suggesting that this result is a general feature of sparse coding. These results are consistent with recent experimental evidence that, during normal development in mouse V1, the distribution of optimal stimuli for driving neurons to fire shifts toward higher spatial frequencies. We find that the input data statistics fully account for the spectral bias in sparse coding, and we propose that visual experience is sufficient to drive the spectral bias in receptive field development. Our analysis of sparse coding models during training yields experimentally testable predictions for V1 development.
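To make the spectral-bias analysis concrete, the following is a minimal sketch of a Sparsenet-style training loop with a probe that tracks the peak spatial frequency of each basis function over training. The ISTA inference step, the hyperparameters, and the random stand-in for whitened image patches are illustrative assumptions rather than the dissertation's exact setup; with whitened natural image patches substituted for the random stand-in, the median peak frequency printed by the probe is the quantity expected to drift upward during training.

```python
# Minimal sketch of a Sparsenet-style sparse coding loop with a
# spectral-bias probe. The ISTA inference step, hyperparameters, and
# random stand-in data are illustrative assumptions, not the exact
# setup used in the dissertation.
import numpy as np

def infer_codes(X, Phi, lam=0.1, n_steps=50):
    """Sparse inference via ISTA: gradient descent on the reconstruction
    error followed by soft thresholding (the L1 proximal step)."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the smooth term
    A = np.zeros((Phi.shape[1], X.shape[1]))
    for _ in range(n_steps):
        A -= Phi.T @ (Phi @ A - X) / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)
    return A

def peak_spatial_frequency(phi, patch_size):
    """Radial spatial frequency (cycles/patch) at the power-spectrum peak."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(phi.reshape(patch_size, patch_size)))) ** 2
    c = patch_size // 2
    F[c, c] = 0.0                              # ignore the DC component
    ky, kx = np.unravel_index(np.argmax(F), F.shape)
    return np.hypot(ky - c, kx - c)

patch_size, n_basis, batch = 16, 512, 100      # 2x-overcomplete dictionary
rng = np.random.default_rng(0)
Phi = rng.standard_normal((patch_size**2, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)

for step in range(2001):
    # Stand-in for whitened natural image patches (columns are patches).
    X = rng.standard_normal((patch_size**2, batch))
    A = infer_codes(X, Phi)
    Phi += 0.01 * (X - Phi @ A) @ A.T / batch  # Hebbian-like dictionary update
    Phi /= np.linalg.norm(Phi, axis=0)         # keep basis functions unit norm
    if step % 500 == 0:
        freqs = [peak_spatial_frequency(Phi[:, i], patch_size)
                 for i in range(n_basis)]
        print(f"step {step}: median peak frequency = {np.median(freqs):.2f} cycles/patch")
```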

In the next set of results, we investigate the potential for innately generated neural activity to drive the development of efficient representations in the visual cortex. Prior to the onset of vision, neurons in the developing mammalian retina spontaneously fire in correlated activity patterns known as retinal waves. Experimental evidence suggests that retinal waves strongly influence the emergence of sensory representations before visual experience. We model this early stage of functional development by using movies of neurally active developing retinas as pre-training data for neural networks. Specifically, we use unsupervised learning to train models on movies of retinal waves and then evaluate their performance on image classification tasks. We find that pre-training on retinal waves significantly improves performance on tasks that test object invariance to spatial translation, while slightly improving performance on more complex tasks such as image classification. Notably, these performance gains are realized on held-out natural images even though the pre-training procedure does not include any natural image data. We then propose a geometric explanation for the increase in network performance: the spatiotemporal characteristics of retinal waves facilitate the formation of separable feature representations. In particular, we demonstrate that networks pre-trained on retinal waves are more effective at separating image manifolds than randomly initialized networks, especially for manifolds defined by sets of spatial translations. These findings indicate that the broad spatiotemporal properties of retinal waves prepare networks for higher-order feature extraction.
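The logic of the pre-training experiment can be illustrated with a toy sketch: fit an encoder unsupervised on wave-like movies, freeze it, and compare a simple linear readout on a translation-invariance task against a randomly initialized encoder. The drifting-blob wave generator, the tied-weight linear autoencoder, and the nearest-centroid probe below are all simplifying assumptions standing in for the actual retinal-wave recordings, network architecture, and manifold-separability analysis; in this toy setting the comparison illustrates the structure of the experiment, not its quantitative result.

```python
# Toy sketch of the retinal-wave pre-training experiment: an encoder is
# fit unsupervised on synthetic wave-like movies, then its frozen
# features are compared against a random encoder on a translation task.
# The wave generator, tied-weight linear autoencoder, and nearest-
# centroid probe are simplifying assumptions, not the actual pipeline.
import numpy as np

rng = np.random.default_rng(1)
size, n_hidden = 16, 64

def wave_frames(n_frames):
    """Crude stand-in for retinal waves: a Gaussian blob drifting with
    constant velocity and wrapping around the frame."""
    yy, xx = np.mgrid[:size, :size]
    cy, cx = size / 2, size / 2
    vy, vx = rng.uniform(-1, 1, 2)
    frames = []
    for _ in range(n_frames):
        cy, cx = (cy + vy) % size, (cx + vx) % size
        frames.append(np.exp(-((yy - cy)**2 + (xx - cx)**2) / 8.0).ravel())
    return np.array(frames)

def pretrain_encoder(X, lr=1e-3, epochs=200):
    """Unsupervised pre-training: tied-weight linear autoencoder fit by
    gradient descent on the reconstruction error."""
    W = 0.1 * rng.standard_normal((n_hidden, size * size))
    for _ in range(epochs):
        E = X @ W.T @ W - X                    # reconstruction error
        W -= lr * W @ (E.T @ X + X.T @ E) / len(X)
    return W

def translated_copies(img, n):
    """A translation manifold: random circular shifts of one base image."""
    return np.array([np.roll(img, (rng.integers(size), rng.integers(size)),
                             axis=(0, 1)).ravel() for _ in range(n)])

def probe_accuracy(W, Xa, Xb):
    """Nearest-centroid readout on frozen features: a crude measure of
    how separable the two translation manifolds are."""
    Ha, Hb = Xa @ W.T, Xb @ W.T
    ca, cb = Ha[:100].mean(0), Hb[:100].mean(0)   # fit on the first half
    test = np.vstack([Ha[100:], Hb[100:]])
    labels = np.array([0] * 100 + [1] * 100)
    pred = (np.linalg.norm(test - cb, axis=1)
            < np.linalg.norm(test - ca, axis=1)).astype(int)
    return (pred == labels).mean()

Xa = translated_copies(rng.standard_normal((size, size)), 200)
Xb = translated_copies(rng.standard_normal((size, size)), 200)
W_wave = pretrain_encoder(wave_frames(500))
W_rand = 0.1 * rng.standard_normal((n_hidden, size * size))
print("wave-pretrained readout accuracy:", probe_accuracy(W_wave, Xa, Xb))
print("random-encoder readout accuracy:", probe_accuracy(W_rand, Xa, Xb))
```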
