Modeling the biological visual system: from static and computational to active and data-driven

Abstract

A more complete understanding of the biological visual system can inspire the design of computer vision algorithms, and building accurate models is an important step toward such an understanding. We use computational and deep learning approaches to close gaps in the literature on modeling the retina and the primary visual cortex (V1), two important components of the early visual processing pathway.

Firstly, to address the lack of a comprehensive computational model of retinal degeneration, we present a biophysically detailed model of the cone pathway in the retina that simulates responses to both light and electrical stimulation. Anatomical and neurophysiological changes due to retinal degenerative diseases were introduced systematically. The model not only reproduced common findings about retinal ganglion cell (RGC) activity in the degenerated retina but also offered testable predictions about the underlying neuroanatomical mechanisms. These insights may further our understanding of retinal processing and inform the design of retinal prostheses.

Secondly, to argue for greater emphasis on freely moving experimental designs, we analyze the retinal input mice receive during free exploration. Mice employed compensatory and gaze-shifting eye-head movements to sample the visual environment during natural locomotion, and their gaze was preferentially directed toward features such as edges and textures. A deep learning model trained to predict gaze shifts indicated that the upper peripheral visual field contributed most to the prediction, consistent with behaviors such as predator detection. These results may have implications for visual processing beyond head-fixed preparations.

Lastly, to fill the gap in predictive modeling tailored to neural data from freely moving experimental paradigms, we introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. The model achieves state-of-the-art predictions of V1 activity during free exploration. Analyzing the model with maximally activating stimuli and saliency maps reveals new insights into cortical function, including the prevalence of mixed selectivity for behavioral variables in mouse V1. Our model offers a comprehensive deep learning framework for exploring the computational principles underlying V1 neurons in freely moving animals engaged in natural behavior.
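To make the multimodal architecture concrete, the sketch below shows one plausible way to combine gaze-contingent visual frames with behavioral covariates in a recurrent network that predicts per-neuron firing rates. It is a minimal illustration, not the dissertation's actual implementation: the layer sizes, the class and variable names, the choice of a GRU, and the Poisson loss are all assumptions made for the example.

```python
# A minimal sketch (assumed architecture, not the dissertation's code) of a
# multimodal RNN: a small CNN encodes gaze-contingent frames, a GRU fuses
# visual features with behavioral variables over time, and a softplus
# readout yields non-negative predicted firing rates per neuron.
import torch
import torch.nn as nn

class MultimodalV1RNN(nn.Module):
    def __init__(self, n_neurons: int, n_behavior: int = 4, hidden: int = 128):
        super().__init__()
        # Shallow CNN front end for the gaze-contingent visual input
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 32 * 4 * 4 = 512
        )
        # GRU integrates visual features and behavioral variables over time
        self.rnn = nn.GRU(512 + n_behavior, hidden, batch_first=True)
        # Softplus keeps predicted firing rates non-negative
        self.readout = nn.Sequential(nn.Linear(hidden, n_neurons), nn.Softplus())

    def forward(self, frames, behavior):
        # frames: (batch, time, 1, H, W); behavior: (batch, time, n_behavior)
        b, t = frames.shape[:2]
        feats = self.visual(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(torch.cat([feats, behavior], dim=-1))
        return self.readout(h)                        # (batch, time, n_neurons)

# Toy usage with random data; a Poisson negative log-likelihood is a common
# objective for spike-count data (assumed here, not stated in the abstract).
model = MultimodalV1RNN(n_neurons=50)
frames = torch.randn(2, 10, 1, 64, 64)    # gaze-contingent video clips
behavior = torch.randn(2, 10, 4)          # e.g. speed, pupil, head pitch/roll
rates = model(frames, behavior)
targets = torch.poisson(torch.ones_like(rates))
loss = nn.PoissonNLLLoss(log_input=False)(rates, targets)
```

In this framing, the saliency-map analysis mentioned above corresponds to taking gradients of a neuron's predicted rate with respect to the input frames, and maximally activating stimuli to optimizing the frames themselves to drive that rate, both standard interpretability techniques for trained predictive models.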
