Artificial neural networks are typically treated as black boxes. Generally, only the states of a subset of the network are considered to determine its efficacy, while the relationship between a neural network's topology and its function remains under-theorized. For my analysis, I use a new class of event-driven recurrent neural networks (a geometric dynamic network modeled on canonical neurobiological signaling principles that allows input data to be directly encoded into its evolving dynamics) to advance a new type of machine learning approach. I accomplish this by, first, mapping causal neuronal signal flows in the C. elegans connectome to show how the dynamic evolution of signal flows results in a unique internal representation of particular input data. Second, I propose two distinct approaches to determine the upper bound on the amount of network dynamics needed to capture the signaling evolution of the system. Using the upper-bound values, I construct a mathematical object representing the causal neuronal signaling dynamics and delineate the interaction of substructures at various scales/heights of sub-graphs. Finally, based on recent theoretical propositions regarding optimal signaling in a geometric dynamic network, I show that neurons modify their axonal morphology so that the propagation time of an action potential and the membrane's refractory period become balanced. Thus, this work not only lays the foundation for constructing and analyzing a new class of artificial neural networks whose overall behavior and underlying dynamics are transparently coupled, but also provides fertile ground for future work on biologically inspired artificial intelligence.
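The balance described above can be illustrated with a minimal sketch: the conduction delay of a spike along an axon compared against the membrane's refractory period, with a ratio near one indicating balance. All function names, parameters, and values below are hypothetical and illustrative; they are not taken from the thesis.

```python
def conduction_delay(axon_length_mm, velocity_mm_per_ms):
    """Time (ms) for an action potential to traverse an axon of a given length."""
    return axon_length_mm / velocity_mm_per_ms

def refraction_ratio(refractory_ms, delay_ms):
    """Ratio of refractory period to propagation delay; a value near 1 suggests
    the two time scales are balanced."""
    return refractory_ms / delay_ms

# Illustrative example: a 10 mm axon conducting at 2 mm/ms gives a 5 ms delay,
# which balances a 5 ms refractory period.
delay = conduction_delay(10.0, 2.0)
print(refraction_ratio(5.0, delay))  # -> 1.0
```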
Pattern recognition is a core facet of learning any concept, whether it be a literal pattern in an image or a predictable stimulus-response pattern as in operant conditioning. From the molecular mechanisms up to the behavioral consequences, patterns can be embedded to allow humans to learn from past experiences and better predict future outcomes. A key player in this embedding process is the spike-timing-dependent plasticity (STDP) of the connections between neurons within the brain. STDP is theorized to leverage the spiking activity of neurons resulting from various stimuli according to the initial rule proposed by Donald Hebb: neurons that fire together wire together. However, the actual process by which STDP learning takes place remains disputed, especially in cases such as inhibition, where neurons do not actually support subsequent firing. This dissertation tackles several of these issues from multiple angles: (1) an evaluation of many different proposed STDP rules and how they have been justified experimentally and employed computationally; (2) the creation of a computational model that highlights the value of the STDP-mediated embedding of a stimulus into the architecture of a spiking neural network in a manner that could resemble brain embeddings; (3) a proposed approach to teaching students by taking advantage of the natural tendency of the brain to identify patterns and use them to create connections in new contexts. Here, we employ a biologically inspired spiking neural network model with edge latencies, refractory periods, and modern STDP rules to demonstrate the potential of the brain to embed information within its architecture via synaptic weights. These embeddings may be quite relevant in the brain, where the astrocyte syncytium may be able to access them to communicate local learning information on a more global scale.
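For context, the most common pair-based formulation of STDP potentiates a synapse when the presynaptic spike precedes the postsynaptic spike and depresses it otherwise, with exponentially decaying magnitude. The sketch below implements that standard rule; the parameter values are illustrative defaults, not the specific rules evaluated in the dissertation.

```python
import numpy as np

def stdp_weight_change(delta_t_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: delta_t_ms = t_post - t_pre.
    Potentiation when the presynaptic spike precedes the postsynaptic spike
    (delta_t > 0), depression otherwise; both decay exponentially with |delta_t|."""
    delta_t_ms = np.asarray(delta_t_ms, dtype=float)
    return np.where(
        delta_t_ms > 0,
        a_plus * np.exp(-delta_t_ms / tau_plus),
        -a_minus * np.exp(delta_t_ms / tau_minus),
    )

# Spike-time differences (t_post - t_pre) in milliseconds.
print(stdp_weight_change([-40.0, -10.0, 10.0, 40.0]))
```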
Machine learning and neuroscience have enjoyed a golden era of prosperity over the past decade, as the perfect confluence of technological advances has enabled extraordinary experiments and discoveries. Though tightly intertwined in the past, advances in the two fields have largely diverged, such that the application of deep learning techniques to microscopic neural systems remains relatively unexplored. In this thesis, I present work bridging recent advances in machine learning and neuroscience. Specifically, relying on recent advances in whole-brain imaging, we examined the performance of deep learning models on microscopic neural dynamics and the resulting emergent behaviors, using calcium imaging data from the nematode C. elegans. We show that neural networks perform remarkably well on both neuron-level dynamics prediction and behavioral state classification. In addition, we compared the performance of structure-agnostic neural networks and graph neural networks to investigate whether graph structure can be exploited as a favorable inductive bias. To perform this experiment, we designed a graph neural network which explicitly infers relations between neurons from neural activity and leverages the inferred graph structure during computations. In our experiments, we found that graph neural networks generally outperformed structure-agnostic models and generalized better to unseen C. elegans worms. These results imply a potential path to generalizable machine learning in neuroscience, where pre-trained models are evaluated on unseen individuals.
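A minimal sketch of the general idea of inferring a relation graph from neural activity and using it for message passing is given below. It uses toy random traces, a simple correlation threshold to infer edges, and a shared linear readout fit by least squares; the array names, threshold, and readout are all illustrative assumptions, not the architecture developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calcium traces: n_neurons x n_timesteps (a stand-in for real recordings).
n_neurons, n_steps = 20, 500
activity = rng.standard_normal((n_neurons, n_steps))

# 1) Infer a relation graph from activity by thresholding absolute correlation.
corr = np.corrcoef(activity)
adjacency = (np.abs(corr) > 0.1).astype(float)
np.fill_diagonal(adjacency, 0.0)

# 2) One message-passing step: each neuron aggregates its inferred neighbors'
#    current activity; a shared linear readout predicts the next time step.
deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1.0)
messages = (adjacency @ activity[:, :-1]) / deg          # neighbor mean
features = np.stack([activity[:, :-1], messages], axis=-1)

# Least-squares fit of a 2-parameter readout (self, neighbors) -> next value.
X = features.reshape(-1, 2)
y = activity[:, 1:].reshape(-1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("readout weights (self, neighbors):", w)
```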