Artificial intelligence and machine learning (AI/ML) have been extremely successful in predicting, optimizing, and controlling the behavior of complex interacting systems. The robustness and explainability of existing AI/ML methods, however, remain significant challenges, and new approaches are needed. The human brain motivated the early development of deep learning, and neuroscientific concepts have contributed to the profound success of deep learning algorithms across many areas. The next leap in AI/ML may again come from a deeper understanding of brain architectures and processes—this dissertation focuses on deepening this understanding with machine learning models.
We first discuss a convex optimization framework for analyzing and integrating multimodal brain data to infer brain subnetworks and understand heterogeneity across tasks. Next, we propose a novel deep learning model that learns representations of multimodal and dynamic brain signals. Although such models are often regarded as black boxes, we can characterize the input brain signals with attribution methods, study brain organizational structures, and unveil the heterogeneity among brain regions, tasks, and individuals. We then demonstrate that semantic representation is an essential component of the human visual system. By incorporating a text modality, we are able to reconstruct complex, high-fidelity imagery from input brain signals and infer brain activities from visual stimuli. We then present further studies exploring the redundancy and dependency in these brain signals related to visual information processing. Lastly, we apply neuroscience tools and insights to deep learning models, gaining a deeper understanding of the latter and developing more computation- and memory-efficient models. These works demonstrate that advances and challenges in neuroscience and AI/ML benefit each other and drive both fields forward.