Enhancing the Discovery of Neural Representations: Integrating Task-Relevant Dimensionality Reduction and Domain Adaptation
- Orouji, Seyedmehdi
- Advisor(s): Peters, Megan A.K.
Abstract
In human neuroscience, machine learning models can be used to discover lower-dimensional neural representations relevant to behavior. However, these models often require large datasets and can overfit with the small sample sizes typical of neuroimaging. To address this, we developed the Task-Relevant Autoencoder via Classifier Enhancement (TRACE) to extract behaviorally relevant representations. When tested against standard autoencoders and principal component analysis on fMRI data from the ventral temporal cortex (VTC) of 59 subjects, TRACE showed up to 12% higher classification accuracy and up to 56% improvement in discovering task-relevant representations, highlighting its potential for behaviorally relevant dimensionality reduction.
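To make the general idea concrete, the following is a minimal, illustrative sketch of an autoencoder whose latent space is additionally shaped by a classification loss, which is the broad strategy behind TRACE. It is not the dissertation's implementation; the layer sizes, loss weighting (alpha), and class setup are hypothetical.

```python
# Illustrative sketch only (not the actual TRACE code): an autoencoder whose
# low-dimensional latent space is kept task-relevant by an added classifier.
import torch
import torch.nn as nn


class TaskRelevantAutoencoder(nn.Module):
    def __init__(self, n_voxels, n_latent, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_voxels, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_voxels))
        # The "classifier enhancement": predict behavior/labels from the latent space.
        self.classifier = nn.Linear(n_latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)                       # low-dimensional representation
        return self.decoder(z), self.classifier(z), z


def combined_loss(x, x_hat, logits, labels, alpha=1.0):
    # Reconstruction keeps the latent space faithful to the voxel data;
    # the weighted classification term keeps it relevant to the task.
    recon = nn.functional.mse_loss(x_hat, x)
    clf = nn.functional.cross_entropy(logits, labels)
    return recon + alpha * clf
```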
Applications of machine learning also extend to predictive modeling and pattern discovery in modern biology. However, these models often fail to generalize across datasets because of statistical differences between them. The same issue arises in neuroscience, where data are collected across laboratories using different experimental setups. Domain adaptation can align statistical distributions across datasets, enabling model transfer and mitigating overfitting. In the second chapter, we discuss domain adaptation in the context of small-scale, heterogeneous biological data, outlining its benefits, challenges, and key methodologies, and we advocate for integrating domain adaptation techniques into computational biology, along with further development of methods customized to such data.
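As one concrete example of what "aligning statistical distributions" can mean, the sketch below shows correlation alignment (CORAL), a widely used domain adaptation technique that matches the second-order statistics of one dataset to another. It is shown purely for illustration and is not necessarily among the specific methods discussed in the chapter.

```python
# Illustrative example only: CORAL-style alignment of second-order statistics,
# one common domain adaptation technique.
import numpy as np


def coral(source, target, eps=1e-5):
    """Re-color source features so their covariance (and mean) match the target's.

    source, target: (n_samples, n_features) arrays from two datasets/labs.
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # Whiten source with its own covariance, then re-color with the target's.
    whiten = np.linalg.inv(np.linalg.cholesky(cs)).T
    color = np.linalg.cholesky(ct).T
    return (source - source.mean(0)) @ whiten @ color + target.mean(0)
```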
Building on these insights, we use domain adaptation to understand interactions between brain regions during visual processing. We examine the ventral temporal cortex (VTC) and prefrontal cortex (PFC) using Domain Adaptive Task-Relevant Autoencoding via Classifier Enhancement (DATRACE) to explore shared neural representations. DATRACE applies domain adaptation techniques within an encoder-decoder architecture to predict voxel activities from a shared latent space while ensuring relevance to object recognition tasks. Preliminary results indicate that the shared representations capture similar object categories in both VTC and PFC. We computed the representational dissimilarity matrix (RDM) of the shared representation between VTC and PFC and contrasted it with the RDM obtained from the low-dimensional representation of VTC; our results suggest that the information shared with PFC is very similar to that encoded in VTC. Additionally, feature perturbation analysis suggests that further studies are needed to reveal the semantic interpretations of the shared dimensions in these brain regions. This integrated approach underscores the potential of advanced machine learning techniques in both neuroscience and biology.
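The RDM comparison described above can be summarized by the following sketch, assuming each representation is a condition-by-dimension array (e.g., condition-averaged latent vectors). The distance metric (correlation distance) and comparison statistic (Spearman correlation) are illustrative assumptions rather than the exact choices used in the analysis.

```python
# Illustrative sketch of computing and comparing RDMs from two representations.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def rdm(representation):
    # Pairwise dissimilarity between conditions (1 - Pearson correlation).
    return squareform(pdist(representation, metric="correlation"))


def compare_rdms(rdm_a, rdm_b):
    # Compare only the upper triangles (RDMs are symmetric with zero diagonal).
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation


# Hypothetical usage:
# shared = ...   # shared VTC-PFC latent representation, shape (n_conditions, k)
# vtc_low = ...  # low-dimensional VTC representation, shape (n_conditions, k)
# print(compare_rdms(rdm(shared), rdm(vtc_low)))
```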