Brain Encoding using Randomized Recurrent Networks
eScholarship
Open Access Publications from the University of California

Abstract

Finding plausible models of brain computation has been a continuing effort in brain encoding and decoding. Most prior work maps stimulus representations from language models to fMRI brain activity using ridge regression. However, such models are not biologically plausible: they do not represent the neural dynamics underlying the fMRI recordings. In this work, our primary motivation is to challenge ridge regression models with simple neural architectures, echo state networks (ESNs) and long short-term memory networks (LSTMs), on a brain encoding task that requires full-sentence processing while participants read short sentences. We compute sentence representations with various pre-trained Transformer language models and predict fMRI brain activity with simple neural architectures whose initial layers are randomly parameterized and require no explicit training. Experimental results show that (i) ESNs with online learning predict fMRI brain activity as accurately as ridge regression models; (ii) in random LSTMs, the cell state (an internal memory representation related to long-term memory) and the output gate (related to short-term memory) perform equally well on short sentences; (iii) brain activity in the left-hemisphere language area is predicted more accurately than in the right-hemisphere language area; (iv) ESNs with online learning outperform those with offline learning, consistent with the biological plausibility of ESNs and the cognitive process of sentence reading; and (v) among all Transformer variants, Longformer features yield the best accuracy with both ridge regression and ESN online learning models. The proposed framework, which combines input featurization, dynamic memory, and learning modules, offers a flexible, biologically plausible architecture for investigating brain encoding in neuroscience.
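To make the framework concrete, the sketch below illustrates (with NumPy) the general recipe the abstract describes: a fixed, randomly initialized echo state network whose only trained component is a linear readout, updated online one sentence at a time to predict fMRI activity from sentence embeddings. This is not the authors' code; all dimensions, hyperparameters, and the normalized least-mean-squares (NLMS) update rule are assumptions chosen for illustration, and the embeddings and fMRI responses are random stand-ins.

```python
# Illustrative ESN brain-encoding sketch (assumed details, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

n_features = 768    # e.g. size of a Transformer sentence embedding (assumed)
n_reservoir = 500   # random recurrent units; never trained (assumed)
n_voxels = 1000     # fMRI voxels to predict (assumed)

# Random, untrained input and recurrent weights (the "reservoir").
W_in = rng.normal(scale=0.1, size=(n_reservoir, n_features))
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

# Only the linear readout is learned, one sentence at a time (online learning).
W_out = np.zeros((n_voxels, n_reservoir))
lr = 0.5  # NLMS step size (assumed)

def esn_step(x, h):
    """Advance the reservoir state h given one sentence embedding x."""
    return np.tanh(W_in @ x + W @ h)

h = np.zeros(n_reservoir)
for t in range(20):                      # a short sequence of sentences
    x = rng.normal(size=n_features)      # stand-in sentence embedding
    y = rng.normal(size=n_voxels)        # stand-in fMRI response
    h = esn_step(x, h)
    y_hat = W_out @ h                    # predicted brain activity
    # Normalized LMS update of the readout only; the reservoir stays random.
    W_out += lr * np.outer(y - y_hat, h) / (h @ h + 1e-8)
```

The division of labor shown here, random fixed recurrence plus a cheap online readout, is what the abstract contrasts with ridge regression, which would instead fit the readout offline on all sentences at once.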
