Episodic Memory Contributions to Working Memory–Supported Reinforcement Learning
Published Web Location
https://osf.io/preprints/psyarxiv/64hxe_v1

Abstract
Reinforcement learning (RL) frameworks have been extremely successful at capturing how biological agents learn to make rewarding choices. However, there is also increasing evidence that multiple cognitive processes, including working memory (WM) and episodic memory (EM), support such learning in parallel with value-based mechanisms such as RL. Here, we investigate EM's role in a context where both RL and WM are known to strongly support learning. We develop two new experimental paradigms to isolate EM's contributions, using trial-unique signals (Experiment 1) and temporal context effects (Experiment 2) to tag EM. As predicted, our results across both experiments consistently showed a weak role of EM in learning alongside RL and WM. Surprisingly, however, EM's contributions did not improve overall behavior; instead, participants appeared to encode in, or retrieve from, EM only part of a past trial's information (the stimulus-action choice, without the outcome), leading to characteristic error patterns. Across both experiments, computational modeling confirmed a small contribution to learning behavior from EM traces of past stimulus-action association events. Our results shed light on the format of EM traces and how they support decision making.
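The computational modeling is only summarized at a high level in the abstract. As a rough, hypothetical illustration of a mixture-of-systems learner of the kind described (incremental RL values, decaying WM traces, and an EM store holding only stimulus-action events without outcomes), the following minimal Python sketch shows one way such an account could be simulated. All parameter values, mixture weights, and the specific mixture scheme are assumptions for illustration, not the authors' actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    N_STIMULI, N_ACTIONS = 3, 3
    ALPHA = 0.1            # RL learning rate (assumed value)
    BETA = 5.0             # softmax inverse temperature (assumed value)
    WM_DECAY = 0.8         # per-trial decay of WM traces (assumed value)
    W_WM, W_EM = 0.5, 0.1  # mixture weights for WM and EM policies (assumed values)

    q = np.zeros((N_STIMULI, N_ACTIONS))   # incremental RL values
    wm = np.zeros((N_STIMULI, N_ACTIONS))  # decaying WM trace of recent outcomes
    em = []                                # episodic store: (stimulus, action) events only

    def softmax(x, beta=BETA):
        z = beta * (x - x.max())
        p = np.exp(z)
        return p / p.sum()

    def policy(s):
        """Mix RL, WM, and EM policies for stimulus s."""
        p_rl = softmax(q[s])
        p_wm = softmax(wm[s])
        # EM retrieval: count past stimulus-action pairs for this stimulus,
        # ignoring outcomes (the partial-trace encoding suggested by the abstract).
        counts = np.zeros(N_ACTIONS)
        for s_past, a_past in em:
            if s_past == s:
                counts[a_past] += 1
        p_em = counts / counts.sum() if counts.sum() > 0 else np.full(N_ACTIONS, 1.0 / N_ACTIONS)
        w_rl = 1.0 - W_WM - W_EM
        return w_rl * p_rl + W_WM * p_wm + W_EM * p_em

    def update(s, a, r):
        """Update all three memory systems after observing reward r."""
        q[s, a] += ALPHA * (r - q[s, a])   # standard delta-rule RL update
        wm[:, :] *= WM_DECAY               # WM traces decay on every trial...
        wm[s, a] = r                       # ...but the latest outcome is held exactly
        em.append((s, a))                  # EM stores the stimulus-action event, no outcome

    # Toy simulation: for stimulus s, action (s % N_ACTIONS) is rewarded.
    for trial in range(300):
        s = int(rng.integers(N_STIMULI))
        a = int(rng.choice(N_ACTIONS, p=policy(s)))
        r = 1.0 if a == s % N_ACTIONS else 0.0
        update(s, a, r)

    print(np.round(q, 2))  # learned values should favor the rewarded action per stimulus

Because the EM store in this sketch omits outcomes, its retrieved policy can re-propose previously chosen but unrewarded actions; that is one way, under these assumptions, that partial stimulus-action traces could produce characteristic error patterns without improving overall performance.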