
Incorporating a cognitive model for evidence accumulation into deep reinforcement learning agents

Abstract

Recent neuroscience studies suggest that the hippocampus encodes a low-dimensional, ordered representation of evidence through sequential neural activity. Cognitive modelers have proposed a mechanism by which such sequential activity could emerge through the modulation of the decay rate of neurons with exponentially decaying firing profiles. Through a linear transformation, this representation gives rise to neurons tuned to a specific magnitude of evidence, resembling neurons recorded in the hippocampus. Here we integrated this cognitive model into reinforcement learning agents and trained the agents to perform an evidence accumulation task designed to mimic a task used in experiments on animals. We found that the agents were able to learn the task and exhibited sequential neural activity as a function of the amount of evidence, similar to the activity reported in the hippocampus.

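The abstract does not include implementation details. As a rough illustration of the mechanism it describes, the sketch below (not the authors' code; all names and parameter values are assumptions) implements a bank of units with exponentially decaying activity whose decay is gated by incoming evidence, followed by a fixed linear transformation (a finite-difference approximation to an inverse Laplace transform) that yields units tuned to specific amounts of accumulated evidence.

```python
import numpy as np

# Illustrative sketch of the evidence-accumulation mechanism described in the
# abstract. The number of units, the range of decay rates, and the order k of
# the finite-difference transform are all assumed values.

n_units = 64
s = np.linspace(0.1, 10.0, n_units)      # per-unit decay rates

def run_trial(evidence, dt=1.0):
    """evidence: per-step evidence increments (e.g. +1 when a cue appears)."""
    F = np.ones(n_units)                  # corresponds to zero accumulated evidence
    states = []
    for e in evidence:
        # Decay is modulated (gated) by evidence, so F tracks accumulated
        # evidence E rather than elapsed time: F(s) = exp(-s * E).
        F = F * np.exp(-s * e * dt)
        states.append(F.copy())
    return np.array(states)               # shape: (time steps, n_units)

def evidence_tuned_units(F, k=4):
    """Fixed linear transformation: k-th finite difference across decay rates,
    scaled by s**(k+1), approximating an inverse Laplace transform. Each output
    unit peaks when accumulated evidence is roughly k / s for its decay rate."""
    f = np.diff(F, n=k, axis=-1)
    s_centres = s[k // 2 : len(s) - (k - k // 2)]
    return f * s_centres ** (k + 1)

# Example trial: evidence arrives as discrete cues on some time steps.
cues = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])
tuned = evidence_tuned_units(run_trial(cues))
print(tuned.shape)          # (10, 60)
# The index of the most active transformed unit shifts as evidence accumulates,
# i.e. sequential activity across units as a function of the amount of evidence.
print(tuned.argmax(axis=-1))
```

In the full model, a representation like `tuned` would serve as part of the agent's state; here it is shown in isolation only to illustrate how evidence-modulated decay plus a linear transform can produce evidence-tuned, sequentially active units.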