eScholarship
Open Access Publications from the University of California

Simulating Early Word Learning in Situated Connectionist Agents

Abstract

Recent advances in Deep Learning (DL) and Reinforcement Learning (RL) make it possible to train neural network agents with raw, first-person visual perception to execute language-like instructions in 3D simulated worlds. Here, we investigate the application of such deep RL agents as cognitive models, specifically as models of infant word learning. We first develop a simple neural network-based language learning agent, trained via policy-gradient methods, which can interpret single-word instructions in a simulated 3D world. Taking inspiration from experimental paradigms in developmental psychology, we run various controlled simulations with the artificial agent, exploring the conditions in which established human biases and learning effects emerge, and propose a novel method for visualising and interpreting semantic representations in the agent. The results highlight the potential utility, and some limitations, of applying state-of-the-art learning agents and simulated environments to model human cognition.
