eScholarship
Open Access Publications from the University of California

Modelling Cross-Situational Learning on Full Sentences in Few Shots with Simple RNNs

Creative Commons 'BY' version 4.0 license
Abstract

How do children bootstrap language through noisy supervision? Most prior work has focused on tracking co-occurrences between individual words and referents. We model cross-situational learning (CSL) at the sentence level with few (1000) training examples. We compare reservoir computing (RC) and LSTMs on three datasets, including complex robotic commands. In most experiments, reservoirs yield superior performance over LSTMs. Surprisingly, reservoirs demonstrate robust generalization as vocabulary size increases: the error grows slowly. In contrast, LSTMs are not robust: the number of hidden units must be increased dramatically to keep pace with vocabulary growth, which is questionable from a biological or cognitive perspective. This suggests that the random projections used in RC help to bootstrap generalization quickly. To our knowledge, this is a new result in developmental learning modelling. We analyse the evolution of internal representations during training of both recurrent networks and suggest why reservoir generalization appears more efficient.
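The contrast drawn in the abstract hinges on how reservoir computing works: input and recurrent weights are fixed random projections, and only a linear readout is trained. The sketch below illustrates this with a minimal echo state network in NumPy; the dimensions, leak rate, spectral radius, and ridge regularization are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Minimal echo state network (reservoir computing) sketch.
# All hyperparameters here are illustrative, not the paper's setup.
rng = np.random.default_rng(0)
n_in, n_res, n_out = 5, 100, 3

# Fixed random projections: never trained, unlike LSTM weights.
W_in = rng.uniform(-1, 1, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs, leak=0.3):
    """Collect leaky-integrated reservoir states for an input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout, here with ridge regression on toy data.
inputs = rng.uniform(-1, 1, (200, n_in))
targets = rng.uniform(-1, 1, (200, n_out))
X = run_reservoir(inputs)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets).T

predictions = X @ W_out.T  # readout applied to reservoir states
```

Because only the readout is optimized, scaling to a larger vocabulary mainly changes the linear regression problem, which may be one intuition for the robustness the paper reports.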
