Can a Recurrent Neural Network Learn to Count Things?

Abstract

We explore a recurrent neural network model of counting based on the differentiable recurrent attentional model of Gregor et al. (2015). Our results reveal that the model can learn to count the number of items in a display, pointing to each of the items in turn and producing the next item in the count sequence at each step, then saying ‘done’ when there are no more items to count. The model thus demonstrates that the ability to learn to count does not depend on special knowledge relevant to the counting task. We find that the model’s ability to count depends on how well it has learned to point to each successive item in the array, underscoring the importance of coordinating the visuospatial act of pointing with the recitation of the count list. The model learns to count items in a display more quickly if it has previously learned to touch all the items in such a display correctly, capturing the relationship between touching and counting noted by Alibali and DiRusso (1999). In such cases it achieves performance sometimes thought to result from a semantic induction of the ‘cardinality principle’. Yet the errors it makes resemble the patterns seen in human children’s counting errors, consistent with the idea that children rely on graded and somewhat variable mechanisms similar to our neural networks.
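To make the architecture described above concrete, the sketch below shows a minimal recurrent attentional counter in PyTorch. It is not the authors' implementation (which builds on the DRAW model of Gregor et al., 2015); the class name `RecurrentCounter`, the grid size, hidden size, and vocabulary size are all illustrative assumptions. At each step an LSTM produces a soft "pointer" over display locations and emits logits over count words plus a ‘done’ token.

```python
# Hypothetical sketch of a recurrent attentional counter (not the
# authors' code). At each step the network points to a location in the
# display via soft attention and emits the next count word, or a
# special 'done' token when no items remain. All sizes are assumptions.
import torch
import torch.nn as nn


class RecurrentCounter(nn.Module):
    def __init__(self, grid=10, hidden=128, max_count=15):
        super().__init__()
        self.n_locs = grid * grid
        # Input: the flattened binary display plus last step's pointer,
        # so pointing can be coordinated with reciting the count list.
        self.lstm = nn.LSTMCell(self.n_locs + self.n_locs, hidden)
        self.point = nn.Linear(hidden, self.n_locs)    # "where" head: pointing
        self.say = nn.Linear(hidden, max_count + 1)    # "what" head: count word or 'done'

    def forward(self, display, steps):
        # display: (batch, n_locs) binary image of item locations
        b = display.size(0)
        h = display.new_zeros(b, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        attn = torch.full_like(display, 1.0 / self.n_locs)  # initial uniform pointer
        points, words = [], []
        for _ in range(steps):
            h, c = self.lstm(torch.cat([display, attn], dim=1), (h, c))
            attn = torch.softmax(self.point(h), dim=1)  # soft pointer over locations
            points.append(attn)
            words.append(self.say(h))                   # logits over count words + 'done'
        return torch.stack(points, 1), torch.stack(words, 1)


# Example: a 10x10 display with 3 items, unrolled for 4 steps
# (three count words, then 'done').
model = RecurrentCounter()
display = torch.zeros(1, 100)
display[0, [12, 47, 83]] = 1.0
points, words = model(display, steps=4)
print(points.shape, words.shape)  # (1, 4, 100) and (1, 4, 16)
```

Trained with supervised targets for both heads, a model of this general shape would let one probe the abstract's central claim: counting accuracy should track how reliably the pointer visits each successive item exactly once.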
