
Deep learning and the rules and statistics debate in cognitive science, applied to a simple case

Abstract

Artificial neural networks can be used to build a general theory of intelligent systems, connecting the computational, algorithmic, and implementational levels. I analyze the generalization of learning in simple but challenging problems as a way to build this theory. I report simulations of learning and generalizing sameness using Simple Recurrent Networks (SRNs), Long Short-Term Memory networks (LSTMs), and Transformers. I show that even when the minimal requirements for implementing sameness in an SRN are met, so that an SRN that can compute sameness exists in principle, training with backpropagation on all possible input pairs failed to produce such a network. LSTMs come close to learning sameness, but the best networks require an inordinate number of examples and an enrichment of the sample with positive examples; the same holds for Transformers. A similar task posed to ChatGPT revealed related problems. I discuss what these results imply for the cognitive and neural sciences.
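
As a rough illustration of the sameness task the abstract describes, the sketch below trains a simple recurrent network on every possible pair drawn from a small one-hot vocabulary and asks it to output 1 when the two items match. This is a minimal sketch under stated assumptions: the paper's framework, vocabulary size, hidden width, and hyperparameters are not given here, so PyTorch, N_ITEMS, HIDDEN, and the training schedule are all hypothetical choices.

    # Minimal sketch of a sameness task (assumed setup; not the paper's code).
    import torch
    import torch.nn as nn

    N_ITEMS = 10   # hypothetical vocabulary size
    HIDDEN = 32    # hypothetical hidden-layer width

    # Enumerate all possible input pairs, as the abstract describes
    # training "on all possible input pairs". Note the class imbalance:
    # only N_ITEMS of the N_ITEMS**2 pairs are positive ("same"), which
    # is one motivation for enriching the sample with positive examples.
    items = torch.eye(N_ITEMS)
    pairs, labels = [], []
    for i in range(N_ITEMS):
        for j in range(N_ITEMS):
            pairs.append(torch.stack([items[i], items[j]]))  # length-2 sequence
            labels.append(float(i == j))
    X = torch.stack(pairs)                  # (N_ITEMS**2, 2, N_ITEMS)
    y = torch.tensor(labels).unsqueeze(1)   # (N_ITEMS**2, 1)

    # An Elman-style simple recurrent network with a sigmoid read-out;
    # swapping nn.RNN for nn.LSTM gives the LSTM condition.
    class SamenessNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = nn.RNN(N_ITEMS, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, 1)

        def forward(self, x):
            _, h = self.rnn(x)              # hidden state after both items
            return torch.sigmoid(self.out(h[-1]))

    model = SamenessNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for epoch in range(2000):               # backpropagation, as in the abstract
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

Generalization, the abstract's actual target, would be probed on pairs built from items held out of training, which this sketch omits for brevity.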
