What Transformers Might Know About the Physical World: T5 and the Origins of Knowledge
Abstract
Features of the physical world may be acquired from the statistical properties of language. Here we investigate how the Transformer language model T5 is able to gain knowledge of the visual world without being able to see or feel. In a series of four studies, we show that T5 possesses an implicit understanding of the relative sizes, weights, and shapes of animals (but not of their colors) that aligns well with that of humans. As model size increased from 60M to 11B parameters, the fit to human judgments improved dramatically, suggesting that the difference between humans and these learning systems might ultimately disappear as parameter counts grow even larger. The results imply that knowledge of the perceptual world, and much of semantic memory, might be acquired in disembodied learning systems using real-time inferential processes.
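To make the probing idea concrete, below is a minimal sketch, assuming a HuggingFace transformers setup, of how one might query a T5 checkpoint for a comparative size judgment through its span-infilling objective. The checkpoint name, prompt wording, and candidate words are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch (assumed setup, not the paper's method): score candidate words
# that fill a masked comparative statement, using T5's span-corruption format.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")            # ~60M-parameter checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

def candidate_score(prompt: str, candidate: str) -> float:
    """Log-likelihood-style score for `candidate` filling the <extra_id_0> slot."""
    inputs = tokenizer(prompt, return_tensors="pt")
    # T5 decodes a masked span as "<extra_id_0> {span} <extra_id_1>".
    target = tokenizer(f"<extra_id_0> {candidate} <extra_id_1>", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=target.input_ids)
    # `out.loss` is the mean cross-entropy over target tokens; negate it for a score.
    return -out.loss.item()

# Hypothetical probe: which comparative does the model prefer?
prompt = "A mouse is <extra_id_0> than an elephant."
for word in ["smaller", "larger"]:
    print(word, candidate_score(prompt, word))
```

Scores obtained this way for many item pairs could then be correlated with human judgments, and the same probe could be repeated across checkpoint sizes to examine scaling.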