Extracting and Utilizing Abstract, Structured Representations for Analogy
Abstract
Human analogical ability involves the re-use of abstract, structured representations within and across domains. Here, we present a generative neural network that completes analogies in a 1D metric space, without explicit training on analogy. Our model integrates two key ideas. First, it operates over representations inspired by properties of the mammalian Entorhinal Cortex (EC), believed to extract low-dimensional representations of the environment from the transition probabilities between states. Second, we show that a neural network equipped with a simple predictive objective and highly general inductive bias can learn to utilize these EC-like codes to compute explicit, abstract relations between pairs of objects. The proposed inductive bias favors a latent code that consists of anti-correlated representations. The relational representations learned by the model can then be used to complete analogies involving the signed distance between novel input pairs (1:3 :: 5:?, answer 7), and to extrapolate outside of the network's training domain. As a proof of principle, we extend the same architecture to more richly structured tree representations. We suggest that this combination of predictive, error-driven learning and simple inductive biases offers promise for deriving and utilizing the representations necessary for high-level cognitive functions, such as analogy.
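To make the analogy format concrete: the relation between a source pair a:b is the signed distance b - a, and the analogy is completed by applying that relation to a novel item c. Below is a minimal sketch in Python of the task arithmetic only, not the paper's neural model; the function name complete_analogy is ours for illustration.

def complete_analogy(a: float, b: float, c: float) -> float:
    """Return d such that a:b :: c:d under signed distance."""
    relation = b - a     # abstract relation extracted from the source pair
    return c + relation  # re-use of the relation on the novel item

# Example from the abstract: 1:3 :: 5:? -> 7
assert complete_analogy(1, 3, 5) == 7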