
Domain-General Learning of Neural Network Models to Solve Analogy Tasks – A Large-Scale Simulation

Abstract

Several computational models have been proposed to explain the mental processes underlying analogical reasoning. However, previous models either lack a learning component or use limited, artificial data for simulations. To address these issues, we build a domain-general neural network model that learns to solve analogy tasks in different modalities, e.g., texts and images. Importantly, it uses word representations and image representations computed from large-scale naturalistic corpora. The model reproduces several key findings in the analogical reasoning literature, including the relational shift and the familiarity effect, and demonstrates domain-general learning capacity. Our model also makes interesting predictions about cross-modality transfer of analogical reasoning that could be empirically tested. It takes a first step towards a computational framework that can learn analogy tasks from naturalistic data and transfer them to other modalities.
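The abstract does not describe the model architecture itself, but the task format it refers to is the standard four-term analogy A : B :: C : D posed over learned representations. As a minimal, purely illustrative sketch (not the paper's model), the snippet below scores candidate completions of a verbal analogy with the classic vector-offset heuristic over toy word vectors; the vocabulary and values are hypothetical stand-ins for embeddings computed from a large naturalistic corpus.

```python
import numpy as np

# Toy word vectors standing in for corpus-derived embeddings
# (hypothetical values, for illustration only).
vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.9, 0.9]),
    "man":   np.array([0.7, 0.2, 0.1]),
    "woman": np.array([0.7, 0.2, 0.9]),
    "apple": np.array([0.1, 0.8, 0.4]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, candidates):
    """Pick the candidate D that best completes A : B :: C : D,
    using the vector-offset heuristic D ~ B - A + C."""
    target = vectors[b] - vectors[a] + vectors[c]
    return max(candidates, key=lambda d: cosine(vectors[d], target))

# man : woman :: king : ?  ->  "queen"
print(solve_analogy("man", "woman", "king", ["queen", "apple"]))
```

The same scoring scheme extends in principle to image analogies by substituting image representations for the word vectors, which is the kind of cross-modality setup the abstract alludes to.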
