Knowledge transfer in a probabilistic Language Of Thought
Abstract
In many domains, people are able to transfer abstract knowledge about objects, events, or contexts that are superficially dissimilar, enabling striking new insights and inferences. We provide evidence that this ability is naturally explained as the addition of new primitive elements to a compositional mental representation, such as that in the probabilistic Language Of Thought (LOT). We conducted a transfer-learning experiment in which participants learned about two sequences, one after the other. We show that participants' ability to learn the second sequence is affected by the first sequence they saw. We test two probabilistic models to evaluate alternative theories of how algorithmic knowledge is transferred from the first to the second sequence: one model rationally updates the prior probability of the primitive operations in the LOT based on what was used in the first sequence; the other stores previously likely hypotheses as new primitives. Both models perform better than baselines in explaining behavior, with human subjects appearing to transfer entire hypotheses when they can, and otherwise updating the prior on primitives.
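To make the contrast between the two transfer mechanisms concrete, the sketch below illustrates them in Python over a toy grammar. This is not the authors' implementation: the primitive inventory (PRIMITIVES), the multiplicative boost parameter, and the chunk name double_then_next are all hypothetical choices made for illustration, and a PCFG-style prior (product of normalized primitive weights) stands in for whatever prior the actual models use.

```python
import math
from collections import Counter

# Hypothetical toy inventory of LOT primitives; an expression is a
# list of primitive names, and its prior is a product of normalized
# primitive weights (a PCFG-like stand-in for the paper's LOT prior).
PRIMITIVES = ["next", "prev", "repeat", "double", "identity"]

def log_prior(expression, weights):
    """Log-prior of an expression under the toy PCFG-style prior."""
    total = sum(weights.values())
    return sum(math.log(weights[p] / total) for p in expression)

# --- Mechanism 1: rationally re-weight primitives after task one. ---
def update_primitive_weights(weights, learned_expression, boost=2.0):
    """Raise the weight of each primitive used in the first sequence's
    winning hypothesis, increasing its prior probability in task two.
    `boost` is an illustrative free parameter, not from the paper."""
    counts = Counter(learned_expression)
    return {p: w * (boost ** counts[p]) for p, w in weights.items()}

# --- Mechanism 2: store a whole hypothesis as a new primitive. ---
def add_hypothesis_as_primitive(weights, name, weight=1.0):
    """Add the first task's likely hypothesis to the grammar as a
    single reusable primitive, callable atomically in task two."""
    new_weights = dict(weights)
    new_weights[name] = weight
    return new_weights

if __name__ == "__main__":
    weights = {p: 1.0 for p in PRIMITIVES}
    first_task_solution = ["double", "next"]  # hypothetical learned program

    reweighted = update_primitive_weights(weights, first_task_solution)
    chunked = add_hypothesis_as_primitive(weights, "double_then_next")

    expr = ["double", "next"]
    print("prior before transfer:   ", log_prior(expr, weights))
    print("prior after re-weighting:", log_prior(expr, reweighted))
    print("prior of stored chunk:   ", log_prior(["double_then_next"], chunked))
```

Running this shows the qualitative difference: re-weighting makes the same composed expression cheaper to rebuild, while chunking makes the whole first-task hypothesis available in a single step, which is the behavior the abstract attributes to participants when whole-hypothesis transfer is possible.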