
Determinantal Point Processes for Memory and Structured Inference

Abstract

Determinantal Point Processes (DPPs) are probabilistic models of repulsion, capturing negative dependencies between states. Here, we show that a DPP in representation-space predicts inferential biases toward mutual exclusivity commonly observed in word learning (mutual exclusivity bias) and reasoning (disjunctive syllogism) tasks. It does so without requiring explicit rule representations, without supervision, and without explicit knowledge transfer. The DPP attempts to maximize the total "volume" spanned by the set of inferred code-vectors. In a representational system in which combinatorial codes are constructed by re-using components, a DPP will naturally favor the combination of previously unused components. We suggest that this bias toward the selection of volume-maximizing combinations may exist to promote the efficient retrieval of individuals from memory. In support of this, we show that the same algorithm implements efficient "hashing", minimizing collisions between key/value pairs without expanding the required storage space. We suggest that the mechanisms that promote efficient memory search may also underlie cognitive biases in structured inference.
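To make the "volume" intuition concrete, here is a minimal sketch (not drawn from the paper's implementation; the function and variable names are illustrative): a DPP with kernel L scores a subset S of items in proportion to det(L_S), the determinant of the kernel restricted to S. When L is built from feature vectors, det(L_S) equals the squared volume of the parallelotope those vectors span, so subsets whose code-vectors overlap in their components receive low scores and subsets built from previously unused components receive high ones.

```python
import numpy as np

def dpp_subset_score(features, subset):
    """Unnormalized DPP score det(L_S), with L = B B^T built from feature rows.

    Geometrically this is the squared volume spanned by the chosen vectors,
    so near-orthogonal (component-disjoint) codes score higher than
    overlapping ones.
    """
    B = features[list(subset)]   # feature vectors of the chosen items
    L_S = B @ B.T                # kernel restricted to the subset S
    return np.linalg.det(L_S)

# Three hypothetical code-vectors: v0 and v1 re-use the same components,
# while v2 is built from previously unused components.
codes = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],   # nearly identical to v0
    [0.0, 0.0, 1.0, 0.0],   # uses fresh components
])

print(dpp_subset_score(codes, [0, 1]))  # ~0.01: overlapping codes, small volume
print(dpp_subset_score(codes, [0, 2]))  # 1.0: orthogonal codes, large volume
```

Under this scoring, a sampler that favors high-volume subsets will pair a familiar item with a new, component-disjoint code rather than one that re-uses existing components, which is the repulsive behavior the abstract links to mutual-exclusivity-style inference and to collision-minimizing storage.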
