
Learning deep taxonomic priors for concept learning from few positive examples

Abstract

Human concept learning is surprisingly robust, allowing for precise generalizations given only a few positive examples. Bayesian formulations that account for this behavior require elaborate, pre-specified priors, leaving much of the learning process unexplained. More recent models of concept learning bootstrap from deep representations, but the deep neural networks are themselves trained using millions of positive and negative examples. In machine learning, recent progress in meta-learning has provided large-scale learning algorithms that can learn new concepts from a few examples, but these approaches still assume access to implicit negative evidence. In this paper, we formulate a training paradigm that allows a meta-learning algorithm to solve the problem of concept learning from few positive examples. The algorithm discovers a taxonomic prior useful for learning novel concepts even from held-out supercategories and mimics human generalization behavior, making it the first to do so without hand-specified domain knowledge or negative examples of a novel concept.
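The abstract describes the training paradigm only at a high level. The following minimal Python sketch illustrates the general idea of positive-only episodic training over a taxonomy: each episode exposes the learner to a few positive examples of a concept, while the concept's full extension is used only to score generalization at meta-train time. The toy taxonomy, the `sample_episode` helper, and the `taxonomic_generalization` stand-in (a simple smallest-covering-node rule in place of the paper's learned model) are all hypothetical illustrations, not the authors' implementation.

```python
import random

# Toy taxonomy, assumed for illustration: concept -> extension (set of items).
TAXONOMY = {
    "animal": {"dalmatian", "poodle", "tabby", "siamese", "goldfish"},
    "mammal": {"dalmatian", "poodle", "tabby", "siamese"},
    "dog":    {"dalmatian", "poodle"},
    "cat":    {"tabby", "siamese"},
}
ITEMS = sorted(set().union(*TAXONOMY.values()))

def sample_episode(n_support=2):
    """One meta-training episode: a concept plus a few positive examples.

    The learner sees only the positives; the concept's full extension is
    used solely to build targets that score generalization at meta-train
    time, so no negative examples of the novel concept are ever shown.
    """
    concept = random.choice(list(TAXONOMY))
    extension = TAXONOMY[concept]
    support = random.sample(sorted(extension), min(n_support, len(extension)))
    targets = {item: float(item in extension) for item in ITEMS}
    return support, targets

def taxonomic_generalization(support):
    """Stand-in for the learned model (hypothetical): generalize to the
    smallest taxonomy node covering all positives, a size-principle-style
    use of the taxonomic prior."""
    covering = [ext for ext in TAXONOMY.values() if set(support) <= ext]
    smallest = min(covering, key=len)
    return {item: float(item in smallest) for item in ITEMS}

if __name__ == "__main__":
    random.seed(0)
    support, targets = sample_episode()
    preds = taxonomic_generalization(support)
    hits = sum(preds[item] == targets[item] for item in ITEMS)
    print(f"support={support}  agreement with taxonomy: {hits}/{len(ITEMS)}")
```

In the paper's actual setting, a neural model replaces the covering-node rule and is trained across many such episodes, so that the taxonomic prior is discovered from data rather than hand-specified as it is in this sketch.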
