
Abstracted Gaussian Prototypes for One-Shot Concept Learning

Creative Commons 'BY' version 4.0 license
Abstract

While humans have the remarkable ability to learn concepts from few examples, machine learning algorithms often require complex architectures that struggle to learn from minimal data. We introduce a simple computational framework for one-shot learning that encodes higher-level representations of visual concepts using Gaussian Mixture Models (GMMs). Distinct topological subparts of concepts are represented as inferred Gaussian components, which can generate abstracted subparts to build robust prototypes for each concept. Our framework addresses both one-shot classification tasks, through a similarity metric inspired by Tversky's (1977) contrast model, and one-shot generative tasks, through a novel pipeline that employs variational autoencoders (VAEs) to generate new class variants. Our approach yields strong classification accuracy while also performing a breadth of conceptual tasks that most approaches do not attempt. Results from human judges reveal that our generative pipeline produces novel classes of visual concepts broadly indistinguishable from those made by humans.
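To make the abstract concrete, the sketch below illustrates (not the authors' code) the two ingredients named above: a GMM fit to the ink pixels of a single exemplar, so that each Gaussian component stands in for one abstracted subpart, and a Tversky-style contrast similarity between two concepts. The component count, the log-density threshold, and the way shared versus distinctive features are counted are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming ink pixels as (x, y) coordinates and scikit-learn's GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_prototype(ink_xy, n_components=3, seed=0):
    """Fit a GMM to the (x, y) ink coordinates of one exemplar; components ~ subparts."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed)
    gmm.fit(ink_xy)
    return gmm

def tversky_similarity(common, a_only, b_only, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky (1977) contrast form: S = theta*f(A∩B) - alpha*f(A-B) - beta*f(B-A)."""
    return theta * common - alpha * a_only - beta * b_only

# Toy usage: two synthetic "strokes" drawn from slightly shifted Gaussians.
rng = np.random.default_rng(0)
exemplar_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(200, 2))
exemplar_b = rng.normal(loc=[0.05, 0.0], scale=0.1, size=(200, 2))

proto_a = fit_prototype(exemplar_a, n_components=1)
proto_b = fit_prototype(exemplar_b, n_components=1)

# Treat points that a prototype explains well as "features" it shares with the
# other concept, and poorly explained points as distinctive features.
thresh = -2.0  # illustrative log-density cut-off (an assumption)
b_explained_by_a = proto_a.score_samples(exemplar_b) > thresh
a_explained_by_b = proto_b.score_samples(exemplar_a) > thresh
score = tversky_similarity(common=int(b_explained_by_a.sum() + a_explained_by_b.sum()),
                           a_only=int((~a_explained_by_b).sum()),
                           b_only=int((~b_explained_by_a).sum()))
print(f"Tversky-style similarity score: {score:.1f}")
```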
