
Disentangling Generativity in Visual Cognition

License: Creative Commons Attribution 4.0 (CC BY 4.0)
Abstract

Human knowledge is generative: from everyday learning, people extract latent features that can recombine to produce new imagined forms. This ability is critical to cognition, but its computational bases remain elusive. Recent research with β-regularized Variational Autoencoders (β-VAE) suggests that generativity in visual cognition may depend on learning disentangled (localist) feature representations. We tested this proposal by training β-VAEs and standard autoencoders to reconstruct bitmaps showing a single object varying in shape, size, location, and color, and manipulating hyperparameters to produce differentially entangled feature representations. These models showed variable generativity, with some standard autoencoders capable of near-perfect reconstruction of 43 trillion images after training on just 2000. However, constrained β-VAEs were unable to reconstruct images reflecting feature combinations that were systematically withheld during training (e.g., all blue circles). Thus, deep autoencoders may provide a promising tool for understanding visual generativity and potentially other aspects of visual cognition.
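The β-VAE manipulation described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation, not the authors' actual architecture; the input size, latent dimensionality, and network widths are illustrative assumptions. A standard autoencoder corresponds to β = 0, while increasing β scales the KL penalty that pressures the latent code toward disentangled (localist) features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Minimal beta-VAE sketch. With beta = 0 the KL term vanishes and the
    model behaves like a standard (potentially entangled) autoencoder;
    larger beta encourages disentangled latent features."""

    def __init__(self, input_dim=3 * 32 * 32, latent_dim=10, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # latent means
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # latent log-variances
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta):
    # Reconstruction error plus beta-weighted KL divergence to N(0, I).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Example: one loss evaluation on a batch of flattened 32x32 RGB bitmaps
# (random stand-ins for the single-object stimuli described above).
model = BetaVAE()
x = torch.rand(16, 3 * 32 * 32)
x_hat, mu, logvar = model(x)
loss = beta_vae_loss(x, x_hat, mu, logvar, beta=4.0)
loss.backward()
```

Sweeping β in this loss is one way to realize the "differentially entangled" representations the study compares; the specific hyperparameter values used by the authors are not given in the abstract.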
