eScholarship
Open Access Publications from the University of California

Bayesian Structured Representation Learning

  • Author(s): Vikram, Sharad Mandyam
  • Advisor(s): Dasgupta, Sanjoy
  • et al.
Abstract

Bayesian methods offer the flexibility to both model uncertainty and incorporate domain knowledge into the modeling process. Deep generative modeling and Bayesian deep learning methods, such as the variational autoencoder (VAE), have expanded the scope of Bayesian methods, enabling them to scale to large, high-dimensional datasets. Incorporating prior knowledge or domain expertise into deep generative modeling remains a challenge, however, often yielding models in which Bayesian inference is prohibitively slow or even intractable. In this thesis, I first motivate using structured priors, presenting a contribution in the space of interactive structure learning. I then define Bayesian structured representation learning (BSRL) models, which combine structured priors with the VAE, and present foundational work along with applications of BSRL models.
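To make the core idea concrete, the sketch below shows the one place a structured prior enters the standard VAE objective: the prior term p(z) inside the evidence lower bound (ELBO). This is a generic NumPy illustration, not code from the thesis; the toy decoder, the two-component mixture prior standing in for a "structured" prior, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gaussian(z, mu, sigma):
    # log N(z; mu, sigma^2 I), summed over the last (latent) dimension
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (z - mu)**2 / (2 * sigma**2), axis=-1)

def elbo(x, enc_mu, enc_sigma, log_prior, n_samples=1000):
    # Monte Carlo ELBO with reparameterized samples z = mu + sigma * eps
    eps = rng.standard_normal((n_samples, enc_mu.shape[-1]))
    z = enc_mu + enc_sigma * eps
    # toy decoder: Gaussian likelihood with mean z and unit variance
    log_lik = log_gaussian(x, z, 1.0)
    log_q = log_gaussian(z, enc_mu, enc_sigma)
    # the prior term is the only piece that changes between models
    return np.mean(log_lik + log_prior(z) - log_q)

# standard-VAE prior: a single standard Gaussian
std_prior = lambda z: log_gaussian(z, 0.0, 1.0)

# a minimal "structured" prior: an equal-weight two-component mixture
def mixture_prior(z):
    return np.logaddexp(log_gaussian(z, -2.0, 1.0),
                        log_gaussian(z, 2.0, 1.0)) + np.log(0.5)

x = np.array([1.5, -0.5])
mu, sigma = np.array([1.0, 0.0]), np.array([0.5, 0.5])
print(elbo(x, mu, sigma, std_prior))
print(elbo(x, mu, sigma, mixture_prior))
```

The point of the sketch is that swapping `std_prior` for `mixture_prior` changes only `log_prior`, while the encoder, decoder, and reparameterization are untouched; richer structured priors (e.g., ones with discrete or temporal structure) complicate inference over that same term, which is the difficulty the thesis addresses.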
