eScholarship
Open Access Publications from the University of California

UC Irvine

UC Irvine Electronic Theses and Dissertations

Scalable Approximate Bayesian Inference

Abstract

The availability of massive computational resources has led to widespread application and development of Bayesian methods. However, the explosive growth of data volume in recent years means that developing advanced Bayesian methods for large-scale problems remains a very active area of research. This dissertation is an effort to develop more scalable computational tools for Bayesian inference in big-data problems.

At its core, Bayesian inference involves evaluating high-dimensional integrals with respect to the posterior distribution of model parameters and/or latent variables. These integrals rarely have closed forms, so approximation methods are usually the only feasible option. Approximation methods fall into two main categories: deterministic approximations based on variational optimization, and stochastic approximations based on sampling.
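Concretely, the integrals in question are posterior expectations. Writing \mathcal{D} for the data and \theta for the parameters, the targets take the form

\[
\mathbb{E}_{p(\theta \mid \mathcal{D})}[f(\theta)]
  = \int f(\theta)\, p(\theta \mid \mathcal{D})\, d\theta,
\qquad
p(\theta \mid \mathcal{D})
  = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{\int p(\mathcal{D} \mid \theta')\, p(\theta')\, d\theta'},
\]

where even the normalizing constant in the denominator is itself an intractable high-dimensional integral.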

We start by developing a new variational framework, geometric approximation of posterior (GAP), based on ambient Fisher geometry. As a variational method, GAP has the potential to scale to large problems better than computationally expensive sampling methods. It not only rests on a well-established mathematical foundation, information geometry, but also serves as a better alternative to other variational methods, such as variational free energy and expectation propagation, in certain scenarios.
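For orientation, variational methods cast inference as optimization over a tractable family of distributions \mathcal{Q}. The classical objective shown below is the standard one, not GAP's Fisher-geometric objective, which is developed in the dissertation:

\[
q^{*}
  = \operatorname*{arg\,min}_{q \in \mathcal{Q}} \mathrm{KL}\!\left(q(\theta)\,\middle\|\,p(\theta \mid \mathcal{D})\right)
  = \operatorname*{arg\,max}_{q \in \mathcal{Q}} \mathbb{E}_{q}\!\left[\log p(\mathcal{D}, \theta) - \log q(\theta)\right],
\]

whereas expectation propagation instead minimizes the reverse divergence \mathrm{KL}(p \,\|\, q) locally, factor by factor.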

Next, we focus on another class of approximation schemes, those based on MCMC sampling. Our method combines auto-encoders with Hamiltonian Monte Carlo (HMC). While HMC efficiently explores parameter spaces of high dimension or complicated geometry, it is computationally demanding because it must evaluate additional geometric information about the parameter space. Our proposed method, Auto-encoding HMC, simulates Hamiltonian dynamics in a latent space of much lower dimension while still maintaining efficient exploration of the original space. Our method achieves a good balance between efficiency and accuracy for high-dimensional problems.
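As background, the sketch below implements one step of standard HMC in NumPy: leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction, run on a toy Gaussian target. It is generic HMC, not the proposed Auto-encoding HMC, which would run comparable dynamics in a learned lower-dimensional latent space; all names here are illustrative.

    import numpy as np

    def leapfrog(theta, r, grad_log_p, eps, n_steps):
        """Leapfrog integration of Hamiltonian dynamics."""
        r = r + 0.5 * eps * grad_log_p(theta)      # initial half step (momentum)
        for _ in range(n_steps - 1):
            theta = theta + eps * r                # full step (position)
            r = r + eps * grad_log_p(theta)        # full step (momentum)
        theta = theta + eps * r
        r = r + 0.5 * eps * grad_log_p(theta)      # final half step (momentum)
        return theta, r

    def hmc_step(theta, log_p, grad_log_p, eps=0.1, n_steps=20, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        r0 = rng.standard_normal(theta.shape)      # resample Gaussian momentum
        theta_new, r_new = leapfrog(theta, r0, grad_log_p, eps, n_steps)
        # Metropolis correction with H(theta, r) = -log p(theta) + |r|^2 / 2.
        h_old = -log_p(theta) + 0.5 * r0 @ r0
        h_new = -log_p(theta_new) + 0.5 * r_new @ r_new
        return theta_new if np.log(rng.uniform()) < h_old - h_new else theta

    # Toy usage: sample from a 2-D standard normal.
    log_p = lambda th: -0.5 * th @ th
    grad_log_p = lambda th: -th
    theta = np.zeros(2)
    for _ in range(1000):
        theta = hmc_step(theta, log_p, grad_log_p)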

Besides our work on scalable approximation methods for Bayesian inference, we have also developed a variational auto-encoder (VAE) model based on a determinantal point process (DPP) for big-data classification problems with imbalanced classes. The VAE is a generative model based on variational Bayes and is typically applied to high-dimensional data such as images and text. In the presence of imbalanced data, our method balances the latent space by using a DPP prior to up-weight the minority classes. We successfully applied our method, henceforth called DPP-VAE, to neural data classification and handwritten digit generation, both of which are high-dimensional in nature. Our method yields better results than the standard VAE when datasets have imbalanced classes.
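As background, the sketch below computes the standard per-example VAE training loss (the negative evidence lower bound) under the usual N(0, I) Gaussian prior; the DPP prior that replaces it in DPP-VAE is developed in the dissertation and is not reproduced here. Function names are illustrative.

    import numpy as np

    def gaussian_kl(mu, log_var):
        """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
        return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

    def bernoulli_log_lik(x, x_recon, eps=1e-7):
        """Reconstruction term log p(x | z) for pixel intensities in [0, 1]."""
        x_recon = np.clip(x_recon, eps, 1.0 - eps)
        return np.sum(x * np.log(x_recon) + (1.0 - x) * np.log(1.0 - x_recon))

    def negative_elbo(x, x_recon, mu, log_var):
        """Standard VAE loss: reconstruction error plus KL regularizer."""
        return -bernoulli_log_lik(x, x_recon) + gaussian_kl(mu, log_var)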
