We propose a new computationally efficient sampling scheme for Bayesian
inference involving high-dimensional probability distributions. Our method maps
the original parameter space into a low-dimensional latent space, explores the
latent space to generate samples, and maps these samples back to the original
space for inference. While our method can be used in conjunction with any
dimensionality reduction technique to obtain the latent space and any standard
sampling algorithm to explore that space, here we specifically
use a combination of autoencoders (for dimensionality reduction) and
Hamiltonian Monte Carlo (HMC, for sampling). To this end, we first run HMC in
the original parameter space to generate a set of initial samples, and then
use these samples to train an autoencoder. Next, starting with an initial
state, we use the encoding part of the autoencoder to map it to a point in the
low-dimensional latent space. Treating this point as the initial state of a
second HMC in the latent space, we generate a new state, which the decoding
part of the autoencoder then maps back to the original space. The resulting
point is treated as a Metropolis-Hastings (MH) proposal, which is either
accepted or rejected (a code sketch of this loop is given below). While the
induced dynamics in
the parameter space is no longer Hamiltonian, it remains time-reversible, and
the Markov chain can still converge to the canonical distribution once a
volume-correction term is included in the acceptance probability (written out
below). Dropping this correction results in convergence to an approximate but
reasonably accurate distribution. The
empirical results on several high-dimensional problems show that our method
can substantially reduce the computational cost of Bayesian inference.
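
As a rough illustration, the following sketch (not the authors'
implementation) walks through one iteration of the loop described above:
encode, take a latent HMC step, decode, and apply an MH accept/reject test. A
fixed linear encoder/decoder pair stands in for a trained autoencoder, a
standard Gaussian stands in for the posterior, and the volume correction is
omitted, so this is the approximate variant; all names are placeholders of
our own.

    import numpy as np

    rng = np.random.default_rng(0)
    D, d = 100, 5                  # original and latent dimensions (illustrative)

    # Stand-in for a trained autoencoder: a fixed linear encoder/decoder pair.
    A = rng.standard_normal((d, D)) / np.sqrt(D)   # encoder: z = A @ x
    B = np.linalg.pinv(A)                          # decoder: x = B @ z

    def log_target(x):
        # log pi(x); a standard Gaussian stands in for the true posterior
        return -0.5 * x @ x

    def latent_grad(z):
        # gradient of log pi(decode(z)) w.r.t. z, via the chain rule
        return B.T @ (-(B @ z))

    def leapfrog(z, p, eps=0.05, steps=20):
        # standard leapfrog integrator for the latent-space Hamiltonian
        p = p + 0.5 * eps * latent_grad(z)
        for _ in range(steps - 1):
            z = z + eps * p
            p = p + eps * latent_grad(z)
        z = z + eps * p
        p = p + 0.5 * eps * latent_grad(z)
        return z, -p               # momentum flip makes the map time-reversible

    def step(x):
        # one encode -> latent HMC -> decode -> MH accept/reject iteration
        z = A @ x                                  # encode current state
        p = rng.standard_normal(d)                 # fresh latent momentum
        z_new, p_new = leapfrog(z, p)
        x_new = B @ z_new                          # decode the proposal
        # energies evaluated through the decoder; no volume correction here,
        # so this is the approximate variant described above
        log_alpha = (log_target(x_new) - 0.5 * p_new @ p_new) \
                  - (log_target(B @ z) - 0.5 * p @ p)
        return x_new if np.log(rng.uniform()) < log_alpha else x

    x = rng.standard_normal(D)
    samples = []
    for _ in range(2000):
        x = step(x)
        samples.append(x)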
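
For reference, and in our own notation rather than the abstract's: for a
deterministic, time-reversible proposal map $T$ acting on the extended state
$s = (\theta, r)$ (position and momentum), the volume-corrected MH acceptance
probability takes the standard form

    \alpha(s) = \min\left\{ 1,\; \exp\!\big(H(s) - H(T(s))\big)\,
                \left|\det J_T(s)\right| \right\},

where $H$ is the Hamiltonian and $J_T$ is the Jacobian of $T$. For a
volume-preserving integrator such as leapfrog, $|\det J_T| = 1$ and the usual
HMC rule is recovered; the encode-integrate-decode map above is generally not
volume-preserving, which is why the determinant (the volume correction) is
needed, and dropping it yields the approximate variant.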