How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling

Abstract

Given a probability law π on a set S and a function g: S → R, suppose one wants to estimate the mean ḡ = ∫ g dπ. The Markov Chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution π. Typically one has no a priori bounds on the chain's mixing time, so even if simulations suggest rapid mixing one cannot infer rigorous confidence intervals for ḡ. But suppose there is also a separate method which (slowly) gives samples exactly from π. Using n exact samples, one could immediately get a confidence interval of length O(n−1/2). But one can do better. Use each exact sample as the initial state of a Markov chain, and run each of these n chains for m steps. We show how to construct confidence intervals which are always valid, and which, if the (unknown) relaxation time of the chain is sufficiently small relative to m/n, have length O(n−1 log n) with high probability.

© 2001 Rocky Mountain Mathematics Consortium.
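
The basic scheme described in the abstract can be illustrated with a minimal sketch: draw n exact samples from π, use each as the initial state of an m-step chain with stationary distribution π, and average g along each chain. The toy target, the Metropolis kernel, and all names below are illustrative assumptions, not taken from the paper; the interval shown is only a naive across-chain normal-theory interval (valid because each chain starts exactly in stationarity, so the n per-chain averages are i.i.d. and unbiased), not the paper's sharper O(n−1 log n) construction.

```python
import numpy as np

# Toy target: pi is the stationary law of a Metropolis random walk on
# {0, ..., K-1} with unnormalized weights w.  Here "exact sampling" just
# draws from the normalized weights; in practice it would stand in for a
# slow exact method such as coupling from the past.

rng = np.random.default_rng(0)
K = 50
w = np.exp(-0.5 * ((np.arange(K) - 20.0) / 6.0) ** 2)  # unnormalized weights
pi = w / w.sum()

def g(x):
    return float(x)                      # estimate the mean of g under pi

def exact_sample():
    return rng.choice(K, p=pi)           # stand-in for the slow exact sampler

def mcmc_step(x):
    # One Metropolis step of a +/-1 walk on the cycle; stationary for pi.
    y = (x + rng.choice([-1, 1])) % K
    if rng.random() < min(1.0, w[y] / w[x]):
        return y
    return x

def combined_estimate(n=30, m=2000):
    # Each of n exact draws seeds an m-step chain.  Since every chain starts
    # exactly pi-distributed, each per-chain average is unbiased for the mean
    # of g, and the n averages are independent.
    chain_means = np.empty(n)
    for i in range(n):
        x = exact_sample()
        total = 0.0
        for _ in range(m):
            x = mcmc_step(x)
            total += g(x)
        chain_means[i] = total / m
    est = chain_means.mean()
    # Naive 95% interval across chains; the paper's intervals are sharper
    # when the relaxation time is small relative to m/n.
    half = 1.96 * chain_means.std(ddof=1) / np.sqrt(n)
    return est, (est - half, est + half)

est, ci = combined_estimate()
print(f"estimate {est:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"true mean {np.dot(pi, np.arange(K)):.3f}")
```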
