Given a probability law π on a set S and a function g: S → R, suppose one wants to estimate the mean ḡ = ∫ g dπ. The Markov chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution π. Typically one has no a priori bounds on the chain's mixing time, so even if simulations suggest rapid mixing, one cannot infer rigorous confidence intervals for ḡ. But suppose there is also a separate method which (slowly) gives samples exactly from π. Using n exact samples, one could immediately get a confidence interval of length O(n^{−1/2}). But one can do better: use each exact sample as the initial state of a Markov chain, and run each of these n chains for m steps. We show how to construct confidence intervals which are always valid, and which, if the (unknown) relaxation time of the chain is sufficiently small relative to m/n, have length O(n^{−1} log n) with high probability.

© 2001 Rocky Mountain Mathematics Consortium.
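The sampling scheme described above can be sketched in a few lines. This is an illustrative toy only: the chain, the stationary law π, the function g, and the normal-theory interval at the end are all assumptions of this sketch, not the paper's always-valid interval construction. The example uses a two-state chain on S = {0, 1} with stationary distribution π = (0.75, 0.25) and g(x) = x, so the true mean is ḡ = 0.25; each of n exact draws from π seeds a chain that is run for m steps.

```python
import math
import random

# Hypothetical two-state chain on S = {0, 1}; transition matrix chosen so
# that pi = (0.75, 0.25) is stationary (check: 0.75 * 0.1 == 0.25 * 0.3).
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.3), (1, 0.7)]}

def exact_sample(rng):
    # Stand-in for the "slow" method that samples exactly from pi.
    return 0 if rng.random() < 0.75 else 1

def step(x, rng):
    # One transition of the Markov chain from state x.
    u, acc = rng.random(), 0.0
    for y, p in P[x]:
        acc += p
        if u < acc:
            return y
    return y  # numerical safety fallback

def estimate(n, m, seed=0):
    """Seed n chains with exact samples, run each m steps, average g."""
    rng = random.Random(seed)
    chain_means = []
    for _ in range(n):
        x = exact_sample(rng)          # exact draw as initial state
        total = 0.0
        for _ in range(m):
            x = step(x, rng)
            total += x                 # g(x) = x in this toy example
        chain_means.append(total / m)  # per-chain time average
    mean = sum(chain_means) / n
    # Naive normal-theory interval across the n independent chains; the
    # paper's construction is different and is valid unconditionally.
    var = sum((v - mean) ** 2 for v in chain_means) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, (mean - half, mean + half)
```

Because each chain starts exactly at stationarity, every per-chain time average is an unbiased estimate of ḡ, and the n chains are independent, which is what makes a rigorous interval possible at all.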