How Do Heuristics Expedite Markov Chain Search? Hitting-time Analysis of the Independence Metropolis Sampler
- Author(s): Maciuca, Romeo; Zhu, Song-Chun; et al.
Solving vision problems often entails searching a solution space for optimal states that have maximum Bayesian posterior probability or minimum energy. When the volume of the space is huge, exhaustive search becomes infeasible. Generic stochastic search (e.g. Markov chain Monte Carlo) can be even worse than exhaustive search, as it may visit a state repeatedly. To expedite the Markov chain search, one may use heuristics as proposal probabilities to guide the search toward promising portions of the space. Empirically, the recent data-driven Markov chain Monte Carlo (DDMCMC) scheme [14,12,2] achieves fast search in a number of vision tasks, which is attributed to two observations: (i) the posterior probabilities in vision tasks often have very low entropy and thus are narrowly focused on a small portion of the state space; (ii) the proposal probabilities computed by bottom-up methods can approximate the posterior well. In this paper we study the independence Metropolis sampler, a simple case used as a component in designing complex MCMC algorithms. We obtain an analytic formula for the expected time to first hit a certain state (the "first hitting time"), as well as very tight lower and upper bounds that depend on the total-variation distance between the target posterior and the heuristic proposal probabilities. These results show, albeit in simple cases, that with good proposal probabilities one can indeed reach optimal solutions in very few steps, regardless of the size of the original search space. This differs from previous analyses of the Markov chain convergence rate, which is bounded in terms of the second largest eigenvalue (modulus) and often reflects the worst case over the entire search space. In comparison, our analysis bears more relevance to the optimization tasks in vision.
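The mechanism the abstract describes can be made concrete with a small sketch. The independence Metropolis sampler proposes each new state from a fixed heuristic distribution q, independently of the current state, and accepts with the usual Metropolis-Hastings ratio. The toy experiment below is not from the paper: the state-space size, the low-entropy target p, and the comparison of a good proposal (q = p) against a uniform proposal are illustrative assumptions, meant only to show empirically that a proposal close to the target hits the mode in a handful of steps while an uninformed proposal needs on the order of the state-space size.

```python
import random

def independence_metropolis_hitting_time(p, q, target_state, start, rng,
                                         max_steps=100_000):
    """Run an independence Metropolis sampler on a discrete space until
    target_state is first reached; return the number of steps taken.
    p: target probabilities, q: proposal probabilities (lists of floats)."""
    states = list(range(len(p)))
    x = start
    for t in range(1, max_steps + 1):
        # Propose from q, independently of the current state x.
        y = rng.choices(states, weights=q)[0]
        # Metropolis-Hastings acceptance ratio for an independence proposal.
        a = (p[y] * q[x]) / (p[x] * q[y])
        if rng.random() < min(1.0, a):
            x = y
        if x == target_state:
            return t
    return max_steps

# Hypothetical low-entropy target: most mass on state 0 (the "optimal" state).
n = 200
p = [0.9] + [0.1 / (n - 1)] * (n - 1)
uniform = [1.0 / n] * n

rng = random.Random(0)
trials = 50
avg_good = sum(independence_metropolis_hitting_time(p, p, 0, 1, rng)
               for _ in range(trials)) / trials
avg_unif = sum(independence_metropolis_hitting_time(p, uniform, 0, 1, rng)
               for _ in range(trials)) / trials
print(f"q = p (good heuristic): avg hitting time ~ {avg_good:.1f}")
print(f"q uniform:              avg hitting time ~ {avg_unif:.1f}")
```

When q equals p the acceptance ratio is identically 1, so the chain draws i.i.d. samples from p and the hitting time of the mode is geometric with mean 1/p(mode), independent of n; the uniform proposal instead scales with the state-space size. This is the qualitative gap the paper's bounds quantify via the total-variation distance between p and q.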