Image Parsing: Unifying Segmentation, Detection, and Recognition
In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a "parsing graph", in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and re-configures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches: generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities, which are used to construct proposal probabilities that drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this paper, we focus on two types of visual patterns: generic visual patterns, such as texture and shading, and object patterns, including human faces and text. These types of patterns compete and cooperate to explain the image, and so image parsing unifies image segmentation, object detection, and recognition (if only generic visual patterns are used, image parsing reduces to image segmentation). We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object-specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions.
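The core computational idea, reversible Markov chain jumps whose acceptance step combines a generative posterior with bottom-up proposal probabilities, can be sketched with a toy Metropolis-Hastings example. This is a minimal illustration under simplifying assumptions, not the paper's algorithm: the state here is a single integer standing in for a parsing-graph configuration, `posterior` is a stand-in for the generative model p(W|I), and `proposal_prob` is a stand-in for a discriminatively constructed proposal q(W → W').

```python
import math
import random

def posterior(state):
    # Toy unnormalized posterior, a stand-in for the generative p(W|I):
    # states near 5 are preferred.
    return math.exp(-0.5 * (state - 5.0) ** 2)

def proposal_prob(src, dst):
    # Toy bottom-up proposal q(src -> dst): uniform over the two
    # integer neighbors (a stand-in for discriminative proposals).
    return 0.5 if abs(src - dst) == 1 else 0.0

def mh_step(state, rng):
    # Propose a jump using the bottom-up proposal distribution.
    candidate = state + rng.choice([-1, 1])
    # The acceptance ratio weighs the posterior gain against the
    # forward/backward proposal probabilities; accepting with this
    # probability makes the jump reversible (detailed balance).
    ratio = (posterior(candidate) * proposal_prob(candidate, state)) / (
        posterior(state) * proposal_prob(state, candidate)
    )
    return candidate if rng.random() < min(1.0, ratio) else state

rng = random.Random(0)
state = 0
samples = []
for i in range(6000):
    state = mh_step(state, rng)
    if i >= 1000:  # discard burn-in
        samples.append(state)
```

After burn-in, the chain spends most of its time near the posterior mode, illustrating how good proposals let the sampler find high-probability configurations quickly; in the full framework the same acceptance logic governs structural moves such as splitting, merging, and switching pattern models.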