Multiscale Generative Model of Human Faces
- Author(s): Zijian Xu, Hong Chen, Song-Chun Zhu, et al.
In this paper, we propose a framework for modeling human faces across scales. As a person walks toward the camera, more facial details are revealed, so more random variables and parameters must be introduced. Accordingly, a series of existing generative models is organized into five regimes, which form nested probabilistic families. The generative model in a higher regime is augmented by (1) adding more latent variables and feature extractors, and (2) enlarging the dictionary of description, e.g., PCA bases, local parts, or sketch patches. Minimum description length (MDL) is used as the criterion for model selection and transition. As observed in our experiments, the optimal model switches between regimes as the scale changes. A sequence of tasks, such as face detection, recognition, sketching, and super-resolution, can be accomplished with the models in the different regimes.
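The MDL criterion mentioned above can be illustrated with a generic toy example (not the paper's face models): among nested candidate models, pick the one minimizing a two-part code length, i.e., the cost of encoding the residuals plus a (k/2) log n cost for the k fitted parameters. The sketch below applies this to polynomial fits of increasing degree; the data, penalty form, and degree range are all illustrative assumptions.

```python
import numpy as np

# Toy data: a cubic trend plus a small high-frequency perturbation that no
# low-degree polynomial can absorb (it stands in for observation noise).
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x - 3.0 * x**3 + 0.1 * np.sin(50.0 * x)

def mdl(rss, k, n):
    """Two-part description length: data cost (Gaussian negative
    log-likelihood of the residuals, up to additive constants) plus a
    (k/2) log n model cost for k fitted parameters."""
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

scores = {}
for degree in range(8):  # nested family: degree-d polynomials
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    scores[degree] = mdl(rss, degree + 1, n)

best = min(scores, key=scores.get)
print(best)
```

Here the cubic is selected: richer models reduce the residual too little to pay their extra parameter cost, mirroring how the optimal regime in the paper is the one whose added dictionary elements are worth their description length at the current scale.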