Human hair is a highly complex visual pattern whose representation is rarely studied in the vision literature, despite its important role in human recognition. In this paper, we propose a generative model for hair representation and hair sketching that is far more compact than the physically based models used in graphics. We decompose a color hair image into three bands: a color band (a) (via the Luv transform), a low-frequency band (b) for lighting variations, and a high-frequency band (c) for the hair pattern. We then propose a three-level generative model for the hair image (c). In this model, image (c) is generated by a vector field (d) that represents hair orientation and gradient strength and direction; this vector field is in turn generated by a hair sketch layer (e). We identify five types of primitives for the hair sketch, each specifying the orientations of the vector field on the two sides of the sketch. With the five-layer representation (a)-(e) computed, we can reconstruct vivid hair images and generate hair sketches. We test our algorithm on a large data set of hair images and report results in the experiments.
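A minimal sketch of the decomposition step follows, assuming OpenCV's Luv conversion, a Gaussian low-pass for band (b), and a structure tensor as one plausible way to obtain the orientation/strength field (d); the function names and smoothing scales are our illustration, not the paper's published implementation.

```python
# Hedged sketch: three-band decomposition (a)-(c) and an orientation field (d).
import cv2
import numpy as np

def decompose_hair_image(bgr, sigma=8.0):
    """Split a color hair image into (a) color, (b) lighting, (c) pattern bands."""
    luv = cv2.cvtColor(bgr, cv2.COLOR_BGR2Luv).astype(np.float32)
    L, u, v = cv2.split(luv)
    color_band = np.dstack([u, v])                 # (a) chromaticity channels
    low_band = cv2.GaussianBlur(L, (0, 0), sigma)  # (b) slow lighting variation
    high_band = L - low_band                       # (c) high-frequency hair pattern
    return color_band, low_band, high_band

def orientation_field(high_band, sigma=3.0):
    """Per-pixel orientation and strength of band (c) via the structure tensor."""
    gx = cv2.Sobel(high_band, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(high_band, cv2.CV_32F, 0, 1, ksize=3)
    jxx = cv2.GaussianBlur(gx * gx, (0, 0), sigma)
    jyy = cv2.GaussianBlur(gy * gy, (0, 0), sigma)
    jxy = cv2.GaussianBlur(gx * gy, (0, 0), sigma)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)       # dominant gradient angle
    strength = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    return theta, strength  # hair strands run perpendicular to the gradient angle
```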
In this paper, we propose a framework for modeling human faces over scales. As a person walks towards the camera, more details of the face are revealed, and thus more random variables and parameters must be introduced. Accordingly, a series of existing generative models is organized into five regimes, which form nested probabilistic families. The generative model in a higher regime is augmented by (1) adding more latent variables and feature extractors, and (2) enlarging the dictionary of descriptors, e.g., PCA bases, local parts, or sketch patches. Minimum description length (MDL) is used as the criterion for model selection and transition. As observed in our experiments, the optimal model switches among the regimes as the scale changes. A sequence of tasks, such as face detection, recognition, sketching, and super-resolution, can be accomplished with the models in the different regimes.
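To make the selection criterion concrete, here is a hedged sketch of MDL-driven model choice. The paper's five regimes are richer than plain PCA; this only illustrates the criterion itself, using a probabilistic-PCA likelihood with a BIC-style penalty as the two-part code. All names, the penalty form, and the assumption n >= d and k < d are ours.

```python
# Hedged sketch: pick the model dimension with minimum description length.
import numpy as np

def description_length(X, k):
    """Two-part code length for fitting X (n samples x d dims, n >= d assumed)
    with k PCA components: PPCA negative log-likelihood + (params/2) log n."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    var = (np.linalg.svd(Xc, full_matrices=False)[1] ** 2) / n  # eigen-spectrum
    lam, resid = var[:k], var[k:].mean()     # retained / averaged noise floor
    neg_loglik = 0.5 * n * (np.log(2 * np.pi * lam + 1e-12).sum()
                            + (d - k) * np.log(2 * np.pi * resid + 1e-12) + d)
    n_params = d + k * d + k + 1             # mean + bases + eigenvalues + noise
    return neg_loglik + 0.5 * n_params * np.log(n)

def select_regime(X, candidate_ks=(1, 5, 10, 20, 40)):
    """Return the candidate model (here: PCA dimension) with minimal DL."""
    return min(candidate_ks, key=lambda k: description_length(X, k))
```

As the image scale grows, the likelihood term rewards richer models faster than the penalty grows, so the minimizer moves to a higher regime, mirroring the regime switching observed in the paper's experiments.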
In this paper we present a generative, high-resolution face representation that extends the well-known active appearance model (AAM) [5], [6], [7] with two additional layers. (i) One layer refines the global AAM (PCA) model with a dictionary of learned face components to account for the shape and intensity variability of the eyes, eyebrows, nose, and mouth. (ii) The other layer divides the face skin into nine zones with a learned dictionary of sketch primitives to represent skin marks and wrinkles. This model is no longer of fixed dimension; it is flexible in that it can select diverse representations from the dictionaries of face components and skin features depending on the complexity of the face. The selection is modulated by grammatical rules through hidden "switch" variables. Our comparison experiments demonstrate that this model achieves nearly lossless coding of faces at high resolution (256 × 256 pixels) at a low bit rate. We also show that the generative model can easily produce cartoon sketches by changing the rendering dictionary. Our face model is aimed at a number of applications, including cartoon sketching in non-photorealistic rendering, super-resolution in image processing, and low-bit-rate face communication on wireless platforms.
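One way to read the "switch" variables is as a per-component coding decision: keep the coarse global (AAM/PCA) reconstruction, or spend extra bits naming a refined dictionary element. The sketch below illustrates that decision with an idealized Gaussian residual code; the cost model, function names, and greedy per-component choice are our assumptions, not the paper's exact grammatical rules.

```python
# Hedged sketch: choose between the AAM reconstruction and a dictionary
# element for one face component, by minimum coding cost in bits.
import numpy as np

def code_cost(patch, recon, index_bits):
    """Bits to send the residual (idealized Gaussian code) plus a dictionary index."""
    resid = patch - recon
    sigma2 = resid.var() + 1e-8
    return 0.5 * patch.size * np.log2(2 * np.pi * np.e * sigma2) + index_bits

def switch_component(patch, aam_recon, dictionary):
    """Return ('aam', recon) or ('dict', element), whichever codes more cheaply."""
    best = ('aam', aam_recon, code_cost(patch, aam_recon, index_bits=0.0))
    bits = np.log2(len(dictionary))      # cost of naming one dictionary entry
    for elem in dictionary:              # elem: candidate component patch
        c = code_cost(patch, elem, bits)
        if c < best[2]:
            best = ('dict', elem, c)
    return best[:2]
```

Under this reading, a simple face keeps the cheap global code everywhere, while a complex face pays for refined components only where they shorten the total code, which is consistent with the variable-dimension, nearly lossless coding reported above.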