eScholarship
Open Access Publications from the University of California

A Generative Model of Human Hair for Hair Sketching

  • Author(s): Hong Chen, Song-Chun Zhu, et al.
Abstract

Human hair is a highly complex visual pattern whose representation is rarely studied in the vision literature, despite its important role in human recognition. In this paper, we propose a generative model for hair representation and hair sketching that is far more compact than the physically based models used in graphics. We decompose a color hair image into three bands: a color band (a) (via the Luv transform), a low-frequency band (b) capturing lighting variations, and a high-frequency band (c) capturing the hair pattern. We then propose a three-level generative model for the hair image (c). In this model, image (c) is generated by a vector field (d) that represents hair orientation, gradient strength, and direction; this vector field is in turn generated by a hair sketch layer (e). We identify five types of primitives for the hair sketch, each of which specifies the orientations of the vector field on the two sides of the sketch. With the five-layer representation (a-e) computed, we can reconstruct vivid hair images and generate hair sketches. We test our algorithm on a large data set of hair images, and some results are reported in the experiments.
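The band decomposition described above can be sketched in code. The following is a minimal grayscale illustration, not the paper's implementation: the paper works on the luminance channel of a Luv decomposition and does not specify its filters here, so a separable Gaussian blur (with an assumed sigma) stands in for the low-pass step, and the high-frequency band is taken as the residual.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def decompose_bands(gray, sigma=4.0):
    """Split a grayscale hair image into a low-frequency lighting
    band (b) and a high-frequency texture band (c).

    The blur is a separable Gaussian: filter each row, then each
    column. 'same' mode keeps the output the size of the input.
    """
    k = gaussian_kernel(sigma)
    low = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, gray)
    low = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, low)
    high = gray - low          # residual carries the hair pattern
    return low, high
```

By construction the two bands sum back to the input image, so the decomposition is lossless and reconstruction is a simple addition.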
