Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Variational Models for Fine Structures


Mathematical models in imaging science attempt to understand and analyze the underlying quantitative structure of images. The most popular mathematical techniques center around a variational principle. In general, variational methods are formulated by specifying an energy whose minimizers possess the properties associated with an ideal image. Thus far, variational models have successfully addressed the classical problems in imaging, namely denoising, deblurring, segmentation, and inpainting. Most work has concentrated on reconstructing homogeneous intensity regions with jump discontinuities (i.e., edges), one type of fine structure. More recent work includes models which incorporate tools for texture recovery. In practice, the most challenging components to recover from images are those which reside on fine scales, namely the jumps and textures. This thesis focuses on the recovery and understanding of fine-scale information.
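As a concrete instance of this variational principle (a standard example, not a formulation specific to this thesis), the classical Rudin-Osher-Fatemi denoising model recovers an image u from noisy data f by balancing a total-variation regularizer against a data-fidelity term:

```latex
\min_{u \in BV(\Omega)} \; \int_\Omega |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_\Omega (u - f)^2 \, dx
```

Here \lambda > 0 weights fidelity against regularity; minimizers favor piecewise-constant regions separated by sharp edges, which is why such energies recover the cartoon part of an image.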

In many image segmentation methods, the edge set is regularized by the Hausdorff measure (i.e., length). It is known that minimizers of models containing length regularizers have edge segments whose endpoints either terminate perpendicularly at the boundary of the domain, terminate at a triple junction where three segments meet, or terminate at a free endpoint where the segment connects to no other edge or to the boundary of the domain. However, standard segmentation methods (those based on the level set method) can only capture edge structures containing the first two types of endpoints. Part I generalizes level-set-based image segmentation methods to detect free endpoint structures, making it possible to capture a larger class of edge structures with the length regularizer while still recovering homogeneous regions.
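A standard length-regularized segmentation energy of the kind described above (stated here for context; the thesis's precise formulation may differ) is the Mumford-Shah functional, which jointly seeks a piecewise-smooth approximation u of the data f and an edge set K:

```latex
E(u, K) \;=\; \int_{\Omega \setminus K} |\nabla u|^2 \, dx \;+\; \mu \int_\Omega (u - f)^2 \, dx \;+\; \nu \, \mathcal{H}^1(K)
```

The \mathcal{H}^1 term is the length regularizer; its minimizing edge sets exhibit exactly the endpoint behaviors listed above: perpendicular contact with the domain boundary, triple junctions, and free endpoints (the crack-tip case).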

Aside from edge recovery, cartoon-texture regularization applied to ill-posed imaging problems allows for the reconstruction of many small-scale (patterned) details. The cartoon component is typically modeled by functions of bounded variation and has proven to be a successful descriptor of the large geometric structures in images. However, current texture models are not universal and may depend on the problem or on the particular class of images. In general, texture is characterized by its highly oscillatory nature and its well-patterned structure. Exploiting each of these properties, two texture models are provided: one uses weak function spaces to promote oscillations, and the other uses matrix theory to define patterns.

The first texture regularization is measured by duality with a space of functions approximating W^{1,∞}, thereby encouraging oscillations. To provide a differentiable approximation to the L^∞ norm, a concentration-of-measures approach is taken. This model works well for reconstructing texture in highly degraded, blurry images.
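The idea of smoothly approximating the L^∞ norm can be illustrated with finite-dimensional p-norms, which are differentiable (away from zero) for finite p and converge to the max norm as p grows. This sketch is only illustrative and is not the thesis's actual concentration-of-measures scheme; the function name is made up:

```python
import numpy as np

def p_norm(v, p):
    """Discrete L^p norm; smooth in v (away from 0) for finite p."""
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

v = np.array([0.5, -2.0, 1.5])
sup = np.max(np.abs(v))                      # the L^infinity norm, here 2.0
approx = [p_norm(v, p) for p in (2, 8, 32, 128)]
# Each p-norm upper-bounds the sup norm and tightens as p increases.
```

The same mechanism underlies differentiable surrogates for sup-norm constraints: optimizing a large-but-finite p-norm gives gradients everywhere while staying close to the L^∞ value.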

The second texture model applies the nuclear norm to patches in the image, interpreting the texture patches as low-rank matrices. This provides a mathematical description of highly patterned texture as well as an easy-to-implement numerical method. This particular texture model has the advantage of separating noise from texture, and it has been shown to better reconstruct texture in other applications such as denoising, deblurring with a known kernel, sparse reconstruction, and pattern regularization.
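The low-rank view of patterned texture can be checked on a synthetic example (a hypothetical illustration, not the thesis's numerical method): a perfectly periodic stripe patch is rank-1, so its nuclear norm, the sum of its singular values, is concentrated in a single mode, whereas i.i.d. noise spreads energy across all singular values:

```python
import numpy as np

def nuclear_norm(a):
    """Nuclear norm: sum of singular values (the convex surrogate for rank)."""
    return np.linalg.svd(a, compute_uv=False).sum()

n = 16
profile = np.sin(2 * np.pi * np.arange(n) / 4.0)    # periodic 1-D profile
stripes = np.outer(profile, np.ones(n))             # rank-1 striped "texture" patch
noise = np.random.default_rng(0).standard_normal((n, n))

s_tex = np.linalg.svd(stripes, compute_uv=False)
s_noise = np.linalg.svd(noise, compute_uv=False)

# The patterned patch concentrates its nuclear norm in one mode;
# the noise patch spreads it across all modes.
tex_fraction = s_tex[0] / s_tex.sum()        # essentially 1.0
noise_fraction = s_noise[0] / s_noise.sum()  # well below 1
```

This gap between the singular-value profiles of pattern and noise is what lets nuclear-norm regularization keep the periodic structure of a patch while discarding the noise.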
