eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Modelling and MMSE reconstruction solutions for image and video super-resolution

Abstract

Super-resolution considers the problem of increasing the spatial resolution of an image or video from one or more observed images or frames. In both cases, the problem seeks to determine a representation of the content at a higher spatial resolution than was originally possible to acquire, store, or transmit. For the case of images, the problem has been considered in the literature for over two decades and a variety of techniques exist. Far fewer results exist for the case of digital video, a problem of increasing importance as digital video becomes more prevalent. The problem is considered through two separate processes: modelling, which describes how multiple individual low-resolution observed images/frames are related to a single high-resolution equivalent, and reconstruction, which recovers the unknown high-resolution version from the observations and the results of the modelling process. As presented, the complete problem is driven by the proposed reconstruction solution, and novel aspects of the modelling problem are introduced based on the needs of the particular reconstruction solution. For the case of images, a linear minimum mean-squared error (LMMSE) frequency-domain solution is proposed using a filter bank model and a stationary stochastic signal assumption. The solution requires estimation of a high-resolution image's spectral density from its low-resolution observations. Novel parametric spectral models for images are introduced and applied to the problem. In the case of video sequences, the presence of temporal motion over multiple frames necessarily leads to more complex registration models, generally prohibiting the application of most standard still-image solutions. Previously, super-resolution methods intended for video have been limited to relatively simple motion models, e.g., global translational motion, based on a reconstruction requirement that the distortion and motion models commute. Relying on a reverse motion model, the proposed approach removes this limitation, consequently extending the result to cases of arbitrary motion models. With the required modelling in place, an LMMSE spatial-domain reconstruction is used to determine the reconstructed sequence.
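The abstract does not reproduce the estimators themselves, so the following is only a sketch of the generic LMMSE forms it alludes to, in notation of my own choosing rather than the thesis's; the observation operator H, signal and noise spectral densities S_x and S_n, and covariances R_x and R_n are assumed symbols, and the thesis's filter-bank formulation and parametric spectral models would supply the quantities that appear here as known. For the stationary frequency-domain case, with observation model Y(\omega) = H(\omega) X(\omega) + N(\omega), the standard Wiener-type LMMSE reconstruction is

\[
\hat{X}(\omega) \;=\; \frac{H^{*}(\omega)\, S_x(\omega)}{\lvert H(\omega)\rvert^{2}\, S_x(\omega) + S_n(\omega)}\; Y(\omega),
\]

which is why the spectral density S_x of the high-resolution image must be estimated from the low-resolution observations. For a spatial-domain model y = Hx + n with zero-mean x and n, the corresponding standard LMMSE estimate is

\[
\hat{x} \;=\; R_x H^{\mathsf{T}}\left( H R_x H^{\mathsf{T}} + R_n \right)^{-1} y,
\]

where, for video, H would encode the distortion together with the (reverse) motion model described above.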
