UC Riverside Electronic Theses and Dissertations

Video Enhancement with Internal Learning and Blind Priors

Creative Commons BY-NC-SA 4.0 license
Abstract

With the growing use of mobile cameras in computer vision and multimedia applications, the demand for high-quality visual content continues to rise. However, videos captured with current consumer-grade cameras often suffer from quality issues such as motion blur, low frame rate, low resolution, and rolling shutter artifacts. These issues stem from several causes, including low shutter frequency, long exposure times, the type of imaging sensor, and movement of the device itself, all of which limit the quality of the captured video. Since the vast majority of videos today are recorded with mobile cameras, there is a clear need to improve the quality of the videos these devices produce. In this thesis, we focus on enhancing video quality by leveraging the spatio-temporal internal structure of a given video together with information available from external datasets. Most existing works make a prior assumption about the degradation model that affects video quality, for example assuming that all input frames are blurry, that the degradation kernel for spatio-temporal down-sampling is known, or that rolling shutter artifacts are absent. In many real-world applications, however, these assumptions do not hold because the input priors are unknown. We term such unknown priors blind priors for the task of video enhancement. We first present our work on joint video deblurring and interpolation without the prior assumption that input frames are always blurry, using internal information from neighbouring frames to deblur and interpolate between frames. We then describe our approach to blind spatio-temporal video super-resolution with an unknown down-sampling kernel, which leverages both an external dataset and the internal structure of the given video. Next, we present our work on joint rolling shutter correction and super-resolution, which recovers a high-resolution global shutter video by exploiting the patch-recurrence property of videos. Finally, we show an application of enhancement techniques to biomedical imaging, where we use quantization in feature space for unsupervised image denoising. We demonstrate that the proposed approaches effectively exploit internal learning for video enhancement and achieve strong performance under a variety of real-world blind prior settings.
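The central theme above is internal learning: exploiting the recurrence of patches within the input video itself rather than relying only on a model trained on external data. As a purely illustrative aid, the PyTorch-style sketch below shows one common way such self-supervision can be set up (in the spirit of zero-shot super-resolution); it is not the method proposed in this thesis, and the network, the proxy degradation, and all names and hyperparameters (TinyRestorer, sample_patches, patch size, learning rate) are assumptions made for this example only.

```python
# Hypothetical sketch of "internal learning": adapting a small CNN to a single
# input video by sampling training patches from that video itself.
# NOT the thesis's method; everything here is illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyRestorer(nn.Module):
    """A small CNN that maps degraded patches back toward their originals."""

    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual prediction


def sample_patches(video, patch=64, batch=16):
    """Randomly crop patches from the input video itself.

    video: tensor of shape (T, C, H, W) with values in [0, 1].
    """
    T, C, H, W = video.shape
    ts = torch.randint(0, T, (batch,))
    ys = torch.randint(0, H - patch + 1, (batch,))
    xs = torch.randint(0, W - patch + 1, (batch,))
    return torch.stack([video[t, :, y:y + patch, x:x + patch]
                        for t, y, x in zip(ts, ys, xs)])


def internal_training_step(model, optimizer, video):
    """One self-supervised step: synthesize degraded/clean pairs from the
    given video with a simple proxy degradation (downsample then upsample)
    and train the network to undo it."""
    clean = sample_patches(video)
    degraded = F.interpolate(F.avg_pool2d(clean, 2), scale_factor=2,
                             mode="bilinear", align_corners=False)
    loss = F.l1_loss(model(degraded), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    video = torch.rand(8, 3, 128, 128)   # stand-in for an input clip
    model = TinyRestorer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for step in range(10):
        print(step, internal_training_step(model, opt, video))
```

The point the sketch illustrates is that the training pairs are synthesized from the test video alone, so the model adapts to that video's own statistics; how the degradation is modeled (or estimated blindly) is exactly what distinguishes the approaches summarized in the abstract.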
