Generative Models for Low-Dimensional Video Representation and Reconstruction
Published Web Location
https://doi.org/10.1109/tsp.2020.2977256

Abstract
Generative models have received considerable attention in signal processing and compressive sensing for their ability to generate high-dimensional natural images from low-dimensional codes. In the context of compressive sensing, if the unknown image belongs to the range of a pretrained generative network, then we can recover the image by estimating the underlying compact latent code from the available measurements. In practice, however, a given pretrained generator can only reliably generate images that are similar to its training data. To overcome this challenge, a number of methods have recently been proposed that use an untrained generator structure as a prior while solving the signal recovery problem. In this paper, we propose a similar method that jointly updates the weights of the generator and the latent codes while recovering a video sequence from compressive measurements. We use a single generator to generate the entire video. To exploit the temporal redundancy in a video sequence, we impose a low-rank constraint on the latent codes, which places a low-dimensional manifold model on the generated video sequence. We evaluate the performance of the proposed methods on different video compressive sensing problems under different settings and compare them against several state-of-the-art methods. Our results demonstrate that the proposed methods provide better or comparable accuracy with lower computational and memory complexity than the existing methods.
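The abstract describes jointly optimizing an untrained generator's weights and per-frame latent codes against compressive measurements, while keeping the latent codes on a low-rank (low-dimensional manifold) model. The following is a minimal toy sketch of that idea, not the paper's actual algorithm or architecture: it uses a hypothetical one-layer tanh "generator", random Gaussian measurement matrices, plain gradient descent on both the codes and the weights, and a truncated-SVD projection of the latent-code matrix to enforce the rank constraint. All sizes, step sizes, and the network itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (not from the paper): T frames of n pixels each,
# k latent dimensions, m < n compressive measurements per frame,
# latent codes constrained to a rank-r subspace.
T, n, k, m, r = 8, 64, 16, 32, 2

# Synthetic "video" whose frames come from latent codes in an
# r-dimensional subspace, generated through a fixed nonlinear map.
Z_true = rng.standard_normal((T, r)) @ rng.standard_normal((r, k))
W_true = rng.standard_normal((k, n))
X = np.tanh(Z_true @ W_true)

# Per-frame compressive measurements y_t = A x_t (same A for all frames here).
A = rng.standard_normal((m, n)) / np.sqrt(m)
Y = X @ A.T

# Untrained one-layer generator G(z; W) = tanh(z W), randomly initialized.
Z = 0.1 * rng.standard_normal((T, k))
W = 0.1 * rng.standard_normal((k, n))

def meas_loss(Z, W):
    """0.5 * ||A G(Z; W) - Y||_F^2, the measurement-consistency loss."""
    return 0.5 * np.sum((np.tanh(Z @ W) @ A.T - Y) ** 2)

loss_init = meas_loss(Z, W)
lr = 0.01
for _ in range(2000):
    Xhat = np.tanh(Z @ W)
    R = (Xhat @ A.T - Y) @ A        # gradient of the loss w.r.t. G(Z; W)
    G = R * (1.0 - Xhat ** 2)       # chain rule through tanh
    Z -= lr * (G @ W.T)             # joint update: latent codes...
    W -= lr * (Z.T @ G)             # ...and generator weights
    # Enforce the low-rank constraint on the latent-code matrix
    # by projecting Z onto the set of rank-r matrices (truncated SVD).
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s[r:] = 0.0
    Z = (U * s) @ Vt

loss_final = meas_loss(Z, W)
```

After the loop, `Z` has rank at most `r` and the measurement residual is smaller than at initialization; the truncated-SVD step is one simple way to realize the low-dimensional manifold model on the generated sequence.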