eScholarship
Open Access Publications from the University of California

Application of Generative Adversarial Network on Image Style Transformation and Image Processing

  • Author(s): Wang, Anshu
  • Advisor(s): Wu, Yingnian
Abstract

Image-to-image translation is a class of computer vision problems that aim to learn a mapping between two or more domains. Recent research in computer vision and deep learning has produced powerful tools for this task. Conditional adversarial networks serve as a general-purpose solution for image-to-image translation problems. Deep convolutional neural networks can learn image representations that can be applied to recognition, detection, and segmentation. Generative Adversarial Networks (GANs) have achieved success in image synthesis. However, traditional models that require paired training data may not be applicable in many situations due to a lack of paired data.

Here we review and compare two models for unsupervised image-to-image translation: CycleGAN and Unsupervised Image-to-Image Translation Networks (UNIT). Both models adopt cycle consistency, which enables unsupervised learning without paired data. We show that both models can successfully perform image style translation. The experiments reveal that CycleGAN generates more realistic results, while UNIT generates more varied images and better preserves the structure of the input images.
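The cycle-consistency idea mentioned above can be sketched as a reconstruction penalty: a generator G maps domain X to Y, a second generator F maps Y back to X, and both round trips F(G(x)) ≈ x and G(F(y)) ≈ y are penalized with an L1 term. The following is a minimal NumPy sketch, not the authors' implementation; the function names `G`, `F` and the weight `lam` are illustrative placeholders.

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """L1 cycle-consistency loss (as in CycleGAN-style training).

    G : callable mapping domain X -> Y
    F : callable mapping domain Y -> X
    x, y : sample arrays from domains X and Y
    lam : weight on the cycle term (illustrative default)
    """
    forward_cycle = np.mean(np.abs(F(G(x)) - x))   # x -> Y -> x reconstruction error
    backward_cycle = np.mean(np.abs(G(F(y)) - y))  # y -> X -> y reconstruction error
    return lam * (forward_cycle + backward_cycle)

# If G and F are exact inverses, both round trips reconstruct perfectly
# and the loss is zero; imperfect generators incur a positive penalty.
G = lambda a: a + 1.0
F = lambda a: a - 1.0
loss = cycle_consistency_loss(G, F, np.ones((2, 2)), np.zeros((2, 2)))
```

In practice this term is added to the adversarial losses of both generators, which is what lets training proceed without paired (x, y) examples.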
