eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Towards Fast and Stable GAN via Free Adversarial Training

  • Author(s): Zhong, Jiachen
  • Advisor(s): Hsieh, Cho-Jui
Abstract

State-of-the-art Generative Adversarial Networks (GANs) often rely on stabilization methods that constrain the global Lipschitz continuity of the discriminator. However, this global constraint may result in under-fitting and slow convergence. RobGAN proposed controlling the local Lipschitz value through adversarial training instead, achieving improved performance and faster convergence. However, the adversarial training procedure in RobGAN significantly increases computational cost, which makes RobGAN less practical.

In this thesis, we propose to improve the training speed of RobGAN through free adversarial training. In addition, we improve the loss function to address an inherent flaw of the auxiliary classifier used in RobGAN. We evaluate our method on three datasets: CIFAR10, CIFAR100, and IMAGENET-143. Compared with previous works, our method yields significant improvements in both generation quality and training time across all experiments on these datasets. Moreover, we offer a new perspective on why adversarial training of the discriminator can improve GAN training.
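The core idea of free adversarial training (introduced by Shafahi et al. for classifiers, and applied here to the discriminator) is that a single backward pass yields the gradient with respect to both the weights and the input, so each minibatch can be "replayed" several times to update the model and grow the adversarial perturbation at no extra gradient cost. The following is a minimal NumPy sketch of that scheme on a logistic model, not the thesis's actual GAN implementation; the model, hyperparameters, and function names are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def free_adversarial_train(X, y, eps=0.1, lr=0.5, m=4, epochs=5, seed=0):
    """Sketch of free adversarial training on a logistic model.

    Each batch is replayed m times; one gradient computation per replay
    supplies both the weight update (descent) and the perturbation
    update (ascent), so adversarial examples come "for free".
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    delta = np.zeros_like(X)              # perturbation persists across replays
    for _ in range(epochs):
        for _ in range(m):                # m replays of the same batch
            x_adv = X + delta
            p = sigmoid(x_adv @ w)
            err = p - y                   # dL/dlogit for the logistic loss
            grad_w = x_adv.T @ err / len(y)   # gradient w.r.t. weights
            grad_x = np.outer(err, w)         # gradient w.r.t. inputs (same pass)
            w -= lr * grad_w                              # descend on weights
            delta = np.clip(delta + eps * np.sign(grad_x),
                            -eps, eps)                    # ascend on inputs
    return w, delta
```

In a standard K-step adversarial training loop, each weight update costs K extra backward passes to build the perturbation; here the replay loop amortizes that cost, which is the mechanism the thesis exploits to remove RobGAN's training-time overhead.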
