Augmenting and Sampling for GAN Training and Generating Natural Adversaries
- Author(s): Zhao, Zhengli
- Advisor(s): Singh, Sameer, et al.
Generative Adversarial Networks (GANs) provide a novel framework and powerful tools for machine learning, especially for deep representation learning and generative modeling. GANs have been successfully applied to a variety of prominent applications, including image synthesis, photo editing, video prediction, text generation, and domain adaptation. Despite these advances, GANs still face crucial challenges in practice, such as unstable training and frequent mode collapse.
In this dissertation, we develop a series of methods for improving the training of GANs from different perspectives, and utilize GANs to generate natural adversaries for evaluating the robustness and interpreting the behavior of machine learning models. We propose applying consistency regularization to the GAN training procedure with respect to augmentations of both real data and generated examples, along with regularizing the generator directly via latent optimization. We thoroughly study the effectiveness of a broad set of image augmentations for improving generation performance when combined with unsupervised learning techniques such as consistency regularization and contrastive losses. We also develop a simple yet effective sampling scheme based on the predictions of the discriminator, which has been shown to stabilize GAN training and alleviate mode collapse. All of these methods generalize across GAN variants, achieve state-of-the-art generation performance, and can be adopted as a standard part of the GAN training toolkit. Finally, as a novel application of GANs, we utilize them to generate natural adversarial examples that are meaningful and semantically close to input examples for both visual and textual data, which can help evaluate and interpret black-box models.
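The consistency regularization described above can be illustrated with a minimal sketch. This is not the dissertation's implementation: it assumes a toy linear discriminator, flattened 1-D "images", and a horizontal flip as the augmentation, and it shows only the regularization term that would be added (scaled by a weight) to the usual discriminator loss. The penalty is applied to both real and generated batches, in the spirit of balanced consistency regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    """Toy linear discriminator: one logit per flattened 'image' (hypothetical stand-in)."""
    return x @ w

def augment(x):
    """Illustrative semantic-preserving augmentation: horizontal flip of 1-D 'images'."""
    return x[:, ::-1]

def consistency_reg(x_real, x_fake, w):
    """Penalize the squared change in the discriminator's output under augmentation,
    for both real data and generated examples."""
    d_real, d_real_aug = discriminator(x_real, w), discriminator(augment(x_real), w)
    d_fake, d_fake_aug = discriminator(x_fake, w), discriminator(augment(x_fake), w)
    return np.mean((d_real - d_real_aug) ** 2) + np.mean((d_fake - d_fake_aug) ** 2)

x_real = rng.normal(size=(8, 16))   # batch of real examples
x_fake = rng.normal(size=(8, 16))   # batch of generated examples
w = rng.normal(size=16)             # discriminator parameters
reg = consistency_reg(x_real, x_fake, w)  # added to the GAN loss, scaled by a lambda
```

Minimizing this term pushes the discriminator toward producing the same prediction for an example and its augmented copy, which is the consistency property the regularizer enforces.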
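The discriminator-based sampling scheme can likewise be sketched in a hedged, simplified form. The `d_score` function below is a hypothetical stand-in for a trained discriminator's probability that a sample is real (here, samples near the origin count as "real-looking"), and the generator is a placeholder Gaussian; the point is only the mechanism of keeping generated samples with probability tied to the discriminator's prediction, so that low-quality samples are filtered out.

```python
import numpy as np

rng = np.random.default_rng(1)

def d_score(x):
    """Hypothetical trained discriminator: probability in (0, 1] that x is real.
    For illustration, samples near the origin look 'real'."""
    return np.exp(-np.sum(x ** 2, axis=-1) / 2.0)

def sample_with_rejection(generator, n, batch=64):
    """Draw batches from the generator and keep each sample with probability
    equal to the discriminator's score, until n samples are accepted."""
    kept = []
    while sum(len(k) for k in kept) < n:
        x = generator(batch)
        accept = rng.uniform(size=batch) < d_score(x)
        kept.append(x[accept])
    return np.concatenate(kept)[:n]

generator = lambda b: rng.normal(scale=2.0, size=(b, 2))  # placeholder generator
samples = sample_with_rejection(generator, 100)
```

The accepted set is biased toward regions the discriminator rates highly, which in a real GAN corresponds to discarding implausible or collapsed samples at generation time rather than changing the trained generator.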