Comparing Adversarial Unsupervised Domain Adaptation to Zero-Shot Classification in Contrastive Language-Image Pre-Training Embedding Space
- Deshpande, Kaustubh
- Advisor(s): Wu, Yingnian
Abstract
This paper explores how Unsupervised Domain Adaptation (UDA) measures up to zero-shot classification using contrastive language-image pre-training (CLIP) models. We begin by introducing the concept of domain adaptation and its necessity in various real-world applications. Next, we introduce the ideas behind CLIP models, followed by an introduction to the UDA method called Adversarial Discriminative Domain Adaptation (ADDA). Then, we conduct an experimental evaluation of this method by applying it to embeddings obtained from two different CLIP models, ViT-B-32 and ViT-g/14. Finally, we compare the performance of ADDA versus zero-shot classification (ZSC) using the same two CLIP models and provide insights into the implications of the results.
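To make the ADDA setup named in the abstract concrete, below is a minimal PyTorch sketch of the adversarial adaptation loop applied to precomputed CLIP embeddings. It is an illustration of the general ADDA idea, not the paper's implementation: the embedding width, network sizes, batch size, and learning rates are all assumptions, and the random tensors stand in for actual CLIP `encode_image` outputs from the source and target domains.

```python
import torch
import torch.nn as nn

EMB_DIM = 512   # assumed width of ViT-B-32 CLIP embeddings
BATCH = 64      # illustrative batch size

# Target encoder: a small adapter mapping target-domain CLIP
# embeddings toward the source feature space.
target_encoder = nn.Sequential(
    nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(), nn.Linear(EMB_DIM, EMB_DIM)
)

# Domain discriminator: predicts whether a feature came from the
# source domain (label 1) or the adapted target domain (label 0).
discriminator = nn.Sequential(
    nn.Linear(EMB_DIM, 256), nn.ReLU(), nn.Linear(256, 1)
)

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(target_encoder.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder batches; in practice these would be CLIP embeddings
# precomputed from source and target images.
src_emb = torch.randn(BATCH, EMB_DIM)
tgt_emb = torch.randn(BATCH, EMB_DIM)

for _ in range(100):
    # Discriminator step: learn to separate source from adapted target.
    with torch.no_grad():
        tgt_feat = target_encoder(tgt_emb)
    logits = torch.cat([discriminator(src_emb), discriminator(tgt_feat)])
    labels = torch.cat([torch.ones(BATCH, 1), torch.zeros(BATCH, 1)])
    opt_d.zero_grad()
    bce(logits, labels).backward()
    opt_d.step()

    # Target-encoder step: fool the discriminator (inverted labels).
    tgt_feat = target_encoder(tgt_emb)
    opt_t.zero_grad()
    bce(discriminator(tgt_feat), torch.ones(BATCH, 1)).backward()
    opt_t.step()
```

In the full ADDA recipe, a classifier trained on labeled source features is then applied unchanged to the adapted target features at test time, which is what gets compared against CLIP's zero-shot classification here.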