Comparing Adversarial Unsupervised Domain Adaptation to Zero-Shot Classification in Contrastive Language-Image Pre-Training Embedding Space
eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations

Abstract

This paper explores how Unsupervised Domain Adaptation (UDA) measures up to zero-shot classification using contrastive language-image pre-training (CLIP) models. We begin by introducing the concept of domain adaptation and its necessity in various real-world applications. Next, we introduce the ideas behind CLIP models, followed by an introduction to the UDA method called Adversarial Discriminative Domain Adaptation (ADDA). Then, we conduct an experimental evaluation of this method by applying it to embeddings obtained from two different CLIP models, ViT-B-32 and ViT-g/14. Finally, we compare the performance of ADDA versus zero-shot classification (ZSC) using the same two CLIP models and provide insights into the implications of the results.
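As context for the comparison described above, zero-shot classification in a CLIP embedding space amounts to a cosine-similarity nearest-prompt rule: the image embedding is scored against a text embedding for each class prompt, and the highest-scoring class wins. A minimal sketch with synthetic embeddings (the arrays, prompt strings, and 4-dimensional size are illustrative only; real CLIP ViT-B-32 embeddings are 512-dimensional and come from the model's image and text encoders):

```python
import numpy as np

def zero_shot_classify(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """Return the index of the text embedding most similar to the image embedding.

    Both inputs are L2-normalized so that the dot product equals cosine
    similarity, mirroring how CLIP-style zero-shot classification scores
    an image against one text prompt per class.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb  # one cosine similarity per class prompt
    return int(np.argmax(sims))

# Illustrative prompts and embeddings (not taken from the paper).
classes = ["a photo of a cat", "a photo of a dog"]
text_embs = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0]])
image_emb = np.array([0.9, 0.1, 0.0, 0.0])  # closer to the "cat" prompt

print(classes[zero_shot_classify(image_emb, text_embs)])
```

ADDA, by contrast, keeps this frozen embedding space for the source domain but adversarially trains a target encoder so that target embeddings become indistinguishable from source embeddings, after which the source classifier is reused.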
