UC Berkeley Electronic Theses and Dissertations

Learning Transferable Representations across Domains

Abstract

Deep neural networks have achieved great success in learning representations on a given dataset. However, in many cases, the learned representations are dataset-dependent and cannot be transferred to datasets with different distributions, even for the same task. Handling domain shift is therefore crucial for improving the generalization capability of models. Domain adaptation offers a potential solution, allowing us to transfer networks from a source domain with abundant labels to target domains with only limited or no labels.

In this dissertation, I present approaches for learning transferable representations under different scenarios, including 1) when the source domain has only limited labels, possibly as few as one label per class; 2) when there are multiple labeled source domains; and 3) when there are multiple unseen, unlabeled target domains. These approaches are general across data modalities (e.g., vision and language) and can be readily combined to address other domain transfer settings (e.g., adapting from multiple sources with limited labels), enabling models to generalize beyond the source domains. Many of these works transfer knowledge from simulated data to real-world data to alleviate the need for expensive manual annotation. Finally, I present our pioneering work on building a LiDAR point cloud simulator, which has enabled a substantial body of subsequent work on domain adaptation for LiDAR point cloud segmentation.
