Open Access Publications from the University of California

Learning from Sparse and Deficient Data and its Applications

  • Author(s): Li, Ruirui
  • Advisor(s): Wang, Wei
  • et al.

Deep learning models have attracted substantial attention in recent years by demonstrating remarkable performance on a variety of tasks (e.g., classification and ranking) across many fields. The success of most deep learning models depends heavily on massive training data, as these models inherently work by memorizing or distinguishing a large number of training instances. However, such large amounts of training data may not always be available or accessible due to privacy issues, user experience concerns, or corporate constraints. This leads to the data deficiency issue, where only a few training instances are available to accomplish a task. Even when sufficient training data can be acquired, the data can still be very sparse because of the large number of elements (e.g., users and items) in the dataset. This results in the data sparsity issue, where very few training instances are available to pinpoint the characteristics of each individual element. Both data deficiency and data sparsity prevent deep learning models from fully exploiting their capacity and generally lead to inferior performance on a variety of tasks.

In this dissertation, we propose several deep learning frameworks to compensate for the data deficiency and data sparsity in the context of three concrete applications, i.e., customer recommendation in location-based social networks, query recommendation in search engines, and automatic speaker recognition. The methodologies presented in these frameworks span different research areas, including geographical influence modeling on location data, automatic data augmentation via adversarial training, comprehensive instance utilization through metric-learning-based few-shot learning, and knowledge transfer via gradient-based meta-learning. As a result, these methodologies not only tackle specific challenges in the applications mentioned above but also shed light on other relevant applications.
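To give a concrete flavor of one of the methodologies named above, here is a minimal sketch of metric-learning-based few-shot classification in the style of prototypical classification. This is an illustrative toy, not the dissertation's actual model: the 2-D "embeddings", class labels, and helper functions are all assumptions for the example. Each class prototype is the mean embedding of its few support examples, and a query is assigned to the nearest prototype.

```python
import math

def prototypes(support, labels):
    """Mean embedding per class, computed from a few labeled support examples."""
    classes = sorted(set(labels))
    protos = {}
    for c in classes:
        vecs = [v for v, y in zip(support, labels) if y == c]
        # Average each embedding dimension across the class's support vectors.
        protos[c] = [sum(dim) / len(vecs) for dim in zip(*vecs)]
    return protos

def classify(query, protos):
    """Assign the query to the class whose prototype is nearest (Euclidean)."""
    def dist(p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, p)))
    return min(protos, key=lambda c: dist(protos[c]))

# Toy 2-way, 2-shot episode with hand-made 2-D "embeddings".
support = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
labels = ["a", "a", "b", "b"]
protos = prototypes(support, labels)
print(classify([0.1, 0.0], protos))  # -> a
```

Because classification reduces to comparing a query against one prototype per class rather than memorizing many instances, this style of approach can work with very few training examples per class, which is exactly the regime the data deficiency issue describes.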
