eScholarship
Open Access Publications from the University of California

UC Santa Barbara

UC Santa Barbara Electronic Theses and Dissertations

Efficient Natural Language Processing with Limited Data and Resources

Abstract

Natural language processing (NLP) has long been regarded as a pinnacle of artificial intelligence, aiming to achieve a comprehensive understanding of human languages. In recent years, the field has advanced significantly with the transition from rule-based approaches to deep learning methodologies. However, standard deep learning approaches often rely on vast amounts of training data, highlighting the need for more data-efficient techniques. In addition, NLP systems must make effective use of available resources while coping with the demands of frequent model updates and defending against malicious attacks that exploit resource limitations, which poses another significant challenge.

This dissertation focuses on developing efficient NLP models under limited data and on the effective utilization of available resources. In the first part, we address the challenge of learning models with limited data. For scenarios where only a few examples are available, we propose a meta-learning approach that leverages task-specific meta information to learn new models effectively. For cases with a moderate amount of data that is still insufficient for more demanding tasks, we introduce self-supervised learning techniques that improve performance by deriving additional learning tasks from the available data. We also examine the limitations of even state-of-the-art language models, such as GPT-3, in handling out-of-distribution data shifts, and propose a tutor-based learning approach that converts out-of-distribution problems into in-distribution ones through step-by-step demonstrations.

In the second part, we shift our focus to optimizing resource utilization in NLP. Given the rapidly changing nature of the world, frequently updating deployed models with new data is crucial, and we present approaches for updating models effectively in lifelong learning scenarios. As large language models are increasingly adopted as the backbone of dialogue systems, resource limitations become a significant concern. To counter malicious attacks, particularly Distributed Denial of Service (DDoS) attacks, we investigate detecting bot imposters using a single question. By accurately distinguishing human users from bots, we aim to maximize the resources allocated to real users and ensure uninterrupted service.
