Unsupervised and Zero-Shot Learning for Open-Domain Natural Language Processing
UC Riverside Electronic Theses and Dissertations


Abstract

NLP has yielded results that were unimaginable only a few years ago on a wide range of real-world tasks, thanks to deep neural networks and the availability of large-scale labeled training datasets. However, the critical assumption of existing supervised methods that labeled training data is available for all classes does not scale: acquiring such data is prohibitively laborious and expensive. Therefore, zero-shot (or unsupervised) models that can seamlessly adapt to new, unseen classes are indispensable for NLP methods to work effectively in real-world applications; such models mitigate (or eliminate) the need to collect and annotate data for each domain. This dissertation addresses three critical NLP problems in contexts where training data is scarce (or unavailable): intent detection, slot filling, and paraphrasing. Reliable solutions to these problems in the open-domain setting push the frontiers of NLP a step closer to practical conversational AI systems.

First, this thesis addresses intent detection: extracting the intents implied in natural language utterances. We propose RIDE, a zero-shot intent detection model that captures domain-oblivious semantic associations between an utterance and an intent by analyzing how the phrases in an utterance are linked to an intent label via commonsense knowledge. RIDE significantly and consistently outperforms state-of-the-art (SOTA) intent detection models.

The second contribution of this dissertation is a zero-shot model for the slot filling task: extracting the required query parameters from a natural language utterance. Our model, LEONA, exploits domain-independent pre-trained NLP models and context-aware utterance-slot similarity features computed via attention mechanisms, taking advantage of the fact that slot values appear in similar contexts across domains. LEONA significantly and consistently outperforms SOTA models in a wide range of experimental setups.

Finally, we propose an unsupervised model, PUP, for paraphrasing. Unsupervised paraphrasing has applications in conversational AI, among others. PUP uses a variational autoencoder (trained on a non-parallel corpus) to generate a seed paraphrase that warm-starts a deep reinforcement learning model. It then progressively tunes the seed paraphrase, guided by a novel reward function that combines semantic adequacy, language fluency, and expression diversity measures. PUP achieves an unprecedented balance across paraphrase quality metrics.
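As a rough illustration of the context-aware utterance-slot similarity idea described for LEONA, the sketch below scores each utterance token against a slot description embedding using a simple attention mechanism. The embedding sizes, the scoring function, and all names are hypothetical illustrations, not the architecture or parameters from the dissertation.

```python
# Hypothetical sketch of attention-based utterance-slot similarity
# (illustrative only; not LEONA's actual architecture or parameters).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def slot_similarity_features(token_embs, slot_emb):
    """For each utterance token, produce a context-aware similarity
    score against a slot description embedding.

    token_embs: (num_tokens, dim) contextual token embeddings
    slot_emb:   (dim,) embedding of the slot's natural-language description
    """
    # Raw dot-product similarity between each token and the slot description.
    scores = token_embs @ slot_emb                      # (num_tokens,)
    # Attention weights over tokens, so surrounding context influences
    # how strongly each token is associated with the slot.
    attn = softmax(scores)                              # (num_tokens,)
    # Context vector: attention-weighted mixture of token embeddings.
    context = attn @ token_embs                         # (dim,)
    # Final per-token feature: similarity of each token to the slot-aware context.
    return token_embs @ context                         # (num_tokens,)

# Toy usage with random embeddings standing in for a pre-trained encoder.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 32))    # e.g. "book a table for two tonight"
slot = rng.normal(size=32)           # e.g. description of a "party_size" slot
print(slot_similarity_features(tokens, slot).round(3))
```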
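PUP's reward is described above only at a high level: a combination of semantic adequacy, language fluency, and expression diversity. A minimal sketch of such a weighted combination follows; the component scorers and the weights are placeholder assumptions, not the dissertation's actual definitions.

```python
# Hypothetical sketch of a composite paraphrasing reward in the spirit of
# PUP's description; weights and component scorers are illustrative stand-ins.
def semantic_adequacy(source, paraphrase):
    # Placeholder: fraction of source words preserved (a real system would
    # use embedding-based similarity between the two sentences).
    src, par = set(source.lower().split()), set(paraphrase.lower().split())
    return len(src & par) / max(len(src), 1)

def language_fluency(paraphrase):
    # Placeholder: penalize immediate word repetition (a real system would
    # use a language-model probability of the paraphrase).
    words = paraphrase.lower().split()
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return 1.0 - repeats / max(len(words), 1)

def expression_diversity(source, paraphrase):
    # Placeholder: fraction of paraphrase words not copied from the source.
    src, par = set(source.lower().split()), paraphrase.lower().split()
    return sum(1 for w in par if w not in src) / max(len(par), 1)

def reward(source, paraphrase, w_adequacy=0.4, w_fluency=0.3, w_diversity=0.3):
    """Weighted combination of the three measures; the weights are assumptions."""
    return (w_adequacy * semantic_adequacy(source, paraphrase)
            + w_fluency * language_fluency(paraphrase)
            + w_diversity * expression_diversity(source, paraphrase))

print(reward("how far is the airport from here",
             "what is the distance to the airport from this place"))
```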
