
UC Irvine Electronic Theses and Dissertations

Incorporating and Eliciting Knowledge in Neural Language Models

Creative Commons Attribution (CC BY) 4.0 license
Abstract

Neural language models have drastically changed the landscape of natural language processing (NLP). Originally used for language generation (e.g., in summarization and dialogue systems) and scoring (e.g., in automatic speech recognition and statistical machine translation), these models now widely serve as the universal starting point for transfer learning on almost all NLP tasks. This development is largely due to matters of scale: because language models require only raw text for supervision, they can be trained on the massive amounts of text readily available on the web. When this process is carried out on enough data, it exposes models to knowledge that subsequently improves their performance on downstream tasks. As neural language models become increasingly prevalent, it is crucial to be able to characterize and effectively leverage this knowledge, as well as provide recourse when it is incorrect.

In this dissertation, we address this need by exploring two research directions: 1) using prompting as a means of eliciting knowledge from language models, and 2) integrating the knowledge stored in language models with external knowledge bases.

In the first direction, we will begin by demonstrating how prompts can be used to reformulate NLP tasks as fill-in-the-blank and complete-the-sentence problems that language models can solve naturally. We will then use prompts to diagnose which kinds of factual and task-specific knowledge language models acquire during pretraining. Our analysis reveals that, while language models struggle to memorize facts, they possess surprisingly powerful capabilities to perform certain tasks. Next, we will introduce AutoPrompt, a technique for automating prompt construction, and show that with automatically constructed prompts, language models can achieve near state-of-the-art performance on some tasks without requiring task-specific finetuning. Based on this insight, we will conclude our exploration of this topic by investigating how prompting can best be combined with finetuning to apply language models in few-shot learning settings.
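
To make the cloze-style reformulation concrete, here is a minimal sketch (not the dissertation's own code) that poses a factual query to a pretrained masked language model as a fill-in-the-blank prompt. It assumes the HuggingFace transformers library; the model name and prompt string are illustrative choices, not specifics from this work.

```python
# A minimal sketch of knowledge elicitation via a cloze-style prompt.
# Assumes the HuggingFace `transformers` library is installed.
from transformers import pipeline

# Load a pretrained masked language model as a fill-in-the-blank solver.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Pose a factual query as a prompt; the model's top predictions for the
# [MASK] token serve as its answers -- no task-specific finetuning needed.
for candidate in fill_mask("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```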

In the second direction, we will explore how language models and knowledge bases can be integrated in order to improve their coverage of facts. We will first introduce two approaches, the knowledge graph language model (KGLM) and KnowBERT, for endowing language models with the means to condition on information from entity and knowledge graph embeddings. We will then use the prompts developed in the previous section to show that conditioning on this information improves language models' recall of facts. We will conclude our treatment of this research direction by studying whether the converse is true, i.e., whether language models can be used to help maintain consistency in knowledge bases when their contents are updated to reflect new information. Taken together, our work on these research topics provides a collection of insights into the nature and applications of knowledge in neural language models.
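
The following sketch illustrates only the general idea of conditioning a language model's hidden states on knowledge graph entity embeddings; it is a hypothetical simplification, not the actual KGLM or KnowBERT architectures, and all layer names and dimensions are illustrative assumptions.

```python
# Hypothetical sketch: fusing a KG entity embedding into a language model's
# token representations via a learned gate. Assumes PyTorch.
import torch
import torch.nn as nn

class EntityConditionedLayer(nn.Module):
    def __init__(self, hidden_dim: int, entity_dim: int):
        super().__init__()
        # Project the entity embedding into the model's hidden space, then
        # learn a gate that interpolates between token and entity information.
        self.project = nn.Linear(entity_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, entity: torch.Tensor) -> torch.Tensor:
        ent = self.project(entity)                # (batch, hidden_dim)
        ent = ent.unsqueeze(1).expand_as(hidden)  # broadcast over the sequence
        gate = torch.sigmoid(self.gate(torch.cat([hidden, ent], dim=-1)))
        return gate * hidden + (1 - gate) * ent   # entity-aware hidden states

# Usage: hidden states from a language model plus one entity vector per example.
hidden = torch.randn(2, 5, 768)   # (batch, seq_len, hidden_dim)
entity = torch.randn(2, 200)      # pretrained KG embeddings, e.g., 200-dim
layer = EntityConditionedLayer(hidden_dim=768, entity_dim=200)
print(layer(hidden, entity).shape)  # torch.Size([2, 5, 768])
```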
