
UC Berkeley Electronic Theses and Dissertations

Representing linguistic knowledge with probabilistic models

Abstract

The use of language is one of the defining features of human cognition. Focusing here on two key features of language, productivity and robustness, I examine how basic questions regarding linguistic representation can be approached with the help of probabilistic generative language models, or PGLMs. These statistical models, which capture aspects of linguistic structure in terms of distributions over events, can serve both as the product of language learning and as prior knowledge in real-time language processing. In the first two chapters, I show how PGLMs can be used to make inferences about the nature of people's linguistic representations. In Chapter 1, I look at the representations of language learners, tracing the earliest evidence for a noun category in large developmental corpora. In Chapter 2, I evaluate broad-coverage language models, reflecting contrasting assumptions about the information sources and abstractions used for in-context spoken word recognition, on their ability to capture people's behavior in a large online game of “Telephone.” In Chapter 3, I show how these models can be used to examine the properties of lexicons. I use a measure derived from a probabilistic generative model of word structure to provide a novel interpretation of a longstanding linguistic universal, motivating it in terms of cognitive pressures that arise from communication. I conclude by considering the prospects for a unified, expectations-oriented account of language processing and first language learning.
