
Do We Need Neural Models to Explain Human Judgments of Acceptability?

Creative Commons 'BY' version 4.0 license
Abstract

Native speakers can judge whether a sentence is an acceptable instance of their language. Acceptability provides a means of evaluating whether computational language models are processing language in a human-like manner. We test the ability of language models, simple language features, and word embeddings to predict native speakers' judgments of acceptability on English essays written by non-native speakers. We find that much sentence acceptability variance can be captured by a combination of misspellings, word order, and word similarity (r = 0.494). While predictive neural models fit acceptability judgments well (r = 0.527), we find that a 4-gram model is just as good (r = 0.528). Thanks to incorporating misspellings, our 4-gram model surpasses both the previous unsupervised state of the art (r = 0.472) and the average native speaker (r = 0.46), demonstrating that acceptability is well captured by n-gram statistics and simple language features.
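To make the abstract's approach concrete, below is a minimal sketch of the kind of scoring it describes: a count-based 4-gram language model combined with a misspelling signal. The details here are assumptions for illustration only, not the paper's method: the add-k smoothing, the out-of-vocabulary rate as a misspelling proxy, the weights `w_lm` and `w_sp`, and the toy corpus are all placeholders.

```python
from collections import Counter
import math

ORDER = 4  # 4-gram model, matching the order reported in the abstract

def ngrams(tokens, n):
    """Pad a token list and return its overlapping n-grams."""
    padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
    return [tuple(padded[i:i + n]) for i in range(len(padded) - n + 1)]

class NgramModel:
    """Count-based n-gram model with add-k smoothing (a simplification;
    the paper's exact smoothing scheme is not given in the abstract)."""

    def __init__(self, order=ORDER, k=0.1):
        self.order, self.k = order, k
        self.counts = Counter()          # n-gram counts
        self.context_counts = Counter()  # (n-1)-gram context counts
        self.vocab = set()

    def fit(self, sentences):
        for tokens in sentences:
            self.vocab.update(tokens)
            for gram in ngrams(tokens, self.order):
                self.counts[gram] += 1
                self.context_counts[gram[:-1]] += 1

    def logprob(self, tokens):
        """Mean log-probability per n-gram, so longer sentences
        are not penalized simply for their length."""
        grams = ngrams(tokens, self.order)
        v = len(self.vocab) + 2  # include the padding symbols
        total = 0.0
        for gram in grams:
            num = self.counts[gram] + self.k
            den = self.context_counts[gram[:-1]] + self.k * v
            total += math.log(num / den)
        return total / len(grams)

def acceptability_score(model, tokens, w_lm=1.0, w_sp=1.0):
    """Hypothetical combination of the two signals: n-gram
    log-probability minus a misspelling penalty, approximated
    here by the out-of-vocabulary rate."""
    oov_rate = sum(t not in model.vocab for t in tokens) / len(tokens)
    return w_lm * model.logprob(tokens) - w_sp * oov_rate

# Toy usage: train on a tiny corpus, then rank two candidates.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
model = NgramModel()
model.fit(corpus)
good = "the cat sat on the rug".split()
bad = "the cta sat on mat the".split()  # misspelling + word-order error
print(acceptability_score(model, good) > acceptability_score(model, bad))  # True
```

In this sketch the sentence with a misspelling and scrambled word order receives both a lower 4-gram log-probability and an out-of-vocabulary penalty, so it ranks below the well-formed one, mirroring the abstract's claim that misspellings and word order capture much of the acceptability variance.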
