eScholarship
Open Access Publications from the University of California

Leveraging Neural Networks for Feature Selection in Sentence Processing Models

Creative Commons 'BY' version 4.0 license
Abstract

Previous research under the cue-based retrieval framework has assumed that general, discrete retrieval cues, such as [+subject] and [+singular], are used to select a retrieval target during dependency building. However, explaining the effects of semantic compatibility between the target and the head of a dependency, as demonstrated in Cunnings and Sturt (2018), would require an unbounded number of lexically specific features and retrieval cues. Smith and Vasishth (2020) offered a principled method for feature selection using word embeddings, but even with a very large corpus, some dependencies are missing. To solve this coverage problem, we leverage a pre-trained neural language model (GPT-2) to select features. The metric used in this paper correlates highly with those in Smith and Vasishth (2020) and predicts the results reported in Cunnings and Sturt (2018). We argue that the method offers a broader-coverage and more convenient way to select features.
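The abstract does not spell out the metric's formula, but the general idea behind embedding-based feature selection can be sketched as a graded cue-match score: instead of a binary [+/-feature] check, the compatibility between a dependency head and a candidate retrieval target is scored by the similarity of their model-derived vectors. The sketch below is illustrative only; the 4-dimensional vectors and the word choices are hypothetical stand-ins for GPT-2 representations, not values from the paper.

```python
import numpy as np

def cue_match(target_vec: np.ndarray, head_vec: np.ndarray) -> float:
    """Graded feature match as cosine similarity between two
    model-derived word vectors (hypothetical stand-ins for GPT-2
    hidden states)."""
    return float(np.dot(target_vec, head_vec)
                 / (np.linalg.norm(target_vec) * np.linalg.norm(head_vec)))

# Toy, hand-made "embeddings" -- purely illustrative values.
vecs = {
    "shatter": np.array([0.9, 0.1, 0.3, 0.0]),
    "plate":   np.array([0.8, 0.2, 0.4, 0.1]),  # semantically compatible object
    "letter":  np.array([0.1, 0.9, 0.0, 0.6]),  # less compatible object
}

# A semantically compatible head-target pair should score higher
# than an incompatible one, giving a continuous analogue of a
# discrete retrieval-cue match.
print(cue_match(vecs["shatter"], vecs["plate"]) >
      cue_match(vecs["shatter"], vecs["letter"]))  # prints True
```

In a real setting the vectors would come from a pre-trained model rather than being hand-specified, which is what gives the approach broader coverage than corpus co-occurrence counts.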
