
Transformer-Maze: An Accessible Incremental Processing Measurement Tool

Abstract

The lesser-known G(rammatical)-Maze task (Forster, Guerrera, & Elliott, 2009) is arguably a better choice than self-paced reading (Mitchell, 2004) for detecting word-by-word difficulty in online sentence processing when experiments are run over crowdsourcing platforms. In G-Maze, a participant must choose between each successive word of a sentence and a distractor word that does not fit the preceding context. If the participant chooses the distractor rather than the actual word, the trial ends and they cannot complete the sentence. G-Maze thus automatically filters out data from inattentive participants and localizes differences in processing difficulty more effectively. Still, the effort required to select contextually inappropriate distractors for hundreds of words might make an experimenter hesitate before choosing this method. To save experimenters this time and effort, Boyce, Futrell, and Levy (2020) developed A(uto)-Maze, a tool that automatically generates distractors using a computational language model. We now introduce the next generation of A-Maze: T(ransformer)-Maze. Transformer models are the current state of the art in natural language processing, and thousands of pretrained models, covering a wide variety of languages, are freely available online through Hugging Face's Transformers package (Wolf et al., 2020). In our validation experiment, T-Maze proves as effective as G-Maze run in the lab with hand-crafted materials. This tool thus allows psycholinguists to easily gather high-quality online sentence-processing data in many different languages.
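
To make the distractor-generation idea concrete, the sketch below scores a pool of candidate words by their surprisal under a pretrained causal language model and selects the least expected one as the distractor. This is a minimal illustration, not T-Maze's actual pipeline: the model choice ("gpt2"), the candidate pool, and the helper names are assumptions, and the published Maze tools additionally constrain distractors (for instance, keeping them comparable to the real words in surface properties such as length), which this sketch omits.

# Minimal sketch of language-model-based distractor selection in the
# spirit of A-Maze/T-Maze: from a pool of candidates, pick the word the
# model finds least expected given the left context. The model ("gpt2"),
# candidate pool, and function names are illustrative assumptions.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, word: str) -> float:
    """Surprisal (in bits) of `word` given `context`, summed over the
    word's subword tokens."""
    context_ids = tokenizer.encode(context)
    word_ids = tokenizer.encode(" " + word)  # leading space marks a new word
    input_ids = torch.tensor([context_ids + word_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    total = 0.0
    for i, tok in enumerate(word_ids):
        # Logits at position p predict the token at position p + 1.
        total -= log_probs[0, len(context_ids) + i - 1, tok].item()
    return total / math.log(2)  # convert nats to bits

def pick_distractor(context: str, candidates: list[str]) -> str:
    """Return the candidate that is least plausible as the next word,
    i.e. the most contextually inappropriate distractor."""
    return max(candidates, key=lambda w: surprisal(context, w))

# Example: the highest-surprisal candidate becomes the distractor paired
# with the sentence's actual next word.
print(pick_distractor("The editor praised the", ["author", "chapter", "daffodil", "whom"]))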
