
Actively Detecting Patterns in an Artificial Language to Learn Non-Adjacent Dependencies

Abstract

Many grammatical dependencies in natural language involve elements that are not adjacent, such as the dependency between the subject and verb in "the dog always barks". We recently showed that non-adjacent dependencies are easily learnable without pauses in the signal when speech is presented rapidly. In this study, we used an online measure to examine the relationship between online parsing and learning performance on an offline assessment of non-adjacent dependency learning. We found that participants who showed current parsing of the language online also learned the dependencies better. However, this pattern disappeared when they were explicitly told where the boundaries were before parsing. Theories of non-adjacent dependency learning are discussed.
