
Application of machine learning to signal entrainment identifies predictive processing in sign language.

Licensed under a Creative Commons Attribution (CC BY) 4.0 license
Abstract

We present the first analysis of multi-frequency neural entrainment to the dynamic visual features that drive sign language comprehension. Using EEG coherence with optical flow in video stimuli, we classify fluent signers’ brain states as reflecting either online language comprehension or non-comprehension while watching non-linguistic videos matched in low-level spatiotemporal features and high-level scene parameters. The data also indicate that lower frequencies, such as 1 Hz and 4 Hz, contribute substantially to brain-state classification, pointing to the relevance of neural coherence with the signal at these frequencies for language comprehension. These findings suggest that fluent signers rely on predictive processing during online comprehension.
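To make the measurement concrete, the following is a minimal sketch, not the authors’ pipeline, of how coherence between EEG channels and an optical-flow magnitude time series might be computed at target frequencies and used as classifier features. The sampling rate, channel count, frequency set, classifier choice, and the random stand-in data are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): EEG-to-optical-flow coherence
# at candidate frequencies, used as features for brain-state classification.
import numpy as np
from scipy.signal import coherence
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256  # assumed EEG sampling rate in Hz

def coherence_features(eeg, flow, freqs=(1.0, 4.0, 8.0, 12.0)):
    """eeg: (n_channels, n_samples); flow: (n_samples,) optical-flow magnitude.
    Returns magnitude-squared coherence at each target frequency, per channel."""
    feats = []
    for ch in eeg:
        # 4-second windows give ~0.25 Hz frequency resolution
        f, cxy = coherence(ch, flow, fs=FS, nperseg=FS * 4)
        idx = [np.argmin(np.abs(f - target)) for target in freqs]
        feats.extend(cxy[idx])
    return np.array(feats)

# Hypothetical trials: label 1 = sign language video, 0 = matched non-linguistic video
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 8, FS * 30
X = np.stack([
    coherence_features(rng.standard_normal((n_channels, n_samples)),
                       rng.standard_normal(n_samples))
    for _ in range(n_trials)
])
y = rng.integers(0, 2, n_trials)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data
```

With real recordings, inspecting the fitted coefficients per frequency band would show which frequencies carry the classification, which is the kind of analysis the abstract describes for the 1 Hz and 4 Hz contributions.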
