eScholarship
Open Access Publications from the University of California

Embodied attention resolves visual ambiguity to support infants’ real-time word learning

Creative Commons 'BY' version 4.0 license
Abstract

The input for early language learning is often viewed as a landscape of ambiguity in which the occasional high-quality naming event provides the resources to resolve uncertainty. Word learning from ambiguous naming events is often studied using screen-based cross-situational learning tasks. Little is known, however, about how ambiguity impacts real-time word learning in free-flowing interactions. To explore this question, we asked parent-infant dyads to play in a home-like environment with unfamiliar objects while wearing head-mounted eye trackers. After the play session, we tested whether infants learned any of the object-label mappings and categorized individual words as learned or not learned. We then analyzed dyadic behaviors and the visual information available to infants during the naming moments of learned and not-learned words. The results show that infants' embodied attention during ambiguous naming moments was key to predicting learning outcomes. Specifically, infants held and looked at the target object longer in ambiguous instances that led to learning. Our results emphasize the importance of studying word learning in naturalistic environments to better understand the cues infants use to resolve ambiguity in everyday learning contexts.
