eScholarship
Open Access Publications from the University of California

In-the-Moment Visual Information from the Infant's Egocentric View Determines the Success of Infant Word Learning: A Computational Study

Author(s): Amatuni, Andrei; Schroer, Sara E.; Zhang, Yayun; Peters, Ryan E.; Reza, Md. Alimoor; Crandall, David; Yu, Chen; et al.
Abstract

Infants learn the meanings of words from accumulated experiences of real-time interactions with their caregivers. To study the effects of visual sensory input on word learning, we recorded infants' views of the world using head-mounted eye trackers during free-flowing play with a caregiver. While playing, infants were exposed to novel label-object mappings, and learning outcomes for these items were tested after the play session. In this study we use a classification-based approach to link properties of infants' visual scenes during naturalistic labeling moments to their word learning outcomes. We find that a model which integrates both highly informative and ambiguous sensory evidence fits infants' individual learning outcomes better than models using either type of evidence alone, and that raw labeling frequency is unable to account for the word learning differences we observe. Here we demonstrate how a computational model, using only raw pixels taken from the egocentric scene image, can derive insights into human language learning.
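The classification-based approach described above can be illustrated with a minimal sketch. This is not the authors' implementation: the data here are synthetic stand-ins (flattened "pixel" vectors for labeling moments, with binary learned/not-learned labels), and the model is a plain logistic-regression classifier trained by gradient descent, chosen only to show the general pipeline of predicting a per-item learning outcome from raw scene pixels.

```python
import numpy as np

# Hypothetical sketch: classify word-learning outcome (learned vs. not
# learned) from raw pixel features of egocentric scenes at labeling moments.
rng = np.random.default_rng(0)

# Synthetic stand-in data: 100 "labeling moments", each represented as a
# flattened 8x8 grayscale patch (64 raw pixel values).
n, d = 100, 64
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)  # synthetic learned/not-learned labels


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Logistic regression fit by batch gradient descent on the cross-entropy loss.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w)          # predicted probability of "learned"
    grad = X.T @ (p - y) / n    # gradient of mean cross-entropy
    w -= lr * grad

# Training accuracy of the fitted classifier.
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

In the actual study, the features would come from real egocentric scene images and the labels from post-session learning tests; the point of the sketch is only the mapping from in-the-moment visual input to a per-item learning outcome.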
