Multimodality on screen: Multimodal spatial directions enhance children's spatial performance on virtual visual-spatial maps

Abstract

In the context of online education, the impact of a speaker's gestures on children's spatial performance during learning remains underexplored. Previous work found that spatial directions presented with gestures enhance children's performance on physical visuospatial arrays (Austin & Sweller, 2014). Here, we investigate whether spatial directions presented with or without gestures relate differently to 5-year-old monolingual Turkish children's spatial performance on a computerized map task. Children completed a tablet task that required them to recall route directions presented in videos either multimodally (speech and gesture combined) or in speech alone. Responses were coded for the target information in the route descriptions: actions (e.g., running), locations (e.g., school), and spatial directions (e.g., behind). Results revealed better performance only for encoding spatial directions presented multimodally (p = .013). Overall, the results emphasize the importance of multimodal input in enhancing children's spatial performance and highlight the role of gestures in virtual visual-spatial learning environments.
