eScholarship
Open Access Publications from the University of California

Cross-linguistically shared spatial mappings of abstract concepts guide non-signers' inferences about sign meaning

Abstract

Abstract concepts like valence and magnitude are represented through space in co-speech gestures and linguistic metaphors. Recent work has shown that such spatial mappings are also reflected in the motion patterns of signs in sign languages, suggesting that sign languages may reveal cross-linguistically shared ways of spatializing abstract concepts. We probed this possibility further by testing whether non-signers are sensitive to vertical spatial mappings encoded in signs in American Sign Language (ASL). Non-signers were presented with videos of ASL signs and asked to judge the likely valence and magnitude of their meanings. Judgments were well predicted by the direction of hand movement along the vertical axis but not other axes, implying that participants spontaneously relied on vertical mappings of valence and magnitude to make semantic inferences. These findings suggest that sign languages encode spatial mappings of abstract concepts that are readily accessible to non-signers, and potentially useful for language learning.
