Human similarity judgments of emojis support alignment of conceptual systems across modalities
Abstract
Humans can readily generalize their learning to new visual concepts, and infer their associated meanings. How do people align the different conceptual systems learned from different modalities? In the present paper, we examine emojis—pictographs uniquely situated between visual and linguistic modalities—to explore the role of alignment and multimodality in visual and linguistic semantics. Simulation experiments show that relational structures of emojis captured in visual and linguistic conceptual systems can be aligned, and that the ease of alignment increases as the number of emojis increases. We also found that emojis with subjective impressions of high popularity are easier to align between their visual and linguistic representations. A behavioral experiment was conducted to measure similarity patterns between 48 emojis, and to compare human similarity judgments with three models based on visual, semantic, and multimodal-joint representations of emojis. We found that the model trained with multimodal data by aligning visual and semantic spaces best accounts for human judgments.
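The abstract does not specify the alignment procedure used in the simulations, but aligning two embedding spaces that share relational structure is commonly done with orthogonal Procrustes analysis. The sketch below is only an illustration of that general idea, not the paper's method: the `visual` and `semantic` matrices are random toy data standing in for 48 emoji embeddings, and the rotation `R` is an assumed ground-truth correspondence between the two spaces.

```python
import numpy as np

def procrustes_align(X, Y):
    """Return X mapped onto Y by the orthogonal matrix W minimizing ||XW - Y||_F.

    Classic orthogonal Procrustes solution: with M = X^T Y and SVD M = U S V^T,
    the optimal map is W = U V^T. Here we take the SVD of Y^T X = M^T instead,
    so the factors come out transposed and W is recovered by one extra transpose.
    """
    U, _, Vt = np.linalg.svd(Y.T @ X)
    W = (U @ Vt).T
    return X @ W

rng = np.random.default_rng(0)
# Toy "visual" space: 48 emojis embedded in 5 dimensions (hypothetical data).
visual = rng.standard_normal((48, 5))
# Toy "semantic" space: the same relational structure under a random rotation,
# mimicking two modalities that encode identical pairwise similarities.
R, _ = np.linalg.qr(rng.standard_normal((5, 5)))
semantic = visual @ R
aligned = procrustes_align(visual, semantic)
print(np.allclose(aligned, semantic, atol=1e-8))  # → True
```

Because the two toy spaces differ only by a rotation, the alignment is exact; with real visual and linguistic embeddings the residual error after alignment would instead measure how well the two conceptual systems correspond.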