Variability in communication contexts determines the convexity of semantic category systems emerging in neural networks

Creative Commons Attribution (CC BY) 4.0 license
Abstract

Artificial neural networks trained with deep-learning methods to solve a simple reference game by optimizing a task-specific utility develop efficient semantic categorization systems that trade off complexity against informativeness, much as the category systems of human languages do. But what kinds of structure in the semantic space give rise to efficient categories, and how are those structures shaped by the contexts of communication? We propose a neural-network model that moves beyond the minimal dyadic setup and show that the emergence of convexity, a property of semantic systems that facilitates this efficiency, depends on the amount of variability in communication contexts across partners. We use an input-representation method based on compositional vector embeddings that achieves higher communication success than standard non-compositional representations and strikes a better balance between preserving the structure of the semantic space and optimizing utility.
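To make the representational contrast concrete, below is a minimal sketch in Python of the two kinds of input codes, under our own illustrative assumptions rather than the paper's setup: objects are bundles of attribute values, a compositional embedding is the sum of per-value vectors, and the non-compositional baseline is an arbitrary one-hot code per whole object. All names, attributes, and parameters here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attribute inventory; the paper's actual feature space differs.
ATTRIBUTES = {"color": ["red", "green", "blue"],
              "shape": ["circle", "square", "triangle"]}
DIM = 16

# One fixed random vector per attribute value, shared across all objects.
value_vecs = {(attr, val): rng.normal(size=DIM)
              for attr, vals in ATTRIBUTES.items() for val in vals}

def compositional_embedding(obj):
    # Object = sum of its attribute-value vectors, so objects sharing
    # values (e.g. two red things) get similar embeddings: the metric
    # structure of the semantic space survives in the input code.
    return sum(value_vecs[(a, v)] for a, v in obj.items())

# Non-compositional baseline: an arbitrary one-hot code per whole object,
# which discards all similarity structure among objects.
all_objects = [{"color": c, "shape": s}
               for c in ATTRIBUTES["color"] for s in ATTRIBUTES["shape"]]
onehot = {tuple(sorted(o.items())): row
          for o, row in zip(all_objects, np.eye(len(all_objects)))}

def noncompositional_embedding(obj):
    return onehot[tuple(sorted(obj.items()))]

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

a = {"color": "red", "shape": "circle"}
b = {"color": "red", "shape": "square"}     # shares "red" with a
c = {"color": "blue", "shape": "triangle"}  # shares nothing with a

# Compositional codes rate a closer to b than to c; one-hot codes rate
# every pair of distinct objects as equally dissimilar (cosine 0).
print(cosine(compositional_embedding(a), compositional_embedding(b)))
print(cosine(compositional_embedding(a), compositional_embedding(c)))
print(cosine(noncompositional_embedding(a), noncompositional_embedding(b)))
print(cosine(noncompositional_embedding(a), noncompositional_embedding(c)))

Because shared attribute values contribute shared vector components, the compositional code keeps similar objects nearby in input space, which is the sense in which such a representation can help preserve semantic structure while task utility is optimized.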
