Computational challenges in explaining communication: How deep the rabbit hole goes

Creative Commons 'BY' version 4.0 license
Abstract

When people are unsure of the intended meaning of a word, they often ask for clarification. One way of doing so, often assumed in models of communication, is to point at a potential target: "Do you mean [points at the rabbit]?" But what if the target is unavailable? Then the only recourse is language itself, which seems equivalent to pulling oneself up from a swamp by one's own hair. We created two computational models of communication, one in which agents can point and one in which they cannot. The latter incorporates inference to resolve the meaning of non-pointing signals. Simulations show that agents in both models reach perceived understanding equally quickly. While this means agents believe they are communicating successfully, non-pointing agents understand each other only at chance level. This shows that state-of-the-art computational explanations have difficulty explaining how people solve the puzzle of underdetermination, and that doing so will require a fundamental leap forward.
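The circularity at the heart of the abstract can be made concrete with a toy simulation. The sketch below is not the paper's model; it is a minimal illustration, with a hypothetical lexicon size and deliberately simple update rules, of why a clarification request routed through language alone carries no grounding information: a listener who paraphrases "do you mean X?" through its own lexicon ends up echoing the very signal it just heard, so the speaker perceives agreement regardless of whether the two lexicons actually match.

```python
# Toy sketch (NOT the paper's model): two agents with private, bijective
# signal -> meaning lexicons. Clarification is language-internal, so the
# perceived-success check is circular while actual understanding stays at
# chance. The lexicon size N and all rules here are hypothetical.
import random

N = 10  # number of signals/meanings (hypothetical)

def random_lexicon():
    """A bijective signal -> meaning mapping: lexicon[signal] = meaning."""
    meanings = list(range(N))
    random.shuffle(meanings)
    return meanings

def speak(lexicon, meaning):
    """Inverse lookup: the signal this agent uses for a given meaning."""
    return lexicon.index(meaning)

random.seed(1)
speaker, listener = random_lexicon(), random_lexicon()

perceived = actual = 0
rounds = 1000
for _ in range(rounds):
    m = random.randrange(N)
    s = speak(speaker, m)                  # speaker encodes intended meaning
    m_heard = listener[s]                  # listener decodes with its own lexicon
    s_clarify = speak(listener, m_heard)   # "do you mean [s_clarify]?"
    # The clarification is interpreted through the speaker's own lexicon.
    # With bijective lexicons, s_clarify == s, so this check always passes:
    # the agents perceive agreement without any grounding.
    perceived += speaker[s_clarify] == m
    actual += m_heard == m                 # did the meaning actually get across?

print(f"perceived understanding: {perceived / rounds:.2f}")  # always 1.00
# Actual understanding is the fraction of signals on which the two random
# lexicons happen to coincide: 1/N in expectation, i.e., chance level.
print(f"actual understanding:    {actual / rounds:.2f}")
```

Pointing would break this circle: a listener who observes the intended referent directly could realign its lexicon entry for the heard signal, so actual alignment would converge. The language-only exchange above leaves actual understanding near chance even as every clarification appears to succeed.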
