eScholarship
Open Access Publications from the University of California

Language captures rich information about perceptibility: Evidence from LLMs and humans

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

Trained on text only, Large Language Models (LLMs) provide a unique way to approach the age-old question of how language captures sensory experiences. Such models have showcased human-level performance in several domains, yet what they capture about the sensory world remains uncertain. We prompted state-of-the-art LLMs (GPT-3.5 and GPT-4), as well as sighted and congenitally blind adults, to judge the likelihood of successful visual and auditory perception in verbal scenarios. Scenarios varied in the distance of the observer from the object (next to it, across the street, a block away), the duration of perception (glance vs. stare), and the properties of the perceived object (e.g., size for vision). Sighted and blind humans produced highly consistent perceptibility judgments, and these correlated strongly with the judgments of GPT-3.5 and GPT-4. GPT-4 showed human-like effects of size, distance, and duration, though both LLMs underestimated humans' ability to perceive. Language captures detailed quantitative information about perceptibility.
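To make the prompting protocol concrete, the sketch below shows one way such perceptibility judgments could be elicited from a chat model. It is a minimal illustration assuming the OpenAI chat completions API; the scenario wording, rating scale, object list, and the judge() helper are hypothetical placeholders, not the authors' actual stimuli or code.

```python
# Illustrative sketch only: prompt wording, scale, and objects are assumptions,
# not the study's materials. Requires the openai package and OPENAI_API_KEY.
from itertools import product

from openai import OpenAI

client = OpenAI()

DISTANCES = ["right next to it", "across the street", "a block away"]
DURATIONS = ["glance at", "stare at"]
OBJECTS = ["a grain of rice", "a coffee mug", "a school bus"]  # varies in size

PROMPT = (
    "Imagine a person standing {distance} from {obj}. They {duration} it. "
    "On a scale from 1 (not at all likely) to 7 (extremely likely), how likely "
    "is it that they can see it? Answer with a single number."
)


def judge(distance: str, duration: str, obj: str, model: str = "gpt-4") -> str:
    """Ask the model for a perceptibility rating of one verbal scenario."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{
            "role": "user",
            "content": PROMPT.format(distance=distance, duration=duration, obj=obj),
        }],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # Cross all scenario factors and print the model's rating for each.
    for distance, duration, obj in product(DISTANCES, DURATIONS, OBJECTS):
        rating = judge(distance, duration, obj)
        print(f"{obj:15s} | {distance:20s} | {duration:9s} -> {rating}")
```

Ratings collected this way could then be compared against human responses to the same scenarios, for example by correlating mean ratings per scenario across the two groups.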
