eScholarship
Open Access Publications from the University of California

Estimating human color-concept associations from multimodal language models

Abstract

People's color-concept associations influence many processes underlying visual cognition, from object recognition to the interpretation of information visualizations. Thus, a key goal in cognitive science is developing efficient methods for estimating color-concept association distributions over color space to model these processes. Here, we investigated the ability of GPT-4, a multimodal large language model, to estimate human-like color-concept associations. We collected human association ratings between 70 concepts spanning abstractness and 71 colors spanning perceptual color space, and compared these ratings to analogous ratings from GPT-4 when it was given concepts as words and colors as hexadecimal codes. GPT-4's ratings correlated with human ratings comparably to state-of-the-art image-based methods. Variation in human-GPT rating correlations across concepts was predicted by concept abstractness, but this effect was superseded by specificity (peakiness; inverse entropy) of color-concept association distributions. Our results highlight the viability of using model-generated color-concept association ratings to better understand human color semantics.
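The abstract describes specificity as the inverse entropy (peakiness) of a concept's association distribution over colors. As a minimal illustrative sketch (not the paper's actual analysis code), the quantity can be computed by normalizing a concept's ratings over the 71 colors into a probability distribution and taking the negative Shannon entropy; the `ratings` values below are placeholder data:

```python
import numpy as np

# Placeholder association ratings for one concept over 71 colors
# (illustrative values, not data from the study).
rng = np.random.default_rng(0)
ratings = rng.random(71)

# Normalize the ratings into a distribution over color space.
p = ratings / ratings.sum()

# Shannon entropy (bits); a peakier distribution has lower entropy.
entropy = -np.sum(p * np.log2(p))

# Specificity in the abstract's sense: inverse (negative) entropy.
specificity = -entropy

print(f"entropy = {entropy:.3f} bits, specificity = {specificity:.3f}")
```

A uniform distribution over 71 colors would have the maximum entropy of log2(71) ≈ 6.15 bits and hence the lowest specificity; a distribution concentrated on a few colors scores higher.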
