
Evaluating human-like similarity biases at every scale in Large Language Models: Evidence from remote and basic-level triads.

Abstract

In the remote triad task, participants judge the relatedness between randomly chosen words in a three-alternative choice design. While most word pairs in these triads are only weakly related, humans nevertheless agree on which pair to choose. This is theoretically interesting because it contradicts previous claims that the notion of similarity is, in principle, unconstrained (e.g., Goodman, 1972). Here, we present new evidence from GPT-4, showing that context-aware LLMs provide excellent predictions of performance in this task. Moreover, the strength of this effect was even larger than that found for basic-level comparisons, which involve highly similar items. Together, these results imply that the similarity of human representations is highly structured at every scale, even in tasks with limited context. A follow-up analysis provides insight into how LLMs succeed at this task. Further implications of the ability to compare words at every scale are discussed.
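
The abstract does not specify the elicitation procedure, but the structure of a triadic judgement is easy to illustrate. Below is a minimal sketch assuming a cosine-similarity baseline over word embeddings; the `embed` function is a hypothetical placeholder, not the paper's GPT-4 method.

```python
from itertools import combinations
import numpy as np

def embed(word: str) -> np.ndarray:
    """Hypothetical placeholder embedding; swap in a real embedding model."""
    # Deterministic toy vector seeded from the word's bytes.
    seed = int.from_bytes(word.encode("utf-8"), "little") % (2**32)
    return np.random.default_rng(seed).normal(size=300)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def triad_choice(words: tuple[str, str, str]) -> tuple[str, str]:
    """Return the pair judged most related among the three possible pairs,
    mirroring the three-alternative choice participants face."""
    pairs = list(combinations(words, 2))
    return max(pairs, key=lambda p: cosine(embed(p[0]), embed(p[1])))

# A "remote" triad: all pairs are weakly related, yet one must be chosen.
print(triad_choice(("anchor", "violin", "harbor")))
```

With real embeddings, the pair with the highest similarity would serve as the model's forced choice, which can then be compared against the modal human response for the same triad.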
