eScholarship
Open Access Publications from the University of California

Large Language Models Show Human-Like Abstract Thinking Patterns: A Construal-Level Perspective

Abstract

This research explores the capacity of Large Language Models (LLMs) for abstract and concrete thought, challenging the common belief that LLMs are incapable of human-like abstract thinking. Drawing on Construal Level Theory (Trope & Liberman, 2010), we demonstrate how prompts tailored to each construal level (abstract versus concrete) influence LLMs' performance on tasks requiring different cognitive approaches. Our key findings are: 1) LLMs exhibit a statistically significant difference in construal level depending on the prompt condition, and 2) LLMs perform better on tasks aligned with the prompted construal level: sentiment analysis under the concrete condition and natural language inference under the abstract condition. This research contributes to the scientific understanding of LLMs and offers practical insights for their effective use in tasks requiring diverse cognitive capabilities.
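As a rough illustration of the prompting setup the abstract describes, the sketch below builds abstract- versus concrete-condition prompts for the two tasks. The prompt wording, function names, and example inputs are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of construal-level prompt conditions.
# The induction phrasing below is an assumption, not the paper's actual prompts.
CONSTRUAL_PREFIXES = {
    "abstract": "Think about why this matters, focusing on the big picture.\n",
    "concrete": "Think about how this happens, in specific step-by-step detail.\n",
}

def build_prompt(construal: str, task_input: str) -> str:
    """Prepend a construal-level induction to a task input."""
    if construal not in CONSTRUAL_PREFIXES:
        raise ValueError(f"unknown construal level: {construal}")
    return CONSTRUAL_PREFIXES[construal] + task_input

# Per the findings, concrete prompts suit sentiment analysis and
# abstract prompts suit natural language inference (NLI).
sentiment_prompt = build_prompt(
    "concrete", "Classify the sentiment: 'The movie was dull.'"
)
nli_prompt = build_prompt(
    "abstract",
    "Premise: 'A man is playing guitar.' "
    "Hypothesis: 'Someone is making music.' "
    "Does the premise entail the hypothesis?",
)
```

The resulting strings would then be sent to an LLM under each condition and the responses scored per task.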
