Pragmatic Reasoning in GPT Models: Replication of a Subtle Negation Effect

This work is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) license.
Abstract

This study explores whether Large Language Models (LLMs) can mimic human cognitive processes, particularly pragmatic reasoning in language processing. Focusing on how humans tend to offer semantically similar alternatives in response to negated statements, the research examines whether LLMs, both base and fine-tuned, exhibit this behavior. The experiment involves a cloze task in which the models provide completions to negative sentences. Findings reveal that chat models closely resemble human behavior, whereas completion models align less closely with human responses. This indicates that the statistics of linguistic input alone might be inadequate for LLMs to develop behaviors consistent with pragmatic reasoning; instead, conversational fine-tuning appears to enable these models to adopt such behaviors. This research not only sheds light on LLMs' capabilities but also prompts further inquiry into language acquisition, especially the role of conversational interactions in developing pragmatic reasoning.
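
As a rough illustration of the kind of cloze contrast described above, the Python sketch below queries both a completion-style and a chat-tuned GPT model with a negated sentence stem. This is not the authors' code: the model names, the example stem, and the prompt wording are all assumptions for illustration.

# Illustrative sketch of the cloze contrast described in the abstract.
# Assumptions: the openai Python package (>= 1.0), an API key in
# OPENAI_API_KEY, and these particular model names and example stem.
from openai import OpenAI

client = OpenAI()

stem = "The door is not open, it is"  # negated cloze stem (invented example)

# Base completion model: continues the raw text directly.
base = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumed completion-style model
    prompt=stem,
    max_tokens=5,
    temperature=0,
)
print("completion model:", base.choices[0].text.strip())

# Conversationally fine-tuned chat model: answers as a dialogue turn.
chat = client.chat.completions.create(
    model="gpt-4",  # assumed chat-tuned model
    messages=[{"role": "user", "content": f"Complete the sentence: {stem}"}],
    max_tokens=5,
    temperature=0,
)
print("chat model:", chat.choices[0].message.content.strip())

A human-like, pragmatically licensed response here would be a semantically related alternative such as "closed", rather than an unrelated continuation.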
