LLMs Don’t “Do Things with Words” but Their Lack of Illocution Can Inform the Study of Human Discourse

Abstract

Despite the long-standing theoretical importance of illocutionary force in communication (Austin, 1975), its quantitative measurement has remained elusive. The present study measures the influence of illocutionary force on the degree to which subreddit community members preserve the concepts and ideas of earlier comments when replying to them. We leverage an information-theoretic framework implementing a measure of linguistic convergence to capture how much of a previous comment can be recovered from its replies. To isolate the effect of illocutionary force, we then ask a large language model (LLM) to write a reply to the same previous comment as though it were a member of that subreddit community. Because LLMs produce plausible utterances while inherently lacking illocutionary intent, they can function as a useful control for testing the contribution of illocutionary intent to the language of human-generated comments. We find that LLM replies indeed exhibit statistically significantly lower entropy with respect to prior comments than human replies to the same comments. While this result says little about LLMs themselves, given how they are trained, the difference offers a quantitative baseline for assessing the effect of illocutionary force on the flow of information in online discourse.
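To make the convergence idea concrete, the sketch below estimates the conditional cross-entropy of a prior comment given a reply under an off-the-shelf language model: lower values mean more of the prior comment is recoverable from the reply, i.e., greater convergence. The model choice (gpt2), the Hugging Face transformers interface, the example comments, and the simple concatenation-based conditioning are illustrative assumptions, not the framework actually used in the study.

```python
# Hypothetical sketch: estimating how much of a prior comment is "recoverable"
# from a reply, via conditional cross-entropy under a pretrained language model.
# The model (gpt2) and conditioning scheme are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def cross_entropy_of(target: str, context: str = "") -> float:
    """Mean negative log-likelihood (nats/token) of `target`, optionally conditioned on `context`."""
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    if context:
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : ctx_ids.shape[1]] = -100  # ignore context positions; score only the target
    else:
        input_ids = tgt_ids
        labels = tgt_ids.clone()
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over scored tokens
    return loss.item()

# Toy example comments (invented for illustration).
prior = "I think the new patch actually fixed the lag issues."
human_reply = "Agreed, the lag is gone for me since the patch."
llm_reply = "The patch seems to have resolved the latency problems."

# Lower conditional cross-entropy => more of the prior comment is recoverable from the reply.
for name, reply in [("human", human_reply), ("llm", llm_reply)]:
    print(name, cross_entropy_of(prior, context=reply))
```

In this sketch, comparing the conditional cross-entropy obtained from human replies against that obtained from LLM-generated replies to the same prior comments would play the role of the human-versus-LLM contrast described in the abstract.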
