eScholarship
Open Access Publications from the University of California

Emergent Mental Lexicon Functions in ChatGPT

Creative Commons 'BY' version 4.0 license
Abstract

Traditional theories of the human mental lexicon posit dedicated processing mechanisms that develop as sustained functions of brain and mind. Large Language Models (LLMs) provide a new approach in which lexical functions emerge from the learning and processing of sequences in contexts. We prompted lexical functions in ChatGPT and compared its numeric responses with averaged human data for a sample of 390 words across a range of lexical variables, some derived from corpus analyses and some from Likert ratings. ChatGPT responses were moderately to highly correlated with the human mean values, more so for GPT-4 than for GPT-3.5, and responses were sensitive to context and to human inter-rater reliability. We argue that responses were not recalled from memorized training data but were instead soft-assembled from more general-purpose representations. Emergent functions in LLMs offer a new approach to modeling language and cognitive processes.
