Do Large Language Models know who did what to whom?
Abstract
Large Language Models (LLMs), which match or exceed human performance on many linguistic tasks, are nonetheless commonly criticized for not “understanding” language. These critiques are hard to evaluate because they conflate “understanding” with reasoning and common sense—abilities that, in human minds, are dissociated from language processing per se. Here, we instead focus on a form of understanding that is tightly linked to language: mapping sentence structure onto an event description of “who did what to whom” (thematic roles). Whereas LLMs can be directly trained to solve this task, we asked whether they naturally learn to extract such information during their regular, unsupervised training on word prediction. In two experiments, we evaluated sentence representations in two commonly used LLMs—BERT and GPT-2. Experiment 1 tested hidden representations distributed across all hidden units, and found an unexpected pattern: sentence pairs that had opposite (reversed) agent and patient, but shared syntax, were represented as more similar than pairs that shared the same agent and same patient, but differed in syntax. In contrast, human similarity judgments were driven by thematic role assignment. Experiment 2 asked whether thematic role information was localized to a subset of units and/or to attention heads. We found little evidence that this information was available in hidden units (with one exception). However, we found attention heads that reflected thematic roles independent of syntax. Therefore, some components within LLMs capture thematic roles, but such information exerts a much weaker influence on their sentence representations compared to its influence on human judgments.
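The comparison summarized above can be approximated with off-the-shelf tools, though the sketch below is illustrative only and is not the authors' analysis pipeline. It extracts final-layer hidden states from a pretrained BERT model for a base sentence, a role-reversed counterpart with the same syntax, and a passive paraphrase that preserves thematic roles but changes syntax, then compares cosine similarities. The model checkpoint, the example sentence triplet, and the mean-pooling step are assumptions made for this example.

```python
# Illustrative sketch (not the paper's code): compare hidden-state similarity
# for sentence pairs that either reverse thematic roles while sharing syntax,
# or preserve roles while changing syntax (active vs. passive).
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT- or GPT-2-style checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Mean-pool the final-layer hidden states over all tokens (one of many possible choices)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)

base = "The dog chased the cat."                  # agent = dog, patient = cat
role_reversed = "The cat chased the dog."         # roles swapped, syntax shared
passive = "The cat was chased by the dog."        # roles preserved, syntax changed

cos = torch.nn.functional.cosine_similarity
emb_base, emb_rev, emb_pas = map(sentence_embedding, (base, role_reversed, passive))

print("same syntax, reversed roles :", cos(emb_base, emb_rev, dim=0).item())
print("different syntax, same roles:", cos(emb_base, emb_pas, dim=0).item())
```

Under the pattern reported in Experiment 1, the first similarity would tend to exceed the second, whereas human judgments pattern the opposite way.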