Narrative flow quantification of autobiographies reveals signatures of memory retrieval

Abstract

The development of large language models (LLMs) enables the investigation of cognitive phenomena at an unprecedented scale. We applied LLM-derived measures to large narrative datasets to characterize the structure and dynamics of memory retrieval. Specifically, we found that autobiographical narratives flow less linearly from sentence to sentence than biographical narratives. Furthermore, the treatment of topics within biographies tends to be more coherent, and biographies are written at a higher level of complexity than autobiographies. In summary, the differences in narrative flow suggest that when authors rely on their own memory, retrieval proceeds in a less organized manner, likely reflecting spontaneous cueing of associated memories. Our results demonstrate the utility of applying LLMs to narrative text to study cognitive phenomena.
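To make the notion of sentence-to-sentence narrative flow concrete, the sketch below shows one simple way such a measure could be computed with sentence embeddings: cosine similarity between consecutive sentences, where lower average similarity indicates less linear flow. This is an illustrative assumption only; the model name, library, and metric here are not taken from the paper, whose LLM-derived measures may be defined differently.

```python
# Illustrative sketch: a rough proxy for sentence-to-sentence narrative flow
# using sentence embeddings. Model choice and metric are assumptions, not the
# paper's actual method.
from sentence_transformers import SentenceTransformer
import numpy as np


def narrative_flow(sentences, model_name="all-MiniLM-L6-v2"):
    """Return cosine similarities between consecutive sentence embeddings.

    Lower average similarity suggests the narrative moves less linearly
    from one sentence to the next.
    """
    model = SentenceTransformer(model_name)
    emb = model.encode(sentences, normalize_embeddings=True)
    # With unit-normalized embeddings, the dot product equals cosine similarity.
    return np.sum(emb[:-1] * emb[1:], axis=1)


# Example usage on a short autobiographical-style passage.
text = [
    "I grew up near the coast.",
    "Every summer we collected shells after the storms.",
    "That reminds me of my first bicycle, bright red and too big for me.",
]
print(narrative_flow(text))
```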
