
Recovering Mental Representations from Large Language Models with Markov Chain Monte Carlo

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

Simulating sampling algorithms with people has proven to be a useful method for efficiently probing and understanding their mental representations. We propose that the same methods can be used to study the representations of Large Language Models (LLMs). While one can always directly prompt either humans or LLMs to disclose their mental representations introspectively, we show that increased efficiency can be achieved by using LLMs as elements of a sampling algorithm. We explore the extent to which we recover human-like representations when LLMs are interrogated with Direct Sampling and Markov chain Monte Carlo (MCMC). We find a significant increase in efficiency and performance when using adaptive sampling algorithms based on MCMC. We also highlight the potential of our approach to yield a more general method of conducting Bayesian inference with LLMs.
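
The sketch below illustrates, in minimal form, how the MCMC-with-people paradigm the abstract builds on could transfer to an LLM: each step proposes a new stimulus and asks a chooser to pick between the current state and the proposal, and if choices follow a Barker (Luce-choice) rule, the chain's stationary distribution matches the chooser's underlying representation. This is an illustrative sketch, not the authors' implementation; the choose function is a stand-in for prompting an LLM to select between two stimuli, and the toy Gaussian target, step size, and function names are all assumptions.

import math
import random

def target_log_prob(x: float) -> float:
    """Toy stand-in for the 'mental' distribution being recovered
    (a standard normal). Unknown to the sampler in the real setting."""
    return -0.5 * x * x

def choose(current: float, proposal: float) -> float:
    """Barker-style binary choice: accept the proposal with probability
    p(proposal) / (p(proposal) + p(current)). In the real method this
    would be an LLM prompted to pick the more typical of two stimuli."""
    p_prop = math.exp(target_log_prob(proposal))
    p_curr = math.exp(target_log_prob(current))
    return proposal if random.random() < p_prop / (p_prop + p_curr) else current

def mcmc_with_chooser(n_steps: int = 10_000, step_size: float = 1.0) -> list[float]:
    """Run a Markov chain whose transitions are binary choices.
    Its samples converge to the chooser's representation."""
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step_size)  # symmetric proposal
        x = choose(x, proposal)                      # accept/reject via choice
        samples.append(x)
    return samples

if __name__ == "__main__":
    samples = mcmc_with_chooser()
    mean = sum(samples) / len(samples)
    print(f"sample mean = {mean:.3f} (target 0)")

Because the Barker rule satisfies detailed balance with respect to p, no explicit density evaluations are needed from the chooser, only pairwise preferences, which is what makes the paradigm applicable to humans and LLMs alike.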
