
Approximate Inference through Sequential Measurements of Likelihoods Accounts for Hick's Law

Abstract

In Bayesian categorization, exactly computing likelihoods and posteriors might be hard for humans. We propose an approximate inference framework inspired by Bayesian quadrature and Thompson sampling. An agent can pay a fixed cost to make a noisy measurement of the likelihood of one category. By sequentially making measurements, the agent refines their beliefs over the likelihoods. When the agent stops measuring and chooses a category, they get rewarded for being correct; the agent chooses the category that maximizes the probability of being correct. To decide whether to make another measurement, the agent simulates one measurement for each category. If any of the gains in expected reward exceeds the cost, they make a real measurement corresponding to the simulation with the largest gain. We find that the average number of measurements grows approximately logarithmically with the number of categories, reminiscent of Hick's law. Furthermore, our model makes predictions for decision confidence among multiple alternatives.
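The abstract describes a concrete decision loop: maintain beliefs over each category's likelihood, simulate one measurement per category, make a real measurement only when the best simulated gain in expected reward exceeds the measurement cost, and otherwise commit to the category most likely to be correct. The sketch below illustrates one way such a loop could be implemented. The Gaussian beliefs over log-likelihoods, the Gaussian measurement noise, the Monte Carlo estimate of the probability of being correct, and all parameter values are illustrative assumptions, not the authors' model.

```python
# A minimal sketch of the sequential-measurement loop described in the abstract.
# Assumptions (not from the paper): Gaussian beliefs over each category's
# log-likelihood, Gaussian measurement noise, and a Monte Carlo estimate of the
# probability of choosing correctly.
import numpy as np

rng = np.random.default_rng(0)


def prob_correct(mu, var, n_samples=400):
    """For each category, estimate the probability that it has the largest
    log-likelihood by sampling from the agent's Gaussian beliefs."""
    samples = rng.normal(mu, np.sqrt(var), size=(n_samples, len(mu)))
    winners = np.argmax(samples, axis=1)
    return np.bincount(winners, minlength=len(mu)) / n_samples


def run_trial(true_loglik, meas_noise=1.0, cost=0.02, prior_var=4.0,
              n_sim=5, max_steps=50):
    """Measure noisy log-likelihoods one category at a time until no simulated
    measurement is worth its cost, then return (choice, number of measurements)."""
    k = len(true_loglik)
    mu = np.zeros(k)               # belief means over log-likelihoods
    var = np.full(k, prior_var)    # belief variances
    n_meas = 0

    for _ in range(max_steps):
        current_reward = prob_correct(mu, var).max()

        # Simulate one hypothetical measurement per category and estimate the
        # expected gain in reward (probability correct) from making it.
        gains = np.empty(k)
        for i in range(k):
            post_var_i = 1.0 / (1.0 / var[i] + 1.0 / meas_noise**2)
            sim_rewards = []
            for _ in range(n_sim):
                # Draw a simulated outcome from the current predictive distribution.
                y = rng.normal(mu[i], np.sqrt(var[i] + meas_noise**2))
                post_mu_i = post_var_i * (mu[i] / var[i] + y / meas_noise**2)
                mu_sim, var_sim = mu.copy(), var.copy()
                mu_sim[i], var_sim[i] = post_mu_i, post_var_i
                sim_rewards.append(prob_correct(mu_sim, var_sim).max())
            gains[i] = np.mean(sim_rewards) - current_reward

        if gains.max() <= cost:
            break  # no simulated measurement is worth its cost; stop and choose

        # Make a real measurement of the most promising category and update beliefs.
        i = int(np.argmax(gains))
        y = rng.normal(true_loglik[i], meas_noise)
        post_var = 1.0 / (1.0 / var[i] + 1.0 / meas_noise**2)
        mu[i] = post_var * (mu[i] / var[i] + y / meas_noise**2)
        var[i] = post_var
        n_meas += 1

    choice = int(np.argmax(prob_correct(mu, var)))
    return choice, n_meas


# Under these assumptions, the mean number of measurements should grow roughly
# logarithmically with the number of categories, echoing Hick's law.
for k in (2, 4, 8):
    counts = [run_trial(rng.normal(size=k))[1] for _ in range(10)]
    print(f"{k} categories -> mean measurements: {np.mean(counts):.1f}")
```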
