In Bayesian categorization, exactly computing likelihoods and posteriors may be difficult for humans. We propose an approximate inference framework inspired by Bayesian quadrature and Thompson sampling. An agent can pay a fixed cost to make a noisy measurement of the likelihood of one category. By sequentially making measurements, the agent refines their beliefs over the likelihoods. When the agent stops measuring and chooses a category, they get rewarded for being correct; the agent chooses the category that maximizes probability correct. To decide whether to make another measurement, the agent simulates one measurement for each category. If any of the gains in expected reward exceeds the cost, they make a real measurement corresponding to the simulation with the largest gain. We find that the average number of measurements grows approximately logarithmically with the number of categories, reminiscent of Hick's law. Furthermore, our model makes predictions for decision confidence among multiple alternatives.
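The stopping rule described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes Gaussian beliefs over per-category log-likelihoods, a unit reward for a correct choice, and hypothetical parameter values (`meas_var`, `cost`, `prior_var`), with probability correct estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_correct(mu, var, n_samples=2000):
    """Monte Carlo estimate of P(correct) if the agent commits now:
    the agent picks the category with the highest posterior mean and is
    correct when that category's log-likelihood is in fact the largest."""
    draws = rng.normal(mu, np.sqrt(var), size=(n_samples, len(mu)))
    choice = np.argmax(mu)
    return np.mean(np.argmax(draws, axis=1) == choice)

def update(mu_k, var_k, y, meas_var):
    """Conjugate Gaussian update of the belief about one log-likelihood."""
    post_var = 1.0 / (1.0 / var_k + 1.0 / meas_var)
    post_mu = post_var * (mu_k / var_k + y / meas_var)
    return post_mu, post_var

def run_agent(true_loglik, meas_var=1.0, cost=0.01, prior_var=4.0, max_steps=200):
    n = len(true_loglik)
    mu = np.zeros(n)             # prior means over log-likelihoods
    var = np.full(n, prior_var)  # prior variances
    n_meas = 0
    for _ in range(max_steps):
        base = prob_correct(mu, var)
        gains = np.empty(n)
        for k in range(n):
            # Simulate one hypothetical measurement of category k from the
            # current predictive distribution, then score the updated beliefs.
            y_sim = rng.normal(mu[k], np.sqrt(var[k] + meas_var))
            mu_sim, var_sim = mu.copy(), var.copy()
            mu_sim[k], var_sim[k] = update(mu[k], var[k], y_sim, meas_var)
            gains[k] = prob_correct(mu_sim, var_sim) - base
        if gains.max() <= cost:
            break  # no simulated measurement is worth its cost: stop and choose
        k_star = int(np.argmax(gains))
        y_real = true_loglik[k_star] + rng.normal(0.0, np.sqrt(meas_var))
        mu[k_star], var[k_star] = update(mu[k_star], var[k_star], y_real, meas_var)
        n_meas += 1
    choice = int(np.argmax(mu))
    confidence = prob_correct(mu, var)  # decision confidence at commitment
    return choice, confidence, n_meas

# Example: four categories with hypothetical true log-likelihoods.
choice, confidence, n_meas = run_agent(np.array([0.2, 1.0, -0.5, 0.1]))
print(choice, round(confidence, 3), n_meas)
```

Running such an agent over a range of set sizes would give the measurement counts whose growth with the number of categories is reported above; the returned `confidence` is the quantity the model uses for multi-alternative decision confidence.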