Planning to plan: a Bayesian model for optimizing the depth of decision tree search

Creative Commons Attribution (CC BY) 4.0 license
Abstract

Planning, the process of evaluating the future consequences of actions, is typically formalized as search over a decision tree. This procedure increases expected rewards but is computationally expensive. Past attempts to understand how people mitigate the costs of planning have appealed to heuristics or to the accumulation of prior experience, both of which become intractable in novel, high-complexity tasks. In this work, we propose a normative framework for optimizing the depth of tree search. Specifically, we model a metacognitive process, via Bayesian inference, that computes the optimal planning depth. We show that our model makes sensible predictions over a range of parameters without relying on retrospection. Integrating past experiences into the model produces results consistent with the transition from goal-directed to habitual behavior over time, as well as with the uncertainty associated with prospective and retrospective estimates. Finally, we derive an online variant of our model that replicates these results.
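
To make the core idea concrete, the sketch below illustrates one way a depth-selection rule of this kind could work: the expected reward gain from searching to each depth, with uncertainty expressed as priors over the gain curve's parameters, is weighed against the computational cost of expanding the tree, and the depth with the highest expected net value is chosen. This is a minimal, hypothetical illustration; the gain curve, priors, and cost model are assumptions for exposition and are not the paper's actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_gain(depth, gain_scale, discount):
        # Diminishing returns: each extra level of search adds less expected reward.
        return gain_scale * (1.0 - discount ** depth)

    def planning_cost(depth, branching=3, cost_per_node=0.05):
        # Cost grows with the number of nodes expanded in a tree of this branching factor.
        nodes = (branching ** (depth + 1) - 1) // (branching - 1)
        return cost_per_node * nodes

    def optimal_depth(max_depth=8, n_samples=5_000):
        # Uncertainty about how useful deeper search will be, expressed as priors
        # over the gain curve's parameters (both priors are illustrative assumptions).
        gain_scale = rng.gamma(shape=5.0, scale=2.0, size=n_samples)
        discount = rng.beta(6.0, 4.0, size=n_samples)

        depths = np.arange(max_depth + 1)
        # Expected net value of committing to each depth, averaged over the prior.
        net = np.array([
            np.mean(expected_gain(d, gain_scale, discount)) - planning_cost(d)
            for d in depths
        ])
        return int(depths[np.argmax(net)]), net

    if __name__ == "__main__":
        d_star, net = optimal_depth()
        print(f"best depth: {d_star}")
        print("net value by depth:", np.round(net, 2))

Under these assumptions, the net-value curve rises while deeper search still promises meaningful reward gains and falls once the exponential cost of expanding further dominates, so the selected depth shifts with the branching factor, the per-node cost, and the prior over expected gains.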
