Computational Models of Learning and Hierarchy
The aim of this thesis is to develop precise computational models of how humans create and use hierarchical representations when solving complex problems. In the process, the thesis also seeks to understand human learning more generally, and examines the method of computational modeling itself. The main result of the thesis is that hierarchical reinforcement learning (the layering of multiple reinforcement-learning processes at different levels of abstraction) provides a precise and comprehensive model of human behavior in complex tasks, and promises to explain how hierarchical representations can be created through interaction with a problem. Our investigation of human learning shows that learning proceeds differently at different ages, and suggests that different stages of life may be optimized to solve different problems. Our investigation of computational modeling reveals that even though computational models are powerful tools for compressing complex datasets into a small number of model parameters, these parameters are not generic and task-independent, as is commonly assumed. Instead, model parameters should be interpreted as maximally compact behavioral measures that are fundamentally tied to task context.
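The layering of reinforcement-learning processes mentioned above can be sketched as a minimal two-level Q-learning agent. Everything below is an illustrative assumption, not material from the thesis: a toy 1-D corridor task, two hand-chosen subgoal "options", and arbitrary learning parameters. The high level learns which subgoal to pursue; a separate low-level learner per subgoal learns the primitive actions that reach it.

```python
import random

# Toy corridor: cells 0..9, start at cell 0, final goal at cell 9.
# SUBGOALS and all hyperparameters are illustrative choices, not from the thesis.
N_STATES = 10
SUBGOALS = [4, 9]      # high-level options: "navigate to cell g"
ACTIONS = [-1, +1]     # primitive actions: step left or right

def step(state, action):
    """Apply a primitive action, clipped to the corridor."""
    return max(0, min(N_STATES - 1, state + action))

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Low level: one Q-table per subgoal over (state, primitive action).
    q_low = {g: [[0.0] * len(ACTIONS) for _ in range(N_STATES)] for g in SUBGOALS}
    # High level: Q-table over (state, option).
    q_high = [[0.0] * len(SUBGOALS) for _ in range(N_STATES)]

    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # High-level epsilon-greedy choice among options whose subgoal
            # differs from the current state (avoids zero-length options).
            avail = [i for i in range(len(SUBGOALS)) if SUBGOALS[i] != s]
            gi = (rng.choice(avail) if rng.random() < eps
                  else max(avail, key=lambda i: q_high[s][i]))
            g = SUBGOALS[gi]
            s0, disc, r_total = s, 1.0, 0.0
            # Low-level learner runs until its subgoal is reached.
            while s != g:
                ai = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                      else max(range(len(ACTIONS)), key=lambda i: q_low[g][s][i]))
                s2 = step(s, ACTIONS[ai])
                r = -1.0                              # per-step cost
                pseudo = 10.0 if s2 == g else 0.0     # pseudo-reward at the subgoal
                q_low[g][s][ai] += alpha * (
                    r + pseudo + gamma * max(q_low[g][s2]) - q_low[g][s][ai])
                r_total += disc * r                   # external reward only
                disc *= gamma
                s = s2
            # High-level update over the whole option (SMDP-style Q-learning).
            q_high[s0][gi] += alpha * (
                r_total + disc * max(q_high[s]) - q_high[s0][gi])
    return q_high, q_low

q_high, q_low = train()
```

The key design point the sketch illustrates is the separation of timescales: the low level updates after every primitive step using its subgoal's pseudo-reward, while the high level updates once per completed option using the accumulated external reward.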