- Colas, Jaron;
- Dundon, Neil;
- Gerraty, Raphael;
- Saragosa-Harris, Natalie;
- Szymula, Karol;
- Tanwisuth, Koranis;
- Tyszka, J.;
- van Geen, Camilla;
- Ju, Harang;
- Toga, Arthur;
- Gold, Joshua;
- Bassett, Dani;
- Hartley, Catherine;
- Shohamy, Daphna;
- Grafton, Scott;
- O'Doherty, John
The model-free algorithms of reinforcement learning (RL) have gained clout across disciplines, but so too have model-based alternatives. The present study emphasizes other dimensions of this model space in consideration of associative or discriminative generalization across states and actions. This generalized reinforcement learning (GRL) model, a frugal extension of RL, parsimoniously retains the single reward-prediction error (RPE), but the scope of learning goes beyond the experienced state and action. Instead, the generalized RPE is efficiently relayed for bidirectional counterfactual updating of value estimates for other representations. Aided by structural information but as an implicit rather than explicit cognitive map, GRL provided the most precise account of human behavior and individual differences in a reversal-learning task with hierarchical structure that encouraged inverse generalization across both states and actions. Reflecting inference that could be true, false (i.e., overgeneralization), or absent (i.e., undergeneralization), state generalization distinguished those who learned well more so than action generalization. With high-resolution high-field fMRI targeting the dopaminergic midbrain, the GRL model's RPE signals (alongside value and decision signals) were localized within not only the striatum but also the substantia nigra and the ventral tegmental area, including specific effects of generalization that also extended to the hippocampus. Factoring in generalization as a multidimensional process in value-based learning, these findings shed light on complexities that, while challenging classic RL, can still be resolved within the bounds of its core computations.
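For concreteness, the sketch below illustrates one way a GRL-style update of this kind could be written. The abstract gives no equations, so the function `grl_update`, the learning rate `alpha`, and the generalization weights `g_state` and `g_action` are illustrative assumptions rather than the authors' specification; the key idea it mirrors is that a single RPE from the experienced state-action pair is relayed, with inverted sign, to counterfactually update the values of unexperienced states and actions. Setting both generalization weights to zero recovers a plain model-free RL update.

```python
import numpy as np

def grl_update(V, state, action, reward, alpha, g_state, g_action):
    """One learning step of a hypothetical GRL-style update.

    V        : (n_states, n_actions) array of value estimates
    alpha    : learning rate
    g_state  : state-generalization weight (0 = no state generalization)
    g_action : action-generalization weight (0 = no action generalization)
    """
    # Single reward-prediction error, as in classic model-free RL.
    rpe = reward - V[state, action]

    # Direct update of the experienced state-action value.
    V[state, action] += alpha * rpe

    # Counterfactual, inverse updates: the same RPE is relayed with a
    # negative sign to unexperienced states and actions, scaled by the
    # generalization weights (an assumed functional form, not the
    # authors' exact model).
    for s in range(V.shape[0]):
        if s != state:
            V[s, action] -= g_state * alpha * rpe
    for a in range(V.shape[1]):
        if a != action:
            V[state, a] -= g_action * alpha * rpe

    return rpe

# Example: two states x two actions, as in a simple reversal-learning task.
V = np.zeros((2, 2))
grl_update(V, state=0, action=1, reward=1.0, alpha=0.3, g_state=0.5, g_action=0.2)
```

In this sketch, overgeneralization would correspond to a weight that is too large for the task's true structure, and undergeneralization to a weight near zero, loosely paralleling the true/false/absent inference distinction drawn in the abstract.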