eScholarship
Open Access Publications from the University of California

n-task Learning: Solving Multiple or Unknown Numbers of Reinforcement Learning Problems

Abstract

Temporal difference (TD) learning models can perform poorly when the optimal policy cannot be determined solely by sensory input. Converging evidence from studies of working memory suggests that humans form abstract mental representations that align with significant features of a task, allowing such conditions to be overcome. The n-task learning algorithm (nTL) extends TD models by utilizing abstract representations to form multiple policies based around a common set of external inputs. These external inputs are combined conjunctively with an abstract input that comes to represent attention to a task. nTL is used to solve a dynamic categorization problem that is marked by frequently alternating tasks. The correct number of tasks is learned, as well as when to switch from one task representation to another, even when inputs are identical across all tasks. Task performance is shown to be optimal only when an appropriate number of abstract representations is used.
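The core idea described above can be illustrated with a toy sketch: a tabular TD learner whose state is the conjunction of an abstract task unit and the external input, with error-driven switching between task units. This is a minimal illustration under assumptions of my own, not the paper's implementation; all names, parameters (learning rate, switching threshold), and the two-task environment (one task rewards matching the input, the other rewards mismatching it) are hypothetical.

```python
import random

def run_ntl_sketch(n_tasks=2, episodes=4000, seed=0):
    """Toy sketch of the nTL idea (hypothetical, not the paper's code):
    Q-values are keyed conjunctively by (abstract task unit, external input),
    and the agent switches task units when recent reward collapses."""
    rng = random.Random(seed)
    q = {}  # (task_unit, obs, action) -> estimated value
    def qv(t, s, a):
        return q.get((t, s, a), 0.0)

    alpha, eps = 0.2, 0.1       # learning rate, exploration rate (assumed)
    task_unit = 0               # currently attended abstract representation
    hidden_task = 0             # environment's latent task, unseen by agent
    reward_trace = 1.0          # running average of recent reward
    total = correct_late = 0
    for ep in range(episodes):
        if ep % 200 == 0 and ep > 0:
            hidden_task = 1 - hidden_task  # tasks alternate; inputs identical
        obs = rng.randrange(2)
        # epsilon-greedy action over the conjunctive (task_unit, obs) state
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda x: qv(task_unit, obs, x))
        # task 0 rewards a == obs, task 1 rewards a != obs
        r = 1.0 if ((a == obs) == (hidden_task == 0)) else 0.0
        # one-step TD update on the conjunctive state
        q[(task_unit, obs, a)] = qv(task_unit, obs, a) + alpha * (r - qv(task_unit, obs, a))
        # error-driven switching: sustained low reward -> attend another unit
        reward_trace = 0.9 * reward_trace + 0.1 * r
        if reward_trace < 0.4:
            task_unit = (task_unit + 1) % n_tasks
            reward_trace = 1.0
        # score accuracy over the second half, after initial learning
        if ep >= episodes // 2:
            total += 1
            correct_late += r
    return correct_late / total
```

With two abstract units (matching the two latent tasks) each unit settles on one mapping and switching costs only a brief transient after each alternation; with a single unit the same Q-entries must be relearned every block, which is the abstract's point that performance depends on using an appropriate number of abstract representations.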
