
Too many cooks: Coordinating multi-agent collaboration through inverse planning

Abstract

Collaboration requires agents to coordinate their behavior on the fly, sometimes cooperating to solve a single task together and other times dividing it up into sub-tasks to work on in parallel. Underlying the human ability to collaborate is theory-of-mind, the ability to infer the hidden mental states that drive others to act. Here, we develop Bayesian Delegation, a decentralized multi-agent learning mechanism with these abilities. Bayesian Delegation enables agents to rapidly infer the hidden intentions of others by inverse planning. These inferences enable agents to flexibly decide in the absence of communication when to cooperate on the same sub-task and when to work on different sub-tasks in parallel. We test this model in a suite of multi-agent Markov decision processes inspired by cooking problems. To succeed, agents must coordinate both their high-level plans (e.g., what sub-task they should work on) and their low-level actions (e.g., avoiding collisions). Bayesian Delegation bridges these two levels and rapidly aligns agents' beliefs about who should work on what. Finally, we tested Bayesian Delegation in a behavioral experiment where participants made sub-task inferences from sparse observations of cooperative behavior. Bayesian Delegation outperformed heuristic models and was closely aligned with human judgments.
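
At a high level, the mechanism the abstract describes can be read as Bayesian inference over possible assignments of agents to sub-tasks, where each assignment is scored by how well the actions observed so far match a rational plan for it. The sketch below is illustrative only, not the paper's implementation: it assumes a soft-max (Boltzmann-rational) action likelihood, and the inputs (prior, q_values, observed_actions) and the helper name bayesian_delegation_update are hypothetical.

import numpy as np

def bayesian_delegation_update(prior, observed_actions, q_values, beta=1.0):
    """Sketch of a Bayesian-Delegation-style posterior update over sub-task allocations.

    prior:            array of shape (num_allocations,), belief over ways of
                      assigning agents to sub-tasks.
    observed_actions: list of (agent, action) pairs observed at this step.
    q_values:         q_values[k][agent] is a dict mapping each action to its
                      value under the plan implied by allocation k.
    beta:             rationality parameter of the soft-max likelihood.
    """
    posterior = prior.astype(float).copy()
    for k in range(len(prior)):
        likelihood = 1.0
        for agent, action in observed_actions:
            qs = q_values[k][agent]
            logits = beta * np.array(list(qs.values()))
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()  # Boltzmann-rational action distribution under allocation k
            likelihood *= probs[list(qs.keys()).index(action)]
        posterior[k] *= likelihood  # weight allocation k by how well it explains the observations
    return posterior / posterior.sum()

Under this kind of scheme, each agent would then commit to the sub-task it is assigned under the most probable allocation, re-running the update as new actions are observed; whether the paper uses exactly this decision rule is not stated in the abstract.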
