
Learning with friends and foes

Abstract

Social agents, both human and computational, that inhabit a world containing multiple active agents need to coordinate their activities. Because agents share resources, without proper coordination or "rules of the road" each agent will interfere with the plans of the others. We therefore need coordination schemes that allow agents to achieve their local goals effectively without adversely affecting the problem-solving capabilities of other agents. Researchers in the field of Distributed Artificial Intelligence (DAI) have developed a variety of coordination schemes under different assumptions about agent capabilities and relationships. Whereas some of this research has been motivated by human cognitive biases, other work has approached coordination as an engineering problem of designing the most effective architecture or protocol. We propose reinforcement learning as a coordination mechanism that imposes little cognitive burden on agents. More interestingly, we show that a uniform learning mechanism suffices for coordination in both cooperative and adversarial situations. Using an example block-pushing problem domain, we demonstrate that agents can use reinforcement learning algorithms, without explicit information sharing, to develop effective policies for coordinating their actions both with agents acting in unison and with agents acting in opposition.
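The abstract itself contains no code, but the central idea, that the same independent learning rule handles both friends and foes, is easy to illustrate. The sketch below uses independent Q-learning on a hypothetical one-dimensional simplification of the block-pushing task: two agents simultaneously push a block left or right, and each learns only from the block's position and its own reward, with no information sharing. All names (QAgent, move, GOAL) and parameter values are illustrative assumptions, not taken from the paper, whose actual domain and algorithmic details may differ.

```python
import random
from collections import defaultdict

# Hypothetical 1-D block-pushing task (a simplification of the paper's domain):
# the block sits at an integer position in [-GOAL, GOAL]; each step both agents
# push LEFT (-1) or RIGHT (+1), and the block moves by the sum of the pushes.
ACTIONS = (-1, +1)
GOAL, START, MAX_STEPS = 5, 0, 20

def move(pos, push1, push2):
    """Resulting block position, clamped to the ends of the track."""
    return max(-GOAL, min(GOAL, pos + push1 + push2))

class QAgent:
    """Independent Q-learner: observes only the block position and its own
    reward; nothing is shared with the other agent."""
    def __init__(self, alpha=0.2, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)          # (position, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, pos):
        if random.random() < self.eps:       # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(pos, a)])

    def update(self, pos, action, reward, next_pos):
        best_next = max(self.q[(next_pos, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(pos, action)] += self.alpha * (td_target - self.q[(pos, action)])

def train(adversarial, episodes=3000):
    a1, a2 = QAgent(), QAgent()
    for _ in range(episodes):
        pos = START
        for _ in range(MAX_STEPS):
            act1, act2 = a1.act(pos), a2.act(pos)
            nxt = move(pos, act1, act2)
            r1 = 1.0 if nxt == GOAL else 0.0  # agent 1 wants the block at +GOAL
            r2 = -r1 if adversarial else r1   # foe: zero-sum; friend: shared reward
            a1.update(pos, act1, r1, nxt)
            a2.update(pos, act2, r2, nxt)
            if nxt == GOAL:                   # episode ends at the goal
                break
            pos = nxt
    return a1, a2

if __name__ == "__main__":
    random.seed(0)
    for adversarial in (False, True):
        a1, a2 = train(adversarial)
        a1.eps = a2.eps = 0.0                 # report greedy policies
        policy = {p: (a1.act(p), a2.act(p)) for p in range(-GOAL, GOAL + 1)}
        print("adversarial" if adversarial else "cooperative", policy)
```

In the cooperative run both greedy policies converge to pushing the block toward the goal, while in the adversarial run the second agent learns to cancel the first agent's pushes. In both cases the learning rule is identical; the only "coordination mechanism" is each agent's own reward signal, which is the point the abstract makes.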
