
Achieving Consensus to Learn an Efficient and Robust Communication via Reinforcement Learning

Abstract

Human communication usually exhibits two fundamental and essential characteristics under environmental pressure: efficiency, i.e., communicating less frequently while achieving comparable cooperative performance; and robustness, i.e., maintaining performance when communicating in a complicated environment. Since a critical goal of designing artificial agents is to make them human-like in many scenarios, how artificial agents can learn a human-like communication mechanism in terms of efficiency and robustness is a long-standing problem that has not yet been solved. Reinforcement learning, due to its trial-and-error paradigm, provides a promising framework for solving this research problem. Using reinforcement learning, this paper develops architectures that help agents learn efficient and robust communication, and carries out extensive experiments which show that artificial agents are cognitively capable of learning such human-like communication protocols in various environments (tasks).
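To make the efficiency and robustness pressures described above concrete, the following is a minimal illustrative sketch, not the paper's architecture: an agent whose policy outputs both an environment action and a learned gate deciding whether to send a message, with a small per-step cost on sending (efficiency pressure) and noise injected into incoming messages (a stand-in for a complicated channel). All module names, dimensions, and the cost coefficient are assumptions made for illustration.

```python
# Illustrative sketch only (assumed design, not the paper's architecture):
# a gated-communication agent trained under a communication cost.
import torch
import torch.nn as nn

class GatedCommAgent(nn.Module):
    def __init__(self, obs_dim: int, msg_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim + msg_dim, hidden), nn.ReLU())
        self.action_head = nn.Linear(hidden, n_actions)  # environment action logits
        self.gate_head = nn.Linear(hidden, 1)            # probability of sending a message
        self.msg_head = nn.Linear(hidden, msg_dim)       # message content if sent

    def forward(self, obs: torch.Tensor, incoming_msg: torch.Tensor):
        h = self.encoder(torch.cat([obs, incoming_msg], dim=-1))
        action_logits = self.action_head(h)
        send_prob = torch.sigmoid(self.gate_head(h))
        # Hard send/skip decision; straight-through estimator keeps it differentiable.
        send = torch.bernoulli(send_prob)
        send = send + send_prob - send_prob.detach()
        message = send * torch.tanh(self.msg_head(h))    # zero vector when not sending
        return action_logits, message, send_prob


def shaped_reward(task_reward: torch.Tensor, send_prob: torch.Tensor,
                  comm_cost: float = 0.05) -> torch.Tensor:
    """Task reward minus a per-step cost for communicating (efficiency pressure)."""
    return task_reward - comm_cost * send_prob.squeeze(-1)


if __name__ == "__main__":
    agent = GatedCommAgent(obs_dim=8, msg_dim=4, n_actions=5)
    obs = torch.randn(2, 8)
    noisy_msg = torch.randn(2, 4) * 0.1  # noisy incoming messages probe robustness
    logits, msg, p_send = agent(obs, noisy_msg)
    print(logits.shape, msg.shape, p_send.shape)
```

Under such a setup, any standard policy-gradient method can optimize the shaped reward, so agents are pushed to communicate only when the message is worth its cost and to tolerate degraded messages.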
