eScholarship
Open Access Publications from the University of California

UC Irvine Electronic Theses and Dissertations

Computing Nash Equilibria in Adversarial Stochastic Team Games

Abstract

Computing Nash equilibrium policies in Multi-agent Reinforcement Learning (MARL) is a fundamental question that has received significant attention both in theory and in practice. Beyond the single-agent setting, however, provable guarantees for computing stationary Nash equilibrium policies exist only in limited settings, e.g., two-player zero-sum stochastic games and Markov Potential Games. This thesis investigates what happens when a team of uncoordinated players faces an adversary, a first step toward understanding non-cooperative two-team zero-sum stochastic games in the special case where one of the teams consists of a single agent. The contributions are twofold: we prove the existence of Nash equilibria in this setting, and we design a decentralized algorithm for computing them. One of the main technical challenges in the proof of correctness of the algorithm is the analysis of a non-linear program with non-convex constraints that arises when applying the Bellman equations; the analysis makes use of so-called constraint qualifications and Lagrange duality.
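As background for the setting the abstract describes, the simplest case with known guarantees is the one-shot two-player zero-sum matrix game, whose equilibrium can be computed by linear programming via the minimax theorem. The sketch below is illustrative only and is not the thesis's decentralized algorithm; the function name and the use of `scipy.optimize.linprog` are this editor's choices.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Equilibrium strategy and value for the row (maximizing) player
    of the zero-sum matrix game with payoff matrix A (row player's payoffs).

    Solves: maximize v  s.t.  (A^T x)_j >= v for every column j,
            sum(x) = 1, x >= 0.
    """
    m, n = A.shape
    # Decision variables: x (row strategy, length m) followed by v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    # Inequalities: v - (A^T x)_j <= 0 for each opponent column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Equality: probabilities sum to one (v unconstrained).
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the unique equilibrium is uniform play with value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = solve_zero_sum(A)
```

What makes the thesis's setting harder is precisely that this LP structure is lost: with a team on one side and stochastic dynamics, the Bellman equations yield a non-linear program with non-convex constraints rather than an LP.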
