
UC San Diego Electronic Theses and Dissertations

Nash equilibrium seeking in noncooperative games and multi-agent deployment to planar curves

Abstract

We present two research topics in multi-agent systems: Nash equilibrium seeking for players in unknown noncooperative games, and the deployment of agents onto families of planar curves. The designed controllers either extract or encode the information necessary for the agents to achieve their objectives: selfish players converge to a Nash equilibrium despite not knowing the game's mathematical structure, and follower agents achieve deployment to planar curves without explicit knowledge of the desired formation.

To solve static noncooperative games, we introduce a non-model-based learning strategy, built on the extremum seeking approach, for N-player and infinitely-many-player games. In classical learning algorithms, each player knows its payoff's functional form and the other players' actions, whereas in the proposed algorithm the players measure only their own payoff values. For games with non-quadratic payoffs, convergence to the Nash equilibrium is not exact: the limit is biased in proportion to the algorithm's perturbation amplitudes and the payoff functions' higher derivatives. In games with infinitely many players, no single player can affect the outcome of the game, and yet we show that a player converges to the Nash equilibrium by measuring only its own payoff value. Inspired by the infinitely-many-player case, we also present an extremum seeking approach for locally stable attainment of the optimal open-loop control sequence for unknown discrete-time linear systems, where not even the dimension of the system is known.

To achieve deployment to a family of planar curves, we model the agents' collective dynamics with the reaction-advection-diffusion class of PDEs. The PDE models, whose state is the position of the agents, incorporate the agents' feedback laws, which are designed based on a spatial internal model principle: the feedback laws allow the agents to deploy to a family of geometric curves that correspond to the model's equilibrium profiles. Stable deployment is ensured by leader feedback, designed using techniques for the boundary control of PDEs. A nonlinear PDE model is also presented that requires no leader feedback and enables the agents to deploy to planar arc formations.
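To make the extremum seeking idea concrete, here is a minimal sketch (not taken from the dissertation) of Nash equilibrium seeking in a two-player static game with quadratic payoffs. The payoff functions, gains, amplitudes, and probing frequencies are illustrative assumptions; the essential point is that each player updates its action using only measurements of its own payoff value.

```python
import numpy as np

def J1(u1, u2):
    # player 1's payoff; quadratic here, but its form is unknown to the player
    return -(u1 - 1.0)**2 - 0.5*u1*u2

def J2(u1, u2):
    # player 2's payoff
    return -(u2 + 0.5)**2 + 0.3*u1*u2

dt   = 1e-3
a    = np.array([0.1, 0.1])     # perturbation amplitudes (bias is O(a))
w    = np.array([7.0, 11.0])    # distinct probing frequencies
gain = np.array([10.0, 10.0])   # adaptation gains
uhat = np.array([3.0, -2.0])    # initial action estimates
m    = np.zeros(2)              # washout states (remove each payoff's DC part)

for n in range(200000):
    s = np.sin(w * n * dt)
    u = uhat + a * s                      # each player perturbs its own action
    J = np.array([J1(u[0], u[1]), J2(u[0], u[1])])
    uhat += dt * gain * a * s * (J - m)   # demodulate own payoff: gradient ascent
    m    += dt * (J - m)                  # slow average of each payoff

print(uhat)   # near the Nash equilibrium (about [1.08, -0.34] for these payoffs)
```

With distinct frequencies, averaging the demodulated signal a_i sin(w_i t)(J_i - m_i) recovers each player's own-payoff gradient, so the scheme behaves like pseudo-gradient play without any model information.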
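The abstract also mentions extremum seeking for the optimal open-loop control of an unknown discrete-time linear system. A hypothetical minimal version: treat the finite-horizon cost as a single scalar payoff of the input sequence and probe each input with its own frequency. The plant, horizon, weights, and tuning below are assumptions for illustration; the seeker touches only the returned cost value.

```python
import numpy as np

def cost(U, a=1.1, b=1.0, x0=1.0, r=0.1):
    # hidden plant and quadratic cost: the extremum seeker only ever
    # receives the scalar value returned here
    x, J = x0, 0.0
    for u in U:
        J += x**2 + r*u**2
        x  = a*x + b*u
    return J + x**2                         # terminal penalty

T     = 4                                   # horizon, so U has 4 entries
amp   = 0.05                                # perturbation amplitude (bias is O(amp))
w     = np.array([0.63, 1.01, 1.57, 2.33])  # distinct probing frequencies
gamma = 0.5                                 # adaptation gain
Uhat  = np.zeros(T)                         # estimate of the optimal sequence
m     = cost(Uhat)                          # washout state (removes the cost's DC part)

for k in range(200000):
    s = np.sin(w * k)
    J = cost(Uhat + amp * s)
    Uhat -= gamma * amp * s * (J - m)       # demodulate: descend the measured cost
    m    += 0.05 * (J - m)                  # slow running average of the cost

print(Uhat)   # approaches the minimizer of cost(.), with O(amp) bias
```

Nothing in the update law uses a, b, or even the state dimension; only the horizon length fixes how many inputs are probed.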
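For the deployment results, a minimal sketch under simplifying assumptions: follower agents implement a discretization of the reaction-diffusion model u_t = u_ss + lam*u (u complex-valued, encoding planar position x + iy), whose equilibrium profiles u'' + lam*u = 0 are the deployed curves. The agent count, gain lam, and anchor/leader positions are illustrative; lam is chosen below pi^2 so that simply pinning the boundary agents is stabilizing, sidestepping the PDE boundary-control leader feedback developed in the dissertation.

```python
import numpy as np

N   = 41                        # agents indexed by s = i/(N-1) in [0, 1]
ds  = 1.0 / (N - 1)
lam = 4.0                       # reaction gain; must satisfy lam < pi**2 here
dt  = 0.4 * ds**2               # explicit scheme: dt <= ds**2/2 for stability

u = np.zeros(N, dtype=complex)  # positions x + i*y; all agents start at origin
u[0]  = -0.4 + 0.3j             # anchor agent, held fixed
u[-1] =  1.0 + 0.5j             # leader agent, held at the target endpoint

for _ in range(20000):
    # follower feedback: neighbors' relative positions (diffusion term)
    # plus the spatial internal-model term lam*u (reaction term)
    lap = (u[2:] - 2*u[1:-1] + u[:-2]) / ds**2
    u[1:-1] += dt * (lap + lam * u[1:-1])

# the agents settle on the equilibrium profile u'' + lam*u = 0 through the
# two pinned endpoints: here an arc of an ellipse in the plane
print(u[::10])
```

Changing lam (or adding an advection term) changes the family of equilibrium curves, which is the spatial internal model idea: the formation is encoded in the agents' feedback law rather than broadcast to them.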
