eScholarship
Open Access Publications from the University of California

Deception in two-player zero-sum stochastic games: theory and application to warfare games

  • Author(s): Singh, Rajdeep; et al.
Abstract

In this work, two-player zero-sum stochastic games under imperfect information are investigated in the discrete-time/discrete-state case. We focus on the case where only one player, Blue, has incomplete or partial information, while the other player, Red, has complete state information. In stochastic games with partial information, the information state is a function of a conditional probability distribution. In the problem formulated here, the payoff is a function only of the terminal state of the system, and the initial information state is a max-plus sum of max-plus delta functions. The Blue player can achieve robustness to the effect of Red's control on its observations. Using the recently established deception-robust theory, we demonstrate that the full state-feedback optimal control applied at the Maximum Likelihood State (MLS) is not optimal for the Blue player in a partially observed game, and hence the Certainty Equivalence Principle does not hold. An automated deception-enabled control algorithm is derived for the Red player under the assumption that Red can model the Blue algorithm completely. An example game demonstrates that even for the Red player, who has complete state information, the optimal control is not the state-feedback optimal control. A future study of the deception-enabled Red approach is proposed in a mixed-strategy framework. Lastly, some modelling ideas are presented for urban warfare. The example cases considered in this study are simple enough to allow an intuitive understanding of optimal strategies, yet complex enough to demonstrate real-world difficulties. Owing to the critical role of imperfect information, the theory discussed here is more general than the specific application presented, and hence has broad utility in war games.
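To make the max-plus construction in the abstract concrete, the following is a minimal sketch of an initial information state built as a max-plus sum of max-plus delta functions over a small, hypothetical finite state space. In max-plus algebra, "addition" is pointwise max and a delta function takes the value 0 at its center and -inf elsewhere; the state space, names, and tie-breaking rule here are illustrative assumptions, not details from the work itself.

```python
import math

NEG_INF = -math.inf  # max-plus "zero" element

def maxplus_delta(x):
    """Max-plus delta function centered at state x: 0 at x, -inf elsewhere."""
    return lambda y: 0.0 if y == x else NEG_INF

def maxplus_sum(funcs):
    """Max-plus sum (pointwise maximum) of a list of functions."""
    return lambda y: max(f(y) for f in funcs)

# Hypothetical finite state space {0, 1, 2}.
states = [0, 1, 2]

# Initial information state: Blue considers states 0 and 2 possible,
# encoded as the max-plus sum of delta functions at those states.
I0 = maxplus_sum([maxplus_delta(0), maxplus_delta(2)])

values = [I0(s) for s in states]  # 0.0 at states 0 and 2, -inf at state 1

# A maximum-likelihood state is an argmax of the information state
# (ties broken here by state order, purely for illustration).
mls = max(states, key=I0)
```

The abstract's point is that feeding `mls` into a full state-feedback optimal controller is, in general, not optimal for Blue, which is why certainty equivalence fails in this partially observed setting.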
