The Impact of Information in Cooperative and Noncooperative Systems
Large-scale autonomous systems are composed of many components, each acting according to its own preferences, local information, and capabilities. Such systems are ubiquitous in our world and include automated warehouses, UAV swarms, traffic systems, sensor networks, the Internet of Things, auctions, ridesharing systems, and social networks. We focus on engineered autonomous systems, in which each component is human-designed.
While such systems are attractive because they can process large amounts of data and are generally robust to single points of failure, designing them poses many challenges. The system designer must account for the fact that each component has its own capabilities, model of the environment, data set, and local objective. Furthermore, the designer often cannot make decisions for each component at each time step; rather, decision-making rules are assigned so that components can react autonomously, and small adjustments to these rules can have cascading effects throughout the system. Finally, the interconnected nature of the system opens the door to new kinds of system-wide attacks and vulnerabilities.
In this work, we focus on the challenge of information sharing constraints: no agent has access to all of the system's information at every time step. These constraints often arise naturally (e.g., no router can access all available data on the internet before making a routing decision), but they can also stem from privacy, trust, or political concerns. It is therefore imperative for the system designer to understand the relevant information constraints on the system and their effects on the emergent behavior. In this work we endeavor to answer the following two questions:
1. How does a set of information sharing constraints impact the resulting emergent system-wide behavior?
2. How can a system designer strategically set decision-making rules for the components to offset any negative effects caused by information sharing constraints?
We answer the first question by assuming that the emergent behavior is a system equilibrium and comparing the value of the worst-case equilibrium to the value of the optimal decision set, where value is measured by the system designer's objective. Different types of information sharing constraints among the components are evaluated on this basis. We answer the second question in certain settings by showing that small deviations from standard decision-making rules can improve the system's performance guarantees.
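To make this comparison concrete, consider a small hypothetical example (not taken from this work): two agents each select one of two resources, an agent's payoff is its resource's value split evenly among the agents on it, and the designer's objective is the total value of the resources that are covered. The resource values below are assumed for illustration. Enumerating all joint choices, we find the pure equilibria and compute the ratio of the worst equilibrium's value to the optimum.

```python
from itertools import product

# Hypothetical resource values (assumed for illustration only).
values = [2.0, 0.9]
players = 2

def utilities(profile):
    # An agent's payoff: its resource's value, split among agents on it.
    counts = [profile.count(r) for r in range(len(values))]
    return [values[a] / counts[a] for a in profile]

def welfare(profile):
    # System designer's objective: total value of the covered resources.
    return sum(values[r] for r in set(profile))

def is_nash(profile):
    # No agent can improve its own payoff by a unilateral deviation.
    base = utilities(profile)
    for i in range(players):
        for alt in range(len(values)):
            dev = list(profile)
            dev[i] = alt
            if utilities(tuple(dev))[i] > base[i] + 1e-9:
                return False
    return True

profiles = list(product(range(len(values)), repeat=players))
equilibria = [p for p in profiles if is_nash(p)]
opt = max(welfare(p) for p in profiles)
worst_eq = min(welfare(p) for p in equilibria)
print(worst_eq / opt)  # ratio < 1: equilibrium behavior loses value vs. optimal
```

Here the unique equilibrium has both agents crowding the high-value resource (welfare 2.0), while the optimum covers both resources (welfare 2.9), so the ratio is 2.0/2.9 ≈ 0.69; worst-case ratios of exactly this kind are the performance guarantees studied in the text.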
These questions are addressed in two settings. The first is a cooperative setting, where the system designer can design the decision-making rules for each agent; the system designer's objective function is assumed to be submodular, and this property is leveraged to bound how close equilibria are to the optimum. The second is a noncooperative setting, where the system designer must operate in the presence of an attacker; here, the constraints on information sharing correspond to how much knowledge the other players have about the attacker.
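Submodularity is the diminishing-returns property: adding an element to a small set helps at least as much as adding it to a larger set. A minimal sketch (with a made-up coverage function, a standard example of a submodular objective) verifies this property by brute force over all set pairs:

```python
from itertools import chain, combinations

# Hypothetical ground sets (assumed for illustration only).
# f(S) = number of elements covered by the union of the chosen sets;
# coverage functions like this are a classic submodular objective.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}

def f(S):
    covered = set()
    for name in S:
        covered |= sets[name]
    return len(covered)

def powerset(items):
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )

# Diminishing returns: for all S ⊆ T and x ∉ T,
#   f(S ∪ {x}) - f(S) >= f(T ∪ {x}) - f(T).
names = list(sets)
is_submodular = all(
    f(set(S) | {x}) - f(set(S)) >= f(set(T) | {x}) - f(set(T))
    for S in powerset(names)
    for T in powerset(names)
    if set(S) <= set(T)
    for x in set(names) - set(T)
)
print(is_submodular)  # True
```

It is this diminishing-returns structure that makes it possible to bound how far equilibrium outcomes can fall below the optimal value of the designer's objective.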