Multi-Objective Learning over Adaptive Networks
Motivated by the growing interest in applications that require online learning and decision making by networked agents, this dissertation addresses the design of adaptive networks whose agents must make strategic decisions toward different but complementary objectives under uncertainty.
Adaptive networks consist of collections of agents with learning abilities that interact with each other locally in order to solve distributed processing and inference tasks in real time. In the first problem we consider, we examine the design and evolution of adaptive networks in which agents have multiple but complementary objectives. In these scenarios, selfish learning strategies by individual agents can adversely influence the network dynamics. The situation is even more challenging in nonstationary environments, where the solution to the multi-objective optimization problem can drift with time due to changes in the statistical distribution of the data.
We specifically formulate multi-objective optimization problems where agents seek to minimize their individual costs subject to constraints that are locally coupled. The coupling arises because both the individual costs and the constraints can depend on actions taken by other agents in the neighborhood. In these types of problems, the Nash equilibrium is a desirable and stable solution since no agent can benefit by unilaterally deviating from it. We therefore focus on developing distributed online strategies that enable agents to approach the Nash equilibrium. We illustrate an application of the results to a stochastic version of the network Cournot competition problem, which arises in a variety of applications such as modeling economic trading with geographical considerations, power management over smart grids, and resource allocation protocols.
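As a minimal sketch of the idea (not the dissertation's actual algorithm), consider a linear Cournot game: each firm i chooses a quantity q[i], the market price is p = A - B * sum(q) perturbed by random demand noise, and firm i's profit is (p - c[i]) * q[i]. If every firm performs stochastic gradient ascent on its own profit using noisy price observations, the quantities approach the Nash equilibrium; the demand parameters A, B and the costs below are illustrative assumptions.

```python
import random

# Stochastic gradient play for a linear Cournot game (illustrative sketch).
# Firm i's profit gradient with respect to its own quantity is
#   d(profit_i)/dq_i = p - B*q_i - c_i,
# where p is the (noisy) observed market price.
A, B = 10.0, 1.0          # assumed inverse-demand parameters
costs = [1.0, 1.0, 1.0]   # assumed symmetric marginal costs
N = len(costs)
mu = 0.01                 # small step size
q = [0.0] * N             # initial quantities

random.seed(0)
for _ in range(20000):
    noise = random.gauss(0.0, 0.1)   # random demand fluctuation
    p = A - B * sum(q) + noise       # observed price
    # simultaneous noisy-gradient-ascent update by every firm
    q = [qi + mu * (p - B * qi - ci) for qi, ci in zip(q, costs)]

# For symmetric costs, the Nash equilibrium is q* = (A - c) / (B * (N + 1)).
q_star = (A - costs[0]) / (B * (N + 1))
```

Despite never observing the other firms' quantities directly, each firm's updates hover near the equilibrium because the price summarizes the aggregate action.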
Using this formulation, we then extend earlier contributions on adaptive networks, which generally assume that the agents work together toward a common global objective, or that they observe data generated by a common target model or parameter vector. We relax this condition and consider a broader scenario where individual agents may only have access to partial information about the global target vector, i.e., each agent may be sensing only a subset of the entries of the global target, and the number of these entries can differ across agents. We develop cooperative distributed techniques where agents share estimates of only their common entries and can still benefit from neighboring agents. Since agents' interactions are limited to exchanging estimates of a select few entries, communication overhead is significantly reduced.
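A toy sketch of this idea, under assumed names and a simplified combine step (not the dissertation's exact recursion): each agent runs LMS on only the entries of the global vector it senses, then averages each entry with the agents that share that entry.

```python
import random

# Diffusion LMS with partial parameter sharing (illustrative sketch).
# The global target w_true has 4 entries; each agent senses an
# overlapping subset of them and exchanges estimates of common
# entries only.
random.seed(1)
w_true = [1.0, -0.5, 2.0, 0.7]
# agent -> indices of the global vector it observes (assumed topology)
interests = {0: [0, 1], 1: [1, 2], 2: [2, 3]}
est = {k: {i: 0.0 for i in idx} for k, idx in interests.items()}
mu = 0.05

for _ in range(3000):
    # adaptation step: each agent runs LMS on its own sub-vector
    for k, idx in interests.items():
        u = [random.gauss(0, 1) for _ in idx]  # regressor
        d = sum(ui * w_true[i] for ui, i in zip(u, idx)) + random.gauss(0, 0.05)
        err = d - sum(ui * est[k][i] for ui, i in zip(u, idx))
        for ui, i in zip(u, idx):
            est[k][i] += mu * err * ui
    # combination step: average only entries that agents have in common
    new_est = {k: {} for k in interests}
    for k, idx in interests.items():
        for i in idx:
            sharers = [j for j in interests if i in interests[j]]
            new_est[k][i] = sum(est[j][i] for j in sharers) / len(sharers)
    est = new_est
```

Each message carries only the overlapping entries (e.g., a single scalar between agents 0 and 1 above), which is the source of the communication savings.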
We also examine the behavior of adaptive networks where information-sharing is subject to a positive communication cost over the edges linking the agents. In this situation, we show that if left unattended, the optimal strategy for the agents is to behave selfishly and not participate in the sharing of information. We hence develop mechanisms that help turn selfish agents into cooperative formations. In one method, we design an adaptive reputation protocol that adjusts agents' reputations in accordance with their past actions, which can then be used to predict their subsequent actions. In a second method, we allow agents to decide with whom to cluster and share information. When the communication cost is small, the proposed mechanisms entice agents to cooperate and thus enhance the overall social benefit of the network.
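The reputation idea can be sketched as follows; the forgetting factor, payoff values, and decision rule here are assumptions for illustration, not the dissertation's formulation. An agent keeps an exponentially weighted record of whether a neighbor shared in past rounds and cooperates with that neighbor only when the predicted benefit of reciprocation exceeds the communication cost.

```python
# Adaptive reputation protocol (illustrative sketch, assumed parameters).
theta = 0.9   # forgetting factor: weight on past behavior
gain = 1.0    # assumed benefit of receiving one shared estimate
c = 0.3       # assumed per-edge communication cost

def update_reputation(r, shared):
    """Discounted record of a neighbor's past sharing actions."""
    return theta * r + (1 - theta) * (1.0 if shared else 0.0)

def will_cooperate(r):
    """Share with a neighbor only if predicted reciprocation pays off."""
    return r * gain > c

# A neighbor that shares builds reputation; one that stops sharing
# gradually loses it and is eventually cut off.
r = 0.5
trace = []
history = [True] * 3 + [False] * 8
for shared in history:
    r = update_reputation(r, shared)
    trace.append((r, will_cooperate(r)))
```

Because the score decays geometrically rather than dropping at the first defection, occasional packet losses or honest failures do not immediately destroy an otherwise cooperative relationship.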