eScholarship
Open Access Publications from the University of California

UC San Diego

UC San Diego Electronic Theses and Dissertations

Distributed Algorithms for Convex Optimization: noisy channels, online computation, nuclear norm regularization, and separable constraints

  • Author(s): Mateos-Núñez, David
  • Advisor(s): Cortés, Jorge
Abstract

This thesis contributes to the design and analysis of distributed algorithms for optimizing a sum of convex functions, a problem with applications in networked and multi-agent systems. In this framework, a group of agents cooperates to optimize the sum of their local objectives in a decentralized way by means of local interactions. We consider four scenarios. In the first, the agents need to agree on a global decision vector that minimizes the unconstrained sum. Here we study a family of distributed, continuous-time algorithms in which each agent updates its estimate of the global optimizer by gradient descent on its local cost function while simultaneously seeking agreement with its neighbors' estimates via proportional-integral feedback on their disagreement. Our aim is to characterize the robustness of the algorithm against the additive persistent noise resulting from errors in communication and computation. We model the algorithm as a stochastic differential equation and develop a novel Lyapunov technique to establish noise-to-state stability in second moment.
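A discrete-time sketch can make the structure of these dynamics concrete. The following Euler-Maruyama simulation is our own toy instance (quadratic local costs, a ring graph, and assumed constants, none taken from the thesis): each agent combines gradient descent on its local cost with proportional-integral feedback on the disagreement with its neighbors, under additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n agents on a ring, scalar decision variable.
# Local costs f_i(x) = 0.5*(x - b_i)^2, so the minimizer of sum_i f_i is mean(b).
n = 6
b = rng.normal(size=n)
L = 2*np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)  # ring Laplacian

x = np.zeros(n)          # local estimates of the global optimizer
z = np.zeros(n)          # integral states of the disagreement feedback
dt, sigma = 1e-3, 0.05   # step size and noise level (assumed values)

for _ in range(100_000):
    grad = x - b                              # gradient of each local cost
    noise = sigma*np.sqrt(dt)*rng.normal(size=n)
    # Euler-Maruyama step of the gradient + PI-consensus dynamics:
    #   dx = (-grad - L x - L z) dt + noise,   dz = L x dt
    x = x + dt*(-grad - L @ x - L @ z) + noise
    z = z + dt*(L @ x)

print(x)          # all entries fluctuate near mean(b)
print(b.mean())
```

With small noise the estimates hover around the global minimizer `mean(b)`, which is the kind of noise-to-state stability behavior the analysis characterizes.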

In the second scenario, we consider the online case, in which each agent in the network commits to a decision and incurs a local cost given by functions that are revealed over time and whose unknown evolution may be adversarially adaptive to the agent's behavior. The goal of each agent is to incur a cumulative cost over time, measured with respect to the sum of the local functions across the network, that is competitive with the best centralized decision in hindsight. The proposed algorithms evolve in discrete time using first-order information about the objectives in the form of subgradients, and the communication topology is modeled as a sequence of time-varying weight-balanced digraphs whose unions over consecutive time windows of a given length are strongly connected. We illustrate our results with an application to medical diagnosis, in which networked hospitals use patient data to cooperatively improve their decision models in an online fashion.
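A minimal sketch of a distributed online subgradient scheme of this flavor is below. It is our own illustration, not the thesis's algorithm: toy absolute-value losses, complete-graph mixing weights, and a standard diminishing step size, with each round alternating a consensus step and a local subgradient step on the just-revealed loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance: n agents, scalar decisions, T rounds. At round t agent i
# commits x_i, then the adversary reveals f_{i,t}(x) = |x - theta_{t,i}|.
n, T = 5, 2000
W = np.full((n, n), 1.0/n)               # doubly stochastic mixing (complete graph)
theta = rng.normal(1.0, 0.3, size=(T, n))
x = np.zeros(n)

loss = 0.0                               # network cost paid by agent 0's decisions
for t in range(T):
    loss += np.abs(x[0] - theta[t]).sum()            # cost of the committed decision
    x = W @ x                                        # consensus with neighbors
    x -= np.sign(x - theta[t]) / np.sqrt(t + 1)      # local subgradient step

# Best fixed decision in hindsight: the median minimizes the sum of |.| losses.
best = np.abs(np.median(theta) - theta).sum()
print((loss - best) / T)                 # average regret shrinks with T
```

The printed quantity is the per-round regret of agent 0 against the best centralized decision in hindsight; its decay with `T` is the sublinear-regret guarantee this line of work targets.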

In the third scenario, we depart from the cooperative search for a global decision vector. Instead, the agents now wish to estimate local decision vectors that minimize the sum of their objectives and are coupled through a constraint that is itself a sum of convex functions. Motivated by dual decompositions of constrained optimization problems through the Lagrangian formulation, we consider subgradient algorithms that find a saddle point of general convex-concave functions under agreement constraints. This framework also encodes minimization problems with semidefinite constraints, yielding novel distributed strategies that are scalable when the order of the matrix inequalities is independent of the size of the network, or under decompositions that exploit chordal sparsity.
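A centralized toy instance can illustrate the saddle-point subgradient idea behind these dual decompositions (our own example, not the thesis's distributed algorithm): quadratic local objectives coupled by one separable inequality constraint, with gradient descent in the primal variable and projected gradient ascent in the multiplier.

```python
import numpy as np

# Toy instance: minimize sum_i (x_i - a_i)^2 subject to sum_i x_i <= c.
# Lagrangian: L(x, lam) = sum_i (x_i - a_i)^2 + lam*(sum_i x_i - c).
a = np.array([1.0, 2.0, 3.0])
c = 3.0
x = np.zeros(3)
lam = 0.0                                # multiplier of the coupling constraint
eta = 0.01                               # step size (assumed value)

# Saddle-point iteration: descent in x, projected ascent in lam.
for _ in range(20_000):
    gx = 2*(x - a) + lam                 # gradient of L in x
    glam = x.sum() - c                   # gradient of L in lam
    x -= eta*gx
    lam = max(0.0, lam + eta*glam)

print(x)      # -> approx [0, 1, 2]
print(lam)    # -> approx 2
```

The KKT conditions give x_i = a_i - lam/2 with sum_i x_i = c, i.e. x = (0, 1, 2) and lam = 2, which the iteration recovers; the coupling constraint enters each agent's update only through the shared multiplier, which is what makes the scheme amenable to distribution.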

In the fourth scenario, we present a distributed treatment of nuclear-norm regularization, a widely used convex surrogate of the rank function on the spectral ball. To this end, we exploit our earlier strategies for saddle-point problems, using two variational characterizations of the nuclear norm that are separable under an agreement condition on auxiliary matrices whose dimensions are independent of the size of the network. As a result, we derive two novel distributed algorithms for standard optimization models in multi-task learning and low-rank matrix completion.
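One standard variational characterization of the nuclear norm, ||X||_* = min over factorizations X = U V^T of (||U||_F^2 + ||V||_F^2)/2, can be checked numerically. The sketch below is our own illustration and not necessarily the exact pair of characterizations used in the thesis: factors built from the SVD attain the minimum, while any other factorization gives a larger value.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3))

# Reference: nuclear norm = sum of singular values.
P, s, Qt = np.linalg.svd(X, full_matrices=False)
nuc = s.sum()

# Factors built from the SVD attain the minimum of (||U||_F^2 + ||V||_F^2)/2:
U = P * np.sqrt(s)                       # columns scaled by sqrt of singular values
V = Qt.T * np.sqrt(s)
assert np.allclose(U @ V.T, X)
val = 0.5*(np.sum(U*U) + np.sum(V*V))    # equals nuc

# Any other factorization X = (U G)(V G^{-T})^T yields a value >= nuc:
G = rng.normal(size=(3, 3)) + 3*np.eye(3)    # well-conditioned change of basis
U2, V2 = U @ G, V @ np.linalg.inv(G).T
assert np.allclose(U2 @ V2.T, X)
val2 = 0.5*(np.sum(U2*U2) + np.sum(V2*V2))
print(val, nuc, val2)
```

The point of such characterizations in a distributed setting is that the squared Frobenius norms split into per-agent terms, with the auxiliary factors coordinated through an agreement condition.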
