eScholarship
Open Access Publications from the University of California

UC Riverside Electronic Theses and Dissertations

Distributed Algorithms for Convex Optimization Problems

Abstract

This dissertation first studies a distributed algorithm for solving general convex optimization problems and then designs distributed algorithms for special optimization problems related to systems of linear equations.

First, a wider selection of step sizes is explored for the distributed subgradient algorithm for multi-agent optimization problems with time-varying and balanced communication topologies. The square-summability requirement on the step sizes commonly adopted in the literature is removed: the step sizes are only required to be positive, vanishing, and non-summable, which opens the possibility of better convergence rates. Both unconstrained and constrained optimization problems are considered. It is proved that the agents' estimates reach a consensus and converge to the minimizer of the global objective function under this more general choice of step sizes. For the unconstrained case with the wider selection of step sizes, the best convergence rate is shown to be O(1/√k) after k iterations, measured by the running best function value at the average of the agents' estimates.
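
As an illustration of such step sizes, here is a minimal sketch of a distributed subgradient iteration with α_k = 1/√(k+1), which is positive, vanishing, and non-summable but not square summable. The local objectives f_i(x) = |x − c_i| and the mixing matrix are assumptions for the example, not taken from the dissertation; the global minimizer of Σ_i |x − c_i| is the median of the c_i.

```python
import numpy as np

# Hypothetical local objectives: agent i minimizes f_i(x) = |x - c_i|.
# The global objective sum_i f_i is minimized at the median of the c_i.
c = np.array([1.0, 4.0, 5.0])          # local targets; median = 4.0
W = np.array([[0.50, 0.25, 0.25],      # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],      # (balanced communication)
              [0.25, 0.25, 0.50]])

x = np.zeros(3)                        # agents' estimates
for k in range(20000):
    alpha = 1.0 / np.sqrt(k + 1)       # positive, vanishing, non-summable,
                                       # but NOT square summable
    g = np.sign(x - c)                 # subgradient of |x - c_i| at x_i
    x = W @ x - alpha * g              # consensus step + subgradient step

print(x)                               # all entries near the median 4.0
```

The mixing step W @ x drives the estimates toward consensus while the diminishing subgradient step pulls the average toward the global minimizer.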

Then we design a distributed algorithm for a special optimization problem: finding the solution of the linear equations Ax = b with minimum energy, i.e., minimum weighted norm associated with a weighted inner product. We first prove that, in the special case where the norm is the two-norm, the algorithm drives multiple agents to the minimum two-norm solution of the global linear equations Ax = b provided the agents are initialized at the minimum two-norm solutions of their local equations. We then prove that if the initialization errors are bounded, the distance from the algorithm's final convergence point to the minimum two-norm solution of the global linear equations is also bounded. Next, we extend these results to the case where the two-norm is replaced with a weighted norm associated with a weighted inner product.
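
A projection-consensus update consistent with this description (a sketch, not necessarily the dissertation's exact algorithm) can be written for two agents that split the rows of Ax = b. Each agent starts at the minimum two-norm solution of its own equations and projects disagreements onto the null space of its local coefficient matrix, so local feasibility A_i x_i = b_i is preserved at every step; the system A, b below is a made-up example.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])

A1, b1 = A[:1], b[:1]            # agent 1's local equations
A2, b2 = A[1:], b[1:]            # agent 2's local equations

def null_proj(M):
    """Orthogonal projector onto the null space of M."""
    return np.eye(M.shape[1]) - np.linalg.pinv(M) @ M

P1, P2 = null_proj(A1), null_proj(A2)
x1 = np.linalg.pinv(A1) @ b1     # minimum two-norm local solutions
x2 = np.linalg.pinv(A2) @ b2     # (the required initialization)

for _ in range(200):
    # each agent corrects toward its neighbor inside its own null space,
    # so A_i x_i = b_i holds throughout
    x1, x2 = x1 - P1 @ (x1 - x2), x2 - P2 @ (x2 - x1)

x_star = np.linalg.pinv(A) @ b   # minimum two-norm solution of the full system
print(x1, x_star)                # the agents agree with x_star
```

Because the initial estimates are orthogonal to the null space of A and the updates keep them so, the consensus point is the minimum two-norm solution of the global system.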

Next, motivated by a special subgradient algorithm, we propose a discrete-time distributed algorithm to solve a system of linear equations Ax = b. The algorithm finds a solution of Ax = b from arbitrary initializations at a geometric rate, whether Ax = b has a unique solution or multiple solutions. When Ax = b has a unique solution, the geometric convergence rate is proved by analyzing a mixed norm of the homogeneous M-Fejér-type mappings arising from the subgradient update. When Ax = b has multiple solutions, the geometric convergence rate is proved through orthogonal decompositions of the agents' estimates onto the row space and null space of A, and the relationship between the initializations and the final convergence point is also characterized. Quantitative upper bounds on the convergence rate are given for two special cases.
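
A minimal project-then-average sketch in this spirit (an assumption for illustration, not the chapter's M-Fejér analysis): each agent projects its estimate onto the solution set of its local equation, then averages with its neighbors. For a consistent system with a unique solution this converges geometrically from arbitrary initializations; the matrix, weights, and graph below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, -1.0, 2.0])
b = A @ x_true                      # consistent system by construction

W = np.full((3, 3), 1.0 / 3.0)      # complete-graph averaging weights
X = rng.standard_normal((3, 3))     # row i = agent i's arbitrary estimate

errs = []
for _ in range(300):
    for i in range(3):
        # exact projection of agent i's estimate onto {x : A_i x = b_i}
        a, bi = A[i], b[i]
        X[i] = X[i] - a * ((a @ X[i] - bi) / (a @ a))
    X = W @ X                       # consensus averaging
    errs.append(np.abs(X - x_true).max())

print(errs[-1])                     # error shrinks geometrically
```

The per-iteration contraction factor depends on the angles between the local solution sets, which is what a quantitative rate bound would estimate.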

Finally, we investigate a communication-efficient distributed algorithm to solve Ax = b for matrices whose sparsity structure is similar to that of the Laplacian matrix of the communication topology. We propose an algorithm based on gradient descent with a constant step size and prove convergence in finite time or at a linear rate. We also provide a way to select the step sizes in a distributed manner.
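
A sketch of the idea (assumptions: A is symmetric positive definite with the sparsity pattern of the communication graph, so the update of component i only reads neighbors' values; the specific matrix is made up). The constant step size comes from a Gershgorin bound, which each node can compute from its own row alone, giving one distributed way to select it.

```python
import numpy as np

A = np.array([[ 3.0, -1.0,  0.0, -1.0],   # graph (4-cycle) sparsity
              [-1.0,  3.0, -1.0,  0.0],   # plus a diagonal shift,
              [ 0.0, -1.0,  3.0, -1.0],   # so A is positive definite
              [-1.0,  0.0, -1.0,  3.0]])
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true

# Gershgorin: every eigenvalue of A is at most max_i sum_j |A_ij|,
# so this constant step size makes the iteration below convergent.
alpha = 1.0 / np.max(np.abs(A).sum(axis=1))   # here 1/5

x = np.zeros(4)
for _ in range(500):
    # gradient step for (1/2) x^T A x - b^T x; component i of A @ x - b
    # only involves node i's neighbors, so this runs distributedly
    x = x - alpha * (A @ x - b)

print(np.abs(x - x_true).max())   # converges at a linear (geometric) rate
```

The row sum max_j Σ|A_ij| is local information, so no global spectral computation is needed to pick the step size.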
