eScholarship
Open Access Publications from the University of California

Fast Approximation Algorithms for Positive Linear Programs

  • Author(s): Wang, Di
  • Advisor(s): Rao, Satish
Abstract

Positive linear programs (LPs), or equivalently, mixed packing and covering LPs, are LPs formulated with non-negative coefficients, constants, and variables. Notable special cases of positive LPs include packing LPs and covering LPs. Positive LPs model a wide range of fundamental problems in both the theory of computation and practice, and have long drawn interest from the theoretical computer science, operations research, and optimization communities.
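In textbook standard form (standard notation, not necessarily the notation of this work), the two special cases can be written as:

$$\text{Packing:}\quad \max\ c^{\top}x \quad \text{s.t.}\ Ax \le b,\ x \ge 0, \qquad\qquad \text{Covering:}\quad \min\ b^{\top}y \quad \text{s.t.}\ A^{\top}y \ge c,\ y \ge 0,$$

where all entries of $A$, $b$, and $c$ are non-negative. A mixed packing and covering LP combines both constraint types, e.g., asking for $x \ge 0$ satisfying $Px \le p$ and $Cx \ge c$ with all data non-negative.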

In this work, we study iterative methods for approximately solving positive LPs efficiently. Given a positive LP of size $N$, we are interested in iterative methods that can converge to a $(1\pm \epsilon)$-approximate optimal solution with complexity that is nearly linear in $N$, and polynomial in $\frac{1}{\epsilon}$.

For the special cases of packing LPs and covering LPs, we design improved sequential and parallel approximate solvers, both using first-order (i.e., gradient-based) descent methods from (continuous) convex optimization. In particular, we build upon the linear coupling framework introduced by Allen-Zhu and Orecchia. The performance of first-order methods is sensitive to the geometry of the problem space as well as to properties of the objective function. At a high level, we design (discrete) techniques that exploit the combinatorial structure of the LPs, leading to significantly improved theoretical behavior of the optimization method. More specifically, our sequential (i.e., non-parallelizable) solver builds on an $\tilde{O}(N/\epsilon)$ algorithm for packing LPs from an earlier breakthrough of Allen-Zhu and Orecchia, and we provide a unified method with running time $\tilde{O}(N/\epsilon)$ for both packing LPs and covering LPs. Our parallel algorithm has depth (i.e., parallel running time) $\tilde{O}(1/\epsilon^2)$, an $\Omega(1/\epsilon)$-factor improvement over the previous best algorithm.
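To make the flavor of a first-order packing solver concrete, here is a minimal sketch: a generic soft-max (multiplicative-weights) gradient method for a packing LP in the form $\max \mathbf{1}^{\top}x$ s.t. $Ax \le \mathbf{1}$, $x \ge 0$. The function name, step-size choice, and iteration count are illustrative assumptions; this is a textbook-style method, not the accelerated algorithm developed in the thesis.

```python
import numpy as np

def packing_lp_mwu(A, eps=0.1, iters=2000):
    """Approximate max sum(x) s.t. A x <= 1, x >= 0 (a packing LP) with a
    basic soft-max / multiplicative-weights first-order sketch.
    Illustrative only, not the thesis algorithm."""
    m, n = A.shape
    x = np.zeros(n)
    step = eps / (A.max() * n)           # conservative step size (assumed)
    for _ in range(iters):
        z = (A @ x) / eps
        w = np.exp(z - z.max())          # soft-max weights on the constraints
        w /= w.sum()
        load = A.T @ w                   # weighted load each coordinate adds
        x[np.argmin(load)] += step       # grow the least-loaded coordinate
    slack = (A @ x).max()
    if slack > 1:                        # rescale so A x <= 1 holds exactly
        x /= slack
    return x
```

The soft-max weights concentrate on the tightest constraints, so each increment goes to a coordinate that barely affects them; the final rescaling guarantees feasibility.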

For general positive LPs, we adopt a more combinatorial framework, Lagrangian relaxation, but we crucially adapt it to leverage insights from continuous convex optimization. Incorporating continuous optimization techniques provides a primal-dual perspective on the method and leads to faster convergence. More specifically, we develop a parallel algorithm with depth $\tilde{O}(1/\epsilon^3)$, improving on the long-standing $\tilde{O}(1/\epsilon^4)$ bound for positive LPs.
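As a rough illustration of the Lagrangian-relaxation style for the mixed case, the sketch below searches for $x \ge 0$ with $Px \le \mathbf{1}$ and $Cx \ge \mathbf{1}$ by trading exponential penalties on packing violations against exponential rewards for unmet covering constraints, in the manner of classic multiplicative-weights schemes. All names and parameters are assumptions for illustration; this is not the thesis algorithm.

```python
import numpy as np

def mixed_pc_feasible(P, C, eps=0.05, max_iters=10000):
    """Lagrangian-relaxation sketch for mixed packing/covering feasibility:
    find x >= 0 with P x <= 1 and C x >= 1 (componentwise).
    Illustrative only, not the thesis algorithm."""
    x = np.zeros(P.shape[1])
    step = eps / max(P.max(), C.max())       # small increments (assumed)
    for _ in range(max_iters):
        Px, Cx = P @ x, C @ x
        if Cx.min() >= 1.0:                  # every covering row satisfied
            break
        wp = np.exp((Px - Px.max()) / eps)   # packing penalties
        wc = np.exp(-(Cx - Cx.min()) / eps)  # covering rewards
        # increment the coordinate with the best reward-to-penalty ratio
        k = np.argmax((C.T @ wc) / (P.T @ wp + 1e-12))
        x[k] += step
    return x
```

Because each increment is small, the packing constraints overshoot $1$ by at most $O(\epsilon)$ when the covering constraints are first met.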

At a high level, the improved results in this work come from an integrated view of continuous and combinatorial techniques. This combination brings intriguing new perspectives to algorithm design and is promising for further progress.
