Beyond Chance-Constrained Convex Mixed-Integer Optimization: A Generalized Calafiore-Campi Algorithm and the notion of $S$-optimization

Department of Mathematics, UC Davis

Published Web Location

https://arxiv.org/pdf/1504.00076.pdf
Abstract

The scenario approach developed by Calafiore and Campi to attack chance-constrained convex programs uses random sampling of the uncertainty parameter to replace the original problem with a representative continuous convex optimization problem with $N$ convex constraints, which is a relaxation of the original. Calafiore and Campi provided an explicit estimate of the sample size $N$ needed for the relaxation to yield, with high likelihood, feasible solutions of the chance-constrained problem. They bounded the probability that the original constraints are violated by the random optimal solution of the size-$N$ relaxation. This paper has two main contributions. First, we present a generalization of the Calafiore-Campi results to both integer and mixed-integer variables. In fact, we demonstrate that their sampling estimates work naturally for variables restricted to some subset $S$ of $\mathbb R^d$. The key elements are generalizations of Helly's theorem in which the convex sets are required to intersect $S \subset \mathbb R^d$. The sample sizes in both algorithms are directly determined by the $S$-Helly numbers. Motivated by the first half of the paper, for any subset $S \subset \mathbb R^d$ we introduce the notion of an $S$-optimization problem, in which the variables take values in $S$. It generalizes continuous, integer, and mixed-integer optimization. We illustrate with examples the expressive power of $S$-optimization to capture sophisticated combinatorial optimization problems with difficult modular constraints. We reinforce the evidence that $S$-optimization is "the right concept" by showing that the well-known randomized sampling algorithm of K. Clarkson for low-dimensional convex optimization problems can be extended to work with variables taking values in $S$.
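
To fix ideas, the setup described in the abstract can be sketched as follows; the notation is illustrative and not necessarily the paper's. The chance-constrained problem over a set $S \subset \mathbb R^d$ is

\begin{align*}
\min_{x \in S}\;\; & c^{\top} x \\
\text{s.t.}\;\; & \Pr_{\delta}\bigl[f(x,\delta) \le 0\bigr] \ge 1 - \varepsilon,
\end{align*}

where $f(\cdot,\delta)$ is convex for each value of the uncertainty parameter $\delta$. The scenario relaxation draws i.i.d. samples $\delta_1,\dots,\delta_N$ of $\delta$ and imposes only the sampled constraints:

\begin{align*}
\min_{x \in S}\;\; & c^{\top} x \\
\text{s.t.}\;\; & f(x,\delta_i) \le 0, \qquad i = 1,\dots,N.
\end{align*}

In the original Calafiore-Campi setting $S = \mathbb R^d$; the generalization described above allows, for example, $S = \mathbb Z^d$ or a mixed-integer set, with the required sample size $N$ governed by the corresponding $S$-Helly number.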
