## Aspects of optimization with increasing concave stochastic order constraints

- Author(s): Haskell, William Benjamin
- Advisor(s): Shen, Zuo-Jun M; Shanthikumar, J G; et al.

## Abstract

This dissertation applies convex optimization techniques to a class of stochastic optimization problems. This class of problems is distinguished by a multivariate increasing concave stochastic order constraint on a random-variable-valued mapping. The analysis is divided into four chapters to address four different aspects of optimization with increasing concave stochastic order constraints.
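For reference, the increasing concave stochastic order admits the following standard definition (our notation, not necessarily the dissertation's): for random vectors $Y$ and $Z$ taking values in $\mathbb{R}^{n}$,

```latex
Y \succeq_{\mathrm{icv}} Z
\quad\Longleftrightarrow\quad
\mathbb{E}\!\left[\phi(Y)\right] \;\ge\; \mathbb{E}\!\left[\phi(Z)\right]
\quad \text{for all increasing concave } \phi : \mathbb{R}^{n} \to \mathbb{R}
```

where the expectations are assumed to exist. A constraint of this type is infinite-dimensional, since it must hold for an entire family of test functions $\phi$ at once.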

In Chapter 3, we describe this class of stochastic optimization problems in detail. This class of problems does not satisfy the Slater condition, and we must overcome this technical difficulty before convex optimization techniques can be applied. For this purpose, we introduce a perturbation of the original problem, and we discuss semi-infinite programming and nonlinear programming relaxations of this perturbation. We show that increasing concave functions act as the Lagrange multipliers of increasing concave stochastic order constraints. We conclude the chapter with a discussion of a transformation of these problems via coupling on finite probability spaces.
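To make the Lagrange multiplier statement concrete, consider a generic problem of this type (our notation, with a hypothetical benchmark random vector $Y$; the dissertation's exact formulation may differ):

```latex
\max_{x \in X} \; f(x)
\quad \text{s.t.} \quad G(x) \succeq_{\mathrm{icv}} Y .
```

Since the constraint requires $\mathbb{E}[\phi(G(x))] \ge \mathbb{E}[\phi(Y)]$ for every increasing concave $\phi$, the natural Lagrangian pairs the decision $x$ with a single increasing concave function $\phi$ playing the role of the multiplier:

```latex
L(x, \phi) \;=\; f(x) \;+\; \mathbb{E}\!\left[\phi(G(x))\right] \;-\; \mathbb{E}\!\left[\phi(Y)\right].
```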

In Chapter 4, we consider sample average approximation for increasing concave stochastic order constrained programs and verify its consistency. We also study sample average approximation for the semi-infinite programming and nonlinear programming relaxations presented in Chapter 3, and again verify consistency for these relaxations. The solution method in this chapter is based on the coupling transformation from Chapter 3, which necessarily leads to large-scale problems. We comment on aggregation, column generation, and row generation techniques as possible large-scale approaches.
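The sampled constraints can be checked on finite data. The following is a minimal sketch for the univariate case only (the dissertation treats the multivariate order), using the standard expected-shortfall characterization of the increasing concave order; all names here are illustrative, not the dissertation's.

```python
import numpy as np

def icv_dominates(x, y, tol=1e-12):
    """Empirical check of the univariate increasing concave order:
    x >=_icv y  iff  E[(eta - x)+] <= E[(eta - y)+] for all thresholds eta.
    The shortfall curves are piecewise linear with kinks at sample points,
    so on finite samples it suffices to test eta at the pooled points."""
    etas = np.union1d(x, y)
    shortfall_x = np.maximum(etas[:, None] - x[None, :], 0.0).mean(axis=1)
    shortfall_y = np.maximum(etas[:, None] - y[None, :], 0.0).mean(axis=1)
    return bool(np.all(shortfall_x <= shortfall_y + tol))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)   # sampled benchmark
x = y + 0.5                 # uniformly better outcome, sample by sample

print(icv_dominates(x, y))  # True: x dominates y
print(icv_dominates(y, x))  # False: y does not dominate x
```

In a sample average approximation scheme, a check of this kind would be imposed as a constraint over the sampled scenarios rather than evaluated after the fact; the quadratic growth in pooled thresholds is one source of the large-scale difficulty mentioned above.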

In Chapter 5, we introduce a robust version of the increasing concave stochastic order. Specifically, we consider model misspecification for the underlying probability distribution. Robust versions of the semi-infinite programming and nonlinear programming relaxations from Chapter 3 are also presented and analyzed. The case of polyhedral uncertainty is emphasized for finite probability spaces. As in Chapter 4, a large-scale implementation issue arises, and aggregation is again suggested as a possible solution approach.
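One natural formalization of such a robust order constraint (our notation; the dissertation's exact formulation may differ) requires the order to hold under every distribution $Q$ in an ambiguity set $\mathcal{Q}$:

```latex
\mathbb{E}_{Q}\!\left[\phi(G(x))\right] \;\ge\; \mathbb{E}_{Q}\!\left[\phi(Y)\right]
\qquad \text{for all increasing concave } \phi \text{ and all } Q \in \mathcal{Q}.
```

On a finite probability space with $m$ atoms, polyhedral uncertainty corresponds to an ambiguity set of the form

```latex
\mathcal{Q} \;=\; \Bigl\{\, q \in \mathbb{R}^{m}_{+} \;:\; \textstyle\sum_{i=1}^{m} q_i = 1,\ A q \le b \,\Bigr\},
```

so that robust feasibility can be certified by linear programming duality over $q$.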

In Chapter 6, we consider multi-period stochastic optimization problems with increasing concave stochastic order constraints. We put a multivariate increasing concave stochastic order constraint on a vector of performance measures across all time periods. We derive optimality conditions similar to those in Chapter 3. We also show that this class of multi-period problems has a companion auxiliary control problem that can be solved with dynamic programming. We comment on using duality decomposition to solve instances of this problem on finite probability spaces, such as those obtained via conditional sampling.