Optimization is essential in data science. Data science optimization studies optimization problems that arise in data science applications, and polynomial functions are broadly used in this field. We are mostly interested in stochastic optimization, equilibrium games, and loss function optimization.
Stochastic optimization studies optimization problems that are formulated with random variables. A classic example is to find the optimizer of a function given by the expectation of a random quantity. For stochastic optimization with polynomials, we propose an efficient perturbation sample average approximation (SAA) model. It can be solved globally by Moment-SOS relaxations and gives a robust approximation of the original problem.
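For concreteness, a generic instance of this problem class can be written as follows; the notation is illustrative and not taken from the text:
\[
\min_{x \in X} \; F(x) := \mathbb{E}_{\xi}\big[f(x,\xi)\big],
\]
where $f(x,\xi)$ is a polynomial in the decision variable $x$ and $\xi$ is a random vector. A sample average approximation replaces the expectation by the empirical mean over samples $\xi_1,\dots,\xi_N$:
\[
\min_{x \in X} \; \frac{1}{N}\sum_{i=1}^{N} f(x,\xi_i).
\]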
Distributionally robust optimization (DRO) is another kind of stochastic optimization. It assumes the uncertainty is described by an ambiguity set of distributions and aims to optimize the objective function under the worst case over this set. For DRO defined with polynomials under moment ambiguity, we transform it into a linear conic optimization problem with moment and psd polynomial cones, and give a semidefinite algorithm to solve it globally.
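A generic min-max statement of DRO, again in illustrative notation, is
\[
\min_{x \in X} \; \max_{\mu \in \mathcal{M}} \; \mathbb{E}_{\mu}\big[f(x,\xi)\big],
\]
where $\mathcal{M}$ is the ambiguity set; under moment ambiguity, $\mathcal{M}$ collects all distributions whose moments satisfy prescribed constraints.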
Bilevel optimization is a challenging class of optimization problems whose feasible set is constrained by the optimizer set of another optimization problem. For bilevel optimization defined with polynomials, we propose a semidefinite algorithm to solve it globally. Under some general assumptions, the algorithm either computes the global minimizer(s) or detects their nonexistence.
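A standard way to state such a problem (the symbols here are illustrative, not the text's own) is
\[
\min_{x \in X,\, y} \; F(x,y) \quad \text{s.t.} \quad y \in \operatorname*{argmin}_{z \in Z(x)} f(x,z),
\]
where the lower-level optimizer set $\operatorname*{argmin}_{z \in Z(x)} f(x,z)$ constrains the feasible set of the upper-level problem.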
The generalized Nash equilibrium problem (GNEP) is formed by a group of mutually parametrized optimization problems. It aims to find an equilibrium at which no player's objective can be further improved by changing that player's strategy alone. For GNEPs with rational polynomial functions, we propose a new approach that solves them through a hierarchy of rational optimization problems. Under some general assumptions, we show that the proposed hierarchy can compute a GNE if one exists, or detect its nonexistence.
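In a generic formulation (illustrative notation), the $i$th player solves
\[
\min_{x_i} \; f_i(x_i, x_{-i}) \quad \text{s.t.} \quad x_i \in X_i(x_{-i}),
\]
where $x_{-i}$ denotes the strategies of all other players. A tuple $(x_1,\dots,x_N)$ is a generalized Nash equilibrium (GNE) if each $x_i$ is optimal for the $i$th problem when $x_{-i}$ is fixed.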
Loss functions are essential in data science optimization. We study loss functions for finite sets and propose a class of efficient sum-of-squares (SOS) polynomial loss functions for general finite sets. We show how to compute SOS loss functions of the lowest degree. In addition, we give a special class of SOS loss functions for which every local minimizer is also a global minimizer.
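As a small illustrative example (not taken from the text): for the finite set $S = \{-1, 1\} \subseteq \mathbb{R}$, the polynomial
\[
L(x) = (x^2 - 1)^2
\]
is an SOS loss function for $S$. It is a square of a polynomial, it vanishes exactly on $S$, and its only local minimizers $x = \pm 1$ are also its global minimizers.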