New Results for Online and Offline Stochastic Optimization and Decision-Making
eScholarship
Open Access Publications from the University of California


UC Berkeley Electronic Theses and Dissertations


Abstract

This dissertation presents several contributions at the interface of convex optimization and decision-making, in both online and offline settings. The first part of the dissertation focuses on new optimization methods for computing an approximate solution path for parameterized optimization problems. We develop and analyze several second-order algorithms for computing a near-optimal solution path of a convex parametric optimization problem whose Hessian is smooth. Our algorithms are inspired by a differential equation perspective on the parametric solution path and do not rely on the specific structure of the objective function. We present computational guarantees that bound the oracle complexity required to achieve a near-optimal solution path under different sets of smoothness assumptions; under these assumptions, our results improve upon the best-known guarantees for grid search methods. We also develop second-order conjugate gradient variants that avoid exact Hessian computations and exact solutions of linear systems. We present computational results demonstrating the effectiveness of our methods over grid search methods on both real and synthetic datasets. On large-scale problems, we demonstrate significant speedups of the conjugate gradient variants as compared to the standard versions of our methods.

The second part of the dissertation focuses on the statistical properties of the recently introduced surrogate ``SPO+'' loss function in the ``Smart Predict-then-Optimize (SPO)'' framework. We greatly expand upon the consistency results for this surrogate loss established in the previous literature. We develop risk bounds and uniform calibration results for the surrogate loss relative to the original loss, which provide a quantitative way to transfer excess surrogate risk to excess true risk.
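To make the SPO and SPO+ losses concrete, the following is a minimal sketch for a linear objective minimized over a feasible region given by an explicit list of vertices. The formulas follow the standard definitions from the Smart Predict-then-Optimize literature; the function names and the vertex-enumeration representation are illustrative choices, not the dissertation's implementation.

```python
import numpy as np

def spo_loss(c_hat, c, vertices):
    """SPO (decision regret) loss: cost, under the true vector c, of the
    decision optimal for the prediction c_hat, minus the true optimal value."""
    V = np.asarray(vertices, dtype=float)
    z_star = (V @ c).min()                 # z*(c) = min_{w in S} c^T w
    w_hat = V[np.argmin(V @ c_hat)]        # decision induced by the prediction
    return c @ w_hat - z_star

def spo_plus_loss(c_hat, c, vertices):
    """SPO+ convex surrogate loss:
    max_{w in S} (c - 2 c_hat)^T w + 2 c_hat^T w*(c) - z*(c)."""
    V = np.asarray(vertices, dtype=float)
    z_star = (V @ c).min()                 # true optimal value z*(c)
    w_star = V[np.argmin(V @ c)]           # an optimal decision under c
    support = (V @ (c - 2.0 * c_hat)).max()  # support-function term
    return support + 2.0 * c_hat @ w_star - z_star
```

For example, over the unit simplex vertices `np.eye(3)` with true cost `[1, 2, 3]`, a perfect prediction gives zero loss for both, while a mis-ordered prediction `[3, 1, 2]` incurs an SPO loss of 1 and an SPO+ loss of 5, illustrating that SPO+ upper-bounds the (nonconvex) SPO loss.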
By combining our risk bounds with generalization bounds, we show that the empirical minimizer of the surrogate loss achieves low excess true risk with high probability. We first establish these results when the feasible region of the underlying optimization problem is a polyhedron, and then show that they can be strengthened substantially when the feasible region is a level set of a strongly convex function. We perform experiments to empirically demonstrate the strength of the SPO+ surrogate, as compared to standard $\ell_1$ and squared $\ell_2$ prediction error losses, on portfolio allocation and cost-sensitive multi-class classification problems.

The third part of the dissertation focuses on the online contextual decision-making problem with resource constraints. We propose an algorithm that mixes a prediction step based on the SPO method with a dual update step based on mirror descent. We prove regret bounds and demonstrate that the overall convergence rate of our method depends on the $\mathcal{O}(T^{-1/2})$ convergence of online mirror descent as well as on the risk bounds of the surrogate loss function used to learn the prediction model. Our algorithm and regret bounds apply to a general convex feasible region for the resource constraints, including both hard and soft resource constraint cases, and to a wide class of prediction models, in contrast to the traditional settings of linear contextual models or finite policy spaces. We also conduct numerical experiments to empirically demonstrate the strength of our proposed SPO-type methods, as compared to traditional prediction-error-only methods, on multi-dimensional knapsack and longest path instances.
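The mirror-descent dual update described above can be sketched as follows. This is a hypothetical sketch, not the dissertation's algorithm: it uses the Euclidean mirror map (so the update reduces to projected subgradient ascent on the nonnegative orthant) and represents the feasible set by enumerated vertices; all names are illustrative.

```python
import numpy as np

def dual_update(lam, g, eta):
    """One mirror-descent step on the dual (resource-price) vector lam with the
    Euclidean mirror map, i.e. projected subgradient ascent onto lam >= 0.
    Here g is the round's dual subgradient: the resource consumption of the
    chosen decision minus the per-round budget."""
    return np.maximum(0.0, lam + eta * g)

def lagrangian_decision(c_hat, A, lam, vertices):
    """Choose the decision minimizing the price-adjusted predicted cost
    (c_hat + A^T lam)^T w over an explicitly enumerated feasible set, where
    A maps decisions to resource consumption."""
    V = np.asarray(vertices, dtype=float)
    return V[np.argmin(V @ (c_hat + A.T @ lam))]
```

In each round, the learned model supplies `c_hat` from the context, `lagrangian_decision` trades off predicted cost against current resource prices, and `dual_update` raises the price of any over-consumed resource, which is how the $\mathcal{O}(T^{-1/2})$ online mirror descent rate enters the overall regret bound.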
