Open Access Publications from the University of California


UC Berkeley Electronic Theses and Dissertations

Topics in Modeling Uncertainty with Learning


It is fair to say that in many real-world decision problems the underlying models cannot be represented accurately. Uncertainty in a model may arise from a lack of sufficient data to calibrate the model, from non-stationarity, or from wrong subjective assumptions. Optimization in the presence of model uncertainty is therefore an important issue. Over the last few decades, there has been substantial work in operations research on finding solutions that are robust to model uncertainty, and with advances in convex optimization, many robust optimization problems are now efficiently solvable. Still, many challenges and open questions related to model uncertainty remain, especially when learning is also involved. In this thesis, we study the following challenges and problems related to model uncertainty with learning:

First, defining and computing a robust solution to problems with model uncertainty is a challenging task, especially in dynamic optimization problems. Many dynamic problems are tractable only because the structure of their solutions can be analyzed, which reduces the dimension of the solution space. It is therefore important to design robust counterparts of dynamic optimization problems and to analyze the structure of their robust solutions.

Second, in many situations the need for a robust solution arises from a lack of sufficient data to calibrate model parameters. It is important to study the properties of traditional robust solutions: are they better for decision making in the long run than non-robust solutions? If not, is it possible to design alternative approaches?

Third, it is not possible to be robust to model uncertainty if attention is restricted to a wrong model class. One may instead use a relatively model-free learning and decision-making methodology such as reinforcement learning, which can achieve an optimal solution asymptotically. However, such approaches may take a long time to learn, so it is important to look at non-parametric methodologies that achieve good small-sample performance.

The main contributions of this thesis can be summarized as follows:

We study an infinite-horizon queueing control system in which both the arrival and departure rates can be controlled, and we consider the case where the arrival and departure processes cannot be modeled accurately. We prove that a threshold policy is optimal under a max-min robust optimization objective when the uncertainty in the processes is characterized by a novel notion of relative entropy, one that accounts for different levels of modeling error in the arrival and departure processes.
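To make the idea of a threshold policy concrete, the following sketch simulates a simple discrete-time controlled queue in which the costly fast service rate is used only when the queue length exceeds a threshold. This is a minimal illustration, not the thesis's model: the discrete-time dynamics, rates, and costs (`lam`, `mu_low`, `mu_high`, `holding_cost`, `effort_cost`) are all hypothetical values chosen for the example.

```python
import random

def simulate_threshold_policy(threshold, horizon=100_000, seed=0,
                              lam=0.6, mu_low=0.5, mu_high=0.9,
                              holding_cost=1.0, effort_cost=2.0):
    """Simulate a single-server queue under a threshold policy:
    use the fast (costly) service rate only when the queue length
    exceeds `threshold`. Returns the long-run average cost."""
    rng = random.Random(seed)
    q, total_cost = 0, 0.0
    for _ in range(horizon):
        mu = mu_high if q > threshold else mu_low
        if rng.random() < lam:           # arrival this period
            q += 1
        if q > 0 and rng.random() < mu:  # departure this period
            q -= 1
        total_cost += holding_cost * q + (effort_cost if mu == mu_high else 0.0)
    return total_cost / horizon

# Compare a few thresholds under the nominal model.
costs = {k: simulate_threshold_policy(k) for k in range(6)}
best = min(costs, key=costs.get)
```

Under model uncertainty, one would instead evaluate each threshold against its worst-case arrival/departure model (here, within a relative-entropy ball) before taking the minimum.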

We perform numerical tests to study the performance of traditional robust optimization solutions when the model is learned from only a few past data points. The performance of a robust solution can be even worse than that of a classical, point-estimate-based non-robust solution. We introduce the notion of generalized operational statistics, which guarantees a solution better than the classical solution over a set of uncertain parameters while incorporating subjective prior information. We apply the operational statistics approach to the mean-variance portfolio optimization problem with uncertain mean returns and show that the resulting problem can be solved efficiently by reformulating it as a semi-definite program. Various extensions are discussed, and numerical experiments demonstrate the efficacy of the solution.
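The contrast between a point-estimate-based portfolio and one that blends data with prior information can be sketched as follows. This is a simplified stand-in for the operational statistics approach, not the semi-definite programming formulation from the thesis: the shrinkage estimator, the blend weight `alpha`, and all return parameters are hypothetical illustration choices.

```python
import numpy as np

def markowitz_weights(mu, Sigma, gamma=5.0):
    """Unconstrained mean-variance weights: argmax_w mu'w - (gamma/2) w'Sigma w,
    whose closed form is w = Sigma^{-1} mu / gamma."""
    return np.linalg.solve(Sigma, mu) / gamma

def shrunk_mean(mu_hat, mu_prior, alpha=0.5):
    """Blend the noisy sample mean with a subjective prior mean --
    a crude stand-in for incorporating prior information into the
    estimator itself (alpha is a hypothetical blend weight)."""
    return alpha * mu_hat + (1 - alpha) * mu_prior

rng = np.random.default_rng(0)
true_mu = np.array([0.05, 0.03, 0.04])
Sigma = np.diag([0.04, 0.02, 0.03])

# A small sample of returns yields a noisy point estimate of the mean.
sample = rng.multivariate_normal(true_mu, Sigma, size=20)
mu_hat = sample.mean(axis=0)

w_plain = markowitz_weights(mu_hat, Sigma)
w_shrunk = markowitz_weights(shrunk_mean(mu_hat, np.full(3, 0.04)), Sigma)
```

With few samples, `mu_hat` is noisy and `w_plain` inherits that noise; blending toward a prior tempers the estimate, which is the flavor of benefit the operational statistics framework formalizes and optimizes over.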

We introduce objective operational learning, a new non-parametric approach that incorporates structural information to improve small-sample performance. We show how structural and objective information can be incorporated into the objective operational learning algorithm, apply the algorithm to an inventory control problem in which demand depends on the inventory level, and prove its convergence.
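The non-parametric flavor of learning an ordering decision directly from data, without fitting a demand distribution, can be illustrated with a basic newsvendor sketch. This is not the objective operational learning algorithm itself (which handles inventory-dependent demand and carries convergence guarantees); the prices, costs, and Poisson demand below are hypothetical.

```python
import numpy as np

def newsvendor_cost(q, demand, price=2.0, cost=1.0):
    """Per-period cost of ordering q units: purchase cost minus
    revenue on units actually sold (min of order and demand)."""
    return cost * q - price * np.minimum(q, demand)

def empirical_best_order(demand_samples, candidates):
    """Non-parametric choice: pick the order quantity minimizing the
    average cost over observed demand samples, with no distributional
    assumption -- learning the decision directly from the data."""
    demand_samples = np.asarray(demand_samples, dtype=float)
    avg = [newsvendor_cost(q, demand_samples).mean() for q in candidates]
    return candidates[int(np.argmin(avg))]

rng = np.random.default_rng(1)
samples = rng.poisson(20, size=200)
q_star = empirical_best_order(samples, candidates=list(range(0, 41)))
```

With these parameters the critical fractile is (price - cost) / price = 0.5, so with enough samples the data-driven order quantity settles near the median demand; the thesis's contribution is to make such direct, objective-driven learning work well with small samples and structural information.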
