
Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization

Published Web Location

https://arxiv.org/abs/2109.12213
No data is associated with this publication.
Creative Commons 'BY' version 4.0 license
Abstract

We consider unconstrained stochastic optimization problems in which no gradient information is available. Such problems arise in settings ranging from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method in which gradients are estimated by finite differences of stochastic function evaluations within a common random number framework. We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the stochastic approximations, and we establish global convergence results to a neighborhood of a locally optimal solution. We present numerical experiments on simulation optimization problems to illustrate the performance of the proposed algorithm. When compared with classical zeroth-order stochastic gradient methods, we observe that our sample-size adaptation strategies significantly improve performance in terms of the number of stochastic function evaluations required.
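As a rough illustration of the gradient-estimation step described in the abstract, the sketch below (not taken from the paper; the toy objective `stochastic_f`, the noise model, the constant `theta`, and the simplified norm-test-style condition are assumptions for illustration) estimates a gradient by central finite differences of noisy function evaluations, reuses the same random seed for the plus and minus perturbations so the noise is common to both (common random numbers), and doubles the sample size until the sample variance of the gradient estimates is small relative to the norm of their mean.

```python
import numpy as np

def stochastic_f(x, rng):
    """Noisy black-box objective: a quadratic plus Gaussian noise (illustrative stand-in)."""
    return 0.5 * np.dot(x, x) + 0.1 * rng.standard_normal()

def fd_gradient_crn(x, seeds, h=1e-3):
    """Central finite-difference gradient estimates, one row per seed.

    Each seed is reused for both perturbed evaluations of every coordinate,
    so the noise realization is shared by the plus/minus points (common random numbers).
    """
    n = x.size
    grads = np.empty((len(seeds), n))
    for k, seed in enumerate(seeds):
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            f_plus = stochastic_f(x + e, np.random.default_rng(seed))
            f_minus = stochastic_f(x - e, np.random.default_rng(seed))
            grads[k, i] = (f_plus - f_minus) / (2.0 * h)
    return grads

def adaptive_gradient(x, theta=0.5, n0=4, n_max=1024):
    """Grow the sample size until a simplified norm-test-style condition holds:
    (sample variance of the estimates) / N <= theta^2 * ||mean gradient||^2.
    This is an illustrative stand-in for the modified tests in the paper.
    """
    n_samples = n0
    while True:
        grads = fd_gradient_crn(x, range(n_samples))
        g_bar = grads.mean(axis=0)
        var_term = grads.var(axis=0, ddof=1).sum() / n_samples
        if var_term <= theta**2 * np.dot(g_bar, g_bar) or n_samples >= n_max:
            return g_bar, n_samples
        n_samples *= 2  # adaptively increase the sample size

g, used = adaptive_gradient(np.array([1.0, -2.0]))
print("estimated gradient:", g, "samples used:", used)
```

In this toy setting the common random numbers make the noise cancel exactly in each central difference; in general they reduce, rather than eliminate, the variance of the finite-difference estimate, which is what makes the adaptive sample-size tests effective.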
