
Central Finite-Difference Based Gradient Estimation Methods for Stochastic Optimization

Abstract

This paper presents an algorithmic framework for solving unconstrained stochastic optimization problems using only stochastic function evaluations. We employ central finite-difference-based gradient estimation methods to approximate the gradients and dynamically control the accuracy of these approximations by adjusting the sample sizes used in the stochastic realizations. We analyze the theoretical properties of the proposed framework on nonconvex functions. Our analysis yields sublinear convergence results to a neighborhood of the solution, and establishes the optimal worst-case iteration complexity of O(ε⁻¹) and sample complexity of O(ε⁻²) for each gradient estimation method to achieve an ε-accurate solution. Finally, we demonstrate the performance of the proposed framework and the quality of the gradient estimation methods through numerical experiments on nonlinear least squares problems.
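The abstract does not reproduce the algorithm itself, but the two ingredients it describes can be illustrated concretely: a central finite-difference gradient estimate built from averaged stochastic function evaluations, and a sample size that grows over the iterations to tighten the estimate's accuracy. The Python sketch below is a minimal illustration under stated assumptions; the test objective `f`, the fixed step size, and the geometric sample-size schedule are hypothetical choices for demonstration, not the paper's actual framework.

```python
import numpy as np

def cfd_gradient(f, x, h, num_samples, rng):
    """Central finite-difference estimate of the gradient of E[f(x; xi)].

    Coordinate i is estimated as (fbar(x + h*e_i) - fbar(x - h*e_i)) / (2h),
    where fbar averages `num_samples` stochastic function evaluations.
    """
    d = x.size
    g = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        f_plus = np.mean([f(x + e, rng) for _ in range(num_samples)])
        f_minus = np.mean([f(x - e, rng) for _ in range(num_samples)])
        g[i] = (f_plus - f_minus) / (2.0 * h)
    return g

# Hypothetical noisy test objective: a quadratic with zero-mean noise,
# standing in for a stochastic nonlinear least squares problem.
rng = np.random.default_rng(0)
x_star = np.array([1.0, -2.0])

def f(x, rng):
    return float(np.sum((x - x_star) ** 2) + 0.1 * rng.standard_normal())

x = np.zeros(2)
n = 4  # initial per-point sample size (assumed schedule)
for k in range(50):
    g = cfd_gradient(f, x, h=1e-3, num_samples=n, rng=rng)
    x = x - 0.1 * g                       # fixed step size (assumed)
    n = min(int(np.ceil(1.5 * n)), 1024)  # grow sample size to reduce noise

print(x)  # approaches x_star as the gradient estimates become more accurate
```

As the sample size grows, the variance of each averaged function value shrinks, so the finite-difference quotient tracks the true gradient more closely; this mirrors the dynamic accuracy control the abstract describes, though the paper's adaptive rule is not specified here.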
