UC San Diego Electronic Theses and Dissertations

Second-Derivative SQP Methods for Large-Scale Nonconvex Optimization

Abstract

Stabilized sequential quadratic programming (SQP) methods for nonlinearly constrained optimization solve a sequence of related quadratic programming (QP) subproblems, each formed from a two-norm penalized quadratic model of the Lagrangian function subject to shifted, linearized constraints. Although these methods exhibit superlinear local convergence even when the constraint Jacobian is rank deficient at the solution, they generally have no global convergence theory. To address this issue, primal-dual SQP methods (pdSQP) employ a primal-dual augmented Lagrangian merit function and solve a subproblem that minimizes a quadratic model of the merit function subject to simple bound constraints. The model of the merit function is constructed so that the resulting primal-dual subproblem is equivalent to the stabilized SQP subproblem. When combined with a flexible line search, the merit function guarantees convergence from any starting point, while the connection with the stabilized subproblem allows pdSQP to retain the superlinear local convergence characteristic of stabilized SQP methods.
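For orientation, the following is a sketch of these two objects in one standard form from the stabilized SQP literature, assuming equality constraints c(x) = 0, objective f, multipliers y, multiplier estimate y_e, and penalty parameter μ > 0; the exact forms and notation used in the dissertation may differ.

\[
\min_{p,\,y}\;\; g_k^T p + \tfrac{1}{2}\, p^T H_k\, p + \tfrac{\mu}{2}\,\|y\|^2
\quad \text{subject to} \quad c_k + J_k p - \mu\,(y - y_k) = 0
\qquad \text{(stabilized QP subproblem)}
\]

\[
M_\nu(x, y) \;=\; f(x) - c(x)^T y_e + \tfrac{1}{2\mu}\,\|c(x)\|^2
+ \tfrac{\nu}{2\mu}\,\big\|c(x) + \mu\,(y - y_e)\big\|^2
\qquad \text{(primal-dual augmented Lagrangian)}
\]

Here g_k, H_k, and J_k denote the objective gradient, the Hessian of the Lagrangian, and the constraint Jacobian at the current iterate (x_k, y_k). Minimizing a quadratic model of M_ν subject to simple bounds recovers a subproblem equivalent to the stabilized one above, which is the equivalence referred to in this abstract.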

A new dynamic convexification framework is developed that is applicable to nonconvex general standard-form, stabilized, and primal-dual bound-constrained QP subproblems. Dynamic convexification involves three distinct stages: pre-convexification, concurrent convexification, and post-convexification. New techniques are derived and analyzed for the implicit modification of symmetric indefinite factorizations and for the imposition of temporary artificial constraints, both of which are suitable for pre-convexification. Concurrent convexification works synchronously with the active-set method used to solve the subproblem and computes the minimal modifications needed to ensure that the QP iterates are uniformly bounded. Finally, post-convexification defines an implicit modification that ensures the solution of the subproblem yields a descent direction for the merit function.
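As a concrete, hypothetical illustration of the concurrent-convexification idea, the short Python sketch below computes the smallest nonnegative diagonal shift σ that restores sufficient positive curvature along a candidate QP direction p, so that p^T(H + σI)p ≥ ε‖p‖². The shifted-identity form, the function name, and the tolerance ε are illustrative assumptions, not the modifications actually computed by the method described above.

    import numpy as np

    def min_curvature_shift(H, p, eps=1e-8):
        """Smallest sigma >= 0 with p'(H + sigma*I)p >= eps*||p||^2.

        Illustrative only: concurrent convexification computes minimal
        modifications during the active-set iteration; the shifted-identity
        form used here is just one simple choice of modification.
        """
        pHp = p @ H @ p            # curvature of the current model along p
        pp = p @ p                 # squared norm of the direction
        if pp == 0.0:
            return 0.0             # zero direction: nothing to correct
        return max(0.0, (eps * pp - pHp) / pp)

    # Example: an indefinite Hessian and a direction of negative curvature.
    H = np.array([[1.0, 0.0], [0.0, -2.0]])
    p = np.array([0.0, 1.0])
    sigma = min_curvature_shift(H, p)
    H_mod = H + sigma * np.eye(2)  # minimally shifted model Hessian
    print(sigma, p @ H_mod @ p)    # curvature along p is now >= eps*||p||^2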

A new exact second-derivative primal-dual SQP method (dcpdSQP) is formulated for large-scale nonconvex optimization. A convergence analysis is presented that establishes global convergence. Extensive numerical testing indicates that the performance of the proposed method is comparable to, or better than, that of conventional full convexification, while significantly reducing the number of factorizations required.
