Computing Publications


Equality and inequality constrained optimization algorithms with convergent stepsizes

Berc Rustem

Journal Article
Journal of Optimization Theory and Applications
Volume 76
Issue 3
pp. 429–453
1993
ISSN 0022-3239
Abstract

In this paper, we consider two algorithms for nonlinear equality and inequality constrained optimization. Both algorithms use stepsize strategies based on differentiable penalty functions and quadratic programming subproblems; the essential difference between them lies in the stepsize strategies employed. The objective function of the quadratic subproblem includes a linear term that depends on the penalty functions, and its quadratic term uses an approximate Hessian of the Lagrangian augmented by the penalty functions. In this approximation, the second-derivative terms arising from the constraints in the penalty functions can be ignored.
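For orientation only, the quadratic subproblem can be pictured in standard sequential quadratic programming notation; the symbols below (f for the objective, g and h for the equality and inequality constraints, H_k for the approximate Hessian, and the penalty-dependent linear term ℓ_k) are generic placeholders rather than the paper's own notation:

\[
\begin{aligned}
\min_{d}\quad & \nabla f(x_k)^{\top} d \;+\; \ell_k^{\top} d \;+\; \tfrac{1}{2}\, d^{\top} H_k\, d \\
\text{s.t.}\quad & g(x_k) + \nabla g(x_k)^{\top} d = 0,\\
& h(x_k) + \nabla h(x_k)^{\top} d \le 0.
\end{aligned}
\]

In this sketch, H_k approximates the Hessian of the Lagrangian augmented by the penalty functions, with the second-derivative terms contributed by the constraints inside the penalty functions dropped, and ℓ_k stands for the penalty-dependent linear contribution mentioned in the abstract.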

The penalty parameter is determined by a strategy, slightly different for each algorithm, that ensures both boundedness and a descent property. Boundedness follows because the condition defining the strategy is always satisfied for finite values of the parameter.
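As a purely illustrative sketch, and not the authors' specific rule, penalty-parameter strategies of this kind typically raise the parameter only when a descent test on the penalty function fails; for example, with \phi_c the differentiable penalty function, d_k the subproblem solution, and constants \rho \in (0,1), \beta > 1 (all assumed notation):

\[
c_{k+1} =
\begin{cases}
c_k, & \text{if } \phi_{c_k}'(x_k; d_k) \le -\rho\, d_k^{\top} H_k\, d_k,\\
\beta\, c_k, & \text{otherwise.}
\end{cases}
\]

Because a test of this form can be met for some finite value of the parameter, the generated sequence of parameters stays bounded, which mirrors the boundedness property described above.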

These properties are used to establish global convergence and the conditions under which unit stepsizes are achieved. The quadratic objective function and the stepsize strategy are also compatible, which ensures the consistency of the properties required for unit steps and the subsequent convergence rates.

