\title{A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization\footnotemark[2]}
\author{J. E. DENNIS, Jr.\footnotemark[3] \and MAHMOUD~EL-ALEM\footnotemark[4] \and Maria C. MACIEL\footnotemark[5]}
\begin{document}
\maketitle
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{Research was supported by DOE DE-FG005-86ER25017, CRPC CCR-9120008, AFOSR-F49620-9310212, and the REDI Foundation.}
\footnotetext[3]{Department of Computational and Applied Mathematics \& Center for Research on Parallel Computation, Rice University, P.~O. Box 1892, Houston, TX 77251.}
\footnotetext[4]{Department of Mathematics, Faculty of Science, Alexandria University, Alexandria, Egypt.}
\footnotetext[5]{Departamento de Matematica, Universidad Nacional del Sur, Avenida Alem 1253, 8000 Bahia Blanca, Argentina.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{abstract}
This work presents a global convergence theory for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints. The main result generalizes Powell's 1975 result for unconstrained trust-region algorithms. The trial step is characterized by very mild conditions on its normal and tangential components. The normal component need not be computed accurately. The theory requires a quasi-normal component to satisfy a fraction of Cauchy decrease condition on the quadratic model of the linearized constraints. The tangential component then must satisfy a fraction of Cauchy decrease condition on a quadratic model of the Lagrangian function in the translated tangent space of the constraints determined by the quasi-normal component. The Lagrange multiplier estimates and the Hessian estimates are assumed only to be bounded.
The other main characteristic of this class of algorithms is that the step is evaluated by using the augmented Lagrangian as a merit function, and the penalty parameter is updated using the El-Alem scheme. The properties of the step, together with the way the penalty parameter is chosen, are sufficient to establish global convergence. As an example, an algorithm is presented that can be viewed as a generalization of the Steihaug-Toint dogleg algorithm for the unconstrained case. It is based on a quadratic programming algorithm that uses a step in a quasi-normal direction to the tangent space of the constraints and then takes feasible conjugate reduced-gradient steps to solve the reduced quadratic program. This algorithm should cope quite well with large problems for which effective preconditioners are known.
\end{abstract}

\noindent{\bf Key Words:} Constrained Optimization, Global Convergence, Trust Regions, Equality Constraints, Nonlinear Programming, Conjugate Gradient, Inexact Newton Method.

\begin{AMS}
65K05, 49D37.
\end{AMS}

\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{J.~DENNIS, M.~EL-ALEM, AND M.~MACIEL}{A THEORY FOR GENERAL TRUST-REGION-BASED ALGORITHMS}
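For concreteness, the fraction of Cauchy decrease conditions mentioned in the abstract can be sketched as follows; the notation here is illustrative and is not taken from the paper itself. For the problem $\min f(x)$ subject to $C(x)=0$, at an iterate $x_k$ with trust-region radius $\delta_k$, requiring the quasi-normal component $s_k^n$ to satisfy a fraction of Cauchy decrease on the quadratic model of the linearized constraints means that, for some fixed $\sigma_1 \in (0,1]$ independent of $k$,
\[
\|C(x_k)\|^2 - \|C(x_k) + \nabla C(x_k)^T s_k^n\|^2
\;\ge\;
\sigma_1 \left[ \|C(x_k)\|^2 - \|C(x_k) + \nabla C(x_k)^T s_k^{cp}\|^2 \right],
\]
where $s_k^{cp}$ denotes the Cauchy point of that model, i.e., the minimizer of the model along the steepest-descent direction within the trust region $\|s\| \le \delta_k$. The tangential component is then required to achieve an analogous fractional decrease on a quadratic model of the Lagrangian, with the Cauchy point taken in the translated tangent space $\{ s : \nabla C(x_k)^T s = \nabla C(x_k)^T s_k^n \}$ determined by the quasi-normal component.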