doi: 10.3934/jimo.2019075

An adaptively regularized sequential quadratic programming method for equality constrained optimization

1. School of Mathematics, China University of Mining and Technology, Xuzhou, Jiangsu, China
2. School of Mathematical Sciences, Soochow University, Suzhou, Jiangsu, China

* Corresponding author: Songqiang Qiu

Received: June 2018. Revised: March 2019. Published: July 2019.

Fund Project: The first author is supported by the Fundamental Research Funds for the Central Universities (grant 2014QNA62).

In this paper, we devise an adaptively regularized SQP method for equality constrained optimization that is resilient to constraint degeneracy, with a relatively small departure from the classical SQP method. The main feature of our method is an adaptive choice of the regularization parameter, embedded in a trust-funnel-like algorithmic scheme. Unlike general regularized methods, which update the regularization parameter only after a regularized problem has been approximately solved, our method updates the regularization parameter at each iteration according to the infeasibility measure and the improvement promised by the trial step. The sequence of regularization parameters is not necessarily monotonically decreasing. The whole algorithm is globalized by a trust-funnel-like strategy, in which neither a penalty function nor a filter is needed. We establish global and fast local convergence under weak assumptions. Preliminary numerical results on a collection of degenerate problems are reported and are encouraging.

Citation: Songqiang Qiu, Zhongwen Chen. An adaptively regularized sequential quadratic programming method for equality constrained optimization. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019075
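As a rough illustration of the kind of step computation the abstract refers to, the following Python sketch solves a regularized KKT system for an equality constrained problem, with the regularization parameter tied to the current infeasibility measure. The specific update rule sigma = min(sigma_max, ||c(x)||), the helper names (f_grad, c_val, c_jac, hess_lag), and the omission of the trust-funnel acceptance tests are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of one regularized SQP step for  min f(x)  s.t.  c(x) = 0.
# The regularization parameter is chosen from the infeasibility ||c(x)||,
# which is an assumed heuristic in the spirit of the paper, not its rule.
import numpy as np

def regularized_sqp_step(x, y, f_grad, c_val, c_jac, hess_lag, sigma_max=1.0):
    """Solve the primal-dual regularized KKT system
        [ H        A^T ] [ dx ]   [ -(g + A^T y) ]
        [ A   -sigma*I ] [ dy ] = [      -c      ]
    where sigma > 0 helps keep the system nonsingular even when the
    constraint Jacobian A is rank deficient (degenerate constraints)."""
    g = f_grad(x)          # objective gradient
    c = c_val(x)           # constraint values c(x)
    A = c_jac(x)           # constraint Jacobian, shape (m, n)
    H = hess_lag(x, y)     # Hessian of the Lagrangian (or an approximation)
    m = c.size

    # Adaptive regularization: proportional to the infeasibility measure,
    # capped above and bounded away from zero (assumed heuristic).
    sigma = max(min(sigma_max, np.linalg.norm(c)), 1e-8)

    K = np.block([[H, A.T],
                  [A, -sigma * np.eye(m)]])
    rhs = -np.concatenate([g + A.T @ y, c])
    sol = np.linalg.solve(K, rhs)
    dx, dy = sol[:x.size], sol[x.size:]

    # Step acceptance (trust-funnel tests on feasibility and optimality)
    # is omitted here; a full method would accept or reject (dx, dy).
    return x + dx, y + dy, sigma
```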

Figure 1.  Performance compared to MINOS
Figure 2.  Performance compared to LOQO
Table 1.  Compared to MINOS (using MINOS' stopping rule).

                                    MINOS     New Alg.
Problems solved (<1500 evals.)      100       103
Robustness (<1500 evals.)           95.24%    98.10%
Average evals (<1500 evals.)        84.04     67.5
Median evals (<1500 evals.)         27        21
Table 2.  Compared to LOQO (using LOQO's stopping rule).

                                    LOQO      New Alg.
Problems solved (<1500 evals.)      78        96
Robustness (<1500 evals.)           74.30%    91.43%
Average evals (<1500 evals.)        26.6      62.1
Median evals (<1500 evals.)         20        21