Electronic Research Archive, December 2020, 28(4): 1573-1624. doi: 10.3934/era.2020115

A survey of gradient methods for solving nonlinear optimization

Predrag S. Stanimirović 1,*, Branislav Ivanov 2, Haifeng Ma 3 and Dijana Mosić 1

1. University of Niš, Faculty of Sciences and Mathematics, Višegradska 33, 18000 Niš, Serbia

2. Technical Faculty in Bor, University of Belgrade, Vojske Jugoslavije 12, 19210 Bor, Serbia

3. School of Mathematical Science, Harbin Normal University, Harbin 150025, China

* Corresponding author: Predrag S. Stanimirović

Received: October 2020. Revised: November 2020. Published: November 2020.

The paper surveys, classifies and investigates, both theoretically and numerically, the main classes of line search methods for unconstrained optimization. Quasi-Newton (QN) and conjugate gradient (CG) methods are considered as representative classes of effective numerical methods for solving large-scale unconstrained optimization problems. We investigate, classify and compare the main QN and CG methods in order to present a global overview of scientific advances in this field, and some of the most recent trends are also presented. A number of numerical experiments are performed with the aim of providing an experimental answer to the mutual numerical comparison of different QN and CG methods.
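As background for the survey, the generic line-search scheme generates iterates $x_{k+1} = x_k + t_k d_k$, where $d_k$ is a descent direction (the negative gradient for gradient descent, a QN or CG direction otherwise) and the step size $t_k$ is computed by an inexact line search such as Armijo backtracking [8]. The following minimal Python sketch illustrates this scheme; it is our own illustrative code (function names and parameter values are assumptions, not taken from the paper):

```python
import numpy as np

def backtracking_armijo(f, grad, x, d, beta=0.5, sigma=1e-4):
    """Backtracking line search enforcing the Armijo condition
    f(x + t d) <= f(x) + sigma * t * grad(x)^T d  (cf. [8])."""
    t = 1.0
    g_d = grad(x) @ d                      # directional derivative, must be < 0
    while f(x + t * d) > f(x) + sigma * t * g_d:
        t *= beta                          # shrink the trial step
    return t

def gradient_descent(f, grad, x0, tol=1e-6, max_iter=10000):
    """Generic line-search iteration x_{k+1} = x_k + t_k d_k with d_k = -g_k."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g                             # steepest descent direction
        t = backtracking_armijo(f, grad, x, d)
        x = x + t * d
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(gradient_descent(f, grad, np.zeros(2)))  # approaches A^{-1} b = [0.2, 0.4]
```

QN and CG methods differ from this sketch only in how the direction $d_k$ is built: QN methods replace $-g_k$ by $-B_k^{-1} g_k$ for an updated Hessian approximation $B_k$, while CG methods use $d_k = -g_k + \beta_k d_{k-1}$ for a scalar conjugacy parameter $\beta_k$.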

Citation: Predrag S. Stanimirović, Branislav Ivanov, Haifeng Ma, Dijana Mosić. A survey of gradient methods for solving nonlinear optimization. Electronic Research Archive, 2020, 28 (4) : 1573-1624. doi: 10.3934/era.2020115
References:
[1]

J. Abaffy, A new reprojection of the conjugate directions, Numer. Algebra Control Optim., 9 (2019), 157-171.  doi: 10.3934/naco.2019012.  Google Scholar

[2]

M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121-124.  doi: 10.1093/imanum/5.1.121.  Google Scholar

[3]

N. Andrei, An unconstrained optimization test functions collection, Adv. Model. Optim., 10 (2008), 147-161.   Google Scholar

[4]

N. Andrei, An acceleration of gradient descent algorithm with backtracking for unconstrained optimization, Numer. Algorithms, 42 (2006), 63-73.  doi: 10.1007/s11075-006-9023-9.  Google Scholar

[5]

N. Andrei, A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues, Numer. Algorithms, 77 (2018), 1273-1282.  doi: 10.1007/s11075-017-0362-5.  Google Scholar

[6]

N. Andrei, Relaxed gradient descent and a new gradient descent methods for unconstrained optimization, visited August 19, 2018. Google Scholar

[7]

N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, 1st edition, Springer International Publishing, 2020. doi: 10.1007/978-3-030-42950-8.  Google Scholar

[8]

L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16 (1966), 1-3.  doi: 10.2140/pjm.1966.16.1.  Google Scholar

[9]

S. Babaie-Kafaki and R. Ghanbari, The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices, European J. Oper. Res., 234 (2014), 625-630.  doi: 10.1016/j.ejor.2013.11.012.  Google Scholar

[10]

B. Baluch, Z. Salleh, A. Alhawarat and U. A. M. Roslan, A new modified three-term conjugate gradient method with sufficient descent property and its global convergence, J. Math., 2017 (2017), Article ID 2715854, 12 pages. doi: 10.1155/2017/2715854.  Google Scholar

[11]

J. Barzilai and J. M. Borwein, Two-point step size gradient methods, IMA J. Numer. Anal., 8 (1988), 141-148.  doi: 10.1093/imanum/8.1.141.  Google Scholar

[12]

M. Bastani and D. K. Salkuyeh, On the GSOR iteration method for image restoration, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020013.  Google Scholar

[13]

A. E. J. Bogaers, S. Kok, B. D. Reddy and T. Franz, An evaluation of quasi-Newton methods for application to FSI problems involving free surface flow and solid body contact, Computers & Structures, 173 (2016), 71-83.  doi: 10.1016/j.compstruc.2016.05.018.  Google Scholar

[14]

I. Bongartz, A. R. Conn, N. Gould and Ph. L. Toint, CUTE: Constrained and unconstrained testing environments, ACM Trans. Math. Softw., 21 (1995), 123-160.  doi: 10.1145/200979.201043.  Google Scholar

[15]

C. Brezinski, A classification of quasi-Newton methods, Numer. Algorithms, 33 (2003), 123-135.  doi: 10.1023/A:1025551602679.  Google Scholar

[16]

J. Cao and J. Wu, A conjugate gradient algorithm and its applications in image restoration, Appl. Numer. Math., 152 (2020), 243-252.  doi: 10.1016/j.apnum.2019.12.002.  Google Scholar

[17]

W. Cheng, A two-term PRP-based descent method, Numer. Funct. Anal. Optim., 28 (2007), 1217-1230.  doi: 10.1080/01630560701749524.  Google Scholar

[18]

Y. Cheng, Q. Mou, X. Pan and S. Yao, A sufficient descent conjugate gradient method and its global convergence, Optim. Methods Softw., 31 (2016), 577-590.  doi: 10.1080/10556788.2015.1124431.  Google Scholar

[19]

A. I. Cohen, Stepsize analysis for descent methods, J. Optim. Theory Appl., 33 (1981), 187-205.  doi: 10.1007/BF00935546.  Google Scholar

[20]

A. R. Conn, N. I. M. Gould and Ph. L. Toint, Convergence of quasi-Newton matrices generated by the symmetric rank one update, Math. Programming, 50 (1991), 177-195.  doi: 10.1007/BF01594934.  Google Scholar

[21]

Y.-H. Dai, Nonlinear Conjugate Gradient Methods, Wiley Encyclopedia of Operations Research and Management Science, (2011). doi: 10.1002/9780470400531.eorms0183.  Google Scholar

[22]

Y.-H. Dai, Alternate Step Gradient Method, Report AMSS–2001–041, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, 2001. Google Scholar

[23]

Y. Dai, A nonmonotone conjugate gradient algorithm for unconstrained optimization, J. Syst. Sci. Complex., 15 (2002), 139-145.   Google Scholar

[24]

Y.-H. Dai and R. Fletcher, On the Asymptotic Behaviour of some New Gradient Methods, Numerical Analysis Report, NA/212, Dept. of Math. University of Dundee, Scotland, UK, 2003. Google Scholar

[25]

Y.-H. Dai and C.-X. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search, SIAM J. Optim., 23 (2013), 296-320.  doi: 10.1137/100813026.  Google Scholar

[26]

Y.-H. Dai and L.-Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87-101.  doi: 10.1007/s002450010019.  Google Scholar

[27]

Y.-H. Dai and L.-Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), 1-10.  doi: 10.1093/imanum/22.1.1.  Google Scholar

[28]

Y.-H. Dai and Q. Ni, Testing different conjugate gradient methods for large-scale unconstrained optimization, J. Comput. Math., 21 (2003), 311-320.   Google Scholar

[29]

Z. Dai and F. Wen, Another improved Wei–Yao–Liu nonlinear conjugate gradient method with sufficient descent property, Appl. Math. Comput., 218 (2012), 7421-7430.  doi: 10.1016/j.amc.2011.12.091.  Google Scholar

[30]

Y.-H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177-182.  doi: 10.1137/S1052623497318992.  Google Scholar

[31]

Y.-H. Dai and Y. Yuan, An efficient hybrid conjugate gradient method for unconstrained optimization, Ann. Oper. Res., 103 (2001), 33-47.  doi: 10.1023/A:1012930416777.  Google Scholar

[32]

Y. H. Dai and Y. Yuan, A class of Globally Convergent Conjugate Gradient Methods, Research report ICM-98-030, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1998. Google Scholar

[33]

Y. Dai and Y. Yuan, A class of globally convergent conjugate gradient methods, Sci. China Ser. A, 46 (2003), 251-261.   Google Scholar

[34]

Y. Dai, J. Yuan and Y.-X. Yuan, Modified two-point step-size gradient methods for unconstrained optimization, Comput. Optim. Appl., 22 (2002), 103-109.  doi: 10.1023/A:1014838419611.  Google Scholar

[35]

Y.-H. Dai and Y.-X. Yuan, Alternate minimization gradient method, IMA J. Numer. Anal., 23 (2003), 377-393.  doi: 10.1093/imanum/23.3.377.  Google Scholar

[36]

Y.-H. Dai and Y.-X. Yuan, Analysis of monotone gradient methods, J. Ind. Manag. Optim., 1 (2005), 181-192.  doi: 10.3934/jimo.2005.1.181.  Google Scholar

[37]

Y.-H. Dai and H. Zhang, An Adaptive Two-Point Step-size gradient method, Research report, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 2001. Google Scholar

[38]

J. W. Daniel, The conjugate gradient method for linear and nonlinear operator equations, SIAM J. Numer. Anal., 4 (1967), 10-26.  doi: 10.1137/0704002.  Google Scholar

[39]

S. Delladji, M. Belloufi and B. Sellami, Behavior of the combination of PRP and HZ methods for unconstrained optimization, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020032.  Google Scholar

[40]

Y. Ding, E. Lushi and Q. Li, Investigation of quasi-Newton methods for unconstrained optimization, International Journal of Computer Application, 29 (2010), 48-58.   Google Scholar

[41]

S. S. Djordjević, Two modifications of the method of the multiplicative parameters in descent gradient methods, Appl. Math. Comput., 218 (2012), 8672-8683.  doi: 10.1016/j.amc.2012.02.029.  Google Scholar

[42]

S. S. Djordjević, Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods, Book Chapter, 2019. doi: 10.5772/intechopen.84374.  Google Scholar

[43]

E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201-213.  doi: 10.1007/s101070100263.  Google Scholar

[44]

M. S. Engelman, G. Strang and K.-J. Bathe, The application of quasi-Newton methods in fluid mechanics, Internat. J. Numer. Methods Engrg., 17 (1981), 707-718.  doi: 10.1002/nme.1620170505.  Google Scholar

[45]

D. K. Faddeev and I. S. Sominskiǐ, Collection of Problems on Higher Algebra, 2nd edition, Gostekhizdat, Moscow, 1949. Google Scholar

[46]

A. G. Farizawani, M. Puteh, Y. Marina and A. Rivaie, A review of artificial neural network learning rule based on multiple variant of conjugate gradient approaches, Journal of Physics: Conference Series, 1529 (2020). doi: 10.1088/1742-6596/1529/2/022040.  Google Scholar

[47]

R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, 1st edition, Wiley, New York, 1987. Google Scholar

[48]

R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149-154.  doi: 10.1093/comjnl/7.2.149.  Google Scholar

[49]

J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21-42.  doi: 10.1137/0802003.  Google Scholar

[50]

A. A. Goldstein, On steepest descent, J. SIAM Control Ser. A, 3 (1965), 147-151.  doi: 10.1137/0303013.  Google Scholar

[51]

L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal., 23 (1986), 707-716.  doi: 10.1137/0723046.  Google Scholar

[52]

L. Grippo, F. Lampariello and S. Lucidi, A class of nonmonotone stability methods in unconstrained optimization, Numer. Math., 59 (1991), 779-805.  doi: 10.1007/BF01385810.  Google Scholar

[53]

L. Grippo, F. Lampariello and S. Lucidi, A truncated Newton method with nonmonotone line search for unconstrained optimization, J. Optim. Theory Appl., 60 (1989), 401-419.  doi: 10.1007/BF00940345.  Google Scholar

[54]

L. Grippo and M. Sciandrone, Nonmonotone globalization techniques for the Barzilai-Borwein gradient method, Comput. Optim. Appl., 23 (2002), 143-169.  doi: 10.1023/A:1020587701058.  Google Scholar

[55]

W. W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), 170-192.  doi: 10.1137/030601880.  Google Scholar

[56]

W. W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent, ACM Trans. Math. Software, 32 (2006), 113-137.  doi: 10.1145/1132973.1132979.  Google Scholar

[57]

W. W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2 (2006), 35-58.   Google Scholar

[58]

L. Han and M. Neumann, Combining quasi-Newton and Cauchy directions, Int. J. Appl. Math., 12 (2003), 167-191.   Google Scholar

[59]

B. A. Hassan, A new type of quasi-Newton updating formulas based on the new quasi-Newton equation, Numer. Algebra Control Optim., 10 (2020), 227-235.  doi: 10.3934/naco.2019049.  Google Scholar

[60]

M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), 409-436.  doi: 10.6028/jres.049.044.  Google Scholar

[61]

Y. F. Hu and C. Storey, Global convergence result for conjugate gradient methods, J. Optim. Theory Appl., 71 (1991), 399-405.  doi: 10.1007/BF00939927.  Google Scholar

[62]

S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc., 44 (1974), 147-150.  doi: 10.1090/S0002-9939-1974-0336469-5.  Google Scholar

[63]

B. Ivanov, P. S. Stanimirović, G. V. Milovanović, S. Djordjević and I. Brajević, Accelerated multiple step-size methods for solving unconstrained optimization problems, Optimization Methods and Software, (2019). doi: 10.1080/10556788.2019.1653868.  Google Scholar

[64]

Z. Jia, Applications of the conjugate gradient method in optimal surface parameterizations, Int. J. Comput. Math., 87 (2010), 1032-1039.  doi: 10.1080/00207160802275951.  Google Scholar

[65]

J. Jian, L. Han and X. Jiang, A hybrid conjugate gradient method with descent property for unconstrained optimization, Appl. Math. Model., 39 (2015), 1281-1290.  doi: 10.1016/j.apm.2014.08.008.  Google Scholar

[66]

S. H. Khan, A Picard-Mann hybrid iterative process, Fixed Point Theory Appl., 2013 (2013), Article number: 69, 10 pp. doi: 10.1186/1687-1812-2013-69.  Google Scholar

[67]

Z. Khanaiah and G. Hmod, Novel hybrid algorithm in solving unconstrained optimizations problems, International Journal of Novel Research in Physics Chemistry & Mathematics, 4 (2017), 36-42.   Google Scholar

[68]

N. Kontrec and M. Petrović, Implementation of gradient methods for optimization of underage costs in aviation industry, University Thought, Publication in Natural Sciences, 6 (2016), 71-74.  doi: 10.5937/univtho6-10134.  Google Scholar

[69]

J. Kwon and P. Mertikopoulos, A continuous-time approach to online optimization, J. Dyn. Games, 4 (2017), 125-148.  doi: 10.3934/jdg.2017008.  Google Scholar

[70]

M. S. Lee, B. S. Goh, H. G. Harno and K. H. Lim, On a two phase approximate greatest descent method for nonlinear optimization with equality constraints, Numer. Algebra Control Optim., 8 (2018), 315-326.  doi: 10.3934/naco.2018020.  Google Scholar

[71]

D.-H. Li and M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, J. Comput. Appl. Math., 129 (2001), 15-35.  doi: 10.1016/S0377-0427(00)00540-9.  Google Scholar

[72]

X. Li and Q. Ruan, A modified PRP conjugate gradient algorithm with trust region for optimization problems, Numer. Funct. Anal. Optim., 32 (2011), 496-506.  doi: 10.1080/01630563.2011.554948.  Google Scholar

[73]

X. Li, C. Shen and L.-H. Zhang, A projected preconditioned conjugate gradient method for the linear response eigenvalue problem, Numer. Algebra Control Optim., 8 (2018), 389-412.  doi: 10.3934/naco.2018025.  Google Scholar

[74]

D.-H. Li and X.-L. Wang, A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations, Numer. Algebra Control Optim., 1 (2011), 71-82.  doi: 10.3934/naco.2011.1.71.  Google Scholar

[75]

K. H. Lim, H. H. Tan and H. G. Harno, Approximate greatest descent in neural network optimization, Numer. Algebra Control Optim., 8 (2018), 327-336.  doi: 10.3934/naco.2018021.  Google Scholar

[76]

J. Liu, S. Du and Y. Chen, A sufficient descent nonlinear conjugate gradient method for solving $M$-tensor equations, J. Comput. Appl. Math., 371 (2020), 112709, 11 pp. doi: 10.1016/j.cam.2019.112709.  Google Scholar

[77]

J. K. Liu and S. J. Li, A projection method for convex constrained monotone nonlinear equations with applications, Comput. Math. Appl., 70 (2015), 2442-2453.  doi: 10.1016/j.camwa.2015.09.014.  Google Scholar

[78]

Y. Liu and C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Optim. Theory Appl., 69 (1991), 129-137.  doi: 10.1007/BF00940464.  Google Scholar

[79]

Q. Liu and X. Zou, A risk minimization problem for finite horizon semi-Markov decision processes with loss rates, J. Dyn. Games, 5 (2018), 143-163.  doi: 10.3934/jdg.2018009.  Google Scholar

[80]

I. E. Livieris and P. Pintelas, A descent Dai-Liao conjugate gradient method based on a modified secant equation and its global convergence, ISRN Computational Mathematics, 2012 (2012), Article ID 435495. doi: 10.5402/2012/435495.  Google Scholar

[81]

M. Lotfi and S. M. Hosseini, An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation, J. Comput. Appl. Math., 371 (2020), 112708, 15 pp. doi: 10.1016/j.cam.2019.112708.  Google Scholar

[82]

Y.-Z. Luo, G.-J. Tang and L.-N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Applied Soft Computing, 8 (2008), 1068-1073.  doi: 10.1016/j.asoc.2007.05.013.  Google Scholar

[83]

W. R. Mann, Mean value methods in iterations, Proc. Amer. Math. Soc., 4 (1953), 506-510.  doi: 10.1090/S0002-9939-1953-0054846-3.  Google Scholar

[84]

M. Miladinović, P. Stanimirović and S. Miljković, Scalar correction method for solving large scale unconstrained minimization problems, J. Optim. Theory Appl., 151 (2011), 304-320.  doi: 10.1007/s10957-011-9864-9.  Google Scholar

[85]

S. K. Mishra and B. Ram, Introduction to Unconstrained Optimization with R, 1st edition, Springer Singapore, Springer Nature Singapore Pte Ltd., 2019. doi: 10.1007/978-981-15-0894-3.  Google Scholar

[86]

I. S. Mohammed, M. Mamat, I. Abdulkarim and F. S. Bt. Mohamad, A survey on recent modifications of conjugate gradient methods, Proceedings of the UniSZA Research Conference 2015 (URC 5), Universiti Sultan Zainal Abidin, 14–16 April 2015. Google Scholar

[87]

H. Mohammad, M. Y. Waziri and S. A. Santos, A brief survey of methods for solving nonlinear least-squares problems, Numer. Algebra Control Optim., 9 (2019), 1-13.  doi: 10.3934/naco.2019001.  Google Scholar

[88]

Y. Narushima and H. Yabe, A survey of sufficient descent conjugate gradient methods for unconstrained optimization, SUT J. Math., 50 (2014), 167-203.   Google Scholar

[89]

J. L. Nazareth, Conjugate-Gradient Methods, In: C. Floudas and P. Pardalos (eds), Encyclopedia of Optimization, 2nd edition, Springer, Boston, 2009. doi: 10.1007/978-0-387-74759-0.  Google Scholar

[90]

J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 1999. doi: 10.1007/b98874.  Google Scholar

[91]

W. F. H. W. Osman, M. A. H. Ibrahim and M. Mamat, Hybrid DFP-CG method for solving unconstrained optimization problems, Journal of Physics: Conference Series, 890 (2017), 012033. doi: 10.1088/1742-6596/890/1/012033.  Google Scholar

[92]

S. Panić, M. J. Petrović and M. Mihajlov-Carević, Initial improvement of the hybrid accelerated gradient descent process, Bull. Aust. Math. Soc., 98 (2018), 331-338.  doi: 10.1017/S0004972718000552.  Google Scholar

[93]

A. Perry, A modified conjugate gradient algorithm, Oper. Res., 26 (1978), 1073-1078.  doi: 10.1287/opre.26.6.1073.  Google Scholar

[94]

M. J. Petrović, An accelerated double step-size method in unconstrained optimization, Appl. Math. Comput., 250 (2015), 309-319.  doi: 10.1016/j.amc.2014.10.104.  Google Scholar

[95]

M. J. Petrović, N. Kontrec and S. Panić, Determination of accelerated factors in gradient descent iterations based on Taylor's series, University Thought, Publication in Natural Sciences, 7 (2017), 41-45.  doi: 10.5937/univtho7-14337.  Google Scholar

[96]

M. J. Petrović, V. Rakočević, N. Kontrec, S. Panić and D. Ilić, Hybridization of accelerated gradient descent method, Numer. Algorithms, 79 (2018), 769-786.  doi: 10.1007/s11075-017-0460-4.  Google Scholar

[97]

M. J. Petrović, V. Rakočević, D. Valjarević and D. Ilić, A note on hybridization process applied on transformed double step size model, Numer. Algorithms, 85 (2020), 449-465.  doi: 10.1007/s11075-019-00821-8.  Google Scholar

[98]

M. J. Petrović and P. S. Stanimirović, Accelerated double direction method for solving unconstrained optimization problems, Math. Probl. Eng., 2014 (2014), Article ID 965104, 8 pages. doi: 10.1155/2014/965104.  Google Scholar

[99]

M. J. Petrović, P. S. Stanimirović, N. Kontrec and J. Mladenović, Hybrid modification of accelerated double direction method, Math. Probl. Eng., 2018 (2018), Article ID 1523267, 8 pages. doi: 10.1155/2018/1523267.  Google Scholar

[100]

M. R. Peyghami, H. Ahmadzadeh and A. Fazli, A new class of efficient and globally convergent conjugate gradient methods in the Dai-Liao family, Optim. Methods Softw., 30 (2015), 843-863.  doi: 10.1080/10556788.2014.1001511.  Google Scholar

[101]

E. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pures Appl., 6 (1890), 145-210.   Google Scholar

[102]

E. Polak and G. Ribière, Note sur la convergence des méthodes de directions conjuguées, Rev. Française Informat. Recherche Opérationnelle, 3 (1969), 35-43.  Google Scholar

[103]

B. T. Polyak, The conjugate gradient method in extreme problems, USSR Comput. Math. and Math. Phys., 9 (1969), 94-112.  doi: 10.1016/0041-5553(69)90035-4.  Google Scholar

[104]

M. J. D. Powell, Algorithms for nonlinear constraints that use Lagrangian functions, Math. Programming, 14 (1978), 224-248.  doi: 10.1007/BF01588967.  Google Scholar

[105]

M. Raydan, On the Barzilai and Borwein choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), 321-326.  doi: 10.1093/imanum/13.3.321.  Google Scholar

[106]

M. Raydan, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), 26-33.  doi: 10.1137/S1052623494266365.  Google Scholar

[107]

M. Raydan and B. F. Svaiter, Relaxed steepest descent and Cauchy-Barzilai-Borwein method, Comput. Optim. Appl., 21 (2002), 155-167.  doi: 10.1023/A:1013708715892.  Google Scholar

[108]

N. Shapiee, M. Rivaie, M. Mamat and P. L. Ghazali, A new family of conjugate gradient coefficient with application, International Journal of Engineering & Technology, 7 (2018), 36-43.  doi: 10.14419/ijet.v7i3.28.20962.  Google Scholar

[109]

Z.-J. Shi, Convergence of line search methods for unconstrained optimization, Appl. Math. Comput., 157 (2004), 393-405.  doi: 10.1016/j.amc.2003.08.058.  Google Scholar

[110]

S. Shoid, N. Shapiee, N. Zull, N. H. A. Ghani, N. S. Mohamed, M. Rivaie and M. Mamat, The application of new conjugate gradient methods in estimating data, International Journal of Engineering & Technology, 7 (2018), 25-27.  doi: 10.14419/ijet.v7i2.14.11147.  Google Scholar

[111]

H. S. Sim, W. J. Leong, C. Y. Chen and S. N. I. Ibrahim, Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization, Numer. Algebra Control Optim., 8 (2018), 377-387.  doi: 10.3934/naco.2018024.  Google Scholar

[112]

P. S. Stanimirović, B. Ivanov, S. Djordjević and I. Brajević, New hybrid conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno conjugate gradient methods, J. Optim. Theory Appl., 178 (2018), 860-884.  doi: 10.1007/s10957-018-1324-3.  Google Scholar

[113]

P. S. Stanimirović, V. N. Katsikis and D. Pappas, Computation of $\{2, 4\}$ and $\{2, 3\}$-inverses based on rank-one updates, Linear Multilinear Algebra, 66 (2018), 147-166.  doi: 10.1080/03081087.2017.1290042.  Google Scholar

[114]

P. S. Stanimirović, V. N. Katsikis and D. Pappas, Computing $\{2, 4\}$ and $\{2, 3\}$-inverses by using the Sherman-Morrison formula, Appl. Math. Comput., 273 (2015), 584-603.  doi: 10.1016/j.amc.2015.10.023.  Google Scholar

[115]

P. S. Stanimirović and M. B. Miladinović, Accelerated gradient descent methods with line search, Numer. Algorithms, 54 (2010), 503-520.  doi: 10.1007/s11075-009-9350-8.  Google Scholar

[116]

P. S. Stanimirović, G. V. Milovanović, M. J. Petrović and N. Z. Kontrec, A transformation of accelerated double step-size method for unconstrained optimization, Math. Probl. Eng., 2015 (2015), Article ID 283679, 8 pages. doi: 10.1155/2015/283679.  Google Scholar

[117]

W. Sun, J. Han and J. Sun, Global convergence of nonmonotone descent methods for unconstrained optimization problems, J. Comp. Appl. Math., 146 (2002), 89-98.  doi: 10.1016/S0377-0427(02)00420-X.  Google Scholar

[118]

Z. Sun and T. Sugie, Identification of Hessian matrix in distributed gradient based multi agent coordination control systems, Numer. Algebra Control Optim., 9 (2019), 297-318.  doi: 10.3934/naco.2019020.  Google Scholar

[119]

W. Sun and Y.-X. Yuan, Optimization Theory and Methods: Nonlinear Programming, 1st edition, Springer, New York, 2006. doi: 10.1007/b106451.  Google Scholar

[120]

Ph. L. Toint, Non-monotone trust-region algorithm for nonlinear optimization subject to convex constraints, Math. Prog., 77 (1997), 69-94.  doi: 10.1007/BF02614518.  Google Scholar

[121]

D. Touati-Ahmed and C. Storey, Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64 (1990), 379-397.  doi: 10.1007/BF00939455.  Google Scholar

[122]

M. N. Vrahatis, G. S. Androulakis, J. N. Lambrinos and G. D. Magoulas, A class of gradient unconstrained minimization algorithms with adaptive step-size, J. Comp. Appl. Math., 114 (2000), 367-386.  doi: 10.1016/S0377-0427(99)00276-9.  Google Scholar

[123]

Z. Wei, G. Li and L. Qi, New quasi-Newton methods for unconstrained optimization problems, Appl. Math. Comput., 175 (2006), 1156-1188.  doi: 10.1016/j.amc.2005.08.027.  Google Scholar

[124]

Z. Wei, G. Li and L. Qi, New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems, Appl. Math. Comput., 179 (2006), 407-430.  doi: 10.1016/j.amc.2005.11.150.  Google Scholar

[125]

Z. Wei, S. Yao and L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput., 183 (2006), 1341-1350.  doi: 10.1016/j.amc.2006.05.150.  Google Scholar

[126]

P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 11 (1969), 226-235.  doi: 10.1137/1011036.  Google Scholar

[127]

M. Xi, W. Sun and J. Chen, Survey of derivative-free optimization, Numer. Algebra Control Optim., 10 (2020), 537-555.  doi: 10.3934/naco.2020050.  Google Scholar

[128]

Y. Xiao and H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310-319.  doi: 10.1016/j.jmaa.2013.04.017.  Google Scholar

[129]

H. Yabe and M. Takano, Global convergence properties of nonlinear conjugate gradient methods with modified secant condition, Comput. Optim. Appl., 28 (2004), 203-225.  doi: 10.1023/B:COAP.0000026885.81997.88.  Google Scholar

[130]

X. Yang, Z. Luo and X. Dai, A global convergence of LS-CD hybrid conjugate gradient method, Adv. Numer. Anal., 2013 (2013), Article ID 517452, 5 pp. doi: 10.1155/2013/517452.  Google Scholar

[131]

S. Yao, X. Lu and Z. Wei, A conjugate gradient method with global convergence for large-scale unconstrained optimization problems, J. Appl. Math., 2013 (2013), Article ID 730454, 9 pp. doi: 10.1155/2013/730454.  Google Scholar

[132]

G. Yuan and Z. Wei, Convergence analysis of a modified BFGS method on convex minimizations, Comp. Optim. Appl., 47 (2010), 237-255.  doi: 10.1007/s10589-008-9219-0.  Google Scholar

[133]

S. Yao, Z. Wei and H. Huang, A note about WYL's conjugate gradient method and its applications, Appl. Math. Comput., 191 (2007), 381-388.  doi: 10.1016/j.amc.2007.02.094.  Google Scholar

[134]

Y. Yuan, A new stepsize for the steepest descent method, J. Comput. Math., 24 (2006), 149-156.   Google Scholar

[135]

G. Yuan, T. Li and W. Hu, A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems, Appl. Numer. Math., 147 (2020), 129-141.  doi: 10.1016/j.apnum.2019.08.022.  Google Scholar

[136]

G. Yuan, T. Li and W. Hu, A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration, J. Inequal. Appl., 2019 (2019), Article number: 247, 25 pp. doi: 10.1186/s13660-019-2192-6.  Google Scholar

[137]

G. Yuan, Z. Wei and Y. Wu, Modified limited memory BFGS method with nonmonotone line search for unconstrained optimization, J. Korean Math. Soc., 47 (2010), 767-788.  doi: 10.4134/JKMS.2010.47.4.767.  Google Scholar

[138]

L. Zhang, An improved Wei-Yao-Liu nonlinear conjugate gradient method for optimization computation, Appl. Math. Comput., 215 (2009), 2269-2274.  doi: 10.1016/j.amc.2009.08.016.  Google Scholar

[139]

Y. Zheng and B. Zheng, Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems, J. Optim. Theory Appl., 175 (2017), 502-509.  doi: 10.1007/s10957-017-1140-1.  Google Scholar

[140]

H. Zhong, G. Chen and X. Guo, Semi-local convergence of the Newton-HSS method under the center Lipschitz condition, Numer. Algebra Control Optim., 9 (2019), 85-99.  doi: 10.3934/naco.2019007.  Google Scholar

[141]

W. Zhou and L. Zhang, A nonlinear conjugate gradient method based on the MBFGS secant condition, Optim. Methods Softw., 21 (2006), 707-714.  doi: 10.1080/10556780500137041.  Google Scholar

[142]

G. Zoutendijk, Nonlinear programming, computational methods, In: J. Abadie (ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, (1970), 37-86.  Google Scholar

[143]

N. Zull, N. 'Aini, M. Rivaie and M. Mamat, A new gradient method for solving linear regression model, International Journal of Recent Technology and Engineering, 7 (2019), 624-630.   Google Scholar

show all references

References:
[1]

J. Abaffy, A new reprojection of the conjugate directions, Numer. Algebra Control Optim., 9 (2019), 157-171.  doi: 10.3934/naco.2019012.  Google Scholar

[2]

M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search, IMA J. Numer. Anal., 5 (1985), 121-124.  doi: 10.1093/imanum/5.1.121.  Google Scholar

[3]

N. Andrei, An unconstrained optimization test functions collection, Adv. Model. Optim., 10 (2008), 147-161.   Google Scholar

[4]

N. Andrei, An acceleration of gradient descent algorithm with backtracking for unconstrained optimization, Numer. Algorithms, 42 (2006), 63-73.  doi: 10.1007/s11075-006-9023-9.  Google Scholar

[5]

N. Andrei, A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues, Numer. Algorithms, 77 (2018), 1273-1282.  doi: 10.1007/s11075-017-0362-5.  Google Scholar

[6]

N. Andrei, Relaxed gradient descent and a new gradient descent methods for unconstrained optimization, Visited August 19, (2018). Google Scholar

[7]

N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, 1st edition, Springer International Publishing, 2020. doi: 10.1007/978-3-030-42950-8.  Google Scholar

[8]

L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16 (1966), 1-3.  doi: 10.2140/pjm.1966.16.1.  Google Scholar

[9]

S. Babaie-Kafaki and R. Ghanbari, The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices, European J. Oper. Res., 234 (2014), 625-630.  doi: 10.1016/j.ejor.2013.11.012.  Google Scholar

[10]

B. Baluch, Z. Salleh, A. Alhawarat and U. A. M. Roslan, A new modified three-term conjugate gradient method with sufficient descent property and its global convergence, J. Math., 2017 (2017), Article ID 2715854, 12 pages. doi: 10.1155/2017/2715854.  Google Scholar

[11]

J. Barzilai and J. M. Borwein, Two-point step-size gradient method, IMA J. Numer. Anal., 8 (1988), 141-148.  doi: 10.1093/imanum/8.1.141.  Google Scholar

[12]

M. Bastani and D. K. Salkuyeh, On the GSOR iteration method for image restoration, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020013.  Google Scholar

[13]

A. E. J. BogaersS. KokB. D. Reddy and T. Franz, An evaluation of quasi-Newton methods for application to FSI problems involving free surface flow and solid body contact, Computers & Structures, 173 (2016), 71-83.  doi: 10.1016/j.compstruc.2016.05.018.  Google Scholar

[14]

I. BongartzA. R. ConnN. Gould and Ph. L. Toint, CUTE: Constrained and unconstrained testing environments, ACM Trans. Math. Softw., 21 (1995), 123-160.  doi: 10.1145/200979.201043.  Google Scholar

[15]

C. Brezinski, A classification of quasi-Newton methods, Numer. Algorithms, 33 (2003), 123-135.  doi: 10.1023/A:1025551602679.  Google Scholar

[16]

J. Cao and J. Wu, A conjugate gradient algorithm and its applications in image restoration, Appl. Numer. Math., 152 (2020), 243-252.  doi: 10.1016/j.apnum.2019.12.002.  Google Scholar

[17]

W. Cheng, A two-term PRP-based descent method, Numer. Funct. Anal. Optim., 28 (2007), 1217-1230.  doi: 10.1080/01630560701749524.  Google Scholar

[18]

Y. ChengQ. MouX. Pan and S. Yao, A sufficient descent conjugate gradient method and its global convergence, Optim. Methods Softw., 31 (2016), 577-590.  doi: 10.1080/10556788.2015.1124431.  Google Scholar

[19]

A. I. Cohen, Stepsize analysis for descent methods, J. Optim. Theory Appl., 33 (1981), 187-205.  doi: 10.1007/BF00935546.  Google Scholar

[20]

A. R. ConnN. I. M. Gould and Ph. L. Toint, Convergence of quasi-Newton matrices generated by the symmetric rank one update, Math. Programming, 50 (1991), 177-195.  doi: 10.1007/BF01594934.  Google Scholar

[21]

Y.-H. Dai, Nonlinear Conjugate Gradient Methods, Wiley Encyclopedia of Operations Research and Management Science, (2011). doi: 10.1002/9780470400531.eorms0183.  Google Scholar

[22]

Y.-H. Dai, Alternate Step Gradient Method, Report AMSS–2001–041, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, 2001. Google Scholar

[23]

Y. Dai, A nonmonotone conjugate gradient algorithm for unconstrained optimization, J. Syst. Sci. Complex., 15 (2002), 139-145.   Google Scholar

[24]

Y.-H. Dai and R. Fletcher, On the Asymptotic Behaviour of some New Gradient Methods, Numerical Analysis Report, NA/212, Dept. of Math. University of Dundee, Scotland, UK, 2003. Google Scholar

[25]

Y.-H. Dai and C.-X. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved wolfe line search, SIAM. J. Optim., 23 (2013), 296-320.  doi: 10.1137/100813026.  Google Scholar

[26]

Y.-H. Dai and L.-Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87-101.  doi: 10.1007/s002450010019.  Google Scholar

[27]

Y.-H. Dai and L.-Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), 1-10.  doi: 10.1093/imanum/22.1.1.  Google Scholar

[28]

Y.-H. Dai and Q. Ni, Testing different conjugate gradient methods for large-scale unconstrained optimization, J. Comput. Math., 21 (2003), 311-320.   Google Scholar

[29]

Z. Dai and F. Wen, Another improved Wei–Yao–Liu nonlinear conjugate gradient method with sufficient descent property, Appl. Math. Comput., 218 (2012), 7421-7430.  doi: 10.1016/j.amc.2011.12.091.  Google Scholar

[30]

Y.-H. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177-182.  doi: 10.1137/S1052623497318992.  Google Scholar

[31]

Y.-H. Dai and Y. Yuan, An efficient hybrid conjugate gradient method for unconstrained optimization, Ann. Oper. Res., 103 (2001), 33-47.  doi: 10.1023/A:1012930416777.  Google Scholar

[32]

Y. H. Dai and Y. Yuan, A class of Globally Convergent Conjugate Gradient Methods, Research report ICM-98-030, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1998. Google Scholar

[33]

Y. Dai and Y. Yuan, A class of globally convergent conjugate gradient methods, Sci. China Ser. A, 46 (2003), 251-261.   Google Scholar

[34]

Y. DaiJ. Yuan and Y.-X. Yuan, Modified two-point step-size gradient methods for unconstrained optimization, Comput. Optim. Appl., 22 (2002), 103-109.  doi: 10.1023/A:1014838419611.  Google Scholar

[35]

Y.-H. Dai and Y.-X. Yuan, Alternate minimization gradient method, IMA J. Numer. Anal., 23 (2003), 377-393.  doi: 10.1093/imanum/23.3.377.  Google Scholar

[36]

Y.-H. Dai and Y.-X. Yuan, Analysis of monotone gradient methods, J. Ind. Manag. Optim., 1 (2005), 181-192.  doi: 10.3934/jimo.2005.1.181.  Google Scholar

[37]

Y.-H. Dai and H. Zhang, An Adaptive Two-Point Step-size gradient method, Research report, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 2001. Google Scholar

[38]

J. W. Daniel, The conjugate gradient method for linear and nonlinear operator equations, SIAM J. Numer. Anal., 4 (1967), 10-26.  doi: 10.1137/0704002.  Google Scholar

[39]

S. Delladji, M. Belloufi and B. Sellami, Behavior of the combination of PRP and HZ methods for unconstrained optimization, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020032.  Google Scholar

[40]

Y. DingE. Lushi and Q. Li, Investigation of quasi-Newton methods for unconstrained optimization, International Journal of Computer Application, 29 (2010), 48-58.   Google Scholar

[41]

S. S. Djordjević, Two modifications of the method of the multiplicative parameters in descent gradient methods, Appl. Math, Comput., 218 (2012), 8672-8683.  doi: 10.1016/j.amc.2012.02.029.  Google Scholar

[42]

S. S. Djordjević, Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods, Book Chapter, 2019. doi: 10.5772/intechopen.84374.  Google Scholar

[43]

E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201-213.  doi: 10.1007/s101070100263.  Google Scholar

[44]

M. S. EngelmanG. Strang and K.-J. Bathe, The application of quasi-Nnewton methods in fluid mechanics, Internat. J. Numer. Methods Engrg., 17 (1981), 707-718.  doi: 10.1002/nme.1620170505.  Google Scholar

[45]

D. K. Faddeev and I. S. Sominskiǐ, Collection of Problems on Higher Algebra. Gostekhizdat, 2nd edition, Moscow, 1949. Google Scholar

[46]

A. G. Farizawani, M. Puteh, Y. Marina and A. Rivaie, A review of artificial neural network learning rule based on multiple variant of conjugate gradient approaches, Journal of Physics: Conference Series, 1529 (2020). doi: 10.1088/1742-6596/1529/2/022040.  Google Scholar

[47]

R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, 1st edition, Wiley, New York, 1987. Google Scholar

[48]

R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149-154.  doi: 10.1093/comjnl/7.2.149.  Google Scholar

[49]

J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21-42.  doi: 10.1137/0802003.  Google Scholar

[50]

A. A. Goldstein, On steepest descent, J. SIAM Control Ser. A, 3 (1965), 147-151.  doi: 10.1137/0303013.  Google Scholar

[51]

L. GrippoF. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. ANAL., 23 (1986), 707-716.  doi: 10.1137/0723046.  Google Scholar

[52]

L. GrippoF. Lampariello and S. Lucidi, A class of nonmonotone stability methods in unconstrained optimization, Numer. Math., 59 (1991), 779-805.  doi: 10.1007/BF01385810.  Google Scholar

[53]

L. GrippoF. Lampariello and S. Lucidi, A truncated Newton method with nonmonotone line search for unconstrained optimization, J. Optim. Theory Appl., 60 (1989), 401-419.  doi: 10.1007/BF00940345.  Google Scholar

[54]

L. Grippo and M. Sciandrone, Nonmonotone globalization techniques for the Barzilai-Borwein gradient method, Comput. Optim. Appl., 23 (2002), 143-169.  doi: 10.1023/A:1020587701058.  Google Scholar

[55]

W. W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), 170-192.  doi: 10.1137/030601880.  Google Scholar

[56]

W. W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent, ACM Trans. Math. Software, 32 (2006), 113-137.  doi: 10.1145/1132973.1132979.  Google Scholar

[57]

W. W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2 (2006), 35-58.   Google Scholar

[58]

L. Han and M. Neumann, Combining quasi-Newton and Cauchy directions, Int. J. Appl. Math., 12 (2003), 167-191.   Google Scholar

[59]

B. A. Hassan, A new type of quasi-Newton updating formulas based on the new quasi-Newton equation, Numer. Algebra Control Optim., 10 (2020), 227-235.  doi: 10.3934/naco.2019049.  Google Scholar

[60]

M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Standards, 49 (1952), 409-436.  doi: 10.6028/jres.049.044.  Google Scholar

[61]

Y. F. Hu and C. Storey, Global convergence result for conjugate gradient methods, J. Optim. Theory Appl., 71 (1991), 399-405.  doi: 10.1007/BF00939927.  Google Scholar

[62]

S. Ishikawa, Fixed points by a new iteration method, Proc. Am. Math. Soc., 44 (1974), 147-150.  doi: 10.1090/S0002-9939-1974-0336469-5.  Google Scholar

[63]

B. Ivanov, P. S. Stanimirović, G. V. Milovanović, S. Djordjević and I. Brajević, Accelerated multiple step-size methods for solving unconstrained optimization problems, Optimization Methods and Software, (2019). doi: 10.1080/10556788.2019.1653868.  Google Scholar

[64]

Z. Jia, Applications of the conjugate gradient method in optimal surface parameterizations, Int. J. Comput. Math., 87 (2010), 1032-1039.  doi: 10.1080/00207160802275951.  Google Scholar

[65]

J. JianL. Han and X. Jiang, A hybrid conjugate gradient method with descent property for unconstrained optimization, Appl. Math. Model., 39 (2015), 1281-1290.  doi: 10.1016/j.apm.2014.08.008.  Google Scholar

[66]

S. H. Khan, A Picard-Mann hybrid iterative process, Fixed Point Theory Appl., 2013 (2013), Article number: 69, 10 pp. doi: 10.1186/1687-1812-2013-69.  Google Scholar

[67]

Z. Khanaiah and G. Hmod, Novel hybrid algorithm in solving unconstrained optimizations problems, International Journal of Novel Research in Physics Chemistry & Mathematics, 4 (2017), 36-42.   Google Scholar

[68]

N. Kontrec and M. Petrović, Implementation of gradient methods for optimization of underage costs in aviation industry, University Thought, Publication in Natural Sciences, 6 (2016), 71-74.  doi: 10.5937/univtho6-10134.  Google Scholar

[69]

J. Kwon and P. Mertikopoulos, A continuous-time approach to online optimization, J. Dyn. Games, 4 (2017), 125-148.  doi: 10.3934/jdg.2017008.  Google Scholar

[70]

M. S. LeeB. S. GohH. G. Harno and K. H. Lim, On a two phase approximate greatest descent method for nonlinear optimization with equality constraints, Numer. Algebra Control Optim., 8 (2018), 315-326.  doi: 10.3934/naco.2018020.  Google Scholar

[71]

D.-H. Li and M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, J. Comput. Appl. Math., 129 (2001), 15-35.  doi: 10.1016/S0377-0427(00)00540-9.  Google Scholar

[72]

X. Li and Q. Ruan, A modified PRP conjugate gradient algorithm with trust region for optimization problems, Numer. Funct. Anal. Optim., 32 (2011), 496-506.  doi: 10.1080/01630563.2011.554948.  Google Scholar

[73]

X. LiC. Shen and L.-H. Zhang, A projected preconditioned conjugate gradient method for the linear response eigenvalue problem, Numer. Algebra Control Optim., 8 (2018), 389-412.  doi: 10.3934/naco.2018025.  Google Scholar

[74]

D.-H. Li and X.-L. Wang, A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations, Numer. Algebra Control Optim., 1 (2011), 71-82.  doi: 10.3934/naco.2011.1.71.  Google Scholar

[75]

K. H. LimH. H. Tan and H. G. Harno, Approximate greatest descent in neural network optimization, Numer. Algebra Control Optim., 8 (2018), 327-336.  doi: 10.3934/naco.2018021.  Google Scholar

[76]

J. Liu, S. Du and Y. Chen, A sufficient descent nonlinear conjugate gradient method for solving $M$-tensor equations, J. Comput. Appl. Math., 371 (2020), 112709, 11 pp. doi: 10.1016/j.cam.2019.112709.  Google Scholar

[77]

J. K. Liu and S. J. Li, A projection method for convex constrained monotone nonlinear equations with applications, Comput. Math. Appl., 70 (2015), 2442-2453.  doi: 10.1016/j.camwa.2015.09.014.  Google Scholar

[78]

Y. Liu and C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Optim. Theory Appl., 69 (1991), 129-137.  doi: 10.1007/BF00940464.  Google Scholar

[79]

Q. Liu and X. Zou, A risk minimization problem for finite horizon semi-Markov decision processes with loss rates, J. Dyn. Games, 5 (2018), 143-163.  doi: 10.3934/jdg.2018009.  Google Scholar

[80]

I. E. Livieris and P. Pintelas, A descent Dai-Liao conjugate gradient method based on a modified secant equation and its global convergence, ISRN Computational Mathematics, 2012 (2012), Article ID 435495. doi: 10.5402/2012/435495.  Google Scholar

[81]

M. Lotfi and S. M. Hosseini, An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation, J. Comput. Appl. Math., 371 (2020), 112708, 15 pp. doi: 10.1016/j.cam.2019.112708.  Google Scholar

[82]

Y.-Z. LuoG.-J. Tang and L.-N. Zhou, Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method, Applied Soft Computing, 8 (2008), 1068-1073.  doi: 10.1016/j.asoc.2007.05.013.  Google Scholar

[83]

W. R. Mann, Mean value methods in iterations, Proc. Amer. Math. Soc., 4 (1953), 506-510.  doi: 10.1090/S0002-9939-1953-0054846-3.  Google Scholar

[84]

M. MiladinovićP. Stanimirović and S. Miljković, Scalar correction method for solving large scale unconstrained minimization problems, J. Optim. Theory Appl., 151 (2011), 304-320.  doi: 10.1007/s10957-011-9864-9.  Google Scholar

[85]

S. K. Mishra and B. Ram, Introduction to Unconstrained Optimization with R, 1st edition, Springer Singapore, Springer Nature Singapore Pte Ltd., 2019. doi: 10.1007/978-981-15-0894-3.  Google Scholar

[86]

I. S. Mohammed, M. Mamat, I. Abdulkarim and F. S. Bt. Mohamad, A survey on recent modifications of conjugate gradient methods, Proceedings of the UniSZA Research Conference 2015 (URC 5), Universiti Sultan Zainal Abidin, 14–16 April 2015. Google Scholar

[87]

H. MohammadM. Y. Waziri and S. A. Santos, A brief survey of methods for solving nonlinear least-squares problems, Numer. Algebra Control Optim., 9 (2019), 1-13.  doi: 10.3934/naco.2019001.  Google Scholar

[88]

Y. Narushima and H. Yabe, A survey of sufficient descent conjugate gradient methods for unconstrained optimization, SUT J. Math., 50 (2014), 167-203.   Google Scholar

[89]

J. L. Nazareth, Conjugate-Gradient Methods, In: C. Floudas and P. Pardalos (eds), Encyclopedia of Optimization, 2$^nd$ edition, Springer, Boston, 2009. doi: 10.1007/978-0-387-74759-0.  Google Scholar

[90]

J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 1999. doi: 10.1007/b98874.  Google Scholar

[91]

W. F. H. W. Osman, M. A. H. Ibrahim and M. Mamat, Hybrid DFP-CG method for solving unconstrained optimization problems, Journal of Physics: Conf. Series, 890 (2017), 012033. doi: 10.1088/1742-6596/890/1/012033.  Google Scholar

[92]

S. PanićM. J. Petrović and M. Mihajlov-Carević, Initial improvement of the hybrid accelerated gradient descent process, Bull. Aust. Math. Soc., 98 (2018), 331-338.  doi: 10.1017/S0004972718000552.  Google Scholar

[93]

A. Perry, A modified conjugate gradient algorithm, Oper. Res., 26 (1978), 1073-1078.  doi: 10.1287/opre.26.6.1073.  Google Scholar

[94]

M. J. Petrović, An accelerated double step-size method in unconstrained optimization, Applied Math. Comput., 250 (2015), 309-319.  doi: 10.1016/j.amc.2014.10.104.  Google Scholar

[95]

M. J. PetrovićN. Kontrec and S. Panić, Determination of accelerated factors in gradient descent iterations based on Taylor's series, University Thought, Publication in Natural Sciences, 7 (2017), 41-45.  doi: 10.5937/univtho7-14337.  Google Scholar

[96]

M. J. PetrovićV. RakočevićN. KontrecS. Panić and D. Ilić, Hybridization of accelerated gradient descent method, Numer. Algorithms, 79 (2018), 769-786.  doi: 10.1007/s11075-017-0460-4.  Google Scholar

[97]

M. J. PetrovićV. RakočevićD. Valjarević and D. Ilić, A note on hybridization process applied on transformed double step size model, Numerical Algorithms, 85 (2020), 449-465.  doi: 10.1007/s11075-019-00821-8.  Google Scholar

[98]

M. J. Petrović and P. S. Stanimirović, Accelerated double direction method for solving unconstrained optimization problems, Math. Probl. Eng., 2014 (2014), Article ID 965104, 8 pages. doi: 10.1155/2014/965104.  Google Scholar

[99]

M. J. Petrović, P. S. Stanimirović, N. Kontrec and J. Mladenović, Hybrid modification of accelerated double direction method, Math. Probl. Eng., 2018 (2018), Article ID 1523267, 8 pages. doi: 10.1155/2018/1523267.  Google Scholar

[100]

M. R. PeyghamiH. Ahmadzadeh and A. Fazli, A new class of efficient and globally convergent conjugate gradient methods in the Dai-Liao family, Optim. Methods Softw., 30 (2015), 843-863.  doi: 10.1080/10556788.2014.1001511.  Google Scholar

[101]

E. Picard, Memoire sur la theorie des equations aux derivees partielles et la methode des approximations successives, J. Math. Pures Appl., 6 (1890), 145-210.   Google Scholar

[102]

E. Polak and G. Ribière, Note sur la convergence des méthodes de directions conjuguées, Rev. Française Imformat Recherche Opértionelle, 3 (1969), 35–43.  Google Scholar

[103]

B. T. Polyak, The conjugate gradient method in extreme problems, USSR Comput. Math. and Math. Phys., 9 (1969), 94-112.  doi: 10.1016/0041-5553(69)90035-4.  Google Scholar

[104]

M. J. D. Powell, Algorithms for nonlinear constraints that use Lagrangian functions, Math. Programming, 14 (1978), 224-248.  doi: 10.1007/BF01588967.  Google Scholar

[105]

M. Raydan, On the Barzilai and Borwein choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), 321-326.  doi: 10.1093/imanum/13.3.321.  Google Scholar

[106]

M. Raydan, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), 26-33.  doi: 10.1137/S1052623494266365.  Google Scholar

[107]

M. Raydan and B. F. Svaiter, Relaxed steepest descent and Cauchy-Barzilai-Borwein method, Comput. Optim. Appl., 21 (2002), 155-167.  doi: 10.1023/A:1013708715892.  Google Scholar

[108]

N. ShapieeM. RivaieM. Mamat and P. L. Ghazali, A new family of conjugate gradient coefficient with application, International Journal of Engineering & Technology, 7 (2018), 36-43.  doi: 10.14419/ijet.v7i3.28.20962.  Google Scholar

[109]

Z.-J. Shi, Convergence of line search methods for unconstrained optimization, Appl. Math. Comput., 157 (2004), 393-405.  doi: 10.1016/j.amc.2003.08.058.  Google Scholar

[110]

S. ShoidN. ShapieeN. ZullN. H. A. GhaniN. S. MohamedM. Rivaie and M. Mamat, The application of new conjugate gradient methods in estimating data, International Journal of Engineering & Technology, 7 (2018), 25-27.  doi: 10.14419/ijet.v7i2.14.11147.  Google Scholar

[111]

H. S. SimW. J. LeongC. Y. Chen and S. N. I. Ibrahim, Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization, Numer. Algebra Control Optim., 8 (2018), 377-387.  doi: 10.3934/naco.2018024.  Google Scholar

[112]

P. S. StanimirovićB. IvanovS. Djordjević and I. Brajević, New hybrid conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno conjugate gradient methods, J. Optim. Theory Appl., 178 (2018), 860-884.  doi: 10.1007/s10957-018-1324-3.  Google Scholar

[113]

P. S. StanimirovićV. N. Katsikis and D. Pappas, Computation of $\{2, 4\}$ and $\{2, 3\}$-inverses based on rank-one updates, Linear Multilinear Algebra, 66 (2018), 147-166.  doi: 10.1080/03081087.2017.1290042.  Google Scholar

[114]

P. S. StanimirovićV. N. Katsikis and D. Pappas, Computing $\{2, 4\}$ and $\{2, 3\}$-inverses by using the Sherman-Morrison formula, Appl. Math. Comput., 273 (2015), 584-603.  doi: 10.1016/j.amc.2015.10.023.  Google Scholar

[115]

P. S. Stanimirović and M. B. Miladinović, Accelerated gradient descent methods with line search, Numer. Algor., 54 (2010), 503-520.  doi: 10.1007/s11075-009-9350-8.  Google Scholar

[116]

P. S. Stanimirović, G. V. Milovanović, M. J. Petrović and N. Z. Kontrec, A transformation of accelerated double step-size method for unconstrained optimization, Math. Probl. Eng., 2015 (2015), Article ID 283679, 8 pages. doi: 10.1155/2015/283679.  Google Scholar

[117]

W. SunJ. Han and J. Sun, Global convergence of nonmonotone descent methods for unconstrained optimization problems, J. Comp. Appl. Math., 146 (2002), 89-98.  doi: 10.1016/S0377-0427(02)00420-X.  Google Scholar

[118]

Z. Sun and T. Sugie, Identification of Hessian matrix in distributed gradient based multi agent coordination control systems, Numer. Algebra Control Optim., 9 (2019), 297-318.  doi: 10.3934/naco.2019020.  Google Scholar

[119]

W. Sun and Y.-X. Yuan, Optimization Theory and Methods: Nonlinear Programming, 1st edition, Springer, New York, 2006. doi: 10.1007/b106451.  Google Scholar

[120]

Ph. L. Toint, Non–monotone trust–region algorithm for nonlinear optimization subject to convex constraints, Math. Prog., 77 (1997), 69-94.  doi: 10.1007/BF02614518.  Google Scholar

[121]

D. Touati-Ahmed and C. Storey, Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64 (1990), 379-397.  doi: 10.1007/BF00939455.  Google Scholar

[122]

M. N. VrahatisG. S. AndroulakisJ. N. Lambrinos and G. D. Magoulas, A class of gradient unconstrained minimization algorithms with adaptive step-size, J. Comp. Appl. Math., 114 (2000), 367-386.  doi: 10.1016/S0377-0427(99)00276-9.  Google Scholar

[123]

Z. WeiG. Li and L. Qi, New quasi-Newton methods for unconstrained optimization problems, Appl. Math. Comput., 175 (2006), 1156-1188.  doi: 10.1016/j.amc.2005.08.027.  Google Scholar

[124]

Z. WeiG. Li and L. Qi, New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems, Appl. Math. Comput., 179 (2006), 407-430.  doi: 10.1016/j.amc.2005.11.150.  Google Scholar

[125]

Z. WeiS. Yao and L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput., 183 (2006), 1341-1350.  doi: 10.1016/j.amc.2006.05.150.  Google Scholar

[126]

P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 11 (1969), 226-235.  doi: 10.1137/1011036.  Google Scholar

[127]

M. XiW. Sun and J. Chen, Survey of derivative-free optimization, Numer. Algebra Control Optim., 10 (2020), 537-555.  doi: 10.3934/naco.2020050.  Google Scholar

[128]

Y. Xiao and H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310-319.  doi: 10.1016/j.jmaa.2013.04.017.  Google Scholar

[129]

H. Yabe and M. Takano, Global convergence properties of nonlinear conjugate gradient methods with modified secant condition, Comput. Optim. Appl., 28 (2004), 203-225.  doi: 10.1023/B:COAP.0000026885.81997.88.  Google Scholar

[130]

X. Yang, Z. Luo and X. Dai, A global convergence of LS-CD hybrid conjugate gradient method, Adv. Numer. Anal., 2013 (2013), Article ID 517452, 5 pp. doi: 10.1155/2013/517452.  Google Scholar

[131]

S. Yao, X. Lu and Z. Wei, A conjugate gradient method with global convergence for large-scale unconstrained optimization problems, J. Appl. Math., 2013 (2013), Article ID 730454, 9 pp. doi: 10.1155/2013/730454.  Google Scholar

[132]

G. Yuan and Z. Wei, Convergence analysis of a modified BFGS method on convex minimizations, Comp. Optim. Appl., 47 (2010), 237-255.  doi: 10.1007/s10589-008-9219-0.  Google Scholar

[133]

S. YaoZ. Wei and H. Huang, A notes about WYL's conjugate gradient method and its applications, Appl. Math. Comput., 191 (2007), 381-388.  doi: 10.1016/j.amc.2007.02.094.  Google Scholar

[134]

Y. Yuan, A new stepsize for the steepest descent method, J. Comput. Math., 24 (2006), 149-156.   Google Scholar

[135]

G. YuanT. Li and W. Hu, A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems, Appl. Numer. Math., 147 (2020), 129-141.  doi: 10.1016/j.apnum.2019.08.022.  Google Scholar

[136]

G. Yuan, T. Li and W. Hu, A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration, J. Inequal. Appl., 2019 (2019), Article number: 247, 25 pp. doi: 10.1186/s13660-019-2192-6.

[137]

G. Yuan, Z. Wei and Y. Wu, Modified limited memory BFGS method with nonmonotone line search for unconstrained optimization, J. Korean Math. Soc., 47 (2010), 767-788. doi: 10.4134/JKMS.2010.47.4.767.

[138]

L. Zhang, An improved Wei-Yao-Liu nonlinear conjugate gradient method for optimization computation, Appl. Math. Comput., 215 (2009), 2269-2274. doi: 10.1016/j.amc.2009.08.016.

[139]

Y. Zheng and B. Zheng, Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems, J. Optim. Theory Appl., 175 (2017), 502-509. doi: 10.1007/s10957-017-1140-1.

[140]

H. Zhong, G. Chen and X. Guo, Semi-local convergence of the Newton-HSS method under the center Lipschitz condition, Numer. Algebra Control Optim., 9 (2019), 85-99. doi: 10.3934/naco.2019007.

[141]

W. Zhou and L. Zhang, A nonlinear conjugate gradient method based on the MBFGS secant condition, Optim. Methods Softw., 21 (2006), 707-714. doi: 10.1080/10556780500137041.

[142]

G. Zoutendijk, Nonlinear programming, computational methods, in: J. Abadie (ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam, (1970), 37-86.

[143]

N. Zull, N. 'Aini, M. Rivaie and M. Mamat, A new gradient method for solving linear regression model, International Journal of Recent Technology and Engineering, 7 (2019), 624-630.

Figure 1.  IT performance profile for AGD, MSM and SM methods
Figure 2.  FE performance profile for AGD, MSM and SM methods
Figure 3.  CPU time performance profile for AGD, MSM and SM methods
Figure 4.  IT performance profile for HS, PRP and LS methods
Figure 5.  FE performance profile for HS, PRP and LS methods
Figure 6.  CPU time performance profile for HS, PRP and LS methods
Figure 7.  IT performance profile for DY, FR and CD methods
Figure 8.  FE performance profile for DY, FR and CD methods
Figure 9.  CPU time performance profile for DY, FR and CD methods
Figure 10.  IT performance profile for hybrid CG methods HCG1–HCG10
Figure 11.  FE performance profile for hybrid CG methods HCG1–HCG10
Figure 12.  CPU time performance profile for hybrid CG methods HCG1–HCG10
Figure 13.  Performance profiles of DHSDL (T1, T2, T3, T4, T5, T6) method based on IT
Figure 14.  Performance profiles of DHSDL (T1, T2, T3, T4, T5, T6) method based on FE
Figure 15.  Performance profiles of DHSDL (T1, T2, T3, T4, T5, T6) method based on CPU time
Figure 16.  Performance profiles of DLSDL (T1, T2, T3, T4, T5, T6) method based on IT
Figure 17.  Performance profiles of DLSDL (T1, T2, T3, T4, T5, T6) method based on FE
Figure 18.  Performance profiles of DLSDL (T1, T2, T3, T4, T5, T6) method based on CPU time
Table 1.  Some modifications of quasi-Newton equations
Quasi-Newton Eqs. $ \tilde{\mathbf{y}}_{k-1} $ Ref.
$ B_{k}{\mathbf s}_{k-1}\!=\!\tilde{\mathbf{y}}_{k-1} $ $ \tilde{\mathbf{y}}_{k-1} \!=\! \varphi_{k-1} \mathbf{y}_{k-1} + (1-\varphi_{k-1})B_{k-1}\mathbf{s}_{k-1} $ [104]
$ B_{k}{\mathbf s}_{k-1}=\tilde{\mathbf{y}}_{k-1} $ $ \tilde{\mathbf{y}}_{k-1} \!=\! \mathbf{y}_{k-1} + t_{k-1}\mathbf{s}_{k-1}, t_{k-1}\leq10^{-6} $ [72]
$ B_{k}{\mathbf s}_{k-1}=\tilde{\mathbf{y}}_{k-1} $ $ \tilde{\mathbf{y}}_{k-1} \!=\! \mathbf{y}_{k-1} + \frac{2(f_{k-1} - f_k)+(\mathbf{g}_k + \mathbf{g}_{k-1})^ \mathrm{T} \mathbf{s}_{k-1}}{\|\mathbf{s}_{k-1}\|^2}\mathbf{s}_{k-1} $ [123]
$ B_{k}{\mathbf s}_{k-1}=\tilde{\mathbf{y}}_{k-1} $ $ \tilde{\mathbf{y}}_{k-1} \!=\! \mathbf{y}_{k-1} + \frac{\max (0, 2(f_{k-1} - f_k)+(\mathbf{g}_k + \mathbf{g}_{k-1})^ \mathrm{T} \mathbf{s}_{k-1})}{\|\mathbf{s}_{k-1}\|^2}\mathbf{s}_{k-1} $ [133]
$ B_{k}{\mathbf s}_{k-1}=\tilde{\mathbf{y}}_{k-1} $ $ \tilde{\mathbf{y}}_{k-1} \!=\! \mathbf{y}_{k-1} + \frac{\max (0, 6(f_{k-1} - f_k)+3(\mathbf{g}_k + \mathbf{g}_{k-1})^ \mathrm{T} \mathbf{s}_{k-1})}{\|\mathbf{s}_{k-1}\|^2}\mathbf{s}_{k-1} $ [134]
$ B_{k}{\mathbf s}_{k-1}=\tilde{\mathbf{y}}_{k-1} $ $ \tilde{\mathbf{y}}_{k-1} \!=\! \frac{1}{2}\mathbf{y}_{k-1} + \frac{(f_{k-1} - f_k) - \frac{1}{2} \mathbf{g}_k^ \mathrm{T} \mathbf{s}_{k-1}}{\mathbf{s}_{k-1}^ \mathrm{T} \mathbf{y}_{k-1}}\mathbf{y}_{k-1} $ [59]
Table 2.  Basic conjugate gradient methods and their CG parameters $ \beta_k $
$ \beta_k $ Title Year Reference
$ \beta_k^{\mathrm{HS}}\!=\!\dfrac{{\mathbf y}_{k-1}^ \mathrm{T} {\mathbf g}_k}{{\mathbf y}_{k-1}^ \mathrm{T} {\mathbf d}_{k-1}} $ $ \rm Hestenes–Stiefel $ 1952 [60]
$ \beta_k^{\mathrm{FR}}\!=\!\dfrac{\|{\mathbf g}_k\|^2}{\|{\mathbf g}_{k-1}\|^2} $ $ \rm Fletcher–Reeves $ 1964 [48]
$ \beta_k^{\mathrm{D}}\!=\!\dfrac{{\mathbf g}_{k}^ \mathrm{T} {\mathbf G}_{k-1} {\mathbf d}_{k-1}}{{\mathbf d}_{k-1}^ \mathrm{T} {\mathbf G}_{k-1}{\mathbf d}_{k-1}} $ $ \rm Daniel $ 1967 [38]
$ \beta_k^{\mathrm{PRP}}\!=\!\dfrac{{\mathbf y}_{k-1}^ \mathrm{T} {\mathbf g}_k}{\|{\mathbf g}_{k-1}\|^2} $ $ \rm Polak–Ribière–Polyak $ 1969 [102,103]
$ \beta_k^{\mathrm{CD}}\!=\!-\dfrac{\|{\mathbf g}_k\|^2}{{\mathbf g}_{k-1}^ \mathrm{T} {\mathbf d}_{k-1}} $ $ \rm Conjugate Descent $ 1987 [47]
$ \beta_k^{\mathrm{LS}}\!=\!-\dfrac{{\mathbf y}_{k-1}^ \mathrm{T} {\mathbf g}_k}{{\mathbf g}_{k-1}^ \mathrm{T} {\mathbf d}_{k-1}} $ $ \rm Liu–Storey $ 1991 [79]
$ \beta_k^{\mathrm{DY}}\!=\!\dfrac{\|{\mathbf g}_k\|^2}{{\mathbf y}_{k-1}^ \mathrm{T} {\mathbf d}_{k-1}} $ $ \rm Dai–Yuan $ 1999 [30]
Table 3.  Classification of CG methods
Denominator
Numerator $\|\mathbf{g}_{k-1}\|^2$ $\mathbf{y}_{k-1}^{\rm T} \mathbf{d}_{k-1}$ $-\mathbf{g}_{k-1}^{\rm T} \mathbf{d}_{k-1}$
$\|\mathbf{g}_{k}\|^2$ FR DY CD
$\mathbf{y}_{k-1}^{\rm T} \mathbf{g}_{k}$ PRP HS LS
Table 4.  Summary numerical results of the AGD, MSM and SM methods with respect to IT, FE and CPU
IT profile FE profile CPU time
Test function AGD MSM SM AGD MSM SM AGD MSM SM
Perturbed Quadratic 353897 34828 59908 13916515 200106 337910 6756.047 116.281 185.641
Raydan 1 22620 26046 14918 431804 311260 81412 158.359 31.906 36.078
Diagonal 3 120416 7030 12827 4264718 38158 69906 5527.844 52.609 102.875
Generalized Tridiagonal 1 670 346 325 9334 1191 1094 11.344 1.469 1.203
Extended Tridiagonal 1 3564 1370 4206 14292 10989 35621 55.891 29.047 90.281
Extended TET 443 156 156 3794 528 528 3.219 0.516 0.594
Diagonal 4 120 96 96 1332 636 636 0.781 0.203 0.141
Extended Himmelblau 396 260 196 6897 976 668 1.953 0.297 0.188
Perturbed quadratic diagonal 2542050 37454 44903 94921578 341299 460028 44978.750 139.625 185.266
Quadratic QF1 366183 36169 62927 13310016 208286 352975 12602.563 81.531 138.172
Extended quadratic penalty QP1 210 369 271 2613 2196 2326 1.266 1.000 0.797
Extended quadratic penalty QP2 395887 1674 3489 9852040 11491 25905 3558.734 3.516 6.547
Quadratic QF2 100286 32727 64076 3989239 183142 353935 1582.766 73.438 132.703
Extended quadratic exponential EP1 48 100 73 990 894 661 0.750 0.688 0.438
Extended Tridiagonal 2 1657 659 543 8166 2866 2728 3.719 1.047 1.031
ARWHEAD (CUTE) 5667 430 270 214284 5322 3919 95.641 1.969 1.359
Almost Perturbed Quadratic 356094 33652 60789 14003318 194876 338797 13337.125 73.047 133.516
LIARWHD (CUTE) 1054019 3029 18691 47476667 27974 180457 27221.516 9.250 82.016
ENGVAL1 (CUTE) 743 461 375 6882 2285 2702 3.906 1.047 1.188
QUARTC (CUTE) 171 217 290 402 494 640 2.469 1.844 2.313
Generalized Quartic 187 181 189 849 493 507 0.797 0.281 0.188
Diagonal 7 72 147 108 333 504 335 0.625 0.547 0.375
Diagonal 8 60 120 118 304 383 711 0.438 0.469 0.797
Full Hessian FH3 45 63 63 1352 566 631 1.438 0.391 0.391
Diagonal 9 329768 10540 13619 13144711 68189 89287 6353.172 43.609 38.672
Table 5.  Summary numerical results of the HS, PRP and LS methods with respect to IT, FE and CPU
IT profile FE profile CPU time
Test function HS PRP LS HS PRP LS HS PRP LS
Perturbed Quadratic 1157 1157 6662 3481 3481 19996 0.234 0.719 1.438
Raydan 2 NaN 174 40 NaN 373 120 NaN 0.094 0.078
Diagonal 2 NaN 1721 5007 NaN 6594 15498 NaN 1.313 2.891
Extended Tridiagonal 1 NaN 170 17079 NaN 560 54812 NaN 0.422 13.641
Diagonal 4 NaN 70 1927 NaN 180 5739 NaN 0.078 0.391
Diagonal 5 NaN 154 30 NaN 338 90 NaN 0.172 0.078
Extended Himmelblau 160 120 241 820 600 1043 0.172 0.125 0.172
Full Hessian FH2 5096 5686 348414 15294 17065 1045123 83.891 80.625 5081.875
Perturbed quadratic diagonal 1472 1120 21667 4419 3363 65057 0.438 0.391 2.547
Quadratic QF1 1158 1158 5612 3484 3484 16813 0.281 0.313 1.047
Extended quadratic penalty QP2 NaN 533 NaN NaN 5395 NaN NaN 0.781 NaN
Quadratic QF2 2056 2311 NaN 9168 9862 NaN 0.969 0.859 NaN
Extended quadratic exponential EP1 NaN NaN 70 NaN NaN 350 NaN NaN 0.141
TRIDIA (CUTE) 6835 6744 NaN 20521 20248 NaN 1.438 1.094 NaN
Almost Perturbed Quadratic 1158 1158 5996 3484 3484 17998 0.281 0.328 1.063
LIARWHD (CUTE) NaN 408 11498 NaN 4571 50814 NaN 0.438 2.969
POWER (CUTE) 7781 7789 190882 23353 23377 572656 1.422 1.219 14.609
NONSCOMP (CUTE) 4545 3647 NaN 15128 12433 NaN 0.875 0.656 NaN
QUARTC (CUTE) NaN 165 155 NaN 1347 1466 NaN 0.781 0.766
Diagonal 6 NaN 174 137 NaN 373 442 NaN 0.109 0.125
DIXON3DQ (CUTE) NaN 12595 12039 NaN 37714 36091 NaN 1.641 2.859
BIGGSB1 (CUTE) NaN 11454 11517 NaN 34293 34530 NaN 1.969 2.141
Generalized Quartic NaN 134 139 NaN 458 445 NaN 0.125 0.094
Diagonal 7 NaN 51 80 NaN 142 240 NaN 0.063 0.109
Diagonal 8 NaN 70 80 NaN 180 180 NaN 0.063 0.125
FLETCHCR (CUTE) 18292 19084 20354 178305 170266 171992 8.859 6.203 7.484
Table 6.  Summary numerical results of the DY, FR and CD methods with respect to IT, FE and CPU
IT profile FE profile CPU time
Test function DY FR CD DY FR CD DY FR CD
Perturbed Quadratic 1157 1157 1157 3481 3481 3481 0.469 0.609 0.531
Raydan 2 86 40 40 192 100 100 0.063 0.016 0.016
Diagonal 2 1636 3440 2058 4774 7982 8063 0.922 1.563 1.297
Extended Tridiagonal 1 2081 690 1140 4639 2022 2984 1.703 1.141 1.578
Diagonal 4 70 70 70 200 200 200 0.047 0.031 0.016
Diagonal 5 40 124 155 100 258 320 0.109 0.141 0.125
Extended Himmelblau 383 339 207 1669 1467 961 0.219 0.172 0.172
Full Hessian FH2 4682 4868 4794 14054 14610 14390 65.938 66.469 65.922
Perturbed quadratic diagonal 1036 1084 1276 3114 3258 3834 0.406 0.422 0.422
Quadratic QF1 1158 1158 1158 3484 3484 3484 0.297 0.297 0.328
Quadratic QF2 NaN NaN 2349 NaN NaN 10073 NaN NaN 1.531
Extended quadratic exponential EP1 NaN 60 60 NaN 310 310 NaN 0.109 0.125
Almost Perturbed Quadratic 1158 1158 1158 3484 3484 3484 0.422 0.453 0.391
LIARWHD (CUTE) 2812 1202 1255 12366 7834 7379 0.938 1.000 1.109
POWER (CUTE) 7779 7781 7782 23347 23353 23356 1.078 1.500 1.328
NONSCOMP (CUTE) 2558 13483 10901 49960 43268 33413 1.203 1.406 1.422
QUARTC (CUTE) 134 94 95 1132 901 916 0.688 0.672 0.563
Diagonal 6 86 40 40 192 100 100 0.047 0.063 0.063
DIXON3DQ (CUTE) 16047 18776 19376 48172 56369 58176 2.266 2.516 2.734
BIGGSB1 (CUTE) 15274 17835 18374 45853 53546 55170 2.875 2.922 2.484
Generalized Quartic 142 214 173 497 712 589 0.078 0.172 0.109
Diagonal 7 50 50 50 160 160 160 0.063 0.047 0.094
Diagonal 8 50 40 40 160 130 130 0.109 0.125 0.063
Full Hessian FH3 43 43 43 139 139 139 0.063 0.109 0.109
FLETCHCR (CUTE) NaN NaN 26793 NaN NaN 240237 NaN NaN 10.203
Table 7.  Summary numerical results of the hybrid CG methods HCG1–HCG10 with respect to IT
Test function HCG1 HCG2 HCG3 HCG4 HCG5 HCG6 HCG7 HCG8 HCG9 HCG10
Perturbed Quadratic 1157 1157 1157 1157 1157 1157 1157 1157 1157 1157
Raydan 2 40 40 40 57 78 81 40 69 NaN 126
Diagonal 2 1584 1581 1542 1488 1500 2110 2193 1843 1475 1453
Extended Tridiagonal 1 805 623 754 2110 2160 10129 1167 966 NaN 270
Diagonal 4 60 60 70 60 70 70 60 70 NaN 113
Diagonal 5 124 39 98 39 120 109 39 141 154 130
Extended Himmelblau 145 139 111 161 181 207 159 381 109 108
Full Hessian FH2 5036 5036 5036 4820 4820 4800 4994 4789 5163 5705
Perturbed quadratic diagonal 1228 1214 1266 934 1093 987 996 1016 NaN 2679
Quadratic QF1 1158 1158 1158 1158 1158 1158 1158 1158 NaN 1158
Quadratic QF2 2125 2098 2174 1995 1991 2425 2378 NaN 2204 2034
TRIDIA (CUTE) NaN NaN NaN 6210 6210 5594 NaN NaN 6748 7345
Almost Perturbed Quadratic 1158 1158 1158 1158 1158 1158 1158 1158 1158 1158
LIARWHD (CUTE) 1367 817 1592 1024 1831 1774 531 2152 NaN 573
POWER (CUTE) 7782 7782 7782 7779 7779 7802 7781 7780 NaN 7781
NONSCOMP (CUTE) 10092 10746 8896 10466 9972 13390 11029 3520 3988 11411
QUARTC (CUTE) 94 160 145 150 126 95 160 114 165 154
Diagonal 6 40 40 40 57 78 81 40 69 NaN 126
DIXON3DQ (CUTE) 12182 5160 11257 5160 11977 14302 5160 17080 NaN 12264
BIGGSB1 (CUTE) 10664 5160 10479 5160 11082 13600 5160 16192 NaN 11151
Generalized Quartic 129 107 110 107 142 153 107 123 131 145
Diagonal 7 50 NaN 40 NaN 40 50 NaN 50 51 40
Diagonal 8 40 40 40 50 NaN 50 40 NaN NaN 40
Full Hessian FH3 43 42 42 42 42 43 42 43 NaN NaN
FLETCHCR (CUTE) 17821 17632 18568 17272 17446 26794 24865 NaN 17315 20813
Table 8.  Summary numerical results of the hybrid CG methods HCG1–HCG10 with respect to FE
Test function HCG1 HCG2 HCG3 HCG4 HCG5 HCG6 HCG7 HCG8 HCG9 HCG10
Perturbed Quadratic 3481 3481 3481 3481 3481 3481 3481 3481 3481 3481
Raydan 2 100 100 100 134 176 182 100 158 NaN 282
Diagonal 2 6136 6217 6006 5923 5944 8281 8594 4822 5711 5636
Extended Tridiagonal 1 2369 1991 2275 4678 4924 22418 3119 2661 NaN 869
Diagonal 4 170 170 200 170 200 200 170 200 NaN 339
Diagonal 5 258 88 206 88 270 228 88 292 338 270
Extended Himmelblau 855 687 583 763 813 961 757 1613 567 594
Full Hessian FH2 15115 15115 15115 14467 14467 14407 14989 14374 15495 17122
Perturbed quadratic diagonal 3686 3647 3805 2805 3282 2967 2993 3053 NaN 8044
Quadratic QF1 3484 3484 3484 3484 3484 3484 3484 3484 NaN 3484
Quadratic QF2 9455 9202 9501 9016 9054 10229 10086 NaN 9531 9085
TRIDIA (CUTE) NaN NaN NaN 18640 18640 16792 NaN NaN 20260 22051
Almost Perturbed Quadratic 3484 3484 3484 3484 3484 3484 3484 3484 3484 3484
LIARWHD (CUTE) 7712 5931 8275 6165 8113 9395 5854 10305 NaN 4848
POWER (CUTE) 23356 23356 23356 23347 23347 23416 23353 23350 NaN 23353
NONSCOMP (CUTE) 31355 33211 27801 32705 31458 40807 34013 23411 13367 35106
QUARTC (CUTE) 901 1254 1261 1224 1224 916 1254 1041 1347 1305
Diagonal 6 100 100 100 134 176 182 100 158 NaN 282
DIXON3DQ (CUTE) 36508 15534 33759 15534 35926 42952 15534 51284 NaN 36796
BIGGSB1 (CUTE) 31960 15534 31427 15534 33247 40846 15534 48620 NaN 33469
Generalized Quartic 457 371 370 371 481 529 371 439 446 467
Diagonal 7 160 NaN 130 NaN 130 160 NaN 160 142 13
Diagonal 8 130 130 130 160 NaN 160 130 NaN NaN 130
Full Hessian FH3 139 136 136 136 136 139 136 139 NaN NaN
FLETCHCR (CUTE) 166463 165774 168739 175309 175845 240240 184939 NaN 174406 215687
Table 9.  Summary numerical results of the hybrid CG methods HCG1–HCG10 with respect to CPU time (sec)
Test function HCG1 HCG2 HCG3 HCG4 HCG5 HCG6 HCG7 HCG8 HCG9 HCG10
Perturbed Quadratic 0.656 0.516 0.781 0.719 0.594 0.438 0.719 0.688 0.844 0.688
Raydan 2 0.031 0.063 0.078 0.078 0.078 0.078 0.078 0.078 NaN 0.078
Diagonal 2 1.453 1.328 1.656 1.172 1.438 1.797 1.813 1.266 1.250 1.141
Extended Tridiagonal 1 1.016 1.125 1.359 2.250 2.375 7.578 1.672 1.375 NaN 0.922
Diagonal 4 0.031 0.031 0.031 0.078 0.078 0.047 0.109 0.094 NaN 0.094
Diagonal 5 0.141 0.063 0.156 0.094 0.094 0.125 0.109 0.078 0.219 0.156
Extended Himmelblau 0.172 0.172 0.109 0.141 0.172 0.141 0.125 0.141 0.172 0.125
Full Hessian FH2 83.125 91.938 86.984 85.766 94.484 78.281 77.141 74.500 80.969 82.469
Perturbed quadratic diagonal 0.406 0.609 0.641 0.375 0.563 0.359 0.328 0.344 NaN 0.734
Quadratic QF1 0.359 0.438 0.422 0.422 0.406 0.391 0.484 0.422 NaN 0.281
Quadratic QF2 1.047 1.313 1.203 1.156 1.063 1.156 1.000 NaN 1.094 1.047
TRIDIA (CUTE) NaN NaN NaN 1.688 1.391 1.859 NaN NaN 1.875 1.391
Almost Perturbed Quadratic 0.406 0.438 0.516 0.594 0.250 0.359 0.406 0.578 0.641 0.422
LIARWHD (CUTE) 0.938 0.828 1.203 0.797 1.125 1.172 0.938 1.203 NaN 0.594
POWER (CUTE) 1.563 1.672 1.750 1.609 1.625 1.578 1.625 1.188 NaN 1.453
NONSCOMP (CUTE) 1.547 1.484 1.063 1.766 1.422 1.719 1.516 1.063 1.203 1.703
QUARTC (CUTE) 0.750 1.000 0.969 0.969 0.875 0.797 0.938 0.703 1.266 0.93
Diagonal 6 0.078 0.078 0.078 0.094 0.063 0.016 0.016 0.125 NaN 0.109
DIXON3DQ (CUTE) 2.047 1.453 2.016 1.484 2.359 2.234 1.406 2.297 NaN 2.078
BIGGSB1 (CUTE) 1.875 2.047 2.359 1.750 2.250 2.391 1.422 2.672 NaN 2.422
Generalized Quartic 0.063 0.125 0.141 0.156 0.125 0.094 0.078 0.109 0.172 0.109
Diagonal 7 0.063 NaN 0.016 NaN 0.109 0.063 NaN 0.063 0.063 0.063
Diagonal 8 0.078 0.125 0.078 0.031 NaN 0.063 0.109 NaN NaN 0.078
Full Hessian FH3 0.063 0.047 0.109 0.047 0.031 0.063 0.047 0.109 NaN NaN
FLETCHCR (CUTE) 5.656 6.750 7.922 9.484 6.484 8.766 7.281 NaN 6.906 7.547
Table 10.  Labels and values of the scalar $ t $ in the DHSDL, DLSDL, MHSDL and MLSDL methods
Label T1 T2 T3 T4 T5 T6
Value of the scalar $ t $ $ t_k=\alpha_k $ 0.05 0.1 0.2 0.5 0.9
Table 11.  Average IT values for 22 test functions across 10 numerical experiments
Method T1 T2 T3 T4 T5 T6
DHSDL 32980.14 31281.32 33640.45 32942.36 34448.32 33872.36
DLSDL 30694.00 28701.14 31048.32 30594.77 31926.59 31573.05
MHSDL 29289.73 27653.64 29660.00 29713.50 30491.18 30197.27
MLSDL 25398.82 22941.77 24758.27 24250.68 25722.64 25032.64
Table 12.  Average FE values for 22 test functions across 10 numerical experiments
Method T1 T2 T3 T4 T5 T6
DHSDL 1228585.50 1191960.55 1252957.09 1238044.36 1271176.59 1255710.45
DLSDL 1131421.41 1083535.14 1149482.41 1134315.00 1167030.14 1158554.77
MHSDL 1089700.41 1036710.32 1089777.64 1091985.41 1105299.91 1101380.18
MLSDL 904217.14 845017.55 891669.50 879473.14 913165.68 895652.36
Table 13.  Average CPU time for 22 test functions across 10 numerical experiments
Method T1 T2 T3 T4 T5 T6
DHSDL 902.06 894.73 917.77 930.56 911.28 870.93
DLSDL 816.08 790.63 804.69 816.28 803.84 809.67
MHSDL 770.78 751.65 728.61 749.70 712.64 720.57
MLSDL 573.14 587.41 581.50 576.32 582.62 580.96