April 2017, 13(2): 595-608. doi: 10.3934/jimo.2016034

A scaled conjugate gradient method with moving asymptotes for unconstrained optimization problems

1. Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu Province, 211106, China
2. Huaibei Normal University, Huaibei, Anhui Province, 235000, China
3. Hubei Engineering University, Xiaogan, Hubei Province, 432000, China

Received: January 2015. Revised: December 2015. Published: May 2016.

Fund Project: The authors are supported by the Natural Science Foundation of China (11471159, 11571169), the Natural Science Foundation of Jiangsu Province (BK20141409), the Education Department Foundation of Anhui Province (KJ2016A651, 2014jyxm161), and the Science and Technology Foundation of the Department of Education of Hubei Province (D20152701).

In this paper, a scaled method that combines the conjugate gradient with moving asymptotes is presented for solving large-scale nonlinear unconstrained optimization problems. A diagonal matrix is obtained by the moving asymptote technique, and a scaled gradient is determined by multiplying the gradient by this diagonal matrix. Depending on the conditions at each iteration, the search direction is either a scaled conjugate gradient direction or the negative scaled gradient direction. This direction satisfies the sufficient descent condition whenever the step size satisfies the strong Wolfe conditions. A global convergence analysis of the method is also provided. Numerical results show that the scaled method is efficient for solving some large-scale nonlinear problems.
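
As a rough illustration of the structure described in the abstract (not the authors' actual formulas, which are not reproduced on this page), the following Python sketch scales the gradient by a stand-in positive diagonal matrix, tries a scaled conjugate gradient direction with the PR+ parameter, falls back to the negative scaled gradient when a sufficient descent test fails, and takes steps from SciPy's strong-Wolfe line search. The diagonal entries and the tolerance `c` are hypothetical placeholders; the paper instead builds the diagonal from moving asymptotes.

```python
import numpy as np
from scipy.optimize import line_search  # finds a step satisfying the strong Wolfe conditions


def scaled_cg(f, grad, x0, tol=1e-6, max_iter=500, c=1e-4):
    """Minimal sketch of a scaled CG iteration; see the assumptions above."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    g_prev = d_prev = None
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Hypothetical bounded positive diagonal scaling D_k; the paper
        # derives D_k from the moving-asymptote technique instead.
        diag = np.clip(1.0 / (1.0 + np.abs(g)), 1e-3, 1e3)
        sg = diag * g                                   # scaled gradient D_k g_k
        if d_prev is None:
            d = -sg                                     # first step: steepest scaled descent
        else:
            beta = max(g @ (g - g_prev) / (g_prev @ g_prev), 0.0)  # PR+ (a stand-in choice)
            d = -sg + beta * d_prev
            if g @ d > -c * (g @ sg):                   # sufficient descent test failed:
                d = -sg                                 # fall back to negative scaled gradient
        alpha = line_search(f, grad, x, d, gfk=g, c2=0.1)[0]
        if alpha is None:                               # line search failed; stop
            break
        g_prev, d_prev = g, d
        x = x + alpha * d
        g = grad(x)
    return x
```

For a quick (informal) check, the sketch can be tried on smooth test problems such as `scipy.optimize.rosen` with gradient `scipy.optimize.rosen_der`, e.g. `scaled_cg(rosen, rosen_der, np.full(100, 1.5))`.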

Citation: Guanghui Zhou, Qin Ni, Meilan Zeng. A scaled conjugate gradient method with moving asymptotes for unconstrained optimization problems. Journal of Industrial & Management Optimization, 2017, 13 (2) : 595-608. doi: 10.3934/jimo.2016034
References:

[1] M. Al-Baali, Y. Narushima and H. Yabe, A family of three-term conjugate gradient methods with sufficient descent property for unconstrained optimization, Computational Optimization and Applications, 60 (2015), 89-110. doi: 10.1007/s10589-014-9662-z.

[2] N. Andrei, An unconstrained optimization test functions collection, Advanced Modeling and Optimization, 10 (2008), 147-161.

[3] Y. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM Journal on Optimization, 10 (1999), 177-182. doi: 10.1137/S1052623497318992.

[4] Y. Dai and C. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search, SIAM Journal on Optimization, 23 (2013), 296-320. doi: 10.1137/100813026.

[5] E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, 91 (2002), 201-213. doi: 10.1007/s101070100263.

[6] R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, The Computer Journal, 7 (1964), 149-154. doi: 10.1093/comjnl/7.2.149.

[7] N. I. M. Gould, D. Orban and P. L. Toint, CUTEr and SifDec: A constrained and unconstrained testing environment, revisited, ACM Transactions on Mathematical Software, 29 (2003), 353-372. doi: 10.1145/962437.962438.

[8] W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM Journal on Optimization, 16 (2005), 170-192. doi: 10.1137/030601880.

[9] W. Hager and H. Zhang, The limited memory conjugate gradient method, SIAM Journal on Optimization, 23 (2013), 2150-2168. doi: 10.1137/120898097.

[10] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49 (1952), 409-436. doi: 10.6028/jres.049.044.

[11] D. Luenberger and Y. Ye, Linear and Nonlinear Programming, 3rd edition, Springer-Verlag, New York, 2008.

[12] W. Nakamura, Y. Narushima and H. Yabe, Nonlinear conjugate gradient methods with sufficient descent properties for unconstrained optimization, Journal of Industrial and Management Optimization, 9 (2013), 595-619. doi: 10.3934/jimo.2013.9.595.

[13] Y. Narushima, H. Yabe and J. A. Ford, A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM Journal on Optimization, 21 (2011), 212-230. doi: 10.1137/080743573.

[14] Q. Ni, A globally convergent method of moving asymptotes with trust region technique, Optimization Methods and Software, 18 (2003), 283-297. doi: 10.1080/1055678031000118491.

[15] E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue française d'informatique et de recherche opérationnelle, série rouge, 3 (1969), 35-43.

[16] B. T. Polyak, The conjugate gradient method in extremal problems, USSR Computational Mathematics and Mathematical Physics, 9 (1969), 94-112. doi: 10.1016/0041-5553(69)90035-4.

[17] K. Svanberg, The method of moving asymptotes—a new method for structural optimization, International Journal for Numerical Methods in Engineering, 24 (1987), 359-373. doi: 10.1002/nme.1620240207.

[18] H. Wang and Q. Ni, A new method of moving asymptotes for large-scale unconstrained optimization, Applied Mathematics and Computation, 203 (2008), 62-71. doi: 10.1016/j.amc.2008.03.035.

[19] W. Zhou and Y. Zhou, On the strong convergence of a modified Hestenes-Stiefel method for nonconvex optimization, Journal of Industrial and Management Optimization, 9 (2013), 893-899. doi: 10.3934/jimo.2013.9.893.

Figure 1. CPU time performance profiles on a log₂ scale (n = 10²)
Figure 2. Iteration performance profiles on a log₂ scale (n = 10²)
Figure 3. CPU time performance profiles on a log₂ scale (n = 10³)
Figure 4. Iteration performance profiles on a log₂ scale (n = 10³)
Figure 5. CPU time performance profiles on a log₂ scale (n = 10⁴)
Figure 6. Iteration performance profiles on a log₂ scale (n = 10⁴)
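
Figures 1-6 report Dolan-Moré performance profiles [5] for CPU time and iteration counts. As a minimal sketch of how such log₂-scaled profiles are computed (the cost matrix `T` below is a hypothetical layout, not the paper's data), the profile of a solver at abscissa tau is the fraction of problems on which its cost is within a factor 2^tau of the best solver:

```python
import numpy as np
import matplotlib.pyplot as plt


def performance_profile(T, labels, n_points=200):
    """Plot Dolan-Moré performance profiles on a log2 scale.

    T[p, s] is the cost (CPU time or iterations) of solver s on problem p;
    use np.inf where a solver failed on a problem.
    """
    T = np.asarray(T, dtype=float)
    ratios = T / T.min(axis=1, keepdims=True)        # r_{p,s} = t_{p,s} / min_s t_{p,s}
    logr = np.log2(ratios)
    taus = np.linspace(0.0, logr[np.isfinite(logr)].max(), n_points)
    for s, label in enumerate(labels):
        # Fraction of problems solved within a factor 2^tau of the best solver.
        rho = [(logr[:, s] <= tau).mean() for tau in taus]
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel("tau (log2 scale)")
    plt.ylabel("fraction of problems with log2 ratio <= tau")
    plt.legend()
    plt.show()
```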
Table 1. Test functions

No.  Function Name                  No.  Function Name
1    ARWHEAD                        17   Ext quadratic penalty QP2
2    COSINE                         18   Ext quadratic exponential EP1
3    EDENSCH                        19   Ext Tridiagonal 2
4    EG2                            20   Ext DENSCHNF
5    ENGVAL1                        21   HIMMELBG
6    GENROSE                        22   HIMMELH
7    Ext Freudenstein & Roth        23   TOINTGSS
8    Raydan 2                       24   Extended DENSCHNB
9    Ext Tridiagonal 1              25   LIARWHD
10   Ext TET                        26   Extended Trigonometric
11   Diagonal 5                     27   Extended Penalty
12   Diagonal 2                     28   Extended BD1
13   Ext Maratos                    29   Perturbed Quadratic
14   Ext Cliff                      30   Raydan 1
15   Perturbed quadratic diagonal   31   Diagonal 4
16   Ext quadratic penalty QP1      32   QUARTC
