doi: 10.3934/jimo.2021191
Online First


Diagonally scaled memoryless quasi–Newton methods with application to compressed sensing

Zohre Aminifard and Saman Babaie-Kafaki*

Department of Mathematics, Semnan University, P.O. Box: 35195–363, Semnan, Iran

* Corresponding author: Saman Babaie-Kafaki

Received: February 2021. Revised: July 2021. Early access: November 2021.

Memoryless quasi–Newton updating formulas of the BFGS (Broyden–Fletcher–Goldfarb–Shanno) and DFP (Davidon–Fletcher–Powell) types are scaled using well-structured diagonal matrices. In the scaling approach, the diagonal elements as well as the eigenvalues of the scaled memoryless quasi–Newton updating formulas play significant roles. A convergence analysis of the proposed diagonally scaled quasi–Newton methods is presented. Finally, the performance of the methods is tested numerically on a set of CUTEr problems as well as on the compressed sensing problem.
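For orientation, the display below sketches how a diagonal scaling can enter the memoryless BFGS formula; the placement of the diagonal matrix $D_k$ and the notation $s_k$, $y_k$, $\theta_k$ follow the standard self-scaling memoryless update and are illustrative assumptions rather than the authors' exact construction. With $s_k = x_{k+1}-x_k$, $y_k = g_{k+1}-g_k$ and a scaling parameter $\theta_k>0$, the classical self-scaling memoryless BFGS approximation of the inverse Hessian is
$$H_{k+1} \;=\; \theta_k\left(I-\frac{s_k y_k^{\top}}{s_k^{\top}y_k}\right)\left(I-\frac{y_k s_k^{\top}}{s_k^{\top}y_k}\right)+\frac{s_k s_k^{\top}}{s_k^{\top}y_k},$$
and a diagonally scaled variant replaces the scalar matrix $\theta_k I$ by a well-structured positive definite diagonal matrix $D_k$,
$$H_{k+1} \;=\; \left(I-\frac{s_k y_k^{\top}}{s_k^{\top}y_k}\right)D_k\left(I-\frac{y_k s_k^{\top}}{s_k^{\top}y_k}\right)+\frac{s_k s_k^{\top}}{s_k^{\top}y_k},$$
so that the diagonal elements of $D_k$ influence the eigenvalues of the update. The search direction is then $d_{k+1} = -H_{k+1}\,g_{k+1}$.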

Citation: Zohre Aminifard, Saman Babaie-Kafaki. Diagonally scaled memoryless quasi–Newton methods with application to compressed sensing. Journal of Industrial and Management Optimization, doi: 10.3934/jimo.2021191
References:
[1]

M. Al-Baali, Numerical experience with a class of self-scaling quasi–Newton algorithms, J. Optim. Theory Appl., 96 (1998), 533-553.  doi: 10.1023/A:1022608410710.

[2]

M. Al-Baali and H. Khalfan, A combined class of self-scaling and modified quasi–Newton methods, Comput. Optim. Appl., 52 (2012), 393-408.  doi: 10.1007/s10589-011-9415-1.

[3]

M. Al-Baali, E. Spedicato and F. Maggioni, Broyden's quasi–Newton methods for a nonlinear system of equations and unconstrained optimization: A review and open problems, Optim. Methods Softw., 29 (2014), 937-954.  doi: 10.1080/10556788.2013.856909.

[4]

A. S. Berahas and M. Takáč, A robust multi-batch L–BFGS method for machine learning, Optim. Methods Softw., 35 (2020), 191-219.  doi: 10.1080/10556788.2019.1658107.

[5]

K. Amini and A. Ghorbani Rizi, A new structured quasi–Newton algorithm using partial information on Hessian, J. Comput. Appl. Math., 234 (2010), 805-811.  doi: 10.1016/j.cam.2010.01.044.

[6]

Z. Aminifard and S. Babaie-Kafaki, A modified descent Polak–Ribière–Polyak conjugate gradient method with global convergence property for nonconvex functions, Calcolo, 56 (2019), 16.  doi: 10.1007/s10092-019-0312-9.

[7]

Z. Aminifard, S. Babaie-Kafaki and S. Ghafoori, An augmented memoryless BFGS method based on a modified secant equation with application to compressed sensing, Appl. Numer. Math., 167 (2021), 187-201.  doi: 10.1016/j.apnum.2021.05.002.

[8]

N. Andrei, Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, European J. Oper. Res., 204 (2010), 410-420.  doi: 10.1016/j.ejor.2009.11.030.

[9]

N. Andrei, A double-parameter scaling Broyden–Fletcher–Goldfarb–Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization, J. Optim. Theory Appl., 178 (2018), 191-218.  doi: 10.1007/s10957-018-1288-3.

[10]

M. R. Arazm, S. Babaie-Kafaki and R. Ghanbari, An extended Dai–Liao conjugate gradient method with global convergence for nonconvex functions, Glas. Mat. Ser., 52 (2017), 361-375.  doi: 10.3336/gm.52.2.12.

[11]

S. Babaie-Kafaki, On optimality of the parameters of self-scaling memoryless quasi–Newton updating formulae, J. Optim. Theory Appl., 167 (2015), 91-101.  doi: 10.1007/s10957-015-0724-x.

[12]

S. Babaie-Kafaki, A modified scaling parameter for the memoryless BFGS updating formula, Numer. Algorithms, 72 (2016), 425-433.  doi: 10.1007/s11075-015-0053-z.

[13]

S. Babaie-Kafaki, A hybrid scaling parameter for the scaled memoryless BFGS method based on the $\ell_{\infty}$ matrix norm, Int. J. Comput. Math., 96 (2019), 1595-1602.  doi: 10.1080/00207160.2018.1465940.

[14]

S. Babaie-Kafaki and Z. Aminifard, Two-parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length, Numer. Algorithms, 82 (2019), 1345-1357.  doi: 10.1007/s11075-019-00658-1.

[15]

S. Babaie-Kafaki and R. Ghanbari, A modified scaled conjugate gradient method with global convergence for nonconvex functions, Bull. Belg. Math. Soc. Simon Stevin, 21 (2014), 465-477. 

[16]

S. Babaie-Kafaki and R. Ghanbari, A linear hybridization of the Hestenes–Stiefel method and the memoryless BFGS technique, Mediterr. J. Math., 15 (2018), 86.  doi: 10.1007/s00009-018-1132-x.

[17]

H. Badem, A. Basturk, A. Caliskan and M. E. Yuksel, A new efficient training strategy for deep neural networks by hybridization of artificial bee colony and limited-memory BFGS optimization algorithms, Neurocomputing, 266 (2017), 506-526.  doi: 10.1016/j.neucom.2017.05.061.

[18]

M. Bai, J. Zhao and Z. Zhang, A descent cautious BFGS method for computing US-eigenvalues of symmetric complex tensors, J. Global Optim., 76 (2020), 889-911.  doi: 10.1007/s10898-019-00843-5.

[19]

J. Barzilai and J. M. Borwein, Two-point stepsize gradient methods, IMA J. Numer. Anal., 8 (1988), 141-148.  doi: 10.1093/imanum/8.1.141.

[20]

F. Biglari and A. Ebadian, Limited memory BFGS method based on a high-order tensor model, Comput. Optim. Appl., 60 (2015), 413-422.  doi: 10.1007/s10589-014-9678-4.

[21]

M. Borhani, Multi-label Log-Loss function using L–BFGS for document categorization, Eng. Appl. Artif. Intell., 91 (2020), 103623.  doi: 10.1016/j.engappai.2020.103623.

[22]

Y. H. Dai and L. Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87-101.  doi: 10.1007/s002450010019.

[23]

R. Dehghani, N. Bidabadi and M. M. Hosseini, A new modified BFGS method for solving systems of nonlinear equations, J. Interdiscip. Math., 22 (2019), 75-89.  doi: 10.1080/09720502.2019.1574065.

[24]

J. E. Dennis, H. J. Martínez and R. A. Tapia, Convergence theory for the structured BFGS secant method with an application to nonlinear least squares, J. Optim. Theory Appl., 61 (1989), 161-178.  doi: 10.1007/BF00962795.

[25]

E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Programming, 91 (2002), 201-213.  doi: 10.1007/s101070100263.

[26]

A. Ebrahimi and G. B. Loghmani, Shape modeling based on specifying the initial B-spline curve and scaled BFGS optimization method, Multimed. Tools Appl., 77 (2018), 30331-30351.  doi: 10.1007/s11042-018-6109-z.

[27]

I. E. Livieris, An advanced active set L–BFGS algorithm for training weight-constrained neural networks, Neural. Comput. Applic., 32 (2020), 6669-6684.  doi: 10.1007/s00521-019-04689-6.

[28]

H. Esmaeili, S. Shabani and M. Kimiaei, A new generalized shrinkage conjugate gradient method for sparse recovery, Calcolo, 56 (2019), 38 pp. doi: 10.1007/s10092-018-0296-x.

[29]

J. A. Ford and I. A. Moghrabi, Multi-step quasi–Newton methods for optimization, J. Comput. Appl. Math., 50 (1994), 305-323.  doi: 10.1016/0377-0427(94)90309-3.

[30]

N. I. M. Gould, D. Orban and P. L. Toint, CUTEr: A constrained and unconstrained testing environment, revisited, ACM Trans. Math. Software, 29 (2003), 373-394.  doi: 10.1145/962437.962439.

[31]

L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal., 23 (1986), 707-716.  doi: 10.1137/0723046.

[32]

W. W. Hager and H. Zhang, Algorithm 851: CG_Descent, a conjugate gradient method with guaranteed descent, ACM Trans. Math. Software, 32 (2006), 113-137. 

[33]

D. H. Li and M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, J. Comput. Appl. Math., 129 (2001), 15-35.  doi: 10.1016/S0377-0427(00)00540-9.

[34]

D. H. Li and M. Fukushima, On the global convergence of the BFGS method for nonconvex unconstrained optimization problems, SIAM J. Optim., 11 (2001), 1054-1064.  doi: 10.1137/S1052623499354242.

[35]

M. Li, A modified Hestenes–Stiefel conjugate gradient method close to the memoryless BFGS quasi–Newton method, Optim. Methods Softw., 33 (2018), 336-353.  doi: 10.1080/10556788.2017.1325885.

[36]

I. E. Livieris, V. Tampakas and P. Pintelas, A descent hybrid conjugate gradient method based on the memoryless BFGS update, Numer. Algor., 79 (2018), 1169-1185.  doi: 10.1007/s11075-018-0479-1.

[37]

L. Z. Lu, M. K. Ng and F. R. Lin, Approximation BFGS methods for nonlinear image restoration, J. Comput. Appl. Math., 226 (2009), 84-91.  doi: 10.1016/j.cam.2008.05.056.

[38]

A. Mohammad Nezhad, R. Aliakbari Shandiz and A. Eshraghniaye Jahromi, A particle swarm-BFGS algorithm for nonlinear programming problems, Comput. Oper. Res., 40 (2013), 963-972.  doi: 10.1016/j.cor.2012.11.008.

[39]

J. Nocedal and S. J. Wright, Numerical Optimization, 2$^{nd}$ edition, Series in Operations Research and Financial Engineering. Springer, New York, 2006.

[40]

S. S. Oren and D. G. Luenberger, Self-scaling variable metric (SSVM) algorithms. I. Criteria and sufficient conditions for scaling a class of algorithms, Management Sci., 20 (1973/74), 845-862.  doi: 10.1287/mnsc.20.5.845.

[41]

S. S. Oren and E. Spedicato, Optimal conditioning of self-scaling variable metric algorithms, Math. Programming, 10 (1976), 70-90.  doi: 10.1007/BF01580654.

[42]

C. Shen, C. Fan, Y. Wang and W. Xue, Limited memory BFGS algorithm for the matrix approximation problem in Frobenius norm, Comput. Appl. Math., 39 (2020), 43.  doi: 10.1007/s40314-020-1089-9.

[43]

K. Sugiki, Y. Narushima and H. Yabe, Globally convergent three–term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization, J. Optim. Theory Appl., 153 (2012), 733-757.  doi: 10.1007/s10957-011-9960-x.

[44]

W. Sun and Y. X. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer Optimization and Its Applications, 1. Springer, New York, 2006.

[45]

Z. Wei, G. Li and L. Qi, New quasi–Newton methods for unconstrained optimization problems, Appl. Math. Comput., 175 (2006), 1156-1188.  doi: 10.1016/j.amc.2005.08.027.

[46]

Z. Wei, G. Yu, G. Yuan and Z. Lian, The superlinear convergence of a modified BFGS-type method for unconstrained optimization, Comput. Optim. Appl., 29 (2004), 315-332.  doi: 10.1023/B:COAP.0000044184.25410.39.

[47]

C. Xu and J. Z. Zhang, A survey of quasi–Newton equations and quasi–Newton methods for optimization, Ann. Oper. Res., 103 (2001), 213-234.  doi: 10.1023/A:1012959223138.

[48]

F. Yang, M. Ding, X. Zhang, W. Hou and C. Zhong, Non-rigid multi-modal medical image registration by combining L–BFGS–B with cat swarm optimization, Inform. Sciences, 316 (2015), 440-456.  doi: 10.1016/j.ins.2014.10.051.

[49]

X. Yao and Z. Wang, Broad echo state network for multivariate time series prediction, J. Franklin Inst., 356 (2019), 4888-4906.  doi: 10.1016/j.jfranklin.2019.01.027.

[50]

F. Yin, Y. N. Wang and S. N. Wei, Inverse kinematic solution for robot manipulator based on electromagnetism-like and modified DFP algorithms, Acta Automatica Sinica, 37 (2011), 74-82.  doi: 10.3724/SP.J.1004.2011.00074.

[51]

X. Yuan, W. Huang, P.-A. Absil and K. A. Gallivan, A Riemannian limited-memory BFGS algorithm for computing the matrix geometric mean, Procedia Comput. Sci., 80 (2016), 2147-2157.  doi: 10.1016/j.procs.2016.05.534.

[52]

Y. X. Yuan, A modified BFGS algorithm for unconstrained optimization, IMA J. Numer. Anal., 11 (1991), 325-332.  doi: 10.1093/imanum/11.3.325.

[53]

H. Zhang, K. Wang, X. Zhou and W. Wang, Using DFP algorithm for nodal demand estimation of water distribution networks, KSCE J. Civ. Eng., 22 (2018), 2747-2754.  doi: 10.1007/s12205-018-0176-6.

[54]

J. Z. Zhang, N. Y. Deng and L. H. Chen, New quasi–Newton equation and related methods for unconstrained optimization, J. Optim. Theory Appl., 102 (1999), 147-167.  doi: 10.1023/A:1021898630001.

[55]

W. Zhou, A modified BFGS type quasi–Newton method with line search for symmetric nonlinear equations problems, J. Comput. Appl. Math., 367 (2020), 112454.  doi: 10.1016/j.cam.2019.112454.

[56]

W. Zhou and L. Zhang, A nonlinear conjugate gradient method based on the MBFGS secant condition, Optim. Methods Softw., 21 (2006), 707-714.  doi: 10.1080/10556780500137041.


Figure 1.  Dolan–Moré (DM) performance profile outputs for DSSD1, DSSD2, DSSD3, DSSD4 and SSD
Figure 2.  DM performance profile outputs for NMDSSD1, NMDSSD2, NMDSSD3, NMDSSD4 and NMSSD
Figure 3.  DM performance profile outputs for DSMBFGS1, DSMBFGS2 and SMBFGS
Figure 4.  DM performance profile outputs for DSMDFP1, DSMDFP2 and SMDFP
Figure 5.  DM performance profile outputs for DSMBFGS1, LMBFGS, TPSMBFGS, MLBFGSCG1 and MLBFGSCG2
Figure 6.  Compressed sensing outputs for the Gaussian matrix
Figure 7.  Compressed sensing outputs for the scaled Gaussian matrix
Figure 8.  Compressed sensing outputs for the orthogonalized Gaussian matrix
Figure 9.  Compressed sensing outputs for the Bernoulli matrix
Figure 10.  Compressed sensing outputs for the Hadamard matrix
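
The compressed sensing figures correspond to sparse signal recovery tests with different sensing matrices. As a rough, reproducible stand-in for such an experiment (not the authors' code; the problem sizes, the smoothed $\ell_1$ term, and the use of SciPy's limited-memory quasi–Newton solver are assumptions), a Gaussian-matrix instance can be set up as follows:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative sizes: m measurements of an n-dimensional signal with k nonzeros.
m, n, k = 240, 1024, 30
tau, eps = 5e-3, 1e-8            # l1 weight and smoothing parameter

# Gaussian sensing matrix and a k-sparse ground-truth signal.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 1e-4 * rng.standard_normal(m)

def objective(x):
    # Least-squares data fit plus a smoothed l1 penalty; returns value and gradient.
    r = A @ x - b
    s = np.sqrt(x * x + eps)
    return 0.5 * r @ r + tau * s.sum(), A.T @ r + tau * x / s

# L-BFGS-B serves here as a generic limited-memory quasi-Newton stand-in
# for the diagonally scaled memoryless methods tested in the paper.
res = minimize(objective, np.zeros(n), jac=True, method="L-BFGS-B",
               options={"maxiter": 500})
print("relative recovery error:",
      np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))

Scaled Gaussian, orthogonalized Gaussian, Bernoulli and Hadamard sensing matrices can be substituted for A to mimic the remaining test cases.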
Related articles:
[1]

Hong Seng Sim, Chuei Yee Chen, Wah June Leong, Jiao Li. Nonmonotone spectral gradient method based on memoryless symmetric rank-one update for large-scale unconstrained optimization. Journal of Industrial and Management Optimization, 2021  doi: 10.3934/jimo.2021143

[2]

Shummin Nakayama, Yasushi Narushima, Hiroshi Yabe. Memoryless quasi-Newton methods based on spectral-scaling Broyden family for unconstrained optimization. Journal of Industrial and Management Optimization, 2019, 15 (4) : 1773-1793. doi: 10.3934/jimo.2018122

[3]

Hong Seng Sim, Wah June Leong, Chuei Yee Chen, Siti Nur Iqmal Ibrahim. Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization. Numerical Algebra, Control and Optimization, 2018, 8 (3) : 377-387. doi: 10.3934/naco.2018024

[4]

Rouhollah Tavakoli, Hongchao Zhang. A nonmonotone spectral projected gradient method for large-scale topology optimization problems. Numerical Algebra, Control and Optimization, 2012, 2 (2) : 395-412. doi: 10.3934/naco.2012.2.395

[5]

Linfei Wang, Dapeng Tao, Ruonan Wang, Ruxin Wang, Hao Li. Big Map R-CNN for object detection in large-scale remote sensing images. Mathematical Foundations of Computing, 2019, 2 (4) : 299-314. doi: 10.3934/mfc.2019019

[6]

Yingying Li, Stanley Osher. Coordinate descent optimization for l1 minimization with application to compressed sensing; a greedy algorithm. Inverse Problems and Imaging, 2009, 3 (3) : 487-503. doi: 10.3934/ipi.2009.3.487

[7]

Mohamed Aly Tawhid. Nonsmooth generalized complementarity as unconstrained optimization. Journal of Industrial and Management Optimization, 2010, 6 (2) : 411-423. doi: 10.3934/jimo.2010.6.411

[8]

Sarra Delladji, Mohammed Belloufi, Badreddine Sellami. Behavior of the combination of PRP and HZ methods for unconstrained optimization. Numerical Algebra, Control and Optimization, 2021, 11 (3) : 377-389. doi: 10.3934/naco.2020032

[9]

Boris Kramer, John R. Singler. A POD projection method for large-scale algebraic Riccati equations. Numerical Algebra, Control and Optimization, 2016, 6 (4) : 413-435. doi: 10.3934/naco.2016018

[10]

Danuta Gaweł, Krzysztof Fujarewicz. On the sensitivity of feature ranked lists for large-scale biological data. Mathematical Biosciences & Engineering, 2013, 10 (3) : 667-690. doi: 10.3934/mbe.2013.10.667

[11]

Mahmut Çalik, Marcel Oliver. Weak solutions for generalized large-scale semigeostrophic equations. Communications on Pure and Applied Analysis, 2013, 12 (2) : 939-955. doi: 10.3934/cpaa.2013.12.939

[12]

Philippe Bonneton, Nicolas Bruneau, Bruno Castelle, Fabien Marche. Large-scale vorticity generation due to dissipating waves in the surf zone. Discrete and Continuous Dynamical Systems - B, 2010, 13 (4) : 729-738. doi: 10.3934/dcdsb.2010.13.729

[13]

Steven L. Brunton, Joshua L. Proctor, Jonathan H. Tu, J. Nathan Kutz. Compressed sensing and dynamic mode decomposition. Journal of Computational Dynamics, 2015, 2 (2) : 165-191. doi: 10.3934/jcd.2015002

[14]

Jun Chen, Wenyu Sun, Zhenghao Yang. A non-monotone retrospective trust-region method for unconstrained optimization. Journal of Industrial and Management Optimization, 2013, 9 (4) : 919-944. doi: 10.3934/jimo.2013.9.919

[15]

Lijuan Zhao, Wenyu Sun. Nonmonotone retrospective conic trust region method for unconstrained optimization. Numerical Algebra, Control and Optimization, 2013, 3 (2) : 309-325. doi: 10.3934/naco.2013.3.309

[16]

Lixing Han. An unconstrained optimization approach for finding real eigenvalues of even order symmetric tensors. Numerical Algebra, Control and Optimization, 2013, 3 (3) : 583-599. doi: 10.3934/naco.2013.3.583

[17]

Guanghui Zhou, Qin Ni, Meilan Zeng. A scaled conjugate gradient method with moving asymptotes for unconstrained optimization problems. Journal of Industrial and Management Optimization, 2017, 13 (2) : 595-608. doi: 10.3934/jimo.2016034

[18]

Wataru Nakamura, Yasushi Narushima, Hiroshi Yabe. Nonlinear conjugate gradient methods with sufficient descent properties for unconstrained optimization. Journal of Industrial and Management Optimization, 2013, 9 (3) : 595-619. doi: 10.3934/jimo.2013.9.595

[19]

Xin Zhang, Jie Wen, Qin Ni. Subspace trust-region algorithm with conic model for unconstrained optimization. Numerical Algebra, Control and Optimization, 2013, 3 (2) : 223-234. doi: 10.3934/naco.2013.3.223

[20]

Ying Zhang, Ling Ma, Zheng-Hai Huang. On phaseless compressed sensing with partially known support. Journal of Industrial and Management Optimization, 2020, 16 (3) : 1519-1526. doi: 10.3934/jimo.2019014
