NACO
Convergence analysis of sparse quasi-Newton updates with positive definite matrix completion for two-dimensional functions
Yuhong Dai, Nobuo Yamashita
Numerical Algebra, Control & Optimization 2011, 1(1): 61-69 doi: 10.3934/naco.2011.1.61
In this paper, we briefly review extensions of quasi-Newton methods to large-scale optimization. Specifically, based on the idea of maximum determinant positive definite matrix completion, Yamashita (2008) proposed a new sparse quasi-Newton update, called MCQN, for unconstrained optimization problems with sparse Hessian structures. In exchange for relaxing the secant equation, the MCQN update avoids solving difficult subproblems and overcomes the ill-conditioning of approximate Hessian matrices. This paper gives a global convergence analysis for the MCQN update with Broyden's convex family, assuming that the objective function is uniformly convex and that the problem dimension is two.
    This paper is dedicated to Professor Masao Fukushima on the occasion of his 60th birthday.
keywords: quasi-Newton method, global convergence, large-scale optimization, positive definite matrix completion, sparsity
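
For reference, a minimal recap in standard quasi-Newton notation (an assumption; the paper's own notation may differ) of the secant equation that MCQN relaxes and the Broyden convex family update appearing in the analysis:

    \[
      B_{k+1} s_k = y_k, \qquad
      s_k = x_{k+1} - x_k, \quad
      y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
    \]
    \[
      B_{k+1}^{\phi} = B_k
        - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
        + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}
        + \phi\,(s_k^{\top} B_k s_k)\, v_k v_k^{\top},
      \qquad
      v_k = \frac{y_k}{y_k^{\top} s_k} - \frac{B_k s_k}{s_k^{\top} B_k s_k},
    \]

where \(\phi \in [0,1]\) parametrizes Broyden's convex family (\(\phi = 0\) gives BFGS, \(\phi = 1\) gives DFP). As the abstract states, MCQN does not enforce the secant equation exactly; instead it completes a partially specified sparse matrix to its maximum determinant positive definite completion.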
JIMO
Analysis of monotone gradient methods
Yuhong Dai, Ya-xiang Yuan
Journal of Industrial & Management Optimization 2005, 1(2): 181-192 doi: 10.3934/jimo.2005.1.181
The gradient method is one of the simplest methods in nonlinear optimization. In this paper, we give a brief review of monotone gradient methods and study their numerical properties by introducing a new technique of long-term observation. We find that a monotone gradient algorithm recently proposed by Yuan shares with the Barzilai-Borwein (BB) method the property that the gradient components with respect to the eigenvectors of the function's Hessian decrease together. This might partly explain why Yuan's algorithm is comparable to the BB method in practice. We also provide examples showing that the alternate minimization algorithm and the other algorithm by Yuan may fall into cycles. Some more efficient gradient algorithms are provided; in particular, one of them is monotone and performs better than the BB method in the quadratic case.
keywords: gradient method, monotone, nonmonotone, cycle, strictly convex quadratics
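
As a concrete baseline for the methods discussed above, here is a minimal NumPy sketch of the classical (nonmonotone) BB method on a strictly convex quadratic, using the standard BB1 step length alpha_k = s^T s / (s^T y). All names here (bb_gradient, etc.) are illustrative; this is the textbook rule, not code from the paper.

    import numpy as np

    def bb_gradient(A, b, x0, tol=1e-8, max_iter=500):
        """BB gradient method for f(x) = 0.5 x^T A x - b^T x,
        with A symmetric positive definite (illustrative sketch)."""
        x = x0.astype(float)
        g = A @ x - b                        # gradient of the quadratic
        alpha = 1.0 / np.linalg.norm(A, 2)   # safe first step: 1 / lambda_max
        for k in range(max_iter):
            if np.linalg.norm(g) <= tol:
                break
            x_new = x - alpha * g
            g_new = A @ x_new - b
            s, y = x_new - x, g_new - g
            alpha = (s @ s) / (s @ y)        # BB1 step for the next iterate
            x, g = x_new, g_new
        return x, k

    # Example: random SPD quadratic in R^10
    rng = np.random.default_rng(0)
    M = rng.standard_normal((10, 10))
    A = M @ M.T + 10.0 * np.eye(10)          # symmetric positive definite
    b = rng.standard_normal(10)
    x_star, iters = bb_gradient(A, b, np.zeros(10))
    print(iters, np.linalg.norm(A @ x_star - b))

Note that the BB iteration is generally nonmonotone in the objective value; the monotone algorithms of Yuan analyzed in the paper modify the step-length choice, and the abstract reports that one such monotone variant outperforms this BB baseline on quadratics.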