## Journals

- Advances in Mathematics of Communications
- Big Data & Information Analytics
- Communications on Pure & Applied Analysis
- Discrete & Continuous Dynamical Systems - A
- Discrete & Continuous Dynamical Systems - B
- Discrete & Continuous Dynamical Systems - S
- Evolution Equations & Control Theory
- Inverse Problems & Imaging
- Journal of Computational Dynamics
- Journal of Dynamics & Games
- Journal of Geometric Mechanics
- Journal of Industrial & Management Optimization
- Journal of Modern Dynamics
- Kinetic & Related Models
- Mathematical Biosciences & Engineering
- Mathematical Control & Related Fields
- Mathematical Foundations of Computing
- Networks & Heterogeneous Media
- Numerical Algebra, Control & Optimization
- AIMS Mathematics
- Conference Publications
- Electronic Research Announcements
- Mathematics in Engineering

### Open Access Journals

NACO (Numerical Algebra, Control & Optimization)

In this paper, we briefly review extensions of quasi-Newton methods for
large-scale optimization. Specifically, based on the idea of maximum
determinant positive definite matrix completion, Yamashita (2008) proposed
a new sparse quasi-Newton update, called MCQN, for unconstrained
optimization problems with sparse Hessian structures. In exchange for
relaxing the secant equation, the MCQN update avoids solving difficult
subproblems and overcomes the ill-conditioning of approximate Hessian
matrices. This paper gives a global convergence analysis for the MCQN
update with Broyden's convex family, assuming that the objective function
is uniformly convex and its dimension is only two.
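As background, the secant equation that the MCQN update relaxes is the condition $B_{k+1} s_k = y_k$ enforced exactly by dense quasi-Newton updates such as BFGS. A minimal sketch of the standard BFGS update (not the MCQN update itself, which additionally imposes a sparsity pattern via matrix completion) illustrates this condition:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard (dense) BFGS update of the Hessian approximation B.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k. The update satisfies
    the secant equation B_new @ s = y exactly, provided the curvature
    condition s @ y > 0 holds.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

# Illustrative values only: check the secant equation on a small example.
B = np.eye(3)
s = np.array([1.0, 0.5, -0.2])
y = np.array([2.0, 1.0, 0.3])
B_new = bfgs_update(B, s, y)
print(np.allclose(B_new @ s, y))  # the secant equation holds
```

The dense update above fills in the whole matrix, which is what makes it impractical for large sparse Hessians; MCQN instead keeps the prescribed sparsity and only asks that the secant equation hold approximately.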

This paper is dedicated to Professor Masao Fukushima on the occasion of his 60th birthday.

JIMO (Journal of Industrial & Management Optimization)

The gradient method is one of the simplest methods in nonlinear optimization.
In this paper, we give a brief review of monotone gradient methods and
study their numerical properties by introducing a new technique of
long-term observation. We find that one monotone gradient algorithm
recently proposed by Yuan shares with the Barzilai-Borwein (BB) method
the property that the gradient components with respect to the
eigenvectors of the function's Hessian decrease together. This might
partly explain why Yuan's algorithm is comparable to the BB method in
practice. We also provide examples showing that the alternate
minimization algorithm and the other algorithm by Yuan may fall into
cycles. Some more efficient gradient algorithms are provided; in
particular, one of them is monotone and performs better than the BB
method in the quadratic case.
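For reference, the BB method against which these algorithms are compared uses the step size $\alpha_k = s_{k-1}^{\top} s_{k-1} / s_{k-1}^{\top} y_{k-1}$ (the BB1 choice) and is generally nonmonotone. A minimal sketch on a convex quadratic, with the initial step size and iteration limits chosen for illustration:

```python
import numpy as np

def bb_gradient(grad, x0, tol=1e-8, max_iter=1000, alpha0=1e-4):
    """Barzilai-Borwein gradient method with the BB1 step size.

    grad: callable returning the gradient at x.
    alpha0: step size for the first iteration (an arbitrary choice here).
    Note: the iterates are generally nonmonotone in function value.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 step size: alpha = s^T s / s^T y (positive on convex quadratics)
        alpha = (s @ s) / sy if sy > 0 else alpha0
        x, g = x_new, g_new
    return x

# Example: minimize f(x) = 0.5 x^T A x with an ill-conditioned diagonal A.
A = np.diag([1.0, 10.0])
x_star = bb_gradient(lambda x: A @ x, np.array([5.0, 3.0]))
```

On this quadratic the gradient components along the Hessian eigenvectors are exactly the scaled coordinates of the iterate, which is the quantity whose joint decrease the paper observes for both the BB method and Yuan's monotone algorithm.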
