A trust region method for nonsmooth convex optimization
We propose an iterative method that solves a nonsmooth
convex optimization problem by converting the original
objective function into a once continuously differentiable
function via the Moreau-Yosida regularization.
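For reference, the Moreau-Yosida regularization of a closed proper convex function $f$ with parameter $\lambda > 0$ is defined by
$$F_\lambda(x) = \min_y \Big\{ f(y) + \frac{1}{2\lambda}\,\|y - x\|^2 \Big\},$$
which is convex and continuously differentiable with gradient $\nabla F_\lambda(x) = \big(x - p_\lambda(x)\big)/\lambda$, where $p_\lambda(x)$ is the unique minimizer on the right-hand side (the proximal point); moreover, $F_\lambda$ has the same minimizers as $f$.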
The proposed method uses approximate values of the function
and gradient of the Moreau-Yosida regularization in place of
the exact values, which are in general too expensive to compute.
In this setting, Fukushima and Qi (1996) and Rauf
and Fukushima (2000) proposed a proximal Newton method and
a proximal BFGS method, respectively, for nonsmooth convex optimization.
While these methods employ a line search strategy
to achieve global convergence, the method proposed in this paper
uses a trust region strategy.
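To make the trust region strategy concrete, the following is a minimal Python sketch that minimizes the Moreau-Yosida regularization of the $\ell_1$ norm, whose proximal point is soft-thresholding; the Cauchy-point step rule and the ratio-based radius update are standard textbook choices assumed for illustration, not the algorithm analyzed in this paper.

import numpy as np

# Assumed concrete instance: f(x) = ||x||_1, whose proximal point is
# soft-thresholding (an illustrative choice, not the paper's setting).
def prox_l1(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def envelope_l1(x, lam):
    """Moreau-Yosida regularization F(x) = min_y ||y||_1 + ||y - x||^2 / (2 lam)."""
    p = prox_l1(x, lam)
    return np.sum(np.abs(p)) + np.sum((p - x) ** 2) / (2.0 * lam)

def grad_envelope_l1(x, lam):
    """Gradient of the regularization: (x - p(x)) / lam."""
    return (x - prox_l1(x, lam)) / lam

def trust_region_minimize(x, lam=1.0, delta=1.0, tol=1e-8, max_iter=200):
    """Generic trust region loop on the envelope: model the objective by
    m(d) = g.d + ||d||^2 / (2 lam), take a steepest-descent (Cauchy) step
    clipped to the region, and update the radius from the reduction ratio."""
    for _ in range(max_iter):
        g = grad_envelope_l1(x, lam)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        d = -g * min(lam, delta / gnorm)             # step within the region
        predicted = -(g @ d + d @ d / (2.0 * lam))   # model reduction, always > 0
        actual = envelope_l1(x, lam) - envelope_l1(x + d, lam)
        rho = actual / predicted
        if rho > 0.25:                               # accept the trial step
            x = x + d
        if rho > 0.75:                               # good agreement: expand radius
            delta *= 2.0
        elif rho < 0.25:                             # poor agreement: shrink radius
            delta *= 0.5
    return x

print(trust_region_minimize(np.array([3.0, -2.0, 0.5])))  # -> approximately zero

The model curvature 1/lam reflects the fact that the gradient of the Moreau-Yosida regularization is Lipschitz continuous with constant 1/lam, so the predicted reduction is a reliable lower bound on the actual reduction when the step is accepted.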
We establish global and superlinear convergence of the method
under appropriate assumptions.