
Proximal point nonlinear rescaling method for convex optimization

Abstract
  • Nonlinear rescaling (NR) methods alternate finding an unconstrained minimizer of the Lagrangian for the equivalent problem in the primal space (an infinite procedure) with a Lagrange multipliers update.
        We introduce and study a proximal point nonlinear rescaling (PPNR) method that preserves the convergence and the linear convergence rate of the original NR method while not requiring an infinite procedure at each step.
        The critical component of our analysis is the equivalence of the NR method with dynamic scaling parameter update to an interior quadratic proximal point method for the dual problem in a dual space rescaled from step to step.
        By adding the classical quadratic proximal term to the primal objective function, the PPNR step can be viewed as a primal-dual proximal point mapping. This allows a wide variety of non-quadratic augmented Lagrangian methods to be analyzed from a unique and general point of view, using tools typical of the classical quadratic proximal-point technique.
        We prove convergence of the primal-dual PPNR sequence under minimal assumptions on the input data and establish a $q$-linear rate of convergence under the standard second-order optimality conditions.
    Mathematics Subject Classification: 90C25; 90C30.


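As a rough sketch of the scheme the abstract describes (following the nonlinear rescaling framework of the references below; $\psi$ is a smooth rescaling transformation with $\psi(0)=0$, $\psi'(0)=1$, applied to the problem $\min f(x)$ subject to $c_i(x)\ge 0$, $i=1,\dots,m$ — the precise PPNR step, with inexact minimization, is given in the paper itself):
\begin{align*}
x^{s+1} &= \arg\min_{x}\Big\{ f(x) \;-\; k^{-1}\sum_{i=1}^{m}\lambda_i^{s}\,\psi\big(k\,c_i(x)\big) \;+\; \tfrac{1}{2}\|x-x^{s}\|^{2}\Big\},\\
\lambda_i^{s+1} &= \lambda_i^{s}\,\psi'\big(k\,c_i(x^{s+1})\big), \qquad i=1,\dots,m.
\end{align*}
The quadratic term $\tfrac{1}{2}\|x-x^{s}\|^{2}$ is the added proximal term; dropping it recovers the original NR step, whose primal minimization is the infinite procedure mentioned above.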
  • [1]

    A. Auslender, M. Teboulle and S. Ben-Tiba, Interior proximal and multipliers methods based on second-order homegeneous kernels, Mathematics of Operations Research, 24 (1999), 645-668.doi: doi:10.1287/moor.24.3.645.


    D. Bertsekas, "Constrained Optimization and Lagrange Multipliers Methods," Academic Press, New York, 1982.


    I. Griva and R. Polyak, Primal-Dual Nonlinear Rescaling Method with Dynamic Scaling Parameter Update, Mathematical Programming, 106 (2006), 237-259.doi: doi:10.1007/s10107-005-0603-6.


    G. J. Minty, Monotone (nonlinear) Operators in Hilbert Space, Duke Math Journal, 29 (1962), 341-346.doi: doi:10.1215/S0012-7094-62-02933-2.


    B. T. Polyak, "Introduction to Optimization," Software Inc., New York, 1987.


    R. Polyak, Modified Barrier Functions (theory and methods), Mathematical Programming, 54 (1992), 177-222.doi: doi:10.1007/BF01586050.


    R. Polyak, Nonlinear Rescaling vs. Smoothing Technique in convex optimization, Mathematical Programming, 92 (2002), 197-235.doi: doi:10.1007/s101070100293.


    R. Polyak, Nonlinear rescaling as Interior Quadratic Prox method in convex optimization, Computational Optimization and Applications, 35 (2006), 347-373.doi: doi:10.1007/s10589-006-9759-0.


    R. Polyak and M. Teboulle, Nonlinear Rescaling and Proximal-like Methods in convex optimization, Mathematical Programming, 76 (1997), 265-284.doi: doi:10.1007/BF02614440.


    R. Polyak and I. Griva, Primal-Dual Nonlinear Rescaling method for convex optimization, Journal of Optimization Theory and Applications, 122 (2004), 111-156.doi: doi:10.1023/B:JOTA.0000041733.24606.99.


    R. T. Rockafellar, Monotone Operators and The Proximal Point Algorithm, SIAM Journal of Control and Optimization, 14 (1976), 887-898.doi: doi:10.1137/0314056.


    R. T. Rockafellar, Augmented Lagrangians and Applications of the Proximal Point algorithm in convex programming, Mathematics of Operations Research, 1 (1976), 97-116.doi: doi:10.1287/moor.1.2.97.


    R. T. Rockafellar, "Convex Analysis," Princeton University Press, Princeton, 1996.


    P. Tseng and D. Bertsekas, On the convergence of exponential multipliers method for convex programming, Mathematical Programming, 76 (1993), 1-19.doi: doi:10.1007/BF01580598.

