The Glowinski–Le Tallec splitting method revisited: A general convergence and convergence rate analysis

The work of L.-Z. Liao was supported in part by grants from Hong Kong Baptist University (FRG) and the General Research Fund (GRF) of Hong Kong.

  • In this paper, we focus on a splitting method called the $ \theta $-scheme proposed by Glowinski and Le Tallec in [17,20,27]. First, we present a detailed convergence analysis in a Hilbert space and propose a general convergent inexact $ \theta $-scheme. Second, for unconstrained problems, we prove the convergence of the $ \theta $-scheme and establish a sublinear convergence rate in terms of the objective value. Furthermore, a practical inexact $ \theta $-scheme is derived for $ l_2 $-loss based problems and its convergence is proved. Third, for constrained problems, although the convergence of the $ \theta $-scheme is available in the literature, its sublinear convergence rate has remained unknown; we establish one via a variational reformulation of the solution set. In addition, to relax the condition imposed on the $ \theta $-scheme, we propose a new variant and show its convergence. Finally, some preliminary numerical experiments demonstrate the efficiency of the $ \theta $-scheme and our proposed methods.

    Mathematics Subject Classification: Primary: 90C25; Secondary: 49M27.
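
    For context, the classical $ \theta $-scheme of Glowinski and Le Tallec for finding a zero of the sum of two maximal monotone operators $ A+B $ can be sketched as the following three-substep iteration (a sketch following the presentation in [20,25]; the substep parameters are written here as $ \beta_1 $ and $ \beta_2 $, and the notation is not necessarily identical to that used in the paper; for multivalued $ B $, the forward evaluations denote selected elements of $ B(\cdot) $):

    \begin{equation*} \begin{aligned} u^{k+\theta} & = (I+\beta_1 A)^{-1}\bigl(u^{k}-\beta_1 B(u^{k})\bigr), \\ u^{k+1-\theta} & = (I+\beta_2 B)^{-1}\bigl(u^{k+\theta}-\beta_2 a^{k+\theta}\bigr), \\ u^{k+1} & = (I+\beta_1 A)^{-1}\bigl(u^{k+1-\theta}-\beta_1 B(u^{k+1-\theta})\bigr), \end{aligned} \end{equation*}

    where $ a^{k+\theta} = \beta_1^{-1}\bigl(u^{k}-\beta_1 B(u^{k})-u^{k+\theta}\bigr) \in A(u^{k+\theta}) $ is the element produced by the first substep. Roughly speaking, the inexact variants studied in the paper allow these resolvent and forward evaluations to be computed approximately.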

  • Figure 1.  Left to right: original image, blurred image, and image recovered by the $ \theta $-scheme on TV-based deblurring

    Figure 2.  Visualization of function values, MSE, and ISNR on TV-based deblurring

    Figure 3.  Objective value with respect to the computing time and residual of the linear equation (29) with respect to the outer loop iteration on "news20.scale"

    Figure 4.  Objective value with respect to the computing time and residual of the linear equation (29) with respect to the outer loop iteration on "url_combined"

    Table 1.  A sensitivity test of the $ \theta $-scheme on TV-based deblurring

    $ \beta_1 $ $ \beta_2 $ Iteration Time (s) Objective MSE ISNR
    0.1 0.1 3000 106.41 0.6259 1.6135e-04 11.9101
    0.3 0.2 2852 101.04 0.6259 1.5999e-04 11.9470
    0.7 0.5 1202 42.45 0.6259 1.5999e-04 11.9470
    0.8 0.5 1088 38.57 0.6259 1.5999e-04 11.9470
    0.9 0.8 879 31.01 0.6259 1.5999e-04 11.9470
    0.9 0.9 847 29.82 0.6259 1.5999e-04 11.9470

    Table 2.  Comparison of $ \theta $-scheme, ADMM+GPRSM, GADMM, and FISTA on TV-based deblurring under the same inner precision

    Inner precision Algorithm Iteration Mean/Max FGP Time (s) Objective MSE ISNR
    1e-02 $ \theta $-scheme 763 4.00/4 29.00 0.6259 1.5999e-04 11.9470
    ADMM+GPRSM 763 4.00/4 30.00 0.6259 1.5999e-04 11.9470
    GADMM 1526 2.00/2 33.43 0.6259 1.5999e-04 11.9470
    FISTA 3000 2.00/2 53.92 0.6261 1.5992e-04 11.9489
    1e-04 $ \theta $-scheme 763 4.00/6 28.84 0.6259 1.5999e-04 11.9470
    ADMM+GPRSM 763 4.00/6 29.99 0.6259 1.5999e-04 11.9470
    GADMM 1526 2.00/3 33.13 0.6259 1.5999e-04 11.9470
    FISTA 3000 2.00/3 53.70 0.6261 1.5992e-04 11.9487
    1e-06 $ \theta $-scheme 763 4.60/20 28.31 0.6259 1.5999e-04 11.9470
    ADMM+GPRSM 763 4.61/20 29.25 0.6259 1.5999e-04 11.9470
    GADMM 1526 2.27/10 32.42 0.6259 1.5999e-04 11.9470
    FISTA 3000 5.45/10 82.36 0.6259 1.5996e-04 11.9476
    1e-08 $ \theta $-scheme 763 8.65/20 38.37 0.6259 1.5999e-04 11.9470
    ADMM+GPRSM 763 8.65/20 39.50 0.6259 1.5999e-04 11.9470
    GADMM 1526 4.32/10 42.45 0.6259 1.5999e-04 11.9470
    FISTA 1277 7.62/10 44.71 0.6259 1.5997e-04 11.9476
    1e-10 $ \theta $-scheme 762 16.61/20 58.43 0.6259 1.5999e-04 11.9470
    ADMM+GPRSM 763 16.72/20 59.79 0.6259 1.5999e-04 11.9470
    GADMM 1525 8.44/10 63.16 0.6259 1.5999e-04 11.9470
    FISTA 1214 9.87/10 51.49 0.6259 1.5996e-04 11.9476

    Table 3.  Comparison of $ \theta $-scheme, ADMM+GPRSM, GADMM, and FISTA on TV-based deblurring under different inner precisions

    Inner precision Algorithm Iteration Mean/Max FGP Time (s) Objective MSE ISNR
    1e-02 $ \theta $-scheme 763 4.00/4 27.87 0.6259 1.5999e-04 11.9470
    1e-02 ADMM+GPRSM 763 4.00/4 30.51 0.6259 1.5999e-04 11.9470
    1e-04 GADMM 1526 2.00/3 33.01 0.6259 1.5999e-04 11.9470
    1e-08 FISTA 1277 7.62/10 47.88 0.6259 1.5997e-04 11.9476

    Table 4.  Values of $ \eta $, $ L $ and $ \beta_1 (\beta_2) $ for the synthetic datasets

    (m, n, s) $ \eta $ $ L $ $ \beta_1(\beta_2) $
    ($ 1\times10^4, 1.5\times10^4, 50\% $) 878.32 1.8416e+04 1.0860e-04
    ($ 1\times10^4, 1.5\times10^4, 10\% $) 293.15 4.4668e+03 4.4775e-04
    ($ 1.5\times10^4, 2\times10^4, 5\% $) 174.12 3.2264e+03 6.1989e-04
    ($ 2\times10^4, 3\times10^4, 1\% $) 67.90 9.4158e+02 2.1241e-03
    ($ 2\times10^5, 3\times10^5, 0.1\% $) 58.27 9.0143e+02 2.2187e-03
    ($ 3\times10^5, 2\times10^6, 0.01\% $) 9.59 3.6101e+02 5.5400e-03

    Table 5.  Comparison of IN-$ \theta_{\text{LSQR}} $, IN-$ \theta_{\text{Cholesky}} $, IN-$ \theta_{1e-t} $ and IN-$ \theta $ on the synthetic datasets

    (m, n, s) Algorithm Iteration Mean/Max CG Time (s) Objective
    ($ 1\times10^4, 1.5\times10^4, 50\% $) IN-$ \theta_{\text{LSQR}} $ 9 $ \sim/\sim $ 24.72 5.8157e+04
    IN-$ \theta_{\text{Cholesky}} $ 9 $ \sim/\sim $ 710.26 5.8157e+04
    IN-$ \theta_{1e-10} $ 10 7.00/10 27.00 5.8157e+04
    IN-$ \theta_{1e-8} $ 9 5.44/8 20.46 5.8157e+04
    IN-$ \theta_{1e-6} $ 9 3.44/6 15.51 5.8157e+04
    IN-$ \theta_{1e-4} $ 500 0.03/4 319.16 5.8157e+04
    IN-$ \theta_{1e-2} $ 500 0.01/2 314.86 5.8157e+04
    IN-$ \theta $ 10 0.90/1 11.27 5.8157e+04
    ($ 1\times10^4, 1.5\times10^4, 10\% $) IN-$ \theta_{\text{LSQR}} $ 9 $ \sim/\sim $ 6.36 1.6892e+04
    IN-$ \theta_{\text{Cholesky}} $ 9 $ \sim/\sim $ 61.89 1.6892e+04
    IN-$ \theta_{1e-10} $ 9 7.11/10 6.17 1.6892e+04
    IN-$ \theta_{1e-8} $ 9 5.44/8 5.11 1.6892e+04
    IN-$ \theta_{1e-6} $ 9 3.44/6 3.87 1.6892e+04
    IN-$ \theta_{1e-4} $ 500 0.03/4 77.57 1.6892e+04
    IN-$ \theta_{1e-2} $ 500 0.01/2 79.92 1.6892e+04
    IN-$ \theta $ 9 0.89/1 2.68 1.6892e+04
    ($ 1.5\times10^4, 2\times10^4, 5\% $) IN-$ \theta_{\text{LSQR}} $ 8 $ \sim/\sim $ 5.98 1.0405e+04
    IN-$ \theta_{\text{Cholesky}} $ 8 $ \sim/\sim $ 96.69 1.0405e+04
    IN-$ \theta_{1e-10} $ 9 7.11/10 6.57 1.0405e+04
    IN-$ \theta_{1e-8} $ 8 5.63/8 4.97 1.0405e+04
    IN-$ \theta_{1e-6} $ 8 3.63/6 3.85 1.0405e+04
    IN-$ \theta_{1e-4} $ 500 0.03/4 84.29 1.0405e+04
    IN-$ \theta_{1e-2} $ 500 0.01/2 83.01 1.0405e+04
    IN-$ \theta $ 9 0.89/1 2.81 1.0405e+04
    ($ 2\times10^4, 3\times10^4, 1\% $) IN-$ \theta_{\text{LSQR}} $ 10 $ \sim/\sim $ 2.94 4.7993e+03
    IN-$ \theta_{\text{Cholesky}} $ 10 $ \sim/\sim $ 232.74 4.7993e+03
    IN-$ \theta_{1e-10} $ 10 7.10/10 3.24 4.7993e+03
    IN-$ \theta_{1e-8} $ 11 4.82/8 2.73 4.7993e+03
    IN-$ \theta_{1e-6} $ 10 3.30/6 1.96 4.7993e+03
    IN-$ \theta_{1e-4} $ 500 0.03/4 36.20 4.7993e+03
    IN-$ \theta_{1e-2} $ 500 0.01/2 35.20 4.7993e+03
    IN-$ \theta $ 10 0.90/1 1.74 4.7993e+03
    ($ 2\times10^5, 3\times10^5, 0.1\% $) IN-$ \theta_{\text{LSQR}} $ 9 $ \sim/\sim $ 38.52 4.9942e+03
    IN-$ \theta_{1e-10} $ 9 7.22/10 50.94 4.9942e+03
    IN-$ \theta_{1e-8} $ 9 5.44/8 41.94 4.9942e+03
    IN-$ \theta_{1e-6} $ 9 3.44/6 31.82 4.9942e+03
    IN-$ \theta_{1e-4} $ 500 0.03/4 644.25 4.9942e+03
    IN-$ \theta_{1e-2} $ 500 0.01/2 641.81 4.9942e+03
    IN-$ \theta $ 9 0.89/1 20.72 4.9942e+03
    ($ 3\times10^5, 2\times10^6, 0.01\% $) IN-$ \theta_{\text{LSQR}} $ 38 $ \sim/\sim $ 159.14 2.1247e+03
    IN-$ \theta_{1e-10} $ 37 5.49/8 180.66 2.1247e+03
    IN-$ \theta_{1e-8} $ 37 3.95/7 146.45 2.1247e+03
    IN-$ \theta_{1e-6} $ 37 2.43/5 113.97 2.1247e+03
    IN-$ \theta_{1e-4} $ 500 0.08/4 707.33 2.1247e+03
    IN-$ \theta_{1e-2} $ 500 0.02/2 693.12 2.1247e+03
    IN-$ \theta $ 37 0.97/1 91.95 2.1247e+03

    Table 6.  Comparison of IN-$ \theta_{\text{LSQR}} $, IN-$ \theta_{1e-t} $ and IN-$ \theta_{\text{R}} $ on "news20.scale"

    Algorithm Iteration Mean/Max CG Time (s) Objective
    IN-$ \theta_{\text{LSQR}} $ 500 $ \sim/\sim $ 26.31 6.5859e+05
    IN-$ \theta_{1e-10} $ 44 4.11/6 2.68 6.5859e+05
    IN-$ \theta_{1e-8} $ 44 3.07/5 2.28 6.5859e+05
    IN-$ \theta_{1e-6} $ 43 1.91/4 1.76 6.5859e+05
    IN-$ \theta_{1e-4} $ 500 0.08/3 10.92 6.5859e+05
    IN-$ \theta_{1e-2} $ 500 0.02/2 10.39 6.5860e+05
    IN-$ \theta_{\text{R}} $ 46 0.61/2 1.36 6.5859e+05

    Table 7.  Comparison of IN-$ \theta_{\text{LSQR}} $, IN-$ \theta_{1e-t} $ and IN-$ \theta_{\text{R}} $ on the "url_combined" dataset

    Algorithm Iteration Mean/Max CG Time (s) Objective
    IN-$ \theta_{\text{LSQR}} $ 29 $ \sim/\sim $ 1794.95 6.1097e+05
    IN-$ \theta_{1e-10} $ 30 4.00/4 869.09 6.0859e+05
    IN-$ \theta_{1e-8} $ 30 3.13/4 757.60 6.0859e+05
    IN-$ \theta_{1e-6} $ 30 3.00/3 740.08 6.0859e+05
    IN-$ \theta_{1e-4} $ 29 2.00/2 592.07 6.1097e+05
    IN-$ \theta_{1e-2} $ 29 1.10/2 474.73 6.1074e+05
    IN-$ \theta_{\text{R}} $ 29 0.90/2 454.99 6.1045e+05
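
    As a concrete illustration of the inexact inner solves compared in Tables 5–7, the following is a minimal, self-contained sketch (not the authors' implementation) of an inexact $ \theta $-scheme for an $ l_2 $-loss lasso problem $ \min_x \frac{1}{2}\|Ax-b\|^2+\mu\|x\|_1 $. The operator assignment (soft-thresholding for the $ l_1 $ term in the first and third substeps, a linear system for the smooth term in the middle substep), the parameter names beta1/beta2, and the fixed-tolerance conjugate gradient inner solver are illustrative assumptions that mirror the role of the "Mean/Max CG" column; they are not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (resolvent of the l1 subdifferential).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def cg(matvec, rhs, x0, tol=1e-6, maxiter=50):
    # Plain conjugate gradient for the SPD system matvec(x) = rhs, stopped at a
    # fixed relative residual tolerance (this plays the role of the inner precision).
    x = x0.copy()
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= tol * np.linalg.norm(rhs):
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_theta_scheme(A, b, mu, beta1, beta2, outer_iters=50, cg_tol=1e-6):
    # Hypothetical inexact theta-scheme for min_x 0.5*||Ax - b||^2 + mu*||x||_1:
    # the l1 term is handled by its resolvent in substeps 1 and 3, and the smooth
    # l2-loss term is handled implicitly (a linear system) in the middle substep.
    n = A.shape[1]
    x = np.zeros(n)
    grad = lambda z: A.T @ (A @ z - b)  # gradient of the smooth l2-loss term
    for _ in range(outer_iters):
        # Substep 1: resolvent of the l1 term (soft-thresholding).
        v = x - beta1 * grad(x)
        y = soft_threshold(v, beta1 * mu)
        a = (v - y) / beta1  # subgradient of mu*||.||_1 at y, by prox optimality
        # Substep 2: implicit step on the smooth term, i.e. solve
        # (I + beta2 * A^T A) z = y - beta2 * a + beta2 * A^T b, inexactly by CG.
        matvec = lambda w: w + beta2 * (A.T @ (A @ w))
        rhs = y - beta2 * a + beta2 * (A.T @ b)
        z = cg(matvec, rhs, x0=y, tol=cg_tol)
        # Substep 3: another resolvent of the l1 term.
        x = soft_threshold(z - beta1 * grad(z), beta1 * mu)
    return x

# Tiny usage example on random data (not one of the paper's datasets).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 500))
x_true = rng.standard_normal(500) * (rng.random(500) < 0.05)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x = inexact_theta_scheme(A, b, mu=0.1, beta1=5e-4, beta2=5e-4)
print(0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.1 * np.abs(x).sum())
```

    Tightening cg_tol plays the role of the inner precision 1e-t in the tables: a loose inner tolerance can stall the outer iteration, while an unnecessarily tight one increases the per-iteration cost.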
  • [1] M. V. Afonso, J. M. Bioucas-Dias and M. A. Figueiredo, Fast image recovery using variable splitting and constrained optimization, IEEE Trans. Image Process., 19 (2010), 2345-2356. doi: 10.1109/TIP.2010.2047910.
    [2] N. Ahmed, T. Natarajan and K. R. Rao, Discrete cosine transform, IEEE Trans. Comput., C-23 (1974), 90-93. doi: 10.1109/T-C.1974.223784.
    [3] J. B. Baillon and G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones, Israel J. Math., 26 (1977), 137-150. doi: 10.1007/BF03007664.
    [4] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, Cham, 2017. doi: 10.1007/978-3-319-48311-5.
    [5] A. Beck and M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems, IEEE Trans. Image Process., 18 (2009), 2419-2434.  doi: 10.1109/TIP.2009.2028250.
    [6] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183-202.  doi: 10.1137/080716542.
    [7] A. Beck and M. Teboulle, A fast dual proximal gradient algorithm for convex minimization and applications, Oper. Res. Lett., 42 (2014), 1-6.  doi: 10.1016/j.orl.2013.10.007.
    [8] D. P. Bertsekas, Nonlinear Programming, Athena Scientific Optimization and Computation Series, Athena Scientific, Belmont, MA, 1999.
    [9] S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3 (2010), 1-122. doi: 10.1561/2200000016.
    [10] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, 53 (2004), 475-504.  doi: 10.1080/02331930412331327157.
    [11] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168-1200.  doi: 10.1137/050626090.
    [12] J. Douglas and H. H. Rachford Jr., On the numerical solution of heat conduction problems in two and three space variables, Trans. Amer. Math. Soc., 82 (1956), 421-439.  doi: 10.1090/S0002-9947-1956-0084194-4.
    [13] J. Eckstein and D. P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Programming, 55 (1992), 293-318.  doi: 10.1007/BF01581204.
    [14] J. Eckstein and W. Yao, Augmented Lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results, RUTCOR Research Reports, 32 (2012).
    [15] R. Fletcher, Conjugate gradient methods for indefinite systems, in Numerical Analysis, Lecture Notes in Math., 506, Springer, Berlin, 1976, 73–89. doi: 10.1007/BFb0080116.
    [16] J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21-42.  doi: 10.1137/0802003.
    [17] R. Glowinski, Splitting methods for the numerical solution of the incompressible Navier-Stokes equations, in Vistas in Applied Mathematics, Optimization Software, New York, 1986, 57–95.
    [18] R. Glowinski, Variational Methods for the Numerical Solution of Nonlinear Elliptic Problems, CBMS-NSF Regional Conference Series in Applied Mathematics, 86, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2015. doi: 10.1137/1.9781611973785.ch1.
    [19] R. Glowinski, P. G. Ciarlet and J.-L. Lions, Numerical Methods for Fluids: Finite Element Methods for Incompressible Viscous Flow, North Holland, 2003.
    [20] R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, SIAM Studies in Applied Mathematics, 9. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1989. doi: 10.1137/1.9781611970838.
    [21] R. Glowinski, S. Leung and J. L. Qian, Operator-splitting based fast sweeping methods for isotropic wave propagation in a moving fluid, SIAM J. Sci. Comput., 38 (2016), A1195–A1223. doi: 10.1137/15M1043868.
    [22] R. Glowinski and A. Marroco, Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires, Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge Anal. Numér., 9 (1975), 41–76. doi: 10.1051/m2an/197509R200411.
    [23] R. Glowinski, S. J. Osher and W. T. Yin, Splitting Methods in Communication, Imaging, Science, and Engineering, Scientific Computation, Springer, Cham, 2016. doi: 10.1007/978-3-319-41589-5.
    [24] T. Goldstein, B. O'Donoghue, S. Setzer and R. Baraniuk, Fast alternating direction optimization methods, SIAM J. Imaging Sci., 7 (2014), 1588-1623. doi: 10.1137/120896219.
    [25] S. Haubruge, V. H. Nguyen and J. J. Strodiot, Convergence analysis and applications of the Glowinski–Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators, J. Optim. Theory Appl., 97 (1998), 645-673. doi: 10.1023/A:1022646327085.
    [26] B. S. He and X. M. Yuan, On the $O(1/n)$ convergence rate of the Douglas–Rachford alternating direction method, SIAM J. Numer. Anal., 50 (2012), 700-709.  doi: 10.1137/110836936.
    [27] P. Le Tallec, Numerical Analysis of Viscoelastic Problems, Research in Applied Mathematics, 15, Masson, Paris, 1990.
    [28] P. L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964-979.  doi: 10.1137/0716071.
    [29] G. I. Marchuk, Splitting and alternating direction methods, in Handbook of Numerical Analysis, Vol. I, Handb. Numer. Anal., 1, North-Holland, Amsterdam, 1990, 197–462.
    [30] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software, 8 (1982), 43-71.  doi: 10.1145/355984.355989.
    [31] D. W. Peaceman and H. H. Rachford Jr., The numerical solution of parabolic and elliptic differential equations, J. Soc. Indust. Appl. Math., 3 (1955), 28-41.  doi: 10.1137/0103003.
    [32] K. R. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications, Academic Press, Inc., Boston, MA, 1990. doi: 10.1016/c2009-0-22279-3.
    [33] R. T. Rockafellar, Convex Analysis, Princeton Mathematical Series, 28, Princeton University Press, Princeton, NJ, 1970.
    [34] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B, 58 (1996), 267-288.  doi: 10.1111/j.2517-6161.1996.tb02080.x.
    [35] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim., 29 (1991), 119-138.  doi: 10.1137/0329006.
    [36] P. T. Vuong and J. J. Strodiot, The Glowinski–Le Tallec splitting method revisited in the framework of equilibrium problems in Hilbert spaces, J. Global Optim., 70 (2018), 477-495.  doi: 10.1007/s10898-017-0575-0.
    [37] H. R. Yue, Q. Z. Yang, X. F. Wang and X. M. Yuan, Implementing the alternating direction method of multipliers for big datasets: A case study of least absolute shrinkage and selection operator, SIAM J. Sci. Comput., 40 (2018), A3121–A3156. doi: 10.1137/17M1146567.
    [38] C. Zalinescu, Convex Analysis in General Vector Spaces, World Scientific Publishing Co., Inc., River Edge, NJ, 2002. doi: 10.1142/9789812777096.
    [39] E. H. Zarantonello, Projections on convex sets in Hilbert space and spectral theory. I: Projections on convex sets; II: Spectral theory, in Contributions to Nonlinear Functional Analysis, Academic Press, New York, 1971, 237-424.