doi: 10.3934/jimo.2020040

The Glowinski–Le Tallec splitting method revisited: A general convergence and convergence rate analysis

Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Kowloon, Hong Kong SAR, China

Received: August 2019. Revised: October 2019. Published: February 2020.

Fund Project: The work of L.-Z. Liao was supported in part by grants from Hong Kong Baptist University (FRG) and the General Research Fund (GRF) of Hong Kong.

In this paper, we focus on a splitting method called the $ \theta $-scheme proposed by Glowinski and Le Tallec in [17,20,27]. First, we present a detailed convergence analysis in a Hilbert space and propose a generally convergent inexact $ \theta $-scheme. Second, for unconstrained problems, we prove the convergence of the $ \theta $-scheme and establish a sublinear convergence rate in terms of the objective value. Furthermore, a practical inexact $ \theta $-scheme is derived for $ l_2 $-loss based problems and its convergence is proved. Third, for constrained problems, although the convergence of the $ \theta $-scheme is available in the literature, its sublinear convergence rate had remained unknown; we establish one via a variational reformulation of the solution set. In addition, to relax the condition imposed on the $ \theta $-scheme, we propose a new variant and show its convergence. Finally, some preliminary numerical experiments demonstrate the efficiency of the $ \theta $-scheme and our proposed methods.
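To make the three-substep structure concrete, here is a minimal sketch of the $ \theta $-scheme applied to a model problem $ \min_x f(x) + g(x) $ with $ f $ nonsmooth and $ g $ smooth, following the operator-splitting formulation used by Glowinski and Le Tallec [20] and Haubruge, Nguyen and Strodiot [25]: a backward (proximal) step on $ f $, a backward step on $ g $, then another backward step on $ f $, with the explicit subgradients recovered from the proximal identity. The step sizes `beta1`, `beta2` and the 1-D lasso-type test problem are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def theta_scheme(prox_f, grad_g, prox_g, x0, beta1, beta2, iters=50):
    """Sketch of the Glowinski-Le Tallec theta-scheme for min f(x) + g(x).

    Each iteration performs three fractional steps: implicit in f / explicit
    in g, then implicit in g / explicit in f, then implicit in f again.
    The subgradient used in the explicit part is recovered from the
    proximal identity (v - prox_{beta f}(v)) / beta in df(prox_{beta f}(v)).
    """
    x = x0
    for _ in range(iters):
        # substep 1: backward step on f, forward step on g
        v1 = x - beta1 * grad_g(x)
        y = prox_f(v1, beta1)
        a = (v1 - y) / beta1           # a lies in df(y) by the prox identity
        # substep 2: backward step on g, forward step on f
        v2 = y - beta2 * a
        z = prox_g(v2, beta2)
        b = (v2 - z) / beta2           # b lies in dg(z)
        # substep 3: backward step on f again, forward step on g
        x = prox_f(z - beta1 * b, beta1)
    return x

# Illustrative 1-D problem: min |x| + 0.5*(x - 3)^2,
# whose minimizer is soft_threshold(3, 1) = 2.
c = 3.0
prox_f = soft_threshold                         # f = |.|
grad_g = lambda x: x - c                        # g = 0.5*(x - c)^2
prox_g = lambda v, t: (v + t * c) / (1.0 + t)   # prox of t*g (closed form)
x_star = theta_scheme(prox_f, grad_g, prox_g, np.zeros(1), 0.5, 0.5)
print(float(x_star[0]))  # close to 2
```

With these quadratic-plus-$ l_1 $ ingredients every substep has a closed form, which is what makes the scheme attractive; for the constrained and $ l_2 $-loss settings studied in the paper the inner subproblems are instead solved inexactly.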

Citation: Yaonan Ma, Li-Zhi Liao. The Glowinski–Le Tallec splitting method revisited: A general convergence and convergence rate analysis. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020040
References:
[1] M. V. Afonso, J. M. Bioucas-Dias and M. A. Figueiredo, Fast image recovery using variable splitting and constrained optimization, IEEE Trans. Image Process., 19 (2010), 2345-2356. doi: 10.1109/TIP.2010.2047910.
[2] N. Ahmed, T. Natarajan and K. R. Rao, Discrete cosine transform, IEEE Trans. Comput., C-23 (1974), 90-93. doi: 10.1109/T-C.1974.223784.
[3] J. B. Baillon and G. Haddad, Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones, Israel J. Math., 26 (1977), 137-150. doi: 10.1007/BF03007664.
[4] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, Cham, 2017. doi: 10.1007/978-3-319-48311-5.
[5] A. Beck and M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems, IEEE Trans. Image Process., 18 (2009), 2419-2434. doi: 10.1109/TIP.2009.2028250.
[6] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183-202. doi: 10.1137/080716542.
[7] A. Beck and M. Teboulle, A fast dual proximal gradient algorithm for convex minimization and applications, Oper. Res. Lett., 42 (2014), 1-6. doi: 10.1016/j.orl.2013.10.007.
[8] D. P. Bertsekas, Nonlinear Programming, Athena Scientific Optimization and Computation Series, Athena Scientific, Belmont, MA, 1999.
[9] S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3 (2010), 1-122. doi: 10.1561/2200000016.
[10] P. L. Combettes, Solving monotone inclusions via compositions of nonexpansive averaged operators, Optimization, 53 (2004), 475-504. doi: 10.1080/02331930412331327157.
[11] P. L. Combettes and V. R. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul., 4 (2005), 1168-1200. doi: 10.1137/050626090.
[12] J. Douglas and H. H. Rachford Jr., On the numerical solution of heat conduction problems in two and three space variables, Trans. Amer. Math. Soc., 82 (1956), 421-439. doi: 10.1090/S0002-9947-1956-0084194-4.
[13] J. Eckstein and D. P. Bertsekas, On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators, Math. Programming, 55 (1992), 293-318. doi: 10.1007/BF01581204.
[14] J. Eckstein and W. Yao, Augmented Lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results, RUTCOR Research Reports, 32 (2012).
[15] R. Fletcher, Conjugate gradient methods for indefinite systems, in Numerical Analysis, Lecture Notes in Math., 506, Springer, Berlin, 1976, 73–89. doi: 10.1007/BFb0080116.
[16] J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), 21-42. doi: 10.1137/0802003.
[17] R. Glowinski, Splitting methods for the numerical solution of the incompressible Navier-Stokes equations, in Vistas in Applied Mathematics, Optimization Software, New York, 1986, 57–95.
[18] R. Glowinski, Variational Methods for the Numerical Solution of Nonlinear Elliptic Problems, CBMS-NSF Regional Conference Series in Applied Mathematics, 86, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2015. doi: 10.1137/1.9781611973785.ch1.
[19] R. Glowinski, P. G. Ciarlet and J.-L. Lions, Numerical Methods for Fluids: Finite Element Methods for Incompressible Viscous Flow, North Holland, 2003.
[20] R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, SIAM Studies in Applied Mathematics, 9, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1989. doi: 10.1137/1.9781611970838.
[21] R. Glowinski, S. Leung and J. L. Qian, Operator-splitting based fast sweeping methods for isotropic wave propagation in a moving fluid, SIAM J. Sci. Comput., 38 (2016), A1195–A1223. doi: 10.1137/15M1043868.
[22] R. Glowinski and A. Marroco, Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires, Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge Anal. Numér., 9 (1975), 41–76. doi: 10.1051/m2an/197509R200411.
[23] R. Glowinski, S. J. Osher and W. T. Yin, Splitting Methods in Communication, Imaging, Science, and Engineering, Scientific Computation, Springer, Cham, 2016. doi: 10.1007/978-3-319-41589-5.
[24] T. Goldstein, B. O'Donoghue, S. Setzer and R. Baraniuk, Fast alternating direction optimization methods, SIAM J. Imaging Sci., 7 (2014), 1588-1623. doi: 10.1137/120896219.
[25] S. Haubruge, V. H. Nguyen and J. J. Strodiot, Convergence analysis and applications of the Glowinski–Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators, J. Optim. Theory Appl., 97 (1998), 645-673. doi: 10.1023/A:1022646327085.
[26] B. S. He and X. M. Yuan, On the $O(1/n)$ convergence rate of the Douglas–Rachford alternating direction method, SIAM J. Numer. Anal., 50 (2012), 700-709. doi: 10.1137/110836936.
[27] P. Le Tallec, Numerical Analysis of Viscoelastic Problems, Research in Applied Mathematics, 15, Masson, Paris, 1990.
[28] P. L. Lions and B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964-979. doi: 10.1137/0716071.
[29] G. I. Marchuk, Splitting and alternating direction methods, in Handbook of Numerical Analysis, Vol. I, Handb. Numer. Anal., 1, North-Holland, Amsterdam, 1990, 197–462.
[30] C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software, 8 (1982), 43-71. doi: 10.1145/355984.355989.
[31] D. W. Peaceman and H. H. Rachford Jr., The numerical solution of parabolic and elliptic differential equations, J. Soc. Indust. Appl. Math., 3 (1955), 28-41. doi: 10.1137/0103003.
[32] K. R. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications, Academic Press, Inc., Boston, MA, 1990. doi: 10.1016/c2009-0-22279-3.
[33] R. T. Rockafellar, Convex Analysis, Princeton Mathematical Series, 28, Princeton University Press, Princeton, NJ, 1970.
[34] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B, 58 (1996), 267-288. doi: 10.1111/j.2517-6161.1996.tb02080.x.
[35] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim., 29 (1991), 119-138. doi: 10.1137/0329006.
[36] P. T. Vuong and J. J. Strodiot, The Glowinski–Le Tallec splitting method revisited in the framework of equilibrium problems in Hilbert spaces, J. Global Optim., 70 (2018), 477-495. doi: 10.1007/s10898-017-0575-0.
[37] H. R. Yue, Q. Z. Yang, X. F. Wang and X. M. Yuan, Implementing the alternating direction method of multipliers for big datasets: A case study of least absolute shrinkage and selection operator, SIAM J. Sci. Comput., 40 (2018), A3121–A3156. doi: 10.1137/17M1146567.
[38] C. Zalinescu, Convex Analysis in General Vector Spaces, World Scientific Publishing Co., Inc., River Edge, NJ, 2002. doi: 10.1142/9789812777096.
[39] E. H. Zarantonello, Projections on convex sets in Hilbert space and spectral theory. I: Projections on convex sets; II: Spectral theory, in Contributions to Nonlinear Functional Analysis, Academic Press, New York, 1971, 237-424.

Figure 1.  Left to right: original image, blurred image, and image recovered by the $ \theta $-scheme on TV-based deblurring
Figure 2.  Visualization of function values, MSE, and ISNR on TV-based deblurring
Figure 3.  Objective value with respect to the computing time and residual of the linear equation (29) with respect to the outer loop iteration on "news20.scale"
Figure 4.  Objective value with respect to the computing time and residual of the linear equation (29) with respect to the outer loop iteration on "url_combined"
Table 1.  A sensitivity test of the $ \theta $-scheme on TV-based deblurring
$ \beta_1 $ $ \beta_2 $ Iteration Time (s) Objective MSE ISNR
0.1 0.1 $ 3000 $ $ 106.41 $ 0.6259 1.6135e-04 11.9101
0.3 0.2 2852 101.04 0.6259 1.5999e-04 11.9470
0.7 0.5 1202 42.45 0.6259 1.5999e-04 11.9470
0.8 0.5 1088 38.57 0.6259 1.5999e-04 11.9470
0.9 0.8 879 31.01 0.6259 1.5999e-04 11.9470
0.9 0.9 847 29.82 0.6259 1.5999e-04 11.9470
Table 2.  Comparison of $ \theta $-scheme, ADMM+GPRSM, GADMM, and FISTA on TV-based deblurring under the same inner precision
Inner precision Algorithm Iteration Mean/Max FGP Time (s) Objective MSE ISNR
1e-02 $ \theta $-scheme 763 4.00/4 29.00 0.6259 1.5999e-04 11.9470
ADMM+GPRSM 763 4.00/4 30.00 0.6259 1.5999e-04 11.9470
GADMM 1526 2.00/2 33.43 0.6259 1.5999e-04 11.9470
FISTA 3000 2.00/2 53.92 0.6261 1.5992e-04 11.9489
1e-04 $ \theta $-scheme 763 4.00/6 28.84 0.6259 1.5999e-04 11.9470
ADMM+GPRSM 763 4.00/6 29.99 0.6259 1.5999e-04 11.9470
GADMM 1526 2.00/3 33.13 0.6259 1.5999e-04 11.9470
FISTA 3000 2.00/3 53.70 0.6261 1.5992e-04 11.9487
1e-06 $ \theta $-scheme 763 4.60/20 28.31 0.6259 1.5999e-04 11.9470
ADMM+GPRSM 763 4.61/20 29.25 0.6259 1.5999e-04 11.9470
GADMM 1526 2.27/10 32.42 0.6259 1.5999e-04 11.9470
FISTA 3000 5.45/10 82.36 0.6259 1.5996e-04 11.9476
1e-08 $ \theta $-scheme 763 8.65/20 38.37 0.6259 1.5999e-04 11.9470
ADMM+GPRSM 763 8.65/20 39.50 0.6259 1.5999e-04 11.9470
GADMM 1526 4.32/10 42.45 0.6259 1.5999e-04 11.9470
FISTA 1277 7.62/10 44.71 0.6259 1.5997e-04 11.9476
1e-10 $ \theta $-scheme 762 16.61/20 58.43 0.6259 1.5999e-04 11.9470
ADMM+GPRSM 763 16.72/20 59.79 0.6259 1.5999e-04 11.9470
GADMM 1525 8.44/10 63.16 0.6259 1.5999e-04 11.9470
FISTA 1214 9.87/10 51.49 0.6259 1.5996e-04 11.9476
Table 3.  Comparison of $ \theta $-scheme, ADMM+GPRSM, GADMM, and FISTA on TV-based deblurring under different inner precisions
Inner precision Algorithm Iteration Mean/Max FGP Time (s) Objective MSE ISNR
1e-02 $ \theta $-scheme 763 4.00/4 27.87 0.6259 1.5999e-04 11.9470
1e-02 ADMM+GPRSM 763 4.00/4 30.51 0.6259 1.5999e-04 11.9470
1e-04 GADMM 1526 2.00/3 33.01 0.6259 1.5999e-04 11.9470
1e-08 FISTA 1277 7.62/10 47.88 0.6259 1.5997e-04 11.9476
Table 4.  Values of $ \eta $, $ L $ and $ \beta_1 (\beta_2) $ for the synthetic dataset
(m, n, s) $ \eta $ L $ \beta_1(\beta_2) $
($ 1\times10^4, 1.5\times10^4, 50\% $) 878.32 1.8416e+04 1.0860e-04
($ 1\times10^4, 1.5\times10^4, 10\% $) 293.15 4.4668e+03 4.4775e-04
($ 1.5\times10^4, 2\times10^4, 5\% $) 174.12 3.2264e+03 6.1989e-04
($ 2\times10^4, 3\times10^4, 1\% $) 67.90 9.4158e+02 2.1241e-03
($ 2\times10^5, 3\times10^5, 0.1\% $) 58.27 9.0143e+02 2.2187e-03
($ 3\times10^5, 2\times10^6, 0.01\% $) 9.59 3.6101e+02 5.5400e-03
Table 5.  Comparison of IN-$ \theta_{\text{LSQR}} $, IN-$ \theta_{\text{Cholesky}} $, IN-$ \theta_{1e-t} $ and IN-$ \theta $ on the synthetic dataset
(m, n, s) Algorithm Iteration Mean/Max CG Time (s) Objective
($ 1\times10^4, 1.5\times10^4, 50\% $) IN-$ \theta_{\text{LSQR}} $ 9 $ \sim/\sim $ 24.72 5.8157e+04
IN-$ \theta_{\text{Cholesky}} $ 9 $ \sim/\sim $ 710.26 5.8157e+04
IN-$ \theta_{1e-10} $ 10 7.00/10 27.00 5.8157e+04
IN-$ \theta_{1e-8} $ 9 5.44/8 20.46 5.8157e+04
IN-$ \theta_{1e-6} $ 9 3.44/6 15.51 5.8157e+04
IN-$ \theta_{1e-4} $ 500 0.03/4 319.16 5.8157e+04
IN-$ \theta_{1e-2} $ 500 0.01/2 314.86 5.8157e+04
IN-$ \theta $ 10 0.90/1 11.27 5.8157e+04
($ 1\times10^4, 1.5\times10^4, 10\% $) IN-$ \theta_{\text{LSQR}} $ 9 $ \sim/\sim $ 6.36 1.6892e+04
IN-$ \theta_{\text{Cholesky}} $ 9 $ \sim/\sim $ 61.89 1.6892e+04
IN-$ \theta_{1e-10} $ 9 7.11/10 6.17 1.6892e+04
IN-$ \theta_{1e-8} $ 9 5.44/8 5.11 1.6892e+04
IN-$ \theta_{1e-6} $ 9 3.44/6 3.87 1.6892e+04
IN-$ \theta_{1e-4} $ 500 0.03/4 77.57 1.6892e+04
IN-$ \theta_{1e-2} $ 500 0.01/2 79.92 1.6892e+04
IN-$ \theta $ 9 0.89/1 2.68 1.6892e+04
($ 1.5\times10^4, 2\times10^4, 5\% $) IN-$ \theta_{\text{LSQR}} $ 8 $ \sim/\sim $ 5.98 1.0405e+04
IN-$ \theta_{\text{Cholesky}} $ 8 $ \sim/\sim $ 96.69 1.0405e+04
IN-$ \theta_{1e-10} $ 9 7.11/10 6.57 1.0405e+04
IN-$ \theta_{1e-8} $ 8 5.63/8 4.97 1.0405e+04
IN-$ \theta_{1e-6} $ 8 3.63/6 3.85 1.0405e+04
IN-$ \theta_{1e-4} $ 500 0.03/4 84.29 1.0405e+04
IN-$ \theta_{1e-2} $ 500 0.01/2 83.01 1.0405e+04
IN-$ \theta $ 9 0.89/1 2.81 1.0405e+04
($ 2\times10^4, 3\times10^4, 1\% $) IN-$ \theta_{\text{LSQR}} $ 10 $ \sim/\sim $ 2.94 4.7993e+03
IN-$ \theta_{\text{Cholesky}} $ 10 $ \sim/\sim $ 232.74 4.7993e+03
IN-$ \theta_{1e-10} $ 10 7.10/10 3.24 4.7993e+03
IN-$ \theta_{1e-8} $ 11 4.82/8 2.73 4.7993e+03
IN-$ \theta_{1e-6} $ 10 3.30/6 1.96 4.7993e+03
IN-$ \theta_{1e-4} $ 500 0.03/4 36.20 4.7993e+03
IN-$ \theta_{1e-2} $ 500 0.01/2 35.20 4.7993e+03
IN-$ \theta $ 10 0.90/1 1.74 4.7993e+03
($ 2\times10^5, 3\times10^5, 0.1\% $) IN-$ \theta_{\text{LSQR}} $ 9 $ \sim/\sim $ 38.52 4.9942e+03
IN-$ \theta_{1e-10} $ 9 7.22/10 50.94 4.9942e+03
IN-$ \theta_{1e-8} $ 9 5.44/8 41.94 4.9942e+03
IN-$ \theta_{1e-6} $ 9 3.44/6 31.82 4.9942e+03
IN-$ \theta_{1e-4} $ 500 0.03/4 644.25 4.9942e+03
IN-$ \theta_{1e-2} $ 500 0.01/2 641.81 4.9942e+03
IN-$ \theta $ 9 0.89/1 20.72 4.9942e+03
($ 3\times10^5, 2\times10^6, 0.01\% $) IN-$ \theta_{\text{LSQR}} $ 38 $ \sim/\sim $ 159.14 2.1247e+03
IN-$ \theta_{1e-10} $ 37 5.49/8 180.66 2.1247e+03
IN-$ \theta_{1e-8} $ 37 3.95/7 146.45 2.1247e+03
IN-$ \theta_{1e-6} $ 37 2.43/5 113.97 2.1247e+03
IN-$ \theta_{1e-4} $ 500 0.08/4 707.33 2.1247e+03
IN-$ \theta_{1e-2} $ 500 0.02/2 693.12 2.1247e+03
IN-$ \theta $ 37 0.97/1 91.95 2.1247e+03
Table 6.  Comparison of IN-$ \theta_{\text{LSQR}} $, IN-$ \theta_{1e-t} $ and IN-$ \theta_{\text{R}} $ on "news20.scale"
Algorithm Iteration Mean/Max CG Time (s) Objective
IN-$ \theta_{\text{LSQR}} $ 500 $ \sim/\sim $ 26.31 6.5859e+05
IN-$ \theta_{1e-10} $ 44 4.11/6 2.68 6.5859e+05
IN-$ \theta_{1e-8} $ 44 3.07/5 2.28 6.5859e+05
IN-$ \theta_{1e-6} $ 43 1.91/4 1.76 6.5859e+05
IN-$ \theta_{1e-4} $ 500 0.08/3 10.92 6.5859e+05
IN-$ \theta_{1e-2} $ 500 0.02/2 10.39 6.5860e+05
IN-$ \theta_{\text{R}} $ 46 0.61/2 1.36 6.5859e+05
Table 7.  Comparison of IN-$ \theta_{\text{LSQR}} $, IN-$ \theta_{1e-t} $ and IN-$ \theta_{\text{R}} $ on the "url_combined" dataset
Algorithm Iteration Mean/Max CG Time (s) Objective
IN-$ \theta_{\text{LSQR}} $ 29 $ \sim/\sim $ 1794.95 6.1097e+05
IN-$ \theta_{1e-10} $ 30 4.00/4 869.09 6.0859e+05
IN-$ \theta_{1e-8} $ 30 3.13/4 757.60 6.0859e+05
IN-$ \theta_{1e-6} $ 30 3.00/3 740.08 6.0859e+05
IN-$ \theta_{1e-4} $ 29 2.00/2 592.07 6.1097e+05
IN-$ \theta_{1e-2} $ 29 1.10/2 474.73 6.1074e+05
IN-$ \theta_{\text{R}} $ 29 0.90/2 454.99 6.1045e+05
