doi: 10.3934/ipi.2021034

An efficient multi-grid method for TV minimization problems

1. Center for Applied Mathematics, Tianjin University, Tianjin, China

2. Center for Mathematical Sciences, Huazhong University of Science and Technology, Wuhan, China

3. Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong, China

* Corresponding authors: kyin@hust.edu.cn; yuping.duan@tju.edu.cn

Received: August 2020. Revised: February 2021. Published: April 2021.

Fund Project: The work of the first three authors was supported by NSFC 11701418 and 12071345, the Major Science and Technology Project of Tianjin 18ZXRHSY00160, and the Recruitment Program of Global Young Experts. The work of Yin was supported by NSFC 11801200 and a startup grant from HUST. The work of the last author was supported by Hong Kong Baptist University startup grants RG(R)-RC/17-18/02-MATH and FRG2/17-18/033.

We propose an efficient multi-grid domain decomposition method for solving total variation (TV) minimization problems. Our multi-grid scheme is built on a subspace correction spanned by piecewise constant functions, rather than the piecewise linear ones used in [17], which ensures that the TV term only needs to be evaluated on the boundaries of the support sets. In addition, a domain decomposition method is implemented on each layer to enable parallel computation. Comprehensive comparisons demonstrate the improvements in CPU time and image quality of the proposed method on medium- and large-scale image denoising and reconstruction problems.
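
For readers who want a concrete baseline, the Rudin-Osher-Fatemi (ROF) model used throughout the experiments is of the form $ \min_u \alpha\,\mathrm{TV}(u) + \tfrac12\|u-f\|^2 $, and the "PD"/"P-D" columns in the tables presumably refer to a first-order primal-dual solver of the type in [6]. Below is a minimal single-grid NumPy sketch of such a primal-dual iteration for isotropic TV denoising; it is not the proposed multi-grid method, and the discretization (forward differences with Neumann boundaries), the step sizes and the parameter names (alpha, tau, sigma, n_iter) are our own illustrative choices.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann (replicate) boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_pd(f, alpha=15.0, n_iter=200, tau=0.25, sigma=0.25):
    """Chambolle-Pock primal-dual iteration for the ROF model
    min_u alpha * TV(u) + 0.5 * ||u - f||^2 (isotropic TV).
    The step sizes satisfy tau * sigma * ||grad||^2 <= 1 since ||grad||^2 <= 8."""
    u = f.astype(float).copy()
    u_bar = u.copy()
    px = np.zeros_like(u)
    py = np.zeros_like(u)
    for _ in range(n_iter):
        # dual ascent on p, followed by projection onto the ball {|p| <= alpha}
        gx, gy = grad(u_bar)
        px = px + sigma * gx
        py = py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / alpha)
        px /= scale
        py /= scale
        # primal descent on u; the prox of the quadratic fidelity is explicit
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        # over-relaxation step
        u_bar = 2.0 * u - u_old
    return u

# Example usage on a synthetic noisy image:
# noisy = clean + 20.0 * np.random.randn(*clean.shape)
# denoised = tv_denoise_pd(noisy, alpha=15.0)
```

As we read the abstract, the proposed MMC/MML schemes accelerate this kind of single-grid solve by correcting on piecewise constant subspaces over several layers, with domain decomposition enabling parallel sub-problem solves.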

Citation: Zhenwei Zhang, Xue Li, Yuping Duan, Ke Yin, Xue-Cheng Tai. An efficient multi-grid method for TV minimization problems. Inverse Problems & Imaging, doi: 10.3934/ipi.2021034
References:

[1] R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems, 10 (1994), 1217-1229. doi: 10.1088/0266-5611/10/6/003.
[2] S. T. Acton, Multigrid anisotropic diffusion, IEEE Trans. Image Process., 7 (1998), 280-291.
[3] X. Bresson and T. F. Chan, Fast dual minimization of the vectorial total variation norm and applications to color image processing, Inverse Probl. Imaging, 2 (2008), 455-484. doi: 10.3934/ipi.2008.2.455.
[4] C. Brito-Loeza and K. Chen, On high-order denoising models and fast algorithms for vector-valued images, IEEE Trans. Image Process., 19 (2010), 1518-1527. doi: 10.1109/TIP.2010.2042655.
[5] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision, 20 (2004), 89-97.
[6] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision, 40 (2011), 120-145. doi: 10.1007/s10851-010-0251-1.
[7] A. Chambolle and T. Pock, An introduction to continuous optimization for imaging, Acta Numer., 25 (2016), 161-319. doi: 10.1017/S096249291600009X.
[8] A. Chambolle and T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm, Math. Program., 159 (2016), 253-287. doi: 10.1007/s10107-015-0957-3.
[9] R. H. Chan and K. Chen, A multilevel algorithm for simultaneously denoising and deblurring images, SIAM J. Sci. Comput., 32 (2010), 1043-1063. doi: 10.1137/080741410.
[10] T. F. Chan and K. Chen, On a nonlinear multigrid algorithm with primal relaxation for the image total variation minimisation, Numer. Algorithms, 41 (2006), 387-411. doi: 10.1007/s11075-006-9020-z.
[11] T. F. Chan and K. Chen, An optimization based multilevel algorithm for total variation image denoising, Multiscale Model. Simul., 5 (2006), 615-645. doi: 10.1137/050644999.
[12] T. F. Chan and J. Shen, Mathematical models for local nontexture inpaintings, SIAM J. Appl. Math., 62 (2002), 1019-1043. doi: 10.1137/S0036139900368844.
[13] T. F. Chan and L. A. Vese, Active contours without edges, IEEE Trans. Image Process., 10 (2001), 266-277. doi: 10.1109/83.902291.
[14] H. Chang, X.-C. Tai, L.-L. Wang and D. Yang, Convergence rate of overlapping domain decomposition methods for the Rudin-Osher-Fatemi model based on a dual formulation, SIAM J. Imaging Sci., 8 (2015), 564-591. doi: 10.1137/140965016.
[15] R. Chen, J. Huang and X.-C. Cai, A parallel domain decomposition algorithm for large scale image denoising, Inverse Probl. Imaging, 13 (2019), 1259-1282. doi: 10.3934/ipi.2019055.
[16] K. Chen and J. Savage, An accelerated algebraic multigrid algorithm for total-variation denoising, BIT, 47 (2007), 277-296. doi: 10.1007/s10543-007-0123-2.
[17] K. Chen and X.-C. Tai, A nonlinear multigrid method for total variation minimization from image restoration, J. Sci. Comput., 33 (2007), 115-138. doi: 10.1007/s10915-007-9145-9.
[18] Y. Duan, H. Chang and X.-C. Tai, Convergent non-overlapping domain decomposition methods for variational image segmentation, J. Sci. Comput., 69 (2016), 532-555. doi: 10.1007/s10915-016-0207-8.
[19] E. Esser, X. Zhang and T. F. Chan, A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science, SIAM J. Imaging Sci., 3 (2010), 1015-1046. doi: 10.1137/09076934X.
[20] D. Goldfarb and W. Yin, Parametric maximum flow algorithms for fast total variation minimization, SIAM J. Sci. Comput., 31 (2009), 3712-3743. doi: 10.1137/070706318.
[21] T. Goldstein and S. Osher, The split Bregman method for ${L}_1$-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323-343. doi: 10.1137/080725891.
[22] Y. Gu, L.-L. Wang and X.-C. Tai, A direct approach toward global minimization for multiphase labeling and segmentation problems, IEEE Trans. Image Process., 21 (2012), 2399-2411. doi: 10.1109/TIP.2011.2182522.
[23] M. Hintermüller and A. Langer, Non-overlapping domain decomposition methods for dual total variation based image denoising, J. Sci. Comput., 62 (2015), 456-481. doi: 10.1007/s10915-014-9863-8.
[24] A. Langer and F. Gaspoz, Overlapping domain decomposition methods for total variation denoising, SIAM J. Numer. Anal., 57 (2019), 1411-1444. doi: 10.1137/18M1173782.
[25] C.-O. Lee and C. Nam, Primal domain decomposition methods for the total variation minimization, based on dual decomposition, SIAM J. Sci. Comput., 39 (2017), B403-B423. doi: 10.1137/15M1049919.
[26] C.-O. Lee, C. Nam and J. Park, Domain decomposition methods using dual conversion for the total variation minimization with $L^1$ fidelity term, J. Sci. Comput., 78 (2019), 951-970. doi: 10.1007/s10915-018-0791-x.
[27] T. Lu, P. Neittaanmäki and X.-C. Tai, A parallel splitting up method and its application to Navier-Stokes equations, Appl. Math. Lett., 4 (1991), 25-29. doi: 10.1016/0893-9659(91)90161-N.
[28] A. Marquina and S. Osher, Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal, SIAM J. Sci. Comput., 22 (2000), 387-405. doi: 10.1137/S1064827599351751.
[29] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.
[30] J. Savage and K. Chen, An improved and accelerated non-linear multigrid method for total-variation denoising, Int. J. Comput. Math., 82 (2005), 1001-1015. doi: 10.1080/00207160500069904.
[31] E. Y. Sidky, J. H. Jørgensen and X. Pan, Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm, Phys. Med. Biol., 57 (2012), 3065-3091. doi: 10.1088/0031-9155/57/10/3065.
[32] E. Y. Sidky and X. Pan, Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization, Phys. Med. Biol., 53 (2008), 4777. doi: 10.1088/0031-9155/53/17/021.
[33] X.-C. Tai and Y. Duan, Domain decomposition methods with graph cuts algorithms for image segmentation, Int. J. Numer. Anal. Model., 8 (2011), 137-155.
[34] X.-C. Tai and J. Xu, Global and uniform convergence of subspace correction methods for some convex optimization problems, Math. Comp., 71 (2002), 105-124. doi: 10.1090/S0025-5718-01-01311-4.
[35] C. R. Vogel and M. E. Oman, Iterative methods for total variation denoising, SIAM J. Sci. Comput., 17 (1996), 227-238. doi: 10.1137/0917016.
[36] C. R. Vogel and M. E. Oman, Fast, robust total variation-based reconstruction of noisy, blurred images, IEEE Trans. Image Process., 7 (1998), 813-824. doi: 10.1109/83.679423.
[37] Y. Wang, J. Yang, W. Yin and Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci., 1 (2008), 248-272. doi: 10.1137/080724265.
[38] C. Wu and X.-C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sci., 3 (2010), 300-339. doi: 10.1137/090767558.
[39] J. Xu, Iterative methods by space decomposition and subspace correction, SIAM Rev., 34 (1992), 581-613. doi: 10.1137/1034116.
[40] J. Xu, H. B. Chang and J. Qin, Domain decomposition method for image deblurring, J. Comput. Appl. Math., 271 (2014), 401-414. doi: 10.1016/j.cam.2014.03.030.
[41] J. Xu, X.-C. Tai and L.-L. Wang, A two-level domain decomposition method for image restoration, Inverse Probl. Imaging, 4 (2010), 523-545. doi: 10.3934/ipi.2010.4.523.


Figure 1.  An illustration of the piecewise linear basis function and the piecewise constant basis function $ \phi_j^i $ on $ \bar{\tau}_j^i $ at layer $ j = 4 $. In (a), $ \square $ denotes the weight $ 1 $ for the piecewise constant function and $ \lozenge $ denotes the weight $ 0 $ on the outer boundary $ \partial\bar\tau_j^i $. In (b), the weights at $ \circ $, $ \ast $, $ \triangledown $ and $ \triangle $ are $ 1/8 $, $ 3/8 $, $ 5/8 $ and $ 7/8 $, respectively
Figure 2.  An illustration of $ \partial\bar{\tau}_j^{i,l} $, $ \partial\bar{\tau}_j^{i,t} $, $ \partial\tau_j^{i,b} $ and $ \partial\tau_j^{i,r} $ located on $ \bar{\tau}_j^i $ at layer $ j = 4 $
Figure 3.  Illustration of the 4-color domain decomposition for a domain of size $ 8\times 8 $, where each color has one element on the coarse layer $ j = 3 $ (a small indexing sketch is given after these figure captions)
Figure 4.  The test images with sizes and their identifiers
Figure 5.  Denoised results and residual images obtained by different methods on the 'Baboon' image ($ \#6 $) with noise level $ \sigma = 20 $ and $ \alpha = 15 $, and on the 'Building' image ($ \#8 $) with noise level $ \sigma = 40 $ and $ \alpha = 35 $
Figure 6.  The decay of the relative error and numerical energy for the 'Baboon' image ($ \#6 $) with noise level $ \sigma = 20 $ and $ \alpha = 15 $, and for the 'Building' image ($ \#8 $) with noise level $ \sigma = 40 $ and $ \alpha = 35 $. The first row shows, from left to right, the relative error and numerical energy for the 'Baboon' image ($ \#6 $); the second row shows those for the 'Building' image ($ \#8 $)
Figure 7.  Image deblurring results obtained by our MMC and the PD method on 'Boat' and 'Lena' with motion blur
Figure 8.  CT reconstruction results (row one) and residual images (row two) obtained by the MMC and PD methods for the 'FORBILD' phantom with $ N_p = 36 $ and $ 72 $
Figure 9.  MRI reconstruction results (row one) and residual images (row two) obtained by our MMC and the PD method on the brain image with radial sampling patterns and $ N_s = 30 $ and $ 50 $ sampling lines
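
To make the coloring in Figure 3 concrete, the following sketch partitions an n × n domain into non-overlapping square blocks and assigns each block one of four colors by the parity of its block row and block column, so that blocks of the same color never touch and their local sub-problems can be solved in parallel. The block size, the parity-based coloring rule and the function name four_color_blocks are our own illustrative choices, not the paper's exact construction.

```python
import numpy as np

def four_color_blocks(n=8, block=2):
    """Group the non-overlapping block x block subdomains of an n x n grid into
    4 colors; same-colored blocks are pairwise non-adjacent (even diagonally),
    so their local subproblems can be solved simultaneously."""
    colors = {0: [], 1: [], 2: [], 3: []}
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            # color determined by the parity of the block row/column indices
            c = 2 * ((bi // block) % 2) + ((bj // block) % 2)
            colors[c].append((slice(bi, bi + block), slice(bj, bj + block)))
    return colors

# Example: visualize the coloring on an 8 x 8 domain
label = np.zeros((8, 8), dtype=int)
for c, blocks in four_color_blocks().items():
    for si, sj in blocks:
        label[si, sj] = c
print(label)
```

With block = 4 each color would contain a single 4 × 4 block, which is our reading of the coarse-layer case ($ j = 3 $) described in the Figure 3 caption.
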
Table 1.  Results on selected images ($ \#1 $, $ \#3 $, $ \#5 $ and $ \#7 $) with different numbers of layers $ J $ for noise level $ \sigma = 20 $
$ \alpha $ 10 15 20
MML $ J $ Iter Energy CPU(s) Iter Energy CPU(s) Iter Energy CPU(s)
#1 1 54 2.32E7 1.10 86 3.12E7 2.05 128 3.80E7 3.12
2 23 2.31E7 1.21 50 3.05E7 2.54 80 3.62E7 3.49
3 24 2.29E7 1.48 40 3.03E7 2.46 31 3.53E7 3.67
4 22 2.29E7 1.53 32 3.03E7 3.35 29 3.52E7 3.21
5 24 2.29E7 1.85 32 3.03E7 4.88 29 3.52E7 4.51
6 22 2.29E7 2.08 23 3.03E7 5.73 29 3.52E7 5.71
#3 1 35 8.60E7 5.64 82 1.14E8 11.32 139 1.38E8 19.35
2 18 8.57E7 5.87 36 1.11E8 10.32 65 1.29E8 16.43
3 18 8.56E7 6.18 44 1.07E8 10.65 32 1.25E8 14.47
4 18 8.56E7 6.79 31 1.07E8 9.31 32 1.24E8 11.28
5 18 8.56E7 6.77 31 1.07E8 9.78 32 1.24E8 12.02
6 18 8.56E7 7.25 31 1.07E8 10.17 32 1.24E8 12.56
#5 1 49 3.54E8 27.32 71 4.72E8 42.35 198 5.63E8 79.56
2 31 3.54E8 31.47 53 4.60E8 50.61 70 5.34E8 71.35
3 28 3.53E8 32.06 43 4.59E8 49.35 54 5.24E8 60.21
4 21 3.53E8 33.37 36 4.59E8 48.35 52 5.22E8 52.13
5 21 3.53E8 34.25 36 4.59E8 50.22 52 5.22E8 66.32
6 21 3.53E8 38.24 36 4.59E8 51.78 52 5.22E8 75.21
#7 1 57 1.38E9 121.32 120 1.82E9 237.21 189 2.21E9 369.75
2 31 1.38E9 117.24 61 1.76E9 225.76 102 2.05E9 366.14
3 26 1.37E9 115.78 44 1.76E9 193.35 65 1.99E9 299.78
4 26 1.37E9 122.73 39 1.75E9 192.49 56 1.94E9 280.32
5 26 1.37E9 126.25 39 1.75E9 208.15 53 1.94E9 288.25
6 26 1.37E9 139.35 39 1.75E9 229.29 53 1.94E9 282.12
MMC $ J $ Iter Energy CPU(s) Iter Energy CPU(s) Iter Energy CPU(s)
#1 1 33 2.31E7 0.85 72 3.12E7 1.72 112 3.79E7 2.77
2 21 2.31E7 0.69 41 3.05E7 1.22 37 3.63E7 1.21
3 23 2.31E7 0.83 33 3.03E7 1.15 46 3.56E7 1.54
4 21 2.31E7 0.88 33 3.03E7 1.21 34 3.54E7 1.20
5 21 2.31E7 0.83 29 3.03E7 1.31 33 3.54E7 1.28
6 20 2.31E7 0.82 23 3.03E7 1.95 33 3.54E7 1.55
#3 1 39 8.59E7 4.96 80 1.14E8 9.65 134 1.39E8 15.84
2 28 8.58E7 4.07 36 1.11E8 5.13 54 1.31E8 7.54
3 27 8.58E7 4.25 42 1.07E8 6.41 46 1.26E8 6.99
4 27 8.58E7 4.27 37 1.07E8 5.97 34 1.24E8 5.75
5 27 8.58E7 4.73 33 1.07E8 5.68 38 1.24E8 6.56
6 26 8.58E7 4.56 30 1.07E8 5.93 32 1.24E8 6.73
#5 1 51 3.54E8 23.32 99 4.71E8 44.23 161 5.64E8 74.36
2 33 3.54E8 20.67 49 4.60E8 29.95 66 5.37E8 42.74
3 31 3.53E8 23.07 46 4.59E8 31.26 47 5.26E8 34.33
4 30 3.53E8 22.07 40 4.59E8 28.98 48 5.23E8 34.28
5 28 3.53E8 25.88 42 4.59E8 31.20 43 5.23E8 37.25
6 29 3.53E8 30.15 39 4.59E8 36.59 44 5.23E8 42.67
#7 1 57 1.38E9 121.96 120 1.82E9 237.88 189 2.21E9 369.02
2 36 1.38E9 100.06 56 1.76E9 124.64 84 2.05E9 183.47
3 34 1.38E9 86.18 44 1.75E9 110.30 62 1.99E9 150.35
4 34 1.36E9 81.35 43 1.74E9 105.42 43 1.95E9 114.78
5 37 1.36E9 90.06 43 1.74E9 109.73 39 1.95E9 117.75
6 36 1.36E9 98.81 43 1.74E9 115.06 40 1.95E9 123.42
Table 2.  Denoising results of the MMC, MML, Dual and P-D methods for the Rudin-Osher-Fatemi model with $ \alpha = 10,\ 15,\ 20 $ and noise level $ \sigma = 20 $.
$ \alpha=10 $ $ \alpha=15 $ $ \alpha=20 $
MMC MML Dual P-D MMC MML Dual P-D MMC MML Dual P-D
PSNR #1 28.77 28.77 28.71 28.70 29.26 29.27 29.23 29.27 29.14 29.14 29.13 29.17
#2 28.43 28.42 28.40 28.41 28.64 28.64 28.60 28.62 27.61 27.59 27.55 27.58
#3 29.73 29.73 29.70 29.73 30.79 30.77 30.72 30.75 30.60 30.63 30.58 30.60
#4 29.84 29.86 29.83 29.83 31.14 31.13 31.09 31.11 30.28 30.32 30.28 30.30
#5 29.18 29.16 29.15 29.15 29.64 29.64 29.63 29.63 29.35 29.34 29.30 29.33
#6 27.81 27.82 27.80 27.79 27.95 27.95 27.89 27.94 26.58 26.58 26.55 26.55
#7 30.70 30.65 30.60 30.68 32.45 32.44 32.43 32.40 32.33 32.33 32.26 32.38
#8 30.43 30.40 30.39 30.41 32.28 32.25 32.23 32.27 31.27 31.26 31.20 31.25
SSIM #1 0.7786 0.7771 0.7735 0.7757 0.8511 0.8521 0.8515 0.8523 0.8492 0.8499 0.8491 0.8501
#2 0.7473 0.7451 0.7376 0.7391 0.8303 0.8279 0.8299 0.8255 0.8243 0.8216 0.8255 0.8299
#3 0.7342 0.7277 0.7225 0.7245 0.8256 0.8221 0.8242 0.8253 0.8195 0.8207 0.8217 0.8228
#4 0.7544 0.7431 0.7419 0.7433 0.8548 0.8562 0.854 0.8537 0.8518 0.8548 0.8520 0.8521
#5 0.7287 0.7298 0.7267 0.7294 0.7649 0.7631 0.7622 0.7641 0.7381 0.7394 0.7388 0.7395
#6 0.7953 0.7909 0.7943 0.7952 0.8031 0.8131 0.8124 0.8122 0.7883 0.7894 0.7788 0.7793
#7 0.7447 0.7412 0.7368 0.7342 0.8865 0.8877 0.8788 0.8791 0.8725 0.8717 0.8736 0.8744
#8 0.7386 0.7252 0.7237 0.7241 0.9203 0.9107 0.9154 0.9165 0.9154 0.9199 0.9198 0.9192
CPU(s) #1 0.88 1.53 2.41 1.40 1.21 3.35 4.49 2.15 1.20 3.61 7.61 2.71
#2 1.00 1.64 1.56 1.72 1.17 2.01 2.81 1.97 1.18 3.11 9.01 2.21
#3 4.27 6.94 8.26 5.62 5.97 9.83 14.42 8.60 5.81 11.28 20.67 13.94
#4 4.22 6.81 8.56 6.13 5.13 9.13 15.01 8.11 5.22 12.10 21.35 11.65
#5 22.68 33.19 30.12 19.73 28.68 48.35 54.08 29.93 34.28 52.13 63.59 41.57
#6 18.33 23.19 31.01 20.21 26.78 50.14 51.35 28.86 28.48 56.13 66.01 39.73
#7 88.80 122.73 184.93 113.37 106.43 192.49 463.12 161.99 115.83 280.32 560.32 188.35
#8 116.35 133.84 190.21 121.61 123.28 183.17 454.35 159.73 134.78 244.35 555.38 198.46
Energy #1 2.31E7 2.29E7 2.31E7 2.31E7 3.03E7 3.01E7 3.03E7 3.03E7 3.52E7 3.52E7 3.53E7 3.53E7
#2 2.44E7 2.43E7 2.43E7 2.43E7 3.23E7 3.21E7 3.23E7 3.23E7 3.70E7 3.71E7 3.71E7 3.70E7
#3 1.28E8 1.29E8 1.29E8 1.28E8 2.77E8 2.78E8 2.78E8 2.78E8 4.66E8 4.67E8 4.67E8 4.67E8
#4 8.39E7 8.37E7 8.39E7 8.39E7 1.08E8 1.09E8 1.09E8 1.09E8 1.22E8 1.23E8 1.22E8 1.23E8
#5 3.53E8 3.51E8 3.53E8 3.53E8 4.59E8 4.57E8 4.57E8 4.57E8 5.23E8 5.21E8 5.22E8 5.22E8
#6 3.71E8 3.72E8 3.72E8 3.72E8 4.71E8 4.72E8 4.71E8 4.71E8 5.97E8 5.97E8 5.98E8 5.98E8
#7 1.37E9 1.36E9 1.37E9 1.36E9 1.75E9 1.75E9 1.75E9 1.75E9 1.95E9 1.94E9 1.95E9 1.95E9
#8 1.40E9 1.42E9 1.40E9 1.41E9 1.81E9 1.80E9 1.81E9 1.80E9 1.97E9 1.98E9 1.98E9 1.97E9
Table 3.  Denoising results for the Rudin-Osher-Fatemi model with different noise levels $ \sigma = 20,\ 30,\ 40 $.
$ \alpha=15,\ \sigma=20 $ $ \alpha=25,\ \sigma=30 $ $ \alpha=35,\ \sigma=40 $
MMC MML Dual P-D MMC MML Dual P-D MMC MML Dual P-D
PSNR #1 29.26 29.27 29.23 29.27 26.73 26.73 26.73 26.73 25.04 25.04 25.04 25.04
#2 28.64 28.64 28.60 28.62 26.49 26.44 26.40 26.42 25.01 24.95 24.96 24.96
#3 30.79 30.77 30.72 30.75 28.78 28.78 28.78 28.78 27.25 27.21 27.21 27.21
#4 31.14 31.13 31.09 31.11 29.13 29.11 29.11 29.12 27.21 27.21 27.21 27.21
#5 29.64 29.64 29.63 29.63 28.73 27.75 27.75 27.74 26.40 26.38 26.39 26.40
#6 27.95 27.95 27.89 27.94 25.44 25.45 25.42 24.13 23.93 23.94 23.92 23.94
#7 32.45 32.44 32.43 32.40 29.92 29.91 29.90 29.91 28.20 28.20 28.20 28.20
#8 32.28 32.25 32.23 32.27 29.95 29.93 29.85 29.87 27.52 27.52 27.51 27.51
SSIM #1 0.8511 0.8521 0.8515 0.8523 0.8051 0.8048 0.8046 0.8084 0.7561 0.7563 0.7601 0.7623
#2 0.8303 0.8279 0.8299 0.8255 0.7782 0.7755 0.7749 0.7811 0.7499 0.7476 0.7377 0.7492
#3 0.8256 0.8221 0.8242 0.8253 0.7842 0.7827 0.7841 0.7851 0.7555 0.7519 0.7549 0.7537
#4 0.8548 0.8562 0.8542 0.8537 0.8046 0.8011 0.8044 0.8012 0.7717 0.7782 0.7739 0.7717
#5 0.7649 0.7631 0.7622 0.7641 0.7162 0.7144 0.7088 0.7101 0.6772 0.6766 0.6688 0.6768
#6 0.8031 0.8131 0.8124 0.8122 0.7311 0.7353 0.7344 0.7349 0.6601 0.6721 0.6687 0.6699
#7 0.8865 0.8877 0.8788 0.8791 0.8478 0.8516 0.8496 0.8502 0.8224 0.8236 0.8235 0.8247
#8 0.9203 0.9107 0.9154 0.9165 0.9002 0.8997 0.8981 0.8994 0.8811 0.8868 0.8714 0.8726
CPU(s) #1 1.21 3.35 4.49 2.15 1.62 3.88 4.35 3.99 2.55 5.95 5.69 5.81
#2 1.17 2.01 2.81 1.97 1.79 3.61 7.13 4.23 3.05 11.78 7.13 8.68
#3 5.97 9.83 14.42 8.60 6.93 16.06 17.14 11.86 9.88 37.98 17.19 15.06
#4 5.13 9.13 15.01 8.11 7.68 15.76 19.06 8.99 9.85 20.74 18.99 15.04
#5 28.68 48.35 54.08 29.93 35.66 76.67 64.43 49.30 48.53 190.87 78.30 56.35
#6 26.78 50.14 51.35 28.86 38.63 79.12 78.84 58.84 50.25 180.62 64.30 55.44
#7 106.43 192.49 463.12 161.99 161.37 334.45 452.12 188.14 186.92 491.83 443.35 269.35
#8 123.28 183.17 454.35 159.73 164.28 303.28 443.18 191.18 208.15 482.78 482.06 275.05
Energy #1 3.03E7 3.01E7 3.03E7 3.03E7 6.56E7 6.57E7 6.57E7 6.56E7 1.18E8 1.17E8 1.18E8 1.17E8
#2 3.23E7 3.21E7 3.23E7 3.23E7 6.84E7 6.85E7 6.85E7 6.84E7 1.15E8 1.16E8 1.16E8 1.15E8
#3 2.77E8 2.78E8 2.78E8 2.78E8 2.87E8 2.87E8 2.86E8 2.86E8 4.35E8 4.36E8 4.36E8 4.35E8
#4 1.08E8 1.09E8 1.09E8 1.09E8 2.42E8 2.43E8 2.43E8 2.43E8 4.30E8 4.29E8 4.31E8 4.31E8
#5 4.59E8 4.57E8 4.57E8 4.57E8 1.01E9 1.02E9 1.02E9 1.01E9 1.77E9 1.78E9 1.78E9 1.77E9
#6 4.72E8 4.71E8 4.71E8 4.71E8 1.08E9 1.09E9 1.09E9 1.08E9 1.87E9 1.88E9 1.88E9 1.87E9
#7 1.75E9 1.75E9 1.75E9 1.75E9 3.94E9 3.93E9 3.93E9 3.93E9 6.95E9 6.94E9 6.95E9 6.94E9
#8 1.81E9 1.80E9 1.81E9 1.80E9 3.98E9 3.99E9 3.99E9 3.99E9 7.31E9 7.32E9 7.31E9 7.31E9
Table 4.  PSNR and SSIM values and CPU times (s) for deblurring with different blur kernels
Image Gaussian Blur Motion Blur
CPU PSNR SSIM Energy CPU PSNR SSIM Energy
MMC Lena 8.63 34.85 0.9207 1.4840 14.34 35.41 0.9401 1.5673
PD 16.22 34.82 0.9234 1.4713 19.41 35.36 0.9382 1.5610
MMC Boat 16.69 33.83 0.9204 1.9612 18.98 33.43 0.9178 1.9549
PD 21.22 33.86 0.9222 1.9623 29.41 33.26 0.9166 1.9568
Table 5.  PSNR and SSIM values and CPU times (s) for CT reconstruction with different numbers of projections $ N_p $ for the MMC and PD methods
Image $ N_p $ 18 36 72
Size CPU(s) PSNR SSIM Energy CPU(s) PSNR SSIM Energy CPU(s) PSNR SSIM Energy
MMC Shepp 256 15.64 39.28 0.9908 0.0508 20.01 50.19 0.9995 0.0292 33.21 58.30 0.9998 0.0292
PD 27.59 39.16 0.9901 0.0508 59.33 50.37 0.9995 0.0292 112.04 58.21 0.9998 0.0292
MMC Fobild 256 21.33 27.96 0.9508 0.0595 24.15 37.66 0.9838 0.0421 40.21 45.06 0.9918 0.0423
PD 25.02 27.92 0.9500 0.0600 71.21 37.17 0.9828 0.0421 140.10 45.16 0.9917 0.0425
MMC Shepp 512 96.62 35.79 0.9881 0.0991 103.54 43.22 0.9911 0.0587 129.91 50.60 0.9995 0.0585
PD 176.86 35.72 0.9879 0.0991 204.57 43.21 0.9911 0.0585 370.14 50.51 0.9995 0.0585
MMC Fobild 512 130.01 31.29 0.9726 0.1501 153.52 37.90 0.9899 0.1557 179.58 48.88 0.9925 0.0861
PD 187.02 31.23 0.9726 0.1501 248.01 37.86 0.9896 0.1557 457.51 48.83 0.9925 0.0861
Table 6.  PSNR and SSIM values and CPU times (s) for MRI reconstruction with different numbers of sampling lines $ N_s $ for the MMC and PD methods
$ N_s $ 30 50
Pattern CPU PSNR SSIM Energy CPU PSNR SSIM Energy
MMC Radial 4.78 32.15 0.9711 1.8840 10.19 38.41 0.9847 1.1673
PD 13.22 32.08 0.9701 1.8840 19.41 38.36 0.9837 1.1675
MMC Random 5.84 27.03 0.9500 2.0915 11.98 35.73 0.9812 2.0070
PD 15.34 27.01 0.9504 2.0913 25.65 35.72 0.9812 2.0070