doi: 10.3934/ipi.2020059

Spatial-Frequency domain nonlocal total variation for image denoising

1. Northeastern University at Qinhuangdao, School of Mathematics and Statistics, Hebei, 066004, China
2. Univ Bretagne-Sud, CNRS UMR 6205 LMBA, Campus de Tohannic, Vannes, F-56000, France
3. Guangdong University of Petrochemical Technology, College of Computer Science, Guangdong, 525000, China
4. Tianjin University, Center for Applied Mathematics, Tianjin, 300072, China

* Corresponding author: Haijuan Hu

Received: November 2019. Revised: June 2020. Published: October 2020.

Following the pioneering works of Rudin, Osher and Fatemi on total variation (TV) and of Buades, Coll and Morel on non-local means (NL-means), the last decade has seen a large number of denoising methods mixing these two approaches, starting with the nonlocal total variation (NLTV) model. The present article proposes an analysis of the NLTV model for image denoising as well as a number of improvements, the most important of which is to apply the denoising both in the spatial domain and in the Fourier domain, in order to exploit the complementarity of the two representations of image data. A local version, obtained by a regionwise implementation followed by an aggregation process and called the Local Spatial-Frequency NLTV (L-SFNLTV) model, is finally proposed as a new reference algorithm for image denoising among the family of approaches mixing TV and NL operators. Experiments show the strong performance of L-SFNLTV in terms of both image quality and computational speed, compared with other recently proposed NLTV-related methods.
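For context, the classical ROF total variation model [24] and a generic weighted nonlocal total variation functional can be written as follows (standard formulations given only as background; the exact discrete functionals, weights and normalizations used in the article may differ):

$$ \min_{u} \; \int_\Omega |\nabla u(x)|\,dx + \frac{\lambda}{2}\int_\Omega \big(u(x)-f(x)\big)^2\,dx, $$

$$ \min_{u} \; \sum_{x}\sum_{y} w(x,y)\,\big|u(y)-u(x)\big| + \frac{\lambda}{2}\sum_{x}\big(u(x)-f(x)\big)^2, $$

where $ f $ is the noisy image, $ u $ the denoised estimate, $ \lambda>0 $ the fidelity parameter, and $ w(x,y) $ nonnegative weights measuring patch similarity in the spirit of NL-means [1].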

Citation: Haijuan Hu, Jacques Froment, Baoyan Wang, Xiequan Fan. Spatial-Frequency domain nonlocal total variation for image denoising. Inverse Problems & Imaging, doi: 10.3934/ipi.2020059
References:
[1] A. Buades, B. Coll and J. M. Morel, A review of image denoising algorithms, with a new one, Multiscale Model. Simul., 4 (2005), 490-530. doi: 10.1137/040616024.
[2] S. G. Chang, B. Yu and M. Vetterli, Adaptive wavelet thresholding for image denoising and compression, IEEE Trans. Image Process., 9 (2000), 1532-1546. doi: 10.1109/83.862633.
[3] K. Dabov, A. Foi, V. Katkovnik and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Trans. Image Process., 16 (2007), 2080-2095. doi: 10.1109/TIP.2007.901238.
[4] D. L. Donoho and I. M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81 (1994), 425-455. doi: 10.1093/biomet/81.3.425.
[5] M. Elad and M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15 (2006), 3736-3745. doi: 10.1109/TIP.2006.881969.
[6] G. Gilboa, J. Darbon, S. Osher and T. Chan, Nonlocal convex functionals for image regularization, UCLA CAM Report, 06-57.
[7] G. Gilboa and S. Osher, Nonlocal linear image regularization and supervised segmentation, Multiscale Model. Simul., 6 (2007), 595-630. doi: 10.1137/060669358.
[8] G. Gilboa and S. Osher, Nonlocal operators with applications to image processing, Multiscale Model. Simul., 7 (2008), 1005-1028. doi: 10.1137/070698592.
[9] S. Gu, L. Zhang, W. Zuo and X. Feng, Weighted nuclear norm minimization with application to image denoising, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., (2014), 2862-2869. doi: 10.1109/CVPR.2014.366.
[10] H. Hu and J. Froment, Nonlocal total variation for image denoising, in Symposium on Photonics and Optoelectronics (SOPO), IEEE, (2012), 1-4. doi: 10.1109/SOPO.2012.6270982.
[11] H. Hu, J. Froment and Q. Liu, A note on patch-based low-rank minimization for fast image denoising, J. Visual Commun. Image Representation, 50 (2018), 100-110. doi: 10.1016/j.jvcir.2017.11.013.
[12] I. M. Johnstone and B. W. Silverman, Wavelet threshold estimators for data with correlated noise, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59 (1997), 319-351. doi: 10.1111/1467-9868.00071.
[13] S. Kindermann, S. Osher and P. W. Jones, Deblurring and denoising of images by nonlocal functionals, Multiscale Model. Simul., 4 (2005), 1091-1115. doi: 10.1137/050622249.
[14] C. Knaus and M. Zwicker, Dual-domain image denoising, IEEE Trans. Image Process., 23 (2014), 3114-3125. doi: 10.1109/TIP.2014.2326771.
[15] S. Lefkimmiatis and S. Osher, Non-local structure tensor functionals for image regularization, IEEE Transactions on Computational Imaging, 1 (2015), 16-29. doi: 10.1109/TCI.2015.2434616.
[16] S. Lefkimmiatis, Universal denoising networks: A novel CNN architecture for image denoising, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), 3204-3213. doi: 10.1109/CVPR.2018.00338.
[17] Z. Li, F. Malgouyres and T. Zeng, Regularized non-local total variation and application in image restoration, J. Math. Imaging Vis., 59 (2017), 296-317. doi: 10.1007/s10851-017-0732-6.
[18] J. Liu and X. Zheng, A block nonlocal TV method for image restoration, SIAM J. Imaging Sci., 10 (2017), 920-941. doi: 10.1137/16M1074163.
[19] Y. Lou, X. Zhang, S. Osher and A. Bertozzi, Image recovery via nonlocal operators, J. Sci. Comput., 42 (2010), 185-197. doi: 10.1007/s10915-009-9320-2.
[20] C. Louchet and L. Moisan, Total variation as a local filter, SIAM J. Imaging Sci., 4 (2011), 651-694. doi: 10.1137/100785855.
[21] J. Mairal, F. Bach, J. Ponce, G. Sapiro and A. Zisserman, Non-local sparse models for image restoration, in 2009 IEEE 12th International Conference on Computer Vision, IEEE, (2009), 2272-2279. doi: 10.1109/ICCV.2009.5459452.
[22] N. Pierazzo, M. Lebrun, M. E. Rais, J.-M. Morel and G. Facciolo, Non-local dual image denoising, in 2014 IEEE International Conference on Image Processing (ICIP), IEEE, (2014), 813-817.
[23] S. Ramani, T. Blu and M. Unser, Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms, IEEE Trans. Image Process., 17 (2008), 1540-1554. doi: 10.1109/TIP.2008.2001404.
[24] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.
[25] C. M. Stein, Estimation of the mean of a multivariate normal distribution, Ann. Statist., 9 (1981), 1135-1151. doi: 10.1214/aos/1176345632.
[26] S. Tang, W. Gong, W. Li and W. Wang, Non-blind image deblurring method by local and nonlocal total variation models, Signal Processing, 94 (2014), 339-349. doi: 10.1016/j.sigpro.2013.07.005.
[27] D. Van De Ville and M. Kocher, SURE-based non-local means, IEEE Signal Process. Lett., 16 (2009), 973-976.
[28] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio and P.-A. Manzagol, Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, J. Mach. Learn. Res., 11 (2010), 3371-3408.
[29] M. Werlberger, T. Pock and H. Bischof, Motion estimation with non-local total variation regularization, in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2010), 2464-2471. doi: 10.1109/CVPR.2010.5539945.
[30] J. Xie, L. Xu and E. Chen, Image denoising and inpainting with deep neural networks, in Advances in Neural Information Processing Systems, (2012), 341-349.
[31] K. Zhang, W. Zuo, Y. Chen, D. Meng and L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising, IEEE Trans. Image Process., 26 (2017), 3142-3155. doi: 10.1109/TIP.2017.2662206.
[32] K. Zhang, W. Zuo and L. Zhang, FFDNet: Toward a fast and flexible solution for CNN-based image denoising, IEEE Trans. Image Process., 27 (2018), 4608-4622. doi: 10.1109/TIP.2018.2839891.
[33] X. Zhang, M. Burger, X. Bresson and S. Osher, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction, SIAM J. Imaging Sci., 3 (2010), 253-276. doi: 10.1137/090746379.
[34] X. Zhang and T. F. Chan, Wavelet inpainting by nonlocal total variation, Inverse Probl. Imaging, 4 (2010), 191-210. doi: 10.3934/ipi.2010.4.191.


Figure 1.  An illustration of the set $ \mathcal{U}^2_i $ with $ D = 3 $
Figure 2.  PSNR values versus iterations for different images for NLTV
Figure 3.  PSNR values versus $ \lambda $ for different search window sizes $ D $ with $ \sigma = 20 $
Figure 4.  Denoised image by NLTV with $ D = 3, 11 $ with $ \sigma = 20 $
Figure 5.  Denoised image by NLTV with $ D = 11 $ for different values of $ \lambda $ with $ \sigma = 20 $
Figure 6.  Denoised image by NLTV for different values of $ d $ with $ \sigma = 20 $
Figure 7.  The top row shows the Lena image with regions of size 16$ \times $16 highlighted; the following two rows are enlarged views of these regions; the last two rows give the estimated MSE and the true MSE for the corresponding regions
Figure 8.  Left: image denoised by choosing $ \lambda $ with the smallest estimated MSE; right: image denoised by choosing $ \lambda $ randomly. Top: region size 16$ \times $16; bottom: region size 32$ \times $32
Figure 9.  Left: noisy image and denoised image by FNLTV. Middle: the corresponding Fourier transforms of left column, where the bottom one can also be considered as denoised Fourier transform of noisy image. Right: the Fourier transforms of noise and method noise
Figure 10.  Top row: images denoised by FNLTV with different $ \lambda_f $; the third row: images denoised by NLTV with different $ \lambda $; the second row and bottom row: the corresponding method noise images of the top row and the third row
Figure 11.  Top: Root mean square of method noise versus different $ \lambda $ (for NLTV) or $ \lambda_f $ (for FNLTV); Bottom: PSNR values versus different $ \lambda $ (for NLTV) or $ \lambda_f $ (for FNLTV)
Figure 12.  Left: image denoised by NLTV and FNLTV globally; Middle: image denoised by NLTV and FNLTV locally with non-overlapping regions of size 64$ \times $64; Right: image denoised by NLTV and FNLTV locally with overlapping regions of size 64$ \times $64 for moving step $ n_s = 50 $ (top) and $ n_s = 10 $ (third row). The second and bottom rows show the method noise images of the corresponding images of the top and third rows
Figure 13.  PSNR values for different images with different versions of FNLTV as in Figure 12 and the corresponding versions of NLTV
Figure 14.  Denoised images by ROF model, NL-means, NLTV model and SFNLTV model for Barbara
Figure 15.  Denoised images by ROF model, NL-means, NLTV model and SFNLTV model for Lena
Figure 16.  Denoised Lena images by L-SFNLTV, NLSTV [15], RNLTV [17], BNLTV [18] and SFNLTV in the case $ \sigma = 10 $
Figure 17.  Denoised Lena images by L-SFNLTV, NLSTV [15], RNLTV [17], BNLTV [18] and SFNLTV in the case $ \sigma = 20 $
Figure 18.  Denoised Lena images by L-SFNLTV, NLSTV [15], RNLTV [17], BNLTV [18] and SFNLTV in the case $ \sigma = 50 $
Figure 19.  Denoised images Peppers and House by L-FNLTV and L-SFNLTV in the case $ \sigma = 20 $
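Figures 12-13 compare a global implementation with a local, regionwise implementation in which overlapping regions are denoised independently and then aggregated. The following is a minimal sketch of such an overlap-and-average aggregation, assuming square regions of side $ n $ moved with step $ n_s $ and image dimensions at least $ n $; it illustrates the generic scheme only, and the article's exact region handling and aggregation weights may differ.

    import numpy as np

    def denoise_by_regions(noisy, denoiser, n=64, n_s=50):
        """Denoise overlapping n-by-n regions independently and average the overlapping estimates."""
        h, w = noisy.shape
        acc = np.zeros((h, w), dtype=np.float64)
        cnt = np.zeros((h, w), dtype=np.float64)

        def offsets(size):
            # region start positions with step n_s, forcing the last region to touch the border
            offs = list(range(0, size - n + 1, n_s))
            if offs[-1] != size - n:
                offs.append(size - n)
            return offs

        for i in offsets(h):
            for j in offsets(w):
                region = noisy[i:i + n, j:j + n]
                acc[i:i + n, j:j + n] += denoiser(region)   # e.g. a regionwise (SF)NLTV solver
                cnt[i:i + n, j:j + n] += 1.0
        return acc / cnt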
Table 1.  PSNR values for region sizes 16$ \times $16 and 32$ \times $32, choosing $ \lambda $ randomly or according to the estimated MSE
Region size, choice of $ \lambda $  Lena Barbara Peppers Boats Bridge House Cameraman
16$ \times $16, random $ \lambda $  28.65 26.50 28.14 27.36 25.08 28.92 27.49
16$ \times $16, $ \lambda $ by estimated MSE  30.59 28.02 29.55 29.02 26.54 30.73 29.10
32$ \times $32, random $ \lambda $  28.96 26.50 28.41 27.53 25.08 28.52 27.80
32$ \times $32, $ \lambda $ by estimated MSE  30.89 28.22 29.82 29.19 26.65 30.98 29.27
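Table 1 illustrates the benefit of selecting the regularization parameter $ \lambda $ regionwise by minimizing an estimated MSE. Below is a minimal sketch of such a selection loop; `nltv_denoise` and `estimate_mse` are hypothetical placeholders (the latter standing for an MSE/SURE-type estimate in the spirit of [23, 25]), so this shows the principle only, not the article's implementation.

    import numpy as np

    def select_lambda(noisy_region, sigma, candidates):
        """Return the candidate lambda whose denoised result minimizes the estimated MSE on the region."""
        best_lam, best_est, best_out = None, np.inf, None
        for lam in candidates:
            denoised = nltv_denoise(noisy_region, lam)          # hypothetical regionwise NLTV solver
            est = estimate_mse(noisy_region, denoised, sigma)   # hypothetical MSE/SURE-type estimate
            if est < best_est:
                best_lam, best_est, best_out = lam, est, denoised
        return best_lam, best_out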
Table 2.  PSNR values for different images with NLTV, NL-means, ROF, and SFNLTV in the case $ \sigma = 20 $
Image Lena Barbara Peppers Boats Bridge House Cameraman
NLTV 31.56 28.48 30.16 29.51 26.66 31.68 29.41
NL-means 31.61 $ \bf{29.68} $ 30.28 29.47 26.41 31.78 29.27
ROF 31.00 26.70 29.65 29.19 26.43 31.09 28.77
SFNLTV $ \bf{31.77} $ 29.19 $ \bf{30.29} $ $ \bf{29.89} $ $ \bf{ 26.92} $ $ \bf{ 32.14} $ $ \bf{29.64} $
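Tables 2, 4 and 5 report PSNR values, and several figures display method noise images (the difference between the noisy input and the denoised output). As a reference point, here is a minimal sketch of how these quantities are typically computed, assuming 8-bit grayscale images with peak value 255; this is not code from the article.

    import numpy as np

    def psnr(clean, denoised, peak=255.0):
        # peak signal-to-noise ratio in dB between a clean image and its denoised estimate
        diff = np.asarray(clean, dtype=np.float64) - np.asarray(denoised, dtype=np.float64)
        mse = np.mean(diff ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    def method_noise(noisy, denoised):
        # method noise: the component removed from the noisy image by the denoiser
        return np.asarray(noisy, dtype=np.float64) - np.asarray(denoised, dtype=np.float64)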
Table 3.  Choice of parameters of NLTV, SF(SFNLTVL) and L-SF(L-SFNLTV)
NLTV/SF/L-SF L-SF NLTV $ \lambda=2+0.6\sigma $
$ \sigma $ $ d $ $ D $ $ \sigma_r $ $ \lambda_f $ SF $ D_f=5 $ $ d_f=9 $
10 9 3 $ \sigma $ 6 $ \sigma_{rf}=0.8\sigma $
20 9 14 $ \lambda=0.55\sigma $ $ \lambda_f=1.6+0.02\sigma $
30 11 25 L-SF $ D_f=3 $ $ d_f=5 $
50 15 49 $ \sigma_{rf}=\sigma $ $ \lambda=4 $
Table 4.  Comparisons of PSNR values for $ \sigma = 10 $ and $ \sigma = 20 $
NLTV NLSTV RNLTV BNLTV SFNLTV L-SFNLTV
$ \sigma=10 $
Lena 34.74 34.61 34.17 34.57 35.05 $ \bf{35.58} $
Barbara 32.79 31.29 32.79 33.77 33.93 $ \bf{34.46} $
Peppers 33.80 34.06 33.11 32.31 33.82 $ \bf{34.28} $
Boats 32.80 33.15 32.68 32.83 33.42 $ \bf{33.57} $
Bridge 30.56 30.62 29.14 29.96 30.86 $ \bf{30.97} $
House 34.94 34.52 34.43 34.97 35.49 $ \bf{35.62} $
Cameraman 33.25 33.30 31.97 32.52 33.45 $ \bf{33.65 } $
Monarch 32.98 33.00 31.49 32.41 33.51 $ \bf{33.73 } $
Couple 32.73 33.07 32.78 32.96 33.21 $ \bf{33.57 } $
Fingerprint 30.82 31.17 29.44 30.09 32.05 $ \bf{32.34 } $
Hill 32.66 32.41 32.15 32.89 33.11 $ \bf{33.29 } $
Man 33.18 33.00 32.19 32.96 33.40 $ \bf{33.75} $
$ \sigma=20 $
Lena 31.56 31.18 30.40 31.71 $ {31.77} $ $ \bf{32.54} $
Barbara 28.48 27.23 29.19 30.40 29.19 $ \bf{30.75} $
Peppers 30.16 30.16 29.64 28.38 $ {30.29} $ $ \bf{30.55} $
Boats 29.51 29.80 29.50 29.55 $ {29.89} $ $ \bf{30.42} $
Bridge 26.66 27.03 26.63 26.06 $ { 26.92} $ $ \bf{27.06} $
House 31.68 30.93 30.21 32.17 $ {32.14} $ $ \bf{32.54} $
Cameraman 29.41 29.41 28.54 28.85 $ \bf{29.64} $ $ {29.63} $
Monarch 29.30 28.56 27.82 28.78 $ \bf{29.66} $ 29.61
Couple 29.02 29.50 29.36 29.58 29.36 $ \bf{30.19} $
Fingerprint 26.71 26.83 26.94 26.22 27.50 $ \bf{28.55 } $
Hill 29.58 29.13 28.90 29.92 29.84 $ \bf{30.38 } $
Man 29.77 29.42 28.93 29.66 29.88 $ \bf{30.31 } $
Table 5.  Comparisons of PSNR values for $ \sigma = 30 $ and $ \sigma = 50 $
NLTV NLSTV RNLTV BNLTV SFNLTV L-SFNLTV
$ \sigma=30 $
Lena 29.67 29.86 27.89 29.98 29.82 $ \bf{30.64} $
Barbara 26.16 24.84 26.74 28.59 26.55 $ \bf{28.63} $
Peppers 27.96 $ \bf{28.51} $ 27.26 26.58 28.13 $ {28.44} $
Boats 27.73 28.14 27.18 27.94 27.93 $ \bf{28.54} $
Bridge 24.86 25.04 24.93 24.69 25.01 $ \bf{25.23} $
House 29.69 29.89 27.78 30.36 29.97 $ \bf{30.65} $
Cameraman 27.48 $ \bf{27.76} $ 26.55 27.21 27.58 27.65
Monarch 27.09 26.89 25.95 27.00 $ \bf{27.29 } $ $ \bf{27.29} $
Couple 27.11 27.62 27.01 27.88 27.31 $ \bf{28.26} $
Fingerprint 24.37 25.05 25.14 24.84 24.97 $ \bf{26.48 } $
Hill 28.06 27.79 26.68 28.44 28.22 $ \bf{28.76 } $
Man 28.03 27.93 26.74 28.10 28.08 $ \bf{28.47 } $
$ \sigma=50 $
Lena 27.51 27.67 24.40 27.92 27.61 $ \bf{28.28} $
Barbara 24.00 23.17 23.30 $ \bf{26.21} $ 24.11 $ {26.00} $
Peppers 25.31 $ {26.00} $ 23.91 24.39 25.48 $ \bf{ 26.03} $
Boats 25.62 25.96 23.94 25.92 25.69 $ \bf{26.28} $
Bridge 23.09 23.12 22.50 23.03 23.18 $ \bf{23.41} $
House 27.23 27.57 24.12 28.10 27.40 $ \bf{28.22} $
Cameraman 24.87 $ \bf{25.42} $ 23.41 25.08 24.83 25.20
Monarch 24.39 24.33 22.97 $ \bf{24.69} $ 24.47 24.64
Couple 25.12 25.36 23.72 25.75 25.21 $ \bf{26.00} $
Fingerprint 21.71 22.36 22.48 22.96 22.16 $ \bf{23.96 } $
Hill 26.35 25.87 23.50 26.60 26.46 $ \bf{26.86} $
Man 26.11 25.89 23.59 26.18 26.13 $ \bf{26.41} $
Table 6.  Running time in seconds for grayscale images of size $ 256\times 256 $; NLSTV is run under a Linux system, while the other algorithms are run under a Windows system on another computer with a slightly faster processor
NLSTV RNLTV BNLTV SFNLTV L-SFNLTV
22 3344 11.7 2.4 8.3