RWRM: Residual Wasserstein regularization model for image restoration

  • Existing image restoration methods make extensive use of various image priors, but they rarely exploit the potential of residual histograms, in particular their role as an ensemble regularization constraint. In this paper, we propose a residual Wasserstein regularization model (RWRM), in which a residual-histogram constraint is embedded into a class of variational minimization problems. Specifically, using the Wasserstein distance from optimal transport theory, the scheme enforces that the observed residual histogram stays as close as possible to a reference residual histogram. Furthermore, the RWRM unifies residual Wasserstein regularization and image prior regularization to improve restoration performance, and the robustness of its parameter selection makes the proposed algorithms easy to implement. Finally, extensive experiments confirm that the RWRM is effective for both Gaussian denoising and non-blind deconvolution.
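    To make the residual-histogram constraint concrete: for one-dimensional histograms on a common bin grid, the Wasserstein-1 distance has a closed form as the L1 distance between cumulative distribution functions. The sketch below is an illustration of that closed form only, not the paper's implementation; all names and parameter values are hypothetical.

    ```python
    import numpy as np

    def wasserstein1_hist(p, q, bin_width=1.0):
        """W1 distance between two histograms on the same bin grid:
        for 1-D distributions, W1 = bin_width * sum |CDF(p) - CDF(q)|."""
        p = np.asarray(p, dtype=float) / np.sum(p)
        q = np.asarray(q, dtype=float) / np.sum(q)
        return bin_width * np.abs(np.cumsum(p) - np.cumsum(q)).sum()

    # Toy usage (hypothetical data): residual of a noisy observation against a
    # clean image, compared with the histogram a N(0, sigma^2) residual follows.
    rng = np.random.default_rng(0)
    sigma = 25.0
    clean = rng.uniform(0.0, 255.0, size=(64, 64))
    noisy = clean + rng.normal(0.0, sigma, size=clean.shape)

    bins = np.linspace(-100.0, 100.0, 81)              # 2.5-wide bins
    obs_hist, _ = np.histogram(noisy - clean, bins=bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    ref_hist = np.exp(-centers**2 / (2.0 * sigma**2))  # unnormalized Gaussian

    d = wasserstein1_hist(obs_hist, ref_hist, bin_width=bins[1] - bins[0])
    print(f"W1(observed residual, Gaussian reference) = {d:.3f}")
    ```

    In the RWRM, a distance of this kind would enter the objective as a penalty term weighted by $ \beta $, pushing the residual histogram of the current estimate toward the reference.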

    Mathematics Subject Classification: 68U10.

    Figure 11.  Parameter-robustness curves. One can see that $ \lambda $ strongly influences PSNR in TV-L2 ($ \beta = 0 $), whereas in the RWRM ($ \beta \ne 0 $) PSNR is little affected by $ \lambda $. Thus, the parameters in our RWRM are robust.

    Figure 1.  Ten test images for Gaussian denoising. From left to right, they are Airfield, Lena, Peppers, Plane, Couple, Girl, Lake, Boat, Loco and Martha respectively

    Figure 2.  TV-L2 denoising results on noisy image ($ \sigma = 25 $) with different regularization parameters $ \lambda $. One can find that the TV-L2 model with a moderate $ \lambda \in [1500,1800] $ can optimize the denoising effect
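    As background for these curves, TV-L2 denoising solves $ \min_u \mathrm{TV}(u) + \tfrac{\lambda}{2}\|u-f\|_2^2 $, where larger $ \lambda $ keeps $ u $ closer to the noisy input $ f $. A minimal smoothed-TV gradient-descent sketch follows; it is illustrative only, with periodic boundaries and parameter values chosen for images in $[0,1]$ rather than the paper's $ \lambda \in [1500, 1800] $ setting, whose scale depends on the intensity range and discretization.

    ```python
    import numpy as np

    def tv_l2_denoise(f, lam=15.0, tau=0.05, eps=1e-2, iters=300):
        """Gradient descent on J(u) = sum_x sqrt(|grad u|^2 + eps^2)
        + (lam/2)||u - f||^2, with periodic boundaries (illustrative only)."""
        u = f.astype(float).copy()
        for _ in range(iters):
            ux = np.roll(u, -1, axis=1) - u        # forward differences
            uy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(ux**2 + uy**2 + eps**2)  # smoothed gradient norm
            px, py = ux / mag, uy / mag
            # Backward divergence: negative adjoint of the forward gradient.
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= tau * (-div + lam * (u - f))
        return u
    ```

    Sweeping lam in this sketch reproduces the qualitative behavior of the figure: too small a value oversmooths, too large a value leaves noise behind.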

    Figure 3.  Relationships between PSNR and model parameters taking the image "Boat". (a) In the TV-L2 model ($ \beta = 0 $), the effect of the parameter $ \lambda $ on PSNR. (b) In our RWRM, the effect of Wasserstein regularization parameter $ \beta $ on PSNR in the cases of three different $ \lambda $

    Figure 4.  PSNR comparison curves for the proposed RWRM ($ \beta \ne 0 $) and TV-L2 ($ \beta = 0 $) on the ten images in Gaussian denoising

    Figure 5.  The distribution comparison for the final residuals of TV-L2 (green), RWRM (red), and the ground truth (blue). One can see that the RWRM residual estimate is more accurate than that of TV-L2

    Figure 6.  Denoising results on the three images Peppers, Plane and Martha. Here (c) and (d) are the result images generated by the TV-L2 and our RWRM respectively. One can see that our approach clearly yields better visual results

    Figure 7.  Local magnification comparison of denoised images. One can see that the RWRM can recover better image edge details

    Figure 8.  Eight test images. From left to right, they are labeled as 1 to 8 respectively

    Figure 9.  The denoising output results (PSNR and SSIM) of BM3D [5], NCSR [6], GHP [49] and our RWRM on the test image 8. From the first to the third row, the Gaussian noise standard deviations of the corresponding noisy images are 60, 80, and 100, respectively

    Figure 10.  The denoising effect of the RWRM and the learning-based method LDCNN [48]

    Figure 12.  Ten test images for non-blind deconvolution. From left to right, they are Plane, Goldhill, Couple, Peppers, Martha, Boat, Girl, Lena, Bacteria and Brain respectively

    Figure 13.  The graph of PSNR versus $ \lambda $ in the RWRM with $ \beta = 0 $ (i.e., the TV-L2 model) applied to non-blind deconvolution

    Figure 14.  The graph of PSNR versus $ \beta $ ($ \lambda = 1800 $) in the RWRM applied to non-blind deconvolution

    Figure 15.  In non-blind deconvolution, PSNR comparison curves of the TV-L2 ($ \beta = 0 $) and the RWRM ($ \beta \ne 0 $) on the ten test images

    Figure 16.  The results of non-blind deconvolution on the images Couple, Martha and Bacteria. Here (a) shows the blurred and noisy images, and (b) and (c) are the results of the TV-L2 and the RWRM, respectively. One can see that the RWRM has obvious advantages in both visual and numerical terms

    Figure 17.  Local enlarged images on non-blind deconvolution. One can see that the RWRM can recover better edge information

    Figure 18.  Another eight test images for the non-blind deconvolution. From left to right, they are labeled as 1 to 8 respectively

    Figure 19.  Visual effect comparison of non-blind deconvolution by three methods NL-$ {{\rm{H}}^{\rm{1}}} $, NL-TV and the proposed RWRM

    Table 1.  Denoising PSNR and SSIM comparison on the ten test images in Fig. 1: the TV-L2 model and our RWRM.

    Images        Airfield Lena   Peppers Plane  Couple Girl   Lake   Boat   Loco   Martha Avg.
    TV-L2 (PSNR)  25.55  29.23  28.70  28.02  31.49  29.91  27.19  27.92  25.52  29.44  28.30
    TV-L2 (SSIM)  0.6689 0.7410 0.7233 0.7198 0.8300 0.7502 0.7129 0.7136 0.6212 0.7458 0.7227
    RWRM (PSNR)   26.55  30.16  30.47  29.60  32.28  31.02  27.90  28.71  26.90  30.74  29.43
    RWRM (SSIM)   0.7008 0.8183 0.8070 0.8361 0.8515 0.8244 0.7635 0.7990 0.7500 0.8195 0.7970

    Table 2.  PSNR (dB) and SSIM values of BM3D [5], NCSR [6], GHP [49] and our RWRM in Gaussian denoising

    $ \sigma $ Methods       1     2     3     4     5     6     7     8     Avg.
    $ \sigma=60 $ BM3D (PSNR)  24.77 23.33 29.95 23.36 27.43 24.80 23.27 27.81 25.59
                  BM3D (SSIM)  0.488 0.472 0.793 0.455 0.604 0.549 0.519 0.687 0.571
                  NCSR (PSNR)  24.62 23.25 29.57 23.23 27.05 24.66 23.04 27.61 25.38
                  NCSR (SSIM)  0.466 0.460 0.804 0.440 0.589 0.539 0.494 0.690 0.560
                  GHP (PSNR)   24.74 23.32 29.49 23.32 27.18 24.71 23.17 27.59 25.44
                  GHP (SSIM)   0.486 0.479 0.790 0.458 0.598 0.548 0.512 0.683 0.569
                  RWRM (PSNR)  24.75 23.25 29.58 23.27 27.48 24.75 23.00 27.88 25.50
                  RWRM (SSIM)  0.490 0.481 0.793 0.463 0.602 0.535 0.505 0.691 0.570
    $ \sigma=70 $ BM3D (PSNR)  24.37 22.93 29.26 23.03 26.97 24.43 22.84 27.34 25.15
                  BM3D (SSIM)  0.461 0.444 0.777 0.432 0.586 0.530 0.493 0.670 0.549
                  NCSR (PSNR)  24.24 22.83 28.99 22.85 26.60 24.25 22.56 27.13 24.93
                  NCSR (SSIM)  0.439 0.426 0.794 0.410 0.572 0.517 0.464 0.678 0.538
                  GHP (PSNR)   24.35 22.89 28.93 22.93 26.75 24.31 22.65 27.12 24.99
                  GHP (SSIM)   0.461 0.446 0.775 0.428 0.582 0.524 0.480 0.667 0.545
                  RWRM (PSNR)  24.36 22.90 29.07 22.94 27.10 24.42 22.55 27.51 25.11
                  RWRM (SSIM)  0.464 0.451 0.788 0.437 0.588 0.518 0.481 0.681 0.551
    $ \sigma=80 $ BM3D (PSNR)  24.04 22.59 28.66 22.75 26.61 24.11 22.47 26.92 24.77
                  BM3D (SSIM)  0.441 0.421 0.763 0.414 0.572 0.513 0.472 0.655 0.531
                  NCSR (PSNR)  23.92 22.47 28.52 22.55 26.22 23.91 22.17 26.70 24.56
                  NCSR (SSIM)  0.419 0.400 0.785 0.388 0.559 0.501 0.441 0.667 0.520
                  GHP (PSNR)   24.02 22.52 28.46 22.61 26.36 23.96 22.23 26.66 24.60
                  GHP (SSIM)   0.440 0.421 0.763 0.408 0.567 0.508 0.457 0.651 0.527
                  RWRM (PSNR)  24.10 22.53 28.66 22.67 26.80 24.14 22.17 27.18 24.78
                  RWRM (SSIM)  0.445 0.428 0.782 0.416 0.576 0.504 0.458 0.672 0.535
    $ \sigma=90 $ BM3D (PSNR)  23.75 22.32 28.12 22.49 26.30 23.81 22.15 26.48 24.43
                  BM3D (SSIM)  0.423 0.403 0.751 0.397 0.559 0.498 0.454 0.640 0.516
                  NCSR (PSNR)  23.65 22.16 28.10 22.29 25.88 23.62 21.82 26.30 24.23
                  NCSR (SSIM)  0.404 0.379 0.778 0.372 0.548 0.489 0.422 0.658 0.506
                  GHP (PSNR)   23.68 22.15 27.99 22.29 25.97 23.65 21.72 26.21 24.21
                  GHP (SSIM)   0.424 0.398 0.750 0.390 0.554 0.493 0.430 0.636 0.509
                  RWRM (PSNR)  23.84 22.21 28.27 22.40 26.53 23.90 21.87 26.87 24.49
                  RWRM (SSIM)  0.435 0.410 0.775 0.402 0.567 0.493 0.437 0.666 0.523
    $ \sigma=100 $ BM3D (PSNR) 23.48 22.07 27.69 22.27 26.00 23.55 21.85 26.16 24.13
                   BM3D (SSIM) 0.409 0.388 0.738 0.387 0.548 0.487 0.437 0.627 0.503
                   NCSR (PSNR) 23.43 21.89 27.71 22.05 25.58 23.35 21.50 25.92 23.93
                   NCSR (SSIM) 0.392 0.362 0.771 0.360 0.539 0.479 0.406 0.649 0.495
                   GHP (PSNR)  23.35 21.71 27.47 21.93 25.56 23.26 21.22 25.69 23.77
                   GHP (SSIM)  0.408 0.374 0.735 0.374 0.537 0.476 0.407 0.616 0.491
                   RWRM (PSNR) 23.64 22.00 28.00 22.20 26.32 23.67 21.60 26.61 24.26
                   RWRM (SSIM) 0.421 0.392 0.766 0.385 0.560 0.484 0.419 0.660 0.511

    Table 3.  Denoising PSNR and SSIM results of RWRM and learning-based method LDCNN [48]

    $ \sigma $ Methods        1      2      3      4      5      6      7      8
    $ \sigma=40 $ RWRM (PSNR)  25.76  24.38  30.74  24.30  28.50  25.71  24.28  28.88
                  RWRM (SSIM)  0.5612 0.5609 0.8095 0.5510 0.6468 0.5914 0.5947 0.7204
                  LDCNN (PSNR) 26.14  24.83  31.58  24.81  28.66  26.05  24.92  29.24
                  LDCNN (SSIM) 0.6104 0.6141 0.8342 0.6043 0.6625 0.6419 0.6457 0.7424
    $ \sigma=50 $ RWRM (PSNR)  25.06  23.66  30.11  23.64  27.92  25.15  23.51  28.31
                  RWRM (SSIM)  0.4914 0.4893 0.8013 0.4781 0.6176 0.5570 0.5402 0.7012
                  LDCNN (PSNR) 25.47  24.09  30.90  24.12  28.07  25.45  24.17  28.55
                  LDCNN (SSIM) 0.5522 0.5520 0.8178 0.5415 0.6330 0.5994 0.5894 0.7132

    Table 4.  PSNR (dB) comparison of non-blind deconvolution on the ten test images: the TV-L2 model ($ \beta = 0 $) and the RWRM ($ \beta \ne 0 $).

    Images Plane Goldhill Couple Peppers Martha Boat Girl Lena Bacteria Brain Avg.
    TV-L2 24.07 25.89 27.81 25.70 24.89 24.43 27.12 24.19 27.77 28.20 26.01
    RWRM 24.80 26.36 28.30 26.57 25.89 24.94 27.89 24.82 28.37 28.69 26.66

    Table 5.  The non-blind deconvolution PSNR and SSIM result comparison of the proposed RWRM, NL-$ {{\rm{H}}^{\rm{1}}} $ and NL-TV

    Images                           1      2      3      4      5      6      7      8      Avg.
    NL-$ {{\rm{H}}^{\rm{1}}} $ (PSNR) 25.64  30.17  26.48  27.11  26.03  26.36  26.47  25.30  26.70
    NL-$ {{\rm{H}}^{\rm{1}}} $ (SSIM) 0.7800 0.8218 0.7982 0.7600 0.8481 0.8044 0.8785 0.7999 0.8114
    NL-TV (PSNR)                     25.83  29.91  26.42  26.95  26.74  26.41  27.40  25.50  26.90
    NL-TV (SSIM)                     0.7752 0.8209 0.7944 0.7649 0.8504 0.8048 0.8631 0.7972 0.8089
    RWRM (PSNR)                      26.24  30.95  26.83  27.88  27.98  26.90  28.24  26.32  27.67
    RWRM (SSIM)                      0.7993 0.8358 0.8096 0.7800 0.8707 0.8148 0.8599 0.8217 0.8240
  • [1] E. J. Candes and T. Tao, Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Transactions on Information Theory, 52 (2006), 5406-5425. doi: 10.1109/TIT.2006.885507.
    [2] A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, 20 (2004), 89-97. 
    [3] T. S. Cho, C. L. Zitnick, N. Joshi, S. B. Kang, R. Szeliski and W. T. Freeman, Image restoration by matching gradient distributions, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34 (2012), 683-694. doi: 10.1109/TPAMI.2011.166.
    [4] P. L. Combettes and V. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Modeling and Simulation, 4 (2005), 1168-1200. doi: 10.1137/050626090.
    [5] K. Dabov, A. Foi, V. Katkovnik and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE Transactions on Image Processing, 16 (2007), 2080-2095. doi: 10.1109/TIP.2007.901238.
    [6] W. Dong, L. Zhang, G. Shi and X. Li, Nonlocally centralized sparse representation for image restoration, IEEE Transactions on Image Processing, 22 (2013), 1620-1630. doi: 10.1109/TIP.2012.2235847.
    [7] D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, 52 (2006), 1289-1306.  doi: 10.1109/TIT.2006.871582.
    [8] M. Elad and M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Transactions on Image Processing, 15 (2006), 3736-3745. doi: 10.1109/TIP.2006.881969.
    [9] M. El Gheche, J.-F. Aujol, Y. Berthoumieu and C.-A. Deledalle, Texture reconstruction guided by the histogram of a high-resolution patch, IEEE Trans. Image Process, 26 (2017), 549-560. doi: 10.1109/TIP.2016.2627812.
    [10] W. Feller, An Introduction to Probability Theory and Its Applications Ⅱ, John Wiley & Sons, 1968.
    [11] A. L. Gibbs, Convergence in the Wasserstein metric for Markov chain Monte Carlo algorithms with applications to image restoration, Stochastic Models, 20 (2004), 473-492. doi: 10.1081/STM-200033117.
    [12] G. Gilboa and S. Osher, Nonlocal operators with applications to image processing, Multiscale Modeling and Simulation, 7 (2008), 1005-1028.  doi: 10.1137/070698592.
    [13] R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall Press, 2007.
    [14] S. Harmeling, C. J. Schuler and H. C. Burger, Image denoising: Can plain neural networks compete with BM3D?, In IEEE Conference on Computer Vision and Pattern Recognition, (2012), 2392-2399.
    [15] R. He, X. Feng, W. Wang, X. Zhu and C. Yang, W-LDMM: A Wasserstein driven low-dimensional manifold model for noisy image restoration, Neurocomputing, 371 (2020), 108-123. doi: 10.1016/j.neucom.2019.08.088.
    [16] D. J. Heeger and J. R. Bergen, Pyramid-based texture analysis/synthesis, In International Conference on Image Processing, (1995), 229-238.
    [17] V. Jain and H. S. Seung, Natural image denoising with convolutional networks, In International Conference on Neural Information Processing Systems, (2008), 769-776.
    [18] D. Krishnan and R. Fergus, Fast image deconvolution using hyper-laplacian priors, In International Conference on Neural Information Processing Systems, (2009), 1033-1041.
    [19] X. Lan, S. Roth, D. Huttenlocher and M. J. Black, Efficient belief propagation with learned higher-order Markov random fields, In European Conference on Computer Vision, Springer, (2006), 269-282. doi: 10.1007/11744047_21.
    [20] S. Z. Li, Markov Random Field Modeling in Image Analysis, Springer-Verlag London, Ltd., London, 2009.
    [21] Y. Lou, X. Zhang, S. Osher and A. Bertozzi, Image recovery via nonlocal operators, Journal of Scientific Computing, 42 (2010), 185-197. doi: 10.1007/s10915-009-9320-2.
    [22] J. Mairal, M. Elad and G. Sapiro, Sparse representation for color image restoration, IEEE Transactions on Image Processing, 17 (2008), 53-69. doi: 10.1109/TIP.2007.911828.
    [23] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, Inc., San Diego, CA, 1998.
    [24] X. Mei, W. Dong, B. G. Hu and S. Lyu, Unihist: A unified framework for image restoration with marginal histogram constraints, In Computer Vision and Pattern Recognition, pages 3753-3761, 2015.
    [25] S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Modeling and Simulation, 4 (2005), 460-489. doi: 10.1137/040605412.
    [26] O. Pele and M. Werman, Fast and robust earth mover's distances, In IEEE International Conference on Computer Vision, (2010), 460-467. doi: 10.1109/ICCV.2009.5459199.
    [27] G. Peyré, J. Fadili and J. Rabin, Wasserstein active contours, In IEEE International Conference on Image Processing, (2013), 2541-2544.
    [28] J. Portilla, V. Strela, M. J. Wainwright and E. P. Simoncelli, Image denoising using scale mixtures of Gaussians in the wavelet domain, IEEE Transactions on Image Processing, 12 (2003), 1338-1351. doi: 10.1109/TIP.2003.818640.
    [29] J. Rabin and G. Peyré, Wasserstein regularization of imaging problem, In IEEE International Conference on Image Processing, (2011), 1541-1544. doi: 10.1109/ICIP.2011.6115740.
    [30] A. Rajwade, A. Rangarajan and A. Banerjee, Image denoising using the higher order singular value decomposition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2012), 849-862.
    [31] W. H. Richardson, Bayesian-based iterative method of image restoration, Journal of the Optical Society of America, 62 (1972), 55-59.  doi: 10.1364/JOSA.62.000055.
    [32] Y. Romano, M. Protter and M. Elad, Single image interpolation via adaptive nonlocal sparsity-based modeling, IEEE Transactions on Image Processing, 23 (2014), 3085-3098. doi: 10.1109/TIP.2014.2325774.
    [33] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.
    [34] O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier and F. Lenzen, Variational Methods in Imaging, Springer, 2009.
    [35] U. Schmidt, Q. Gao and S. Roth, A generative perspective on mrfs in low-level vision, In Computer Vision and Pattern Recognition, 2010, pages 1751-1758. doi: 10.1109/CVPR.2010.5539844.
    [36] S. Osher, Z. Shi and W. Zhu, Low dimensional manifold model for image processing, SIAM Journal on Imaging Sciences, 10 (2017), 1669-1690. doi: 10.1137/16M1058686.
    [37] D. Strong and T. Chan, Edge-preserving and scale-dependent properties of total variation regularization, Inverse Problems, 19 (2003), 165-187.  doi: 10.1088/0266-5611/19/6/059.
    [38] K. Suzuki, I. Horiba and N. Sugie, Efficient approximation of neural filters for removing quantum noise from images, IEEE Transactions on Signal Processing, 50 (2002), 1787-1799. doi: 10.1109/TSP.2002.1011218.
    [39] P. Swoboda and C. Schnorr, Convex variational image restoration with histogram priors, SIAM Journal on Imaging Sciences, 6 (2013), 1719-1735.  doi: 10.1137/120897535.
    [40] G. Tartavel, G. Peyré and Y. Gousseau, Wasserstein loss for image synthesis and restoration, SIAM Journal on Imaging Sciences, 9 (2016), 1726-1755. doi: 10.1137/16M1067494.
    [41] F. Thaler, K. Hammernik, C. Payer, M. Urschler and D. Stern, Sparse-view CT reconstruction using Wasserstein GANs, 2018, 75-82. doi: 10.1007/978-3-030-00129-2_9.
    [42] M. Vauhkonen, D. Vadasz, P. A. Karjalainen, E. Somersalo and J. P. Kaipio, Tikhonov regularization and prior information in electrical impedance tomography, IEEE Transactions on Medical Imaging, 17 (1998), 285-293. doi: 10.1109/42.700740.
    [43] C. Villani, Optimal Transport: Old and New, volume 338, Springer-Verlag, Berlin, 2009 doi: 10.1007/978-3-540-71050-9.
    [44] Y. Weiss and W. T. Freeman, What makes a good model of natural images?, In 2007 IEEE Conference on Computer Vision and Pattern Recognition, (2007) pages 1-8. doi: 10.1109/CVPR.2007.383092.
    [45] O. J. Woodford, C. Rother and V. Kolmogorov, A global perspective on map inference for low-level vision, In IEEE International Conference on Computer Vision, 2009, pages 2319-2326. doi: 10.1109/ICCV.2009.5459434.
    [46] F. Wu, B. Wang, D. Cui and L. Li, Single image super-resolution based on Wasserstein GANs, In Chinese Control Conference (CCC), 2018. doi: 10.23919/ChiCC.2018.8484039.
    [47] Q. Yang, P. Yan, Y. Zhang, H. Yu, Y. Shi, X. Mou, M. K. Kalra, Y. Zhang, L. Sun and G. Wang, Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss, IEEE Transactions on Medical Imaging, 37 (2018), 1348-1357. doi: 10.1109/TMI.2018.2827462.
    [48] K. Zhang, W. Zuo, S. Gu and L. Zhang, Learning deep CNN denoiser prior for image restoration, In IEEE Conference on Computer Vision and Pattern Recognition, (2017), 2808-2817. doi: 10.1109/CVPR.2017.300.
    [49] W. Zuo, L. Zhang, C. Song, D. Zhang and H. Gao, Gradient histogram estimation and preservation for texture enhanced image denoising, IEEE Transactions on Image Processing, 23 (2014), 2459-2472. doi: 10.1109/TIP.2014.2316423.
Open Access Under a Creative Commons license
