June 2020, 14(3): 401-421. doi: 10.3934/ipi.2020019

$ \chi^2 $ test for total variation regularization parameter selection

J. Mead, Department of Mathematics, Boise State University, Boise, ID, USA

Received March 2019; Revised January 2020; Published March 2020

Fund Project: This work was funded by the National Science Foundation, grant DMS10431047

Total Variation (TV) is an effective method of removing noise in digital image processing while preserving edges. The scaling, or regularization, parameter in the TV process defines the amount of denoising, with a value of zero giving a result equivalent to the input signal. The discrepancy principle is a classical method for regularization parameter selection whereby data are fit to a specified tolerance. The tolerance is often identified based on the fact that the least squares data fit is known to follow a $ \chi^2 $ distribution. However, this approach fails when the number of parameters is greater than or equal to the number of data. Typically, heuristics are employed to identify the tolerance in the discrepancy principle, and this leads to oversmoothing. In this work we identify a $ \chi^2 $ test for TV regularization parameter selection, assuming the blurring matrix is full rank. In particular, we prove that the number of degrees of freedom in the TV-regularized residual is the number of data, and this is used to identify the appropriate tolerance. The importance of this work lies in the fact that the $ \chi^2 $ test introduced here automates regularization parameter selection for TV and can be straightforwardly incorporated into any TV algorithm. Results are given for three test images and compared to results obtained using the discrepancy principle and MAP estimates.

Citation: J. Mead. $ \chi^2 $ test for total variation regularization parameter selection. Inverse Problems & Imaging, 2020, 14 (3) : 401-421. doi: 10.3934/ipi.2020019
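To make the selection rule concrete, the sketch below applies the $ \chi^2 $ test to pure denoising (blurring matrix equal to the identity, which is trivially full rank). Since the TV-regularized residual has $ mn $ degrees of freedom, the regularization parameter is chosen so that $ \|u_\lambda - f\|^2 \approx mn\sigma^2 $ for an $ m \times n $ image with noise variance $ \sigma^2 $. This is a minimal illustration under stated assumptions, not the paper's algorithm: scikit-image's denoise_tv_chambolle stands in for the TV solver, and a simple bisection exploits the fact that the residual grows monotonically with the TV weight.

```python
# Hedged sketch: chi^2 selection of the TV weight for denoising only.
# Assumptions (not from the paper): skimage's Chambolle solver as the TV
# routine, bisection as the root-finder, and known noise level sigma.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

def chi2_tv_weight(noisy, sigma, lam_lo=1e-4, lam_hi=1.0, tol=1e-3, max_iter=50):
    """Bisect on the TV weight until ||u - f||^2 hits the chi^2 target m*n*sigma^2."""
    m, n = noisy.shape
    target = m * n * sigma ** 2          # expected residual: mn degrees of freedom
    lam, u = lam_hi, noisy
    for _ in range(max_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        u = denoise_tv_chambolle(noisy, weight=lam)
        resid = float(np.sum((u - noisy) ** 2))
        if abs(resid - target) <= tol * target:
            break
        if resid < target:
            lam_lo = lam                 # residual too small: not enough denoising yet
        else:
            lam_hi = lam                 # residual too large: back off the smoothing
    return lam, u

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())      # skimage's version of the "Cameraman" test image
sigma = 0.05
noisy = clean + sigma * rng.standard_normal(clean.shape)
lam, restored = chi2_tv_weight(noisy, sigma)
print(f"selected TV weight: {lam:.4f}")
```

For deblurring, as in the paper's experiments, the same target $ mn\sigma^2 $ applies to the residual $ \|Au_\lambda - f\|^2 $, at the cost of a TV solver that handles the blur operator $ A $.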

Figure 1.  Example of Theorem 3
Figure 2.  Gaussian filter with variance 9. BSNR 20 (Left), BSNR 30 (Middle), BSNR 40 (Right)
Figure 3.  Uniform $ 15 \times 15 $ filter. BSNR 20 (Left), BSNR 30 (Middle), BSNR 40 (Right)
Figure 4.  Uniform $ 15 \times 15 $ filter. BSNR 20 (Left), BSNR 30 (Middle), BSNR 40 (Right)
Figure 5.  Left vertical axis: $ \chi^2 $ value (- -) and target value $ mn\sigma^2 $ ($ \cdot \cdot $). Right vertical axis: ISNR (–). Gaussian filter with variance 9
Figure 6.  Left vertical axis: $ \chi^2 $ value (- -) and target value $ mn\sigma^2 $ ($ \cdot \cdot $). Right vertical axis: ISNR (–). Uniform $ 15 \times 15 $ filter
Table 1.  ISNR: Gaussian blur with variance 9
BSNR MAP estimate Discrepancy $ \chi^2 $ test Maximum
Cameraman $ (m=n=256) $
40 3.8419 5.5893 5.6180 6.2762
30 2.1214 3.4184 3.4507 4.2052
20 1.4033 2.1465 2.1675 2.7272
MRI $ (m=n=256) $
40 5.2304 6.2520 6.3032 7.0209
30 2.5435 4.2183 4.2656 5.0079
20 1.4242 2.5236 2.5483 3.2903
Mountain $ (m=480, n=640) $
40 2.0275 3.0206 3.0242 3.2735
30 0.9674 1.8473 1.8526 2.1171
20 0.5623 1.0222 1.0265 1.3306
Table 2.  ISNR: Uniform $ 15 \times 15 $ filter
BSNR MAP estimate Discrepancy $ \chi^2 $ test Maximum
Cameraman $ (m=n=256) $
40 5.0019 7.0914 7.1123 7.6329
30 2.8671 5.3398 5.3556 5.6426
20 1.8228 3.6031 3.6241 4.0441
MRI $ (m=n=256) $
40 5.6696 8.1718 8.2201 9.2978
30 3.2113 5.9225 5.9510 6.5944
20 1.7017 3.8260 3.8474 4.5641
Mountain $ (m=480, n=640) $
40 2.8357 4.0904 4.0938 4.3440
30 1.6074 2.9915 2.9945 3.1432
20 0.9594 2.1004 2.1049 2.2803
Table 3.  PSNR: Gaussian blur with variance 9
BSNR MAP estimate Discrepancy $ \chi^2 $ test Maximum ISNR
Cameraman $ (m=n=256) $
40.0 25.3775 27.1479 27.1772 27.8249
30.0 23.6516 24.9772 25.0077 25.6775
20.0 22.6682 23.3500 23.3718 24.0393
MRI $ (m=n=256) $
40.0 27.7080 28.8271 28.8750 29.5010
30.0 24.9866 26.7687 26.8176 27.5807
20.0 23.6173 24.6765 24.7007 25.4519
Mountain $ (m=480, n=640) $
40.0 19.0529 20.0588 20.0624 20.3040
30.0 17.9755 18.8666 18.8720 19.1485
20.0 17.4244 17.8787 17.8830 18.1806
Table 4.  PSNR: Uniform $ 15 \times 15 $ filter. The value in parentheses is the percent relative difference from the PSNR obtained with the $ \lambda $ that achieves maximum ISNR
BSNR MAP estimate Discrepancy $ \chi^2 $ test Maximum ISNR
Cameraman $ (m=n=256) $
40.0 24.1719 (9.84) 26.3170 (1.84) 26.3371 (1.77) 26.8111
30.0 22.0192 (11.10) 24.4882 (1.14) 24.5037 (1.08) 24.7709
20.0 20.8288 (9.70) 22.5266 (2.34) 22.5493 (2.24) 23.0660
MRI $ (m=n=256) $
40.0 25.0072 (12.82) 27.5739 (3.87) 27.6225 (3.70) 28.6832
30.0 22.5294 (13.29) 25.1393 (3.24) 25.1710 (3.12) 25.9823
20.0 20.9540 (11.69) 23.1799 (2.31) 23.1974 (2.23) 23.7269
Mountain $ (m=480, n=640) $
40.0 18.3899 (7.56) 19.6504 (1.22) 19.6538 (1.20) 19.8931
30.0 17.1524 (8.24) 18.5660 (0.68) 18.5688 (0.66) 18.6924
20.0 16.4198 (7.48) 17.5595 (1.06) 17.5633 (1.04) 17.7470
Table 5.  SSIM: Gaussian blur with variance 9
BSNR MAP estimate Discrepancy $ \chi^2 $ test Maximum ISNR
Cameraman $ (m=n=256) $
40.0 0.7956 0.8402 0.8408 0.8379
30.0 0.7393 0.7809 0.7818 0.7855
20.0 0.7024 0.7271 0.7278 0.7327
MRI $ (m=n=256) $
40.0 0.8496 0.8717 0.8725 0.8767
30.0 0.7846 0.8279 0.8288 0.8332
20.0 0.7246 0.7686 0.7693 0.7607
Mountain $ (m=480, n=640) $
40.0 0.5301 0.6197 0.6201 0.6343
30.0 0.4440 0.5156 0.5160 0.5377
20.0 0.3868 0.4348 0.4352 0.4526
Table 6.  SSIM: Uniform $ 15 \times 15 $ filter
BSNR MAP estimate Discrepancy $ \chi^2 $ test Maximum ISNR
Cameraman $ (m=n=256) $
40.0 0.7571 0.8281 0.8285 0.8158
30.0 0.6766 0.7660 0.7662 0.7628
20.0 0.6300 0.6931 0.6938 0.6984
MRI $ (m=n=256) $
40.0 0.7683 0.8341 0.8349 0.8270
30.0 0.6732 0.7694 0.7702 0.7553
20.0 0.6037 0.6934 0.6938 0.6706
Mountain $ (m=480, n=640) $
40.0 0.4609 0.5834 0.5837 0.5965
30.0 0.3602 0.4801 0.4804 0.4906
20.0 0.3060 0.3901 0.3904 0.4003