In this paper, we propose a novel scheme for single-image super-resolution (SR) reconstruction. First, we construct a new self-similarity framework by regarding low-resolution (LR) images as low-rank versions of their corresponding high-resolution (HR) images. Nuclear norm minimization (NNM) is then employed to generate LR image pyramids from the HR images. The structure of this framework facilitates LR feature extraction: we take the quotient image, computed between the HR image and the LR image at the same pyramid layer, as the LR feature. This feature has the same dimension as the LR image, whereas the commonly used gradient feature is four times the dimension of the LR image. In addition, we exploit nonlocal similar patches, both within the same scale and across different scales, to build the HR and LR dictionaries. During encoding, the codes for each LR patch are computed from both the rows and the columns of the LR dictionary, while low-rank and sparse constraints on the code matrix help remove coding noise. Finally, both quantitative and perceptual results demonstrate that the proposed method achieves good SR performance.
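To make the pipeline described above concrete, the sketch below illustrates two of its building blocks under simplifying assumptions: nuclear norm minimization realized as singular value thresholding (SVT) to generate low-rank pyramid layers from an HR image, and the quotient image between the HR and LR layers used as the LR feature. The function names, the single SVT step per layer, the threshold tau, and the eps stabilizer are illustrative choices, not the authors' exact procedure; the low-rank and sparse coding stage over the dictionaries is omitted.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, i.e. the closed-form minimizer of 0.5*||X - M||_F^2 + tau*||X||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return (U * s) @ Vt

def lr_pyramid(hr, num_layers=3, tau=20.0):
    """Hypothetical LR pyramid: each layer is a lower-rank surrogate of the
    previous one, obtained by one NNM (SVT) step. The paper may apply NNM
    patchwise or iteratively; this is only an illustration."""
    layers = [hr.astype(np.float64)]
    for _ in range(num_layers):
        layers.append(svt(layers[-1], tau))
    return layers

def quotient_feature(hr_layer, lr_layer, eps=1e-6):
    """LR feature as the quotient image between the HR and LR images at the
    same pyramid layer; it keeps the same dimension as the LR image."""
    return hr_layer / (lr_layer + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in HR image
    layers = lr_pyramid(hr)
    feat = quotient_feature(layers[0], layers[1])
    print(feat.shape)                             # (64, 64): same size as the LR layer
```

Because SVT only shrinks singular values, every pyramid layer keeps the spatial size of the HR image, so the quotient feature has the same dimension as the LR image; a stacked gradient feature, by contrast, is four times that size.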
Table 1.
The running time for different patch sizes
Patch size | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
Time | 300 | 217 | 173 | 137 | 128 | 121 | 119 |
PSNR (dB) | 30.236 | 30.228 | 30.226 | 30.199 | 30.199 | 30.198 | 30.194 |
SSIM | 0.898 | 0.898 | 0.898 | 0.898 | 0.898 | 0.897 | 0.896 |
Table 2.
Comparison among different methods
Methods | Lena | Child | Butterfly | Foreman | House | Hat | Bike | Parrots | Girl | Pepper |
Bicubic | 29.469 | 30.741 | 24.140 | 32.186 | 29.005 | 29.205 | 22.801 | 27.998 | 33.718 | 30.939 |
 | 0.908 | 0.909 | 0.824 | 0.907 | 0.840 | 0.833 | 0.705 | 0.883 | 0.846 | 0.941 |
ScSR | 30.056 | 32.428 | 24.579 | 32.789 | 30.334 | 29.626 | 23.426 | 28.680 | 34.278 | 31.157 |
 | 0.840 | 0.844 | 0.704 | 0.660 | 0.472 | 0.525 | 0.653 | 0.621 | 0.605 | 0.853 |
LRSC | 29.758 | 30.753 | 24.133 | 32.709 | 29.319 | 29.446 | 23.016 | 28.357 | 34.066 | 30.926 |
 | 0.912 | 0.904 | 0.831 | 0.913 | 0.845 | 0.842 | 0.718 | 0.891 | 0.851 | 0.942 |
Ours | 30.102 | 31.517 | 24.830 | 32.719 | 29.909 | 29.649 | 23.515 | 28.589 | 34.099 | 31.296 |
 | 0.913 | 0.910 | 0.832 | 0.914 | 0.845 | 0.842 | 0.718 | 0.892 | 0.853 | 0.942 |
The values in each cell are PSNR (dB) and SSIM, from top to bottom.
Table 3.
Comparison among different methods
Methods | Lena | Child | Butterfly | Foreman | House | Hat | Bike | Parrots | Girl | Pepper |
Bicubic | 28.913 | 30.432 | 24.320 | 32.814 | 30.213 | 29.921 | 23.411 | 28.536 | 33.685 | 29.901 |
 | 0.933 | 0.933 | 0.894 | 0.947 | 0.912 | 0.896 | 0.804 | 0.927 | 0.900 | 0.963 |
ScSR | 30.136 | 31.452 | 25.104 | 33.468 | 30.878 | 30.559 | 24.089 | 29.264 | 34.194 | 30.778 |
 | 0.672 | 0.702 | 0.574 | 0.590 | 0.415 | 0.440 | 0.557 | 0.548 | 0.489 | 0.683 |
LRSC | 29.054 | 30.753 | 24.466 | 33.203 | 30.387 | 30.381 | 23.411 | 28.408 | 33.915 | 29.897 |
 | 0.938 | 0.904 | 0.894 | 0.940 | 0.915 | 0.902 | 0.801 | 0.931 | 0.901 | 0.963 |
Ours | 29.782 | 30.933 | 24.671 | 33.450 | 30.487 | 29.649 | 23.775 | 28.837 | 33.916 | 29.901 |
 | 0.939 | 0.936 | 0.897 | 0.950 | 0.916 | 0.842 | 0.810 | 0.931 | 0.904 | 0.967 |
The values in each cell are PSNR (dB) and SSIM, from top to bottom.
Table 4.
Noisy case: comparison among different methods
Methods | Lena | Child | Butterfly | Foreman | House | Hat | Bike | Parrots | Girl | Pepper |
Bicubic | 25.059 | 26.557 | 22.790 | 26.795 | 25.980 | 25.726 | 22.918 | 25.262 | 27.138 | 26.470 |
 | 0.594 | 0.593 | 0.615 | 0.506 | 0.484 | 0.444 | 0.550 | 0.497 | 0.476 | 0.598 |
ScSR | 25.094 | 25.422 | 20.334 | 25.416 | 24.784 | 24.660 | 21.567 | 24.309 | 25.564 | 25.393 |
 | 0.404 | 0.409 | 0.465 | 0.243 | 0.231 | 0.1778 | 0.428 | 0.234 | 0.214 | 0.404 |
LRSC | 26.370 | 26.315 | 23.452 | 27.611 | 26.529 | 26.472 | 22.463 | 26.057 | 27.994 | 26.785 |
 | 0.675 | 0.667 | 0.671 | 0.585 | 0.555 | 0.532 | 0.614 | 0.585 | 0.558 | 0.671 |
WNNM | 25.774 | 25.941 | 22.937 | 27.275 | 27.819 | 25.632 | 27.635 | 25.463 | 27.866 | 25.798 |
 | 0.621 | 0.653 | 0.658 | 0.576 | 0.569 | 0.523 | 0.564 | 0.534 | 0.545 | 0.579 |
Ours | 27.125 | 26.315 | 24.121 | 28.833 | 27.375 | 27.352 | 22.988 | 26.896 | 29.295 | 27.720 |
 | 0.727 | 0.724 | 0.717 | 0.659 | 0.619 | 0.598 | 0.661 | 0.653 | 0.632 | 0.732 |
The values in each cell are PSNR (dB) and SSIM, from top to bottom.
[1] J. Allebach and P. W. Wong, Edge-directed interpolation, IEEE International Conference on Image Processing, 1996. doi: 10.1109/ICIP.1996.560768.
[2] S. Baker and T. Kanade, Limits on super-resolution and how to break them, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (2002), 1167-1183. doi: 10.1109/TPAMI.2002.1033210.
[3] T. Chan, S. Esedoglu and A. Yip, Recent developments in total variation image restoration, Mathematical Models of Computer Vision, 2011.
[4] W. Dong, L. Zhang, G. Shi and X. Li, Nonlocally centralized sparse representation for image restoration, IEEE Transactions on Image Processing, 22 (2013), 1620-1630. doi: 10.1109/TIP.2012.2235847.
[5] W. Dong, L. Zhang and G. Shi, Centralized sparse representation for image restoration, International Conference on Computer Vision, (2011), 1259-1266.
[6] W. Dong, L. Zhang, G. Shi and X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Transactions on Image Processing, 20 (2011), 1838-1857. doi: 10.1109/TIP.2011.2108306.
[7] W. T. Freeman, T. R. Jones and E. C. Pasztor, Example-based super-resolution, IEEE Computer Graphics and Applications, 22 (2002), 56-65. doi: 10.1109/38.988747.
[8] S. Gu, W. Zuo, Q. Xie, D. Meng, X. Feng and L. Zhang, Convolutional sparse coding for image super-resolution, International Conference on Computer Vision, (2015), 1823-1831. doi: 10.1109/ICCV.2015.212.
[9] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng and L. Zhang, Weighted nuclear norm minimization and its applications to low level vision, International Journal of Computer Vision, 121 (2017), 183-208. doi: 10.1007/s11263-016-0930-5.
[10] H. Chang, D.-Y. Yeung and Y. Xiong, Super-resolution through neighbor embedding, Computer Vision and Pattern Recognition, (2004), 275-282. doi: 10.1109/CVPR.2004.1315043.
[11] R. G. Keys, Cubic convolution interpolation for digital image processing, IEEE Transactions on Acoustics, Speech, and Signal Processing, 29 (1981), 1153-1160. doi: 10.1109/TASSP.1981.1163711.
[12] X. Li and M. T. Orchard, New edge-directed interpolation, IEEE Transactions on Image Processing, 10 (2001), 1521-1527. doi: 10.1109/83.951537.
[13] Z. Lin and H.-Y. Shum, Fundamental limits of reconstruction-based superresolution algorithms under local translation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26 (2004), 83-97. doi: 10.1109/TPAMI.2004.1261081.
[14] Z. Lin, M. Chen and Y. Ma, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, preprint, arXiv:1009.5055.
[15] G. Liu and S. Yan, Latent low-rank representation for subspace segmentation and feature extraction, International Conference on Computer Vision, (2011), 1615-1622. doi: 10.1109/ICCV.2011.6126422.
[16] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.
[17] J. Shi and C. Qi, Low-rank sparse representation for single image super-resolution via self-similarity learning, International Conference on Image Processing, (2016), 1424-1428. doi: 10.1109/ICIP.2016.7532593.
[18] J. A. Tropp and S. J. Wright, Computational methods for sparse solution of linear inverse problems, Proceedings of the IEEE, 98 (2010), 948-958.
[19] S. L. Wang, D. Zhang and L. Yan, Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis, Computer Vision and Pattern Recognition, (2012), 2216-2223.
[20] H. Wang, S. Z. Li and Y. Wang, Face recognition under varying lighting conditions using self quotient image, IEEE International Conference on Automatic Face and Gesture Recognition, (2004), 819-824.
[21] J. Yang, J. Wright, T. S. Huang and Y. Ma, Image super-resolution via sparse representation, IEEE Transactions on Image Processing, 19 (2010), 2861-2873. doi: 10.1109/TIP.2010.2050625.
[22] C.-Y. Yang, J.-B. Huang and M.-H. Yang, Exploiting self-similarities for single frame super-resolution, Asian Conference on Computer Vision, (2010), 497-510. doi: 10.1007/978-3-642-19318-7_39.
[23] G. Yu, G. Sapiro and S. Mallat, Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity, IEEE Transactions on Image Processing, 21 (2012), 2481-2499. doi: 10.1109/TIP.2011.2176743.
[24] T. Zhang, B. Ghanem, S. Liu, C. Xu and N. Ahuja, Low-rank sparse coding for image classification, International Conference on Computer Vision, (2013), 281-288. doi: 10.1109/ICCV.2013.42.