
Helsinki Deblur Challenge 2021: Description of photographic data

  • *Corresponding author: Markus Juvonen
  • Image deconvolution is a classical inverse problem that serves well as a computational test bench for reconstruction algorithms. Namely, the direct operator can be modelled in a straightforward way, either by convolution in the spatial domain or by multiplication in the frequency domain. Further, the ill-posedness of the inverse problem can be adjusted via the form of the point spread function (PSF). An open photographic dataset is described, suitable for testing practical deconvolution methods. The image material was designed and collected for the Helsinki Deblur Challenge 2021. The dataset contains pairs of images of the same target taken by two identical cameras under different conditions. One camera is always in focus and produces sharp, low-noise images, while the other produces blurred and noisy photos, as it is progressively more out of focus and uses a higher ISO setting. The data is available at https://doi.org/10.5281/zenodo.4916176

    Mathematics Subject Classification: Primary: 00A69, 15-11; Secondary: 78A46.
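The forward model described in the abstract can be sketched in a few lines. The following is a minimal illustration (our own sketch, not the authors' code, with invented sizes and a disc-shaped PSF as used in the challenge): blurring is realised as circular convolution, i.e. multiplication in the frequency domain.

```python
import numpy as np

def disc_psf(size, radius):
    """Uniform disc-shaped point spread function, normalised to sum to 1."""
    y, x = np.mgrid[:size, :size]
    c = (size - 1) / 2
    disc = ((x - c) ** 2 + (y - c) ** 2 <= radius ** 2).astype(float)
    return disc / disc.sum()

def blur(image, psf):
    """Circular convolution, realised as multiplication in the frequency domain."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # optical transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurry = blur(sharp, disc_psf(64, radius=7))
```

Because the PSF is normalised, the mean intensity of the image is preserved while higher frequencies are attenuated, which is exactly what makes the inverse problem ill-posed for large disc radii.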


  • Figure 1.  A sample set of 10 image pairs. One pair is shown for each amount of blur up to focus step 9

    Figure 2.  A sample set of 10 image pairs representing the focus steps 10–19. See Figure 1 for the first 9 focus steps

    Figure 3.  Some results from the winning team (Theophil Trippe, Jan Macdonald, Maximilian März and Martin Genzel) of the Helsinki Deblur Challenge 2021. Top row: recovery of random text at the most severe level of misfocus. Bottom row: performance of the winning algorithm on one of the "sanity-check images"

    Figure 4.  Tikhonov and TV reconstructions of one sample image for 4 different blur levels. Radii for the PSFs were: r = 7 for step 1, r = 39 for step 5, r = 67 for step 9 and r = 99 for step 13
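Tikhonov reconstructions of the kind shown in Figure 4 admit a closed-form frequency-domain solution. A minimal sketch (our illustration, not the reference implementation behind the figure), assuming periodic boundary conditions and a known disc PSF:

```python
import numpy as np

def disc_psf(size, radius):
    """Uniform disc PSF of the given radius, normalised to sum to 1."""
    y, x = np.mgrid[:size, :size]
    c = (size - 1) / 2
    disc = ((x - c) ** 2 + (y - c) ** 2 <= radius ** 2).astype(float)
    return disc / disc.sum()

def tikhonov_deblur(blurry, psf, alpha):
    """Minimiser of ||h * x - b||^2 + alpha * ||x||^2, computed with FFTs."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    X = np.conj(otf) * np.fft.fft2(blurry) / (np.abs(otf) ** 2 + alpha)
    return np.real(np.fft.ifft2(X))

# Noiseless demo: a smooth test image, blurred with the r = 7 disc of focus step 1.
x = np.cos(2 * np.pi * np.arange(64) / 64)
sharp = 1 + np.outer(x, x)
psf = disc_psf(64, radius=7)
blurry = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(psf))))
recon = tikhonov_deblur(blurry, psf, alpha=1e-6)
```

The regularisation parameter alpha damps the frequencies where the optical transfer function is close to zero; as the disc radius grows (steps 5, 9, 13 in Figure 4), more frequencies are lost and the achievable reconstruction quality degrades accordingly.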

    Figure 5.  Some algorithms tend to produce more text-like results than others. Here is a selection of reconstructions of a QR code image at the most severe blur level (Step 19). It is evident that the various methods produce wildly different results visually

    Figure 6.  Line and point spread functions for focus steps 0, 3, 6 and 9. See Figure 7 for images of PSFs at all 20 blur levels

    Figure 7.  Point spread functions for all 20 focus steps. Note that each row of images has been gamma corrected separately to make the discs more visible

    Figure 8.  Two examples of the natural images for focus steps 0, 4, 9 and 14

    Figure 9.  Sharp and blurry QR codes for focus steps 0, 4, 9 and 14

    Figure 10.  By removing the low-frequency trend of the original image, we are left with only the noise

    Figure 11.  Histogram of the noise in an image from focus step 15. The red curve depicts a standard normal distribution
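The noise analysis illustrated in Figures 10 and 11 can be imitated with a simple high-pass filter. The following is a minimal sketch (our own, with invented parameters, not the exact procedure used for the figures): remove a crude low-frequency trend with an FFT mask, then standardise the residual for comparison with a normal distribution.

```python
import numpy as np

def remove_trend(image, keep=8):
    """Subtract a crude low-frequency trend: keep only the lowest
    `keep` frequencies in each direction and remove that component."""
    F = np.fft.fft2(image)
    mask = np.zeros(F.shape)
    mask[:keep, :keep] = 1
    mask[:keep, -keep:] = 1
    mask[-keep:, :keep] = 1
    mask[-keep:, -keep:] = 1
    trend = np.real(np.fft.ifft2(F * mask))
    return image - trend

# Synthetic demo: a smooth trend plus Gaussian noise of standard deviation 0.1.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:128, :128]
trend = np.sin(2 * np.pi * xx / 128) + 0.5 * np.cos(2 * np.pi * yy / 128)
noisy = trend + rng.normal(0.0, 0.1, trend.shape)
noise = remove_trend(noisy)
z = (noise - noise.mean()) / noise.std()  # standardised residual, cf. Figure 11
```

A histogram of z can then be overlaid with the standard normal density, as in Figure 11, to judge how Gaussian the sensor noise is.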

    Figure 12.  Diagram of the experiment setup. The mirror is half-transparent. Note that the image recorded by Camera 2 is flipped due to the mirror

    Figure 13.  Target image example showing random text using 30-point Verdana font

    Figure 14.  Image showing the cameras, the mirror and the e-ink display set up on the breadboards. The camera in the bottom right-hand corner and the beam-splitter mirror were covered with black cloth during shooting, as shown in Figure 15

    Figure 15.  Image showing Camera 2, the beam-splitter mirror, Camera 1 covered by cloth and the e-ink display from behind



