Interest in developing and applying dictionary learning (DL) to massive datasets has grown over the last decade. Many of these efforts, however, use DL to compress data and extract a set of important features, and treat restoring the original data from this set as a secondary goal. Conversely, although several methods can process streaming data by updating the dictionary incrementally as new snapshots arrive, most of these algorithms are designed for settings where the snapshots are drawn at random from a probability distribution. In this paper, we present a new DL approach that compresses and denoises massive datasets in real time, where the data are streamed in a preset order (examples include videos and temporal experimental data), so at any time we can only observe a biased sample of the whole dataset. Our approach builds the dictionary incrementally in a relatively simple manner: if the new snapshot is adequately explained by the current dictionary, we perform sparse coding to find its sparse representation; otherwise, we add the snapshot to the dictionary, using a Gram-Schmidt step to maintain orthogonality. To compress and denoise noisy datasets, we denoise each snapshot directly before sparse coding, which departs from the traditional DL approach of achieving denoising via the sparse coding itself. Compared with full-batch matrix decomposition methods, which keep the whole dataset in memory, and with other mini-batch approaches, which often assume unbiased sampling, our approach places minimal requirements on data sampling and storage: i) each snapshot is seen only once and then discarded, and ii) the snapshots arrive in a preset order, so the sample can be highly biased. Through experiments on climate simulations and scanning transmission electron microscopy (STEM) data, we demonstrate that the proposed approach performs competitively with these methods in data reconstruction and denoising.
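To make the update rule concrete, here is a minimal sketch in Python of one streaming step, under stated assumptions: the dictionary D is kept with orthonormal columns, the acceptance test uses a relative residual threshold, and scikit-learn's orthogonal matching pursuit stands in for whatever sparse-coding routine the paper uses. The name process_snapshot and the parameters res_tol and n_nonzero are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def process_snapshot(D, x, res_tol=0.1, n_nonzero=10):
    """One streaming step: sparse-code x if the current dictionary D
    (orthonormal columns) explains it adequately; otherwise append the
    normalized residual of x as a new atom (a Gram-Schmidt step)."""
    if D is None:  # the first snapshot seeds the dictionary
        return x[:, None] / np.linalg.norm(x), None
    # Residual of x after projecting onto span(D); cheap since D is orthonormal.
    r = x - D @ (D.T @ x)
    if np.linalg.norm(r) <= res_tol * np.linalg.norm(x):
        # Adequately explained: find a sparse representation of x in D.
        code = orthogonal_mp(D, x, n_nonzero_coefs=min(n_nonzero, D.shape[1]))
        return D, code
    # Not explained: grow the dictionary while keeping its columns orthonormal.
    new_atom = r / np.linalg.norm(r)
    return np.hstack([D, new_atom[:, None]]), None
```

On noisy data, the scheme described in the abstract would denoise each snapshot (e.g., with total variation denoising) before this routine is called, rather than relying on the sparse coding itself to remove the noise.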
Table 1. A comparison between our algorithm and other matrix factorization solvers on the climate dataset.

| Methods | Batch size | RRMSE | PSNR (dB) |
| --- | --- | --- | --- |
|  | 1 | 0.842 | 12.561 |
|  | 1 | 0.338 | 26.506 |
|  | 1 | 0.068 | 43.484 |
|  | 40 | 0.011 | 51.887 |
|  | 2000 | 0.013 | 49.971 |
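The RRMSE and PSNR columns in Tables 1 through 4 are, presumably, the standard definitions: relative Frobenius-norm error and peak signal-to-noise ratio in decibels. A minimal sketch, assuming those definitions:

```python
import numpy as np

def rrmse(X, X_hat):
    # Relative root-mean-square error: ||X - X_hat||_F / ||X||_F.
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

def psnr(X, X_hat, peak=None):
    # Peak signal-to-noise ratio in dB; peak defaults to the data's largest magnitude.
    peak = np.max(np.abs(X)) if peak is None else peak
    return 10.0 * np.log10(peak**2 / np.mean((X - X_hat) ** 2))
```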
Table 2. A comparison between our algorithm and other matrix factorization solvers on the noisy climate dataset.

| Methods | Batch size | RRMSE | PSNR (dB) |
| --- | --- | --- | --- |
|  | 1 | 0.857 | 12.418 |
|  | 1 | 0.364 | 26.455 |
|  | 1 | 0.134 | 28.895 |
|  | 61 | 0.183 | 25.996 |
|  | 2000 | 0.179 | 26.236 |
Table 3. A comparison between our algorithm and other matrix factorization solvers on the STEM dataset.

| Methods | Batch size | RRMSE | PSNR (dB) |
| --- | --- | --- | --- |
|  | 1 | 0.647 | 16.546 |
|  | 1 | 0.449 | 20.139 |
|  | 1 | 0.0814 | 36.949 |
|  | 41 | 0.011 | 52.378 |
|  | 11616 | 0.132 | 30.878 |
Table 4. A comparison between our algorithm and other matrix factorization solvers on the noisy STEM dataset.

| Methods | Batch size | RRMSE | PSNR (dB) |
| --- | --- | --- | --- |
|  | 1 | 0.594 | 17.280 |
|  | 1 | 0.643 | 16.927 |
|  | 1 | 0.211 | 26.327 |
|  | 39 | 0.086 | 34.156 |
|  | 11616 | 0.152 | 29.423 |
Figure 1. (Climate dataset) The top
Figure 2. (Climate dataset) Original and reconstructed images by our algorithm
Figure 3. (STEM dataset) The top
Figure 4. (STEM dataset) Original and reconstructed images by our algorithm
Figure 5. (Noisy climate dataset) Noisy and reconstructed images by our algorithm
Figure 6. (Noisy STEM dataset) Noisy and reconstructed images by our algorithm
Figure 7. (Growth of dictionaries) The growth of the dictionary size differs significantly between our two test cases. For the climate dataset, the complicated evolution of the data requires more frequent dictionary updates in the later stages. The STEM data, on the other hand, progresses in similar cycles, so the dictionary is established early.