
A dictionary learning algorithm for compression and reconstruction of streaming data in preset order

This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

Abstract
There has been emerging interest over the last decade in developing and applying dictionary learning (DL) to process massive datasets. Many of these efforts, however, focus on employing DL to compress the data and extract a set of important features from it, while treating the restoration of the original data from this set as a secondary goal. On the other hand, although several methods can process streaming data by updating the dictionary incrementally as new snapshots pass by, most of these algorithms are designed for the setting where the snapshots are randomly drawn from a probability distribution. In this paper, we present a new DL approach to compress and denoise massive datasets in real time, in which the data are streamed through in a preset order (examples include videos and temporal experimental data), so that at any time we can only observe a biased sample set of the whole data. Our approach builds up the dictionary incrementally in a relatively simple manner: if the new snapshot is adequately explained by the current dictionary, we perform sparse coding to find its sparse representation; otherwise, we add the new snapshot to the dictionary, with a Gram-Schmidt process to maintain orthogonality. To compress and denoise noisy datasets, we apply denoising to each snapshot directly before sparse coding, which deviates from the traditional DL approach of achieving denoising through the sparse coding itself. Compared to full-batch matrix decomposition methods, where the whole dataset is kept in memory, and to other mini-batch approaches, where unbiased sampling is often assumed, our approach has minimal requirements on data sampling and storage: (i) each snapshot is seen only once and then discarded, and (ii) the snapshots are drawn in a preset order, so they can be highly biased. Through experiments on climate simulations and scanning transmission electron microscopy (STEM) data, we demonstrate that the proposed approach performs competitively with those methods in data reconstruction and denoising.
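The update rule described in the abstract — sparse-code a snapshot against the current dictionary if it is adequately explained, otherwise append it as a new orthogonal atom — is simple enough to sketch. The following is a minimal illustration rather than the paper's exact algorithm: the residual threshold `tol`, the flattened-vector snapshot format, and the reduction of sparse coding to an orthogonal projection (exact for an orthonormal dictionary, but without an explicit sparsity penalty) are all assumptions made here for brevity.

```python
import numpy as np

def stream_dictionary(snapshots, tol=0.1):
    """Incrementally build an orthonormal dictionary from snapshots
    streamed in a fixed order; each snapshot is seen once, then discarded."""
    atoms = []   # orthonormal dictionary atoms (1-D arrays)
    codes = []   # coefficient vector recorded for each snapshot
    for x in snapshots:
        x = np.asarray(x, dtype=float).ravel()
        if atoms:
            D = np.column_stack(atoms)  # atoms as columns; D.T @ D = I
            c = D.T @ x                 # projection = least-squares code
            r = x - D @ c               # residual unexplained by the dictionary
        else:
            c, r = np.zeros(0), x.copy()
        r_norm = np.linalg.norm(r)
        if r_norm > tol * np.linalg.norm(x):
            # Snapshot not adequately explained: one Gram-Schmidt step
            # (normalize the residual) yields a new atom orthogonal to the rest.
            atoms.append(r / r_norm)
            c = np.append(c, r_norm)
        codes.append(c)
    return atoms, codes
```

Because the dictionary stays orthonormal, reconstructing a snapshot is a single matrix-vector product of the dictionary with its stored coefficients; for noisy streams, the paper applies denoising to each snapshot before this coding step, rather than relying on sparse coding to denoise.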

    Mathematics Subject Classification: Primary: 68P20, 68W27; Secondary: 15A23.

  • Figure 1.  (Climate dataset) The top $ 20 $ components from a dictionary of $ 40 $ components extracted by our algorithm

    Figure 2.  (Climate dataset) Original and reconstructed images by our algorithm

    Figure 3.  (STEM dataset) The top $ 24 $ components from a dictionary of $ 41 $ components extracted by our algorithm

    Figure 4.  (STEM dataset) Original and reconstructed images by our algorithm

    Figure 5.  (Noisy climate dataset) Noisy and reconstructed images by our algorithm

    Figure 6.  (Noisy STEM dataset) Noisy and reconstructed images by our algorithm

    Figure 7.  (Growth of dictionaries) The growth of the dictionary size differs significantly between our two test cases. For the climate dataset, the complex evolution of the data requires more frequent dictionary updates in later stages; the STEM data, on the other hand, progresses in similar cycles, so the dictionary is established early

    Table 1.  A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{climate dataset}$

    Method                                  | Batch size | RRMSE | PSNR (dB)
    $\mathtt{MiniBatchSparsePCA}$           | 1          | 0.842 | 12.561
    $\mathtt{MiniBatchDictionaryLearning}$  | 1          | 0.338 | 26.506
    $\textbf{Our method}$                   | 1          | 0.068 | 43.484
    $\mathtt{IncrementalPCA}$               | 40         | 0.011 | 51.887
    $\mathtt{DictionaryLearning}$           | 2000       | 0.013 | 49.971
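    The excerpt itself does not define the two reported metrics; the standard definitions, assumed here for an $m \times n$ snapshot $X$ and its reconstruction $\hat{X}$, are

    $$ \mathrm{RRMSE} = \frac{\|X - \hat{X}\|_F}{\|X\|_F}, \qquad \mathrm{PSNR} = 20 \log_{10}\!\left( \frac{\sqrt{mn}\, \max_{i,j} |X_{ij}|}{\|X - \hat{X}\|_F} \right) \ \mathrm{dB}. $$

    Lower RRMSE and higher PSNR indicate a more faithful reconstruction; the same metrics apply to Tables 2–4.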

    Table 2.  A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{noisy climate dataset}$

    Method                                  | Batch size | RRMSE | PSNR (dB)
    $\mathtt{MiniBatchSparsePCA}$           | 1          | 0.857 | 12.418
    $\mathtt{MiniBatchDictionaryLearning}$  | 1          | 0.364 | 26.455
    $\textbf{Our method}$                   | 1          | 0.134 | 28.895
    $\mathtt{IncrementalPCA}$               | 61         | 0.183 | 25.996
    $\mathtt{DictionaryLearning}$           | 2000       | 0.179 | 26.236

    Table 3.  A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{STEM dataset}$

    Method                                  | Batch size | RRMSE  | PSNR (dB)
    $\mathtt{MiniBatchSparsePCA}$           | 1          | 0.647  | 16.546
    $\mathtt{MiniBatchDictionaryLearning}$  | 1          | 0.449  | 20.139
    $\textbf{Our method}$                   | 1          | 0.0814 | 36.949
    $\mathtt{IncrementalPCA}$               | 41         | 0.011  | 52.378
    $\mathtt{DictionaryLearning}$           | 11616      | 0.132  | 30.878

    Table 4.  A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{noisy STEM dataset}$

    Method                                  | Batch size | RRMSE | PSNR (dB)
    $\mathtt{MiniBatchSparsePCA}$           | 1          | 0.594 | 17.280
    $\mathtt{MiniBatchDictionaryLearning}$  | 1          | 0.643 | 16.927
    $\textbf{Our method}$                   | 1          | 0.211 | 26.327
    $\mathtt{IncrementalPCA}$               | 39         | 0.086 | 34.156
    $\mathtt{DictionaryLearning}$           | 11616      | 0.152 | 29.423