doi: 10.3934/dcdss.2021102
Online First


A dictionary learning algorithm for compression and reconstruction of streaming data in preset order

Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA

* Corresponding author: Hoang Tran (tranha@ornl.gov)

Received: February 2021. Revised: June 2021. Early access: September 2021.

Fund Project: This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

There has been growing interest over the last decade in developing and applying dictionary learning (DL) to process massive datasets. Many of these efforts, however, focus on employing DL to compress data and extract a set of important features, while treating the restoration of the original data from this set as a secondary goal. On the other hand, although several methods can process streaming data by updating the dictionary incrementally as new snapshots pass by, most of those algorithms are designed for settings where the snapshots are randomly drawn from a probability distribution. In this paper, we present a new DL approach to compress and denoise massive datasets in real time, in which the data are streamed through in a preset order (examples include videos and temporal experimental data), so that at any time we can only observe a biased sample of the whole dataset. Our approach builds up the dictionary incrementally in a relatively simple manner: if the new snapshot is adequately explained by the current dictionary, we perform sparse coding to find its sparse representation; otherwise, we add the new snapshot to the dictionary, using a Gram-Schmidt process to maintain orthogonality. To compress and denoise noisy datasets, we apply denoising to each snapshot directly, before sparse coding, which deviates from the traditional dictionary learning approach of achieving denoising through sparse coding itself. Compared to full-batch matrix decomposition methods, where the whole dataset is kept in memory, and to other mini-batch approaches, where unbiased sampling is often assumed, our approach places minimal requirements on data sampling and storage: i) each snapshot is seen only once and then discarded, and ii) the snapshots are drawn in a preset order, so the sample can be highly biased. Through experiments on climate simulations and scanning transmission electron microscopy (STEM) data, we demonstrate that the proposed approach performs competitively with those methods in data reconstruction and denoising.
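To make the update rule above concrete, here is a minimal sketch in Python/NumPy of one streaming step. It is an illustration under stated assumptions, not the authors' reference implementation: the acceptance test (relative residual below a tolerance `tol`), the coefficient threshold used to sparsify the code, and the helper name `process_snapshot` are all introduced here for the example.

```python
import numpy as np

def process_snapshot(D, x, tol=1e-2, sparsify=1e-3):
    """One streaming step: sparse-code x against D, or grow D via Gram-Schmidt.

    D : (d, k) array with orthonormal columns (the current dictionary), or None.
    x : (d,) snapshot, seen once and then discarded.
    Returns the (possibly enlarged) dictionary and the code for x.
    """
    # For noisy streams, the paper denoises x *here*, before sparse coding
    # (e.g., total-variation denoising in the spirit of refs. [3] and [7]).
    if D is None:                               # the first snapshot seeds the dictionary
        D = (x / np.linalg.norm(x)).reshape(-1, 1)

    c = D.T @ x                                 # coding is cheap: columns of D are orthonormal
    c[np.abs(c) < sparsify * np.linalg.norm(x)] = 0.0  # keep only significant coefficients
    r = x - D @ c                               # part of x not explained by the dictionary

    if np.linalg.norm(r) > tol * np.linalg.norm(x):
        # Snapshot not adequately explained: append it as a new atom. One
        # Gram-Schmidt step (normalizing the residual) keeps D orthonormal.
        D = np.hstack([D, (r / np.linalg.norm(r)).reshape(-1, 1)])
        c = D.T @ x                             # re-code against the enlarged dictionary
    return D, c

# Usage: a single pass over the stream, in its preset order.
D = None
for x in np.random.rand(100, 256):              # stand-in for the real snapshot stream
    D, code = process_snapshot(D, x)
```

Because the dictionary stays orthonormal, sparse coding reduces to a single matrix-vector product rather than an iterative solve, which is what makes the single-pass, batch-size-one setting of the experiments below feasible.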

Citation: Richard Archibald, Hoang Tran. A dictionary learning algorithm for compression and reconstruction of streaming data in preset order. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021102
References:
[1] M. Aharon and M. Elad, Sparse and redundant modeling of image content using an image-signature-dictionary, SIAM J. Imaging Sci., 1 (2008), 228-247. doi: 10.1137/07070156X.
[2] M. Aharon, M. Elad and A. Bruckstein, K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process., 54 (2006), 4311-4322. doi: 10.1109/TSP.2006.881199.
[3] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision, 20 (2004), 89-97.
[4] K. Degraux, U. S. Kamilov, P. T. Boufounos and D. Liu, Online convolutional dictionary learning for multimodal imaging, in 2017 IEEE International Conference on Image Processing (ICIP), (2017), 1617-1621.
[5] M. Elad and M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15 (2006), 3736-3745. doi: 10.1109/TIP.2006.881969.
[6] J. Galewsky, R. K. Scott and L. M. Polvani, An initial-value problem for testing numerical models of the global shallow-water equations, Tellus A: Dynamic Meteorology and Oceanography, 56 (2004), 429-440. doi: 10.3402/tellusa.v56i5.14436.
[7] P. Getreuer, Rudin-Osher-Fatemi total variation denoising using split Bregman, Image Processing On Line, 2 (2012), 74-95. doi: 10.5201/ipol.2012.g-tvd.
[8] S. Ghosh, A. Choquette, S. May, M. P. Oxley, A. R. Lupini, S. T. Pantelides and A. Y. Borisevich, Identifying novel polar distortion modes in engineered magnetic oxide superlattices, Microscopy and Microanalysis, 23 (2017), 1590-1591. doi: 10.1017/S1431927617008613.
[9] R. Jenatton, G. Obozinski and F. Bach, Structured sparse principal component analysis, in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (eds. Y. W. Teh and M. Titterington), vol. 9 of Proceedings of Machine Learning Research, Chia Laguna Resort, Sardinia, Italy, (2010), 366-373, http://proceedings.mlr.press/v9/jenatton10a.html.
[10] S. P. Kasiviswanathan, H. Wang, A. Banerjee and P. Melville, Online l1-dictionary learning with application to novel document detection, in Advances in Neural Information Processing Systems, 2258-2266.
[11] J. Liu, C. Garcia-Cardona, B. Wohlberg and W. Yin, First- and second-order methods for online convolutional dictionary learning, SIAM J. Imaging Sci., 11 (2018), 1589-1628. doi: 10.1137/17M1145689.
[12] C. Lu, J. Shi and J. Jia, Online robust dictionary learning, in 2013 IEEE Conference on Computer Vision and Pattern Recognition, (2013), 415-422. doi: 10.1109/CVPR.2013.60.
[13] J. Mairal, F. Bach and J. Ponce, Task-driven dictionary learning, IEEE Trans. Pattern Anal. Mach. Intell., 34 (2012), 791-804.
[14] J. Mairal, F. Bach, J. Ponce and G. Sapiro, Online dictionary learning for sparse coding, in Proceedings of the 26th Annual International Conference on Machine Learning, (2009), 689-696. doi: 10.1145/1553374.1553463.
[15] A. Mensch, J. Mairal, B. Thirion and G. Varoquaux, Dictionary learning for massive matrix factorization, in Proceedings of the 33rd International Conference on Machine Learning (ICML), 48 (2016), 1737-1746.
[16] R. D. Nair, S. J. Thomas and R. D. Loft, A discontinuous Galerkin transport scheme on the cubed sphere, Monthly Weather Review, 133 (2005), 814-828. doi: 10.1175/MWR2890.1.
[17] B. A. Olshausen and D. J. Field, Sparse coding with an overcomplete basis set: A strategy employed by V1?, Vision Research, 37 (1997), 3311-3325. doi: 10.1016/S0042-6989(97)00169-7.
[18] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot and E. Duchesnay, Scikit-learn: Machine learning in Python, J. Mach. Learn. Res., 12 (2011), 2825-2830.
[19] D. A. Ross, J. Lim, R.-S. Lin and M.-H. Yang, Incremental learning for robust visual tracking, International Journal of Computer Vision, 77 (2008), 125-141. doi: 10.1007/s11263-007-0075-7.
[20] R. Rubinstein, A. M. Bruckstein and M. Elad, Dictionaries for sparse representation modeling, Proceedings of the IEEE, 98 (2010), 1045-1057.
[21] K. Slavakis and G. B. Giannakis, Online dictionary learning from big data using accelerated stochastic approximation algorithms, in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (2014), 16-20.
[22] Z. Szabó, B. Póczos and A. Lőrincz, Online group-structured dictionary learning, in CVPR 2011, (2011), 2865-2872.
[23] I. Tosic and P. Frossard, Dictionary learning, IEEE Signal Processing Magazine, 28 (2011), 27-38. doi: 10.1109/MSP.2010.939537.
[24] T. H. Vu and V. Monga, Fast low-rank shared dictionary learning for image classification, IEEE Trans. Image Process., 26 (2017), 5160-5175. doi: 10.1109/TIP.2017.2729885.
[25] Y. Xu and W. Yin, A fast patch-dictionary method for whole image recovery, Inverse Probl. Imaging, 10 (2016), 563-583. doi: 10.3934/ipi.2016012.
[26] S. Zhang, S. Kasiviswanathan, P. C. Yuen and M. Harandi, Online dictionary learning on symmetric positive definite manifolds with vision applications, in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 3165-3173.


Figure 1. (Climate dataset) The top $20$ components from a dictionary of $40$ components extracted by our algorithm
Figure 2. (Climate dataset) Original and reconstructed images by our algorithm
Figure 3. (STEM dataset) The top $24$ components from a dictionary of $41$ components extracted by our algorithm
Figure 4. (STEM dataset) Original and reconstructed images by our algorithm
Figure 5. (Noisy climate dataset) Noisy and reconstructed images by our algorithm
Figure 6. (Noisy STEM dataset) Noisy and reconstructed images by our algorithm
Figure 7. (Growth of dictionaries) The growth of the dictionary differs significantly between our two test cases. For the climate dataset, the increasingly complex evolution of the data requires more frequent dictionary updates in later stages; the STEM data, on the other hand, progresses in similar cycles, so the dictionary is established early
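Curves like those in Figure 7 can be reproduced by recording the dictionary width after every snapshot. A short sketch, reusing the assumed `process_snapshot` helper from the example after the abstract:

```python
import numpy as np  # assumes process_snapshot from the earlier sketch is in scope

sizes = []
D = None
for x in np.random.rand(100, 256):   # stand-in for the real ordered snapshot stream
    D, _ = process_snapshot(D, x)
    sizes.append(D.shape[1])         # dictionary size after this snapshot
```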
Table 1. A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{climate dataset}$
Methods                                  Batch size    RRMSE    PSNR
$\mathtt{MiniBatchSparsePCA}$                     1    0.842    12.561
$\mathtt{MiniBatchDictionaryLearning}$            1    0.338    26.506
$\textbf{Our method}$                             1    0.068    43.484
$\mathtt{IncrementalPCA}$                        40    0.011    51.887
$\mathtt{DictionaryLearning}$                  2000    0.013    49.971
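For context, the baseline solvers in these tables are scikit-learn estimators. The sketch below shows how such a comparison can be set up; it is a hedged illustration, not the paper's script: the RRMSE and PSNR definitions are the standard ones (the page does not spell out the exact conventions used), `X` is a stand-in data matrix, and `n_components=40` simply mirrors the dictionary size in Figure 1.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def rrmse(X, X_hat):
    # Relative root-mean-square error (standard definition, assumed here).
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

def psnr(X, X_hat):
    # Peak signal-to-noise ratio in dB (standard definition, assumed here).
    mse = np.mean((X - X_hat) ** 2)
    return 10.0 * np.log10(X.max() ** 2 / mse)

X = np.random.rand(200, 1024)                   # stand-in for the flattened snapshots
model = MiniBatchDictionaryLearning(n_components=40, batch_size=1)
codes = model.fit_transform(X)                  # sparse codes, one snapshot per batch
X_hat = codes @ model.components_               # reconstruction from the learned dictionary
print(f"RRMSE = {rrmse(X, X_hat):.3f}, PSNR = {psnr(X, X_hat):.3f}")
```

The batch-size column reports how many snapshots each solver processes at once: the streaming method and the mini-batch baselines work with one snapshot at a time, while $\mathtt{IncrementalPCA}$ and $\mathtt{DictionaryLearning}$ require larger blocks, or the full dataset, in memory.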
Table 2. A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{noisy climate dataset}$
Methods                                  Batch size    RRMSE    PSNR
$\mathtt{MiniBatchSparsePCA}$                     1    0.857    12.418
$\mathtt{MiniBatchDictionaryLearning}$            1    0.364    26.455
$\textbf{Our method}$                             1    0.134    28.895
$\mathtt{IncrementalPCA}$                        61    0.183    25.996
$\mathtt{DictionaryLearning}$                  2000    0.179    26.236
Table 3. A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{STEM dataset}$
Methods                                  Batch size    RRMSE    PSNR
$\mathtt{MiniBatchSparsePCA}$                     1    0.647    16.546
$\mathtt{MiniBatchDictionaryLearning}$            1    0.449    20.139
$\textbf{Our method}$                             1    0.0814   36.949
$\mathtt{IncrementalPCA}$                        41    0.011    52.378
$\mathtt{DictionaryLearning}$                 11616    0.132    30.878
Table 4. A comparison between our algorithm and other matrix factorization solvers in $\mathtt{scikit-learn}$ for dictionary learning on the $\textbf{noisy STEM dataset}$
Methods                                  Batch size    RRMSE    PSNR
$\mathtt{MiniBatchSparsePCA}$                     1    0.594    17.280
$\mathtt{MiniBatchDictionaryLearning}$            1    0.643    16.927
$\textbf{Our method}$                             1    0.211    26.327
$\mathtt{IncrementalPCA}$                        39    0.086    34.156
$\mathtt{DictionaryLearning}$                 11616    0.152    29.423