February  2021, 15(1): 79-107. doi: 10.3934/ipi.2020066

Stochastic greedy algorithms for multiple measurement vectors

1. Department of Mathematics, University of Kentucky, Lexington, KY 40506
2. Department of Mathematics, University of California, Los Angeles, CA 90095
3. Department of Mathematics, University of California, Irvine, CA 92697
4. Department of Mathematics, Wofford College, Spartanburg, SC 29303
5. Center for Outcomes Research and Evaluation, Yale University, New Haven, CT 06511
6. Spiceworks, Austin, TX 78746

* Corresponding author: Jing Qin

Received: December 2019. Revised: July 2020. Early access: November 2020. Published: February 2021.

Sparse representation of a single measurement vector (SMV) has been explored in a variety of compressive sensing applications. Recently, SMV models have been extended to solve multiple measurement vector (MMV) problems, where the underlying signal is assumed to have a joint sparse structure. To circumvent the NP-hardness of the $ \ell_0 $ minimization problem, many deterministic MMV algorithms solve convex relaxed models, with limited efficiency. In this paper, we develop stochastic greedy algorithms for solving the joint sparse MMV reconstruction problem. In particular, we propose the MMV Stochastic Iterative Hard Thresholding (MStoIHT) and MMV Stochastic Gradient Matching Pursuit (MStoGradMP) algorithms, and we also utilize the mini-batching technique to further improve their performance. Convergence analysis indicates that the proposed algorithms are able to converge faster than their SMV counterparts, i.e., concatenated StoIHT and StoGradMP, under certain conditions. Numerical experiments illustrate the superior effectiveness of the proposed algorithms over their SMV counterparts.

Citation: Jing Qin, Shuang Li, Deanna Needell, Anna Ma, Rachel Grotheer, Chenxi Huang, Natalie Durgin. Stochastic greedy algorithms for multiple measurement vectors. Inverse Problems & Imaging, 2021, 15 (1) : 79-107. doi: 10.3934/ipi.2020066
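The MStoIHT scheme described in the abstract alternates a gradient step on a randomly sampled batch of measurements with a projection onto row-sparse matrices. A minimal sketch of that idea, assuming Gaussian measurements and a fixed step size — the function names, parameters, and step-size choice are illustrative, not the authors' implementation:

```python
import numpy as np

def row_hard_threshold(X, k):
    """Keep the k rows of X with largest l2 norm; zero out the rest."""
    norms = np.linalg.norm(X, axis=1)
    keep = np.argsort(norms)[-k:]
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

def mstoiht(A, Y, k, step=0.5, n_iter=400, batch=50, rng=None):
    """Sketch of MMV stochastic IHT: at each iteration, take a gradient
    step on a random batch of rows of the system A X = Y, then project
    onto the set of matrices with at most k nonzero rows."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    X = np.zeros((n, Y.shape[1]))
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)
        Ai, Yi = A[idx], Y[idx]
        # stochastic gradient of the (batch-averaged) least-squares loss
        grad = Ai.T @ (Ai @ X - Yi) / batch
        X = row_hard_threshold(X - step * grad, k)
    return X
```

On a well-conditioned noiseless instance, the iterates contract toward the jointly sparse signal matrix; the mini-batch size plays the role of the batching parameter studied in the paper's experiments.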
Figure 1.  Comparison of BCStoIHT and BMStoIHT for various sparsity levels of the signal matrix. From left to right: batch sizes are 1 and 10
Figure 2.  Comparison of BCStoIHT and BMStoIHT for various numbers of signals to be reconstructed. From left to right: batch sizes are 1 and 10
Figure 3.  Comparison of BCStoIHT and BMStoIHT for various noise levels added to the measurement matrix. From left to right: batch sizes are 1 and 10
Figure 4.  Comparison of BCStoGradMP and BMStoGradMP with various sparsity levels. From left to right: batch sizes are 1 and 10
Figure 5.  Comparison of BCStoGradMP and BMStoGradMP with various numbers of signals. From left to right: batch sizes are 1 and 10
Figure 6.  Comparison of BCStoGradMP and BMStoGradMP with various noise levels. From left to right: batch sizes are 1 and 10
Figure 7.  (a) K-SVD dictionary learned from the total 75 frames from the candle video. (b) Support of sparse coefficient matrix for extracted 11 frames. The sparse coefficient matrix has non-zero entries on white area. (c-d) Some columns of the learned K-SVD dictionary
Table 1.  Comparison of the proposed algorithms and other MMV algorithms
Method Error Runtime (sec.)
BMStoIHT $ 5.64\times10^{-7} $ 0.0184
BMStoGradMP $ 1.05\times10^{-15} $ 0.0023
M-FOCUSS $ 3.80\times 10^{-3} $ 0.0190
M-OMP $ 7.45\times 10^{-16} $ 0.0492
M-SP $ 6.17\times 10^{-16} $ 0.0217
M-CoSaMP $ 8.22\times 10^{-16} $ 0.0218
M-HTP $ 5.33\times 10^{-16} $ 0.0912
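Among the baselines in Table 1, M-OMP is the MMV extension of orthogonal matching pursuit (often called simultaneous OMP): atoms are selected by the size of their correlation with the residual aggregated across all measurement vectors. A minimal sketch of that greedy selection — illustrative only, not the benchmarked implementation:

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: greedily pick the atom whose correlations with
    the current residual, summed over all measurement vectors, are
    largest; then least-squares refit on the selected support."""
    n = A.shape[1]
    support = []
    R = Y.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)  # aggregate correlation per atom
        scores[support] = -np.inf                 # never reselect an atom
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        C_s, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ C_s                          # update joint residual
    C = np.zeros((n, Y.shape[1]))
    C[support] = C_s
    return C
```

In the noiseless, well-conditioned regime this recovers the joint support exactly, which is consistent with the near-machine-precision errors the table reports for M-OMP.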
Table 2.  Performance comparison on joint sparse video recovery
Method Error Runtime (sec.)
BMStoGradMP $ 3.09\times10^{-15} $ $ 2.56\times10^{-4} $
M-FOCUSS $ 3.21\times 10^{-3} $ 0.2452
M-OMP $ 2.51\times 10^{-16} $ 0.0743
M-SP $ 2.06\times 10^{-16} $ 0.1297
M-CoSaMP $ 4.07\times 10^{-16} $ 0.0115
M-HTP $ 5.72\times 10^{-2} $ 0.0134
SBC $ 1.19\times 10^{-8} $ $ 6.28\times10^{-4} $
