Stochastic greedy algorithms for multiple measurement vectors

  • * Corresponding author: Jing Qin
Abstract
  • Sparse representation of a single measurement vector (SMV) has been explored in a variety of compressive sensing applications. Recently, SMV models have been extended to solve multiple measurement vector (MMV) problems, where the underlying signal is assumed to have a joint sparse structure. To circumvent the NP-hardness of the $ \ell_0 $ minimization problem, many deterministic MMV algorithms solve convex relaxed models, but with limited efficiency. In this paper, we develop stochastic greedy algorithms for the joint sparse MMV reconstruction problem. In particular, we propose the MMV Stochastic Iterative Hard Thresholding (MStoIHT) and MMV Stochastic Gradient Matching Pursuit (MStoGradMP) algorithms, and we further apply the mini-batching technique to improve their performance. Convergence analysis indicates that, under certain conditions, the proposed algorithms converge faster than their SMV counterparts, i.e., concatenated StoIHT and StoGradMP. Numerical experiments illustrate the superior performance of the proposed algorithms over their SMV counterparts.

    Mathematics Subject Classification: Primary: 68W20, 94A12; Secondary: 47N10.
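The MStoIHT iteration described in the abstract combines a mini-batch gradient step with row-wise hard thresholding to enforce joint sparsity. The following is a minimal NumPy sketch under stated assumptions, not the authors' implementation; the function names (`row_hard_threshold`, `mstoiht`), step size, batch size, and gradient scaling are illustrative choices.

```python
import numpy as np

def row_hard_threshold(X, s):
    """Keep the s rows of X with the largest l2 norms; zero out the rest."""
    keep = np.argsort(np.linalg.norm(X, axis=1))[-s:]
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

def mstoiht(A, Y, s, step=1.0, n_iter=300, batch=10, seed=0):
    """MStoIHT-style sketch: at each step, take a gradient step for
    0.5 * ||A X - Y||_F^2 estimated on a random mini-batch of
    measurement rows, then keep only the s largest rows of X."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)
        # Rescale the mini-batch gradient so it is an unbiased
        # estimate of the full gradient A^T (A X - Y).
        G = (m / batch) * A[idx].T @ (A[idx] @ X - Y[idx])
        X = row_hard_threshold(X - step * G, s)
    return X
```

By construction every iterate has at most s non-zero rows, which is the joint (row) sparsity model used throughout the paper.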

  • Figure 1.  Comparison of BCStoIHT and BMStoIHT for various sparsity levels of the signal matrix. From left to right: batch sizes are 1 and 10

    Figure 2.  Comparison of BCStoIHT and BMStoIHT for various numbers of signals to be reconstructed. From left to right: batch sizes are 1 and 10

    Figure 3.  Comparison of BCStoIHT and BMStoIHT for various noise levels added to the measurement matrix. From left to right: batch sizes are 1 and 10

    Figure 4.  Comparison of BCStoGradMP and BMStoGradMP with various sparsity levels. From left to right: batch sizes are 1 and 10

    Figure 5.  Comparison of BCStoGradMP and BMStoGradMP with various numbers of signals. From left to right: batch sizes are 1 and 10

    Figure 6.  Comparison of BCStoGradMP and BMStoGradMP with various noise levels. From left to right: batch sizes are 1 and 10

    Figure 7.  (a) K-SVD dictionary learned from all 75 frames of the candle video. (b) Support of the sparse coefficient matrix for the 11 extracted frames; white areas indicate non-zero entries. (c-d) Selected columns of the learned K-SVD dictionary

    Table 1.  Comparison of the proposed algorithms and other MMV algorithms

    Method Error Runtime (sec.)
    BMStoIHT $ 5.64\times10^{-7} $ 0.0184
    BMStoGradMP $ 1.05\times10^{-15} $ 0.0023
    M-FOCUSS $ 3.80\times 10^{-3} $ 0.0190
    M-OMP $ 7.45\times 10^{-16} $ 0.0492
    M-SP $ 6.17\times 10^{-16} $ 0.0217
    M-CoSaMP $ 8.22\times 10^{-16} $ 0.0218
    M-HTP $ 5.33\times 10^{-16} $ 0.0912
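For context on the MStoGradMP rows in Table 1, one iteration of an MStoGradMP-style method can be sketched as: estimate the gradient on a mini-batch, merge the 2s rows with largest gradient norms into the current support, solve a least-squares problem restricted to that support, then prune back to s rows. The NumPy sketch below follows this outline under stated assumptions; the helper names (`top_rows`, `mstogradmp`) and parameter defaults are illustrative, not the authors' code.

```python
import numpy as np

def top_rows(M, k):
    """Indices of the k rows of M with the largest l2 norms."""
    return np.argsort(np.linalg.norm(M, axis=1))[-k:]

def mstogradmp(A, Y, s, n_iter=20, batch=20, seed=0):
    """MStoGradMP-style sketch: mini-batch gradient estimation,
    support merging, restricted least squares, and pruning."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    X = np.zeros((A.shape[1], Y.shape[1]))
    supp = np.array([], dtype=int)
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)
        # Mini-batch gradient of 0.5 * ||A X - Y||_F^2.
        G = A[idx].T @ (A[idx] @ X - Y[idx])
        # Merge the 2s rows with largest gradient norms into the support.
        merged = np.union1d(supp, top_rows(G, 2 * s))
        # Least squares restricted to the merged support.
        B, *_ = np.linalg.lstsq(A[:, merged], Y, rcond=None)
        # Prune back to the s largest rows.
        keep = top_rows(B, min(s, len(merged)))
        supp = merged[keep]
        X = np.zeros_like(X)
        X[supp] = B[keep]
    return X
```

With `batch=m` this reduces to a deterministic GradMP-type iteration for the MMV model; smaller batches give the stochastic variant studied in the paper.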

    Table 2.  Performance comparison on joint sparse video recovery

    Method Error Runtime (sec.)
    BMStoGradMP $ 3.09\times10^{-15} $ $ 2.56\times10^{-4} $
    M-FOCUSS $ 3.21\times 10^{-3} $ 0.2452
    M-OMP $ 2.51\times 10^{-16} $ 0.0743
    M-SP $ 2.06\times 10^{-16} $ 0.1297
    M-CoSaMP $ 4.07\times 10^{-16} $ 0.0115
    M-HTP $ 5.72\times 10^{-2} $ 0.0134
    SBC $ 1.19\times 10^{-8} $ $ 6.28\times10^{-4} $
  • [1] A. Agarwal, S. Negahban and M. J. Wainwright, Fast global convergence rates of gradient methods for high-dimensional statistical recovery, Ann. Statist., 40 (2012), 2452–2482. doi: 10.1214/12-AOS1032.
    [2] M. Aharon, M. Elad and A. Bruckstein, K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Transactions on Signal Processing, 54 (2006), 4311-4322. doi: 10.1109/TSP.2006.881199.
    [3] D. Baron, M. B. Wakin, M. F. Duarte, S. Sarvotham and R. G. Baraniuk, Distributed compressed sensing, IEEE Transactions on Information Theory, 52 (2006), 5406-5425.
    [4] J. A. Bazerque and G. B. Giannakis, Distributed spectrum sensing for cognitive radio networks by exploiting sparsity, IEEE Transactions on Signal Processing, 58 (2010), 1847-1862.  doi: 10.1109/TSP.2009.2038417.
    [5] J. D. Blanchard, M. Cermak, D. Hanle and Y. Jing, Greedy algorithms for joint sparse recovery, IEEE Transactions on Signal Processing, 62 (2014), 1694-1704. doi: 10.1109/TSP.2014.2301980.
    [6] J. D. Blanchard, J. Tanner and K. Wei, CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Information and Inference: A Journal of the IMA, 4 (2015), 289-327. doi: 10.1093/imaiai/iav011.
    [7] T. Blumensath and M. E. Davies, Iterative hard thresholding for compressed sensing, Applied and Computational Harmonic Analysis, 27 (2009), 265-274.  doi: 10.1016/j.acha.2009.04.002.
    [8] T. Blumensath and M. E. Davies, Normalized iterative hard thresholding: Guaranteed stability and performance, IEEE Journal of Selected Topics in Signal Processing, 4 (2010), 298-309.  doi: 10.1109/JSTSP.2010.2042411.
    [9] R. Burden and J. Faires, Numerical Analysis, Cengage Learning, 2004.
    [10] J. Chen and X. Huo, Theoretical results on sparse representations of multiple-measurement vectors, IEEE Transactions on Signal Processing, 54 (2006), 4634-4643.  doi: 10.1109/TSP.2006.881263.
    [11] S. F. Cotter, B. D. Rao, K. Engan and K. Kreutz-Delgado, Sparse solutions to linear inverse problems with multiple measurement vectors, IEEE Transactions on Signal Processing, 53 (2005), 2477-2488. doi: 10.1109/TSP.2005.849172.
    [12] W. Dai and O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Transactions on Information Theory, 55 (2009), 2230-2249.  doi: 10.1109/TIT.2009.2016006.
    [13] M. E. Davies and Y. C. Eldar, Rank awareness in joint sparse recovery, IEEE Transactions on Information Theory, 58 (2012), 1135-1146.  doi: 10.1109/TIT.2011.2173722.
    [14] D. L. Donoho, Unconditional bases are optimal bases for data compression and for statistical estimation, Applied and Computational Harmonic Analysis, 1 (1993), 100-115.  doi: 10.1006/acha.1993.1008.
    [15] N. Durgin, C. Huang, R. Grotheer, S. Li, A. Ma, D. Needell and J. Qin, Fast hyperspectral diffuse optical imaging method with joint sparsity, in 41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2019. doi: 10.1109/EMBC.2019.8857069.
    [16] M. Elad, Sparse and Redundant Representations From Theory to Applications in Signal and Image Processing, Springer, 2010. doi: 10.1007/978-1-4419-7011-4.
    [17] P. Feng and Y. Bresler, Spectrum-blind minimum-rate sampling and reconstruction of multiband signals, in Acoustics, Speech and Signal Processing (ICASSP), 1996 IEEE International Conference on, vol. 3, IEEE, 1996, 1688–1691. doi: 10.1109/ICASSP.1996.544131.
    [18] S. Foucart, Hard thresholding pursuit: an algorithm for compressive sensing, SIAM Journal on Numerical Analysis, 49 (2011), 2543-2563.  doi: 10.1137/100806278.
    [19] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, vol. 1, Birkhäuser/Springer, New York, 2013. doi: 10.1007/978-0-8176-4948-7.
    [20] R. Giryes, S. Nam, M. Elad, R. Gribonval and M. E. Davies, Greedy-like algorithms for the cosparse analysis model, Linear Algebra and its Applications, 441 (2014), 22-60. doi: 10.1016/j.laa.2013.03.004.
    [21] Z. He, A. Cichocki, R. Zdunek and J. Cao, CG-M-FOCUSS and its application to distributed compressed sensing, in International Symposium on Neural Networks, Springer, 2008,237–245. doi: 10.1007/978-3-540-87732-5_27.
    [22] Z. Jian, F. Yuli, Z. Qiheng and L. Haifeng, Split Bregman algorithms for the multiple measurement vector problem, Multidimensional Systems and Signal Processing.
    [23] J. M. Kim, O. K. Lee and J. C. Ye, Compressive MUSIC: Revisiting the link between compressive sensing and array signal processing, IEEE Transactions on Information Theory, 58 (2012), 278-301. doi: 10.1109/TIT.2011.2171529.
    [24] K. Lee, Y. Bresler and M. Junge, Subspace methods for joint sparse recovery, IEEE Transactions on Information Theory, 58 (2012), 3613-3641. doi: 10.1109/TIT.2012.2189196.
    [25] S. Li, D. Yang, G. Tang and M. B. Wakin, Atomic norm minimization for modal analysis from random and compressed samples, IEEE Transactions on Signal Processing, 66 (2018), 1817-1831. doi: 10.1109/TSP.2018.2793907.
    [26] H. Lu, X. Long and J. Lv, A fast algorithm for recovery of jointly sparse vectors based on the alternating direction methods, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011,461–469.
    [27] J. R. Magnus, On the concept of matrix derivative, Journal of Multivariate Analysis, 101 (2010), 2200-2206.  doi: 10.1016/j.jmva.2010.05.005.
    [28] A. Majumdar and R. Ward, Rank awareness in group-sparse recovery of multi-echo MR images, Sensors, 13 (2013), 3902-3921.  doi: 10.3390/s130303902.
    [29] A. Majumdar and R. K. Ward, Joint reconstruction of multi-echo MR images using correlated sparsity, Magnetic Resonance Imaging, 29 (2011), 899-906. doi: 10.1016/j.mri.2011.03.008.
    [30] A. Majumdar and R. K. Ward, Face recognition from video: An MMV recovery approach, in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, IEEE, 2012, 2221–2224. doi: 10.1109/ICASSP.2012.6288355.
    [31] M. Mishali and Y. C. Eldar, Reduce and boost: Recovering arbitrary sets of jointly sparse vectors, IEEE Transactions on Signal Processing, 56 (2008), 4692-4702.  doi: 10.1109/TSP.2008.927802.
    [32] D. Needell and J. A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Applied and Computational Harmonic Analysis, 26 (2009), 301-321.  doi: 10.1016/j.acha.2008.07.002.
    [33] D. Needell and R. Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit, IEEE Journal of Selected Topics in Signal Processing, 4 (2010), 310-316.  doi: 10.1109/JSTSP.2010.2042412.
    [34] D. Needell and R. Ward, Batched stochastic gradient descent with weighted sampling, in International Conference Approximation Theory, Springer, 2016,279–306. doi: 10.1007/978-3-319-59912-0_14.
    [35] N. Nguyen, D. Needell and T. Woolf, Linear convergence of stochastic iterative greedy algorithms with sparse constraints, IEEE Transactions on Information Theory, 63 (2017), 6869–6895. doi: 10.1109/TIT.2017.2749330.
    [36] N. Nguyen, S. Chin and T. Tran, A Unified Iterative Greedy Algorithm for Sparsity-Constrained Optimization, 2012.
    [37] Y. C. Pati, R. Rezaiifar and P. S. Krishnaprasad, Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition, in Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, IEEE, 1993, 40–44. doi: 10.1109/ACSSC.1993.342465.
    [38] R. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Transactions on Antennas and Propagation, 34 (1986), 276-280.  doi: 10.1109/TAP.1986.1143830.
    [39] J. A. Tropp, Greed is good: Algorithmic results for sparse approximation, IEEE Transactions on Information theory, 50 (2004), 2231-2242.  doi: 10.1109/TIT.2004.834793.
    [40] J. A. Tropp, A. C. Gilbert and M. J. Strauss, Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit, Signal Processing, 86 (2006), 572-588. doi: 10.1016/j.sigpro.2005.05.030.
    [41] X.-T. Yuan, P. Li and T. Zhang, Gradient hard thresholding pursuit for sparsity-constrained optimization, in International Conference on Machine Learning, 2014,127–135.
    [42] T. Zhang, Sparse recovery with orthogonal matching pursuit under RIP, IEEE Transactions on Information Theory, 57 (2011), 6215-6221.  doi: 10.1109/TIT.2011.2162263.