
# Stochastic greedy algorithms for multiple measurement vectors

• Corresponding author: Jing Qin
Sparse representation of a single measurement vector (SMV) has been explored in a variety of compressive sensing applications. Recently, SMV models have been extended to solve multiple measurement vector (MMV) problems, where the underlying signal is assumed to have a joint sparse structure. To circumvent the NP-hardness of the $\ell_0$ minimization problem, many deterministic MMV algorithms solve convex relaxed models, with limited efficiency. In this paper, we develop stochastic greedy algorithms for solving the joint sparse MMV reconstruction problem. In particular, we propose the MMV Stochastic Iterative Hard Thresholding (MStoIHT) and MMV Stochastic Gradient Matching Pursuit (MStoGradMP) algorithms, and we also apply the mini-batching technique to further improve their performance. Convergence analysis indicates that, under certain conditions, the proposed algorithms converge faster than their SMV counterparts, i.e., concatenated StoIHT and StoGradMP. Numerical experiments illustrate the superior performance of the proposed algorithms over their SMV counterparts.

Mathematics Subject Classification: Primary: 68W20, 94A12; Secondary: 47N10.
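To make the MMV setting concrete, the sketch below illustrates a stochastic iterative hard thresholding step for the row-sparse model $Y = AX$: at each iteration a random mini-batch of measurement rows drives a gradient step, and the iterate is hard-thresholded to its $k$ largest rows. This is a simplified illustration of the idea, not the paper's exact MStoIHT (e.g., it omits the sampling probabilities and convergence criteria); all names and parameters here are our own.

```python
import numpy as np

def mmv_sto_iht(A, Y, k, step=0.1, n_iters=300, batch=10, seed=0):
    """Illustrative stochastic IHT for the MMV model Y = A X with
    row-sparse X (at most k non-zero rows). Simplified sketch only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = np.zeros((n, Y.shape[1]))
    for _ in range(n_iters):
        idx = rng.choice(m, size=batch, replace=False)  # random mini-batch of measurements
        Ai, Yi = A[idx], Y[idx]
        # gradient step on the sampled block of the least-squares objective
        G = X + step * Ai.T @ (Yi - Ai @ X)
        # joint-sparsity projection: keep the k rows with largest norms
        keep = np.argsort(np.linalg.norm(G, axis=1))[-k:]
        X = np.zeros_like(G)
        X[keep] = G[keep]
    return X
```

The row-wise hard thresholding is what distinguishes the MMV projection from applying SMV thresholding to each column separately: all columns of $X$ are forced to share one support. The step size must be small enough for the sampled gradient step to be contractive.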


• Figure 1.  Comparison of BCStoIHT and BMStoIHT for various sparsity levels of the signal matrix. From left to right: batch sizes are 1 and 10

Figure 2.  Comparison of BCStoIHT and BMStoIHT for various numbers of signals to be reconstructed. From left to right: batch sizes are 1 and 10

Figure 3.  Comparison of BCStoIHT and BMStoIHT for various noise levels added to the measurement matrix. From left to right: batch sizes are 1 and 10

Figure 4.  Comparison of BCStoGradMP and BMStoGradMP with various sparsity levels. From left to right: batch sizes are 1 and 10

Figure 5.  Comparison of BCStoGradMP and BMStoGradMP with various numbers of signals. From left to right: batch sizes are 1 and 10

Figure 6.  Comparison of BCStoGradMP and BMStoGradMP with various noise levels. From left to right: batch sizes are 1 and 10

Figure 7.  (a) K-SVD dictionary learned from all 75 frames of the candle video. (b) Support of the sparse coefficient matrix for the 11 extracted frames; white areas indicate non-zero entries. (c-d) Selected columns of the learned K-SVD dictionary

Table 1.  Comparison of the proposed algorithms and other MMV algorithms

| Method | Error | Runtime (sec.) |
|---|---|---|
| BMStoIHT | $5.64\times10^{-7}$ | 0.0184 |
| BMStoGradMP | $1.05\times10^{-15}$ | 0.0023 |
| M-FOCUSS | $3.80\times 10^{-3}$ | 0.0190 |
| M-OMP | $7.45\times 10^{-16}$ | 0.0492 |
| M-SP | $6.17\times 10^{-16}$ | 0.0217 |
| M-CoSaMP | $8.22\times 10^{-16}$ | 0.0218 |
| M-HTP | $5.33\times 10^{-16}$ | 0.0912 |

Table 2.  Performance comparison on joint sparse video recovery

| Method | Error | Runtime (sec.) |
|---|---|---|
| BMStoGradMP | $3.09\times10^{-15}$ | $2.56\times10^{-4}$ |
| M-FOCUSS | $3.21\times 10^{-3}$ | 0.2452 |
| M-OMP | $2.51\times 10^{-16}$ | 0.0743 |
| M-SP | $2.06\times 10^{-16}$ | 0.1297 |
| M-CoSaMP | $4.07\times 10^{-16}$ | 0.0115 |
| M-HTP | $5.72\times 10^{-2}$ | 0.0134 |
| SBC | $1.19\times 10^{-8}$ | $6.28\times10^{-4}$ |
