RECOVERY OF BLOCK SPARSE SIGNALS UNDER THE CONDITIONS ON BLOCK RIC AND ROC BY BOMP

Abstract. In this paper, we consider the block orthogonal matching pursuit (BOMP) algorithm and the block orthogonal multi-matching pursuit (BOMMP) algorithm to recover block sparse signals from an underdetermined system of linear equations. We first introduce the notion of the block restricted orthogonality constant (ROC), which is a generalization of the standard restricted orthogonality constant, and establish sufficient conditions in terms of the block RIC and ROC to ensure the exact and stable recovery of any block sparse signal in both the noiseless and noisy cases through the BOMP and BOMMP algorithms. We finally show that the sufficient condition on the block RIC and ROC is sharp for the BOMP algorithm.


1. Introduction. Compressed sensing, which is a novel sampling theory, has many important practical applications, including signal processing [48], medical imaging [32] and radar systems [1]. The central goal of compressed sensing is to efficiently recover sparse signals $x$ from the underdetermined linear system
$$y = \Phi x + e, \tag{1}$$
where $y \in \mathbb{R}^m$ is a measurement vector, $\Phi \in \mathbb{R}^{m \times n}$ (usually $m \ll n$) is a given sensing matrix, the vector $x \in \mathbb{R}^n$ is an unknown $K$-sparse signal (i.e., $x$ has at most $K$ nonzero entries) and $e \in \mathbb{R}^m$ is a vector of measurement errors. Many efficient methods, such as convex optimization [4]-[6], [10, 23, 41], greedy algorithms [14, 20, 38, 39, 47] and iterative thresholding algorithms [2, 3, 18, 21], have been developed to recover $x$ from the sensing matrix $\Phi$ and the measurements $y$.
The orthogonal matching pursuit (OMP) algorithm [45], which is a greedy algorithm that exactly recovers the $K$-sparse signal $x$ or its support in (1), has received much attention due to its high efficiency and competitive performance. Two commonly used measures for theoretically characterizing the recovery conditions of sparse recovery algorithms are the restricted isometry constant (RIC) and the restricted orthogonality constant (ROC) of [8]. A matrix $\Phi$ satisfies the restricted isometry property (RIP) of order $K$ if there exists a constant $\delta_K$ such that
$$(1 - \delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K)\|x\|_2^2$$
holds for all $K$-sparse signals $x$, and the smallest such constant $\delta_K$ is called the RIC. The ROC $\theta_{K,K'}$ is the smallest quantity satisfying
$$|\langle \Phi x, \Phi x' \rangle| \le \theta_{K,K'}\, \|x\|_2 \|x'\|_2$$
for all $K$-sparse signals $x$ and $K'$-sparse signals $x'$ with disjoint supports. A number of conditions based on the RIC and the ROC for the exact recovery of $K$-sparse signals $x$ by the OMP algorithm have been extensively studied in the literature. For example, the sufficient condition $\delta_{K+1} < \frac{1}{\sqrt{K+1}}$ in [55] guarantees exact support recovery of the $K$-sparse signal $x$ when the minimum magnitude of the nonzero elements of $x$ satisfies appropriate constraints. The orthogonal multi-matching pursuit (OMMP) algorithm [50, 59], which is a natural extension of the OMP algorithm, has also received much attention in recent years. Many RIC- and ROC-based sufficient conditions have been proposed to ensure the exact and stable recovery of sparse signals in both the noiseless and noisy cases via the OMMP algorithm; see, e.g., [11, 16, 42, 50, 54, 59]. This paper extends such results from conventional sparse signals to the block sparse signals defined in the next paragraph.
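For two fixed disjoint supports, the quantity controlled by the ROC reduces to the largest singular value of a cross-Gram block. The following numerical sketch illustrates this; it is an illustration we add here, not part of the original analysis, and the function name `roc_on_supports` is our own:

```python
import numpy as np

def roc_on_supports(Phi, S1, S2):
    """Restricted-orthogonality quantity for two FIXED disjoint supports:
    max |<Phi u, Phi v>| over unit-norm u, v supported on S1 and S2.
    It equals the largest singular value of Phi_S1^T Phi_S2; the true ROC
    theta_{K,K'} is the maximum of this value over all disjoint support
    pairs of sizes K and K'."""
    return float(np.linalg.norm(Phi[:, S1].T @ Phi[:, S2], ord=2))
```

For instance, for a matrix with two unit-norm columns whose inner product is 1/2, the value on the two singleton supports is exactly 1/2, while any matrix with orthonormal columns gives 0.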
Suppose $x \in \mathbb{R}^n$ is partitioned into $l$ blocks $x[1], x[2], \cdots, x[l]$ of lengths $d_1, d_2, \cdots, d_l$, so that $n = \sum_{i=1}^{l} d_i$. Define the mixed $\ell_2/\ell_0$ norm $\|x\|_{2,0}$ as the number of blocks $x[i]$ with $\|x[i]\|_2 \ne 0$, and the block support $T = \text{b-supp}(x) = \{i : \|x[i]\|_2 \ne 0\}$; then a block $K$-sparse signal $x$ satisfies $\|x\|_{2,0} \le K$, or $|T| \le K$. It is obvious that if $d_1 = d_2 = \cdots = d_l = 1$, then the block sparse signal $x$ reduces to the conventional sparse signal in [7], [22]. Similarly to the block partition of $x$, the sensing matrix $\Phi$ can be expressed as a concatenation of $l$ column blocks, i.e., $\Phi = [\Phi[1], \Phi[2], \cdots, \Phi[l]]$, where $\Phi[i]$ is the $i$-th column block of $\Phi$ for $i = 1, 2, \cdots, l$.
To recover block sparse signals $x$, a natural approach that exploits block sparsity is the mixed $\ell_2/\ell_0$ norm minimization
$$\min_x \|x\|_{2,0} \quad \text{subject to} \quad \Phi x = y$$
in the noiseless case. In the noisy setting, the mixed $\ell_2/\ell_0$ norm minimization is
$$\min_x \|x\|_{2,0} \quad \text{subject to} \quad \|\Phi x - y\|_2 \le \varepsilon.$$
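As a concrete numerical illustration (a sketch we add, with hypothetical dimensions), the mixed $\ell_2/\ell_0$ norm and the block support of an equal-block-length signal can be computed as follows:

```python
import numpy as np

# Hypothetical block-sparse signal: l = 4 blocks of equal length d = 3 (n = 12).
x = np.zeros(12)
x[0:3] = [1.0, 0.0, 2.0]    # block 1 is nonzero
x[6:9] = [0.0, -1.5, 0.0]   # block 3 is nonzero

blocks = x.reshape(4, 3)                       # one row per block
block_norms = np.linalg.norm(blocks, axis=1)   # l2 norm of each block
l20_norm = int(np.count_nonzero(block_norms))  # ||x||_{2,0}: number of nonzero blocks
b_supp = np.flatnonzero(block_norms) + 1       # block support T (1-based, as in the text)
```

Here $\|x\|_{2,0} = 2$, so $x$ is block 2-sparse, although it has three nonzero entries and would only be 3-sparse in the conventional sense.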
The main aim of this paper is to explicitly take this block structure into account to recover block sparse signals from (1) via the BOMP and BOMMP algorithms. We first introduce the notion of the block restricted orthogonality constant (ROC), which is a generalization of the standard restricted orthogonality constant, and establish sufficient conditions in terms of the block RIC and ROC to ensure the exact and stable recovery of any block sparse signal in both the noiseless and noisy cases through the BOMP and BOMMP algorithms. Our results on the recovery of block sparse signals improve those of [27] and [58] for the BOMP and BOMMP algorithms, respectively. We finally show that the sufficient condition on the block RIC and ROC is sharp for the BOMP algorithm.
1.1. Block orthogonal matching pursuit algorithm. As a generalized version of the OMP algorithm, the BOMP algorithm was independently proposed in [24], [44] and is described in Algorithm 1. In [24], Eldar et al. introduced the BOMP algorithm and proposed an exact recovery sufficient condition based on the block coherence $\mu_B$ for block $K$-sparse signals $x$ with $d_1 = \cdots = d_l = d$. They showed that if the block coherence satisfies
$$Kd < \frac{1}{2}\left(\mu_B^{-1} + d\right) - (d-1)\frac{\nu}{\mu_B},$$
then all block $K$-sparse signals $x$ can be recovered from $y = \Phi x$ via the BOMP algorithm in $K$ iterations, where $\nu$ is the sub-coherence of $\Phi$. If $d = 1$, then $\nu = 0$ and the above condition reduces to $K < (\mu^{-1} + 1)/2$ [45], where $\mu$ is the standard coherence. Some matrices satisfy the block restricted isometry property (RIP) while the block-coherence condition fails; a concrete example was constructed in [27], and the authors showed that if the sensing matrix $\Phi$ satisfies the block RIP with the block RIC $\delta^I_{K+1} < \frac{1}{\sqrt{K+1}}$, then the BOMP algorithm can exactly recover any block $K$-sparse signal $x$ from the measurements $y = \Phi x$ in $K$ iterations. In this paper, we show that the condition $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ also suffices, and that this condition is sharp in the sense that, for any $K \ge 1$, there exists a matrix $\Phi$ satisfying the block RIP with $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} = 1$ and a block $K$-sparse signal $x$ such that the BOMP algorithm may fail to recover $x$ in $K$ iterations. Moreover, we prove that the BOMP algorithm can recover the correct block support of any block $K$-sparse signal $x$ from $y = \Phi x + e$ under the above condition with an additional lower bound on the $\ell_2$ norms of the nonzero blocks of the signal for $\ell_2$ bounded noise.

Algorithm 1. The BOMP algorithm
Input: measurements $y \in \mathbb{R}^m$, sensing matrix $\Phi \in \mathbb{R}^{m \times n}$, block sparsity level $K$.
Initialize: iteration count $k = 0$, residual vector $r^0 = y$, estimated block support set $\Lambda^0 = \emptyset$.
While the stopping criterion is not met do
1: $k = k + 1$;
2: select the block index $t_k$ satisfying $t_k = \arg\max_{i \in \{1, \cdots, l\}} \|\Phi[i]^T r^{k-1}\|_2$;
3: $\Lambda^k = \Lambda^{k-1} \cup \{t_k\}$;
4: $x_{\Lambda^k} = \arg\min_u \|y - \Phi_{\Lambda^k} u\|_2$;
5: $r^k = y - \Phi_{\Lambda^k} x_{\Lambda^k}$.
End while
Output: the estimated signal $\hat{x} = \arg\min_{u:\, \text{b-supp}(u) \subseteq \Lambda^k} \|y - \Phi u\|_2$.

Identifying the block index $t_k$ in Step 2 and updating the residual $r^k$ in Step 5 are crucial in every iteration of the BOMP algorithm, and identifying the block index $t_{k+1}$ in the next iteration depends on the current residual $r^k$. Therefore, we give a short analysis of the residuals here. Solving the least-squares problem $x_{\Lambda^k} = \arg\min_u \|y - \Phi_{\Lambda^k} u\|_2$, we obtain $x_{\Lambda^k} = (\Phi_{\Lambda^k}^T \Phi_{\Lambda^k})^{-1} \Phi_{\Lambda^k}^T y$. Based on this equality, we have $\Phi_{\Lambda^k}^T r^k = 0$, which says that the column blocks of $\Phi_{\Lambda^k}$ are orthogonal to the residual $r^k$. Hence, $P_{\Lambda^k} = \Phi_{\Lambda^k}(\Phi_{\Lambda^k}^T \Phi_{\Lambda^k})^{-1} \Phi_{\Lambda^k}^T$ and $P^{\perp}_{\Lambda^k} = I - P_{\Lambda^k}$ are two orthogonal projection operators, which project a given vector orthogonally onto the span of the column blocks of $\Phi_{\Lambda^k}$ and onto its orthogonal complement, respectively. Then $r^k = P^{\perp}_{\Lambda^k} y = P^{\perp}_{\Lambda^k}(\Phi x + e)$. Therefore, if $\Lambda^k \subseteq T$, we obtain $r^k = \Phi\, \nu^{(k)} + P^{\perp}_{\Lambda^k} e$, where $\nu^{(k)}$ denotes a vector with $\text{b-supp}(\nu^{(k)}) \subseteq T$; in particular, $\nu^{(0)} = x$.

1.2. Block orthogonal multi-matching pursuit. In [58], the BOMMP algorithm, which is a natural extension of the BOMP algorithm, was introduced; it is described in Algorithm 2. It is known that the BOMP algorithm performs $K$ iterations to recover exactly any block $K$-sparse signal if it selects one correct block index at each iteration. The BOMMP algorithm, however, chooses $N$ block indices per iteration ($N \ge 1$ is an input parameter of the BOMMP algorithm). If the selected $N$ block indices contain at least one correct block index from the block support of the block $K$-sparse signal $x$ per iteration, then it can recover any block $K$-sparse signal in at most $K$ iterations. In [58], the authors showed that, under a condition on the block RIC, any block $K$-sparse signal can be exactly recovered via the BOMMP algorithm. Furthermore, they demonstrated through several simulations that the recovery performance of the BOMMP algorithm exceeds those of the BOMP and BMP algorithms.
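Algorithm 1 can be sketched in code as follows. This is an illustrative sketch we add, not the authors' implementation; it assumes the blocks are supplied as lists of column indices, and one least-squares solve plays the role of Steps 4-5:

```python
import numpy as np

def bomp(y, Phi, blocks, K):
    """Sketch of the BOMP algorithm (Algorithm 1).

    y      : measurement vector.
    Phi    : sensing matrix.
    blocks : list of column-index arrays, one per block.
    K      : block sparsity level (number of iterations).
    """
    r = y.copy()                        # r^0 = y
    support = []                        # Lambda^0 = empty set
    for _ in range(K):
        # Step 2: block index maximizing ||Phi[i]^T r^{k-1}||_2.
        scores = [np.linalg.norm(Phi[:, b].T @ r) for b in blocks]
        t_k = int(np.argmax(scores))
        if t_k not in support:          # Step 3: Lambda^k = Lambda^{k-1} U {t_k}
            support.append(t_k)
        cols = np.concatenate([blocks[i] for i in support])
        # Steps 4-5: least squares on the selected blocks, then new residual.
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        r = y - Phi[:, cols] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[cols] = coef
    return x_hat, support
```

For a matrix whose column blocks are orthonormal and mutually orthogonal, the selection rule picks the true blocks and the residual vanishes after $K$ iterations, matching the exact-recovery behavior discussed above.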
Later, a sharp sufficient condition for the reconstruction of any block $K$-sparse signal through the BOMMP algorithm was obtained in [12]: the condition $\delta^I_{KN+1} < \frac{1}{\sqrt{K/N + 1}}$ is sufficient to perfectly recover any block $K$-sparse signal via the BOMMP algorithm in at most $K$ iterations.
Here, we make two contributions. First, we show that the condition $\delta^I_{KN-N+1} + \sqrt{\frac{K}{N}}\,\theta^I_{KN-N+1,N} < 1$ on the block RIC and ROC is sufficient for the recovery of any block $K$-sparse signal $x$ from $y = \Phi x$ in at most $K$ iterations. Second, for $\ell_2$ bounded noise, we prove that the BOMMP algorithm can recover the correct block support of any block $K$-sparse signal under the above condition together with an extra condition on the minimum $\ell_2$ norm of the nonzero blocks of the signal.

Algorithm 2. The BOMMP algorithm
Input: measurements $y \in \mathbb{R}^m$, sensing matrix $\Phi \in \mathbb{R}^{m \times n}$, block sparsity level $K$, number of indices chosen per iteration $N$ $(N \ge 1)$.
Initialize: iteration count $k = 0$, residual vector $r^0 = y$, estimated block support $\Gamma^0 = \emptyset$.
While the stopping criterion is not met do
1: $k = k + 1$;
2: select the block indices $\phi(1), \cdots, \phi(N)$ corresponding to the $N$ largest values of $\|\Phi[i]^T r^{k-1}\|_2$;
3: $\Gamma^k = \Gamma^{k-1} \cup \{\phi(1), \cdots, \phi(N)\}$;
4: $x_{\Gamma^k} = \arg\min_u \|y - \Phi_{\Gamma^k} u\|_2$;
5: $r^k = y - \Phi_{\Gamma^k} x_{\Gamma^k}$.
End while
Output: the estimated signal $\hat{x} = \arg\min_{u:\, \text{b-supp}(u) \subseteq \Gamma^k} \|y - \Phi u\|_2$.

As for the BOMP algorithm, we give a necessary analysis of the residual $r^k$ in the BOMMP algorithm for the proofs of Theorems 4.2 and 4.3. The residual $r^k$ can be expressed as $r^k = \Phi\, \omega^{(k)} + P^{\perp}_{\Gamma^k} e$, where $\omega^{(k)}$ is a vector with $\text{b-supp}(\omega^{(k)}) \subseteq T \cup \Gamma^k$. As mentioned, (6) and (8) are different, since for the BOMMP algorithm we can only guarantee that $\Gamma^k$ contains at least $k$ correct block indices; $\Gamma^k$ may also contain incorrect ones, so that $\Gamma^k \subseteq T$ need not hold.

The rest of the paper is organized as follows. In Section 2, we give some basic lemmas that will be used later. We analyze the BOMP algorithm in Section 3. The analysis of the recovery of any block $K$-sparse signal via the BOMMP algorithm is presented in Section 4. In Section 5, we compare the results of Sections 3 and 4 with those of prior work.
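Algorithm 2 can be sketched analogously. Again this is an illustrative sketch we add, assuming the blocks are supplied as lists of column indices; per iteration it takes the $N$ block indices with the largest correlations and applies the stopping rule $\|r^k\|_2 \le \varepsilon$:

```python
import numpy as np

def bommp(y, Phi, blocks, K, N, eps=1e-12):
    """Sketch of the BOMMP algorithm (Algorithm 2): N block indices per iteration."""
    r = y.copy()                            # r^0 = y
    support = []                            # Gamma^0 = empty set
    cols = np.array([], dtype=int)
    coef = np.array([])
    for _ in range(K):
        if np.linalg.norm(r) <= eps:        # stopping rule ||r^k||_2 <= eps
            break
        scores = np.array([np.linalg.norm(Phi[:, b].T @ r) for b in blocks])
        top_n = np.argsort(scores)[::-1][:N]          # N largest correlations
        support = sorted(set(support) | set(int(i) for i in top_n))
        cols = np.concatenate([blocks[i] for i in support])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        r = y - Phi[:, cols] @ coef
    x_hat = np.zeros(Phi.shape[1])
    if cols.size:
        x_hat[cols] = coef
    return x_hat, support
```

When the $N$ selected indices contain at least one correct block per iteration, the true support is exhausted in at most $K$ iterations, after which the stopping rule halts the loop.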
2. The main lemmas. We first recall a useful lemma in [53].
Lemma 2.1. Let the sensing matrix $\Phi \in \mathbb{R}^{m \times n}$ satisfy the block RIP of order $K$ and let $\Gamma$ be a block index set with $|\Gamma| \le K$. Then all the eigenvalues of $\Phi_{\Gamma}^T \Phi_{\Gamma}$ lie in $[1 - \delta^I_K,\, 1 + \delta^I_K]$. Now, we collect several elementary properties of the block RIC and ROC in the following lemma.
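Restricted isometry quantities of this kind can be examined numerically. The sketch below is our addition, with our own function name; it measures the eigenvalue deviation of $\Phi_{\Gamma}^T \Phi_{\Gamma}$ from 1 for one fixed block index set $\Gamma$:

```python
import numpy as np

def ric_on_set(Phi, cols):
    """Restricted-isometry deviation on ONE fixed column set Gamma:
    max |lambda - 1| over the eigenvalues of Phi_Gamma^T Phi_Gamma.
    The block RIC delta_K^I is the maximum of this value over all
    block index sets Gamma with |Gamma| <= K."""
    G = Phi[:, cols]
    eigs = np.linalg.eigvalsh(G.T @ G)
    return float(max(1.0 - eigs.min(), eigs.max() - 1.0))
```

For a matrix with orthonormal columns the deviation is 0 on every column set, while scaling the matrix by 2 makes all eigenvalues of the Gram matrix equal to 4 and the deviation equal to 3.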
Lemma 2.2. For any positive integers $K_1 \le K_1'$ and $K_2 \le K_2'$, let $\Lambda$ and $\Gamma$ be two disjoint sets of block indices. Then, in particular, (1) $\delta^I_{K_1} \le \delta^I_{K_1'}$ and (2) $\theta^I_{K_1, K_2} \le \theta^I_{K_1', K_2'}$.

Proof. The items (1) and (2) follow directly from the definitions of the block RIC and ROC. Combining (10) and (11) proves the item (3), and a similar argument gives the item (4). This completes the proof of the lemma.
3. Recovery conditions via the BOMP algorithm. In this section, a sufficient condition based on the block RIC and ROC will be presented to ensure that the BOMP algorithm can exactly recover any block $K$-sparse signal. We then analyze the robustness of the BOMP algorithm under $\ell_2$ bounded noise. For $\Lambda^k \subseteq T$, denote
$$\bar{s}^{k+1}_1 = \max_{i \in T} \|\Phi[i]^T r^k\|_2, \qquad \bar{t}^{k+1}_1 = \max_{i \notin T} \|\Phi[i]^T r^k\|_2.$$
The following lemma plays an important role in our analysis: combining (6), (7) and the item (3) in Lemma 2.2 yields a lower bound on $\bar{s}^{k+1}_1$, while the definition of $\bar{t}^{k+1}_1$ together with the item (4) in Lemma 2.2 yields an upper bound on $\bar{t}^{k+1}_1$. Note that, based on these definitions, the fact that the column blocks of $\Phi_{\Lambda^k}$ are orthogonal to the residual $r^k$, and the selection rule $t_k = \arg\max_{i \in \{1, \cdots, l\}} \|\Phi[i]^T r^{k-1}\|_2$, the condition $\bar{s}^{k+1}_1 > \bar{t}^{k+1}_1$ ensures that an element of the correct support is found at the $(k+1)$-th iteration.
3.1. The exact recovery in the noiseless case. In this subsection, we turn to the perfect recovery of any block sparse signal via the BOMP algorithm in the noiseless case. It is clear that the sufficient condition for the BOMP algorithm to choose a correct block index from the block support $T = \text{b-supp}(x)$ at the $(k+1)$-th iteration is $\bar{s}^{k+1}_1 > \bar{t}^{k+1}_1$ $(0 \le k < K)$. The following theorem provides a sufficient condition to guarantee the recovery of any block $K$-sparse signal by the BOMP algorithm.
Theorem 3.2. Let $x$ be any block $K$-sparse signal and let the sensing matrix $\Phi$ satisfy $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$. Then the BOMP algorithm exactly recovers $x$ from $y = \Phi x$ in $K$ iterations.
Proof. For the first iteration, $r^0 = \Phi x$, i.e., $\nu^{(0)} = x$, and the sufficient condition for the BOMP algorithm to choose a block index from the block support $T = \text{b-supp}(x)$ is $\bar{s}^1_1 > \bar{t}^1_1$.
By Corollary 1, the condition $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ guarantees the success of the first iteration. Suppose that the BOMP algorithm is successful in the first $k$ $(1 \le k < K)$ iterations under the condition $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$. For the $(k+1)$-th iteration, it follows from Corollary 1 and $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ that $\bar{s}^{k+1}_1 > \bar{t}^{k+1}_1$. As a result, a correct block index is selected in the $(k+1)$-th iteration. By induction, the BOMP algorithm selects a correct block index from $T$ in each iteration. It remains to prove $x = \hat{x}$, where $\hat{x}$ is the estimated signal after the BOMP algorithm performs $K$ iterations successfully. We then have $\Lambda^K = T$ and $\hat{x} = x$, where we use the fact that $\delta^I_K < 1$, which means that $\Phi_{\Lambda^K}$ has full column rank. This completes the proof of the theorem.
Now, we prove that the sufficient condition $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ is sharp for the BOMP algorithm.

Theorem 3.3. For any given positive integer $K \ge 1$, there exist a matrix $\Phi$ satisfying $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} = 1$ and a block $K$-sparse signal $x$ such that the BOMP algorithm may fail in $K$ iterations.
Proof. For simplicity, we assume that each block of the block $K$-sparse signal $x$ has identical length $d$.
For $K = 1$, let
$$\Phi = [\,I_d \;\; I_d\,] \in \mathbb{R}^{d \times 2d}, \tag{12}$$
where $I_d$ is the $d$-dimensional identity matrix. For any block 1-sparse signal $x \in \mathbb{R}^{2d}$, it is clear that $\|\Phi x\|_2^2 = \|x\|_2^2$. Therefore, the block RIC $\delta^I_1$ of the matrix $\Phi$ in (12) is 0. For any pair of block 1-sparse signals $x$ and $\tilde{x}$ with disjoint block supports, we have $|\langle \Phi x, \Phi \tilde{x} \rangle| \le \|x\|_2 \|\tilde{x}\|_2$; if $x[1] = \tilde{x}[2] = e_1 \in \mathbb{R}^d$ ($e_1 \in \mathbb{R}^d$ is the first coordinate unit vector), then $|\langle \Phi x, \Phi \tilde{x} \rangle| = \|x\|_2 \|\tilde{x}\|_2$. This implies that $\theta^I_{1,1} = 1$. Therefore, $\delta^I_1 + \theta^I_{1,1} = 1$. For the block 1-sparse signal $x$ with $x[1] = e_1$ and $x[2] = 0$, the two column blocks of $\Phi$ are equally correlated with $y = \Phi x = e_1$, which implies that the BOMP algorithm may fail in the first iteration.

For any given positive integer $K \ge 2$, assume that the block $K$-sparse signal $x$ consists of $2K-1$ blocks of identical length $d$, i.e., $n = (2K-1)d$. Let $\Phi$ be a matrix (e.g., a Cholesky factor of the corresponding positive definite Gram matrix) whose column blocks satisfy
$$\Phi[i]^T \Phi[i] = I_d, \qquad \Phi[i]^T \Phi[j] = -\frac{1}{2K-1}\, I_d \quad (1 \le i < j \le 2K-1). \tag{14}$$
When $d = 1$, $\Phi$ is the matrix (9) in [17]. Hence, for any block index set $\Omega$ with $|\Omega| = K$, the matrix $\Phi_{\Omega}^T \Phi_{\Omega}$ has the eigenvalues $1 - \frac{K-1}{2K-1}$ and $1 + \frac{1}{2K-1}$ with multiplicities $d$ and $(K-1)d$, respectively. Therefore, for any block $K$-sparse signal $x \in \mathbb{R}^{(2K-1)d}$ with $T = \text{b-supp}(x)$, we derive that
$$\Big(1 - \frac{K-1}{2K-1}\Big)\|x\|_2^2 \le \|\Phi x\|_2^2 \le \Big(1 + \frac{1}{2K-1}\Big)\|x\|_2^2,$$
and the lower bound is attained by an eigenvector of $\Phi_{\Omega}^T \Phi_{\Omega}$ corresponding to the eigenvalue $1 - \frac{K-1}{2K-1}$. Therefore, the block RIC $\delta^I_K$ of $\Phi$ is $\frac{K-1}{2K-1}$. Next, we claim that the matrix $\Phi$ in (14) satisfies the block ROC $\theta^I_{K,1} = \frac{\sqrt{K}}{2K-1}$. Let $x \in \mathbb{R}^{(2K-1)d}$ be any block $K$-sparse signal with block support $T$ and $\tilde{x} \in \mathbb{R}^{(2K-1)d}$ any block 1-sparse signal with block support $\tilde{T}$ such that $T \cap \tilde{T} = \emptyset$. It follows from (14) and the Cauchy-Schwarz inequality that
$$|\langle \Phi x, \Phi \tilde{x} \rangle| = \frac{1}{2K-1}\Big|\sum_{i \in T} \langle x[i], \tilde{x}[\tilde{j}] \rangle\Big| \le \frac{\sqrt{K}}{2K-1}\, \|x\|_2 \|\tilde{x}\|_2,$$
where $\tilde{j}$ is the unique index in $\tilde{T}$, and the inequality becomes an equality when all the blocks $x[i]$ $(i \in T)$ are equal and parallel to $\tilde{x}[\tilde{j}]$. Then $\theta^I_{K,1} = \frac{\sqrt{K}}{2K-1}$, and
$$\delta^I_K + \sqrt{K}\,\theta^I_{K,1} = \frac{K-1}{2K-1} + \sqrt{K} \cdot \frac{\sqrt{K}}{2K-1} = 1.$$
Finally, for the block $K$-sparse signal $x$ with $x[i] = e_1$ for $i \in T$, $|T| = K$, and $x[i] = 0$ otherwise, a direct calculation gives $\bar{s}^1_1 = \bar{t}^1_1 = \frac{K}{2K-1}$, which implies that the BOMP algorithm may fail in the first iteration for the matrix $\Phi$. In conclusion, we complete the proof.
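The sharpness computations in this subsection can be checked numerically. The sketch below is our addition: for $d = 1$ and $K = 2$ it builds a matrix whose columns have unit norm and pairwise inner products $-\frac{1}{2K-1}$ via a Cholesky factorization of the Gram matrix, and evaluates $\delta^I_K$ and $\theta^I_{K,1}$ by brute force over all supports:

```python
import numpy as np
from itertools import combinations

K, n = 2, 3                                  # d = 1, n = 2K - 1 = 3 columns
# Gram matrix: unit diagonal, off-diagonal entries -1/(2K-1) = -1/3.
G = (1.0 + 1.0 / (2 * K - 1)) * np.eye(n) - (1.0 / (2 * K - 1)) * np.ones((n, n))
Phi = np.linalg.cholesky(G).T                # Phi^T Phi = G

# delta_K: maximum eigenvalue deviation over all K-column subsets.
delta = max(
    np.abs(np.linalg.eigvalsh(Phi[:, s].T @ Phi[:, s]) - 1.0).max()
    for s in map(list, combinations(range(n), K))
)
# theta_{K,1}: maximum over all disjoint (K-subset, singleton) pairs.
theta = max(
    np.linalg.norm(Phi[:, list(s)].T @ Phi[:, [j]], ord=2)
    for s in combinations(range(n), K) for j in range(n) if j not in s
)
```

One finds $\delta_K = \frac{K-1}{2K-1} = \frac{1}{3}$ and $\theta_{K,1} = \frac{\sqrt{K}}{2K-1} = \frac{\sqrt{2}}{3}$, so that $\delta_K + \sqrt{K}\,\theta_{K,1} = 1$ exactly, in agreement with the proof.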

3.2. Analysis in the presence of $\ell_2$ bounded noise. In this subsection, we show the main result on the robustness of the BOMP algorithm under the block RIC and ROC for $\ell_2$ bounded noise. We obtain sufficient conditions in terms of the block RIC and ROC for the exact support recovery of block sparse signals via the BOMP algorithm.
Theorem 3.4. Suppose that $\|e\|_2 \le \varepsilon$ and the sensing matrix $\Phi$ satisfies $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ in the model $y = \Phi x + e$. Then the BOMP algorithm with the stopping rule $\|r^k\|_2 \le \varepsilon$ perfectly recovers the correct block support $T$ of the block $K$-sparse signal $x$ in $K$ iterations, provided that the $\ell_2$ norms of all the nonzero blocks $x[i]$ exceed a lower bound that is proportional to the noise level $\varepsilon$.

Remark 1. For an unknown sparse signal $x$, the lower bound on $\|x[i]\|_2$ for $x[i] \ne 0$ cannot be checked in advance, but the bound tends to zero as the noise level $\varepsilon$ tends to zero. There are a large number of such sparse signals in practical applications. Analogous lower bounds on all the nonzero components are considered in [43], [57], [15] and [16] for standard sparse signals.
Proof. First, we prove that the BOMP algorithm selects a correct block index in every iteration, that is, $\bar{s}^{k+1}_1 > \bar{t}^{k+1}_1$ $(0 \le k < K)$. Second, we show that it performs exactly $K$ iterations.
By induction, we show that a correct block index is identified in every iteration of the BOMP algorithm. Suppose that the BOMP algorithm selects $k$ correct block indices in the first $k$ $(0 \le k < K)$ iterations, i.e., $\Lambda^k \subseteq T$. For the $(k+1)$-th iteration, it is obvious that the condition (15) guarantees that the BOMP algorithm chooses a correct block index in the $(k+1)$-th iteration. Assume that there exists a block index $c_{k_0}$ fulfilling (16). Then it follows from (16), Lemma 2.1 and $\|(I - P_{\Lambda^k})e\|_2 \le \|e\|_2 \le \varepsilon$ that (17) holds. By the assumption on the nonzero blocks of $x$, we obtain (18). Based on (17) and (18), the condition (15) holds, which implies that the BOMP algorithm succeeds in the $(k+1)$-th iteration, and hence the BOMP algorithm exactly recovers the block support set $T$ of the signal $x$ in the first $K$ iterations.

Now, we prove that the BOMP algorithm does not stop earlier, which is equivalent to proving that $\|r^K\|_2 \le \varepsilon$ and $\|r^k\|_2 > \varepsilon$ $(0 \le k < K)$. By the fact $\Lambda^K = T$, we have $r^K = P^{\perp}_{\Lambda^K} e$, and hence $\|r^K\|_2 \le \|e\|_2 \le \varepsilon$. For $0 \le k < K$, i.e., $T - \Lambda^k \ne \emptyset$, it follows from (6) and the lower bound on the nonzero blocks that $\|r^k\|_2 > \varepsilon$. Hence, the BOMP algorithm stops exactly after $K$ iterations. The theorem is proved.
Remark 2. By the assumption $\|e\|_2 \le \varepsilon$ and the above fact, and based on the proof of (18) in Theorem 3.4, the nonzero blocks $x[i]$, $i \in T$, only need to satisfy a weaker lower bound than the one assumed in Theorem 3.4.
Remark 3. When $d_1 = d_2 = \cdots = d_l = 1$, (19) and $\delta_K + \sqrt{K}\,\theta_{K,1} < 1$ are exactly the sufficient conditions for the $\ell_2$ bounded noisy case in [19].

4. Recovery conditions via the BOMMP algorithm. The BOMMP algorithm is a natural extension of the BOMP algorithm in the sense that $N$ block indices are identified per iteration. Compared with the BOMP algorithm, the BOMMP algorithm recovers the block $K$-sparse signal $x$ in fewer iterations and further decreases the computational complexity. We now give a lemma to analyze the performance of the BOMMP algorithm.
Lemma 4.1. Assume that $T - \Gamma^k \ne \emptyset$. Then $\alpha^{k+1}_N$ admits an upper bound and $\beta^{k+1}_1$ a lower bound in terms of the block RIC and ROC.

Proof. In view of the definition of $\alpha^{k+1}_i$ $(i = 1, 2, \cdots, N)$, we bound $\alpha^{k+1}_N$ from above. From (9), a corresponding lower bound follows; therefore, the definition of $\beta^{k+1}_1$ and the above inequality yield the claimed bound on $\beta^{k+1}_1$. This completes the proof of the lemma.
Similarly, from the definitions of $\alpha^{k+1}_1$ and $\beta^{k+1}_1$, the orthogonality of the column blocks of $\Phi_{\Gamma^k}$ to the residual $r^k$ and the selection rule of the BOMMP algorithm, it follows that the condition $\beta^{k+1}_1 > \alpha^{k+1}_N$ ensures that at least one correct block index is found at the $(k+1)$-th iteration.
Corollary 2. In the noiseless case, $e = 0$ and $r^k = \Phi_{T \cup \Gamma^k}\, \omega_{T \cup \Gamma^k}$.

4.1. The exact recovery in the noiseless case. In this subsection, the main theorem provides a sufficient condition on the sensing matrix $\Phi$ ensuring that any block $K$-sparse signal $x$ can be exactly recovered from the measurements $y$ of the model $y = \Phi x$ through the BOMMP algorithm. The proof of the theorem is rooted in those of [16] and [12]. Certainly, some essential modifications are necessary to establish the condition $\bar{\beta}^k_1 > \bar{\alpha}^k_N$ $(1 \le k \le K)$, which implies that at least one correct block index is selected in every iteration, i.e., the BOMMP algorithm succeeds in each iteration.
Theorem 4.2. Suppose that $x$ is a block $K$-sparse signal and the sensing matrix $\Phi$ satisfies
$$\delta^I_{KN-N+1} + \sqrt{\frac{K}{N}}\,\theta^I_{KN-N+1,N} < 1. \tag{20}$$
Then $x$ is exactly recovered from the measurements $y = \Phi x$ in at most $K$ iterations of the BOMMP algorithm.

Proof. We only need to consider block $K$-sparse signals $x \ne 0$. For the first iteration, by Corollary 2 and Lemma 2.2, we can bound $\bar{\beta}^1_1$ from below and $\bar{\alpha}^1_N$ from above, where we use the fact that $KN - N + 1 \ge K = |T|$ $(K, N \ge 1)$. Using the condition (20) and the above two inequalities, we get $\bar{\beta}^1_1 > \bar{\alpha}^1_N$, which means that the BOMMP algorithm selects at least one correct block index. Therefore, the BOMMP algorithm succeeds in the first iteration under the condition (20).

Suppose the BOMMP algorithm has performed $k$ iterations successfully, where $1 \le k \le K - 1$. Under this assumption, there holds $|T \cup \Gamma^k| \le K + k(N - 1)$, since the BOMMP algorithm selects at least one correct block index per iteration. Further, using $1 \le k \le K - 1$ and $N \ge 1$, we have $|T \cup \Gamma^k| \le K + (K-1)(N-1) = KN - N + 1$. For the $(k+1)$-th iteration, if $T - \Gamma^k = \emptyset$, then $T \subseteq \Gamma^k$ and the block $K$-sparse signal $x$ has already been recovered exactly. If $T - \Gamma^k \ne \emptyset$, then $\omega_{T \cup \Gamma^k} \ne 0$, and we claim that the BOMMP algorithm succeeds in the $(k+1)$-th iteration, i.e., $\bar{\beta}^{k+1}_1 > \bar{\alpha}^{k+1}_N$, which is the claimed result. By induction, the BOMMP algorithm selects at least one correct block index in each iteration under the condition (20) as long as the iteration loop of the BOMMP algorithm has not ended. When the iteration loop of the BOMMP algorithm ends, we have $T \subseteq \Gamma^s$ with $1 \le s \le K$ and $\hat{x} = x$, where we use the fact that $x_{\Gamma^s - T} = 0$. We conclude that the BOMMP algorithm exactly recovers any block $K$-sparse signal $x$ in at most $K$ iterations.

4.2. Analysis in the presence of $\ell_2$ bounded noise. In this subsection, we discuss recovery conditions for any block $K$-sparse signal $x$ in the $\ell_2$ bounded noisy setting via the BOMMP algorithm applied to $y = \Phi x + e$.

Theorem 4.3. Suppose that $\|e\|_2 \le \varepsilon$ and the sensing matrix $\Phi$ satisfies the condition
$$\delta^I_{KN-N+1} + \sqrt{\frac{K}{N}}\,\theta^I_{KN-N+1,N} < 1. \tag{22}$$
Let $x$ be a block $K$-sparse signal with all the nonzero blocks $x[i]$ satisfying the lower bound condition (23) on $\|x[i]\|_2$, a bound proportional to the noise level $\varepsilon$. Then the BOMMP algorithm with the stopping rule $\|r^k\|_2 \le \varepsilon$ exactly recovers the correct block support of the signal $x$ from the measurements $y = \Phi x + e$.
Remark 5. As in Remark 2, if $K \ge 2$, then the condition (23) can be reduced to a weaker lower bound for $i \in T$.
Proof. The proof of the theorem consists of two parts. In the first part, we show that the BOMMP algorithm chooses at least one correct block index in every iteration, which means that the BOMMP algorithm performs at most $K$ iterations for the recovery of the block support $T = \text{b-supp}(x)$. In the second part, we prove that the BOMMP algorithm stops exactly under the stopping rule $\|r^k\|_2 \le \varepsilon$ once all the correct block indices have been selected. Part I: We prove this part by induction. Suppose that the BOMMP algorithm has already performed $k$ iterations successfully, where $1 \le k \le K - 1$. Considering the $(k+1)$-th iteration, if $T - \Gamma^k = \emptyset$, that is, $T \subseteq \Gamma^k$, then the correct support $T$ of the original block $K$-sparse signal $x$ has already been recovered. If $T - \Gamma^k \ne \emptyset$, then $|T - \Gamma^k| \ge 1$, and in this case it is clear that $\omega_{T \cup \Gamma^k} \ne 0$.
Notice that there exist $i_k \in T \cup \Gamma^k$ and $j_k \in \{\phi(1), \phi(2), \cdots, \phi(N)\}$ satisfying the required correlation bounds. By Lemma 4.1, the above inequalities and (22), we have $\beta^{k+1}_1 > \alpha^{k+1}_N$, which guarantees that at least one block index is selected from the block support $T$ of the signal $x$ in the $(k+1)$-th iteration.
Part II: Now consider the stopping rule. From the definition of the block RIP, (8), (22) and (23), it follows that $\|r^k\|_2 > \varepsilon$ whenever $T - \Gamma^k \ne \emptyset$, while $\|r^k\|_2 \le \varepsilon$ once $T \subseteq \Gamma^k$. Hence, the BOMMP algorithm does not stop earlier. The proof is completed.

5. Discussion. In this section, we discuss the connection between the work in this paper and previous BOMP and BOMMP results. In Section 3.1, we showed the sufficiency and sharpness of the condition $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ for the BOMP algorithm to ensure the exact recovery of every block $K$-sparse signal $x$ from $y = \Phi x$. It is worth discussing the relations between our sharp condition and the conditions in two related papers [27], [53]. From the fact that $\theta^I_{K,1} \le \delta^I_{K+1}$ and Lemma 2.2 (1), it follows that our sharp condition $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$ is weaker than the sufficient condition $\delta^I_{K+1} < \frac{1}{\sqrt{K+1}}$ of [27].
In Section 3.2, we considered the exact support recovery of a block $K$-sparse signal $x$ from the measurements $y = \Phi x + e$ with $\ell_2$ bounded noise. We proved that if the sensing matrix $\Phi$ satisfies $\delta^I_K + \sqrt{K}\,\theta^I_{K,1} < 1$, then, under a lower bound condition on $\min_{i \in T} \|x[i]\|_2$, the block support of the block $K$-sparse signal $x$ can be exactly recovered via the BOMP algorithm in $K$ iterations. In the noisy case, the authors of [53] showed that if the sensing matrix $\Phi$ satisfies $\delta^I_{K+1} < \frac{1}{\sqrt{K+1}}$, then, under some constraints on $\min_{i \in T} \|x[i]\|_2$, the BOMP algorithm exactly recovers the block support of the block $K$-sparse signal $x$ for both $\ell_2$ and $\ell_\infty$ bounded noise. In this paper, we do not study $\ell_\infty$ bounded noise, since the results and proofs are similar to the case of $\ell_2$ bounded noise.
For the BOMMP algorithm, our condition $\delta^I_{KN-N+1} + \sqrt{\frac{K}{N}}\,\theta^I_{KN-N+1,N} < 1$ in Section 4.1 improves the condition on $\delta^I_{K+(N-2)k+N}$ of [58]. In Section 4.2, we showed that a condition of the same form, together with a lower bound on the $\ell_2$ norms of the nonzero blocks, ensures the exact recovery of the block support of the signal $x$ from the measurements $y = \Phi x + e$ via the BOMMP algorithm with the stopping rule $\|r^k\|_2 \le \varepsilon$. Therefore, the results in this paper may guide practitioners in applying the BOMP and BOMMP algorithms properly in sparse signal recovery.