DECODING OF 2D CONVOLUTIONAL CODES OVER AN ERASURE CHANNEL

In this paper we address the problem of decoding 2D convolutional codes over an erasure channel. To this end, we introduce the notion of neighbors around a set of erasures, which can be considered an analogue of the sliding window used for 1D convolutional codes. The main idea is to reduce the decoding problem of 2D convolutional codes to a problem of decoding a set of associated 1D convolutional codes. We first show how to recover sets of erasures distributed on vertical, horizontal and diagonal lines. Finally, we outline some ideas for treating arbitrary sets of erasures distributed on the 2D plane.


Introduction
It is well-known that when transmitting over an erasure channel the symbols sent either arrive correctly or they are erased. In recent years, there has been an increased interest in the study of one-dimensional (1D) convolutional codes over an erasure channel [2,14,15,16] as a possible alternative to the widely used block codes. Due to their rich structure, 1D convolutional codes enjoy the so-called sliding window property, which allows the correction process to be adapted to the distribution of the erasure pattern. In the recent paper [16] it was shown how this property can be exploited to recover erasures that are uncorrectable by any (block) code. The codes proposed there are codes over large alphabets with strong distance properties, called Maximum Distance Profile (MDP), reverse-MDP and complete-MDP codes. Simulation results showed that these codes perform extremely efficiently when compared to Maximum Distance Separable (MDS) block codes.
In the context of 1D convolutional codes the received codeword is viewed as a finite sequence v = (v_0, v_1, . . . , v_γ) and the sliding window is constructed by selecting a subsequence (v_i, . . . , v_{i+N}) of v, where i, N ∈ N depend on the erasure pattern. Once the appropriate window is fixed, the decoding is performed within that window as if it were a block code. In the context of two-dimensional (2D) convolutional codes [3,4,8,12,13,17] the information is distributed in two dimensions and therefore there is no obvious way to extend the idea of sliding window to the 2D case.
In this work our main goal is to present a procedure for the decoding of 2D convolutional codes over an erasure channel.
The main idea is similar to the one used for 1D convolutional codes, namely, we provide a systematic procedure to group the received information in appropriate ways so that the decoding can be split into simpler tasks. More concretely, when receiving a set of erasures in the 2D plane, a number of associated 1D convolutional codes are considered in order to recursively recover parts of the erasure.
To this end, we introduce the notion of neighbors around an erasure and show that within these neighbors the information can be regarded as codewords of an associated 1D convolutional code. This reduces the problem of decoding 2D convolutional codes over an erasure channel to a problem of decoding 1D convolutional codes over the same channel.
This article is organized as follows. In Section 2 we recall the notation and some required concepts of 1D and 2D convolutional codes. In Section 3 we address the problem of decoding sets of erasures that are distributed along horizontal, vertical and diagonal lines. An algorithm for the general case is discussed in Section 4. Section 5 concludes the paper.

1D and 2D convolutional codes
In this section we recall the basic background on 1D and 2D (finite support) convolutional codes.
Denote by F[z_1, z_2] the ring of polynomials in the two variables z_1 and z_2 with coefficients in the finite field F.

Definition 1 ([5]). A 2D finite support convolutional code C of rate k/n is a free F[z_1, z_2]-submodule of F[z_1, z_2]^n of rank k.
A full column rank polynomial matrix G(z_1, z_2) ∈ F[z_1, z_2]^{n×k} whose columns constitute a basis of C, i.e., such that

C = Im_{F[z_1,z_2]} G(z_1, z_2) = { G(z_1, z_2) u(z_1, z_2) : u(z_1, z_2) ∈ F[z_1, z_2]^k },

is called an encoder of C. The elements of C are called codewords.
Note that the definition given in [5] deals with 2D finite support convolutional codes defined on Z × Z instead of N × N.
If the code C admits a right factor prime encoder [4,6,8,17], then it can be equivalently described using an (n − k) × n full rank polynomial matrix H(z_1, z_2), called a parity-check matrix of C, as

C = Ker_{F[z_1,z_2]} H(z_1, z_2) = { v(z_1, z_2) ∈ F[z_1, z_2]^n : H(z_1, z_2) v(z_1, z_2) = 0 }.

We denote by N_0 the set of nonnegative integers, and use the ordering ≺ in N_0^2 given by

(a, b) ≺ (c, d) if and only if a + b < c + d, or a + b = c + d and b < d.

We write a codeword v(z_1, z_2) ∈ C as

(1) v(z_1, z_2) = Σ_{0 ≤ a+b ≤ γ} v(a, b) z_1^a z_2^b,

with v(a, b) ∈ F^n and γ ≥ 0, and we define its support as the set

supp(v(z_1, z_2)) = { (a, b) ∈ N_0^2 : wt(v(a, b)) ≠ 0 },

where wt(v(a, b)) is the Hamming weight of v(a, b) (i.e., the number of nonzero entries of v(a, b)), and the distance of a code is

dist(C) = min { Σ_{(a,b)} wt(v(a, b)) : v(z_1, z_2) ∈ C, v(z_1, z_2) ≠ 0 }.

Moreover, we represent a polynomial matrix H(z_1, z_2) as

(2) H(z_1, z_2) = Σ_{0 ≤ i+j ≤ δ} H(i, j) z_1^i z_2^j,

with H(i, j) ∈ F^{(n−k)×n} and where H(a, b) ≠ 0 for some (a, b) with a + b = δ. We call δ the degree of H(z_1, z_2).
Using expressions (1) and (2), we can expand the kernel representation H(z_1, z_2) v(z_1, z_2) = 0 coefficientwise: for every (r, s) ∈ N_0^2,

(3) Σ_{0 ≤ i+j ≤ δ} H(i, j) v(r − i, s − j) = 0,

where v(l, k) = 0 if l + k > γ + δ, l < 0 or k < 0, or equivalently, using constant matrices, as H v = 0, where the block rows of H and the blocks of v are ordered according to ≺. In Figure 1 we illustrate H and v for δ = 3, where 0 denotes the (n − k) × n zero matrix. To understand the structure of the matrix H, note that for t = 0, 1, 2, . . ., in the columns corresponding to the block indices t(t+1)/2 appear all the coefficient matrices H(i, j) of H(z_1, z_2) ordered according to ≺, with the particularity that the matrices H(i, j) with i + j = d, for d = 0, 1, 2, . . . , δ − 1, are separated from the matrices H(i, j) with i + j = d + 1 by t zero blocks.
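To make the coefficientwise kernel equations (3) concrete, the following is a minimal Python sketch over GF(2); the code with n = 2, k = 1 and the matrices H and G below are toy choices for illustration, not a construction from the paper. Here H(z_1, z_2) = [1 + z_1 + z_2, 1] and G(z_1, z_2) = [1, 1 + z_1 + z_2]^T, so H·G = 2(1 + z_1 + z_2) = 0 over GF(2).

```python
# H[(i, j)] is the coefficient matrix H(i, j); here 1 x 2 row vectors over GF(2).
H = {(0, 0): [1, 1], (1, 0): [1, 0], (0, 1): [1, 0]}

# Encoder coefficients: G(0,0) = [1, 1]^T, G(1,0) = G(0,1) = [0, 1]^T.
G = {(0, 0): [1, 1], (1, 0): [0, 1], (0, 1): [0, 1]}

u = {(0, 0): 1, (1, 0): 1, (0, 2): 1}       # a toy information polynomial

# Encode by 2D convolution: v(a+i, b+j) += G(i, j) * u(a, b)  (mod 2).
v = {}
for (a, b), ua in u.items():
    for (i, j), g in G.items():
        prev = v.get((a + i, b + j), [0, 0])
        v[(a + i, b + j)] = [(p + gc * ua) % 2 for p, gc in zip(prev, g)]

# Kernel equations (3): for every (r, s),
#   sum_{i+j <= delta} H(i, j) v(r - i, s - j) = 0   (mod 2).
def parity(r, s):
    acc = 0
    for (i, j), h in H.items():
        vc = v.get((r - i, s - j), [0, 0])
        acc = (acc + sum(hc * c for hc, c in zip(h, vc))) % 2
    return acc

assert all(parity(r, s) == 0 for r in range(6) for s in range(6))
print("all parity checks satisfied")
```

Every block of equations (3) is indexed by a point (r, s), which is exactly the triangle structure exploited later in the paper.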
We will refer to v(i, j) ∈ F n as coefficients of v(z 1 , z 2 ) or v.
In the context of 1D convolutional codes, the analogous set of homogeneous equations to expression (3) is

Σ_{i=0}^{α} H_i v_{r−i} = 0, r = 0, 1, . . . , γ + α,

where H(z) = H_0 + H_1 z + · · · + H_α z^α is a parity-check matrix of the code. An important distance measure of 1D convolutional codes is the ℓ-th column distance d^c(ℓ), given by

d^c(ℓ) = min { wt(v_{[0,ℓ]}(z)) : v(z) ∈ C, v_0 ≠ 0 },

where v_{[0,ℓ]}(z) = v_0 + v_1 z + · · · + v_ℓ z^ℓ represents the ℓ-th truncation of the codeword v(z) ∈ C, and H^c_ℓ, called the sliding parity-check matrix, is defined as

H^c_ℓ =
[ H_0
  H_1 H_0
  ⋮ ⋱
  H_ℓ H_{ℓ−1} · · · H_0 ],

where H_μ = 0 for μ > α. From these notions the next simple result readily follows.
Lemma 1. If a 1D convolutional code C has column distance d^c(ℓ) ≥ d, then none of the first n columns of H^c_ℓ is contained in the span of any other d − 2 columns.
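The block lower triangular Toeplitz structure of H^c_ℓ can be assembled mechanically from the coefficient matrices. A small Python sketch (the 1 × 2 coefficient matrices below are arbitrary toy data, not from the paper):

```python
import numpy as np

def sliding_parity_check(H_coeffs, ell):
    """Stack [H_0; H_1 H_0; ...; H_ell ... H_0] as an
    (ell+1)(n-k) x (ell+1)n block lower triangular Toeplitz matrix."""
    nk, n = H_coeffs[0].shape
    Hc = np.zeros(((ell + 1) * nk, (ell + 1) * n), dtype=int)
    for r in range(ell + 1):            # block row r
        for c in range(r + 1):          # block column c holds H_{r-c}
            mu = r - c
            if mu < len(H_coeffs):      # H_mu = 0 for mu > alpha
                Hc[r*nk:(r+1)*nk, c*n:(c+1)*n] = H_coeffs[mu]
    return Hc

# Toy coefficients H_0, H_1, H_2 with n - k = 1, n = 2:
H_coeffs = [np.array([[1, 1]]), np.array([[0, 1]]), np.array([[1, 0]])]
Hc2 = sliding_parity_check(H_coeffs, 2)
print(Hc2)
```

For ℓ = 2 this yields a 3 × 6 matrix whose last block row is [H_2 H_1 H_0].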
In order to find the values of a set of erasures occurring in v(z), or equivalently in v = (v_0, v_1, . . . , v_γ), we can make use of the so-called sliding window; that is, we can select a suitable interval of v and decode within that window (see [16]). Hence, the received information sequence can be handled in many suitable ways, depending on the distribution of the set of erasures. For instance, if the number of erasures is too large to be treated directly, one can split it into smaller blocks and start by decoding the easiest blocks first. This flexibility in grouping the information allows 1D convolutional codes to correct more erasures than a block code of the same length could.
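The grouping flexibility just described can be sketched as follows; the function and the guard length are hypothetical illustrations of the idea of splitting an erasure pattern into independently decodable bursts, not notions defined in the paper:

```python
def split_into_windows(erasure_positions, guard):
    """Group sorted erasure positions into bursts separated by more than
    `guard` correctly received symbols; each burst can then be decoded
    in its own sliding window."""
    windows, current = [], []
    for pos in sorted(erasure_positions):
        if current and pos - current[-1] > guard:
            windows.append(current)
            current = []
        current.append(pos)
    if current:
        windows.append(current)
    return windows

# Erasures at positions 3, 4, 5 and 20, 21 with a guard of 6 symbols:
print(split_into_windows([3, 4, 5, 20, 21], guard=6))  # -> [[3, 4, 5], [20, 21]]
```

Each returned burst would then be treated by its own (smaller) system of equations.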
In the 2D case the codewords are distributed over the 2D plane and there is not an obvious extension of the notion of sliding window. In this paper we propose a systematic procedure to group the 2D information in such a way that what we obtain can be treated as 1D codewords. This simplifies the decoding problem as the treatment of 1D convolutional codes is simpler and decoding algorithms have been implemented (see [16]).
Given a 2D convolutional code C = Ker_{F[z_1,z_2]} H(z_1, z_2) with δ the degree of H(z_1, z_2), we say that two coefficients v(a, b) and v(i, j) are "mutually related" if (a, b) and (i, j) lie in an equilateral triangle with side length δ, i.e., there exists (r, s) such that (a, b), (i, j) ∈ ∆_{r,s}, where

∆_{r,s} = { (r − i, s − j) ∈ N_0^2 : 0 ≤ i + j ≤ δ }.

Note that the coefficients v(i, j) with (i, j) ∈ ∆_{r,s} satisfy a block of (n − k) homogeneous equations, namely the block of equations (3) indexed by (r, s). This is made clear by the following example.
Example 2. For δ = 3, the coefficient v(4, 4) depends on the coefficients v(i, j) such that (i, j) lies, for example, in the two triangles ∆_{4,4} and ∆_{6,4} given by Figure 2, which correspond to the blocks of n − k equations

Σ_{0 ≤ i+j ≤ 3} H(i, j) v(4 − i, 4 − j) = 0 and Σ_{0 ≤ i+j ≤ 3} H(i, j) v(6 − i, 4 − j) = 0.

In general, a coefficient v(a, b) appears exactly in the blocks of equations indexed by the triangles ∆_{r,s} such that (a, b) ∈ ∆_{r,s}. All these triangles define a hexagon centered at (a, b).

Suppose now that the vector v(z_1, z_2) is transmitted through an erasure channel. Each one of the components of v is either received correctly or is considered an erasure. Denote by E(v(z_1, z_2)) and Ē(v(z_1, z_2)) the sets of indices (a, b) ∈ supp(v(z_1, z_2)) at which v(a, b) contains an erasure and contains no erasure, respectively. If no confusion arises we use E and Ē for E(v(z_1, z_2)) and Ē(v(z_1, z_2)), respectively.
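The triangles ∆_{r,s} and the hexagon of indices around a coefficient can be enumerated directly. A small Python sketch under the convention ∆_{r,s} = {(r − i, s − j) : 0 ≤ i + j ≤ δ} used above:

```python
def triangle(r, s, delta):
    """Indices (r - i, s - j) in N_0^2 with 0 <= i + j <= delta."""
    return {(r - i, s - j)
            for i in range(delta + 1)
            for j in range(delta + 1 - i)
            if r - i >= 0 and s - j >= 0}

def hexagon(a, b, delta):
    """Union of all triangles Delta_{r,s} that contain (a, b), i.e.,
    with (r, s) = (a + i, b + j) for 0 <= i + j <= delta."""
    tris = [(a + i, b + j)
            for i in range(delta + 1)
            for j in range(delta + 1 - i)]
    return set().union(*(triangle(r, s, delta) for r, s in tris))

# For delta = 3, both Delta_{4,4} and Delta_{6,4} contain (4, 4):
assert (4, 4) in triangle(4, 4, 3) and (4, 4) in triangle(6, 4, 3)
print(len(hexagon(4, 4, 3)), "indices in the hexagon around (4, 4)")
```

Away from the boundary of N_0^2 the hexagon around (a, b) contains 3δ² + 3δ + 1 indices (37 for δ = 3), which is exactly the set of coefficients "mutually related" to v(a, b).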
One can select the columns of the matrix H in (3) which correspond to the coefficients v(i, j) that contain an erasure in order to construct a new system of equations where the erasures are the variables to be determined. More concretely, denote by H_E and H_Ē the submatrices of H whose block columns are indexed by E and Ē, respectively, and define v_E and v_Ē accordingly, so that we can write expression (3) as H_E v_E + H_Ē v_Ē = 0. Note that, as the channel is an erasure channel, v_Ē, and therefore H_Ē v_Ē, is known. Hence, we obtain a system of linear nonhomogeneous equations

(6) H_E v_E = −H_Ē v_Ē,

where the components of the vector v_E that are erased are considered the unknowns to be determined. Thus, in order to decode v_E one needs to solve system (6).

The next lemma is straightforward and shows the importance of the distance of a code when transmitting over an erasure channel.

Lemma 2. Let C = Ker_{F[z_1,z_2]} H(z_1, z_2) with dist(C) = d. Then any pattern of at most d − 1 erasures can be recovered, i.e., system (6) has a unique solution whenever at most d − 1 components of v are erased.

It follows from Lemma 2 that in order to construct 2D convolutional codes with good error-correcting capabilities one needs to construct a matrix H in which any d − 1 columns are linearly independent, with d as large as possible. However, trying to decode directly using the matrix H is computationally very expensive due to its large size (even for small parameters n, k, δ, γ), and moreover the construction of H with the properties of Lemma 2 seems extremely difficult. In this paper we propose a solution to overcome this problem by considering special submatrices of H. These matrices appear in the context of 1D convolutional codes, and their existence and construction have been extensively investigated [1,7,9,10,11,14,15].
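The solution of system (6) amounts to Gaussian elimination over the base field. A minimal Python sketch over GF(2), where −H_Ē v_Ē = H_Ē v_Ē since the characteristic is 2; the matrix H, the codeword v and the helper solve_gf2 below are toy illustrations, not constructions from the paper:

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; returns x, or
    None if the columns of A are dependent or the system is inconsistent
    (cf. Lemma 2: unique solvability needs independent erased columns)."""
    A, b = A.copy() % 2, b.copy() % 2
    m, n = A.shape
    row = 0
    for col in range(n):
        pivots = [r for r in range(row, m) if A[r, col]]
        if not pivots:
            return None                     # dependent column
        p = pivots[0]
        A[[row, p]] = A[[p, row]]
        b[[row, p]] = b[[p, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        row += 1
    if any(b[row:]):
        return None                         # inconsistent system
    return b[:n]

# Toy parity-check matrix and codeword with H v = 0 (mod 2):
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]])
v = np.array([1, 1, 0, 0, 1])

E = [0, 1]                                  # erased coordinates
Ebar = [i for i in range(H.shape[1]) if i not in E]
rhs = (H[:, Ebar] @ v[Ebar]) % 2            # known side of system (6)
v_E = solve_gf2(H[:, E], rhs)
assert v_E.tolist() == v[E].tolist()
print("recovered erasures:", v_E)
```

Over a larger field the XOR updates would become additions of scaled rows, but the column-splitting logic is identical.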

Decoding sets of erasures on lines
In this section we present a notion that can be considered analogous to the notion of sliding window, called the neighbor around a set of erasures. This will allow us to present a solution to the decoding problem of 2D convolutional codes by decoding a set of associated 1D convolutional codes.
3.1. Horizontal lines. Let us first suppose that the set of erasures E = E(v(z_1, z_2)) contains a subset E_h of indices whose support lies on a horizontal line of length t + 1, i.e.,

E_h ⊂ { (r, s), (r + 1, s), . . . , (r + t, s) } ⊂ E.

Hence, equation (3) can be divided as

(8) H_{E_h} v_{E_h} + H_{Ē_h} v_{Ē_h} = 0,

where H_{E_h} and H_{Ē_h} are submatrices of H whose block columns are indexed by E_h and Ē_h = supp(v(z_1, z_2)) \ E_h, respectively, and v_{E_h} and v_{Ē_h} are defined accordingly. Note that E_h might be strictly contained in E, and so the vector v_{Ē_h} may contain erasures as well.
The next example will help to understand this concept: for δ = 3, Figure 5 shows the neighbors Ω^{0,3}_{E_h} and Ω^{2,3}_{E_h}.

The neighbors introduced in Definition 2 define a set of indices (i, j) such that the corresponding coefficients v(i, j) of v(z_1, z_2) have the particularity that they alone satisfy a set of equations with the structure that appears in the context of 1D convolutional codes. This is elaborated in the next result.

Lemma 3. Let C = Ker_{F[z_1,z_2]} H(z_1, z_2) with δ the degree of H(z_1, z_2). Suppose that v(z_1, z_2) ∈ C is a transmitted codeword, E is the support of its coefficients with erasures and E_h ⊂ E is the support of the coefficients with erasures distributed on a horizontal line in N_0^2, with cardinality t + 1. Consider H and v as in (3), H_{E_h} and v_{E_h} as in (8), and define a vector v_{Ω_j} by selecting the coefficients v(c, d) of v with (c, d) ∈ Ω^{j,δ}_{E_h} \ E_h. Define a submatrix H_{Ω_j} of H accordingly. Then it holds that, for j = 0, 1, . . . , δ,

(9) H^j_{E_h} v_{E_h} = H_{Ω_j} v_{Ω_j}.

Proof. Let us first consider the case j = 0. As explained before, each v(r, s) influences the vectors v(i, j) with (i, j) contained in the hexagon centered at (r, s). Moreover, each triangle ∆_{a,b} with (r, s) ∈ ∆_{a,b} is associated to one block of (n − k) equations that involve the v(i, j) with (i, j) ∈ ∆_{a,b} (see Example 2). Consider the block of (n − k) equations corresponding to the triangle ∆_{r,s}, where (r, s) is the first index of E_h, namely,

H(0, 0) v(r, s) = −H(1, 0) v(r − 1, s) − H(0, 1) v(r, s − 1) − · · · − H(0, δ) v(r, s − δ).

Then [H(0, 0) 0 · · · 0] and [−H(1, 0) − H(0, 1) · · · − H(0, δ)] constitute the first (n − k) rows of H^0_{E_h} and H_{Ω_0}, respectively, and [v(r − 1, s) v(r, s − 1) · · · v(r, s − δ)] the first coefficients of v_{Ω_0}. Considering next the equations associated to the triangle ∆_{r+1,s} we obtain in the same way the second block of (n − k) equations of (9). We proceed in the same way t + 1 + δ − j times (that is, t + 1 + δ times for j = 0) to complete the system of equations of (9). The same reasoning applies for the cases j = 1, 2, . . . , δ.
The following lemma states that if a neighbor contains no erasures other than those on the horizontal line, then H_{Ω_j} v_{Ω_j} can be computed, as the vector v_{Ω_j} does not contain erasures, and therefore we obtain a system of equations (as in (9)) having as indeterminates the erasures corresponding to E_h, i.e., the erasures in v_{E_h}. Furthermore, solving such a system amounts to decoding the associated 1D convolutional code C_j.

Lemma 4. Let C = Ker_{F[z_1,z_2]} H(z_1, z_2) with δ the degree of H(z_1, z_2). Suppose that v(z_1, z_2) ∈ C is a transmitted codeword and let E_h ⊂ E be given. Let J be the set of indices j such that Ω^{j,δ}_{E_h} contains only indices of E (and not of E \ E_h), i.e., E ∩ Ω^{j,δ}_{E_h} = E_h. Then for each j_0 ∈ J there exists a subsystem of (9),

(12) H^{j_0}_{E_h} v_{E_h} = a_{j_0},

with known right-hand side a_{j_0} = H_{Ω_{j_0}} v_{Ω_{j_0}}, such that H^{j_0}_{E_h} is as in (10). Moreover, consider the 1D convolutional code C_{j_0} with column distances d^c_{j_0}(ℓ), ℓ ≥ 0. If there exists ℓ_0 such that at most d^c_{j_0}(ℓ_0) − 1 erasures occur in any (ℓ_0 + 1)n consecutive components of v_{E_h}, then we can completely recover the vector v_{E_h}.
Proof. The first claim is obvious from Lemma 3. For the second part, let v̂_{E_h} ∈ F^{(ℓ_0+1)n} be the first (ℓ_0 + 1)n components of v_{E_h}. Hence, the first (ℓ_0 + 1)(n − k) equations of (12) can be written as

(13) H^{j_0}_{ℓ_0} v̂_{E_h} = â_{j_0},

where â_{j_0} is the column vector formed by the first (ℓ_0 + 1)(n − k) components of a_{j_0} and

(14) H^{j_0}_{ℓ_0}

is the submatrix of H^{j_0}_{E_h} formed by its first (ℓ_0 + 1)(n − k) rows and first (ℓ_0 + 1)n columns. By Lemma 1 the columns of H^{j_0}_{ℓ_0} corresponding to the erased components are linearly independent and therefore the system has a unique solution. In this way we recover the vector v̂_{E_h}. If we cut off the first (ℓ_0 + 1)n components of v_{E_h} that have already been decoded, one can proceed in an analogous way to decode the next (ℓ_0 + 1)n components. In this way we completely recover the vector v_{E_h}. This concludes the proof.
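The erasure-density condition of Lemma 4 is easy to test mechanically. A hedged Python sketch, with erased components marked as None; the flattened component list is an assumption made here for illustration, not notation from the paper:

```python
def condition_holds(components, window_len, d):
    """True if every window of `window_len` consecutive components
    contains at most d - 1 erasures (None entries), i.e. the
    Lemma 4 condition with window_len = (l0 + 1) * n, d = d^c_{j0}(l0)."""
    for start in range(max(1, len(components) - window_len + 1)):
        window = components[start:start + window_len]
        if sum(c is None for c in window) > d - 1:
            return False
    return True

# Toy check with n = 2, l0 = 2 (windows of 6 components) and d = 3:
received = [1, None, 0, 0, None, 1, 1, 0, None, 1, 1, 0]
print(condition_holds(received, window_len=6, d=3))  # -> True
```

If the check fails for every ℓ_0 of interest, the corresponding 1D system (13) is not guaranteed to have a unique solution and the decoder must fall back on other neighbors.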
Remark 1. The condition that Ω j,δ E h contains only indices in E h can be considered analogous to the condition of having some guard space in the 1D context. It is well-known that in order to decode a 1D convolutional code it is necessary to have some guard space, see [16,Remark 3.7] for more details.
Lemma 4 can be optimized by considering the best column distance among the set of (δ + 1) 1D convolutional codes C_j as follows:

d^c_max(ℓ) = max { d^c_j(ℓ) : j ∈ J },

with J being as in Lemma 4.

Theorem 1. Let C = Ker_{F[z_1,z_2]} H(z_1, z_2) with δ the degree of H(z_1, z_2). Suppose that v(z_1, z_2) ∈ C is a transmitted codeword and let E_h ⊂ E be given. If there exists an ℓ, say ℓ_0, such that in any consecutive (ℓ_0 + 1)n elements of v_{E_h} at most d^c_max(ℓ_0) − 1 erasures occur, then we can fully recover v_{E_h}.

Proof. The proof follows the same steps as the proof of Lemma 4, with d^c_{j_0}(ℓ_0) replaced by d^c_max(ℓ_0).

We finish this subsection with an algorithm to recover horizontal erasures.
Assume v(z_1, z_2) ∈ C is transmitted over an erasure channel. If the condition of Theorem 1 is satisfied, then a constructive algorithm can be developed in order to recover the codeword v(z_1, z_2). Next we outline the main ideas of an algorithm based on the proof of Lemma 4, where |A| denotes the cardinality of a set A ⊂ N_0^2:

Data: v(z_1, z_2) the received sequence and the erasures E_h ⊂ E.
Result: the corrected vector v_{E_h}.
Let ℓ_0 be as in Theorem 1, j_0 ∈ J such that d^c_{j_0}(ℓ_0) = d^c_max(ℓ_0), and H^{j_0}_{ℓ_0} as in (14);
while |E_h| > 0 do
 Compute a_{j_0} as in Lemmas 3 and 4 and take its first min{(ℓ_0 + 1)n, |E_h| n} components to estimate v̂_{E_h} by solving equation (13);
 Store v̂_{E_h} and take out the first min{ℓ_0 + 1, |E_h|} indices of E_h;
end
Algorithm 1: How to recover a set of horizontal erasures

3.2. Vertical lines. In this subsection we treat in an analogous way sets of erasures whose support lies on a vertical line. Let us first suppose that the set of erasures described by E(v(z_1, z_2)) contains a subset E_v of indices whose support lies on a vertical line of length t + 1, i.e.,

(15) E_v ⊂ { (r, s), (r, s + 1), . . . , (r, s + t) } ⊂ E.
3.3. Diagonal lines.

Example 6. Consider the set of erasures on a diagonal line with (r, s) = (8, 5) and t = 4; then

E_d = { (8, 5), (7, 6), (6, 7), (5, 8), (4, 9) },

and, for δ = 3, Figure 7 shows the neighbors Ω^{0,3}_{E_d} and Ω^{1,3}_{E_d}.

As in Subsection 3.2, all the results of Subsection 3.1 can be rewritten replacing E_h by E_d and H^j_{E_h} by H^j_{E_d}.

Erasures that cannot be recovered in a first round may become recoverable in a second round, after other erasures have been corrected. Only when the three algorithms (for horizontal, vertical and diagonal lines) can no longer decode does the general algorithm declare that it cannot decode any further.

It was shown in [16] that there exist 1D convolutional codes, called (reverse and complete) MDP codes, that have the maximum possible column distances (for a given set of parameters) and therefore perform particularly well over an erasure channel. Hence, another interesting simplified version of a general decoding algorithm for a 2D convolutional code C = Ker_{F[z_1,z_2]} H(z_1, z_2) can be implemented by considering only two 1D convolutional codes, C_h = Ker_{F[z]} H_h(z) and C_v = Ker_{F[z]} H_v(z), corresponding to the neighbors Ω^{0,δ}_{E_h} and Ω^{0,δ}_{E_v}, respectively. Note that these neighbors contain indices that lie to the left of and below the erasures. Hence, this algorithm recovers the erasures from left to right and from bottom to top, and requires having correct information to the left of and below the erasures. Finally, construct H_v(z) = H_h(z) in such a way that C_h = C_v are MDP 1D convolutional codes (see [1,16] for details on concrete constructions) and define H(z_1, z_2) with H(z, 0) = H(0, z) = H_h(z).

Conclusions
In this paper we have proposed a method to recover erasures in a 2D (finite support) convolutional code that are distributed in the 2D plane. We have shown that by considering neighbors around 2D erasures it is possible to treat these erasures as if they were erasures of 1D convolutional codes. Therefore techniques used for the decoding problem of 1D convolutional codes can be applied in the 2D context, thus simplifying the decoding of 2D convolutional codes.
Although this procedure does not handle all possible erasure patterns, it represents the first constructive procedure to deal with general erasures in 2D convolutional codes. This work raises an obvious but challenging follow-up question: how to design 2D convolutional codes so that the associated 1D convolutional codes have large column distances? This problem appears to be highly nontrivial and will be a topic of future research.