CONSTRUCTING STRONGLY-MDS CONVOLUTIONAL CODES WITH MAXIMUM DISTANCE PROFILE

This paper revisits strongly-MDS convolutional codes with maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimal sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make such convolutional codes applicable over the erasure channel, since they are able to correct a large number of erasures per time interval. The existence of these codes has been shown only for some specific cases. This paper shows, by construction, the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.


Introduction
In recent literature on convolutional codes, several new classes of codes with good distance properties have been introduced. These classes of codes are known as maximum distance separable (MDS) codes, maximum distance profile (MDP) codes, and strongly-MDS (sMDS) codes [10-13, 20, 21]. MDS codes are characterized by the property that they have the maximum possible free distance for a given rate and degree. sMDS codes are a subclass of MDS codes having the property that the free distance is attained at the earliest possible time step. Finally, MDP codes are characterized by the property that their column distances grow at the maximum possible rate.
The existence of MDP convolutional codes was first discussed in [12], and in [10] it was shown how to construct them when n − k divides δ. In this paper, we solve the problem of constructing MDP convolutional codes in the general case where n − k does not necessarily divide δ. Apart from solving a theoretical question, this construction also has a practical purpose, as we explain below. Recently, a number of papers [24-27] considered the use of MDP convolutional codes over the erasure channel, where the symbols sent either arrive correctly or are erased. The Internet is such an example; here the packet sizes are upper bounded by 12,000 bits, the maximum that the Ethernet protocol allows. Each packet can be modeled as an element or a sequence of elements from a large alphabet, for example F := F_{2^1000}. Packets sent over the Internet are protected by a cyclic redundancy check (CRC) code. If the CRC check fails, the receiver knows that a packet is in error or has not arrived [18]; it then declares an erasure. With or without interleaving, such an encoding scheme results in the property that errors tend to occur in bursts, and this is a phenomenon observed over many channels modeled via the erasure channel. When transmitting over an erasure channel like the Internet, one of the problems encountered is the delay experienced on the received information due to the possible re-transmission of lost packets. One way to eliminate these delays is by using forward error correction. Commonly, block codes have been used for such a task; see, e.g., [7, 15] and the references therein. The use of convolutional codes over the erasure channel has been proposed in Epstein [6], Arai et al.
[4], and more recently in [27], in which a subclass of MDP codes was used over the erasure channel. The advantage that convolutional codes have over block codes, exploited in their decoding algorithms, is the flexibility obtained through the "sliding window" feature of convolutional codes. The received information can be grouped in appropriate ways, depending on the erasure bursts, and then be decoded by decoding the "easy" blocks first. This flexibility in grouping information brings a certain freedom in the handling of sequences. This "sliding window" property of convolutional codes allows more erasures to be corrected in a given block than a block code of that same length could correct. In addition, the algebraic properties of maximum distance profile (MDP) convolutional codes allow these codes to correct the largest number of errors possible for a given window, making them powerful encoding schemes over the erasure channel (see [27] for details).
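The erasure-recovery principle described above can be illustrated with a toy block-code example. The sketch below is a hypothetical [5, 2] Reed-Solomon-style code over GF(11), not one of the codes constructed in this paper; a sliding-window convolutional decoder performs this kind of linear solve on each window of received packets, where the erased positions are known (as with a failed CRC).

```python
# Toy illustration of erasure decoding by linear algebra over a finite field.
# Hypothetical [5, 2] Reed-Solomon-style block code over GF(11); not the
# convolutional codes of this paper, only the per-window computation they use.

p = 11  # field size (prime, so arithmetic is plain modular arithmetic)

def solve_mod(A, b, p):
    """Solve A x = b over GF(p) by Gaussian elimination (A square, invertible)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % p)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)  # inverse via Fermat's little theorem
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] % p:
                f = M[r][c]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

# Encode: codeword = evaluations of the message polynomial u(x) at 5 points.
k, n = 2, 5
u = [3, 7]                       # message: u(x) = 3 + 7x
xs = list(range(1, n + 1))
code = [(u[0] + u[1] * x) % p for x in xs]

# Channel: three symbols are erased (positions known to the receiver).
erased = {1, 2, 4}
received = [None if j in erased else code[j] for j in range(n)]

# Decode: any k surviving evaluations determine u, hence the erased symbols.
pts = [(xs[j], received[j]) for j in range(n) if received[j] is not None]
A = [[pow(x, i, p) for i in range(k)] for x, _ in pts[:k]]
b = [y for _, y in pts[:k]]
u_rec = solve_mod(A, b, p)       # recovered message -> [3, 7]
```

Here three of five symbols were erased and still recovered, mirroring the "large number of erasures per time interval" behavior discussed above, though with a block code rather than a sliding window.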
The paper is organized as follows. In Section 2, we introduce the background necessary for the development of the paper: it includes the necessary introductory material on convolutional codes and on MDP convolutional codes in particular. In Section 3, we present the main result of the paper: for each choice of parameters n, k, δ and, in particular, for the open problem case (n − k) ∤ δ, we show how to construct (n, k, δ) convolutional codes that are MDP (our codes will also be sMDS). At the end of Section 3, we formulate this constructive algorithm, and in Section 4 we conclude the paper.

Preliminaries
This section contains the mathematical background needed for the development of our results. Note that throughout the paper, vectors of length n will be viewed as n × 1 matrices, i.e., as column vectors.
Let F be a finite field and F[D] be the ring of polynomials with coefficients in F.
A convolutional code C of rate k/n is a submodule of F[D]^n of rank k given by a basic and minimal full-rank polynomial encoder matrix G(D), where basic means that G(D) has a polynomial right inverse, and minimal means that the sum of the row degrees of G(D) attains its minimal possible value δ, called the degree of C.* A rate k/n convolutional code C of degree δ is called an (n, k, δ) convolutional code [17].
A dual description of a convolutional code C can be given through one of its parity-check matrices, which are (n − k) × n full-rank polynomial matrices H(D) such that

C = ker H(D) = { v(D) ∈ F[D]^n : H(D) v(D) = 0 }.

Writing H(D) = H_0 + H_1 D + · · · + H_m D^m, where H_m ≠ 0 and H_i = 0 for i > m, the above kernel representation can be expanded as

H_0 v_0 = 0,  H_1 v_0 + H_0 v_1 = 0,  H_2 v_0 + H_1 v_1 + H_0 v_2 = 0,  . . .

An important distance measure for a convolutional code C is its free distance d_free(C), defined as

d_free(C) = min { wt(v(D)) : v(D) ∈ C, v(D) ≠ 0 },

where wt(v(D)) is the Hamming weight of a polynomial vector v(D) = Σ_i v_i D^i, i.e., wt(v(D)) = Σ_i wt(v_i), where wt(v_i) is the number of the nonzero components of v_i.
In [20], Rosenthal and Smarandache showed that the free distance of an (n, k, δ) convolutional code is upper bounded by

d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1.

This bound was called the generalized Singleton bound since it generalizes in a natural way the Singleton bound for block codes (when δ = 0). An (n, k, δ) convolutional code with its free distance equal to the generalized Singleton bound was called a maximum distance separable (MDS) code [20]. It was also observed in [20] that if C is an MDS convolutional code, then all the row-reduced encoders of C have the generic set of row degrees, i.e., δ − k⌊δ/k⌋ rows of degree ⌊δ/k⌋ + 1 and the remaining rows of degree ⌊δ/k⌋. Another important distance measure for a convolutional code is the jth column distance d_j^c(C), given by the equivalent expressions

(3) d_j^c(C) = min { wt(v_0) + · · · + wt(v_j) : v(D) ∈ C, v_0 ≠ 0 }
            = min { wt(v_0) + · · · + wt(v_j) : H_j^c (v_0^T, . . . , v_j^T)^T = 0, v_0 ≠ 0 },

where

(4) H_j^c = [ H_0
              H_1   H_0
               ⋮            ⋱
              H_j   H_{j−1}  · · ·  H_0 ]

and H_j = 0 for j > m. This notion is related to the free distance d_free(C) in the following way:

d_0^c(C) ≤ d_1^c(C) ≤ · · · ≤ lim_{j→∞} d_j^c(C) = d_free(C).

The jth column distance is upper bounded as

(6) d_j^c(C) ≤ (n − k)(j + 1) + 1,

and the maximality of any of the jth column distances implies the maximality of all the previous ones [10, 12], i.e., if d_j^c(C) = (n − k)(j + 1) + 1, then d_i^c(C) = (n − k)(i + 1) + 1 for all i ≤ j. Since no column distance can achieve a value greater than the generalized Singleton bound, there must exist an integer L for which the bound (6) can be attained for all j ≤ L, while it is a strict inequality for j > L [10]; this value is

L = ⌊δ/k⌋ + ⌊δ/(n − k)⌋.

An (n, k, δ) convolutional code C with every d_j^c(C) maximal, for each j ≤ L, is called a maximum distance profile (MDP) code [10, 12]. Therefore, the column distances of MDP codes increase as rapidly as possible for as long as possible. In contrast, an (n, k, δ) convolutional code C is called a strongly-MDS code if the generalized Singleton bound is attained as early as possible [10]. We state these two definitions formally.
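For concreteness, the quantities just introduced can be computed for any parameter set: the generalized Singleton bound, the column-distance bound (6), and the indices L = ⌊δ/k⌋ + ⌊δ/(n − k)⌋ and M = ⌊δ/k⌋ + ⌈δ/(n − k)⌉ from [10]. A small helper (the function names are our own choosing):

```python
# Helper (our own naming) computing the distance parameters of an (n, k, delta)
# convolutional code: the indices L and M as in [10], the generalized Singleton
# bound, and the column-distance bound (n - k)(j + 1) + 1.

def code_profile(n, k, delta):
    L = delta // k + delta // (n - k)        # floor(d/k) + floor(d/(n-k))
    M = delta // k + -(-delta // (n - k))    # floor(d/k) + ceil(d/(n-k))
    singleton = (n - k) * (delta // k + 1) + delta + 1
    return L, M, singleton

def column_bound(n, k, j):
    return (n - k) * (j + 1) + 1

# Example with (n - k) not dividing delta, the case treated in this paper:
L, M, s = code_profile(3, 1, 5)   # L = 7, M = 8, Singleton bound = 18
```

For a (3, 1, 5) code the bound (6) gives 17 at j = L = 7, one below the Singleton bound 18, while at j = M = 8 it would give 19, already above it.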

Definition 1 ([10]). Let C be a convolutional code of rate k/n and degree δ.

1. C is said to have a maximum distance profile (MDP) if

d_j^c(C) = (n − k)(j + 1) + 1  for  j = L = ⌊δ/k⌋ + ⌊δ/(n − k)⌋.

2. C is said to be a strongly-MDS (sMDS) code if

d_M^c(C) = (n − k)(⌊δ/k⌋ + 1) + δ + 1  for  M = ⌊δ/k⌋ + ⌈δ/(n − k)⌉,

where M is the minimum instance j at which d_j^c(C) = d_free(C) can occur.
Remark 1. Note that, in general, neither MDP implies sMDS, nor sMDS implies MDP. However, for (n − k) | δ the two notions MDP and sMDS are equivalent. This is what makes this case simpler to address; see [10].
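The divisibility condition of Remark 1 can be checked numerically at the level of the bounds involved (an illustration, not a proof of the equivalence): the MDP target value (n − k)(L + 1) + 1 for d_L^c meets the generalized Singleton bound exactly when (n − k) | δ.

```python
# Numerical check over a small parameter sweep: the MDP column-distance target
# at j = L equals the generalized Singleton bound iff (n - k) divides delta.
# This illustrates (but does not prove) why MDP and sMDS coincide in that case.

def bounds_coincide(n, k, delta):
    L = delta // k + delta // (n - k)
    mdp_target = (n - k) * (L + 1) + 1
    singleton = (n - k) * (delta // k + 1) + delta + 1
    return mdp_target == singleton

ok = all(
    bounds_coincide(n, k, d) == (d % (n - k) == 0)
    for n in range(2, 8) for k in range(1, n) for d in range(1, 25)
)  # ok == True
```

Algebraically, the two values differ by (n − k)⌊δ/(n − k)⌋ − δ, which vanishes exactly when (n − k) | δ.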
Remark 2. Note that MDP convolutional codes are similar to MDS block codes within windows of size (L + 1)n. Indeed, if we truncate a codeword with nonzero first component at any jth component, with j ≤ L, the truncation will have weight greater than or equal to the bound (6), which is the Singleton bound for block codes with the given parameters.
The next definition is essential for our construction of MDP codes; it gives a description of these codes using the "superregularity" of a certain matrix associated to a given parity-check matrix, as Theorem 1 below, taken from [10], will formally state.
Let θ = (θ_ij) be a square matrix of order m over F (or Z) and let S_m be the symmetric group on m letters. The determinant det(θ) of θ is given by

(8) det(θ) = Σ_{σ ∈ S_m} sgn(σ) θ_{1σ(1)} · · · θ_{mσ(m)},

where sgn(σ) is the signature of the permutation σ.
Let {M_ij} be a set of matrices of the same size and let γ = (M_ij) be a lower triangular block matrix, i.e., M_ij = 0 for i < j. If θ is a square submatrix of γ of order m such that θ_{1σ(1)} θ_{2σ(2)} · · · θ_{mσ(m)} = 0 for all σ ∈ S_m, we say that det(θ) is a trivial minor of γ. We say that γ is a superregular matrix if all entries of M_ij, i ≥ j, are different from zero and all the non-trivial minors of γ are non-zero.
Observe that the trivial minors of a superregular matrix come from submatrices that contain a zero on their diagonal or, equivalently, submatrices that contain an s × t zero block for some s, t such that s + t − 1 is equal to or larger than the order of the submatrix.
It is important to remark here that there are several related but different notions of superregular matrices in the literature. Frequently, see for instance [22], a superregular matrix is defined to be a matrix (not necessarily lower block triangular) with the property that all of its square submatrices are nonsingular, which implies that such a matrix must have all its entries nonzero. Also, in [1, 16, 23], several examples of triangular matrices were constructed in such a way that all submatrices inside the triangular configuration were nonsingular. These notions, however, do not apply to the case we consider: we will consider matrices that allow zero entries. The more recent contributions [10, 11, 13, 24, 25] consider the same notion of superregularity, but defined only for lower triangular matrices. The notion we consider comprises, however, a larger set: apart from lower triangular matrices, it also includes block triangular matrices.
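The notion used here can be made concrete with a brute-force checker for small matrices over a prime field (a sketch for the scalar lower triangular case only; the helper names are ours): a minor is trivial when every term of the Leibniz expansion (8) vanishes, and superregularity requires every non-trivial minor to be nonzero.

```python
from itertools import combinations, permutations

def minor(A, rows, cols, p):
    """Return (det mod p, is_trivial) for the square submatrix A[rows][cols].
    The minor is trivial when every Leibniz term of the determinant vanishes."""
    m = len(rows)
    det, trivial = 0, True
    for sigma in permutations(range(m)):
        term = 1
        for i in range(m):
            term = term * A[rows[i]][cols[sigma[i]]] % p
        if term:
            trivial = False
        # sign of sigma via its inversion count
        inv = sum(sigma[i] > sigma[j] for i in range(m) for j in range(i + 1, m))
        det = (det + (-1) ** inv * term) % p
    return det, trivial

def is_superregular(A, p):
    """Check superregularity of a scalar lower triangular matrix over GF(p)."""
    n = len(A)
    if any(A[i][j] % p == 0 for i in range(n) for j in range(i + 1)):
        return False  # the lower triangular part must be entirely nonzero
    for m in range(1, n + 1):
        for rows in combinations(range(n), m):
            for cols in combinations(range(n), m):
                det, trivial = minor(A, rows, cols, p)
                if not trivial and det == 0:
                    return False
    return True

good = [[1, 0, 0], [1, 1, 0], [2, 1, 1]]  # superregular over GF(7)
bad = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]   # fails: det([[1,1],[1,1]]) = 0
```

In the second matrix, the minor taken from rows {2, 3} and columns {1, 2} is non-trivial but zero, so the matrix is not superregular, even though all its lower triangular entries are nonzero.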
Theorem 1 ([10]). Let C be an (n, k, δ) convolutional code with a minimal basic parity-check matrix H(D) and let

(10) H(D) = [A(D) B(D)] = [A_0 B_0] + [A_1 B_1] D + · · · + [A_m B_m] D^m,

with A(D) ∈ F[D]^{(n−k)×(n−k)} and B(D) ∈ F[D]^{(n−k)×k}, where m = ⌊δ/(n − k)⌋ + 1 = ⌈δ/(n − k)⌉ (we assume (n − k) ∤ δ). Assume in addition that A_0 is invertible and let

(11) A(D)^{−1} B(D) = Σ_{i≥0} P_i D^i

be the Laurent expansion of A(D)^{−1}B(D) over the field F((D)) of Laurent series. For all j ≥ 0, define

(12) H̄_j^c = [ I_{(j+1)(n−k)}  P_j^c ],

where the matrix I_{(j+1)(n−k)} is the identity matrix of size (j + 1)(n − k) and

(13) P_j^c = [ P_0
               P_1   P_0
                ⋮            ⋱
               P_j   P_{j−1}  · · ·  P_0 ].

Then, the following conditions are equivalent, for all j ∈ {1, . . . , L}:
1. d_j^c(C) = (n − k)(j + 1) + 1;
2. the matrix P_j^c is superregular.

Proof. The proof follows directly from [10, Theorem 2.4 & Theorem 3.1] and the definition of superregular matrix.

Constructing MDP convolutional codes
Given the parameters n, k, δ ∈ N, k < n, such that (n − k) ∤ δ, our goal is to construct an (n, k, δ) convolutional code with the MDP property.† In fact, we will construct an sMDS code with the MDP property, i.e., by Definition 1, we search for an (n, k, δ) code such that

d_L^c(C) = (n − k)(L + 1) + 1  and  d_M^c(C) = (n − k)(⌊δ/k⌋ + 1) + δ + 1.

We aim to construct a matrix H(D) = [A(D) B(D)], as in Theorem 1, that satisfies the MDP and sMDS properties, or equivalently, such that the matrix P_L^c defined through (11) and (13) (for j = L) is superregular and such that d_M^c(C) attains the value above. To this end, we construct a matrix P_M^c such that its submatrix P_L^c is superregular. We will show that this matrix defines a unique code C = ker [A(D) B(D)] through (15), where, for all i ∈ {0, . . . , M}, the P_i are the block entries of the matrix P_M^c. The superregularity of P_L^c will guarantee that the code C = ker H(D) has d_L^c = (n − k)(L + 1) + 1. The choice of the remaining part of P_M^c will ensure that d_M^c = (n − k)(⌊δ/k⌋ + 1) + δ + 1 and that the code C has rate k/n and degree δ. Let

(16) T_M^c = [ T_0
               T_1   T_0
                ⋮            ⋱
               T_M   T_{M−1}  · · ·  T_0 ]

be a superregular matrix and let P_i = T_i, for i ∈ {0, . . . , L}. For issues concerning the existence and constructions of such superregular matrices see [2, 10].
It might seem natural to choose P_M = T_M. Since T_M^c is superregular, this choice would seem to ensure also the maximality of the Mth column distance of the code C, i.e.,

d_M^c(C) = (n − k)(M + 1) + 1.

However, if the code has rate k/n and degree δ,

d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1 < (n − k)(M + 1) + 1,

where the first inequality holds because the middle expression is the generalized Singleton bound for the given parameters, and the second holds since (n − k)⌈δ/(n − k)⌉ > δ when (n − k) ∤ δ. This yields that d_M^c(C) > d_free(C), contradicting d_M^c(C) ≤ d_free(C), which holds since d_j^c is increasing in j with limit d_free(C). Therefore, P_M has to be chosen differently.
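The counting argument above can be verified numerically: whenever (n − k) ∤ δ, the column-distance bound at j = M strictly exceeds the generalized Singleton bound, so the Mth column distance cannot be maximal.

```python
# Sweep confirming the contradiction above: for (n - k) not dividing delta,
# (n - k)(M + 1) + 1 > (n - k)(floor(delta/k) + 1) + delta + 1, i.e. the
# column-distance bound at j = M lies strictly above the Singleton bound.

strict = True
for n in range(2, 8):
    for k in range(1, n):
        for delta in range(1, 25):
            if delta % (n - k) == 0:
                continue  # only the non-divisible case is at issue here
            M = delta // k + -(-delta // (n - k))   # ceil division
            col_M = (n - k) * (M + 1) + 1
            singleton = (n - k) * (delta // k + 1) + delta + 1
            strict = strict and (col_M > singleton)
# strict == True over the whole sweep
```

The gap between the two bounds is exactly (n − k)⌈δ/(n − k)⌉ − δ, which is positive precisely in the non-divisible case.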
Note that, when (n − k) ∤ δ, which is the case we consider here, we have that M = L + 1. For simplicity, we denote δ̄ := (n − k)m − δ. We partition T_M as in (18).

Lemma 1. Let P_i = T_i, for i ∈ {0, . . . , L}, and P_M = P_X as in (20), with X variable. Let C = ker H(D), with A_i and B_i satisfying equations (10) and (15). If A_0 is invertible then, for any matrix X, the following hold:
1. d_L^c(C) = (n − k)(L + 1) + 1, i.e., C has the MDP property;
2. d_M^c(C) ≥ (n − k)(⌊δ/k⌋ + 1) + δ + 1.
Proof. Statement 2: From (3), it follows that

d_M^c(C) = min { wt(v_0) + · · · + wt(v_M) : H_M^c (v_0^T, . . . , v_M^T)^T = 0, v_0 ≠ 0 }.

It is easy to see that, after a column permutation, the sliding parity-check matrix H_M^c in (4) of C has its A-blocks and B-blocks grouped together and, using that A_0 is invertible, we can left multiply H_M^c by the inverse of the first block to obtain H̄_M^c (defined in (12)). Hence, ker H_M^c and ker H̄_M^c coincide up to this column permutation, where a vector v = (ṽ_0, ṽ_1, . . . , ṽ_{2M+1}) is divided into blocks according to H_M^c (or H̄_M^c). Note that, when considering ker H̄_M^c, the condition (ṽ_0, ṽ_{M+1}) = 0 is equivalent to ṽ_{M+1} = 0, since the first block row of H̄_M^c gives ṽ_0 = −P_0 ṽ_{M+1}. On the other hand, let H̃_M^c be the submatrix of H̄_M^c obtained by discarding its last δ̄ rows (recall δ̄ = (n − k)m − δ), i.e., keeping only the first n − k − δ̄ rows of [P_M P_{M−1} · · · P_0], denoted by [P̂_M P̂_{M−1} · · · P̂_0]. Then, the corresponding minor is nonzero and therefore we obtain that

(21) d_M^c(C) ≥ (n − k)(⌊δ/k⌋ + 1) + δ + 1.

We can now prove the existence of convolutional codes with the sMDS and MDP properties.
Theorem 2. Given any set of parameters (n, k, δ), there exists an (n, k, δ) convolutional code that is both sMDS and MDP.
Proof. It is well known that a convolutional code of rate k/n and degree δ cannot have free distance larger than the generalized Singleton bound (n − k)(⌊δ/k⌋ + 1) + δ + 1. The previous Lemma shows that a code C = ker H(D), with A_i, B_i and P_i related via equation (15) and P_i chosen as in (17) and (20), has d_M^c equal to or larger than the Singleton bound of an (n, k, δ) code. We still do not know the rate and degree of C, but if C had rate k/n and degree δ, then its free distance must satisfy the Singleton bound for these parameters, i.e., d_free ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1. Since d_M^c ≤ d_free, from (21) we obtain that

d_M^c(C) = d_free(C) = (n − k)(⌊δ/k⌋ + 1) + δ + 1,

i.e., C is an sMDS (n, k, δ) code. Since C also has the MDP property (d_L^c = (n − k)(L + 1) + 1), this construction produces the desired (n, k, δ) code that is sMDS and has the MDP property.
Hence, we will choose P_M based on the following criteria:
1. The constructed matrix P_M^c should generate, via equation (15), the matrices A_i and B_i such that the code C is an (n, k, δ) code.
2. A_0 is invertible (so that we can apply Theorem 1 and Lemma 1).
For this purpose we partition the matrices A_i and B_i accordingly. The freedom to choose P_M also gives some freedom in choosing the A_i's and B_i's. Making use of this freedom, we shall impose the conditions:
1. Ā_m = 0, B̄_m = 0;
2. A_0 invertible,
and show that these conditions ensure that the above-mentioned criteria are satisfied.‡

Advances in Mathematics of Communications
Volume 10, No. 2 (2016), 275-290

Using the partition of the A_i's and B_i's and imposing the conditions Ā_m = 0, B̄_m = 0, we obtain the system (26). Note that, in the partition (26), the relevant block is nonsingular, as its determinant corresponds to a full-size minor of the superregular matrix P; see for instance [10, Lemma A.1], [9] or [8].
Once [A_{m−1} . . . A_0] is fixed (as being a parity-check matrix of the code with generator matrix P) and it satisfies the matrix equation [A_{m−1} . . . A_0] P = 0, we aim at deriving P_M such that (23) is also satisfied, i.e., P_M must be defined such that (27) holds. We summarize the construction in the following algorithm:

1. Select a superregular matrix T_M^c as in (16). For this task one can make use of the existing constructions of superregular matrices in [10, Example 3.10, (2)] and [2].
2. Take P_i = T_i, for i ∈ {0, . . . , L}, and P_M as in (20).
3. Compute a minimal parity-check matrix of the code with generator matrix P given in (25) to obtain [A_{m−1} · · · A_0].
4. Obtain the sub-matrix of P_M through equation (29), therefore "completing" the matrix P_M.
5. Choose A_0 such that the matrix in (30) is nonsingular. One can use, for instance, the elements of the canonical basis.
6. Solve the matrix equation (31).
7. Compute the matrix B(D) through equation (22).

Remark 3. Although the above algorithm gives a method to construct sMDS convolutional codes with maximum distance profile, it is important to note that the existing constructions of superregular matrices require large field sizes; the construction of superregular matrices (Step 1) over small finite fields is still an open problem under investigation. Some conjectures and results on the size of the field required for the construction of superregular matrices remain unsolved [10, 13].
We illustrate the algorithm with an example.
1. We consider the superregular matrices described in [2] for building T_2^c, where α is a primitive element of a field F of characteristic 2 and of size equal to or larger than 2^512.
2. Set P_0 = T_0 and P_1 = T_1, and take P_2 as in (20), where the variables x_1 and x_2 have to be computed.
3. To find a solution for the matrix equation (23), i.e., to find A_0 and the variables x_1 and x_2 in P_2, we first compute A_0 as a parity-check matrix of P as in (25). For these parameters P = P_1 and therefore

A_0 = [ α^32 + α^40   α^16 + α^24 + α^32   1 ].
6. To complete the values of the matrix A(D) one needs to compute A_1, which is done by solving the matrix equation (31).
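The field arithmetic in this example (sums of powers of a primitive element α in characteristic 2) can be carried out with carry-less polynomial arithmetic. As a small-scale illustration only, the sketch below works in GF(2^8) with the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d), for which α = x is primitive; the example itself requires a far larger field (size at least 2^512), where the same routines apply with a larger modulus.

```python
# Illustration of the arithmetic behind entries such as alpha^32 + alpha^40:
# GF(2^m) elements as bit masks, multiplication as a carry-less product
# reduced modulo a primitive polynomial. Here m = 8 and the modulus is 0x11d
# (x^8 + x^4 + x^3 + x^2 + 1), for which alpha = x (i.e. the mask 2) is
# primitive; the paper's example needs the same arithmetic in a larger field.

MOD, DEG = 0x11d, 8

def gf_mul(a, b):
    """Multiply two field elements (bit masks) in GF(2^DEG)."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # addition in characteristic 2 is XOR
        b >>= 1
        a <<= 1
        if a >> DEG:
            a ^= MOD        # reduce modulo the primitive polynomial
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^DEG)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

alpha = 2
# An entry like alpha^32 + alpha^40 is just the XOR of the two powers:
entry = gf_pow(alpha, 32) ^ gf_pow(alpha, 40)
```

Since α is primitive, α^32 ≠ α^40, so such an entry is guaranteed to be nonzero, which is what the superregularity arguments require of the lower triangular entries.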

Conclusions
A great deal of attention has been devoted in recent years to two new classes of (n, k, δ) convolutional codes, called MDP and sMDS codes, due to their optimal distance properties. However, the question of how to construct them had remained open, and only the case (n − k) | δ (the case where these two classes coincide) had been solved. In this paper we have filled this gap by presenting an effective method to construct sMDS (n, k, δ) codes with maximum distance profile for any choice of the parameters (n, k, δ).
* All minimal basic encoders of C have, with t := δ − k⌊δ/k⌋, k − t rows of degree ⌊δ/k⌋ and t rows of degree ⌊δ/k⌋ + 1. An equivalent result holds for minimal basic parity-check matrices H(D), and hence all row degrees of H(D) are upper bounded by m = ⌈δ/(n − k)⌉ and the degree δ is the sum of the row degrees of H(D).
Note that P is completely determined by (17) and (20). A parity-check matrix of the block code having the full-rank matrix P as generator matrix is a matrix with (n − k)m columns and (n − k)m − δ = δ̄ rows. Let [A_{m−1} . . . A_0] ∈ F^{δ̄ × (n−k)m} be such a parity-check matrix, i.e., Im_F P = ker_F [A_{m−1} . . . A_0].
