Asymptotic properties of delayed matrix exponential functions via Lambert function

In the case of first-order linear systems with a single constant delay and a constant matrix, the application of the well-known "step by step" method (used to solve ordinary differential equations with delay) has recently been formalized using a special type of matrix function, called the delayed matrix exponential. This matrix function is defined on the intervals $(k-1)\tau \le t < k\tau$, $k=0,1,\dots$ (where $\tau>0$ is the delay) as different matrix polynomials, and is continuous at the nodes $t=k\tau$. In the paper, the asymptotic properties of the delayed matrix exponential are studied for $k\to\infty$ and it is, e.g., proved that the sequence of its values at the nodes is approximately a geometric progression. A constant matrix is found such that its matrix exponential is the "quotient" factor; it depends on the principal branch of the Lambert function. Applications of the results obtained are given as well.

1. Introduction. The well-known "step by step" method is one of the basic tools for the investigation of linear differential equations and systems with delay. The application of this method to first-order linear systems with a single constant delay and a constant matrix of linear terms was formalized in [5,6] by means of the delayed matrix exponential $e^{Bt}_\tau$, where $B$ is a square constant matrix and $\tau > 0$ is a delay. This special delayed matrix function is defined on every interval $(k-1)\tau \le t < k\tau$, $k = 0, 1, \dots$ as a matrix polynomial depending on $B$ and is continuous at the nodes $t = k\tau$. Such a step-by-step definition complicates its asymptotic analysis. The paper deals with the asymptotic properties of the delayed matrix exponential. The proofs of the results derived below make use of the properties of the matrix Lambert function [8]; therefore, some basic notation and results related to this function are recalled in this part as well. The auxiliary results surveyed in Part 1.1 can be found in [5,6] and [2]. The results given in Part 1.2 are taken from [3] (see also the original source [8]). New auxiliary results are proved in Part 1.3. In Part 2, auxiliary determinants are computed, and the results are applied in Part 3 to prove the main result on the asymptotic behavior of the sequence of ratios of the delayed matrix exponential at adjacent nodes. The sequence of values of the delayed matrix exponential at the nodes is shown to be approximately a geometric progression, and a constant matrix is found such that its matrix exponential is the "quotient" factor; it depends on the principal branch of the Lambert function. Moreover, some further results on the asymptotic properties of the delayed matrix exponential are proved. Applications of the results derived are collected in Part 4.
1.1. First-order linear systems. Let $B$ be an $n \times n$ constant matrix, $\Theta$ the $n \times n$ null matrix, $I$ the $n \times n$ unit matrix, and let $\tau > 0$ be a constant. The delayed matrix exponential $e^{Bt}_\tau$ of the matrix $B$ is an $n \times n$ matrix function mapping $\mathbb{R}$ to $\mathbb{R}^{n \times n}$, continuous on $\mathbb{R} \setminus \{-\tau\}$ and defined as follows:
$$e^{Bt}_\tau := \begin{cases} \Theta, & t < -\tau, \\[4pt] \displaystyle\sum_{j=0}^{k} B^j \,\frac{(t-(j-1)\tau)^j}{j!}, & (k-1)\tau \le t < k\tau, \end{cases}$$
where $k = \lceil t/\tau \rceil$ is the ceiling function, i.e., the smallest integer greater than or equal to $t/\tau$. The main property of the delayed matrix exponential $e^{Bt}_\tau$ is the following:
$$\frac{d}{dt}\, e^{Bt}_\tau = B\, e^{B(t-\tau)}_\tau, \quad t \ne -\tau,$$
and the matrix $Y(t) = e^{Bt}_\tau$ solves the initial problem for a matrix differential system with a single delay
$$Y'(t) = B\, Y(t-\tau), \qquad Y(t) = I, \ -\tau \le t \le 0.$$
If $\varphi \colon [-\tau, 0] \to \mathbb{R}^n$ is a continuously differentiable vector function, then the solution of the initial-value problem
$$y'(t) = B\, y(t-\tau), \qquad y(t) = \varphi(t), \ -\tau \le t \le 0,$$
can be represented in the form
$$y(t) = e^{Bt}_\tau \varphi(-\tau) + \int_{-\tau}^{0} e^{B(t-\tau-s)}_\tau \varphi'(s)\, ds.$$
Let $A$ be a regular $n \times n$ constant matrix and $AB = BA$. Then, the solution of the initial-value problem
$$y'(t) = A\, y(t) + B\, y(t-\tau), \qquad y(t) = \varphi(t), \ -\tau \le t \le 0,$$
is given by the formula
$$y(t) = e^{A(t+\tau)} e^{B_1 t}_\tau \varphi(-\tau) + \int_{-\tau}^{0} e^{A(t-s)} e^{B_1(t-\tau-s)}_\tau \bigl(\varphi'(s) - A\varphi(s)\bigr)\, ds,$$
where $t \in [-\tau, \infty)$ and $B_1 = e^{-A\tau} B$.
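To make the step-by-step definition concrete, the following sketch (not from the paper; the scalar case $n = 1$ with hypothetical parameter values $b = 0.3$, $\tau = 1$) evaluates $e^{bt}_\tau$ via its piecewise polynomial definition and checks the defining delay property $y'(t) = b\,y(t-\tau)$ by a central finite difference at a non-node point:

```python
import math

def delayed_exp(b, tau, t):
    """Scalar delayed exponential e^{bt}_tau: zero for t < -tau, and on
    (k-1)*tau <= t < k*tau the polynomial sum_{j=0}^{k} b^j (t-(j-1)tau)^j / j!."""
    if t < -tau:
        return 0.0
    k = math.ceil(t / tau)
    return sum(b**j * (t - (j - 1) * tau)**j / math.factorial(j)
               for j in range(k + 1))

# Check d/dt e^{bt}_tau = b * e^{b(t-tau)}_tau at a non-node point
# via a central difference (hypothetical values b = 0.3, tau = 1.0).
b, tau, t, h = 0.3, 1.0, 2.3, 1e-6
lhs = (delayed_exp(b, tau, t + h) - delayed_exp(b, tau, t - h)) / (2 * h)
rhs = b * delayed_exp(b, tau, t - tau)
```

The two sides agree to roughly the accuracy of the finite difference, which illustrates that each polynomial piece is glued to the previous one exactly as the method of steps requires.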
1.2. The Lambert W function. As is easily seen from the definition, the delayed matrix function above is defined on the intervals $(k-1)\tau \le t < k\tau$, $k = 0, 1, \dots$ as different matrix polynomials. As mentioned in the Introduction, this is the reason why its asymptotic analysis is complicated. Therefore, it seems important to study the sequence $\{e^{Bk\tau}_\tau\}_{k=0}^{\infty}$ of values of the delayed exponential of a matrix $B$ at the nodes $k\tau$, connecting two different matrix polynomials, as $k \to \infty$. Later, we will prove that, for the special matrix considered, this sequence is approximately a geometric progression, and we will find a constant $n \times n$ matrix $C$ such that its ordinary exponential $e^{C\tau}$ is the "quotient", i.e., that
$$\lim_{k\to\infty} e^{B(k+1)\tau}_\tau \bigl(e^{Bk\tau}_\tau\bigr)^{-1} = e^{C\tau}, \qquad (7)$$
where $(\cdot)^{-1}$ denotes the inverse matrix, whose existence we assume. This will be done using the so-called Lambert function (named after Johann Heinrich Lambert, see [8]). We recall its definition and some basic results on the Lambert function (published in [3]).
Lambert defined the function as the inverse of the function $z = w e^{w}$. This means that the Lambert function, usually denoted by $W = W(z)$, is defined implicitly by the equation
$$W(z)\, e^{W(z)} = z. \qquad (8)$$
Such a function is multi-valued (except at the point $z = 0$). For real arguments $z = x$ such that $x > -1/e$ and real $W(x)$ satisfying $W(x) > -1$, equation (8) defines a single-valued function $W = W_0(x)$ called the principal branch of the Lambert $W(z)$ function, i.e.,
$$W_0(x)\, e^{W_0(x)} = x, \quad W_0(x) > -1, \ x > -1/e. \qquad (9)$$
We prove that the matrix $C$ in (7) is defined by the principal branch $W_0(z)$ of the Lambert $W(z)$ function (see Corollary 1 below).
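The defining equation (8) is easy to check numerically. The following sketch (illustrative only, not from the paper; the Newton iteration and the sample points $x = \pm 0.2$ are ad hoc choices) computes the principal branch on the real line and verifies $W_0(x) e^{W_0(x)} = x$ together with $W_0(x) > -1$:

```python
import math

def lambert_w0(x, tol=1e-14):
    """Principal branch W_0(x) for real x >= -1/e, computed by Newton's
    method applied to f(w) = w*e^w - x; the principal branch satisfies W_0 > -1."""
    if x < -1 / math.e:
        raise ValueError("W_0 is real only for x >= -1/e")
    w = 0.0 if x < 1 else math.log(x)  # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

w_pos = lambert_w0(0.2)
res_pos = abs(w_pos * math.exp(w_pos) - 0.2)   # residual of (8) at x = 0.2
w_neg = lambert_w0(-0.2)
res_neg = abs(w_neg * math.exp(w_neg) + 0.2)   # residual of (8) at x = -0.2
```

Both residuals vanish to machine precision, and both computed values exceed $-1$, as required of the principal branch.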
The Maclaurin expansion of $W_0(x)$ can be found easily; it is given by the series
$$W_0(x) = \sum_{n=1}^{\infty} \frac{(-n)^{n-1}}{n!}\, x^n \qquad (10)$$
with radius of convergence $r = 1/e$. The point $x = 0$ is a removable singularity of the function $W_0(x)/x$. It follows from (10) that the Maclaurin expansion of the function
$$E(x) := e^{-W_0(x)} = \frac{W_0(x)}{x}, \qquad (11)$$
i.e.,
$$E(x) = \sum_{n=0}^{\infty} \frac{(-n-1)^{n}}{(n+1)!}\, x^{n}, \qquad (12)$$
has the same radius of convergence $r = 1/e$. The function $E(x)$ is infinitely differentiable on $(-1/e, 1/e)$. Moreover, applying the Lagrange inversion theorem (Lagrange–Bürmann formula), we obtain
$$e^{-r W_0(x)} = \sum_{n=0}^{\infty} \frac{r(r+n)^{n-1}}{n!}\, (-x)^n, \quad |x| < 1/e. \qquad (13)$$
Differentiating the defining equation (8), we conclude that all branches of $W(z)$ satisfy the differential equation
$$W'(z) = \frac{W(z)}{z\,(1 + W(z))}. \qquad (14)$$
Let $\lambda$ be a complex number. In determining the asymptotic properties of the exponential function $\exp(\lambda x)$, where $x \in \mathbb{R}$, the real part of $\lambda$ often plays the principal role because the asymptotic behavior differs for $\operatorname{Re}\lambda > 0$ and $\operatorname{Re}\lambda < 0$, and the domains where the real part is positive or negative are separated in the complex plane by the set of points where $\operatorname{Re}\lambda = 0$. In the definition of the Lambert function by (8), the behavior of the exponential function plays an important role as well.
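The series (10) can be checked numerically against a direct root of $we^w = x$. This is an illustrative sketch (not from the paper; the Newton reference solver, the truncation at 60 terms, and the sample point $x = 0.2$ are ad hoc choices):

```python
import math

def w0_series(x, n_terms=60):
    """Partial sum of the Maclaurin series W_0(x) = sum_{n>=1} (-n)^(n-1) x^n / n!,
    convergent for |x| < 1/e."""
    return sum((-n) ** (n - 1) * x ** n / math.factorial(n)
               for n in range(1, n_terms + 1))

def w0_newton(x):
    # Reference value via Newton's method on w*e^w - x = 0.
    w = 0.0
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1))
    return w

x = 0.2
err = abs(w0_series(x) - w0_newton(x))
```

Since $0.2 < 1/e \approx 0.368$, the tail of the truncated series is negligible and the two values agree to machine precision.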
Define the set of complex numbers $z$ such that $\operatorname{Re} W(z) = 0$. Writing $z = x + iy$ and $W(z) = u + iv$ with $u = 0$, from (8) we get
$$x = -v \sin v, \qquad (15)$$
$$y = v \cos v, \qquad (16)$$
where $v \in \mathbb{R}$. Analyzing the part of this curve corresponding to the principal branch $W_0(x + iy)$, we conclude that (15), (16) is a simple closed curve for the admissible range $v \in [-\pi/2, \pi/2]$. This curve is depicted in Figure 1. From (15), (16), it is easy to deduce that the real part of the principal branch of the Lambert function is negative for $z$ lying inside this curve; this domain is bounded by the above curve (see Figure 1). Note that the Lambert W function cannot be expressed in terms of elementary functions. For more details, see [3].

1.3. Limits with the principal branch $W_0$ of the Lambert function. Let $k$ be a nonnegative integer. Define a polynomial
$$P_k(x) := \sum_{j=0}^{k} \frac{(k-j+1)^j}{j!}\, x^j. \qquad (18)$$
Then, the formula
$$e^{Bk\tau}_\tau = \sum_{j=0}^{k} B^j \tau^j\, \frac{(k-j+1)^j}{j!} = P_k(B\tau), \qquad (19)$$
where $B^0 = I$, expressing the values of the delayed matrix exponential at the nodes $t = k\tau$, $k = 0, 1, 2, \dots$, holds and can be simply verified using the definition of the delayed matrix exponential.
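The node-value formula can be cross-checked in the scalar case by the method of steps itself: integrating $y'(t) = b\,y(t-\tau)$, $y \equiv 1$ on $[-\tau, 0]$, exactly with polynomial coefficients and comparing $y(k\tau)$ with $P_k(b\tau)$. This is an illustrative sketch (not from the paper; the parameter values $b = 0.3$, $\tau = 0.7$ are arbitrary):

```python
import math

def P(k, x):
    """P_k(x) = sum_{j=0}^{k} (k-j+1)^j x^j / j!  (formula (18))."""
    return sum((k - j + 1) ** j * x ** j / math.factorial(j)
               for j in range(k + 1))

def node_values_by_steps(b, tau, kmax):
    """Solve y'(t) = b*y(t - tau), y == 1 on [-tau, 0], exactly by the
    method of steps.  p holds the coefficients of y(k*tau + u) in u on [0, tau];
    returns the node values y(0), y(tau), ..., y(kmax*tau)."""
    p = [1.0, b]            # on [0, tau]: y = 1 + b*t since the history is 1
    nodes = [1.0]           # y(0) = 1
    for _ in range(kmax):
        y_end = sum(c * tau ** i for i, c in enumerate(p))   # value at right node
        nodes.append(y_end)
        # next interval: y((k+1)tau + u) = y_end + b * int_0^u y(k*tau + v) dv
        p = [y_end] + [b * c / (i + 1) for i, c in enumerate(p)]
    return nodes

b, tau = 0.3, 0.7
nodes = node_values_by_steps(b, tau, 6)
max_err = max(abs(nodes[k] - P(k, b * tau)) for k in range(7))
```

The exact step-by-step integration and the closed-form node values $P_k(b\tau)$ coincide up to rounding, which is the content of formula (19) for $n = 1$.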

Figure 1. The curve (15), (16) in the $(\operatorname{Re} z, \operatorname{Im} z)$-plane, separating the domains where $\operatorname{Re} W_0(z) < 0$ (inside) and $\operatorname{Re} W_0(z) > 0$ (outside).

Let $x$, $\alpha$ and $\beta$ be real numbers and let $n$ be a positive integer. The following is the well-known Abel extension of the binomial theorem (see, e.g., [1]):
$$\sum_{k=0}^{n} \binom{n}{k}\, x\, (x + k\alpha)^{k-1} (\beta - k\alpha)^{n-k} = (x + \beta)^n,$$
which, for $\alpha = 0$, can be rewritten as the binomial theorem
$$\sum_{k=0}^{n} \binom{n}{k}\, x^{k} \beta^{n-k} = (x + \beta)^n,$$
and will be used in the computations below.
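Abel's extension of the binomial theorem can be sanity-checked numerically. This is an illustrative sketch with arbitrary values; note the convention that the $k = 0$ term reads $x \cdot x^{-1}\beta^n = \beta^n$ (valid for $x \ne 0$):

```python
from math import comb

def abel_sum(x, beta, alpha, n):
    """Left-hand side of Abel's identity:
       sum_{k=0}^n C(n,k) * x * (x + k*alpha)^(k-1) * (beta - k*alpha)^(n-k),
    which equals (x + beta)^n (assuming x != 0 and x + k*alpha != 0)."""
    return sum(comb(n, k) * x * (x + k * alpha) ** (k - 1)
               * (beta - k * alpha) ** (n - k) for k in range(n + 1))

x, beta, alpha, n = 1.5, 2.0, 0.3, 6
lhs = abel_sum(x, beta, alpha, n)
rhs = (x + beta) ** n
binom_case = abel_sum(x, beta, 0.0, n)   # alpha = 0: the ordinary binomial theorem
```

Both the general sum and its $\alpha = 0$ specialization reproduce $(x+\beta)^n$ to rounding accuracy.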
Lemma 1.1. Let $x \in (-1/e, 1/e)$ be fixed. Then
$$\lim_{k\to\infty} \frac{P_k(x)}{P_{k+1}(x)} = e^{-W_0(x)} \qquad (21)$$
and, for $l \in \mathbb{N}$, the related limit relations (22), (23) hold.

Proof. We decompose the ratio into a Maclaurin power series with respect to $x$ and show that the sum of the first $k+1$ terms of this expansion, where $k \ge 0$, equals a polynomial of $k$-th degree (compare (12))
$$E_k(x) := \sum_{\ell=0}^{k} a_\ell x^\ell, \qquad a_\ell = \frac{(-\ell-1)^{\ell}}{(\ell+1)!}, \quad \ell = 0, 1, \dots, k,$$
so that $E_k(x)P_{k+1}(x) = P_k(x) + O(x^{k+1})$, where $O$ is the Landau "big $O$" symbol. We prove this by matching the coefficients at identical powers $x^n$, $n = 0, 1, \dots, k$ of the two polynomials $E_k(x)P_{k+1}(x)$ and $P_k(x)$. The coefficient at the power $x^n$ ($0 \le n \le k$) of the product $E_k(x)P_{k+1}(x)$ is the same as the coefficient at the power $x^n$ of the polynomial $P_k(x)$. Therefore, formula (26) holds with the indicated accuracy, and formula (21) follows by passing to the limit $k \to \infty$. Formula (22) is a consequence of (21), (11) and (9). Now we show that (23) holds. Without loss of generality, we assume $k > l$ in the sequel. Since power series are infinitely differentiable within their interval of convergence, the same matching argument applies to the derivatives.
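The limiting behavior stated in Lemma 1.1 can be observed numerically: for a fixed $x$ with $|x| < 1/e$, the ratios $P_k(x)/P_{k+1}(x)$ settle rapidly at $e^{-W_0(x)} = W_0(x)/x$. This is an illustrative sketch (not from the paper; the choices $x = 0.2$ and $k = 60$ are ad hoc):

```python
import math

def P(k, x):
    """P_k(x) = sum_{j=0}^{k} (k-j+1)^j x^j / j!  (formula (18))."""
    return sum((k - j + 1) ** j * x ** j / math.factorial(j)
               for j in range(k + 1))

def w0(x):
    # Principal branch of the Lambert function by Newton's method.
    w = 0.0
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1))
    return w

x, k = 0.2, 60
ratio = P(k, x) / P(k + 1, x)
limit = math.exp(-w0(x))
err = abs(ratio - limit)
identity_err = abs(limit - w0(x) / x)   # checks E(x) = e^{-W_0(x)} = W_0(x)/x
```

At $k = 60$ the ratio already agrees with $e^{-W_0(x)}$ to essentially machine precision, consistent with the geometric-progression behavior of the node values.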
Proof. Using (13) and (18), we can decompose $\exp(-kW_0(x))P_k(x)$ into a Maclaurin power series. In the following decomposition, the first $k+1$ terms are written out exactly.
For the limit of this product, we obtain the required relation. Now we put $r = -1$ in (13) and develop the Maclaurin power series of the resulting expression (below, the values at $x = 0$ are understood as limits for $x \to 0$). Using (9) and (14), we get the assertion.

2. Preliminaries. Let us recall that two $n \times n$ matrices $A$ and $B$ are called similar if $B = P_*^{-1} A P_*$ for some invertible $n \times n$ matrix $P_*$ (for the properties of matrices used in this part, we refer, e.g., to [4, Chapter V]). Let $s$ be a positive integer with $s \le n$.

The $s \times s$ matrix
$$J_{\lambda,s} := \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix},$$
where $\lambda$ is a complex number, is called a Jordan block. Any block diagonal matrix whose blocks are Jordan blocks is called a Jordan matrix, and any matrix $A$ is similar to an $n \times n$ Jordan matrix
$$J = \operatorname{diag}\bigl(J_{\lambda_1, m_1}, J_{\lambda_2, m_2}, \dots, J_{\lambda_N, m_N}\bigr), \qquad (34)$$
where, for positive integers $m_i$, $i = 1, 2, \dots, N$, we have $m_1 + m_2 + \dots + m_N = n$ and $\lambda_i$ are the eigenvalues of $A$ with multiplicities $m_i$. The Jordan matrix $J$ given by (34) is unique up to a permutation of its diagonal blocks. $J$ is called the Jordan normal form of $A$ and, for some suitable invertible $n \times n$ matrix $P$, we have $A = P^{-1} J P$. For an analytic function $f$ given by a series $f(x) = \sum_{j=0}^{\infty} a_j x^j$ with radius of convergence $r$, the matrix function $f(A) := \sum_{j=0}^{\infty} a_j A^j$ is defined whenever the spectral radius of $A$ is less than $r$; the matrices defined by such series, having the same radius of convergence $r$, satisfy the usual algebraic rules. Now we develop matrix analogies of the statements formulated in Lemma 1.1. Let $k$ be a nonnegative integer, $\lambda \in \mathbb{C}$, and $s$ a positive integer. For $k \ge s$, we define the $s \times s$ matrix $P_k(J_{\lambda,s})$, where the polynomial $P_k$ is given by formula (18). To avoid possible ambiguities in the following computations, we also define
$$P_k(J_{\lambda,s}) = \sum_{i=0}^{s-1} \frac{P_k^{(i)}(\lambda)}{i!}\, (J_{\lambda,s} - \lambda I)^i,$$
where $k$ and $\lambda$ are as above. In what follows, we do not consider zero points of the polynomial $P_k$, so we will assume $P_k(\lambda) \ne 0$. Thus, $P_k(J_{\lambda,s})$ is an invertible matrix.
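For a Jordan block, an analytic $f$ produces the familiar upper triangular structure with $f^{(j-i)}(\lambda)/(j-i)!$ at position $(i,j)$. A small pure-Python sketch (illustrative only, not from the paper) verifies this for $f = \exp$ and $s = 2$, where $\exp(J_{\lambda,2})$ must have $e^\lambda$ on the diagonal and $e^\lambda$ in the $(1,2)$ position:

```python
import math

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """exp(A) via the truncated power series sum_{j} A^j / j!."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]
    for j in range(1, terms):
        power = mat_mult(power, A)
        result = [[result[i][k] + power[i][k] / math.factorial(j)
                   for k in range(n)] for i in range(n)]
    return result

lam = 0.4
J = [[lam, 1.0], [0.0, lam]]          # Jordan block J_{lambda,2}
E = mat_exp(J)
expected = [[math.exp(lam), math.exp(lam)], [0.0, math.exp(lam)]]
err = max(abs(E[i][j] - expected[i][j]) for i in range(2) for j in range(2))
```

The series applied to the block reproduces $e^\lambda(I + N)$ exactly (here $N$ is the nilpotent part), the simplest instance of the triangular structure used throughout Part 2.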
To describe the result of the matrix product
$$\bigl(p^k_{ij}(J_{\lambda,s})\bigr)_{i,j=1}^{s} := P_k(J_{\lambda,s})\bigl(P_{k+1}(J_{\lambda,s})\bigr)^{-1}, \qquad (35)$$
we need to define some auxiliary determinants $M_k(\lambda, s)$. The meaning of $k$ and $\lambda$ remains the same. The integer $s$ in the following definition satisfies $s \in \mathbb{Z}$.
c) Let $i > j$. Then, the minor $M_{ij} = (m_{pq})_{p,q=1}^{s-1}$ is the determinant of a matrix with the following structure: its elements satisfy $m_{pq} = 0$ if α) $q = 1, \dots, j-1$ and $p > q$, or β) $p = i+1, \dots, s-1$ and $p > q$; and the elements $m_{pq}$ with $p, q = j, \dots, i-1$ generate a matrix with the determinant $M_{k+1}(\lambda, i-j)$. We get

Remark 2. In the sequel, we will need to express any minor $M_{ij}$ of the matrix $P_{k+1}(J_{\lambda,s})$ in terms of the determinants $M_{k+1}(\lambda, i-j)$. For every minor $M_{ij}$, $i, j = 1, \dots, s$, the same formula holds since, using Definition 2.1, the statements of Lemma 2.2 can be written in a unified form. Using Remark 2, we can express the cofactors $C_{ij}$, $i, j = 1, \dots, s$ of the matrix $P_{k+1}(J_{\lambda,s})$. Now we continue the computation of the matrix product (35). We can find the inverse matrix $\bigl(P_{k+1}(J_{\lambda,s})\bigr)^{-1}$ by the well-known procedure using the adjoint matrix, whose elements are defined through the cofactors $C_{ij}(J_{\lambda,s})$, $i, j = 1, \dots, s$, and using the obvious formula $\det P_{k+1}(J_{\lambda,s}) = \bigl(P_{k+1}(\lambda)\bigr)^{s}$.
We get the entries of the product. Because of the properties of the determinants $M_k$ (see Definition 2.1), the elements below the main diagonal vanish, and, for the remaining elements $p^k_{i,i+j}(J_{\lambda,s})$ with $j = 0, 1, \dots, s-i$, we get formula (36). Due to (36), where the index $i$ does not appear in the final formula, we can define
$$\hat p^{\,k}_{j}(J_{\lambda,s}) := p^k_{i,i+j}(J_{\lambda,s}) \qquad (37)$$
for any $i = 1, \dots, s$, $j = 0, 1, \dots, s-i$.

Compute now the $(1, l)$-cofactor of $M_{k+1}(\lambda, s)$. Applying the Laplace expansion of the determinant $M_{k+1}(\lambda, s)$ along the first row, we get

This equation can be rewritten in the form
Using (38) for $s \ge 1$, we can prove a recurrence relation between the elements of the matrix product $P_k(J_{\lambda,s})\bigl(P_{k+1}(J_{\lambda,s})\bigr)^{-1}$:

Lemma 2.3. For the elements $\hat p^{\,k}_{j}(J_{\lambda,s})$ of the product $P_k(J_{\lambda,s})\bigl(P_{k+1}(J_{\lambda,s})\bigr)^{-1}$, defined by (37), and an integer $1 \le l \le s-1$, we have (39).

Proof. Substitute (36) for $\hat p^{\,k}_{l-\ell}(J_{\lambda,s})$ in the right-hand side of (39). Now we rearrange the last sum $(*)$ and apply the identity (38) to get the statement of the lemma.

3. Main results. Based on the auxiliary results proved above, we can now prove the main results of the paper.

Proof. First we show that (40) holds if $B$ is replaced by a Jordan block $J_{\lambda,n}$. The limits of the elements $p^k_{ii}(J_{\lambda,n}) = \hat p^{\,k}_{0}(J_{\lambda,n})$, $i = 1, \dots, n$ of the product $P_k(J_{\lambda,n})\bigl(P_{k+1}(J_{\lambda,n})\bigr)^{-1}$, as follows from formula (36) (where $j = 0$) and from formula (21), are given by (41). Now, by induction, we prove that, for the limits of the other elements $\hat p^{\,k}_{l}(J_{\lambda,n}) = p^k_{i,i+l}(J_{\lambda,n})$, $l = 1, \dots, n-i$, formula (42) holds, i.e., for $k \to \infty$, the elements $\hat p^{\,k}_{l}(J_{\lambda,n})$ have the stated form up to a factor $1 + o(1)$, where $o$ is the Landau "small $o$" symbol. The assertion is proved for $l = 0$. Now we assume that it holds for the indices $0, \dots, l$ with $l < n - i$. We use formula (39) to express the element $\hat p^{\,k}_{l+1}(J_{\lambda,n})$. Applying (23), we obtain the limit of this expression and, consequently, formula (42) holds for $l+1$ as well. The remaining elements $p^k_{ij}(J_{\lambda,n})$ of the product $P_k(J_{\lambda,n})\bigl(P_{k+1}(J_{\lambda,n})\bigr)^{-1}$ with $i > j$ (below the main diagonal) are equal to zero.
Let $n > 1$. The radius of convergence of the Maclaurin series of the function under consideration is $r = 1/e$ (see formulas (10)-(12)). Since the inequalities $|\lambda_i|\tau < 1/e$, $i = 1, \dots, n$ imply $\rho(B\tau) < 1/e$, we can substitute $x \to B\tau$ into this Maclaurin expansion to obtain a convergent matrix series. Let $F(k) = \{f_{ij}(k)\}_{i,j=1}^{n}$ and $G(k) = \{g_{ij}(k)\}_{i,j=1}^{n}$ be matrices defined for all sufficiently large $k$. We say that $F(k)$ and $G(k)$ are asymptotically equivalent if their entries differ by $o(1)$, where $o(1)$ is the Landau "small $o$" symbol.
The following theorem gives results on the behavior of the spectral radius $\rho(\cdot)$ and the spectral norm $\|\cdot\|_\rho$ (defined for a matrix $A$ as $\|A\|_\rho = \bigl(\rho(AA^T)\bigr)^{1/2}$) of the sequence of values of the delayed exponential $e^{Bk\tau}_\tau$ for (discrete) $k \to \infty$ and of the delayed exponential $e^{Bt}_\tau$ for (continuous) $t \to \infty$.

Theorem 3.3. Let $\tau > 0$ and let an $n \times n$ constant matrix $B \ne \Theta$ be given. Assume that the eigenvalues $\lambda_i$, $i = 1, \dots, n$ of the matrix $B$ satisfy the inequality $\tau|\lambda_i| < 1/e$, $i = 1, \dots, n$. The following three statements are true: (i) If all the eigenvalues $\lambda_i$, $i = 1, \dots, n$ satisfy condition (49).

Proof. To prove this theorem, we use Remark 3. Figure 2 details the eigenvalue domain for each case considered.
(i) From (49), we conclude that, by (17), $\operatorname{Re} W_0(\lambda_i\tau) < 0$ for all the eigenvalues $\lambda_i$, $i = 1, \dots, n$. It is well known that the $n$ roots of a polynomial of degree $n$ depend continuously on its coefficients and that the eigenvalues of a matrix depend continuously on the matrix (we refer, e.g., to [9]).

Figure 2. Detailed eigenvalue domains.

(ii) From assumption (51), by (17), it follows that there exists at least one eigenvalue $\lambda_{i_0}$ such that $\operatorname{Re} W_0(\lambda_{i_0}\tau) > 0$. Therefore, in much the same way as in part (i), by (48), we also deduce $\limsup_{k\to\infty} \rho\bigl(e^{Bk\tau}_\tau\bigr) = \infty$.
Then the conclusion of part (ii) follows from the relation between the spectral radius and the spectral norm: $\rho(A) \le \|A\|_\rho$ for any matrix $A$.
Let $J$ be the Jordan canonical form of the square matrix $B$, i.e., there is an invertible matrix $P_*$ such that $B = P_*^{-1} J P_*$. Note that the delayed exponential satisfies $e^{Bt}_\tau = P_*^{-1} e^{Jt}_\tau P_*$ and, due to this fact, all the eigenvalues of $e^{Bt}_\tau$ are $e^{\lambda_i t}_\tau$, $i = 1, \dots, n$, where $\lambda_i$, $i = 1, \dots, n$ are the eigenvalues of $B$. Proceeding similarly to the scalar case, we conclude that (54) holds.

4. Applications. In this part, we make some suggestions for possible applications of the above results.
The above example can be generalized, e.g., to two showering persons. Suppose that hot and cold water are supplied in two separate pipes to a bathroom with two showers. Inside the bathroom, each pipe branches into two pipes leading to the shower mixers. A person taking a shower regulates the temperature of the water flowing from the mixer to the sprinkler. Due to the changes in the water pressure caused by the water being regulated by two persons simultaneously, there is a mutual dependence between the temperatures $T_1$ and $T_2$ of the water flowing from mixer one to sprinkler one and from mixer two to sprinkler two, respectively. Then, a simple model describing the behavior of two showering persons is
$$T_1'(t) = -\gamma_{11}\bigl(T_1(t-\tau) - T_{d1}\bigr) + \gamma_{12}\bigl(T_2(t-\tau) - T_{d2}\bigr), \qquad (62)$$
$$T_2'(t) = \gamma_{21}\bigl(T_1(t-\tau) - T_{d1}\bigr) - \gamma_{22}\bigl(T_2(t-\tau) - T_{d2}\bigr), \qquad (63)$$
where $\gamma_{ij} > 0$, $i, j = 1, 2$, and $T_{di}$, $i = 1, 2$ are the desired temperatures of the water agreeable to each of the two showering persons. Substituting $y_i(t) = T_i(t) - T_{di}$ in (62), (63), we get
$$y_1'(t) = -\gamma_{11}\, y_1(t-\tau) + \gamma_{12}\, y_2(t-\tau), \qquad (64)$$
$$y_2'(t) = \gamma_{21}\, y_1(t-\tau) - \gamma_{22}\, y_2(t-\tau). \qquad (65)$$
Assuming the water temperature before regulation is constant, i.e., the initial condition is given by the relation
$$y(t) = y_0, \quad -\tau \le t \le 0, \qquad (66)$$
the solution of (64)-(66) is
$$y(t) = e^{\Gamma t}_\tau\, y_0, \quad t \in [-\tau, \infty),$$
where $y_0 = (y_{01}, y_{02})^T$ and
$$\Gamma = \begin{pmatrix} -\gamma_{11} & \gamma_{12} \\ \gamma_{21} & -\gamma_{22} \end{pmatrix}.$$
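A quick simulation sketch of the two-shower model (not from the paper; the rate constants, delay, step size, and horizon are hypothetical choices, selected so that the eigenvalues $\lambda_i$ of $\Gamma$ satisfy $\tau\lambda_i \in (-1/e, 0)$) illustrates that the deviations $y_i(t) = T_i(t) - T_{di}$ decay, i.e., both temperatures approach the desired ones:

```python
# Forward-Euler simulation of y'(t) = Gamma * y(t - tau) for the
# two-shower model; with g11 = g22 = 0.5, g12 = g21 = 0.1 and tau = 0.5,
# the eigenvalues of Gamma are -0.4 and -0.6, so tau*lambda is in (-1/e, 0).
g11, g12, g21, g22 = 0.5, 0.1, 0.1, 0.5
tau, h, t_end = 0.5, 0.001, 40.0

n_delay = round(tau / h)
steps = round(t_end / h)
# History: y(t) = y0 on [-tau, 0] (constant initial temperature deviations).
y0 = [3.0, -2.0]
hist = [y0[:] for _ in range(n_delay + 1)]   # hist[-1] is the current state
for _ in range(steps):
    yd = hist[-(n_delay + 1)]                # delayed state y(t - tau)
    y = hist[-1]
    y_new = [y[0] + h * (-g11 * yd[0] + g12 * yd[1]),
             y[1] + h * ( g21 * yd[0] - g22 * yd[1])]
    hist.append(y_new)
    hist.pop(0)                              # keep a sliding tau-window

final = hist[-1]                             # deviations at t = t_end
```

With these (assumed) parameters the decay rate is governed by $W_0(\tau\lambda_i)/\tau < 0$, so by $t = 40$ the deviations are negligible.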

4.2. Instability of solutions. In this part, we give sufficient conditions for the instability of the system (1). In general, the instability of system (1) will be proved if, in every $\delta$-neighborhood of the zero initial function, there exists an initial function generating a solution that does not remain in a given $\varepsilon$-neighborhood of the zero solution.
In the proof of the following theorem, it is sufficient to restrict the set of initial functions to constant initial functions only.

Proof. We will employ the constant initial functions
$$\varphi_i(t) \equiv C_i, \quad t \in [-\tau, 0], \ i = 1, \dots, n.$$
The solution $y_i = y_i(t)$ generated by $\varphi_i(t)$ equals
$$y_i(t) = e^{Bt}_\tau C_i, \quad t \in [-\tau, \infty), \ i = 1, \dots, n.$$
Consider the matrix equation
$$Y'(t) = B\, Y(t-\tau),$$
where $Y(t)$ is an $n \times n$ matrix. Clearly, $Y(t) = e^{Bt}_\tau$ solves this equation, and its columns are solutions generated by constant initial functions. This property proves the instability of the system (1).
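The instability mechanism can be illustrated numerically in the scalar case (a sketch with hypothetical values, not from the paper): for $b > 0$ with $b\tau < 1/e$ the principal branch gives $W_0(b\tau) > 0$, so even a very small constant initial function produces an unbounded solution of $y'(t) = b\,y(t-\tau)$:

```python
import math

def delayed_exp(b, tau, t):
    # Scalar delayed exponential e^{bt}_tau (piecewise polynomial definition).
    if t < -tau:
        return 0.0
    k = math.ceil(t / tau)
    return sum(b**j * (t - (j - 1) * tau)**j / math.factorial(j)
               for j in range(k + 1))

b, tau = 0.5, 0.5          # b*tau = 0.25 < 1/e, and W_0(0.25) > 0
delta = 1e-6               # arbitrarily small constant initial function
y_0 = delta * delayed_exp(b, tau, 0.0)    # initial value: delta itself
y_60 = delta * delayed_exp(b, tau, 60.0)  # value after 120 delay intervals
```

Starting from a deviation of only $10^{-6}$, the solution grows far beyond any fixed $\varepsilon$-neighborhood of zero, as the instability argument requires.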

Remark 4. A similar result on instability can be derived for the system (4) if the following modifications are taken into account. Instead of the constant initial functions used in the proof of Theorem 4.1, initial functions solving the system $\varphi'(t) = A\varphi(t)$, $t \in [-\tau, 0]$, can be used. Then, formula (6) becomes
$$y(t) = e^{A(t+\tau)} e^{B_1 t}_\tau \varphi(-\tau), \quad t \in [-\tau, \infty),$$
where $B_1 = e^{-A\tau} B$. In addition, further assumptions on the matrix $A$ must be included for the statement on instability.