PSEUDOSPECTRAL REDUCTION TO COMPUTE LYAPUNOV EXPONENTS OF DELAY DIFFERENTIAL EQUATIONS
Dimitri Breda and Sara Della Schiava

Abstract. A recent pseudospectral collocation is used to reduce a nonlinear delay differential equation to a system of ordinary differential equations. Standard methods are then applied to compute Lyapunov exponents. The validity of this simple approach is shown experimentally. Matlab codes are also included.

1. Introduction. We consider the question of computing Lyapunov exponents of Delay Differential Equations (DDEs). Lyapunov exponents are defined through limits as time tends to infinity [1]. DDEs give rise to infinite-dimensional dynamical systems [16,23,29]. These two aspects make the problem particularly hard.
The literature does not lack material on the subject. On the one hand, most of the relevant papers are of experimental flavor: more focused on applications and results, less on the often coarse techniques they apply, and always restricted to specific, yet important, examples (see, e.g., [26,39,40]). On the other hand, extremely few papers address the question from a more numerical or computational point of view, and the methods proposed are usually difficult for nonexperts to replicate, while the relevant algorithms and codes are rarely available or user-friendly (see, e.g., [14,41]).
The present work aims at filling the gap between the need for off-the-shelf routines for those interested in the applications and the complexity of reliable, efficient and rigorous methods.
The approach we propose is based on reducing the DDE to a system of Ordinary Differential Equations (ODEs), a class for which much more is known and available concerning both the theory and the computation of Lyapunov exponents (start, e.g., from [20]). The reduction is based on the pseudospectral discretization recently introduced in [8]. As described in Section 2, the resulting system of ODEs is quite simple to obtain: a single equation depends on the original DDE, basically retaining the same right-hand side; the rest is linear and independent of the DDE. To compute the exponents of ODEs we choose, among others, the discrete QR technique. It is easy to understand, also with respect to the underlying concept of exponential divergence of infinitesimally close orbits. At the same time, it is not difficult to code efficiently, taking inspiration from [17].
Notwithstanding the simplicity of the combined methodology, we also furnish the relevant Matlab codes in Appendix A. To the best of our knowledge, they are the first publicly available codes for computing Lyapunov exponents of DDEs. We explain and use them in Section 4, where we illustrate some tests on three equations, deliberately kept quite simple. This way we keep both presentation and application at a basic level, focusing more on the validity of the method and the use of the codes. Possible technicalities due to generalizations could distract the reader from appreciating the straightforwardness of the proposed approach (suitable references to treat them are given, however).
We postpone additional comments and detailed references on the above and other interesting aspects of pseudospectral reduction and Lyapunov exponents to the following sections, where better placed. Some relate to convergence and approximation, others to more theoretical issues such as, e.g., the proper state space to work in. We just remark from [8] that the reduction to ODEs can be extended to more general classes of functional equations. Therefore, in principle, the methodology we propose has a potentially wide range of applications.
2. From DDEs to ODEs. Let us summarize from [8] the pseudospectral discretization to reduce DDEs to ODEs. We restrict to the scalar case, extension to systems being straightforward.

The basic idea behind the pseudospectral reduction is to collocate (4) at given nodes w.r.t. the variable θ ∈ [−τ, 0]. For M ≥ 1, let these nodes be the Chebyshev extrema, say −τ = θ_M < · · · < θ_1 < θ_0 = 0, with

    θ_i = (τ/2)(cos(iπ/M) − 1),   i = 0, 1, . . . , M.

For i = 0, 1, . . . , M, t ≥ 0 and u(t) ∈ D(A), denote by u_i(t) the approximation of u(t)(θ_i) obtained as follows. Let U(t) be the Lagrange interpolation polynomial

    U(t)(θ) = Σ_{j=0}^{M} ℓ_j(θ) u_j(t),

where ℓ_0, ℓ_1, . . . , ℓ_M constitute the Lagrange basis. Now require that U(t)(θ) satisfies (4) at all the nodes but for θ_0 = 0, where the boundary condition in (3) is instead forced. Then, setting u_i′(t) equal to the θ-derivative of U(t) at θ_i for i = 1, . . . , M and to the right-hand side of the DDE for i = 0, we eventually get the system of M + 1 ODEs

    u_0′(t) = f(U(t)),
    u_i′(t) = Σ_{j=0}^{M} d_{i,j} u_j(t),   i = 1, . . . , M,        (5)

with initial conditions u_i(0) = ϕ(θ_i), i = 0, 1, . . . , M. Above, d_{i,j} := ℓ_j′(θ_i) are the entries of the Chebyshev differentiation matrix; see, e.g., [42] for their explicit expression. Notice that only the first ODE in (5) depends on the original DDE, preserving the nonlinearity. All the others are linear, due to the linearity of both interpolation and differentiation. They are also independent of the DDE if one scales time to get τ = 1. This makes the reduction simple and particularly attractive from the point of view of implementation. For a detailed discussion of this advantage and other aspects see [8]. Again, we just remark that, in principle, the same approach can be extended to reduce also other general classes of functional equations, e.g., neutral and retarded-advanced as well as renewal and coupled renewal-retarded ones [8,9,11].
Now, let us underline that when f is linear and autonomous, so is (5). Then the matrix characterizing the right-hand side corresponds to the discretization of the infinitesimal generator first proposed in [10] to address the local stability of equilibria; see [13] for an updated exposition. Otherwise, when f is nonlinear, it is immediate to verify that linearization around a given solution x* and pseudospectral discretization commute [8, Theorem 2.5]. The result is the linear, in general nonautonomous, system of ODEs

    u′(t) = A_M(t)u(t)        (6)

for u := (u_0, u_1, . . . , u_M)^T, where the first row of A_M(t) comes from the linearized right-hand side applied to the Lagrange basis, the remaining rows are those of the Chebyshev differentiation matrix, and Df(x) is the Fréchet derivative of f at x ∈ X. Time-dependence and, again, the original DDE affect only the first row.
Let us conclude this section with a glimpse of the literature. The pseudospectral technique recalled above can be recast as the collocation in θ of the hyperbolic Partial Differential Equation (PDE)

    ∂u/∂t (t, θ) = ∂u/∂θ (t, θ).        (7)

Spectral and pseudospectral methods for PDEs have been extensively treated in, e.g., [27,28]. (7) is equivalent to both (2) and (4) through u(t)(θ) = x(t + θ); see, e.g., [23,29]. This equivalence was originally exploited in [3,35] to reduce DDEs with a single constant delay to systems of ODEs by Runge-Kutta methods, for the purpose of numerical time-integration. Other reductions, similar in scope, can be traced back to [30] for the use of spline functions and to [2,31] concerning splines again and Galerkin-type projections, respectively. Except for the last two, all these methods rely on the classic state space X of continuous functions; [2,31], instead, move to a Hilbert space for the numerical aspects, see also the interesting comments in [31, p. 148]. More on these different settings is left to Section 3.
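To make the reduction concrete, the construction above can be sketched in a few lines. The following Python fragment is an illustrative analogue of the Matlab codes in Appendix A (the names cheb and reduce_dde are ours, not the paper's); it builds the Chebyshev differentiation matrix and assembles the right-hand side of the reduced system for a DDE of the form x′(t) = f(t, x(t), x(t − τ)):

```python
import numpy as np

def cheb(M):
    # Chebyshev extremal nodes on [-1, 1] (decreasing) and the associated
    # differentiation matrix, following the classical construction.
    if M == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(M + 1) / M)
    c = np.array([2.0] + [1.0] * (M - 1) + [2.0]) * (-1.0) ** np.arange(M + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(M + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via the row-sum trick
    return D, x

def reduce_dde(f, tau, M):
    # Right-hand side of the M + 1 reduced ODEs: only the first equation
    # sees the DDE (u_0 tracks x(t), u_M tracks x(t - tau)); the rest is
    # the linear collocation of the derivative w.r.t. theta.
    D, x = cheb(M)
    D_theta = (2.0 / tau) * D            # rescale from [-1, 1] to [-tau, 0]
    theta = (tau / 2.0) * (x - 1.0)      # theta_0 = 0, ..., theta_M = -tau
    def rhs(t, u):
        du = D_theta @ u
        du[0] = f(t, u[0], u[-1])        # boundary condition: the DDE itself
        return du
    return rhs, theta
```

Initial conditions come from sampling the initial function, u_i(0) = ϕ(θ_i). Since the differentiation matrix is exact on polynomials of degree up to M, the assembly is easy to sanity-check.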
3. Lyapunov exponents and the discrete QR method. There is an extensive literature on the computation of Lyapunov exponents of DDEs, mostly of an experimental flavor. We refer the reader to the introduction of [14] as a starting point, to [7] for a minimal theoretical basis and to [26] for what is perhaps the most frequently cited paper on the subject. Avoiding repetitions, we could summarize by saying that, until very recently, [14] represented, to the best of our knowledge, the only method with a rigorous and complete proof of convergence. Potentially, this is no longer true, since the authors of [15] propose a modern reduction of DDEs to ODEs by Galerkin-type projections via so-called Koornwinder orthogonal polynomials [32]. They show the reliability of the technique in approximating strange attractors and related statistical features. Therefore, we believe that computing the Lyapunov exponents by applying standard tools for ODEs to their reduced system is as reasonable as the method we presently investigate, although they do not mention such an approach.
The main difference is that we keep on working in the classic Banach state space of continuous functions, where DDEs are naturally posed, instead of the Hilbert setting required in [15] to legitimize orthogonal projections. Indeed, the same setting is required also in [14]; more comments are given towards the end of the section. For now we prefer instead to briefly account for the available methods for computing Lyapunov exponents of ODEs, since both [15] and the collocation presented in Section 2 fall back on ODEs.
The literature on Lyapunov exponents of DDEs is ample; that on ODEs is far more so. To avoid getting lost, beyond the original thesis of Lyapunov [33], the most complete modern reference from a theoretical point of view is perhaps [1]. One usually considers linear nonautonomous systems, resorting to linearization around a reference trajectory in the attractor in the nonlinear case. Lyapunov exponents can be recovered from a limit as t → +∞ of the diagonal elements of the matrix function at the right-hand side, after the system has been reduced to triangular form through orthogonalization. From a computational point of view, due to stability considerations, this orthogonalization has to be performed along a sequence of not-too-large time steps. In so doing one prevents the alignment along the direction of highest growth (i.e., largest exponent). This procedure is at the basis of the most used class of computational techniques, namely QR methods, first discussed in the pioneering works [5,6]. Here we summarize from [17] the discrete version of the QR method for computing the Lyapunov exponents of

    x′(t) = A(t)x(t),        (8)

where A : [0, +∞) → R^{n×n} is continuous and bounded. In the following, the QR factorization of a nonsingular matrix is intended as the unique one with positive diagonal elements. For a starting reference on theory and computation of Lyapunov exponents of ODEs see, however, [20].
Before going on, we remark that the simple key of this work consists in replacing A in (8) with A_M given in (6).
Let X(t) be the fundamental matrix solution of (8) exiting from a given nonsingular matrix X_0 prescribed, without loss of generality, at time 0, i.e.,

    X′(t) = A(t)X(t),   t ≥ 0,   X(0) = X_0.        (9)

Choose any strictly increasing sequence of time instants {t_k} with t_0 = 0 and construct the iterative QR factorization as follows. Starting from X_0 = Q_0R_0, at each step j = 1, . . . , k solve

    Z_j′(t) = A(t)Z_j(t),   t ∈ [t_{j−1}, t_j],   Z_j(t_{j−1}) = Q_{j−1},        (10)

and factorize the solution at t_j as

    Z_j(t_j) = Q_jR_{j,j−1}.        (11)

If S(t, s) = X(t)X(s)^{−1} is the state transition matrix, then

    Z_j(t_j) = S(t_j, t_{j−1})Q_{j−1}.

Uniqueness of the QR factorization and (9) give

    S(t_j, t_{j−1})Q_{j−1} = Q_jR_{j,j−1}.        (12)

Eventually, (upper) Lyapunov exponents are recovered as

    λ_i = lim sup_{k→+∞} (1/t_k) Σ_{j=1}^{k} log [R_{j,j−1}]_{i,i},   i = 1, . . . , n,        (13)

where [R_{j,j−1}]_{i,i} stands for the i-th diagonal entry of the j-th triangular factor R_{j,j−1}. Lower exponents come either as lim inf or as upper exponents of the adjoint system. Summarizing, each step requires the solution of the n initial value problems (10) and the QR factorization (11).
For comparison, let us comment on the method in [14], which is based on lifting (12) to infinite dimension. Of course, for a QR factorization to make sense in infinite dimension, the state space must be changed from the Banach space X of continuous functions with uniform norm to the Hilbert space H := R × L^2([−τ, 0); R) with the metric induced by the proper inner product. See [7,16] for a discussion of this setting and the former for the definition of the relevant Lyapunov exponents. The work [14] proposes to discretize the evolution family S(t, s) in H via generalized Fourier projection, going back to finite dimension with no need to solve the initial value problems (10), (11) being replaced by

    S̃(t_j, t_{j−1})Q_{j−1} = Q_jR_{j,j−1}

for S̃ approximating S. A rigorous convergence analysis is also provided there. The key fact is that one does not usually compute state transition matrices for ODEs (but rather solves the relevant initial value problems), while for DDEs an efficient way to approximate evolution families was already available in [12], for the sake of stability of periodic problems.
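For the autonomous case the step problems (10) admit the exact solution through the matrix exponential, which allows a particularly compact sketch of the discrete QR iteration. The following Python fragment uses illustrative names of our own (the actual dqr code, by contrast, selects the steps t_j − t_{j−1} adaptively):

```python
import numpy as np
from scipy.linalg import expm

def pos_qr(Z):
    # QR factorization normalized so that R has a positive diagonal,
    # enforcing the uniqueness required by the method.
    Q, R = np.linalg.qr(Z)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0
    return Q * s, s[:, None] * R

def dqr_autonomous(A, h, K):
    # Discrete QR for x' = Ax with constant steps of length h: propagate
    # the orthogonal factor and accumulate the logs of the diagonal
    # entries of the triangular factors, cf. (10), (11) and (13).
    n = A.shape[0]
    P = expm(A * h)                      # exact one-step transition matrix
    Q = np.eye(n)
    acc = np.zeros(n)
    for _ in range(K):
        Q, R = pos_qr(P @ Q)
        acc += np.log(np.diag(R))
    return acc / (K * h)                 # Lyapunov exponent approximations
```

For a triangular A the iteration reproduces the diagonal entries exactly; for a nonautonomous A the matrix exponential is replaced by a numerical solution of (10).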
The approach in [14] was indeed inspired by the latter. The pseudospectral collocation technique in [12], instead, is not appropriate in the context of Lyapunov exponents, since it is based on X, where orthogonal factorization makes no sense. This is why here we go the other way around, i.e., we first reduce to finite dimension, which allows us to change from the uniform to the Euclidean metric with the proper scalar product. Summarizing, one can choose between two strategies: lifting finite-dimensional methods to solve infinite-dimensional problems, or reducing infinite-dimensional problems and solving them by finite-dimensional methods. While [14] can be ascribed to the first class, the method proposed here belongs to the second. As such, it represents the natural attempt to address the study of chaotic dynamics by following the originating motivation of [8], there focused on equilibria and their bifurcations, and later applied to periodic orbits of renewal equations in [9]. We conclude by observing that most of the computational techniques proposed in the literature for computing exponents of DDEs belong to the second category, see, e.g., [26,40].

4. Implementation and results. Three Matlab codes are listed in Appendix A:
• dqr, implementing the discrete QR method for computing Lyapunov exponents of linear nonautonomous ODEs like (8), as described in Section 3 and based on [17];
• solveDDE_MG, implementing the time-integration of nonlinear DDEs like (1), based on [38];
• lrhs_MG, implementing the construction of the linear(ized) nonautonomous matrix (6), based on Section 2 and [8].
The first code is a Matlab version of the Fortran 77 routine LESLIS, freely available at [22]. Like the original one, it solves the initial value problems (10) by using the celebrated DOPRI54 embedded Runge-Kutta pair [24] to keep the error on the exponents below a given tolerance by automatic selection of the stepsizes t_j − t_{j−1}. The QR factorization is obtained via the Matlab built-in qr; just notice that lines 39 and 57 are a crude, yet trivial, way to realize the required uniqueness. For more on computational aspects of orthogonal factors in this context see [21]. The limit (13) is truncated at a given large time specified in input, say T. The latter two codes are written for the specific example of the Mackey-Glass equation, described below. The first of them simply resorts to the Matlab built-in dde23. The second follows Section 2 and uses the auxiliary routine difmat, taken from [42], to construct the Chebyshev differentiation matrix. We believe that the first code is useful for those accustomed to the Matlab environment rather than to lower-level frameworks. The other two may be of help to those who wish to repeat the following numerical experiments or test their own problems. It is enough, indeed, to alter the auxiliary function rhs in solveDDE_MG and lines 25-26 in lrhs_MG, besides straightforward modifications of the input parameters. Results are obtained by computing a reference trajectory x* for a given choice of parameters with solveDDE_MG, constructing the matrix (6) with lrhs_MG and eventually approximating the Lyapunov exponents with dqr. An instance of an explicit call is given next.
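The same pipeline (build the reduced matrix, then run the discrete QR iteration) can be illustrated end-to-end in Python for a linear autonomous test equation, for which no reference trajectory is needed. The sketch below, with names of our own, uses x′(t) = e·x(t − 1), whose rightmost characteristic root is exactly 1 (since λ = e·e^{−λ} holds at λ = 1):

```python
import numpy as np
from scipy.linalg import expm

# Reduced matrix A_M for x'(t) = e * x(t - 1): first row from the DDE,
# remaining rows from the Chebyshev differentiation matrix on [-1, 0].
M, tau = 10, 1.0
x = np.cos(np.pi * np.arange(M + 1) / M)
c = np.array([2.0] + [1.0] * (M - 1) + [2.0]) * (-1.0) ** np.arange(M + 1)
D = np.outer(c, 1.0 / c) / ((x[:, None] - x[None, :]) + np.eye(M + 1))
D -= np.diag(D.sum(axis=1))
A_M = (2.0 / tau) * D
A_M[0, :] = 0.0
A_M[0, M] = np.e                        # u_0'(t) = e * u_M(t)

# Discrete QR with constant steps of length h up to T = K*h; for the
# autonomous case the one-step transition matrix is a matrix exponential.
h, K = 0.05, 4000
P = expm(A_M * h)
Q = np.eye(M + 1)
acc = np.zeros(M + 1)
for _ in range(K):
    Q, R = np.linalg.qr(P @ Q)
    s = np.sign(np.diag(R)); s[s == 0] = 1.0
    Q, R = Q * s, s[:, None] * R        # positive diagonal for uniqueness
    acc += np.log(np.diag(R))
lyap = np.sort(acc / (K * h))[::-1]     # leading entry close to 1
```

The accuracy of the leading entry is limited by the truncation time T = K·h, in line with the discussion of the truncated limit (13) below.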
The remainder of the section is devoted to running the proposed methodology on three test DDEs, two scalar ones and a system, all with a single delay. We have no reason to doubt that the method would work properly also for more general equations and systems with either multiple or distributed delays. For extensions of the pseudospectral reduction in these directions it is enough to consider the linear case, for which we refer to [13], also for an efficient piecewise approach. Moreover, we have already mentioned in the Introduction and in Section 2 the possibility of extending the reduction to other classes of functional equations. In this sense, a first successful attempt to compute Lyapunov exponents of renewal equations is reported in [9].
Here we choose to sacrifice extensions and generality in favor of simplicity. We focus on demonstrating the validity of the approach as a manageable strategy to obtain Lyapunov exponents, unveiling some of its features experimentally. A rigorous proof of convergence of the computed exponents to the exact ones is not available, and beyond the reach of the present work. In fact, even the convergence of the approximating system of ODEs to the original DDE is still a subject of current research. More comments can be found in [8], especially w.r.t. the prominent role of the Chebyshev differentiation matrix D_M and the stability of its time exponential exp(D_M t) as M → ∞ (see also [28] in this respect).
The first example is the linear autonomous DDE (14). Taken from [14], it is invoked given the general lack of DDEs with exactly known exponents. For (14), indeed, the exponents coincide with the real parts of the characteristic roots, and the recent literature abounds with efficient and reliable methods to approximate the latter. Moreover, it is not difficult to show that the rightmost root of (14) is 1. In Figure 1 (left) we plot in C the first five characteristic roots (×, with nonnegative imaginary part) computed to machine precision with the code eigAM accompanying [13], together with the corresponding Lyapunov exponents (along the real axis) approximated with the proposed technique (•) and with the method in [14] (•), both for M = 20. The truncation time was set to T = 10^5, while dqr was modified to use the exact solution of (10) through the matrix exponential in order to avoid useless computations (recall that (14) is autonomous). In Figure 1 (right) we plot the absolute error of the largest exponent w.r.t. the leading real part 1 (leftmost curves, solid • for the current method, dashed • for the method in [14]). We also plot the absolute errors of the remaining exponents w.r.t. the real parts of the relevant characteristic roots, assumed as reference values. The spectrally accurate behavior (see, e.g., [42]) typical of pseudospectral methods applied to smooth functions emerges clearly. Notice, moreover, how the error curves move to the right as the exponents decrease, a sign that the error constants increase, in general, proportionally to |λ| in both cases. Eventually, the method in [14] seems to give lower errors in general (except for the leading exponent at larger values of M). This behavior is not surprising: the method in [14] is developed specifically for DDEs, while the current one is more general w.r.t. the class of equations it can tackle. This greater generality (and the consequent simplicity) thus seems to trade off some accuracy.
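A validation of this kind is easy to reproduce even without the specific data of (14). For instance, for the linear autonomous DDE x′(t) = −(π/2)·x(t − 1) the rightmost characteristic roots are ±iπ/2 (the boundary case of the classical Hayes stability condition), so the leading Lyapunov exponent is 0, and it can be read off the eigenvalues of the matrix of the reduced linear system. A self-contained Python sketch, with our own illustrative names rather than the paper's codes:

```python
import numpy as np

def reduction_matrix(alpha, beta, M, tau=1.0):
    # Matrix of the reduced linear system for x'(t) = alpha*x(t) + beta*x(t - tau):
    # first row from the DDE, remaining rows from the Chebyshev
    # differentiation matrix on [-tau, 0].
    x = np.cos(np.pi * np.arange(M + 1) / M)
    c = np.array([2.0] + [1.0] * (M - 1) + [2.0]) * (-1.0) ** np.arange(M + 1)
    D = np.outer(c, 1.0 / c) / ((x[:, None] - x[None, :]) + np.eye(M + 1))
    D -= np.diag(D.sum(axis=1))
    A = (2.0 / tau) * D                  # differentiation w.r.t. theta
    A[0, :] = 0.0
    A[0, 0], A[0, M] = alpha, beta       # u_0' = alpha*u_0 + beta*u_M
    return A

# x'(t) = -(pi/2) * x(t - 1): rightmost roots +/- i*pi/2, real part 0
A = reduction_matrix(0.0, -np.pi / 2, M=10)
leading = max(np.linalg.eigvals(A).real)
```

Already for M = 10 the leading real part matches the exact value 0 to several digits, in line with the spectral accuracy visible in Figure 1.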
A lower barrier for the error at about 10 −5 is also evident from Figure 1 (right). It is essentially due to the truncation of the limit (13) at a finite time T . Some insight is given by the results illustrated in Figure 2, where the error on the largest exponent is plotted against T for varying M = 10, 15, 20 (solid •). Similar data for M = 20 from Figure 3 of [14] are included for a direct comparison (dashed •), confirming the expected linear decay as discussed in the latter. For a deeper analysis of the consequences of this truncation we refer also to [18,19].
The second example is the celebrated Mackey-Glass equation [34]

    x′(t) = a x(t − τ)/(1 + x(t − τ)^c) − b x(t).        (15)

In Table 1 we collect the first six exponents for a = 0.2, b = 0.1, c = 10 and τ = 50, computed with M = 20 and compared with the values from [14,39]. Truncation time and tolerance in dqr were set to 10^5 and 10^{−6}, respectively. The agreement of the results is limited to a few digits, but this is not surprising in the context of Lyapunov exponents, given the role of the truncation of the limit already encountered above.

Table 1. First six exponents of (15) for a = 0.2, b = 0.1, c = 10 and τ = 50 computed with M = 20 (first column), from [14] (second column) and from [39] (third column); the reference solution corresponds to the initial function of constant value ϕ ≡ 2 in (2).
It is also interesting to validate the approach by plotting some attractors; see Figure 3 and compare with Figure 2 in [26]. The plane of projection has coordinates x(t − τ) and x(t), respectively approximated by u_M(t) and u_0(t), obtained by solving (5) with the Matlab built-in ode23s for M = 9: notice how 10 ODEs are sufficient to get a rather accurate representation. In this sense, the pseudospectral reduction could be a valid alternative to more established methods for initial value problems for DDEs [4].
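Readers who wish to reproduce plots of this kind without Matlab can integrate the reduced system with scipy. The following sketch assumes for the Mackey-Glass equation the usual form x′(t) = a·x(t − τ)/(1 + x(t − τ)^c) − b·x(t), with the parameter values of the text; the variable names are ours. The pair (u_M(t), u_0(t)) then provides the projection plane of Figure 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters as in the text: a = 0.2, b = 0.1, c = 10, tau = 50, M = 9.
a, b, c, tau, M = 0.2, 0.1, 10.0, 50.0, 9

# Chebyshev differentiation matrix on [-tau, 0] (classical construction).
x = np.cos(np.pi * np.arange(M + 1) / M)
w = np.array([2.0] + [1.0] * (M - 1) + [2.0]) * (-1.0) ** np.arange(M + 1)
D = np.outer(w, 1.0 / w) / ((x[:, None] - x[None, :]) + np.eye(M + 1))
D -= np.diag(D.sum(axis=1))
D_theta = (2.0 / tau) * D

def rhs(t, u):
    # u_0 approximates x(t), u_M approximates x(t - tau).
    du = D_theta @ u
    du[0] = a * u[-1] / (1.0 + u[-1] ** c) - b * u[0]
    return du

# Constant initial function phi = 2, sampled at the nodes.
sol = solve_ivp(rhs, (0.0, 300.0), 2.0 * np.ones(M + 1),
                rtol=1e-6, atol=1e-9, dense_output=True)
# The attractor projection plots (sol.y[M], sol.y[0]) after a transient.
```

Any standard nonstiff solver suffices here, since with M = 9 and τ = 50 the differentiation matrix entries remain moderate.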
To conclude with the Mackey-Glass equation, an example call to the Matlab codes follows, w.r.t. the data in Table 1. The literature on Lyapunov exponents for DDEs is not rich in quantitative results to compare with. Nevertheless, we include a last example to test the behavior of the pseudospectral reduction on systems of DDEs. In particular, we consider from [14] the two symmetrically coupled Rössler oscillators (16), taken originally from [37]. In Table 2 we compare the first three exponents of (16) with a = b = 0.1, c = 14 and ε = 0.5 and varying coupling delay τ, computed (on purpose) with a quite rough approximation obtained with the proposed method for M = 5, T = 10^3 and the usual tolerance 10^{−6} in dqr. Notice that the dimension of the approximating system of ODEs is n(M + 1) = 36, hence already not exactly small. The reference trajectory x* is computed by dde23 with starting time 0 and initial function ϕ in (2) equal to a constant (pseudo)random vector in R^6. The results are compared with those in Figure 6 of [14], computed with the method proposed therein with M = 20 and T = 10^4. The latter is basically much more accurate, but the size of the underlying matrix discretization is n(M + 1) = 126, almost 4 times that of the current method. The correspondence is nevertheless good, since signs and magnitudes of the exponents are preserved. The analogue of Figure 6 in [14] obtained with the current method would be almost indistinguishable by eye, or at least within the same range of variation of the original one in [37].
adaptable to other classes of functional equations and rather reliable (at least experimentally), although we are aware that a rigorous proof of convergence is not yet available, as already commented.