COMPARISON BETWEEN BOREL-PADÉ SUMMATION AND FACTORIAL SERIES, AS TIME INTEGRATION METHODS

Abstract. We compare the performance of two algorithms for computing the Borel sum of a time power series. The first one uses Padé approximants in Borel space, followed by a Laplace transform. The second is based on factorial series. These algorithms are incorporated in a numerical scheme for the time integration of differential equations.


1. Introduction. Often, the numerical resolution of an evolution problem is very time-consuming. This is the case, for example, in the simulation of turbulent fluid flows. The main reason is that the time step ∆t required by classical schemes (Euler, Runge-Kutta, ...) is subject to a restrictive stability or accuracy condition. As a consequence, ∆t is generally very small (on the order of milliseconds for fluid flows and femtoseconds for particle transport), while a long-time simulation (from a few minutes to many hours) is necessary for engineering applications.
In [38], Razafindralandy and Hamdouni proposed an algorithm which enables larger time steps in the resolution of differential equations. This algorithm is based on the Asymptotic Numerical Method, where the solution is sought as a power series in t (possibly with zero radius of convergence), followed by a Borel summation to enlarge the domain of validity of the series and, consequently, to speed up the resolution. For the heat equation, the time step was up to about 1000 times as large as the one required by the usual explicit Euler method. On a simple periodic problem, it was also shown that the Borel-summation-based algorithm is much faster, in terms of CPU time, than the explicit Euler method and the second- and fourth-order Runge-Kutta methods.
Beyond speed considerations, it was shown in [10] that, coupled with the Borel summation method, the Asymptotic Numerical Method provides a well-behaved numerical scheme with regard to invariance properties. Indeed, it competes with symplectic schemes in solving Hamiltonian systems, since it respects, at a high order, Liouville's theorem of volume conservation. Moreover, unlike the Runge-Kutta and explicit Euler methods, it is able to reproduce the iso-spectrality of the solution of a Lax pair problem.
Borel summation of a divergent but Gevrey series û consists in (a) computing the Borel transform Bû of û, which is a convergent series, (b) prolonging Bû in a domain containing a semi-line from the origin, and (c) taking the Laplace transform to go back to the physical space [36,41]. At the end, one gets the so-called Borel sum of û, which is a holomorphic function (in some domain) and is an actual solution of the equation. Note that the formal time series solutions of many equations arising in fluid mechanics (including the heat equation, Burgers' equation and the Navier-Stokes equations) are generally divergent but Gevrey series [26,28,27,8].
This summation technique was turned in [38,10] into a numerical algorithm, where the prolongation of the Borel-transformed series was realised with Padé approximants and the Laplace transformation with a Gauss-Laguerre quadrature. The strength, but also the weakness, of this three-stage algorithm lies in the Padé approximation. Indeed, on the one hand, Padé approximants are known to be an efficient tool for prolonging a convergent series outside its disk of convergence. They may also be useful for locating poles and, thereby, for choosing the direction of the Laplace transform. On the other hand, the standard algorithm of Padé approximation requires the resolution of a linear system which may not be invertible when the Padé table is not normal; a reduction is then needed. Moreover, this linear system is generally ill-conditioned and may lead to the appearance of spurious zero-pole pairs (Froissart doublets) [17,40].
To avoid the problems induced by the Padé approximants, the use of Padé-type approximants, where the denominator (and hence the poles) is prescribed, is proposed in [42]. Another method is to use a conformal mapping to extend the Borel-transformed series [21], but this method needs the knowledge of the closest pole of Bû. All these methods provide a (Laplace) integral representation of the Borel sum at the end of the algorithm.
The Borel sum can also be represented by a generalised factorial series (GFS) which is computed directly from the terms of the series, without passing through the Borel space. The advantage is that it avoids the three-stage computation, thereby reducing the sources of error. The introduction of GFS as asymptotic representations of power series dates back to the beginning of the last century [35,31,32,47]. More recently, the effective summation by GFS has been analysed numerically in [42,43,11,48] and applied to some generic (theoretical) problems where the formal series solution u is divergent.
In this article, we compare the Borel-Padé algorithm with one based on generalised factorial series for the numerical resolution of ODEs. The goal is to examine the robustness of these algorithms in a "blind" use.
In section 2, the Gevrey asymptotics theory and the Borel summation method are briefly recalled. The Borel-Padé algorithm is then illustrated. Its weak points are analysed in section 3 and an algorithm based on generalised factorial series is presented in section 4. The last two sections are devoted to numerical experiments.
2. Borel summation theory. Consider a meromorphic ordinary or partial differential equation (1) with a formal time power series solution

û(t) = Σ_{n≥0} u_n t^n. (2)

The coefficients u_n can be calculated by injecting (2) into (1).
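As an illustration of this coefficient recursion, here is a minimal sketch for Euler's equation studied in section 3.2, assuming its classical form t² u′ + u = t: injecting the series (2) and matching powers of t gives u_0 = 0, u_1 = 1 and u_k = −(k−1) u_{k−1} for k ≥ 2.

```python
# Coefficients of the formal solution of t^2 u' + u = t (assumed form of
# Euler's equation): matching powers of t gives (k-1) u_{k-1} + u_k = [k == 1].
def euler_series_coefficients(N):
    u = [0.0, 1.0]                     # u_0 = 0, u_1 = 1
    for k in range(2, N + 1):
        u.append(-(k - 1) * u[k - 1])  # u_k = -(k-1) u_{k-1} = (-1)^(k-1) (k-1)!
    return u

print(euler_series_coefficients(5))  # [0.0, 1.0, -1.0, 2.0, -6.0, 24.0]
```

The factorial growth of the coefficients makes the divergence of the series immediately visible.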
Series (2) may be divergent (in the sense that its radius of convergence is zero). However, we assume that û(t) is a Gevrey series of index one, meaning that its general term does not grow faster than factorials. More generally:

Definition 2.1. A formal power series û(t) = Σ_{n≥0} u_n t^n is called a Gevrey series of index I > 0 if there exist constants C, A ∈ R such that |u_n| < C A^n (n!)^I for all n ≥ 1.
Most series arising in engineering applications are divergent [19] but are Gevrey series of some index [37]. Knowing the terms of û(t), our goal is to find an approximate solution of (1). This approximate solution will be an analytic function in a sector, having series (2) as its Gevrey asymptotic expansion, as defined hereafter.
Definition 2.2. A function u(t), analytic in an open sector S, is Gevrey-asymptotic of order I to a power series û(t) = Σ_{n≥0} u_n t^n at the origin in S if, for any compact subsector T ⊂ S, there exist two constants C and A such that

|u(t) − Σ_{n=0}^{N−1} u_n t^n| ≤ C A^N (N!)^I |t|^N for all t ∈ T and all N.

As said, we limit ourselves to index-one Gevrey series (I = 1) in this article. The general theory of Gevrey asymptotics [36] ensures that, given a Gevrey series, one can find a function which has that series as its Gevrey expansion. The Borel summation technique is a method for effectively computing this function.
2.1. Borel summation. As introduced, the Borel summation uses the Borel transformation and Laplace transformation. Let us define these operations.
If û(t) is a Gevrey series, the factorial growth of its coefficients u_n is cancelled out by the Borel transformation, so that Bû(ξ) has a non-zero radius of convergence.
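The defining formula of the Borel transform is elided above; the convention consistent with the normalisation u_k = P^(k−1)(0) used in section 4 reads (a reconstruction under that assumption):

```latex
\mathcal{B}\hat{u}(\xi) \;=\; \sum_{k\geq 1} \frac{u_k}{(k-1)!}\,\xi^{k-1},
\qquad \text{for } \hat{u}(t) = \sum_{n\geq 0} u_n t^n .
```

Indeed, if |u_k| ≤ C A^k k! (Gevrey of index one), the coefficient of ξ^{k−1} in Bû is bounded by C k A^k, so that Bû has a radius of convergence of at least 1/A.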
Let d be a semi-line starting from the origin. We say that a function P(ξ), analytic in a domain containing d, has exponential decay at infinity in the direction d if there exist two constants A and C such that |P(ξ)| ≤ A e^{−C|ξ|} as |ξ| tends to infinity along d. We can now state the definition of the Laplace transformation which, when applied formally to a power series, is the inverse of the Borel transformation.
Definition 2.4. Consider a function P(ξ), analytic in a domain containing d and having exponential decay at infinity in the direction d. The Laplace transform of P(ξ) in the direction d is

L P(t) = ∫_d P(ξ) e^{−ξ/t} dξ.

L P(t) is analytic in a sector bisected by d.
We are now ready to state the definition of the Borel sum of a series.

Definition 2.5. A Gevrey series û(t) is Borel-summable in the direction d if:
• Bû(ξ) can be analytically prolonged into a function P(ξ) in a domain containing d, and
• P(ξ) has exponential decay at infinity in the direction d.
The Borel sum Sû(t) of û(t) in the direction d is then obtained by applying the Laplace transform to P(ξ).
The three operations involved in the Borel-Laplace summation method are summarised in Table 1.

Table 1. Borel-Laplace summation

The dependence of the Borel sum on the direction d is related to the Stokes phenomenon [22]. Since, for nonlinear problems, no available theory determines which direction is the most suitable, we consider only the case where d is the positive real axis.
Numerically, the series û(t) is determined up to an arbitrary order N. More precisely, û(t) is numerically represented by the truncated series

û_N(t) = Σ_{n=0}^{N} u_n t^n.

To approximate the Borel sum Sû(t), one of the most natural algorithms is the Borel-Padé algorithm.
3.1. Borel-Padé algorithm (BP). The Borel-Padé algorithm transforms, step by step, the Borel-Laplace summation method into an effective numerical procedure, as follows: (a) compute the approximate Borel transform Bû_N(ξ); (b) extend Bû_N(ξ) along the positive real axis into a rational function P_N(ξ) using Padé approximants; and (c) compute the Laplace transform using the Gauss-Laguerre quadrature method.
The algorithm is schematised in Table 2. The quadrature function in Table 2 is linked to P_N by the Gauss quadrature formula; the ξ_i are the roots of the N_G-th Laguerre polynomial and the ω_i are the corresponding weights.

Table 2. Borel-Padé summation algorithm (BP)

The Padé approximant P_N is determined by relation (10) [15,5,6], in which the degrees of the polynomials satisfy N_1 + N_2 = N − 1 and m is the highest order that can be obtained. Any choice of N_1 from 0 to N may give a suitable approximation of P(ξ) but, when nothing is known about this function, the most popular choice is the diagonal Padé approximant. This choice is supported by some convergence theorems and by conjectures drawn from numerical experience (see [2,39,24]). So, in the numerical tests, we take N_1 = N/2 or N_1 = (N + 1)/2 according to the parity of N. The Gauss-Laguerre method has been chosen for the evaluation of the Laplace transform since it is one of the most popular quadrature methods for semi-infinite integrals. It integrates exactly any polynomial of degree up to 2N_G − 1. Other methods could be considered, but this is beyond the scope of this article. One can cite, for example, the Fourier-Chebyshev scheme [4], which extends the Clenshaw-Curtis method, and Gauss-type quadratures which integrate rational functions exactly [23,14,12,13,46].
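The three steps (a)-(c) can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code: the least-squares Padé solver, the normalisation u_k = P^(k−1)(0) and the test on Euler's series are choices made here for concreteness.

```python
import numpy as np
from math import factorial

def pade(c, n1, n2):
    """Pade approximant [n1/n2] of a Taylor series with coefficients c
    (ascending powers). Least squares is used so that a degenerate Pade
    table does not make the linear system fail."""
    # Linearised conditions: sum_j q_j c_{k-j} = 0 for k = n1+1 .. n1+n2, q_0 = 1.
    A = np.array([[c[k - j] if 0 <= k - j < len(c) else 0.0
                   for j in range(1, n2 + 1)]
                  for k in range(n1 + 1, n1 + n2 + 1)])
    rhs = np.array([-c[k] for k in range(n1 + 1, n1 + n2 + 1)])
    q_tail = np.linalg.lstsq(A, rhs, rcond=None)[0]
    q = np.concatenate(([1.0], q_tail))
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n2) + 1))
                  for k in range(n1 + 1)])
    return p, q

def borel_pade(u, t, n1, n2, n_gauss=40):
    """One Borel-Pade summation of the truncated series sum_k u[k] t^k."""
    # (a) Borel transform: P(xi) = sum_{k>=1} u[k] xi^(k-1) / (k-1)!
    c = [u[k] / factorial(k - 1) for k in range(1, len(u))]
    # (b) prolongation of the Borel transform by a rational (Pade) approximant
    p, q = pade(c, n1, n2)
    P = lambda xi: np.polyval(p[::-1], xi) / np.polyval(q[::-1], xi)
    # (c) Laplace transform along R+ by Gauss-Laguerre quadrature:
    #     integral_0^inf P(xi) exp(-xi/t) dxi = t * integral_0^inf P(t x) e^-x dx
    x, w = np.polynomial.laguerre.laggauss(n_gauss)
    return u[0] + t * np.dot(w, P(t * x))

# Euler's series u_k = (-1)^(k-1) (k-1)!, k >= 1; its Borel transform is 1/(1+xi)
u = [0.0] + [(-1.0) ** (k - 1) * factorial(k - 1) for k in range(1, 11)]
print(borel_pade(u, 0.1, 3, 3))  # exact Borel sum is e^10 E_1(10) ~ 0.09156
```

For this series the Padé step is exact (the ratio reduces to 1/(1+ξ)), so the only remaining error is the quadrature error, as discussed in section 3.2.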
The principal weaknesses of, and sources of error in, the Borel-Padé algorithm are the integration error, the non-uniqueness of the Padé approximants (even when the degrees N_1 and N_2 are fixed) and the appearance of poles of the Padé approximants. We will analyse these points one after the other.

3.2. Error due to the numerical integration. Consider the fundamental example of Euler's equation

t² du/dt + u = t. (11)

The formal solution and its Borel transform are, respectively,

û(t) = Σ_{n≥0} (−1)^n n! t^{n+1} (12) and Bû(ξ) = Σ_{n≥0} (−1)^n ξ^n. (13)

Bû(ξ) can be naturally extended into the analytic function

P(ξ) = 1/(1 + ξ). (14)

Hence, the Borel sum of û(t) is

Sû(t) = ∫_0^∞ e^{−ξ/t}/(1 + ξ) dξ. (15)

In summary, from the divergent series (12), the summation method provides the solution (15) of equation (11), which is analytic in the complex half-plane where the real part of t is positive. The Borel-Padé algorithm is able to represent Bû(ξ) and P(ξ) exactly: as soon as N ≥ 2, P_N(ξ) = P(ξ). The only source of error is the numerical integration. Figure 1 compares the approximate solution to the analytic one (15) for N = 10. It shows the superiority of the BP solution over the truncated series approximation. It also shows the effect of the numerical integration as time grows. The integration error is inherent to the method and cannot be avoided. It should not, however, be considered a real weakness of the method, since it is simply a discretisation error.

3.3. Pole of the Padé approximants on R+. A more problematic situation is when û(t) is Borel-summable in the direction of the positive real line but the Padé representation of P(ξ) has a pole on this line. Consider, for example, an equation whose formal solution and Borel transform are, respectively,

û(t) = Σ_{n≥1} t^n and Bû(ξ) = Σ_{n≥0} ξ^n/n!.
The natural prolongation of Bû(ξ) admits a Laplace transform along d = R+, but its Padé approximant with N_1 = N_2 = 3, for instance, has a pole on d.
Poles on the positive real line may be ignored during the computation of the Laplace transform (provided, of course, that they do not coincide with any Gauss node). But, as noted in [9,25], this may lead to a loss of precision or to a slowdown.
Spurious poles may also appear due to numerical round-off [17,16,40], even when robust algorithms are used [29,3]. Fortunately, each such pole is paired with a nearby zero, so that the zero-pole doublet cancels out outside a small region and can be ignored.

3.4. Non-uniqueness of Padé approximants. A classical problem arising with Padé approximation is the non-uniqueness of the representation. To illustrate this situation, consider equation (20). It is straightforward to check that its formal solution is Borel-summable along d = R+, and its Borel transform admits a natural prolongation. However, definition (10) leads to a non-invertible system when, for instance, N_1 = N_2 = 2, and the Padé approximant cannot be computed. This is due to the existence of a common divisor in P_N. Techniques exist for circumventing this problem and choosing a unique representation, but at some extra computational cost [1,18,7]. In the next section, the generalised factorial series algorithm, which avoids the computation of a Padé approximant, is presented.

4. Generalised factorial series algorithm (GFS).
To show the link between factorial series and the Borel sum, let us formally rewrite the Laplace transform by means of the change of variable s = e^{−ξ}, the coefficients b_n coming from the Taylor expansion of P(−ln s) at s = 1. After integrations by parts, we get the inverse factorial representation (26). The inverse factorial series I(t) is unique, is absolutely convergent for Re(1/t) large enough [30,31,33,34,45] and converges effectively to Sû(t).
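The elided manipulation can be reconstructed as follows, assuming the Laplace kernel e^{−ξ/t} along R+: substituting s = e^{−ξ} and expanding P(−ln s) at s = 1,

```latex
\mathcal{L}P(t) \;=\; \int_0^{\infty} P(\xi)\,e^{-\xi/t}\,d\xi
 \;=\; \int_0^1 P(-\ln s)\,s^{1/t-1}\,ds,
\qquad P(-\ln s) \;=\; \sum_{n\geq 0} b_n (1-s)^n,

\int_0^1 (1-s)^n\,s^{\tau-1}\,ds \;=\; \frac{n!}{\tau(\tau+1)\cdots(\tau+n)},
\qquad \tau = \frac{1}{t},

I(t) \;=\; \sum_{n\geq 0} \frac{b_n\,n!}{\tau(\tau+1)\cdots(\tau+n)} .
```

The middle identity is the Beta integral (equivalent to the repeated integrations by parts mentioned above); the last display is the inverse factorial representation (26).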
Let I_N(t) denote the truncation of the factorial series (26) at order N. As shown in [11,26], I_N(t) approximates the Borel sum asymptotically, as follows.
Theorem 4.1. Consider a Borel-summable series û(t). Let A and C be the constants involved in the exponential decay of the prolongation P(ξ) of its Borel transform (see equation (6)) and T = 1/C. Then the inverse factorial series representation I(t) of Sû(t) satisfies the error estimate (27). With a suitable rotation, the sum in any other direction d can be recovered from (26).
In fact, Sû(t) can be represented by a more general convergent series, namely by a generalised factorial series (GFS) [44,41], built on any sequence of complex numbers (s_n) located on the semi-line which is the mirror image of d about the real axis. The coefficients b_n are linked to u_k = P^(k−1)(0) by the relation

b_n = (1/(s_1 ⋯ s_n)) Σ_{k=1}^{n+1} |S(n, k−1)| u_k, (30)

where the S(n, k) are the generalised Stirling numbers of the first kind.
The GFS converges for Re(1/t) > 1/T and s_n = n ω e^{−iθ}, for any ω > ω_0 and some ω_0 > 0. ω_0 can be linked to the location of the closest singularity of P in the complex Borel plane. Numerically, the convergence of the GFS becomes faster as ω gets close to ω_0 [41]. For linear equations, the closest singularity can be "read" from the coefficients of the equation and can be evaluated algorithmically. But for non-linear equations, the determination of ω_0 is harder.
Due to the explosive growth of the Stirling numbers, relation (30) should not be used as such if the truncation order N is very large. Instead, the recurrence relation on the Stirling numbers makes it possible to compute the first terms of (29) as follows (see [41]). Assume that θ = 0, set τ = 1/t and let v_{n+1} denote the n-th term of the GFS. Then v_{n+1} can be computed recursively, without explicitly using the Stirling numbers.

5. Numerical comparisons. In this section, we numerically compare the GFS algorithm with the Borel-Padé procedure. We take s_i = i.
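The recursive relations for v_{n+1} are elided above. As a minimal illustrative sketch, the following Python code instead builds the coefficients directly from the unsigned Stirling numbers via their recurrence, assuming the normalisation u_k = P^(k−1)(0) of relation (30) and the choice s_n = n (adequate for moderate truncation orders only):

```python
def gfs(u, t, N):
    """Truncated factorial series I_N(t) for u(t) = sum_k u[k] t^k, with
    s_n = n, so that s_1...s_n = n! and the b_n follow from relation (30)."""
    # Unsigned Stirling numbers of the first kind:
    # |S(n+1,k)| = n |S(n,k)| + |S(n,k-1)|
    stirling = [[1]]
    for n in range(N):
        prev = stirling[-1]
        row = [(n * prev[k] if k < len(prev) else 0) +
               (prev[k - 1] if 0 < k <= len(prev) else 0)
               for k in range(n + 2)]
        stirling.append(row)
    total = u[0]
    prod = 1.0  # running product (1 + 0t)(1 + 1t) ... (1 + nt)
    for n in range(N):
        if n > 0:
            prod *= 1.0 + n * t
        # b_n * n! = sum_{k=1}^{n+1} |S(n, k-1)| u[k]
        coeff = sum(stirling[n][k - 1] * u[k] for k in range(1, n + 2))
        total += coeff * t ** (n + 1) / prod
    return total

# Convergent test series u(t) = sum_{n>=1} t^n = t/(1-t): here every b_n = 1
N = 120
u = [0.0] + [1.0] * N
print(gfs(u, 0.5, N))  # approaches the exact value 0.5/(1 - 0.5) = 1
```

For this series the partial sums telescope to 1 − 1/(N+1) at t = 1/2, which illustrates the slow (algebraic) convergence of the factorial series when Re(1/t) is only moderately large.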

Discretisation errors. In Section 3.2, we took Euler's equation (11) to analyse the effect of the numerical discretisation on the Borel-Padé algorithm. We now apply the GFS algorithm to the same equation. As seen in Figure 2, the Borel-Padé solution is closer to the reference solution than the GFS one. We conclude that, if the quadrature of the Laplace integral is the only source of error, then BP is better than GFS. However, in realistic applications, there is only a slim chance that the Padé representation P is exact. The next example illustrates the effect of the Padé approximation error combined with the quadrature error.
Consider now an equation whose formal series solution (34) is convergent inside the unit disc. The Borel transform of (34) prolongs into the exponential function (36), and after the Laplace transformation we retrieve the exact solution. In this case, the Borel summation is theoretically exact. Numerically, with BP, say when N_1 = N_2 = 3, the function (36) is approximated by a Padé approximant which is analytic on the positive real axis. This representation leads to a good approximation of the solution for small values of t. But when t grows, Figure 3 shows that BP suffers from the effects of the numerical integration and of the truncation. GFS, for its part, gives the exact Borel sum. This example clearly shows the advantage, for some problems, of avoiding the Padé approximation and the Laplace transform in the numerical computation of the Borel sum of the solution.
In the next section, we examine the case where a pole appears in the Padé approximant.
Since Padé approximants are invariant under dilatation [15,6], it can easily be seen from (19) that, contrary to P(ξ), the rational approximation P_7 has a pole on R+ for any λ > 0. With λ = 8, Figure 4 shows that, for small values of t, the pole can simply be ignored. But for some values of t, it produces significant errors in BP. With GFS, the approximate solution does not present any anomalous behaviour.

Consider now equation (20) again, with N_1 = N_2 = 2. Theoretically, the Borel summation method gives the exact solution. However, as discussed previously, BP does not give any approximate solution without an additional optimisation. GFS, for its part, encounters no problem; the approximate solution is presented in Figure 5.

Figure 5. Equation (20)

The above tests show that many parameters have to be tuned for BP to be successful. On the other hand, GFS always gives an approximate solution, and is therefore more robust. How good this solution is depends on the approximation order N (and, of course, on the problem).
In the last section, some other numerical experiments are presented.
6. Other interesting tests. Consider the initial value problem (40). Figure 6 compares the approximate solutions obtained with BP and GFS. It shows that both algorithms reproduce the exact solution with relatively good precision for small values of t. Then, around t = 1, differences with the exact solution start to be visible, first with BP and then with GFS. A closer look at the error (Figure 7) shows that BP is more precise for small t, but around t = 0.4 the error curves intersect and GFS becomes better.

Figure 7. Absolute errors on equation (40) in log-log scale

Similar conclusions can be drawn with problem (41), whose exact solution

u(t) = tan(arctan(t) − 1) (42)

is correctly predicted by both algorithms, as seen in Figure 8 for small t. Figure 9 shows that, at the beginning, the error is smaller with BP than with GFS, but the tendency is reversed for larger t.
Lastly, consider the logistic equation [20], which models the dynamics of a population. The growth rate r is set to 1, and the harvesting rate h to 1/5. Figure 10 presents the approximate solutions with an initial population size u(0) = 3. It shows that the factorial series algorithm predicts a good approximate solution even for large values of t, contrary to BP.
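As a self-contained sketch of the series-generation step for this last problem, the following code assumes the common harvested-logistic form du/dt = r u(1 − u) − h (the precise form used above is elided) with r = 1, h = 1/5, u(0) = 3, and checks the recursively generated Taylor coefficients against a small-step Runge-Kutta reference at small t:

```python
# Assumed harvested logistic form: du/dt = r*u*(1 - u) - h
r, h, u0 = 1.0, 0.2, 3.0

# Time-series coefficients obtained by injecting u(t) = sum c_n t^n:
# (n+1) c_{n+1} = r (c_n - sum_{k=0}^{n} c_k c_{n-k}) - h [n == 0]
N = 15
c = [u0]
for n in range(N):
    conv = sum(c[k] * c[n - k] for k in range(n + 1))  # Cauchy product of u*u
    c.append((r * (c[n] - conv) - (h if n == 0 else 0.0)) / (n + 1))

def series(t):
    """Horner evaluation of the truncated power series."""
    s = 0.0
    for a in reversed(c):
        s = s * t + a
    return s

def rk4(t_end, dt=1e-3):
    """Classical fourth-order Runge-Kutta reference solution."""
    f = lambda u: r * u * (1.0 - u) - h
    u, t = u0, 0.0
    while t < t_end - 1e-12:
        k1 = f(u); k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
        u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return u

print(series(0.05), rk4(0.05))  # the two values agree closely for small t
```

Of course, for larger t the raw series loses validity (its radius of convergence is finite), which is precisely where the resummation algorithms compared in this section come into play.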

Figure 9. Absolute errors for equation (41)

7. Conclusion. In this article, we analysed the weak points of the BP algorithm for the numerical computation of the Borel sum. We also saw that the GFS algorithm does not suffer from the same robustness problems and can be considered a serious alternative to BP. Only one step of these algorithms has been considered here. For realistic resolutions, one has to iterate the algorithms. This means that, when the error reaches a prescribed tolerance value, a new series expansion is computed with the last acceptable solution as initial condition, and the resummation algorithm is applied again. This operation is known as continuation. Relations such as (27) may be used to evaluate this error. The next step would be a CPU-time comparison between BP, with robustness ameliorations, and GFS, with continuation.