Polynomial interpolation and a priori bootstrap for computer-assisted proofs in nonlinear ODEs

In this work, we introduce a method based on piecewise polynomial interpolation to rigorously enclose solutions of nonlinear ODEs. Using a technique which we call a priori bootstrap, we transform the problem of solving the ODE into one of looking for a fixed point of a high order smoothing Picard-like operator. We then develop a rigorous computational method based on a Newton-Kantorovich type argument (the radii polynomial approach) to prove existence of a fixed point of the Picard-like operator. We present all the necessary estimates in full generality and for arbitrary nonlinearities. With our approach, we study two systems of nonlinear equations: the Lorenz system and the ABC flow. For the Lorenz system, we solve Cauchy problems and prove existence of periodic and connecting orbits at the classical parameters, and for ABC flows, we prove existence of ballistic spiral orbits.


Introduction
This paper introduces an approach based on polynomial interpolation to obtain mathematically rigorous results about existence of solutions of nonlinear ordinary differential equations (ODEs). Our motivation for the present work is threefold. First, we believe that polynomial interpolation techniques are versatile and can lead to efficient and general computational methods to approximate solutions of ODEs with complicated (non polynomial) nonlinearities. Second, while polynomial interpolation techniques have been used to produce computer-assisted proofs in ODEs, their applicability is sometimes limited by the formulation of the problem itself. More precisely, a standard way to prove (both theoretically and computationally) existence of solutions of systems of ODEs is to reformulate the problem into an integral equation (often in the form of a Picard operator) and then to apply the contraction mapping theorem to get existence. If one is interested in producing computer-assisted proofs using that approach, the analytic estimates required to perform the proofs depend on the amount of regularity gained by applying the integral operator. This observation motivated the development of what we call the a priori bootstrap, which consists of reformulating the original ODE problem into one of looking for the fixed point of a higher order smoothing Picard-like operator. Third, we believe (and hope) that our proposed method can be adapted to study infinite dimensional continuous dynamical systems (e.g. partial differential equations and delay differential equations) for which spectral methods may sometimes be difficult to apply (for instance because of the shape of the spatial domains or because the differential operators are difficult to invert in a given spectral basis).
It is important to realize that computer-assisted arguments to study differential equations are by now standard, and that providing a complete overview of the literature would require a tremendous effort and is outside the scope of this paper. However, we encourage the reader to read the survey papers [1,2,3,4,5,6], the books [7,8] and to consult the webpage of the CAPD group [9] to get a flavour of the extraordinary recent advances made in the field.
More closely related to the present work are methods based on the contraction mapping theorem using the radii polynomial approach (first introduced in [10]), which has been developed in the last decade to study fixed points, periodic orbits, invariant manifolds and connecting orbits of ODEs, partial differential equations and delay differential equations (see for instance [11,12,13,14,15,16]). The numerics and a posteriori analysis in those works mainly rely on spectral methods like Fourier and Chebyshev series, and on Taylor methods. First order polynomial (piecewise linear) approximations have also been used within the radii polynomial approach (see [17,18,19]), but more seldom, mainly because the numerical cost was higher and the accuracy lower than for spectral methods. The computational cost of these low order methods is essentially due to the above-mentioned low gain of regularity of the Picard operators chosen to perform the computer-assisted proofs.
In an attempt to address the low gain of regularity problem, we present here a new technique that we call a priori bootstrap which, when combined with the use of higher order interpolation, significantly improves the efficiency of computer-assisted proofs with polynomial interpolation methods. We stress that the limitations that affected the previous works using interpolation were not solely due to the use of first order methods, and that the a priori bootstrap is crucial (that is, just increasing the order of the interpolation does not significantly improve the results in those previous works). This point is illustrated in Section 6.
While we believe that one of the advantages of our proposed method is the versatility of polynomial interpolation for tackling problems with complicated (non polynomial) nonlinearities, we hasten to mention the existence of previous powerful methods which have been developed in rigorous computing to study such problems. For instance, automatic differentiation (AD) techniques provide a beautiful and efficient means of computing solutions of nonlinear problems (e.g. see [7,20,21]) and are often combined with Taylor series expansions to prove existence of solutions of differential equations with non polynomial nonlinearities (e.g. see [1,22,23,24,25,26,27]). Also, in the recent work [28], the ideas of AD are combined with Fourier series to prove existence of periodic orbits in non polynomial vector fields. Independently of AD techniques, a method involving Chebyshev series to approximate the nonlinearities has been proposed recently in [29]. Finally, the fast Fourier transform (FFT) algorithm is used in [30] to control general nonlinearities in the context of computer-assisted proofs in KAM theory.
In this paper we consider a C^r vector field φ : R^n → R^n (not necessarily polynomial) with r ≥ 1, and we present a rigorous numerical procedure to study problems of the form

$$
\begin{cases} \dfrac{du}{dt}(t) = \phi(u(t)), & t \in [0,\tau], \\ BV(u) = 0. \end{cases} \tag{1}
$$

We treat three special cases for BV, corresponding to an initial value problem, a problem of looking for periodic orbits and a problem of looking for connecting orbits. We also note that, as for the already existing spectral methods, the technique presented here extends easily to treat parameter dependent versions of (1) (e.g. using ideas from [31,32,33]). For the sake of simplicity, we expose all the general arguments for the initial value problem only, that is, for a given integration time τ > 0 and an initial condition u_0 ∈ R^n,

$$
\begin{cases} \dfrac{du}{dt}(t) = \phi(u(t)), & t \in [0,\tau], \\ u(0) = u_0. \end{cases} \tag{2}
$$

We explain how (2) needs to be modified for different problems as we introduce them in Sections 6 and 7.

Our paper is organized as follows. In Section 2, we start by presenting our a priori bootstrap technique, together with a piecewise reformulation of the operator that we use throughout this work. We then recall in Section 3 some definitions and error estimates about polynomial interpolation, and explain how to combine them with our a priori bootstrap formulation to get computer-assisted proofs. The precise estimates needed for the proofs are then derived in Section 4, and their dependence on the a priori bootstrap and on the parameters of the polynomial interpolation is discussed in Section 5. This discussion is complemented by several examples in Section 6, where we apply our technique to validate solutions of the Lorenz system. Finally we give another example of application in Section 7, where we prove the existence of some specific orbits for ABC flows.

A priori bootstrap
One of the usual strategies used to study (2), both theoretically and numerically, is to recast it as a fixed point problem, as in the following lemma.

Lemma 2.1. Consider the standard Picard operator

$$ f(u)(t) \stackrel{\text{def}}{=} u_0 + \tau \int_0^t \phi(u(s))\,ds, \qquad t \in [0,1]. \tag{3} $$

Then u is a fixed point of f if and only if v : t ↦ u(t/τ) is a solution of (2).

In previous works using this reformulation, the limiting factor in the estimates needed to apply the contraction mapping theorem was a consequence of the fact that f only gains one order of regularity, that is, it maps C^0 into C^1. This fact will be made precise in Section 4, where we derive the estimates in question, and in Section 5, where we discuss how those estimates affect the effectiveness of our technique.
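To build intuition for this fixed point formulation, here is a minimal, non-rigorous sketch (in Python, with names of our own choosing) that iterates the Picard operator f(u)(t) = u_0 + τ ∫_0^t φ(u(s)) ds on a uniform grid of [0, 1], using the trapezoidal rule for the integral:

```python
import numpy as np

def picard_iterate(phi, u0, tau, n_grid=2001, n_iter=60):
    """Iterate the Picard operator f(u)(t) = u0 + tau * int_0^t phi(u(s)) ds
    on a uniform grid of [0, 1] (a non-rigorous illustration only)."""
    t = np.linspace(0.0, 1.0, n_grid)
    h = t[1] - t[0]
    u = np.full(n_grid, float(u0))
    for _ in range(n_iter):
        integrand = phi(u)
        # cumulative trapezoidal rule: integral from 0 to each grid point
        integral = np.concatenate(
            ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * h / 2)))
        u = u0 + tau * integral
    return t, u

# u' = u, u(0) = 1; the fixed point satisfies u(t) = exp(tau * t) on [0, 1]
t, u = picard_iterate(lambda u: u, u0=1.0, tau=1.0)
print(abs(u[-1] - np.e))  # small discretization error
```

Each application of f gains a single order of regularity, which is precisely the limitation that the a priori bootstrap removes.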
To circumvent this limitation, we propose a different reformulation that we call a priori bootstrap. This approach provides operators which gain more regularity, and therefore lead to sharper analytic estimates. First we introduce some notations. The following definition allows concisely describing the higher order equations obtained by taking successive derivatives of (2).

Definition 2.2. Consider the sequence of vector fields φ_p : R^n → R^n defined by

$$ \phi_0 \stackrel{\text{def}}{=} \phi, \qquad \phi_p(u) \stackrel{\text{def}}{=} D\phi_{p-1}(u)\,\phi(u), $$

for all u ∈ R^n and for all p = 0, ..., r.

Lemma 2.3. For any 1 ≤ p ≤ r + 1, u solves (2) if and only if u solves the following Cauchy problem:

$$ \begin{cases} \dfrac{d^p u}{dt^p}(t) = \phi_{p-1}(u(t)), \\ \dfrac{d^q u}{dt^q}(0) = \phi_{q-1}(u_0), \quad q = 1, \ldots, p-1, \\ u(0) = u_0. \end{cases} \tag{4} $$

Proof. The direct implication is trivial. To prove the converse implication, we consider e def= du/dt − φ(u) and show that e solves a linear ODE of order p − 1, with initial conditions d^q e/dt^q (0) = 0 for all q = 0, ..., p − 2, which implies that e ≡ 0.
Integrating the p-th order Cauchy problem (4) p times leads to a new fixed point operator which now maps C^0 into C^p.

Lemma 2.4. Let 1 ≤ p ≤ r + 1 and consider the Picard-like operator

$$ \tilde g(u)(t) \stackrel{\text{def}}{=} u_0 + \sum_{q=1}^{p-1} \frac{(\tau t)^q}{q!}\,\phi_{q-1}(u_0) + \tau^p \int_0^t \frac{(t-s)^{p-1}}{(p-1)!}\,\phi_{p-1}(u(s))\,ds. \tag{5} $$

Then u is a fixed point of g̃ if and only if v : t ↦ u(t/τ) is a solution of (4) (and thus of (2)).

Proof. If u is a fixed point of g̃, an elementary computation yields that v solves (4). Conversely, if v solves (4), then Taylor's formula with integral remainder shows that u is a fixed point of g̃.
It is worth noting that, in the same framework of rigorous computation as the one used here, approximations using piecewise linear functions were used in [18] to prove existence of homoclinic orbits for the Gray-Scott equation. In that case the system of ODEs considered is of order 2, and therefore the equivalent integral operator is very similar to g̃ in (5) for p = 2. Similarly, piecewise linear functions were used in [19] to prove existence of connecting orbits in the Lorenz equations. In that case, the standard Picard operator (3) was used. Now that we have an operator which provides a gain of several orders of regularity, it becomes interesting to consider polynomial interpolation of higher order; this will be detailed in Section 5 and applied in Section 6.

Piecewise reformulation of the Picard-like operator
We finish this section with a last equivalent formulation of the initial value problem (2), which is the one used in the present paper to perform the computer-assisted proofs. Given m ∈ N, we introduce the mesh Δ_m = {t_0, t_1, ..., t_m} of [0, 1], where 0 = t_0 < t_1 < ... < t_m = 1. Then we consider C^0_{Δm}([0, 1], R^n) (respectively C^k_{Δm}([0, 1], R^n)) the space of piecewise continuous (respectively piecewise C^k) functions having possible discontinuities only on the mesh Δ_m. More precisely, we use the following definition.
Definition 2.5. We say that u belongs to C^k_{Δm}([0, 1], R^n) if, for all j = 0, ..., m − 1, the restriction of u to (t_j, t_{j+1}) is of class C^k and can be extended to a C^k function on [t_j, t_{j+1}].
We then introduce the operator g, defined on each interval (t_j, t_{j+1}) (j = 0, ..., m − 1) by

$$ g(u)(t) \stackrel{\text{def}}{=} u(t_j^-) + \sum_{q=1}^{p-1} \frac{(\tau (t-t_j))^q}{q!}\,\phi_{q-1}(u(t_j^-)) + \tau^p \int_{t_j}^t \frac{(t-s)^{p-1}}{(p-1)!}\,\phi_{p-1}(u(s))\,ds, \tag{6} $$

where u(t_j^-) denotes the left limit of u at t_j, and u(t_0^-) must be replaced by u_0 (this last convention will be used throughout the paper).

Remark 2.6. We point out that our computer-assisted proof is based on the operator g (defined in (6)), which differs slightly from the operator g̃ (defined in (5)) used in previous studies such as [17,18,19]. The only difference is that the integral in g is in some sense reset at each t_j. We introduce this piecewise reformulation because it allows for sharper estimates (see Remark 4.7).
We finally introduce G : C^0_{Δm}([0, 1], R^n) → C^0_{Δm}([0, 1], R^n) defined by G(u) def= u − g(u), whose zeros are exactly the fixed points of g.

Proof. This result is similar to Lemma 2.4. The only additional property that we need to check is that, if u ∈ C^0_{Δm}([0, 1], R^n) satisfies G(u) = 0, then u cannot be discontinuous. Indeed, if G(u) = 0 then g(u) = u, and for all j ∈ {1, ..., m − 1} one has u(t_j^+) = g(u)(t_j^+) = u(t_j^-).

At this point, it might seem as if defining G on C^0_{Δm}([0, 1], R^n) brings unnecessary complications, and that we should simply define it on C^0([0, 1], R^n). While this is indeed a possibility, it will quickly become apparent that the present choice is more convenient, both for theoretical and numerical considerations (see Remark 3.4).
Finding a zero of G is the formulation of our initial problem (2) that we are going to use in the rest of this paper.

Preliminaries
Given a mesh Δ_m as defined in Section 2.2, we introduce the refined mesh Δ_{m,k} where, for all j ∈ {0, ..., m − 1}, we add k − 1 points between t_j and t_{j+1}. More precisely, we suppose that these points are the Chebyshev points (of the second kind) between t_j and t_{j+1}, that is, we add the points

$$ t_{j,l} \stackrel{\text{def}}{=} \frac{t_j + t_{j+1}}{2} - \frac{t_{j+1} - t_j}{2} \cos\!\left(\frac{l\pi}{k}\right), \qquad l = 1, \ldots, k-1. $$

Notice that the above definition extends to t_{j,0} = t_j and t_{j,k} = t_{j+1}, and that k = 1 corresponds to the mesh used in previous studies with first order interpolation (e.g. see [18,19,17]). We then introduce the subspace

$$ S^n_{m,k} \stackrel{\text{def}}{=} \left\{ u \in C^0_{\Delta_m}([0,1], \mathbb{R}^n) : u|_{(t_j, t_{j+1})} \text{ is a polynomial of degree } k \text{ for all } j = 0, \ldots, m-1 \right\}. $$
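As an illustration, the refined mesh Δ_{m,k} can be computed as follows (a Python sketch with our own naming; row j of the output holds the points t_{j,0}, ..., t_{j,k} of the j-th subinterval):

```python
import numpy as np

def chebyshev_refined_mesh(t, k):
    """Given a mesh t_0 < ... < t_m of [0, 1], insert the k-1 Chebyshev
    points of the second kind inside each subinterval [t_j, t_{j+1}].
    Returns an (m, k+1) array whose row j holds t_{j,0}, ..., t_{j,k}."""
    t = np.asarray(t, dtype=float)
    m = len(t) - 1
    # Chebyshev points of the second kind on [-1, 1], in increasing order
    x = np.cos(np.pi * np.arange(k, -1, -1) / k)
    nodes = np.empty((m, k + 1))
    for j in range(m):
        a, b = t[j], t[j + 1]
        nodes[j] = (a + b) / 2 + (b - a) / 2 * x
    return nodes

nodes = chebyshev_refined_mesh(np.linspace(0, 1, 5), k=4)
print(nodes[0])  # starts at t_0 = 0 and ends at t_1 = 0.25
```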
Next, we define the projection operator Π^n_{m,k} : C^0_{Δm}([0, 1], R^n) → S^n_{m,k}, u ↦ ū, where ū is the function in S^n_{m,k} that matches the values of u on the mesh Δ_{m,k}. Notice that u can have discontinuities at the points t_j, therefore the matching of u and ū at those points must be understood as u(t_j^-) = ū(t_j^-) and u(t_j^+) = ū(t_j^+). In the sequel we will need to control the error between a function u and its interpolation ū. This is the content of the following propositions, where ‖·‖_∞ denotes the sup norm on [0, 1].
Λ_k being the Lebesgue constant (see for instance [34]), and ⌊(l−1)/2⌋ denoting the integer part of (l−1)/2. More information about the Lebesgue constant, and in particular sharp computable upper bounds for it, can be found in the Appendix, together with references and proofs of the two above propositions.
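For intuition, the Lebesgue constant Λ_k can be estimated numerically by maximizing the Lebesgue function (the sum of the absolute values of the Lagrange basis polynomials) over a fine grid. This brute-force Python sketch is of course no substitute for the rigorous upper bounds discussed in the Appendix:

```python
import numpy as np

def lebesgue_constant(k, n_eval=20001):
    """Numerically estimate the Lebesgue constant Lambda_k for interpolation
    at the k+1 Chebyshev points of the second kind on [-1, 1]."""
    x = np.cos(np.pi * np.arange(k + 1) / k)   # interpolation nodes
    s = np.linspace(-1.0, 1.0, n_eval)         # evaluation grid
    L = np.zeros_like(s)                       # Lebesgue function
    for i in range(k + 1):
        li = np.ones_like(s)
        for j in range(k + 1):
            if j != i:
                li *= (s - x[j]) / (x[i] - x[j])
        L += np.abs(li)
    return L.max()

print(lebesgue_constant(10))  # grows only logarithmically in k for these nodes
```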

Finite dimensional projection
To get an approximate zero of G (and thus an approximate solution of (2)), we are going to look for a zero of Ḡ def= Π^n_{m,k} G|_{S^n_{m,k}}. But first, we need a convenient way to represent the elements of S^n_{m,k}. Here and in the sequel, we use the superscript (i) to denote the i-th component of a vector in R^n, but we will work with all the components at once as often as possible, to avoid burdening the notations with this superscript (i). Let us introduce the set of indices

$$ E^n_{m,k} \stackrel{\text{def}}{=} \left\{ (i,j,l) : 1 \le i \le n, \; 0 \le j \le m-1, \; 0 \le l \le k \right\}. $$

Perhaps the most natural way to characterize an element ū of S^n_{m,k} is to give all the values ū^{(i)}(t_{j,l}) for (i, j, l) ∈ E^n_{m,k}. However, we will also use another representation, more suited to numerical computations, which consists of decomposing ū on the Chebyshev basis. That is, for t ∈ [t_j, t_{j+1}] we write

$$ \bar u^{(i)}(t) = \sum_{l=0}^{k} a^{(i)}_{j,l}\, T_l\!\left( \frac{2t - t_j - t_{j+1}}{t_{j+1} - t_j} \right), \tag{7} $$

where T_l is the l-th Chebyshev polynomial of the first kind. We can thus also describe uniquely any function ū belonging to S^n_{m,k} by the family of Chebyshev coefficients (a^{(i)}_{j,l})_{(i,j,l) ∈ E^n_{m,k}}.
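Concretely, evaluating an element of S^n_{m,k} from its Chebyshev coefficients amounts to locating the subinterval, mapping it to [−1, 1], and summing the series. Here is a small Python sketch for one component (hypothetical names, using NumPy's Chebyshev routines):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def eval_piecewise_cheb(coeffs, t_mesh, t):
    """Evaluate a piecewise polynomial given, on each subinterval
    [t_j, t_{j+1}], by Chebyshev coefficients coeffs[j] in the basis
    T_l((2t - t_j - t_{j+1}) / (t_{j+1} - t_j))."""
    j = np.searchsorted(t_mesh, t, side='right') - 1
    j = min(max(j, 0), len(t_mesh) - 2)        # clamp to a valid subinterval
    a, b = t_mesh[j], t_mesh[j + 1]
    s = (2 * t - a - b) / (b - a)              # map [t_j, t_{j+1}] to [-1, 1]
    return C.chebval(s, coeffs[j])

# two subintervals: the constant 1 on the first, T_1 on the second
t_mesh = np.array([0.0, 0.5, 1.0])
coeffs = np.array([[1.0, 0.0], [0.0, 1.0]])
print(eval_piecewise_cheb(coeffs, t_mesh, 0.75))  # T_1(0) = 0
```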

Remark 3.4.
Let us mention how considering functions with possible discontinuities at the mesh points in Δ_m comes in handy. By restricting ourselves to functions in C^0([0, 1], R^n), we would need additional constraints on the Chebyshev coefficients to impose continuity at each of the mesh points t_j (j = 1, ..., m − 1), and we would have to keep track of them in all computations. Instead, the choice of working with C^0_{Δm}([0, 1], R^n) allows us to avoid these additional constraints.
for all j = 0, ..., m − 1 and l = 0, ..., k. (8) We recall that all the values ū^{(i)}(t_{j,l}) are part of our finite dimensional data. Thus we can numerically find a zero ū of Ḡ, which is going to be our approximate solution. We note that we use these identifications between S^n_{m,k} and R^{nm(k+1)} throughout the present work. Our objective is now to validate this numerical solution ū, that is, to prove that within a given neighbourhood of ū lies a true zero u of G.
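In practice, a zero of the finite dimensional map Ḡ : R^{nm(k+1)} → R^{nm(k+1)} is computed with Newton's method. The following generic Python sketch (with a forward-difference Jacobian for brevity; the actual computation would use the analytic Jacobian DḠ) shows the idea on a toy system:

```python
import numpy as np

def newton(G, x0, tol=1e-12, max_iter=50):
    """Numerically find a zero of G : R^N -> R^N by Newton's method,
    with a forward-difference Jacobian (a sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Gx = G(x)
        if np.linalg.norm(Gx) < tol:
            break
        N = len(x)
        J = np.empty((N, N))
        h = 1e-7
        for i in range(N):
            e = np.zeros(N)
            e[i] = h
            J[:, i] = (G(x + e) - Gx) / h      # finite-difference column
        x = x - np.linalg.solve(J, Gx)         # Newton update
    return x

# toy system: x^2 + y^2 - 1 = 0, y - x = 0
sol = newton(lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[1] - v[0]]),
             [1.0, 0.5])
print(sol)  # close to (1/sqrt(2), 1/sqrt(2))
```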

Back to a fixed point formulation
We consider the space X^n def= C^0_{Δm}([0, 1], R^n). We already have a projection onto X^n_{m,k} def= S^n_{m,k}, namely Π^n_{m,k}, and we also define its complementary projection Π^n_∞ def= I − Π^n_{m,k}. We then define norms ‖·‖_{X^n_{m,k}} and ‖·‖_{X^n_∞} on the ranges of these projections, and on X^n we consider a norm combining them, where r_∞ is a positive parameter. Notice that for all r_∞ > 0, (X^n, ‖·‖_{X^n}) is a Banach space. For any r, r_∞ > 0, we denote by B_{X^n}(r, r_∞) the closed neighbourhood of 0 defined as

$$ B_{X^n}(r, r_\infty) \stackrel{\text{def}}{=} \left\{ u \in X^n : \|\Pi^n_{m,k} u\|_{X^n_{m,k}} \le r, \; \|\Pi^n_\infty u\|_{X^n_\infty} \le r_\infty r \right\}. $$

Suppose that we now have computed a numerical zero ū of Ḡ. We define A†_{m,k} def= DḠ(ū) and consider A_{m,k}, an injective numerical inverse of A†_{m,k}. Finally, we introduce the Newton-like operator T : X^n → X^n defined with the help of A_{m,k}. Notice that the fixed points of T are in one-to-one correspondence with the zeros of G.

We now give a finite set of sufficient conditions, that can be rigorously checked on a computer using interval arithmetic, to ensure that T is a contraction on a given ball around ū. If those conditions are satisfied, the Banach fixed point theorem then yields the existence and local uniqueness of a zero of G. This is the content of the following statement (based on [35], see also [10] for a detailed proof). Assume that we have bounds Y and Z(r, r_∞) satisfying (11)-(14). If there exist r, r_∞ > 0 such that (15) and (16) hold, then there exists a unique zero of G within the set ū + B_{X^n}(r, r_∞). The polynomials in r appearing in (15) and (16) are called the radii polynomials.
In the next section, we show how to obtain bounds Y and Z satisfying (11)-(14). Before doing so, let us make a quick remark about the different representations and norms we can use on X^n_{m,k}.

Remark 3.6. As explained in Section 3.2, in practice we will mostly work with ū ∈ X^n_{m,k} represented by its Chebyshev coefficients as in (7). However, there are going to be instances where the values ū^{(i)}(t_{j,l}) are needed, for instance to compute ‖ū‖_{X^n_{m,k}}. We point out that, numerically, passing from one representation to the other can be done efficiently using the Fast Fourier Transform.
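The passage from grid values to Chebyshev coefficients mentioned in Remark 3.6 is a discrete cosine transform. The following Python sketch writes it out naively (an FFT-based DCT would be used in practice); the helper name is ours:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def values_to_cheb_coeffs(v):
    """Recover the Chebyshev coefficients of the degree-k interpolant from
    its values at the k+1 Chebyshev points of the second kind
    x_i = cos(i*pi/k). This is a discrete cosine transform, written naively
    here for clarity."""
    k = len(v) - 1
    i = np.arange(k + 1)
    a = np.empty(k + 1)
    for l in range(k + 1):
        terms = v * np.cos(np.pi * l * i / k)
        # endpoints of the cosine sum counted with weight 1/2
        a[l] = (2.0 / k) * (terms.sum() - (terms[0] + terms[-1]) / 2)
    a[0] /= 2
    a[k] /= 2
    return a

x = np.cos(np.pi * np.arange(5) / 4)   # 5 Chebyshev points, k = 4
v = x**3 - 2 * x                       # values of p(x) = x^3 - 2x
a = values_to_cheb_coeffs(v)
print(np.round(a, 10))                 # p = -1.25*T_1 + 0.25*T_3
```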
One other important point is that, at some point in the next section, we are going to need upper bounds for ‖ū^{(i)}‖_∞. To get such a bound from our finite dimensional data, we have two options, namely

$$ \|\bar u^{(i)}\|_\infty \le \max_{0 \le j \le m-1} \sum_{l=0}^{k} |a^{(i)}_{j,l}| \tag{17} $$

or

$$ \|\bar u^{(i)}\|_\infty \le \Lambda_k \max_{(j,l)} |\bar u^{(i)}(t_{j,l})|. \tag{18} $$

If ū is given, then (17) is usually better, whereas (18) is better if ū is any function in a given ball of X^n_{m,k}. Notice that (17) simply follows from the fact that the Chebyshev polynomials satisfy |T_l(t)| ≤ 1 for all t ∈ [−1, 1] and all l ∈ N. For more information about the bound (18), see the Appendix and the references therein.
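Bound (17) is easy to check numerically: since |T_l| ≤ 1 on [−1, 1], the sup norm of a Chebyshev series never exceeds the sum of the absolute values of its coefficients. A quick Python illustration (with arbitrary sample coefficients):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Bound (17): since |T_l(t)| <= 1 on [-1, 1], the sup norm of a Chebyshev
# series is at most the sum of the absolute values of its coefficients.
a = np.array([0.3, -1.2, 0.05, 0.4])       # sample Chebyshev coefficients
bound = np.abs(a).sum()

t = np.linspace(-1, 1, 10001)
true_sup = np.abs(C.chebval(t, a)).max()
print(true_sup <= bound)  # True
```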

Formula for the bounds
In this section, we give formulas for bounds Y, Y_∞, Z and Z_∞ satisfying the assumptions (11)-(14) of Theorem 3.5. To make the exposition clearer, we focus strictly on the derivation of the different bounds in this section. In particular, the discussion about the impact of the level of a priori bootstrap (that is, the value of p) and of the order of polynomial approximation (that is, the value of k) is deferred to Section 5.

The Y bounds
In this section we derive the Y bounds, which measure the defect associated with a numerical solution ū, that is, how close G(ū) is to 0. We start with the finite dimensional part.

Proposition 4.1. Let y be defined as in (9) and consider the associated bound Y. Then (11) holds.
Proof. Simply notice that Π n m,k y = −A m,k Π n m,k G(ū).

Remark 4.2.
The above bound is not completely satisfactory, in the sense that it is not directly implementable. Indeed, to compute Y^{(i)}_{j,l} we need to evaluate (or at least to bound) the quantities G^{(i)}(ū)(t_{j,l}). In particular (see (8)), we need to evaluate integrals whose integrand, which we denote by Ψ, involves the vector field φ_{p−1} evaluated along ū. If φ is a non polynomial vector field, we use a Taylor approximation of order k_0 of Ψ to get an approximate value of the integral, and this approximate value can be evaluated explicitly. The error made in this approximation is then controlled by the Lagrange remainder, and this error term is effective, since the maximum over s ∈ [−1, 1] of the relevant derivative of Ψ can be bounded using interval arithmetic. Therefore, the quantity Y can be computed from |A_{m,k}|, a vector containing the approximate integrals and a vector containing the error bounds for these approximations. Here and in the sequel, absolute values applied to a matrix, like |A_{m,k}|, must be understood component-wise. We point out that, in practice, if the mesh Δ_m is refined enough, then ū is not going to vary much on each subinterval [t_j, t_{j+1}], and thus we can get rather precise approximations even with a low order k_0 for the Taylor expansion.
We mention that when the vector field φ is polynomial, Ψ has a finite Taylor expansion, and therefore the integrals can in fact be computed exactly (i.e. we can take the error bounds equal to 0).
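Remark 4.2 can be illustrated on a toy example. Here we take the hypothetical choice Ψ = exp (expanded at 0) and approximate ∫_0^s Ψ by integrating its order-k_0 Taylor polynomial, bounding the error by the integrated Lagrange remainder; in the actual proofs, the maximum of the derivative would be bounded with interval arithmetic rather than floating point:

```python
import math

def taylor_integral_with_error(k0, s):
    """Approximate int_0^s exp(x) dx by integrating the order-k0 Taylor
    polynomial of exp at 0, and bound the error by integrating the Lagrange
    remainder |x|^(k0+1) * max|exp| / (k0+1)! over [0, s]."""
    approx = sum(s**(q + 1) / ((q + 1) * math.factorial(q))
                 for q in range(k0 + 1))
    deriv_max = math.exp(abs(s))   # bounds every derivative of exp on [0, s]
    err = deriv_max * abs(s)**(k0 + 2) / ((k0 + 2) * math.factorial(k0 + 1))
    return approx, err

approx, err = taylor_integral_with_error(k0=6, s=0.5)
exact = math.exp(0.5) - 1
print(abs(exact - approx) <= err)  # True
```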
We now turn our attention to the second part of the Y bound.

Proposition 4.3.
Let y be defined as in (9) and consider the associated bound Y_∞. Then (12) holds.
Proof. This follows from the definition of Π^n_∞ y together with the estimate provided by Proposition 3.1. The resulting quantity is not directly implementable, but we can again get an explicit bound for it by using a low order Taylor approximation and interval arithmetic. In the particular case where the vector field φ is polynomial, an explicit bound can also be obtained via the Chebyshev coefficients of the polynomial, as in (17).

The Z bounds
In this section we derive the Z bounds, which measure the contraction rate of T on the ball of radius r around ū. We begin with the finite dimensional part, that is, the projection on X^n_{m,k}. Let z be defined as in (10), and consider the splitting (19), where A_{m,k} and A†_{m,k} are defined as in Section 3.3. We are going to bound each of the three terms in (19) separately, as Z_0, Z_1 and Z_2.

The bound Z 0 (r)
The computation of the bounds (Z_0(r))^{(i)}_{j,l}, estimating the first of the terms in the splitting (19), is rather straightforward: it is simply a control on the precision of the numerical inverse A_{m,k}, via the defect I − A_{m,k} A†_{m,k}, for all (i, j, l) ∈ E^n_{m,k}.

The bound Z 1 (r, r ∞ )
We now construct the bounds (Z_1(r, r_∞))^{(i)}_{j,l}, estimating the second term in the splitting (19).
where 1_n is the vector of size n whose components are all equal to 1.

Proof. By definition of A†_{m,k} and Ḡ, we can rewrite the second term of the splitting, and we only need to prove the remaining estimate. Remembering (8) and using that ‖Π^n_∞ u‖_{X^n_∞} ≤ r_∞ r, we estimate the corresponding quantity for all (i, j, l) ∈ E^n_{m,k}. Notice that Remark 4.4 also applies here.

Remark 4.7.
Had we used the operator g̃ (see (5)) instead of g (see (6)), we would have obtained a bound involving the whole integral from 0 to t_{j,l} instead of just from t_j to t_{j,l}, which is obviously worse.

The bound Z 2
We finally construct the bounds (Z_2(r, r_∞))^{(i)}_{j,l}, estimating the last term in the splitting (19).
Proposition 4.8. The bounds (Z_2(r, r_∞))^{(i)}_{j,l} hold for all (i, j, l) ∈ E^n_{m,k}.

Proof (of Proposition 4.8). We only have to prove the stated estimate. Using (18) to bound the remainder part, we then estimate the corresponding quantity for all (i, j, l) ∈ E^n_{m,k}. Notice that Remark 4.4 also applies here.

The Z ∞ bound
We are left with the remainder part of the Z bound, which we treat in this section.
Proposition 4.10. Let u_1, u_2 ∈ B_{X^n}(r, r_∞) and z as in (10). Define, for all i ∈ {1, ..., n}, the bound (Z_∞)^{(i)}, where C^{opt}_{k,p} is whichever of the two constants given by Propositions 3.1 and 3.2 is smaller. Then (14) holds.

For any continuous function γ, one has
Notice that Remark 4.4 also applies here.

The radii polynomials and interval arithmetic
The following proposition sums up what has been proven so far in this section, namely that we have derived bounds satisfying the requirements (11) to (14) of Theorem 3.5.

Proposition 4.11. Let y and z be defined as in (9) and (10). Then the bound defined in Proposition 4.1 satisfies (11), and the one from Proposition 4.3 satisfies (12). Also, consider the bounds defined in Propositions 4.5 to 4.8. Then

$$ Z(r, r_\infty) \stackrel{\text{def}}{=} Z_0(r) + Z_1(r, r_\infty) + Z_2(r, r_\infty) $$

satisfies (13), and finally the bound from Proposition 4.10 satisfies (14).
Notice that, the way these bounds are defined, they are polynomials in r and r_∞, whose coefficients are all positive and can be computed explicitly with the help of the computer, since they depend on the numerical data of an approximate solution ū. Also, we make sure to control possible round-off errors by using interval arithmetic (in our case INTLAB [27]).
In practice, we first choose r_∞ so that it satisfies the constraint (20) introduced in the next section. If no such positive r_∞ exists, we increase m and/or k and/or p and try again. Once r_∞ is fixed, we try to find a positive r such that the last conditions (15) and (16) of Theorem 3.5 hold. If there is no such positive r, we again increase m and/or k and/or p and try again. If we finally find a positive r satisfying (15) and (16), then Theorem 3.5 applies, that is, there exists a unique zero of G in B_{X^n}(r, r_∞).
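The search for a validation radius can be sketched as follows (plain floating point and made-up bound values for illustration only; the actual procedure evaluates the radii polynomials with interval arithmetic and also handles r_∞):

```python
import numpy as np

def find_validation_radius(Y, Z0, Z1, Z2, r_max=1.0, n_grid=10000):
    """Sketch of the finite part of the radii polynomial approach: given
    hypothetical bounds with Z(r) = Z0 + Z1 + Z2*r, look for r > 0 with
    p(r) = Y + Z(r)*r - r < 0, which yields a contraction (and hence a
    unique true zero) on the ball of radius r around the numerical
    solution."""
    for r in np.linspace(r_max / n_grid, r_max, n_grid):
        if Y + (Z0 + Z1 + Z2 * r) * r - r < 0:
            return r
    return None  # proof fails: increase m and/or k and/or p and try again

r = find_validation_radius(Y=1e-8, Z0=1e-3, Z1=1e-2, Z2=5.0)
print(r)  # a small positive validation radius
```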
In Sections 6 and 7, we give several examples where the procedure described just above is successfully used to validate solutions of an initial value problem, as well as periodic solutions and heteroclinic orbits. But before doing so, we discuss in the next section the role of the parameters k, m and p, and how they influence the bounds.

About the choice of the parameters
In this section, we explain how the parameters k, m and p should be chosen, and in particular we highlight how the a priori bootstrap (that is, taking p ≥ 2) improves the efficiency of the computer-assisted procedure we propose. The discussion will be rather informal, but we hope it helps the reader understand the results of the various comparisons presented in Section 6. Also, to make things slightly simpler, we assume here that the grid Δ_m is uniform, so that in the estimates each instance of t_{j+1} − t_j can be replaced by 1/m.

Our main constraint is that we want the method to be successful while minimizing the size of our numerical data, that is, the dimension of our finite dimensional space X^n_{m,k}, which is nm(k + 1). Since n is fixed by the dimension of the vector field φ, we want to minimize the product m(k + 1). As we see in the examples of Section 6, the usual limiting factor when trying to satisfy the radii polynomial inequalities (15) and (16) is to get the order one term (in r) to be negative. For the finite part (that is, (15)), this basically amounts to the condition from Proposition 4.6, and for the remainder part (that is, (16)) we get a condition involving constants α and β that depend on the numerical solution ū and on the vector field φ, but not on the parameters k, m and p that we can tune. We want to be able to choose an r_∞ satisfying the resulting inequalities, and a necessary and sufficient condition for that is the inequality (21).

Remember that we want (21) to be satisfied while minimizing the product m(k + 1). When p is fixed and k becomes large, notice that C^{opt}_{k,p} decreases like ln(k)/k^p. However, satisfying (21) requires, roughly speaking, decreasing (τ/m)^p C^{opt}_{k,p} as much as possible. This suggests two things, which we confirm in our explicit examples of Section 6.
First, that it is slightly better to increase m than k (because of the ln(k) factor) and second, that if we take p equal to 2 or more (that is if we use a priori bootstrap) then we can satisfy (21) while taking m(k + 1) much smaller than if we had p equal to 1.
Finally, we point out that taking k = p − 1 seems optimal for the condition (21) given by the order one term. Indeed, increasing k from p − 1 to p increases the total number of coefficients, but brings no gain with respect to (21). However, for the proof to succeed (that is, for (15) and (16) to be satisfied), we also need small enough Y and Y_∞ bounds. Looking more precisely at Y_∞, we see that it contains a term of the form C_k (1/m)^{k+1}, multiplied by a factor γ that depends on the numerical data ū and also on k; the dependence of γ on k is much less important than that of the C_k (1/m)^{k+1} term, so we neglect it here. Looking back at the definition of C_k in Proposition 3.1, we see that the term we want to be small decays very fast as k increases. Therefore, if we really need to decrease the Y_∞ bound, increasing k is drastically better than increasing m. That is why, in practice, we often take k = p, even though k = p − 1 would be enough to satisfy (21). Finally, we point out that, if we are not simply focused on getting an existence result, but also care about having sharp error bounds, then we should definitely make sure to have small Y and Y_∞ bounds, which, as we show in the next section, can be achieved by slightly increasing k (that is, taking k > p).
In the next section, we present several comparisons for different choices of parameters, that confirm the heuristic presented in this section.

Examples of applications for the Lorenz system
In this section, we consider the Lorenz system

$$ \begin{cases} \dot x = \sigma (y - x), \\ \dot y = \rho x - y - xz, \\ \dot z = xy - \beta z, \end{cases} $$

with the standard parameter values (σ, β, ρ) = (10, 8/3, 28). Here, we first consider the initial value problem (2), and use the bounds derived in Section 4 to try to validate orbits of various lengths with different parameters, to highlight the significant improvement made possible by the a priori bootstrap technique (that is, taking p ≥ 2). Then, we show that the a priori bootstrap also allows us to validate more interesting solutions (from a dynamical point of view), namely periodic orbits and connecting orbits.
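For concreteness, here is a Python sketch of the Lorenz vector field together with the first bootstrapped vector field φ_1 = Dφ · φ used for p = 2 (our own naming; for p = 3 one would also need φ_2 = Dφ_1 · φ):

```python
import numpy as np

SIGMA, BETA, RHO = 10.0, 8.0 / 3.0, 28.0

def phi(u):
    """Lorenz vector field."""
    x, y, z = u
    return np.array([SIGMA * (y - x), RHO * x - y - x * z, x * y - BETA * z])

def Dphi(u):
    """Jacobian of the Lorenz vector field."""
    x, y, z = u
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [RHO - z, -1.0, -x],
                     [y, x, -BETA]])

def phi1(u):
    """First bootstrapped vector field: along any solution of u' = phi(u),
    the chain rule gives u'' = Dphi(u) phi(u) = phi1(u)."""
    return Dphi(u) @ phi(u)

# sanity check: (phi(u + h*phi(u)) - phi(u)) / h approximates Dphi(u) phi(u)
u, h = np.array([1.0, 1.0, 1.0]), 1e-6
fd = (phi(u + h * phi(u)) - phi(u)) / h
print(np.max(np.abs(fd - phi1(u))))  # small
```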

Comparisons for the initial value problem
The aim of this section is to showcase the improvements allowed by the use of a priori bootstrap, and to validate the heuristics made in Section 5. To do so, we fix an initial condition (chosen close to the attractor of the Lorenz system) and perform two kinds of comparisons. First, we try to validate the longest possible orbits for p = 1, 2, 3 at various values of m and k. We recall that by validating, we mean getting the existence of a true solution near a numerical one, by checking that the hypotheses of Theorem 3.5 hold. To make the comparison fair, we fix the total number of coefficients used for the numerical approximation, that is, the dimension of X^n_{m,k}, given by nm(k + 1). This quantity is usually the bottleneck of our approach, since we need to store and invert the matrix A†_{m,k}, which is of size nm(k + 1) × nm(k + 1). Here, we take nm(k + 1) = 14000 (or as close as possible to 14000). The computations were made on a laptop with 8GB of RAM, and of course nm(k + 1) could be taken larger on a computer with more memory.
The first set of results is given in Table 1 (we recall that we work with the Lorenz system, therefore n = 3).
In all cases, the proof fails for longer times τ, because (21) is no longer satisfied. We see here that, as announced in Section 5, it is better to take k as small as possible to get the longest possible orbit, but that increasing k helps reduce the Y_∞ bound, and thus the validation radius r. We see that simply increasing the order of the polynomial interpolation (given by k) allows us to get better accuracy, but does not really help to prove longer orbits. However, we are going to show in the next examples (see Table 2) that combining a priori bootstrap (that is, taking p ≥ 2) with higher order polynomial interpolation does allow us to get much longer orbits.

Table 1: Comparisons for p = 1. τ_max is the longest integration time for which the proof succeeds, and r is the associated validation radius, that is, a bound on the distance (in C^0 norm) between the numerical data used and the true solution.

Table 2: Comparisons for p = 2. τ_max is the longest integration time for which the proof succeeds, and r is the associated validation radius, that is, a bound on the distance (in C^0 norm) between the numerical data used and the true solution.
First, comparing the k = 1 case for p = 1 and p = 2, we see that using a priori bootstrap allows us to get a slightly longer orbit, even for linear interpolation. Also, even for the longest possible orbit in that case (τ = 0.97), we still have much room to satisfy (21) (the quantity given by (21) is ≪ 1). However, we cannot get a longer orbit in that case even with p = 2, because the Y_∞ bound becomes too large. This can be dealt with by increasing k, and we see that we can then get much longer orbits. To finish this set of comparisons, we show that doing one more iteration of the a priori bootstrap process (that is, taking p = 3 instead of p = 2) still improves the results and allows us to get longer orbits (see Table 3).

Table 3: Comparisons for p = 3. τ_max is the longest integration time for which the proof succeeds, and r is the associated validation radius, that is, a bound on the distance (in C^0 norm) between the numerical data used and the true solution.
We sum up this set of comparisons by displaying the longest orbits obtained with p = 1, p = 2 and p = 3 (see Figure 1).
We then finish this section with another set of comparisons, where we now fix the length of the orbit, here τ = 2, and instead look for the minimal total number of coefficients for which we can validate this orbit (for different values of p). The aim of this experiment is to show that using a priori bootstrap enables us to validate solutions that we could not validate without it. Indeed, we are going to see that taking p greater than one allows us to use far fewer coefficients to validate the solutions. Thus, if for a given solution the proof without a priori bootstrap requires more coefficients than what our computer can handle, one can reduce this number by using a priori bootstrap and then possibly validate the orbit. For instance, still with the initial condition given by (22), we cannot validate the orbit of length τ = 2 without a priori bootstrap (that is, with p = 1), at least not with fewer than 14000 coefficients. However, the next table of results shows that we can validate it with p = 2, and with even fewer coefficients for p = 3.

Figure 1: The longest orbits we are able to validate, with a total number of coefficients of approximately 14000. In blue for p = 1, in green for p = 2 and in red for p = 3. The initial value is given by (22).

Table 4: Minimal number of coefficients needed to validate the orbit of length τ = 2, starting from u_0 given in (22).
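The fixed-point reformulation underlying these experiments can be illustrated in a few lines: the solution of the IVP u' = f(u), u(0) = u_0 is the fixed point of the Picard operator (Tu)(t) = u_0 + ∫_0^t f(u(s)) ds, and a priori bootstrap works with the p-fold composition T^p, which has the same fixed points but better smoothing properties. The sketch below uses plain floating-point arithmetic and a trapezoidal quadrature (so it is not the paper's validated interval method, only an illustration of the underlying iteration), on the test equation u' = −u:

```python
import math

def picard_solve(f, u0, tau, m=200, iters=60):
    """Iterate the Picard operator (T u)(t) = u0 + int_0^t f(u(s)) ds
    on a uniform grid of [0, tau], using cumulative trapezoidal
    quadrature for the integral. Plain floating point, no validation."""
    h = tau / m
    u = [u0] * (m + 1)  # initial guess: the constant function u0
    for _ in range(iters):
        fu = [f(v) for v in u]
        new = [u0]
        acc = 0.0
        for i in range(m):
            acc += 0.5 * h * (fu[i] + fu[i + 1])  # cumulative trapezoid
            new.append(u0 + acc)
        u = new
    return u

# Test equation u' = -u, u(0) = 1, whose exact solution is exp(-t)
u = picard_solve(lambda v: -v, 1.0, 1.0)
err = abs(u[-1] - math.exp(-1.0))  # small discretization error
```

At the fixed point of the discretized operator, consecutive grid values satisfy the trapezoidal rule, so the error is O(h²); the iteration itself converges factorially fast for this contraction.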

Validation of a periodic orbit.
To study periodic orbits, instead of an initial value problem, the system (1) has to be slightly modified into a boundary value problem, where τ is now an unknown of the problem and where u_0, v_0 ∈ R^n. The last equation is sometimes called a Poincaré phase condition, and is there to isolate the periodic orbit.
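The resulting boundary value map can be sketched as follows for the Lorenz system: periodicity over the unknown time τ together with one scalar phase condition gives n + 1 equations for the n + 1 unknowns (u_0, τ). This is a non-rigorous floating-point sketch with a classical RK4 integrator; the particular phase condition used here (orthogonality of u(0) − v_0 to the flow direction at the reference point v_0) is one common choice, assumed for illustration:

```python
def lorenz(u, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """Lorenz vector field at the classical parameters."""
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def rk4_flow(f, u0, tau, steps=500):
    """Integrate u' = f(u) from u0 over time tau with classical RK4."""
    h = tau / steps
    u = list(u0)
    for _ in range(steps):
        k1 = f(u)
        k2 = f([u[i] + 0.5 * h * k1[i] for i in range(3)])
        k3 = f([u[i] + 0.5 * h * k2[i] for i in range(3)])
        k4 = f([u[i] + h * k3[i] for i in range(3)])
        u = [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
    return u

def bvp_residual(u0, tau, v0):
    """Residual of the periodic-orbit BVP: periodicity u(tau) = u(0)
    plus a Poincare-type phase condition anchoring u(0) near v0."""
    uT = rk4_flow(lorenz, u0, tau)
    per = [uT[i] - u0[i] for i in range(3)]
    fv0 = lorenz(v0)
    phase = sum((u0[i] - v0[i]) * fv0[i] for i in range(3))
    return per + [phase]  # 4 equations for the 4 unknowns (u0, tau)

res = bvp_residual([-2.0, 2.0, 15.0], 1.5, [-2.0, 2.0, 15.0])
```

A zero of `bvp_residual` (found, e.g., by Newton's method) is a candidate periodic orbit; the paper's method then validates such a candidate rigorously.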
As for the initial value problem, we can then consider an equivalent integral formulation (possibly with a priori bootstrap) and define an equivalent fixed point operator T, very similar to the one introduced in Section 3. The additional phase condition and the fact that τ is now a variable only require minor modifications of T and of the bounds derived in Section 4 (see for instance [12]).
Using a priori bootstrap, we are able to validate fairly complicated periodic orbits (see Figure 2).

Figure 2: A validated periodic orbit of the Lorenz system, whose period τ is approximately 11.9973. We used two iterations of a priori bootstrap, that is p = 3, for the validation. If we want to minimize the total number of coefficients needed for the validation, we can take k = 3 and m = 602 (which makes 7225 coefficients in total), and we then get a validation radius of 1.5627 × 10^−4. It is possible to get a significantly lower validation radius, at the expense of a slight increase in the total number of coefficients: for instance, with k = 5 and m = 495 (which makes 8911 coefficients in total), we get a validation radius of 4.7936 × 10^−9.

Validation of a connecting orbit
In this section, we present a computer-assisted proof of existence of a connecting orbit in the Lorenz system for the standard parameter values (σ, β, ρ) = (10, 8/3, 28). It is well known that at these parameter values, the Lorenz system admits a transverse connecting orbit between (√(β(ρ − 1)), √(β(ρ − 1)), ρ − 1) and the origin.
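That the stated point is indeed an equilibrium is a one-line check: a short sketch verifying numerically that the Lorenz vector field vanishes at (√(β(ρ − 1)), √(β(ρ − 1)), ρ − 1) for the classical parameters (plain floating point, for illustration only):

```python
import math

def lorenz(u, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """Lorenz vector field at the classical parameters."""
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Nontrivial equilibrium (sqrt(beta*(rho-1)), sqrt(beta*(rho-1)), rho-1)
beta, rho = 8.0 / 3.0, 28.0
c = math.sqrt(beta * (rho - 1.0))
eq = [c, c, rho - 1.0]
residual = max(abs(v) for v in lorenz(eq))  # zero up to rounding
```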
While computer-assisted proofs of connecting orbits have already been investigated several times using topological and analytical approaches (see [18,19,25,36,37,38,39,40,41,42,43,44,45,46,47]), a paper of particular relevance to the present work is [19], where a particular case of our method is developed, with only linear interpolation and no a priori bootstrap (that is, k = 1 and p = 1). While the authors of [19] were able to validate several connecting orbits for the Lorenz system, they could not validate the aforementioned connecting orbit at the standard parameter values. In fact, one of the main motivations for the present work was to improve the setting of [19] in order to validate more complicated orbits.
As showcased in Section 6.1, using a priori bootstrap enables us to validate significantly more complicated orbits for the initial value problem, and this is also true for connecting orbits. Indeed, we are able to validate the standard connecting orbit for the Lorenz system, with parameter values (σ, β, ρ) = (10, 8/3, 28). Before presenting the results, we briefly describe how to modify (2) to handle connecting orbits.
Compared to an initial value problem on a given time interval, or to a periodic orbit, connecting orbits present an additional difficulty: they are defined on an infinite time interval (from −∞ to +∞). To circumvent this difficulty and get back to a time interval of finite length, which is better suited to numerical computations (and to computer-assisted proofs), we use local stable and unstable manifolds of the fixed points. By a computer-assisted method very similar to the one presented here, we first compute and validate local parameterizations of the unstable manifold at (√(β(ρ − 1)), √(β(ρ − 1)), ρ − 1) and of the stable manifold at the origin. Since the main object of this work is not the rigorous computation of these manifolds, we simply assume that they are given (with a validation radius) and do not give more details here. The interested reader can find more information about the computation and validation of these parameterizations in [13,48] and the references therein, and more detailed examples of their usage to obtain connecting orbits in [18,19,11,12].
We denote by P a local parameterization of the stable manifold of the origin, and by Q a local parameterization of the unstable manifold of (√(β(ρ − 1)), √(β(ρ − 1)), ρ − 1). We point out that both manifolds are two-dimensional. We then want to solve (24), where ϕ and θ are each one-dimensional parameters, the parameter in the other dimension being fixed to isolate the solution. Notice that τ is now an unknown of the system. As for the initial value problem, we can then consider an equivalent integral formulation (possibly with a priori bootstrap) and define an equivalent fixed point operator T, very similar to the one introduced in Section 3. The additional equation u(τ) = P(θ) and the fact that we have three additional variables τ, θ and ϕ only require minor modifications of T and of the bounds derived in Section 4 (see for instance [12,19]).
Using p = 3, k = 3 and m = 1150 (that is a total number of 13803 coefficients) we are then able to rigorously compute a solution of (24) (see Figure 3).

Examples of applications for ABC flows
In this section, we apply our method to the non polynomial vector field

φ_{A,B,C}(x, y, z) = (A sin(z) + C cos(y), B sin(x) + A cos(z), C sin(y) + B cos(x)).

Figure 3: Validated connecting orbit for the Lorenz system, with parameters (σ, β, ρ) = (10, 8/3, 28). The local stable manifold of the origin is in blue, the local unstable manifold of (√(β(ρ − 1)), √(β(ρ − 1)), ρ − 1) in yellow, and the green connection between them (of length τ ≈ 17.3) is validated using polynomial interpolation, with a priori bootstrap (p = 3). The proof gives a validation radius of r = 3.1340 × 10^−5.
The map φ_{A,B,C} is usually referred to as the Arnold-Beltrami-Childress (ABC) vector field, and is a prime example of a complex steady incompressible periodic flow in 3D (see [49,50,51] and the references therein).
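Two structural properties of the ABC field are easy to check directly in its standard form: it is 2π-periodic in each coordinate, and it is divergence-free, since each component is independent of the variable against which it is differentiated. A minimal numerical check (the centered differences of the divergence vanish identically, not just approximately, for this reason):

```python
import math

def abc_field(x, y, z, A=1.0, B=1.0, C=1.0):
    """The Arnold-Beltrami-Childress vector field (standard form)."""
    return (A * math.sin(z) + C * math.cos(y),
            B * math.sin(x) + A * math.cos(z),
            C * math.sin(y) + B * math.cos(x))

def divergence(x, y, z, h=1e-5):
    """Centered finite-difference approximation of div(abc_field).
    Each partial derivative is of a component that does not depend on
    the differentiated variable, so every difference is exactly zero."""
    dfx = (abc_field(x + h, y, z)[0] - abc_field(x - h, y, z)[0]) / (2 * h)
    dfy = (abc_field(x, y + h, z)[1] - abc_field(x, y - h, z)[1]) / (2 * h)
    dfz = (abc_field(x, y, z + h)[2] - abc_field(x, y, z - h)[2]) / (2 * h)
    return dfx + dfy + dfz

d = divergence(0.7, -1.2, 2.5)
```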
The main point of this section is to briefly illustrate the applicability of our technique to non polynomial vector fields. We plan on studying ABC flows more thoroughly with the help of our a posteriori validation method in a future work. Recently, the existence of orbits that are periodic up to a shift of 2π in one coordinate has been proven in the cases A = B = C = 1 and 0 < A ≪ 1, B = C = 1 [52,53]. Applying the method developed in this paper, we were able to complete these results by proving the following statements: for each A = 0.1, 0.2, . . . , 1 (with B = C = 1), there exist a time τ (given in Table 5) and a solution (x, y, z) of the ABC flow such that

x(t + τ) = x(t) + 4π, y(t + τ) = y(t), z(t + τ) = z(t), for all t ∈ R, and x(· + τ) ≠ x(·) + 2π.

Proof. The proof is done by running script_proofs_A11.m (available at [56]), which for each A = 0.1, 0.2, . . . , 1 computes an approximate solution, then computes bounds satisfying (11)-(14) as described in Section 4, and finally finds positive r∞ and r such that (15)-(16) hold.
Proof. The proof is done by running script_proofs_111.m (available at [56]), which computes an approximate solution, then computes bounds satisfying (11)-(14) as described in Section 4, and finally finds positive r∞ and r such that (15)-(16) hold.
The solutions given by Theorem 7.1 are represented in Figure 4, and the solution given by Theorem 7.2 is represented in Figure 5.

Figure 5: The orbit described in Theorem 7.2. The proof was done with p = 2, k = 2 and m = 300, and gave a validation radius r = 4.0458 × 10^−6.
Introducing the basis consisting of the Lagrange functions

L_i^k(x) = ∏_{j=0, j≠i}^{k} (x − x_j^k)/(x_i^k − x_j^k), i = 0, . . . , k,

we have that the Lagrange interpolation polynomial of order k is given by

(P_k u)(x) = Σ_{i=0}^{k} u(x_i^k) L_i^k(x).

One can then show (see for instance [34]) that

Σ_{i=0}^{k} |L_i^k(x)| ≤ Λ_k, for all x ∈ [−1, 1],

and therefore we get

sup_{x ∈ [−1,1]} |(P_k u)(x)| ≤ Λ_k sup_{x ∈ [−1,1]} |u(x)|,

which is exactly (18). Since we used (18) and Proposition 3.2 several times in Section 4, the bounds that we obtained there depend on the Lebesgue constant Λ_k. Therefore we need computable (and as sharp as possible) upper bounds for Λ_k. One possibility is to use the well known bound (again, see for instance [34])

Λ_k ≤ 1 + (2/π) ln(k + 1).
However, we can do better, at least when k is odd. In that case, an explicit formula for Λ_k has been derived (see [54]), and this formula can be evaluated rigorously using interval arithmetic. Unfortunately, there is no such formula for even k. For small even values (k = 2 and k = 4) we computed Λ_k by hand using (26), and for k ≥ 6 we used (27) (it is also known that Λ_k ∼ (2/π) ln(k), therefore (27) is sharp for large k).
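For a quick non-rigorous cross-check of these bounds, the Lebesgue constant for the Chebyshev points of the second kind, x_l^k = cos(lπ/k), can be estimated by maximizing the Lebesgue function Σ_i |L_i^k(x)| over a dense grid (this yields a lower bound on the true sup, so it must stay below the upper bound 1 + (2/π) ln(k + 1) quoted above):

```python
import math

def cheb2_nodes(k):
    """Chebyshev interpolation points of the second kind on [-1, 1]."""
    return [math.cos(l * math.pi / k) for l in range(k + 1)]

def lebesgue_constant(k, grid=4001):
    """Grid-based estimate (a lower bound) of the Lebesgue constant
    Lambda_k = max_{x in [-1,1]} sum_i |L_i^k(x)|."""
    xs = cheb2_nodes(k)
    best = 0.0
    for j in range(grid):
        x = -1.0 + 2.0 * j / (grid - 1)
        s = 0.0
        for i in range(k + 1):
            li = 1.0
            for l in range(k + 1):
                if l != i:
                    li *= (x - xs[l]) / (xs[i] - xs[l])
            s += abs(li)
        best = max(best, s)
    return best

lam = {k: lebesgue_constant(k) for k in (1, 2, 3, 6)}
```

For k = 1 (linear interpolation) the Lebesgue function is identically 1, which matches the role of Λ_1 in the p = 1, k = 1 experiments.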
We now turn our attention to the interpolation estimates of Section 3. We point out that the analogue of Proposition 3.1 for the Chebyshev interpolation points of the first kind is very standard, and can be found in many textbooks. However, the case of the Chebyshev interpolation points of the second kind is seldom discussed, therefore we include a proof here for the sake of completeness (which is nothing but a slight adaptation of the standard proof).
Proof of Proposition 3.1. We consider the polynomial

W_k(x) = ∏_{l=0}^{k} (x − x_l^k),

and use the standard interpolation error estimate for a function u ∈ C^{k+1} (see for instance [55]),

|u(x) − (P_k u)(x)| ≤ (‖u^{(k+1)}‖_∞ / (k + 1)!) |W_k(x)|.   (25)

To prove Proposition 3.1, we only have to show that ‖W_k‖_∞ ≤ 1/2^{k−1} (the remaining factor 1/2^{k+1} coming from the rescaling). Introducing, for k ∈ N, the k-th Chebyshev polynomial of the second kind U_k, defined by U_k(cos(θ)) = sin(kθ)/sin(θ), we have that

W_k(x) = (1/2^{k−1}) (x^2 − 1) U_k(x).   (28)

Indeed, the right-hand side of (28) is a unitary polynomial of degree k + 1 that has the same zeros as W_k. We can then rewrite, with x = cos(θ),

|W_k(cos(θ))| = (1/2^{k−1}) |sin(θ) sin(kθ)| ≤ 1/2^{k−1},

which gives the announced bound.

Proof of Proposition 3.2. The first part of the bound comes from a combination of the Lebesgue constant and Jackson's Theorem, and can be found in [55]. However, it does not give a very sharp interpolation error estimate for small values of k and l, therefore we derive here the second part of the bound, namely the one involving the binomial coefficient (2q choose q), that can be used in those cases. Letting u(x) = x^p (with p ∈ {0, . . . , k}) in (25) leads to

Σ_{i=0}^{k} (x_i^k)^p L_i^k(x) = x^p, for all p ∈ {0, . . . , k}.   (29)

We now fix a function u ∈ C^l. Using (29) with p = 0 (that is, 1 = Σ_{i=0}^{k} L_i^k(x)), we get

u(x) − (P_k u)(x) = Σ_{i=0}^{k} (u(x) − u(x_i^k)) L_i^k(x).

Using Taylor's formula, we then get

u(x_i^k) = Σ_{p=0}^{l−1} (u^{(p)}(x)/p!) (x_i^k − x)^p + (u^{(l)}(y_i)/l!) (x_i^k − x)^l,

for some y_i in [−1, 1]. Then, expanding the (x_i^k − x)^p terms and using again (29), we get that only the remainder terms survive:

u(x) − (P_k u)(x) = −Σ_{i=0}^{k} (u^{(l)}(y_i)/l!) (x_i^k − x)^l L_i^k(x).

Denoting by λ_i^k the barycentric weights associated with the nodes x_i^k, we can easily observe that L_i^k(x) = λ_i^k W_k(x)/(x − x_i^k), and therefore

|u(x) − (P_k u)(x)| ≤ (‖u^{(l)}‖_∞ / l!) Σ_{i=0}^{k} |λ_i^k| |x_i^k − x|^{l−1} |W_k(x)|.

According to [34], in case the points x_i^k are the Chebyshev interpolation points of the second kind, the weights λ_i^k are known explicitly. The function appearing in the resulting estimate is even and increasing on [0, 1], therefore its maximum is reached at x = 1; evaluating there and computing the remaining sum, we end up with the announced estimate, and Proposition 3.2 is proven (the lacking 1/2^l factor coming from the time rescaling).
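The estimate ‖W_k‖_∞ ≤ 1/2^{k−1} for the Chebyshev points of the second kind is easy to spot-check numerically (a plain floating-point grid search, for illustration only): the scaled quantity 2^{k−1} max|W_k| stays below 1, with equality attained at x = 0 when k is odd, since then |sin(θ) sin(kθ)| = 1 at θ = π/2:

```python
import math

def nodal_sup_norm(k, grid=20001):
    """Grid estimate of max over [-1,1] of |W_k(x)|, where
    W_k(x) = prod_{l=0}^k (x - x_l^k) and x_l^k = cos(l*pi/k)."""
    xs = [math.cos(l * math.pi / k) for l in range(k + 1)]
    best = 0.0
    for j in range(grid):
        x = -1.0 + 2.0 * j / (grid - 1)
        p = 1.0
        for xl in xs:
            p *= (x - xl)
        best = max(best, abs(p))
    return best

# Scaled sup norms 2^(k-1) * max|W_k|: bounded by 1, equal to 1 for odd k
scaled = {k: nodal_sup_norm(k) * 2 ** (k - 1) for k in (2, 3, 4, 5)}
```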