A 2ND-ORDER ONE-POINT NUMERICAL INTEGRATION SCHEME FOR FRACTIONAL ORDINARY DIFFERENTIAL EQUATIONS

Abstract. In this paper we propose an efficient and easy-to-implement numerical method for an α-th order Ordinary Differential Equation (ODE) when α ∈ (0, 1), based on a one-point quadrature rule. The quadrature point in each sub-interval of a given partition with mesh size h is chosen judiciously so that the degree of accuracy of the quadrature rule is 2 in the presence of the singular integral kernel. The resulting time-stepping method can be regarded as the counterpart for fractional ODEs of the well-known mid-point method for 1st-order ODEs. We show that the global error in a numerical solution generated by this method is of the order O(h²), independently of α. Numerical results are presented to demonstrate that the computed rates of convergence match the theoretical one very well and that our method is much more accurate than a well-known one-step method when α is small.


1. Introduction. Modelling and optimal control of many practical systems in engineering, science and economics traditionally involve Ordinary Differential Equation (ODE) systems of integer orders [2,24,25,27,28,29,30]. While integer order ODE systems are adequate for capturing the evolution of most standard phenomena, it has been shown over the last two decades that many complex systems in solid mechanics, viscoelasticity, gas diffusion and heat conduction in porous media, signal and image processing, bio-engineering, biology, economics and financial engineering are better modelled by systems with fractional or non-integer-order differential equations (cf., for example, [3,4,5,6,7,8,22,23,26]). In particular, complex phenomena involving memory effects can be modelled more appropriately and accurately by fractional dynamical systems than by classical (integer) ones. As it is very rare that a system of practical significance can be solved analytically, one needs to be able to solve the system numerically. Clearly, an accurate and computationally efficient numerical scheme is crucial for solving fractional ODEs. This is particularly true when solving an optimal control problem involving such a system, as an iterative computational procedure for computing the optimal control requires solving the system repeatedly.
We consider the following fractional initial value problem:

0D_t^α x(t) = f(t, x(t)), t ∈ (0, T],    (1)
x(0) = x_0,    (2)

where x_0 and T are positive constants, f is a known function and 0D_t^α x(t) denotes the following Caputo α-th order derivative of x(t) in (0, T] with α ∈ (0, 1):

0D_t^α x(t) = (1/Γ(1 − α)) ∫_0^t (t − τ)^{−α} x′(τ) dτ.

In the above, Γ(·) denotes the Gamma function. Higher-order fractional initial value problems can be transformed into a system of fractional initial value problems of the form (1)-(2), and any efficient numerical method developed for (1)-(2) can be extended to a vector-valued initial value problem. There is also another representation of the α-th derivative, called the Riemann-Liouville fractional derivative. However, initial value problems involving the Riemann-Liouville fractional derivative can be readily transformed into (1)-(2), as demonstrated in [10,14]. It has been proven [10,14,16] that solving (1)-(2) is equivalent to solving the following Volterra integral equation:

x(t) = x_0 + (1/Γ(α)) ∫_0^t (t − τ)^{α−1} f(τ, x(τ)) dτ.    (3)

In the open literature, there are four main numerical methods for solving (3): Euler's method [20], an Adams-type predictor-corrector method [11,12,13,19,20], the p-th order method [20,21] and the block-by-block method [1,15,17]. Euler's method is simple, but its convergence order is only O(h^α), where h denotes the mesh size of a uniform partition of (0, T). The Adams-type predictor-corrector method was first proposed by Diethelm et al. [11], who showed that the convergence order of this method is O(h^{1+α}). Based on the work of Diethelm et al., Deng and Li [9] have developed another, improved predictor-corrector method. They proved that the order of convergence of the improved version is O(h^{min(1+2α,2)}). Both of the schemes in [9] and [11] are single step methods. The p-th order and block-by-block methods have convergence rates of orders O(h³) and O(h^{3+α}), respectively. However, these methods are linear multistep methods and thus computationally much more expensive than single step methods.
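For orientation, the O(h^α) Euler scheme for the Volterra form (3) can be sketched as a product rectangle rule on a uniform mesh. This is a minimal illustration in our own notation (function names are ours, not taken from [20]); it uses the closed-form value of ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1} dτ for each sub-interval.

```python
import math

def frac_euler(f, x0, alpha, T, N):
    """Product rectangle (Euler) rule for the Volterra equation (3):
    x_i = x0 + (1/Gamma(alpha+1)) * sum_j w_ij * f(t_{j-1}, x_{j-1}),
    with w_ij = (t_i - t_{j-1})^alpha - (t_i - t_j)^alpha."""
    h = T / N
    g = math.gamma(alpha + 1.0)
    x = [x0]
    for i in range(1, N + 1):
        ti = i * h
        s = 0.0
        for j in range(1, i + 1):
            # Exact integral of the kernel over [t_{j-1}, t_j], times f frozen
            # at the left end point of the sub-interval.
            w = (ti - (j - 1) * h) ** alpha - (ti - j * h) ** alpha
            s += w * f((j - 1) * h, x[j - 1])
        x.append(x0 + s / g)
    return x

# Sanity check: for f = 1 the rule is exact, since the kernel weights
# telescope and the exact solution is x(t) = x0 + t^alpha / Gamma(1+alpha).
xs = frac_euler(lambda t, x: 1.0, 0.0, 0.5, 1.0, 16)
```

For a non-constant f the scheme is only first-order accurate at best (and O(h^α) in general), which is the baseline the method of this paper improves on.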
Therefore, a question that arises is whether it is possible to design a single step method for (3) with an upper error bound better than O(h^{min(1+2α,2)}) when α < 0.5. In the integer case α = 1, the mid-point one step method has an upper error bound of order O(h²). This method takes advantage of the property that the mid-point quadrature rule yields a 'superconvergence' point for numerical integration, i.e., the mid-point is the only point in an interval at which the one-point quadrature rule gives the exact integral when the integrand is a linear polynomial. Clearly, this superconvergence property of the mid-point quadrature rule does not hold true for integrals of the type (3), because the kernel becomes singular as τ approaches t. Thus, the question of what is the counterpart of the classical implicit mid-point numerical integration method for (3) arises.
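The superconvergence property referred to here is easy to check numerically for α = 1. The snippet below (a small illustration of ours, not code from the paper) shows that the one-point rule is exact for a linear integrand only at the mid-point.

```python
# For a linear integrand g(t) = c0 + c1*t on [a, b], the one-point rule
# (b - a) * g(p) is exact only when p is the mid-point (a + b)/2.
def one_point(g, a, b, p):
    return (b - a) * g(p)

c0, c1, a, b = 2.0, 3.0, 0.0, 1.0
g = lambda t: c0 + c1 * t
exact = c0 * (b - a) + c1 * (b * b - a * a) / 2.0  # exact integral of g

mid_val = one_point(g, a, b, (a + b) / 2.0)   # exact for any c0, c1
off_val = one_point(g, a, b, 0.25)            # not exact once c1 != 0
```

For the singular kernel in (3) the optimal point shifts away from the mid-point, which is precisely what Theorem 2.1 below quantifies.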
In this work, we design a one-step numerical method for (3) which is easy to implement, computationally inexpensive, and results in a global error of order O(h 2 ) for any α ∈ (0, 1). This method can be regarded as the counterpart for fractional ODEs of the implicit mid-point numerical integration method for first-order ODEs. In this method, we choose a numerical quadrature point in each of the sub-intervals of a given partition judiciously so that the local approximation error is of a higher order than that from the conventional one-point quadrature rule. This is the counterpart of the mid-point quadrature rule in conventional numerical integration. Based on this special numerical quadrature rule, we develop a one-step numerical integration method for (3) and prove that the global error in the numerical solutions generated by this method is of order O(h 2 ).
The rest of the paper is organized as follows. In Section 2, we propose an approximation of (3) based on a Taylor expansion. An error analysis of the approximation is also presented. In Section 3, we propose an algorithm for implementing the approximate equation and analyse its convergence. In Section 4, we apply our method to several fractional ODEs with known exact solutions to confirm the theoretical error estimate and demonstrate the superiority of our method over some of the existing ones. Section 5 concludes the paper.

2. Approximation. For a given positive integer N, we first divide (0, T] into N sub-intervals with mesh points t_i = ih for i = 0, 1, . . . , N, where h = T/N. Thus, (3) can be written as follows:

x(t_i) = x_0 + (1/Γ(α)) Σ_{j=1}^{i} ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1} f(τ, x(τ)) dτ, i = 1, 2, . . . , N.    (4)
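Note that the kernel integral over each sub-interval has the closed form ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1} dτ = [(t_i − t_{j−1})^α − (t_i − t_j)^α]/α, and that these weights telescope to t_i^α/α. A quick check of this fact (our own illustration, with names of our choosing):

```python
def kernel_weight(i, j, h, alpha):
    """Exact integral of (t_i - tau)^(alpha-1) over [t_{j-1}, t_j]."""
    ti = i * h
    return ((ti - (j - 1) * h) ** alpha - (ti - j * h) ** alpha) / alpha

alpha, h, i = 0.3, 0.1, 7
total = sum(kernel_weight(i, j, h, alpha) for j in range(1, i + 1))
# The sum telescopes to t_i^alpha / alpha = (0.7**0.3) / 0.3.
```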
We now consider the approximation of the integral on the right-hand side of (4). Assume that f(t, x(t)) is twice continuously differentiable with respect to both t and x. For j = 1, 2, . . . , i, we apply Taylor's theorem to f(τ, x(τ)) at τ_ij to get

f(τ, x(τ)) = f(τ_ij, x(τ_ij)) + (d/dτ)f(τ, x(τ))|_{τ=τ_ij} (τ − τ_ij) + (c_ij/2)(τ − τ_ij)²,    (5)

where c_ij is a constant representing the 2nd derivative of f at a point between τ_ij and τ. Therefore, replacing f(τ, x(τ)) in the integrand of the last term in (4) with the RHS of (5), we have, for any j = 1, . . . , i,

∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1} f(τ, x(τ)) dτ = f(τ_ij, x(τ_ij)) ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1} dτ + (d/dτ)f(τ, x(τ))|_{τ=τ_ij} ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1}(τ − τ_ij) dτ + R_ij,    (6)

where

R_ij = (c_ij/2) ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1}(τ − τ_ij)² dτ.

We now consider the choice of τ_ij. From (6) it is clear that τ_ij should be chosen such that the second term on the RHS vanishes, so that the truncation error in (6) is just R_ij. The choice of τ_ij is given in the following theorem.
Theorem 2.1. For any given i = 1, 2, . . . , N and j = 1, 2, . . . , i, the unique solution τ_ij ∈ (t_{j−1}, t_j) to

∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1}(τ − τ_ij) dτ = 0    (7)

is given by

τ_ij = t_i − (α/(α + 1)) · [(t_i − t_{j−1})^{α+1} − (t_i − t_j)^{α+1}] / [(t_i − t_{j−1})^α − (t_i − t_j)^α].    (8)

Proof. We first integrate the LHS of (7) using integration by parts as follows:

∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1}(τ − τ_ij) dτ = (t_i − τ_ij)[(t_i − t_{j−1})^α − (t_i − t_j)^α]/α − [(t_i − t_{j−1})^{α+1} − (t_i − t_j)^{α+1}]/(α + 1).

From the above, we have that the LHS of (7) vanishes if and only if t_i − τ_ij = (α/(α + 1))[(t_i − t_{j−1})^{α+1} − (t_i − t_j)^{α+1}]/[(t_i − t_{j−1})^α − (t_i − t_j)^α]. Solving this for τ_ij, we get (8).
To prove τ_ij − jh < 0, we only need to show that t_i − τ_ij > t_i − t_j. Using the mean value theorem, we have

(t_i − t_{j−1})^{α+1} − (t_i − t_j)^{α+1} = (α + 1)ξ^α h and (t_i − t_{j−1})^α − (t_i − t_j)^α = αη^{α−1} h

for some ξ, η ∈ (t_i − t_j, t_i − t_{j−1}), so that t_i − τ_ij = ξ^α η^{1−α}. Since t_i − t_j = (t_i − t_j)^α (t_i − t_j)^{1−α} < ξ^α η^{1−α} < (t_i − t_{j−1})^α (t_i − t_{j−1})^{1−α} = t_i − t_{j−1}, it follows that τ_ij ∈ (t_{j−1}, t_j).

Combining Theorem 2.1 and (4), we have the following representation for x(t_i):

x(t_i) = x_0 + (1/Γ(1 + α)) Σ_{j=1}^{i} [(t_i − t_{j−1})^α − (t_i − t_j)^α] f(τ_ij, x(τ_ij)) + R_i,    (9)

where R_i = (1/Γ(α)) Σ_{j=1}^{i} R_ij. Omitting R_i in (9), we have an equation approximating (4) which has the truncation error R_i. An upper bound for R_i is given in the following theorem.
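Formula (8) can be checked directly. The sketch below (our own illustration) evaluates τ_ij, confirms it lies in (t_{j−1}, t_j), verifies the orthogonality condition (7) by dense composite mid-point quadrature on a non-singular sub-interval (j < i, so the kernel is smooth there), and confirms that (8) degenerates to the classical mid-point when α = 1.

```python
def tau_ij(i, j, h, alpha):
    """Quadrature point from (8), written with a = t_i - t_j, b = t_i - t_{j-1}."""
    a, b = (i - j) * h, (i - j + 1) * h
    return i * h - (alpha / (alpha + 1.0)) * \
        (b ** (alpha + 1) - a ** (alpha + 1)) / (b ** alpha - a ** alpha)

alpha, h, i, j = 0.3, 0.1, 5, 2
t = tau_ij(i, j, h, alpha)
lo, hi = (j - 1) * h, j * h          # tau_ij must lie in (t_{j-1}, t_j)

# Check (7): integral of (t_i - s)^(alpha-1) * (s - tau_ij) over [t_{j-1}, t_j],
# approximated by a dense composite mid-point rule.
n = 200000
hq = (hi - lo) / n
ti = i * h
integral = sum((ti - (lo + (k + 0.5) * hq)) ** (alpha - 1)
               * ((lo + (k + 0.5) * hq) - t) for k in range(n)) * hq

mid_limit = tau_ij(i, j, h, 1.0)     # alpha = 1 should recover (t_{j-1}+t_j)/2
```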
Theorem 2.2. Assume that f(t, x) is twice continuously differentiable in t and x. Then, for any i = 1, 2, . . . , N, the following estimate holds:

|R_i| ≤ Ch²,    (10)

where C denotes a positive constant independent of h and α.
Proof. For j = 1, 2, . . . , i, from the definition of R_ij and Theorem 2.1 we have

|R_ij| ≤ Ch² ∫_{t_{j−1}}^{t_j} (t_i − τ)^{α−1} dτ = Ch²[(t_i − t_{j−1})^α − (t_i − t_j)^α]/α.

Then, from the definition of R_i and the above estimate, we have

|R_i| ≤ (1/Γ(α)) Σ_{j=1}^{i} |R_ij| ≤ Ch² t_i^α/Γ(1 + α) ≤ Ch² T^α/Γ(1 + α).

Since α ∈ (0, 1), from [18] we have 2^{α−1} ≤ Γ(1 + α) ≤ 1. Also, it is obvious that T^α ≤ max{1, T} for α ∈ (0, 1). Therefore, from the above estimates we see that (10) holds true for a C independent of both h and α. Thus, we have proved the theorem.
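The bound 2^{α−1} ≤ Γ(1 + α) ≤ 1 quoted here from [18] can be spot-checked numerically on a grid (our own illustration):

```python
import math

# Spot-check 2^(alpha-1) <= Gamma(1+alpha) <= 1 for alpha on a grid in (0, 1).
# Gamma(1+alpha) dips to about 0.8856 near alpha = 0.46 and equals 1 at both ends.
ok = all(2.0 ** (a - 1.0) <= math.gamma(1.0 + a) <= 1.0
         for a in [k / 100.0 for k in range(1, 100)])
```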
From (9) it is clear that to compute x(t i ), we need to calculate f (τ ij , x(τ ij )), j = 1, 2, . . . , i. However, x(τ ij ), j = 1, . . . , i, are not available directly from the scheme, although the τ ij are known. Thus, approximations for x(τ ij ), j = 1, . . . , i, need to be determined. In the next section, we propose a single step numerical scheme for implementing (9) when the remainder R i is omitted.
3. Algorithm and convergence. In this section, we propose an explicit time-stepping algorithm for approximating the solution of (9) when R_i is omitted, based on the linearization of the term in (9) that is nonlinear in x(t_i).
For any indices i and j satisfying 1 ≤ j ≤ i ≤ N, since τ_ij ∈ (t_{j−1}, t_j) by Theorem 2.1, we use the following linear interpolation to approximate x(τ_ij):

x(τ_ij) ≈ x_{j−1} + ρ_ij(x_j − x_{j−1}),    (11)

where

ρ_ij = (τ_ij − t_{j−1})/h ∈ (0, 1).    (12)

Then, we approximate f(τ_ij, x(τ_ij)) as follows:

f(τ_ij, x(τ_ij)) ≈ f(τ_ij, x_{j−1} + ρ_ij(x_j − x_{j−1})).    (13)

Replacing f(τ_ij, x(τ_ij)) in (9) with the above expression and omitting the truncation error terms of order O(h²), we define the following single step time-stepping scheme for (3):

x_i = x_0 + (1/Γ(1 + α)) Σ_{j=1}^{i} [(t_i − t_{j−1})^α − (t_i − t_j)^α] f(τ_ij, x_{j−1} + ρ_ij(x_j − x_{j−1}))    (14)

for i = 1, 2, . . . , N, where τ_ij is defined by (8). The above scheme is implicit, as the last term on the RHS of (14) contains a nonlinear function of x_i. We can use an iterative method such as the conventional Newton's method to solve (14), which is usually computationally more expensive than a predictor-corrector process. However, for this case we may define an explicit single step scheme by further approximating the last term in (14) by the following truncated Taylor expansion:

f(τ_ii, x_{i−1} + ρ_ii(x_i − x_{i−1})) ≈ f(τ_ii, x_{i−1}) + ρ_ii(x_i − x_{i−1}) f_x(τ_ii, x_{i−1}).    (17)

Combining (14) and (17), we propose the following explicit one-step algorithm for (3).
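Based on (8), the interpolation (11)-(13), the scheme (14) and the linearization (17), a minimal sketch of the resulting explicit one-step method might look as follows. The function names are our own, the weights use the closed form (t_i − t_{j−1})^α − (t_i − t_j)^α of the kernel integrals, and the partial derivative f_x is assumed to be supplied analytically; the j = i term is linearized and the resulting linear equation is solved for x_i.

```python
import math

def tau_ij(i, j, h, alpha):
    # Quadrature point from (8), with a = t_i - t_j and b = t_i - t_{j-1}.
    a, b = (i - j) * h, (i - j + 1) * h
    return i * h - (alpha / (alpha + 1.0)) * \
        (b ** (alpha + 1) - a ** (alpha + 1)) / (b ** alpha - a ** alpha)

def algorithm_a(f, fx, x0, alpha, T, N):
    """Explicit one-step scheme: the j = i term of (14) is replaced by the
    truncated Taylor expansion (17), then the linear equation is solved for x_i."""
    h = T / N
    g = math.gamma(alpha + 1.0)
    x = [x0]
    for i in range(1, N + 1):
        ti = i * h
        s = 0.0
        for j in range(1, i):                       # history terms, j < i
            tj = tau_ij(i, j, h, alpha)
            rho = (tj - (j - 1) * h) / h
            w = (ti - (j - 1) * h) ** alpha - (ti - j * h) ** alpha
            s += w * f(tj, x[j - 1] + rho * (x[j] - x[j - 1]))
        tii = tau_ij(i, i, h, alpha)                # current interval, j = i
        rho_ii = (tii - (i - 1) * h) / h            # equals 1/(alpha + 1)
        w_ii = h ** alpha
        d = fx(tii, x[i - 1])
        num = x0 + (s + w_ii * (f(tii, x[i - 1]) - rho_ii * x[i - 1] * d)) / g
        den = 1.0 - w_ii * rho_ii * d / g
        x.append(num / den)
    return x

# Trial run on Example 3 of Section 4: D^alpha x = Gamma(4+alpha)/6 * t^3
# + t^(4(3+alpha)) - x^4, x(0) = 0, with exact solution x(t) = t^(3+alpha).
alpha = 0.5
f = lambda t, x: math.gamma(4 + alpha) / 6.0 * t ** 3 + t ** (4 * (3 + alpha)) - x ** 4
fx = lambda t, x: -4.0 * x ** 3
err = {}
for N in (20, 40):
    xs = algorithm_a(f, fx, 0.0, alpha, 1.0, N)
    err[N] = max(abs(xs[i] - (i / N) ** (3 + alpha)) for i in range(N + 1))
```

Halving h should roughly quarter the maximum error if the method is second order, which is what the assertions below loosely check.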
As will be seen later, Algorithm A provides an efficient and stable 2nd-order method for (3). Strictly speaking, Algorithm A is a multiple step method. This is because the fractional derivative is a global operator and thus all x_j, j < i, are needed in order to evaluate x_i in Algorithm A. However, since the last two terms in the numerator on the RHS of (18) only involve x_{i−1}, we still call it a one-step method. Also, unlike the case of the explicit mid-point method for first-order ODEs, it is generally very hard to construct a one-step explicit or predictor-corrector method based on (9). This is because the construction of such a scheme usually requires a fractional Taylor expansion which typically has a truncation error of order O(h^{nα}) for some positive integer n. Clearly, nα → 0 as α approaches 0.
Using linear interpolation theory and Taylor's theorem, we are able to prove that, for any i = 1, 2, . . . , N , x i generated by Algorithm A converges to x(t i ) at the rate O(h 2 ) when h → 0 + . This is formalized in the following theorem.
Theorem 3.1. Let x(t) be the exact solution of (3)/(4). If f(t, x) is twice continuously differentiable in t and x, then there exists an h̄ > 0 such that h_α < h̄ implies

max_{1≤i≤N} |x(t_i) − x_i| ≤ Ch²,    (19)

where {x_i} is the sequence generated by Algorithm A, C is a positive constant, independent of h and α, and h_α is defined in (15).
Proof. In what follows, we let C denote a generic positive constant, independent of h. We prove this theorem by mathematical induction. Firstly, we show that (19) holds for i = 1. By (9), we have a representation of x(t_1). Furthermore, from (11) and (13), we have a corresponding equation in x_1, where ρ_11 is defined in (12). Solving this equation for x(t_1) and using (18) with i = 1, we obtain the required bound on |x(t_1) − x_1|.
We now consider the case of i ≥ 2. Assume that the estimate holds for all indices up to i − 1 whenever h_α ≤ h̄; we now show that it then also holds for the index i. Using (13) and (17), we obtain from (9) and (10) a representation of x(t_i). Note that (18) can be re-written in the following form.
Subtracting (26) from (23) gives an equation for the error x(t_i) − x_i. Let us first estimate B_i − B̃_i. From (25) and (28), and noting that f is twice continuously differentiable, a Taylor expansion together with (30), (31) and Assumption (22) yields an upper bound on |B_i − B̃_i|.

To estimate |x(t_i) − x_i|, we also need to estimate |A_{i−1} − Ã_{i−1}|. To simplify our notation, we let x_ij = x_{j−1} + ρ_ij(x_j − x_{j−1}), and we use either the RHS or the LHS of (11) to represent the point x(τ_ij). From the definitions of A_{i−1} and Ã_{i−1} in (24) and (27), respectively, and using (11), we can bound |A_{i−1} − Ã_{i−1}| by a sum involving the differences |f(τ_ij, x(τ_ij)) − f(τ_ij, x_ij)|, since z^α is an increasing function of z for α ∈ (0, 1). Because f is twice continuously differentiable and ρ_ij ∈ (0, 1), each of these differences admits an upper bound, by Assumption (22), that is of the required order (recall that C is a generic positive constant, independent of h). Replacing |f(τ_ij, x(τ_ij)) − f(τ_ij, x_ij)| in (33) with this upper bound and combining the result with (32), we conclude that (19) holds for the index i whenever h_α ≤ h̄, where h̄ is defined in (21).

Careful readers may have noticed that the positive constant C used in the above proof is independent of h but depends on T^α/Γ(1 + α). However, in the proof of Theorem 2.2 we have shown that T^α/Γ(1 + α) is bounded above by a positive constant independent of α. Thus, we have proved the theorem.
We comment that in the above theorem we have established an upper error bound of order O(h 2 ) for the numerical approximation to (1)-(2) generated by our proposed single-step Algorithm A, while all of the existing single-step methods proposed in [9,11,12,13,19,20] have the drawback that their rates of convergence approach O(h) as α decreases.
We also comment that Algorithm A is a linearized form of our implicit method (14). This linearization does not affect the O(h 2 )-order rate of convergence of (14). In other words, our explicit method represented in Algorithm A performs only one Newton iteration for (14) as performing more Newton iterations will not increase the accuracy of the numerical method due to the discretization errors.
4. Numerical Results. In this section, we solve three examples using our proposed method.

Example 1. Consider the fractional differential equation test problem from [9], whose exact solution is known in closed form. It is solved by Algorithm A for various values of α and mesh sizes h. The computed errors E_{h_k} = max_{1≤i≤1/h_k} |x_i − x(t_i)| for h_k = 1/(2^k × 10), k = 0, 1, . . . , 6, and the chosen values of α are listed in Table 1.
To estimate the rates of convergence, we calculate log_2(E_{h_k}/E_{h_{k+1}}) for k = 0, 1, . . . , 5; the computed rates of convergence are also listed in Table 1. From the table we see that the computed rates of convergence are very close to the theoretical one in Theorem 3.1. In Table 1, we also compare our results with those obtained by the method in [9]. The latter is a single-step predictor-corrector method with a theoretical rate of convergence of order O(h^{min(1+2α,2)}). The rates of convergence of our method are much higher than those of the method in [9] for α = 0.1. From the table we also see that the absolute errors from our method are much smaller than those in [9] unless α is close to 1, in which case classical (integer-order) methods are also applicable. Clearly, our proposed method is superior to that in [9]. In addition, one can expect a predictor-corrector method to be computationally more expensive than our method.
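The rate computation log_2(E_{h_k}/E_{h_{k+1}}) can be illustrated in isolation on the quadrature part of the scheme. In the sketch below (our own test case, not one of the paper's examples), the one-point rule with the point (8) is applied to ∫_0^1 (1 − τ)^{α−1} τ³ dτ = 6Γ(α)/Γ(4 + α), and the observed rates approach 2 as the partition is refined.

```python
import math

def tau_ij(i, j, h, alpha):
    # Quadrature point from (8).
    a, b = (i - j) * h, (i - j + 1) * h
    return i * h - (alpha / (alpha + 1.0)) * \
        (b ** (alpha + 1) - a ** (alpha + 1)) / (b ** alpha - a ** alpha)

def quad(alpha, N, g):
    """One-point rule per sub-interval for integral_0^1 (1-tau)^(alpha-1) g(tau) dtau."""
    h = 1.0 / N
    total = 0.0
    for j in range(1, N + 1):
        # Exact kernel weight on [t_{j-1}, t_j]; written via (N - j) to keep
        # the singular end point exactly zero in floating point.
        w = (((N - (j - 1)) * h) ** alpha - ((N - j) * h) ** alpha) / alpha
        total += w * g(tau_ij(N, j, h, alpha))
    return total

alpha = 0.5
exact = 6.0 * math.gamma(alpha) / math.gamma(4.0 + alpha)   # Beta(4, alpha)
errs = [abs(quad(alpha, N, lambda t: t ** 3) - exact) for N in (10, 20, 40, 80)]
rates = [math.log2(errs[k] / errs[k + 1]) for k in range(3)]
```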
From Table 1 we see that when α is close to zero, the computed rate of convergence is slightly worse than O(h²). This may be because the constant C in (19) contains the term 1/Γ(1 + α), as noted in the previous section. However, as h decreases, the rate of convergence of our method increases.
Example 2. Consider a fractional differential equation from [1] whose exact solution is x(t) = t^{3+α}. It is solved using Algorithm A for various values of h and α. The computed errors and rates of convergence are listed in Table 2, from which we see that the computed rates of convergence are close to the theoretical one in Theorem 3.1.

Example 3. Consider the following fractional differential equation:

0D_t^α x(t) = (Γ(4 + α)/6) t³ + t^{4(3+α)} − x⁴(t), t ∈ (0, 1], x(0) = 0.

The exact solution is also x(t) = t^{3+α}. Note that the RHS of the above equation is nonlinear in both t and x. This example is solved using Algorithm A for various values of h and α, and the computed errors and rates of convergence are listed in Table 3. From the table we see that the computed order of convergence is greater than 2 when α is small.

To summarise, from Tables 1-3 we see that, although the computed convergence rates fluctuate for different values of α and h, when h_α is small enough the convergence order is 2, confirming our theoretical analysis. This can also be seen from the last row of each of the tables, corresponding to h = 1/1280: all the errors are of the magnitude h² ≈ 6 × 10⁻⁷. To check the robustness of our method in α, we have also solved Example 3 for α = 10^{−i}, i = 1, . . . , 8, and the computed results show that the orders of convergence are roughly 2, particularly when h is small. The robustness of our method may provide an effective way for solving problems with algebraic constraints, i.e. differential-algebraic equations. We will discuss this in a future paper.

5. Conclusion. In this paper, we proposed a new numerical method based on Taylor's theorem and linear interpolation for solving fractional differential equations. The proposed method is simple and easy to use. We have proved that the convergence order of the method is 2. The numerical results confirm our theoretical analysis.