Solving the inverse problem for an ordinary differential equation using conjugation

We consider the following inverse problem for an ordinary differential equation (ODE): given a set of data points $P=\{(t_i,x_i),\; i=1,\dots,N\}$, find an ODE $x^\prime(t) = v (x)$ that admits a solution $x(t)$ such that $x_i \approx x(t_i)$ as closely as possible. The key to the proposed method is to find approximations of the recursive or discrete propagation function $D(x)$ from the given data set. Afterwards, we determine the field $v(x)$, using the conjugate map defined by Schr\"{o}der's equation and the solution of a related Julia's equation. Moreover, our approach also works for the inverse problems where one has to determine an ODE from multiple sets of data points. We also study existence, uniqueness, stability and other properties of the recovered field $v(x)$. Finally, we present several numerical methods for the approximation of the field $v(x)$ and provide some illustrative examples of the application of these methods.


Introduction
In this paper we solve the problem of determining an ordinary differential equation (ODE) from given data using conjugacy methods.
In that sense, from a set of data points $(t_i, x_i)$, $i = 1, \dots, N$, we find an ODE $x'(t) = v(x(t))$ that admits a solution $x(t)$ such that $x_i \approx x(t_i)$ as closely as possible. More generally, given multiple sets of data points $P^{(k)} = \{(t_i^{(k)}, x_i^{(k)}),\; i = 1, \dots, N^{(k)}\}$, $k = 1, \dots, M$, we obtain an ODE admitting solutions $x^{(k)}(t)$ such that $x_i^{(k)} \approx x^{(k)}(t_i^{(k)})$ as closely as possible.
The key to the proposed method consists in finding an approximation of the discrete propagation function D from the given data. Afterwards, we determine the field v(x), using the conjugate map defined by Schröder's equation and the solution of a related Julia's equation.
Explicit solutions of Schröder's equation can be obtained using analytical and asymptotic methods (see for instance [8,9,10,16]). However, this is not an easy task in general. Even when an asymptotic procedure can be completed, it relies on knowledge of the function D, and therefore it can fail when D is known only approximately. In this paper, we develop numerical procedures to approximate the solution of Schröder's equation in the general case.
Methods for the identification of the iteration (or discrete propagation) function D in a discrete dynamical system are widely known (see for instance [6,25,28,30]). Our work is fundamentally based on the fact that the iteration function can be determined from the solution data set at some points. As many of the known methods are applicable, we do not delve into that direction, but propose instead a method based on interpolation that has successfully worked in the cases studied here.
We select this method as it has been employed to solve Schröder's equation by exact or asymptotic methods [15,16,9,11], and Julia's equation by numerical methods [1,2,29]. The conjugacy method is applicable to this problem as it establishes a direct and stable relationship between the function D, which takes the solution from time t to time t + 1, and the field of the ordinary differential equation.
The problem of finding the field knowing the solution at some times $t_i$, $i = 1, \dots, n$, is studied in [18,17], where the authors determine an optimal vector field associated with contractive Picard operators. In [19], using a more general framework, such inverse problems are solved in another direction using the Collage Theorem. In both formulations the inverse problem is solved and a recovery method for the solution is proposed. Regarding the inverse problem studied here, the following open problem is posed in [9]: can the conjugacy method be applied in all situations? In this spirit, we address this question by proposing a robust numerical technique to solve a functional equation for a class of functions broader than those previously established. Moreover, we show stability results and determine sufficient conditions on the input data function that yield a priori information about the solution, such as its regularity and monotonicity.
We provide analytical and numerical procedures for the recovery of the field, taking as an input the solution at some points uniformly distributed in time. To this end, we use the conjugacy method, which consists in solving functional equations (see [15]). This method was successfully used in [9,11] to determine the continuous behavior of iterates from the lattice of time points. Similar methods are used in other applications such as the ones in [12,14,21].
System identification can be defined as the problem of estimating the best possible model of a system, given a set of experimental data; see [26,27]. The problem studied here can be related to the identification problem, in the sense that if the field is parametrized, then the inverse problem reduces to determining the parameters from the solution at some times $t_i$, $i = 1, \dots, n$. The latter has several applications in physics, chemistry, and biology. In this paper, we present an alternative method for the identification of the propagation function of an ordinary differential equation by using the conjugacy method.

The conjugacy method
Below we present a brief introduction to the conjugacy method and how it is related to the problem of field recovery in an ordinary differential equation.
Consider an evolution trajectory x(t) of a one-dimensional system specified by a local, time-translation-invariant law given by the ordinary differential equation (ODE)
$$x'(t) = v(x(t)). \qquad (1)$$
Equation (1) can be integrated to produce the trajectory as a family of functions of the initial data indexed by the time,
$$x(t) = f^t(x(0)), \qquad (2)$$
where $f^t$ is a group of invariant transformations (see [3]). Thus, for a given time t and increment $\Delta t$ (here we take $\Delta t = 1$ for ease of notation, but the same ideas can be applied for any $\Delta t$), we have
$$x(1) = f^1(x(0)) = D(x(0)). \qquad (3)$$
For the case of any time t, we have
$$x(t+1) = f^1(x(t)) = D(x(t)), \qquad (4)$$
i.e., x(t+1) is the same function of x(t) as x(1) is of x(0). Here, $D = f^1$ denotes the unit-time discrete propagation (or iteration) function associated with the ODE in Eq. (1). For notational convenience, we shall often use $x(0) \cong x$.
Using the above framework we can state the following problems. Problem 1: How does one obtain the complete continuous trajectory $x(t) = f^t(x)$ knowing the discrete propagation function $D(x) = f^1(x)$ in (4)?
Problem 1 was solved in [9] for a certain kind of analytic functions D under some hypotheses covering physical problems of great interest. Additionally, notice that if this problem is solved, we can obtain the corresponding field v(x) of the ODE (1) by the formula
$$v(x) = \frac{d}{dt}\, f^t(x)\Big|_{t=0}. \qquad (5)$$
In summary, the method described in [9] to solve Problem 1 consists of the following. Given the differentiable function $D : [0,a] \to [0,b]$ that represents the discrete evolution function in (4), we can find a diffeomorphism h which satisfies the conjugation property (Schröder's equation)
$$h(D(x)) = s\, h(x), \qquad (6)$$
for some constant $s \neq 1$. The existence, uniqueness and regularity of the solutions of this equation were widely addressed in the literature [15,16]. This equation has been used in some applications, e.g., [9,12]. It has the following property: when the origin is a fixed point of D (i.e., D(0) = 0), it follows that h(0) = 0 and, if $h'(0) \neq 0$, then $s = D'(0)$. It is easy to calculate the integer iterates of D: $h(D^n(x)) = s^n h(x)$ and $D^n(x) = h^{-1}(s^n h(x))$. The iteration property of equation (6) can be extended to any real t by considering
$$D^t(x) = h^{-1}\big(s^t h(x)\big), \qquad (7)$$
where $D^t$ is an analytic function. Thus, if we have the function h, the iterates of the function D can be obtained for any real t. This method introduces a change of variables leading to a better representation of the trajectories in the neighborhood of the fixed point x = 0. The velocity of the flux in (7) can be obtained as
$$v(x) = \log(s)\, \frac{h(x)}{h'(x)}. \qquad (8)$$
Schematically, the method to obtain the field v from $f^1$ can be represented as
$$D = f^1 \;\longrightarrow\; h \;\longrightarrow\; v. \qquad (9)$$
The method described above was used in [9] for different cases in which the field can be obtained analytically in closed form.
Using the solution of Problem 1, we can solve the following problem. Problem 2: How does one obtain the field v(x) from a solution of the ODE (1) given at discrete times $t_i$, $i = 1, \dots, N$, i.e., given $x(t_i)$ for $i = 1, \dots, N$?
To solve Problem 2, we use the solution of Problem 1. To do so, the iteration function D in (3) is obtained from a solution of the ODE (1) at discrete times $t_i$, i.e., from the data $x(t_i)$, with $i = 1, \dots, N$. This strategy is based on the fact that, for times $t_i$ uniformly spaced (i.e., $t_{i+1} - t_i = \Delta t$), we have $x(t_{i+1}) = D(x(t_i))$; therefore the function D can be extended to all x knowing its values at the points $x(t_i)$. Once we have the continuous trajectory x(t), we can obtain the field v(x) by formula (8). Schematically, this strategy to obtain the field from given data can be represented as
$$\{(t_i, x_i)\} \;\longrightarrow\; D \;\longrightarrow\; h \;\longrightarrow\; v. \qquad (10)$$
Denoting $g(x) := h(x)/h'(x)$, formula (8) reads
$$v(x) = \log\big(D'(0)\big)\, g(x). \qquad (11)$$
Moreover, one can verify that the function g satisfies Julia's equation
$$g(D(x)) = D'(x)\, g(x), \qquad (12)$$
with the condition $g'(0) = 1$. Thus, alternatively, obtaining the function g from equation (12), without relying on the computation of h from equation (6), leads to another strategy to recover the field v, which can be represented schematically as
$$\{(t_i, x_i)\} \;\longrightarrow\; D \;\longrightarrow\; g \;\longrightarrow\; v. \qquad (13)$$
This paper focuses on the application of this second strategy.
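The link between h and the field v used in these strategies can be made explicit. As a short derivation (a sketch, using Schröder's relation (6) and the continuous iterates (7)), differentiating at t = 0 gives
$$v(x) = \frac{d}{dt}\,D^t(x)\Big|_{t=0} = \frac{d}{dt}\,h^{-1}\big(s^t h(x)\big)\Big|_{t=0} = \frac{s^t h(x)\,\log s}{h'\big(h^{-1}(s^t h(x))\big)}\Big|_{t=0} = \log s\,\frac{h(x)}{h'(x)} = \log\big(D'(0)\big)\,g(x).$$
Hence recovering h (and then g = h/h′), or recovering g directly from Julia's equation, leads to the same field.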

Organization of the paper
This paper is organized as follows.
In Section 2, we discuss the existence and uniqueness of the solution of Schröder's and Julia's equations for a certain class of functions D. The proof is straightforward and it includes the basic elements needed to approximate the solution of the functional equation in the general case. In Section 3, we provide some properties of the solutions of Julia's equation. Section 4 presents a stability analysis, comparing the two fields recovered from close initial functions $D_1$ and $D_2$ in (3).
In Section 5, we describe several approximate methods for the solution of Problem 2. These methods are based on a combination of numerical techniques and analytical results. In Section 6 we present several numerical examples that use the proposed approximate methods to recover the field v(x), and illustrate their main characteristics.
Finally, Section 7 presents the final remarks. We suggest that readers who are more interested in the approximate methods and the numerical examples skip the technicalities given in Sections 2-4 and continue reading from Section 5.

Existence and uniqueness of solutions
We reduce the inverse problem of recovering the field of the ordinary differential equation to obtaining the solution of either Schröder's or Julia's equation. The construction of the function D in (3) from the data $x(t_i)$, with $i = 1, \dots, N$, is provided in Section 5.1. The error in this step corresponds to the interpolation error involved in the procedure.
In this section we prove some theoretical results of the existence and uniqueness of the solution for Schröder's (equation (6)) and Julia's equations (equation (12)) given the function D in (4).
We assume that the function D has an attractive fixed point at x = 0 and $|D(x)| < |x|$ in a neighborhood of x = 0. These assumptions are not very restrictive, because the equation can often be transformed, e.g., by substituting $y = D^{-1}(x)$ (see [29]). The same results can be adapted to any fixed point different from zero or, more generally, to the case where D has several fixed points.
We use here results from [2,16,29] for the solution of (6) and (12); however, we also obtain other properties that will be used to prove the stability results.
Using Remark 2.3 we get $|D^n(x)| \le b^n |x| \to 0$ as $n \to \infty$. Since $h'(0) = 1$ and $h \in C^1$, it follows that $h'(x)$ satisfies equation (14); this implies the uniqueness of h. Next we prove that the product in equation (14) converges. This is equivalent to proving that the series
$$f(x) = \sum_{j=0}^{\infty} \big( \log D'(D^j(x)) - \log \lambda \big) \qquad (16)$$
converges, so that the function f is well defined. Since $D' \in C^{\varepsilon}$, it follows that $\log D' \in C^{\varepsilon}$; thus there exists $k > 0$ such that $|\log D'(z) - \log D'(y)| \le k |z-y|^{\varepsilon}$ for all $y, z \in (-a,a)$. In particular,
$$\big| \log D'(D^j(x)) - \log \lambda \big| \le k\, |D^j(x)|^{\varepsilon} \le k\, b^{j\varepsilon} |x|^{\varepsilon},$$
yielding the absolute and uniform convergence of the series in equation (16). Moreover, since $|D^j(x) - D^j(y)| = |(D^j)'(\xi)|\,|x-y|$ for some $\xi \in (x,y)$, and $|(D^j)'(\xi)| = \prod_{i=0}^{j-1} |D'(D^i(\xi))| \le b^j$, we have, for every $x, y \in (-a,a)$, a bound yielding $f \in C^{\varepsilon}$. Thus $h'(x) = e^{f(x)}$ is also of class $C^{\varepsilon}$. Defining $h(x) = \int_0^x e^{f(y)}\, dy$, we obtain a conjugation with $h(0) = 0$ and $h'(0) = 1$. Finally, notice that there exists a function D of class $C^1$ such that the product in equation (14) does not converge for all $x \neq 0$, and a conjugation h of class $C^1$ in the sense of Definition 2.1 does not exist.

Remark 2.7. The function h of Proposition 2.2 is such that
$$h(x) = \lim_{n \to \infty} \frac{D^n(x)}{\lambda^n}.$$
This equation gives a relatively simple method for obtaining the solution of Schröder's equation, which is called Koenigs algorithm (see [29]). Notice that in this algorithm the computation of the derivative of D is not necessary.
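As a minimal numerical sketch of Koenigs' algorithm (our illustration, not the paper's code), the limit in Remark 2.7 can be truncated at a finite n, assuming D and $\lambda = D'(0)$ are available. The test map $D(x) = ax/(1+(a-1)x)$ is chosen only because its conjugation $h(x) = x/(1-x)$ is known in closed form:

```python
def koenigs(D, lam, x, n=60):
    """Approximate h(x) = lim_{n->inf} D^n(x) / lam^n (Koenigs' algorithm).

    Only evaluations of D are needed; no derivatives of D are used.
    """
    y = x
    for _ in range(n):
        y = D(y)            # y = D^n(x) after the loop
    return y / lam ** n

# Illustration with D(x) = a x / (1 + (a-1) x), whose Schroeder
# conjugation is h(x) = x / (1 - x), with lam = D'(0) = a.
a = 0.5
D = lambda x: a * x / (1.0 + (a - 1.0) * x)
h03 = koenigs(D, a, 0.3)    # close to h(0.3) = 0.3 / 0.7
```

The truncation error decays like the next power of $\lambda$, so a moderate n already gives near machine precision when $\lambda$ is well below 1.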
Given the functions D and h as in Proposition 2.2, it may be checked that the function g := h/h′ satisfies Julia's equation, studied in [16]:
$$g(D(x)) = D'(x)\, g(x).$$

Proposition 2.8. Assume that D satisfies the conditions of Proposition 2.2 and $\tilde g$ is a solution of the functional equation (26) that is differentiable at 0. Then
$$\tilde g(x) = \tilde b\, \frac{h(x)}{h'(x)} \quad \text{for some constant } \tilde b \in \mathbb{R}, \qquad (27)$$
where h is the solution of Schröder's equation.
Proof. First, notice that $\tilde g(D^n(x)) = (D^n)'(x)\, \tilde g(x)$ for all $n \ge 1$. We prove it using induction. For n = 1, this is the initial functional equation. Assuming that the equation is valid for n, we obtain
$$\tilde g(D^{n+1}(x)) = \tilde g\big(D(D^n(x))\big) = D'(D^n(x))\, \tilde g(D^n(x)) = D'(D^n(x))\,(D^n)'(x)\, \tilde g(x) = (D^{n+1})'(x)\, \tilde g(x).$$
Next, we prove that for all $\tilde b \in \mathbb{R}$, the function $\tilde g(x) = \tilde b\, g(x) = \tilde b\, h(x)/h'(x)$ is a solution of the functional equation (26). Differentiating Schröder's equation $h(D(x)) = \lambda h(x)$ gives $h'(D(x))\, D'(x) = \lambda h'(x)$, hence
$$\tilde g(D(x)) = \tilde b\, \frac{h(D(x))}{h'(D(x))} = \tilde b\, \frac{\lambda h(x)\, D'(x)}{\lambda h'(x)} = D'(x)\, \tilde g(x).$$
Thus $\tilde g$ in (27) satisfies the functional equation (26).

Explicit solution of Julia's equation
We assume that the function D satisfies the conditions stated in Proposition 2.2. Consider the solution g of Julia's equation (12), with g = h/h', where h satisfies Schröder's equation (for similar results, see [7,2,16]). Combining equations (14), (25), (27) and using that $g'(0) = 1$, we obtain
$$g(x) = \lim_{n \to \infty} \frac{D^n(x)}{(D^n)'(x)}. \qquad (32)$$
Formula (32), which guarantees that g(0) = 0, can be rewritten as an infinite product, which is very useful for analysis and numerical calculations. Let us define the splinter of x,
$$x_0 = x, \qquad x_{j+1} = D(x_j), \quad j = 0, 1, 2, \dots, \qquad (33)$$
and the partial products
$$g_n(x) = x \prod_{j=0}^{n-1} \frac{D(x_j)}{x_j\, D'(x_j)}. \qquad (34)$$
Since $(D^n)'(x) = \prod_{j=0}^{n-1} D'(x_j)$, formula (32) becomes $g(x) = \lim_{n\to\infty} g_n(x)$, i.e.,
$$g(x) = x \prod_{j=0}^{\infty} \frac{D(x_j)}{x_j\, D'(x_j)}. \qquad (35)$$
This representation of the solution as an infinite product is related to the solution of the functional equation
$$\hat g(x) = \frac{D(x)}{x\, D'(x)}\, \hat g(D(x)), \qquad (36)$$
which can be obtained from equation (12) by setting $g(x) = x\, \hat g(x)$ and using the condition $\hat g(0) = 1$. We notice that applying the fixed-point iteration process
$$\hat g_{n+1}(x) = \frac{D(x)}{x\, D'(x)}\, \hat g_n(D(x)), \qquad (37)$$
using as an initial guess $\hat g_0$ a continuous function such that $\hat g_0(0) = 1$, readily leads to formula (35).

Solution of Julia's equation in the general case
In this section, we show how to solve Julia's functional equation in the general case. The assumptions used here correspond to the generic situation, in which the iteration function D(x) is obtained from the solution of an ODE with a sufficiently regular field satisfying conditions for the existence and uniqueness of solutions. This method was presented in [4] for solving an inverse problem and consists of two stages. First, the problem is subdivided into a finite (or at most countable) number of subproblems on consecutive bounded intervals covering the whole interval on which the solution is defined. This subdivision is done so that each subproblem is easier to solve, since it satisfies the extra inequality $D(z) \neq z$. The method for solving each of these "reduced" subproblems is presented in Section 2.1.
Finally, we show how to piece together the solutions of the reduced subproblems to obtain the global solution of the functional equation.
We consider that the analysis in Section 2.1 corresponds to the case when $D(x) < x$; therefore we set $x_{n+1} = D(x_n)$. In the opposite case, when $D(x) > x$, we have to set $x_{n+1} = D^{-1}(x_n)$. This result is summarized in the following lemma.

Lemma 2.9. Let $D : [a,b] \to [a,b]$ be a continuous monotone increasing function such that $D(a) = a$ and either (i) $D(x) < x$ in $(a,b)$, or (ii) D possesses a continuous inverse and $D^{-1}(x) < x$ in $(a,b)$. Let $s_0$ be any point in $(a,b)$ and define the sequence $s_{n+1} = D(s_n)$ in case (i), or $s_{n+1} = D^{-1}(s_n)$ in case (ii). Then the sequence $\{s_n\}$ is monotone decreasing and it converges to a.

Proof. We present the proof for case (i), as the proof for (ii) is analogous. Let $s_{n+1} = D(s_n) < s_n$. Since $s_{n+1} > a$, $\{s_n\}$ is monotone decreasing and bounded from below, so it converges to $\bar s \in [a,b)$. From the continuity of D, the equation $s_{n+1} = D(s_n)$ implies that $\bar s = D(\bar s)$, so $\bar s = a$.

Lemma 2.10. Let $D : [a,b] \to [a,b]$ be a continuous monotone increasing function possessing a continuous inverse, such that $D(a) = a$, $D(b) = b$ and $D(x) < x$ in $(a,b)$. Let $s_0, r_0$ be any points in $(a,b)$ and define the sequences $s_{n+1} = D(s_n)$ and $r_{n+1} = D^{-1}(r_n)$. Then the sequence $\{s_n\}$ is monotone decreasing and it converges to a, and the sequence $\{r_n\}$ is monotone increasing and it converges to b.

Proof. The proof follows from repeated applications of Lemma 2.9.
Consider the set $G_a$ of continuous functions on $[a, \infty)$, differentiable at a, with $g(a) = 0$ and $g'(a) = g'_a \neq 0$. Furthermore, consider the functional equation in $G_a$:
$$g(D(x)) = D'(x)\, g(x). \qquad (38)$$
Then the functional equation (38) has a unique solution in (a, b). This solution is proportional to $g'_a$.
Proof. The proof is similar to the one in Proposition 2.2 for the case where the fixed point is at the origin.

Proof. We use Theorem 2.11. Let us prove only the case when D(x) < x in (a, b), since the other one is similar. Take $r_0 = s_0$ as any point in (a, b). We compute $D(r_0)$ and $D(s_0)$ with the sequences $s_n \to a$, $r_n \to b$ defined in Lemma 2.10; we relate $g'(a)$ to $g(s_0)$ and $g'(b)$ to $g(r_0)$ using appropriate versions of formulas (27) and (32) (replacing 0 by a, or 0 by b, for cases (i) and (ii), respectively). Equating $g(r_0) = g(s_0)$, we obtain the desired matching of the two solutions.

Regularity
It is useful to study the regularity of the solution g depending on the regularity of the function D. The condition $D \in C^{1+\varepsilon}$ for some $\varepsilon > 0$ determines all continuous solutions g admitting a derivative at 0 (there are other solutions that are only continuous). If h is of class $C^k$, then $h'$ is of class $C^{k-1}$ and thus $g = h/h'$ is of class $C^{k-1}$.

Proposition 3.1. Let the functions D and h be as in Proposition 2.2, with h defined by (25). Then the function $h \in C^k$ with $k \ge 2$ if and only if $D \in C^k$.

Proof. If $h \in C^k$, then $D(x) = h^{-1}(\lambda\, h(x))$ is a composition of functions of class $C^k$ and thus $D \in C^k$.

Conversely, if $D \in C^k$ with $k \ge 2$, we may assume, by reducing the interval radius a if necessary, that $D^{(j)}$ is bounded in $(-a,a)$ for $1 \le j \le k$. We have $\log(h'(x)) = f(x)$ defined by equation (16), and $h \in C^k$ if and only if $\log h' \in C^{k-1}$, if and only if the series (16) can be differentiated term by term $k-1$ times, with the resulting series of continuous functions converging uniformly in $(-a,a)$. Since the terms of the series and $h'(x)$ are continuous functions which are uniformly bounded in $(-a,a)$, and the series $\sum_{j=0}^{\infty} \lambda^j$ converges absolutely, it follows that $h \in C^2$.
We will show by induction on r, for $2 \le r \le k$, that $h \in C^r$. Indeed, we prove that $\sum_{j=0}^{\infty} \big(\log D'(D^j(x)) - \log \lambda\big)^{(r-1)}$ can be written as a series whose terms are given by a polynomial $P_r$ in $3r-1$ variables (which depends on r), evaluated along the iterates of D. For example, the initial case r = 2 follows from equation (42). By the induction hypothesis, according to which $h \in C^{r-1}$, the functions $h_1(x)$, $h_2(x)$, $h_3(x)$ are continuous and uniformly bounded in $(-a,a)$. It follows that the series in equation (42) converges uniformly to $(\log h')^{(r-1)}$, which is a continuous function (since it is given by a series of continuous functions which converges uniformly). The claim follows by induction using the corresponding differentiation formulas.

Remark 3.3. If D is a real analytic function, then h is also a real analytic function, since D can be extended analytically to some disk $B \subset \mathbb{C}$ with center at the origin, where h is the limit of a sequence of analytic functions in B which converges uniformly. In this case, if g is differentiable at 0, then g is a real analytic function. Analyticity is important for the investigation of the stability properties of the function g depending on the function D; see [2].

Sufficient conditions for monotonicity
Next, we present sufficient conditions that ensure the monotonicity of the solution of the functional equation (26). To this end, we first introduce some important assumptions on the behaviour of the function D.

Assumption 3.4. We assume that D(x) defined in (4) is a $C^2$ function for $x \in [0,a]$ satisfying $0 < D'(x) \le d$, where $d < 1$ is a constant.

Assumption 3.5. We consider that D(x) defined in (4) is a $C^2$ function that belongs to a class of functions depending on certain constants $r_1, \dots, r_5$.

Lemma 3.7. Suppose that the function D satisfies Assumptions 3.4 and 3.5 above, and that inequality (53) holds for all $x \in (0,a]$. Then the solution g of the functional equation (26) is monotone increasing in $(0,a]$.
Proof. We set
$$\rho(s) = \frac{D(s)}{s\, D'(s)}. \qquad (54)$$
Consider $y, x \in (0,a]$ with $x < y$, and let g be the solution of (26). From (35) we have
$$g(x) = x \prod_{j=0}^{\infty} \rho(x_j), \qquad g(y) = y \prod_{j=0}^{\infty} \rho(y_j),$$
where $x_j$ and $y_j$ represent the splinters of x and y as defined in (33). Notice that $\rho'(s)$ is a fraction with numerator equal to the left side of inequality (53) and denominator equal to $(D'(s)\, s)^2$. Then inequality (53) yields $\rho'(s) > 0$ for all $s \in (0,a]$.
Using that $x_j < y_j$ (because the function D is monotone increasing) and that the function $\rho$ in (54) is monotone increasing, we have $\rho(x_j) < \rho(y_j)$ for $j = 1, 2, \dots$. As a consequence, $g(x) < g(y)$; thus g is monotone increasing in the interval $(0,a]$.

Sufficient conditions for superlinearity
We now establish sufficient conditions for the solution g of equation (26) to satisfy a superlinearity condition near the fixed point. Proof. To prove this inequality, it is sufficient to check that the function $s(x) = g(x) - x$ possesses a local minimum at $x = 0$. Since we set $g'(0) = 1$, then $s'(0) = 0$. Notice that $s''(0) = g''(0)$ and $g(0) = 0$. Using $g = h/h'$ and formulas (16) and (22), one may verify that $g''(0)$ is proportional to $D''(0)$. Since $D'(0)$ is positive and $D''(0) < 0$, we have $g''(0) \le 0$, and the lemma follows.
Remark 3.9. It is possible to verify that, if $D''(0) = 0$, then Lemma 3.8 is still valid assuming that the first non-vanishing derivative $D^{(m)}(0) \neq 0$, with $m \ge 2$, is negative.

Continuous dependence and stability
Continuous dependence of the solution g of the functional equation on the given iteration function D was established in [2,15]. However, due to the relevance of this result for this paper, we present here a version of this result adapted to the present setting and to another class of functions for which stability holds. We have the following lemma:
We consider the functions D defined on [0, B] satisfying condition (51). Taking $s = D(x)$, Eq. (26) can be rewritten as
$$g(s) = D'\big(D^{-1}(s)\big)\, g\big(D^{-1}(s)\big). \qquad (63)$$
Now, we verify the validity of the following lemma.

Proof. Using (63) and the notation $\Upsilon_1 = D_1^{-1}(s)$ and $\Upsilon_2 = D_2^{-1}(s)$, we obtain the analogous representations for $g_1$ and $g_2$. Using the mean value theorem and the definition of $\Upsilon_1, \Upsilon_2$, we estimate the difference of these representations. Finally, using Remark 3.6 and Lemma 4.1 in (67), we obtain (64).

Approximate methods for ODE field recovery
In this section we present different approximate methods to solve Problem 2, i.e., recovering the field v(x) of the ODE (1).

Determining the function D from the input data
The first step when applying either of the strategies (10) or (13) consists in recovering the iteration function D. With this goal in mind, in this section we discuss how to approximately obtain the function D in (4) from the set of data points $\{(t_i, x_i),\; i = 1, \dots, N\}$.
We notice that if the input data corresponds to a solution x(t) of an ODE, then $x_i = x(t_i)$, $i = 1, \dots, N$. Consequently, when the times $t_i$, $i = 1, \dots, N$, are uniformly spaced with a fixed stepsize $\Delta t$, the function D at the points $x_i = x(t_i)$, $i = 1, \dots, N-1$, is given by $D(x_i) = x_{i+1}$. In this case, an approximation of the function D can be readily obtained by interpolation or a curve fitting procedure with the input data points $\{(x_i, x_{i+1}),\; i = 1, \dots, N-1\}$. More generally, when the discrete time points are not uniformly spaced, we first perform an interpolation in time to obtain an approximate trajectory $x_{ap}(t)$ on the interval $[t_1, t_N]$. Then we approximate the function D using interpolation or a curve fitting procedure with the input data $\{(x_i, x_{ap}(t_i + \Delta t)),\; i = 1, \dots, N-1\}$.
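For uniformly spaced samples, the first procedure above can be sketched in a few lines (an illustration under our own naming; plain linear interpolation stands in for whatever interpolation or fitting scheme is preferred):

```python
import numpy as np

def estimate_D(x):
    """Build an approximation of the unit-time propagation function D
    from uniformly spaced trajectory samples x_0, x_1, ..., x_{N-1}:
    D maps x_i to x_{i+1}; values in between are linearly interpolated."""
    xs, ys = np.asarray(x[:-1], float), np.asarray(x[1:], float)
    order = np.argsort(xs)              # np.interp requires increasing abscissas
    xs, ys = xs[order], ys[order]
    return lambda z: np.interp(z, xs, ys)

# Illustration: samples of x' = log(0.5) x, for which D(x) = 0.5 x exactly.
traj = 0.9 * 0.5 ** np.arange(20)
D_hat = estimate_D(traj)
```

Since the exact D here is linear, the interpolant reproduces it exactly inside the sampled range; for a nonlinear D the accuracy is governed by the interpolation error.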
It is worth remarking that both procedures can be readily adapted to the case where there are multiple sets of data available.
We remark that several methods for the identification of the iteration function D in a discrete dynamical system are widely known (see, for instance [6,25,28,30]) which could be applicable here. However, in this paper we do not delve into that direction and use the procedures discussed above.

Approximate methods to solve Julia's equation
In this section we describe several methods to approximate the solution g(x) of Julia's equation (12) satisfying the condition $g'(0) = 1$. This allows us to reconstruct the field of the ODE (1) by setting $v(x) = \log(D'(0))\, g(x)$.

Infinite product approximation
This method consists in the implementation of equation (35); the algorithm presented below returns an approximation of g at a given point $x = x_0$. We assume that the functions D and D′ can be readily computed.
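The core of such an algorithm can be sketched as follows (our illustration: $x_j$ is the splinter of $x_0$, and the product is truncated once the splinter is negligible; the test map $D(x) = ax/(1+(a-1)x)$ is an assumption chosen because its exact solution $g(x) = x(1-x)$ is known):

```python
def julia_g(D, Dp, x0, tol=1e-15, max_iter=500):
    """Approximate g(x0) = x0 * prod_{j>=0} D(x_j) / (x_j * D'(x_j)),
    truncating the infinite product along the splinter x_{j+1} = D(x_j)."""
    g, x = x0, x0
    for _ in range(max_iter):
        if abs(x) < tol:               # remaining factors are ~1
            break
        g *= D(x) / (x * Dp(x))
        x = D(x)
    return g

# Illustration (exact solution known): for D below, g(x) = x (1 - x).
a = 0.5
D  = lambda x: a * x / (1.0 + (a - 1.0) * x)
Dp = lambda x: a / (1.0 + (a - 1.0) * x) ** 2
```

The field is then recovered as $v(x) \approx \log(D'(0))\, g(x)$.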

Fixed point approximation
This method consists in the implementation of the fixed-point iteration given by equation (37). Even though the infinite product and the fixed-point approximations are equivalent, we introduce a method based on the latter that relies on interpolation, in order to avoid the direct computation of splinters, which is explicitly used in the former. Another useful characteristic of this method is its flexibility regarding the interpolation step (the fifth step of Algorithm 2), which allows the final user to choose an interpolation method based on their own criteria.
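A grid-based sketch of this scheme (our illustration, in our reading of the fixed-point map of equation (37): iterate $\hat g \mapsto [D(x)/(x\,D'(x))]\, \hat g(D(x))$ on a grid, evaluating $\hat g(D(x))$ by linear interpolation, and set $g(x) = x\, \hat g(x)$; the test map is again $D(x) = ax/(1+(a-1)x)$, with exact solution $g(x) = x(1-x)$):

```python
import numpy as np

a  = 0.5
D  = lambda x: a * x / (1.0 + (a - 1.0) * x)
Dp = lambda x: a / (1.0 + (a - 1.0) * x) ** 2

grid = np.linspace(0.0, 0.5, 201)
rho = np.ones_like(grid)                       # rho(x) = D(x) / (x D'(x)); rho(0) = 1
rho[1:] = D(grid[1:]) / (grid[1:] * Dp(grid[1:]))

g_hat = np.ones_like(grid)                     # initial guess with g_hat(0) = 1
for _ in range(100):
    # g_hat <- rho * g_hat(D(x)); g_hat(D(x)) evaluated by interpolation
    g_hat = rho * np.interp(D(grid), grid, g_hat)

g = grid * g_hat                               # g(x) = x * g_hat(x)
```

Any interpolation scheme can replace `np.interp` in the update, which is precisely the flexibility mentioned above.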

Least square approximation
This method relies on the assumption that we have a parametrization of the unknown field v(x) of the ODE (1), i.e., we consider that v(x) = v p (x) where p represents the parameter vector. The goal is to estimate the parameter vector p * associated with the given data, leading in general to a data fitting problem. Optimization techniques have been extensively used for the estimation of parameters in ordinary and partial differential equations (see for instance [13,20,22,27,23]), and the proposed method also follows this approach.
More specifically, by taking into account the relationship between the field v(x) and the solution of Julia's equation g(x) in (11), we have the approximation $v(x) \approx v_{p^*}(x)$, where $p^*$ represents the solution of the following optimization problem: find
$$p^* = \arg\min_{p} \sum_{i} \big( v_p(x_i) - \log(D'(0))\, g(x_i) \big)^2. \qquad (68)$$
Remark 5.3. The optimization problem (68) leads to a system of linear equations when the dependence of $v_p$ on the parameters p is linear; whenever this dependence is nonlinear, the corresponding optimization problem is also nonlinear.
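In the linear case the problem reduces to an ordinary linear least squares fit. A minimal sketch (our illustration: the g-values are taken exact for the quadratic field $v(x) = \log(0.5)\,x(1-x)$, and the parametrization $v_p(x) = p_1 x + p_2 x^2$ is an assumption):

```python
import numpy as np

log_s = np.log(0.5)                       # log D'(0) for the illustrative field
xs = np.linspace(0.05, 0.9, 50)           # evaluation points
g  = xs * (1.0 - xs)                      # g-values (exact here, for illustration)

# Linear parametrization v_p(x) = p1 * x + p2 * x^2  ->  linear least squares.
A = np.column_stack([xs, xs ** 2])
p, *_ = np.linalg.lstsq(A, log_s * g, rcond=None)
```

Since $v(x) = \log(0.5)(x - x^2)$ lies in the span of the basis, the fit recovers $p \approx (\log 0.5,\, -\log 0.5)$.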

Numerical experiments
In this section, we present numerical examples illustrating the application of the approximate methods discussed in the previous section. In these examples, given the function D(x), we approximately recover the field v(x). Moreover, since the exact solutions for these examples are known, they allow us to verify the robustness, advantages and drawbacks of the proposed methods.

Example 1: recovering a quadratic field
For 0 < a < 1, we consider the quadratic field $v(x) = \log(a)\, x(1-x)$. The corresponding iteration function is given by
$$D(x) = \frac{a\,x}{1 + (a-1)x}, \qquad x < x_s = \frac{1}{1-a}. \qquad (69)$$
This bound is related to the fact that for x > 1 the solution of the associated ODE explodes in finite time.
We let a = 0.5 and generate a synthetic set of data points with different degrees of randomness σ. We take $\{(x_i, y_i),\; i = 0, \dots, N\}$, where $y_i = D(x_i) + \sigma_i$ and the $\sigma_i$ are independent (pseudo)random numbers uniformly distributed in the interval $(-\sigma/2, \sigma/2)$.
From these data we estimate the iteration function D by a curve fitting method, considering the dependence of D on the parameter a given in equation (69). The resulting iteration function for the case σ = 0.5 is shown in Figure 1. Next, we recover the corresponding field v(x) using Algorithm 1. In Figure 2 the exact and recovered fields are presented for several values of the parameter, i.e., σ = 0.1, 0.5, 0.9, 1.9, 2.5. Notice that the exact field differs only slightly from the approximated field for values of σ less than one. Although the difference between both fields increases for higher values of σ, the exact field is recovered in a stable and accurate way. This behavior suggests that the recovery method works well when used with experimental data, which would be contaminated with errors.
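The data generation and one-parameter fit just described can be sketched as follows (our illustration: we assume the iteration function has the parametric form $D(x;a) = ax/(1+(a-1)x)$ associated with the field of this example, and use a simple grid search in place of a generic curve-fitting routine):

```python
import numpy as np

def D_model(x, a):
    # Assumed parametric form of the iteration function for this example.
    return a * x / (1.0 + (a - 1.0) * x)

rng = np.random.default_rng(0)
a_true, sigma = 0.5, 0.1              # noise level sigma as in the text
x = np.linspace(0.05, 0.95, 40)
y = D_model(x, a_true) + rng.uniform(-sigma / 2, sigma / 2, x.size)

# One-parameter least squares fit of a by a fine grid search.
a_grid = np.linspace(0.05, 0.95, 1801)
sse = [np.sum((D_model(x, a) - y) ** 2) for a in a_grid]
a_hat = a_grid[int(np.argmin(sse))]
```

Any standard nonlinear fitting routine would serve equally well; the grid search is used only to keep the sketch dependency-free.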
We observe that the field v(x) is recovered beyond the singular point x s = 1/(1 − a), since the estimated iteration function approximates the correctly extended version of the exact iteration function (69) (which captures a kind of continuation from the infinity of the trajectories that explode in finite time).
In Table 1 we show the relative errors $\varepsilon_D$, $\varepsilon_{D'}$ and $\varepsilon_v$ corresponding to the approximations of the functions D, D′ and the field v, respectively. In the last column we show the values of the quantities $C_v$. The results in this column indicate that the method is robust and stable, since the values $C_v$ remain uniformly bounded as σ varies in the range 0.1-4.5.

Example 2: recovering a cubic field
For 0 < a < 1, we consider the cubic field $v(x) = \log(a)\, x(1-x^2)$. The corresponding iteration function is given by
$$D(x) = \frac{a\,x}{\sqrt{1 + (a^2-1)x^2}}, \qquad |x| < x_s = \frac{1}{\sqrt{1-a^2}}.$$
As in Example 1, this bound is a consequence of the finite-time blow-up of the solutions of the associated ODE for |x| > 1. However, in contrast with the previous example, this iteration function cannot be extended in an appropriate way for $|x| \ge x_s$; therefore a straightforward application of Algorithm 1 for |x| > 1 is not possible in this example.
In the following numerical illustrations, we consider a = 0.9 and use a synthetic input data set {(x i , D(x i )), i = 1, . . ., N}. First, we approximate the iteration function D using the adaptive Antoulas-Anderson (AAA) algorithm for rational approximation introduced in [24] and afterwards, we recover the field v(x) using Algorithm 2 by applying the AAA algorithm in the interpolation step.
Case (a): A set of data points {(t i , x i )} is obtained by sampling a trajectory lying in the interval (0, 1). This set of data points is shown in the upper plot of Figure 3 as Set 1 (blue points). The synthetic input data {(x i , D(x i )), i = 1, . . ., N} is generated from this single set of data points, as indicated in subsection 5.1. This data is shown in the lower plot of Figure 3.
In Figure 4, we present the graphs of the approximation of the iteration function and of its derivative in the interval (0, 1). For both functions the absolute error is less than $2.0 \times 10^{-10}$. The higher errors occur in the neighborhood of x = 1, where the input data is sparser.
In Figure 5, we show the approximated field v(x). The absolute error is less than $2.5 \times 10^{-6}$, illustrating a very accurate recovery of the field. As expected, the higher errors also occur in the neighborhood of x = 1.
Finally, by fitting a third degree polynomial to the approximate values of the field, we get the approximation $v(x) \approx 0.10536\, (x^3 + 2.0352 \times 10^{-10} x^2 - x - 1.6465 \times 10^{-17})$, whose coefficients have an overall absolute error of at most $10^{-6}$. Consequently, this gives an approximation of v(x) that can be used beyond the interval (0, 1).
Case (b): In order to generate the input data, we proceed as in the previous case but using multiple sets of data points. We construct these sets by sampling several trajectories, shown in the upper plot of Figure 3, together with those obtained by symmetry with respect to the axis of abscissas. The synthetic input data $\{(x_i, D(x_i)),\; i = 1, \dots, N\}$ is generated from these multiple sets of data points, as indicated in subsection 5.1. In the lower plot of Figure 3 we show half of this synthetic data, since the whole set is symmetric with respect to the origin. In Figure 6, we show the graphs of the approximation of the iteration function D(x) and of its derivative in the interval (−2.04, 2.04). For both functions the absolute error is less than $6.0 \times 10^{-11}$; however, the negative impact of the singularities of these functions at $x = \pm x_s \approx \pm 2.29$ is already noticeable in the neighborhood of $x = \pm 2.04$.
In Figure 7, we show the field v(x) and its approximation. The absolute error is less than $2.0 \times 10^{-5}$, illustrating a very accurate recovery of the field. The higher errors also occur in the neighborhood of $x = \pm 2.04$.
Performing a curve fitting procedure with a third degree polynomial, we get the approximation $v(x) \approx 0.10536\, (x^3 - 1.0620 \times 10^{-10} x^2 - x + 8.6391 \times 10^{-11})$, whose coefficients have an overall absolute error of at most $10^{-6}$. Therefore, this approximation of v(x) can give accurate results beyond the interval $(-x_s, x_s) \approx (-2.29, 2.29)$. As a remark, we should note that the interval where we are able to accurately recover the field v(x) using Algorithm 2 is a closed subinterval of $(-x_s, x_s)$. However, we accurately extrapolated the approximation beyond this interval using a curve fitting procedure that takes into account a parametric dependence of the field. In general, if we do not have any additional information, in order to recover the field in a larger interval we need to use an iteration function corresponding to a smaller $\Delta t$, i.e., a data set obtained with a higher sampling rate of the trajectories.

[Figure 5. Exact and approximated field v(x) (upper plot) and approximation error (lower plot) in the interval (0, 1) corresponding to case (a) of Example 2 (subsection 6.2). The exact field and its approximation are indistinguishable.]

Example 3: recovering a field with a singular fixed point
For 0 < a < 1, we consider the field $v(x) = \log(a)\, (x + 1/2)\, \log(2x+1)$ for $x \ge -1/2$. The corresponding iteration function is given by
$$D(x) = \frac{(2x+1)^a - 1}{2},$$
which, besides the regular fixed point x = 0, also has a singular fixed point at $x_s = -1/2$. Notice that at $x_s$ both functions v(x) and D(x) are not differentiable. We let a = 0.5 and generate a synthetic set of data points $\{(x_i, y_i),\; i = 1, \dots, N\}$ in the interval (−0.5, 1.2), where $y_i = D(x_i)(1 + \sigma_i)$ and the $\sigma_i$ are independent (pseudo)random numbers uniformly distributed in the interval $(-\sigma, \sigma)$. As in the previous examples, the data $\{(x_i, D(x_i)),\; i = 1, \dots, N\}$ is obtained from multiple sets of data points $\{(t_i, x_i)\}$ following the procedure discussed in subsection 5.1. We consider different values of σ (up to a 5% perturbation) and obtain a rational approximation of the iteration function D(x) using the full-Newton least squares algorithm presented in [5]. Graphs of the approximate iteration function, its derivative and the approximation errors corresponding to the case σ = 0.05 (5% perturbation) are shown in Figure 8. For the iteration function the absolute error is less than $10^{-2}$. However, the absolute error for the derivative is close to 0.5 and, as expected, the worst approximation occurs near the singularity.
In Figure 9, we show the field v(x) recovered using a least squares rational approximation, also based on the algorithm mentioned above. The absolute error obtained with this method is less than $8.0 \times 10^{-2}$. The result is not as accurate as in the previous example, but the current example has a different type of singularity, one that affects the derivatives of the iteration function and of the field. Moreover, in the approximation of the field the higher errors occur near x = 1.2, indicating that the sparseness of the data around this point has a stronger impact than the singularity.

Final remarks
In this work, we present a complete description of the inverse problem of determining an ODE from solution values. Conditions for the existence, uniqueness and stability of this inverse problem were established for a broad class of functions, allowing for practical use. Here the one-dimensional case has been studied, leaving the case of higher dimensions for future work.
The numerical examples illustrate that the proposed approximate methods are quite