ON PERIODIC PARAMETER IDENTIFICATION IN STOCHASTIC DIFFERENTIAL EQUATIONS

Abstract. Periodic parameters are common and important in stochastic differential equations (SDEs) arising in many contemporary scientific and engineering fields involving dynamical processes. These parameters include the damping coefficient, the volatility or diffusion coefficient, and possibly an external force. Identification of these periodic parameters allows a better understanding of the dynamical processes and their hidden intermittent instability. Conventional approaches usually assume that one of the parameters is known and focus on recovering the remaining ones. By introducing the decorrelation time and calculating the standard Gaussian statistics (mean and variance) explicitly for scalar Langevin equations with periodic parameters, we propose a parameter identification approach that recovers all of these parameters simultaneously from a single observed trajectory of the SDE. The approach is summarized in the form of regularization schemes with noisy operators and noisy right-hand sides, and is further extended to the parameter identification of SDEs that are observed only indirectly through another random process. Numerical examples show that our approach performs well in the stable and weakly unstable regimes but may fail in the strongly unstable regime, a failure induced by the strong intermittent instability itself.

1. Introduction. Stochastic differential equations (SDEs) arise in many contemporary scientific and engineering fields involving dynamical processes [21]. In their most general scalar form, these SDEs read

dV = −γ(V, t)dt + σ(V, t)dW(t), (1)

where W is a standard Brownian motion. The forward problem is to calculate trajectories or statistics of V(t) given the functions γ(V, t) and σ(V, t), whereas the inverse problem considered in the present work is to estimate these functions from (incomplete) measurement data of V(t).

As a process model, a variant of the SDE (1) for a random process {V(t), t ≥ 0} is of particular interest, namely γ(V, t) = γ_1(t)V + γ_2(t) and σ(V, t) = σ(t), so that

dV = −(γ_1(t)V + γ_2(t))dt + σ(t)dW(t). (2)

Here γ_1(t) is the damping coefficient, σ(t) (or σ²(t)) is the volatility or diffusion coefficient, and γ_2(t) is the external force. Periodic behavior of these parameters is common and important, for instance in the seasonal changes of climate patterns. In particular, positive and negative signs of the damping coefficient γ_1(t) correspond to the stable and unstable regimes of the random process V(t), while the external force γ_2(t) determines the strength or amplitude of the intermittency.

Meanwhile, in some complex dynamics the full system contains a hidden process which is unavailable from observations: one has a full random process vector but only incomplete observations of it. As one such low-order model, a simplified Stochastic Parameterization Extended Kalman Filter (SPEKF) model in [12] couples an observed process U with the hidden process V,

dU = (−γ_u(t)U + V)dt + σ_u(t)dW_u(t), dV = −(γ_1(t)V + γ_2(t))dt + σ(t)dW(t), (3)

where γ_u(t) is the damping coefficient, σ_u(t) (or σ_u²(t)) is the volatility or diffusion coefficient, and both of them are assumed to be known. The system (3) is of interest when the random process V is hidden from the observation and contains intermittent instability.
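The forward problem for a model such as (2) can be sketched with a plain Euler-Maruyama loop. This is a minimal illustration, not the paper's code; the periodic coefficients chosen below (period T = 1) are purely hypothetical.

```python
import numpy as np

# Minimal sketch of the forward problem: Euler-Maruyama simulation of the SDE (2),
#   dV = -(gamma1(t) V + gamma2(t)) dt + sigma(t) dW(t).
# The periodic coefficients below are illustrative, not taken from the paper.

def simulate(gamma1, gamma2, sigma, v0=0.0, t_end=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = int(round(t_end / dt))
    v = np.empty(n + 1)
    v[0] = v0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    for i in range(n):
        t = i * dt
        v[i + 1] = v[i] - (gamma1(t) * v[i] + gamma2(t)) * dt + sigma(t) * dW[i]
    return v

v = simulate(lambda t: 2.0 + np.cos(2 * np.pi * t),        # damping gamma1(t)
             lambda t: -np.sin(2 * np.pi * t),             # external force gamma2(t)
             lambda t: 0.5 + 0.2 * np.sin(2 * np.pi * t))  # volatility sigma(t)
print(v.shape)   # a single trajectory on a uniform time grid
```

A single such trajectory is exactly the kind of data the identification approach below takes as input.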
SPEKF is a general strategy for parameterization of unresolved scales in turbulent systems. In many applications, it is important to develop efficient approaches to identifying these parameters based on (incomplete) observations of the random variables.
In this paper, we propose an approach to identifying the three parameters γ_1(t), γ_2(t), σ²(t) in (2) and (3), assuming in the incomplete observation scenario that the other two parameters γ_u, σ_u are known. When these parameters are periodic, the proposed approach is applicable as soon as one has a single trajectory of the random process V(t) or of the incomplete observation U(t). Our approach is based on minimizing the difference between the asymptotic Gaussian statistics and their empirical values computed from the observation. Nevertheless, the conventional Gaussian statistics, the mean and the variance, are not enough to identify these three unknown functions uniquely, even when all of them are constant. In the present paper we therefore bring in a third Gaussian statistic, the (time-dependent) decorrelation time [19,20], and explore its efficiency in identifying the three unknown parameters uniquely.
The identification of unknown parameters in SDEs has attracted great interest in recent decades. Such identification can be posed for the general diffusion process (1) under the assumption that γ and σ² are homogeneous in time or in state; we refer to the review paper [24] for different inferential methods for general diffusions. In most cases, a Bayesian approach identifies the parameters efficiently by using the likelihood of the observation [23,14]. If the observation is discrete, some calibration of the missing data or approximation of the likelihood is necessary; we refer to [25,9]. On the other hand, since both (2) and (3) admit explicit pathwise solutions, one can seek analytic approaches to identifying the unknown parameters. Several analytic strategies have been investigated in [1,2,3,4], [15,16], which identify the parameters by fitting the mean and variance of the random process V(t) (or U(t)) at certain points or intervals, obtained from a large number of trajectories [8]. Our approach differs from the above references in two main ways. Firstly, we introduce the decorrelation time, which allows us to identify the three parameters simultaneously and uniquely. Secondly, assuming that the parameters are periodic, we can reconstruct them from one trajectory instead of a large number of trajectories.
The paper is organized as follows. In Section 2 we present the main idea by considering the parameter identification of scalar Langevin equations with periodic parameters; the sign of the damping parameter then determines the intermittent stability or instability. We show that the unknown parameters can be identified by minimizing misfits of the Gaussian statistics: mean, variance and decorrelation time. The approach is summarized in the form of regularization schemes with noisy operators and noisy right-hand sides, and is further extended in Section 3 to the parameter identification of SDEs which are observed indirectly through another random process, i.e. (3). Numerical examples in Section 4 show that our approach performs well in the stable and weakly unstable regimes but may fail in the strongly unstable regime, a failure induced by the strong intermittent instability itself. A concluding Section 5 ends the paper with further prospects.
2. Parameter identification in directly observed stochastic differential equations. In this section, we first investigate the parameter identification of time-dependent scalar Langevin equations when all parameters are periodic and the random process can be observed directly. For the sake of simplicity, we assume that all these parameters share the same known period T. In case they have different periods T_1, T_2, T_3, we can choose T to be a common multiple of them (e.g. the least common multiple) and our arguments hold with this new period. In the rest of the paper, we choose the initial time to be 0, which simplifies most formulae and makes our arguments more straightforward.
2.1. Asymptotic behaviors of Langevin equations with periodic parameters. In order to identify the unknown parameters from one trajectory of the random process, we need to derive the asymptotic Gaussian statistics and examine their dependence on the unknown parameters carefully. To fix notation, we rewrite the Langevin equation (2) as

dv = (−γ_v(t)v + f_v(t))dt + σ_v(t)dW(t), (4)

where all parameters γ_v(t), f_v(t) and σ_v(t) are periodic functions with the same period T. The initial state v(0) is assumed to be a Gaussian random variable which is independent of the Brownian motion {W(t), t ≥ 0}. The pathwise solution of the SDE (4) is

v(t) = v(0)e^{−∫_0^t γ_v(s)ds} + ∫_0^t e^{−∫_s^t γ_v(r)dr} f_v(s)ds + ∫_0^t e^{−∫_s^t γ_v(r)dr} σ_v(s)dW(s). (5)

Because the damping parameter γ_v(t) appears in the exponents of the pathwise solution, we additionally assume that 0 < γ_min ≤ ∫_0^T γ_v(s)ds ≤ γ_max < ∞ to make sure that the random process v(t) is asymptotically stable in a suitable sense. The mean and variance of the random process v(t) are

E(v(t)) = E(v(0))e^{−∫_0^t γ_v(s)ds} + ∫_0^t e^{−∫_s^t γ_v(r)dr} f_v(s)ds, V(v(t)) = V(v(0))e^{−2∫_0^t γ_v(s)ds} + ∫_0^t e^{−2∫_s^t γ_v(r)dr} σ_v²(s)ds. (6)

As introduced in Section 1, to identify the three unknown parameters in (4) uniquely we need another Gaussian statistic, the decorrelation time T_v(v(t)) (cf. [19, p.64], [20, p.254]). Its general definition is given through the covariance function R(t, t + τ) = Cov(v(t), v(t + τ)) and the autocorrelation function ρ(t, t + τ) = R(t, t + τ)/√(R(t, t)R(t + τ, t + τ)) as

T_v(v(t)) = ∫_0^∞ ρ(t, t + τ)dτ, (7)

provided this integral is positive and finite. Consequently, we assume that the periodic function σ_v²(t) is positive and finite. Since (5) yields R(t, t + τ) = V(v(t)) e^{−∫_t^{t+τ} γ_v(s)ds}, the decorrelation time of v(t) in (4) is formally calculated by

T_v(v(t)) = ∫_0^∞ √(V(v(t))/V(v(t + τ))) e^{−∫_t^{t+τ} γ_v(s)ds} dτ. (8)

Noticing the periodicity of the parameters in (4), we need to investigate the asymptotic mean, variance and decorrelation time in (6), (8) carefully and examine whether these Gaussian statistics are periodic or not. In view of the pathwise solution (5) and its Gaussian statistics (6), (8), the function e^{−∫_x^y γ_v(s)ds} for x, y ∈ [0, ∞) plays a central role, and we record the following properties.

Lemma 2.1. Let γ_v be periodic with period T. Then
(1) e^{−∫_y^{y+T} γ_v(s)ds} = e^{−∫_0^T γ_v(s)ds} for any y ≥ 0;
(2) e^{−∫_0^{y+kT} γ_v(s)ds} = e^{−k∫_0^T γ_v(s)ds} e^{−∫_0^y γ_v(s)ds} for any integer k satisfying y + kT > 0.

Proof. (1) follows directly from the periodicity of γ_v. (2) follows from property (1) by iteration over the k periods.
Choosing any t > 0, we set t = kT + ζ with a non-negative integer k and a remainder ζ ∈ [0, T). The periodicity of the unknown parameters γ_v(t), f_v(t), σ_v(t) and Lemma 2.1 allow us to rewrite the mean and variance in (6) as

E(v(kT + ζ)) = e^{−k∫_0^T γ_v(s)ds} e^{−∫_0^ζ γ_v(s)ds} E(v(0)) + (1 − e^{−k∫_0^T γ_v(s)ds}) (e^{−∫_0^T γ_v(s)ds}/(1 − e^{−∫_0^T γ_v(s)ds})) ∫_0^T e^{−∫_s^ζ γ_v(r)dr} f_v(s)ds + ∫_0^ζ e^{−∫_s^ζ γ_v(r)dr} f_v(s)ds, (9)

V(v(kT + ζ)) = e^{−2k∫_0^T γ_v(s)ds} e^{−2∫_0^ζ γ_v(s)ds} V(v(0)) + (1 − e^{−2k∫_0^T γ_v(s)ds}) (e^{−2∫_0^T γ_v(s)ds}/(1 − e^{−2∫_0^T γ_v(s)ds})) ∫_0^T e^{−2∫_s^ζ γ_v(r)dr} σ_v²(s)ds + ∫_0^ζ e^{−2∫_s^ζ γ_v(r)dr} σ_v²(s)ds. (10)

A sketch of the calculation of these two statistics is provided in the Appendix.
If f_v and σ_v² are continuous, both (9) and (10) show that the mean and variance are not periodic but asymptotically periodic: when k is sufficiently large, the first terms in (9)-(10) vanish. The formal definition of asymptotically periodic functions can be found in [11, Def. 2.1] and we skip the details here. Meanwhile, referring to (A.1) and (A.2) in the Appendix and setting t = kT + ζ, one verifies that the decorrelation time (8) is asymptotically periodic as well; we denote its periodic limit by

T_{γ_v}(ζ) := lim_{k→∞} T_v(v(kT + ζ)). (11)

We define an auxiliary discontinuous kernel function

K(γ, ζ, s) := e^{−∫_s^ζ γ(r)dr}/(1 − e^{−∫_0^T γ(r)dr}) for 0 ≤ s ≤ ζ, K(γ, ζ, s) := e^{−∫_s^{T+ζ} γ(r)dr}/(1 − e^{−∫_0^T γ(r)dr}) for ζ < s ≤ T, (12)

and summarize the asymptotic Gaussian statistics below by letting k → ∞ in (9)-(11).

Proposition 1. Assume that the parameters of (4) are periodic with the same period T and 0 < γ_min ≤ ∫_0^T γ_v(s)ds ≤ γ_max < ∞. Let v(t) be the solution of (4) with t = kT + ζ and k → ∞. Then the asymptotic statistics of v(t) are

lim_{k→∞} E(v(kT + ζ)) = ∫_0^T K(γ_v, ζ, s) f_v(s)ds, lim_{k→∞} V(v(kT + ζ)) = ∫_0^T K(2γ_v, ζ, s) σ_v²(s)ds,

together with the asymptotic decorrelation time T_{γ_v}(ζ) in (11).

As the above proposition shows, when t (or k) goes to infinity, the asymptotic Gaussian statistics of the random process v(t) display periodic profiles in which the unknown parameters are hidden. These asymptotic behaviors underpin the parameter identification approach from a single trajectory in the next subsection.
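The kernel representation of the asymptotic mean can be checked numerically in the constant-damping simplification, where everything is explicit. The following sketch (our own illustration, with hypothetical parameter values) compares the periodic-kernel integral against a direct evaluation of the mean integral at a large time t = kT + ζ.

```python
import numpy as np

# Sanity-check sketch (not the paper's code): for constant damping gamma and a
# periodic force f, the asymptotic mean in Proposition 1 is
#   lim E v(kT + zeta) = int_0^T K(gamma, zeta, s) f(s) ds
# with the discontinuous periodic kernel K from (12). We compare this with a
# direct evaluation of int_0^t e^{-gamma (t-s)} f(s) ds for a large t.

def trap(y, x):
    # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

gamma, T, zeta = 1.5, 1.0, 0.3
f = lambda s: 1.0 + 0.5 * np.sin(2.0 * np.pi * s / T)

def K(s):
    # K(gamma, zeta, s): periodic Green's kernel, discontinuous at s = zeta
    e = np.where(s <= zeta,
                 np.exp(-gamma * (zeta - s)),
                 np.exp(-gamma * (T + zeta - s)))
    return e / (1.0 - np.exp(-gamma * T))

s = np.linspace(0.0, T, 200_001)
asymptotic = trap(K(s) * f(s), s)

t = 50 * T + zeta                      # t = kT + zeta with k = 50
u = np.linspace(0.0, t, 2_000_001)
direct = trap(np.exp(-gamma * (t - u)) * f(u), u)
print(asymptotic, direct)              # the two values nearly coincide
```

The agreement (up to quadrature error) illustrates how the transient terms in (9) die out and only the periodic kernel integral survives.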

2.2. Parameter identification for the Langevin equations with periodic parameters. To exploit the asymptotic periodicity of the mean and variance in Proposition 1, we define two auxiliary operators

(F_{γ_v} f)(ζ) := ∫_0^T K(γ_v, ζ, s) f(s)ds, (13)

(G_{γ_v} g)(ζ) := ∫_0^T K(2γ_v, ζ, s) g(s)ds. (14)

The subscripts of both operators F_{γ_v} and G_{γ_v} indicate their dependence on the (unknown) damping parameter γ_v. When t (or k) is sufficiently large, we may approximate the mean and variance by

E(v(kT + ζ)) ≈ (F_{γ_v} f_v)(ζ), V(v(kT + ζ)) ≈ (G_{γ_v} σ_v²)(ζ).

The identification of the parameters f_v(t) and σ_v²(t) therefore amounts to solving a Fredholm integral equation of the first kind with a discontinuous kernel, provided the damping parameter γ_v(t) is known.
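In computations, the operators F_{γ_v} and G_{γ_v} become matrices once the kernel is sampled on a grid. The sketch below (our own discretization; grid sizes and parameter values are illustrative) builds such a kernel matrix for a sampled damping function and checks it against the constant-coefficient identity F_{γ_v}(const) = const/γ_v.

```python
import numpy as np

# Sketch: discretize (F_gamma f)(zeta) = int_0^T K(gamma, zeta, s) f(s) ds
# into a matrix on a uniform midpoint grid, so that mean-fitting becomes a
# linear system. gamma is given as sampled values on the grid.

def kernel_matrix(gamma_vals, T):
    n = len(gamma_vals)
    ds = T / n
    s = (np.arange(n) + 0.5) * ds              # midpoint grid on [0, T]
    G = np.cumsum(gamma_vals) * ds             # G[i] ~ int_0^{s_i} gamma
    Gtot = gamma_vals.sum() * ds               # int_0^T gamma
    K = np.empty((n, n))
    for i in range(n):                          # row: zeta = s[i]
        for j in range(n):                      # column: s = s[j]
            e = np.exp(-(G[i] - G[j])) if s[j] <= s[i] \
                else np.exp(-(Gtot + G[i] - G[j]))
            K[i, j] = e / (1.0 - np.exp(-Gtot))
    return K * ds                               # quadrature weights

# constant-damping check: F applied to a constant force f0 gives f0 / gamma
n, T, g0, f0 = 200, 1.0, 2.0, 3.0
F = kernel_matrix(np.full(n, g0), T)
print(np.max(np.abs(F @ np.full(n, f0) - f0 / g0)))   # small (O(ds))
```

The matrix for G_{γ_v} is obtained the same way with 2γ_v in place of γ_v.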
The key step is to recover first the time-dependent damping parameter γ_v(t) from the decorrelation time. Referring to Proposition 1, the damping parameter γ_v(t) is hidden in the exponential integral function e^{−∫_0^ζ γ_v(s)ds}, ζ ∈ [0, T). We thus define the auxiliary function

p_v(ζ) := e^{−∫_0^ζ γ_v(s)ds}. (15)

Multiplying both sides of (11) by p_v(ζ) and assuming that γ_v(ζ) is periodic and continuous on [0, T], we obtain an equivalent ordinary differential equation for the resulting product. Solving this ordinary differential equation expresses p_v(ζ) explicitly in terms of the asymptotic decorrelation time and the asymptotic variance.
Recalling the definition of p_v(ζ) in (15), we then arrive at a Volterra integral equation of the first kind,

T̃_{γ_v}(ζ) = ∫_0^ζ γ_v(s)ds, (16)

whose left-hand side T̃_{γ_v}(ζ) is a modified decorrelation time defined in (17), computable from the asymptotic Gaussian statistics of v(t).
As mentioned in the introduction, we aim at a parameter identification approach which fits the empirical Gaussian statistics obtained from a single trajectory of v(t); generating these empirical values requires some care. Owing to the asymptotic periodicity of the Gaussian statistics, for an observation interval [0, t] with t = kT + ζ we treat the segment of the trajectory on each interval [ℓT, (ℓ + 1)T), ℓ = 1, . . . , k − 1, as one sample of the random process {v(ζ), ζ ∈ [0, T)}. Choosing an integer K(< k) large enough, we let [0, KT) be the burn-in period and generate, for any fixed ζ ∈ [0, T), the empirical mean and variance

F_emp(ζ) := (1/(k − K)) Σ_{ℓ=K}^{k−1} v(ℓT + ζ), G_emp(ζ) := (1/(k − K)) Σ_{ℓ=K}^{k−1} (v(ℓT + ζ) − F_emp(ζ))². (18)

Pointwise calculation of the empirical decorrelation time is based on the trapezoid rule applied to T_v(v(t)) in (7): we sum the autocorrelation values ρ(t_i, t_j) computed from the observation samples v(t_i), v(t_j), i = 1, 2, . . ., j = i, i + 1, . . ., and multiply them by the time step dt, for instance

T_emp(t_i) ≈ Σ_{j=i}^{i+J} ρ(t_i, t_j) dt, (19)

with a (discrete) time step dt and a sufficiently large integer J such that (i + J)dt stays below the final observation time. The empirical modified decorrelation time T̃_emp(ζ), which approximates T̃_{γ_v}(ζ) in (17), is obtained by substituting the empirical decorrelation time T_emp into the formula (17) via the trapezoid rule.
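The segment-averaging construction of the empirical mean and variance can be sketched as follows (our own illustration; the toy signal and burn-in length are hypothetical).

```python
import numpy as np

# Sketch of the empirical statistics in (18): treat each period-long segment of
# a single long trajectory as one sample of {v(zeta), zeta in [0, T)}, discard
# a burn-in of K periods, then average across segments pointwise in zeta.

def empirical_mean_var(v, dt, T, burn_in_periods):
    n_per = int(round(T / dt))                 # samples per period
    k = len(v) // n_per                        # number of whole periods
    segs = v[:k * n_per].reshape(k, n_per)     # one row per period
    segs = segs[burn_in_periods:]              # drop the burn-in [0, K T)
    return segs.mean(axis=0), segs.var(axis=0)

# toy check on a deterministic periodic signal (no noise): the variance across
# segments vanishes and the mean reproduces one period of the signal
dt, T = 1e-3, 1.0
t = np.arange(0, 100.0, dt)
v = np.sin(2 * np.pi * t)
m, s2 = empirical_mean_var(v, dt, T, burn_in_periods=25)
print(np.max(s2))   # essentially zero
```

For an actual SDE trajectory the variance across segments is of course nonzero and converges to (10) as the number of retained periods grows.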
The main parameter identification approach is then summarized in the form of three Tikhonov regularization schemes,

γ̂_v = argmin_γ ‖∫_0^· γ(s)ds − T̃_emp‖²_{L²(0,T)} + α_0 ‖γ‖²_{L²(0,T)},
f̂_v = argmin_f ‖F_{γ̂_v} f − F_emp‖²_{L²(0,T)} + α_1 ‖f‖²_{L²(0,T)},
σ̂_v² = argmin_{σ²≥0} ‖G_{γ̂_v} σ² − G_emp‖²_{L²(0,T)} + α_2 ‖σ²‖²_{L²(0,T)}, (20)

where α_0, α_1 and α_2 are regularization parameters.
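Each of the three schemes in (20) is, after discretization, a standard Tikhonov least-squares problem. The sketch below shows the generic building block on a toy problem (our own illustration; the operator, data and regularization parameter are hypothetical).

```python
import numpy as np

# Sketch of one Tikhonov step in (20): given a discretized forward operator A
# (e.g. the kernel matrix of F_gamma) and a noisy right-hand side b (the
# empirical statistic), solve  min_f ||A f - b||^2 + alpha ||f||^2  via the
# equivalent augmented least-squares system.

def tikhonov(A, b, alpha):
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# smoke test on a well-posed toy problem: a near-identity operator with small
# data noise is recovered up to the noise level
rng = np.random.default_rng(1)
n = 50
f_true = np.sin(np.linspace(0, 2 * np.pi, n))
A = np.eye(n)
b = f_true + 0.01 * rng.normal(size=n)
f_rec = tikhonov(A, b, alpha=1e-3)
print(np.max(np.abs(f_rec - f_true)))   # small
```

For the ill-posed Fredholm operators of (13)-(14) the choice of alpha matters far more, which is exactly the difficulty discussed in Section 4.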
Remark 1. We comment on the ending time of the trajectory. Referring to the formulae for E(v(t)), V(v(t)) in (9)-(10), we first need to choose t (or k) large enough so that the factor e^{−k∫_0^T γ_v(s)ds} is sufficiently small. If we let δ_1 denote the tolerance, then we need K ≥ ln(1/δ_1)/∫_0^T γ_v(s)ds and select the burn-in period [0, KT]. Moreover, in order to generate the empirical Gaussian statistics with a certain accuracy, we need the strong law of large numbers. More precisely, referring to [4, Lem. 1] and letting V(v(t)) be finite, if we choose δ_2 to be another small tolerance for the (pointwise) difference between the exact and empirical means, we need k − K ∼ O(δ_2^{−1/(1/2−ε)}) with ε ∈ (0, 1/2) to fulfill the tolerance almost surely.

Remark 2. Notice that the forward operators F_{γ̂_v} and G_{γ̂_v} in (20) are generated by the reconstructed damping parameter γ̂_v instead of the exact one. The identification of the force and volatility parameters f_v, σ_v² in (20) is therefore a regularization scheme with a noisy operator and a noisy right-hand side, which has been well discussed in [18,27].
As one can observe, the parameter identification approach (20) fits the theoretical Gaussian statistics in Proposition 1 to the empirical values obtained from a single trajectory of v(t) with t sufficiently large. In particular, to realize the above minimization problems, we recover the parameters γ_v(ζ), f_v(ζ), σ_v²(ζ) sequentially, and the latter two recoveries are independent of each other. For extended discussion of Tikhonov regularization and of numerical schemes for non-negativity constraints, we refer to the monographs [10,17] and [28]. The consequences of the noisy operators F_{γ̂_v} and G_{γ̂_v} will be examined carefully in Subsection 2.3 and Section 4.
We now discuss the uniqueness of the parameter identification approach for periodic parameters. Note that, for (16) to be valid, we assume that γ_v(t) is continuous. In view of Proposition 1, we need to prove that the Fredholm integral equation of the first kind with the discontinuous kernel K is uniquely solvable given a proper right-hand side F_emp(ζ). Since the kernel function K is square integrable, the operator F_{γ_v} is compact, and we provide the uniqueness result below.
Theorem 2.2. Let the assumptions in Proposition 1 be valid and assume moreover that γ_v is continuous and f_v, σ_v² ∈ L²(0, T). Then the parameters γ_v, f_v and σ_v² are uniquely identifiable if the mean and variance of the initial state v_0 are known.
Proof. Let γ_v(ζ) be continuous and satisfy (17). The uniqueness of γ_v(ζ) follows from classical arguments for Volterra integral equations of the first kind.
To prove the unique identifiability of f_v in (21), assume there exist f_1 and f_2 such that (F_{γ_v} f_1)(ζ) = (F_{γ_v} f_2)(ζ) for all ζ ∈ [0, T]. Setting f̃ := f_1 − f_2, the structure of the kernel K yields ∫_{ζ_1}^{ζ_2} f̃(s)ds = 0 for any 0 ≤ ζ_1 < ζ_2 ≤ T. We thus conclude f̃ = 0, which yields the uniqueness of f_v. The uniqueness of σ_v²(ζ) is proved analogously in view of Proposition 1 and the definition (14).
Though the statistics of the initial state v_0 are assumed to be known in the above uniqueness theorem, in actual calculations they have little influence on the reconstructed parameters, since their contributions decay exponentially once a sufficiently large burn-in period is chosen.
If the damping parameter γ_v is a constant, the discontinuous kernel function simplifies to

K(γ_v, ζ, s) = e^{−γ_v(ζ−s)}/(1 − e^{−γ_v T}) for 0 ≤ s ≤ ζ, K(γ_v, ζ, s) = e^{−γ_v(T+ζ−s)}/(1 − e^{−γ_v T}) for ζ < s ≤ T, (22)

which yields the following corollary, and the parameter identification approach (20) reduces to approximating some constants accordingly.
Corollary 1. Assume that the damping parameter γ_v is a positive constant and that the parameters {f_v(t), σ_v²(t)} are periodic functions with the same period T. Let v(t) be the solution of (4). Setting t = kT + ζ and letting k → ∞, the asymptotic mean and variance of v(t) are

lim_{k→∞} E(v(kT + ζ)) = ∫_0^T K(γ_v, ζ, s) f_v(s)ds, lim_{k→∞} V(v(kT + ζ)) = ∫_0^T K(2γ_v, ζ, s) σ_v²(s)ds,

with the simplified kernel (22). If, in addition, f_v and σ_v² are constants, then the asymptotic statistics of v(t) reduce to

lim_{t→∞} E(v(t)) = f_v/γ_v, lim_{t→∞} V(v(t)) = σ_v²/(2γ_v), lim_{t→∞} T_v(v(t)) = 1/γ_v.
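The constant-coefficient limits can be checked directly by Monte Carlo. The sketch below (our own illustration with hypothetical parameter values) verifies the stationary mean and variance from a single long Euler-Maruyama trajectory.

```python
import numpy as np

# Monte Carlo sketch of the constant-coefficient limits in Corollary 1: for
#   dv = (-gamma v + f) dt + sigma dW
# the stationary mean is f/gamma and the stationary variance is
# sigma^2 / (2 gamma) (and the decorrelation time is 1/gamma). We check the
# mean and variance from one long Euler-Maruyama trajectory.

gamma, f, sigma, dt, n = 2.0, 1.0, 0.8, 1e-3, 2_000_000
rng = np.random.default_rng(2)
noise = sigma * np.sqrt(dt) * rng.normal(size=n - 1)
v = np.empty(n)
v[0] = f / gamma                      # start at the stationary mean
for i in range(n - 1):
    v[i + 1] = v[i] + (-gamma * v[i] + f) * dt + noise[i]
burn = n // 10                        # burn-in, as in the schemes above
m, var = v[burn:].mean(), v[burn:].var()
print(m, f / gamma)                   # close to 0.5
print(var, sigma ** 2 / (2 * gamma))  # close to 0.16
```

This is exactly the degenerate situation in which (20) collapses to estimating three constants from three scalar statistics.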

2.3. Intermittent instability and regularization with noisy operators.
In the present work, the continuous damping parameter γ_v switches its sign between positive and negative values, which produces the intermittently stable and unstable regimes of v(t). Note that the identification of the force and volatility parameters f_v(ζ) and σ_v²(ζ) in (20) depends heavily on the reconstructed damping parameter γ̂_v(ζ). The intermittently stable and unstable regimes yield different performance, which we now address.
More precisely, in view of the equation (21), the forward operator F_{γ_v} has the kernel function K(γ_v, ζ, s) defined in (12). A perturbation between the reconstructed damping parameter γ̂_v(ζ) and the exact one γ_v(ζ) therefore acts differently in the intermittently stable (γ_v > 0) and unstable (γ_v < 0) regimes. One easily verifies that the kernel function K(γ_v, ζ, s) suffers only a small misfit if γ_v is positive and is perturbed slightly, whereas the misfit becomes very large if γ_v is negative under a similarly small perturbation, because the perturbation is amplified exponentially. Recalling the parameter identification approach (20), the noisy kernel function K(γ̂_v, ζ, s) yields the noisy operators F_{γ̂_v} and G_{γ̂_v}. In general, when one encounters a noisy operator, the error between the exact and noisy operators must be small enough to obtain an accurate reconstruction [18,27]. This requirement may be violated in the unstable regime when γ_v becomes negative. We provide extensive numerical illustrations of this conclusion in Section 4.
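The asymmetry between the two regimes is easy to see numerically on the exponential factor of the kernel. The sketch below (our own illustration; the damping values and perturbation size are hypothetical) compares the kernel misfit under the same small perturbation of a constant damping parameter.

```python
import numpy as np

# Sketch of the sensitivity argument: perturb a constant damping gamma by a
# small delta and compare the kernel factor e^{-gamma (zeta - s)} over one
# period. For gamma > 0 the misfit stays mild; for gamma < 0 the same
# perturbation is amplified exponentially over the integration range.

def kernel_misfit(gamma, delta, T=1.0):
    tau = np.linspace(0.0, T, 201)          # zeta - s ranges over [0, T]
    exact = np.exp(-gamma * tau)
    noisy = np.exp(-(gamma + delta) * tau)
    return np.max(np.abs(noisy - exact))

print(kernel_misfit(+2.0, 0.05))   # stable regime: small misfit
print(kernel_misfit(-2.0, 0.05))   # unstable regime: much larger misfit
```

With these illustrative values the unstable-regime misfit is roughly forty times larger, which is the mechanism behind the failure of (20) in the strongly unstable regime.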
3. Parameter identification in indirectly observed stochastic differential equations. In this section, we investigate the parameter identification approach when v(t) is observed indirectly through another random process. A prototype coupled linear stochastic differential system (the SPEKF-A model in [12]) containing the SDE of the unknown process v(t) and that of the observed random process u(t) is

dv = (−γ_v(t)v + f_v(t))dt + σ_v(t)dW_v(t), (23)
du = (−γ_u(t)u + v(t))dt + σ_u(t)dW_u(t). (24)

The initial states v_0, u_0 are independent of each other, and W_v, W_u are two independent unit Brownian motions which are also independent of both initial states. The parameter identification problem in this section is to recover the unknown parameters {γ_v, f_v, σ_v} from the indirect observation u(t), assuming that the parameters {γ_u, σ_u} in (24) are known. Referring to the minimization approach of the previous section, we need to derive the asymptotic Gaussian statistics of u(t) explicitly in terms of the unknown parameters. Similarly to the previous section, we assume that ∫_0^T γ_v(s)ds > 0 and ∫_0^T γ_u(s)ds > 0 are finite, so that both random processes u(t), v(t) are asymptotically stable in a suitable sense. With these periodic parameters, the pathwise solutions of (23)-(24) follow as in (5). The proposition for the asymptotic mean and variance of u(t) is summarized below, where we recall the kernel function K(γ, ζ, s) in (12); the proof can be found in the Appendix.

Proposition 2. Assume that the parameters of (23)-(24) are all periodic functions with the same period T and satisfy the positivity assumptions above. Let v(t), u(t) be the solutions of (23)-(24). Setting t = kT + ζ and letting k → ∞, the asymptotic mean of u(t) is

lim_{k→∞} E(u(kT + ζ)) = ∫_0^T K(γ_u, ζ, s) (∫_0^T K(γ_v, s, r) f_v(r)dr) ds,

and the asymptotic variance admits an analogous representation.

Similarly to Proposition 1, the mean and variance of u(t) are asymptotically periodic, and their limits are Fredholm integral equations with kernel functions depending on the damping parameters γ_u and γ_v. Since γ_u is assumed to be known, we need to reveal the unknown parameter γ_v(t) from the asymptotic decorrelation time of u(t), just as in the previous section.
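The coupled forward problem can be sketched as two interleaved Euler-Maruyama updates. This is our own illustration of the coupling form described above (the hidden process feeding the drift of the observed one); all coefficient choices are hypothetical.

```python
import numpy as np

# Sketch of the coupled system (23)-(24): the hidden process v enters the
# drift of the observed process u. gamma_u, sigma_u play the role of the
# "known" parameters; all numerical values are illustrative.

def simulate_coupled(gv, fv, sv, gu, su, t_end=20.0, dt=1e-3, seed=3):
    rng = np.random.default_rng(seed)
    n = int(round(t_end / dt))
    v = np.zeros(n + 1)
    u = np.zeros(n + 1)
    sq = np.sqrt(dt)
    for i in range(n):
        t = i * dt
        v[i + 1] = v[i] + (-gv(t) * v[i] + fv(t)) * dt + sv(t) * sq * rng.normal()
        u[i + 1] = u[i] + (-gu * u[i] + v[i]) * dt + su * sq * rng.normal()
    return u, v

u, v = simulate_coupled(lambda t: 2.0 + np.cos(2 * np.pi * t),  # damping gamma_v(t)
                        lambda t: 1.0,                           # force f_v(t)
                        lambda t: 0.5,                           # volatility sigma_v(t)
                        gu=1.0, su=0.5)
print(u.shape, v.shape)   # observed and hidden trajectories
```

Only the u-trajectory is handed to the identification schemes of this section; v serves as the hidden ground truth.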
To this end, we incorporate the asymptotic periodicity of the mean and variance in Proposition 2 and define auxiliary functions, among them h(ζ), which is one of the integral functions appearing in (G_{γ_u,γ_v} σ_v²)(ζ) as in (25). It is important to observe that if the functions (G_{γ_u,γ_v} σ_v²)(ζ), γ_u and σ_u² are all known, then h(ζ) can be obtained by solving a Fredholm integral equation of the first kind. We then establish the asymptotic decorrelation time of u(t) by referring to the definitions of (G_{γ_u,γ_v} σ_v²)(ζ) and h(ζ) in (25).
Because of the indirect observation, the recovery of the damping parameter γ_v needs additional effort to calculate the auxiliary functions (G_{γ_u,γ_v} σ_v²)(ζ) and h(ζ), which is attainable once one derives the asymptotic variance of u(t) and assumes that γ_u, σ_u² are known. The consequential recovery of the unknown parameters γ_v, f_v and σ_v² is then straightforward by following the minimization approach (20) and solving a double-Fredholm (or triple-Fredholm) integral equation of the first kind. We provide some details below. Letting F_{u,emp}(ζ), G_{u,emp}(ζ) and T_{u,emp}(ζ) denote the empirical mean, variance and decorrelation time of the random process u(t), we define J_{u,emp} to be the empirical value of the modified decorrelation time J_{γ_v}(ζ) defined in (27).
The parameter identification approach for the indirect observation is summarized in the form of three Tikhonov regularization schemes (28), structured as in (20) but built from the operators and empirical statistics of u(t). We emphasize again that the identification of the force and volatility parameters f_v, σ_v² depends on the reconstructed damping parameter γ̂_v, so that the noisy operators F_{γ_u,γ̂_v}, G_{γ_u,γ̂_v} appear naturally in the above parameter identification approach. Thus, the intermittent instability enhances the ill-posedness when γ_v becomes negative, and we examine this issue numerically in Section 4. The uniqueness of the parameter identification approach (28) can be established analogously to Theorem 2.2; we skip the details.
We also provide a consequence of Proposition 2 when both damping parameters γ_u and γ_v are positive constants, or when all the parameters are constants; the discontinuous kernel function K defined in (22) is needed here.

Corollary 2. Assume that the parameters of (23)-(24) are all periodic functions with the same period T and that {γ_v, γ_u} are positive constants. Let v(t), u(t) be the solutions of (23)-(24), t = kT + ζ and k → ∞. Then the asymptotic mean of u(t) is a double-Fredholm integral of f_v with the constant-damping kernel (22), and the asymptotic decorrelation time admits a closed form. If, in addition, all parameters are constants and either σ_u > 0 or σ_v > 0, then as t tends to ∞ the asymptotic statistics of u(t) are given in closed form, in particular

lim_{t→∞} E(u(t)) = f_v/(γ_u γ_v),

with corresponding closed-form limits of the variance and decorrelation time involving γ_u and γ_v. One interesting observation from Corollary 2 is that the parameter identification approach fails to identify the three constant parameters uniquely if σ_v = 0.

Inverse Problems and Imaging, Volume 13, No. 3 (2019), 513-543.
4. Numerical tests. In this section, we illustrate the performance of the proposed parameter identification approach for the stochastic differential equations. As discussed in the introduction, one important issue is the sign of the damping parameter γ_v, which has an essential influence on the identification of the unknown parameters. We distinguish three regimes: if the damping parameter γ_v is strictly positive on the whole period [0, T], we call it a stable regime. If γ_v(ζ) takes negative values on a "small" subset of [0, T] and the overall (positive) integral is comparably large, we call it a weakly unstable regime. If, on the other hand, γ_v(ζ) takes negative values on a "large" subset of [0, T] and the overall (positive) integral is comparably small, we call it a strongly unstable regime.
In the following Table 1, we present the three regimes with different choices of the damping parameter γ_v; all parameters have the same period T equal to 1.
In all examples, the forward SDEs are integrated by the Euler-Maruyama method with discretization time step dt = 10⁻³. We also tested some implicit numerical methods and the performance is similar. The overall observation interval is chosen so that the first three quarters form the burn-in period and the empirical values are generated from the remaining quarter of the observation data. To generate the empirical mean and variance, we recall the approximation formulae in (18). The empirical decorrelation time is more sophisticated, and we briefly describe it following (19). Recalling the definition in (7), at time t_i we calculate the autocorrelation ρ(t_i, t_i + (j − i)∆t), which can be realized, for instance, in Matlab with the autocorr function; the empirical decorrelation time is then approximated by the trapezoid formula.

4.1. Forward solutions of the three regimes. With the choices of the damping γ_v(ζ), the force f_v(ζ) and the volatility σ_v²(ζ) in Table 1, we provide pathwise solutions v(t) for the three regimes in Figure 1. Each panel presents a segment of v(t) for t ∈ [100, 102], whereas the long path of the solution v(t) for t ∈ [0, 20000] is presented in the inset of each panel. As one can observe, the stable and weakly unstable regimes have quite stable pathwise solutions and the amplitude of v(t) is small. In the strongly unstable regime, by contrast, the pathwise solution v(t) has large amplitude and the intermittent instability can be clearly observed. To further illustrate the difference among the three regimes in Table 1, we collect empirical values from 5 different trajectories of the random process v(t) for each regime and present the empirical Gaussian statistics in Figure 2. The empirical values of the stable and weakly unstable regimes behave similarly and mimic the exact ones accurately; in particular, the empirical values converge to the exact ones as the sample number increases.
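The lag-sum construction of the empirical decorrelation time can be sketched in plain NumPy as a stand-in for the Matlab autocorr route (our own illustration; the OU test case and all values are hypothetical).

```python
import numpy as np

# Sketch of the empirical decorrelation time (19): estimate the autocorrelation
# at lags j*dt from one long trajectory and integrate it by the trapezoid rule.
# For a stationary OU process dv = -gamma v dt + sigma dW the exact
# decorrelation time is 1/gamma, which we use as a check.

def decorrelation_time(x, dt, max_lag):
    x = x - x.mean()
    var = x.var()
    rho = np.array([np.mean(x[:x.size - j] * x[j:]) / var
                    for j in range(max_lag)])            # empirical autocorrelation
    return float(np.sum(rho[1:] + rho[:-1]) * dt / 2.0)  # trapezoid rule

gamma, sigma, dt, n = 2.0, 1.0, 1e-3, 1_000_000
rng = np.random.default_rng(4)
noise = sigma * np.sqrt(dt) * rng.normal(size=n - 1)
v = np.empty(n)
v[0] = 0.0
for i in range(n - 1):
    v[i + 1] = v[i] * (1.0 - gamma * dt) + noise[i]
T_emp = decorrelation_time(v[n // 10:], dt, max_lag=3000)
print(T_emp)   # close to 1/gamma = 0.5
```

The lag cutoff must cover several multiples of the true decorrelation time; cutting it too early biases the estimate downward.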
On the other hand, the strongly unstable regime provides unstable approximations of the empirical values, especially the empirical mean and variance in the bottom row of Figure 2. Such instability may further cause errors in the parameter identification approach.

We further extend our numerical tests to the indirect observation (23)-(24) and present the results in Figure 3. To simplify the discussion, we choose the known parameters in (24) as in (30). Once again the stable and weakly unstable regimes provide similar behaviors of the observation u(t), since the unobserved random process v(t) behaves similarly as well. Nevertheless, the strongly unstable regime performs differently because the large amplitude of v(t) enhances the force term and the noise in u(t) is comparatively weakened. As for the direct observation, we also present the empirical Gaussian statistics of 5 different trajectories of u(t) for each regime in Figure 4, where one can observe that the stable and weakly unstable regimes provide reasonable empirical values but the strongly unstable regime provides poor empirical mean and variance.

Figure 3. Trajectories with the parameters in Table 1 and (30). Upper row: v(t) and u(t) (stable regime); middle row: v(t) and u(t) (weakly unstable regime); bottom row: v(t) and u(t) (strongly unstable regime). Each panel presents a segment of v(t) or u(t) for t ∈ [100, 102], whereas the long path of the solutions for t ∈ [0, 40000] is presented in the inset of each panel.

4.2. Parameter identification approach for direct observation. In this subsection, we show the performance of the parameter identification approach (20) for the direct observation of v(t). Referring to Figure 2, the empirical values may vary between trajectories, especially in the strongly unstable regime. In what follows, we choose empirical values which are close to the exact ones and verify the performance of the parameter identification approach.
Referring to the Tikhonov regularization schemes in (20), we encounter two main difficulties. The first is the identification of σ_v², where a non-negativity constraint is necessary; we refer to [28, Ch. 9] for extended discussion. The second is the choice of the regularization parameters α_0, α_1, α_2. In our numerical tests, the regularization parameter α_0 is chosen heuristically by fixing α_0 = 0.05. The regularization parameters α_1, α_2, however, are hard to determine because of the noisy operators F_{γ̂_v}, G_{γ̂_v}: in particular, we cannot estimate the error between the exact and reconstructed damping parameters γ_v, which would be crucial for determining the error between the exact and noisy operators. In the current work, we account for the influence of the noisy operators and choose the regularization parameters heuristically as α_1 = α_2 = 0.5, ten times larger than α_0.

We provide quantitative information on the parameter identification approach (20) in Table 2. Columns 2-4 present the relative L²-errors between the exact and empirical Gaussian statistics of v(t). The final three columns are the relative L²-errors between the exact and reconstructed parameters γ_v, f_v, σ_v², respectively. As one can observe, the stable regime provides quite good reconstructions since the empirical Gaussian statistics mimic the original ones accurately and stably. The strongly unstable regime, on the other hand, yields an accurate value of the empirical decorrelation time only.

4.3. Parameter identification approach for indirect observation. The empirical values are generated from u(t), t ∈ [30000, 40000]. Figure 6 presents the performance of the parameter identification approach (28). Similar to Figure 5, from top to bottom, each row shows the numerical results for the stable, weakly unstable and strongly unstable regimes. From left to right, each column collects the exact and reconstructed damping parameter γ_v, the force parameter f_v and the volatility parameter σ_v².
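The non-negativity constraint on σ_v² can be handled by a projected gradient iteration for the Tikhonov functional. The sketch below is a simple stand-in for the constrained solvers discussed in [28, Ch. 9] (our own illustration; the operator, data and parameters are hypothetical).

```python
import numpy as np

# Sketch of the non-negativity constraint for sigma_v^2: solve the Tikhonov
# problem  min_{x >= 0} ||A x - b||^2 + alpha ||x||^2  by projected gradient
# descent, clipping at zero after each gradient step.

def tikhonov_nonneg(A, b, alpha, iters=5000):
    L = np.linalg.norm(A, 2) ** 2 + alpha       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + alpha * x
        x = np.maximum(x - grad / L, 0.0)       # gradient step + projection
    return x

# toy check: a nonnegative target with small data noise on a near-identity
# operator is recovered, and the constraint is respected exactly
rng = np.random.default_rng(5)
n = 40
x_true = np.maximum(np.sin(np.linspace(0, 2 * np.pi, n)), 0.0)
A = np.eye(n)
b = x_true + 0.01 * rng.normal(size=n)
x_rec = tikhonov_nonneg(A, b, alpha=1e-3)
print(np.min(x_rec))   # never negative
```

Projection keeps every iterate feasible, so the returned σ̂_v² is a valid diffusion coefficient by construction.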
The red solid lines are the exact values of these parameters and the blue dashed lines are the reconstructed ones. The insets in each panel show the exact and empirical Gaussian statistics of the observed variable u(t) with red solid and blue dashed lines, respectively. Table 3 collects the quantitative information on the parameter identification approach (28). As in the previous subsection, columns 2-4 present the relative L²-errors between the exact and empirical Gaussian statistics of the observation u(t), and the final three columns are the relative L²-errors between the exact and reconstructed parameters γ_v, f_v, σ_v², respectively. We observe that with more observation samples, the empirical Gaussian statistics mimic the exact ones more accurately than the relative errors in Table 2 suggest. With this better approximation of the empirical values, we obtain a comparably accurate reconstruction of the damping parameter γ_v, whereas the reconstructed force and volatility parameters are less accurate because of the indirect observation.
To further improve the accuracy of the reconstructed parameters, one needs to increase the number of samples in the stable and weakly unstable regimes. In the strongly unstable regime, however, the sample number must be extremely large to reach a given accuracy of the force and volatility parameters, since the accuracy of the reconstructed damping parameter must first be enhanced.

Prediction with the reconstructed parameters.
Finally, we show the predictive performance of the reconstructed parameters. Inserting the reconstructed parameters for each regime, we run 5 different trajectories and then calculate their empirical Gaussian statistics. Though the reconstructed parameters may have some misfit with respect to the exact ones, the empirical Gaussian statistics of the trajectories are quite similar to those of the trajectories generated by the exact parameters. For the sake of simplicity, we only present the strongly unstable regime for both direct and indirect observation in Figure 7. One can compare its performance with the bottom rows of Figures 2 and 4, respectively.
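The prediction step above can be sketched as follows: simulate the scalar SDE with the (reconstructed) periodic parameters by Euler–Maruyama and accumulate the empirical Gaussian statistics over an ensemble of trajectories. The function and its default arguments are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_ensemble(gamma, f, sigma, T=1.0, n_periods=50,
                      dt=1e-3, n_traj=5, v0=0.0, seed=0):
    """Euler-Maruyama for dv = (-gamma(t) v + f(t)) dt + sigma(t) dW
    with T-periodic coefficients gamma, f, sigma (callables of t).
    Returns the empirical mean and variance over n_traj trajectories
    at every time step (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(n_periods * T / dt))
    t = 0.0
    v = np.full(n_traj, float(v0))
    means, variances = [], []
    for _ in range(n_steps):
        # Brownian increments with variance dt, one per trajectory
        dW = rng.normal(0.0, np.sqrt(dt), size=n_traj)
        v = v + (-gamma(t) * v + f(t)) * dt + sigma(t) * dW
        t += dt
        means.append(v.mean())
        variances.append(v.var())
    return np.array(means), np.array(variances)
```

With constant coefficients γ_v ≡ γ, f_v ≡ 0, σ_v ≡ σ, the empirical variance should equilibrate near the stationary value σ²/(2γ), which gives a quick sanity check on the scheme.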

5. Conclusion and future plans. In this paper, we proposed and investigated a parameter identification approach for SDEs. By introducing the decorrelation time, we could reconstruct all three parameters in the SDEs by minimizing the difference between the theoretical Gaussian statistics and their empirical counterparts over one entire (in)directly observed trajectory of the SDEs. Such an approach may be extended to some coupled nonlinear stochastic differential equations when the unobserved SDE is Gaussian, for instance, the nonlinear SPEKF-M model in [12]. We mention that an MCMC approach has been proposed in [7], where all (constant) parameters in the above coupled system are reconstructed. By deriving the Gaussian statistics, we may adopt the proposed parameter identification approach there as well; we will report these results in a future work. At the same time, we did not touch upon the stability or error estimates of our parameter identification approach. Notice that, though the parameter identification approach is deterministic, we recover a Gaussian distribution at each time ζ ∈ [0, T). To quantify the difference between two Gaussian distributions, we could consider a distance, for instance the total variation or Hellinger distance discussed in [26]. On the other hand, we shall also refer to the quantification of data assimilation approaches in [5], where information-based functionals, i.e. the differential entropy, the relative entropy and the mutual information, are considered. A systematic investigation of stability and error estimates is left for future work.
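For two univariate Gaussians, the Hellinger distance mentioned above admits a closed form; we state the standard identity in generic notation (μ_i, σ_i are not the paper's symbols):

```latex
H^2\!\left(\mathcal{N}(\mu_1,\sigma_1^2),\,\mathcal{N}(\mu_2,\sigma_2^2)\right)
 \;=\; 1 \;-\; \sqrt{\frac{2\,\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}}\;
 \exp\!\left(-\frac{(\mu_1-\mu_2)^2}{4\,(\sigma_1^2+\sigma_2^2)}\right).
```

Quantifying the misfit between the exact and recovered distributions at each time ζ would then reduce to inserting the corresponding means and variances into this formula.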
Appendix. We present below some useful calculations and proofs which are referred to in the text.
A sketched calculation of the Gaussian statistics (9) and (10). Set E(v(t)) := E_1(t) + E_2(t) and V(v(t)) := V_1(t) + V_2(t), denoting by E_i, V_i (i = 1, 2) the i-th term on the right-hand side of E(v(t)), V(v(t)) in (6), and calculate these four terms respectively. With t = kT + ζ and Lemma 2.1, we obtain the asymptotics of E_1 and V_1; similarly, we also derive those of E_2 and V_2.
Periodicity of the decorrelation time and (11) for T_v(v(t)). Recalling the decorrelation time T_v(v(t)) defined in (8), we obtain its periodicity, where the second equality holds true by substituting s by s + kT. Moreover, based on Lemma 2.1, the identity (11) holds for any ζ ∈ [0, T), where the last equality holds true by substituting ζ + τ by τ and changing the bounds of the integral.
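For reference, assuming the hidden process solves the scalar linear SDE dv = (−γ_v(t)v + f_v(t))dt + σ_v(t)dW(t) (our reading of the model; the notation may differ slightly from (6)), the Gaussian statistics split as

```latex
\mathbb{E}(v(t)) =
 \underbrace{e^{-\int_0^t \gamma_v(s)\,ds}\,\mathbb{E}(v(0))}_{E_1(t)}
 + \underbrace{\int_0^t e^{-\int_s^t \gamma_v(r)\,dr}\, f_v(s)\,ds}_{E_2(t)},
\qquad
\mathbb{V}(v(t)) =
 \underbrace{e^{-2\int_0^t \gamma_v(s)\,ds}\,\mathbb{V}(v(0))}_{V_1(t)}
 + \underbrace{\int_0^t e^{-2\int_s^t \gamma_v(r)\,dr}\, \sigma_v^2(s)\,ds}_{V_2(t)}.
```

With t = kT + ζ and T-periodic coefficients, E_1 and V_1 decay in the stable regime (∫_0^T γ_v(s) ds > 0), while E_2 and V_2 converge to T-periodic limits as k → ∞.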
Proof of Proposition 2. Referring to the direct observation, we write the statistics of v(t) with periodic parameters. Incorporating the periodicity of the parameters, we set t = kT + ζ with some positive integer k and a remainder ζ ∈ [0, T). The mean value of u(t) is decomposed into three terms E_1(t) − E_3(t), which we estimate by replacing t with kT + ζ and implementing the calculation in (A.2). By Lemma 2.1, we derive the limits of the first two terms; the estimation of the third term is more sophisticated, and its final term is independent of k. We thus prove an estimate which yields lim_{k→∞} V_{2,1}(k, ζ) = 0. The second term V_{2,2}(k, ζ) is decomposed into 8 terms involving the factor (1 − e^{−k∫_0^T (γ_u+γ_v)(s) ds}). Noticing that the last term V_2^8(k, ζ) does not depend on the index k, we let k → ∞ and, assuming ∫_0^T γ_v(s) ds = ∫_0^T γ_u(s) ds, derive the following auxiliary asymptotic estimates for η_5(k), η_6(k) in V_2^5(k, ζ) − V_2^6(k, ζ). Similarly, we derive the asymptotic estimates for η_1(k) − η_4(k). The proposition is then proved by combining the asymptotic estimates of V_1 and V_2.
Proof of Proposition 3. We assume the following equality by setting t = kT + ζ; its validity will be verified later. Then the key step for calculating the asymptotic decorrelation time is deriving the term lim_{k→∞} R_u(kT + ζ, kT + ζ + τ). By the definition of covariance and the Itô isometry, we obtain
R_u(t, t + τ) = e^{−∫_t^{t+τ} γ_u(s) ds} V(u(t)) + R_2(t, τ).
Meanwhile, the second term R_2(t, τ), together with the formula for R_v(t, t + τ) and the periodicity of γ_v, γ_u, yields its asymptotics. Recalling the asymptotic behavior of the variance V(v(t)) in Proposition 1, we derive the limit; to derive the integral in the second term of the above equality, we use a calculation similar to (A.2). Finally, we show that the equality (**1) holds true. Referring to the asymptotic variance of u(t), we know that (G_{γ_u,γ_v} σ_v²)(ζ) = 0 if and only if σ_u ≡ 0 and σ_v ≡ 0. In that case, the coupled SDE system degenerates into the deterministic setting, where one cannot identify the unknown parameters of v(t). For the sake of simplicity, we assume that both σ_u² and σ_v² are positive, which yields that the asymptotic variance function (G_{γ_u,γ_v} σ_v²)(ζ) is positive for any ζ ∈ [0, T). The second equality holds true by (**2) and the property of limits.
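The leading term of R_u above reflects the general covariance structure of linear SDEs with damping; a generic sketch, consistent with the first displayed identity of the proof but not tied to the paper's specific coupling, reads

```latex
\operatorname{Cov}\!\left(u(t),\,u(t+\tau)\right)
 \;=\; e^{-\int_t^{t+\tau} \gamma_u(s)\,ds}\,\mathbb{V}(u(t)) \;+\; R_2(t,\tau),
 \qquad \tau \ge 0,
```

where R_2(t, τ) collects the contribution of the hidden process v entering through the coupling; for an uncoupled scalar linear SDE, R_2 ≡ 0, since the future noise increments are independent of u(t) by the Itô isometry.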