Mean-square almost automorphic solutions for stochastic differential equations with hyperbolicity

In the setting of mean-square exponential dichotomies, we study the existence and uniqueness of mean-square almost automorphic solutions of non-autonomous linear and nonlinear stochastic differential equations.

1. Introduction. To generalize almost periodicity, the notion of almost automorphy was introduced by Bochner [2] in 1962. As shown in [10], a continuous function f : R → X, where X is a Banach space, is said to be almost automorphic if every sequence {t_n} of real numbers contains a subsequence, still denoted by {t_n}, such that the limit g(t) := lim_{n→∞} f(t + t_n) is well defined for each t ∈ R and lim_{n→∞} g(t − t_n) = f(t) for all t ∈ R. Veech [32] presented an example of a function which is almost automorphic but not almost periodic.
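A classical concrete example (standard in the literature on almost automorphy, not taken from the present paper) may help fix ideas; the function below is almost automorphic but not almost periodic, because the denominator can be made arbitrarily small, so f fails to be uniformly continuous:

```latex
% Levitan-type example: almost automorphic but not almost periodic.
% The infimum of 2 + \cos t + \cos(\sqrt{2}\,t) over t \in \mathbb{R} is 0,
% but it is never attained, so f is continuous yet not uniformly continuous.
f(t) \;=\; \sin\!\left(\frac{1}{\,2 + \cos t + \cos\bigl(\sqrt{2}\,t\bigr)\,}\right),
\qquad t \in \mathbb{R}.
```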
During the last few decades, the notion of almost automorphy has been used in the study of differential equations. In 1981 Johnson [17] constructed a first-order scalar almost periodic equation which has an almost automorphic solution but no almost periodic solutions. In the monograph [28], Shen and Yi gave examples in which the dynamics generated by almost periodic differential equations are almost automorphic but not almost periodic. Some recent results on almost automorphy can be found in [3,4,5,11,21] and the references therein. A comprehensive account of the basic theory of almost automorphy and its applications can be found in the monographs [10,25].
Recently, stochastic differential equations (SDEs for short) have been extensively studied from different aspects because of their important applications in many applied sciences. The books [1,19,22] present several topics on SDEs, one of which is the study of mean-square dynamical behaviors. This topic has become attractive (see [7,16,18,20,30,31]) because mean-square dynamical behaviors are essentially deterministic, with the stochasticity built into, or hidden in, the time-dependent state spaces. In [6,14], the concept of mean-square almost automorphy was introduced for stochastic processes, and the existence, uniqueness and asymptotic stability of mean-square almost automorphic solutions were established for some linear and nonlinear stochastic differential equations.
In this paper we investigate mean-square almost automorphic solutions for nonautonomous stochastic differential equations with hyperbolicity. In the setting that the linear equation

dx(t) = A(t)x(t)dt + G(t)x(t)dω(t), t ∈ R, (1)

where A(t) = (A_{ij}(t))_{n×n} and G(t) = (G_{ij}(t))_{n×n} are both almost automorphic matrices, admits a mean-square exponential dichotomy, we discuss the existence of a unique mean-square almost automorphic solution of the non-autonomous linear SDE

dx(t) = (A(t)x(t) + f(t))dt + (G(t)x(t) + g(t))dω(t), t ∈ R, (2)

where f, g are both mean-square almost automorphic vector-valued processes. Moreover, we also prove the existence and uniqueness of the mean-square almost automorphic solution of the nonlinear SDE

dx(t) = (A(t)x(t) + f(t, x(t)))dt + (G(t)x(t) + g(t, x(t)))dω(t), t ∈ R. (3)

This paper is organized as follows. In Section 2, we present some preliminary results. In Section 3, we establish the existence and uniqueness of mean-square almost automorphic solutions of the linear SDE (2) and the nonlinear SDE (3). In the proofs of our main results we use the definitions and properties of product integration for SDEs, which are presented in the Appendix for the convenience of the reader.
2. Preliminaries. Throughout this paper we use A^T to denote the transpose of a matrix or vector A, and let ω(t) = (ω_1(t), . . . , ω_n(t))^T be an n-dimensional Brownian motion. Let ‖·‖ denote the Euclidean norm in R^n or the operator norm in R^{n×m}.
Let C(I; R^{n×m}) be the set of all continuous R^{n×m}-valued functions defined on an interval I. In addition, let (Ω, F, P) denote a probability space, where Ω is the sample space, F is its Borel σ-algebra and P is the Wiener measure. Let L²(Ω, R^n) be the family of all R^n-valued random variables x such that E‖x‖² := ∫_Ω ‖x‖² dP < ∞, equipped with the norm ‖x‖₂ := (E‖x‖²)^{1/2}.
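As a quick numerical illustration of this norm (a sketch, not part of the paper; the standard normal variable below is just a stand-in for a concrete element of L²(Ω, R)), E‖x‖² can be estimated by a Monte Carlo average:

```python
import math
import random

def mean_square_norm(sample, n_paths=200_000, seed=0):
    """Monte Carlo estimate of the L^2(Omega) norm (E||x||^2)^(1/2)."""
    rng = random.Random(seed)
    second_moment = sum(sample(rng) ** 2 for _ in range(n_paths)) / n_paths
    return math.sqrt(second_moment)

# For x ~ N(0, 1) we have E|x|^2 = 1, so the norm should be close to 1.
norm = mean_square_norm(lambda rng: rng.gauss(0.0, 1.0))
print(norm)
```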

As defined in [6,14], a stochastic process x : R → L²(Ω, R^n) is said to be stochastically continuous if lim_{t→s} E‖x(t) − x(s)‖² = 0 for every s ∈ R, and a stochastically continuous stochastic process x is said to be mean-square almost automorphic if for every sequence {t_n} of real numbers there exist a subsequence, still denoted by {t_n}, and a stochastic process y : R → L²(Ω, R^n) such that lim_{n→∞} E‖x(t + t_n) − y(t)‖² = 0 and lim_{n→∞} E‖y(t − t_n) − x(t)‖² = 0 for each t ∈ R. We use AA(R; L²(Ω, R^n)) to denote the set of all such stochastic processes. Moreover, it was proved in [14, Theorem 2.4] that AA(R; L²(Ω, R^n)) is a Banach space when equipped with the norm ‖x‖_∞ := sup_{t∈R} (E‖x(t)‖²)^{1/2}. Thus the following properties are obvious: (1) λf + µg ∈ AA(R; L²(Ω, R^n)) for all scalars λ, µ and f, g ∈ AA(R; L²(Ω, R^n)); (2) if f_n ∈ AA(R; L²(Ω, R^n)) and f_n → f uniformly in R, then f ∈ AA(R; L²(Ω, R^n)).
Lemma 2.1. Let f : R × L²(Ω, R^n) → L²(Ω, R^n) be mean-square almost automorphic in t for each x ∈ L²(Ω, R^n), and suppose that f satisfies a Lipschitz condition in x uniformly in t. Then for any mean-square almost automorphic process x ∈ AA(R; L²(Ω, R^n)), the function F : R → L²(Ω, R^n) defined by F(t) := f(t, x(t)) is mean-square almost automorphic.
Recall that a solution x of a stochastic differential equation on a finite subinterval [a, b] is said to be unique if any other solution x̃ is indistinguishable from x, that is, P{x(t) = x̃(t) for all t ∈ [a, b]} = 1. If the assumptions of the existence and uniqueness theorem hold on every finite subinterval of R, then (4) has a unique solution x on the entire interval (−∞, ∞).
Finally, we recall the definition of mean-square exponential dichotomy. Let Φ(t) be a fundamental matrix of (1). By [22, Theorem 3.2.4], Φ(t) is invertible with probability 1 for all t in any finite subinterval of (−∞, ∞). We say that (1) admits a mean-square exponential dichotomy if there exist positive constants M and α and a projection P (P² = P) such that

E‖Φ(t)PΦ⁻¹(s)x₀‖² ≤ M e^{−α(t−s)} E‖x₀‖², t ≥ s, (5)
E‖Φ(t)QΦ⁻¹(s)x₀‖² ≤ M e^{−α(s−t)} E‖x₀‖², t ≤ s, (6)

where Q = Id − P; we then set P(t) = Φ(t)PΦ⁻¹(t) and Q(t) = Id − P(t). The notion of mean-square exponential dichotomies was introduced by Stanzhyts'kyi [30] and Stoica [31]. We remark here that the classical notion of exponential dichotomy was first introduced for ordinary differential equations by Perron in [26], and plays an important role in the study of dynamical behaviors of differential equations, particularly in what concerns the study of stable and unstable invariant manifolds, and it has therefore attracted much attention during the last few decades. We refer to the books [9,24] for details and further references related to exponential dichotomies.
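To see the decay rate in such a dichotomy concretely, consider the scalar equation dx = −a x dt + σ x dω(t) with 2a > σ² (an illustrative example, not taken from the paper): its solution satisfies E|x(t)|² = e^{(σ²−2a)t} E|x(0)|², so the stable estimate holds with M = 1 and α = 2a − σ². The sketch below samples the exact solution to check this:

```python
import math
import random

def mean_square_at(t, a, sigma, n_paths=200_000, seed=1):
    """Estimate E|x(t)|^2 for dx = -a*x dt + sigma*x dw, x(0) = 1,
    by sampling the exact solution x(t) = exp((-a - sigma^2/2)t + sigma*w(t))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(t))          # Brownian motion at time t
        x = math.exp((-a - 0.5 * sigma ** 2) * t + sigma * w)
        total += x * x
    return total / n_paths

a, sigma, t = 1.0, 0.5, 1.0
estimate = mean_square_at(t, a, sigma)
theory = math.exp((sigma ** 2 - 2.0 * a) * t)     # e^{(sigma^2 - 2a) t}
print(estimate, theory)
```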
3. Main results. In this section, we establish the existence and uniqueness of mean-square almost automorphic solutions of the nonautonomous linear SDE (2) and the nonlinear SDE (3) when the linear equation (1) admits a mean-square exponential dichotomy.
Lemma 3.1. Suppose that A, G are almost automorphic and (1) admits a mean-square exponential dichotomy. Assume further that f, g are mean-square almost automorphic functions. Then (2) has a bounded solution in L²(Ω, R^n), which is given by

x(t) = ∫_{−∞}^{t} Φ(t)PΦ⁻¹(s)g(s)dω(s) − ∫_{t}^{+∞} Φ(t)QΦ⁻¹(s)g(s)dω(s) + ∫_{−∞}^{t} Φ(t)PΦ⁻¹(s)f(s)ds − ∫_{t}^{+∞} Φ(t)QΦ⁻¹(s)f(s)ds. (7)

Proof. Since Φ(t) is a fundamental matrix of (1), it satisfies the identities (8) and (9). Set ξ(t) := Φ⁻¹(t)x(t) with x(t) given by (7), so that x(t) = Φ(t)ξ(t). Using (8), (9) and the Itô product rule (see e.g. [22]), we obtain that (7) is a solution of (2). It remains to prove that x is bounded in L²(Ω, R^n). Since G(t) is an almost automorphic function, there exists a constant γ₁ > 0 such that sup_{t∈R} ‖G(t)‖ ≤ γ₁. (10) Similarly, for f, g ∈ AA(R; L²(Ω, R^n)) there exists a constant γ₂ > 0 such that sup_{t∈R} E‖f(t)‖² ∨ sup_{t∈R} E‖g(t)‖² ≤ γ₂², (11) where a ∨ b denotes the maximum of a and b. It follows from the elementary inequality

E‖a + b + c + d‖² ≤ 4(E‖a‖² + E‖b‖² + E‖c‖² + E‖d‖²) (12)

that it suffices to bound each of the four terms in (7). Using the boundedness property (11) and the inequalities (5), we first estimate the first term on the right-hand side by the Itô isometry for stochastic integrals. Similarly, using the boundedness property (11), the inequalities (6) and the Itô isometry, one can estimate the second term. As for the third term, it follows from E‖x‖ ≤ (E‖x‖²)^{1/2}, the Cauchy–Schwarz inequality, the boundedness properties (10)–(11) and the inequalities (5) that it is bounded as well. Following the same idea as in the estimate of the third term, one can bound the last term.

HAILONG ZHU, JIFENG CHU AND WEINIAN ZHANG
Therefore, combining the four estimates above, we conclude that x is bounded in L²(Ω, R^n), and this completes the proof.
Following an idea of Dollard and Friedman [12], our next lemma shows that the fundamental matrix Φ of (1) can be represented by a product integral. Such a representation was given by Gill and Johansen [15] and Slavík [29] for ordinary differential equations.
Lemma 3.2. The principal matrix solution Φ(t) of the system (1) at the initial time t₀ can be represented as

Φ(t) = ∏_{t₀}^{t} e^{(A(τ)dτ + G(τ)dω(τ))}, (13)

where ∏_{t₀}^{t} denotes the product integral.
Proof. The proof as well as the definition and properties of product integration of SDE (1) will be given in Appendix A.
Lemma 3.3. Assume that system (1) admits a mean-square exponential dichotomy, and that A, G ∈ C(R; R^{n×n}) are almost automorphic matrices. Then the projections P(t), Q(t) in (5) and (6) are mean-square almost automorphic.
Proof. First, since A, G ∈ C(R; R^{n×n}) are almost automorphic, for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, such that A(t + t_n) → Ã(t) and G(t + t_n) → G̃(t) for some functions Ã(t) and G̃(t), for each t ∈ R. It follows from Lemma 3.2 that (13) is the principal matrix solution Φ(t) of the system (1) at the initial time t₀. Then, from Definition 4.7 we know that Φ⁻¹(t) is also given by a product integral. Let P(t) = Φ(t)PΦ⁻¹(t), where P is a projection, i.e. P² = P. It is easy to verify that P(t) is mean-square bounded and stochastically continuous on R. Hence, we can suppose that for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, such that lim_{n→∞} P(t + t_n) = P̃(t) for each t ∈ R. In addition, it follows from Theorem 4.8 that for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, along which the corresponding product integrals converge. Hence, it follows from the dominated convergence theorem, the properties of almost automorphic functions and Theorem 4.9 that the limit relations for Φ(t + t_n) hold. Similarly, for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, along which the limit relations for Φ⁻¹(t + t_n) hold for t ∈ R. Likewise, for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, along which the reverse limits hold. Combining the above arguments with inequalities (5)–(6), we conclude that P̃ satisfies the same estimates. It follows from the same method as in [9, p. 19] that P̃(t) is indistinguishable from P(t). Now the proof is finished.
Theorem 3.4. Assume that (1) admits a mean-square exponential dichotomy, and that A, G ∈ C(R; R^{n×n}) are almost automorphic. If f and g are mean-square almost automorphic, then (2) has a unique mean-square almost automorphic solution, given by (7).

Proof. By Lemma 3.1, the function x given by (7) is a bounded solution of (2) in L²(Ω, R^n). Now we show that x(t) is a mean-square almost automorphic stochastic process. Since f and g are mean-square almost automorphic functions, for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, such that the corresponding limit processes f̃ and g̃ exist for each t ∈ R. In addition, it follows from Lemma 3.3 that the projection P(t) is mean-square almost automorphic; that is, for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, along which P(t + t_n) converges, where Ψ(t) is given by Lemma 3.3. Thus x(t + t_n) splits into the four terms M₁(t), M₂(t), N₁(t) and N₂(t). By Lemma 3.3, taking the limit n → ∞ in (14), we obtain the convergence of M₁(t). Similarly, one can prove the convergence of N₂(t). Clearly, the argument above is also valid for proving that N₁(t) is mean-square almost automorphic; that is, for every sequence {t_n} of real numbers there exists a subsequence along which the corresponding limits hold. As for the second term M₂(t), it follows from the Cauchy–Schwarz inequality, the boundedness property (10), the elementary inequality (12), and the inequalities (5) that M₂(t) converges as well. Repeating the same argument, one proves the reverse limit relations. Thus we have proved that x is a mean-square almost automorphic solution of (2).

For the uniqueness, it follows from [22, p. 96] that every solution of (1) can be expressed by the formula x(t) = Φ(t)Φ⁻¹(t₀)x(t₀) for any t₀ ∈ R. Assume that x(t) and y(t) are both mean-square almost automorphic solutions of (2) with the initial conditions x(t₀) = x₀ and y(t₀) = y₀, respectively. Let u(t) = x(t) − y(t). Then u(t) = Φ(t)Φ⁻¹(t₀)(x₀ − y₀), which is a mean-square almost automorphic solution of (1). Let Γ₁(t) = P(t)Φ(t)Φ⁻¹(t₀) and Γ₂(t) = Q(t)Φ(t)Φ⁻¹(t₀). Since P(t) and u(t) are both mean-square almost automorphic, for every sequence {t_n} of real numbers there exists a subsequence, still denoted by {t_n}, such that (15) and (16) hold for each t ∈ R. Since Γ₁(t) = P(t)Φ(t)Φ⁻¹(t₀) satisfies the mean-square exponential dichotomy estimate (5), it follows from (15) that E‖Γ₁(t + t_n)(x₀ − y₀)‖² → 0 for any t ∈ R as t_n → +∞.
Hence, it follows from (16) that E‖Γ₁(t)‖² = 0 for any t ∈ R. Likewise, we can prove that E‖Γ₂(t)‖² = 0 for any t ∈ R. Thus the mean-square almost automorphic solution u(t) satisfies

E‖u(t)‖² = E‖(P(t) + Q(t))Φ(t)Φ⁻¹(t₀)(x₀ − y₀)‖² ≡ 0,

that is, x is indistinguishable from y. This completes the proof.
Remark 1. The existence and uniqueness of mean-square almost automorphic (mild) solutions was studied by Chang, Zhao and N'Guérékata [6] for non-autonomous SDEs under the condition (17) that, for every sequence {t_n} of real numbers and every ε > 0, there exist a subsequence, still denoted by {t_n}, and N ∈ N such that the corresponding estimate holds for all n > N and t ≥ s. Later, in [8], Chen and Lin replaced this assumption by a mean-square bi-almost automorphic condition to deal with the existence for some stochastic evolution equations. In fact, the above conditions can be expressed in an exact way by using product integration. More precisely, condition (17) and the assumption of mean-square bi-almost automorphy can be derived directly by using Lemma 3.3.
Theorem 3.5. Assume that (1) admits a mean-square exponential dichotomy, and that A, G ∈ C(R; R^{n×n}) are almost automorphic. Suppose further that f and g are both mean-square almost automorphic processes in t ∈ R for every x ∈ L²(Ω, R^n), and satisfy the Lipschitz conditions in mean-square

E‖f(t, x) − f(t, y)‖² ≤ L E‖x − y‖², E‖g(t, x) − g(t, y)‖² ≤ L′ E‖x − y‖² (18)

for all x, y ∈ L²(Ω, R^n) and t ∈ R, with constants L, L′ > 0. Then (3) has a unique mean-square almost automorphic solution x(t) provided

12M(αL′ + 4L + 4γ₁²L′)/α² < 1.
Proof. An argument similar to the one used in Lemma 3.1 shows that a mean-square almost automorphic solution of (3) satisfies the integral equation (19). Now let the operator T on AA(R; L²(Ω, R^n)) be defined by the right-hand side of (19). We show that T is a contraction on AA(R; L²(Ω, R^n)). For x, y ∈ AA(R; L²(Ω, R^n)) and each t ∈ R, we estimate E‖(Tx)(t) − (Ty)(t)‖² as in (20). Since g satisfies the Lipschitz condition (18), by the Itô isometry and inequalities (5), the first term on the right-hand side of (20) can be estimated accordingly. As for the second term in (20), it follows from E‖x‖ ≤ (E‖x‖²)^{1/2}, the Cauchy–Schwarz inequality and the inequalities (5) that it admits a similar bound. Clearly, the argument above also applies to the remaining terms on the right-hand side of (20). Thus we arrive at

‖Tx − Ty‖²_∞ ≤ θ ‖x − y‖²_∞ with θ = 12M(αL′ + 4L + 4γ₁²L′)/α² ∈ (0, 1). (21)
Iterating (21), we obtain ‖Tⁿx − Tⁿy‖²_∞ ≤ θⁿ‖x − y‖²_∞ for every n ∈ N. Therefore, under the condition θ ∈ (0, 1), T is a contraction on AA(R; L²(Ω, R^n)). Thus there exists a unique x ∈ AA(R; L²(Ω, R^n)) such that Tx = x, which is the unique solution of (19). Since A, G are both almost automorphic and f, g are both mean-square almost automorphic, by Lemma 2.1 one can verify that the solution x of (19) is the unique mean-square almost automorphic solution of (3). The proof is completed.
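The contraction step above is an instance of the Banach fixed-point theorem: once ‖Tx − Ty‖_∞ ≤ θ‖x − y‖_∞ with θ < 1, Picard iteration converges geometrically to the unique fixed point. A minimal numerical sketch with a toy scalar contraction (illustrative only, not the operator T of the paper):

```python
def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{k+1} = T(x_k) for a contraction T."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Toy contraction with theta = 1/2: T(x) = x/2 + 1 has unique fixed point x* = 2.
x_star = fixed_point(lambda x: 0.5 * x + 1.0, x0=0.0)
print(x_star)
```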
Example. Consider the scalar equation

dx(t) = −a x(t)dt + σ x(t)dω(t), t ∈ R, (22)

where a, σ are constants with 2a > σ². The solution of (22) is given by

x(t) = x(t₀) exp(−(a + σ²/2)(t − t₀) + σ(ω(t) − ω(t₀))).

Thus it is easy to verify that

E|x(t)|² = E|x(t₀)|² e^{−(2a−σ²)(t−t₀)}, t ≥ t₀,

and therefore (22) admits a mean-square exponential dichotomy with M = 1 and α = 2a − σ². Let f(t) and g(t) be mean-square almost automorphic functions. Then the system

dx(t) = (−a x(t) + f(t))dt + (σ x(t) + g(t))dω(t)

has a unique mean-square almost automorphic solution due to Theorem 3.4. In addition, assume that f(t, x) and g(t, x) are mean-square almost automorphic functions and satisfy the Lipschitz condition (18) with 16L + 8(1 + 2γ₁²L′) < 2a − σ². Then the system

dx(t) = (−a x(t) + f(t, x(t)))dt + (σ x(t) + g(t, x(t)))dω(t)

has a unique mean-square almost automorphic solution by Theorem 3.5.
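The exponential decay rate of the second moment used in this example can be derived with Itô's formula; the short computation below assumes the scalar form dx = −a x dt + σ x dω for (22) (an assumption consistent with the quantity 2a − σ² appearing in the text):

```latex
% It\^o's formula for x(t)^2 with dx = -a\,x\,dt + \sigma\,x\,d\omega(t):
d\bigl(x^2\bigr) = 2x\,dx + (dx)^2
                 = \bigl(\sigma^2 - 2a\bigr)x^2\,dt + 2\sigma x^2\,d\omega(t).
% Taking expectations removes the martingale term, so m(t) := E|x(t)|^2 solves
m'(t) = (\sigma^2 - 2a)\,m(t), \qquad
m(t)  = m(t_0)\,e^{-(2a-\sigma^2)(t-t_0)},
% which decays exponentially in mean square precisely when 2a > \sigma^2.
```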

4. Product integration for SDEs. The concept of product integration was introduced by Volterra [33] and then developed by Schlesinger [27] and Masani [23]; see Slavík [29] for a systematic presentation. The original idea of product integration is very similar in spirit to procedures for finding numerical solutions of the ordinary differential equation

x′(t) = A(t)x(t). (23)

An approximate solution of (23) can be obtained by the Euler tangent-line method, which is based on the observation that the approximate value of the solution of (23) at the point t₀ + ∆t is x(t₀ + ∆t) ≈ e^{A(t₀)∆t} x(t₀) for ∆t sufficiently small. Proceeding in this manner, we obtain the approximation x(t) ≈ e^{A(t_n)∆t} · · · e^{A(t₀)∆t} x(t₀). Provided A(t) is continuous on R, this product converges as ∆t → 0 (see, e.g., [29]), and the limit is denoted by the symbol ∏_{t₀}^{t} e^{A(τ)dτ} · x(t₀); it is the solution of (23) with the initial condition x(t₀). We refer the reader to Dollard and Friedman [12], Gill and Johansen [15] and Slavík [29] for the definitions of product integration for (23).
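In the scalar deterministic case the finite products above can be checked numerically: for A(t) = t on [0, 1] (an illustrative choice) the product of the Euler factors collapses to the exponential of a Riemann sum and converges to exp(∫₀¹ τ dτ) = e^{1/2}. A sketch:

```python
import math

def product_integral_1d(A, t0, t1, n):
    """Approximate the product integral of x' = A(t) x on [t0, t1]
    by the finite product e^{A(t_{n-1}) dt} ... e^{A(t_0) dt}."""
    dt = (t1 - t0) / n
    value = 1.0
    for k in range(n):
        value *= math.exp(A(t0 + k * dt) * dt)   # left-endpoint Euler factor
    return value

approx = product_integral_1d(lambda t: t, 0.0, 1.0, 10_000)
exact = math.exp(0.5)                            # exp of the integral of t over [0, 1]
print(approx, exact)
```

For noncommuting matrices A(t) the ordered product no longer reduces to the exponential of the integral, which is precisely why the product-integral notation is needed.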

Definition 4.1. A function (process) A(t) is called a step function (step process) if there exists a partition P := {a = t₀ < t₁ < · · · < t_m = b} such that A(t) is constant on each subinterval [t_{k−1}, t_k), k = 1, . . . , m.

Let A, G ∈ L⁰([a, b]; R) be step functions (or step processes) as above with the same partition P (if not, we can pass to a common refinement). Define the product integral F_{A,G}(a, t) with respect to the Brownian motion ω(t) by (24), where the last integral in (24) is an Itô integral. The next lemma records the basic properties of F_{A,G}; its part (iii) concerns the case A, G ∈ L⁰([a, b]; R).
Proof. Part (i) is obvious. For part (ii), one can use induction to show that F_{A,G}(a, t)x₀ is a solution of the scalar linear Itô stochastic differential equation with the initial condition x(a) = x₀ except at the division points, and one can easily verify that F_{A,G}(a, t) is continuous on [a, b]. Thus one obtains (24), and (ii) is proved. Now we prove (iii). Note that ω(t_{j+1}) − ω(t_j) is independent of the increments over the earlier subintervals; passing to the limit as µ(P) → 0, where µ(P) denotes the length of the longest subinterval of the partition P, completes the proof.
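The independence of Brownian increments used above can be illustrated numerically (an illustrative simulation, not from the paper): disjoint increments have covariance 0, while the positions w(1) and w(2) have covariance min(1, 2) = 1.

```python
import random

def sample_cov(pairs):
    """Sample covariance of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    return sum((x - mx) * (y - my) for x, y in pairs) / n

rng = random.Random(2)
inc_pairs, pos_pairs = [], []
for _ in range(200_000):
    d1 = rng.gauss(0.0, 1.0)         # increment w(1) - w(0), variance 1
    d2 = rng.gauss(0.0, 1.0)         # increment w(2) - w(1), variance 1
    inc_pairs.append((d1, d2))       # disjoint increments: covariance 0
    pos_pairs.append((d1, d1 + d2))  # positions w(1), w(2): covariance min(1,2) = 1

print(sample_cov(inc_pairs), sample_cov(pos_pairs))
```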
Similarly, we can prove the following lemma. Proceeding in the same way as in (25), for any step functions A_n, A_m, G_n, G_m ∈ L⁰([a, b]; R) we obtain an analogous estimate, which shows that {log F_{A_n,G_n}(a, b)} is a Cauchy sequence in L²(Ω, R). Hence the limit exists, and we define this limit as the product integral with respect to the Brownian motion ω(t). This leads to the following definition, in which {A_n} and {G_n} are sequences of step functions such that (26) and (27) hold, respectively. Now let A, G ∈ C([a, b]; R^{n×n}); using matrix notation, we define the multi-dimensional product integral entrywise through sequences of matrices A_n = (A^n_{ij})_{n×n} and G_n = (G^n_{ij})_{n×n} whose entries A^n_{ij} and G^n_{ij} are step functions. We now establish some basic properties of the product integration.
Proof of Lemma 3.2. Let A_n(t) = (A^n_{ij}(t))_{n×n} and G_n(t) = (G^n_{ij}(t))_{n×n} be sequences of matrices whose entries A^n_{ij}(t) and G^n_{ij}(t) are step functions converging to A(t) and G(t) in the L¹ sense and the L² sense, respectively. Following the same idea as in Lemma 4.3, we have

F_{A_n,G_n}(a, t) = Id + ∫_a^t A_n(τ)F_{A_n,G_n}(a, τ)dτ + ∫_a^t G_n(τ)F_{A_n,G_n}(a, τ)dω(τ). (28)

Taking the limit in (28) as µ(P) → 0, it follows from the definition of the solution of Itô stochastic differential equations (see, e.g., [13, Chapter 5]) that F_{A,G}(a, t)x₀ is the solution of (1) with the initial condition x(a) = x₀.
Now let x_k denote the k-th column of F_{A,G}(a, t), i.e., x_k = ∏_a^t e^{(A(τ)dτ + G(τ)dω(τ))} · e_k, where e_k is the k-th vector of the canonical basis of R^n. From the analysis above, the vector functions {x_k}_{k=1}^n are solutions of (1) with the initial conditions x_k(a) = e_k. Thus the system of functions {x_k}_{k=1}^n is linearly independent and represents a fundamental set of solutions of the system (1). It then follows from the above results and (28) that (13) is the principal matrix solution of the system (1) at the initial time t₀, and this completes the proof of Lemma 3.2.
Note that F_{A,G}(a, t) is nonsingular almost surely for all t ∈ [a, b]; thus we can give the following definition.

Proof. By (29) in Definition 4.7, one can set a₀ ≤ a₁ ≤ a₂. The left-hand side of (30) can be written as

∏_{a₀}^{a₂} e^{(A(t)dt + G(t)dω(t))} = lim_{µ(P)→0} ∏_{a₀}^{a₂} e^{(A_n(t)dt + G_n(t)dω(t))},

where P is a partition of [a₀, a₂] and A_n, G_n are sequences of step functions. Without loss of generality, we can always choose the partition P such that a₁ ∈ P. Thus (30) holds, and this completes the proof.

Proof. The proof follows from Lebesgue's dominated convergence theorem and the fact that an integrable function is also a product integrable function (see, e.g., [23, Theorem 16.3, p. 169]). We omit the details here.