Generalized KdV equation subject to a stochastic perturbation

We prove global well-posedness of the subcritical generalized Korteweg-de Vries equation (the mKdV equation and the gKdV equation with quartic power nonlinearity) subject to an additive random perturbation. More precisely, we prove that if the driving noise is a cylindrical Wiener process on $L^2(\mathbb{R})$ and the covariance operator is Hilbert-Schmidt in an appropriate Sobolev space, then the solutions with $H^1(\mathbb{R})$ data are globally well-posed in $H^1(\mathbb{R})$. This extends results obtained by A. de Bouard and A. Debussche for the stochastic KdV equation.

1. Introduction. In this paper we study a subcritical generalization of the Korteweg-de Vries (gKdV) equation subject to an additive random perturbation $f(t)$, that is,
$$\partial_t u + \partial_x^3 u + \mu\, u^k \partial_x u = f(t),$$
with $k = 2$ (the mKdV case) or $k = 3$ (referred to as the gKdV). Here $\mu = \pm 1$, corresponding to the focusing or defocusing nonlinearity. The well-known KdV equation ($k = 1$) describes the propagation of long waves in a channel. Its generalizations ($k > 1$) appear in several physical systems; a large class of hyperbolic models can be reduced to these equations. The well-posedness of the KdV equation has been extensively studied by many authors in the deterministic setting without any forcing term ($f = 0$), going back to the works of Kato [9] and Kenig-Ponce-Vega [11], to name a few; there is an abundant literature on the subject. The question of the minimal regularity assumptions on the initial data needed for well-posedness has also been investigated intensively in recent years; two important methods should be mentioned: the so-called I-method (e.g., [6]) and the probabilistic approach of randomizing the initial data and showing the invariance of Gibbs measures (e.g., [2], [15]). In this paper we also take a probabilistic approach, however, in a completely different setting, where the equation itself has a random term. We do not aim to obtain the lowest possible regularity for such an equation, but rather show how to combine the deterministic and probabilistic approaches in this case to study well-posedness for initial data with finite energy.
Here, we study the subcritical case of the generalized KdV equation, where the external random forcing $f$ is driven by a cylindrical Brownian motion $W$ on $L^2(\mathbb{R})$ and multiplied by a smoothing covariance operator $\Phi$. The driving Wiener process $W$ describes a noise in the environment, that is, a sum of little independent shocks properly renormalized. The smoothing operator describes the spatial correlation of the noise, but the time increments of $\Phi W$ are independent, that is, the noise is white in time. The stochastic KdV equation ($k = 1$) on $\mathbb{R}$ has been studied in a series of papers by A. de Bouard and A. Debussche (see, e.g., [3], [5], [4]). In [3] they proved that if $u_0 \in H^1(\mathbb{R})$ and if $\Phi$ is a Hilbert-Schmidt operator from $L^2(\mathbb{R})$ to $H^1(\mathbb{R})$, then there is a global solution to the stochastic KdV equation which belongs a.s. to $C([0,T]; H^1(\mathbb{R}))$. Using Bourgain spaces, when $u_0 \in L^2(\mathbb{R})$ and the covariance operator $\Phi$ is Hilbert-Schmidt both from $L^2(\mathbb{R})$ to $L^2(\mathbb{R})$ and to $\dot H^{-3/8}(\mathbb{R})$, they have shown in [5] the existence and uniqueness of the solution in $L^2(\Omega; C([0,T]; L^2(\mathbb{R})))$ for any $T > 0$. Note that for the mKdV or gKdV equations, the Bourgain space approach to lowering the regularity of global solutions is not needed (it gives the same results but is more technically involved). Therefore, for mKdV and gKdV, $k \ge 2$, it suffices to use arguments from [11]. In [4], the authors proved the global well-posedness of solutions to the stochastic KdV equation in $L^2(\mathbb{R})$ (resp. $H^1(\mathbb{R})$) when the noise is homogeneous, that is, of the form $u(s)\phi\, dW(s)$ for a convolution operator $\phi$ defined in terms of an $L^2(\mathbb{R}) \cap L^1(\mathbb{R})$ (resp. $H^1(\mathbb{R}) \cap L^1(\mathbb{R})$) kernel. They used the Bourgain space approach, which is necessary to lower the regularity of solutions in the KdV case; it also helped to deal with multiplicative noise. We do not give a full list of references for the stochastic KdV and related equations in the periodic setting.
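To make the covariance assumption concrete, the following sketch (not from the paper; the Gaussian convolution kernel and the grid discretization are purely illustrative assumptions) computes the Hilbert-Schmidt norm of a discretized smoothing operator over two different orthonormal bases, and draws one spatially correlated, white-in-time noise increment $\sqrt{\delta}\,\Phi\xi$:

```python
import numpy as np

# A discretized smoothing covariance operator Phi: convolution with a
# Gaussian kernel on a periodic grid (an illustrative assumption -- the
# paper only requires Phi to be Hilbert-Schmidt into a Sobolev space).
n = 64
x = np.arange(n)
kernel = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
Phi = np.array([np.roll(kernel, j - n // 2) for j in range(n)]) / kernel.sum()

# Hilbert-Schmidt norm: sum_j ||Phi e_j||^2 over ANY orthonormal basis.
hs_canonical = np.sqrt(sum(np.linalg.norm(Phi[:, j]) ** 2 for j in range(n)))

# The same sum over a different orthonormal basis (QR of a fixed matrix).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
hs_other = np.sqrt(sum(np.linalg.norm(Phi @ Q[:, j]) ** 2 for j in range(n)))

# Basis independence -- the property that makes the Hilbert-Schmidt norm
# in the paper well defined.
assert np.isclose(hs_canonical, hs_other)

# One noise increment Phi(W(t + dt) - W(t)): white in time, correlated
# in space through the smoothing kernel.
dt = 0.01
increment = np.sqrt(dt) * Phi @ rng.standard_normal(n)
```

The basis-independence check is exactly the reason the quantity $\sum_j \|\Phi e_j\|^2$ can be taken over any CONS of $L^2(\mathbb{R})$.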
However, to guide the reader in the proper direction, we mention a few results. T. Oh [12] studied a stochastic KdV equation on the torus $\mathbb{T} = [0, 2\pi)$. Under specific assumptions on the covariance operator $\Phi$, he proved local well-posedness in a certain Bourgain space if the initial condition belongs to it as well. Other KdV-type models can also be considered with variations of the additive noise, such as adding a derivative to the noise (see, e.g., the work of G. Richards [13]).
The main goal of this paper is to obtain the global well-posedness of solutions of the mKdV and gKdV (with $k = 3$) equations in $H^1(\mathbb{R})$; global solutions with finite energy are important for physical applications, e.g., in the study of solitary waves. To study well-posedness we need to set up a specific functional framework that provides the necessary flexibility to use smoothing properties of the Airy group while handling the stochastic term. We note that we consider a driving cylindrical Wiener process, which is quite usual in nonlinear dispersive models, such as the stochastic nonlinear Schrödinger (NLS) equation; this, in its turn, requires the use of non-Hilbert Sobolev spaces. We now state the main result and refer the reader to the next section for all notations.

Theorem 1.1. Let $u_0$ be $\mathcal{G}_0$-measurable and belong to $H^1_x$ a.s.
(i) Let $k = 2$ and $\Phi \in \mathcal{L}_2^{0,1+\varepsilon}$ for some $\varepsilon > 0$. Then given any positive time $T$, there exists a unique solution to (2) which belongs a.s. to $C([0,T]; H^1(\mathbb{R}))$.
(ii) Let $k = 3$ and $\Phi \in \mathcal{L}_2^{0,1}$. Then given any positive time $T$, there exists a unique solution to (2) which belongs a.s. to $C([0,T]; H^1(\mathbb{R}))$.

While we follow the main framework of [3], additional difficulties appear due to the higher power of the nonlinearity. When $k = 2$, $u_0 \in H^{1/4}(\mathbb{R})$ and $\Phi$ is Hilbert-Schmidt from $L^2(\mathbb{R})$ to $H^{1+\varepsilon}(\mathbb{R})$ for some $\varepsilon > 0$, we prove that there exists a unique solution up to some stopping time $T_2 > 0$. The hypothesis on $\Phi$ with a "larger derivative" is due to the functional space $L^4_x(L^\infty_t)$. A similar space, $L^2_x(L^\infty_t)$, appears in the fixed point argument for the KdV equation; technical problems arise in going from $L^2_x$ to $L^4_x$. When $k = 3$, $u_0 \in H^{1/12}(\mathbb{R})$ and $\Phi$ is Hilbert-Schmidt from $L^2(\mathbb{R})$ to $H^{5/12}(\mathbb{R})$, we prove that there exists a unique solution up to some stopping time $T_3 > 0$. This technique does not easily extend to multiplicative noise; indeed, the change of variables in time is no longer possible for moment estimates of norms of the corresponding stochastic integral.
The problem of multiplicative noise will be addressed elsewhere.
The paper is organized as follows. In section 2, we prove some technical lemmas on functional properties of the stochastic integral $\int_0^t S(t-s)\Phi\, dW(s)$. Using the functional framework introduced in [11] and a contraction principle in an appropriate function space, we prove local well-posedness in section 3. In section 4, we prove that if the initial condition belongs to $H^1(\mathbb{R})$ and $\Phi$ is Hilbert-Schmidt from $L^2(\mathbb{R})$ to $H^{1+\varepsilon}(\mathbb{R})$ when $k = 2$ (and from $L^2(\mathbb{R})$ to $H^1(\mathbb{R})$ when $k = 3$), the solution can be extended to any given time interval $[0,T]$. It then belongs to $L^2(\Omega; L^\infty(0,T; H^1(\mathbb{R})))$ and takes a.s. its values in the set of continuous trajectories from $[0,T]$ to $H^1(\mathbb{R})$. The proof uses the time invariance of the mass and the Hamiltonian for solutions to the deterministic gKdV equation. In order to use these invariant quantities, we need a more regular solution. This is achieved by approximating the solution $u$ by a sequence $\{u_n\}_n$ of solutions defined in terms of smoother initial conditions $u_{0,n}$ and more regularizing operators $\Phi_n$.
The first named author collaborated with Igor Chueshov on general 2D hydrodynamical models related to the Navier-Stokes equations. In this paper, we try to further develop the intertwining of deterministic and stochastic approaches in PDEs. Such interplay was one of the fundamental contributions of Igor Chueshov's scientific work.
2. Local existence of the solution. In this section we study the stochastic generalized KdV equation with additive noise,
$$du + \big(\partial_x^3 u + u^k\, \partial_x u\big)\, dt = \Phi\, dW(t),$$
defined for $x \in \mathbb{R}$ and $t \ge 0$, with the initial condition $u(x,0) = u_0(x)$. From now on we will assume $\mu = 1$ (the focusing case); the defocusing case follows automatically. The case $k = 1$, which is that of the stochastic KdV equation, has been studied in [3] and [5]. Here, $W$ is a cylindrical Wiener process on $L^2(\mathbb{R})$ adapted to a filtration $(\mathcal{G}_t,\ t \ge 0)$, that is, $W(t)\varphi = \sum_{j \in \mathbb{N}} (e_j, \varphi)\, \beta_j(t)$ for any $\varphi \in L^2(\mathbb{R})$, where the processes $\beta_j(t)$, $j \ge 0$, are independent one-dimensional Brownian motions adapted to $(\mathcal{G}_t)$ and $\{e_j\}_{j \ge 0}$ is an orthonormal basis of $L^2(\mathbb{R})$, often referred to as a CONS (complete orthonormal system). Note that the process $W(t)$ is not $L^2(\mathbb{R})$-valued, but $W(t)\varphi$ is a centered Gaussian random variable with variance $t\, \|\varphi\|^2_{L^2} = t \sum_{j \ge 0} (e_j, \varphi)^2$. We suppose that $\Phi$ is a linear map which is Hilbert-Schmidt from $L^2(\mathbb{R})$ into $H^\sigma(\mathbb{R})$ for some non-negative $\sigma$, that is,
$$\|\Phi\|^2_{\mathcal{L}_2^{0,\sigma}} := \sum_{j \ge 0} \|\Phi e_j\|^2_{H^\sigma} < \infty.$$
We suppose that $u_0$ is $\mathcal{G}_0$-measurable and $H^1$-valued. As in [3], using Duhamel's formula, we write this equation in its mild formulation, that is,
$$u(t) = S(t)u_0 - \int_0^t S(t-s)\, \big(u^k\, \partial_x u\big)(s)\, ds + \int_0^t S(t-s)\Phi\, dW(s),$$
where $S(t)$ denotes the Airy group, $S(t)\psi = \mathcal{F}^{-1}\big(e^{it\xi^3}\, \mathcal{F}\psi\big)$, and $\mathcal{F}(u) = \hat u$ denotes the Fourier transform of $u$. Note that $v(t) = \int_0^t S(t-s)\Phi\, dW(s)$ is a centered $H^\sigma(\mathbb{R})$-valued Gaussian variable. Since $S(t-s)$ is an isometry on $H^\sigma(\mathbb{R})$ for all $\sigma \ge 0$, the variance of this stochastic integral is $\mathbb{E}\, \|v(t)\|^2_{H^\sigma} = t\, \|\Phi\|^2_{\mathcal{L}_2^{0,\sigma}}$. Following the approach in [11] (and [3] for the case $k = 1$), we introduce functional spaces $X^T_k$ of functions $u : \mathbb{R} \times [0,T] \to \mathbb{R}$, built from mixed space-time norms, for the mKdV equation ($k = 2$) and for the gKdV equation ($k = 3$). Here, $L^q_x$ (resp. $L^p_t$) denotes $L^q(\mathbb{R})$ (resp. $L^p(0,T)$). In order to prove that the process $v$, defined by the stochastic integral above, belongs a.s. to the spaces $X^T_k$ for $k = 2, 3$ under proper assumptions on the operator $\Phi$, we first prove some technical lemmas.
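The Airy group appearing in the mild formulation can be sketched numerically; the periodic grid and FFT discretization below are illustrative assumptions, but they exhibit the two properties used throughout: $S(t)$ is a unimodular Fourier multiplier, hence an isometry on $L^2$ (and on every $H^\sigma$), and $(S(t))_t$ is a group:

```python
import numpy as np

# The Airy group S(t) acts as the Fourier multiplier e^{i t xi^3}
# (for u_t + u_xxx = 0); on a periodic grid this is one FFT round-trip.
def airy_group(u, t, L=2 * np.pi):
    n = u.size
    xi = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular frequencies
    return np.fft.ifft(np.exp(1j * t * xi ** 3) * np.fft.fft(u))

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0 = np.exp(np.sin(x))                 # arbitrary smooth initial datum

u1 = airy_group(u0, t=0.7)
# L^2 isometry (Plancherel): the norm is preserved.
assert np.isclose(np.linalg.norm(u1), np.linalg.norm(u0))
# Group property: S(-t) S(t) = Id.
back = airy_group(u1, t=-0.7)
assert np.allclose(back.real, u0, atol=1e-10)
```

The isometry property is precisely what gives the variance identity $\mathbb{E}\|v(t)\|^2_{H^\sigma} = t\,\|\Phi\|^2_{\mathcal{L}_2^{0,\sigma}}$ for the stochastic convolution.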
In each result we state the minimal regularity assumption on the operator $\Phi$ and the corresponding power of $T$ obtained in the upper estimate, in order to deal with the $X^T_k$-norm of $v$. The following lemma is a generalization of Proposition 3.1 in [3].
Proof. The proof is quite classical; it is sketched for the sake of completeness. The upper estimate is proved for $q \in [2, \infty)$ and deduced for $q \in [1, 2)$ by Hölder's inequality. The Cauchy-Schwarz inequality, the Davis inequality for martingales, Young's inequality and Fubini's theorem then imply the stated upper estimate, which concludes the proof.
The following result will be used to bound from above one of the norms in the definition of $\|v\|_{X^T_2}$.

Lemma 2.2. Let $p, q$ satisfy $2 \le p \le q < \infty$ and $\sigma \ge 0$. Then the estimate (9) holds for some $C > 0$.

Proof. Since $q \ge p$, Hölder's inequality with respect to $dt$, Fubini's theorem and moments of the stochastic integral, written with the CONS $\{e_j\}_{j \ge 0}$ of $L^2(\mathbb{R})$ in the definition of $W$, give a first upper bound. The local smoothing property (see Lemma 2.1 in [10]) then applies for every $j \in \mathbb{N}$ and $t \in [0, T]$, and combining the two bounds completes the proof of (9).

Lemma 2.3. Let $p, q$ be such that $2 \le p < q < \infty$; for $\gamma \ge \frac{q-2}{q}$ let $\bar\sigma = \gamma\, \frac{q}{q-2} \ge 1$. There exists a positive constant $C$ such that (10) holds.

Proof. Lemma 2.2 applied with $\sigma = \bar\sigma - 1$ yields a first estimate. The proof of (10) relies on this inequality and on the upper estimate (11). Indeed, suppose that (11) has been proved. Since $\gamma \in [0, \bar\sigma]$, an interpolation argument (see [3], Proposition A1) proves (10) for a suitable exponent $p(\gamma)$. Thus, in order to complete the proof of the lemma, we have to check that (11) holds. Since $q \ge p$, Hölder's inequality applied with respect to $dt$ and moments of the stochastic integral, written with the CONS $\{e_j\}_{j \ge 0}$ of $L^2(\mathbb{R})$ in the definition of $W(t)$, give an upper bound in which, in the last step, we change the variable $s$ to $t - s$. The Minkowski inequality then implies the required bound on $\|v\|^2$. This completes the proof of (11).
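The phrase "moments of the stochastic integral," used in the proofs above and below, refers to the standard fact that a centered Gaussian random variable has all moments controlled by its variance; in the present notation, with $c_q$ the $q$-th absolute moment of a standard Gaussian:

```latex
% For fixed (x,t) the random variable
% v(x,t) = \int_0^t \sum_{j\ge 0} (S(t-s)\Phi e_j)(x)\, d\beta_j(s)
% is centered Gaussian, so by the It\^o isometry
\mathbb{E}\,|v(x,t)|^{q}
  \;=\; c_q \Big( \mathbb{E}\,|v(x,t)|^{2} \Big)^{q/2}
  \;=\; c_q \Big( \int_0^t \sum_{j\ge 0}
        \big|\big(S(t-s)\Phi e_j\big)(x)\big|^{2}\, ds \Big)^{q/2},
\qquad c_q = \mathbb{E}\,|Z|^{q},\quad Z \sim \mathcal N(0,1).
```

This identity is what allows $L^q_\omega$-norms of $v$ to be reduced to deterministic space-time integrals of $S(t-s)\Phi e_j$.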
The following lemma extends Proposition 3.3 in [3] to the case $\sigma < \frac34$; its two estimates are labeled (12) and (13) below. The notation $a \vee b$ means $\max(a, b)$, while $a \wedge b$ means $\min(a, b)$.

Proof. We first prove (12), and let $q = 4$. Hölder's inequality with respect to the expectation shows that (12) is a consequence of the estimate (14). We next prove the auxiliary bound (15). Indeed, (15) and the preceding estimate imply by interpolation (see [3], Proposition A1) that, since $q = 4$, the exponents match; thus, using the Fubini theorem, we deduce (16). To prove (15), using Hölder's inequality with respect to $dt$, Fubini's theorem and moments of the stochastic integral, we obtain the required bound for any CONS, which completes the proof of (15), and thus of (16). We next compute an upper estimate of $\|v\|_{L^q_\omega(L^q_x(L^2_t))}$. Using Fubini's theorem, Hölder's inequality with respect to $dt$ and moments of the stochastic integral, we obtain a bound to which the Sobolev embedding theorem applies with $\bar\sigma = \frac12 - \frac1q$; this yields (17). The inequalities (17) and (16) are then combined. Since $\frac{q}{2} = 2 \ge 1$, the Sobolev embedding theorem yields a further bound. Finally, the remaining term is controlled using the Hilbert transform $\mathcal{H}$. This completes the proof of (14), and therefore of (12).
To prove (13), let $\sigma = \varepsilon = \frac25$. Then $\frac12 - \frac{\varepsilon}{4} = \sigma$, and (18) completes the proof.

Lemma 2.5. Let $p, q$ be such that $2 \le q \le p < \infty$ and $\gamma \ge 0$. There exists a positive constant $C$ such that (19) holds.

Proof. Fubini's theorem and Hölder's inequality with respect to $dt$ give a first reduction; hence, (19) can be deduced from the estimate (20). Moments of the stochastic integral, a change of variables and Hölder's inequality with respect to $ds$ imply that, for the CONS $\{e_j\}_{j \ge 0}$ of $L^2(\mathbb{R})$ in the definition of $W$, the quantity to estimate is bounded by
$$C \int_{\mathbb{R}} \Big( \sum_{j \ge 0} \int_0^T |D^\gamma_x S(s)\Phi e_j|^2\, ds \Big)^{q/2} dx.$$

ANNIE MILLET AND SVETLANA ROUDENKO
Using the Fubini theorem and then the Minkowski inequality, we deduce the stated bound, where in the last line we use the Sobolev embedding $H^\sigma_x \subset L^q_x$ for $\sigma = \frac12 - \frac1q$. This completes the proof.
Finally, in the case of the stochastic mKdV equation, we have to prove a result similar to Proposition 3.2 in [3]. However, we have to estimate the $L^4_x(L^\infty_t)$ norm instead of the $L^2_x(L^\infty_t)$ norm; this requires a stronger condition on the operator $\Phi$, which has to belong to $\mathcal{L}_2^{0,1+\varepsilon}$ for some positive $\varepsilon$; see (21).

Proof. The proof is based on results from the proof of Proposition 3.2 in [3]; we refer to that paper for some intermediate results. Let $\{e_j\}_{j \ge 0}$ be the CONS of $L^2(\mathbb{R})$ in the definition of $W$, and let $\{\psi_k\}_{k \ge 0}$ denote a dyadic partition of unity. Let $\tilde\psi_k \in C^\infty_0(\mathbb{R})$ satisfy $\tilde\psi_k \ge 0$, $\tilde\psi_k = 1$ on the support of $\psi_k$, and $\mathrm{supp}\, \tilde\psi_k \subset [2^{k-3}, 2^{k+1}]$. For $k \in \mathbb{N}$, let $S_k(t)$ and $\Phi_k$ be the operators localized by $\psi_k$ and $\tilde\psi_k$, respectively. Then $S_k(t)\Phi = S_k(t)\Phi_k$ for $k \in \mathbb{N}$, and $S(t)\Phi = \sum_{k \ge 0} S_k(t)\Phi_k$. We prove that (22) holds for every $k \in \mathbb{N}$ and $\varepsilon \in (0,1)$. Suppose that (22) holds. Then, using the Minkowski and Cauchy-Schwarz inequalities, we deduce the conclusion of the lemma; the last inequality is obtained from the almost-orthogonality bound $\sum_{k \ge 0} \|\tilde\psi_k(D)\varphi\|^2_{L^2_x} \le C\, \|\varphi\|^2_{L^2_x}$ for every $\varphi \in L^2_x$. We next prove (22). Let $\alpha > 0$, to be chosen later, and let $p \ge 4$ be such that $\alpha p > 1$. The Sobolev embedding implies that $W^{\alpha,p}_t \subset L^\infty_t$; hence, using Fubini's theorem, we obtain an upper bound by two terms $I_1$ and $I_2$. To bound $I_2$ from above, we use Hölder's inequality with respect to the expected value, Fubini's theorem, moments of Gaussian variables and Minkowski's inequality with respect to $dt$ and $dx$; the last inequality follows from the inclusion $H^\sigma_x \subset L^4_x$ for $\sigma > \frac14$, to be chosen later.
Remark. This is the place where a significant difference from the stochastic KdV case in [3] arises: to deal with the higher power of the nonlinearity, the functional space here is $L^4_x(L^\infty_t)$ rather than $L^2_x(L^\infty_t)$.

Using Theorem 2.7 in [10], we first consider the homogeneous part of the $H^\sigma_x$-norm; the $L^2_x$ part of the $H^\sigma_x$-norm obviously satisfies the same final upper bound. To bound $I_1$ from above, we use Hölder's inequality with respect to the expected value and Fubini's theorem. Since the stochastic integral is Gaussian, its time increments can be estimated explicitly for $t \le t'$. In the double time integral we first consider the case $|t - t'|\, 2^{\gamma k} \le 1$, for $\gamma > 0$ to be chosen later on. Using parts of the proof of Proposition 3.1, pages 228-229 in [3], based on computations from [10], we deduce upper estimates for $k, j \in \mathbb{N}$ and $0 \le t \le t' \le T$, with suitable weights for $k \ge 1$ (resp. $k = 0$). Hence, we deduce a bound valid for $k \in \mathbb{N}$ and $0 \le t \le t' \le T$. Fix $\varepsilon \in (0,1)$; choose $\gamma > \frac92$ and $\alpha < \frac18$ such that $\alpha\gamma < \frac{\varepsilon}{8}$. Note that for $\varepsilon \in (0,1)$ we then have $\alpha < \frac{1}{36}$; thus, $p > 36$. Then, for $|t - t'|\, 2^{\gamma k} \le 1$, we obtain the desired estimate. A direct computation shows the corresponding bound for $k, j \in \mathbb{N}$ and $0 \le t \le t' \le T$ such that $|t - t'|\, 2^{\gamma k} > 1$. The above upper estimates, Minkowski's inequality with respect to $dx$ and Young's inequality yield a bound in $L^2_x$. Furthermore, using the explicit definition of $H^T_k$, we deduce a further estimate. The upper estimate (24) implies that for $\tau > \frac34$ and $\sigma = \tau - \frac12 > \frac14$, the required bound holds. Since $p > 36$, choosing $\sigma$ and $\tau$ such that $\sigma + \tau \le 1 + \frac{\varepsilon}{2}$, the inequalities (23)-(25) conclude the proof of (22), and thus of the lemma.
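The dyadic decomposition used in the proof can be sketched concretely; the particular smooth cutoff below is an illustrative choice (the paper's $\psi_k$, $\tilde\psi_k$ are only required to have the stated support and positivity properties), but it shows both the telescoping partition of unity and the dyadic frequency localization:

```python
import numpy as np

def h(s):
    # Smooth transition building block: h(s) = e^{-1/s} for s > 0, else 0.
    return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)

def chi(r):
    # Smooth cutoff: chi = 1 for r <= 1, chi = 0 for r >= 2.
    a, b = h(2.0 - r), h(r - 1.0)
    return a / (a + b)

def psi(k, xi):
    # Dyadic pieces: psi_0 = chi(|xi|), psi_k = chi(|xi|/2^k) - chi(|xi|/2^{k-1}).
    r = np.abs(xi)
    if k == 0:
        return chi(r)
    return chi(r / 2.0 ** k) - chi(r / 2.0 ** (k - 1))

xi = np.linspace(-30.0, 30.0, 2001)
total = sum(psi(k, xi) for k in range(8))   # telescopes to chi(|xi| / 2^7)
assert np.allclose(total, 1.0)              # partition of unity on this range
# Dyadic localization: psi_k vanishes outside [2^{k-1}, 2^{k+1}] (here k = 3).
r = np.abs(xi)
assert np.all(psi(3, xi)[(r < 4.0 - 1e-9) | (r > 16.0 + 1e-9)] == 0.0)
```

The finitely overlapping supports are what make the almost-orthogonality bound used after (22) possible.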
In order to prove the existence of a local solution to (5), we first estimate moments of the norms $\|v\|_{X^T_k}$ of the stochastic integral $v(t) = \int_0^t S(t-s)\Phi\, dW(s)$, $k = 2, 3$. Let $u \in X^T_k$; following the notations in [10], we set up the corresponding collection of space-time norms, which, for some positive number $\rho$, involves the derivative $D^\rho_x u$.
The following proposition gathers the information from the previous lemmas.
These estimates and Hölder's inequality conclude the proof of (27).
3. Local well-posedness. In this section, we prove the existence of a unique local solution $u \in X^{T(\omega)}_k$ to (3) for some random terminal time $T(\omega)$, which is positive for almost every $\omega$. Using the estimates of [11], we obtain that for almost every $\omega$ the nonlinear terms are controlled with some constant $c_k$ which depends neither on $T$ nor on $\omega$ (see [11], pages 584 and 586). Proposition 1 implies that, if the operator $\Phi$ is regular enough (that is, $\Phi \in \mathcal{L}_2^{0,1+\varepsilon}$ for some positive $\varepsilon$ when $k = 2$, or $\Phi \in \mathcal{L}_2^{0,5/12}$ when $k = 3$), then the stochastic integral $v$ belongs a.s. to $X^T_k$. Let $k = 2$; using inequalities proved in [11], pages 584-585, we deduce that for $u_0 \in H^{1/4}_x$ a.s. and $\Phi \in \mathcal{L}_2^{0,1+\varepsilon}$ for some positive $\varepsilon$, given $u, u_1, u_2 \in X^T_2$, the contraction estimates hold, and we let $T_2(\omega) > 0$ satisfy the associated smallness condition. In a similar way, when $k = 3$, the inequalities proved in [11], page 590, imply that for $u_0 \in H^{1/12}_x$ a.s. and $\Phi \in \mathcal{L}_2^{0,5/12}$, given $u, u_1, u_2 \in X^T_3$, the analogous estimates hold for some $\rho > 0$, and we let $T_3(\omega) > 0$ satisfy (32). These choices imply that for $k = 2, 3$ the map $F_k$ sends a suitable closed ball $Y$ of $X^{T_k(\omega)}_k$ into itself and is a strict contraction on that set. Hence, $F_k$ has a unique fixed point in $Y$, $k = 2, 3$, thus concluding the proof.

4. Global well-posedness. We now prove global existence when the initial condition $u_0$ is in $H^1_x$ a.s. The argument relies on a regularization of $u_0$ and $\Phi$ and on the following conservation laws. When $k = 2, 3$ and $z_k$ is the (deterministic) solution to the gKdV equation with data in $H^1_x$, the following quantities are time invariant (see [9], Theorem 4.2), $k = 2, 3$: the mass
$$M(z_k) = \int_{\mathbb{R}} z_k^2(t, x)\, dx, \tag{33}$$
and the Hamiltonian
$$H(z_k) = \int_{\mathbb{R}} \Big( \frac12\, \big(\partial_x z_k(t, x)\big)^2 - \frac{z_k^{k+2}(t, x)}{(k+1)(k+2)} \Big)\, dx. \tag{34}$$
We now prove Theorem 1.1.
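For the reader's convenience, the invariance of the Hamiltonian $H(z) = \int_{\mathbb{R}} \big( \frac12 (\partial_x z)^2 - \frac{z^{k+2}}{(k+1)(k+2)} \big)\, dx$ can be checked directly for smooth, decaying solutions of $\partial_t z + \partial_x^3 z + z^k \partial_x z = 0$:

```latex
\begin{aligned}
\frac{d}{dt} H(z)
 &= \int_{\mathbb{R}} \Big( \partial_x z\,\partial_t\partial_x z
      - \frac{z^{k+1}}{k+1}\,\partial_t z \Big)\, dx
  = -\int_{\mathbb{R}} \Big( \partial_x^2 z + \frac{z^{k+1}}{k+1} \Big)\,
      \partial_t z\, dx \\
 &= \int_{\mathbb{R}} \Big( \partial_x^2 z + \frac{z^{k+1}}{k+1} \Big)\,
      \partial_x \Big( \partial_x^2 z + \frac{z^{k+1}}{k+1} \Big)\, dx
  = \frac12 \int_{\mathbb{R}} \partial_x
      \Big( \partial_x^2 z + \frac{z^{k+1}}{k+1} \Big)^{2} dx
  = 0,
\end{aligned}
```

where we integrated by parts and used $\partial_t z = -\partial_x\big( \partial_x^2 z + \frac{z^{k+1}}{k+1} \big)$; the invariance of the mass follows by an analogous (simpler) computation.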
The proof is based on approximations of $\Phi$ and $u_0$ and contains several steps. Indeed, we want to obtain moments of the $H^1_x$-norm of $u_n$ uniformly in $t$. The mild formulation does not allow us to use martingale estimates for the stochastic integral appearing when the Itô formula is applied to the mass and to the Hamiltonian. Thus, we have to use a sequence of strong solutions $\{u_n\}$ of (3), where $\Phi_n$ is a "smoother" Hilbert-Schmidt operator and $u_{0,n}$ is a "smoother" initial condition. Let $\Phi_n \in \mathcal{L}_2^{0,4}$ and $u_{0,n} \in H^3_x$ be such that $\Phi_n \to \Phi$ in $\mathcal{L}_2^{0,1+\varepsilon}$, $\varepsilon > 0$ (resp. in $\mathcal{L}_2^{0,1}$) for $k = 2$ (resp. $k = 3$), and $u_{0,n} \to u_0$ in $H^1_x$ a.s.

Step 1. Proposition 1 proves that the sequence $v_n(t) := \int_0^t S(t-s)\Phi_n\, dW(s)$ converges to the stochastic integral $v$ in $L^2_\omega(X^T_k)$. Hence, there exists a subsequence, still denoted $\{v_n\}$, which converges to $v$ a.s. Furthermore, for any integer $n$ and $k = 2, 3$, there exists a unique solution $u_n$ to
$$du_n(t) + \big( \partial_x^3 u_n(t) + u_n(t)^k\, \partial_x u_n(t) \big)\, dt = \Phi_n\, dW(t), \qquad u_n(0) = u_{0,n},$$
and $u_n$ belongs a.s. to $L^\infty_t(H^3_x)$. Indeed, following the argument in [3], Lemma 3.2, if we set $v_n(t) = \int_0^t S(t-s)\Phi_n\, dW(s)$ and let $z_n = u_n - v_n$, then $z_n$ has to solve a.s. the deterministic equation
$$\partial_t z_n + \partial_x^3 z_n + (z_n + v_n)^k\, \partial_x (z_n + v_n) = 0, \qquad z_n(0) = u_{0,n}.$$
To ease notation we do not specify the value of $k = 2, 3$ when dealing with the solution $u_n$. Standard arguments, such as the parabolic regularization described in [14], yield that the above equation has a unique local solution. Finally, an argument similar to that in [7] proves that the invariant quantities in (33) and (34) allow us to extend this solution to any time interval $[0, T]$. Note that $u_n \in L^\infty_t(H^3_x) \cap X^T_k$ a.s.
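The role of the invariants in Step 1 can be illustrated numerically. The periodic split-step Fourier scheme below is not the paper's parabolic-regularization argument, only an independent sketch under stated assumptions (grid, data, and scheme are all illustrative); it evolves the deterministic focusing mKdV and checks that the mass stays constant along the flow:

```python
import numpy as np

# Lie split-step Fourier sketch for the deterministic focusing mKdV
# u_t + u_xxx + u^2 u_x = 0 on a periodic grid.  The linear Airy step is
# exact and unitary; the nonlinear step u_t = -(u^3/3)_x is advanced with
# RK4, so the L^2 mass -- one of the invariants used for global
# existence -- is conserved up to time-discretization error.
N, L = 128, 2 * np.pi
x = np.arange(N) * L / N
xi = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
dt, steps = 1e-4, 200

def nonlinear_rhs(u):
    # -(u^3/3)_x computed spectrally (conservative form of -u^2 u_x).
    return -np.real(np.fft.ifft(1j * xi * np.fft.fft(u ** 3 / 3.0)))

def step(u):
    # Exact linear Airy step: multiplier e^{i dt xi^3}.
    u = np.real(np.fft.ifft(np.exp(1j * dt * xi ** 3) * np.fft.fft(u)))
    # RK4 for the nonlinear substep.
    k1 = nonlinear_rhs(u)
    k2 = nonlinear_rhs(u + 0.5 * dt * k1)
    k3 = nonlinear_rhs(u + 0.5 * dt * k2)
    k4 = nonlinear_rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = 0.5 * np.sin(x)                    # smooth illustrative initial datum
mass0 = np.sum(u ** 2) * L / N
for _ in range(steps):
    u = step(u)
mass = np.sum(u ** 2) * L / N
assert abs(mass - mass0) / mass0 < 1e-6   # mass conserved to scheme accuracy
```

The linear substep conserves the discrete mass exactly (unimodular multiplier), so any drift comes from the time discretization of the nonlinear substep.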
Step 2. We next prove that the sequence $(u_n)$ is bounded in $L^{2q}_\omega(L^\infty_t(L^2_x))$. The proof is based on Itô's formula for the mass and the time invariance of the $L^2_x$-norm for the solution to the deterministic generalized KdV equation.
Using the conservation of mass for the solutions to the deterministic gKdV equation, we get
$$\int_0^t \big( u_n(s),\ \partial_x^3 u_n(s) + u_n(s)^k\, \partial_x u_n(s) \big)\, ds = 0.$$
Note that this requires $u_n(s) \in H^3_x$ a.s., which holds by Step 1, and $u_n(s) \in L^{2(k+1)}_x$ a.s., which is true since $H^1_x \subset L^{2(k+1)}_x$. Itô's formula applied to $\|u_n(t)\|^2_{L^2_x}$, combined with the above identity, yields
$$\|u_n(t)\|^2_{L^2_x} = \|u_{0,n}\|^2_{L^2_x} + 2 \int_0^t \big( u_n(s), \Phi_n\, dW(s) \big) + t \sum_{j \ge 0} \|\Phi_n e_j\|^2_{L^2_x}.$$
Using once more Itô's formula with the map $y \mapsto y^q$, $q \in [2, \infty)$, and the process $\|u_n(t)\|^2_{L^2_x}$, we obtain a further identity. The Cauchy-Schwarz inequality applied to the last term gives an upper bound for some $C(T) > 0$ which is an increasing function of $T$; the last inequality is obtained using Young's inequality with the conjugate exponents $q$ and $\frac{q}{q-1}$. Furthermore, the Davis inequality for stochastic integrals, the Cauchy-Schwarz inequality and then Young's inequality applied with the conjugate exponents $2q$ and $\frac{2q}{2q-1}$ give an upper bound involving $\int_0^t \sum_{j \ge 0} \big( u_n(s), \Phi_n e_j \big)^2 ds$, for some $C(T) > 0$ which is again an increasing function of $T$. The inequalities (37)-(39) yield the existence of a constant $C(T) > 0$ such that the required moment bound holds. Using the explicit form of (42), we compute $\sum_{j \ge 0} H_k(u_n(s))(\Phi_n e_j, \Phi_n e_j)$. For $k = 3$, the Gagliardo-Nirenberg inequality implies $\|u_n\|_{L^3_x} \le C\, \|u_n\|^{\alpha}_{H^1_x}\, \|u_n\|^{1-\alpha}_{L^2_x}$ for $\alpha = \frac12 - \frac13 = \frac16$. Therefore, using Young's inequality with the conjugate exponents $4$ and $\frac43$, we bound $\sum_{j \ge 0} H_3(u_n(s))(\Phi_n e_j, \Phi_n e_j)$ in terms of $\|u_n(s)\|^2$. As in (39), using once more the Davis inequality for the stochastic integral, integration by parts and the Cauchy-Schwarz inequality, we obtain an estimate with some positive constant $C(\varepsilon, T)$, which is again an increasing function of $T$ for fixed $\varepsilon > 0$. Note that for $k = 2$ we have $\frac{k+3}{5-k} = \frac53 < 2$, while for $k = 3$ we have $\frac{k+3}{5-k} = 3$. Collecting the information from the estimates (43)-(46) and choosing $\varepsilon = \frac{1}{16}$, we obtain the required bound for $\bar q(2) = 2$ (resp. $\bar q(3)$). Furthermore, if $u_0 \in L^{2\bar q(k)}_\omega(L^2_x)$, choosing the exponent $q = \bar q(k) \ge 2$ used for the approximation $u_{0,n}$ of $u_0$, we deduce from (41) that $(u_n)$ is bounded in $L^{2\bar q(k)}_\omega(L^\infty_t(L^2_x))$.
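The Gagliardo-Nirenberg exponent $\alpha = \frac16$ used above can be sanity-checked numerically: replacing $\|u\|_{H^1_x}$ by the homogeneous seminorm $\|\partial_x u\|_{L^2_x}$ (a deliberate simplification), the ratio of the two sides of $\|u\|_{L^3_x} \le C\, \|\partial_x u\|_{L^2_x}^{1/6}\, \|u\|_{L^2_x}^{5/6}$ is invariant under dilations $u_\lambda(x) = u(\lambda x)$, which pins down the exponent; the Gaussian test function and the grid are illustrative assumptions:

```python
import numpy as np

def ratio(lam):
    # ||u||_{L^3} / ( ||u_x||_{L^2}^{1/6} ||u||_{L^2}^{5/6} ) for a
    # dilated Gaussian u(lam x); Riemann sums on a fine grid.
    x = np.linspace(-40.0, 40.0, 400001)
    dx = x[1] - x[0]
    u = np.exp(-(lam * x) ** 2)
    ux = -2.0 * lam ** 2 * x * u        # exact derivative of the Gaussian
    l3 = (np.sum(np.abs(u) ** 3) * dx) ** (1.0 / 3.0)
    l2 = np.sqrt(np.sum(u ** 2) * dx)
    h1 = np.sqrt(np.sum(ux ** 2) * dx)  # homogeneous seminorm ||u_x||_{L^2}
    return l3 / (h1 ** (1.0 / 6.0) * l2 ** (5.0 / 6.0))

# Dilation invariance of the ratio confirms the exponent alpha = 1/6:
# both sides scale like lam^{-1/3} under u -> u(lam x).
assert abs(ratio(1.0) - ratio(3.0)) < 1e-8
```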