DIFFUSION LIMIT FOR KINETIC FOKKER-PLANCK EQUATION WITH HEAVY TAILS EQUILIBRIA: THE CRITICAL CASE

Abstract. This paper is devoted to the diffusion and anomalous diffusion limit of the Fokker-Planck equation of plasma physics, in which the equilibrium function decays towards zero at infinity like a negative power function. We use probabilistic methods to recover and extend the results obtained in [22]. We prove in particular, in the critical case where the classical diffusion coefficient is no longer defined, that the small mean free path limit gives rise to a diffusion equation, with an anomalous time scaling and with a variance breaking.

1. Introduction and main results. We consider a collisional kinetic equation of the form (1). Such a problem naturally arises when modeling the behavior of a cloud of particles. Provided f_0 ≥ 0, the unknown f(t, x, v) ≥ 0 can be interpreted as the density of particles occupying, at time t ≥ 0, the position x ∈ R^d, with a physical state described by the variable v ∈ R^d representing the velocity of the particles. As in [22], we focus in this paper on the Fokker-Planck equation, where the collision operator Q has a diffusive form and where the equilibria are characterized by the choice of ω. In the whole paper, except in the first section, we choose ω = ω_β for some β > d, where C_β is chosen such that ∫ C_β ω_β^{-1} dv = 1. Note that Q does not depend on x. We shall denote by µ_β (or simply µ) the measure µ_β = C_β ω_β^{-1} dv. This corresponds to the so-called Barenblatt profile or general Cauchy distribution. Note that Q is nothing else but the adjoint operator (in L²(dv)) of the operator L given by (4), which may be a more classical way to write the Fokker-Planck operator, as the sum of a Laplacian and a potential term.
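For the reader's orientation, here is a sketch of the displays (1)-(4), in our reading of the notation of [22]; the precise constants and signs are assumptions to be checked against the original equations:

```latex
% (1) kinetic equation and (2)-(3) Fokker-Planck collision operator:
\partial_t f + v \cdot \nabla_x f = Q(f),
\qquad
Q(f) = \nabla_v \cdot \Big( \nabla_v f + \frac{\beta\, v}{1+|v|^2}\, f \Big),
% Barenblatt / general Cauchy equilibria and the adjoint operator (4):
\omega_\beta(v) = \big(1+|v|^2\big)^{\beta/2},
\qquad
\mu_\beta = C_\beta\, \omega_\beta^{-1}\, dv,
\qquad
L = \Delta_v - \frac{\beta\, v}{1+|v|^2} \cdot \nabla_v .
```

With this form, Q(C_β ω_β^{-1}) = 0 and ∫ Q(f) g dv = ∫ f Lg dv, so that L is indeed the L²(dv)-adjoint of Q, written as a Laplacian plus a drift deriving from the potential V(v) = (β/2) ln(1 + |v|²).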

Diffusion approximation
When the scattering phenomenon is much stronger than the advection phenomenon, one expects that the solution of (1) can be approximated by a density depending on the time and space variables, multiplied by a velocity profile given by the thermodynamical equilibrium.
More precisely, we introduce a small parameter ε ≪ 1 which describes the mean free path of the particles, and we consider the following rescaling: x' = ε x and t' = θ(ε) t, with θ(ε) → 0.
Typically, it means that we assume that the time scale is very large, as well as the observation scale. In order to study this asymptotic regime, let us rescale the distribution function accordingly. The function f^ε is then a solution of the rescaled equation (5) (we skip the primes). The goal is to study the behavior of the solution as ε → 0. The usual diffusion limit corresponds to θ(ε) = ε², as in [22], where the result is obtained in the particular situation of (3) by using the moment method, which is by now classical to derive limits of kinetic equations ([20] and references therein).
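In the primed variables, the rescaled equation (5) should read as follows (a reconstruction, consistent with the probabilistic rescaling of the next subsection):

```latex
f^\varepsilon(t,x,v) = f\big(t/\theta(\varepsilon),\, x/\varepsilon,\, v\big),
\qquad
\theta(\varepsilon)\,\partial_t f^\varepsilon
  + \varepsilon\, v\cdot\nabla_x f^\varepsilon
  = Q(f^\varepsilon).
```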

Probabilistic formulation
This problem has a natural probabilistic interpretation. Indeed, provided f_0 is a probability density, denoting by B_t a Brownian motion, the solution f(t, x, v) of (1) is the density of probability (with respect to Lebesgue's measure) of the law of the (stochastic) diffusion process given by the stochastic differential equation (S.D.E.) (6), whose position component satisfies dx_t = v_t dt, starting with initial distribution f_0(x, v) dx dv. We shall give rigorous statements later. The rescaling corresponds to the following: f^ε(t, x, v) is the density of probability of the joint law of (ε x_{t/θ(ε)}, v_{t/θ(ε)}) when (x_0, v_0) are distributed according to f_0(x, v) dx dv. Notice that we rescaled the initial data so that we do not have to rescale f^ε by 1/ε^d. We thus study the joint law of (ζ(s) ∫_0^s v_u du, v_s) when s → +∞, i.e. the joint law of the process v_s and of some particular additive functional of the process. Note that ζ encodes the normalization by ε, i.e. ζ(t/θ(ε)) = ε. This approach was already used by the first named author, together with D. Chafai and S. Motsch [5], in the study of the so-called persistent turning walker model introduced in [12]. In [4], a rather general study of the long time behavior of additive functionals of ergodic Markov processes is carried out. The fact that one can then derive the joint behavior of (ζ(s) ∫_0^s v_u du, v_s) is explained in section 3 of [5] (subsection: coupling with propagation of chaos and asymptotic independence) and is granted in general situations, in particular the ones we will look at. It is this propagation of chaos (in time) property which ensures the asymptotic splitting of f^ε as a product of a function of x times a function of v.
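The display of the S.D.E. (6) is elided above; the sketch below is a minimal Euler-Maruyama discretization, in dimension 1, of the candidate dynamics dx_t = v_t dt, dv_t = -β v_t/(1+v_t²) dt + √2 dB_t, whose invariant law is proportional to (1+v²)^{-β/2}. The drift form is our assumption (matching the operator L above), not a quotation of (6).

```python
import math
import random

# Euler-Maruyama sketch (d = 1) of the assumed S.D.E.
#     dx_t = v_t dt,
#     dv_t = -beta * v_t / (1 + v_t^2) dt + sqrt(2) dB_t,
# whose invariant law is proportional to (1 + v^2)^(-beta/2),
# i.e. a Barenblatt/Cauchy-type equilibrium.
def simulate(beta, T, dt, seed=0):
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    n = int(T / dt)
    for _ in range(n):
        x += v * dt
        v += -beta * v / (1.0 + v * v) * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
    return x, v

# critical case beta = d + 4 = 5 in dimension d = 1
x_T, v_T = simulate(beta=5.0, T=10.0, dt=1e-3)
```

Averaging ε x_{t/θ(ε)} over many independent paths would then approximate the rescaled density f^ε of the preceding paragraph.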
The main goal of this paper is to study the case where the diffusion coefficient is no longer finite. Actually, we focus here on the critical case β = d + 4; the case β < d + 4 is not addressed here. Note that, in this latter case, for the Boltzmann equation (see [20,21,2,3]), the limiting equation is a fractional diffusion equation.
1. When β > d + 2, v ∈ L²(µ), and then S_t, defined below, has a finite variance. But what is the long time behavior of E_µ(S_t²), since the diffusion coefficient is no longer finite?
2. What is the "good" normalization s_t for S^i_t/√s_t to converge in distribution?
The natural choice would thus be s t = Var µ (S i t ). Several arguments in [4] indicate that this will be the case only if Var µ (S i t ) behaves like t times a slowly varying function.
3. If such an s_t exists, what is the limiting distribution?
4. What happens with the joint distribution, i.e. with the random vector S_t?
Actually, when β = d + 4, we still get a diffusion limit as in [22], but with an anomalous scaling. Such a phenomenon of anomalous rate of convergence to a diffusion limit was already observed on other examples (see [13], [21]). An additional feature here is that some variance breaking occurs. Indeed, if we compute the second moment a^ε_t (which does not depend on i), it is shown that a^ε_t → 2κt as ε → 0, while the similar second moment for the limiting density ρ is 2κt/3. This shows that the measures converge, but the second moment does not converge to the second moment of the limiting measure.
Before proving this main result, we give a result on the classical diffusion limit in a more general setting, state the required hypotheses on the equilibria, and recall the classical probabilistic methods that enable us to handle this case. We then check that this setting includes the equilibrium considered in this paper, i.e. the Barenblatt profile.
Existence theorem. We recall the following theorem: the equation under study has a unique solution f in the class of functions Y defined below.
Classical diffusion approximation. The case β > d + 4 leads to a diffusion equation, as described in the following theorem.
Then, f^ε converges weakly star towards a limit which has an explicit shape. The diffusion coefficient is thus given by an integral in dv of a quantity behaving like a power of |v|, and the constraint β > d + 4 corresponds exactly to the range that ensures that the diffusion coefficient is finite.

2.2. Main results. We first prove, via probabilistic methods, a reformulation of Theorem 2.2.
Theorem 2.3. Assume that β > d + 4 and that f_0 is a probability density. Let ω^{-1} = C_β ω_β^{-1} and θ(ε) = ε². Then the solution f^ε of (5) converges weakly as ε → 0 towards ρ(t, x) ω^{-1}(v), where ρ is the unique solution of the heat equation; the convergence holds for all t and all test functions F which are continuous and bounded.
Notice that we can also assume that the initial condition is a Dirac mass δ_{x_0,v_0}. The type of convergence we obtain is different from the one in [22]. Furthermore, it can be extended to a multi-time convergence.
The main result of this paper concerns the critical case, as follows.
Theorem 2.4. Let x_t be defined in (7). Then there exists κ > 0 such that, for each i,
1. Var_µ(x^i_t)/(t ln t) → κ > 0 as t → +∞,
2. the normalized additive functional x_t/√(Var_µ(x^i_t)) converges in distribution to a centered gaussian vector with covariance matrix (1/3) Id.
Thus, with θ(ε) = ε² ln(1/ε), for every initial probability density f_0, the solution f^ε_t of (5) converges weakly as ε → 0 towards C_β ω_β^{-1}(v) ρ(t, x), where ρ is the solution to the limiting diffusion equation.
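In displayed form, the limit statement should read as follows (our reconstruction; the covariance (2κ/3) t Id below matches the computation at the end of section 4):

```latex
\theta(\varepsilon) = \varepsilon^2 \ln(1/\varepsilon),
\qquad
f^\varepsilon_t \;\rightharpoonup\; C_\beta\,\omega_\beta^{-1}(v)\,\rho(t,x),
\qquad
\partial_t \rho = \frac{\kappa}{3}\,\Delta_x \rho ,
```

so that ρ(t, ·) is the density of a centered gaussian vector with covariance matrix (2κ/3) t Id, up to convolution with the initial datum.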

Strategy of the proof of Theorem 2.4
The proof is based on the Lindeberg method in the central limit theorem for mixing sequences, and is organized as follows.
1. Use some cut-off, at level K(t), directly on H = L^{-1}(v). To this end, for K > 0, we define the truncated function H_K and set v_K = LH_K.

Remark 1. Actually it is easier to use a cut-off on H rather than on v, introducing some bounded v_K, since for a cut-off directly on v we do not know the explicit solution of the corresponding Poisson equation.

PATRICK CATTIAUX, ELISSAR NASREDDINE AND MARJOLAINE PUEL
3. Choose some K_j(t) (j = 1, 2) growing to infinity with t such that, on the one hand, Var_µ(S_t) will be asymptotically equal to s_t, and, on the other hand, the remainder goes to 0 in probability (or, for instance, in L¹(µ)).

4. Prove some central limit theorem for the truncated functional, so that the same central limit theorem will be available for S_t/√s_t thanks to Slutsky's theorem.
As we said, the difference between K_1(t) and K_2(t) will thus explain the anomalous rate of convergence, since the normalization will not be the asymptotic square root of the variance.
It is worth noticing that the key to the result is the choice of K_1(t), which has to satisfy two conditions: a good cut-off property and some central limit theorem for the truncated functional.
Outline of the paper. In a first section, we shall rephrase the problem in a probabilistic way, show how to recover the same kind of result as Theorem 2.2 for a large class of weights ω by using arguments in [4], and check that the Barenblatt equilibria considered in this paper are covered by this study, in order to prove Theorem 2.3. In section 4, we shall prove Theorem 2.4 by following the strategy of proof described above.
Notations: We shall make the following abuse of notation, denoting simply by v the function v ↦ v.
< U, V > will denote the scalar product in R d when U, V are vectors in R d . < M > will denote the martingale bracket, when M is a martingale.
C will denote a constant that may change from line to line.
3. Classical rate of convergence. In this section we recall basic facts about long time behavior of stochastic diffusion processes and show the link between long time behavior and diffusion approximation. We then provide a proof of Theorem 2.3 in order to give a dictionary between the deterministic and the stochastic point of view.
3.1. PDE/SDE and semigroup reformulation. Let us come back to the S.D.E. (6) and consider general functions ω satisfying the following assumptions.
Hypotheses (H).
1. (H1) ω > 0 is smooth (C² or C^∞) and ∫ C ω^{-1} dv = 1. We thus define the probability measure µ = C ω^{-1} dv.
2. (H2) a condition on ω which is sometimes called a drift condition.
If (H) is satisfied, it is known that (6) has a unique non-explosive solution starting from any v_0. Indeed, local existence follows from the smoothness of the coefficients, and (H2) ensures that v ↦ |v|² is a Lyapunov function for Hasminskii's explosion test [14]. In addition, µ is the unique invariant probability measure for the process, and it is actually symmetric. This means that for all g, h ∈ C^∞_0, ∫ g Lh dµ = ∫ h Lg dµ.
We may thus define P_t h(v_0) = E_{v_0}(h(v_t)) for the associated semi-group, E_{v_0} being the expectation when the process starts from v_0. This semi-group extends to all L^p(µ) spaces, and is self-adjoint in L²(µ). It is a Markov semi-group, i.e. P_t 1 = 1 (1 being here the constant function). Furthermore, the operator norm of P_t acting on L^p(µ) is equal to 1.
Thanks to symmetry, if h ≥ 0 satisfies ∫ h dµ = 1 and if the law of v_0 is given by h µ, then the law of v_t is exactly (P_t h) µ. In other words, the solution of (11) is given by P_t(f_0 ω) ω^{-1}.
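In semigroup notation, the correspondence just described reads as follows (a hedged summary of the elided displays):

```latex
P_t h(v_0) = \mathbb{E}_{v_0}\big[h(v_t)\big],
\qquad
\partial_t f_t = Q(f_t),
\qquad
f_t = \big(P_t(f_0\,\omega)\big)\,\omega^{-1},
```

the last identity following from the symmetry of P_t in L²(µ).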

3.2. Ergodic behavior, long time behavior. If we look at (5) without the transport term (or, if one prefers, with an initial datum depending only on v), the asymptotic behavior as ε → 0 is given by the long time behavior of the semi-group P_t. We shall now recall some known facts about this long time behavior.
Denote by L^p_0(µ) = {g ∈ L^p(µ) ; ∫ g dµ = 0} the hyperplane of L^p whose elements have zero mean. If 1 ≤ p ≤ r ≤ +∞ and T is a bounded operator from L^r_0(µ) into L^p_0(µ), introduce |T|_{r,p}, the operator norm of T. The operator P_t is bounded from L^p_0(µ) into L^p_0(µ), and |P_t|_{p,p} ≤ 1. The next result is due to Roeckner and Wang [23]; a stronger version is contained in [9].
Proposition 1. Assume that hypotheses (H) are satisfied. Then the operator norms of P_t decay as stated.
Notice that, thanks to the semi-group property and the stability of L^p_0(µ), as soon as |P_{t_0}|_{p,p} < 1 for some t_0 > 0 and some 1 < p < +∞, then |P_t|_{p,p} ≤ K_p e^{-λ_p t} for some K_p and λ_p > 0. Applying the Riesz-Thorin interpolation theorem in an appropriate way (see [8]), one deduces that the same holds for all 1 < p < +∞. This yields the following alternative.
Remark 2. Exponential decay, i.e. the existence of c and λ > 0 such that |P_t|_{2,2} ≤ c e^{-λt}, is equivalent to the fact that µ satisfies a Poincaré inequality (14). As is well known, (14) gives a spectral gap and implies the existence of some exponential moment for µ. Since this property is not satisfied for the Barenblatt profile ω = ω_β, there is no spectral gap and one cannot expect exponential convergence in our case.
The long time behavior is now summarized in the following proposition, which gives a first result in the very particular case where everything is independent of x.
Proposition 2. Consider the equation (11). Assume that (H) is satisfied and, for simplicity, that C = 1 in (H1), and that f_0 ≥ 0 is such that ∫ f_0 dv = 1.
If f_0 ω ∈ L^r(ω^{-1} dv) for some r > 1, then the solution f_t = P_t(f_0 ω) ω^{-1} of (11) converges as t → +∞ towards ω^{-1}, in the following sense: for all 1 ≤ p < r there exists some α(r, p, t) → 0 as t → +∞ controlling the convergence.
Proof. A simple application of the Riesz-Thorin interpolation theorem to T_t, defined by T_t g = P_t g - ∫ g dµ, with the pairs (∞, 2) and (q, q), furnishes the desired estimate. Another simple proof is contained in [4], Lemma 5.1. For 1 ≤ p < r ≤ 2 we obtain the result by duality, and for 1 ≤ p ≤ 2 < r by a simple combination; thus Proposition 1 concludes the proof.
We shall come back later to the rate of convergence α which is of key importance for our problem.
3.3. Additive functional and the central limit theorem. As we said in the introduction, the propagation of chaos (in time) property explained in [5] allows us to look separately at v_{t/θ(ε)} and ε ∫_0^{t/θ(ε)} v_s ds to get their asymptotic joint law. The asymptotic behavior of such additive functionals is well understood when v ∈ L²(µ). It is much less understood when v ∈ L^p(µ) for some p < 2 (and not in L²(µ)).
For the latter situation nothing is known in the continuous time setting of this paper. In a discrete time setting some results have been obtained in [15,10].
When v ∈ L 2 (µ), we shall recall here the essential results explained in [4] ( see also previous results quoted in the bibliography in [4]). Notice that we are facing here an additional difficulty since the integrand is vector valued.
Denote by S_t = ∫_0^t v_s ds the additive functional under study. The asymptotic behavior of S_t is given by the so-called central limit theorem for additive functionals (a stronger form is called the Donsker invariance principle, or functional central limit theorem). This principle tells us how to normalize S_. to ensure the convergence of its law (or probability distribution) to a gaussian law.
Definition 3.1. We say that S_. satisfies a multi-times central limit theorem (MCLT) at equilibrium with rate ζ and asymptotic covariance matrix Γ, if for every finite sequence 0 < t_1 ≤ ... ≤ t_n < +∞, (ζ(η) S_{t_1/η}, ..., ζ(η) S_{t_n/η}) converges to (B_{t_1}, ..., B_{t_n}) in law as η → 0, where (B_.) is a Brownian motion on R^d with covariance matrix Γ. If the previous holds only for n = 1 (one time) but all t, we say that (CLT) is satisfied (the limit being then a gaussian vector).
Notice that in the previous definition, we assumed that the initial distribution of v 0 is the invariant distribution µ. We shall similarly use the terminology (MCLT) out of equilibrium when we can replace µ by some other initial distribution. Note that there is a slight difference between the definition stated here and the definition of (MCLT) in [4].
Notice also that the result gives a multi-time central limit theorem, but that only the (one-time) central limit theorem has a translation in terms of PDEs.
A gathering of results of [4] gives a general setting (general conditions on the equilibria, i.e. on µ = ω^{-1} dv) in which classical diffusion is proved, as summarized below.
Theorem 3.2. If (17) is satisfied, then S_. satisfies the (MCLT) at equilibrium, with rate ζ(η) = √η and asymptotic covariance matrix (or effective diffusion tensor) Γ. The conclusion of Theorem 3.2 still holds true out of equilibrium, provided the law of the initial condition is either a Dirac mass δ_{v_0} or is absolutely continuous w.r.t. µ.
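Under the Kipnis-Varadhan condition below, the effective diffusion tensor of Theorem 3.2 admits the classical Green-Kubo representation (a standard formula; the corresponding display is elided in the text above):

```latex
\Gamma_{ij} \;=\; 2 \int_0^{+\infty} \mathbb{E}_\mu\big[v^i_0\, v^j_t\big]\, dt
\;=\; 2 \int_0^{+\infty}\!\!\int v^i \,\big(P_t v^j\big)\, d\mu \, dt .
```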
As a corollary, we obtain (since x + B_t is still a Brownian motion with mean x) the following.
Corollary 1. [4] Assume that (H) holds true (with C = 1 for simplicity). Then the corresponding convergence holds for all n, all t and all F which are continuous and bounded.

There are mainly three approaches to get (MCLT) in our situation: the Kipnis-Varadhan theorem, mixing, and a martingale approach. We give a brief presentation of the first one.

Kipnis-Varadhan approach
We shall say that the Kipnis-Varadhan condition is satisfied if (19) holds. The proof uses reversibility: the Kipnis-Varadhan theorem (which immediately extends to the multi-dimensional setting) gives Theorem 3.2. As discussed in Remark 3.6 of [4], a sufficient condition for (19) to be satisfied is the following: let H^1_0 = L²_0 ∩ {g ; ∇g ∈ L²(µ)}; then (19) is satisfied as soon as v belongs to the dual of H^1_0.
Remark 3 (Martingale approach). While one can obtain the Kipnis-Varadhan theorem by using an approximate martingale method (see [4], Theorem 3.3), the (true) martingale method is the most popular method for studying additive functionals, and is actually used in [5]. This method is based on the following idea: assume that we can find a solution to the (here vectorial) Poisson equation (23). Applying Ito's formula, we see that, provided the boundary terms are in some sense negligible, the asymptotic behavior of S_t is equivalent to that of the martingale term M_t = √2 ∫_0^t ∇H(v_s)·dB_s, for which the (MCLT) has been known for a long time.
Formally, the solution of (23) satisfying ∫ H dµ = 0 (still assuming (17)) is given by H = -∫_0^∞ P_t v dt. This condition is stronger than (19), so that, from a general point of view, there is no possible gain by using this strategy, except the following: provided the martingale term is in L²(µ), we only need that H ∈ L¹(µ). For instance, it is enough that ∫_0^∞ ||P_t v||_1 dt < +∞, which holds in particular when v ∈ L^p(µ) for some p > 1 and ∫_0^∞ α(p, 1, t) dt < +∞. But in this situation we need ∇H ∈ L²(µ) to ensure that the martingale term is square integrable.
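For the record, the Ito computation behind the martingale method, in the convention H = L^{-1}(v) of section 2 (the signs are ours and may differ from the elided display):

```latex
H(v_t) - H(v_0) \;=\; \int_0^t LH(v_s)\,ds \;+\; \sqrt{2}\int_0^t \nabla H(v_s)\cdot dB_s ,
\qquad LH = v ,
```

so that S_t = ∫_0^t v_s ds = H(v_t) - H(v_0) - M_t with M_t = √2 ∫_0^t ∇H(v_s)·dB_s, the boundary terms H(v_t) - H(v_0) being negligible after normalization.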
It is very hard in general to explicitly control P_t v since, even if we know some bound for α(t) (which is the case in many situations), this bound only furnishes upper bounds. It turns out that, in some specific cases, one can directly solve (23). As shown in [22], this can be done for ω = ω_β. In addition, in this situation one obtains an explicit expression for the effective diffusion tensor.

Application to Barenblatt/Cauchy profiles, proof of Theorem 2.3.
Here we shall only look at the case ω^{-1} = C_β ω_β^{-1}, i.e. the general Cauchy distribution, also known as the Barenblatt profile. This case is partly discussed in subsection 5.4.1 of [4], but we shall give more detailed results here.
In order to find the range of parameters for which the assumptions of Corollary 1 are satisfied, we need the following lemma.
Lemma 3.4. Recall that α(t) = |P_t|_{∞,2}, P_t being the semigroup associated to the operator L given by (4); we then have the following estimate.
Proof. In order to estimate α(t), we shall use the optimal weak Poincaré inequality obtained in [6] (improving on [23]): there exists some constant C(d, β) such that the inequality holds for all nice f with ∫ f dµ = 0 and all s > 0. An easy optimization in s then furnishes, for these f's, a Nash type inequality. In order to stay self-contained, we recall Theorem 2.2 in [17]:
Theorem 3.5.
(Liggett's theorem, [17].) Let L be a linear operator generating a Markov semi-group P_t. It follows that, for all t ≥ 1, the announced bound on α(t) holds.
Proof of Theorem 2.3. Note that µ = ω_β^{-1} dv satisfies (17), and that α² is integrable when β is large enough. Recall that v ∈ L^p(µ) if and only if β - d > p, so that we may apply Corollary 1 provided β > d + 4. As we said in the previous section, we can here explicitly solve the Poisson equation. Inspired by the calculation in [22], we search for H with components of the form H_i(v) = v_i (a + b|v|²). Notice that the v_i's are exchangeable, so that a and b are the same for all components. As in [22], we obtain explicit values of a and b, which make sense since β > d + 2. Now |∇H|² behaves like |v|⁴, so that it is integrable if and only if β > d + 4; in this situation H ∈ L¹_0(µ). We may thus apply the last part of Corollary 1, which furnishes an explicit expression for Γ_{ij}, the one obtained in [22].
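The coefficients of the ansatz can be checked numerically. In the sketch below, the operator L = Δ_v - β⟨v, ∇_v⟩/(1+|v|²) and the coefficient formulas b = 1/(2(d+2) - 3β), a = (2b(d+2) - 1)/β are our reading of the elided computation "as in [22]"; a sanity check by finite differences, not a quotation of the paper's display.

```python
import math
import random

# Check, by centered finite differences, that the polynomial ansatz
#     H_i(v) = v_i * (a + b*|v|^2)
# solves the Poisson equation L H_i = v_i (convention H = L^{-1}(v)) for
#     L = Delta_v - beta * <v, grad_v> / (1 + |v|^2).
d, beta = 3, 7.0                       # critical case beta = d + 4
b = 1.0 / (2.0 * (d + 2) - 3.0 * beta)
a = (2.0 * b * (d + 2) - 1.0) / beta

def H(v, i):
    return v[i] * (a + b * sum(c * c for c in v))

def LH(v, i, h=1e-4):
    # Delta H_i and <v, grad H_i> via central differences (H_i is cubic,
    # so the second difference is exact up to round-off)
    lap, v_dot_grad = 0.0, 0.0
    f0 = H(v, i)
    for j in range(d):
        vp, vm = list(v), list(v)
        vp[j] += h
        vm[j] -= h
        fp, fm = H(vp, i), H(vm, i)
        lap += (fp - 2.0 * f0 + fm) / (h * h)
        v_dot_grad += v[j] * (fp - fm) / (2.0 * h)
    return lap - beta * v_dot_grad / (1.0 + sum(c * c for c in v))

rng = random.Random(1)
pts = [[rng.uniform(-2.0, 2.0) for _ in range(d)] for _ in range(20)]
err = max(abs(LH(v, i) - v[i]) for v in pts for i in range(d))
```

In the critical case β = d + 4, |∇H|² ~ |v|⁴ just fails to be µ-integrable, which is the starting point of section 4.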
Notice that Γ_{ij} = 0 for i ≠ j, and that Γ_{ii} = γ does not depend on i; all these properties are easy consequences of symmetries. This proves Theorem 2.3.

4. Anomalous rate of convergence: a critical case. We shall now look at the critical case β - d = 4, for which |∇H| no longer belongs to L²(µ).

4.1. Properties of the truncated quantities H_K and v_K. We gather in a lemma, which we will admit, all the useful facts about H_K and v_K.
Lemma 4.1. For K > 0, define H_K as the truncation of H at level K. We have the following properties:
1. H_K is of class C¹ and belongs to L¹(µ),
2. its second derivatives exist and are continuous for |v| ≠ K,
3. there exists a constant C such that |∇H_K| ≤ C K²,
4. if β - d = 4, the corresponding estimate holds.

4.2. Computation of E_µ((S^K_t)²). In the sequel, we shall sometimes simply write K instead of K(t) to simplify the notation. Since ∇H^i_K ∈ L²(µ) for all i, we may compute the covariance matrix of S^K_t. Note that, if we define, for θ ∈ S^d, s^K_t = Var_µ(<θ, S^K_t>), then s^K_t does not depend on θ. Let us prove the following lemma.
Proof. Though H^i_K is not C², ∂²H^i_K is piecewise continuous, and we may apply the extended Ito formula (sometimes called the Meyer-Ito formula). Since all the S^i have the same distribution, from now on we skip the superscript i.
The key point is that, since µ is reversible, if v_0 is distributed according to µ, then s ↦ v_{t-s} has the same distribution (on the path space up to time t) as s ↦ v_s. We may thus write S^K_t using a backward martingale M̄^K_. with explicit brackets. In particular, we have the following decomposition, known as the Lyons-Zheng decomposition. Another application of the reversibility property is available provided S^K_t and H_K are square integrable. Now, thanks to stationarity, it follows that, if β - d = 4, by (28) there exists κ such that s^K_t behaves like κ t ln K(t), and we may thus choose K(t) accordingly. Notice that, since µ(<∇H^i_K, ∇H^j_K>) = 0 for j ≠ i, the martingales M^{i,K} and M^{j,K} are orthogonal.
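The Lyons-Zheng decomposition invoked above takes the following standard form (sign conventions are ours; recall that v_K = LH_K):

```latex
H_K(v_t) - H_K(v_0) = \int_0^t LH_K(v_s)\,ds + M^K_t ,
\qquad
H_K(v_0) - H_K(v_t) = \int_0^t LH_K(v_s)\,ds + \bar M^K_t ,
```

the second identity coming from time reversal; summing the two lines gives S^K_t = ∫_0^t v_K(v_s) ds = -(1/2)(M^K_t + M̄^K_t).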
Recall that g ∈ L^p(µ) for p < β - d. Actually, all choices of p will give the same rough bounds, so we just take p = 2 and apply the contraction property of the semi-group. Since the (CLT)s are written for mixing sequences, we introduce some notation. For N = [t] and n ∈ N, we define the increments Z_{n,N}. Hence (1/√(t ln t)) S^K_t = S_N + R(t), with S_N = Σ_{n=0}^N Z_{n,N}, and R(t) goes to 0 in L²(µ).
Of course, under P_µ (i.e. starting from equilibrium), the sequence Z_{.,N} is stationary, and since the Z_{j,N}'s are martingale increments, their correlations are equal to 0. This will be a key point in the proof, and explains why we are using these variables instead of directly looking at the increments of S_.. We skip the subscript µ in what follows, when there is no possible confusion.
According to Lemma 4.2, since K(t) = t^ν, we have κ_N := Var_µ(S_N) → 2νκ as N → +∞. Let γ be a standard gaussian r.v.; it is thus enough to show that ∆_N := E(h(S_N)) - E(h(√κ_N γ)) → 0, where h denotes some complex exponential function h(x) = e^{iλx}, λ ∈ R. Now we follow the Lindeberg-Rio method to study the convergence in distribution of S_N to a centered normal distribution with variance 2νκ.
The idea is to decompose ∆ N into the sum of small increments using the hierarchical structure of the triangular array.
Denote, for j ≥ 0, by N_{j,N} some auxiliary centered gaussian random variables. The sequence (N_{j,N})_{1≤j≤N+1, N≥1} is assumed to be independent and independent of the sequence (Z_{j,N}). For 1 ≤ j ≤ N, we set T_{j,N} = Σ_{k=j+1}^{N+1} N_{k,N}; empty sums are, as usual, set equal to 0. In particular, T_{0,N} has the same distribution as √κ_N γ.
We are in a position to use Rio's decomposition. Define the suitable auxiliary functions; using independence (recall the definition of T_{j,N}), one can write ∆_N = Σ_j (∆^{(1)}_{j,N}(h) + ∆^{(2)}_{j,N}(h)).
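As a toy illustration of the swapping scheme (not the paper's setting: here the Z's are i.i.d. Rademacher signs rather than martingale increments of the truncated functional), one can watch the gap between E h(S_N) and E h(γ) close; for Rademacher sums the characteristic function is explicit, so no sampling is needed:

```python
import math

# For S_N = (Z_1 + ... + Z_N)/sqrt(N) with i.i.d. Rademacher Z's and
# h(x) = exp(i*lam*x), E h(S_N) = cos(lam/sqrt(N))**N exactly, while
# E h(gamma) = exp(-lam**2/2) for a standard gaussian gamma.  The gap plays
# the role of Delta_N in this i.i.d. toy case.
def char_gap(N, lam=1.0):
    return abs(math.cos(lam / math.sqrt(N)) ** N - math.exp(-lam * lam / 2.0))

gaps = [char_gap(N) for N in (10, 100, 1000)]
# in this symmetric toy case the total gap decays like O(1/N)
```

The decay mirrors the summation of the per-swap errors |∆_{j,N}| controlled by the moment bounds below.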
• Bound for ∆^{(2)}_{j,N}(h). A Taylor expansion yields the existence of some random variable τ_{j,N} ∈ (0, 1) such that the increment can be expanded to third order. Using independence, we see that the first two terms vanish. In addition, since the third derivative of h is bounded, we get |∆^{(2)}_{j,N}(h)| ≤ C E(|N_{j+1,N}|³); hence, since N_{j+1,N} is gaussian, |∆^{(2)}_{j,N}(h)| ≤ C N^{-3/2}. It follows that Σ_j ∆^{(2)}_{j,N}(h) ≤ C N^{-1/2} goes to zero.

• Bound for ∆^{(1)}_{j,N}(h). Set ∆^{(1)}_{j,N}(h) = E(δ^{(1)}_{j,N}(h)). Then, using the Taylor formula again (with some random τ_{j,N} ∈ (0, 1)), we may expand δ^{(1)}_{j,N}(h) up to the third order term h'''(S_{j-1,N} + τ_{j,N} Z_{j,N}) Z³_{j,N}. We analyze separately the terms in this expression. The first term vanishes thanks to the martingale property of Z. The last term can be bounded as follows: we use K = t^ν, the Burkholder-Davis-Gundy inequality and Jensen's inequality, so that, summing up from j = 0 to j = N, we obtain a term going to 0 if ν ≤ 1/4. It remains to prove that the remaining (second order) term goes to 0. To this end, we split the sum into two parts: j ≤ N' and N' < j ≤ N.
The first part will go to 0 provided N' ≪ N. For j ≥ N', once more, we split the sum by introducing a new parameter k that we will choose later. To control the second term, we may use the mixing property. Before going on, we need the following lemma.
Lemma 4.6. Denote by F_t the filtration generated by v_s for s ≤ t (or, equivalently here, generated by the Brownian motion B_.), and by G_u the σ-field generated by v_s for s ≥ u. If F and G are bounded, non-negative and respectively F_t and G_u measurable for some u > t, then a covariance bound holds in terms of the decay of the semi-group.
Proof. Using first the Markov property, then conditional expectation w.r.t. v_t, and finally stationarity and symmetry, we may write the covariance in terms of functions f and g bounded respectively by ||F||_∞ and ||G||_∞, so that, using the Cauchy-Schwarz inequality and the decay of the semi-group, we get the desired result.
Then, by Lemmas 4.6, 3.4 and 4.1, we get the required bound, since s > j. Hence, choosing j - k = K², i.e. k = j - N^{2ν}, and N' = N^{2ν}, the sum of all these terms for j ≥ N' will go to 0.
The first term can be rewritten, and the first term in the resulting sum, denoted A^{1,1}_j, can in turn be written explicitly.
At fixed j and k, we now decompose A^{1,1}_j into a first part corresponding to velocities v_k and v_j smaller than K, and a second part for velocities greater than K. Note that, to bound the terms involving velocities greater than K, we just used the fact that |∇H_K|² ≤ C K⁴ and that ∫ |v| 1_{|v|≥K} dµ ≤ C K^{-3}. For the velocities smaller than K, we used the fact that E_µ ∫_j^{j+1} |∇H_K(v_s)|² ds ≤ C ln N.
Since we still have to sum up the terms, the constraints on ν are now ν ≤ 3/4 and 1 + 3ν - 3/2 ≤ 0, that is ν ≤ 1/6. Since we also have to assume that ν ≥ 1/6, the value K = t^{1/6} is (up to a slowly varying perturbation, since a priori K does not have to be of the form t^ν) the only possible one.

Gathering all these intermediate bounds, we have obtained the announced result.
Actually, one can generalize Lemma 4.5, replacing h(x) = e^{iλx} defined on R by h(x) = e^{i<λ,x>} defined on R^d. The proof above immediately extends to this situation, replacing the gaussian r.v. by a gaussian random vector with independent entries, and using that the correlations between the S^i_t's vanish. Details are left to the reader. One can also check that the assumptions in Proposition 8.1 of [4] are satisfied, in order to deduce a (MCLT) from the previous (CLT). This allows us to state our main theorem. Indeed, statements (1) and (2) of Theorem 2.4 are direct consequences of Lemma 4.5. Moreover, the last statement of Theorem 2.4 is easily deduced from the previous ones. Indeed, we may apply (2) with t replaced by t/θ(ε) and the normalization √(t' ln t') = (1/ε) √( t ln(t/θ(ε)) / ln(1/ε) ) ∼ √(2t)/ε as ε → 0.
The initial density of x_0 is then given by h_0, and (2) implies the convergence of the distribution of the random vector defined by (7) to C_β ω_β^{-1}(v) (h_0 * ρ_t)(x) dv dx, ρ_t being the density of a centered gaussian random vector with covariance matrix (2κ/3) t Id. In different terms, by writing the convergence in law, we obtain the conclusion of Theorem 2.4.