Propagation of chaos for the Vlasov-Poisson-Fokker-Planck system in 1D

We consider a particle system in 1D, interacting via repulsive or attractive Coulomb forces. We prove the trajectorial propagation of molecular chaos towards a nonlinear SDE associated to the Vlasov-Poisson-Fokker-Planck equation. We obtain a quantitative estimate of convergence in expectation, with an optimal convergence rate of order $N^{-1/2}$. We also prove some exponential concentration inequalities for the associated empirical measures. A key argument is a weak-strong stability estimate on the (nonlinear) VPFP equation, which we are able to adapt, in some sense, to the particle system.

1. Introduction. We consider here a one dimensional system of N particles, with positions $X^N_i \in \mathbb{R}$ and velocities $V^N_i \in \mathbb{R}$, interacting via the Poisson interaction, and submitted to independent Brownian noises and friction. The associated system of Stochastic Differential Equations (SDEs) is the following:
$$ dX^N_{i,t} = V^N_{i,t}\,dt, \qquad dV^N_{i,t} = \frac1N \sum_{j \neq i} K\bigl(X^N_{i,t} - X^N_{j,t}\bigr)\,dt - V^N_{i,t}\,dt + \sqrt{2}\,dB_{i,t}, \qquad (1) $$
where the $(B_{i,t})_{t\ge0}$ are independent Brownian motions. The interaction kernel is defined (everywhere) by
$$ K := \pm \tfrac12 \operatorname{sign}, \qquad \text{with the convention } \operatorname{sign}(0) = 0. \qquad (2) $$
The case $K = \frac12 \operatorname{sign}$ corresponds to the repulsive case, while the interaction is attractive when $K = -\frac12 \operatorname{sign}$. These two cases lead of course to very different dynamics, but concerning the propagation of chaos in finite time the sign of K is not very relevant, so we will handle both cases in the same way.
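As a purely illustrative aside (not from the paper), the system (1) can be simulated with a plain Euler-Maruyama scheme. The function name and parameters below are ours, and the drift/noise normalization (friction $-V$, additive $\sqrt2\,dB$) is our reading of (1); this is a sketch, not the authors' numerics.

```python
import numpy as np

# Euler-Maruyama sketch of the N-particle system (1), with K = +/- (1/2) sign.
# All names here are illustrative; the drift/noise form is our reading of (1).
def simulate(N=200, T=1.0, dt=1e-3, repulsive=True, seed=0):
    rng = np.random.default_rng(seed)
    sgn = 0.5 if repulsive else -0.5      # K = +(1/2) sign or -(1/2) sign
    X = rng.normal(size=N)                # initial positions
    V = rng.normal(size=N)                # initial velocities
    for _ in range(int(T / dt)):
        # pairwise force (1/N) sum_j K(X_i - X_j); the diagonal contributes
        # K(0) = 0, so the plain mean matches the sum over j != i
        F = sgn * np.sign(X[:, None] - X[None, :]).mean(axis=1)
        X = X + V * dt
        V = V + (F - V) * dt + np.sqrt(2 * dt) * rng.normal(size=N)
    return X, V

X, V = simulate()
```

Since $\|K\|_\infty = 1/2$, the force is bounded and the scheme is stable for any reasonable step size.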

MAXIME HAURAY AND SAMIR SALEM
Existence and uniqueness in law for the particle system. We will state a result of existence and uniqueness in law of the solution of the process (1). The proof is a simple application of the Cameron-Martin-Girsanov theorem, applied to an appropriate system equivalent to (1), since the diffusion in that SDE is degenerate. This result in law will be enough for our aim, which is to state some results of propagation of chaos. The strong existence and uniqueness is a more difficult problem, which is to the best of our knowledge not covered by the current literature [7].
Existence, and weak-strong stability of the limit nonlinear SDE. When N is large, and if the initial conditions $(X^N_{i,0}, V^N_{i,0})_{i\le N}$ are independent (or asymptotically independent), then it is expected that the particles behave almost like N independent copies of the following non-linear limit process:
$$ dY_t = W_t\,dt, \qquad dW_t = \mathbb{E}\bigl[K(Y_t - \bar Y_t)\,\big|\,Y_t\bigr]\,dt - W_t\,dt + \sqrt{2}\,dB_t, \qquad (3) $$
where $(B_t)_{t\ge0}$ is a Brownian motion (independent of the rest) and $\bar Y_t$ is an independent copy of $Y_t$. The family of time marginals $f_t = \mathcal{L}(Y_t, W_t)$ is a solution of the 1D Vlasov-Poisson-Fokker-Planck equation
$$ \partial_t f_t + v\,\partial_x f_t + (K \star \rho_t)\,\partial_v f_t = \partial_v\bigl(v f_t\bigr) + \partial^2_{vv} f_t, \qquad (4) $$
where $\rho_t = \int f_t\,dv$ is the law of $Y_t$. We will prove two results on that non-linear limit process or equation, in Theorem 2.3 and Corollary 1:
• The existence of solutions to the process (3) and to the VPFP equation (4), provided that there is a uniform control (exponential or polynomial) on the tail in velocity of the initial distribution;
• A weak-strong stability result, ensuring that when there exists a solution to (3) or (4) with a bounded density in position, i.e. $\rho_t := \mathcal{L}(Y_t)$ bounded for all time $t > 0$, then it is the unique solution in a much larger class: the only condition being that the order one moment in position and velocity remains finite along time.
Quantitative propagation of chaos and concentration inequalities. Weak-strong uniqueness or stability results are very interesting for the propagation of chaos. In fact, a very standard procedure to prove propagation of chaos is to prove that:
• the solutions of the particle systems form a tight sequence (compactness);
• the accumulation points are solutions to the expected limit problem, possibly with some extra a priori estimates;
• uniqueness of the solutions holds, possibly under the extra a priori estimates.
That method goes back at least to [20], and is for instance used in recent works of the first author and collaborators [12,11]. When using the tightness-consistency-uniqueness method, the tightness of the particle system and the consistency are usually simpler to prove than the uniqueness of the limit process, as for instance in [12,11]. Since we have here a very good uniqueness result on the limit problem, that strategy would be quite simple to use.
However, the situation is even better. In fact the proof of the weak-strong stability result on the limit equation can be adapted to get some stability between solutions of the particle system (1) and N independent copies of the limit system (3), and that will allow us to state an (optimal) quantitative version of the propagation of chaos. We will prove two results:
• An estimate of propagation of chaos in the mean, using the Monge-Kantorovich-Wasserstein distance of order one (Theorem 2.5). Our proof relies on the fact that empirical measures associated to solutions of the particle system (1) are solutions to the limit nonlinear SDE (3) plus a remainder which is, roughly, a distribution-valued martingale. This allows us to compare the interacting and non-interacting systems using standard arguments of martingale theory, mainly Doob's maximal inequality, in a quite subtle way.
• An exponential concentration inequality of the solution to the particle system (1) around the (unique) solution of the nonlinear SDE (Theorem 2.6). Our proof relies roughly on a perturbed version of the weak-strong stability result of Theorem 2.3, where the infinite norm of the distribution in position is replaced by some discrete infinite norm (see Definition 2.7). That perturbed version is in fact not stated for the nonlinear SDE, but directly applied in order to control the distance between the interacting particle system and the system of independent copies. A concentration inequality on the discrete infinite norm (in fact on its supremum in time along independent copies of the solution to a given SDE, obtained in Proposition 2) then implies the requested concentration inequalities on the particle system.
Some related works. The literature on the convergence of particle systems towards nonlinear mean-field models is vast, so we will restrict ourselves to one dimensional models. The usual strategy, valid for smooth interactions, is well explained in the lecture notes by Sznitman [19].
In [3] Cépa and Lépingle prove the propagation of chaos for the Dyson model: a one dimensional first order model (i.e. without velocities) with a strongly singular interaction $K(x) \sim |x|^{-1}$, modeling the behavior of eigenvalues of large Hermitian matrices. Their proof relies on the use of maximal monotone operators. Recently, that convergence result was extended to similar systems with even stronger interactions $K(x) \sim |x|^{-1-\alpha}$ with $\alpha \in [0, 1)$, by Berman and Önnheim [1] using Wasserstein gradient flows.
For second order models (involving positions and velocities), our result is to the best of our knowledge one of the first two results of propagation of chaos with the Poisson singularity in the stochastic case. The second one was obtained at the same time by P.-E. Jabin and Z. Wang in [15]. They in fact prove a much more general result: the propagation of chaos for interacting particle systems (with or without independent noise) with bounded interaction force. That result is an average result, stated in relative entropy, with the optimal rate of convergence, but it requires some regularity of the solution of the limit equation.
In the deterministic case, i.e. when the system under study is (1) without the Brownian motions (and the friction), the mean field limit was proved originally by [21], then by [5] as a special case of semi-geostrophic equations, and again by the first author [13]. Remark that the result of [15] also applies to that deterministic case.

2. Main results.
2.1. Well-posedness of the particle system. Since the force field is only of bounded variation and not Lipschitz, the standard theory for SDEs does not apply. Neither does the theory of existence and uniqueness developed for uniformly elliptic diffusions (see for instance [22,6,2] among many others), since the diffusion acts here only on the velocities. We also mention the very recent adaptation [7] of the previous works to the kinetic case, which still requires too much regularity to be applied here ($F \in W^{s,p}$, $s > 2/3$ and $p > 6$). However, due to the particular geometry of the problem, we can still get existence and uniqueness in law. The precise result is the following: weak existence and uniqueness in law hold for SDE (1).
Since (1) is linear, it also implies weak existence and uniqueness for any random initial condition. Theorem 2.1 is proved in Appendix A using the following strategy: we reformulate (1) as an SDE with memory involving the $(V^N_{i,t})_{i\le N, t\ge0}$ only (simply because the $X^N_{i,t}$ are time integrals of the $V^N_{i,t}$). Then we apply to that new (non-Markovian) SDE a standard technique relying on Girsanov's theorem. To the best of our knowledge, the strong existence and uniqueness of solutions to that system are yet unknown, and we were not able to prove them.
2.2. A non-linear SDE related to the Vlasov-Poisson-Fokker-Planck equation. When the number of particles is large, and under the assumption that two particles picked among all the others are roughly independent at any time (in particular this should be true at t = 0), we expect the N particles to behave almost like N i.i.d. copies of the (expectedly unique) solution to the nonlinear SDE, or McKean-Vlasov process (3). We will prove the well-posedness of this nonlinear SDE for initial data with a uniform control on the velocity tails, and even a weak-strong stability estimate (here "strong solution" means that the law $\mathcal{L}(Y_t)$ of $Y_t$ remains uniformly bounded in time). Before stating the precise result, we introduce two useful norms.
Definition 2.2. For any $\lambda > 0$ and any $\gamma > 0$, we define for any $f \in L^\infty(\mathbb{R}^2)$ the two norms $\|f\|_{e,\lambda}$ and $\|f\|_{p,\gamma}$.
Theorem 2.3. (i) Assume that the law $f_0$ of the initial condition $(Y_0, W_0)$ satisfies $f_0 \in \mathcal{P}_1(\mathbb{R}^2) \cap L^1(\mathbb{R}^2)$, and also $\|f_0\|_{e,\lambda} < +\infty$ for some $\lambda > 0$, or $\|f_0\|_{p,\gamma} < +\infty$ for some $\gamma > 1$. Then, given any Brownian motion $(B_t)_{t\ge0}$ (independent of the initial conditions), there exists a solution $(Y_t, W_t)$ to (3) with initial condition $(Y_0, W_0)$, and its law satisfies the estimate (7), where $C_\gamma$ is a constant depending only on $\gamma$.
(ii) Weak/strong stability for solutions with bounded density (in position).
If $(Y^1_t, W^1_t)_{t\ge0}$ and $(Y^2_t, W^2_t)_{t\ge0}$ are two solutions to (3) built on the same probability space with the same Brownian motion $(B_t)_{t\ge0}$, and if the density $\rho^1_t = \mathcal{L}(Y^1_t)$ is uniformly bounded at any time ($\forall t \ge 0$, $\|\rho^1_t\|_\infty < \infty$), then the stability estimate (8) holds.
The proof of the weak-strong stability estimate relies on the crucial Lemma 3.1, which allows us to control the singularity of the force when comparing the evolution of two solutions, assuming that only one of them has a bounded density in position. That lemma was already used in [13] to get similar results for the associated deterministic particle system, i.e. particles interacting via the same kernel (2) but without noise and friction. The proof of the propagation of the norms $\|f\|_{e,\lambda}$ and $\|f\|_{p,\gamma}$ is based on a probabilistic version of the method of characteristics, which is also a simple case of the Feynman-Kac formula. All of this is proved in Section 3. Once the propagation of these norms is known, the proof of the existence of solutions relies on a standard approximation procedure, and is postponed to Appendix C.
2.3. The stability result on the Vlasov-Poisson-Fokker-Planck equation. The stability results on the process (3) simply translate to the associated Fokker-Planck or Kolmogorov forward equation, which is here the Vlasov-Poisson-Fokker-Planck equation (4). As the kernel K is bounded and defined everywhere by (2), remark that very few hypotheses are required to define solutions to (4) in the sense of distributions: if $f \in L^1_{loc}(\mathbb{R}_+, \mathcal{P}(\mathbb{R}^2))$, where $\mathcal{P}(\mathbb{R}^2)$ stands for the space of probability measures, then all the terms appearing in (4) define a distribution. Theorem 2.3 has the following consequence on the VPFP equation:
Corollary 1. (i) Let $f_0 \in \mathcal{P}(\mathbb{R}^2)$ with a finite order one moment $\int (|x| + |v|)\,f_0(dx, dv) < \infty$, satisfying either $\|f_0\|_{e,\lambda} < +\infty$ for some $\lambda > 0$, or $\|f_0\|_{p,\gamma} < +\infty$ for some $\gamma > 1$. Then, there exists a solution $f_t$ to (4) with initial condition $f_0$, and it satisfies (7).
(ii) Weak/strong uniqueness for solutions with bounded density (in position). If $f^1_t$ and $f^2_t$ are two solutions to (4), and if $\rho^1_t = \int f^1_t\,dv$ is uniformly bounded for any time $t \ge 0$, then the corresponding stability estimate holds, where $W_1$ stands for the order one Monge-Kantorovich-Wasserstein distance.
This corollary is proved in Appendix D. It is a direct consequence of Theorem 2.3, and of the fact that a weak solution $f_t$ to the VPFP1D equation (4) can always be represented as the time marginals of a process $(Y_t, W_t)_{t\ge0}$ solution to (3), as was first noticed in [9, Theorem 2.6].
2.4. The quantitative propagation of chaos in the mean. When comparing a solution $(X^N_{i,t}, V^N_{i,t})_{i\le N}$ of the particle system (1) with the limit process (3) and its associated Fokker-Planck equation (4), a very natural strategy (which goes back to McKean [17]) is to introduce N independent copies $(Y^N_{i,t}, W^N_{i,t})_{i\le N}$ of the limit nonlinear SDE (3), constructed with the same Brownian motions as the $(X^N_{i,t}, V^N_{i,t})$, and with initial conditions coupled in an optimal way. In our case, we are able to prove a sharp estimate on the average distance between these two systems.
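A toy version of this synchronous coupling can be sketched as follows (our code and names, not the paper's construction): the interacting system and a comparison system are driven by the same Brownian increments and started from the same data. Since the true nonlinear drift $K \star \rho_t$ has no closed form, the comparison system below crudely freezes a large sample of the initial law as a stand-in for $\rho_t$; this only illustrates the coupling mechanism.

```python
import numpy as np

# Synchronous coupling sketch: shared Brownian increments, shared initial data.
# The "independent copies" drift uses a frozen surrogate sample for rho_t,
# which is a deliberate simplification (hypothetical, for illustration only).
def mean_coupling_distance(N=100, T=0.2, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    K = lambda z: 0.5 * np.sign(z)             # repulsive kernel (2)
    X = rng.normal(size=N); V = rng.normal(size=N)
    Y, W = X.copy(), V.copy()                  # coupled initial conditions
    ref = rng.normal(size=2000)                # frozen surrogate for rho_t
    for _ in range(int(T / dt)):
        dB = np.sqrt(dt) * rng.normal(size=N)  # SAME increments for both systems
        FX = K(X[:, None] - X[None, :]).mean(axis=1)
        FY = K(Y[:, None] - ref[None, :]).mean(axis=1)
        X, V = X + V * dt, V + (FX - V) * dt + np.sqrt(2) * dB
        Y, W = Y + W * dt, W + (FY - W) * dt + np.sqrt(2) * dB
    return float(np.mean(np.abs(X - Y) + np.abs(V - W)))

d = mean_coupling_distance()
```

With identical initial data and noise, the systems only drift apart through the difference of the two drifts, which is exactly the quantity the stability estimates control.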
To state our result properly, we recall that by definition exchangeable random variables have a law that is invariant under permutation, and that chaotic sequences of r.v. are defined as follows.
Definition 2.4. Let f be a probability on $\mathbb{R}^2$. A sequence $\bigl((X^N_i, V^N_i)_{i\le N}\bigr)_{N\in\mathbb{N}}$ of exchangeable random variables is said to be f-chaotic if one of the equivalent conditions below is satisfied. The subscript w on the arrow refers to the weak convergence of measures, the subscript $\mathcal{L}$ to the convergence in law. If Y is a random variable of law f, we will equivalently say that a sequence is f-chaotic or Y-chaotic. We refer to [19] for the equivalence of the three conditions above, and to [14] for a quantitative version of that equivalence.
We also state the following proposition, which reformulates propagation of chaos in terms of coupling. It is a consequence of [14, Theorem 1.2]: if $\bigl((X^N_i, V^N_i)_{i\le N}\bigr)_{N\in\mathbb{N}}$ is a sequence of exchangeable random variables with uniformly bounded order two moments, and f is a probability on $\mathbb{R}^2$, then the f-chaoticity of the sequence is equivalent to the existence of a good coupling with i.i.d. random variables with common law f.
The above proposition allows us to state a precise result of propagation of molecular chaos. We emphasize that this result is a true result of propagation: it applies not only to i.i.d. initial conditions, but to any chaotic initial conditions (with finite second order moments). However, the general case is somewhat more technical, so we warn the reader that the following theorem is simpler to understand if we only consider the case of i.i.d. initial conditions.
Theorem 2.5. Let $f_0 \in \mathcal{P}(\mathbb{R}^2)$ with finite order two moment $\int (|x|^2 + |v|^2)\,f_0(dx, dv) < \infty$, and such that there exists a (necessarily unique by Corollary 1) solution $f_t$ to (4) with initial condition $f_0$ satisfying $\int_0^t \|\rho_s\|_\infty\,ds < +\infty$ for any time $t \ge 0$, where $\rho_s$ stands for the density in position: $\rho_s(x) := \int f_s(x, dv)$. Let $(X^N_{i,0}, V^N_{i,0})_{i\le N}$ be a sequence of $f_0$-chaotic random variables with uniformly bounded order two moments: $\sup_{N\in\mathbb{N}} \mathbb{E}\bigl[|X^N_{1,0}|^2 + |V^N_{1,0}|^2\bigr] < +\infty$. By Theorem 2.1, we may find a probability space together with an N-dimensional Brownian motion and a process solution to (1). More precisely, by standard arguments, we may also construct on that probability space N copies of the nonlinear process (3), driven by the same Brownian motions, with initial conditions of law $f_0$ coupled with the $(X^N_{i,0}, V^N_{i,0})_{i\le N}$ in an exchangeable way. Then an estimate on the average $W_1$ distance between the two systems holds.
Remark 1. Thanks to Corollary 1, the last hypothesis on $f_0$ in Theorem 2.5 is in particular satisfied under the assumptions of Corollary 1(i).
That result is proved in Section 4: as for the stability part of Theorem 2.3, its proof strongly relies on Lemma 3.1 (the "rope argument"), which allows us to write a kind of Grönwall inequality containing a fluctuation term. That fluctuation term is handled by "Poissonization" (replacement of the empirical measure by a very close Poisson random measure), which allows us to use standard results on martingales, in a quite subtle way.
Using standard results on the convergence of empirical measures towards their limit law [10, Theorem 1], we may deduce convergence results between the empirical measure of the particle system and the expected limit profile.
Corollary 2 (of Theorem 2.5). Under the same assumptions as in Theorem 2.5, and assuming moreover that $\int (|x| + |v|)^q f_0(dx, dv) < \infty$ for some $q > 2$, we obtain a convergence estimate valid at any time $t \ge 0$, for some constant $C_t > 0$ depending on t, q and the moments of $f_0$.
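For the reader's convenience, here is a toy illustration (our code, not the paper's) of how the order one MKW distance between two empirical measures on $\mathbb{R}$ can be computed: in one dimension, the optimal coupling of two N-point empirical measures pairs the sorted samples.

```python
import numpy as np

# W1 between two N-point empirical measures on R: sort both samples and
# average the distances between paired order statistics (1D quantile formula).
def w1_empirical(xs, ys):
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    assert len(xs) == len(ys), "equal numbers of atoms are assumed here"
    return float(np.mean(np.abs(xs - ys)))
```

This quantile representation is special to dimension one; in higher dimension $W_1$ requires solving an optimal transport problem.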
2.5. Exponential concentration inequalities for the particle system. In the case where the initial conditions $(X^N_{i,0}, V^N_{i,0})_{i\le N}$ are i.i.d., we also prove concentration inequalities for the solutions of the particle system (1). Precisely:
Theorem 2.6. Let $f_0 \in \mathcal{P}(\mathbb{R}^2)$ with some finite exponential moment: $\int e^{\lambda(|x|+|v|)} f_0(dx, dv) < \infty$ for some $\lambda > 0$, and such that there exists a (necessarily unique by Corollary 1) solution $f_t$ to (4) with initial condition $f_0$ satisfying $\kappa_t := \sup_{s\le t} \|\rho_s\|_\infty < +\infty$ for any time $t \ge 0$, where $\rho_s$ stands for the density in position. By Theorem 2.1, we may find a probability space together with an N-dimensional Brownian motion and a process solution to (1). By standard arguments, we may also construct on that probability space N copies of the nonlinear process. Then a concentration inequality holds for any admissible $\varepsilon$, where the three constants $A_t$, $A'_t$, $B_t$ depend on t, $\lambda$ and the initial law $f_0$ (see the end of Section 6 for more precise values).
The proof of Theorem 2.6 relies on a different technique than that of Theorem 2.5. Here, we rather use another perturbed version of the weak-strong stability estimate (8), applied to the N particle system and involving discrete infinite norms. Together with exponential concentration inequalities on discrete infinite norms of empirical measures, and on some fluctuation terms appearing naturally in the perturbed weak-strong estimate, it leads to the above result.
Using deviation upper bounds for the approximation of a probability measure by the random empirical measures associated to i.i.d. samples, as for instance in [10, Theorem 2], we can also obtain concentration inequalities between the empirical measure associated to the N particle system and the limit profile.
Corollary 3. Under the same assumptions and notations as in Theorem 2.6, for any $T \ge 0$ and any $\alpha > 1$, there exist two constants $C_1$, $C_2$ such that a deviation upper bound holds for any N.
2.6. Exponential concentration for discrete infinite norms. We start by defining the discrete infinite norm of a probability measure.
Definition 2.7. For $\varepsilon > 0$ and a probability measure f on $\mathbb{R}$, we set $\|f\|_{\infty,\varepsilon} := \sup_{a\in\mathbb{R}} \frac{f([a-\varepsilon, a+\varepsilon])}{2\varepsilon}$.

Remark 3. The quantity $2\varepsilon\|f\|_{\infty,\varepsilon}$ is usually referred to as the concentration function of the measure f (a notion due to P. Lévy); see for instance [18, Definition 3.1 on p. 165]. However, we use it for rather different purposes, and prefer to introduce a slightly different definition (with the factor $(2\varepsilon)^{-1}$), in order to emphasize the relation with the infinite norm.
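For an empirical measure, this discrete norm can be computed exactly. The sketch below is ours, and it assumes the reading $\|f\|_{\infty,\varepsilon} = \sup_a f([a-\varepsilon, a+\varepsilon])/(2\varepsilon)$: the supremum over window centers is then attained when the window's left edge sits at one of the atoms, so finitely many positions need to be scanned.

```python
import numpy as np

# Discrete infinite norm sup_a f([a-eps, a+eps]) / (2 eps) of the empirical
# measure f = (1/N) sum_i delta_{y_i}; only windows whose left edge is at an
# atom can realize the supremum, so we scan those positions.
def discrete_inf_norm(ys, eps):
    ys = np.sort(np.asarray(ys, dtype=float))
    n = len(ys)
    best = max(np.searchsorted(ys, y + 2 * eps, side="right") - i
               for i, y in enumerate(ys))      # max atoms in a window of width 2*eps
    return best / (2 * eps * n)
```

As $\varepsilon \to 0$ this quantity converges, for a measure with a continuous density, to the usual infinite norm of that density.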
One of the key ingredients of the proof of Theorem 2.6 is a deviation inequality on the time integral (or supremum) of the discrete infinite norms of the empirical measures $\rho^N_t$ built on N independent copies of the solution to a given SDE. Such a result has an interest by itself, so we state it below.
Proposition 2. Let $(Y^i_t, W^i_t)_{t\ge0}$, $i \le N$, be N independent copies of a solution to (3), and assume that for $t \ge 0$:
• the common initial condition has a law $f_0$ which admits a finite exponential moment (in position and velocity) for some $\lambda > 0$: $\mathbb{E}\bigl[e^{\lambda(|Y_0|+|W_0|)}\bigr] < +\infty$.
In particular we denote $c_\lambda := \frac52 + \frac1\lambda \ln \mathbb{E}\bigl[e^{\lambda|W_0|}\bigr]$;
• $\kappa_t := \sup_{0\le s\le t} \|\rho_s\|_\infty < +\infty$, where $\rho_s$ stands for the time marginal in position of $Y^i$ at time s.
Then, provided that $\lambda N^{-1/2} \le \varepsilon\gamma \le 5\kappa_t \min\bigl(\frac1{16}, \lambda^{-2}\bigr)$, there exists a constant $C_t$ depending on $\kappa_t$, $\lambda$ and the moments of $f_0$ such that a deviation upper bound holds. Essentially, the bound behaves like $N^{3/2} e^{-2N(\varepsilon\gamma)^2}$ in the most interesting range, when $\varepsilon\gamma \sim N^{-1/2}$.
Plan of the paper. The paper is organized in the following way. In Section 3, we focus on the nonlinear limit SDE and prove the uniqueness part of Theorem 2.3 and the propagation of the norms defined in Definition 2.2. The remaining part of the proof of Theorem 2.3 and the proof of Corollary 1 are postponed respectively to Appendices C and D. Section 4 is devoted to the proof of Theorem 2.5 on the propagation of chaos in the mean, Section 5 to the proof of Proposition 2, and Section 6 to the proof of Theorem 2.6 on the exponential concentration. We prove Theorem 2.1 (weak existence and uniqueness for the particle system) in Appendix A. A useful proposition about propagation of moments (Proposition 3) is proved in Appendix B. A useful regularity lemma is proved in the last Appendix E.
3. Proof of the uniqueness part of Theorem 2.3.

3.1. Weak-strong stability and uniqueness.
A key bound. Here we will use an argument that we call the "rope argument", which has already been used in [13] to treat the propagation of chaos for the deterministic VP1D equation. We choose this name in order to illustrate the argument (which relies on the triangle inequality in a simple way): if we imagine that two coupled particles are linked by a rope, then their relative displacement is limited. The argument consists in noticing that $K(x - \bar x) = K(y - \bar y)$ as soon as $|y - \bar y| > |x - y| + |\bar x - \bar y|$: in fact, in that case $x - \bar x$ and $y - \bar y$ necessarily have the same sign. It is also interesting to replace the latter condition by the stronger one $|y - \bar y| > 2\max(|x - y|, |\bar x - \bar y|)$. Since K is bounded by 1/2, this implies the following bound, which we will use many times in the sequel:
$$ \bigl|K(x - \bar x) - K(y - \bar y)\bigr| \le \mathbf 1_{\{|y - \bar y| \le |x - y| + |\bar x - \bar y|\}}. \qquad (10) $$
That bound is one-sided: the indicator function in the r.h.s. takes only $y - \bar y$ as argument, and this allows us to prove a simple but important lemma.
Lemma 3.1. Assume that (X, Y) is a random couple of real numbers and that $(\bar X, \bar Y)$ is an independent copy of that couple.
(i) Then a first estimate holds, involving $\rho_X$ and $\rho_Y$, respectively the densities (with respect to the Lebesgue measure) of the laws of X and Y.
(ii) For $\varepsilon > 0$, we also have a similar estimate involving the discrete infinite norms $\|\cdot\|_{\infty,\varepsilon}$ defined in Definition 2.7.
(iii) In the case where (X, Y) and $(\bar X, \bar Y)$ are still independent but with possibly different distributions, we get a more general estimate, for $\varepsilon, \bar\varepsilon \ge 0$ (in the case $\varepsilon = 0$, $\|\cdot\|_{\infty,0}$ stands for the usual infinite norm).
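The one-sided bound (10) is elementary but easy to get wrong, so a numerical sanity check may be of use. The code below is ours, and the displayed inequality it tests is our reading of (10), for $K = \frac12\operatorname{sign}$: no random quadruple should violate it.

```python
import numpy as np

# Sanity check of the "rope argument" bound (our reading of (10)):
#   |K(x - xb) - K(y - yb)| <= 1_{ |y - yb| <= |x - y| + |xb - yb| }
# for K = (1/2) sign; random quadruples should never violate it.
rng = np.random.default_rng(0)
K = lambda z: 0.5 * np.sign(z)
x, xb, y, yb = rng.normal(size=(4, 100000))
lhs = np.abs(K(x - xb) - K(y - yb))
rhs = (np.abs(y - yb) <= np.abs(x - y) + np.abs(xb - yb)).astype(float)
violations = int(np.sum(lhs > rhs))
```

When the indicator vanishes, the triangle inequality forces $x - \bar x$ and $y - \bar y$ to share their sign, so the left-hand side is exactly zero there.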

Remark 4. Contrary to the result obtained for the Coulomb kernel in dimension larger than 2 (see [16]), the point (i) involves the $L^\infty$ norm of only one of the laws of the random variables (X, Y). This is a key point in our work, and it enables us to provide a weak-strong stability estimate.
We were not able to obtain a similar estimate in $W_p$ distance with $p \in (1, \infty)$: in that case, we would need to bound similar quantities involving higher powers, and unfortunately, we did not manage to do it with the help of the "rope argument". But it may be only a technical issue, since a similar estimate can also be obtained for the $W_\infty$ distance. We refer to [4] for the use of that distance for a related system.
Proof. We first prove (i). Starting from (10), we may bound the expectation of the left-hand side. In the second line, we use that the expectation remains unchanged if we permute (X, Y) with $(\bar X, \bar Y)$. In the third, we use that the law of $\bar Y$ has density $\rho_Y$. If we apply the above bound to the couples (Y, X) and $(\bar Y, \bar X)$, we obtain a similar result, with $\|\rho_X\|_\infty$ in place of $\|\rho_Y\|_\infty$. To obtain (ii), remark that to use the discrete infinite norm on any interval $[a, b] \subset \mathbb{R}$, we need to cover [a, b] by a union of small intervals of length $2\varepsilon$; at most (the integer part of) $|b - a|/(2\varepsilon) + 1$ such intervals are required. For the third point, we cannot use the permutation $(X, Y) \leftrightarrow (\bar X, \bar Y)$ in the previous calculation, and have to estimate the two terms separately. The required adaptations are straightforward.
A simple Grönwall lemma. That bound allows us to prove the weak-strong stability part of Theorem 2.3. We introduce $(X_t, V_t)_{t\in\mathbb{R}_+}$ and $(Y_t, W_t)_{t\in\mathbb{R}_+}$, two solutions of the non-linear SDE (3) constructed with the same Brownian motion $(B_t)_{t\ge0}$, and also $(\bar X_t, \bar V_t, \bar Y_t, \bar W_t)_{t\in\mathbb{R}_+}$, an independent copy of the previous coupled processes. We also assume that $\rho_t := \mathcal{L}(X_t)$ has a bounded density for all $t \ge 0$. Then $(X_t - Y_t, V_t - W_t)_{t\ge0}$ solves an ODE system (without noise), which naturally leads to a differential inequality. If we take the expectation in that system, apply point (i) of Lemma 3.1 to the couples $(Y_t, X_t)$ and $(\bar Y_t, \bar X_t)$, sum up the two inequalities and apply Grönwall's lemma, we obtain the requested estimate.
Propagation of moments. We state here a useful proposition about propagation of moments. We emphasize that it applies to any kernel K satisfying $\|K\|_\infty \le \frac12$.
Proposition 3. Let $(Y_t, W_t)_{t\ge0}$ be a weak solution to (3), with given (random) initial condition $(Y_0, W_0)$, and with an interaction kernel K which is not necessarily given by (2) but satisfies $\|K\|_\infty \le \frac12$. If $(Y_0, W_0)$ has an exponential moment of order $\lambda > 0$, then exponential moment bounds hold for $t \ge 0$, and also for any $0 \le s < t \le s + \min\bigl(\frac1{16}, \lambda^{-2}\bigr)$. Lastly, simpler estimates on the order one moments also hold, for any $0 \le s < t \le s + \frac14$. The proof is performed in Appendix B.
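Schematically, and with a generic constant C that we do not track (our notation, not the paper's sharp constants), the Grönwall step above can be summarized as:

```latex
% \varphi(t) := \mathbb{E}|X_t - Y_t| + \mathbb{E}|V_t - W_t| satisfies,
% by Lemma 3.1(i), a closed differential inequality:
\varphi'(t) \;\le\; C \bigl( 1 + \|\rho_t\|_\infty \bigr)\, \varphi(t),
\qquad \text{whence} \qquad
\varphi(t) \;\le\; \varphi(0)\,
  \exp\!\Bigl( C \int_0^t \bigl( 1 + \|\rho_s\|_\infty \bigr)\, ds \Bigr),
```

so that two solutions starting from the same data coincide as long as $\int_0^t \|\rho_s\|_\infty\,ds$ remains finite.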

3.2. Propagation of uniform control on the velocity tails via Feynman-Kac estimates. In this section, we prove the propagation of the norms $\|f\|_{e,\lambda}$ and $\|f\|_{p,\gamma}$ defined in Definition 2.2, along solutions of the non-linear SDE (3). To be fully rigorous, we will first prove that result of propagation on a mollified version of (3). This is a key point in the proof of existence of solutions to the initial SDE, and the estimates pass to the limit and also apply to solutions of the non-mollified problem.
Since the kernel $K_\eta$ is globally Lipschitz, [19, Thm 1.1] implies the strong existence and uniqueness of the process $(Y^\eta_t, W^\eta_t)_{t\ge0}$ solving (12). And by an application of Itô's rule, the family of time marginals $(\mu^\eta_t)_{t\ge0}$ of that process is a weak solution of the regularized Vlasov-Poisson-Fokker-Planck equation (14), with the initial condition $\mu^\eta_0 = \mathcal{L}(Y_0, W_0) = f_0 \star \chi_\eta$. We begin by proving some $\eta$-independent estimates on $\mu^\eta_t$, for $t \ge 0$.
Lemma 3.2. Assume that the law $f_0 \in \mathcal{P}_1 \cap L^1(\mathbb{R}^2)$ of the initial condition of equation (3) satisfies either $\|f_0\|_{e,\lambda} < \infty$ for some $\lambda > 0$, or $\|f_0\|_{p,\gamma} < \infty$ for some $\gamma > 1$. Then for all $t > 0$, the unique (measure) solution to the smoothed VPFP equation (14) with initial condition $\mu^\eta_0$ satisfies the corresponding bounds, where $C_\gamma$ is a constant depending explicitly on $\gamma$.
In particular, the associated spatial density $\rho^\eta_t := \int_{\mathbb{R}} \mu^\eta_t(x, v)\,dv$ satisfies the corresponding bounds.
Proof.
Step 1. Regularization and Feynman-Kac formula. Fix $t \ge 0$ and consider the "backward" SDE (17). First note that $K_\eta[\mu^\eta]$ is uniformly Lipschitz in position on $\mathbb{R}_+ \times \mathbb{R}^2$, so strong existence and uniqueness of solutions to the (linear) SDE (17) are guaranteed by standard results. We then set $\theta_s := \mu^\eta_{t-s}(Y^{x,v}_s, W^{x,v}_s)$. Moreover, the initial condition $\mu^\eta_0 = f_0 \star \chi_\eta$ fulfills the hypotheses of Proposition 4 of the Appendix: $\partial^k_x \partial^l_v \mu^\eta_0 \in L^2(\mathbb{R}^2)$ for any $k, l \ge 0$. This implies that $\mu^\eta_t(x, v)$ possesses one continuous derivative in time, and two (continuous) derivatives in position and velocity. So, we may apply Itô's rule to $\theta$: since $\mu^\eta$ is a strong solution of (14), the bounded variation terms cancel and only a stochastic integral in $dB_s$ remains, for any $0 \le s \le s' \le t$. In particular, $(\theta_s)_{0\le s\le t}$ is a martingale, which yields the representation formula (18).
Step 2. The case of uniform exponential tails.
In this case, since $\chi$ has support in $[-1, 1]$, the mollified initial condition keeps exponential tails. Moreover, the definition (17) of $W^\eta$ also gives a representation of $W^{x,v}_s$ for $0 \le s \le t$. Remark that $M_s$ is in fact a centered Gaussian variable with variance $e^{2s} - 1$. Since $\|K_\eta[\mu^\eta_u]\|_\infty \le 1/2$, this leads to a suitable lower bound. Using all of this in the representation formula (18), and applying the bound (33) (in Appendix B) to $M_t$, leads to a bound that is uniform in $\eta$ (for $\eta$ small). The conclusion follows by replacing $\lambda$ by $\lambda e^{-t}$ in that bound, and the estimate on $\|\rho^\eta_t\|_\infty$ follows by integration in v.
Step 3. The case of uniform polynomial tails.

4. Proof of Theorem 2.5. We start with a useful lemma.
Proof. We choose a sequence $(X_n)_{n\in\mathbb{N}}$ of i.i.d. random variables of law $\rho \in \mathcal{P}(\mathbb{R}^d)$, and L some Poisson random variable of parameter N independent of the $(X_n)_{n\in\mathbb{N}}$. We define two point processes accordingly, and set $M^{N,a}_u := \int \mathbf 1_{\{|a-y|\le u\}}\,\bigl(M^N - N\rho\bigr)(dy)$.
Since $M^N$ is a PRM, $(M^{N,a}_u)_{u\ge0}$ is a martingale with respect to the filtration $(\mathcal{F}^a_u)_{u\ge0} = \sigma\bigl(M^N \mathbf 1_{B(a,u)}\bigr)_{u\ge0}$, where B(a, u) is the ball of center a and radius u in $\mathbb{R}^d$. So using Doob's inequality and the martingale structure of $M^{N,a}$, and then taking the expectation in (22), we conclude the proof. We are now in position to prove Theorem 2.5, in two steps.
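The Poissonization device used here can be illustrated as follows (our code and names): the empirical measure of N i.i.d. points is replaced by a Poisson random measure of intensity $N\rho$, obtained by drawing a Poisson(N) number of i.i.d. atoms, after which counts on disjoint sets become independent Poisson variables.

```python
import numpy as np

# Poissonization sketch: M^N = sum_{n <= L} delta_{X_n} with L ~ Poisson(N)
# and X_n i.i.d. of law rho; M^N is then a Poisson random measure with
# intensity N * rho.  (Names are ours, for illustration only.)
def poisson_random_measure(N, sampler, rng):
    L = rng.poisson(N)        # Poissonized number of atoms
    return sampler(L, rng)    # atoms, i.i.d. with law rho

rng = np.random.default_rng(0)
atoms = poisson_random_measure(1000, lambda n, r: r.uniform(0, 1, size=n), rng)
# count of atoms in [0, 1/2): a Poisson(N/2) variable
count_left = int(np.sum(atoms < 0.5))
```

The price of the replacement is small: the total number of atoms L concentrates around N at scale $\sqrt N$, which is compatible with the targeted $N^{-1/2}$ rate.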
Step 1. A Grönwall-type inequality. We begin with a calculation valid for any fixed realization of these processes (i.e. given any initial conditions and Brownian paths), written for each $i = 1, \cdots, N$. We sum these inequalities over $i = 1, \cdots, N$, divide by $N \ge 2$, and then, using the bound (10), we obtain an inequality involving the quantity $\beta(t)$ and the empirical measures $\rho^{i,N}_s$.
An application of Grönwall's lemma, followed by taking the expectation and using the symmetry of the laws of the $(X^N_i, V^N_i)$ and $(Y^N_i, W^N_i)$, leads to an estimate whose stochastic terms in the r.h.s. we will bound with the help of Lemma 4.1.
Step 2. Control of the stochastic terms.
We recall that $\rho^{1,N}$ is built on the copies $(Y^N_{j,t})_{j\ge2}$ only, and is then independent of $Y^N_{1,t}$. By the definition (24) of $\Gamma$ and Lemma 4.1, we get a first bound. Moreover, using again the fact that the $(Y^N_{i,t})_{1\le i\le N}$ are i.i.d. and that $\|K\|_\infty \le 1/2$, we find a second bound for $N \ge 2$. Plugging these bounds into equation (25) leads to the announced estimate and concludes the proof.

5. Proof of Proposition 2.
We state separately the deviation upper bounds for $\sup_{t\in[0,T]} \|\mu^N_{Y,t}\|_{\infty,\varepsilon}$, because such a result has an interest by itself, and because we will apply it many times in the sequel. In the present proof, we apply the following strategy: we first prove concentration inequalities for the quantity $\|\mu^N_{Y,t}\|_{\infty,\varepsilon}$ at a fixed time t, then for the variation of this quantity on small time intervals, and finally conclude by mixing both estimates in an optimal way. That proof could be extended to dimension larger than 1, but we prefer to restrict ourselves here to the case of interest.

5.1. Concentration inequalities at fixed time.
We recall a special case of Hoeffding's inequality, a deviation upper bound for binomial variables.
Lemma 5.1. Let B be a binomial variable of parameters (N, p). Then for any $\alpha > 0$, $\mathbb{P}\bigl(B \ge Np + N\alpha\bigr) \le e^{-2N\alpha^2}$.
Thanks to Lemma 5.1 we are able to give some concentration inequalities for the associated empirical measure.
Lemma 5.2. Let $\alpha, \varepsilon > 0$. Assume that $(Y_1, \ldots, Y_N)$ are N independent random variables, all with law $\rho \in L^\infty$ (we identify the law and its density). Assume also that $\rho$ has an exponential moment of order $\lambda > 0$: $M_\lambda(\rho) := \int e^{\lambda|y|} \rho(dy) < +\infty$. We denote by $\rho^N := \frac1N \sum_i \delta_{Y_i}$ the associated empirical measure. Then for any $\varepsilon, \alpha > 0$, a deviation upper bound holds.
Proof. Step 1. A first bound valid on a compact subset. For any $0 < \delta < \varepsilon$ we set $k := \lfloor \frac{R}{2\delta} \rfloor + 1$. It is clear that for all $x \in [-R, R]$, there exists $\ell \in \{-k, \cdots, k\}$ such that $B(x, \varepsilon) \subset B(2\ell\delta, \varepsilon + \delta)$. Since for any $\ell$, $N\rho^N\bigl(B(2\ell\delta, \varepsilon + \delta)\bigr)$ is a binomial variable of parameters N and $p = \int_{B(2\ell\delta, \varepsilon+\delta)} \rho(dx) \le 2(\varepsilon + \delta)\|\rho\|_\infty$, we may apply Lemma 5.1 and bound each term in the r.h.s. by $\exp\bigl(-8N(\alpha\varepsilon - \delta\|\rho\|_\infty)^2\bigr)$. By the definition of k, and provided that $\varepsilon\alpha \ge \delta\|\rho\|_\infty$, this leads to a first bound. We now have to choose $\delta$ in order to minimize the right hand side. The particular choice $\delta\|\rho\|_\infty = (\varepsilon\alpha)/2$ satisfies the previous restriction and already provides an interesting bound.
Step 2. Extension to the whole space. We use (26) to bound the term coming from the compact $[-R, R]$, and a simple application of Chebyshev's inequality to bound the remaining term. We choose $R = \frac2\lambda N(\alpha\varepsilon)^2$, and the result is proved.
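As a quick empirical check of the Hoeffding bound (in the standard form $\mathbb{P}(B \ge Np + N\alpha) \le e^{-2N\alpha^2}$, which is our reading of Lemma 5.1), one can compare the simulated tail of a binomial variable with the bound:

```python
import numpy as np

# Empirical check of Hoeffding's bound: P(B/N >= p + a) <= exp(-2 N a^2)
# for B ~ Binomial(N, p); the simulated tail should sit below the bound.
rng = np.random.default_rng(0)
N, p, a, trials = 200, 0.3, 0.1, 20000
B = rng.binomial(N, p, size=trials)
empirical_tail = float(np.mean(B / N >= p + a))
hoeffding_bound = float(np.exp(-2 * N * a**2))
```

The bound is distribution-free in p, which is what makes it convenient for covering arguments such as the one above.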

5.2. Conclusion of the proof of Proposition 2.
We begin with the following corollary of Proposition 3, which will be useful in the sequel.
Corollary 4. Let $(Y_t, W_t)_{t\ge0}$ be a solution of (3) for some initial condition $(Y_0, W_0)$ of law $f_0$, having an exponential moment of order $\lambda > 0$. We define $c_\lambda := \frac52 + \frac1\lambda \ln \mathbb{E}\bigl[e^{\lambda|W_0|}\bigr]$ and denote by $f_t$ the law of $(Y_t, W_t)$. For $0 \le s < t \le s + \min\bigl(\frac1{16}, \lambda^{-2}\bigr)$ and $\beta > 0$, a deviation bound holds; and if the $(Y^N_i, W^N_i)$ are N independent copies of the previous process, a similar bound holds with the same notation.
Proof. Using point (iii) of Proposition 3, Chebyshev's inequality and the definition of $c_\lambda$, we get a bound for any $0 < \lambda' \le \lambda$. Optimization in $\lambda'$ leads to the particular choice $\lambda' = \beta$ when $\beta \le \lambda$, and to the choice $\lambda' = \lambda$ otherwise. If we use $\beta - \frac\lambda2 \ge \frac\beta2$ in the latter case, we obtain the expected bound.
The proof of the second bound, involving $N$ independent copies, follows the same lines: by independence the exponential moments factorize, and the conclusion follows with the same optimization on $\lambda'$.
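The optimization step in $\lambda'$ can be checked on a simplified model exponent. The sketch below minimizes $-\lambda'\beta + \lambda'^2/2$ over $(0, \lambda]$ numerically (this quadratic exponent is a stand-in for the actual expression, assumed only for illustration) and compares with the closed-form choice $\lambda' = \min(\beta, \lambda)$.

```python
import math

def chernoff_exponent(beta, lam, n_grid=100_000):
    """Numerically minimize f(l) = -l*beta + l**2/2 over 0 < l <= lam."""
    return min(-l * beta + l * l / 2
               for l in (lam * (k + 1) / n_grid for k in range(n_grid)))

def closed_form(beta, lam):
    # Interior optimum l = beta when beta <= lam, boundary l = lam otherwise.
    l = beta if beta <= lam else lam
    return -l * beta + l * l / 2

for beta, lam in [(0.3, 1.0), (2.0, 1.0), (1.0, 1.0)]:
    assert abs(chernoff_exponent(beta, lam) - closed_form(beta, lam)) < 1e-6
    # In the boundary case, beta - lam/2 >= beta/2 gives an exponent
    # of at most -lam * beta / 2, as used in the proof.
    if beta > lam:
        assert closed_form(beta, lam) <= -lam * beta / 2

print("optimization check passed")
```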
We are now in position to prove Proposition 2. We fix $\gamma > 0$, define $\alpha := \frac{\gamma}{2}$, and recall from Corollary 4 the notation $c_\lambda = \frac52 + \frac1\lambda \ln \mathbb{E}\bigl[e^{\lambda|W_0|}\bigr]$, together with $\kappa_t := \sup_{0 \le s \le t} \|\rho_s\|_\infty$. We then define $\beta$ and $\Delta t$. Remark that $\beta$ satisfies $\frac{\beta}{2} \min(\beta, \lambda) = 2N(\varepsilon\alpha)^2$, and that $\Delta t$ is always smaller than $\min\bigl(\frac{1}{16}, \lambda^{-2}\bigr)$ by the assumptions in Proposition 2, so that we may apply Corollary 4. We then choose $K = \lfloor \frac{t}{\Delta t} \rfloor + 1$, define $t_k = k \Delta t$ for all $0 \le k \le K$ (remark that $t_K \ge t$), and introduce the two following events $\Omega_1$ and $\Omega_2$:

If the events $\Omega_1^c$ and $\Omega_2^c$ are realized, then for any $0 \le s \le t$ we choose $k$ such that $s \in [t_k, t_{k+1})$ and get:

The last equalities follow from the definitions of $\alpha$ and $\Delta t$. It means that if $\Omega_1^c$ and $\Omega_2^c$ are realized, then $\sup_{s \le t} \|\rho_s^N\|_{\infty,\varepsilon} \le \kappa_t + \gamma$.

Next, we can bound $\mathbb{P}(\Omega_1)$ and $\mathbb{P}(\Omega_2)$ with the help of Lemma 5.2, Corollary 4, and point (ii) of Proposition 3 (on the control of the exponential moments of $\rho_t$); it leads precisely to:

Using that $\varepsilon\gamma \le \frac{\kappa_t}{2}$ by assumption, together with the expression of $\beta$, leads to:

The conclusion follows using that $\lambda N^{-1/2} \le \varepsilon\gamma$ by assumption.
6.1. MKW estimates on deviation between particle and coupled systems. The first step of the proof is similar to the first step of the proof of Theorem 2.5. Precisely, we start with equation (23), which now reads more simply since the initial conditions are equal. Remark that if we introduce $\sigma, \tau$, two independent random variables with uniform law on $\{1, 2, \ldots, N\}$, then the sum involving $K$ becomes an expectation, where we emphasize that the expectation is taken only with respect to $(\sigma, \tau)$. So we are in position to apply point (iii) of Lemma 3.1, i.e. the part involving discrete uniform norms, and get:

Applying finally Gronwall's lemma on the interval $[0, t]$, where the quantity $\sup_{u \le t} \Lambda_u^N$ is a constant, we get:

We now focus on finding some concentration inequalities for the random variable $\sup_{t \in [0,T]} \Lambda_t^N$. In order to prove concentration inequalities for this supremum in time, we follow the same steps as in the proof of Proposition 2. Once this is done, we will combine them with the concentration inequalities on $\sup_{t \in [0,T]} \|\rho_t^N\|_{\infty,\varepsilon}$ given by Proposition 2, and we will obtain some deviation upper bounds.

6.2. Estimation of the fluctuation terms $\Lambda_t^N$ at fixed time $t$. We first establish a deviation bound at a fixed time. Let $(Y_i^N)_{1 \le i \le N}$ be $N$ independent random variables with a diffuse common law (i.e. a law that does not charge any atom). Define the random variable $\Lambda^N$ as follows:

Then for all $\alpha > 0$:

Proof.
Step 1. A calculation with $Y_1^N$ frozen. To begin, we "freeze" $Y_1^N = a$ and define $\Lambda_1^N(a)$ accordingly. By the definition (2) of $K$, and since $\mathbb{P}(Y_j^N = a) = 0$ by assumption, the random variables $\zeta_j^a$ defined from $K(a - Y_j^N)$ are i.i.d. Bernoulli variables. So, by an application of Hoeffding's inequality (Lemma 5.1) to the Binomial variable $\sum_{j \ge 2} \zeta_j^a$:

Step 2. Summing up. Using the notation introduced in the previous step, we can rewrite the deviation probability of $\Lambda^N$, where we have used the fact that the variables $(Y_i^N)_{1 \le i \le N}$ are exchangeable. But by independence of $Y_1^N$ and $(Y_i^N)_{i \ge 2}$, we obtain from the previous step that it is bounded by $2\, e^{-2(N-1)\alpha^2}$, and the conclusion follows.
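Step 1 can be illustrated numerically: with $Y_1^N$ frozen at $a$, the empirical mean of $K(a - Y_j^N)$ concentrates around its expectation at the Hoeffding rate. The Gaussian sampling law and the parameters below are illustrative assumptions, not those of the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def K(x):
    # Interaction kernel K = (1/2) sign (repulsive case).
    return 0.5 * np.sign(x)

N, trials, alpha = 500, 2000, 0.15
# Diffuse common law: standard Gaussian, so F(a) is the Gaussian cdf.
a = 0.3
F_a = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
mean_K = F_a - 0.5          # E[K(a - Y)] = P(Y < a) - 1/2

count = 0
for _ in range(trials):
    ys = rng.standard_normal(N - 1)
    lam1 = np.mean(K(a - ys)) - mean_K   # fluctuation with Y_1 frozen at a
    if abs(lam1) >= alpha:
        count += 1

# Hoeffding bound of Step 1: P(|Lambda_1(a)| >= alpha) <= 2 exp(-2(N-1) alpha^2).
bound = 2.0 * np.exp(-2 * (N - 1) * alpha ** 2)
print(count / trials, bound)
```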
6.3. Estimation on the supremum in time of the fluctuation terms.
Indeed, by the definition (28) of $\Lambda_s^N$:

Using the second point of Lemma 3.1 with two copies of a vector of suitable joint law, we may bound the first term in the r.h.s. To estimate the second term in the r.h.s., we use the third point of Lemma 3.1, applied to independent couples. Putting these two estimates together, using point (iv) of Proposition 3 in order to bound $\mathbb{E}|Y_t - Y_s|$, and taking the supremum in time, leads to (29).

Defining also
$$A_t := \lambda^{-1} D_t \bigl(1 + (2\kappa_t + 1)c_\lambda\bigr) + 5 + 4\kappa_t + e^{\lambda t \left(\frac12 + \lambda\right)}\, \mathbb{E}\bigl[e^{\lambda(|Y_0| + |W_0|)}\bigr],$$
we obtain from the previous bound that $\mathbb{P}(\Omega_2^c)$ is controlled by a term of order $e^{-2N\varepsilon^2}$, and that concludes the proof. In particular, it can be checked that $A_t \le A_t'$, where $A_t'$ is the constant appearing in the statement.

Appendix A. Proof of Theorem 2.1.
Step 1. A related SDE with memory. First note that a weak solution to equation (1) is a stochastic basis, together with an $N$-dimensional Brownian motion $(B_t^N)_{t \ge 0}$ on it and an $\mathbb{R}^{2N}$-valued process $(X_t^N, V_t^N)_{t \ge 0}$ satisfying, for all $t \ge 0$, the integrated form of the system, where we denoted by $K^N(x_1, \ldots, x_N)$ the vector-valued field whose $i$-th component is the interaction force acting on the $i$-th particle. But from that system, we may write an SDE "with memory" involving $V^N$ only:

Conversely, given a solution to the SDE (30), it is not difficult to construct a solution to the original system. So it will be enough to prove weak existence and uniqueness in law for the delayed SDE (30).
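Though the proof proceeds by Girsanov's theorem rather than by discretization, the particle system (1) can also be simulated directly. Below is a minimal Euler-Maruyama sketch; the unit friction coefficient and the $\sqrt{2}$ diffusion are assumptions consistent with the Ornstein-Uhlenbeck computations of Appendix B, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def K(x):
    # K = +(1/2) sign in the repulsive case, -(1/2) sign in the attractive one.
    return 0.5 * np.sign(x)

def simulate(N=200, T=1.0, dt=1e-3):
    """Euler-Maruyama sketch of the particle system (1), assumed of the form
       dX_i = V_i dt,
       dV_i = (1/N) sum_j K(X_i - X_j) dt - V_i dt + sqrt(2) dB_i.
    The exact drift normalization is an assumption made for illustration."""
    X = rng.standard_normal(N)
    V = rng.standard_normal(N)
    for _ in range(int(T / dt)):
        # Mean pairwise interaction force on each particle
        # (sign(0) = 0, so there is no self-interaction).
        force = K(X[:, None] - X[None, :]).mean(axis=1)
        X = X + V * dt
        V = V + (force - V) * dt + np.sqrt(2 * dt) * rng.standard_normal(N)
    return X, V

X, V = simulate()
print(X.mean(), V.mean())
```

Since the velocities start at the stationary Ornstein-Uhlenbeck variance, their empirical variance should stay of order one over the simulated horizon.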
Step 2. Weak existence. We follow a standard strategy for SDEs with constant diffusion, which relies on the Cameron-Martin-Girsanov (CMG) theorem. Roughly speaking, this theorem ensures that adding a sufficiently regular drift to a diffusion does not change the shape (or regularity) of the trajectories, which still look like those of the Brownian motion: the support of the law of the Brownian motion and that of the trajectories of the drift-diffusion are the same, with relative density given by the CMG theorem. With that in mind, Girsanov's strategy to construct a solution to an SDE is to start from a Brownian motion, to revert the SDE and define a new process, and finally to change the law of the trajectories so that the new process becomes a Brownian motion, and the initial one a solution of the required SDE.
Precisely, in our case, we let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$ be a stochastic basis and $(B_t^N)_{t \ge 0}$ an $N$-dimensional Brownian motion on it, and define the processes $(U_t^N, V_t^N)_{t \ge 0}$. The above definition of these two processes implies that for any $t \ge 0$ they satisfy exactly (30), with $B_t^N$ replaced by $U_t^N$. So it remains to apply the Cameron-Martin-Girsanov (CMG) theorem: with an appropriate change of the reference probability measure, $(U_t^N)_{t \ge 0}$ can be considered as an $N$-dimensional Brownian motion. For this, remark that the drift is $\mathcal{F}_t$-adapted and progressively measurable, and that the relevant exponential moment is finite for $0 < \gamma < \frac{1}{6t}$. Therefore we deduce from classical results about exponential martingales that the process $Z_t^N$ (defined with $\cdot$ standing for the scalar product) is a martingale, and due to the CMG theorem $(U_t^N)_{t \ge 0}$ is an $N$-dimensional Brownian motion under the probability $\mathbb{Q}$ defined for any $A \in \mathcal{F}_t$ by $\mathbb{Q}(A) = \int_A Z_t^N \, d\mathbb{P}$.

Moreover, by (30), $V_t^N$ also satisfies an equation in which $M_t^N$ is a Gaussian r.v. with law $\mathcal{N}\bigl(0, (1 - e^{-2t})\,\mathrm{Id}\bigr)$. It follows that $\mathbb{E}\bigl[e^{\gamma |H_t^N|^2}\bigr] < \infty$ for $t \ge 0$ and $\gamma < 1/2$. This implies that the process $\tilde{Z}_t^N$ is a martingale, and by the Cameron-Martin-Girsanov theorem, $\bigl(2^{-1/2}(V_t^N - V_0^N)\bigr)_{t \ge 0}$ is an $N$-dimensional Brownian motion on the filtered space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \tilde{\mathbb{Q}})$, where $\tilde{\mathbb{Q}}$ is defined for any $A \in \mathcal{F}_t$ by $\tilde{\mathbb{Q}}(A) = \int_A \tilde{Z}_t^N \, d\mathbb{P}$.

Now, for any "cylindrical" function $\varphi$ on $C(\mathbb{R}_+, \mathbb{R}^N)$ of the form $\varphi\bigl((V_s^N)_{s \ge 0}\bigr) = \varphi_1(V_{t_1}^N) \times \cdots \times \varphi_k(V_{t_k}^N)$, we get for $t \ge t_k$:

The expression on the last line does not involve $(B_t^N)_{t \ge 0}$ anymore. Since the law of $(V_t^N)_{t \ge 0}$ under $\tilde{\mathbb{Q}}$ is the law of the Brownian motion, and since $\tilde{Z}_t^N$ can be expressed in terms of $V_t^N$ only, that last expression does not depend on the specific solution we selected at the beginning of this paragraph: we would obtain exactly the same formula starting from a second solution. Since solutions to (30) have continuous trajectories, this implies uniqueness in law of the solution to the SDE (30), and also to (1).
Appendix B. Propagation of moments. This section is devoted to the proof of Proposition 3.
Proof of Point (i). First, introducing the notation $f_t$ for the time marginal of $(Y_t, W_t)$ and $F(t, x) := \int K(x - y)\, f_t(dy, dw)$, we have by (3):

Next, since $|K|$, and hence also $|F|$, is bounded by $1/2$, we get a simple inequality involving the error function $\operatorname{erf}(x) := \frac{2}{\sqrt{\pi}} \int_0^x e^{-u^2}\, du$. Using that $\operatorname{erf}(x) \le \min\bigl(1, \tfrac{2}{\sqrt{\pi}} x\bigr) \le \min\bigl(1, e^{\frac{2}{\sqrt{\pi}} x} - 1\bigr)$, we finally get the following bound, which will be very useful in the sequel:

Together with the independence of $(B_t)_{t \ge 0}$ and the initial condition $W_0$, it leads to the claimed bound on $\mathbb{E}\bigl[e^{\lambda|W_t|}\bigr]$, which is exactly (i).
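The elementary inequalities on the error function used above are easy to verify numerically over a grid:

```python
import math

# Check the chain used in the proof of point (i), for x >= 0:
#   erf(x) <= min(1, (2/sqrt(pi)) x) <= min(1, exp((2/sqrt(pi)) x) - 1).
c = 2.0 / math.sqrt(math.pi)
for k in range(0, 1001):
    x = 0.01 * k
    # erf is increasing to 1, with derivative (2/sqrt(pi)) e^{-x^2} <= 2/sqrt(pi).
    assert math.erf(x) <= min(1.0, c * x) + 1e-12
    # e^y - 1 >= y for all y, and both sides are capped at 1 consistently.
    assert min(1.0, c * x) <= min(1.0, math.exp(c * x) - 1.0) + 1e-12

print("erf inequalities hold on [0, 10]")
```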

Proof of Point (ii). For the second point, we integrate the inequality (31) and get:

where we have used a stochastic version of Fubini's theorem in the last line. Here $N_t$ is a centered Gaussian random variable with variance $\sigma_t^2 := 4e^{-t} - e^{-2t} - 3 + 2t \le 2t$. The bound (33) and the independence of the initial condition and $N_t$ then lead to the claimed result:
$$\cdots \le e^{\frac{\lambda t}{2}}\, \mathbb{E}\bigl[e^{\lambda(|Y_0| + |W_0|)}\bigr]\, 2\, e^{\frac12 \lambda^2 \sigma_t^2} \le 2\, e^{\frac{\lambda t}{2} + \lambda^2 t}\, \mathbb{E}\bigl[e^{\lambda(|Y_0| + |W_0|)}\bigr].$$
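The closed form of $\sigma_t^2$ and the bound $\sigma_t^2 \le 2t$ can be checked against the stochastic-integral representation $N_t = \sqrt{2} \int_0^t (1 - e^{u-t})\, dB_u$, whose variance is $2 \int_0^t (1 - e^{u-t})^2\, du$. This representation is consistent with the closed form above, but it is our assumption made for the purpose of the check.

```python
import math

def sigma2(t):
    # Closed form from the proof of point (ii).
    return 4 * math.exp(-t) - math.exp(-2 * t) - 3 + 2 * t

def sigma2_numeric(t, n=100_000):
    # Var of the assumed Gaussian N_t = sqrt(2) int_0^t (1 - e^{u-t}) dB_u,
    # i.e. 2 int_0^t (1 - e^{u-t})^2 du, by composite midpoint quadrature.
    h = t / n
    return 2 * h * sum((1 - math.exp((k + 0.5) * h - t)) ** 2 for k in range(n))

for t in (0.1, 0.5, 1.0, 3.0):
    assert abs(sigma2(t) - sigma2_numeric(t)) < 1e-6
    assert 0.0 <= sigma2(t) <= 2 * t
print("variance identity checked")
```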
Proof of Point (iii). We first integrate equality (31) between $s$ and $u$, and take a supremum in time:

The law of the supremum in time of the absolute value of a 1D Brownian motion is explicitly known, see for instance [8, p. 342]. Here, we will use only simple estimates on the exponential moments:
$$\mathbb{E}\bigl[e^{\lambda \sup_{0 \le u \le 1} |B_u|}\bigr] \le \mathbb{E}\bigl[e^{\lambda \sup_{0 \le u \le 1} B_u}\bigr] + \mathbb{E}\bigl[e^{\lambda \sup_{0 \le u \le 1} (-B_u)}\bigr] \le 2\, \mathbb{E}\bigl[e^{\lambda |B_1|}\bigr] \le 4\, e^{\frac{\lambda^2}{2}},$$
and then the exponential moments given by (33). The constant $4$ appearing above will raise some difficulties, so we perform a little optimization to get rid of it. For any $\theta \ge 1$, we may also bound
$$\mathbb{E}\bigl[e^{\lambda \sup_{0 \le u \le 1} |B_u|}\bigr] \le \Bigl(\mathbb{E}\bigl[e^{\lambda \theta \sup_{0 \le u \le 1} |B_u|}\bigr]\Bigr)^{\frac{1}{\theta}} \le 4^{\frac{1}{\theta}}\, e^{\frac{\theta \lambda^2}{2}}.$$
A suitable choice of $\theta$ (of order $\lambda^{-1}$) removes the constant at the price of a slightly worse exponent. Finally, for $t - s \le \lambda^{-2}$, the bound $\tau \le (t - s)^3$ leads to $\lambda (t - s)^{-1} \sqrt{\tau} \le \lambda \sqrt{t - s} \le 1$ and
$$\mathbb{E}\bigl[e^{\lambda (t-s)^{-1} \sup_{s \le u \le t} |N_u|}\bigr] = \mathbb{E}\bigl[e^{\lambda (t-s)^{-1} \sqrt{\tau}\, \sup_{0 \le u \le 1} |B_u|}\bigr] \le e^{2\lambda \sqrt{t-s}},$$
which concludes the proof, using that $\sqrt{t - s} \le \frac14$ by assumption.

Proof of Point (iv). Taking the expectation in (32), and using that $M_t$ is $\mathcal{N}(0, 1 - e^{-2t})$ distributed, the expectation is simply bounded: $\mathbb{E}|M_t| \le \sqrt{2/\pi} \le 1$, and it implies the first bound of point (iv). The second bound uses (35) written with $s$ instead of $0$, which leads in average to an estimate on $\mathbb{E}|Y_t - Y_s|$. Using $1 - e^{s-t} \le t - s$, the previous bound on $\mathbb{E}[|W_s|]$, and the fact that $N_t$ is centered, the conclusion follows when $t - s \le \frac14$.
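The two exponential-moment bounds on the supremum of $|B|$ can be verified with a short deterministic computation, using the exact identity $\mathbb{E}\bigl[e^{\lambda |Z|}\bigr] = 2 e^{\lambda^2/2} \Phi(\lambda)$ for a standard Gaussian $Z$ (a standard Gaussian computation) and a grid search over $\theta$:

```python
import math

def Phi(x):
    # Standard Gaussian cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exp_abs_gaussian(lam):
    # Exact identity: E[e^{lam |Z|}] = 2 e^{lam^2/2} Phi(lam) for Z ~ N(0,1),
    # hence E[e^{lam |Z|}] <= 2 e^{lam^2/2} since Phi <= 1.
    return 2.0 * math.exp(lam * lam / 2.0) * Phi(lam)

for lam in (0.1, 0.5, 1.0, 2.0):
    assert exp_abs_gaussian(lam) <= 2.0 * math.exp(lam * lam / 2.0)

# The little optimization over theta: for 0 < l <= 1,
#   min_{theta >= 1} 4**(1/theta) * exp(theta * l**2 / 2) <= e^{2 l},
# which removes the constant 4 at the price of a worse exponent.
for l in (0.05, 0.3, 0.7, 1.0):
    best = min(4.0 ** (1.0 / th) * math.exp(th * l * l / 2.0)
               for th in (1.0 + 0.01 * k for k in range(10_000)))
    assert best <= math.exp(2.0 * l)
print("exponential moment checks passed")
```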