The Stampacchia maximum principle for stochastic partial differential equations forced by Lévy noise

In this work, we investigate the existence of positive (martingale and pathwise) solutions of stochastic partial differential equations (SPDEs) driven by a Lévy noise. The proof relies on the use of truncation, following the Stampacchia approach to the maximum principle. Among the applications, positivity and boundedness results for the solutions of some biological systems and reaction-diffusion equations are provided under suitable hypotheses, as well as some comparison theorems. This article improves the results of [15], where the authors only considered the case of Wiener noise; even in that case we improve on [15] because the coefficients of the principal differential operator are now allowed to depend upon t.

1. Introduction. In [15], the authors considered a stochastic parabolic equation driven by a Wiener process,

du = N(u) dt + σ(u) dW.   (1)

Because of the randomness in the Wiener process W, the solution u must also be regarded as a stochastic process. In that paper, the authors established positivity and boundedness results for both martingale and pathwise solutions of such an equation under suitable hypotheses, including, e.g., the positivity of the initial data u(0) = u_0. Their approach was mainly based on the use of truncation, following the Stampacchia approach to the maximum principle. The results are widely applicable and play an important role in the analysis of many biological and harvesting models. Specifically, if the initial conditions representing the initial populations of some biological models are positive almost surely, we want to study under what conditions on N and σ the population density remains positive as time evolves.

PHUONG NGUYEN AND ROGER TEMAM
Wiener processes are widely used in SPDEs to incorporate random fluctuations in PDEs that are continuous in time. Recently, it was realized that continuous Wiener processes were not suitable to represent sudden unpredictable variations occurring in some circumstances when modeling financial, biological or physical phenomena, see e.g. [5,6,12], and [14]. Lévy processes have been proposed to model such cases.
The existence and uniqueness of solutions for SPDEs driven by jump-type noises have been extensively investigated by many authors, see, e.g., Applebaum and Wu [17], Truman and Wu [47], Röckner and Zhang [38]. The readers are also referred to the monograph by Peszat and Zabczyk [37] and the review paper [10] for more details. However, to the best of our knowledge, the positivity of the solution of an SPDE perturbed by a Lévy noise has not yet been addressed.
The present article builds upon two earlier articles co-authored by the second author [10,11]. In one of these articles [10], equation (1) was extended by incorporating an additional external stochastic forcing term driven by a canonical noise process with jump discontinuities known as a Lévy process. The equation then reads

du = N(u) dt + σ(u) dW + ∫_{E_0} K(u, z) π̃(dt, dz) + ∫_{E∖E_0} L(u, z) π(dt, dz),   (2)

where E is a Hilbert space in which the Lévy process takes its values and E_0 is the open unit ball in E; more details concerning equation (2) are given below. The process u takes its values in a Hilbert space H which, in this article, represents a space of real-valued functions. Equation (2) typically represents a parabolic equation in its deterministic part, associated with, e.g., initial and Dirichlet boundary conditions (3). The Lévy noise, which has jump discontinuities, is described in more detail in Section 2. It is the sum of a cylindrical Wiener process W, a Poisson random measure π and a compensated Poisson random measure π̃. The Poisson random measure π represents the jumps of the Lévy process of large size, i.e., bounded away from 0 by a fixed constant, while the compensated Poisson random measure π̃ represents the jumps of the Lévy process of arbitrarily small size. In this work, assuming the existence of solutions to equations (2) and (3), which is proved in other articles, we will prove the positivity of the solution u a.s. and a.e. by showing that the negative part u⁻ of u vanishes, where

u⁻ = max(−u, 0).   (4)

In the context of deterministic partial differential equations, this is done by showing that

|u⁻(t)|²_H = 0 for all t.   (5)

In the context of stochastic partial differential equations, we prove this by showing that

E|u⁻(t)|²_H = 0 for all t.   (6)

This article is an extension of [15] to the case of SPDEs driven by a Lévy noise. As in [15], in order to establish a result like (6), we need to consider the Itô differential dφ(u(t)) of φ(u) = |u⁻|²_H, for which we face the obstacles that φ is not C² and that the Itô formula is not directly applicable to φ, since φ is defined on an infinite-dimensional space.
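For orientation, here is the classical deterministic computation behind this strategy, sketched (as an illustrative special case of ours) for the heat equation ∂_t u = ∆u + f on M with Dirichlet boundary conditions, a nonnegative source f and initial datum u_0 ≥ 0:

```latex
\frac12 \frac{d}{dt}\,\lvert u^-(t)\rvert_{L^2}^2
  = -\int_M u^-\,\partial_t u \,dx
  = -\int_M u^-\,(\Delta u + f)\,dx
  = -\,\lvert \nabla u^-(t)\rvert_{L^2}^2 - \int_M f\,u^- \,dx \;\le\; 0 ,
```

so that |u⁻(t)|²_{L²} ≤ |u_0⁻|²_{L²} = 0 for all t, i.e., u(t) ≥ 0. The stochastic argument replaces this with an Itô computation for E|u⁻(t)|²_H, which is exactly why the regularity of φ(u) = |u⁻|²_H matters.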
We will circumvent these difficulties by first constructing smooth C² approximations φ_ε of φ and by deriving the corresponding expression associated with finite-dimensional (projection) approximations u_m of u. However, there are substantial differences between [15] and our paper. Because the Poisson random measure π is not a square integrable martingale, martingale theories cannot be applied, and this induces many difficulties. To overcome these obstacles, one first solves the SPDEs in the absence of the Poisson random measure π and then interlaces that term in the end, see e.g. [7]. By using the same method, we will first establish the existence of a positive solution of the parabolic equation forced by a square integrable Lévy noise, and we will then interlace a Poisson random measure π. However, to obtain a positive solution for the original problem, the large jumps have to be positive almost surely. This article is organized as follows. In Section 2, we recall the background information from the PDE and probabilistic frameworks and set the notations. In Subsection 2.1, we recall the relevant function spaces where the solution u to equation (2) takes its values. In Subsection 2.2 we define the notion of a Lévy process in an infinite-dimensional Hilbert space. We also briefly recall the definition of the stochastic integrals appearing on the right-hand side of (2). In Section 3, we introduce a typical stochastic parabolic equation similar to the heat equation and recall existence results for both martingale and pathwise solutions. The proofs, however, will not be given in detail and we will refer to [10], [11] and other references. Then, in Subsection 3.2, we introduce a C² approximation f_ε of f(u) = (u⁻)² and the corresponding functionals F_ε(u) = ∫_M f_ε(u) dx and F(u) = ∫_M f(u) dx = ∫_M |u⁻|² dx.
Section 4 aims to derive the Itô formula for F(u) = |u⁻|²_H. The procedure is to start by deriving the formula for F_ε(u) in finite dimension m. We then pass to the limit m → ∞ (Subsection 4.1.1) and finally pass to the limit ε → 0, which is performed in Subsection 4.2. We obtain the positivity result for the solutions of equation (2) in Subsection 4.3 by strengthening the hypotheses on the noise terms. Section 4 is concluded by Subsection 4.4, which uses the interlacing method to include the Poisson random measure part π into equations (2)-(3) and to derive a positive solution for that equation. We generalize these results in Section 5.1, where the Laplacian is replaced by more general second-order elliptic operators. We also provide the positivity of the solution for a stochastic system of reaction-diffusion equations with a polynomial nonlinearity. We also consider some biological models such as the Lotka-Volterra equations, known as the prey-predator model, and a harvesting model arising in population dynamics. A comparison theorem for solutions is also given in this section. These last applications, developed in Sections 5.5 to 5.7, are borrowed from [15] and we only expand the parts of the proofs related to the Lévy noise. The main tools concerning weak convergence, tightness and the Skorokhod topology on the space of càdlàg functions are collected in Appendix A.2.
respectively. We also define the corresponding norms, denoted |·|_H and ‖·‖_V, respectively. Although we will consider more general elliptic operators and boundary conditions in Section 5, we first consider the standard Laplacian operator A = −∆ : V → V′ with Dirichlet boundary condition on M. It is well known that there exists a Hilbert basis {ϕ_j}_{j≥1} of H made of smooth eigenfunctions of A, that is, ϕ_j ∈ H¹_0(M) and

Aϕ_j = λ_j ϕ_j,  0 < λ_1 ≤ λ_2 ≤ ⋯,  λ_j → ∞.

The functions ϕ_j are smooth; their level of smoothness depends on the regularity of the boundary ∂M of M. For instance, they belong to C^∞ if ∂M is C^∞. We then introduce the finite-dimensional spaces H_n = span{ϕ_1, …, ϕ_n} and define the corresponding orthogonal projector P_n from H onto this space. Given this projection and an element v = Σ_{j=1}^∞ ψ_j ϕ_j in H, we denote by v_n its projection, with

v_n = P_n v = Σ_{j=1}^n ψ_j ϕ_j.

We have the chain of spaces V ⊂ H ≡ H′ ⊂ V′, where the injections are continuous and each space is dense in the next one. We can also introduce the chain of spaces V_α = D(A^{α/2}), where, for α > 0,

V_α = { u = Σ_{j≥1} ψ_j ϕ_j ∈ H : Σ_{j≥1} λ_j^α ψ_j² < ∞ },

and V_{−α} is the dual of V_α. We recall that, for every α_1, α_2 ∈ R with α_1 > α_2, the injection V_{α_1} ⊂ V_{α_2} is compact and V_{α_1} is dense in V_{α_2}. We also let Q_n = I − P_n be the projection from H onto H_n^⊥. We have the generalized and reverse Poincaré inequalities

|P_n u|_{V_{α_2}} ≤ λ_n^{(α_2−α_1)/2} |P_n u|_{V_{α_1}}  and  |Q_n u|_{V_{α_1}} ≤ λ_{n+1}^{(α_1−α_2)/2} |Q_n u|_{V_{α_2}},

which hold for any α_1 < α_2 and for all u ∈ H and n ≥ 1.
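As a sanity check, the two Poincaré-type inequalities can be verified numerically in the simplest concrete setting, which we take to be M = (0, π) in one space dimension, where the Dirichlet Laplacian has eigenvalues λ_j = j² (an illustrative sketch; the helper `valpha_norm` and all parameter choices are ours, not from the text):

```python
import numpy as np

def valpha_norm(c, alpha):
    """|u|_{V_alpha} = (sum_j lambda_j^alpha c_j^2)^{1/2} for u = sum_j c_j phi_j,
    with Dirichlet-Laplacian eigenvalues lambda_j = j^2 on M = (0, pi)."""
    lam = np.arange(1, len(c) + 1, dtype=float) ** 2
    return np.sqrt(np.sum(lam**alpha * np.asarray(c, dtype=float) ** 2))

rng = np.random.default_rng(0)
c = rng.normal(size=50)                          # coefficients of u in the eigenbasis
n = 10
Pn_c = np.where(np.arange(1, 51) <= n, c, 0.0)   # P_n u: first n modes
Qn_c = c - Pn_c                                  # Q_n u = (I - P_n) u

a1, a2 = 0.0, 1.0                                # alpha_1 < alpha_2
lam_n, lam_np1 = float(n) ** 2, float(n + 1) ** 2

# reverse Poincaré on H_n: the higher norm of P_n u is controlled by the lower norm
assert valpha_norm(Pn_c, a2) <= lam_n ** ((a2 - a1) / 2) * valpha_norm(Pn_c, a1) + 1e-12
# generalized Poincaré on H_n^perp: the lower norm of Q_n u is controlled by the higher norm
assert valpha_norm(Qn_c, a1) <= lam_np1 ** ((a1 - a2) / 2) * valpha_norm(Qn_c, a2) + 1e-12
```

Both inequalities become equalities when u is the single mode ϕ_n or ϕ_{n+1}, which is why λ_n and λ_{n+1} are the sharp constants.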

2.2. The stochastic framework. In order to make sense of the stochastic terms in (2), we first recall the definitions and some properties of Hilbert space-valued Wiener processes and Lévy processes. In this section we work on a probability space (Ω, F, P). We begin by recalling the definition of Hilbert space-valued Lévy processes.
Every Lévy process admits a càdlàg modification. More precisely, for every Lévy process L there exists a Lévy process L̃ such that P[L(t) = L̃(t)] = 1 for every t ≥ 0 (i.e., L̃ is a modification of L) and such that, P-a.s., the function t ↦ L̃(t) is right continuous at every t ≥ 0 and has left-hand limits at every t > 0 (i.e., L̃ is càdlàg). See, e.g., Theorem 4.3 in [37] for a proof of this fact. We will only consider càdlàg Lévy processes. Almost surely, a càdlàg Lévy process L has at most countably many jumps on any interval [0, T]. This is because, for each positive integer n, a càdlàg function can only have finitely many jumps of size exceeding 1/n on [0, T]. Let L be a càdlàg Lévy process with values in a Hilbert space U. The jump of L at time t > 0 is denoted by ∆L_t = L_t − L_{t−}. Let A ∈ B(U), the family of Borel sets of U, and define

π(t, A) := π(t, A, ω) := Σ_{s: 0<s≤t, ∆L_s≠0} χ_A(∆L_s),   (11)

where χ_A is the characteristic function of A. So, π(t, A) is the number of jumps of L that occur before or at time t and fall in the set A. More generally, for Γ ∈ B(R₊ × U), we define

π(Γ) := Σ_{s>0, ∆L_s≠0} χ_Γ((s, ∆L_s)).   (12)

There are at most countably many terms in the sum on the right-hand side of (12) (as in (11)). Equation (12) defines a random measure π that agrees with the quantity π(t, A) defined in (11) when Γ = (0, t] × A. We call π the jump measure of the Lévy process (L_t)_{t≥0}. It is well known that π is a Poisson random measure (see, e.g., [37]). Each Lévy process L gives rise to a positive Borel measure ν on U ∖ {0}, defined by the property that ν(A) is the expected rate of jumps of L that lie in A, for every A ∈ B(U ∖ {0}), i.e.,

ν(A) := (1/t) E[π((0, t] × A)].   (13)

The fact that the right-hand side of (13) does not depend on the value of t > 0 follows from the fact that π is a Poisson random measure. We call ν the Lévy measure of L. Since L is càdlàg P-a.s., we have ν(A) < ∞ when 0 ∉ Ā. Indeed, when 0 ∉ Ā, the nonnegative integer-valued process (π((0, t] × A))_{t≥0} is a Poisson process and its rate, which is finite, is ν(A).
In particular, ν is a σ-finite measure.
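The counting definition (11) can be illustrated on a toy real-valued jump path: we sample finitely many jump times and jump sizes directly and count those falling in a Borel set A (a sketch; U = R, the intensity and the normal jump law are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
T, intensity = 10.0, 3.0                       # horizon and expected jumps per unit time
n_jumps = int(rng.poisson(intensity * T))
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
jump_sizes = rng.normal(0.0, 1.0, n_jumps)     # the jumps Delta L_s, here U = R

def pi(t, A):
    """pi(t, A): number of jumps occurring before or at time t whose size lies in A,
    where the Borel set A is given as an indicator function R -> bool."""
    return int(sum(A(z) for s, z in zip(jump_times, jump_sizes) if s <= t))

# pi(T, U \ {0}) counts every jump of the path
assert pi(T, lambda z: z != 0.0) == n_jumps
# monotone in t, and additive over disjoint Borel sets
assert pi(T / 2, lambda z: z > 0) <= pi(T, lambda z: z > 0)
assert pi(T, lambda z: z > 0) + pi(T, lambda z: z <= 0) == n_jumps
```

Averaging `pi(t, A) / t` over many independent paths would estimate the Lévy measure ν(A) of (13).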
The prototypical examples of Lévy processes are the Wiener processes and the compound Poisson processes, which we recall below.
A Wiener process is, by definition, a Lévy process whose sample paths are continuous almost surely. Wiener processes are infinite-dimensional generalizations of Brownian motions. They can be constructed as follows. Let {β_n}_{n=1}^∞ be a sequence of i.i.d. real-valued Brownian motions and {u_n}_{n=1}^∞ be an orthonormal basis of U. Let γ_n ≥ 0 with Σ_{n=1}^∞ γ_n² < ∞ and consider the random sum

W(t) := Σ_{n=1}^∞ γ_n β_n(t) u_n,  t ≥ 0.   (14)

In the coordinates of the orthonormal basis, W evolves according to independent Brownian motions in each direction, scaled by γ_n. The scaling factors {γ_n}_{n=1}^∞ ∈ ℓ² are required in order to ensure that the sum in (14) converges to a U-valued random variable. One can show that the sum in (14) converges P-a.s. in the space C([0, T]; U) (see, for instance, Theorem 4.3 in [16]). It is simple to check that the process W defined in (14) is a Wiener process, i.e., W satisfies the conditions in Definition 2.1 and W is continuous almost surely. Conversely, every Wiener process is of the form (14). Let W be a U-valued Wiener process, expressed in the form (14), and let Q denote the unique bounded linear operator on U with eigenvectors {u_n}_{n=1}^∞ and eigenvalues {γ_n²}_{n=1}^∞. Then Q is positive and of trace class; we call it the covariance operator of W. The space U_0 := Q^{1/2}(U), equipped with the inner product

⟨u, v⟩_{U_0} := ⟨Q^{−1/2} u, Q^{−1/2} v⟩_U,

where Q^{−1/2} is the pseudo-inverse of Q^{1/2}, plays an important role in constructing the stochastic integral with respect to W. We call U_0 the reproducing kernel Hilbert space of W. Finally, let X be a Hilbert space. We denote by L₂(U_0, X) the set of Hilbert-Schmidt operators from U_0 to X. This space is a Hilbert space endowed with the inner product and norm

⟨R, S⟩_{L₂(U_0,X)} := Σ_k ⟨R e_k, S e_k⟩_X  and  ‖R‖²_{L₂(U_0,X)} := Σ_k |R e_k|²_X,

where {e_k}_{k≥1} is an orthonormal basis of U_0. The processes that we integrate with respect to the Wiener process W will take values in the space L₂(U_0, X). In the context of SPDEs one often has a particular function in mind for the multiplicative Wiener noise, i.e., a specific choice of σ in (1) in the present case.
As these functions are L₂(U_0, X)-valued, this also amounts to a specific choice of the Hilbert space U_0. Using a cylindrical Wiener process construction, it is possible to define a Wiener process W on some larger Hilbert space U_1 such that U_0 is the reproducing kernel Hilbert space of W. Any real, separable Hilbert space U_1 that admits a Hilbert-Schmidt embedding of U_0 will do, and the resulting stochastic integral depends neither on the space U_1 nor on the choice of the Hilbert-Schmidt embedding of U_0 into U_1. We refer the reader to [16] for the details of the cylindrical Wiener process construction. The fundamental example of a Lévy process with jump discontinuities is the compound Poisson process. A U-valued process P is a compound Poisson process if and only if there exist a finite positive Borel measure μ on U ∖ {0}, a Poisson process Π with intensity μ(U) and i.i.d. U-valued random variables (Z_j)_{j=1}^∞ that are independent of Π such that

P(t) = Σ_{j=1}^{Π(t)} Z_j,  t ≥ 0.   (17)

Thus, P has jumps at the same times as Π and the value of the j-th jump is Z_j. We refer to [37] for a treatment of compound Poisson processes. When E|P(t)|_U < ∞, which occurs if and only if ∫_U |y|_U dμ(y) < ∞, we define the compensated compound Poisson process P̃(t) := P(t) − E[P(t)]. The structure of a general Lévy process is described by the Lévy-Khinchin decomposition (see, for example, Theorem 4.23 in [37]). This result says that every U-valued Lévy process L can be decomposed as a sum

L(t) = a t + W(t) + P_0(t) + Σ_{n=1}^∞ P̃_n(t),   (18)

where a ∈ U is a fixed vector, W is a Wiener process, P_0 is a compound Poisson process, the P̃_n are compensated compound Poisson processes and all of the processes on the right-hand side of (18) are independent. Noise driven by a Lévy process L is incorporated in the main equation (2) using three notions of stochastic integration, one for each of the processes W, P_0 and Σ_{n=1}^∞ P̃_n appearing in the decomposition (18). Before discussing stochastic integration, we first review the notions of filtrations and predictability.
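The series representation (14) can be checked numerically: truncating at N terms and identifying W(t) with its coordinate vector in the basis {u_n}, the coordinate variances should equal γ_n² t, i.e., the eigenvalues of the covariance operator Q scaled by t (a seeded Monte Carlo sketch; the truncation level and the weights γ_n = 1/n are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N, t = 8, 2.0                           # truncation level and fixed time
gamma = 1.0 / np.arange(1, N + 1)       # square-summable weights gamma_n = 1/n

# beta_n(t) ~ N(0, t) independently; each sample row is the coordinate vector of W(t)
samples = gamma * rng.normal(0.0, np.sqrt(t), size=(50_000, N))

# the covariance operator Q has eigenvalues gamma_n^2, so Var <W(t), u_n> = gamma_n^2 * t
emp_var = samples.var(axis=0)
assert np.allclose(emp_var, gamma**2 * t, rtol=0.1)

# trace class: E |W(t)|_U^2 = t * trace(Q) = t * sum_n gamma_n^2
assert abs((samples**2).sum(axis=1).mean() - t * np.sum(gamma**2)) < 0.1
```

The last check is the quantitative meaning of the ℓ² condition on {γ_n}: without it, trace(Q) and hence E|W(t)|²_U would be infinite.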
An increasing family of σ-subfields (F_t)_{t≥0} of F is called a filtration. We call (Ω, F, (F_t)_{t≥0}, P) a filtered probability space.
Definition 2.2. Let (F_t)_{t≥0} be a filtration on a probability space (Ω, F, P) and let T > 0. We denote by P_{[0,T]} the σ-field of subsets of Ω × [0, T] generated by sets of the form A × (s, t], where A ∈ F_s and 0 ≤ s ≤ t ≤ T. We call P_{[0,T]} the predictable σ-field. We say that a stochastic process (X_t)_{t∈[0,T]} is predictable if and only if it is P_{[0,T]}-measurable as a function of both ω and t.
Definition 2.4. Let L be a U-valued Lévy process defined on a filtered probability space (Ω, F, (F_t)_{t≥0}, P). We say that L is an F_t-Lévy process if L is adapted to the filtration (F_t)_{t≥0} and the increment L(t) − L(s) is independent of F_s for all 0 ≤ s ≤ t. If L is also a Wiener process, then we will simply say that L is an F_t-Wiener process.
We now recall the notions of stochastic integration that will be used here. Let X be a real, separable Hilbert space. First, we consider a Wiener process W on a filtered probability space (Ω, F, (F_t)_{t≥0}, P) with reproducing kernel Hilbert space U_0. For use in stochastic integration we assume that W is an F_t-Wiener process. The space of integrands for stochastic integration with respect to W is the space L²_{U_0,T}(X) of predictable, L₂(U_0, X)-valued processes Ψ such that

E ∫_0^T ‖Ψ(s)‖²_{L₂(U_0,X)} ds < ∞.

The main facts about stochastic integration with respect to W are listed below. For every Ψ ∈ L²_{U_0,T}(X), the stochastic integral (∫_0^t Ψ(s) dW(s))_{t∈[0,T]} is a square integrable X-valued martingale. By the Burkholder-Davis-Gundy (BDG) inequality (see, e.g., [37]), for every p ∈ [1, ∞) there exists a constant C_p > 0 such that, for every Ψ ∈ L²_{U_0,T}(X) and every F_t-stopping time τ, we have

E sup_{t≤τ∧T} |∫_0^t Ψ(s) dW(s)|_X^p ≤ C_p E ( ∫_0^{τ∧T} ‖Ψ(s)‖²_{L₂(U_0,X)} ds )^{p/2}.

Second, we consider stochastic integration with respect to the jump measure π of a U-valued Lévy process L. We assume that L is an F_t-Lévy process. This implies that for each set A ∈ B(U ∖ {0}) the Poisson process (π((0, t] × A))_{t≥0} is an F_t-Lévy process. This says that π is the Poisson random measure induced by a stationary F_t-Poisson point process, namely the jumps of L. We introduce the notation E := U ∖ {0} and, for q = 1, 2, the spaces F^q_{ν,T}(X) of predictable processes f : Ω × [0, T] × E → X such that

E ∫_0^T ∫_E |f(s, z)|_X^q dν(z) ds < ∞.

Below we gather basic facts from [29] about integration of functions in these spaces with respect to π. The main fact is that we are able to integrate functions f ∈ F¹_{ν,T}(X) with respect to the measure π for P-a.e. fixed ω ∈ Ω. To be precise, for every f ∈ F¹_{ν,T}(X) and every A ∈ B(E) bounded away from 0, the integral ∫_{(0,t]} ∫_A f(s, z) dπ(s, z) is a sum of finitely many vectors in X, P-a.s. Stochastic integration with respect to the compound Poisson process P_0 in the Lévy-Khinchin decomposition of L, i.e. equation (18), can be described in this manner by taking A := {y ∈ U : |y|_U ≥ 1}. Third, we consider stochastic integration with respect to the compensated Poisson random measure π̃, which is formally given by the rule dπ̃ = dπ − dν ⊗ dt.
Then the stochastic integral ∫_{(0,t]} ∫_{E_0} f(s, z) dπ̃(s, z) represents stochastic integration with respect to the process Σ_{n=1}^∞ P̃_n in the Lévy-Khinchin decomposition (18); see [10] for the details. The noise in equation (2) will be driven by a Lévy process L defined on a probability space (Ω, F, P). The Wiener process W is the Wiener part in the Lévy-Khinchin decomposition of L and the Poisson random measure π is the jump measure of L. For more general noises we can allow W to be an F_t-cylindrical Wiener process and allow π to be the Poisson random measure induced by a stationary F_t-Poisson point process (see [29] for the definition), where all of these processes are independent. Throughout this article, we call a stochastic basis a tuple

S := (Ω, F, (F_t)_{t≥0}, P, W, π),

where (F_t)_{t≥0} is a complete, right-continuous filtration, W is an F_t-cylindrical Wiener process, and π is a Poisson random measure on (0, ∞) × E induced by a stationary F_t-Poisson point process that is independent of W.
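The compensation rule dπ̃ = dπ − dν ⊗ dt can be illustrated on a scalar compound Poisson process, for which the compensator is explicit: E[P(t)] = t · rate · E[Z], so P̃(t) = P(t) − E[P(t)] has mean zero (a seeded Monte Carlo sketch; the rate and the exponential jump law are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
t, rate, mean_z = 1.0, 5.0, 2.0        # horizon, jump intensity mu(U), mean jump size E[Z]

def compound_poisson(n_paths):
    """P(t) = Z_1 + ... + Z_{Pi(t)} with Pi(t) ~ Poisson(rate * t), Z_j ~ Exp(mean_z)."""
    counts = rng.poisson(rate * t, size=n_paths)
    return np.array([rng.exponential(mean_z, k).sum() for k in counts])

P = compound_poisson(50_000)
compensator = rate * t * mean_z         # E[P(t)] = t * rate * E[Z] (here = 10)
P_tilde = P - compensator               # compensated process, a martingale in t

assert abs(P.mean() - compensator) < 0.2   # Monte Carlo mean matches the compensator
assert abs(P_tilde.mean()) < 0.2           # the compensated process has (about) zero mean
```

For the small jumps in (18) the same subtraction is what makes the series Σ_n P̃_n converge, even though Σ_n P_n itself need not.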
3. The stochastic parabolic equations. We consider the stochastic evolution equation (29) and its integral form.

3.1. Existence results. By using the same approach as in [12], we can show that, under suitable hypotheses on the initial condition and the coefficients, equation (29) possesses both martingale and pathwise solutions. More precisely, we state the following four main existence theorems.
• σ : H → L₂(U, H) and L : H × E_0 → H satisfy the following growth and Lipschitz conditions:
Theorem 3.2. Assume we are working relative to a given fixed stochastic basis S := (Ω, F, (F_t)_{t≥0}, P, W, π) and let u_0 be an F_0-measurable H-valued random variable. Then there exists a global pathwise solution to equation (29) that satisfies (32).
Theorem 3.4. Assume we are working relative to a given fixed stochastic basis S := (Ω, F, (F_t)_{t≥0}, P, W, π) and let u_0 be an F_0-measurable V-valued random variable. Then there exists a global pathwise solution to equation (29) satisfying (35).
3.2. The functionals F and F_ε. Let k(r) = r⁻ = max(−r, 0) denote the negative part of r, and set f(r) = k²(r) = (r⁻)²,

STAMPACCHIA MAXIMUM PRINCIPLE FOR SPDES FORCED BY LÉVY NOISE 2299
so that f(r) = r² for r < 0 and f(r) = 0 for r ≥ 0. Then it is not difficult to check that f_ε has the following properties, where θ(r) = 0 for r ≥ 0 and θ(r) = 1 for r < 0, and the convergence is uniform for r ∈ R.
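Since the construction of f_ε is only referenced here, the following numerical check uses one concrete C² choice of our own (a cubic bridge for f on (−ε, 0), so f_ε″ rises linearly from 0 to 2; not necessarily the paper's construction) and verifies two expected properties, 0 ≤ f_ε ≤ f and uniform convergence of the first derivatives:

```python
import numpy as np

def f(r):
    """f(r) = (r^-)^2 with r^- = max(-r, 0)."""
    return np.minimum(r, 0.0) ** 2

def f_eps(r, eps):
    """A C^2 approximation of f (one possible construction): a cubic bridge on
    (-eps, 0), matched so that f_eps, f_eps' and f_eps'' are continuous."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    bridge = (r < 0.0) & (r > -eps)
    out[bridge] = -r[bridge] ** 3 / (3.0 * eps)          # f'' goes from 0 to 2
    left = r <= -eps
    out[left] = r[left] ** 2 + eps * r[left] + eps**2 / 3.0
    return out

r = np.linspace(-5.0, 5.0, 100_001)
for eps in (1.0, 0.1, 0.01):
    fe = f_eps(r, eps)
    assert np.all(fe >= 0.0) and np.all(fe <= f(r) + 1e-12)   # 0 <= f_eps <= f
    # first derivatives converge uniformly: sup |f_eps' - f'| <= eps
    err = np.abs(np.gradient(fe, r) - np.gradient(f(r), r))
    assert err.max() <= eps + 1e-3
```

With this choice, f_eps″ equals 2 for r ≤ −ε and 0 for r ≥ 0, so it converges pointwise to 2θ(r), matching the role of θ above.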
This lemma is elementary, so the proof is omitted. We now introduce the functionals F and F_ε:

F(u) = ∫_M f(u) dx and F_ε(u) = ∫_M f_ε(u) dx.   (42)

We readily obtain the first and second derivatives of the functional F_ε. The strategy to establish the Itô formula for ∫_M |u⁻|² dx is to set L = 0 initially; in the end, we will interlace that coefficient, which represents finitely many additional jumps almost surely. We therefore first focus on studying the positivity of the solution of the following equation, given also in its integral form.

4.1. Itô formula in finite dimension. This section aims to derive an Itô formula for F_ε(u), where u is the (martingale or pathwise) solution of (29). We proceed by approximation in finite dimension and, for that purpose, we introduce u_m = P_m u, which is the solution of the projected system. The Itô formula in finite dimension, with F_ε defined as in (42), gives (48).

4.1.1. Passage to the limit as m → ∞. In this subsection, we aim to pass to the limit m → ∞, term by term, in the right-hand side of equation (48). We start with the term F_ε(u_m(0)).
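A hedged sketch of how this first estimate can go (our reconstruction, using only the quadratic growth of f_ε, in the form |f_ε(a) − f_ε(b)| ≤ C_ε (1 + |a| + |b|)|a − b|, together with the Cauchy-Schwarz inequality):

```latex
\lvert F_\varepsilon(P_m u_0) - F_\varepsilon(u_0)\rvert
  \le \int_M \lvert f_\varepsilon(P_m u_0) - f_\varepsilon(u_0)\rvert \,dx
  \le C_\varepsilon \int_M \bigl(1 + |P_m u_0| + |u_0|\bigr)\,\lvert P_m u_0 - u_0\rvert \,dx
  \le C_\varepsilon \bigl(|M|^{1/2} + |P_m u_0|_H + |u_0|_H\bigr)\,\lvert P_m u_0 - u_0\rvert_H ,
```

which goes to 0, P-a.s., since P_m u_0 → u_0 strongly in H, P-a.s.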
Using that f_ε is a Lipschitz function and using the Hölder inequality, we find an estimate (49) which goes to 0, P-a.s., since P_m u_0 → u_0 strongly in H, P-a.s. On the other hand, since f_ε(r) ≤ cr², we have the bound (50). From (49) and (50), by applying the Vitali Convergence Theorem (Lemma A.3 for p = 1, q = 2), we find that EF_ε(u_m(0)) → EF_ε(u_0). The term EF_ε(u_m(t)) is treated similarly. We have

which goes to 0, P-a.s. In light of estimate (32), we have (52) and (53). By applying the Vitali Convergence Theorem (Lemma A.3 for p = 1, q = 2), we find that EF_ε(u_m(t)) → EF_ε(u(t)). Next, by using integration by parts, the chain rule and the Dirichlet boundary condition, we obtain (55). Observe that, by extracting a subsequence, the convergences below hold for a.e. s ∈ [0, T], a.e. x ∈ M and a.s. for ω ∈ Ω. Since P_m is an orthogonal projection in H¹_0(M), we infer that ∇u_m(t) → ∇u(t) in H, a.s. and for a.e. t. Therefore, by applying the Lebesgue Dominated Convergence Theorem, we obtain (58). We then infer that, P-a.s. and for all t ∈ [0, T], we have, by the triangle inequality,

First, K_1 is estimated by the Hölder inequality, yielding (59); the last line of (59) holds true due to (58).
Then K_2 is treated similarly. The first term of (60) goes to 0 due to (58). By (56), we find that (61) holds for a.e. t, x and a.s. in ω. Furthermore, we have the bound (62). Combining (61) and (62), along with the Lebesgue Dominated Convergence Theorem, we obtain that, a.s. for ω ∈ Ω,

It is direct to derive the bound (64). Gathering the relations (59) through (64) and utilizing the Vitali Convergence Theorem (Lemma A.3 with p = 1, q = 2), we deduce (65). We next show (66). In order to do so, we first note that, P-a.s. and for all t, (67) holds; the last line holds true because both u_m(s) → u(s) and P_m b(s) → b(s) strongly in H, P-a.s. and for a.e. t ∈ [0, T]. Next, we have (68). From (67) and (68), by means of the Vitali Convergence Theorem (Lemma A.3 with p = 1, q = 2), we obtain (69). The next term is treated differently: we consider the splitting into the terms I_1 and I_2. Thanks to (56), we obtain, for a.e. t, x and a.s. for ω, (71). By applying the Lebesgue Dominated Convergence Theorem, we deduce that

∫_0^t ∫_{E_0} |⟨DF_ε(u_m(s−)) − DF_ε(u(s−)), P_m K(u(s−), z)⟩|² dν(z) ds → 0, a.s.
(72) We now show that I_2 → 0 a.s. Thanks to the fact that P_m K(u(s−), z) → K(u(s−), z) strongly in H, for a.e. x, t, z and a.s. in ω, we obtain (73). In addition, by utilizing assumption (30), P-a.s.,
where the last convergence follows from the fact that P_m σ(u(x, s)) → σ(u(x, s)) strongly in H for a.e. t, x and a.s. in ω. Furthermore, we have the required domination. By means of the Lebesgue Dominated Convergence Theorem, we conclude the a.s. convergence as m → ∞. Next, by utilizing hypothesis (30), we find that

E ∫_0^t ∫_M |f_ε′(u_m(x, s)) P_m σ(u(x, s)) − f_ε′(u(x, s)) σ(u(x, s))|² dx ds

is uniformly bounded. Collecting the relations from (83) through (87) and using the Vitali Convergence Theorem (Lemma A.3 for p = 1, q = 2), we find the desired convergence. By utilizing the Itô isometry, we obtain the analogous convergence for the stochastic integral term. The next term in (48) is estimated as follows.

Observe that, for all t ∈ [0, T] and for ω ∈ Ω a.s., we have the following convergences. By applying the Lebesgue Dominated Convergence Theorem, we deduce that I_5 → 0, a.s. in ω. In the same manner, the term I_6 can be shown to converge to 0 due to the following facts. Along with the Lebesgue Dominated Convergence Theorem, the result follows.
We readily obtain the following bound. By the Lebesgue Dominated Convergence Theorem, we obtain the corresponding convergence. We are left to show the convergence of the remaining term to its limit. To that end, we first note that, P-a.s. and for a.e. t ∈ [0, T], we have the decomposition into the terms I_7, I_8 and I_9. The treatment of the terms I_7 and I_8 is identical, so we only consider the term I_7. We have an estimate which tends to 0 since (98) holds. We now consider the term I_9: for a.e. t ∈ [0, T] and a.s. ω ∈ Ω, the integrand converges to 0; the last line follows thanks to (56) and the fact that P_m K(u(s−), z) − K(u(s−), z) → 0 strongly in H.
We further obtain the following bounds by using a Taylor expansion. By utilizing the Lebesgue Dominated Convergence Theorem, we obtain the desired convergence, together with the bound

E ∫_0^t ∫_{E_0} |P_m K(u(s−), z)|⁴ dν(z) ds ≤ C E ∫_0^t ∫_{E_0} |K(u(s−), z)|⁴ dν(z) ds.

Since both quantities on the right-hand side are finite, the passage to the limit in this term is justified.

4.2. Itô formula in infinite dimension (passage to the limit as ε → 0). We will now pass to the limit in each term of (104) as ε → 0.

Taking the mathematical expectation on both sides of (117), we see that the first term vanishes because it is a square integrable martingale; our task is therefore to handle the second term, which we split into κ_1 and κ_2. As ε → 0, thanks to the Lipschitz assumption (31), the term κ_1 converges to its limit. Regarding κ_2, thanks to assumption (125), we obtain a bound which goes to 0 as ε → 0, a.s. in ω.

5. Generalization. We now present various natural generalizations of Propositions 1 and 2 to various other equations and boundary conditions.

5.1. More general elliptic operators. First, we extend the results of the previous section by replacing −∆ in equation (29) by more general elliptic operators. We consider an elliptic operator

A u = −Σ_{i,j=1}^d ∂_i ( a_{ij}(x, t) ∂_j u ),   (141)

where the matrix (a_{ij}) is symmetric. We further assume some hypotheses on these coefficients:
• A is coercive, that is, there exists a constant α > 0 such that Σ_{i,j=1}^d a_{ij}(x, t) ξ_i ξ_j ≥ α |ξ|² for all ξ ∈ R^d and a.e. (x, t);
• a_{ij} is uniformly bounded on M, that is, |a_{ij}(x, t)| ≤ C for a.e. (x, t).

Remark 1. There is a difference in proving the analogue of Proposition 1, because a_{ij} is time-dependent. In order to overcome this obstacle, we will first set a_{ij}^{∆t}(x, t) := a_{ij}(x, (n−1)∆t) for t ∈ [(n−1)∆t, n∆t), and then prove the validity of Proposition 1 with −∆ replaced by A^{∆t}, which is defined as in (141) with a_{ij} replaced by a_{ij}^{∆t}. The procedure is first to derive the Itô formula on [(n−1)∆t, n∆t] and then pass to the limit ∆t → 0 to conclude this section.
Proposition 1 gives the Itô formula with the Laplacian operator replaced by the operator A^{∆t}: for t ∈ [(n−1)∆t, n∆t),

EF_ε(u(t)) = EF_ε(u(0)) + E ∫_0^t ⟨DF_ε(u(s)), A^{∆t} u(s)⟩ ds + E ∫_0^t ⟨DF_ε(u(s)), b(s)⟩ ds + E ∫_0^t ∫_{E_0} ⟨DF_ε(u(s)), K(u(s−), z)⟩ dπ̃(s, z) + ⋯.

The order in which we let ε → 0 and ∆t → 0 is interchangeable; however, to avoid unnecessary difficulties, we pass to the limit ∆t → 0 first. We have, for a.s. ω, by integration by parts, the convergence (146); the last line follows from the fact that a_{ij}^{∆t} → a_{ij} strongly in H as ∆t → 0. On the other hand, we have the domination (147). From (146) and (147), along with the Lebesgue Dominated Convergence Theorem, we obtain the convergence of this term. We are left to let ε → 0, which is done in exactly the same manner as in the previous section.

5.2. More general boundary conditions. The results of the previous sections can be extended to more general boundary conditions. For example, we can consider the Neumann boundary condition

∂u/∂n = Σ_{i=1}^d ∂_i u · n_i = 0 on ∂M,

where n = (n_1, n_2, …, n_d) is the outward unit normal vector on ∂M. We may also consider a mixed boundary condition such as

u = 0 on Γ_1,  ∂u/∂n = 0 on Γ_2,

where Γ_1, Γ_2 are two complementary components of ∂M such that ∂M = Γ̄_1 ∪ Γ̄_2. See the details in [15].
We will consider the following stochastic boundary-value problem involving a scalar function u = u(x, t). For the mathematical setting of this problem, we write H = L²(M), V = H¹_0(M) and, by following the approach in [43] and [12], we obtain the following existence result.

Theorem 5.1. For u_0 ∈ L²(Ω, H), F_0-measurable, and f ∈ L²(0, T; H), there exists a unique martingale (pathwise) solution u of the system (152) which satisfies, P-a.s., the estimates below.

Proof. We only sketch the proof of the existence since it is not the main point of this subsection; the complete proof will be given in a subsequent work. The construction of both solutions is based on a truncation argument, the classical Faedo-Galerkin approximation scheme and a modified version of the Skorokhod Representation Theorem.
To derive the a priori estimates on the solutions in L²(Ω; L²(0, T; H)) ∩ L²(Ω; L^{2p}(0, T; L^{2p}(M))), we apply the Itô formula to the function φ(u) = |u|² in (152); taking expectations on both sides yields the corresponding energy identity. By using the Young inequality, we readily obtain the bound for the polynomial ϕ and, by using the BDG inequality, the estimates for the terms involving σ and π̃ are obtained.
The last term is bounded by simply utilizing inequality (130). We next apply the Itô formula to the function ψ(u) := |u⁻|² in equation (152). By Proposition 1, we obtain the corresponding identity. In order to obtain a positive solution for the system (152), we shall require that the data u_0, f are ≥ 0 a.e. and a.s. We further require special hypotheses on the coefficients. More precisely, the polynomial ϕ defined in (151) satisfies b_{2k} = 0 for 0 ≤ k ≤ p − 1, and the noise terms σ and K satisfy (31). With all the above conditions, and observing that u^{2k+1} u⁻ ≤ 0 a.e. and a.s., it is not hard to deduce the desired inequality, which is identical to (129); the rest is treated in the same manner as before.
5.4. Lotka-Volterra system. We investigate in this subsection the positivity of both martingale and pathwise solutions of the Lotka-Volterra system in space dimension two. This system is a reduced form of the well-known Shigesada-Kawasaki-Teramoto system, SKT for short. We observe that if u, v ≥ 0 are solutions of (159), then they are solutions of the following system (160). Classically, we will work on the system with both terms L_i = 0, i = 1, 2; those terms will be subsequently included into the system by interlacing. We state the main results.

Theorem 5.2. Fix a stochastic basis S := {Ω, F, P, (F_t)_{t≥0}, W_1, W_2, π_1, π_2}. We assume that
• there exists a positive constant M_1 such that, for a.e. t ∈ [0, T],
• there exists a positive constant M_2 such that, for a.e. t ∈ [0, T],
Then there exists a unique pathwise solution of the system (160) satisfying the following inequality. The detailed proof will be given in a separate work. Here we only focus on showing the existence of a positive solution to the system (160) when u_0, v_0 ≥ 0 a.s.

Proposition 3. Under the same assumptions as in Theorem 5.2 but, instead of (161), we further assume that (i) and (ii) L_1, L_2 ≥ 0 a.s. Then the system (160) possesses a positive solution.
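To illustrate the positivity statement, here is a toy, space-independent version of the predator-prey dynamics with multiplicative noise vanishing at 0, integrated by an exponential Euler scheme (all rates, noise intensities and initial data below are illustrative choices of ours, and the jump terms L_1, L_2 are omitted, consistent with the interlacing strategy):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps = 1e-3, 5000
a1, b1, a2, b2 = 1.0, 0.5, 0.75, 0.25   # prey growth/predation, predator death/gain
s1, s2 = 0.2, 0.2                       # multiplicative noise intensities

u, v = 2.0, 1.0                         # strictly positive initial populations
path = [(u, v)]
for _ in range(n_steps):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    # exponential Euler step: multiplicative updates, so positivity is preserved
    # exactly, mirroring the fact that drift and noise both vanish at the origin
    u *= np.exp((a1 - b1 * v - 0.5 * s1**2) * dt + s1 * dW1)
    v *= np.exp((-a2 + b2 * u - 0.5 * s2**2) * dt + s2 * dW2)
    path.append((u, v))

arr = np.array(path)
assert arr.shape == (n_steps + 1, 2)
assert np.all(arr > 0.0)                # the whole sampled path stays strictly positive
```

A plain Euler-Maruyama step can produce negative iterates even when the continuous solution cannot, which is why a positivity-preserving scheme is the natural discrete analogue here.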
Proof. We apply the Itô formula to the functions ψ(u) := |u⁻|² and ψ(v) := |v⁻|².

A.1. Weak convergence. Let (E, d) be a complete, separable metric space and let B(E) denote its Borel σ-algebra. Let C_b(E) be the set of all real-valued, continuous, bounded functions on E, and let Pr(E) be the set of all probability measures on (E, B(E)).
In order to pass to the limit in Section 5, we apply the Skorokhod convergence theorem to the weakly convergent sequence (v_0^{n_k}, h_0^{n_k}, v^{n_k}, h^{n_k}, W_1, W_2, π_1, π_2)_{k=1}^∞ and obtain almost sure convergence on a new probability space. We invoke a modified version of the Skorokhod convergence theorem, stated next, which permits the noise in the resulting sequence (ṽ_0^{n_k}, h̃_0^{n_k}, ṽ^{n_k}, h̃^{n_k}, W̃_1^{n_k}, W̃_2^{n_k}, π̃_1^{n_k}, π̃_2^{n_k})_{k=1}^∞ on the new probability space to remain constant in k. This is essential for passing to the limit in the stochastic integral terms involving π_1 and π_2 (almost sure convergence of (π̃_1^{n_k})_{k=1}^∞ in the weak-# topology on N^#_{(0,∞)×E} is too limited for this purpose, because of the small class of test functions for the weak-# topology). For a proof of the following modified version of the Skorokhod convergence theorem, see [6].
Theorem A.2 (Skorokhod). Let (Ω, F, P) be a probability space and let E_1 and E_2 be two separable metric spaces. Let χ_n : Ω → E_1 × E_2, n ∈ N, be a family of random variables whose laws are weakly convergent on E_1 × E_2. Let p_1 : E_1 × E_2 → E_1 be the natural projection onto E_1, i.e., p_1(e_1, e_2) = e_1 for every (e_1, e_2) ∈ E_1 × E_2. Assume that p_1(χ_n) has the same law for every n ∈ N.
In Section 3.4 we use a characterization of convergence in probability from [27], which we recall here for convenience. Suppose that {Y_n}_{n≥0} is a sequence of E-valued random variables on a probability space (Ω, F, P) and let {µ_{m,n}}_{m,n≥0} be the collection of joint laws of {Y_n}_{n≥0}, i.e., µ_{m,n} is the law of the pair (Y_m, Y_n) on E × E.
The result characterizes convergence in probability for the sequence {Y_n}_{n≥0} in terms of weak convergence along subsequences of {µ_{m,n}}_{m,n≥0}.
Proposition 5 (Gyöngy–Krylov Theorem). A sequence of E-valued random variables {Y_n}_{n≥0} converges in probability if and only if for every subsequence of joint probability laws {µ_{m_k,n_k}}_{k≥0}, there exists a further subsequence that converges weakly to a probability measure µ such that µ({(x, y) ∈ E × E : x = y}) = 1.
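A typical way this criterion is applied (a schematic outline under standard assumptions, not a reproduction of the present paper's proof) is to combine tightness of the joint laws with a pathwise uniqueness property:

```latex
% Schematic use of the Gy\"ongy--Krylov criterion:
% 1. Tightness of \{\mu_{m,n}\} on E \times E (Prokhorov) gives, for any
%    subsequence, a further subsequence with \mu_{m_k, n_k} \rightharpoonup \mu.
% 2. Skorokhod's theorem produces a pair (\bar Y, \hat Y) with law \mu,
%    both components solving the equation with the same data and noise.
% 3. Pathwise uniqueness then forces \bar Y = \hat Y a.s., i.e.
\mu\bigl(\{(x, y) \in E \times E : x = y\}\bigr) = 1 ,
% so, by Proposition 5, Y_n converges in probability.
```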
We now recall a sufficient condition for L^p convergence that is used several times in this paper (a variant of the Vitali convergence theorem).
Lemma A.3. Let (Ω, F, P) be a probability space, let X be a Banach space and let p ∈ [1, ∞). Let f, f_1, f_2, … ∈ L^p(Ω, F, P; X) and suppose that
i. |f_n − f|_X → 0 in probability as n → ∞, and
ii. sup_{n≥1} E|f_n|_X^q < ∞ for some q ∈ (p, ∞).
Then f_n → f in the space L^p(Ω, F, P; X).
Proof. It suffices to show that the family {|f_n|_X^p}_{n≥1} is uniformly integrable, i.e., that for each ε > 0 there exists δ(ε) > 0 such that for every measurable A ⊂ Ω, ∫_A |f_n|_X^p dP < ε whenever P(A) < δ(ε). Indeed, by Hölder's inequality, ∫_A |f_n|_X^p dP ≤ (E|f_n|_X^q)^{p/q} P(A)^{(q−p)/q} ≤ C^{p/q} P(A)^{(q−p)/q}, where C := sup_{n≥1} E|f_n|_X^q. Therefore, if we choose δ(ε) = (ε C^{−p/q})^{q/(q−p)}, we obtain the desired result.
A.2. Tightness and the Skorokhod topology. We recall the notion of tightness in this section along with the Skorokhod topology.
Definition A.4. A set Π of Borel probability measures on a metric space (E, d) is said to be tight if for every ε > 0 there exists a compact set K_ε ⊆ E such that µ(K_ε^c) < ε for every µ ∈ Π.
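As a simple illustration (not part of the argument above): on E = ℝ, a uniform second-moment bound yields tightness via Chebyshev's inequality.

```latex
% If \sup_{\mu \in \Pi} \int_{\mathbb{R}} |x|^2 \, \mu(dx) \le M < \infty,
% then for K_\varepsilon := [-R, R] with R := (M/\varepsilon)^{1/2},
\mu(K_\varepsilon^c)
  = \mu\{\, |x| > R \,\}
  \le \frac{1}{R^2} \int_{\mathbb{R}} |x|^2 \, \mu(dx)
  \le \frac{M}{R^2}
  = \varepsilon ,
% uniformly in \mu \in \Pi, so \Pi is tight.
```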
Tightness is a compactness property of sets of probability measures in the topology of weak convergence on Pr(E). The exact relationship between tightness and weak convergence is described by the following theorem due to Prokhorov.
Proposition 6 (Prokhorov's Theorem). Let (E, d) be a complete, separable metric space. Then a set Π ⊂ Pr(E) is relatively compact in the topology of weak convergence if and only if it is tight.
A proof of Proposition 6 can be found in, e.g., Theorems 5.1 and 5.2 in [4]. We now turn to the specific case where (E, d) is a space of càdlàg functions in time. Let (S, ρ) be a separable and complete metric space and let D(0, T ; S) denote the set of S-valued càdlàg functions on [0, T ], i.e., the functions that are right-continuous and have a left-hand limit at every t ∈ [0, T ]. This space is endowed with the Skorokhod topology; we briefly describe the main facts about this topology that will be used here, and refer to, e.g., [4,23] for more detailed treatments. In view of Prokhorov's theorem (Proposition 6), it is useful to have a sufficient condition for tightness of a family of probability measures on D(0, T ; S). We will use the following condition related to tightness, which was introduced by Aldous in [1].
We can easily formulate a sufficient condition for (187) using Markov's inequality. Suppose that there exist constants α, β, C > 0 such that for every sequence (τ_n)_{n=1}^∞ of F_t-stopping times with τ_n ≤ T we have
sup_{n≥1} E[ρ(X_n(τ_n + t), X_n(τ_n))^α] ≤ C t^β for every t ≥ 0. (188)
Then (X_n)_{n=1}^∞ satisfies the Aldous condition (187). See Theorem 13.2 of [42] for a compactness result in the deterministic setting that is analogous to condition (188).
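The passage from the moment bound (188) to the Aldous condition (187) is a direct application of Markov's inequality: for every η > 0,

```latex
\mathbb{P}\bigl( \rho(X_n(\tau_n + t), X_n(\tau_n)) \ge \eta \bigr)
  \;\le\; \frac{\mathbb{E}\bigl[ \rho(X_n(\tau_n + t), X_n(\tau_n))^{\alpha} \bigr]}{\eta^{\alpha}}
  \;\le\; \frac{C\, t^{\beta}}{\eta^{\alpha}} ,
% which tends to 0 as t \to 0, uniformly in n.
```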
Lemma A.6. Let (S, ρ) be a complete, separable metric space and let X_n
For a proof of Lemma A.6 see, e.g., Theorem 3.2 in [33].