The fractional nonlocal Ornstein--Uhlenbeck equation, Gaussian symmetrization and regularity

For $0<s<1$, we consider the Dirichlet problem for the fractional nonlocal Ornstein--Uhlenbeck equation $$\begin{cases} (-\Delta+x\cdot\nabla)^su=f&\hbox{in}~\Omega\\ u=0&\hbox{on}~\partial\Omega, \end{cases}$$ where $\Omega$ is a possibly unbounded open subset of $\mathbb{R}^n$, $n\geq2$. The appropriate functional settings for this nonlocal equation and its corresponding extension problem are developed. We apply Gaussian symmetrization techniques to derive a concentration comparison estimate for solutions. As consequences, novel $L^p$ and $L^p(\log L)^\alpha$ regularity estimates in terms of the datum $f$ are obtained by comparing $u$ with half-space solutions.


Introduction
In the present paper we are interested in developing Gaussian symmetrization techniques and, as a consequence, in obtaining novel $L^p$ and $L^p(\log L)^\alpha$ regularity estimates for solutions to nonlocal equations driven by fractional powers of the Ornstein--Uhlenbeck (OU, for short) operator, subject to homogeneous Dirichlet boundary conditions. More precisely, we focus on problems of the form $$(-\Delta+x\cdot\nabla)^su=f~\hbox{in}~\Omega,\qquad u=0~\hbox{on}~\partial\Omega,\qquad\hbox{for}~0<s<1,\qquad(1.1)$$ where $\Omega$ is an open subset of $\mathbb{R}^n$ with $\gamma(\Omega)<1$. Here $\gamma$ denotes the Gaussian measure on $\mathbb{R}^n$, see (1.3).

Our problem (1.1) corresponds to a Markov process. Indeed, there is a stochastic process $Y_t$ whose generator is the fractional OU operator in (1.1) with homogeneous Dirichlet boundary condition. The process can be obtained as follows. We first kill an OU process $X_t$ at $\tau_\Omega$, the first exit time of $X_t$ from the domain $\Omega$, and denote the killed OU process by $X_t^\Omega$. Then we subordinate the killed OU process $X_t^\Omega$ with an $s$-stable subordinator $T_t$, so that $Y_t=X_{T_t}^\Omega$ is the resulting process (see for instance [6]). As explained in [16], (1.1) also arises in the context of nonlinear elasticity as the Signorini problem, or thin obstacle problem.

Nonlocal equations with fractional powers of the OU operator in $\Omega=\mathbb{R}^n$ have been studied in the past. Indeed, a Harnack inequality for nonnegative solutions was proved in [43]. Fractional isoperimetric problems and semilinear equations in infinite dimensions (Wiener space) have been considered in [35] and [36]. Fractional functional inequalities were recently analyzed in [15].
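In analytic terms, the subordination just described corresponds to the classical subordination formula, which we record here as a reminder (the rigorous functional setting is developed in Section 2). If $\eta_t^s$ denotes the density of the $s$-stable subordinator $T_t$, characterized by its Laplace transform, then the semigroup generated by the Dirichlet fractional OU operator is obtained by averaging the killed OU semigroup in time:

$$\int_0^\infty e^{-\lambda\tau}\,\eta_t^s(\tau)\,d\tau=e^{-t\lambda^s},\qquad\lambda>0,\qquad\Longrightarrow\qquad e^{-t(L_\Omega)^s}u=\int_0^\infty\eta_t^s(\tau)\,e^{-\tau L_\Omega}u\,d\tau.$$

This identity is consistent with the probabilistic construction above: running the killed process for the random time $T_t$ amounts to averaging its semigroup against the law of $T_t$.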
Symmetrization techniques in elliptic and parabolic PDEs are by now classical and efficient tools to derive optimal a priori estimates for solutions. The investigation in this direction started with the fundamental paper by H. Weinberger [49], see also [32]. The ideas were later fully formalized by G. Talenti in [44] for the homogeneous Dirichlet problem associated with a linear equation in divergence form with zero order term on a bounded domain of $\mathbb{R}^n$. In particular, [44] establishes a strong pointwise comparison between the Schwarz spherical rearrangement of the solution $u(x)$ to the original problem and the unique radial solution $v(|x|)$ of a suitable elliptic problem defined on a ball having the same measure as the original domain, with radial data. In turn, this kind of result allows one to obtain regularity estimates for solutions with optimal constants. When dealing with parabolic equations, any form of pointwise comparison between the solution $u(x,t)$ of an initial boundary value problem and the solution $v(|x|,t)$ of a related radial problem with respect to $x$ is in general no longer available. Indeed, in this case a weaker comparison result in integral form, the so-called mass concentration comparison (or comparison of concentrations), holds for all times $t>0$, see for instance [7,33]. For a detailed survey on this theory we refer the interested reader to [45].
Quite recently, symmetrization techniques have been successfully applied to a class of fractional nonlocal equations. More precisely, results in terms of symmetrization were obtained for equations driven by the fractional Dirichlet Laplacian, $(-\Delta_D)^su=f$, and by the fractional Neumann Laplacian in bounded domains of $\mathbb{R}^n$, for $0<s<1$. These equations arise in several important applications, see for example [2,16,40,42]. The fractional operators above are defined in terms of the corresponding eigenfunction expansions. Then the characterization provided by the extension problem of [41], via the Dirichlet-to-Neumann map for a (degenerate or singular) elliptic PDE, allows one to treat the above-mentioned problems with local techniques (we also refer the reader to [14] for the fractional Laplacian on $\mathbb{R}^n$ and to [26] for the most general extension result available, namely, for infinitesimal generators of integrated semigroups in Banach spaces). This information was essential to start a program regarding the applications of symmetrization in PDEs with fractional Laplacians. Indeed, the first paper in this direction was the seminal work [21] for the case of the fractional Dirichlet Laplacian. Those ideas were extended and enriched with many other applications to nonlinear fractional parabolic equations in [39,46,47]. When Neumann boundary conditions are assumed in fractional elliptic and parabolic problems, the symmetrization tools applied to the extension problem still lead to a comparison result, though of a different type, see [48].
It is important to notice that all the comparison results in the nonlocal setting we just mentioned are not pointwise in nature, but take the form of a mass concentration comparison. One explanation of this phenomenon lies in the fact that the symmetrization argument is applied to the extension problem with respect to the spatial variable $x$, freezing the extra extension variable $y>0$. In other words, a comparison for the solution to the extension problem is given in terms of the so-called Steiner symmetrization.
On the other hand, for elliptic equations involving the OU operator $L:=-\Delta+x\cdot\nabla$, the first comparison result through symmetrization, in pointwise form, was obtained in [9]. The symmetrization has to take into account the natural variational structure of the OU operator. Indeed, the Dirichlet problem for $L$ can be written in weighted divergence form as $$\begin{cases}-\mathrm{div}(\varphi\nabla u)=\varphi f&\hbox{in}~\Omega\\ u=0&\hbox{on}~\partial\Omega,\end{cases}\qquad(1.2)$$ where $\varphi=\varphi(x)$ is the density of the Gaussian measure $d\gamma$ with respect to the Lebesgue measure: $$d\gamma(x)=\varphi(x)\,dx:=(2\pi)^{-n/2}e^{-|x|^2/2}\,dx.\qquad(1.3)$$ The source term $f$ is then taken in a suitable class of weighted $L^p$ spaces. Moreover, the meaningful case is when $\Omega$ is an unbounded open set; here we assume $\gamma(\Omega)<1$.
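The variational structure just mentioned can be checked directly: with the normalization $d\gamma=(2\pi)^{-n/2}e^{-|x|^2/2}\,dx$, which is the natural one for $L=-\Delta+x\cdot\nabla$, one has $\nabla\varphi(x)=-x\,\varphi(x)$, hence

$$-\frac{1}{\varphi}\,\mathrm{div}\big(\varphi\nabla u\big)=-\Delta u-\frac{\nabla\varphi}{\varphi}\cdot\nabla u=-\Delta u+x\cdot\nabla u=Lu.$$

In particular, $\int_\Omega(Lu)\,v\,d\gamma=\int_\Omega\nabla u\cdot\nabla v\,d\gamma$ for smooth $u,v$ vanishing on $\partial\Omega$, which is the weighted variational structure exploited throughout the paper.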
Hence, the comparison result must be obtained through Gaussian symmetrization instead of the usual Schwarz symmetrization. In this setting, one of the main tools in the proof is the Gaussian isoperimetric inequality, which states that among all measurable subsets of $\mathbb{R}^n$ with prescribed Gaussian measure, the half-space minimizes the Gaussian perimeter. It then becomes rather intuitive to guess that the Schwarz spherical rearrangement of a function (a special radial, decreasing function), appearing in the comparison results in the Lebesgue setting, should now be replaced by the rearrangement with respect to the Gaussian measure. The latter is a particular increasing function, depending only on one variable, defined on a half-space (see Subsection 2.3 for definitions and related properties).

The authors of [9] were able to apply this powerful machinery to compare the solution $u$ (in the sense of rearrangement) to (1.2) with the solution $v$ to the problem $$\begin{cases}-\mathrm{div}(\varphi\nabla v)=\varphi f^\star&\hbox{in}~\Omega^\star\\ v=0&\hbox{on}~\partial\Omega^\star,\end{cases}\qquad(1.4)$$ where $\Omega^\star$ is a half-space having the same Gaussian measure as $\Omega$ and $f^\star$ is the $n$-dimensional Gaussian rearrangement of $f$. The solution $v$ to (1.4) (parallel to the classical case described in [44]) can be written explicitly, allowing the authors to derive the sharp a priori pointwise estimate $$u^\star(x)\le v(x),\qquad x\in\Omega^\star.$$ This was the starting point to obtain regularity results for $u$ in Lorentz--Zygmund spaces. Generalizations of this result for elliptic and parabolic problems involving elliptic operators in divergence form which are degenerate with respect to the Gaussian measure are contained in [17,20], see also references therein.

Our main concern is to get sharp estimates for the solution $u$ to (1.1) by comparing it with the solution $\psi$ to the problem $$\begin{cases}L^s\psi=f^\star&\hbox{in}~\Omega^\star\\ \psi=0&\hbox{on}~\partial\Omega^\star.\end{cases}\qquad(1.5)$$ As our previous discussion evidences, (1.5) is actually a one-dimensional problem. Our idea that yields the desired result reads as follows. Using the main extension result of [41], we can characterize the fractional OU operator $L^s$ in (1.1) as a suitable Dirichlet-to-Neumann map.
This allows us to obtain the solution $u$ to (1.1) as the trace on $\Omega$ of the solution $w=w(x,y)$ of the following degenerate elliptic boundary value problem, which will be called the extension problem associated to (1.1): $$\begin{cases}\mathrm{div}_{x,y}\big(y^a\varphi(x)\nabla_{x,y}w\big)=0&\hbox{in}~\mathcal{C}_\Omega\\ w=0&\hbox{on}~\partial_L\mathcal{C}_\Omega\\ -\lim_{y\to0^+}y^a\partial_yw=c_sf&\hbox{on}~\Omega,\end{cases}\qquad(1.6)$$ where $c_s>0$ is an explicit constant depending only on $s$. Here $$a:=1-2s\in(-1,1),\qquad(1.7)$$ while $\mathcal{C}_\Omega:=\Omega\times(0,\infty)$ is the infinite cylinder of basis $\Omega$, and $\partial_L\mathcal{C}_\Omega:=\partial\Omega\times[0,\infty)$ is its lateral boundary. In a similar way, the solution $\psi$ to (1.5) can be seen as the trace over $\Omega^\star$ of the solution $v=v(x,y)$ of the analogous extension problem posed on $$\mathcal{C}_\Omega^\star:=\Omega^\star\times(0,\infty),\qquad(1.9)$$ with lateral boundary $\partial_L\mathcal{C}_\Omega^\star:=\partial\Omega^\star\times[0,\infty)$, which we label (1.8). Therefore, the problem reduces to looking for a mass concentration comparison between the solution $w$ to (1.6) and the solution $v$ to (1.8). More precisely, we prove that $$\int_0^rw^\circledast(\sigma,y)\,d\sigma\le\int_0^rv^\circledast(\sigma,y)\,d\sigma,\qquad\hbox{for all}~r\in[0,\gamma(\Omega)],\qquad(1.10)$$ where, for all $y\ge0$, the functions $w^\circledast(\cdot,y)$ and $v^\circledast(\cdot,y)$ are the one-dimensional Gaussian rearrangements of $w(\cdot,y)$ and $v(\cdot,y)$, respectively. The key role in this framework is played by a novel second order derivation formula for functions defined by integrals, see Corollary 2.13, whose proof presents new nontrivial technical difficulties owed to the Gaussian framework. As a consequence, we will obtain $L^p$ and $L^p(\log L)^\alpha$ estimates for $u$ in terms of $f$.

The paper is organized as follows. Section 2 contains the preliminaries needed for the development of our results. In particular, we briefly describe some basic properties of the Gaussian measure and the OU semigroup. Moreover, we carefully develop a full and self-contained analysis of the main functional setting where problems (1.1) and (1.6) are posed. Section 2 ends with the introduction of the basic definitions and properties of symmetrization with respect to the Gaussian measure. In this regard, we will present the proof of the derivation formula stated in Theorem 2.12, whose consequence is the above-mentioned second order differentiation formula, see Corollary 2.13. Section 3 is entirely devoted to the proof of the comparison (1.10), that is, our main result Theorem 3.1.
In Section 4 we present our novel Gaussian--Zygmund $L^p(\log L)^\alpha(\Omega,\gamma)$ and $L^p(\Omega,\gamma)$ regularity estimates for solutions $u$ in terms of the datum $f$, see Theorem 4.3. More precisely, our main result (Theorem 3.1) is combined with $L^p(\log L)^\alpha$ regularity estimates for the solution $\psi$ to problem (1.5), which are obtained by using the explicit form of $\psi$ in terms of the fractional integral $L^{-s}(f^\star)$ and the OU semigroup. Finally, in the Appendix we shall use suitable estimates of the Mehler kernel to exhibit a semigroup-based proof of the regularity estimates when the datum $f$ belongs to the smaller Gaussian--Lebesgue space $L^p(\Omega,\gamma)$.

Preliminaries, functional setting, and the second order derivation formula
In this section we recall the basic tools we are going to use in the proof of our main comparison result, Theorem 3.1, and its consequences. First, we introduce some basics about Gaussian analysis and the OU semigroup. Then the functional background necessary to make precise the fractional nonlocal equations (1.1) and (1.5), and their extension problems (1.6) and (1.8), will be developed. Finally, after presenting definitions and properties of rearrangement techniques in the Gaussian framework, we will prove our novel second order derivation formula, see Theorem 2.12 and Corollary 2.13.

Let $\Omega$ be an open subset of $\mathbb{R}^n$, possibly unbounded. We denote by $H^1(\Omega,\gamma)$ the Sobolev space with respect to the Gaussian measure, which is obtained as the completion of $C^\infty(\Omega)$ with respect to the norm $$\|u\|_{H^1(\Omega,\gamma)}:=\Big(\int_\Omega\big(u^2+|\nabla u|^2\big)\,d\gamma\Big)^{1/2}.$$ By $H_0^1(\Omega,\gamma)$ we denote the closure of $C_c^\infty(\Omega)$ in the norm of $H^1(\Omega,\gamma)$. The following Poincaré inequality holds (see for instance [22]): if $\gamma(\Omega)<1$ then there exists a constant $C_\Omega>0$ such that $$\int_\Omega u^2\,d\gamma\le C_\Omega\int_\Omega|\nabla u|^2\,d\gamma,\qquad\hbox{for all}~u\in H_0^1(\Omega,\gamma).\qquad(2.1)$$

One of the main tools to prove the comparison result is the Gaussian isoperimetric inequality. Let us define the perimeter with respect to the Gaussian measure as $$P_\gamma(E):=\int_{\partial E}\varphi(x)\,d\mathcal{H}^{n-1}(x),$$ where $E$ is a set of locally finite perimeter and $\partial E$ denotes its reduced boundary. As usual, $\mathcal{H}^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure. It is well known (see [13]) that among all measurable sets of $\mathbb{R}^n$ with prescribed Gaussian measure, the half-spaces have the smallest perimeter. More precisely, we have $$P_\gamma(E)\ge\frac{1}{\sqrt{2\pi}}e^{-\lambda^2/2},\qquad\hbox{where}~\lambda~\hbox{is such that}~\Phi(\lambda)=\gamma(E),\qquad(2.2)$$ with $$\Phi(\lambda):=\frac{1}{\sqrt{2\pi}}\int_\lambda^\infty e^{-t^2/2}\,dt.\qquad(2.3)$$

We next recall some basic facts about the OU semigroup (see [5,11] for further details) which will turn out to be useful in the following. The solution to the Cauchy problem $$\begin{cases}\partial_tv+Lv=0&\hbox{in}~\mathbb{R}^n\times(0,\infty)\\ v(\cdot,0)=g&\hbox{on}~\mathbb{R}^n\end{cases}$$ is given by the OU semigroup, $v(x,t)=e^{-tL}g(x)$. It is a classical fact that such a semigroup can be expressed in terms of a suitable integral kernel.
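As a consistency check, the half-space $H_\lambda=\{x\in\mathbb{R}^n:x_1>\lambda\}$ realizes the isoperimetric bound: its reduced boundary is the hyperplane $\{x_1=\lambda\}$, and the Gaussian factors in the remaining $n-1$ variables integrate to one, so that

$$P_\gamma(H_\lambda)=\int_{\{x_1=\lambda\}}(2\pi)^{-n/2}e^{-|x|^2/2}\,d\mathcal{H}^{n-1}(x)=\frac{e^{-\lambda^2/2}}{\sqrt{2\pi}}\prod_{j=2}^n\int_{\mathbb{R}}\frac{e^{-x_j^2/2}}{\sqrt{2\pi}}\,dx_j=\frac{e^{-\lambda^2/2}}{\sqrt{2\pi}},$$

while $\gamma(H_\lambda)$ equals the Gaussian measure of $\{x_1>\lambda\}$; thus equality holds in the Gaussian isoperimetric inequality precisely for half-spaces.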
More precisely, if $g\in L^p(\mathbb{R}^n,\gamma)$, for $1\le p\le\infty$, then $$e^{-tL}g(x)=\int_{\mathbb{R}^n}M_t(x,y)g(y)\,d\gamma(y),\qquad x\in\mathbb{R}^n,~t>0.\qquad(2.4)$$ Here $M_t(x,y)$ is the so-called Mehler kernel, which is defined by $$M_t(x,y):=\big(1-e^{-2t}\big)^{-n/2}\exp\Big(-\frac{e^{-2t}\big(|x|^2+|y|^2\big)-2e^{-t}x\cdot y}{2\big(1-e^{-2t}\big)}\Big).\qquad(2.5)$$ We recall that $$\int_{\mathbb{R}^n}M_t(x,y)\,d\gamma(y)=1,\qquad\hbox{for all}~x\in\mathbb{R}^n,~t>0,\qquad(2.6)$$ and that if $g\in L^p(\mathbb{R}^n,\gamma)$, $1\le p<\infty$, then $e^{-tL}g\to g$ in $L^p(\mathbb{R}^n,\gamma)$ as $t\to0^+$. (2.7)

It is standard to define the OU semigroup on a domain $\Omega$ of $\mathbb{R}^n$ subject to homogeneous Dirichlet boundary conditions. Indeed, the solution to the Cauchy--Dirichlet problem $$\begin{cases}\partial_t\eta+L\eta=0&\hbox{in}~\Omega\times(0,\infty)\\ \eta=0&\hbox{on}~\partial\Omega\times(0,\infty)\\ \eta(\cdot,0)=f&\hbox{on}~\Omega\end{cases}\qquad(2.8)$$ is given by the semigroup generated by the OU operator in $\Omega$ with Dirichlet boundary conditions, $\eta(x,t)=e^{-tL_\Omega}f(x)$. It follows from standard parabolic regularity theory that $\eta$ is smooth in $\Omega\times(0,\infty)$. Now, let us choose $\Omega=H:=\{x\in\mathbb{R}^n:x_1>0\}$ and let $\tilde f$ denote the odd extension of $f$ with respect to the variable $x_1$. Observe that for $1\le p<\infty$ we have $\|\tilde f\|_{L^p(\mathbb{R}^n,\gamma)}^p=2\|f\|_{L^p(H,\gamma)}^p$. It is not difficult to check (see for example [37]) that in this case the semigroup associated to (2.8) is obtained as the restriction to $H$ of the OU semigroup on $\mathbb{R}^n$ applied to $\tilde f$. Moreover, using the expression of the OU semigroup in terms of the Mehler kernel (2.4), an explicit formula for this semigroup holds in dimension $n=1$.

The fractional nonlocal OU equation and the extension problem. We introduce now an appropriate functional setting, which is essential when dealing with problems (1.1) and (1.6).
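As a quick numerical sanity check of the OU semigroup just recalled (a sketch, assuming the probabilist normalization $L=-\Delta+x\cdot\nabla$ with invariant measure $d\gamma=(2\pi)^{-1/2}e^{-x^2/2}\,dx$ in dimension one; the function and variable names are ours), one can verify the eigenfunction relation $e^{-tL}\mathrm{He}_k=e^{-kt}\mathrm{He}_k$ for the probabilist's Hermite polynomials by Gauss--Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

def ou_semigroup_1d(f, x, t, nodes=80):
    # e^{-tL} f(x) = E[ f(e^{-t} x + sqrt(1 - e^{-2t}) Z) ], Z ~ N(0, 1),
    # computed by Gauss-Hermite quadrature for the weight e^{-z^2/2}.
    z, w = hermegauss(nodes)
    w = w / np.sqrt(2.0 * np.pi)      # normalize weights to the Gaussian measure
    r = np.exp(-t)
    return np.sum(w * f(r * x + np.sqrt(1.0 - r * r) * z))

# He_3(x) = x^3 - 3x is an eigenfunction of L = -d^2/dx^2 + x d/dx with eigenvalue 3
He3 = HermiteE([0, 0, 0, 1])
x, t = 0.7, 0.4
lhs = ou_semigroup_1d(He3, x, t)
rhs = np.exp(-3.0 * t) * He3(x)       # eigenvalue relation: e^{-tL} He_3 = e^{-3t} He_3
print(abs(lhs - rhs))                  # agreement up to quadrature/rounding error
```

The conservation property (2.6) corresponds, after normalization, to the quadrature weights summing to one, which is why the semigroup applied to the constant function returns the same constant.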
In order to define the fractional powers $L^su$, $0<s<1$, we consider the sequence of eigenvalues $0<\lambda_1\le\lambda_2\le\cdots\le\lambda_k\nearrow\infty$ and the corresponding orthonormal basis of Dirichlet eigenfunctions $\{\psi_k\}_{k\ge1}$ of $L$ in $L^2(\Omega,\gamma)$, see for example [10]. In other words, for every $k\ge1$, $\psi_k\in L^2(\Omega,\gamma)$ is a weak solution to the Dirichlet problem $$L\psi_k=\lambda_k\psi_k~\hbox{in}~\Omega,\qquad\psi_k=0~\hbox{on}~\partial\Omega.$$ Now, let us define the Hilbert space $$H^s(\Omega,\gamma):=\Big\{u=\sum_{k\ge1}u_k\psi_k\in L^2(\Omega,\gamma):\sum_{k\ge1}\lambda_k^su_k^2<\infty\Big\},\qquad u_k:=\int_\Omega u\,\psi_k\,d\gamma,$$ with scalar product $$\langle u,v\rangle_{H^s(\Omega,\gamma)}:=\sum_{k\ge1}\lambda_k^su_kv_k.$$ Then the norm in $H^s(\Omega,\gamma)$ is given by $\|u\|_{H^s(\Omega,\gamma)}=\big(\sum_{k\ge1}\lambda_k^su_k^2\big)^{1/2}$. For $u\in H^s(\Omega,\gamma)$, we define $L^su$ as the element in the dual space $H^s(\Omega,\gamma)'$ given through the formula $$\langle L^su,v\rangle:=\sum_{k\ge1}\lambda_k^su_kv_k.$$ That is, for any function $v\in H^s(\Omega,\gamma)$ we have $\langle L^su,v\rangle=\langle u,v\rangle_{H^s(\Omega,\gamma)}$. This identity can be rewritten as $$\langle L^su,v\rangle=\int_\Omega L^{s/2}u\,L^{s/2}v\,d\gamma,$$ where $L^{s/2}$ is defined by taking the power $s/2$ of the eigenvalues $\lambda_k$.
Remark 2.1 (The fractional OU operator is a nonlocal operator). By using the method of semigroups as in [41], see also [16,42,43], it can be seen that the fractional operator $L^s$ is a nonlocal operator. Indeed, we have the semigroup and kernel formulas $$L^su(x)=\frac{1}{\Gamma(-s)}\int_0^\infty\big(e^{-tL_\Omega}u(x)-u(x)\big)\,\frac{dt}{t^{1+s}}=\mathrm{PV}\int_\Omega\big(u(x)-u(y)\big)K_s(x,y)\,d\gamma(y)+B_s(x)u(x),$$ where PV means that the integral is taken in the principal value sense. Here $e^{-tL_\Omega}$ is the semigroup generated by $L$ in $\Omega$ with Dirichlet boundary conditions, $H_t(x,y)$ is the corresponding heat kernel, and $$K_s(x,y):=\frac{1}{|\Gamma(-s)|}\int_0^\infty H_t(x,y)\,\frac{dt}{t^{1+s}},\qquad B_s(x):=\frac{1}{|\Gamma(-s)|}\int_0^\infty\Big(1-\int_\Omega H_t(x,y)\,d\gamma(y)\Big)\frac{dt}{t^{1+s}}.$$ In the particular case of $\Omega=\mathbb{R}^n$, we have $H_t(x,y)=M_t(x,y)$, the Mehler kernel, and, as a direct consequence of (2.6), we see that $B_s(x)\equiv0$. Though this description is important, we will not use it here. Instead, we will apply the extension technique.
Recalling the notation in (1.7), we define the Sobolev energy space $H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)$ on the infinite cylinder $\mathcal{C}_\Omega$ as the completion of the smooth functions on $\overline{\mathcal{C}_\Omega}$ with bounded support vanishing on the lateral boundary $\partial_L\mathcal{C}_\Omega$, with respect to the norm $$\|w\|_{H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)}:=\Big(\int\!\!\int_{\mathcal{C}_\Omega}y^a|\nabla_{x,y}w|^2\,d\gamma(x)\,dy\Big)^{1/2},$$ which is actually the norm defined through the scalar product $$\langle w,\xi\rangle:=\int\!\!\int_{\mathcal{C}_\Omega}y^a\,\nabla_{x,y}w\cdot\nabla_{x,y}\xi\,d\gamma(x)\,dy.$$ The following theorem is a particular case of [41, Theorem 1.1], see also [16,26,43]. It provides the characterization of $L^su$ as the Dirichlet-to-Neumann map for a degenerate elliptic extension problem in the upper cylinder $\mathcal{C}_\Omega$, for any $u\in H^s(\Omega,\gamma)$. As the solution $w(x,y)$ is explicitly given by (2.13) and (2.16), the proof is just a verification of the statements, see for example [41,42].
Theorem 2.2 (Extension problem, [41, Theorem 1.1]). Let $u=\sum_{k\ge1}u_k\psi_k\in H^s(\Omega,\gamma)$ and define $$w(x,y):=\frac{2^{1-s}}{\Gamma(s)}\sum_{k\ge1}u_k\big(\lambda_k^{1/2}y\big)^sK_s\big(\lambda_k^{1/2}y\big)\psi_k(x),\qquad(2.13)$$ for $y\ge0$, where $K_s$ is the modified Bessel function of the second kind and order $0<s<1$. Then $w\in H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)$ and it is the unique weak solution to the extension problem $$\begin{cases}\mathrm{div}_{x,y}\big(y^a\varphi(x)\nabla_{x,y}w\big)=0&\hbox{in}~\mathcal{C}_\Omega\\ w=0&\hbox{on}~\partial_L\mathcal{C}_\Omega\\ -\lim_{y\to0^+}y^a\partial_yw=c_sL^su&\hbox{on}~\Omega,\end{cases}\qquad(2.14)$$ that vanishes weakly as $y\to\infty$, where $c_s>0$ is an explicit constant depending only on $s$. More precisely, $$\int\!\!\int_{\mathcal{C}_\Omega}y^a\,\nabla_{x,y}w\cdot\nabla_{x,y}\xi\,d\gamma(x)\,dy=0$$ for all test functions $\xi\in H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)$ with zero trace over $\Omega$, $\mathrm{tr}_\Omega\,\xi=0$, and $$-\lim_{y\to0^+}y^a\partial_yw(\cdot,y)=c_sL^su\qquad\hbox{in}~L^2(\Omega,\gamma).$$ Furthermore, the function $w$ is the unique minimizer of the energy functional in (2.15)--(2.16). Finally, an energy identity holds, relating the weighted Dirichlet energy of $w$ on $\mathcal{C}_\Omega$ to $\|u\|_{H^s(\Omega,\gamma)}^2$.

Theorem 2.2 shows in particular that the domain $H^s(\Omega,\gamma)$ is contained in the range of the trace operator on $H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)$ at $y=0$. The next lemma shows that these two spaces actually coincide.
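It may help to record why modified Bessel functions appear in the extension construction: separating variables $w(x,y)=\sum_{k\ge1}u_k\,\phi_k(y)\,\psi_k(x)$ in the extension equation forces each profile $\phi_k$ to solve a Bessel-type ODE (a sketch, with the normalization of [41]):

$$\phi_k''+\frac{a}{y}\,\phi_k'=\lambda_k\phi_k,\qquad\phi_k(0^+)=1,\qquad\phi_k(\infty)=0\qquad\Longrightarrow\qquad\phi_k(y)=\frac{2^{1-s}}{\Gamma(s)}\big(\lambda_k^{1/2}y\big)^sK_s\big(\lambda_k^{1/2}y\big).$$

Indeed, $y^sI_s(\lambda_k^{1/2}y)$ and $y^sK_s(\lambda_k^{1/2}y)$ span the solution space of the ODE, only the $K_s$ branch decays at infinity, and the small-argument asymptotics $K_s(z)\sim\frac{\Gamma(s)}{2}\big(\frac{z}{2}\big)^{-s}$ as $z\to0^+$ fixes the constant so that $\phi_k(0^+)=1$.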

Lemma 2.3 (Trace inequality). We have $$\|\mathrm{tr}_\Omega\,\xi\|_{H^s(\Omega,\gamma)}\le C_s\|\xi\|_{H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)},\qquad\hbox{for all}~\xi\in H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady),\qquad(2.18)$$ for a constant $C_s>0$ depending only on $s$. In particular, equality holds in (2.18) when $\xi$ is the extension of its own trace. Indeed, given $u=\mathrm{tr}_\Omega\,\xi\in H^s(\Omega,\gamma)$, define the function $w$ as in (2.13). It is readily checked that $w$ satisfies (2.14), so it minimizes the functional $\mathcal{F}$ in (2.15), and the conclusion follows by (2.17).

Proposition 2.4 (Compactness of the trace). The trace operator is compact, that is, $\mathrm{tr}_\Omega\big(H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)\big)\subset\subset L^2(\Omega,\gamma)$.

Proof. We need to check that the trace operator $\mathrm{tr}_\Omega:H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)\to L^2(\Omega,\gamma)$ is the operator-norm limit of finite rank operators. Consider the finite rank operators $T_j$, $j\ge1$, defined by projecting the trace onto the span of the first $j$ eigenfunctions $\psi_1,\ldots,\psi_j$. By using (2.18) and the fact that $\lambda_k\nearrow\infty$, the remainder of the expansion is uniformly small in operator norm. Therefore $T_j$ converges to $\mathrm{tr}_\Omega$ in the operator norm, as $j\to\infty$, and $\mathrm{tr}_\Omega$ is compact.
Using the previous preliminaries, it is natural to give the following definitions of weak solutions.
Remark 2.7. If we assume that $f$ is in the dual space $H^s(\Omega,\gamma)'$, it is clear that the right-hand side in (2.19) must be replaced by the duality pairing $\langle f,v(\cdot,0)\rangle$. Then the (unique) solution $u$ to (1.1) will again be the trace over $\Omega$ of the unique solution $w$ to the extension problem (1.6).
The following is just a restatement of Theorem 2.2, see [41, Theorem 1.1] and also [26].
Theorem 2.8 (Extension problem for negative powers). Given $f\in L^2(\Omega,\gamma)$, let $u\in H^s(\Omega,\gamma)$ be the unique solution to problem (1.1). The solution $w$ (see (2.13)) to the extension problem (2.14) can be written as $$w(x,y)=\frac{1}{\Gamma(s)}\int_0^\infty e^{-y^2/(4t)}\,e^{-tL_\Omega}f(x)\,\frac{dt}{t^{1-s}}.$$ In particular, this is the unique weak solution to (1.6), and $u=\mathrm{tr}_\Omega\,w=L^{-s}f$.

The domain $H^s(\Omega,\gamma)$ of the fractional nonlocal operator $L^s$ can be characterized as a suitable interpolation space between two Hilbert spaces. Indeed, using the abstract discrete version of the J-Theorem (see for example the Appendix in [12]), it is straightforward to prove that $$H^s(\Omega,\gamma)=\big(H_0^1(\Omega,\gamma),L^2(\Omega,\gamma)\big)_{1-s},\qquad(2.22)$$ where the space on the right-hand side of (2.22) is the real interpolation space between $H_0^1(\Omega,\gamma)$ and $L^2(\Omega,\gamma)$. Then $H^{1/2}(\Omega,\gamma)$ may be seen as the equivalent of the Lions--Magenes space $H_{00}^{1/2}(\Omega)$ in the Gaussian setting.

Gaussian rearrangements.
We give the notion of rearrangement with respect to the Gaussian measure. For further details, we refer the interested reader to the classical monographs [8] and [19]. If $u$ is a measurable function in $\Omega$, we denote by

• $u^\circledast$ the one-dimensional decreasing rearrangement of $u$ with respect to the Gaussian measure (also called the one-dimensional Gaussian rearrangement of $u$): $$u^\circledast(r):=\inf\{t\ge0:\gamma_u(t)\le r\},\qquad r\in(0,\gamma(\Omega)],$$ where $\gamma_u(t)=\gamma(\{x\in\Omega:|u(x)|>t\})$ is the distribution function of $u$;

• $u^\star$ the $n$-dimensional rearrangement of $u$ with respect to the Gaussian measure: $$u^\star(x):=u^\circledast\big(\Phi(x_1)\big),\qquad x\in\Omega^\star,$$ where $\Omega^\star=\{x=(x_1,\ldots,x_n)\in\mathbb{R}^n:x_1>\lambda\}$ is the half-space such that $\gamma(\Omega^\star)=\gamma(\Omega)$ and $\Phi$ is given by (2.3).
By definition, $u^\star$ is a function which depends only on the first variable $x_1$; it is increasing and its level sets are half-spaces. Moreover, $u$, $u^\circledast$ and $u^\star$ have the same distribution function. This implies that the Gaussian $L^p$ norm is invariant under these rearrangements: $$\|u\|_{L^p(\Omega,\gamma)}=\|u^\circledast\|_{L^p(0,\gamma(\Omega))}=\|u^\star\|_{L^p(\Omega^\star,\gamma)},\qquad1\le p\le\infty.$$ If $u$ is defined on a half-space and $u=u^\star$, we sometimes say that $u$ is rearranged. Furthermore, if $u$ and $v$ are measurable functions then the following Hardy--Littlewood inequality holds: $$\int_\Omega|u\,v|\,d\gamma\le\int_0^{\gamma(\Omega)}u^\circledast(\sigma)\,v^\circledast(\sigma)\,d\sigma.\qquad(2.23)$$ If $u$ is defined on $\Omega$, $v$ on $\Omega^\star$, and the estimate $$\int_0^ru^\circledast(\sigma)\,d\sigma\le\int_0^rv^\circledast(\sigma)\,d\sigma,\qquad\hbox{for all}~r\in[0,\gamma(\Omega)],\qquad(2.24)$$ holds, the inequality (2.24) is called a mass concentration inequality (or comparison of mass concentrations).
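The invariance of the $L^p$ norms is an instance of the layer cake (Cavalieri) principle: only the distribution function enters, so

$$\int_\Omega|u|^p\,d\gamma=\int_0^\infty p\,t^{p-1}\gamma_u(t)\,dt=\int_0^\infty p\,t^{p-1}\big|\{r\in(0,\gamma(\Omega)):u^\circledast(r)>t\}\big|\,dt=\int_0^{\gamma(\Omega)}\big(u^\circledast(r)\big)^p\,dr,$$

where the middle equality uses that $u^\circledast$, with respect to the Lebesgue measure on $(0,\gamma(\Omega))$, has the same distribution function as $u$ with respect to $\gamma$.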
If $v=v^\star$ and (2.24) occurs, we also say that $u^\star$ is less concentrated than $v$, and we write $u^\star\prec v$. Moreover, (2.24) implies that (see for instance [18]) $$\|u\|_{L^p(\Omega,\gamma)}\le\|v\|_{L^p(\Omega^\star,\gamma)},\qquad1\le p\le\infty.$$

We will often deal with two-variable functions which are measurable with respect to $x$. In such a case it will be convenient to consider the so-called Gaussian Steiner symmetrization of $\mathcal{C}_\Omega$ with respect to the variable $x$, namely, the set $\mathcal{C}_\Omega^\star$ as defined in (1.9). In addition (see for instance [17,22]), given a function $w=w(x,y)$ as in (2.25), we will denote by $\gamma_w(t,y)$ and $w^\circledast(r,y)$ the distribution function and the one-dimensional Gaussian decreasing rearrangement of $w(\cdot,y)$, with respect to $x$, for each fixed $y$. We will also define the function $$w^\star(x,y):=w^\circledast\big(\Phi(x_1),y\big),\qquad(x,y)\in\mathcal{C}_\Omega^\star,$$ which is called the Gaussian Steiner symmetrization of $w$ with respect to $x$. Clearly, for any fixed $y$, $w^\star(\cdot,y)$ is an increasing function depending only on $x_1$.

Now we recall a result that we will use in the proof of our main comparison result in Section 3. Equivalently, we need to derive the Gaussian version of the first and second order differentiation formulas established for the Lebesgue measure in [4,7,25,33]. The first order differentiation formula can be stated as follows.

Proposition 2.10 (See [17], also [38]). If $w\in H^1(0,T;L^2(\Omega,\gamma))$ is a nonnegative function, for some $T>0$, then $w^\circledast\in H^1\big(0,T;L^2(0,\gamma(\Omega))\big)$. In addition, if $\gamma(\{w(x,t)=w^\circledast(r,t)\})=0$ for a.e. $(r,t)\in(0,\gamma(\Omega))\times(0,T)$, then the following derivation formula holds: $$\frac{\partial}{\partial t}\int_0^rw^\circledast(\sigma,t)\,d\sigma=\int_{\{x:\,w(x,t)>w^\circledast(r,t)\}}\frac{\partial w}{\partial t}(x,t)\,d\gamma(x).\qquad(2.26)$$

In order to prove our novel second order derivation formula, we need the following version of the coarea formula (see [23] and [28, Theorem 11]).

Proposition 2.11. If $u\in W_{loc}^{1,p}(\mathbb{R}^n)$, with $p>1$, and $\psi:\mathbb{R}^n\to\mathbb{R}$ is a nonnegative measurable function, then there exists a representative of $u$, denoted again by $u$, such that $$\int_{\mathbb{R}^n}\psi(x)|\nabla u(x)|\,dx=\int_{-\infty}^{+\infty}\Big(\int_{\{u=t\}}\psi(x)\,d\mathcal{H}^{n-1}(x)\Big)\,dt.\qquad(2.27)$$

Now we present our new Gaussian derivation formulas, which are a nonstandard adaptation of the derivation formula exhibited in [25].
Since $f\in L^2(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)$ and $f$ is continuous, by Fubini's theorem we have that, for a.e. $y>0$, $\Psi(t)<\infty$ for all $t\ge0$, and then (2.35) follows. Since the function $\Psi$ is monotone, it is also differentiable almost everywhere, and then (2.33) holds.

Now let us evaluate the second term in (2.32). First we consider the case $\bar y>y$. For $\bar y$ sufficiently close to $y$, the relevant set is $$D_3=\Big\{x\in\Omega:w(x,\bar y)>t\ge w(x,y),~\frac{\partial w}{\partial y}(x,y)>0\Big\}.$$
In a neighborhood $B_r(x,y,t)$ of a point $(x,y,t)\in\mathbb{R}^{n+2}$ with $x\in\Gamma_t$, the equality $w(x,y)=t$ implicitly defines a function $y=v(x,t)$ such that $w(x,v(x,t))=t$, and for $\bar y$ sufficiently close to $y$ the level sets can be described through $v$. Observe that the implicit function theorem gives $$|\nabla_xv(x,t)|=|\nabla_xw(x,y)|\Big/\frac{\partial w}{\partial y}(x,y).$$ Then, using the coarea formula (2.27), we obtain the desired identity. In the same way we can prove the analogues of (2.37) and (2.38) with $\triangle_2$ replaced by $\triangle_3$. Then, putting together (2.31) and (2.39), we obtain assertion (ii).
By recalling that the rearrangement w ⊛ of a function w is the generalized inverse function of the distribution function γ w , and applying the chain rule formula, we can prove our novel derivation formula.
Corollary 2.13 (Gaussian second order derivation formula). Under the assumptions of Theorem 2.12, for a.e. $y\in(\varepsilon,T)$ the second order derivation formula (2.40) holds. In particular, if $w(x,y)$ is $C^1$ and the functions $w(x,y)\varphi(x)$, $\frac{\partial}{\partial y}w(x,y)\varphi(x)$ are Lipschitz in $y\in(\varepsilon,T)$, uniformly with respect to $x\in\Omega$, then formula (2.41) holds, whose leading term is $$\int_{\{x:\,w(x,y)>w^\circledast(r,y)\}}\frac{\partial^2}{\partial y^2}w(x,y)\,d\gamma(x).$$

Proof. In order to prove (2.40) we need to evaluate the $y$-derivative of $H(t,y)$ at $t=w^\circledast(r,y)$; this follows from a rearrangement property (see for example [8]) combined with the chain rule.

Remark 2.14. The sum of the last two terms on the right-hand side of (2.41) is nonpositive, see [3, Remark 2.8].
The following lemma shows that we can actually apply the second order derivation formula (2.41) to the solution $w$ to the extension problem (1.6), namely, when $w=P_y^su$ is the extension of the solution $u\in H^s(\Omega,\gamma)$ to the linear problem (1.1).

Lemma 2.15. If $f\in L^2(\Omega,\gamma)$, then the second order derivation formula (2.41) can be applied to the solution $w$ to problem (1.6).
Proof. Since $w\in C^\infty(\mathcal{C}_\Omega)$ and, by classical results on solutions of elliptic equations with analytic coefficients (see for instance [29]), $w$ is analytic, condition (2.28) holds. Next we have to show that the functions $w(x,y)\varphi(x)$ and $\partial_yw(x,y)\varphi(x)$ are Lipschitz in $y\in(\varepsilon,T)$, uniformly with respect to $x\in\Omega$. This follows because the solution to the extension problem is known to have the regularity $w\in C^\infty((0,\infty);H^s(\Omega,\gamma))$, see [26,41]. For the sake of completeness, we also give a direct proof of this regularity result. By Theorem 2.8 and the well-known identity $$\frac{d}{dt}\big(t^\nu K_\nu(t)\big)=-t^\nu K_{\nu-1}(t),\qquad\nu\in\mathbb{R},$$ it follows that $w(x,y)\varphi(x)$ is Lipschitz with respect to $y\in(0,\infty)$, uniformly in $x$. On the other hand, $$\int_\varepsilon^\infty y^a\int_\Omega|\partial_{yy}w|^2\,d\gamma(x)\,dy<\infty.$$ Hence $\partial_yw(x,y)\varphi(x)$ is Lipschitz with respect to $y\in(\varepsilon,\infty)$, uniformly in $x\in\Omega$.

The comparison result
With the previous results at hand, we are now in position to prove the main result of the paper.

Theorem 3.1. Let $u$ and $\psi$ be the solutions to problems (1.1) and (1.5), respectively, with $f\in L^2(\Omega,\gamma)$. Then $u^\star\prec\psi$, that is, $$\int_0^ru^\circledast(\sigma)\,d\sigma\le\int_0^r\psi^\circledast(\sigma)\,d\sigma,\qquad\hbox{for all}~r\in[0,\gamma(\Omega)].\qquad(3.1)$$

Proof. By making the change of variables $y=(2s)z^{1/(2s)}$ (see [14]), we can write the extension problems (1.6) and (1.8) as $$\begin{cases}z^{2-\frac1s}\partial_{zz}w=Lw&\hbox{in}~\mathcal{C}_\Omega\\ w=0&\hbox{on}~\partial_L\mathcal{C}_\Omega\\ -\partial_zw(x,0)=d_sf(x)&\hbox{on}~\Omega,\end{cases}\qquad(3.2)$$ and $$\begin{cases}z^{2-\frac1s}\partial_{zz}v=Lv&\hbox{in}~\mathcal{C}_\Omega^\star\\ v=0&\hbox{on}~\partial_L\mathcal{C}_\Omega^\star\\ -\partial_zv(x,0)=d_sf^\star(x)&\hbox{on}~\Omega^\star,\end{cases}\qquad(3.3)$$
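For the reader's convenience, here is the elementary computation behind this change of variables (a sketch): with $a=1-2s$ and $z=\big(\frac{y}{2s}\big)^{2s}$, one has

$$\frac{dz}{dy}=\Big(\frac{y}{2s}\Big)^{2s-1}\qquad\Longrightarrow\qquad y^a\,\partial_yw=y^{1-2s}\Big(\frac{y}{2s}\Big)^{2s-1}\partial_zw=(2s)^{1-2s}\,\partial_zw,$$

so the weighted Neumann datum $-\lim_{y\to0^+}y^a\partial_yw$ becomes a pure derivative $-(2s)^{1-2s}\,\partial_zw\big|_{z=0}$, at the price of the degenerate factor $z^{2-\frac1s}$ appearing in the interior equation (for $s=\tfrac12$ the factor disappears and $z=y$, as expected).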
for some explicit constant $d_s>0$, respectively. Now, since $u$ is the trace on $\Omega$ of the solution $w$ to (3.2) and $\psi$ is the trace on $\Omega^\star$ of the solution $v$ to (3.3), the result will follow immediately once we prove the concentration comparison inequality $$\int_0^rw^\circledast(\sigma,z)\,d\sigma\le\int_0^rv^\circledast(\sigma,z)\,d\sigma,\qquad\hbox{for all}~r\in[0,\gamma(\Omega)]~\hbox{and}~z\ge0.$$ We recall that $w$ is smooth for any $z>0$. For fixed $z>0$ and $t>0$, let $$\varsigma_h^z(x):=\begin{cases}1&\hbox{if}~w(x,z)>t+h\\ \frac{w(x,z)-t}{h}&\hbox{if}~t<w(x,z)\le t+h\\ 0&\hbox{if}~w(x,z)\le t.\end{cases}$$ By multiplying the first equation in (3.2) by $\varsigma_h^z(x)$ and integrating over $\Omega$ with respect to the Gaussian measure, we obtain an identity between the weighted Dirichlet energy on the layer $\{t<w(\cdot,z)\le t+h\}$ and the corresponding integral of $z^{2-\frac1s}\partial_{zz}w$. Letting $h\to0$ we obtain (3.5). On the other hand, the coarea formula (2.27) and the isoperimetric inequality with respect to the Gaussian measure (2.2) give a lower bound for the Gaussian perimeter of the level sets of $w(\cdot,z)$, where $\{w(\cdot,z)>t\}^\star$ is the half-space having Gaussian measure $\gamma_w(t,z)$. By Hölder's inequality, the corresponding estimate holds for any $h>0$; hence, by taking $h\to0$, we arrive at (3.6). Then (3.6) yields (3.7), and by plugging (3.7) into (3.5) we obtain a differential inequality for the level sets of $w$.

Using Lemma 2.15 and the second order derivation formula (2.41), we find that $W(r,z):=\int_0^rw^\circledast(\sigma,z)\,d\sigma$ verifies the resulting differential inequality. Moreover, the first order derivation formula (2.26) implies $$\frac{\partial W}{\partial z}(r,0)=-d_s\int_{\{x:\,w(x,0)>w^\circledast(r,0)\}}f(x)\,d\gamma(x).$$ Then, by the Hardy--Littlewood inequality (2.23), we easily infer that $W$ satisfies the boundary condition $$\frac{\partial W}{\partial z}(r,0)\ge-d_s\int_0^rf^\circledast(\sigma)\,d\sigma,\qquad\hbox{for}~r\in(0,\gamma(\Omega)).$$
Next let us turn our attention to problem (1.8). By Proposition 2.9, it follows that the function $\eta(x,t):=e^{-tL_{\Omega^\star}}f^\star(x)$ is rearranged with respect to $x$, that is, $\eta(x,t)=\eta^\star(x,t)$. Recall the semigroup formula (2.20). It is then clear that (even after the change of variables $y=(2s)z^{1/(2s)}$) $v$ is rearranged with respect to $x$ as well, namely, $v(x,z)=v^\star(x,z)$. This implies that the level sets of $v(\cdot,z)$ are half-spaces, which gives in turn that all the inequalities involved in the symmetrization arguments performed above for the solution $u$ become equalities for $v$. Therefore, if $$V(r,z):=\int_0^rv^\circledast(\sigma,z)\,d\sigma,$$ then $V$ satisfies the corresponding differential equation with equality. Regarding the boundary conditions, we have $$\frac{\partial V}{\partial z}(r,0)=-d_s\int_0^rf^\circledast(\sigma)\,d\sigma,\qquad\hbox{for}~r\in(0,\gamma(\Omega)).$$

Regularity estimates
We first introduce the Zygmund spaces, which appear naturally in the regularity scale for solutions to elliptic equations with Gaussian measure in the local setting, see [20]. We refer the reader to the monograph [8] for details about all the related properties we will use for our purposes. If $\alpha=0$, the Zygmund space $L^p(\log L)^0(\Omega,\gamma)$ coincides with the weighted space $L^p(\Omega,\gamma)$. Moreover, if $p>q$ and $\alpha,\beta\in\mathbb{R}$, then $L^p(\log L)^\alpha(\Omega,\gamma)\subset L^q(\log L)^\beta(\Omega,\gamma)$. When $p=q$ and $\alpha>\beta$, one can prove that $$L^p(\log L)^\alpha(\Omega,\gamma)\subset L^p(\log L)^\beta(\Omega,\gamma).\qquad(4.1)$$

Remark 4.2. The Zygmund space $L^p(\log L)^\alpha(\Omega,\gamma)$ can be equivalently defined as the space of measurable functions $u:\Omega\to\mathbb{R}$ such that the quantity in (4.2) (which is a quasi-norm in this space) is finite. Moreover, $L^p(\log L)^\alpha(\Omega,\gamma)$ is a Banach space when equipped with the norm (4.3). We stress that the quasi-norm (4.2) is equivalent to the norm (4.3), see [8] for more details.
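For orientation, one standard rearrangement-based expression of the quasi-norm on $L^p(\log L)^\alpha$ over a space of finite measure, written here for the Gaussian measure (see e.g. [8]; the paper's (4.2) may differ by an equivalent normalization), reads

$$\|u\|_{L^p(\log L)^\alpha(\Omega,\gamma)}\simeq\Big(\int_0^{\gamma(\Omega)}\Big[\Big(1+\log\frac{\gamma(\Omega)}{r}\Big)^\alpha u^\circledast(r)\Big]^p\,dr\Big)^{1/p}.$$

In particular, for $\alpha=0$ this reduces, by equimeasurability, to the $L^p(\Omega,\gamma)$ norm, while for $\alpha>0$ the logarithmic weight records an extra gain of integrability near $r=0$, that is, on the sets where $|u|$ is largest.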
The main result of this section is the following regularity result for solutions to the fractional nonlocal OU problem (1.1) in terms of the datum $f$. Notice that when $s=1$ we recover the corresponding regularity results for the OU equation via Gaussian symmetrization contained in [20].
In order to prove Theorem 4.3 we will first show that the space $H^s(\Omega,\gamma)$ is embedded in the Zygmund space $L^2(\log L)^{s/2}(\Omega,\gamma)$. This will allow us to choose the datum $f$ in the dual space $L^2(\log L)^{-s/2}(\Omega,\gamma)$ in problem (1.1). In this way Definition 2.5 will still make sense and $u=w(\cdot,0)$, where $w$ is the solution to (1.6), will be the unique weak solution to problem (1.1). Towards this end we introduce the fractional Gaussian Sobolev space $H^s(\Omega,\gamma)$ as the real interpolation space defined by $$H^s(\Omega,\gamma)=\big(H^1(\Omega,\gamma),L^2(\Omega,\gamma)\big)_{1-s}.$$

Proposition 4.4. There exists a constant $C>0$, depending on $n$, $s$ and $\Omega$, such that $$\|u\|_{L^2(\log L)^{s/2}(\Omega,\gamma)}\le C\|u\|_{H^s(\Omega,\gamma)},\qquad\hbox{for all}~u\in H^s(\Omega,\gamma).\qquad(4.4)$$ In particular, $H^s(\Omega,\gamma)\hookrightarrow L^2(\log L)^{s/2}(\Omega,\gamma)$.
Proof. Given any function $u\in H^s(\Omega,\gamma)$, we consider the extension $\bar u$ of $u$ by zero outside of $\Omega$. Since $\bar u\in H^s(\mathbb{R}^n,\gamma)$ and this last space coincides with the Gaussian Besov space $B^s(\mathbb{R}^n,\gamma)$ (see [34]), the embedding result contained in [31, Theorem 23] yields the estimate (4.5), for some constant $C>0$. A change of variables and the monotonicity of the decreasing rearrangement $\bar u^\circledast$ lead to the left-hand side of (4.4). Now we observe that the Exact Interpolation Theorem (see [1, Theorem 7.23]) implies that extending any function $u\in H^s(\Omega,\gamma)$ by zero outside of $\Omega$ defines a continuous extension map from $H^s(\Omega,\gamma)$ into $H^s(\mathbb{R}^n,\gamma)$. Thus it follows that the norm on the right-hand side of (4.5) is bounded (up to a constant depending on $n$, $s$ and $\Omega$) by $\|u\|_{H^s(\Omega,\gamma)}^2$, and the result follows.
With these results at hand, we are able to show the generalization of the comparison result (Theorem 3.1) for f in Zygmund spaces. Corollary 4.5. Assume that f ∈ L 2 (log L) −s/2 (Ω, γ). Then Theorem 3.1 still holds.
Proof. Let $\{f_n\}$ be a sequence of smooth functions such that $f_n\to f$ strongly in $L^2(\log L)^{-s/2}(\Omega,\gamma)$. Let $w_n$ be the unique weak solution to problem (1.6) with datum $f_n$. Choosing $w_n$ as a test function in (2.19), and then using (4.4) and the trace inequality (2.18), we find $$\int\!\!\int_{\mathcal{C}_\Omega}y^a|\nabla_{x,y}w_n|^2\,d\gamma(x)\,dy\le C\|w_n\|_{H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)}\|f_n\|_{L^2(\log L)^{-s/2}(\Omega,\gamma)}.$$ This allows us to extract a subsequence of $\{w_n\}$ (still labeled $\{w_n\}$) such that $w_n\rightharpoonup w$ weakly in $H_{0,L}^1(\mathcal{C}_\Omega,d\gamma(x)\otimes y^ady)$. Then the compact embedding established in Proposition 2.4 gives that, up to a further subsequence, $w_n(\cdot,0)\to w(\cdot,0)$ strongly in $L^2(\Omega,\gamma)$. Thus we can pass to the limit in the weak formulation (2.19) for $w_n$ and find that $w$ solves problem (1.6) corresponding to the datum $f$; hence $u:=w(\cdot,0)$ is the weak solution to problem (1.1). In order to obtain the concentration inequality (3.1), we just observe that $f_n^\star$ approximates $f^\star$ in $L^2(\Omega^\star,\gamma)$. Then, if $\{w_n\}$ and $\{v_n\}$ are sequences of approximating solutions converging to $w$ and $v$ respectively, the conclusion follows by passing to the limit in the corresponding integral inequality.

For the proof of Theorem 4.3 we need two further preliminary results, interesting in their own right. The following is a regularity result for solutions of problems of the type (1.1) with rearranged data, posed on the half-space $H$.

Theorem 4.6 (Estimates for half-space solutions). Let $H=\{x\in\mathbb{R}^n:x_1>0\}$. Suppose that $h(x)=h^\star(x)$, for all $x\in H$. If $h\in L^p(\log L)^\alpha(H,\gamma)$, with $\alpha\in\mathbb{R}$ for $2<p<\infty$, and $\alpha\ge-\frac{s}{2}$ for $p=2$, then the weak solution $\psi$ to $$L^s\psi=h~\hbox{in}~H,\qquad\psi=0~\hbox{on}~\partial H,\qquad(4.6)$$ belongs to $L^p(\log L)^{\alpha+s}(H,\gamma)$ and $$\|\psi\|_{L^p(\log L)^{\alpha+s}(H,\gamma)}\le C\|h\|_{L^p(\log L)^\alpha(H,\gamma)},$$ for some constant $C=C(n,p,\alpha,s)>0$, which is independent of $\psi$ and $h$.
The next lemma is a useful comparison principle for solutions of problems of the form (1.1) with rearranged data, posed on a half-space of Gaussian measure smaller than $1/2$.

Lemma 4.7. Let $H_\omega=\{x\in\mathbb{R}^n:x_1>\omega\}$, for some $\omega>0$. Let $h\in L^p(\log L)^\alpha(H_\omega,\gamma)$ be a nonnegative function such that $h(x)=h^\star(x)$, and let $\psi$ be the weak solution to $$L^s\psi=h~\hbox{in}~H_\omega,\qquad\psi=0~\hbox{on}~\partial H_\omega.$$
Then $$\psi(x)\le\zeta(x),\qquad\hbox{for all}~x\in H_\omega,$$ where $\zeta$ is the weak solution to (4.6) with datum $\bar h$, and $\bar h$ denotes the extension of $h$ by zero to $H\setminus H_\omega$.
Proof. The function $$F(x,t):=e^{-tL_H}\bar h(x)-e^{-tL_{H_\omega}}h(x)$$ solves the Cauchy--Dirichlet problem for the OU operator in $H_\omega\times(0,\infty)$ with zero initial datum and nonnegative lateral boundary datum. Thus, by a standard maximum principle argument, $F\ge0$ in $H_\omega\times[0,\infty)$. In other words, $$e^{-tL_{H_\omega}}h(x)\le e^{-tL_H}\bar h(x),\qquad x\in H_\omega,~t\ge0.$$ Therefore, if $v$ and $\bar v$ denote the extensions as in (2.16) of $\psi$ and $\zeta$, respectively, then $v(x,y)\le\bar v(x,y)$, for all $x\in H_\omega$, $y\ge0$.
The result follows by taking y = 0 in this last inequality.
Now we are finally able to present the proof of the regularity estimate, namely, Theorem 4.3.
Proof of Theorem 4.3. Let $u$ be the weak solution to (1.1) defined in an open set $\Omega$ such that $\gamma(\Omega)\le1/2$, with corresponding datum $f$. By Theorem 3.1, $u$ is less concentrated than the solution $\psi$ to (1.5), defined in the half-space with the same Gaussian measure as $\Omega$ and with datum $f^\star$. If $\gamma(\Omega)=1/2$, the assertion follows by Theorem 4.6. If $\gamma(\Omega)<1/2$, we first apply Lemma 4.7 to estimate $\psi$ in terms of the solution $\zeta$ to (4.6), defined in the half-space $H=\{x\in\mathbb{R}^n:x_1>0\}$ and having the extension of $f^\star$ by zero to $H$ on the right-hand side. Then Theorem 4.6 allows us to conclude.
Remark 4.8. We remark that other regularity results for problems involving fractional operators with bounded lower order terms, but posed on bounded smooth domains, are contained in [27].

Appendix: A semigroup method proof of the L p estimate
For completeness and the convenience of the reader, we give an alternative and more explicit proof of Theorem 4.6 with $L^p$ data, using the Mehler kernel to represent the inverse of the fractional OU operator. Observe that this result is a particular case of Theorem 4.6 since, when $f\in L^p(\Omega,\gamma)$, Theorem 4.6 and the embedding (4.1) give $u\in L^p(\log L)^s(\Omega,\gamma)\subset L^p(\Omega,\gamma)$.
Theorem 5.1 (Estimates for half-space solutions with $L^p$ data). Let $H=\{x\in\mathbb{R}^n:x_1>0\}$. Suppose that $h(x)=h^\star(x)$, for all $x\in H$. If $h\in L^p(H,\gamma)$, for $2\le p<\infty$, then the weak solution $\psi$ to (4.6) belongs to $L^p(H,\gamma)$ and $$\|\psi\|_{L^p(H,\gamma)}\le C\|h\|_{L^p(H,\gamma)},$$ for some constant $C=C(n,p,s)>0$, which is independent of $\psi$ and $h$.
Proof. The proof will be split into four steps.
Step 4. Estimates of the terms $j=2,3$ in (5.2). By Hölder's inequality and the estimates of Step 3, we obtain the required bounds for $j=2,3$, for some positive constant $c=c(n,p,s)$.
Hence the desired result follows by collecting Steps 2 and 4 in estimate (5.2).