Solitary waves for weakly dispersive equations with inhomogeneous nonlinearities

We show the existence of solitary-wave solutions to the equation \begin{equation*} u_t+ (Lu - n(u))_x = 0\,, \end{equation*} under weak assumptions on the dispersion $L$ and the nonlinearity $n$. The symbol $m$ of the Fourier multiplier $L$ is allowed to be of low positive order ($s>0$), while $n$ need only be locally Lipschitz and asymptotically homogeneous at zero. We discover such solutions in Sobolev spaces contained in $H^{1+s}$.


Introduction
A great number of model equations for the evolution of water waves in one spatial dimension can be written compactly as u_t + (Lu − n(u))_x = 0, (1.1) where the dispersion L is a Fourier multiplier in space with real-valued symmetric symbol m, that is, (Lu)^(ξ) = m(ξ)û(ξ), and n is a local nonlinear term. Solutions of (1.1) tend to enjoy a variety of qualitative properties of water waves, see [12], but our focus will be on the existence of solitary waves. Traveling at constant velocity ν, these solutions take the form (x, t) ↦ u(x − νt), where u(y) → 0 as |y| → ∞. For such solutions, (1.1) reduces to −νu + Lu − n(u) = 0, (1.2) in light of the assumption that u vanishes at infinity.
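For completeness, the reduction from (1.1) to (1.2) can be spelled out. Substituting the traveling-wave ansatz u(x − νt) into (1.1) gives

```latex
\partial_t\,u(x-\nu t) + \partial_x\bigl(Lu - n(u)\bigr)(x-\nu t)
  = \bigl(-\nu u' + (Lu - n(u))'\bigr)(x-\nu t) = 0 ,
```

so that −νu + Lu − n(u) is constant along the profile; as u (and, formally, Lu and n(u), since n(0) = 0) vanishes at infinity, the constant must be zero, yielding (1.2).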
A common approach to proving the existence of solitary waves for equations of the form (1.2) is Lions' concentration-compactness method, introduced in [15]. Weinstein used this in 1987 to prove existence and orbital stability in the case of a monomial nonlinearity and a symbol of order s ≥ 1 [18]. The threshold s = 1 is not merely superficial: in [2] the authors study an equation corresponding to s = 1, and that method was later put in a more general framework in [1].
(A) The nonlinearity n is locally Lipschitz continuous and decomposes as n = n_p + n_r, where n_p(x) = c x|x|^p with c > 0 and p > 0, and the remainder satisfies n_r(x) = O(|x|^{1+r}) as x → 0 for some r > p.
(B) The symbol m: ℝ → ℝ is even and satisfies the growth bounds
m(ξ) − m(0) ≃ |ξ|^{s′}, for |ξ| < 1,
m(ξ) − m(0) ≃ |ξ|^{s}, for |ξ| > 1,
with s′ > p/2 and s > p/(2 + p). We also require ξ ↦ m(ξ)/⟨ξ⟩^s to be uniformly continuous on ℝ.
We will discuss these assumptions in detail below. Given them, we will prove the following existence result.
Theorem 1.1. There exists µ* > 0 so that for every µ ∈ (0, µ*), there is a solution u ∈ H^{1+s} of (1.2), with wave speed ν ∈ ℝ, satisfying the estimates (i) and (ii), where the implicit constants in (i) and (ii) are independent of µ ∈ (0, µ*).
An interesting special case of Theorem 1.1 is the capillary-gravity Whitham equation with strong surface tension, for which p = 1 and the symbol is m(ξ) = ((1 + Tξ²) tanh(ξ)/ξ)^{1/2}, which corresponds to s = 1/2 and s′ = 2. Modelled on the water-wave problem with surface tension, the capillary-gravity Whitham equation is known to admit generalized solitary waves in the case T < 1/3 (weak surface tension) [11], and decaying solitary waves for T > 0 (both weak and strong surface tension) [3], as well as periodic steady waves, including rippled solutions in the case of weak surface tension [8]. In the case T < 1/3 the solitary waves have wave speeds ν smaller than m(0) (called subcritical), whereas the generalized waves exhibit supercritical wave speeds ν > m(0); for strong surface tension we are only aware of subcritical solutions. As we also prove the existence of subcritical solutions in the case of strong surface tension T ≥ 1/3, there currently seem to be no known supercritical truly solitary waves for the capillary-gravity Whitham equation. The same waves have also not been found for the capillary-gravity Euler equations (although we have not found a source actually stating this), but a proof of general non-existence is lacking. What has been shown is that there are no small-amplitude, exponentially decaying, even, supercritical solitary-wave solutions of the Euler equations in the slightly weak case when T is close to, but less than, 1/3 [17]. On a related note, it is worth noticing that Theorem 1.1 is also an existence result for solitary waves tending to a general value c, not necessarily zero, at infinity. For if ñ(x) = n(c + x) − n′(c)x − n(c) satisfies the assumptions, then there is a solitary-wave solution u, with velocity ν, of the equation u_t + (Lu − ñ(u))_x = 0, and thus u + c is a traveling-wave solution of (1.2) with velocity ν − n′(c).
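The stated orders s = 1/2 and s′ = 2 can be checked numerically. The sketch below assumes the standard capillary-gravity Whitham symbol m(ξ) = ((1 + Tξ²) tanh(ξ)/ξ)^{1/2}; the choice T = 1 (strong surface tension, T > 1/3) is purely illustrative.

```python
import numpy as np

# Illustrative check of the growth bounds in (B) for the capillary-gravity
# Whitham symbol; the formula for m and the value T = 1 are assumptions here.
T = 1.0

def m(xi):
    xi = np.asarray(xi, dtype=float)
    safe = np.where(xi == 0, 1.0, xi)                     # avoid 0/0 at the origin
    ratio = np.where(xi == 0, 1.0, np.tanh(safe) / safe)  # tanh(xi)/xi -> 1 as xi -> 0
    return np.sqrt((1 + T * xi**2) * ratio)

# Near zero, m(xi) - m(0) should behave like xi^2 (i.e. s' = 2):
small = np.array([1e-3, 1e-2, 1e-1])
print((m(small) - m(0.0)) / small**2)   # approximately constant

# At infinity, m(xi) should behave like |xi|^(1/2) (i.e. s = 1/2):
large = np.array([1e2, 1e3, 1e4])
print(m(large) / np.sqrt(large))        # approximately constant
```

Both printed arrays are nearly constant, confirming the local order s′ = 2 and the far-field order s = 1/2 for this symbol.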
1.2. The method. In this subsection we introduce the framework used to prove Theorem 1.1. In particular, we develop a constrained minimization problem whose solutions satisfy (1.2); in fact, it is exactly solutions of this minimization problem whose existence we shall prove. For this purpose, we will work with two 'extra' assumptions on (1.2), namely
(C₁) n is globally Lipschitz continuous, and
(C₂) m(0) = 0.
While these auxiliary assumptions (especially the first) exclude many instances of (1.1) for which we would like to prove the existence of solitary-wave solutions, it turns out that proving our main theorem for this smaller class implies the result in the more general setting, as we now demonstrate.
Define m̃ = m − m(0), and let ñ be a globally Lipschitz modification of n agreeing with n on [−1, 1]; notice that ñ and m̃ satisfy (A), (B), (C₁) and (C₂). By assumption, Theorem 1.1 now holds for the modified equation u_t + (L̃u − ñ(u))_x = 0, where L̃ is the Fourier multiplier whose symbol is m̃. Thus there is a µ̃* > 0 so that for each µ ∈ (0, µ̃*) we have a solution u with velocity ν̃ satisfying the corresponding estimates, where we omitted m̃(0) = 0 from the second expression. As H^{1+s} ↪ L^∞, we can pick µ* ∈ (0, µ̃*) so that ‖u‖_∞ ≤ 1 for all µ ∈ (0, µ*). For such solutions u we have ñ(u) = n(u), and setting ν = ν̃ + m(0) we see that −νu + Lu − n(u) = −ν̃u + L̃u − ñ(u) = 0. Thus, for µ < µ* the solutions provided by Theorem 1.1 for the modified equation are solutions of the original equation, but with the shifted velocity ν. We now construct the minimization problem mentioned above, whose well-posedness is assured once assumption (C₁) is added to (A) and (B). We will work in the Sobolev space H^{s/2}, where we use the Japanese bracket ⟨ξ⟩ = (1 + ξ²)^{1/2}. Our main tools shall be the functionals Q, L, N : H^{s/2} → ℝ, where N_p(x) = ∫₀ˣ n_p(t) dt and N_r(x) = ∫₀ˣ n_r(t) dt. We will prove these functionals to be Fréchet differentiable with H^{s/2}-derivatives. Consider now the constrained minimization problem
I_µ = inf { E(u) : u ∈ H^{s/2}, Q(u) = µ },    (1.3)
where E = L − N, and where we restrict µ ∈ (0, µ*), for some fixed upper bound µ* that we shall require to be sufficiently small. Our strategy shall be to find minimizers of (1.3); a minimizer u must, for some Lagrange multiplier ν ∈ ℝ, satisfy E′(u) = νQ′(u), thus solving (1.2). Note that, although our solutions are 'discovered' in H^{s/2}, we additionally prove that they lie in the more regular space H^{1+s} (or in an even more regular space, see Prop. 8.2). Had we been working on a compact domain, any "uniformly regular" minimizing sequence of (1.3) would admit a converging subsequence, implying the existence of a minimizer. As ℝ is not compact, we instead use Lions' concentration-compactness theorem (see Section 2).
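A natural choice of the functionals Q, L and N, consistent with the Fréchet derivatives Q′(u) = u, L′(u) = Lu and N′(u) = n(u) used later in the paper, is the following (the precise normalization here is our assumption):

```latex
Q(u) = \tfrac12\int_{\mathbb{R}} u^{2}\,\mathrm dx,\qquad
\mathcal L(u) = \tfrac12\int_{\mathbb{R}} u\,Lu\,\mathrm dx,\qquad
\mathcal N(u) = \int_{\mathbb{R}} \bigl(N_p(u) + N_r(u)\bigr)\,\mathrm dx .
```

With these choices, the Euler–Lagrange equation E′(u) = νQ′(u) of the constrained problem reads Lu − n(u) = νu, which is precisely (1.2).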
Informally, any bounded sequence (ρ_k) ⊂ L¹ admits a subsequence (again indexed by k) that will, as k → ∞, either
- vanish (the mass spreads out),
- dichotomize (the mass splits into two parts that separate), or
- concentrate (the mass remains uniformly concentrated in space).
We will show that, for a 'concentrated' minimizing sequence, we can pick a converging subsequence. Thus, the existence of a minimizer of (1.3) follows if we can rule out the possibilities of vanishing and dichotomy for minimizing sequences. To achieve this, we use a "long-wave ansatz" to find an upper bound for I_µ low enough to allow us to compare the sizes of µ, L and N on 'near minimizers'. This size comparison will directly exclude vanishing and also imply that µ ↦ I_µ is subadditive for small µ > 0, which excludes dichotomy. The paper concludes with some regularity estimates for our solutions (see Prop. 8.2).
We end this section with some discussion regarding the main assumptions (A) and (B).

A technical look at the assumptions (A) and (B).
In this subsection, we discuss our main assumptions on the pair n and m; we mention what role the different parts play and whether some could be weakened. This discussion is easier to follow after a read-through of the paper. (ii) Alternatively, if |n_r(x)| ≲ |x|^{1+p} for |x| > 1, all steps in this paper (apart from Prop. 8.1) go through, granted we include the restriction ‖u‖_{H^{s/2}} < R in our minimization problem for some arbitrary constant R > 0, which only plays a role in proving Prop. 4.1.
We choose to assume local Lipschitz continuity of n to avoid these other conditions, and to provide a somewhat different technique in comparison to earlier proofs.
Finally, the reason for excluding the case n_p(x) = cx|x|^p, c < 0, is the same as in [3] and [7]. Our method breaks down at the first step in that regime, as we cannot hope to obtain the low upper bound for I_µ.
The symbol m. The upper bound on the growth at zero and the corresponding inequality s′ > p/2 are needed to find a satisfactorily low upper bound for I_µ by a long-wave ansatz (see Prop. 3.1), while the lower bound is necessary for Prop. 4.1, which is crucial for the remainder term n_r to be negligible for sufficiently small µ.
As for the growth bounds when |ξ| > 1, the lower bound is chosen to control the H^{s/2}-norm. The upper growth bound is instead needed when excluding dichotomy: indeed, if m(·) − m(0) were only bounded by ⟨·⟩^{s̄} for some s̄ > s, we would need to work in H^{s̄/2} (for E(u) to be well defined). Then equation (4.3), which bounds the H^{s/2}-norm, would still be the best regularity estimate on a minimizing sequence, but Lemma 6.2 (now for operators B_r : H^{s̄/2} → H^{−s̄/2}) would require a bound on the stronger H^{s̄/2}-norm to be of any use when proving Prop. 6.3.
Finally, the uniform continuity of ξ ↦ m(ξ)/⟨ξ⟩^s is necessary for excluding dichotomy. It assures that L is not 'too' non-local, as described in Lemma 6.2. Note that a sufficient estimate for our regularity constraint is |m′(ξ)| ≲ ⟨ξ⟩^s, as it implies that ξ ↦ m(ξ)/⟨ξ⟩^s is globally Lipschitz.
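To verify the sufficiency claim, differentiate the quotient; using |m(ξ)| ≲ ⟨ξ⟩^s (a consequence of the growth bounds in (B)) together with |m′(ξ)| ≲ ⟨ξ⟩^s:

```latex
\frac{\mathrm d}{\mathrm d\xi}\,\frac{m(\xi)}{\langle\xi\rangle^{s}}
 = \frac{m'(\xi)}{\langle\xi\rangle^{s}}
 - s\,\frac{\xi\,m(\xi)}{\langle\xi\rangle^{s+2}} ,
```

and both terms on the right are uniformly bounded, so the quotient is globally Lipschitz and in particular uniformly continuous.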

Preliminaries
In this section, we present bounds and regularity estimates for the functionals Q, L, N, E introduced in subsection 1.2. Throughout sections 2–7, we assume (only) that n and m satisfy the assumptions (A), (B), (C₁) and (C₂), introduced in subsections 1.1 and 1.2. In light of Lemma 1.2, proving Theorem 1.1 in this case implies the validity of the theorem when either (C₁) or (C₂) fails.
Combining the growth bounds on m from (B) with (C₂), we see that 0 < m(ξ) ≲ ⟨ξ⟩^s for ξ ≠ 0, and so bound (i) follows. By (A) and (C₁), we have |n(x)| ≲ |x|, and so we obtain (ii). From |n_p(x)| ≲ |x|^{1+p} we immediately get (iii). For (iv), we note two bounds on n_r: the first follows from n_r(x) = O(|x|^{1+r}), while the latter follows from |n_r(x)| = |n(x) − n_p(x)| ≲ |x| + |x|^{1+p}. With these, and the fact that r > p, we obtain an expression in which a(x, y) and b(x, y) are bounded for |y| ≤ 1 and |y| ≥ 1, respectively, and so |N_r(x + y)| ≲ |x|^{2+r} + |y|^{2+p}.
From here on, we will refrain from explicitly referring to the assumptions as done in the previous proof, so as to attain a more straightforward presentation.
Proof. The Fréchet derivatives of Q and E follow from an elementary calculation and the linearity of the Fréchet derivative, respectively. Turning to L, we note that L is self-adjoint, ⟨Lu, v⟩ = ⟨u, Lv⟩, due to the symmetry of m.
For N, we exploit the global Lipschitz continuity of n and calculate the derivative directly. One important implication of the previous proposition is the following description of the continuity of E on H^{s/2}, which we shall utilize when excluding dichotomy.
The uniform continuity of ξ → m(ξ)/ ξ s is a simple assumption to state, but not directly convenient to work with. Instead we shall use an implied regularity constraint on m, described by the next lemma.
Proof. The first bound is easily obtained by the mean value theorem together with crude upper bounds. By assumption, there is a modulus of continuity ω̃ for ξ ↦ m(ξ)/⟨ξ⟩^s with lim_{λ→0} ω̃(λ) = 0. As m(·)/⟨·⟩^s is a bounded function, we may assume ω̃ to also be bounded. We arrive at the claimed estimate, where we used the bound ⟨x⟩ ≲ ⟨x − y⟩⟨y⟩ when going from the second to the third line.
By a more careful argument, it is possible to show that the two regularity constraints (2.1) and (2.2) are equivalent without any a priori knowledge of m, although we shall not prove this.
We conclude this section with the concentration-compactness theorem, the foundation of our proof of Theorem 1.1.
admits a subsequence, denoted again by (ρ_k), for which one of the following phenomena occurs.
Vanishing: for each r > 0, we have sup_{x∈ℝ} ∫_{x−r}^{x+r} ρ_k(y) dy → 0 as k → ∞.
Dichotomy: there exist λ ∈ (0, µ), and sequences (x_k) ⊂ ℝ and (r_k), (r̄_k) ⊂ ℝ₊, so that as k → ∞

Upper and lower bounds for I µ
In this section, we prove that the infimum I_µ of the minimization problem (1.3) satisfies −∞ < I_µ < −κµ^{1+β} for two positive constants κ and β. The upper bound will give us Prop. 4.1, which provides some fruitful bounds on near minimizers. The importance of also having a lower bound is to rule out the trivial possibility I_µ = −∞, allowing Prop. 6.1 to be meaningful. For clarity, we note that µ*, as of now, is an arbitrary fixed positive upper bound for µ. The proof of the following proposition is inspired by [7].
Proposition 3.1. There exists κ > 0, so that for µ ∈ (0, µ*) we have −∞ < I_µ < −κµ^{1+β}, where the exponent β = s′p/(2s′ − p).
Proof. Note that (i) and (ii) in Prop. 2.1 immediately give I_µ > −Cµ for some C < ∞. For the upper bound, we pick a function ϕ satisfying supp(ϕ̂) ⊂ (−1, 1), Q(ϕ) = 1 and cϕ(x) ≥ 0. This last inequality implies that N_p(ϕ) = (|c|/(2 + p))‖ϕ‖_{2+p}^{2+p}. An example of such a function would be an appropriately scaled version of x ↦ sinc(x)². We define the ansatz function ϕ_{µ,t}(x) = (µ/t)^{1/2} ϕ(x/t), for t ≥ 1. By a substitution of variables we obtain ‖ϕ_{µ,t}‖_k^k = (µ/t)^{k/2} t ‖ϕ‖_k^k; when k = 2, we get Q(ϕ_{µ,t}) = µ, and moreover N_p(ϕ_{µ,t}) ≃ µ^{1+p/2} t^{−p/2}. Exploiting the local growth of m, a simple computation gives the inequality L(ϕ_{µ,t}) ≤ C₂µ/t^{s′}, for some C₂ < ∞. We evaluate the ansatz to obtain an upper bound on E(ϕ_{µ,t}). We set t^{−s′} = Bµ^β with β = s′p/(2s′ − p), where B > 0 is small enough to guarantee t ≥ 1 for µ ∈ (0, µ*). The inequality above becomes an estimate of the form E(ϕ_{µ,t}) ≤ −2κµ^{1+β} + O-term, with κ depending on B. Without loss of generality, we can choose B small enough so that κ > 0 and κµ^{1+β} is greater than the O-term for all values of µ ∈ (0, µ*); this is possible as p < min{2s′, r} and µ* < ∞ is fixed. We get the desired result: I_µ < −κµ^{1+β}.
Remark 3.2. From here on, we assume we have picked a constant κ > 0 as described in the last proposition. It is important to note that if we replace µ* by a smaller upper bound µ′* < µ*, then (3.2) still holds for the same κ, as (0, µ′*) ⊂ (0, µ*). This allows us to later assume µ* to be 'sufficiently' small, without having to worry about the effect on κ. Similarly, the implicit constants in Prop. 4.1 will also remain fixed when lowering µ*.
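The exponent β can be motivated by balancing the two leading contributions in the ansatz: up to constants (and the negligible remainder), L(ϕ_{µ,t}) ≲ µ t^{−s′} and N_p(ϕ_{µ,t}) ≃ µ^{1+p/2} t^{−p/2}. Substituting t^{−s′} = Bµ^β:

```latex
\mu\,t^{-s'} = B\,\mu^{1+\beta},
\qquad
\mu^{1+\frac p2}\,t^{-\frac p2}
 = \bigl(B\mu^{\beta}\bigr)^{\frac{p}{2s'}}\,\mu^{1+\frac p2}
 = B^{\frac{p}{2s'}}\,\mu^{\,1+\frac p2+\frac{\beta p}{2s'}} ,
```

and the two powers of µ coincide exactly when 1 + p/2 + βp/(2s′) = 1 + β, that is, β = s′p/(2s′ − p). Note that β > 0 precisely because s′ > p/2, as assumed in (B).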

Near minimizers
A consequence of the preceding proposition is that the feasible region U_µ = {u ∈ H^{s/2} : Q(u) = µ} contains functions u satisfying E(u) < −κµ^{1+β}, where κ is some fixed positive constant independent of µ ∈ (0, µ*). We will refer to such functions as near minimizers. Only these functions are of interest to us; any minimizing sequence (u_k) ⊂ U_µ must consist solely of near minimizers, except for a finite number of exceptions. Proposition 4.1 will give important bounds on such functions, which will serve as the main building blocks for excluding vanishing and dichotomy. We stress that throughout this paper, the implicit constants associated with our usage of ≲, ≳ and ≃ are independent of µ ∈ (0, µ*).

A congestion result for near minimizers
In this section, we show that a minimizing sequence (u_k) of (1.3) will never vanish in the sense of the Concentration-Compactness Theorem 2.5. We start by demonstrating some 'uniform' congestion of mass in L^{2+p}-norm for each element of (u_k). To formalize this, we pick a smooth function ϕ satisfying supp(ϕ) ⊂ [−1, 1] and Σ_{j∈ℤ} ϕ(x − j) = 1. An example would be the convolution of the characteristic function of [−1/2, 1/2] with a mollifier supported in [−1/4, 1/4]. For brevity, we set ϕ_j(x) = ϕ(x − j).
Proposition 5.1. For any near minimizer u ∈ U_µ we have a lower bound on ‖ϕ_{j₀}u‖_{2+p} for some j₀ ∈ ℤ.
Proof. Consider the operator T: f ↦ (ϕ_j f)_j, mapping functions to sequences of functions. It is a fact that ‖T‖_{H^α→ℓ²(H^α)} < ∞ for all α ≥ 0; this is a trivial calculation when α ∈ ℕ₀ if one replaces ‖·‖_{H^α} with the equivalent norm f ↦ ‖f‖₂ + ‖f^{(α)}‖₂. For non-integer values of α > 0, the result follows immediately from the (so-called) complex interpolation method. We then obtain a chain of estimates in which the last equivalence uses Σ_{j∈ℤ} |ϕ_j(x)|^{2+p} ≃ 1. Combining (5.1) and (5.2), we get a bound with some C < ∞ independent of our choice of near minimizer u. At least one j₀ ∈ ℤ must then satisfy the claimed lower bound.
To exclude vanishing we need congestion of mass in L²-norm; this is achievable from the previous result through the Gagliardo–Nirenberg inequality. Indeed, setting j₀ = arg max_{j∈ℤ} ‖ϕ_j u‖_{2+p}, we obtain an interpolation estimate. As 2 + p − p/s > 0, we conclude that µ^δ ≲ ‖ϕ_{j₀} u‖₂ for some appropriate exponent δ > 0, and so we get the following corollary.
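The Gagliardo–Nirenberg step can be made explicit. In one dimension, interpolating between L² and H^{s/2} gives, with θ = p/((2+p)s) ∈ (0, 1) (θ < 1 precisely because s > p/(2+p), as assumed in (B)):

```latex
\|v\|_{L^{2+p}} \;\lesssim\; \|v\|_{L^{2}}^{\,1-\theta}\,\|v\|_{H^{s/2}}^{\,\theta},
\qquad \theta = \frac{p}{(2+p)\,s},
\qquad\text{so}\qquad
\|v\|_{L^{2+p}}^{\,2+p} \;\lesssim\; \|v\|_{L^{2}}^{\,2+p-\frac ps}\,\|v\|_{H^{s/2}}^{\,\frac ps} .
```

Applied to v = ϕ_{j₀}u, a lower bound on ‖v‖_{L^{2+p}} and an upper bound on ‖v‖_{H^{s/2}} force µ^δ ≲ ‖v‖_{L²}, which is exactly how the exponent 2 + p − p/s enters the argument.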
Corollary 5.2. No minimizing sequence of (1.3) has a subsequence for which vanishing occurs in accordance with Theorem 2.5.

Strict subadditivity of the mapping µ → I µ
Excluding dichotomy from a minimizing sequence is a more difficult task than excluding vanishing, reflected by the laborious calculations in this section. The main idea, however, is a simple one: suppose dichotomy (as described in Theorem 2.5) occurs along a minimizing sequence (u_k) ⊂ U_µ of (1.3); then we shall see that it can be 'split' into two sequences (u¹_k) ⊂ U_λ and (u²_k) ⊂ U_{µ−λ}. This will contradict the fact that the mapping µ ↦ I_µ is strictly subadditive for small µ, a fact we now prove.
Proposition 6.1. For µ* > 0 sufficiently small, the mapping µ ↦ I_µ is strictly subadditive on (0, µ*); that is, I_{µ₁+µ₂} < I_{µ₁} + I_{µ₂} whenever µ₁, µ₂ > 0 and µ₁ + µ₂ < µ*.
Proof. We begin by finding a µ* > 0 so that µ ↦ I_µ is strictly subhomogeneous on (0, µ*). Pick a near minimizer u ∈ U_µ and t ∈ [1, 2], and notice that tE(u) − E(√t u) splits into a main part ϕ(t, u) and a remainder φ(t, u), say. By (4.1) we get ϕ(t, u) ≳ (t − 1)µ^{1+β}, where we exploited that t^{1+p/2} − t ≃ t − 1 when t ∈ [1, 2]. As for φ, we see that φ(1, u) = 0, and so we use the mean value theorem at some t* ∈ [1, t] (together with the Leibniz integral rule) to bound it. It should be clear that u ↦ ∫_ℝ u n_r(√t u) dx also satisfies an inequality of the form (iv) in Prop. 2.1, uniformly in t ∈ [1, 2]. This in turn means it satisfies an inequality of the form (4.2), uniformly in t ∈ [1, 2]. Thus the above calculation implies that |φ(t, u)| = (t − 1)o(µ^{1+β}). These two bounds on ϕ and φ imply that we can pick µ* > 0 small enough so that (6.1) is satisfied for some δ > 0, all t ∈ [1, 2] and all near minimizers u ∈ U_µ with µ ∈ (0, µ*). Assuming we have chosen such a µ* > 0, (6.1) becomes an estimate comparing E(√t u) with tE(u). Picking a minimizing sequence (u_k) ⊂ U_µ and assuming 1 < t ≤ 2, this last inequality implies I_{tµ} < tI_µ on (0, µ*). Finally, for a general t > 1 and µ satisfying tµ ∈ (0, µ*), we can pick an integer k > 0 so that t^{1/k} ≤ 2, which combined with (6.2) implies I_{tµ} < tI_µ; that is, µ ↦ I_µ is strictly subhomogeneous on (0, µ*).
To show that strict subhomogeneity implies strict subadditivity, assume without loss of generality that 0 < µ₁ ≤ µ₂ and µ₁ + µ₂ < µ*, and calculate
I_{µ₁+µ₂} < ((µ₁ + µ₂)/µ₂) I_{µ₂} = I_{µ₂} + (µ₁/µ₂) I_{µ₂} ≤ I_{µ₂} + I_{µ₁},
where the first inequality is strict subhomogeneity with t = (µ₁ + µ₂)/µ₂ > 1, and the last step, (µ₁/µ₂)I_{µ₂} ≤ I_{µ₁}, follows from subhomogeneity with t = µ₂/µ₁ ≥ 1.
Now that strict subadditivity of µ ↦ I_µ has been established, we shall derive the contradiction described at the beginning of this section. It will be essential that the non-local component of E, namely L, behaves almost like a local operator on sums of functions whose mass is 'sufficiently' separated. It is exactly the regularity of m that allows L to enjoy such a property. This result is encapsulated in the next lemma, which roughly states that the commutator [L, ϕ(·/r)] tends to zero as r → ∞, for any Schwartz function ϕ. Here, the multiplication operator f ↦ ϕf is defined for any distribution f in the canonical sense.
Proof. Set ϕ_r = ϕ(·/r). Using the bound (2.1), we obtain a commutator estimate valid for any u, v ∈ H^{s/2}. As ω is bounded above by a polynomial and lim_{t→0} ω(t) = 0, the statement of the lemma follows.
We are now ready to prove that a dichotomized minimizing sequence can be 'split' in two as described at the beginning of the section.
Proposition 6.3. Suppose a minimizing sequence (u_k) ⊂ U_µ dichotomizes; then there exist 0 < λ < µ and two sequences (u¹_k) ⊂ U_λ and (u²_k) ⊂ U_{µ−λ}, so that the energies split accordingly.
Proof. By the concentration-compactness principle, we can pick (r_k) ⊂ ℝ₊ with r_k → ∞, and (x_k) ⊂ ℝ, so that (6.3) holds as k → ∞; without loss of generality, we assume x_k = 0 for all k. Next, we pick two smooth symmetric functions ϕ, ψ: ℝ → [0, 1], satisfying ϕ(x) = 1 when |x| ≤ 1, ϕ = 0 when |x| ≥ 2, and ϕ² + ψ² = 1. We write ϕ_k and ψ_k for ϕ(·/r_k) and ψ(·/r_k), and set v¹_k = ϕ_k u_k and v²_k = ψ_k u_k. By (6.3), these functions automatically satisfy the required mass constraints. It is easily verified that if φ is Schwartz and symmetric, then ⟨v, φu⟩ = ⟨φv, u⟩ for any v ∈ H^{−s/2} and u ∈ H^{s/2}, and so we may rewrite the energies in commutator form. By Lemma 6.2, the right-hand sides of these equations tend to zero, provided we can uniformly bound the H^{s/2}-norms of u_k, ϕ_k u_k and (1 − ψ_k)u_k in k. By (4.3), this in turn is guaranteed if multiplication by ϕ_k and 1 − ψ_k is uniformly bounded (in k) as operators on H^{s/2}. This is indeed true and follows by reasoning similar to that in the proof of Prop. 5.1; it is trivially proven when s/2 ∈ ℕ₀, and the result for general s > 0 follows from interpolation. Thus the non-local parts split. By Prop. 2.1 we have |N(x)| ≲ x², and so (6.3) guarantees that the right-hand side of this equation tends to zero as k → ∞. As (u_k) is a minimizing sequence, we conclude that E(v¹_k) + E(v²_k) → I_µ as k → ∞. By the same reasoning as before, the H^{s/2}-norms of v¹_k and v²_k are uniformly bounded in k, and so by Corollary 2.3 the proposition is proved for two suitably normalized sequences u¹_k and u²_k.
With these two results at hand, we can exclude dichotomy; picking µ* > 0 so that µ ↦ I_µ is strictly subadditive, and assuming (u_k), (u¹_k) and (u²_k) to be as in the previous proposition, we arrive at the contradiction
I_µ = lim_{k→∞} E(u_k) = lim_{k→∞} (E(u¹_k) + E(u²_k)) ≥ I_λ + I_{µ−λ} > I_µ.
Corollary 6.4.
Provided µ * > 0 is sufficiently small, no minimizing sequence of (1.3) has a subsequence for which dichotomy occurs in accordance with Theorem 2.5.

Solutions from concentrated minimizing sequences
Theorem 2.5 provided us with the three possible phenomena that could occur for a minimizing sequence of (1.3); the previous two sections excluded vanishing and dichotomy, and so it remains to show that we can construct a minimizer from a concentrating minimizing sequence. This is straightforward:
Proposition 7.1. Provided µ* > 0 is sufficiently small, any minimizing sequence (u_k) ⊂ U_µ of (1.3) admits a subsequence converging in L²-norm to a minimizer u ∈ U_µ.
Proof. For µ* sufficiently small, the two preceding sections guarantee that (u_k) admits a subsequence, again denoted (u_k), that concentrates in accordance with Theorem 2.5. Without loss of generality, we assume (u_k) consists solely of near minimizers, shifted appropriately so as to concentrate about zero (x_k = 0 for all k). By the Kolmogorov–Riesz–Fréchet compactness theorem, (u_k) is relatively compact in L², as it is bounded, concentrated about zero, and uniformly continuous with respect to translation: ‖u_k(· + y) − u_k‖₂ → 0 uniformly in k as y → 0, as guaranteed by (4.3). We conclude that (u_k) admits a subsequence, yet again denoted (u_k), so that u_k → u in L² for some u ∈ L² with Q(u) = µ. We now demonstrate that u is a minimizer of (1.3). As the positive functions m(·)|û_k|² converge locally in measure to m(·)|û|², Fatou's lemma implies L(u) ≤ lim inf_k L(u_k). Using the Fréchet derivative of N (Prop. 2.2), and the bound |n(x)| ≲ |x|, we also obtain N(u_k) → N(u).
Not only is a minimizer of (1.3) a solution of (1.2); we are also provided some additional control over the corresponding velocity ν, as described in the next proposition.
Proof. As the feasible set U_µ is a Hilbert submanifold of H^{s/2}, there must be a Lagrange multiplier ν ∈ ℝ (depending on the minimizer u) so that E′(u) = νQ′(u). (7.1) In particular, pairing (7.1) with u and inserting Q′(u) = u, we obtain ⟨E′(u), u⟩ = 2νQ(u) = 2νµ, and so we attain the first part of the proposition. For the latter, note that n(u)u = (2 + p)N(u) + n_r(u)u − (2 + p)N_r(u), and, as argued in the proof of Prop. 6.1, we have ∫_ℝ (n_r(u)u − (2 + p)N_r(u)) dx = o(µ^{1+β}). Then the desired estimate holds for some fixed C > 0, by Prop. 3.1 and (4.1). Thus, for a sufficiently small µ* > 0 we obtain −ν ≳ µ^β when µ ∈ (0, µ*). The upper bound on −ν follows trivially, where we used |n(x)x| ≲ |x|^{2+p} and (4.1).
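With the standard choice Q(u) = ½‖u‖²_{L²} (so Q′(u) = u; this normalization is our assumption), the pairing step reads:

```latex
\langle \mathcal E'(u), u\rangle
 = \nu\,\langle Q'(u), u\rangle
 = \nu\,\|u\|_{L^2}^2
 = 2\nu\,Q(u)
 = 2\nu\mu,
\qquad\text{i.e.}\qquad
2\nu\mu = \int_{\mathbb{R}} u\,Lu\,\mathrm dx - \int_{\mathbb{R}} n(u)\,u\,\mathrm dx .
```

This identifies ν in terms of the functionals evaluated at the minimizer, to which the bounds on L and N are then applied.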

Regularity of solutions
Before moving on, we summarize what has been proved so far. For the class of equations (1.2) satisfying the assumptions (A) and (B) (see subsection 1.1) and the 'auxiliary' assumptions (C₁) and (C₂) (see subsection 1.2), we have proved all parts of Theorem 1.1 except the estimate ‖u‖²_{H^{1+s}} ≲ µ. By Lemma 1.2, once this estimate is proven, the theorem automatically holds in the case when only (A) and (B) are satisfied. Hence, we now introduce the final piece, concluding the proof of Theorem 1.1.
We now obtain the desired conclusion by the following 'bootstrap' argument. Pick k ∈ ℕ and 0 ≤ r < s so that 1 + s = ks + r. By a (finite) repeated use of (8.2), we obtain the desired bound, and so we are done.
8.1. Further regularity. We conclude this paper with a regularity result on the solutions we have constructed. Clearly, if equation (8.2) were satisfied for larger α, we could (as in the previous proof) bootstrap to correspondingly higher regularity. It is ultimately the regularity of n that determines how large α can be in (8.2). In [5], the authors prove that for any γ > 3/2, the composition operator T_f: u ↦ f(u) maps H^γ to itself if, and only if, f(0) = 0 and f ∈ H^γ_loc; in particular, if we restrict ‖u‖_∞ < R < ∞, then we have the bound (8.3), with a constant C depending only on f, R and α ∈ (3/2, γ]. Moreover, using the result of [16], we can extend the inequality (8.3) to the case α ∈ [1, γ] (still with γ > 3/2). It is now an easy task to improve the regularity of our solutions when n ∈ H^{α*}_loc for some α* > 3/2; note that functions in these spaces are necessarily locally Lipschitz continuous. We present the final proposition of this paper.
Proof. Looking back at (8.2), this equation is now valid for 0 ≤ α ≤ α*. This follows from the previous discussion since: 1) η ∈ H^{α*}_loc with η(0) = 0, and 2) by Theorem 1.1 we have a uniform upper bound on the L^∞-norm of our solutions u (µ* being fixed). The result is then attained by a bootstrap argument similar to the one used in the proof of Prop. 8.1.

Acknowledgements
The author would like to thank the referee for constructive feedback and Vincent Duchêne for helpful comments on an earlier version of this manuscript.