Uniqueness and radial symmetry of minimizers for a nonlocal variational problem

In this paper we prove the uniqueness and radial symmetry of minimizers for variational problems that model several phenomena. The uniqueness is a consequence of the convexity of the functional. The main technique is the Fourier transform of tempered distributions.


Introduction and Statement of the Result
Functionals of the type E(u) = ∫_{R^n×R^n} K(x − y)u(x)u(y) dx dy, where K(x) is given above, are connected with the modelling of several phenomena, such as self-assembly/aggregation models ([2] and [4]) and the flocking of birds and other condensation phenomena ([1]).
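As a minimal numerical sketch (ours, not taken from the paper), the double integral defining E(u) can be discretized on a one-dimensional grid; the kernel K below is a hypothetical attractive–repulsive choice used only for illustration, with a small cutoff regularizing the singularity at the origin.

```python
import numpy as np

def energy(u, x, K):
    """Riemann-sum approximation of E(u) = ∫∫ K(x - y) u(x) u(y) dx dy
    on a uniform one-dimensional grid x carrying the values u."""
    h = x[1] - x[0]                     # uniform grid spacing
    diff = x[:, None] - x[None, :]      # pairwise differences x_i - x_j
    return float(u @ K(diff) @ u) * h**2

# Hypothetical kernel of attractive-repulsive type (q = 2, p = -1, for
# illustration only; 1e-3 regularizes the singularity at the origin).
K = lambda z: np.abs(z)**2 / 2 - 1.0 / np.maximum(np.abs(z), 1e-3)

x = np.linspace(-1.0, 1.0, 201)
u = np.where(np.abs(x) <= 0.5, 1.0, 0.0)    # an indicator-type density
E_val = energy(u, x, K)
```

For the pure quadratic kernel K(z) = z², the discrete sum factorizes exactly into moments of u, which gives a cheap consistency check of the implementation.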
If M > 0 and m > 0 are given and ‖x‖ denotes the Euclidean norm in R^n, following [2] we define the set A. For −n < p < 0 and 0 < q, the existence of minimizers of E on A has been proved in [2]. So far, uniqueness and radial symmetry of the minimizer have been proved only for q = 2. The uniqueness is a consequence of the convexity of E(u) on the admissible set.
In [3] the existence of certain classes of solutions is proved for p = −1 and n = 3.
For n = 3, p = −1 and q ≥ 2, it has been proved that the radially symmetric equilibrium is unique and compactly supported (see [4]).
Our main result is the following:

Theorem 1.1. If 2 ≤ q ≤ 4 and −n < p < 0 then, up to translation, the minimizer is unique. In particular, it is radially symmetric.
The uniqueness will be a consequence of the fact that E(u) is convex on the admissible set. The convexity of the second functional appearing in the definition of E(u) is very well known; the proof of the convexity of the first one is the main contribution of this paper (Theorem 2.4).
If the L^∞ condition is removed from the definition of A, the existence of a minimizer is proved in [2] in the set of probability measures µ. It would be interesting to know whether our technique can be extended to prove the uniqueness of the minimizing measure.
In Section 3, for n = 3, we give examples of minimizers for the powers p = −1, q = 2; p = −1, q = 3; and p = −1, q = 4. Perhaps the most interesting case is p = −1, q = 4, for which we construct (with computer assistance) radially symmetric minimizers u(r) such that both sets {r : 0 < u(r) < M} and {r : u(r) = M} have positive measure.

Proof of the main result
The proof will be broken into several lemmas. The proof of the first one is elementary.
Lemma 2.1. If, as before, ‖x‖ denotes the Euclidean norm in R^n and q ≥ 1, we have

‖x − y‖^q ≥ 2^{1−q}‖x‖^q − ‖y‖^q, (2.1)
‖x − y‖^q ≤ 2^{q−1}(‖x‖^q + ‖y‖^q), (2.2)
‖x + y‖^q ≤ 2^{q−1}(‖x‖^q + ‖y‖^q). (2.3)

Remark 2.2. Instead of (2.1), all we need is ‖x − y‖^q ≥ c₁‖x‖^q − c₂‖y‖^q, and this can be achieved taking 0 < c₁ < 1 and c₂ convenient.

Lemma 2.3. If u ∈ A and E(u) is finite, then the integral ∫_{R^n} ‖x‖^q u(x) dx is finite. In that case, the integrals ∫_{R^n} ‖x‖^r u(x) dx are also finite for 1 ≤ r ≤ q.
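The inequalities (2.2) and (2.3) follow from the convexity of t ↦ t^q for q ≥ 1. As a quick sanity check (ours, not part of the paper), the sketch below verifies them numerically in the scalar case, with the absolute value standing in for the Euclidean norm and an arbitrarily chosen sample grid:

```python
import itertools

def check(q, xs):
    """Verify |x - y|^q <= 2^(q-1)(|x|^q + |y|^q) and the analogous bound
    for |x + y|^q on every pair drawn from the sample xs (scalar case)."""
    for x, y in itertools.product(xs, repeat=2):
        bound = 2 ** (q - 1) * (abs(x) ** q + abs(y) ** q)
        # small tolerance absorbs floating-point rounding in the equality case x = ±y
        assert abs(x - y) ** q <= bound + 1e-12
        assert abs(x + y) ** q <= bound + 1e-12

for q in (1.0, 2.0, 3.0, 4.0):
    check(q, [i / 10 for i in range(-30, 31)])
```

Equality in (2.3) occurs at x = y, which is why the constant 2^{q−1} cannot be improved.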

Proof. If R is such that ∫_{‖y‖≤R} u(y) dy = m/2, then using (2.1), together with the fact that u(y) ≤ M, we obtain the desired lower bound. Therefore the integral ∫_{R^n} ‖x‖^q u(x) dx is finite and, using (2.3), we see that the remaining integral is also finite; the lemma is proved.
In view of the previous lemma, we define the Banach space X, we redefine the admissible set A accordingly, we also define the subspace X₀, and we consider the quadratic form F. Clearly F(h) is continuous. Our main result is the following:

Theorem 2.4. If 2 ≤ q ≤ 4, then F(h) ≥ 0 for every h ∈ X₀.

Proof. If q = 2 the conclusion follows as in [2]. If q = 4, expanding the square and using the definition (2.6) of the space X₀, we note that ⟨x, y⟩² is a sum with positive coefficients of the terms x_i² y_i² and x_i x_j y_i y_j with i ≠ j; therefore the second term of F(h) in (2.9) is a sum of nonnegative terms, and then F(h) ≥ 0 for q = 4.

For 2 < q < 4 we start with h ∈ X₀ ∩ S(R^n), where S(R^n) is the Schwartz space. If ĥ(ξ) denotes the Fourier transform of h(x), from the definition (2.6) of the space X₀ we have

ĥ(0) = 0; ∂ĥ/∂ξ_i(0) = 0, i = 1, … , n. (2.10)

Next we notice that ĥ(ξ) is a C² function with bounded second derivatives, because the second moment of h is finite. By Parseval and the convolution theorem we also have the representation (2.11) of F(h) in terms of |ĥ(ξ)|² and the Fourier transform of ‖x‖^q; for a proof see [5], Chapter II, Section 3. Therefore C(q) > 0 for 2 < q < 4, because Γ(z) is positive for z > 0 and for −2 < z < −1. Notice that C(q) has singularities at q = 2 and q = 4; that is why those cases have been treated separately. Since ‖ξ‖^{−q−n} is not in L¹_loc at ξ = 0, the right-hand side of (2.11) has to be understood in the sense of the analytic continuation of a tempered distribution. However, in view of (2.10), the derivatives of |ĥ(ξ)|² vanish at ξ = 0 up to order three. In fact, the right-hand side of (2.11) can be rewritten so that the factor ‖ξ‖^{−q−n+4} appears; this function does belong to L¹_loc for q < 4, while ‖ξ‖^{−4}|ĥ(ξ)|² is bounded at ξ = 0 in view of (2.10) and the fact that ĥ(ξ) is C². We conclude that F(h) ≥ 0 for h ∈ X₀ ∩ S(R^n). Since (2.13) makes sense for h in the space X₀ defined by (2.6), we expect it to hold in this larger set. Taking convenient approximations, we next sketch a proof of that statement.
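The identity behind (2.11) is the classical formula for the Fourier transform of a power of the norm, computed by analytic continuation (see [5]). In the normalization f̂(ξ) = ∫ f(x)e^{−ix·ξ} dx it takes the form below; the expression for C(q) is the standard one in that normalization (our addition for the reader's convenience, not reproduced from the paper), and it is consistent with the sign analysis above, since Γ((n+q)/2) > 0 and Γ(−q/2) > 0 for 2 < q < 4.

```latex
% Fourier transform of \|x\|^{q} as a tempered distribution,
% defined by analytic continuation in q, with
% \hat f(\xi) = \int_{\mathbb{R}^n} f(x) e^{-i x\cdot\xi}\,dx:
\widehat{\|x\|^{q}}(\xi) \;=\; C(q)\,\|\xi\|^{-n-q},
\qquad
C(q) \;=\; 2^{\,q+n}\,\pi^{n/2}\,
\frac{\Gamma\!\left(\tfrac{n+q}{2}\right)}{\Gamma\!\left(-\tfrac{q}{2}\right)} .
```

As a consistency check, for n = 3 and the exponent −1 the same formula recovers the familiar identity that the Fourier transform of 1/‖x‖ is 4π/‖ξ‖².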
First we assume that h(x) = 0 for ‖x‖ > R, and we denote by ρ : R^n → R a nonnegative C^∞ function that vanishes for ‖x‖ ≥ 1 and has integral equal to one in R^n. As usual, we set ρ_ε(x) = ε^{−n}ρ(x/ε) and h_ε = ρ_ε ∗ h. Another way to see that (2.14) and (2.15) hold is to look at ĥ_ε(ξ) = ρ̂_ε(ξ)ĥ(ξ). Since h_ε ∈ X₀ ∩ S(R^n) (actually h_ε ∈ X₀ ∩ D(R^n)), in view of (2.14) and (2.15) we have just proved that F(h_ε) = F₁(ĥ_ε). As ε tends to zero, we show that F(h_ε) tends to F(h) and that F₁(ĥ_ε) tends to F₁(ĥ). Taking the relevant estimates into account and using that h_ε tends to h in L¹(R^n), we conclude that h_ε tends to h in the space X, and this shows that F(h_ε) converges to F(h).
To analyze the convergence of F₁(ĥ_ε) we split the integral. Using that the second derivatives of ĥ_ε(ξ) are uniformly bounded, given δ > 0 we first choose a(δ) in such a way that the first integral is less than δ/2. For that choice of a, the second integral can be made less than δ/2, because ĥ_ε(ξ) converges to ĥ(ξ) uniformly in R^n and the integral ∫_{‖ξ‖≥a} ‖ξ‖^{−q−n} dξ is finite. This takes care of the case in which h has compact support.
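The mollification step above can be illustrated discretely. The sketch below (a rough numerical stand-in of our own, with an arbitrary test function h; not the paper's construction) convolves h with the standard C^∞ bump of unit mass and checks that the total integral and the first moment are preserved, which is the content of (2.14) and (2.15):

```python
import numpy as np

def mollify(h, x, eps):
    """Discrete h_eps = rho_eps * h on a uniform grid, where rho is the
    standard C^infinity bump supported in (-eps, eps), normalized to unit mass."""
    dx = x[1] - x[0]
    k = int(round(eps / dx))            # half-width of the kernel in grid steps
    s = np.arange(-k, k + 1) * dx       # symmetric nodes with the same spacing as x
    arg = 1.0 - (s / eps) ** 2
    inside = arg > 1e-12                # strictly inside the support of the bump
    rho = np.zeros_like(s)
    rho[inside] = np.exp(-1.0 / arg[inside])
    rho /= rho.sum() * dx               # enforce unit mass after discretization
    return np.convolve(h, rho, mode="same") * dx

x = np.linspace(-4.0, 4.0, 801)         # grid centered at 0
h = np.zeros_like(x)
h[300:400] = -1.0                       # h = -1 on roughly (-1, 0)
h[401:501] = 1.0                        # h = +1 on roughly (0, 1): mean zero
he = mollify(h, x, 0.25)
```

Because ρ is even and has unit mass, convolution leaves both ∫h and ∫x h unchanged (up to discretization error), so the mollified function stays in the class of perturbations with vanishing mass and first moments.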
Next we take h ∈ X₀ and we define the following functions: g₀(x) = n/ω_n for ‖x‖ ≤ 1; g₀(x) = 0 for ‖x‖ > 1.
Truncating h to the ball ‖x‖ ≤ m, the corresponding difference goes to zero as m tends to infinity; arguing as before, we conclude that F(h) = F₁(ĥ) and the theorem is proved.
For a proof of the next lemma see [6].
Lemma 2.5. For −n < p < 0 and h ∈ L¹(R^n) ∩ L^∞(R^n), the corresponding quadratic form is nonnegative.

Proof of Theorem 1.1. Suppose u and v are distinct minimizers. Following [2], we can make a translation in the space variable (possibly different translations for u and v) in such a way that both have center of mass at the origin. We keep the same notation for the translated functions. Therefore the function h = v − u belongs to the space X₀ defined by (2.6). Defining φ(t) = E(u + th), in view of Theorem 2.4 and Lemma 2.5 we reach a contradiction, and uniqueness is proved.
To prove the radial symmetry, suppose u(x) is a minimizer normalized as above, with center of mass at the origin. If C is any orthogonal matrix and v(x) = u(Cx), then v is also a minimizer with the same normalization. Therefore, by uniqueness, we must have u(x) = v(x) = u(Cx), and this implies the radial symmetry of the minimizer; the theorem is proved. The next theorem will be useful for the applications we are going to make.
Theorem 2.6. Let u ∈ A be a radially symmetric function and let φ(t) be as defined in the proof of Theorem 1.1. If −n < p < 0 and 2 ≤ q ≤ 4, then u minimizes E if and only if φ′(0) ≥ 0 for any radially symmetric function v ∈ A.
Proof. The proof follows immediately from theorem 1.1.

Examples
As before, for given M > 0 and m > 0 and X defined by (2.4), we consider the set of admissible functions A. For −n < p < 0 and 0 < q we define the energy E, and we define problem P as the minimization of E on A. In [3], problem P with p = −1, q > 0 and n = 3 is considered, and three phases of minimizers are defined. There it is shown that the minimizer is of phase 1 if the ratio m/M is below a certain critical value, and of phase 3 if the ratio m/M is above a certain (perhaps different) critical value. In the case p = −1, q = 2 the minimizers are known for all values of the ratio m/M and, as a consequence, it is known that phase 2 does not occur. Therefore, in that case, as the ratio m/M increases, the minimizer jumps directly from phase 1 to phase 3.
Here we adopt the same terminology, and we construct very explicitly the phase 3 minimizers also in the cases p = −1, q = 3 and p = −1, q = 4, exhibiting the critical ratio.
As far as phase 1 minimizers are concerned, in the case p = −1, q = 4 they are constructed very explicitly and the critical ratio is calculated. In the case p = −1, q = 3 the construction is less explicit because it depends on numerical calculations.
Finally, in the case p = −1, q = 4 we construct (with computer assistance) phase 2 minimizers. There is a strong indication that phase 2 occurs also in the case p = −1, q = 3, but the calculations are heavier.
Although the construction of the minimizers is carried out for particular powers, it may give good insight into more general cases.
Let us emphasize that the fact that the functions that we are going to construct are indeed minimizers is a consequence of our uniqueness result.
We have performed the calculations and the plotting using REDUCE. We start with some necessary conditions for minimizers of problem P in the general case (see [2]). For −n < p < 0 and 0 < q, and for a given function u : R^n → R, we define the function Λ by (3.3). If u ∈ A is a minimizer of problem P and the set {x : 0 < u(x) < M} has positive measure, then there is an η > 0 such that the conditions (3.4) hold. The second condition in (3.4) is not given in [2], but it can be proved by the same method. For the examples we encounter in this paper, we prove that, basically, such conditions are also sufficient.
We start with some elementary calculations involving radial functions. If u(x) is a radial function, then so is Λ(x); therefore we can assume that x = (0, 0, r). Taking spherical coordinates (s, θ, φ) in y and dropping the factor 2π corresponding to the integral in dθ, we obtain a one-dimensional formula for the first term of Λ. Using a similar formula for the second term, with the exponent p in place of q (if p = −2 a logarithm arises), we see that the function defined by (3.3) can be written as (3.5). Sometimes it is more convenient to deal with the function w(r) = ru(r), and then (3.5) takes a correspondingly simpler form. In view of those formulas for Λ(r), Theorem 2.6 can be reformulated as Theorem 3.1: u is a minimizer if and only if the corresponding inequality holds for any radial function v(r) such that 0 ≤ v(r) ≤ M and with total mass m. All the proofs of the sufficient conditions we are going to give rely on Theorem 3.1.
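For the attractive term in n = 3, the substitution t = r² + s² − 2rs cos φ in the spherical-coordinates computation gives the standard reduction ∫ ‖x − y‖^q u(‖y‖) dy = (2π/((q + 2)r)) ∫₀^∞ s u(s)[(r + s)^{q+2} − |r − s|^{q+2}] ds. The sketch below is our own check of this reduction (the paper's exact formula (3.5) is not reproduced here): it compares the one-dimensional integral against the elementary closed form available for u ≡ M on a ball of radius a when q = 2.

```python
import math

def radial_attract(r, q, u, a, N=4000):
    """One-dimensional reduction in R^3 of ∫ ||x - y||^q u(||y||) dy for a
    radial u supported in [0, a]:
        (2π / ((q + 2) r)) ∫_0^a s u(s) [(r + s)^(q+2) - |r - s|^(q+2)] ds,
    evaluated with the midpoint rule on N nodes."""
    ds = a / N
    total = 0.0
    for i in range(N):
        s = (i + 0.5) * ds
        total += s * u(s) * ((r + s) ** (q + 2) - abs(r - s) ** (q + 2)) * ds
    return 2.0 * math.pi * total / ((q + 2) * r)

# Closed form for u = M on the ball of radius a and q = 2 (valid for any r):
#   ∫_{||y|| <= a} M ||x - y||^2 dy = 4π M (r^2 a^3 / 3 + a^5 / 5).
M, a, r = 2.0, 1.0, 1.7
exact = 4.0 * math.pi * M * (r ** 2 * a ** 3 / 3.0 + a ** 5 / 5.0)
approx = radial_attract(r, 2.0, lambda s: M, a)
```

The closed form follows by expanding ‖x − y‖² and using that the odd cross term integrates to zero over the ball, so the agreement tests the reduction rather than the quadrature alone.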
We start with the phase 3 minimizers, that is, minimizers that assume only the values M and zero.
For M > 0 and a > 0 we define a radially symmetric function u(r) by u(r) = M for 0 ≤ r ≤ a and u(r) = 0 for a < r. We will give necessary and sufficient conditions for u to be a minimizer for particular exponents. To start with, we give a necessary and sufficient condition in terms of the function Λ(r). If Λ(r) ≥ Λ(a) for r ≥ a then, as a consequence of Theorem 3.1, u is a minimizer. Conversely, suppose there is r₀ < a such that Λ(r₀) > Λ(a). It is easy to see that there are real numbers r₁ < r₂ < a < r₃ < r₄ such that Λ(r) > Λ(s) for r₁ < r < r₂ and r₃ < s < r₄, and such that the regions r₁ ≤ ‖x‖ ≤ r₂ and r₃ ≤ ‖x‖ ≤ r₄ have the same measure. If we define v(r) by v(r) = M for 0 ≤ r ≤ r₁, for r₂ ≤ r ≤ a, or for r₃ ≤ r ≤ r₄, and zero otherwise, then, again as a consequence of Theorem 3.1, u is not a minimizer. The other case is treated similarly and the theorem is proved.
If we define a number b₀ by b₀^5 = 2/3, we see that b₀ is the radius of the phase 3 solution at the critical ratio, and the critical ratio can be written as (4π/3)b₀^3. Next we find phase 1 minimizers, and we start with a sufficient condition. In the cases we are going to consider, the phase 1 minimizers are of the type given by Theorem 3.3.
We take p = −1 and q = 2, and suppose we want to find a minimizer u(r) that fits Theorem 3.3. If, as before, we denote by w(r) the function w(r) = ru(r), then the condition Λ(r) = η for 0 ≤ r ≤ a becomes the integral equation (3.10). If z(r) denotes a primitive of w and we assume that w(s) is continuous in some interval, then z′(r) = w(r). Having that in mind and differentiating (3.10) twice with respect to r, we get w(r) = M₁r, where M₁ is a constant. We conclude that u(r) = w(r)/r = M₁ is constant. Next we show that such functions satisfy the conditions of Theorem 3.3.
In fact, if we define u(r) = M₁ for 0 ≤ r ≤ a and u(r) = 0 for a < r then, performing the calculation for 0 ≤ r ≤ a, we get Λ(r) = (3a^5 + 5a^3r^2 + 15a^2 − 5r^2)M₁/15, and then Λ(r) is constant if a = 1. In that case Λ(r) = M₁(5r^3 + 3r + 10)/(15r) for 1 < r, and Λ(r) − Λ(1) = (r + 2)(r − 1)^2 M₁/(3r) > 0 for r > 1, which implies that u(r) is the minimizer. As far as the ratio m/M is concerned, we have m = (4π/3)M₁, so u is admissible as long as M₁ ≤ M. We conclude that if the ratio m/M is less than or equal to 4π/3, then u(r) is the minimizer. Since there is no gap between the critical ratio for phase 1 and the critical ratio for phase 3, we conclude, as is well known (see [1]), that there are no phase 2 minimizers. However, as we will see, things are different if q = 3 or q = 4.

Now we construct phase 1 minimizers in the case p = −1 and q = 3, and we impose the condition Λ(r) = η for 0 ≤ r ≤ a. If w(s) is continuous in some interval and we differentiate this last equation six times with respect to r, we get w^(4)(r) + 8w(r) = 0, and then w(r) = c₁e^{pr}cos(pr) + c₂e^{−pr}cos(pr) + c₃e^{pr}sin(pr) + c₄e^{−pr}sin(pr), with p^4 = 2. If we replace this formula back in (3.12), the left-hand side is a polynomial of degree five in r.
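Two of the computations above can be checked mechanically (our own verification, not part of the paper): the factorization of Λ(r) − Λ(1) in the case q = 2 reduces to a polynomial identity, and the exponents in the q = 3 ansatz come from the characteristic equation λ^4 = −8 of the ODE w^(4) + 8w = 0.

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (highest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (i) q = 2: with Λ(r) = M1 (5r^3 + 3r + 10)/(15r), the claimed identity
# Λ(r) - Λ(1) = (r + 2)(r - 1)^2 M1 / (3r) reduces to
#   5r^3 - 15r + 10 = 5 (r - 1)^2 (r + 2);  compare coefficients.
lhs = [5, 0, -15, 10]
rhs = [5 * c for c in polymul(polymul([1, -1], [1, -1]), [1, 2])]
assert lhs == rhs

# (ii) q = 3: the roots of λ^4 = -8 are λ = (±1 ± i) p with p = 2^(1/4),
# hence the solutions e^{±pr}cos(pr), e^{±pr}sin(pr) and p^4 = 2.
p = 2 ** 0.25
assert abs((p * (1 + 1j)) ** 4 + 8) < 1e-12
```

The coefficient comparison in (i) is exact integer arithmetic, so it confirms the factorization without any numerical tolerance.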
If we plot w₁ = w(r)/c₁ (see Plot 1), we find that it is negative; this means that c₁ has to be taken negative. If we denote by du the derivative of u(r)/c₁ (see Plot 2), we find that it is also negative. We conclude that the maximum of u(r) is attained at r = a, and its value is −13.55c₁. To calculate the ratio m/M we have to calculate the integral 4π∫₀^a r w(r) dr = 4π × 0.22. Since the critical ratio for phase 3 minimizers is 4π × 0.292, we see that there is a gap between those critical ratios; probably this gap is filled by phase 2 minimizers. This question will be fully treated in the case p = −1 and q = 4. But before that, we still have to verify that Λ(r) − η = Λ(r) − Λ(a) ≥ 0 for a ≤ r. After factorization and cancellation, this is equivalent to saying that a certain polynomial p₁(r) = w₃r^3 + w₂r^2 + w₁r + w₀ is nonnegative for r ≥ a. The coefficients depend on p and c₁ only, and their numerical values can be calculated explicitly. The answer is w₀ = −1877.39; w₁ = 1769.62; w₂ = 714.32; w₃ = 654.29.
Since w₁, w₂ and w₃ are positive, the polynomial is increasing for r > 0, and then, to show that it is nonnegative for r ≥ a, it is sufficient to show that it is positive at r = a. Doing so, we find p₁(a) = 845.80 > 0. Therefore all the conditions of the theorem are verified, and the solution we have found is indeed the minimizer.

Next we construct phase 1 minimizers in the case p = −1 and q = 4. If we assume that the function w(r) is continuous in some interval and we differentiate equation (3.13) twice with respect to r, we get that w(r) is a polynomial of third degree in r. Since we already know the answer, we set w(r) = a₃r^3 + a₁r, so that u(r) = w(r)/r = a₃r^2 + a₁. Putting this w(r) back into equation (3.13), we find a₁ = 3a₃(1 − a^5)/(5a^3), 4a^10 + 42a^5 − 21 = 0, and η = 2a₃(7 − 2a^5)/(21a).
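The constraint 4a^10 + 42a^5 − 21 = 0 is a quadratic in t = a^5, so the admissible radius can be written in closed form. The following sketch (our own check, not from the paper) evaluates the positive root:

```python
import math

# 4 a^10 + 42 a^5 - 21 = 0 is quadratic in t = a^5; keep the positive root
# of 4 t^2 + 42 t - 21 = 0 and undo the substitution.
t = (-42.0 + math.sqrt(42.0 ** 2 + 4.0 * 4.0 * 21.0)) / (2.0 * 4.0)
a0 = t ** (1.0 / 5.0)          # the critical radius defined by (3.14)
```

Since the other root of the quadratic is negative, this is the only positive solution of the degree-ten equation.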
Then u is a minimizer.
and, as a consequence of theorem 3.1, the theorem is proved.
Before starting the calculation, we briefly describe how the minimizer evolves according to the ratio m/M. If we drop the factor 4π in front of the ratios, we will show that, given a ratio k between the critical ratios for the phase 1 and the phase 3 minimizers, there are a, b with 0 < a < a₀ < b < b₀, where a₀ is defined by (3.14), such that the minimizer is the polynomial a₃r^2 + a₁ for 0 ≤ r ≤ a, u(r) = M for a < r ≤ b, and u(r) = 0 for r ≥ b. As the ratio k decreases, a moves to the right and b moves to the left, and as k tends to the critical ratio for phase 1 minimizers, both a and b tend to a₀, so that the minimizer tends to be purely the critical function. The other way around, as k increases, a moves to the left and b moves to the right, and as k tends to the critical ratio for phase 3 minimizers, a tends to 0 and b tends to b₀, so that the minimizer tends to be purely the constant function.
If we plot the relevant polynomial, the following is apparent. The second formula for z(t) is more convenient because d₁₀ vanishes at t = 0, and then, at that value of t, the first formula gives 0/0. If we set t = 0 in (3.16) or (3.17), we get b^5 = 2/3, and then b = b₀, where b₀ is the radius of the critical phase 3 minimizer. If we set t = 1 in (3.16), we get 4b^10 + 42b^5 − 21 = 0, and this gives b = a₀, where a₀, given by (3.14), is the radius of the critical phase 1 minimizer. Therefore we see that, as the parameter t goes from 0 to 1, b(t) goes from the bottom b₀ of the interval of radii for phase 3 minimizers to the top a₀ of the interval of radii for phase 1 minimizers, and a(t) = tb(t) goes from zero to a₀. In fact, b(t) is a decreasing function of t and a(t) is increasing. To see that b(t) is decreasing, we denote by dz the derivative of z(t) given by (3.17) and plot it (see Plot 4). Since a^5(t) = t^5 b^5(t), to see that a(t) is increasing we plot da = 5z(t) + tz′(t) (see Plot 5) and we see that it is positive.
Plotting ww₄ and ww₂ (see Plots 8 and 9), we see that they are positive, and this implies that p₃(y) increases with y. Therefore it is sufficient to show that it is positive for y = 1. Taking y = 1 and plotting p₃(1) (see Plot 10), we see that it is positive.