Construction of Gevrey functions with compact support using the Bray-Mandelbrojt iterative process and applications to the moment method in control theory

In this paper, we construct some interesting Gevrey functions of order α for every α > 1 with compact support by a clever use of the Bray-Mandelbrojt iterative process. We then apply these results to the moment method, which enables us to derive upper bounds for the cost of fast boundary controls for a class of linear equations of parabolic or dispersive type, partially improving the existing results proved in [P. Lissy, On the cost of fast controls for some families of dispersive or parabolic equations in one space dimension, SIAM J. Control Optim., 52(4), 2651-2676]. However, this construction fails to improve the results of [G. Tenenbaum and M. Tucsnak, New blow-up rates of fast controls for the Schrödinger and heat equations, Journal of Differential Equations, 243 (2007), 70-100] in the precise case of the usual heat and Schrödinger equations.


Introduction.
1.1. Presentation. The main motivation of this paper is to continue the study initiated in [11] concerning the estimation of the cost of fast "boundary" controls for a class of linear equations of parabolic or dispersive type. The precise scope of the paper will be described later.
Let us introduce some usual notations. Let H and U be two Hilbert spaces. Let A : D(A) → H be a self-adjoint operator with compact resolvent. The eigenvalues of A (which are supposed to be different from 0, without loss of generality) are denoted (λ_k)_{k≥1} and are assumed to be of multiplicity 1 in all what follows. To each eigenvalue λ_k we associate a normalized eigenvector e_k. We assume that −A generates on H a strongly continuous semigroup S : t ↦ S(t) = e^{−tA}.
We call B ∈ L_c(U, D(A)') an admissible control operator for this semigroup, i.e. such that for every time T > 0, there exists some constant C(T) > 0 such that for every z ∈ D(A), one has

∫_0^T ||B* S(t)* z||²_U dt ≤ C(T) ||z||²_H.
We consider the following class of controlled systems:

y_t + Ay = Bu (parabolic case),   (1)

y_t + iAy = Bu (dispersive case).   (2)
In the case of Equation (1), A will moreover be supposed to be a positive operator to ensure the existence of solutions. It is then well-known (see for example [2, Chapter 2, Section 2.3]; the operators −A and −iA generate a strongly continuous semigroup under the above hypotheses thanks to the Lumer-Phillips or Stone theorems) that if u ∈ L²((0, T), U), System (1) or (2) with initial condition y_0 ∈ H has a unique solution satisfying y ∈ C^0([0, T], H).
From now on and until the end of the paper, we will assume that B is a scalar control, that is to say of the form Bu = bu, where b ∈ D(A)' and u ∈ L²((0, T), K), with U = K, K = R or C. From now on, we will write b_k := ⟨b, e_k⟩_{(D(A)', D(A))}, where ⟨·, ·⟩_{(D(A)', D(A))} is the duality product between D(A)' and D(A) with pivot space H. It is well-known (see [7]) that if ||(b_k)_{k∈N*}||_∞ < +∞ and if (λ_k)_{k≥1} is regular in the sense that inf_{m≠n} |λ_m − λ_n| > 0, then B is an admissible control operator. Let us now introduce the notion of cost of the control. Assume that System (1) or (2) is null controllable at some time T_0 > 0 (i.e. for every y_0 ∈ H, there exists some control u ∈ L²((0, T_0), U) such that y(T_0, ·) ≡ 0). One can then verify easily that there exists a unique control u_opt ∈ L²((0, T_0), U) with minimal L²((0, T_0), U)-norm. Moreover, the map y_0 ↦ u_opt is linear and continuous (see for example [2, Chapter 2, Section 2.3]). The norm of this operator will from now on be called the cost of the control at time T_0 and denoted C_{T_0} in this section. By definition, C_{T_0} is also exactly the infimum of the constants C > 0 such that for every y_0 ∈ H, there exists some control u driving y_0 to 0 at time T_0 with ||u||_{L²((0,T_0),U)} ≤ C ||y_0||_H.
Here we intend to give some precise upper bounds on C_T as T → 0 for some families of operators A for which Equations (1) and (2) are null controllable in arbitrarily small time. Some applications to fractional heat and Schrödinger equations will be provided later.
Let us now explain precisely the scope of the paper:
1. We give a new family of possible multipliers for the moment method that depend on the asymptotic growth of (λ_k)_{k≥1} and seem well-adapted to it (this is the main result of the paper).
2. We manage to extend the results of [11] to the widest possible range of exponents for the asymptotic polynomial behavior of the eigenvalues (λ_k)_{k∈N*}. In Section 2.3, we will explain in more detail why the multiplier used in [11] was not really adapted to the case of eigenvalues λ_k behaving roughly like k^α with α ≠ 2 and how we managed to improve it.
3. As we will see later (see notably Figures 1 and 3), we do not improve the upper bounds given in [23] in the case of the classical heat and Schrödinger equations. However, we will also see that there is a wide range of families of eigenvalues for which our estimates are better than those of [11]; notably, when α → ∞ we dramatically improve the results of [11] and come quite close to the lower bounds of [12].
2. State of the art. As far as the author knows, the study of the behavior of fast controls for partial differential equations started with the paper [21], where an upper bound for the cost of fast boundary control for the one-dimensional heat equation was given. Then, a lower bound was given in [4], proving that the cost of fast controls necessarily has to be roughly (up to fractional terms in T) of the form exp(K/T) for some K > 0 as T → 0. The same result for the boundary control of one-dimensional Schrödinger equations was proved later in [15]. The next natural question is then to try to estimate K precisely, and notably its dependence with respect to the geometry of the problem. In the case of the heat equation, if we call L the length of the interval on which we control our one-dimensional equation, explicit bounds on K in terms of L are known: the best upper bound was obtained in [23], whereas the lower bound was obtained in [12]. For the Schrödinger equation, the upper bound was obtained in [23] and the lower bound in [15].
Let us now explain in more detail what was precisely done in [11]. In this article, the author proved precise upper bounds concerning the cost of the control for some large classes of linear parabolic or dispersive equations when the time T goes to 0, where the underlying "elliptic" operator was chosen to be self-adjoint or skew-adjoint with eigenvalues behaving roughly like Rk^α or ±Rk^α (the latter only for (2)) for some R > 0 and α ≥ 2 as k → +∞. The cost of the control was proved to be bounded from above by exp(K/(RT)^{1/(α−1)}), where K is some explicit constant depending on α. This does not cover all possible cases, because it is well-known that equations like (1) and (2) are controllable in arbitrarily small time if and only if α > 1.
Explicit lower bounds were later derived in [12] for all fractional heat or Schrödinger equations that are controllable in small time. Let us mention that in the case of the Schrödinger equation (which corresponds to λ_k = k² and Equation (2)), the lower bound proved in [12] is exactly the same as the one given by Miller in [15], whereas the lower bound for the heat equation controlled on the boundary (which corresponds to the case λ_k = k² and Equation (1)) is twice the one obtained by Miller in [16] and was conjectured to be the exact behavior of the cost of the control until now.
To finish, let us mention that the study of the behavior of fast controls may also be applied in some cases to study the uniform controllability of convection-diffusion equations in the vanishing viscosity limit, as explained in [9] and [10]; however, the results given in the present article do not enable us to deduce directly new results for this problem. In all what follows, f ≲ g (with f and g some complex-valued functions depending on some variable x in some set S) means that there exists some constant C > 0 such that for all x ∈ S one has |f(x)| ≤ C|g(x)| (such a C is called an implicit constant in the inequality f ≲ g), and f ≍ g means that we have both f ≲ g and g ≲ f. Sometimes, when needed, we will detail the dependence of the implicit constants with respect to some parameters.
Theorem 2.1. Assume that (λ_n)_{n≥1} is a regular increasing sequence of positive numbers verifying moreover that there exist some α > 1 and some R > 0 such that (3) holds, and assume that b_n ≍ 1 (in the sense that the sequence (|b_n|)_{n∈N*} is bounded from below and from above by positive constants). Then System (2) is null controllable in arbitrarily small time and the cost of the control C_T verifies for T small enough (the implicit constants in the following inequalities might depend on α or R but not on T):
1. If α ≥ 2, then
2. If α ∈ (1, 2), then
We also have a theorem in the dispersive case when the eigenvalues are no longer supposed to be positive.
Theorem 2.2. Assume that the increasing sequence (λ_n)_{n∈Z*} of eigenvalues of A is a regular sequence of non-zero numbers verifying moreover that there exist some α > 1 and some constant R > 0 such that (6) holds, and assume that b_n ≍ 1. Then System (2) is null controllable in arbitrarily small time and the cost of the control C_T verifies for T small enough (the implicit constants might depend on α and R but not on T):
1. If α ≥ 2, then
Concerning the parabolic case, we obtain the following result.
Theorem 2.3. Assume that (λ_n)_{n≥1} is a regular increasing sequence of positive numbers verifying moreover that there exist some α > 1 and some constant R > 0 such that (3) holds. Assume that b_n ≍ 1. Then System (1) is null controllable in arbitrarily small time. Moreover, the control can be chosen in the space C^0([0, T], U) and the cost of the control C_T (in L^∞(0, T) norm, hence also in L²(0, T) norm) verifies for T small enough (the implicit constant might depend on α and R but not on T)
Remark 1.
1. Concerning the asymptotic behavior of C_S(α), we obtain by easy Taylor expansions that
2. We observe that for α = 2, the cost of the control is bounded by e^{K/(RT)} for every K > C_S(2) = 2π², which is worse than [23], where the same result was proved for every K > 3π²/2. However, we see on Figure 1 (which compares the upper bound C_S(α) and the one found in [11] in the case α ≥ 2) that our new bound becomes better for α ≥ 2.76 and is linearly better as α → ∞.
Figure 1. Difference between C_S(α) and the upper bound of [11] with respect to α.

Another interesting comparison can be made between the upper bound C_S(α) and the lower bound given in [12]; this is done in Figure 2. We can see that our upper bound becomes close to the lower bound of [12] when α → ∞; more precisely, the difference between the two quantities converges to 1 + ln(2). However, the upper bound is very far from the lower bound as α → 1⁺.
1. Concerning the asymptotic behavior of C_H(α), we obtain by easy Taylor expansions that
Figure 2. Difference between C_S(α) and the lower bound of [12] with respect to α.
2. We observe that for α = 2, the cost of the control is bounded by e^{K/(RT)} for every K > C_H(2) = π², which is worse than [23], where the same result was proved for every K > 3π²/4. However, we see on Figure 3 (which compares the upper bound C_H(α) and the one found in [11] in the case α ≥ 2) that our new bound becomes better for α ≥ 2.221 and is linearly better as α → ∞.
Figure 3. Difference between C_H(α) and the upper bound of [11] with respect to α.
3. To finish, let us compare this upper bound with the lower bound given in [12]; this is done in Figure 4. We can see that our upper bound becomes close to the lower bound of [12] when α → ∞; more precisely, the difference between the two quantities tends to 1. However, the upper bound is very far from the lower bound as α → 1⁺.
Figure 4. Difference between C_H(α) and the lower bound of [12] with respect to α.

Application to fractional heat or Schrödinger equations.
with associated normalized eigenvectors. Thanks to the continuous functional calculus for positive self-adjoint operators, one can define any positive power of −∆. Let us define our "boundary" control accordingly (for more explanations, see notably [11, Section 3]), with control space K = R (for (7)) or K = C (for (8)). Let us consider here some γ > 1 and two different equations: one particular case of (1), the fractional heat equations (7), and one particular case of (2), the fractional Schrödinger equations (8). Equation (7) is often used to model anomalously fast or slow diffusion (see for example [14]), whereas (8) was introduced to study the energy spectrum of a 1-D fractional oscillator or some fractional Bohr atoms (see for example [5]).
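As a small numerical sketch of the functional calculus just mentioned (assuming the standard Dirichlet setting on (0, L), with sine eigenfunctions and eigenvalues (kπ/L)², which this excerpt does not spell out), (−∆)^γ acts diagonally on the eigenbasis:

```python
import numpy as np

# Spectral definition of the fractional Laplacian (-Δ)^γ on (0, L) with
# Dirichlet conditions (assumed setting): it acts diagonally on the sine
# basis e_k(x) = sqrt(2/L) sin(kπx/L), multiplying the k-th coefficient
# of a function by ((kπ/L)²)^γ.
L, gamma, N = 1.0, 1.5, 64
lam = (np.arange(1, N + 1) * np.pi / L) ** 2      # eigenvalues of -Δ

def frac_laplacian(coeffs, g):
    """Apply (-Δ)^g to a function given by its sine coefficients."""
    return coeffs * lam[: len(coeffs)] ** g

# sanity check: for γ = 1 applied to f = e_3 we must recover -f'' = λ_3 f
c = np.zeros(N)
c[2] = 1.0                                        # coefficients of f = e_3
out = frac_laplacian(c, 1.0)
out_frac = frac_laplacian(c, gamma)               # eigenvalue λ_3^γ for γ = 1.5
print(out[2], lam[2])
```

For Equations (7) and (8) this means the abstract eigenvalues are λ_k = ((kπ/L)²)^γ, which grow like k^{2γ}, so the exponent α of the abstract theorems equals 2γ.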
As explained in detail in [11], the above equations exactly fit our abstract setting, and we directly deduce from our main theorems the following results. System (8) is null controllable in arbitrarily small time and the cost of the control C_T verifies for T small enough:
1. If γ ≥ 2, then
2. If γ ∈ (1, 2), then
System (7) is null controllable in arbitrarily small time. Moreover, the control can be chosen in the space C^0([0, T], U) and the cost of the control C_T verifies for T small enough (the implicit constants in the previous inequalities might depend on γ and L but not on T).

2.3.
Gevrey functions, the moment method and the Bray-Mandelbrojt construction. In this "heuristic" section, our goal is to give some informal explanations on the strong link between Gevrey functions and the moment method of Fattorini and Russell [3] that will be used here to prove our theorems. Let us consider the parabolic case (1), the dispersive case being quite similar. Let us decompose our initial condition on our Hilbert basis of eigenfunctions:

y_0 = Σ_{k∈N*} a_k e_k,

with (a_k)_{k∈N*} ∈ l²(N*). Then, using the notations of the previous section, it is well-known that imposing y(T, ·) = 0 is equivalent to imposing that for every k ∈ N*, one has

b_k ∫_0^T e^{−λ_k(T−t)} u(t) dt = −a_k e^{−λ_k T},

the right-hand side being in l²(N*) as soon as b_k ≍ 1.
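For intuition, a biorthogonal family to finitely many such exponentials can be computed by inverting their Gram matrix in L²(0, T). This brute-force sketch (with the illustrative choices λ_k = k², T = 1, and four modes, none of which comes from the paper) is not the multiplier construction used here, but it shows both the biorthogonality and the ill-conditioning that the moment method has to fight:

```python
import numpy as np

# Toy biorthogonal family for the exponentials e_k(t) = exp(-λ_k (T - t)),
# k = 1..4, in L²(0,T). Their Gram matrix G_{km} = ∫_0^T e_k e_m dt has a
# closed form, and ψ_l := Σ_m (G⁻¹)_{lm} e_m satisfies ⟨e_k, ψ_l⟩ = δ_{kl}.
# G becomes terribly ill-conditioned as the number of modes grows, which
# mirrors the blow-up of the control cost as T → 0.
T = 1.0
lam = np.array([1.0, 4.0, 9.0, 16.0])                 # λ_k = k² (illustrative)

S = lam[:, None] + lam[None, :]
G = (1.0 - np.exp(-S * T)) / S                        # exact Gram matrix
C = np.linalg.inv(G)                                  # biorthogonal coefficients

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz
t = np.linspace(0.0, T, 100_001)
E = np.exp(-lam[:, None] * (T - t[None, :]))          # rows: e_k(t)
psi = C @ E                                           # rows: ψ_l(t)
check = np.array([[trapz(E[k] * psi[l], t) for l in range(4)]
                  for k in range(4)])
print(np.round(check, 4))                             # ≈ 4×4 identity matrix
```

With such a family in hand, the control is a finite combination of the ψ_l weighted by the right-hand sides of the moment equations; the whole difficulty of the infinite-dimensional problem is to bound ||ψ_l|| sharply, which is what the multiplier construction below achieves.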
Hence, if we assume that we are able to exhibit a bi-orthogonal family to {t ↦ e^{−λ_n(T−t)}} in L²(0, T), i.e. a family of functions {ψ_m}_{m∈N*} such that for every (k, l) ∈ (N*)² one has ⟨e^{−λ_k(T−·)}, ψ_l⟩_{L²(0,T)} = δ_{kl}, then one can use as a control function the one given in (10). A usual way (but not the only one, see notably [1]) to construct the family {ψ_k} is to use the Paley-Wiener Theorem (see for example [20, Theorem 19.3, Page 370]), which says that ψ_k can be constructed (after some adequate translation) as the inverse Fourier transform of an L²(R)-function J_k of exponential type T/2 verifying moreover J_k(iλ_l) = δ_{kl}. As we will see during the proof, finding such a function J_k can be decomposed into two main steps:
• First, exhibit some family of Weierstrass products Ψ_k involving the eigenvalues and verifying Ψ_k(iλ_l) = δ_{kl} (see notably Lemma 3.1).
• Then, multiply Ψ_k by some adequate function M_k (called multiplier) so that the product J_k := Ψ_k M_k is of exponential type T/2 and belongs to L²(R). Moreover, M_k has to verify the normalization condition (11).
The crucial point is that the behavior as x → ∞ of the Weierstrass product is very bad and can be proved to be exactly of the form exp(K|x|^{1/α}) for some constant K > 0 if we assume that λ_k ≍ k^α for some α > 1. Hence, we observe that if we want to obtain a function J_k which is in L²(R), it is necessary that the multiplier M_k behaves at least like exp(−K|x|^{1/α}) at infinity. This exactly means that M_k has to be the Fourier transform of some function G_k which is Gevrey of order α, i.e. verifying, for some constant R > 0,

|G_k^{(j)}(x)| ≤ C_Gev(α, T, R) R^j (j!)^α for every j ∈ N and x ∈ R   (12)

(see for example [19, Section 1.6, Page 30]). Moreover, as will be observed during the proof of our theorems, an important point is that the constant R determines in an optimal way the exact behavior of the exponential decrease of M_k (i.e. the constant K). Hence, imposing here that M_k compensates quite "exactly" the growth of Ψ_k means that this constant R in (12) is totally imposed by the data of our problem. To finish, since we want M_k to be of exponential type T/2, applying the Paley-Wiener Theorem once more, this is equivalent to saying that G_k has to be of compact support [−T/2, T/2]. Using some estimates made precise later, the function ψ_k will also be bounded (up to constants) by C_Gev(α, T, R), and then using (10) we deduce that C_H(α, T) is also bounded by C_Gev(α, T, R). Hence, if we want to obtain a precise bound on C_H(α, T), it is necessary to do everything possible to obtain some constant C_Gev(α, T, R) which is as small as possible. Let us also mention that the normalization condition (11) can easily be replaced by the condition ∫|G_k| = 1 (together with some suitable inequality on G_k(iλ_k) that we do not detail here for the sake of clarity, see notably (61)).
Taking into account all these remarks, the rules of the game can be summarized as follows.
Optimizing the cost of the control is closely related to the following problem: given some T > 0, construct a Gevrey function G of order α, with support equal to [−T/2, T/2], verifying ∫|G| = 1, with imposed coefficient R appearing in the growth of the derivatives, and minimizing the quantity C_Gev(α, T, R) appearing in (12) (which of course has to explode as T → 0).
Let us now explain a possible way to construct Gevrey functions (or, more generally, C^∞ functions) with compact support. This construction was first described by Szolem Mandelbrojt in [13, Section 13], but in this book it is mentioned that the construction comes from previous unpublished works of Hubert Bray, whence the name "Bray-Mandelbrojt construction" adopted here. The idea is to use repeated mean values of functions, i.e. to take an infinite convolution product of rectangle functions on [−a_k, a_k], where (a_k)_{k≥0} is supposed to be a summable sequence of positive numbers (whose sum will exactly be the half-length of the support of the final function). This construction can also be found in [6]. Let us mention that, since the Fourier transform of a rectangle function is exactly some cardinal sine function, and since the Fourier transform changes convolutions into products, this construction is totally equivalent to the one we can find notably in [16, Proof of Lemma 4.4], which is also used in [15] and goes back to the work of Ingham in [8]. Our construction is also very similar to the one used in [22] (we refer to [18] for extra explanations on the usefulness of this construction). To finish, let us mention that in [23], the authors used a particular Gevrey function, depending on some large parameter ν > 0 to be chosen, to construct the multiplier. It can be proved that this function is a Gevrey function of order exactly 2. Notably, it is not Gevrey of order α for α ∈ (1, 2) (but it is a Gevrey function of order α for every α > 2), and this explains why in [11] (where the same multiplier was used) we were not able to treat the case α ∈ (1, 2) and why our estimates were quite bad for large α.
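The repeated mean-value process can be sketched numerically; the widths a_k below are an arbitrary summable choice for illustration, not the sequence (44) constructed later:

```python
import numpy as np

# Bray-Mandelbrojt process, numerically: repeatedly convolve normalized
# rectangle kernels H_{a_k} = 1_{[-a_k, a_k]} / (2 a_k). The limit is a
# smooth bump supported exactly on [-Σ a_k, Σ a_k] with total mass 1.
# The widths a_k below are an illustrative summable sequence only.
dx = 1e-3
x = np.arange(-1.0, 1.0 + dx / 2, dx)
a = [0.5 / (k + 1) ** 2 for k in range(8)]
half_support = sum(a)                                 # ≈ 0.764 here

def H(width):
    return np.where(np.abs(x) <= width, 1.0 / (2.0 * width), 0.0)

u = H(a[0])
for w in a[1:]:
    u = np.convolve(u, H(w), mode="same") * dx        # u ← u * H_w

mass = u.sum() * dx                                   # stays ≈ 1
tail = np.abs(u[np.abs(x) > half_support + 0.02]).max()
print(round(mass, 3), tail)                           # mass ≈ 1, tail = 0
```

Each convolution with H_{a_k} gains one degree of smoothness, at the price of a factor 1/a_k in the sup norm of the corresponding derivative; this trade-off is exactly what governs the Gevrey order of the limit.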

3.1.
Estimates on the Weierstrass product involving the eigenvalues. As usual when the moment method is concerned, we first study the asymptotic behavior for large z ∈ C of the Weierstrass product constructed from the family of eigenvalues (λ_n)_{n∈N*}.
Lemma 3.1. Let (λ_n)_{n≥1} be a regular increasing sequence of positive numbers verifying moreover that there exist some α > 1 and some constant R > 0 such that (3) holds.
where P is a polynomial. (In the previous inequalities, the implicit constant may depend on α and R but not on z, x or n.)
Remark 3. One can see numerically, by taking the particular case λ_k = k^α, that inequalities (15) and (16) are optimal. However, as explained in [11, Remark 3], we are unable to extend inequality (15) to the case α ∈ (1, 2), where it seems numerically not to be true anymore, but it is likely that estimate (14) is far from being optimal (notably because of the gap, as α → 2⁻, with what we have in (15) for α = 2).
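To see the exp(K|x|^{1/α}) growth concretely, one can test the model case λ_k = k² (so α = 2), where the classical identity ∏_{k≥1}(1 + x/k²) = sinh(π√x)/(π√x) gives growth exp(π√x); this is a toy check, not the precise constants of (14)-(16):

```python
import math

# Model case λ_k = k² (α = 2): the Weierstrass-type product
#   P(x) = ∏_{k≥1} (1 + x/λ_k)
# grows like exp(π √x) = exp(K x^{1/α}), by the classical identity
#   ∏_{k≥1} (1 + x/k²) = sinh(π√x) / (π√x).
def log_product(x, kmax=200_000):
    # log of the truncated product; the tail contributes about x/kmax
    return sum(math.log1p(x / (k * k)) for k in range(1, kmax + 1))

x = 100.0
lhs = log_product(x)
y = math.pi * math.sqrt(x)
rhs = y - math.log(2.0) - math.log(y)   # log(sinh(y)/y) for large y
print(lhs, rhs)                          # the two values agree closely
```

The multiplier M_k must therefore decay at least like exp(−π√x) along the real axis in this model case, which is precisely the decay a compactly supported Gevrey-2 function can provide.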
Proof of Lemma 3.1. Without loss of generality, we can assume that R = 1 (one can go back to the general case by an easy scaling argument).
Let us prove inequality (14) (inequality (15) was already proved in [11]); we go back to the computations done in [11, Proof of Lemma 2.1]. Using the change of variables t = x/v, and then expressions (19) and (20), we deduce (14), taking into account [11, Lemma 2]. Concerning (16), one verifies that the proof provided in [11, Page 2661] for α ≥ 2 is also valid for α ∈ (1, 2), because we can use (21), replacing α by 2α (> 1).
In the dispersive case, we also need some estimates in the case where the eigenvalues are no longer supposed to be positive.
Lemma 3.2. Let (λ_n)_{n∈Z*} be a regular increasing sequence of non-zero numbers verifying moreover that there exist some α > 1 and some constant R > 0 such that (6) holds.
Let Φ n be defined as follows:

PIERRE LISSY
Then:
1. If α ∈ (1, 2), then for every z ∈ C, where P is a polynomial.
2. If α ≥ 2, then for every z ∈ C,
(In the previous inequalities, the implicit constant may depend on α and R but not on z, x or n.)
This proposition was already proved in [11] for the case α ≥ 2. For α ∈ (1, 2), the proof is exactly the same as in the corresponding case of Lemma 3.1, combined with the computations made in the proof of [11, Lemma 2.4] (see Pages 2666−2668 in this reference), and will be omitted.

3.2.
Construction of adequate multipliers. As explained before, we now have to construct an adequate multiplier, which has to be the Fourier transform of a Gevrey function with compact support. Let us emphasize that the main contribution of this paper is the construction given in (44).
To begin, let us give some useful estimates concerning the Bray-Mandelbrojt construction of functions with compact support (see [13, Section 13]).
u is even and non-decreasing on [−a, 0], and for every j ∈ N one has
Proof of Proposition 1. We follow step by step the proof given in [6].
For every b > 0 we call H_b the normalized rectangle function H_b := (1/(2b)) 1_{[−b,b]}. Let us remark that ∫_R H_b = 1 and that H_b is even.
We then consider u_n := H_{a_0} * H_{a_1} * · · · * H_{a_n}, where * denotes the convolution product; u_n is of class C^{n−1}. Moreover, using (31), the fact that for regular enough functions u and v one has (u * v)' = u' * v and ||u * v||_∞ ≤ ||u||_∞ ||v||_1, we deduce, taking into account (34), the announced bounds on the derivatives of u_n. Let us also remark that u_n is even and verifies ∫_{−a}^{a} u_n(x) dx = 1, since each H_{a_k} has integral 1. Let us now prove by induction that u_n is non-decreasing on [−a, 0] for every n ∈ N. It is true for n = 0 and n = 1. Let us assume that u_n is non-decreasing on [−a, 0]. Computing the derivative of u_{n+1}, we obtain

u'_{n+1}(x) = (u_n(x + a_{n+1}) − u_n(x − a_{n+1})) / (2 a_{n+1}).   (38)

Let x ∈ [−a, 0]. We distinguish two cases:
1. If x ≤ −a_{n+1}, then x − a_{n+1} ≤ x + a_{n+1} ≤ 0; hence, by using the fact that u_n is non-decreasing on [−a, 0], it is clear that u_n(x + a_{n+1}) ≥ u_n(x − a_{n+1}), and then by (38), u'_{n+1}(x) ≥ 0.
2. If x ≥ −a_{n+1}, then a_{n+1} − x ≥ a_{n+1} + x ≥ 0. Using the fact that u_n is even, we know by induction that u_n is non-increasing on [0, a]; hence we deduce that u_n(x + a_{n+1}) ≥ u_n(a_{n+1} − x) = u_n(x − a_{n+1}), and then by (38), u'_{n+1}(x) ≥ 0.
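The difference-quotient identity at the heart of this argument can be checked numerically on a smooth stand-in for u_n (the Gaussian and the width a = 0.1 below are arbitrary illustrative choices):

```python
import numpy as np

# Check the key identity behind the derivative bounds in the proof:
# if v = u * H_a, with H_a = 1_{[-a,a]}/(2a), then
#   v'(x) = (u(x + a) - u(x - a)) / (2a),
# so each convolution step costs at most a factor 1/a in the sup norm
# of the derivative. Numerical sketch on a smooth test function.
a = 0.1
x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
u = np.exp(-x**2)                               # smooth stand-in for u_n

H = np.where(np.abs(x) <= a, 1.0 / (2 * a), 0.0)
v = np.convolve(u, H, mode="same") * dx         # v = u * H_a

lhs = np.gradient(v, dx)                        # v' computed numerically
rhs = (np.interp(x + a, x, u) - np.interp(x - a, x, u)) / (2 * a)
err = np.abs(lhs - rhs)[200:-200].max()         # ignore the grid edges
print(err)                                      # small discretization error
```

Iterating the identity shows ||u_n^{(j)}||_∞ picks up at most a factor 1/a_k per convolution step, which is the source of the (j!)^α-type growth once the a_k are chosen to decay appropriately.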
We conclude as in [6] by letting n → ∞, u being the limit (which exists) of the u_n.
According to Section 2.3 and to inequality (29) (which has to be verified for every n), we see that the problem of minimizing what we called C_Gev(α, T, R) can, in the case of the Bray-Mandelbrojt construction, be reformulated as a constrained optimization problem over the admissible sequences (a_k), for some parameter ν > 0 (large compared to a) to be chosen later. This problem seems quite difficult: we do not know if a solution exists and, should no solution exist, we do not know how to characterize the minimizing and maximizing sequences. As a toy model, we propose to investigate the behavior of a simpler functional f on the set A_adm, and notably to investigate whether there exist some critical points. One has ∂f/∂a_i((a_k)) = 1/a_i.
We want to study f under the constraint g((a_k)) = a (the constraint a_k > 0 can be forgotten here). We also do not need to use here that we want (a_k)_{k∈N} to be non-increasing.
We remark that, for every j ∈ N, we have ∂L/∂a_j = 0 if and only if a_j = λ according to (40) and (42); hence the sequence (a_j) has to be constant. If we take into account the fact that we want the constraint g((a_k)) = a to be verified, we obtain a contradiction. Hence, there does not exist any critical point of the Lagrangian. One might then have the idea to "mimic" in some sense constant sequences for the initial optimization problem (without being sure, however, that this will yield a maximizing sequence). A good idea would be to consider some sequence (a_k)_{k∈N} which is constant at least for small k and then decreases in some suitable way so that it becomes summable.
Taking into account this remark, we set ν > 0 some parameter (destined to compensate the bad growth of the Weierstrass product (13), see (60), and to be very large, so we will always assume ν ≥ C > 0), and we consider the sequence (a_k)_{k∈N} defined by (44).
Remark 4. The second part of the construction (44) may seem quite arbitrary; however, the author did not find a more suitable construction. It might be possible that another appropriate non-increasing sequence would enable one to improve the bounds given in this article.
We consider the corresponding function σ_ν constructed as in the proof of Proposition 1 from the sequence (a_k)_{k∈N}. We call a(ν) its support, that is to say a(ν) := Σ_{k∈N} a_k, so that supp σ_ν = [−a(ν), a(ν)]. It may seem quite strange that we do not impose the support to be [−T/2, T/2] here; however, for the sake of clarity, we prefer to adjust the support thanks to the parameter β to be introduced later (see (59)). Taking into account the definition of a(ν), we may observe that a(ν) does not depend too much on ν, in the sense that it stays close to α/(α − 1) (see (46)). Let us remark that σ_ν is even. Then, using (29), we can deduce the following crucial estimate.
Proof of Lemma 3.3. Using the expression of a_k given in (44) and (51), we deduce a bound valid for all j ≥ ν. Combining it with (66) and (64), we deduce

|H_β(x)| ≲ e^{αν} (eν)^{−αj} j^{α/2} ≲ e^{αν} e^{−α(βν^{α−1} x)^{1/α} + (δ/(2 sin(π/α))) x^{1/α}},

which proves the desired estimate.
Let us now prove (61). Since σ_ν ≥ 0, and using (50) together with (49) and (47), we deduce that for ν large enough (which is equivalent to T small enough, see (70) and (71)) the required lower bound holds, which gives the desired result thanks to (68).
Proof of Theorem 2.1. The proof follows that of [23, Theorems 3.1 and 3.4]. We still assume without loss of generality that R = 1. Let us first consider the dispersive case (Equation (2)) and the case α ≥ 2. Let δ > 0 be a small enough parameter. We set g_n(z) := Φ_n(−z − λ_n) H_β(z + λ_n), so that one has g_n(−λ_k) = δ_{kn} by (58) and (13). We want to apply the Paley-Wiener Theorem at the end (see estimate (59)) in an optimal way, so we want a(ν)β to be close to T/2. From now on, we will always consider ν large enough such that (see (46))

|a(ν) − α/(α − 1)| ≤ δ/4.