Semiglobal exponential stabilization of nonautonomous semilinear parabolic-like systems

It is shown that an explicit oblique projection nonlinear feedback controller is able to stabilize semilinear parabolic equations, with time-dependent dynamics and with a polynomial nonlinearity. The actuators are typically modeled by a finite number of indicator functions of small subdomains. No constraint is imposed on the sign of the polynomial nonlinearity. The norm of the initial condition can be arbitrarily large, and the total volume covered by the actuators can be arbitrarily small. The number of actuators depends on the operator norm of the oblique projection, on the polynomial degree of the nonlinearity, on the norm of the initial condition, and on the total volume covered by the actuators. The range of the feedback controller coincides with the range of the oblique projection, which is the linear span of the actuators. The oblique projection is performed along the orthogonal complement of a subspace spanned by a suitable finite number of eigenfunctions of the diffusion operator. For rectangular domains, it is possible to explicitly construct/place the actuators so that the stability of the closed-loop system is guaranteed. Simulations are presented, which show the semiglobal stabilizing performance of the nonlinear feedback.


Introduction
Nonlinear parabolic equations appear in many models of real-world evolution processes. Therefore, the study of such equations is important for real-world applications. In particular, it is of interest to know whether it is possible to drive the evolution to a given desired behavior or whether it is possible to stabilize such an evolution process, by means of suitable controls. The simplest model involving parabolic equations is the heat equation, modeling the evolution of the temperature in a room [22, Chapitre II]. Parabolic equations also appear in models for population dynamics [4,15], traffic dynamics [41], and electrophysiology [42].
Usually, controlled parabolic equations can be written as a nonautonomous evolutionary system in the abstract form

ẏ + Ay + A_rc(t)y + N(t, y) = Σ_{i=1}^{M} u_i(t)Ψ_i,    y(0) = y_0,    (1.1)

where y is the state, y_0 and Ψ_i, i ∈ {1, 2, . . . , M}, are given in a Hilbert space H, and u(t) = (u_1, . . . , u_M)(t) is a control function at our disposal, taking values in R^M. The linear operator A is a diffusion-like operator and the linear operator A_rc is a time-dependent reaction-convection-like operator. The operator N is a time-dependent nonlinear operator. The general properties required of A, A_rc, and N will be made precise later on.
In the linear case, N = 0, it has been proven in [31] that the closed-loop system

ẏ + Ay + A_rc(t)y − K^{U_M}_{F,M}(t, y) = 0,    y(0) = y_0 ∈ H,    (1.2)

is exponentially stable. It is not difficult to see that we can follow the arguments in [31, Thms. 3.5, 3.6, and Rem. 3.8] to conclude that system (1.2) is still stable if we replace (1.4) by F(y) = Ay + λ1y.
Observe that (1.5) concerns a single M ∈ N and a single pair (U_M, E_M). The following result follows straightforwardly from the sufficiency of (1.5).

Theorem 1.2. Suppose we can construct a sequence (U_M, E_M)_{M∈N} so that the norm of the associated oblique projections, |P^{E⊥_M}_{U_M}|_{L(H)} ≤ C_P, remains bounded, with C_P > 0 independent of M. Then system (1.2) is globally exponentially stable for large enough M, with F(y) ∈ {λ1y, Ay + λ1y}.
Our main goal is to prove that an analogous explicit feedback allows us to semiglobally stabilize nonlinear systems such as (1.1), for a suitable class of nonlinearities. We underline that we shall not assume any condition on the sign of the nonlinearity N, which means that the uncontrolled solution may blow up in finite time. For results concerning blow up of solutions, see [7,34,36]. In particular, this means that we will have to guarantee that the controlled solution does not blow up, which is a nontrivial task. This is a problem we do not meet when dealing with linear systems, because solutions of linear systems do not blow up in finite time.
In the linear case, the number M of actuators that allow us to stabilize the system does not depend on the initial condition, while in the nonlinear case it does. We shall prove that M depends only on a suitable norm of the initial condition; this dependence is what motivates the terminology "semiglobal stability" used throughout the paper.
For nonlinear systems, previous results in the related literature are concerned with local stabilization, and such results are often derived through a suitable nontrivial fixed point argument. In that situation the feedback operator is linear and globally stabilizes the linearized system, with N = 0. In general, such a linearization-based feedback will be able to stabilize the nonlinear system only if the initial condition is small enough, in a suitable norm. Here, in order to cover arbitrarily large initial conditions, and thus obtain the semiglobal stabilization result for (1.1), we will use a nonlinear feedback operator. Instead of starting by constructing a feedback stabilizing the linearized system, we deal directly with the nonlinear system, showing that the corresponding closed-loop system is stable, provided the initial condition is in the ball {v ∈ V | |v|_V < R} and the pair (U_M, E_M) satisfies a suitable "nonlinear version" of (1.5). The number M of actuators needed to stabilize the system will (or may) increase with R. A precise statement of the main stability result concerning a single pair (U_M, E_M), together with a "nonlinear version" of the sufficient stability condition (1.5), is given hereafter, once we have introduced some notation and terminology. A consequence of that result will be the following "nonlinear version" of Theorem 1.2. The operator choice F(y) = λ1y, used in previous works for linear systems, will not necessarily satisfy the assumptions hereafter (Assumption 3.6, in particular). That is, we cannot guarantee (from our results) that such a choice will semiglobally stabilize the nonlinear system. To better understand the differences between the two choices, we will consider a general operator F(y) = F_M(P_{E_M} y) depending only on the orthogonal projection P_{E_M} y of the state y in H onto E_M.
1.2. Motivation and short comparison to previous works. We find systems in form (1.1) when, for example, we want to stabilize a system to a trajectory ẑ.
That is, suppose ẑ solves the nonlinear system (d/dt)ẑ + Aẑ + f(ẑ) = 0, ẑ(0) = ẑ_0, and that ẑ has suitable desired properties (e.g., it is essentially bounded and regular). In many situations, it may happen that the solution issued from a different initial condition z_0 presents a nondesired behavior (e.g., not remaining bounded, or even blowing up in finite time). In such a situation, we would like to find a control so that the controlled solution approaches the desired behavior ẑ. More precisely, we would like to have

|z(t) − ẑ(t)|_H ≤ Ce^{−μt}|z(0) − ẑ(0)|_H,    (1.8)

for some normed space H. Now we observe that the difference y := z − ẑ satisfies a dynamics as (1.1), because from a Taylor expansion (for regular enough f) we may write f(z) − f(ẑ) =: A_rc(t)y + N(t, y), with A_rc(t) = (d/dz)f(ẑ) and with a remainder N(t, y). Notice that N vanishes if, and only if, f is affine; otherwise N(t, y) is nonlinear. Therefore, stabilizing (1.7) to the targeted trajectory is equivalent to stabilizing system (1.1) (to zero), because (1.8) reads |y(t)|_H ≤ Ce^{−μt}|y(0)|_H.
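For instance, for the cubic nonlinearity f(z) = z³ (an illustrative choice of ours, not from the text), the decomposition above reads:

```latex
f(z)-f(\hat z)=(\hat z+y)^3-\hat z^3
  =\underbrace{3\,\hat z(t)^2\,y}_{A_{rc}(t)\,y}
  +\underbrace{3\,\hat z(t)\,y^2+y^3}_{N(t,y)},
  \qquad y:=z-\hat z.
```

Here A_rc(t) = 3ẑ(t)² = (d/dz)f(ẑ) inherits its time dependence from the targeted trajectory, and the remainder N(t, y) collects the genuinely nonlinear terms, vanishing only when f is affine.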
In previous works on internal stabilization of nonautonomous parabolic-like systems including [11,14,29,30,46], the exact null controllability of the corresponding linearized systems (by means of infinite dimensional controls, see [17, 19-21, 23, 26, 57]) played a key role in the proof of the existence of a stabilizing control. See also [3] for the weakly damped wave equation. We would like to underline that for the proof of the stability of an oblique projection based closed-loop system, we do not need to assume the above null controllability result.
Our results are also true for the particular case of autonomous systems, which has been extensively studied. However, in such case other tools may be, and have been, used. Among such tools we have the spectral properties of the system operator A + A_rc. We refer to the works [6, 8-10, 12, 16, 24, 40, 43, 49] and references therein. See also the comments in [31, Sect. 6.5]. Finally we refer to the examples in [56], showing that in the nonautonomous case, the spectral properties of A + A_rc(t), at each time t ≥ 0, are not appropriate for studying the stability of the corresponding nonautonomous system.
Though we do not deal here with boundary controls, we refer to [44,48,50] for works on the stabilization of the Navier-Stokes equation, evolving in a bounded domain Ω ⊂ R³, to a targeted trajectory. In [44,48] the targeted trajectory is independent of time (autonomous case), while in [50] it is time-dependent (nonautonomous case). In [44] the global stability of the closed-loop is shown to hold in the L²-norm for at least one (not necessarily unique) appropriately defined "weak" solution. In [48] the local stability of the closed-loop system has been shown to hold in the Sobolev W^{s,2}-norm, with s ∈ (1/2, 1], and the solutions of the closed-loop system are more regular and unique. In [50] the local stability of the closed-loop system has been shown to hold in the W^{1,2}-norm and the solutions are unique. Recall that L² = W^{0,2} ⊃ W^{s_1,2} ⊃ W^{s_2,2}, for 0 < s_1 < s_2.
Our results can be used to conclude the semiglobal stability of nonautonomous oblique projection based closed-loop parabolic-like systems with internal controls, where semiglobal stability lies between local and global stability. The stability of the closed-loop system is shown to hold in the W^{1,2}-norm, and the solutions are unique. In previous results concerning local stability of parabolic systems, the control domain ω can be arbitrary and fixed a priori. For our results the volume of the support of the actuators can still be arbitrarily small and fixed a priori, but the support itself is not fixed a priori. See Section 2.2.
Finally, though we consider here the case of parabolic-like systems, and are particularly interested in the case where blow up may occur for the free dynamics and in the case where our control is finite-dimensional, the stabilization problem is still an interesting problem for other types of evolution equations, where blow up does not occur, like those conserving the energy and/or other quantities. For stabilization results (by means of infinite-dimensional control) for nonparabolic-like systems we refer the reader to [5,33,52,53] and references therein.
1.3. Computational advantage. We underline that the feedback operators in (1.3) and (1.6b) are explicit and the essential step in their practical realization involves the computation of the oblique projection. A classical approach to find a stabilizing feedback control is to compute the solution of the Hamilton-Jacobi-Bellman equation, which is known to be a difficult numerical task, related to the so-called "curse of dimensionality"; see, for example, the recent paper [27] (for the autonomous case), where the authors, in order to compute the Hamilton-Jacobi-Bellman feedback, need to approximate a parabolic equation by a 14-dimensional ordinary differential equation (previous works deal with even lower-dimensional approximations). This also means that standard discretization methods such as finite element approximations are not appropriate for computing the Hamilton-Jacobi-Bellman solution, because a 14-dimensional finite element approximation of a parabolic equation is hardly accurate enough. In the linear case (and with quadratic cost) the Hamilton-Jacobi-Bellman feedback reduces to the (algebraic) Riccati feedback. In this case finite element approximations can be used, but the computational effort increases considerably as we increase the number of degrees of freedom. For parabolic systems, the computation of the feedback in (1.3) and in (1.6b) is considerably cheaper, because the numerical computation of the oblique projection reduces to the inversion of a matrix Θ_M [51]. Note that the size of Θ_M is defined by the number M of actuators, and thus it is independent of the number of degrees of freedom of the space discretization; that is, computing Θ_M^{−1} does not become a harder task as we refine our discretization. Even in case we are able to compute an approximation of a Hamilton-Jacobi-Bellman based feedback control, such (approximated) feedback may not guarantee stabilization for arbitrary initial conditions, as reported in [27, Sect. 5.2, Test 2], though we likely obtain a neighborhood of attraction larger than that of the Riccati closed-loop system.
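To make the cost of this step concrete, here is a sketch (with our own illustrative 1D data: the grid, the actuators Psi_i, and the modes e_j are hypothetical choices, not the paper's) of computing the oblique projection onto the span of the actuators, along the orthogonal complement of the span of a few modes, by solving a single M × M linear system:

```python
import numpy as np

# Oblique projection P onto U_M = span{Psi_i} along E_M^perp, reduced to an
# M x M system (cf. the matrix Theta_M above). All concrete data here are
# illustrative choices, not taken from the paper.
n = 2000
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def ip(f, g):
    # L2(0,1) inner product, rectangle-rule approximation
    return float(np.dot(f, g) * dx)

M = 4
centers = (np.arange(M) + 0.5) / M
width = 0.1 / M                     # total actuator volume 0.1 (10% of the domain)
Psi = np.array([(np.abs(x - c) < width / 2).astype(float) for c in centers])
E = np.array([np.sqrt(2.0) * np.sin((j + 1) * np.pi * x) for j in range(M)])

# Writing P h = sum_i c_i Psi_i and imposing h - P h ⟂ e_j for all j gives
# the small system  Theta c = b,  with Theta_ji = (Psi_i, e_j), b_j = (h, e_j).
Theta = np.array([[ip(Psi[i], E[j]) for i in range(M)] for j in range(M)])

def oblique_project(h):
    b = np.array([ip(h, E[j]) for j in range(M)])
    return np.linalg.solve(Theta, b) @ Psi

h = np.exp(x) * np.sin(3 * np.pi * x)   # arbitrary test function
Ph = oblique_project(h)
print(max(abs(ip(h - Ph, E[j])) for j in range(M)) < 1e-10)  # residual ⟂ E_M
print(np.max(np.abs(oblique_project(Ph) - Ph)) < 1e-10)      # idempotency
```

The size of `Theta` is M × M regardless of how fine the grid is, which is the point made above: refining the space discretization does not make the projection harder to compute.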
Finally, the main idea behind solving the Riccati or Hamilton-Jacobi-Bellman equations is that of finding a feedback (closed-loop) stabilizing control or an optimal control, under the assumption/knowledge that a stabilizing (open-loop) control does exist. Instead, in this paper, the proof of existence of such a stabilizing control is included in the results.
1.4. Contents and general notation. The rest of the paper is organized as follows. In Section 2 we recall suitable properties of oblique projections, present an example of application of our results, and recall previous global and local exponential stability results, which are related to the problem we address in this manuscript. In Section 3 we introduce the general properties required of the operators A, A_rc, and N in (1.1), and also the properties required of the triple (U_M, E_M, F) defining the feedback operator. In Section 4 we prove our main result. In Section 5 we show that our results can be applied to the stabilization of semilinear parabolic equations with polynomial nonlinearities. In Section 6 we present the results of numerical simulations showing the performance of the proposed nonlinear feedback. Finally, the appendix gathers proofs of auxiliary results used in the main text.
Concerning the notation, we write R and N for the sets of real numbers and nonnegative integers, respectively, and we define R_r := (r, +∞) and R̄_r := [r, +∞), for r ∈ R, and N_0 := N \ {0}. For an open interval I ⊆ R and two Banach spaces X, Y, we write W(I, X, Y) := {y ∈ L²(I, X) | ẏ ∈ L²(I, Y)}, where ẏ := (d/dt)y is taken in the sense of distributions. This space is endowed with the natural norm |y|_{W(I,X,Y)} := (|y|²_{L²(I,X)} + |ẏ|²_{L²(I,Y)})^{1/2}. If the inclusions X ⊆ Z and Y ⊆ Z are continuous, where Z is a Hausdorff topological space, then we can define the Banach spaces X × Y, X ∩ Y, and X + Y, endowed with the norms |(a, b)|_{X×Y} := (|a|²_X + |b|²_Y)^{1/2}, |a|_{X∩Y} := |(a, a)|_{X×Y}, and |a|_{X+Y} := inf{|(a_X, a_Y)|_{X×Y} | (a_X, a_Y) ∈ X × Y, a = a_X + a_Y}, respectively. In case we know that X ∩ Y = {0}, we say that X + Y is a direct sum and we write X ⊕ Y instead.
The space of continuous functions from X into Y is denoted C(X, Y). We also consider the subspace of increasing continuous functions, defined on R̄_0 and vanishing at 0. Given a subset S ⊂ H of a Hilbert space H, with scalar product (·, ·)_H, the orthogonal complement of S is S^⊥ := {h ∈ H | (h, s)_H = 0 for all s ∈ S}. Given a sequence (a_j)_{j∈{1,2,...,n}} of real constants, n ∈ N_0, a_j ≥ 0, we denote a := max_{1≤j≤n} a_j.
Further, by C [a 1 ,...,an] we denote a nonnegative function that increases in each of its nonnegative arguments. Finally, C, C i , i = 0, 1, . . . , stand for unessential positive constants.

Preliminaries
We introduce/recall here specific notation and terminology concerning oblique projections and stability. To simplify the exposition, we denote by #Z ∈ N the number of elements of a given finite set Z ⊆ Y; see [25, Sect. 13]. For N ∈ N_0, #Z = N simply means that there exists a one-to-one correspondence from {1, 2, . . . , N} onto Z. Of course, #Z = 0 means that Z = ∅, the empty set. We also denote by P_N(Z) the collection of subsets of Z with exactly N elements. Now, instead of (2.1), we consider a more general sequence as follows: for each M ∈ N_0, the Mth term of each sequence is a #M_σ-dimensional space, dim E_{M_σ} = #M_σ = dim U_{#M_σ}, and the function σ_M selects the corresponding set M_σ ⊂ N_0 of eigenfunction indices. For a given M ∈ N_0, we will also need to underline two particular eigenvalues, defined in (2.3) as α_{M_σ} := max_{i∈M_σ} α_i and α_{M_σ+} := min_{i∈N_0\M_σ} α_i. Essentially, the results in [31] tell us that the corresponding linear closed-loop system is globally exponentially stable, with an explicit oblique projection feedback control operator, provided (2.5) holds true, which is a slightly relaxed version of (1.5). In case we also have that α_{M_σ+} → +∞ as M → +∞, then we also recover Theorem 1.2.

The norm of the oblique projection P^{E⊥_{M_σ}}_{U_{#M_σ}} remains bounded if we take #M_σ = M^d and construct the cartesian product actuators and eigenfunctions as follows: ω^×_j := {(x_1, x_2, . . . , x_d) ∈ Ω^× | x_n ∈ ω^n_{j_n}} and e^×_j(x_1, x_2, . . . , x_d) := ∏_{n=1}^{d} e^n_{j_n}(x_n), with the corresponding actuators 1_{ω^×_j}(x_1, . . . , x_d) = ∏_{n=1}^{d} 1_{ω^n_{j_n}}(x_n); and after ordering the eigenpairs (α_i, e_i) of −∆ + 1 in Ω^×, we can find σ_M so that (2.5) is satisfied. For nonrectangular domains Ω ⊂ R^d, with d ≥ 2, we do not know whether we can choose the actuators (as indicator functions) so that (2.5) is satisfied (again, in case the total volume of actuators is fixed a priori and arbitrarily small). This is an interesting open question. Numerical simulations in [31] and [32] show the stabilizing performance of a linear feedback K^{U_M}_{λ1,M} in a nonrectangular domain.
Remark 2.1. For nonlinear systems, to derive the semiglobal stability result hereafter we will also need that the ratio α_{M_σ}/α_{M_σ+} remains bounded. This is again satisfied for the choice above for rectangular domains; that is, for either boundary conditions the ratio α_{M_σ}/α_{M_σ+} remains bounded as M → +∞.
2.3. Global, local, and semiglobal exponential stability. We recall three different exponential stability concepts, in order to better explain our result. Let K ≥ 1, l > 0, and let H be a normed space. Let us consider the dynamics in (1.1), with a general feedback control operator F taken from a suitable class F.
Definition 2.2. Let us fix F ∈ F. We say that system (2.7) is globally (F, K, l, H)-exponentially stable if for arbitrary given y_0 ∈ H, the corresponding solution y_F is defined for all t ≥ 0 and satisfies |y_F(t)|²_H ≤ Ke^{−lt}|y_0|²_H.
Definition 2.3. Let us fix F ∈ F. We say that system (2.7) is locally (F, K, l, H)-exponentially stable if there exists ε > 0, such that for arbitrary given y_0 ∈ H with |y_0|_H < ε, the corresponding solution y_F is defined for all t ≥ 0 and satisfies |y_F(t)|²_H ≤ Ke^{−lt}|y_0|²_H.
Definition 2.4. Let us be given a class of operators F. We say that (2.7) is semiglobally (F, H)-exponentially stable if for arbitrary given R > 0, we can find F ∈ F, K ≥ 1, and l > 0, such that: for arbitrary given y_0 ∈ H with |y_0|_H < R, the corresponding solution y_F is defined for all t ≥ 0 and satisfies |y_F(t)|²_H ≤ Ke^{−lt}|y_0|²_H.
We will consider system (2.7) evolving in a Hilbert space H, which will be considered as a pivot space, H' = H. Let D(A) ⊂ H be the domain of the diffusion-like operator, denote V := D(A^{1/2}) ⊂ H, and denote its dual by V'. From the results in [31] we know that if N = 0 and (2.5) holds true, then there exist suitable constants C_1 ≥ 1, μ_1 > 0, and M > 0 so that system (2.7) is globally (F, C_1, μ_1, H)-exponentially stable. Note that A_rc ∈ L^∞((0, +∞), L(H, V')) is assumed in (2.5). If we (also) have that A_rc ∈ L^∞((0, +∞), L(V, H)), then we will (also) have strong solutions for system (2.7), which will lead to a smoothing property from H into V, holding for all s ≥ 0, with a suitable constant C_2 > 0, independent of s. Hence, by standard estimates (e.g., following [46, Sect. 3], see also [32, Sect. 4]), we can conclude that there is C_3 > 0 such that system (2.7), again with N = 0, is again globally exponentially stable, now in the V-norm. Afterwards, by a rather standard, still nontrivial, fixed point argument, we can derive that, for a suitable constant C_4 > 0, the perturbed system, with N ≠ 0, is locally exponentially stable, for a general class of nonlinearities N. Let us now consider the nonlinear feedback operator (cf. (1.6b)), and the class F defined in (2.10). We will prove that the closed-loop system (2.7) is semiglobally (F, V)-exponentially stable, with F as in (2.10) and under general conditions on the state operators A, A_rc, and N in (2.7), under general conditions on F_{M_σ}, and under a particular condition on the oblique projections P^{E⊥_{M_σ}}_{U_{#M_σ}}, i.e., under a suitable "nonlinear version" of condition (2.5) (see condition (3.7) hereafter). In other words, for arbitrary given R > 0 we want to find M ∈ N, M_σ ∈ P_{#M_σ}(N_0), and a set of #M_σ actuators spanning U_{#M_σ} such that the solution of system (2.7) satisfies (2.11).
The assumptions on the state operators, on the "partial feedback" F_{M_σ}, and on the oblique projection are given in the following sections. Such assumptions will lead to the following relaxed/generalized version of Theorem 1.3, with F_{M_σ} = A + λ1, whose proof is given in Section 4.5.
Theorem 2.5. Suppose we can construct a sequence (U_{#M_σ}, E_{M_σ})_{M∈N} so that both the norm |P^{E⊥_{M_σ}}_{U_{#M_σ}}|_{L(H)} ≤ C_P and the ratio α_{M_σ}/α_{M_σ+} ≤ Λ remain bounded, with C_P > 0 and Λ > 0 independent of M. Then, for arbitrary given R > 0 we can find M ∈ N large enough so that the solution of system (2.7), with F = K^{U_{#M_σ}}_{A+λ1,N}, satisfies (2.11), with (C_5, μ_2, M) independent of y_0. That is, system (2.7) is semiglobally (F, V)-exponentially stable.

Assumptions and mathematical setting
Here we present the mathematical setting and the sufficient conditions for stability of the closed-loop system.

3.1. Assumptions on the state operators. Let H and V be separable Hilbert spaces, with V ⊆ H. We will consider H as pivot space, H' = H.
From now on we suppose that V is endowed with the scalar product (y, z)_V := ⟨Ay, z⟩_{V',V}, which still makes V a Hilbert space. Therefore, A: V → V' is an isometry.
Further, A has compact inverse A^{−1}: H → D(A), and we can find a nondecreasing system of (repeated) eigenvalues (α_i)_{i∈N_0} and a corresponding complete basis of eigenfunctions (e_i)_{i∈N_0}. For every β ∈ R, the power A^β of A is defined by A^β y := Σ_{i∈N_0} α_i^β (y, e_i)_H e_i. For the time-dependent operators we assume the following, with ζ_{2j} + δ_{2j} < 1 and δ_{1j} + δ_{2j} ≥ 1.
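The spectral definition of A^β can be mirrored numerically. In the sketch below, a discrete 1D Dirichlet Laplacian stands in for the abstract diffusion operator A (an illustrative choice of ours, not the paper's operator), and the fractional power is applied through the eigen-decomposition:

```python
import numpy as np

# Spectral fractional power A^beta via eigen-decomposition of a discrete 1D
# Dirichlet Laplacian (illustrative stand-in for the abstract operator A).
n = 200
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2
alpha, E = np.linalg.eigh(A)        # eigenvalues alpha_i > 0, orthonormal eigenvectors e_i

def A_pow(beta, y):
    # A^beta y := sum_i alpha_i^beta (y, e_i) e_i
    return E @ (alpha**beta * (E.T @ y))

y = np.random.default_rng(0).standard_normal(n)
# Semigroup property A^{1/2} A^{1/2} y = A y, and inverse A^{-1} A y = y:
print(np.allclose(A_pow(0.5, A_pow(0.5, y)), A @ y))
print(np.allclose(A_pow(-1.0, A @ y), y))
```

The same routine gives V = D(A^{1/2}) norms, since |y|_V² = (A^{1/2}y, A^{1/2}y)_H in this setting.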
Examples. We can show that our Assumptions 3.1-3.4 on the linear and nonlinear operators are satisfied for parabolic equations evolving in a bounded smooth, or rectangular, domain Ω ⊂ R^d, d ∈ {1, 2, 3}, as those addressed in Section 5.
3.2. Auxiliary estimates for the nonlinear terms. Besides the assumptions on the state operators, presented in Section 3.1, we will also need assumptions on the triple (F_{M_σ}, E_{M_σ}, U_{#M_σ}), which defines the feedback operator. Beforehand, we need to present suitable estimates resulting from Assumption 3.4. These are the content of the following proposition, whose proof follows by direct computations; it is, however, not trivial and is given in the Appendix, Section A.1.
Recall the notation a := max_{1≤j≤n}{a_j}, for a sequence of constants a_j ≥ 0. We will also denote C^M_P := |P^{E⊥_{M_σ}}_{U_{#M_σ}}|_{L(H)}, which will not lead to ambiguity, as soon as the pair (E_{M_σ}, U_{#M_σ}) is fixed.
Proposition 3.5. If Assumptions 3.1, 3.2, and 3.4 hold true, then there are constants C_{N_1} > 0 and C_{N_2} > 0 for which the estimates (3.1) and (3.2) hold. Inequality (3.2) will be used to prove the existence of a solution for the closed-loop system, while (3.1) will be used to prove the uniqueness of the solution.
3.3. Assumptions on the oblique projection based feedback. We present here the assumptions on the triple (F_{M_σ}, E_{M_σ}, U_{#M_σ}). Observe that, from (2.8) and (2.9), the orthogonal projection q := P_{E_{M_σ}} y satisfies q̇ = −F_{M_σ}(q). (3.3) For the exponential stability of (2.8) we need q(t) to decrease exponentially to zero. We will also ask for integrability of q and q̇, as follows.
Finally, we present the assumptions involving P^{E⊥_{M_σ}}_{U_{#M_σ}}. Note that both U_{#M_σ} and E^⊥_{M_σ} are closed subspaces. Thus, the oblique projection P^{E⊥_{M_σ}}_{U_{#M_σ}} is well defined provided H = U_{#M_σ} ⊕ E^⊥_{M_σ}. In particular, by considering the feedback (1.3), we are necessarily assuming the following.
Recall that #M_σ = dim(U_{#M_σ}) = dim(E_{M_σ}). Recall also that Assumption 3.7 means that for every given h ∈ H there exists one, and only one, pair (u, Θ) ∈ U_{#M_σ} × E^⊥_{M_σ} such that h = u + Θ. Hence we simply take P^{E⊥_{M_σ}}_{U_{#M_σ}} h := u. The operator norm of an oblique nonorthogonal projection is strictly larger than 1. In particular, in case U_{#M_σ} = E_{M_σ} we have P^{E⊥_{M_σ}}_{U_{#M_σ}} = P_{E_{M_σ}}, the orthogonal projection onto E_{M_σ}, whose norm equals 1. Orthogonal projections P^{F⊥}_F will be denoted by P_F, for simplicity. We have the following properties, which are useful in the computations hereafter.
For further comments on oblique projections we refer to [31, Sect. 2.2] and [51, Sect. 3]. The next assumption is less trivial and it is the one that gives us the stability condition. In order to state the assumption we start by recalling the particular eigenvalues α_{M_σ} and α_{M_σ+}, defined in (2.3). Then we define suitable functions as follows. For a given triple γ = (γ_1, γ_2, γ_3) ∈ R³_0 with positive coordinates, and a given function q ∈ L^∞(R̄_0, E_{M_σ}), we define the quantities a_0, a_1, a_2, and h as in (3.5) and (3.6), where the constants C_rc and C_N are as in Assumptions 3.3 and 3.4, respectively.
Assumption 3.8. With r > 1 as in Assumption 3.6, we have that (3.7) holds true.
Remarks and examples. Note that Assumption 3.6 holds true with, for example, F_{M_σ} = A + λ1.
Of course it would also hold true with F_{M_σ} = λ1 if we did not ask for the constants therein to be independent of M_σ. Such independence is helpful to prove that, in particular situations as in Corollary 3.9 below, Assumption 3.8 will be satisfied for large enough M. It is also helpful to prove, later on, that the number of actuators depends only on the V-norm of the initial condition y(0) = q(0) + Q(0), with (q(0), Q(0)) ∈ E_{M_σ} × E^⊥_{M_σ} (cf. Thm. 2.5). Concerning Assumption 3.7, it is needed to define the oblique projection P^{E⊥_{M_σ}}_{U_{#M_σ}}, and it is not difficult to find actuators such that it holds true. What is not clear is whether we can find the actuators, for example a finite number of indicator functions 1_{ω_i} in the setting of parabolic equations, so that Assumption 3.8 also holds true. Indeed, recalling (3.6) and (3.5), and using Assumption 3.4, we obtain the estimate (3.8). Observe that from Assumption 3.6 we have the corresponding bounds on F_{M_σ}, which allow us to derive (3.8). Recall also that β_1 + β_2 ≥ 1 and r(ζ_1 + δ_1) + (η_1 + η_2)(ζ_2 + δ_2) ≥ 1.
Proof. We know that lim_{M→+∞} α_{M_σ+} = +∞; then, for fixed γ ∈ R³_0 such that a_0 > 0, and ε > 0, we see that (3.7) will be satisfied for large enough M, because 0 ≤ max{β_2 p, r η_2(ζ_2 + δ_2)p} < 1, due to Assumption 3.6. Note that the constant C in (3.8) is independent of α_{M_σ}; here the boundedness of the ratio α_{M_σ}/α_{M_σ+} is also used.
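To fix ideas on Assumption 3.6, consider the choice F_{M_σ} = A + λ1 on E_{M_σ}: the projected dynamics q̇ = −(A + λ1)q from (3.3) decays exponentially, with rate governed by the smallest eigenvalue involved plus λ. A minimal numerical sketch (the eigenvalues and initial data below are hypothetical, our own illustrative choices):

```python
import numpy as np

# Projected dynamics qdot = -(A + lambda) q on a finite-dimensional eigenspace;
# A acts diagonally there, so the solution is explicit. Illustrative data only.
alpha = np.array([1.0, 4.0, 9.0, 16.0])   # hypothetical eigenvalues alpha_i of A
lam = 2.0
q0 = np.array([1.0, -0.5, 0.25, 2.0])

def q(t):
    # exact solution q(t) = e^{-(A + lambda) t} q(0) in the eigenbasis
    return np.exp(-(alpha + lam) * t) * q0

# |q(t)| <= e^{-(alpha_min + lambda) t} |q(0)|: the slowest mode sets the rate.
for t in (0.0, 1.0, 2.0):
    bound = np.exp(-(alpha.min() + lam) * t) * np.linalg.norm(q0)
    print(np.linalg.norm(q(t)) <= bound + 1e-12)
```

This explicit decay is what makes the integrability requirements on q and q̇ above easy to meet for this choice of F_{M_σ}.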

Stability of the closed-loop system
Here we prove that system (2.7) is exponentially stable with the feedback in (2.9), provided the above assumptions are satisfied by the state operators and the triple (F Mσ , U #Mσ , E Mσ ).
The proof is given hereafter in Section 4.3, where the local stability of (4.3b) is reduced to the local stability of a suitable scalar ode system in the form ẇ = −C̃_1 w + C̃_2|w|^p_R w + h, (4.5) where C̃_1 > 0, C̃_2 > 0, and w takes its values in R; say, for some given τ > 0 we have w(t) ∈ R for t ∈ [0, τ).

4.2. Auxiliary ode stability results. Below, C̃_1 > 0 and C̃_2 > 0 are positive constants. We will look at (4.5) as a perturbation of the system ẇ = −C̃_1 w + C̃_2|w|^p_R w. (4.6)
Proposition 4.3. If C̃_1 − C̃_2|w_0|^p_R > 0, then the solution of system (4.6) satisfies (4.7), with ε := C̃_1 − C̃_2|w_0|^p_R > 0. The proof is straightforward. For the sake of completeness we give it in the Appendix, Section A.2.
Next, for the perturbed ode we have the following.
Lemma 4.4. Let p > 0, r > 1, and h ∈ L^r(R̄_0, R). If there exists ε > 0 such that inequality (4.8) is satisfied, then the solution w = w_h of system (4.5) satisfies estimate (4.9) for all t ≥ 0.
Proof. The linearization of system (4.6) around a constant function w̄, w̄(t) = w̄(0) ∈ R for all t ∈ R, reads ż = (−C̃_1 + C̃_2(p + 1)|w̄|^p_R)z, (4.10) which is exponentially stable if C̃_1 > C̃_2(p + 1)|w̄|^p_R. That is, denoting the solution of (4.10) with z(s) = z_1 ∈ R, we have the estimate (4.11). Let us also denote the solutions of systems (4.6) and (4.5), for t ≥ s ≥ 0, respectively, by w_0 and w_h. Notice that by assumption (4.8) the initial condition w_0 satisfies C̃_1 − C̃_2|w_0|^p_R > 0, which due to Proposition 4.3 implies that w_0(t) is defined for all t ≥ 0 and satisfies (4.7). We also know that w_h(t) will be defined for t ≥ 0 in a maximal time interval, say for t ∈ (0, τ_h) with τ_h > 0. We show now that τ_h = +∞. Indeed, if τ_h < +∞, then we would have that |w_h(t)| → +∞ as t ↗ τ_h. (4.12) Thus we want to show that (4.12) does not hold with (finite) τ_h ∈ R_0. Let us fix an arbitrary τ_1 ∈ (0, τ_h); then both solutions remain bounded in [0, τ_1], that is, for a suitable large enough ρ > 0 we have max{|w_0(t)|, |w_h(t)|} ≤ ρ for t ∈ [0, τ_1]. From [13, Lem. 3], since (4.10) is the linearization of (4.6), we know that we can write the difference w_h − w_0 through a variation of constants formula. Next we prove that we actually have the strict inequality (4.14). For this purpose, suppose that there exists τ_2 ∈ (0, τ_1) such that (4.14) fails at time τ_2. From (4.11), we find an estimate which, combined with (4.14a) and with the fact that ε̄ < ε, gives us ((r/(r−1))ε̄)^{−(r−1)/r} > ((r/(r−1))ε)^{−(r−1)/r}, which in turn implies w_0 = 0 and |h|_{L^r(R̄_{τ_2},R)} = 0.
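The dichotomy behind Proposition 4.3 and Lemma 4.4 can be observed numerically. The sketch below integrates the unperturbed comparison ODE ẇ = −C̃_1 w + C̃_2|w|^p w with illustrative parameter values (our own choices, not the paper's): decay when C̃_1 − C̃_2|w_0|^p > 0, finite-time blow-up otherwise.

```python
import numpy as np

# Comparison ODE  wdot = -C1*w + C2*|w|^p * w  (cf. (4.6)); illustrative values.
C1, C2, p = 1.0, 1.0, 2.0

def integrate(w0, T=10.0, dt=1e-3):
    w, traj = w0, [w0]
    for _ in range(int(T / dt)):
        # explicit Euler step; adequate for this smooth scalar ODE at small dt
        w = w + dt * (-C1 * w + C2 * abs(w) ** p * w)
        traj.append(w)
        if abs(w) > 1e6:          # crude blow-up detection
            break
    return np.array(traj)

small = integrate(0.5)    # C1 - C2*|w0|^p = 1 - 0.25 > 0  -> exponential decay
large = integrate(1.5)    # C1 - C2*|w0|^p = 1 - 2.25 < 0  -> finite-time blow-up
print(abs(small[-1]) < 1e-3, abs(large[-1]) > 1e6)
```

With p = 2 the unstable equilibrium sits at |w| = (C1/C2)^{1/p} = 1, so the two initial values 0.5 and 1.5 land on opposite sides of the dichotomy.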

Proof of Theorem 4.2.
We can show the existence of the solution as a weak limit of Galerkin approximations of the system, following a standard argument. By taking the scalar product, in H, with 2AQ in (4.3b), we obtain a first energy estimate. Using Assumption 3.8, we fix a quadruple γ = (γ_1, γ_2, γ_3, ε) ∈ R⁴_0 satisfying (3.7). From Assumption 3.3 we can bound the reaction-convection term and, from (3.2), with γ_0 = γ_3, we can bound the nonlinear term. Hence, the estimates in (4.16) lead us to (4.17), with a_0, a_1, a_2, q, p, and h as in (3.5).
We have just proven that (4.4) holds true for any given strong solution. The existence of a strong solution follows from the fact that the previous estimates hold true for Galerkin approximations Q_N, which solve the finite-dimensional system (4.18), where P_{E_N}: H → E_N is the orthogonal projection in H onto E_N. Let us fix an arbitrary s > 0. Hence, from (the analogue of) (4.4) we find |Q_N|_{L^∞((0,s),V)} ≤ C_3, where C_3 can be taken independent of N and s. Then, by integrating (4.17), and since a_0 > 0, we can conclude that the weak limit Q_∞ solves (4.3b). Actually, we have strong convergence: from the fact that the sequence Q_N is uniformly bounded in the space W((0, s), D(A), H), from Assumption 3.4 and the Hölder inequality, with y_1 = q + Q_N and y_2 = q + Q_∞, it follows that, with D_N := Q_N − Q_∞, and since δ_{2j} + ζ_{2j} < 1 and δ_{1j} + δ_{2j} ≥ 1, the corresponding nonlinear terms vanish in the limit, because D_N is uniformly bounded in L^∞((0, s), V). Observe also that the Young inequality then leads us to the strong convergence of Q_N to Q_∞.
To finish the proof of Theorem 4.2, it remains to prove the uniqueness in W((0, s), D(A), H). For this purpose, observe that given two solutions Q_1 and Q_2 in W((0, s), D(A), H), their difference solves an analogous system. Thus, from (3.1) with γ_0 = 1, and the Young inequality, with y_1 = q + Q_1 and y_2 = q + Q_2, it follows, by using Assumption 3.3 and (4.16) with γ_1 = 1, that the difference vanishes, with Φ_2(t) a suitable integrable weight.
System (4.3) is exponentially stable: the solution y = q + Q satisfies |y(t)|_V ≤ Ce^{−(μ/2)t}|y(0)|_V, for all t ≥ 0, where μ < min{ε, 2λ} and ε is as in (4.1). Furthermore, C = C_{[n, |P^{E⊥_{M_σ}}_{U_{#M_σ}}|_{L(H)}, C_rc, C_N, …]}.
Proof. We have q ∈ L^∞(R̄_0, D(A)) because q ∈ L^∞(R̄_0, H) and E_{M_σ} is finite-dimensional, E_{M_σ} ⊂ D(A) ⊂ H. By Theorem 4.2, we conclude that Q satisfies a corresponding decay estimate for all t ≥ 0. Hence we obtain, using Assumptions 3.4 and 3.6, and through straightforward computations, with μ < min{ε, 2λ}, the estimates which lead us to (4.20), which finishes the proof.
4.5. Proof of Theorem 2.5. Recall that we assume |P^{E⊥_{M_σ}}_{U_{#M_σ}}|_{L(H)} ≤ C_P and α_{M_σ}/α_{M_σ+} ≤ Λ, with C_P and Λ independent of M. Let us also fix γ = γ̄ ∈ R³_0 so that a_0 = a^{γ̄}_0 > 0, and fix also ε > 0.
Recalling (3.5) and (3.6), we see that a^{γ̄}_1, a^{γ̄}_2, and h^{γ̄} are the only terms in (3.7) depending on C^M_P. However, these terms remain bounded if C^M_P does. Hence, Assumption 3.8, taking r = r̄ ∈ (1, 1/(ζ_2 + η_2)) as in Assumption 3.6, follows from (4.21). Note that for M large enough the inequalities (4.22) follow, since, by Assumption 3.6, we have max{β_2 p, r̄η_2(ζ_2 + δ_2)p} < 1. Therefore, from the inequalities in (4.22) we can conclude that (4.21) necessarily holds true for large enough M, with M as in (4.23). In particular, (4.23) means that M increases (or may increase) with the norm |y(0)|_V of the initial condition y(0) = q(0) + Q(0), but it also means that, for arbitrary given R > 0, M can be taken the same for all initial conditions in the ball {z ∈ V | |z|_V ≤ R}.

4.6. Boundedness of the control. In applications, besides the existence of a stabilizing feedback, it is important that the total "energy" spent to stabilize the system is finite. We show here that the control given by our nonlinear feedback operator in (2.9) is indeed bounded, with a bound increasing with the norm of the initial condition. Note that (2.7) and (4.2) are the same system.
Theorem 4.6. Let u(t) := F(t, y(t)) = K^{U_{#M_σ}}_{F_{M_σ},N}(t, y) be the control input given by the operator (2.9) stabilizing system (2.7), with initial condition y_0 as in Theorem 4.1. Then u is bounded; in particular, the corresponding estimates hold for all z ∈ D(A). To show the boundedness of the spent "energy" |u|_{L^{2r}(R̄_0,H)}, we start by observing that it can be estimated by means of the norms of q and Q, with q_0 := P_{E_{M_σ}} y_0, and where we have used (4.20). Observe also that |y_0|²_V = |q_0|²_V + |Q(0)|²_V. Recall also that β_1 + β_2 ≥ 1.

4.7. Remark on the transient bound. We have seen that in Theorem 2.5 we may take the constants given by (4.23) and (4.20). Observe that by taking a larger M we still have a stable closed-loop system; but since the transient bound C_5 depends on α_{Mσ}, the transient time t_tr = (2/μ) log C_5 may also depend on α_{Mσ}. Note also that, from (4.1), μ will depend on α_{Mσ} if |h|_{L^r(R_0,H)} does. We see that C_5 gives us an upper bound for the norm of the closed-loop solution, max{|y(t)|_V | t ≥ 0} ≤ C_5 |y(0)|_V, and for time t ≥ t_tr we necessarily have |y(t)|_V ≤ |y(0)|_V. Therefore, it would be interesting to understand whether we can make C_5 and t_tr as small as possible. Though we do not study this possibility here, we note that a positive answer does not follow from the above, due to the dependence on α_{Mσ}. Finding a positive answer to this question will likely require the derivation of new appropriate estimates.
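For completeness, the expression for the transient time follows directly from the exponential decay estimate:

```latex
% From |y(t)|_V \le C_5\, e^{-\frac{\mu}{2}t}\, |y(0)|_V, the transient time
% t_{\mathrm{tr}} is the first time at which the bound falls below |y(0)|_V:
C_5\, e^{-\frac{\mu}{2}t} \le 1
\quad\Longleftrightarrow\quad
t \ \ge\ \frac{2}{\mu}\log C_5 \ =:\ t_{\mathrm{tr}}.
```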

5.2. Polynomial reactions and convections in the case Ω ⊂ R³. In the case d = 3, we now show that Assumption 3.4 is satisfied for nonlinearities of the form (5.1). The reaction components. We start by considering the terms â_j(t, x) |y|_R^{r_j−1} y. We also have the growth bounds … Thus, the Nemytskij operator y ↦ N_j(t, y) := â_j(t, x) |y|_R^{r_j−1} y and its Fréchet derivative dN_j|_y satisfy: N_j(t, ·) ∈ C(L^{2r_j}, L²) and dN_j|_y = r_j â_j(t, x) |y|_R^{r_j−1} ∈ C(L^{2r_j}, L(L^{2r_j}, L²)).
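The mapping property N_j(t, ·) : L^{2r_j} → L² can be sanity-checked numerically: the pointwise bound |â_j |y|^{r_j−1} y| ≤ |â_j|_{L^∞} |y|^{r_j} gives |N_j(t, y)|_{L²} ≤ |â_j|_{L^∞} |y|^{r_j}_{L^{2r_j}}. A minimal sketch, with a hypothetical coefficient and state sampled on a uniform grid of (0, 1):

```python
import numpy as np

def nemytskij(a, y, r):
    """Pointwise Nemytskij map N(y)(x) = a(x) * |y(x)|^(r-1) * y(x)."""
    return a * np.abs(y) ** (r - 1) * y

# check |N(y)|_{L^2} <= |a|_inf * |y|_{L^{2r}}^r on a uniform grid of (0, 1)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
a = np.sin(7 * x)                         # hypothetical bounded coefficient a(t, .)
y = rng.normal(size=x.size)               # hypothetical state values
r = 3
lhs = np.sqrt(dx * np.sum(nemytskij(a, y, r) ** 2))                   # |N(y)|_{L^2}
rhs = np.max(np.abs(a)) * np.sqrt(dx * np.sum(np.abs(y) ** (2 * r)))  # |a|_inf |y|_{L^{2r}}^r
print(lhs <= rhs)                         # True
```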
Remark 5.2. Above in (5.1), we may replace |y|_R^{r_j−1} by y^{r_j−1} in case r_j ∈ {2, 3, 4} is an integer. Analogously, we may replace |y|_R^{s_j−1} by y^{s_j−1} in case s_j = 1. The reason the absolute value is taken in (5.1) is that we want N(t, x, y(t, x)) ∈ R, in order to have real-valued solutions y(t, x) ∈ R.

Numerical results
We present here numerical results in the one-dimensional case, showing the stabilizing performance of the controller. Our parabolic equation, evolving in the unit interval (0, L), reads

∂/∂t y + (−νΔ + 1)y + (a − 1)y + b · ∇y − c_N |y|_R^{p−1} y = K(y),   y(t, 0) = y(t, L) = 0,

where Dirichlet boundary conditions are imposed and where we have taken … Above, (t, x) ∈ (0, +∞) × (0, 1). Recall that K … For a given M ∈ N_0, the actuators were taken as in (2.6), U_{#Mσ} = U_M = span{1_{ω_i} | i ∈ M}, with r = 0.1 and L_1 = L = 1; that is, the actuators cover 10% of the domain (0, L). To solve the associated ODEs we followed a Crank-Nicolson scheme, with the time interval (0, +∞) discretized with timestep k = 0.0001, [0, k, 2k, 3k, …). For further details, see [51]. In the figures below we plot the behaviour of either |y|_H or |y|_V. Note that since V ↪ H, if |y|_H goes to +∞ then so does |y|_V; analogously, if |y|_V goes to 0 then so does |y|_H. These norms have been computed/approximated as |y(t_j)|²_H = y(t_j)ᵀ M y(t_j) and |y(t_j)|²_V = y(t_j)ᵀ (νS + M) y(t_j), where M and S are, respectively, the mass and stiffness matrices, and y(t_j) is the discrete solution at a given discrete time t_j = jk. The simulations have been run for time t ∈ [0, 5] and have been performed in MATLAB.
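The space-time discretization described above can be sketched as follows. This is a minimal illustration, assuming P1 finite elements on a uniform grid with homogeneous Dirichlet conditions and an explicit treatment of the nonlinearity; the coefficient values (and the absence of convection and feedback terms) are placeholders, not the setup of the actual simulations:

```python
import numpy as np

# Minimal sketch: P1 finite elements on (0, 1), homogeneous Dirichlet BCs,
# Crank-Nicolson in time for the linear part, explicit nonlinearity.
n = 100                            # interior nodes
h = 1.0 / (n + 1)                  # mesh size
nu, a, cN, p = 0.1, 3.0, 1.0, 3    # hypothetical coefficients
k = 1e-4                           # timestep, as in the text

# P1 mass (M) and stiffness (S) matrices, interior nodes only (tridiagonal)
M = np.diag(np.full(n, 2 * h / 3)) + np.diag(np.full(n - 1, h / 6), 1) \
    + np.diag(np.full(n - 1, h / 6), -1)
S = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h

A = nu * S + a * M                 # linear part: diffusion + reaction
left = M + 0.5 * k * A             # Crank-Nicolson matrices
right = M - 0.5 * k * A

x = np.linspace(h, 1 - h, n)
y = np.sin(np.pi * x)              # initial condition on the grid
for _ in range(1000):              # integrate up to t = 0.1
    f = cN * np.abs(y) ** (p - 1) * y          # explicit nonlinear term
    y = np.linalg.solve(left, right @ y + k * (M @ f))

H = np.sqrt(y @ (M @ y))                 # |y|_H  = sqrt(y^T M y)
V = np.sqrt(y @ ((nu * S + M) @ y))      # |y|_V  = sqrt(y^T (nu*S + M) y)
print(H, V)
```

With these placeholder coefficients the uncontrolled solution decays, since the linear dissipation dominates the cubic term for this initial condition; the norms are evaluated exactly as in the formulas of the text.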
In the figures below F_M = F_{Mσ}, and "Ktype = Klinz" means that we have taken the linearization-based feedback K = K^{F_{Mσ}}_{U_{#Mσ}}, while "Ktype = Knonl" means that we have taken the nonlinear feedback K = K^{F_{Mσ},N}_{U_{#Mσ}}. Note that with c_N = 0 the system is linear, while with c_N = 1 the system is nonlinear. Furthermore, FeedOn stands for the time interval on which the control is switched on. For example, in Figure 1 the control is switched off on the entire time interval [0, 5), while in Figure 2 it is switched on on the entire time interval [0, 5).
In Figure 1, we observe that both the linear and the nonlinear systems are unstable: the linear system is exponentially unstable and the nonlinear system blows up in finite time. In Figure 2 we see that, with 6 actuators, the linear feedback is able to stabilize the linear system, for both choices of F_M. In this example, the choice F_M = −νΔ + λ1 leads to a faster exponential decay rate of the V-norm.
In Figure 3 we see that the same linear feedback is not able to stabilize the nonlinear system. This is because the initial condition is too large. Recall that it is known that we can expect such a linearization-based feedback to stabilize the nonlinear system only if the norm of the initial condition is small enough (local stability).
In Figure 4 we observe that the full nonlinear feedback with 6 actuators and with F_M = −νΔ + λ1 succeeds in stabilizing the solution, while with the choice F_M = λ1 it fails; the latter choice F_M = λ1 succeeds with 7 actuators. Figure 5 shows that, for a bigger initial condition, the same nonlinear feedback with 7 actuators is no longer able to stabilize the system, for either choice F_M = −νΔ + λ1 or F_M = λ1. Finally, in Figure 6 we observe that by increasing the number M of actuators the nonlinear feedback is again able to stabilize the system. This could give rise to the question of whether increasing M would also lead to the stability of the linearization-based closed-loop system; Figure 7 shows that this is not the case. We would like to refer to [28,37-39,47] for works related to finding a/the placement (and/or shape) of actuators, though the functional to be minimized in those works is not …

In the particular case y_1 = q + Q and y_2 = q, with (q, Q) ∈ E_{Mσ} × E^⊥_{Mσ}, estimate (A.1) also gives us a bound for the term 2(P^{E^⊥_{Mσ}}_{U_{#Mσ}}(N(t, q + Q) − N(t, q)), AQ), with ζ̄ − 1 := max{|ζ_{k,j} − 1| | 1 ≤ j ≤ n, 1 ≤ k ≤ 2}. By the Young inequality, with γ_0 > 0 and γ̄_0 > 0, we obtain the estimate with the following constants: …, where the constants C_k, k ∈ {1, 2, 3}, are of the form C_k = C(ζ̄_1 + δ̄_1, …).

A.2. Proof of Proposition 4.3. Observe that, since p ≥ 0, the function w ↦ |w|_R^p w is locally Lipschitz. Therefore, the solutions of (4.18) exist and are unique in a small time interval, say for time t ∈ [0, τ) with τ small. When w_0 = 0 the solution is the trivial one, w = 0. Note that the equilibria of (4.6), that is, the solutions of ẇ = 0, are given by w_1 = 0 and w_2^± = ±(C_1/C_2)^{1/p}. Furthermore, we observe that ẇ < 0 if w ∈ (0, w_2^+), which implies that the solution issued from w(s) ∈ (0, w_2^+) at time t = s is globally defined for all time t ≥ s, is decreasing, and thus remains in (0, w_2^+). Note that −C_1 w ≤ ẇ ≤ −(C_1 − C_2 |w_0|_R^p) w, for w ∈ (0, w_2^+).
Therefore, we can conclude that (4.7) holds for w_0 ∈ (0, w_2^+). Next we consider the case w_0 ∈ (−w_2^+, 0). Denoting by w(t) = S(t, s)(w_s), t ≥ s, the solution issued from w(s) = w_s ∈ R at time s, we find S(t, s)(w_s) = −S(t, s)(−w_s), because, with w^+(t) := S(t, s)(−w_s), we have

d/dt (−w^+) = −ẇ^+ = −(−C_1 w^+ + C_2 |w^+|_R^p w^+) = −C_1 (−w^+) + C_2 |−w^+|_R^p (−w^+),   −w^+(s) = w_s.
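The two properties used above, decay of the solution issued from w_0 ∈ (0, w_2^+) and the odd symmetry S(t, s)(w_s) = −S(t, s)(−w_s), can be checked numerically. A sketch with hypothetical constants C_1, C_2 for the comparison equation ẇ = −C_1 w + C_2 |w|^p w:

```python
def flow(w0, C1=2.0, C2=1.0, p=2, T=5.0, dt=1e-3):
    """Explicit Euler for w' = -C1*w + C2*|w|^p*w, from w(0) = w0 to w(T)."""
    w = w0
    for _ in range(int(T / dt)):
        w += dt * (-C1 * w + C2 * abs(w) ** p * w)
    return w

w2 = (2.0 / 1.0) ** (1.0 / 2)   # equilibrium w2+ = (C1/C2)^(1/p)
w0 = 0.9 * w2                   # initial datum inside (0, w2+)
wT = flow(w0)
print(0.0 < wT < w0)            # decreasing and positive: True
print(flow(-w0) == -wT)         # odd symmetry, exact in floating point: True
```

The symmetry holds exactly here because every arithmetic operation in the update commutes with negation in IEEE arithmetic, mirroring the analytic argument above.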