Feedback Stabilization of the Three-Dimensional Navier-Stokes Equations using Generalized Lyapunov Equations

We propose and analyze an approximation, by means of solutions to generalized Lyapunov equations, of the value function associated with a stabilization problem formulated as an optimal control problem for the Navier-Stokes equations in dimension three. A particular difficulty, namely that the value function is not differentiable on the state space, must be overcome. For this purpose a new class of generalized Lyapunov equations is introduced, and existence of unique solutions to these equations is demonstrated. They provide the basis for feedback operators which approximate the value function, the optimal states, and the optimal controls up to arbitrary order.


Introduction
This work is concerned with feedback stabilization of the 3-D Navier-Stokes equations around a possibly unstable stationary solution. The approximation is achieved by Taylor series-like expansions of the value function associated with an infinite-horizon optimal control problem. A related goal was achieved in [13] for the two-dimensional case. However, the approach from [13] cannot be generalized to the 3-D case, since it builds on the differentiability of the value function on the state space. This is not available in dimension three, and thus an independent approach and analysis are required.
Indeed, the difficulty that arises is related to the lack of a convenient energy equality for the Navier-Stokes equations in dimension three. Such an equality is available in dimension two, where it is the basis for proving the uniqueness of weak variational solutions of the Navier-Stokes equations with initial data in L²(Ω); here L²(Ω) denotes the square-integrable vector-valued functions over Ω. In dimension three we must resort to strong variational solutions with initial data in H¹(Ω). As a consequence we can expect that the value function associated with optimal control problems for the Navier-Stokes equations in dimension three is well-defined and enjoys certain smoothness properties on H¹(Ω), but not on L²(Ω). Having in mind that Taylor expansions of nonlinear operators on a space X involve multilinear forms on product spaces consisting of copies of X, it becomes clear that X = H¹(Ω) is not a convenient space to work with, especially if, ultimately, numerical realizations are desired. For this purpose X = L²(Ω) is significantly more convenient. Here we aim for an approximation of the value function with operators constructed in an L²(Ω)-setting, in spite of the fact that the value function is differentiable on H¹(Ω) only. These operators will be constructed as the solutions to generalized Lyapunov equations.
We next introduce the specific problem of interest. Let Ω ⊂ ℝ³ denote a bounded domain with C^{1,1} boundary Γ, and let B̄ denote a control-to-state operator. We aim at designing a control u such that the solution (z, q) to the time-dependent Navier-Stokes equations

∂z/∂t = ν∆z − (z·∇)z − ∇q + ϕ + B̄u in Ω × (0, T),
div z = 0 in Ω × (0, T),
z = ψ on Γ × (0, T),

satisfies lim_{t→∞} z(t) = z̄ for perturbations y_0 with div y_0 = 0, which are assumed to be suitably small.
Here z̄ is the velocity component of the solution (z̄, q̄) to the stationary Navier-Stokes equations

−ν∆z̄ + (z̄·∇)z̄ + ∇q̄ = ϕ in Ω,
div z̄ = 0 in Ω,

for given vector-valued functions ϕ and ψ. All regularity assumptions will be specified below.
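For later orientation, note the change of variables that underlies the stabilization problem. The following display is a reconstruction from the two systems above (assuming the stationary solution satisfies the same boundary condition ψ, so that the perturbation has homogeneous boundary data): setting y = z − z̄ and p = q − q̄ and subtracting the stationary equations from the time-dependent ones gives

```latex
\begin{aligned}
\frac{\partial y}{\partial t} &= \nu\Delta y - (y\cdot\nabla)\bar z - (\bar z\cdot\nabla)y - (y\cdot\nabla)y - \nabla p + \bar B u
&&\text{in } \Omega\times(0,T),\\
\operatorname{div} y &= 0 &&\text{in } \Omega\times(0,T),\\
y &= 0 &&\text{on } \Gamma\times(0,T),\qquad y(0)=y_0,
\end{aligned}
```

so that lim_{t→∞} z(t) = z̄ is equivalent to driving the perturbation (y, p) to (0, 0), which is the formulation used below.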
Our goal consists in proving that lim_{t→∞} (y(t), p(t)) = (0, 0). To achieve this, we focus on an infinite-horizon optimal control problem posed in the spaces D(A_λ), Y, and U; these are classical function spaces related to the Navier-Stokes equations that will be introduced below. Let us mention some references that address problems similar to the one considered here. Regarding feedback control of the (three-dimensional) Navier-Stokes equations, we point to, e.g., [3,4,6,7,8,18,26], where different feedback methodologies based on spectral decomposition or Riccati equations have been analyzed. For (local) exact null controllability results for the (linearized) Navier-Stokes equations, see, e.g., [16,19]. The idea of approximating the optimal feedback law by a Taylor series expansion of the minimal value function has its origin in finite-dimensional considerations proposed in [2,21], which were later picked up in, e.g., [1,22]. For a survey summarizing the approach (and related variants), we refer to [9]. One of the first references dealing with polynomial feedback laws for infinite-dimensional control systems is [28]. For the special class of bilinear control systems, Taylor series expansions have recently been analyzed in detail in [12,14]. The literature on open-loop control of the Navier-Stokes equations is quite rich: topics such as necessary and sufficient optimality conditions and the numerical approximation of the optimality systems are, in general, well investigated. The literature dedicated to time-dependent optimal control in three space dimensions is scarce, however. Here we mention [24,29] for finite-horizon optimal control problems subject to the Navier-Stokes equations.
The paper is structured as follows. Section 2 contains the problem statement and function space preliminaries. Differentiability properties of the value function are discussed in Section 3. The subsequent section is devoted to the introduction and analysis of the generalized Lyapunov equations. This leads to the polynomial feedback laws described in Section 5. Section 6 contains the error estimates for the value function, the optimal states, and the optimal controls. We finish with short conclusions.
Notation. For Hilbert spaces V ⊂ Y with dense and compact embedding, we consider the Gelfand triple V ⊂ Y ⊂ V′, where V′ denotes the topological dual of V with respect to the pivot space Y. For vector-valued functions f ∈ (L²(Ω))³, we use the notation f ∈ L²(Ω); such elements are denoted in boldface and thus distinguished from scalar-valued functions g ∈ L²(Ω). Similarly, we use H²(Ω) for the space (H²(Ω))³. For a closed, densely defined linear operator (A, D(A)) in Y, its adjoint (again considered as an operator in Y) will be denoted by (A*, D(A*)). Considering A as a bounded linear operator A ∈ L(D(A), Y), its dual A′ ∈ L(Y, [D(A)]′) is uniquely defined. Let us recall that it is the unique extension of the operator A* ∈ L(D(A*), Y) to an element of L(Y, [D(A)]′). In fact, we have (A*p, y)_Y = (p, Ay)_Y for all p ∈ D(A*) and y ∈ D(A), and ⟨A′p, y⟩_{[D(A)]′,D(A)} = (p, Ay)_Y for all p ∈ Y and y ∈ D(A).
For an infinitesimal generator A of an exponentially stable semigroup e^{At} on Y, we consider the space W(0, T; D(A), Y), endowed with the norm defined next. Generally, given T > 0 and two Hilbert spaces X ⊂ Y, by W(0, T; X, Y) we denote the space of all y ∈ L²(0, T; X) with ẏ ∈ L²(0, T; Y), equipped with the norm ‖y‖²_{W(0,T;X,Y)} = ‖y‖²_{L²(0,T;X)} + ‖ẏ‖²_{L²(0,T;Y)}. For T = ∞, the space W(0, T; X, Y) will be denoted by W∞(X, Y). For δ ≥ 0, we denote by B_Y(δ) the closed ball in Y with radius δ and center 0. For k ≥ 1, we work on the product space V^k := V × ··· × V, equipped with the associated product norm. Given a Hilbert space Z, we say that T: V^k → Z is a bounded multilinear mapping (or a bounded multilinear form, for Z = ℝ) if T is linear in each of its k coordinates and there exists M ≥ 0 such that ‖T(v_1, …, v_k)‖_Z ≤ M ‖v_1‖_V ⋯ ‖v_k‖_V for all (v_1, …, v_k) ∈ V^k. The set of bounded multilinear mappings on V^k will be denoted by M(V^k, Z), and the smallest admissible constant M above by ‖T‖_{M(V^k,Z)}; thus, for all T ∈ M(V^k, Z) and all (v_1, …, v_k) ∈ V^k, we have ‖T(v_1, …, v_k)‖_Z ≤ ‖T‖_{M(V^k,Z)} ‖v_1‖_V ⋯ ‖v_k‖_V. Bounded multilinear mappings T ∈ M(V^k, Z) are said to be symmetric if T(v_{σ(1)}, …, v_{σ(k)}) = T(v_1, …, v_k) for all v_1, …, v_k ∈ V and all permutations σ of {1, …, k}. Finally, given two multilinear forms T_1 ∈ M(V^k, ℝ) and T_2 ∈ M(V^ℓ, ℝ), we denote by T_1 ⊗ T_2 ∈ M(V^{k+ℓ}, ℝ) the bounded multilinear form defined by (T_1 ⊗ T_2)(v_1, …, v_{k+ℓ}) = T_1(v_1, …, v_k) T_2(v_{k+1}, …, v_{k+ℓ}). Throughout the manuscript, we use M as a generic constant that might change its value between consecutive lines.

Analytical preliminaries

Function spaces
Let us briefly summarize the classical functional analytic framework that allows us to consider (3) as an abstract differential equation on the space of solenoidal vector fields. Based on this formulation, we subsequently define our stabilization problem of interest. For more details on the following well-known decomposition we refer to, e.g., [5,6,17,25,27]. We introduce the spaces Y := {y ∈ L²(Ω) | div y = 0, y·n = 0 on Γ}, endowed with the canonical inner products and norms. Note that Y is a closed subspace of L²(Ω) which is associated with the orthogonal Helmholtz decomposition of L²(Ω). In this context, we recall the Leray projector P: L²(Ω) → Y, which orthogonally projects L²(Ω) onto Y. Consider the nonlinear operator F defined by F(y) := P((y·∇)y), and further the bilinear mapping N(y, z) := P((y·∇)z), so that F(y) = N(y, y). For N we recall the following properties.

Proposition 1. Let Ω be a bounded domain of class C^{1,1} in ℝ³. Then there exists a constant M such that the estimates (i)-(iii) hold.

Proof. The first two properties follow from the standard Sobolev embeddings H²(Ω) ↪ C(Ω), H¹(Ω) ↪ L⁴(Ω), and H²(Ω) ↪ W^{1,4}(Ω). For the third one, we use in addition that P ∈ L(H¹(Ω)), [11, Proposition 4.3.7]. Here the C^{1,1} property of the domain is used.
We shall also consider N as a bilinear mapping from V × V to V′, defined by duality. We have the following properties, which can again be verified by standard Sobolev embedding results and by the structure of the convective term (y·∇)z.

Proposition 2.
Let Ω be a bounded Lipschitz domain in ℝ³. Then there exists a constant M such that the corresponding estimates hold. Analogous properties can be obtained for the nonlinear operator F. The Oseen operator A_0 is defined analogously. Given a stationary solution z̄ ∈ V, we associate with it the Stokes-Oseen operator A defined by D(A) = H²(Ω) ∩ V, Ay = P(ν∆y − (y·∇)z̄ − (z̄·∇)y).
Considered as an operator in L²(Ω), the adjoint A*, again as an operator in L²(Ω), can be characterized explicitly. For the control operator B̄ we assume that B̄ ∈ L(U, L²(Ω)), and we set B := PB̄ ∈ L(U, Y). We are now prepared to project the controlled state equation (3) onto the space Y of solenoidal vector fields. We arrive at the abstract control system (d/dt)y(t) = Ay(t) − F(y(t)) + Bu(t), y(0) = y_0, in which the pressure is eliminated. Before we state the optimal control problem, we collect some generalizations of the estimates in Proposition 1 to the time-varying case.
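The displayed characterization of A* is not reproduced above; a standard integration by parts (using div z̄ = 0 and the homogeneous boundary conditions) suggests the following sketch, which should be read as a reconstruction rather than a verbatim restatement:

```latex
D(A^*) = H^2(\Omega)\cap V,\qquad
A^*p = P\bigl(\nu\Delta p + (\bar z\cdot\nabla)p - (\nabla\bar z)^{\top}p\bigr),
```

where ((∇z̄)ᵀp)_j = Σ_i p_i ∂_j z̄_i.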

Existence of feasible solutions
Throughout the article we assume that the following assumptions hold true.
Assumption A2. The linearized system (A, B) is exponentially stabilizable, i.e., there exists K ∈ L(Y, U ) such that the semigroup e (A−BK)t is exponentially stable on Y .
Regarding Assumption A2, we refer to, e.g., [5] where finite-dimensional feedback operators are constructed on the basis of spectral decomposition as well as Riccati theory. Alternatively, exponential stabilizability of the linearized system also follows from exact controllability results available in [16].
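Assumption A2 has a transparent finite-dimensional analogue that can be checked numerically. The sketch below (the matrices are hypothetical stand-ins for a Galerkin discretization; this is an illustration, not part of the paper's construction) computes a stabilizing gain K through the algebraic Riccati equation, here solved via the stable invariant subspace of the associated Hamiltonian matrix, and verifies that all eigenvalues of A − BK lie in the open left half-plane.

```python
import numpy as np

# Hypothetical unstable system matrix (one eigenvalue with positive real
# part) standing in for a discretization of the Oseen operator A.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -2.0]])
B = np.array([[1.0], [0.0], [1.0]])   # control operator (rank one here)
Q = np.eye(3)                          # state weight
alpha = 1.0                            # control weight (R = alpha * id)

# Hamiltonian matrix of the LQR problem; its stable invariant subspace
# encodes the stabilizing Riccati solution Pi.
H = np.block([[A, -B @ B.T / alpha],
              [-Q, -A.T]])
w, Vecs = np.linalg.eig(H)
stable = Vecs[:, w.real < 0]           # eigenvectors with Re(lambda) < 0
U1, U2 = stable[:3, :], stable[3:, :]
Pi = np.real(U2 @ np.linalg.inv(U1))   # stabilizing solution of the ARE

K = B.T @ Pi / alpha                   # feedback gain, u = -K y
closed_loop = np.linalg.eigvals(A - B @ K)
print("max Re(eig(A - BK)) =", closed_loop.real.max())
```

The printed quantity being negative is the finite-dimensional counterpart of the exponential stability of e^{(A−BK)t} required in Assumption A2.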
We immediately obtain the following important consequences that will be used several times throughout the manuscript.
Below we will frequently make use of the spaces W(0, T; D(A_λ), Y) and W∞(D(A_λ), Y), respectively, endowed with the norm defined in (4). This notation will also be employed for systems associated with operators A that do not themselves generate an exponentially stable semigroup on Y.
Consequence C2. For all (y_0, f) ∈ V × L²(0, ∞; Y) and T > 0, the system ẏ = A_λ y + f, y(0) = y_0, has a unique solution y ∈ W(0, T; D(A_λ), Y). In addition, this solution satisfies the associated estimate with a continuous function c. In the case that y ∈ L²(0, ∞; Y), we can replace (22) by an equivalent formulation of the equation. In the next lemma, by A_s we denote an abstract generator of an exponentially stable, analytic semigroup on Y. The proof of the assertion is based on a classical fixed-point argument which has been used in similar contexts in, e.g., [13,25]; it is given in Appendix A.
Lemma 5. Let A_s generate an exponentially stable, analytic semigroup e^{A_s t} on Y, let C denote the constant specified in Corollary 4, and let F be as in (9). Then there exists a constant M_s such that for all sufficiently small data (y_0, f) the system ẏ = A_s y − F(y) + f, y(0) = y_0, has a unique solution y ∈ W∞(D(A_s), Y). Moreover, the solution satisfies corresponding estimates in terms of the data.

Proof. Since Assumption A2 implies the existence of K such that e^{(A−BK)t} is an exponentially stable analytic semigroup on Y, the result is a consequence of Lemma 5 applied to the system ẏ = (A − BK)y − F(y), y(0) = y_0. The estimate on the control follows from the feedback representation u = −Ky.
For the next statement, we can w.l.o.g. assume that M_λ, defined in Consequence C2, satisfies the bound required below.

Proof. By assumption it holds that y ∈ L²(0, ∞; Y). As a consequence, we can focus on the equivalent system from Consequence C2. Application of Lemma 5 then shows the assertion.
With the previous considerations, we can state the stabilization problem as the following abstract infinite-horizon optimal control problem (P).

Differentiability of the value function on V

In this section, we show the differentiability on V of the associated value function V, defined by V(y_0) := min J(y, u) subject to e(y, u) = (0, y_0).
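The display of the abstract problem is not reproduced in the source; its ingredients, however, are fixed by the surrounding text: the projected dynamics with the quadratic term F, the constraint operator e with e(y, u) = (0, y_0), and a quadratic cost whose control weight α > 0 is visible in the feedback formula u = −(1/α)B*DV used later. Under these assumptions the problem reads, as a sketch:

```latex
(P)\qquad \min_{(y,u)}\ J(y,u) := \frac12\int_0^\infty \|y(t)\|_Y^2\,\mathrm dt
+ \frac{\alpha}{2}\int_0^\infty \|u(t)\|_U^2\,\mathrm dt
\quad\text{subject to}\quad \dot y = Ay - F(y) + Bu,\quad y(0)=y_0.
```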
Our arguments are based on an analysis of the dependence of solutions to (P ) with respect to the initial condition y 0 .

Existence of a solution and optimality conditions
This section is devoted to the existence of solutions to (P) with small initial data and to the associated first-order necessary optimality conditions.

Lemma 8. There exists δ_1 > 0 such that for all y_0 ∈ B_V(δ_1) problem (P) possesses a solution (ȳ, ū). Moreover, there exists a constant M > 0, independent of y_0, such that estimate (29) holds.

Proof. For now, let us define δ_1 = 1, with C as in Corollary 4 and M_K as in Corollary 6. By Corollary 6 (with f = 0) there exists a control u ∈ L²(0, ∞; U) with associated state y satisfying the corresponding estimates. Let us now take a minimizing sequence (y_n, u_n)_{n∈ℕ}, which we can assume to satisfy J(y_n, u_n) ≤ M²‖y_0‖²_V (1 + α). Consequently, a uniform bound follows for all n ∈ ℕ. After a possible reduction of δ_1, we can assume that a suitable smallness condition holds, where M_λ is as in Consequence C2 and Corollary 7. Hence, the sequence (y_n)_{n∈ℕ} is bounded in W∞(D(A_λ), Y), and a weak limit point (ȳ, ū) satisfies (29).
We are going to prove that (ȳ, ū) is feasible and optimal. Note that for each T > 0 the weak formulation (31) holds. From the convergence (d/dt)y_n ⇀ (d/dt)ȳ in L²(0, T; Y), we can pass to the limit in the left-hand side of this equality; the remaining linear terms are treated similarly. With the same argument, we find the convergence of the control terms. From the definition of F and Proposition 1(i)-(ii), it follows that the nonlinear terms converge as well. Since D(A_λ) is compactly embedded in V, by the Aubin-Lions lemma it follows that ‖y_n − ȳ‖_{L²(0,T;V)} → 0 as n → ∞. Passing to the limit in (31) yields the state equation for (ȳ, ū); hence e(ȳ, ū) = (0, y_0). From the weak lower semicontinuity of norms we finally obtain optimality of (ȳ, ū). Note that the bound (29) can be shown to hold for arbitrary optimal solutions, since they necessarily satisfy (30), from which we can argue as above.
Remark 9. For the previous proof, Corollary 7 was essential. It is not available in dimension 3 with y 0 ∈ Y and y ∈ W ∞ (V, V ′ ). For this reason, we cannot expect differentiability of V on Y .
Then, for all f ∈ L²(0, ∞; Y) and y_0 ∈ V, there exists a unique solution to the following system, and an associated estimate holds.

Proof. The assertion is a variant of [12, Lemma 2.5] and follows by the arguments provided in that reference.
Proposition 11. There exists δ_2 ∈ (0, δ_1] such that for all y_0 ∈ B_V(δ_2) and for all solutions (ȳ, ū) of (P) there exists an associated costate p̄ satisfying the optimality system, with αū = −B*p̄. Moreover, there exists a constant M > 0, independent of (ȳ, ū), such that the corresponding estimate holds.

Proof. For now, let us assume that δ_2 = δ_1. Then problem (P) has a solution (ȳ, ū) by Lemma 8. We are going to derive optimality conditions by proving that the linearization of e is surjective. Since F(y) = PN(y, y), by Corollary 3 we conclude that N and F are Fréchet differentiable.

Sensitivity analysis with respect to y 0
We define the space X, endowed with the ℓ∞ product norm. Consider now the mapping Φ defined on this space. The well-posedness of Φ follows from the considerations on e(y, u) and the costate equation (32) given in the proof of Proposition 11.

Lemma 13. There exist δ_3 > 0, δ′_3 > 0, and three C∞-mappings Y, U, and P such that for all y_0 ∈ B_V(δ_3), the triplet (Y(y_0), U(y_0), P(y_0)) is the unique solution to (45).

Proof. Let us show the statement by means of the inverse function theorem. For this purpose, note that Φ only contains polynomial terms and thus is infinitely differentiable. It further holds that Φ(0, 0, 0) = (0, 0, 0, 0). Let us prove that DΦ(0, 0, 0) is an isomorphism; take (w_1, w_2, w_3, w_4) ∈ X. By Proposition 31, the linear system on the left-hand side has a unique solution (y, u, p), which moreover satisfies the corresponding estimate. Hence, DΦ(0, 0, 0) is an isomorphism and, with the inverse function theorem, we obtain δ_3 > 0, δ′_3 > 0, and C∞-mappings Y, U, and P satisfying (45). With regard to estimate (46), let us (possibly) reduce δ_3 such that the norms of the derivatives of the three mappings are bounded on B_V(δ_3) by some constant M > 0. As a consequence, the three mappings are in particular Lipschitz continuous with modulus M, and estimate (46) follows from the fact that (Y(0), U(0), P(0)) = (0, 0, 0).
Proof. For now, assume that δ_4 = min(δ_2, δ_3) and consider y_0 ∈ B_V(δ_4). Then, there exists a solution (ȳ, ū) to (P) with associated costate p̄, due to Lemma 8 and Proposition 11. This solution has to satisfy the optimality system above.

Proof. Note that by definition we have V(y_0) = J(Y(y_0), U(y_0)). Since J is infinitely differentiable, differentiability of V follows from the chain rule for compositions of infinitely differentiable mappings.
The proof is given in Appendix A. Using the optimality condition (33), we obtain the optimal control in feedback form.
In the two-dimensional case, see [14, Proposition 16], an approximation of the value function was obtained by investigating the equations which arise from successive differentiation, with respect to the state, of the Hamilton-Jacobi-Bellman (HJB) equation associated with (P) along the optimal solution. In the 2-D case this equation is rigorous; in the 3-D case, however, it is only formal. In fact, the term B*DV(y) is not well-defined for B ∈ L(U, Y), since V is not differentiable on Y.
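For a quadratic cost with control weight α, as assumed in this reconstruction, the formal HJB equation referred to here takes the form

```latex
DV(y)\bigl(Ay - F(y)\bigr) \;-\; \frac{1}{2\alpha}\,\bigl\|B^*DV(y)\bigr\|_U^2 \;+\; \frac12\,\|y\|_Y^2 \;=\; 0,
```

with the formal minimizer u = −(1/α)B*DV(y); the problematic term is exactly the B*DV(y) discussed above.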

Multilinear Lyapunov operator equations
The purpose of this section is to analyze a sequence of certain multilinear operator equations.
In the two-dimensional case it is shown in [13] that these equations can be derived by successive differentiation of the HJB equation. Moreover, their solutions are multilinear forms that represent the derivatives of the value function at zero. In [13], this enabled us to derive a polynomial feedback law via a Taylor series expansion of the value function. In contrast to the 2-D case, at this point we cannot follow the arguments provided in [13]. In particular, so far we only know that V is differentiable on B_V(δ_4) but not necessarily on B_Y(δ_4). As will be shown below, it is nevertheless possible to derive a unique sequence of multilinear forms that result in a polynomial feedback law which locally approximates the optimal control. Let us begin with the algebraic operator Riccati equation (50). A general treatment of (50) for abstract linear control problems has been given in, e.g., [15,20]. We emphasize that the stabilizability Assumption A2 and (21), which implies exponential detectability of (A, id), ensure the existence of a unique nonnegative stabilizing solution Π ∈ L(Y) to (50). In the following, we denote by A_π the closed-loop operator associated with the linearized stabilization problem. Since A_π generates an analytic exponentially stable semigroup e^{A_π t} on Y, trajectories of the form ỹ(·) = e^{A_π ·}y with y ∈ V satisfy ỹ ∈ W∞(D(A_π), Y); a similar statement holds for y ∈ V′. From Corollary 15 we already know that V is infinitely differentiable on B_V(δ_4). We shall relate its k-th derivative D^kV(0) at zero to a multilinear form T_k ∈ M(V^k, ℝ). Below, we study such multilinear forms and show that they additionally belong to a class S_k(V, V′), i.e., the form T_k can be extended from V to V′ as a bounded multilinear form separately in each coordinate. For T_k ∈ S_k(V, V′), we introduce an associated norm. Let us illustrate the definition of S_k(V, V′) in the case k = 2.
For T_2 ∈ M(V × V, ℝ), we can define an associated operator Π via (Πv_1, v_2)_Y = T_2(v_1, v_2). The following considerations further clarify the additional regularity which can be gained if T_2 ∈ S_2(V, V′). In this case the mapping v_1 ↦ T_2(v_1, ·) can be represented by an operator Π_1 ∈ L(V), and similarly, since T_2 ∈ M(V′ × V, ℝ), the mapping v_1 ↦ T_2(v_1, ·) can be represented by an operator Π_2 ∈ L(V′). Thus (Π_1 − Π_2)|_V = 0, and since V is dense in V′ and Π_2 ∈ L(V′), the operator Π_2 is the unique continuous extension of Π_1 from V to V′. We denote both Π_1 and Π_2 by Π; by continuity in the first coordinate of the inner product in Y, this is consistent with the operator defined above. We turn to our first results on the existence of solutions of generalized Lyapunov equations, whose right-hand sides involve terms of the form F(A_0(z_j, z_{j+i}), z_1, …, z_{j−1}, z_{j+1}, …, z_{j+i−1}, z_{j+i+1}, …, z_k).
Moreover, if G is symmetric, then T , considered as an element of M(V k , R), is also symmetric.
Uniqueness of T . The uniqueness can be shown with the same arguments provided in [14] and is therefore skipped at this point.

Remark 19.
In the estimates (56) and (57) of the previous proof it was used that only one coordinate belongs to V′ while the others are in V.
With regard to a result similar to (53), but with a different right-hand side, consider F ∈ S_{i+1}(V, V′). For z_1, …, z_i ∈ V, the definition of S_{i+1}(V, V′) yields that F(·, z_1, …, z_i) ∈ L(V′, ℝ); in particular F(·, z_1, …, z_i) ∈ L(Y, ℝ). Identifying the latter with its Riesz representative in Y, we can define B*F(·, z_1, …, z_i) ∈ U. With the same technique as used in the proof of Theorem 18, we obtain the following result. For what follows, let us briefly recall a symmetrization technique introduced in [14]. Let i, j ∈ ℕ and consider the set S_{i,j} of those permutations σ in S_{i+j}, the set of permutations of {1, …, i+j}, which are increasing on {1, …, i} and on {i+1, …, i+j}. A permutation σ ∈ S_{i,j} is uniquely determined by the subset {σ(1), …, σ(i)}; therefore, the cardinality of S_{i,j} equals the number of subsets of cardinality i of {1, …, i+j}, hence |S_{i,j}| = (i+j)!/(i! j!).

Theorem 21. There exists a unique sequence of symmetric multilinear forms (T_k)_{k≥2} with T_k ∈ S_k(V, V′), and a unique sequence of multilinear forms (R_k)_{k≥3} with R_k ∈ M(D(A)^k, ℝ), such that (60) holds for all (z_1, z_2) ∈ V², and such that (61) holds for all k ≥ 3 and all (z_1, …, z_k) ∈ D(A)^k, where C_i(z_1, …, z_i) = B*T_{i+1}(·, z_1, …, z_i), with the convention that sums of the form Σ_{i=2}^{r} vanish for r < 2.

Proof. The statement follows by induction over k. We begin with k = 2. By definition and well-known results for linear quadratic control problems, see, e.g., [15,20], we obtain that T_2 ∈ M(Y × Y, ℝ). Moreover, the operator Π is the unique stabilizing solution to the algebraic Riccati equation; hence T_2 is unique. Let us show that T_2 ∈ S_2(V, V′). For this purpose, note that the Riccati equation (50) can be rewritten as a Lyapunov equation for the closed-loop system. Similarly as in the proof of Theorem 18, we have the explicit integral representation in terms of z̃_i(t) = e^{A_π t} z_i, i = 1, 2.
This implies the stated estimate, and the claim for k = 2 follows. For k ≥ 3, the equations (61) are linear, and existence and uniqueness of T_k ∈ S_k(V, V′) and R_k ∈ M(D(A)^k, ℝ) follow from Theorem 18 and Theorem 20. Symmetry follows from the explicit integral representation of T_k as well as from the symmetry of R_k, which is a consequence of the symmetrization relation above.

Remark 22.
We point out that it is essential to allow T_k ∈ S_k(V, V′) rather than merely T_k ∈ M(V^k, ℝ). In fact, since B* ∈ L(Y, U), the first summands on the right-hand side of (62) would otherwise not be well-defined, though for these it would suffice to demand T_k ∈ S_k(V, Y). Similarly, the second summands would not be well-defined, since they contain the terms A_0(z_i, z_{j+i}).
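For orientation, the lowest-order instance k = 2 of the hierarchy can be written out explicitly. Assuming, as in this reconstruction, the cost weights 1 on the state and α on the control, and the closed-loop operator A_π = A − (1/α)BB*Π, the Riccati equation (50) rewritten as a Lyapunov equation reads

```latex
A_\pi^*\Pi + \Pi A_\pi = -\,\mathrm{id} - \tfrac{1}{\alpha}\,\Pi BB^*\Pi,
\qquad\text{equivalently}\qquad
2\,T_2(A_\pi z, z) = -\|z\|_Y^2 - \tfrac{1}{\alpha}\,\|B^*\Pi z\|_U^2
\quad\text{for } z \in D(A_\pi),
```

with T_2(v_1, v_2) = (Π v_1, v_2)_Y; the equations (61) for k ≥ 3 are linear analogues of this identity with right-hand sides built from lower-order forms.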

Polynomial feedback control
Let us next analyze the polynomial feedback law u_d : V → U obtained by setting u_d(y) := −(1/α) Σ_{k=2}^{d} 1/(k−1)! B*T_k(·, y, …, y), with T_k given in (60) and (61); here T_k(·, y, …, y) is identified with its Riesz representative in Y, as explained below. We then obtain the closed-loop system ẏ = Ay − F(y) + Bu_d(y), y(0) = y_0. The subsequent proofs rely on local Lipschitz continuity estimates for the nonlinear part of the feedback law. It will therefore be convenient to introduce, for each k ≥ 3, mappings G_k collecting the corresponding terms, so that the closed-loop system can be rewritten accordingly. As mentioned before, for T_k ∈ S_k(V, V′) and fixed y ∈ V, the term T_k(·, y, …, y) belongs to L(Y, ℝ). By g_k(y, …, y) ∈ Y we denote its Riesz representative. Hence, we obtain ‖g_k(y, …, y)‖_Y = sup_{z∈B_Y(1)} |T_k(z, y, …, y)| ≤ M ‖y‖_V^{k−1}.
One can easily show the following local Lipschitz estimate for G k which extends the 2-D result given in [13].
Lemma 24. For all k ≥ 3, there exists a constant C(k) > 0 such that the Lipschitz estimate holds for all y, z ∈ V. Moreover, for all δ ∈ [0, 1] and all ỹ, z̃, an analogous estimate holds. As a consequence, we obtain the local well-posedness of the closed-loop system.
Theorem 25. Let C and C(k) denote the constants from Corollary 4 and Lemma 24. There exists a constant M_cls > 0 such that for all y_0 ∈ V satisfying the corresponding smallness condition, the closed-loop system (64) has a unique solution y ∈ W∞(D(A_λ), Y) satisfying (67).

Proof. Similarly to the proof of Lemma 5, we obtain the existence of a solution y ∈ W∞(D(A_λ), Y) satisfying (67). Let us therefore focus on uniqueness, and denote by y and z two solutions to (64) in W∞(D(A_λ), Y). We set e = y − z. Again, as in the proof of Lemma 5, there exists M > 0 such that a corresponding bound holds for all t ≥ 0. Since y, z ∈ W∞(D(A_λ), Y) and e(0) = 0, we obtain with Gronwall's inequality that e = 0.

Error estimates
In this section, we analyze the feedback law (63) and compare it with the optimal value V(y_0). We follow a strategy used in [14] which is based on a polynomial function V_d of the form V_d(y) = Σ_{k=2}^{d} 1/k! T_k(y, …, y). The motivation for this specific definition of V_d is that in the 2-D case the sequence of multilinear forms T_k coincides with the derivatives D^kV(0) of the value function, considered as continuous multilinear forms on Y^k. Hence, in that case the expression for V_d represents a Taylor series expansion of V around 0 in the Y topology. In the 3-D case we utilize the structure of the 2-D case to propose approximating feedback controls based on the generalized Lyapunov equations from Theorem 21.
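The mechanics behind V_d are transparent in one dimension. The following sketch (a toy scalar analogue with hypothetical coefficients, not the infinite-dimensional problem: dynamics y' = a·y + n·y² + u, cost ½∫(y² + α u²)) computes the Taylor coefficients c_2, c_3 of the value function from the scalar Riccati equation and the first linear equation of the hierarchy, then checks numerically that the HJB residual of the cubic approximation is of fourth order, mirroring the error estimates of this section.

```python
import math

# Toy scalar problem: dynamics  y' = a*y + n*y**2 + u,
# cost (1/2) * integral of (y^2 + alpha*u^2).  Coefficients are hypothetical.
a, n, alpha = 1.0, 1.0, 1.0

# Order 2: scalar Riccati equation  2*a*c2 - (2/alpha)*c2**2 + 1/2 = 0
# (stabilizing root), for V(y) ~ c2*y^2.
c2 = alpha * (a + math.sqrt(a**2 + 1.0 / alpha)) / 2.0
a_pi = a - 2.0 * c2 / alpha          # closed-loop linearization, a_pi < 0

# Order 3: linear equation  3*a*c3 + 2*n*c2 - (6/alpha)*c2*c3 = 0,
# i.e.  c3 = 2*n*c2 / (6*c2/alpha - 3*a) = -2*n*c2 / (3*a_pi).
c3 = 2.0 * n * c2 / (6.0 * c2 / alpha - 3.0 * a)

def hjb_residual(y):
    """Residual of V'(y)*(a*y + n*y^2) - V'(y)^2/(2*alpha) + y^2/2
    for the cubic approximation V(y) = c2*y^2 + c3*y^3."""
    dV = 2.0 * c2 * y + 3.0 * c3 * y**2
    return dV * (a * y + n * y**2) - dV**2 / (2.0 * alpha) + 0.5 * y**2

# The quadratic and cubic orders cancel exactly; the residual is O(y^4).
r1, r2 = hjb_residual(1e-2), hjb_residual(5e-3)
print("residual ratio (expect ~16 = 2^4):", r1 / r2)
```

Halving the evaluation point divides the residual by roughly 2⁴, which is the one-dimensional shadow of the statement that V_d solves a perturbed HJB equation up to terms of order d + 1.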
Before stating the announced result, note that V_d is Fréchet differentiable on V with DV_d(y) = Σ_{k=2}^{d} 1/(k−1)! T_k(·, y, …, y). Moreover, by Theorem 21, we know that DV_d(y) can be uniquely extended to an element of L(Y, ℝ).
As an element in L(Y, R) it satisfies the announced perturbed HJB equation.
Proposition 26. For all d ≥ 2 and all y ∈ D(A), the perturbed HJB equation (71) holds. Moreover, for all d ≥ 2, there exists a constant C > 0 such that estimate (72) holds for all y ∈ V.

Proof. Let us prove (71). We fix y ∈ D(A). Since T_2 is characterized by Π, which satisfies the algebraic Riccati equation (50), we obtain the case d = 2. Our proof is based on Theorem 21. From Remark 23, we know that the expressions of the multilinear forms simplify when the mappings are evaluated at (y, …, y) ∈ Y^i and (y, …, y) ∈ Y^k, respectively; in particular, we obtain an identity for kT_k(Ay, y, …, y). We are now ready to prove (71). By (70) we can express DV_d(y) applied to the closed-loop dynamics, and in a similar manner the remaining terms. As a consequence, combining (74) and (75), we conclude that the terms in brackets in the resulting expression are equal to zero by (50) and (73). This proves (71). For the estimate (72), we use T_k ∈ S_k(V, V′) and the definition (69). The assertion now follows with Proposition 2.
where ȳ is the optimal trajectory for problem (P) with initial value y_0.
Proof. By Proposition 14 and Theorem 25, for δ > 0 sufficiently small there exists a constant C_1 such that the corresponding bound holds for all y_0 ∈ B_V(δ). Since we can assume that ‖y_0‖_V ≤ 1, the statement is a consequence of Proposition 26.
Let us now consider a perturbation J_d of the cost function J. Next, we show that the polynomial feedback law u_d(y_d) = −(1/α)B*DV_d(y_d), with corresponding trajectory y_d, performs better than (ȳ, ū) with regard to the perturbed cost function J_d.
Lemma 28. Let d ≥ 2. Then there exists δ_0 > 0 such that for all initial values y_0 ∈ B_V(δ_0) it holds that J_d(y_d, u_d) ≤ J_d(ȳ, ū), where (ȳ, ū) is the optimal solution for problem (P) with initial value y_0.
Proof. By Lemma 27, it follows that J_d(ȳ, ū) and J_d(y_d, u_d) are finite. We have that ȳ ∈ H¹(0, ∞; Y) and, hence, for all T > 0, it holds that ȳ ∈ W^{1,1}(0, T; Y). We can apply a chain rule established in [14] to each of the bounded multilinear forms which appear in V_d(ȳ(·)). Omitting the time variable in what follows, we obtain (77), and Proposition 26 yields the corresponding inequality for ū. With a similar derivation for u = u_d, we infer (78) with equality, since for this control the squared expression vanishes. We have lim_{T→∞} ȳ(T) = 0 and lim_{T→∞} y_d(T) = 0 in V. Since T_k ∈ S_k(V, V′), this implies that V_d(ȳ(T)) and V_d(y_d(T)) vanish as T → ∞. Finally, passing to the limit in (77) and (78), we obtain the assertion. The lemma is proved.
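The mechanism behind Lemma 28 is a completion of squares. As a sketch, under the cost conventions assumed in this reconstruction, integrating (d/dt)V_d(y(t)) along an admissible trajectory with y(t) → 0 and using the perturbed HJB equation of Proposition 26 yields

```latex
J_d(y,u) \;=\; V_d(y_0) \;+\; \frac{\alpha}{2}\int_0^\infty
\Bigl\| u(t) + \tfrac{1}{\alpha}\,B^*DV_d\bigl(y(t)\bigr) \Bigr\|_U^2 \,\mathrm dt,
```

so that the squared expression vanishes precisely for the feedback u_d = −(1/α)B*DV_d(y_d); this is the identity invoked in the proof above.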
We now prove that V d is a Taylor expansion of V and analyze the quality of the feedback law u d in the neighborhood of 0.
Theorem 29. There exists δ > 0 and a constant M > 0 such that the estimates (79) hold for all y_0 ∈ B_V(δ).

Proof. The following inequalities follow directly from Lemma 27 and Lemma 28, where ū is the unique solution to (P) with initial value y_0. Combining them proves inequalities (79).

Multiplication with e and subsequent integration yields the basic energy identity. Note that A_s satisfies an expression analogous to (21); we thus have a coercivity bound with constants α ≥ 0 and β > 0. From Proposition 1 and Young's inequality we conclude a corresponding estimate. Choosing ι large enough, this yields a differential inequality for e. Since y, z ∈ W∞(D(A_s), Y) and e(0) = 0, we can apply Gronwall's inequality and obtain that e(t) = 0 for all t ≥ 0. This shows uniqueness of solutions in W∞(D(A_s), Y).
Proof of Theorem 30. The main idea is to express the dynamics of the error e(t) := ȳ(t) − y_d(t) in feedback form by utilizing classical results on remainder terms of Taylor approximations. Let us detail the most important steps. First, for δ_6 sufficiently small, from Corollary 15 we know that V is smooth and, hence, can be approximated by a Taylor series around 0. From Theorem 29 it also follows that V and V_d agree up to terms of higher order. Since the T_k are multilinear forms on V^k, the uniqueness of Taylor expansions implies that indeed D^kV(0) = T_k. Consequently, with [30, Theorem 4A], we also obtain a Taylor series expansion of DV with a remainder term R_d. In particular, along the optimal trajectory ȳ(·) = Y(y_0)(·) it holds that

p(t) = P(y_0)(t) = DV(ȳ(t)) = Σ_{k=2}^{d} 1/(k−1)! D^kV(0)(·, ȳ(t), …, ȳ(t)) + R_d(ȳ(t)).

Using (84) we then obtain R_d(ȳ(·)) ∈ L²(0, ∞; Y). Consider now the error dynamics, which satisfy ė = A_π e − F(ȳ) + F(y_d) + further terms, and derive an estimate in which the constant M is independent of y_0 and in which δ̃ can be made arbitrarily small by reducing the value of δ_6. This shows the first estimate. The estimate for the controls ū and u_d then follows exactly as in the proof of [14, Theorem 22].

B Linear optimality systems
Here, we analyze a class of linear optimality systems that arise in the proof of Lemma 13. With the space X defined as in (43), for a given (y_0, f, g, h) ∈ X, we consider the following system.

Proposition 31. For all (y_0, f, g, h) ∈ X, there exists a unique triplet (y, u, p) solving the system. Moreover, there exists a constant M > 0, independent of (f, g, h, y_0), such that the corresponding estimate holds.

Proof. For finite-horizon problems the proof would be standard. For the infinite-horizon case the result cannot readily be obtained from the literature, and thus we decided to provide a proof here. We prove the assertion with the help of the following two auxiliary statements.

Claim 1. There exists a constant M > 0 such that for all (f, g, h, y_0) ∈ X, the linear-quadratic problem (LQ) has a unique solution (y, u) satisfying the bounds (87).

Proof of Claim 1. Due to Consequence C3, problem (LQ) is feasible. Let us now consider a minimizing sequence (y_n, u_n)_{n∈ℕ}, for which we can assume a uniform bound on the cost for all n ∈ ℕ. Let us show that the sequence (y_n, u_n) is bounded in W∞(D(A_λ), Y) × L²(0, ∞; U); this follows from Young's inequality, applied with a suitable ε > 0.
It follows that the sequence (y_n, u_n) is bounded in W∞(D(A_λ), Y) × L²(0, ∞; U) and has a weak limit point (y, u) satisfying (87). One can prove the optimality of (y, u) with the same techniques as those used in the proof of Lemma 8.

Setting v = −Kw ∈ L²(0, ∞; U), we have solved De(z, v) = (r, s). The remaining arguments are similar to those provided in the proof of Proposition 11 and are thus omitted here.