PONTRYAGIN MAXIMUM PRINCIPLE FOR THE OPTIMAL CONTROL OF LINEARIZED COMPRESSIBLE NAVIER-STOKES EQUATIONS WITH STATE CONSTRAINTS

Abstract. A Pontryagin maximum principle for an optimal control problem for three-dimensional linearized compressible viscous flows subject to state constraints is established using the Ekeland variational principle. Since the system considered here is of coupled parabolic-hyperbolic type, the well-developed control theory literature using the abstract semigroup approach to linear and semilinear partial differential equations does not seem to cover problems of the type studied in this paper. The controls are distributed over a bounded domain, while the state variables are subject to a set of constraints and are governed by the compressible Navier-Stokes equations linearized around a suitably regular base state. The maximum principle is of integral type and is obtained for minimizers of a tracking-type integral cost functional.

Here p′(·) denotes differentiation of the pressure law with respect to its argument. The two linearized equations (1) are obtained, respectively, from the conservation of fluid mass and fluid momentum, where ρ(t, x) ∈ ℝ₊ denotes the fluid density and u(t, x) ∈ ℝ³ denotes the fluid velocity. The body force f(t, x) ∈ ℝ³ is fixed, representing for instance a gravitational force. The solution (ρ, u) of the linearized system without control may be understood as representing the dynamics of a small perturbation of the base state (ρ̄, ū).
The linearized system (1) is supplemented with a general barotropic pressure law, p̄ = p(ρ̄).
Note that in (1b) the pressure appears only as a coefficient and is independent of the perturbation ρ. The viscous stress tensor S is given by the Newtonian law (2), where the constant µ > 0 is the shear viscosity and the constant η ≥ 0 is the bulk viscosity. Defining λ := η − (2/3)µ, the diffusive term in (1b) may be written div S(∇u) = µ∆u + (µ + λ)∇div u.
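As a sanity check (not part of the paper's argument), the identity div S(∇u) = µ∆u + (µ + λ)∇div u with λ = η − (2/3)µ can be verified numerically. The stress below is taken in the assumed standard Newtonian form S(∇u) = µ(∇u + ∇uᵀ − (2/3)(div u)I) + η(div u)I, and the velocity field, constants, and step size are all illustrative:

```python
import numpy as np

# Numerical check of div S(grad u) = mu*Lap(u) + (mu + lam)*grad(div u), lam = eta - (2/3)mu.
mu, eta = 1.3, 0.4
lam = eta - 2.0 * mu / 3.0
h = 1e-4  # finite-difference step

def u(p):
    # an arbitrary smooth test velocity field
    x, y, z = p
    return np.array([np.sin(x) * y, np.cos(y) + z**2, x * z])

def grad_u(p):
    # Jacobian (grad u)_{ij} = d u_i / d x_j via central differences
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (u(p + e) - u(p - e)) / (2 * h)
    return G

def stress(p):
    # assumed Newtonian stress: mu*(G + G^T - (2/3) div(u) I) + eta * div(u) I
    G = grad_u(p)
    divu = np.trace(G)
    return mu * (G + G.T - (2.0 / 3.0) * divu * np.eye(3)) + eta * divu * np.eye(3)

def div_stress(p):
    # (div S)_i = sum_j d S_{ij} / d x_j
    out = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        out += (stress(p + e)[:, j] - stress(p - e)[:, j]) / (2 * h)
    return out

def rhs(p):
    # mu * Laplacian(u) + (mu + lam) * grad(div u)
    lap = np.zeros(3)
    gdiv = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        lap += (u(p + e) - 2 * u(p) + u(p - e)) / h**2
        gdiv[j] = (np.trace(grad_u(p + e)) - np.trace(grad_u(p - e))) / (2 * h)
    return mu * lap + (mu + lam) * gdiv

p0 = np.array([0.3, -0.7, 1.1])
assert np.allclose(div_stress(p0), rhs(p0), atol=1e-3)
```

The two sides agree to finite-difference accuracy at any smooth point, reflecting that the identity is purely algebraic once λ is defined as above.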

1.2. Notation. The space-time domain Q_T = (0, T) × Ω is fixed, where T > 0 and Ω ⊂ ℝ³ is an open, bounded subset. The lateral boundary of Q_T is denoted Γ_T = (0, T) × ∂Ω. Since T and Ω are fixed, we use the abbreviated notation L^p_t(L^q_x) = L^p(0, T; L^q(Ω; ℝ^N)) and L^p_t(W^{k,p}_x) = L^p(0, T; W^{k,p}(Ω; ℝ^N)) to denote the standard Bochner spaces endowed with their strong topologies (the range ℝ^N will be clear from context). When p = 2 we write H^k_x = W^{k,2}_x, and for H^1_x functions vanishing on the boundary in the sense of trace we write H^1_{0,x}. By C([0, T]; X) we denote the functions continuous on [0, T] with respect to the strong topology of a Banach space X. By C_B(ℝ₊; X) we mean the space of continuous and bounded functions from ℝ₊ to X.
For a vector quantity u ∈ ℝ³, ∇u is the Jacobian matrix, while the Hessian ∇²u is a third-order tensor. The tensor product a ⊗ b of two vectors a and b is a second-order tensor defined componentwise by (a ⊗ b)_{ij} = a_i b_j (i, j = 1, 2, 3). For two second-order tensors A and B, we denote their Frobenius inner product by A : B = Σ_{i,j=1}^{3} A_{ij} B_{ij}. By C we denote an arbitrary constant which may change value from line to line; occasionally the dependence of C on other constants will be specified. The L²(Ω) inner product will be denoted ⟨·, ·⟩. Finally, a ≲ b means that a ≤ Cb for some positive constant C. Other notation will be introduced as necessary.

1.3. Strong solutions of the governing equations. In this paper we require sufficient regularity for solutions of the linearized compressible Navier-Stokes equations, and hence some regularity of the coefficients depending on (ρ̄, ū). To avoid unnecessary technicalities, we suppose the base state (ρ̄, ū) is smooth.
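For readers implementing these objects numerically, the tensor product and the Frobenius inner product translate directly into array operations. A minimal illustration (not from the paper; the arrays are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# tensor product: (a ⊗ b)_{ij} = a_i b_j
outer = np.einsum('i,j->ij', a, b)
assert outer[0, 2] == a[0] * b[2]

# Frobenius inner product: A : B = sum_{i,j} A_{ij} B_{ij}
A = np.arange(9.0).reshape(3, 3)
B = np.eye(3)
frob = np.einsum('ij,ij->', A, B)
assert frob == np.trace(A)  # A : I equals the trace of A
```

The `einsum` index notation mirrors the componentwise definitions in the text almost verbatim.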
A proof of the following theorem is provided in the Appendix of this paper.
Theorem 1.1 (Global solutions of the linearized problem). Fix q ≥ 3. Let Q_T be fixed with ∂Ω ∈ C^∞. Let (ρ̄, ū) be a solution, smooth up to the boundary, of the nonlinear system (5) emanating from initial data (ρ̄₀, ū₀), where S(∇ū) is given by (2), p ∈ C²(ℝ₊), ū satisfies the no-slip boundary condition (3), and f ∈ L²_t(L^q_x). Suppose furthermore that there exist constants m and M such that 0 < m ≤ ρ̄(t, x) ≤ M for all (t, x) ∈ Q_T. Let ρ₀ ∈ H¹_x and u₀ ∈ H¹_{0,x}, and suppose U ∈ L²_t(L²_x). Then there exists a unique strong solution (ρ, u) of the linearized system (1), supplemented with the no-slip condition (3), satisfying the regularity estimates (4).

Theorem 1.1 remains valid under weaker regularity of the coefficients. For instance, it suffices to consider a strong solution (ρ̄, ū) having the regularity specified in Propositions 1 and 2 of the Appendix.
Global-in-time strong solutions of the nonlinear compressible Navier-Stokes equations: The state of a compressible viscous fluid in Q_T may be modeled by the compressible Navier-Stokes equations (5). While the nonlinear system (5) describes the general evolution of barotropic compressible fluids, global solutions are only known to exist with the required regularity provided the data are small enough. The following existence theorem is due to Valli [40] (see also Solonnikov [36]).
1.4. Optimal control problem. We now describe the optimal control problem addressed in this paper. The objective is to determine state-control triples (ρ, u, U) that minimize a tracking-type cost functional J, where the desired target states are denoted ρ_d ∈ L²_t(L²_x) and u_d ∈ L²_t(L²_x). In addition, we impose a state constraint of the form F(ρ, u) ∈ W, where the constraint mapping F, taking values in a Banach space X, is assumed to be continuously Fréchet differentiable, X has a strictly convex dual X*, and W ⊂ X is a nonempty, closed, and convex subset.
We denote by B_R(0) := {V ∈ L²_x : ‖V‖_{L²_x} ≤ R} the closed ball of radius R, where R > 0 is fixed. The set of (equivalence classes of) controls is taken to be U_ad := L²(0, T; B_R(0)).

Definition 1.3. The admissible class A_ad is defined as the set of state-control triples (ρ, u, U), with U ∈ U_ad, solving the linearized compressible Navier-Stokes system (1) in the sense of Theorem 1.1 and satisfying the state constraint (7).
The first-order necessary conditions in the form of a Pontryagin maximum principle will be obtained for the following optimal control problem (P1): minimize J over the admissible class A_ad, where it is understood that the admissible class incorporates the PDE constraint (1) and the state constraint F(ρ, u) ∈ W.
Definition 1.4. A solution of Problem (P1) is called an optimal solution and is denoted (ρ*, u*, U*). The control U* is called an optimal control.

Remark 1. Since the state (ρ, u) is determined uniquely by the control U through the governing PDE, i.e. ρ = ρ[U] and u = u[U], the cost J may be considered a function of the control only.
Optimal control theory of incompressible viscous flow has an extensive literature; see for instance Fursikov [22], Gunzburger [24], Lions [27], Sritharan [37], and the references therein. In this paper we extend the techniques developed in [18, 19, 41] for incompressible viscous flow to the linearized compressible case.
Rigorous studies of optimal control problems for the compressible Navier-Stokes equations are less numerous than their incompressible counterparts. Existing mathematical results tend to focus on problems with a number of simplifications, such as working with the linearized rather than the nonlinear system, or employing some nonphysical assumptions in order to make the problems tractable.
An optimal control problem for linearized compressible Navier-Stokes flows is considered in [9]. The controls act on the boundary of a two-dimensional inlet-outlet domain, and existence and necessary conditions are established. Existence of optimal controls for the nonlinear compressible system is established in [14], where the authors use a stronger cost functional and a weak-strong uniqueness result to ensure uniqueness of the state variables. In [2] an optimal control problem is formulated for a Navier-Stokes-Fourier system in one dimension on an inlet-outlet domain in Lagrangian coordinates. For results on null and approximate controllability, and on stabilizability, see [10, 17, 30] and the references therein.
Computational implementations of optimality conditions have been addressed in [11, 12, 33], motivated by problems in aeroacoustics and noise control, and in [7, 25] for problems in optimal design.
In order to derive the necessary conditions for Problem (P1), we make use of the variational principle due to I. Ekeland [15]. This tool provides the existence of approximate minimizers for rather general cost functionals, and is useful when exact minimizers are not required or do not exist. In the context of incompressible fluids, the variational principle has been used by many authors to obtain necessary conditions, cf. [18, 20, 41]. The reference [20] in particular deals with incompressible flow with state constraints.
We also point out that the problem considered in this paper involves coupled parabolic-hyperbolic partial differential equations and hence does not fall within the general framework developed in the previous literature, such as [3, 4, 5, 18, 20]. In addition, the abstract semigroup approach for compressible flow developed in [38, 39] first transforms the compressible Navier-Stokes equations to Lagrangian coordinates (thereby eliminating the convective derivative term in the density equation so as to remove the hyperbolicity), which does not seem amenable to control-theoretic treatment. These difficulties may be overcome in the future with substantial effort; however, the approach given here seems best suited to the control theory of linear and nonlinear compressible Navier-Stokes equations. Moreover, as pointed out in the papers by Fattorini and Sritharan [18, 20], the approach using the Ekeland variational principle and spike variations yields the most powerful form of the Pontryagin maximum principle, accommodating terminal and pointwise state constraints, and even constraints on the derivative of the state variable. Since the current paper is written to set the stage for such future developments in the control of the fully nonlinear compressible Navier-Stokes equations with state constraints, we did not attempt to deduce our results from the best available results for convex optimization of linear partial differential equations, in part because of the mathematical difficulties associated with the coupled system pointed out above.
2. The necessary conditions and main theorem. In this section we formally derive the Pontryagin maximum principle for Problem (P1). In the following we divide the momentum equation by ρ̄, which we recall is assumed to be strictly positive, in order to obtain explicitly the evolution of (ρ, u).
Begin by defining −N₁(ρ, u) = div(ρū) + div(ρ̄u). The linearized system (1) may then be written concisely. We define the augmented cost functional J, where σ and ξ denote the adjoint variables to ρ and u respectively, d_W denotes the distance to the set W in the norm of X, λ is a multiplier for the cost functional, and ζ is a multiplier for the state constraint. Let (ρ*, u*, U*) be an optimal triple. Formally, the adjoint equations are derived by differentiating J in the Gâteaux sense with respect to the state-control variables and taking adjoints.

Remark 2. Differentiating J with respect to σ and ξ recovers the original system.
From (8) we obtain that (σ, ξ) should satisfy the adjoint system (9) on Q_T, and in addition the Pontryagin principle for optimal control holds, to be satisfied for all W ∈ B_R(0). Equivalently, this may be written in a Hamiltonian formulation, where we define the Hamiltonian and the corresponding Lagrangian. The following theorem states the main result of this paper regarding the necessary conditions for Problem (P1).
Theorem 2.1. Let (ρ*, u*, U*) be an optimal solution of Problem (P1). Then there exist λ ∈ ℝ, η ∈ X*, and a weak solution (σ, ξ) of the adjoint system (9) in the sense of distributions, such that the integral maximum principle holds for all W ∈ B_R(0), where η ∈ ∂d_W(F(ρ*, u*)), ∂d_W denoting the convex subdifferential of the distance function to the set W.
3. Mathematical preliminaries. In this section we recall some relevant results that will be used in the proof of the Pontryagin maximum principle.
The distance function d_W(·) to the set W is Lipschitz continuous with Lipschitz constant 1.
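The 1-Lipschitz property of a distance function is easy to confirm numerically in a toy setting. Here, purely for illustration, W is taken as a closed interval in ℝ (all names and constants are assumptions of this example):

```python
import numpy as np

rng = np.random.default_rng(0)

# distance to the closed interval W = [c - r, c + r] in R, a stand-in for the set W in X
c, r = 0.5, 1.0
def d_W(x):
    return np.maximum(np.abs(x - c) - r, 0.0)

x = rng.uniform(-10, 10, 10000)
y = rng.uniform(-10, 10, 10000)

# 1-Lipschitz property: |d_W(x) - d_W(y)| <= |x - y|
assert np.all(np.abs(d_W(x) - d_W(y)) <= np.abs(x - y) + 1e-12)
```

The same triangle-inequality argument behind this check works in any metric space, which is why the lemma holds with constant exactly 1.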
Theorem 3.2 (Ekeland variational principle). Let (X, d) be a complete metric space, and F : X → ℝ ∪ {+∞} a lower semicontinuous functional, not identically +∞ and bounded from below. Let ε > 0. Then for every point u ∈ X such that F(u) ≤ inf_X F + ε (12), and every λ > 0, there exists some point v ∈ X such that F(v) ≤ F(u), d(u, v) ≤ λ, and F(w) > F(v) − (ε/λ) d(v, w) for all w ∈ X with w ≠ v. Theorem 3.2 is due to Ekeland [15] (see [16] for an alternative proof credited to M. Crandall).
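The conclusions of Theorem 3.2 can be checked concretely in a toy example where the infimum is not attained. Below, F(x) = e^(−x) on X = [0, 50] with the Euclidean metric, u is an ε-minimizer, and the specific candidate v = u + λ is an assumption of this illustration (not the construction used in the proof); the three conclusions are verified on a grid:

```python
import numpy as np

F = lambda x: np.exp(-x)      # bounded below; inf F = 0 is not attained on [0, inf)
eps = 1e-2
lam = np.sqrt(eps)            # the choice lambda = sqrt(eps) used later in the paper

u = np.log(1.0 / eps)         # F(u) = eps, so u is an eps-minimizer (condition (12))
v = u + lam                   # illustrative candidate satisfying the three conclusions

grid = np.linspace(0.0, 50.0, 200001)
grid = grid[np.abs(grid - v) > 1e-9]   # exclude w = v

assert F(v) <= F(u)                    # (i)  F(v) <= F(u)
assert abs(u - v) <= lam + 1e-12       # (ii) d(u, v) <= lambda
# (iii) perturbed minimality: F(w) > F(v) - (eps/lam) * d(v, w) for all w != v
assert np.all(F(grid) > F(v) - (eps / lam) * np.abs(grid - v))
```

The point is that even though F has no minimizer, v minimizes the perturbed functional w ↦ F(w) + (ε/λ) d(v, w), which is exactly how the principle is used in Section 4.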

Remark 3. A point u ∈ X such that (12) holds is called an ε-minimizer of F.
In Section 1.4 we introduced the set of admissible controls U_ad = L²(0, T; B_R(0)) (13). The set U_ad is equipped with the Ekeland metric d_E, defined for all U, V ∈ U_ad by d_E(U, V) := meas{t ∈ (0, T) : U(t) ≠ V(t)}, where meas denotes Lebesgue measure on ℝ.
Lemma 3.3. (U_ad, d_E) is a complete metric space.

The proof is classical and may be found in Ekeland [15] (see also Lemma 3.15 of [18]). Note that B_R(0) is closed, convex, and bounded. In general, Lemma 3.3 is not valid if we allow unbounded control sets, i.e. controls taking values in an unbounded subset of L²_x (see [18], pg. 227). In addition, the following inequality holds: ‖U − V‖²_{L²_t(L²_x)} ≤ (2R)² d_E(U, V) (14), and so U_n → U strongly in L²_t(L²_x) whenever d_E(U_n, U) → 0. Next we introduce a set of variations of the admissible controls, known as spike (needle) variations.
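A discrete sketch of the inequality (14): for controls with values bounded by R, the squared L² distance is controlled by (2R)² times the measure of the set on which the controls differ. Discretizing (0, T) with scalar-valued controls (all names and constants below are illustrative):

```python
import numpy as np

T, n = 1.0, 1000
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)
R = 2.0

# two admissible controls with values in [-R, R], differing on a sub-interval
U = np.clip(np.sin(2 * np.pi * t), -R, R)
V = U.copy()
mask = (t > 0.3) & (t < 0.45)
V[mask] = R                                   # V deviates from U on (0.3, 0.45)

d_E = dt * np.count_nonzero(U != V)           # Ekeland distance: measure of {U != V}
l2_sq = dt * np.sum((U - V) ** 2)             # squared L^2 distance

# inequality (14): ||U - V||^2 <= (2R)^2 d_E(U, V)
assert l2_sq <= (2 * R) ** 2 * d_E + 1e-12
```

The bound follows because |U(t) − V(t)| ≤ 2R pointwise wherever the two controls differ, and the integrand vanishes elsewhere.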
Definition 3.4 (Spike variation). Given U ∈ U_ad, a point τ ∈ (0, T], a value W ∈ B_R(0), and h > 0 small, the spike (needle) variation of U is defined by U_h(t) := W for t ∈ (τ − h, τ], and U_h(t) := U(t) otherwise. For brevity we denote the spike variation by U_h, with τ and W regarded as fixed.
The following lemma ensures continuous dependence of solutions of (1) on the control U.

Lemma 3.5. Let U_n, U ∈ U_ad, and suppose d_E(U_n, U) → 0 as n → ∞. Let (ρ_n, u_n) be the solution of (1) corresponding to the control U_n, and (ρ, u) the solution corresponding to the control U, both emanating from the same initial data (ρ₀, u₀). Then (ρ_n, u_n) → (ρ, u) strongly in the topology determined by the estimates (4) of Theorem 1.1.

Proof. Define τ_n = ρ_n − ρ and ω_n = u_n − u. Since the system (1) is linear, it follows that τ_n and ω_n satisfy the system (17). The structure of (17) is therefore the same as that of the linearized system (1), and so the same estimates (4) from Theorem 1.1 hold. We deduce an estimate in terms of the energy E(τ_n, ω_n) := ½(τ_n² + |∇τ_n|² + |ω_n|² + µ|∇ω_n|² + (µ + λ)|div ω_n|²).
To obtain the convergence of ∂_t τ_n, note that ∂_t τ_n = −div(τ_n ū) − div(ρ̄ ω_n), and the right-hand side of this equality converges to 0 in L^∞_t(L²_x).
Next we study the limits of the difference quotients z_h := (ρ_h − ρ)/h and v_h := (u_h − u)/h (19), where (ρ, u) is the solution of (1) corresponding to U, and (ρ_h, u_h) is the solution corresponding to a spike variation U_h. We show that as h → 0, the respective limits z and v satisfy an appropriate linearized problem. Formally, one may directly write the system satisfied by (z_h, v_h) and, taking the limit h → 0, arrive at the system (20), where the singular term involving the Dirac delta corresponds to a jump due to the spike variation. Equivalently, the limiting system (20) may be cast in an abstract semigroup framework, where the singular term instead contributes as an initial datum at time τ. These observations are made rigorous in the following theorem, whose proof is based on ideas from [18, 19].
Theorem 3.6. Suppose (ρ, u) is a solution of (1) corresponding to the control U, and let (ρ_h, u_h) be the solution corresponding to the spike variation U_h (cf. Section 3.4). Let τ ∈ (0, T] be a left Lebesgue point of the function ρ̄⁻¹(·)(W − U(·)). Define z_h and v_h as in (19). Then z_h → z and v_h → v uniformly in τ ≤ t ≤ T and strongly in L²(Ω), where (z, v) solves the limiting system (22), and S(t, τ) is the evolution operator of the system ∂_t z + div(z ū) + div(ρ̄ v) = 0, coupled with the corresponding velocity equation.

Proof. Define the linear operator A generating the linearized dynamics. We may then concisely write the system for (z_h, v_h)ᵀ as (d/dt)(z_h, v_h)ᵀ = A(z_h, v_h)ᵀ + F_h. Similarly, the system (22) for (z, v)ᵀ over τ ≤ t ≤ T may be written as (d/dt)(z, v)ᵀ = A(z, v)ᵀ, while for 0 ≤ t < τ it holds that (z, v)ᵀ(t) = (0, 0)ᵀ. Next, define the difference η(t, h) := (z_h, v_h)ᵀ(t) − (z, v)ᵀ(t). Note that for any 0 ≤ t < τ − h we have F_h(t) = 0, so the two systems coincide on (0, τ − h). Since A is linear, we deduce η(t, h) = 0 uniformly in t ∈ (0, τ − h). Letting h → 0, we deduce the convergence for 0 ≤ t < τ. Now suppose τ ≤ t ≤ T. By the definition of the evolution operator S(t, s), the solutions admit Duhamel-type representations, and therefore η(t, h) may be estimated by terms which, as h → 0, converge to zero by virtue of the strong continuity of the evolution operator in L²_x × L²_x and the left Lebesgue point property of F₀ at τ.
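The mechanism of Theorem 3.6 is visible already in a scalar ODE analogue (an illustration, not the PDE system): for x′ = ax + u with a constant control spiked to the value W on (τ − h, τ], the difference quotient of the terminal state converges to e^{a(T−τ)}(W − u(τ)), i.e. the evolution operator applied to an "initial datum" created at time τ. All constants below are illustrative, and closed-form solutions are used to avoid integration error:

```python
import numpy as np

# x' = a x + u, x(0) = x0, constant control u0, spiked to W on (tau - h, tau]
a, x0, u0, W, tau, T = -0.7, 1.0, 0.3, 1.5, 0.4, 1.0

def x_final(h):
    # closed-form value x(T) of the solution with the spiked control
    base = np.exp(a * T) * x0 + u0 * (np.exp(a * T) - 1.0) / a
    spike = (W - u0) * np.exp(a * (T - tau)) * (np.exp(a * h) - 1.0) / a
    return base + spike

def quotient(h):
    # difference quotient z_h(T) = (x_h(T) - x(T)) / h
    return (x_final(h) - x_final(0.0)) / h

# analogue of S(T, tau) applied to the datum W - u(tau) created by the spike
limit = (W - u0) * np.exp(a * (T - tau))

err1 = abs(quotient(1e-2) - limit)
err2 = abs(quotient(1e-4) - limit)
assert err2 < err1 < 1e-2   # the quotient converges to the propagated datum
```

Here e^{a(t−τ)} plays the role of the evolution operator S(t, τ), and the vanishing of the quotient for t < τ corresponds to the statement η(t, h) = 0 on (0, τ − h).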
Finally, the following theorem provides the existence of weak solutions of the adjoint system. It can be proven similarly to Theorem 1.1.
Theorem 3.7. Let ρ_d, u_d ∈ L²_t(L²_x) be given, and suppose (ρ*, u*, U*) is an optimal solution of Problem (P1). Let (ρ̄, ū) and f satisfy the same conditions as in Theorem 1.1. Then there exists a weak solution (σ, ξ) of the adjoint system (9) in the sense of distributions.

4. Proof of the Pontryagin maximum principle. Having collected the preliminary results, we now prove Theorem 2.1. Let (ρ*, u*, U*) be an optimal triple for Problem (P1). First we modify the cost functional by penalizing the state constraint. Following the choice of penalization in Wang and Wang [41], let ε > 0 and define the penalized cost functional J_ε, where d_W denotes the distance to the set W ⊂ X in the norm of X, i.e. d_W(x) = inf_{w ∈ W} ‖x − w‖_X.
We recall that the state trajectory (ρ, u)(·) is determined uniquely by the control U (cf. Theorem 1.1), so that J_ε may be considered a functional of U only. Fix U_ad as in (13), equipped with the Ekeland metric d_E. By Lemma 3.5 and the L²-continuity (14), it follows that J_ε is lower semicontinuous (indeed continuous) with respect to the control U ∈ U_ad. Furthermore, J_ε is bounded from below. The triple (ρ*, u*, U*) is therefore an ε-minimizer of J_ε, and we may apply the Ekeland variational principle, Theorem 3.2, with λ = √ε, to deduce that there exists a control U_ε ∈ U_ad, with corresponding trajectory (ρ_ε, u_ε), satisfying (25). For the following computations we suppose ε is fixed. The inequality (25c) holds for any control in U_ad, and so we choose the admissible spike variation U_ε^h (cf. Definition 3.4) of U_ε, with corresponding trajectory (ρ_ε^h, u_ε^h), and deduce (26). From Lemma 3.5, the L²-continuity provided by (14), and an elementary algebraic identity, we obtain from (26) the inequality (27), where C_ε^h := (2J_ε(ρ_ε, u_ε, U_ε) + o(1))⁻¹. Next we pass to the limit h → 0 in (27). The computations are organized by first defining the quantities I_j^h (j = 1, 2, 3, 4), where I_1^h, I_2^h, I_3^h correspond to the three preceding integrals and I_4^h to the distance term.
By the definition of the spike variation, on the time interval (0, τ − h) the control U_ε and the variation U_ε^h coincide, and so (ρ_ε^h, u_ε^h, U_ε^h) ≡ (ρ_ε, u_ε, U_ε) on (0, τ − h). Throughout this section we assume τ is a Lebesgue point of all relevant functions; such a choice is always possible. Taking these considerations into account, we first compute (29), where the integral supported on (τ − h, τ) vanishes in the limit since τ is a Lebesgue point. Let (z_ε, v_ε) be the solution of the linearized system (22) corresponding to the control U_ε, as specified in Theorem 3.6.
By Theorem 3.6 and Lemma 3.5, the difference quotients converge as h → 0. Passing to the limit in (29), we deduce the limit of I_1^h, and similarly for I_2^h. Next, using the definition of U_ε^h, we compute I_3^h. By assumption X* is strictly convex, and so if w ∉ W, then ∂d_W(w) consists of a single element with unit norm in X* (cf. page 154 of Li and Yong [26]); hence without loss of generality we may write this subgradient explicitly. Furthermore, we may express the Fréchet derivative of F at the point (ρ, u) in terms of its partial Fréchet derivatives (cf. Proposition 2.53 of [34]). From Theorem 3.6 we have the convergence (34). The composition of Fréchet differentiable functions is differentiable and obeys a chain rule, and so we obtain from (32)-(34) the limit (35), where η_ε ∈ ∂d_W(F(ρ_ε, u_ε)) ⊂ X* and ‖η_ε‖_{X*} = 1.
We are now in a position to let h → 0 in (27), obtaining (36). Next we introduce the Hamiltonian. From the weak formulation of the adjoint equations (cf. Theorem 3.7), we let (σ_ε, ξ_ε) be a weak solution of the adjoint system (37). Combining with (36), we obtain the inequality (38), valid for all W ∈ B_R(0). The strong solution property of (z_ε, v_ε) allowed us to remove the time integrals. In (38) we also used that σ_ε(T) = ξ_ε(T) = 0, z_ε(τ) = 0, and v_ε(τ) = ρ̄⁻¹(τ)(W − U_ε(τ)). The adjoint equations (37) and the inequality (38) may be interpreted as necessary conditions for ε-optimal control. To conclude, we must pass ε → 0 in (37) and (38). From the definitions of a_ε and λ_ε, and using that ‖η_ε‖_{X*} = 1, it follows that these multipliers are uniformly bounded. Therefore there exist λ ∈ ℝ and a₀ ∈ X* such that, along subsequences, λ_ε → λ and a_ε ⇀* a₀ in X* as ε → 0 (41). From the estimates on ξ_ε provided by Theorem 3.7, we further obtain that ∂_t ξ_ε is bounded in L²_t(H⁻¹_x) uniformly in ε. From this estimate, and using that ξ_ε ∈ L²_t(H¹_{0,x}), we obtain by continuous embedding that ξ_ε ∈ C([0, T]; L²(Ω; ℝ³)). By an application of the Aubin-Lions lemma, we obtain the compactness (42). Furthermore, from (25b) we obtain convergence of the control terms (43). Integrating (38) in time from 0 to T and using (40), (42), and (43), we pass ε → 0, obtaining (44). From the weak-star convergence (41), and using that F is continuously Fréchet differentiable, we furthermore obtain (45). Passing to the limit ε → 0 in (37), we arrive at the limiting adjoint system. The integral maximum principle and the adjoint equations for U* to be an optimal control have thus been obtained.
Furthermore, using that a_ε ∈ ∂d_W(F(ρ_ε, u_ε)), by the definition of the subdifferential we get that d_W(w) ≥ d_W(F(ρ_ε, u_ε)) + ⟨a_ε, w − F(ρ_ε, u_ε)⟩ for all w ∈ X. Using that d_W(w) = 0 for w ∈ W, together with the nonnegativity of the distance function, it follows that ⟨a_ε, w − F(ρ_ε, u_ε)⟩ ≤ 0. Passing ε → 0, and using the weak-star convergence of a_ε and the strong convergence of ρ_ε and u_ε, it follows that ⟨a₀, w − F(ρ*, u*)⟩ ≤ 0 for all w ∈ W (47). The condition (47) says that a₀ belongs to the normal cone of W at F(ρ*, u*), i.e. a₀ ∈ N_W(F(ρ*, u*)). This concludes the proof of Theorem 2.1. It only remains to justify the existence of an optimal triple (ρ*, u*, U*).

5. Existence of optimal controls. In this section we establish the existence of optimal controls for Problem (P1). Note that we assume the set of admissible triples A_ad is nonempty.
Theorem 5.1. Let ρ_d, u_d ∈ L²_t(L²_x) be given. Let A_ad be defined as in Definition 1.3, and suppose A_ad ≠ ∅. Then there exists an optimal triple (ρ*, u*, U*) ∈ A_ad such that J(ρ*, u*, U*) = inf_{(ρ,u,U) ∈ A_ad} J(ρ, u, U) =: j.

Proof. We employ the direct method of the calculus of variations. By assumption, A_ad is nonempty. Since J is bounded from below, we deduce the existence of a minimizing sequence {(ρ_n, u_n, U_n)}_{n=1}^∞ of elements of A_ad such that lim_{n→∞} J(ρ_n, u_n, U_n) = j.
Furthermore, there exists R large enough such that 0 ≤ J(ρ_n, u_n, U_n) ≤ R < +∞ uniformly in n. In particular, ‖U_n‖_{L²_t(L²_x)} ≤ C(R), and by Theorem 1.1 we obtain estimates on ρ_n ∈ L^∞_t(H¹_x) and ∂_t u_n ∈ L²_t(L²_x), uniform in n. Without relabeling, there exists a subsequence (ρ_n, u_n, U_n) converging weakly to a triple (ρ*, u*, U*). Recall that U_ad is a closed and convex set. Since J is continuous and convex, it follows that J is also sequentially weakly lower semicontinuous. We deduce that J(ρ*, u*, U*) ≤ lim inf_{n→∞} J(ρ_n, u_n, U_n) (49). Combining (48) and (49), we get that J(ρ*, u*, U*) = j, and therefore (ρ*, u*, U*) is a minimizer. It remains to check that (ρ*, u*, U*) is a strong solution of (1) and satisfies the state constraint. From the uniform estimates we also obtain, along the subsequence, that ρ_n ⇀* ρ* in L^∞_t(H¹_x) and ∂_t u_n ⇀ ∂_t u* in L²_t(L²_x). Since the equations (1) are linear, we may pass to the limit to conclude that (ρ*, u*, U*) satisfies the PDE. In particular, we may pass to the limit in (1) weakly in L²_t(L²_x) and use the density of test functions to conclude that the governing equations are satisfied almost everywhere in Q_T. Furthermore, the convergence of (ρ_n, u_n) allows us to conclude by the Aubin-Lions lemma that, along a subsequence, (ρ_n, u_n) converges strongly to (ρ*, u*) in L²_t(L²_x). By the continuity of F (through its Fréchet differentiability) and the continuity of the distance function, it follows that lim_{n→∞} d_W(F(ρ_n, u_n)) = d_W(F(ρ*, u*)) = 0, and so F(ρ*, u*) ∈ W since W is closed in X.
Proof. Multiplying (1a) by ρ, integrating by parts over Ω, and noting that the velocity u vanishes on the boundary, we deduce (51). Next, using that (ρ̄, ū) satisfies equation (5a), we get (52) and, by a simple computation, a related identity. Taking the scalar product of the momentum equation (1b) with u, invoking (52), and integrating by parts, we deduce (53). Next we obtain an estimate on the density gradient. Applying the gradient operator to (1a) and taking the scalar product with ∇ρ, we get (54), where the second equality follows from a few applications of the product rule and integration by parts. Finally, by virtue of u vanishing on Γ_T, we obtain (55). Combining (51), (53), (54), and (55), and integrating in time, we arrive at the energy identity (56), where each of the I_i denotes one of the space-time integrals. Repeatedly invoking Hölder's inequality, Young's inequality with ε, Poincaré's inequality, and the Sobolev embedding W^{1,2}_x ⊂ L^6_x, we estimate each of the I_i by terms of the form C(ε) ∫₀ᵗ∫_Ω |∇ρ|² dx ds and ε ∫₀ᵗ∫_Ω (|∇u|² + |∇div u|²) dx ds. These estimates are then combined with the energy identity (56) to deduce the desired bound, where the ε-terms were absorbed into the left-hand side of (56) by choosing ε small enough, and where A(·) ∈ L¹(0, T) depends on ε and on norms of the base state. This concludes the proof.
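A representative estimate of the kind invoked above (an illustration, not the paper's exact computation): a cross term is split by the Cauchy-Schwarz inequality in x and Young's inequality ab ≤ εa² + C(ε)b², so that the velocity-gradient contribution can be absorbed into the dissipation:

```latex
\int_0^t\!\!\int_\Omega |\nabla\rho|\,|\nabla u| \,dx\,ds
\;\le\; \int_0^t \|\nabla\rho\|_{L^2_x}\,\|\nabla u\|_{L^2_x}\,ds
\;\le\; C(\varepsilon)\int_0^t\!\!\int_\Omega |\nabla\rho|^2 \,dx\,ds
\;+\; \varepsilon\int_0^t\!\!\int_\Omega |\nabla u|^2 \,dx\,ds .
```

The ε-term is absorbed into the left-hand side of (56), while the C(ε)-term is handled by Grönwall's lemma, which is the pattern used for each of the I_i.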
The next result provides existence and regularity for the Lamé system, used to obtain the W^{2,2} velocity estimates. It may be found in [32], Lemma 4.32.
Proposition 3 also has an L^p version for 1 < p < ∞ (cf. [32], Lemma 4.32). We conclude this section with the following lemma concerning the a priori estimates (Lemma A.1).

Proof. Using Proposition 3, we retrieve the estimate ‖u‖_{W^{2,2}_x} ≤ C(Ω) ‖div S(∇u)‖_{L²_x}. Combining this estimate with the results of Propositions 1 and 2, we obtain a bound on ∫_Ω E(ρ, u)(t) dx together with the dissipative terms. An application of Grönwall's lemma concludes the proof.
Next define the ball B, which is closed, convex, and bounded. By choosing the time horizon T̃ ≤ T small enough, the mapping T maps B into itself. Furthermore, it may be shown that the family {T[u_n]}_{n≥1} is equicontinuous in C([0, T̃]; X_n); hence the Arzelà-Ascoli theorem applies and T is a compact operator. We deduce by Schauder's fixed point theorem that there exists a fixed point u_n of (63). The estimates in Lemma A.1, being uniform up to time T, allow us to extend the interval of existence up to this time. Similarly, uniformity in n allows us to extract subsequences, still denoted (ρ_n, u_n), converging in suitable topologies. Due to this regularity and the linearity of the linearized Navier-Stokes system, we may pass to the limit to conclude that (ρ, u) is a strong solution. This concludes the proof.