CONTINUITY OF COST FUNCTIONAL AND OPTIMAL FEEDBACK CONTROLS FOR THE STOCHASTIC NAVIER-STOKES EQUATION IN 2D

Abstract. We show the continuity of a specific cost functional J(φ) = E sup_{t∈[0,T]} ϕ(L[t, u^φ(t), φ(t)]) of the SNSE in 2D on an open bounded non-periodic domain O with respect to a special set of feedback controls {φ_n}_{n≥0}, where ϕ(x) = log(1 + x)^{1−ε} with 0 < ε < 1.

1. Introduction. We consider the existence of optimal controls φ related to the stochastic Navier-Stokes equation (SNSE) describing the flow of a viscous incompressible fluid in a smooth bounded domain O ⊂ R² with a multiplicative white noise

du^φ + (u^φ · ∇u^φ + ∇p − ν∆u^φ − φ) dt = g(u^φ) dW,   ∇ · u^φ = 0,   (1.1)

and Dirichlet boundary condition u^φ = 0 on [0, ∞) × ∂O, where u^φ = (u^φ_1, u^φ_2) represents the velocity field, ν stands for the coefficient of kinematic viscosity, φ is the deterministic force and p represents the pressure. Here g(u^φ) dW = Σ_{k=1}^∞ g_k(u^φ) dW^k is driven by a cylindrical Brownian motion with independent one-dimensional Brownian motions W^k and Lipschitz coefficients g_k(u^φ) [4,5,2,7,11,8,12,18,19,24,30,27,32,33,34,38]. Compared to their deterministic counterparts, stochastic PDEs lead us to consider new questions and additional technical difficulties, such as the existence and uniqueness of invariant measures or the lack of compactness caused by the stochastic term driven by Brownian motion. There are two notions of solution for the SNSE. The first is the martingale solution, where the stochastic basis is not given in advance but constructed as part of the solution [7,11,18,31,43]. The second, which we consider here, is the so-called pathwise solution or strong solution, i.e. a complete probability space and the Brownian motion are given a priori [3,24,25,29].
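As a concrete toy illustration of an equation of this type (not the 2D setting analyzed in the paper), the following is a minimal finite-difference Euler-Maruyama sketch for the one-dimensional stochastic Burgers equation with homogeneous Dirichlet boundary conditions; all parameter values and the Lipschitz noise coefficient g(u) = 0.1 u are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative only: Euler--Maruyama for the 1D stochastic Burgers equation
#   du = (nu*u_xx - u*u_x + phi) dt + 0.1*u dW,  u(0)=u(1)=0,
# the 1D analogue of (1.1) mentioned in the text. Parameters are toy choices.
def burgers_em(nu=0.1, T=0.1, N=64, M=200, seed=0):
    rng = np.random.default_rng(seed)
    dx = 1.0 / (N + 1)
    dt = T / M
    x = np.linspace(dx, 1 - dx, N)   # interior grid points
    u = np.sin(np.pi * x)            # illustrative initial condition
    phi = np.zeros(N)                # zero control for this illustration
    for _ in range(M):
        # second difference with Dirichlet boundary values u(0)=u(1)=0
        u_xx = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
        u_xx[0] = (u[1] - 2*u[0]) / dx**2
        u_xx[-1] = (u[-2] - 2*u[-1]) / dx**2
        # centered first difference, again with zero boundary values
        u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2*dx)
        u_x[0] = u[1] / (2*dx)
        u_x[-1] = -u[-2] / (2*dx)
        dW = rng.normal(0.0, np.sqrt(dt), N)   # Brownian increments
        # multiplicative, Lipschitz noise coefficient g(u) = 0.1*u
        u = u + (nu*u_xx - u*u_x + phi)*dt + 0.1*u*dW
    return x, u
```

The time step is chosen so that ν dt/dx² < 1/2, which keeps the explicit scheme stable for these toy parameters.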
The existence of optimal controls for stochastic evolution equations has been studied in [36,22,26,23,41,42], among others, by adding linearity or semilinearity assumptions as well as boundedness restrictions on the nonlinearities. The literature on the optimal control of the SNSE specifically is not very rich, since these assumptions do not apply to the SNSE. The nonlinearity of the SNSE makes the problem non-convex; we refer the reader to the book of Ekeland and Temam on non-convex optimization for further background [16]. Related works on the control of the SNSE include the following. Choi et al. investigated the optimal control problem in [9] for the stochastic Burgers equation (the one-dimensional analogue of the Navier-Stokes equation) with additive noise. The paper [35] studies the control of turbulence for the stochastic Burgers equation. Another work in this direction is by Sritharan [39], where the existence of optimal controls is established using techniques for the martingale problem formulation of Stroock and Varadhan [40] in the context of the stochastic Navier-Stokes equation. In [3], it is shown that there exist feedback controls for the SNSE (1.1), driven by different external forces φ, for a specific cost functional satisfying some regularity conditions. In this paper, we follow the framework studied in [3]. Using the recent bounds and approximations for the SNSE in 2D from [28], we show that the cost functional J(φ) is continuous with respect to a specific set of controls φ. Contrary to [3], however, we control the supremum of the SNSE up to a terminal deterministic time T, which is natural when one wants to control the extreme events on the whole path rather than integrating along the path.
The rest of the paper is organized as follows: in Section 2, we give the functional setting, which describes the assumptions on the problem, the deterministic and stochastic framework, as well as the notion of solution considered throughout the paper. In Section 3, we state and prove our main result for the control problem. Section 4 gives the crucial technical results used in our proof.
2. Functional setting. First, we recall the deterministic and probabilistic framework used throughout the paper.
2.1. Deterministic framework. Let O be a bounded open connected subset of R² with smooth boundary ∂O. We take V = {u ∈ C_0^∞(O)² : ∇ · u = 0} and denote by H the closure of V in L²(O)² and by V the closure of V in H¹(O)², respectively. Hence, the spaces H and V are identified as

H = {u ∈ L²(O)² : ∇ · u = 0, u · n = 0 on ∂O},   V = {u ∈ H_0¹(O)² : ∇ · u = 0}.

Here n is the outward pointing normal to ∂O. On H we take the L²(O) inner product and norm,

⟨u, v⟩ = ∫_O u · v dx,   ‖u‖_H = ⟨u, u⟩^{1/2}.

We denote the inner product on H by ⟨·,·⟩ and the norm by ‖·‖_H. The Leray-Hopf projector P_H is defined as the orthogonal projection of L²(O)² onto H. Moreover, on V, we use the H¹ inner product and norm,

((u, v)) = ∫_O ∇u : ∇v dx,   ‖u‖_V = ((u, u))^{1/2}.

We note here that, due to the Dirichlet boundary condition in Equation (1.1), the Poincaré inequality holds, justifying ‖·‖_V as a norm. We take V′ to be the dual of V relative to H, with the pairing denoted by ⟨·,·⟩. We next define the Stokes operator A, understood as a bounded linear map from V to V′ via

⟨Au, v⟩ = ((u, v)),   u, v ∈ V.

A can be extended to an unbounded operator on H according to

Au = −P_H ∆u,   D(A) = H²(O)² ∩ V.

We also define the nonlinear term as a bilinear mapping B : V × V → V′ via

⟨B(u, v), w⟩ = ∫_O (u · ∇v) · w dx.

We note here that the cancellation property ⟨B(u, v), v⟩ = 0 holds for u, v ∈ V. Moreover, we denote by L(H) and L(V) the spaces of all linear continuous operators from the Banach spaces H and V to themselves, respectively.
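The cancellation property admits a one-line verification by integration by parts; the following is the standard computation for smooth divergence-free fields vanishing on the boundary, which extends to u, v ∈ V by density:

```latex
\[
\langle B(u,v), v \rangle
= \int_{\mathcal{O}} (u \cdot \nabla v)\cdot v \, dx
= \frac{1}{2}\int_{\mathcal{O}} u \cdot \nabla |v|^2 \, dx
= -\frac{1}{2}\int_{\mathcal{O}} (\nabla \cdot u)\, |v|^2 \, dx
= 0,
\]
```

where the boundary term in the integration by parts vanishes since v = 0 on ∂O, and the last integral vanishes since ∇ · u = 0.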

2.2. Stochastic framework.
In this section, we recall the necessary background material in stochastic analysis in infinite dimensions needed in the paper (see [13,14,17,37]). We fix a stochastic basis S = (Ω, F, P, {F_t}, W), which consists of a complete probability space (Ω, F, P), equipped with a complete right-continuous filtration {F_t}, and a cylindrical Brownian motion W, defined on a separable Hilbert space U and adapted to this filtration. Given a separable Hilbert space X, we denote by L₂(U, X) the space of Hilbert-Schmidt operators from U to X, equipped with the norm ‖G‖_{L₂(U,X)} = (Σ_k ‖G e_k‖²_X)^{1/2}, where {e_k} is an orthonormal basis of U [13]. For an X-valued predictable process G ∈ L²(Ω; L²_loc([0, ∞); L₂(U, X))), we define the Itô stochastic integral

∫₀ᵗ G dW = Σ_k ∫₀ᵗ G e_k dW^k,

which lies in the space of X-valued square integrable martingales. We also recall the Burkholder-Davis-Gundy inequality: for any p ≥ 1 we have

E sup_{t∈[0,T]} ‖∫₀ᵗ G dW‖_X^p ≤ C E ( ∫₀ᵀ ‖G‖²_{L₂(U,X)} dt )^{p/2}

for some C = C(p) > 0.
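As a quick numerical illustration of the Burkholder-Davis-Gundy inequality in the simplest scalar case G ≡ 1, where the stochastic integral reduces to a one-dimensional Brownian motion and the right-hand side is T^{p/2}, the following sketch estimates the ratio E sup_{t≤T} |W_t|^p / T^{p/2} by Monte Carlo; the sample sizes and the choice p = 2 are illustrative.

```python
import numpy as np

# Monte Carlo sanity check of BDG with G = 1:
#   E sup_{t<=T} |W_t|^p  <=  C_p * T^{p/2}.
# For p = 2, Doob's maximal inequality gives C_2 <= 4, and the ratio is
# strictly above 1 since sup |W_t| >= |W_T| with E|W_T|^2 = T.
def bdg_ratio(p=2.0, T=1.0, steps=1000, samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    # Brownian increments and paths, one path per row
    dW = rng.normal(0.0, np.sqrt(T / steps), (samples, steps))
    W = np.cumsum(dW, axis=1)
    lhs = np.mean(np.max(np.abs(W), axis=1)**p)  # estimate of E sup |W_t|^p
    rhs = T**(p / 2)                             # quadratic-variation term
    return lhs / rhs
```

For p = 2 the estimated ratio lands strictly between 1 and the Doob constant 4, consistent with the inequality.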

2.3. Conditions on the noise. Given a pair of Banach spaces X and Y, we denote by Lip_u(X, Y) the collection of continuous functions h : [0, ∞) × X → Y such that

‖h(t, x) − h(t, y)‖_Y ≤ K_Y ‖x − y‖_X,   x, y ∈ X,   (2.10)

for some constant K_Y > 0 independent of t. The noise term g(u) dW is defined by

g(u) dW = Σ_{k≥1} g_k(u) dW^k.

We write the assumptions in explicit form as well:

Σ_{k≥1} ‖g_k(x)‖²_H ≤ K (1 + ‖x‖²_H),   Σ_{k≥1} ‖g_k(x) − g_k(y)‖²_H ≤ K ‖x − y‖²_H.   (2.14)

Given u ∈ L²(Ω; L²(0, T; H)) and g as above, the stochastic integral ∫₀ᵗ g(u) dW is a well-defined H-valued Itô stochastic integral that is predictable and satisfies the bounds above.

2.4. Notion of solution.
We consider strong pathwise solutions, which are solutions with values in V and strong in the probabilistic sense, i.e., the driving noise and the filtration are given in advance.
Definition 2.1. We fix a stochastic basis S and g as above. We further assume that the initial data u₀ ∈ L⁴(Ω; H) ∩ L²(Ω; V) is F₀-measurable. Moreover, we take a family U of bounded feedback controls φ; namely, for each time t ∈ [0, T], φ(t, ·) is a continuous linear operator from H to H. Moreover, we assume that

‖φ(t, u)‖_H ≤ C₁ + C₂ ‖u‖_H,   t ∈ [0, T], u ∈ H,

where C₁ and C₂ are uniform for the family of controls φ ∈ U. Then we say that the pair (u^φ, τ) is a local pathwise strong solution of the system if τ is a strictly positive stopping time and u^φ(· ∧ τ) is a predictable process in H such that

u^φ(t ∧ τ) = u₀ + ∫₀^{t∧τ} (−ν A u^φ − B(u^φ, u^φ) + φ) ds + ∫₀^{t∧τ} g(u^φ) dW.

A pair (u, ξ) is called a maximal pathwise strong solution if ξ is a strictly positive stopping time and there exists an increasing sequence τ_n → ξ such that (u, τ_n) is a local strong solution for each n and sup_{t∈[0,τ_n]} ‖u(t)‖_V → ∞ on the set {ξ < ∞} for a.e. ω ∈ Ω, where φ ∈ U and the assumptions on the initial data are as above. We consider the following cost functional, with the objective of its minimization:

J(φ) = E sup_{t∈[0,T]} ϕ(L(t, u^φ(t), φ(t))),   (2.24)

where ϕ(x) = log(1 + x)^{1−ε} and 0 < ε < 1.
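A cost functional of this pathwise-supremum form can be estimated by straightforward Monte Carlo. The following hypothetical sketch does this on a scalar toy SDE standing in for the SNSE; the dynamics du = (−u + φ) dt + 0.1 u dW, the running cost L(t, u, φ) = u² + φ², and all parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical Monte Carlo evaluation of
#   J(phi) = E sup_{t<=T} phi_fn(L(t, u(t), phi)),  phi_fn(x) = log(1+x)^(1-eps),
# on a scalar toy SDE with constant control phi (illustrative stand-in only).
def cost_J(phi=0.5, eps=0.5, T=1.0, steps=500, samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    u = np.ones(samples)                     # deterministic initial datum u0 = 1
    running_max = np.full(samples, -np.inf)  # pathwise sup of the running cost L
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), samples)
        u = u + (-u + phi)*dt + 0.1*u*dW     # Euler--Maruyama step
        running_max = np.maximum(running_max, u**2 + phi**2)
    # concave rescaling phi_fn applied to the pathwise supremum, then averaged
    return np.mean(np.log1p(running_max)**(1.0 - eps))
```

Note that, since ϕ is increasing, sup_t ϕ(L) = ϕ(sup_t L), so tracking the running maximum of L suffices.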
Remark 1. Since we require L to be only uniformly Lipschitz, using the concave function ϕ(x) = log(1 + x)^{1−ε} does not mean that we should check only the endpoints for the functional L. We also note that the results in the paper still hold for controls supported in open subsets of the domain. In their paper, F. Abergel and R. Temam [1] investigate the deterministic Navier-Stokes equation by controlling the turbulence inside the flow; they give a cost functional involving the vorticity of the fluid. For our problem, the corresponding functional L would be a vorticity functional of the type considered there. We state now our main result.

Theorem 3.1. Let u₀ and the feedback controls {φ_n}_{n≥1} ⊂ U be as in Definition 2.1, with φ_n → φ ∈ U. Then J(φ_n) → J(φ) as n → ∞; i.e., the cost functional J is continuous with respect to the set of feedback controls.
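The two properties of ϕ that the arguments below rely on, concavity and the bound |ϕ(x₁) − ϕ(x₂)| ≤ ϕ(|x₁ − x₂|) (which follows since ϕ is increasing, concave, and ϕ(0) = 0, hence subadditive), can be sanity-checked numerically; in this sketch the value ε = 0.5 is an illustrative choice.

```python
import numpy as np

# phi mirrors the concave function phi(x) = log(1+x)^(1-eps) from the text.
def phi(x, eps=0.5):
    return np.log1p(x)**(1.0 - eps)

def check_phi(eps=0.5, seed=0, trials=10000):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 100, trials)
    y = rng.uniform(0, 100, trials)
    # midpoint concavity: phi((x+y)/2) >= (phi(x) + phi(y))/2
    concave = np.all(phi((x + y)/2, eps) >= (phi(x, eps) + phi(y, eps))/2 - 1e-12)
    # subadditivity-type bound: |phi(x) - phi(y)| <= phi(|x - y|)
    lip = np.all(np.abs(phi(x, eps) - phi(y, eps)) <= phi(np.abs(x - y), eps) + 1e-12)
    return bool(concave and lip)
```

Both checks pass on random samples, consistent with the composition argument: t ↦ t^{1−ε} is concave increasing and x ↦ log(1 + x) is concave, so ϕ is concave.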
To prove Theorem 3.1, we use the following result from [3].
Theorem 3.2 ([3]). If φ_n → φ in U, then E ∫₀ᵀ ‖φ_n(t, u) − φ(t, u)‖²_H dt → 0 as n → ∞.
Next, we need the following two technical results.

KEREM UGURLU
By interchanging x₁ and x₂ we conclude the proof.
Theorem 3.4 (stochastic Gronwall lemma). Suppose that X, Y, Z and R are real-valued, non-negative stochastic processes. Let τ < T be a stopping time so that

E ∫₀^τ (R X + Z) ds < ∞.

Assume, moreover, that for some fixed constant κ we have R ≤ κ on [0, τ]. Suppose that for all stopping times 0 ≤ τ_a ≤ τ_b ≤ τ,

E [ sup_{t∈[τ_a,τ_b]} X + ∫_{τ_a}^{τ_b} Y ds ] ≤ C₀ E [ X(τ_a) + ∫_{τ_a}^{τ_b} (R X + Z) ds ],

where C₀ is a constant independent of the choice of τ_a, τ_b. Then we have

E [ sup_{t∈[0,τ]} X + ∫₀^τ Y ds ] ≤ C E [ X(0) + ∫₀^τ Z ds ],

where C depends on C₀, T and κ.
Proof. Choose a finite sequence of stopping times τ_k = τ ∧ (kδ), k = 0, . . . , N, with δ > 0 small enough that C₀ κ δ ≤ 1/2. On each interval [τ_k, τ_{k+1}] the hypothesis yields a bound in which the term involving R X can be absorbed into the left-hand side; iterating the resulting estimate over the N ≤ T/δ + 1 intervals yields the claim. Hence, we conclude the proof.
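The absorption step behind this truncation argument can be written out explicitly; this is a sketch assuming the processes are denoted X, Y, Z, R as in the statement and δ denotes the subinterval length:

```latex
\[
\mathbb{E}\Big[\sup_{t\in[\tau_k,\tau_{k+1}]} X\Big]
\le C_0\,\mathbb{E}[X(\tau_k)]
 + C_0\kappa\delta\,\mathbb{E}\Big[\sup_{t\in[\tau_k,\tau_{k+1}]} X\Big]
 + C_0\,\mathbb{E}\int_{\tau_k}^{\tau_{k+1}} Z\,ds,
\]
so that, when $C_0\kappa\delta \le \tfrac12$, the middle term is absorbed and
\[
\mathbb{E}\Big[\sup_{t\in[\tau_k,\tau_{k+1}]} X\Big]
\le 2C_0\,\mathbb{E}[X(\tau_k)] + 2C_0\,\mathbb{E}\int_{\tau_k}^{\tau_{k+1}} Z\,ds.
\]
```

Iterating this bound over the finitely many subintervals produces a constant depending only on C₀, T and κ.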
We continue with the following theorem.
Hence, by Itô's lemma, we obtain an identity for the difference of the two solutions; by taking the supremum up to τ, integrating and taking expectation, we arrive at an estimate whose terms we treat separately. The first term converges to 0 as n, m → ∞. Next, the term involving the controls goes to 0 as n, m → ∞ by Theorem 3.2.
Next, we treat the nonlinear term by separating it into two parts as follows.
For the first term above, to estimate the term C‖u^{φ_n} − u^{φ_m}‖²_V ‖Au^{φ_m}‖_H in the second line of the inequality, we apply Theorem 3.4 with R = ‖Au^{φ_m}‖_H, while X and Z stand for the remaining terms on the right-hand side of equation (3.50), which we prove converge to 0. Moreover, (1/6) ν ‖A(u^{φ_n} − u^{φ_m})‖²_H is absorbed into the left-hand side of the main equation.
Next, we treat the second nonlinear term, whose first part is absorbed into the LHS of equation (3.50), whereas the second part converges to 0 as m, n → ∞ by Theorem 3.5. Hence, the first part of the proof is concluded.

2. The proof is identical to that of [25]. First, by Itô's formula, we fix τ ∈ T_{M,T}^n and S > 0 and integrate from 0 to τ ∧ S to get (3.59). Applying the classical estimate on the nonlinear term (see [10]), and then using Doob's inequality for the second term, we obtain bounds on the corresponding probabilities; letting S → 0 and using the integrability condition imposed on the function φ, we conclude the proof.

For the nonlinear term, we estimate for all w ∈ H as follows. Using the classical estimates of [10], the nonlinear terms converge to 0 by (3.64), and we conclude that, for any given v ∈ H,

1_{t≤τ} ⟨B(u^{n−1}, u^n), v⟩ → 1_{t≤τ} ⟨B(u, u), v⟩,
1_{Ω_n, t≤τ} B(u^{φ_n}, u^{φ_n}) → 1_{t≤τ} B(u, u),   (3.75)

in L²(Ω; L²([0, T]; H)). By the Lipschitz condition on g, we get

1_{Ω_n, t≤τ} g(u^{φ_n}) → 1_{t≤τ} g(u^φ)   (3.78)

in L²(Ω; L²([0, T]; l²(H))). Next, the remaining term goes to 0 as n, m → ∞ by Theorem 3.2 as well as by our assumption on φ.
Hence, by combining all the estimates and using the boundedness of the φ_n, we deduce that, for any fixed v ∈ H, the convergence holds weakly in L²(Ω × [0, T]). If K ⊂ Ω × [0, T] is any measurable set, then (3.80) applies; since v ∈ H and K are arbitrary, we conclude that u^φ satisfies the regularity conditions. Hence, we have shown the local existence of the solution u^φ. Relaxing the restriction ‖u₀‖_V ≤ M, namely extending to the case u₀ ∈ L²(Ω, V), the global statement follows by standard arguments. Next, we borrow the following theorem from [28].
Theorem 3.7 ([28]). Let u^φ, u₀, φ, g be as defined above with the corresponding properties. Then, for any deterministic time T, we have

E [ log ( 1 + sup_{t∈[0,T]} ‖u^φ(t)‖²_V ) ] ≤ C,

where C is uniform over φ ∈ U. We continue with the following lemma.
Lemma 3.8. Given the assumptions on the initial data and {φ_n}_{n≥1} as in Theorem 3.7, we have, for any deterministic time T and any δ > 0,

P ( sup_{t∈[0,T]} ‖u^{φ_n}(t) − u^{φ_m}(t)‖_H > δ ) < δ

for any n, m ≥ N₀ for some N₀; i.e., the solutions with different deterministic forces, {u^{φ_n}}_{n≥1}, converge in probability to u^φ as n, m → ∞.
Proof. By assumption, we have u₀ ∈ L²(Ω, V). Hence, by Chebyshev's inequality, (3.88) holds for n_l large enough. Then, taking any subsequence u^{φ_{m_l}} and repeating the same arguments as above with Theorem 3.5 and Lemma 4.1, we get that every subsequence {u^{φ_{m_l}}} has a further subsequence that converges in probability to u^φ, which implies that the whole sequence {u^{φ_n}} converges in probability to u^φ. This concludes the proof.
Now we are ready to prove Theorem 3.1. Proof.
We estimate E sup_{t∈[0,T]} |ϕ(L(t, u^{φ_n}, φ_n)) − ϕ(L(t, u^{φ_n}, φ))|, where we appeal to Lemma 3.3 in the third inequality and the Lipschitz assumption on L(t, u^φ, φ) in the fourth inequality. By the boundedness assumption on {φ_n}, we have the following for 0 < ε < 1 in probability as n → ∞. Moreover, using Theorem 3.7, we note that G(x) = x^{1/(1−ε)} is a convex function with lim_{x→∞} G(x)/x = ∞. Using the de la Vallée-Poussin criterion for uniform integrability (see e.g. [15]), we get that the family {sup_{t∈[0,T]} ϕ(L(t, u^{φ_n}, φ_n))}_{n≥1} is uniformly integrable. Uniform integrability together with the convergence in probability from Lemma 3.8 implies L¹-convergence [15]. Thus, using that x^{1−ε} for 0 < ε < 1 is increasing and continuous, we get that J(φ_n) → J(φ) as n → ∞. Hence, Theorem 3.1 is proven.
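The uniform-integrability step admits a short explicit computation; this is a sketch under the assumption that Theorem 3.7 provides a logarithmic moment bound of the form E log(1 + sup_t L) ≤ C uniformly in n, with G(x) = x^{1/(1−ε)} one convenient superlinear convex function:

```latex
\[
G(\varphi(x)) = \Big(\log(1+x)^{1-\epsilon}\Big)^{1/(1-\epsilon)} = \log(1+x),
\]
so that, with $L_n = \sup_{t\in[0,T]} L(t, u^{\phi_n}, \phi_n)$ and using that
$\varphi$ is increasing (hence $\sup_t \varphi(L) = \varphi(L_n)$),
\[
\sup_n \mathbb{E}\big[G(\varphi(L_n))\big]
= \sup_n \mathbb{E}\big[\log(1+L_n)\big] \le C < \infty.
\]
```

Since G is convex with G(x)/x = x^{ε/(1−ε)} → ∞ as x → ∞, the de la Vallée-Poussin criterion yields the uniform integrability of {ϕ(L_n)}_{n≥1}.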