Numerical solution of bilateral obstacle optimal control problem, where the controls and the obstacles coincide

This work deals with the numerical solution of a bilateral obstacle optimal control problem similar to the one given in Bergounioux et al. [9], with some modifications. It can be regarded as an extension of our previous work [18]; the main feature of the present work is that the controls and the two obstacles coincide. For the numerical resolution we follow the idea of our previous work [18]: we begin by discretizing the optimality system of the underlying problem using finite difference schemes, and then propose an iterative algorithm. Finally, numerical examples are provided to show the efficiency of the proposed algorithm and of the scheme used.

Optimal control problems governed by variational inequalities have been extensively studied in recent years by many authors, such as [4], [5], [27], [28]. These authors have studied optimal control problems for obstacle problems and variational inequalities where the obstacle is a known function and the control variables appear in the variational inequalities. On the other hand, in [3] and [9] the authors have studied another class of problems, where the obstacle functions are unknown and are considered as control functions. This kind of problem may concern the optimal shape of a dam [14], for which the obstacle gives the shape to be designed such that the pressure of the fluid inside the dam is close to a desired value. Similarly, if we want to design a membrane having a desired shape, we need to choose a suitable obstacle; in this case, the obstacle can be considered as a control and the membrane as the state (see for example [11], [20], and [33]).
We can also encounter this kind of problem in finance: for example, in [1] the authors present methods to calibrate the local volatility with American put options. There, the price P of the American put of strike K and maturity T belongs to C^0([0,T]; L^2) ∩ L^2(0,T; K) and satisfies

(∂P/∂t (t), v − P(t)) + a_t(P(t), v − P(t)) ≥ 0, for all v in K, (1)

where a_t is a bilinear form (not reproduced here) involving r, the interest rate (which is assumed constant), and t, the time to maturity; moreover ∂P/∂t belongs to L^2(0,T; V), and P satisfies P|_{t=0} = P_0, where P_0 is the payoff function P_0(S) = (K − S)_+. To solve the calibration problem, one way consists in finding η in H minimizing J(η) + J_R(η) from the observations of: the price S_0 today; and the prices (P_i)_{i∈I} of a family of options with different maturities and different strikes (T_i, K_i)_{i∈I}. Here H is a suitable closed subset of a finite-dimensional function space, J_R(η) is a Tychonoff regularization, and η is the square of the volatility. It should be pointed out that, in the optimal control of a variational inequality, the main difficulty comes from the fact that the mapping T between the control and the state (the control-to-state operator) is not Gâteaux differentiable but only Lipschitz continuous, as pointed out in [27] and [28], where one can only define a conical derivative for T; it is therefore not easy to obtain optimality conditions that are numerically exploitable. To overcome this difficulty, various authors (see for example Kunisch et al. [23], V. Barbu [5] and the references therein) use a Moreau-Yosida approximation technique to reformulate the governing variational inequality as a problem governed by a variational equation. Our approach is based on Barbu's penalty-regularization method: by introducing a penalty parameter approaching zero, we obtain an approximate optimality system for the original one which can easily be used from the numerical point of view.
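The calibration step just described is a regularized least squares problem. As a minimal illustration (our own simplification, not the scheme of [1]): assuming a hypothetical linear forward model A mapping a discretized η to model prices, the Tychonoff-regularized problem has a closed-form solution via the normal equations.

```python
import numpy as np

def tikhonov_calibrate(A, p_obs, eta_prior, alpha):
    """Minimize ||A @ eta - p_obs||^2 + alpha * ||eta - eta_prior||^2.

    Closed-form solution of the normal equations
    (A^T A + alpha I) eta = A^T p_obs + alpha * eta_prior.
    """
    n = A.shape[1]
    lhs = A.T @ A + alpha * np.eye(n)
    rhs = A.T @ p_obs + alpha * eta_prior
    return np.linalg.solve(lhs, rhs)

# toy data: 5 observed prices, 3 parameters (names are illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
eta_true = np.array([1.0, -2.0, 0.5])
p_obs = A @ eta_true
eta_prior = np.zeros(3)

eta_small = tikhonov_calibrate(A, p_obs, eta_prior, 1e-10)  # ~ pure data fit
eta_large = tikhonov_calibrate(A, p_obs, eta_prior, 1e10)   # ~ pulled to the prior
```

The regularization weight alpha interpolates between fitting the observations and staying close to the prior, which is exactly the role of J_R(η) above.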
In this paper, the optimal control problems governed by variational inequalities of obstacle type which we investigate can be set in a wider class of problems, formally described as follows (see [9]):

min { J(y, χ) : y = T(χ), χ ∈ U_ad ⊂ U },

where T is the operator which associates y to χ, y being a solution to

⟨L(y, χ), y − v⟩ ≥ 0, for all v in K(y, χ),

where K is a set-valued map from Y × U to 2^Y, U is a Hilbert space, U_ad is the set of admissible controls and L is a differential operator from Y to the dual Y′. Letting h be a map from R × R to R, the variational inequality relating the control χ to the state y can then be written accordingly. In this formulation, the obstacle is considered to be the control variable.
For the numerical solution of optimal control problems, one uses direct and indirect methods. The direct method consists in discretizing the cost function and the state equation, thus reducing the problem to a linear or nonlinear optimization problem that can be solved by optimization algorithms. The indirect method consists in discretizing and numerically solving the optimality system given by the state, adjoint and projection equations. The optimal control of variational inequalities of obstacle type is still a very active field of research, especially as concerns its numerical treatment. In the mathematical literature there are few works on the numerical resolution of this kind of problem (where the control and the obstacle coincide): Ghanem et al. [18] and Kunisch et al. [23] are, to our knowledge, the only ones who have solved it numerically, and they considered only the unilateral obstacle optimal control problem. Using the indirect method, Ghanem et al. [18] numerically solved an unconstrained unilateral optimal control problem of obstacle type, in place of the one given in [8], where y = T(ϕ) belonging to H^1_0(Ω) is the unique solution of a unilateral obstacle problem. In [9], Bergounioux et al. considered a bilateral obstacle optimal control problem with a given term Φ(ϕ, ψ), where ν is a positive constant and z belongs to L^2(Ω) as a target profile, and y = T(ϕ) solves the bilateral obstacle problem posed on the convex set K(ϕ, ψ). To obtain strong convergence in H^1(Ω) of the minimizing sequences of the cost function given by (7), an H^2 a priori estimate is needed; for that, two equivalent techniques are available. The first is to choose Φ(ϕ, ψ) so as to bound the minimizing sequences in H^2(Ω), which then yields strong convergence of the minimizing sequences in H^1(Ω) (see Bergounioux et al. [9] and Ghanem [17]).
The second technique consists in taking the other form of Φ(ϕ, ψ), but additional constraints must then be added: for example, one can require ‖ϕ‖_{H^2(Ω)} ≤ ρ_1 and ‖ψ‖_{H^2(Ω)} ≤ ρ_1, which means that both ϕ and ψ lie in the ball B_{H^2}(0, ρ_1) of center 0 and radius ρ_1 large enough (see Bergounioux et al. [8]). According to the result given in [18], in spite of the elimination of the inequality constraint (6), one still only obtains a local convergence property, implied by the constraint ‖ϕ_n‖_{H^2(Ω)} ≤ R, where R is a positive radius large enough and ϕ_n is an iterative solution (see [18]); hence we are again confronted with the inequality constraint (6).
Thus, in our opinion, when studying the optimal control problem (7) mathematically and numerically, it is not necessary to suppress the constraint (6), because it reappears anyway when establishing the local convergence of the algorithm used for the numerical solution of problem (5) (see [18]).
Following the previous ideas, we may apply a smoothed penalization approach to our problem. More precisely, the idea is to approximate the obstacle problem by introducing an approximating parameter δ, where the approximating method is based on the penalization method and consists in replacing the obstacle problem (4) by a family of semilinear equations.
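To fix ideas, here is a minimal scalar sketch of this penalization (our own illustration: we use a piecewise-linear β_δ(t) = min(t, 0)/δ and a scalar coefficient a in place of the paper's C^1 penalty and differential operator): as δ → 0, the solution of the penalized equation a·y + β_δ(y − ϕ) − β_δ(ψ − y) = f is forced into the interval [ϕ, ψ].

```python
import numpy as np

def beta_delta(t, delta):
    # piecewise-linear penalty; the paper's beta_delta is a C^1 smoothing of this
    return np.minimum(t, 0.0) / delta

def solve_scalar(f, phi, psi, delta, a=1.0):
    """Solve a*y + beta_delta(y - phi) - beta_delta(psi - y) = f by bisection.

    The residual r(y) is strictly increasing in y (a > 0, both penalty terms
    nondecreasing), so bisection on a bracketing interval is safe.
    """
    def r(y):
        return a * y + beta_delta(y - phi, delta) - beta_delta(psi - y, delta) - f
    lo = min(phi, f / a) - 1.0   # here r(lo) < 0: lower penalty dominates
    hi = max(psi, f / a) + 1.0   # here r(hi) > 0: upper penalty dominates
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if r(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# with phi = 0, psi = 1 and delta small, the solution is the clipped value of f/a
y_inside = solve_scalar(0.5, 0.0, 1.0, 1e-8)   # stays at 0.5, no obstacle active
y_upper = solve_scalar(2.0, 0.0, 1.0, 1e-8)    # pushed down to psi = 1
y_lower = solve_scalar(-1.0, 0.0, 1.0, 1e-8)   # pushed up to phi = 0
```

This is the scalar analogue of replacing the obstacle problem (4) by a family of semilinear equations: the constraint is never imposed explicitly, only penalized.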
Considering the problem according to the second technique, the present work can be seen as a continuation of the previous work of Ghanem et al. [18], and the main contribution of this paper is twofold. First, using the same techniques as in [9], we obtain the optimality system (8), where µ^δ_1 = β_δ(y^δ − ϕ^δ), µ^δ_2 = β_δ(ψ^δ − y^δ), and A* is the adjoint operator associated with the operator A.
According to the remarks above and the results of [18], we must reformulate the optimality system (8) (see Theorem 2.1 below), because, as it stands, the system (8) is difficult to solve with the techniques of [18].
Secondly, we use the indirect (optimize-then-discretize) approach for the numerical resolution of the reformulated bilateral obstacle optimal control problem (given below by (14)); this is the originality of this work. We first discretize the optimality system (14) using finite difference schemes, and then propose a simple iterative algorithm based on the Gauss-Seidel method, combining a damped Newton-Raphson method with a direct method along the same lines as [18]; the proposed algorithm is designed to have a fixed point. We then give estimates of the distance between solutions of the original problem and of the penalized problem, from which we derive the convergence rate and justify the independence of the rate of convergence with respect to the penalization parameter δ.
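Structurally, the iterative algorithm just described is a fixed-point iteration x_{n+1} = F(x_n) on the tuple of unknowns. A generic sketch of such an iteration with a relative stopping test follows; it is illustrated on a simple scalar contraction, not on the actual optimality system.

```python
import numpy as np

def fixed_point(F, x0, tol=1e-10, max_iter=500):
    """Iterate x_{n+1} = F(x_n) until the increment is small.

    Returns (x, n, converged). If F is a contraction on a ball containing
    the iterates, Banach's theorem guarantees convergence to the unique
    fixed point in that ball.
    """
    x = np.asarray(x0, dtype=float)
    for n in range(1, max_iter + 1):
        x_new = F(x)
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new, n, True
        x = x_new
    return x, max_iter, False

# demo on a known contraction: F(x) = 0.5*cos(x) has Lipschitz constant <= 0.5 < 1
x_star, iters, ok = fixed_point(lambda x: 0.5 * np.cos(x), np.array([1.0]))
```

In the paper, one sweep of F bundles the state, adjoint and control updates (Gauss-Seidel style), so boundedness of the iterates and a Lipschitz estimate for F are exactly what the convergence analysis below provides.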
By contrast, Achdou et al. [1], in order to find optimality conditions for problem (3), replace the state equations (1) and (2) by a penalized problem using a maximum function, whose penalty parameter ε goes to 0. In doing so, they obtain a new least squares problem, for which necessary optimality conditions are easily found.
Thus, to compute the price of the American option, they use a finite element method to obtain a semi-discrete problem, which they must then regularize in order to apply the active set strategy algorithms they consider.
The rest of the paper is organized as follows: in section 2 we give precise assumptions and some well-known results; in section 3 we introduce the iterative algorithm and give convergence results for the optimality system; section 4 is devoted to numerical examples illustrating the theoretical findings; and in section 5 we present some remarks and a conclusion.

Preliminaries and known results.
Let Ω be an open bounded set in R^n (n ≤ 3) with smooth boundary ∂Ω. We adopt the standard notation H^m(Ω) for the Sobolev space of order m ≥ 0 on Ω with norm ‖·‖_{H^m(Ω)}, and H^m_0(Ω) is defined as the closure of D(Ω) in H^m(Ω), where D(Ω) is the space of C^∞(Ω) functions with compact support in Ω (see for example [2]). We denote by ‖·‖_V the norm of a Banach space V, and by L^p(Ω) the space of p-summable functions. Further, in the sequel, C or c denotes a generic positive constant, independent of any approximation parameter, which may differ at each occurrence.
In the same way, ⟨·, ·⟩ denotes the duality product between H^{−m}(Ω) and H^m_0(Ω), and (·, ·) the L^2(Ω) inner product. It is well known that H^m(Ω) ↪ L^2(Ω) ↪ H^{−m}(Ω), and that H^{m+1}(Ω) ↪ H^m(Ω) with compact injection. We consider the bilinear form σ defined on H^1_0(Ω) × H^1_0(Ω) with coefficients a_ij and a_0; we suppose that a_ij belongs to C^{0,1}(Ω̄) (the space of Lipschitz continuous functions on Ω̄, the closure of Ω) and that a_0 is non-negative, to ensure good regularity of the solution (see for example [26]), and we assume that the usual conditions on σ are fulfilled. For any ϕ and ψ in H^1_0(Ω), we define the constraint set and consider the variational inequality given by (10) and (11), where f, belonging to L^2(Ω), is a source term. From now on, we define the (control-to-state) operator T from U_ad × U_ad to U, such that y = T(ϕ, ψ) is the unique solution of the obstacle problem given by (10) and (11) (see [25]). The boundedness assumption on U_ad is crucial: it gives a priori H^2 estimates on the control functions and leads to the existence of a solution. Now, instead of the problem given in [9], and according to the results given in [18], we consider the optimal control problem (P), where ν is a strictly positive given constant and z is in L^2(Ω). We seek the obstacles (optimal controls) φ̄, ψ̄ in U_ad × U_ad such that the corresponding state is close to the target profile z.
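For the discretization used later, the operator associated with σ reduces, in the simplest constant-coefficient one-dimensional case (a stand-in of our own for the general coefficients a_ij, a_0), to the classical tridiagonal finite difference matrix, which is symmetric positive definite, mirroring the coercivity assumption on σ.

```python
import numpy as np

def assemble_A(n, a=1.0, a0=0.0):
    """Tridiagonal FD matrix for -a u'' + a0 u on (0,1), homogeneous
    Dirichlet boundary conditions, n interior nodes."""
    h = 1.0 / (n + 1)
    main = np.full(n, 2.0 * a / h**2 + a0)
    off = np.full(n - 1, -a / h**2)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# sanity check against the exact solution of -u'' = pi^2 sin(pi x), u = sin(pi x)
n = 99
x = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior grid points
A = assemble_A(n)
u = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))  # O(h^2) discretization error
```

A dense matrix is used only for brevity; for realistic grid sizes one would store A in sparse tridiagonal form.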
To derive necessary conditions for an optimal control, we would like to differentiate the map (ϕ, ψ) → T(ϕ, ψ). Since this map is not directly differentiable (see [28]), the idea is to approximate T(ϕ, ψ) by a family of maps T_δ(ϕ, ψ), replacing the obstacle problem given by (10) and (11) by the smooth semilinear equation (12) (see [12] and [27]); the approximating map (ϕ, ψ) → T_δ(ϕ, ψ) is then differentiable, and approximate necessary conditions can be derived. Here β is negative, belongs to C^1(R), δ is strictly positive and goes to 0, and β_δ is defined from β and the parameter δ accordingly. Since β_δ is nondecreasing, it is well known (see [19]) that the boundary value problem (12) admits a unique solution y_δ in H^2(Ω) ∩ H^1_0(Ω) for fixed ϕ and ψ in H^2(Ω) ∩ H^1_0(Ω) and f in L^2(Ω). In the sequel we set y_δ = T_δ(ϕ, ψ), and for any δ > 0 we define the approximate optimal control problem (P_δ). As indicated in the introduction, our goal is to solve the optimal control problem (P_δ) numerically using the same algorithm as in [18]. Considering the system (8), one can see that the algorithm of [18] cannot be applied directly, which is why we reformulate the optimality system (8) so as to be able to use it.
Then, using the same techniques as in [5] and [9], the problem (P_δ) has at least one solution, denoted (y_δ, p_δ, ϕ_δ, ψ_δ), and the optimality system can be characterized by the following theorem.

Theorem 2.1. Let (ϕ_δ, ψ_δ) in U_ad × U_ad be an optimal pair for (P_δ), and y_δ = T_δ(ϕ_δ, ψ_δ). Then there exist p_δ in U, µ^δ_1 = β_δ(y_δ − ϕ_δ) p_δ and µ^δ_2 = β_δ(ψ_δ − y_δ) p_δ in L^2(Ω) such that the optimality system (14) is satisfied.

Proof. The proof is inspired by that of Theorem 3.4 in [9]. We now give some important results relevant for the sequel of this paper.
Proof. By the definition of β_δ, the Cauchy-Schwarz inequality, the fact that p^δ_1 belongs to B_{H^1}(0, ρ_3), and the Mean Value Theorem applied on the relevant interval (for i = 1, 2) together with the properties of β, we obtain the stated estimate.

Proof. It is easy to see that the claimed identity holds.

3. Convergence study of an iterative algorithm. In this section, we give an iterative algorithm to solve problem (P_δ). Roughly speaking, we propose an implicit algorithm to solve the necessary optimality system (14). The proposed algorithm is based on the Gauss-Seidel method and is given below.
In the sequel, we show that the iterates produced by Algorithm 1 are bounded; this is needed to prove the local convergence of the algorithm.

Corollary 1. Assume ϕ^δ_{n−1} and ψ^δ_{n−1} belong to B_{H^2}(0, ρ_1) ∩ U and satisfy step 3 of Algorithm 1, with δ ≤ C. Then y^δ_n belongs to B_{H^1}(0, ρ_2) ∩ U.

Proof. From the equation in step 3 of Algorithm 1 we obtain the inequality (15), and we conclude using it.

Corollary 2. Under the hypotheses of Corollary 1, with (y^δ_n, p^δ_n) in (B_{H^1}(0, ρ_2) ∩ U) × U satisfying steps 3 and 4 of Algorithm 1, we get ‖p^δ_n‖_{H^1(Ω)} ≤ ρ̃_3; that is, p^δ_n belongs to B_{H^1}(0, ρ̃_3) ∩ U, where ρ̃_3 = C ρ_2.

Proof. From the equation in step 4 of Algorithm 1 and the coercivity condition on σ given by H2, we obtain ‖p^δ_n‖_{H^1(Ω)} ≤ C ‖y^δ_n‖_{H^1(Ω)}; we conclude by Corollary 1. It follows that ϕ^δ_n and ψ^δ_n belong to B_{H^2}(0, ρ̃_4) ∩ U, with ρ̃_4 = C ρ̃_3/(δν) + C ρ_1, where ρ̃_3 is given by Corollary 2.
Proof. Using the equation in step 5 of Algorithm 1, where λ^δ_n = ν∆ϕ^δ_{n−1} + β_δ(y^δ_n − ϕ^δ_{n−1}) p^δ_n and ϕ^δ_n = 0 on ∂Ω, we deduce the corresponding H^2 bound. Next, from the equation in step 6 of Algorithm 1 and the properties of the Laplacian ∆, we obtain ‖ψ^δ_n‖_{H^2(Ω)} ≤ C ‖p^δ_n‖_{H^1(Ω)}/(δν) + C ‖ϕ^δ_{n−1}‖_{H^2(Ω)}. Then, using the equation in step 7 of Algorithm 1 and the properties of ∆ again, we get the analogous bound for ϕ^δ_n. Finally, we conclude using Corollary 2.
Hence, from the results of Corollaries 1, 2 and 3, and from the steps of Algorithm 1, we define the functions F_i, i = 1, 2, 3, 4, as follows.
• From step 3: we define F_1, which depends on ϕ^δ_{n−1} and ψ^δ_{n−1} and yields y^δ_n as the solution of the state equation

A y^δ_n + β_δ(y^δ_n − ϕ^δ_{n−1}) − β_δ(ψ^δ_{n−1} − y^δ_n) = f in Ω, y^δ_n = 0 on ∂Ω.
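As a sketch of how the state equation in step 3 can be solved in practice, in a 1D finite difference setting (with an assumed quadratic C^1 penalty β_δ(t) = −((−t)_+)²/δ; the paper's β_δ may differ), the semilinear equation can be treated by a damped Newton iteration whose Jacobian A + diag(β′_δ(y − ϕ) + β′_δ(ψ − y)) stays symmetric positive definite.

```python
import numpy as np

def beta_d(t, d):
    return -np.maximum(-t, 0.0) ** 2 / d      # assumed C^1 penalty, zero for t >= 0

def dbeta_d(t, d):
    return 2.0 * np.maximum(-t, 0.0) / d      # its (nonnegative) derivative

def solve_state(A, f, phi, psi, d=1e-6, omega=1.0, tol=1e-10, max_iter=200):
    """Damped Newton for A y + beta_d(y - phi) - beta_d(psi - y) = f."""
    y = np.linalg.solve(A, f)                 # start from the unconstrained solution
    for _ in range(max_iter):
        R = A @ y + beta_d(y - phi, d) - beta_d(psi - y, d) - f
        if np.linalg.norm(R) < tol * (1.0 + np.linalg.norm(f)):
            break
        # both penalty terms contribute +beta' to the Jacobian diagonal
        J = A + np.diag(dbeta_d(y - phi, d) + dbeta_d(psi - y, d))
        y = y - omega * np.linalg.solve(J, R)
    return y

# 1D demo: -y'' with Dirichlet BCs, obstacles phi = -1, psi = 0.5
n = 49
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = 10.0 * np.pi**2 * np.sin(np.pi * x)       # unconstrained solution ~ 10 sin(pi x)
phi, psi = -np.ones(n), 0.5 * np.ones(n)
y = solve_state(A, f, phi, psi)
```

With the quadratic penalty, the computed state overshoots the upper obstacle only by O(√δ), consistent with the penalization error estimates discussed later; ω plays the role of the relaxation parameter of the damped Newton step.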
• We define F_3 such that ψ^δ_n = F_3(y^δ_n, p^δ_n); F_3 depends on p^δ_n and y^δ_n, and, ψ^δ_n being given, we define λ^δ_n by the equation

−λ^δ_n = ν∆ψ^δ_n + β_δ(ψ^δ_n − y^δ_n) p^δ_n in Ω, ψ^δ_n = 0 on ∂Ω.
Remark 1. We note that the equation given by (22) is only used to solve the equation given by (24).
Then, taking into account the above definitions of F_i, i = 1, 2, 3, 4, we define the map F accordingly. Algorithm 1 is an iterative approximation method to compute the fixed points of the function F. The following theorem shows that the mapping F is locally Lipschitz.
To prove the previous theorem, we need the following Lemmas.
Lemma 3.4. If the stated condition on δ is fulfilled, then the function F_3 is locally Lipschitz.

Proof. From equation (22), by the properties of the Laplacian ∆ and Lemma 2.4, we obtain the estimate; for this estimate to be meaningful, the stated smallness condition must hold, and the claim follows.

Lemma 3.5. If the stated condition is fulfilled, then the function F_4 given by (23) is locally Lipschitz.

Proof. From equation (22), by the properties of the Laplacian ∆ and Lemma 2.5, we get the estimate; for it to be meaningful, the stated condition must hold. The claim then follows, where p^{δ,1}_n = F_2(ϕ^{δ,1}_{n−1}, y^{δ,1}_n, ψ^{δ,1}_{n−1}), p^{δ,2}_n = F_2(ϕ^{δ,1}_{n−1}, y^{δ,2}_n, ψ^{δ,1}_{n−1}), y^{δ,1}_n = F_1(ϕ^{δ,1}_{n−1}, ψ^{δ,1}_{n−1}) and y^{δ,2}_n = F_1(ϕ^{δ,2}_{n−1}, ψ^{δ,2}_{n−1}), using the preceding Lemmas.

Remark 2. We have thus proven that the function F is locally Lipschitz; it is, however, very difficult to obtain a sharp estimate of its Lipschitz constant l. But if we take ρ̃_4 ≤ ρ_1, then by a strong version of Schauder's Theorem F has a fixed point [29].
It remains to prove that the fixed point of F (whenever it exists) is a stationary point, i.e. a solution of the optimality system.
In the sequel, we illustrate how the combined direct and damped Newton method can be used effectively to solve the optimality system (14). The main idea is to linearize the equations (19), (21), (22) and (24) for their numerical solution. We use the iterative relaxed Newton method (see [18]) on each of the mappings F_1, F_3 and F_4, and prove the convergence of the proposed algorithm. Thus, for the solution of the system (14), we propose the following iterative algorithm.

15: Else n ← n + 1, go to Begin.
16: End if
17: End

3.1. Convergence results. In this subsection, we give conditions on δ, ω_y, ω_ϕ and ω_ψ that ensure the convergence of Algorithm 2, and we derive error estimates for the distance between the exact and the penalized solutions; from these we derive the convergence rate and justify the dependence of the rate of convergence on the penalization parameter δ. In the sequel, we denote by ȳ_δ, p̄_δ, φ̄_δ and ψ̄_δ the solutions of equations (30), (31), (32) and (33) respectively, and let y^δ_n, λ^δ_n, p^δ_n, ψ^δ_n and ϕ^δ_n be given by steps 5, 7, 6, 9 and 11 of Algorithm 2 respectively.
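The contraction behaviour discussed here can also be checked empirically: given the iterates of an algorithm, the ratios of successive increments estimate the contraction factor k. A small helper follows, illustrated on an artificial linear iteration of known factor 0.5 standing in for the real algorithm.

```python
import numpy as np

def empirical_rate(iterates):
    """Estimate the contraction factor from successive increments:
    k_n = ||x_{n+1} - x_n|| / ||x_n - x_{n-1}||."""
    d = [np.linalg.norm(b - a) for a, b in zip(iterates, iterates[1:])]
    return [d[i + 1] / d[i] for i in range(len(d) - 1) if d[i] > 0]

# demo on a known contraction: x_{n+1} = 0.5 x_n + 0.25, fixed point 0.5, |F'| = 0.5
xs = [np.array([1.0])]
for _ in range(20):
    xs.append(0.5 * xs[-1] + 0.25)
ks = empirical_rate(xs)
```

If the estimated ratios settle below 1, the iteration behaves as a contraction in practice, which is how the numerical experiments below are read when a sharp theoretical bound on k is out of reach.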
Remark 3. As seen above, it is very difficult to give a sharp estimate of the constant k and to prove that it is less than 1, which would give the convergence of the latter algorithm. However, we believe that with a large enough radius R and suitable choices of δ, ω_y, ω_ϕ and ω_ψ, the numerical calculations show that the constant k is indeed less than 1.
In the following, the numerical results according to the variation of ω are summarized in Table 1, and in Table 6 according to the variation of N.
The numerical results according to the variation of N are summarised in Table 2, with the associated results shown in Figures 3 and 4; further results are given in Table 3 and displayed in Figures 5 and 6, and in Table 8 according to the variation of δ.
The numerical results according to the variation of ν are summarised in Table 4, with the associated results shown in Figures 7 and 8. The curves on the left of Figures 1, 3, 5 and 7 show the graphical variations, in log-log scale, against the iteration number n, and those on the right show the state function; the curves in Figures 2, 4, 6 and 8 correspond to the control (obstacle) functions.

Conclusion and remarks.
We note that the techniques used in the paper of Ghanem et al. [18] can be adapted, with suitable modifications, to the numerical resolution of the problem considered in this work. The numerical results obtained are acceptable, although the convergence of the algorithm is not fast, and they also support the predictions made in Remarks 3 and 4 about the Lipschitz constants.
The proposed algorithm converges locally, i.e. the optimal solutions y_δ, ϕ_δ, ψ_δ, p_δ given by the iterative algorithm are contained in balls of center 0 with various radii. We tried to give sharp estimates of these radii but, owing to the presence of several parameters (such as δ and ω) and unknown constants (for example Sobolev injection constants) in the estimates, we did not succeed in comparing these balls with each other. Finally, as future work, one can either improve the algorithm by optimizing the choice of the parameter ω (for example by a line search method) or apply other algorithms to the numerical resolution of the above problem, for example semi-smooth Newton methods [24] or primal-dual strategy methods [22].