Shape optimization approach for solving the Bernoulli problem by tracking the Neumann data: A Lagrangian formulation

The exterior Bernoulli free boundary problem is considered and reformulated into a shape optimization setting in which the Neumann data is tracked. The shape differentiability of the cost functional associated with the formulation is studied, and the expression for its shape derivative is established through a Lagrangian formulation coupled with the velocity method. It is also illustrated how the computed shape derivative can be combined with the modified $H^1$ gradient method to obtain an efficient algorithm for the numerical solution of the shape optimization problem.

1. Introduction. In this paper, we are interested in the exterior Bernoulli free boundary problem. In particular, we deal with the two-dimensional case of the problem, which is formulated as follows: given a bounded and connected domain $\omega \subset \mathbb{R}^2$ with a fixed boundary $\Gamma := \partial\omega$ and a constant $\lambda < 0$, one needs to find a bounded connected domain $B \subset U \subset \mathbb{R}^2$ with a free boundary $\Sigma := \partial B$, containing the closure of $\omega$, and an associated state function $u : \Omega \to \mathbb{R}$, where $\Omega = B \setminus \bar{\omega}$, such that the overdetermined boundary value problem
$$-\Delta u = 0 \ \text{in } \Omega, \qquad u = 1 \ \text{on } \Gamma, \qquad u = 0 \ \text{and} \ \partial_n u = \lambda \ \text{on } \Sigma, \tag{1}$$
is satisfied. Here, $n$ denotes the outward unit normal vector to the free boundary $\Sigma$ and $\partial_n u$ is the normal derivative of $u$. In (1), the boundary $\partial\Omega = \Gamma \cup \Sigma$ of the doubly connected bounded domain $\Omega$ has the following regularity: $\Gamma$ is Lipschitz, the free component $\Sigma$ is a $C^{2,1}$ boundary, and we assume that $\operatorname{dist}(\Gamma, \Sigma) > 0$. The existence of a solution to the free boundary problem (1) is discussed in [2]. On the other hand, it is known that the related interior free boundary problem admits an elliptic solution for any $\lambda < 0$ (see [26]). This type of free boundary problem models various physical systems such as electrochemical machining, potential flow in fluid mechanics, and tumor growth (see, e.g., [9,26,28]).
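For the special case of concentric circular boundaries, problem (1) admits a closed-form radial solution, which is convenient for validating numerical schemes. The following sketch is illustrative only (the data $a = 0.5$, $\lambda = -1$ and the bisection bracket are chosen to match Example 1 in Section 4): with $\Gamma = \{|x| = a\}$ and $\Sigma = \{|x| = R\}$, the function $u(x) = \log(|x|/R)/\log(a/R)$ is harmonic and satisfies $u = 1$ on $\Gamma$, $u = 0$ on $\Sigma$, and the remaining Neumann condition $\partial_n u = \lambda$ on $\Sigma$ reduces to the scalar equation $R\log(R/a) = -1/\lambda$.

```python
import math

def free_radius(a, lam, hi=10.0, tol=1e-12):
    """Bisection for the root of f(R) = R*log(R/a) + 1/lam on (a, hi).
    For lam < 0 the function f is increasing on (a, hi) and changes sign."""
    f = lambda R: R * math.log(R / a) + 1.0 / lam
    lo = a * (1.0 + 1e-9)            # f(lo) < 0 for lam < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Data of Example 1: fixed radius a = 0.5 and lambda = -1.
R_star = free_radius(a=0.5, lam=-1.0)    # approximately 1.1729
```

Such an exact radius gives a reference value against which computed free boundaries can be compared.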
Our goal is to examine the reformulation of the ill-posed system (1) in the context of shape optimization. In this way, we can deal with the issue of having an overdetermined boundary condition on $\Sigma$. We carry out our investigation by considering the following minimization problem:
$$\min_{\Omega} J(\Omega) := \frac{1}{2}\int_{\Sigma} (\partial_n u - \lambda)^2\, ds, \tag{2}$$
where $u = u(\Omega)$ is a solution to the pure Dirichlet problem
$$-\Delta u = 0 \ \text{in } \Omega, \qquad u = 1 \ \text{on } \Gamma, \qquad u = 0 \ \text{on } \Sigma. \tag{3}$$
The variational least-squares cost function in (2) is known as the $L^2$ least-squares tracking of the Neumann data. For solving Bernoulli problems, this tracking functional is preferable to tracking the Dirichlet data for the following reason: its second-order shape derivative, or shape Hessian, is strictly coercive in its corresponding energy space $H^1(\Sigma)$ (cf. [22]), while the shape Hessian of the Dirichlet-data tracking functional is not, its positivity holding only on the weaker space $L^2(\Sigma)$ (cf. [23]). We note that the strict coercivity of the shape Hessian of a cost functional in its corresponding energy space implies the stability of a local minimum (see, e.g., [24]). Moreover, having this property implies that the shape optimization problem is well-posed. On the contrary, if positivity only holds on a weaker space, the problem is said to be algebraically ill-posed. In the case of $L^2$ least-squares tracking of the Dirichlet data, for instance, the algebraic ill-posedness of the problem implies that tracking the Dirichlet data in the $L^2$-norm is not sufficient and that, as strongly argued by the authors in [24], the data have to be tracked relative to $H^1$. For more details on this topic, we refer the reader to the aforementioned works. As in [33], we want to characterize the shape derivative of the cost functional $J$ over $\Sigma$ along some perturbation field $\mathbf{V}$. The derivation of the said expression requires the evaluation of the limit of $[J(\Sigma_t) - J(\Sigma)]/t$ as $t$ goes to $0$, where $\Sigma_t$ is the result of deforming the free boundary $\Sigma$ with a non-autonomous perturbation field $\mathbf{V}$. Here, instead of the rearrangement method introduced in [41], we compute the desired expression for the shape derivative of $J$ using a minimax formulation in the spirit of [18], and then employ a Lagrangian method for the numerical realization of the problem, differing from the approach used in [33].
The idea behind the minimax formulation is to rewrite the objective function as the min-max of an appropriate Lagrangian which, as in most optimal control problems, is expressed as the sum of a utility function and the equality constraints. In doing so, the directional differentiability of the cost functional is transferred to the differentiability of the Lagrangian with respect to a particular parameter. As a result, one needs a theorem for differentiating the minimax, or the saddle point, of the Lagrangian with respect to a parameter. Fortunately, a tool to carry out this task is already available in the literature (see [16]). However, the application of this key result due to Correa and Seeger is not completely straightforward. The difficulty arises primarily from the time dependence of the underlying function spaces appearing in the minimax formulation [19]. Nevertheless, we can get around this difficulty by employing either one of the two strategies offered in [17]; that is, we can apply either the function space parametrization technique or the function space embedding technique to remove the time parameter $t$ from the function spaces. We emphasize that Lagrangian methods have the advantage of providing the shape derivative of cost functionals without the need to compute the expressions for the shape and material derivatives of the states. For some recent investigations related to our work, see [12,29,30,43,45,47].
The rest of the paper is structured as follows. In Section 2, the essentials of our present work are provided. The shape differentiability of the cost functional $J$, as well as the derivation of the expression for its shape derivative, is presented in Section 3. In Section 4, we illustrate how the computed expression for the shape derivative can be combined with the modified $H^1$ gradient method to obtain an efficient algorithm for the solution of the shape optimization problem. In this regard, two concrete examples with different fixed boundaries are provided. The application of the aforementioned numerical method to solve the optimization problem (2)-(3) is also novel to our work. Finally, in Section 5, a concluding remark regarding the present study is given.

2. Preliminaries. In this section, we give the requisites of our study. First, we give a brief discussion of the velocity method.
Let $\mathbf{V}$ be an element of $C([0, t_V); \mathcal{D}^k(\mathbb{R}^2, \mathbb{R}^2))$, for some integer $k \geq 2$ and a small real number $t_V > 0$, where $\mathcal{D}^k(\mathbb{R}^2, \mathbb{R}^2)$ denotes the space of $k$-times continuously differentiable functions with compact support. The evolution of the domain $\Omega$ is described using the non-autonomous velocity field $\mathbf{V}$ through the flow $T_t(\mathbf{V})(X) := x(t)$, where $x(\cdot)$ solves $\frac{d}{dt}x(t) = \mathbf{V}(t, x(t))$ on $[0, t_V)$, with the initial value $x(0) = X$ given. We denote the "transformed domain" $T_t(\mathbf{V})(\Omega)$ at $t \geq 0$ by $\Omega_t(\mathbf{V})$, or simply $\Omega_t =: T_t(\Omega)$. Given this transformation, we note that any function $\varphi^t : \Omega_t \to \mathbb{R}$ can be referred back to the reference domain via the composition $\varphi_t := \varphi^t \circ T_t : \Omega \to \mathbb{R}$. In this work, we consider annular domains $\Omega_t$ with boundary $\partial\Omega_t$, which is the union of two disjoint sets $\Gamma_t$ and $\Sigma_t$, referred to as the fixed and free boundaries, respectively. For $t \in [0, t_V]$, $T_t$ is invertible and $T_t, T_t^{-1} \in \mathcal{D}^1(\mathbb{R}^2, \mathbb{R}^2)$ (cf. [11, Lem. 11]). In addition, the Jacobian $I_t$ is strictly positive, i.e., $I_t = \det DT_t(X) > 0$, where $DT_t(X)$ is the Jacobian matrix of the transformation $T_t = T_t(\mathbf{V})$ associated with the velocity field $\mathbf{V}$. In this paper, the notations $(DT_t)^{-1}$ and $(DT_t)^{-T}$ refer to the inverse and inverse transpose of the Jacobian matrix $DT_t$, respectively. Also, for simplicity, we denote $A_t = I_t (DT_t)^{-1}(DT_t)^{-T}$, and by $w_t = I_t |(DT_t)^{-T} n|$ the tangential Jacobian of $T_t$ on the boundary $\partial\Omega$.
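The quantities $I_t$, $A_t$, and $w_t$ can be sanity-checked numerically. The sketch below is illustrative and not from the paper: it uses the perturbation of the identity $T_t(X) = X + t\mathbf{V}(X)$ as a first-order surrogate for the flow of $\mathbf{V}$, and approximates $DT_t$ by central differences.

```python
import numpy as np

def transform_quantities(V, X, t, n, h=1e-6):
    """Approximate DT_t at the point X by central differences, then form
    I_t = det DT_t,  A_t = I_t (DT_t)^{-1} (DT_t)^{-T},
    w_t = I_t |(DT_t)^{-T} n|  (the tangential Jacobian)."""
    d = len(X)
    T = lambda x: x + t * V(x)          # perturbation of the identity
    DT = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        DT[:, j] = (T(X + e) - T(X - e)) / (2 * h)
    I_t = np.linalg.det(DT)
    DT_inv = np.linalg.inv(DT)
    A_t = I_t * DT_inv @ DT_inv.T
    w_t = I_t * np.linalg.norm(DT_inv.T @ n)
    return I_t, A_t, w_t

# Radial field V(x) = x: T_t is the dilation (1+t)I, so one expects
# I_t = (1+t)^2, A_t = identity, and w_t = 1+t for a unit normal n.
I_t, A_t, w_t = transform_quantities(lambda x: x, np.array([1.0, 0.5]),
                                     0.5, np.array([1.0, 0.0]))
```

For the dilation the three quantities are known exactly, which makes the check self-verifying.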
Before we end this section, we state some results that are essential to our analysis.
Proposition 1. For a function $\varphi \in W^{1,1}_{\mathrm{loc}}(\mathbb{R}^2)$ and $\mathbf{V} \in \mathcal{V}$, we have the following formulas:
$$\int_{\Omega_t} \varphi\, dx = \int_{\Omega} (\varphi \circ T_t)\, I_t\, dx, \qquad \int_{\Sigma_t} \varphi\, ds = \int_{\Sigma} (\varphi \circ T_t)\, w_t\, ds.$$
The above results can be found in [17,50], and are given as properties of the transformation $T_t$ in [11,33,42].
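The domain transformation identity $\int_{\Omega_t}\varphi\, dx = \int_\Omega (\varphi\circ T_t)\, I_t\, dx$ can be verified numerically with the midpoint rule. The map and integrand below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Check int_{T(Omega)} phi dx = int_Omega (phi o T) I_t dx for the map
# T(x, y) = (x + 0.2*x**2, y) on Omega = (0,1)^2, whose Jacobian determinant
# is I_t(x, y) = 1 + 0.4*x, and for phi(u, v) = u.  Both sides equal 0.72.

n = 2000
s = (np.arange(n) + 0.5) / n          # midpoints of (0, 1)

# Left-hand side: T(Omega) = (0, 1.2) x (0, 1); phi(u, v) = u is separable,
# so the v-integral contributes a factor of 1.
u = 1.2 * s
lhs = np.mean(u) * 1.2                # int_0^{1.2} u du = 0.72

# Right-hand side: pull back to the reference square Omega.
x = s
rhs = np.mean((x + 0.2 * x**2) * (1 + 0.4 * x))
```

Agreement of the two quadratures up to the midpoint-rule error illustrates the change-of-variables formula.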
Here, $\kappa$ denotes the mean curvature of $\Gamma$, and the tangential gradient $\nabla_\Gamma$ is given by $\nabla_\Gamma f = \nabla \tilde{f}\,|_\Gamma - (\partial_n \tilde{f})\, n$, for any smooth extension $\tilde{f}$ of $f$ to a neighborhood of $\Gamma$. Finally, we mention the following lemma, which will also be useful in our investigation.

Lemma 2.3. Let $\mathbf{V} \in \mathcal{V}$ and $u_t = u^t \circ T_t$. Then, the following results hold:
(i) for any $f \in L^p(U)$, $p \geq 2$, we have $f \circ T_t \to f$ in $L^p(U)$ as $t \searrow 0$;
(ii) for any $f \in W^{1,p}(U)$, $p \geq 2$, we have $f \circ T_t \to f$ in $W^{1,p}(U)$ as $t \searrow 0$;
(iii) let $p \geq 1$ and let $t \mapsto u^t : [0, \tau] \to W^{1,p}(U)$ be continuous at $0$ with $u := u^0$. Then, $t \mapsto u_t : [0, \tau] \to W^{1,p}(U)$ is also continuous at $0$; that is, $u_t \to u$ in $W^{1,p}(U)$ as $t \searrow 0$.

Proof. The proof of (i) can be found in [17, p. 529] (see also [17, Thm. 6.1, p. 567]). Meanwhile, (ii) follows from the triangle inequality and an application of (i). Indeed, since $\nabla(f \circ T_t) = (DT_t)^T (\nabla f \circ T_t)$, it suffices to show that $(DT_t)^T (\nabla f \circ T_t) \to \nabla f$ in $L^p(U)$, and we have, by the triangle inequality,
$$\|(DT_t)^T(\nabla f \circ T_t) - \nabla f\|_{L^p(U)} \leq \|\nabla f \circ T_t - \nabla f\|_{L^p(U)} + \|((DT_t)^T - I)(\nabla f \circ T_t)\|_{L^p(U)}.$$
Clearly, we can apply (i) to the first term on the right-hand side of the above inequality. Meanwhile, the second term vanishes since the convergence $(DT_t)^T \to I$ holds in $C(\bar{U}; \mathbb{R}^{2\times 2})$. Finally, for (iii), we again apply the triangle inequality, so that, for all $t \in [0, \tau]$, we have
$$\|u_t - u\|_{W^{1,p}(U)} \leq \|u^t \circ T_t - u \circ T_t\|_{W^{1,p}(U)} + \|u \circ T_t - u\|_{W^{1,p}(U)}.$$
Applying (ii), we see that the last term on the right-hand side converges to zero as $t \searrow 0$. On the other hand, by definition and using Lemma 2.1, together with the boundedness of $I_t$ and $A_t$ (see, e.g., [11]), the first term can be estimated as
$$\|u^t \circ T_t - u \circ T_t\|_{W^{1,p}(U)} \leq C\, \|u^t - u\|_{W^{1,p}(U)},$$
for some constant $C > 0$. Evidently, the right-hand side of this inequality tends to zero as $t \searrow 0$, proving the last assertion. This completes the proof of the lemma.
We now present the minimax formulation of the cost function J in the forthcoming section.

3. Minimax formulation. Let $\Omega$ be a bounded open domain in $\mathbb{R}^2$, and let $u = u(\Omega)$ be the minimizer of problem (6), $\min_{\varphi} E(\Omega, \varphi)$, where $E$ denotes some energy functional. We associate with $u$ a cost function $J(\Omega) := F(\Omega, u(\Omega))$. Consider the deformed domain $\Omega_t = T_t(\Omega)$ of the reference domain $\Omega$ along the perturbation field $\mathbf{V}$, where $T_t$ is the perturbation of the identity operator associated with $\mathbf{V}$. Let $u_t = u(\Omega_t)$ be the solution of problem (6) on the transformed domain $\Omega_t$, i.e., the minimizer of the corresponding infimum problem (8), and let $F(\Omega_t, u(\Omega_t)) := J(\Omega_t)$ (9). The minimization of the cost function $J$ with respect to $\Omega$ can be achieved by transforming the problem (8)-(9) into an inf-sup problem; this approach is widespread in the engineering and mathematical literature. The solution of (8) is completely characterized by the corresponding variational equation, which leads to the definition of a Lagrangian functional (10). In this form, the function spaces depend on the parameter $t$, so one needs a theorem to differentiate a saddle point with respect to the parameter $t$, and there are two techniques that can be used, namely:
• the function space parametrization technique;
• the function space embedding technique.
We are now ready to establish the shape derivative of $J$ through a Lagrangian formulation. To begin with, we recall the variational formulation of system (3): find $u \in H^1(\Omega)$, with $u = 1$ on $\Gamma$ and $u = 0$ on $\Sigma$, such that
$$\int_\Omega \nabla u \cdot \nabla \psi\, dx = 0 \quad \text{for all } \psi \in H^1_0(\Omega). \tag{11}$$
The weak solution $u \in H^1(\Omega)$ of the above variational equation also belongs to the Sobolev space $H^3(\Omega)$ for a bounded domain $\Omega$ having a $C^{2,1}$ free boundary $\Sigma$ (see [11, Thm. 29]). We mention here that we require this regularity condition on $\Sigma$ so that we can apply an auxiliary result (see Lemma 3.1) for the computation of the shape gradient. Even so, this regularity assumption on $\Sigma$ can be weakened, as we shall remark later on in our discussion. Meanwhile, the weak solution of the PDE system (11) characterizes the unique minimum of the energy functional $E(\Omega, \varphi) = \frac{1}{2}\int_\Omega |\nabla\varphi|^2\, dx$. However, the presence of the extra constraints on $\Gamma$ and $\Sigma$ demands the introduction of the Lagrange multiplier $\partial_n \varphi \in H^{-1/2}(\partial\Omega)$. Thus, we introduce the functional $F$ associated with the cost in (2) and compute the first-order directional derivative of $F$ at $v$ in the direction $\psi \in H^1_0(\Omega)$, denoted by $dF(\Omega, v; \psi)$. To avoid computing the material derivative of $u$, we introduce an appropriate adjoint equation: find $p \in H^1(\Omega)$ such that
$$dF(\Omega, u; \psi) + dL(\Omega, u; p, \psi) = 0 \quad \text{for all } \psi \in H^1_0(\Omega), \tag{12}$$
where $dL(\Omega, u; p, \psi) = -\int_\Omega p\,\Delta\psi\, dx + \int_\Gamma \psi\, \partial_n p\, ds + \int_\Sigma \psi\, \partial_n p\, ds$.
In the same vein as in (10), we define the Lagrangian functional $G(\Omega_t, \varphi, \psi)$ (13), which couples the cost $F$ with the weak form of the state equation. This Lagrangian admits a unique saddle point $(u_t, p_t) \in H^1(\Omega_t) \times H^1(\Omega_t)$, which is completely characterized by the following systems:

State equations:
$$-\Delta u_t = 0 \ \text{in } \Omega_t, \qquad u_t = 1 \ \text{on } \Gamma, \qquad u_t = 0 \ \text{on } \Sigma_t. \tag{14}$$
Adjoint state equations:
$$-\Delta p_t = 0 \ \text{in } \Omega_t, \qquad p_t = 0 \ \text{on } \Gamma, \qquad p_t = \partial_n u_t - \lambda \ \text{on } \Sigma_t. \tag{15}$$
Given this formulation, our objective is to find the limit
$$\lim_{t \searrow 0} \frac{1}{t}\left[\inf_{\varphi \in H^1(\Omega_t)}\ \sup_{\psi \in H^1(\Omega_t)} G(\Omega_t, \varphi, \psi) - \inf_{\varphi \in H^1(\Omega)}\ \sup_{\psi \in H^1(\Omega)} G(\Omega, \varphi, \psi)\right],$$
where $G(\Omega_t, \varphi, \psi)$ is given by (13).
To attain our goal, we apply Theorem 5.1 (see the Appendix) to obtain the derivative of the infimum with respect to the parameter $t \geq 0$ at $t = 0$. Meanwhile, to get around the difficulty of dealing with the "time-dependent" function space $H^1(\Omega_t)$ and to obtain an infimum over a function space that is independent of $t$, we employ the function space parametrization technique and the function space embedding technique. We first apply the function space parametrization technique below, and then proceed with the function space embedding technique.

3.1. Function space parametrization technique.
Lemma 3.1. Let $\varphi, \psi \in H^2(\Omega)$ and let $\mathbf{V}$ be a vector field belonging to $\mathcal{V}$. Then the following identity holds:

Again, using the information given in (21) and (22), we can further simplify the expression for the shape derivative of $J$ as follows. In reference to Lemma 2.2, we take $f = \frac{1}{2} p^2 + \lambda p$, which, upon taking the dot product of both sides with the vector field $\mathbf{V}$ and then integrating over $\Sigma$, leads to a boundary expression. Utilizing Lemma 2.2 once more, and noting that $u = 0$ on $\Sigma$, so that $\nabla u = (\partial_n u)\, n$ on $\Sigma$, we obtain the final form of the derivative. We formalize our result in the following theorem.

Theorem 3.2.
Let $\Omega \subset \mathbb{R}^2$ be a doubly connected, bounded domain with boundary $\partial\Omega = \Gamma \cup \Sigma$, where $\Gamma$ is Lipschitz and $\Sigma$ is a $C^{2,1}$ boundary. Also, let $\Theta \subset C^{1,1}(\bar{U}, \mathbb{R}^2)$. Then, the cost functional $J$ given in (2), where $u$ satisfies (14) at $t = 0$, is shape differentiable. For every $\mathbf{V} \in \Theta$, its derivative $dJ(\Sigma)[\mathbf{V}]$ in the direction of the vector field $\mathbf{V}$ is given by the boundary expression (23), where the adjoint state $p$ is the unique solution of the adjoint equation (15) at $t = 0$.

3.2. Function space embedding technique. To compute the shape gradient $dJ(\Sigma)[\mathbf{V}]$ using the function space embedding technique, we note that $D = \mathbb{R}^2$ contains all of the transformations of $\Omega$ under the action of the velocity field $\mathbf{V} \in \Theta$. This yields an equivalent min-max form (24) of the cost functional, in which the Lagrangian functional $G(\Omega_t, \tilde{\Phi}, \tilde{\Psi})$ is defined for extensions $\tilde{\Phi}, \tilde{\Psi} \in H^2(\mathbb{R}^2)$ of the state and adjoint variables. In this regard, the following result (see [15] and also [1, Thm. 5.28, p. 156]) will be useful for the verification of the hypotheses of Theorem 5.1.

Proposition 2 (The Calderón extension theorem).
Let $\Omega \subset \mathbb{R}^d$, $d \in \mathbb{N}$, be a uniform Lipschitz domain and let $m$ be a positive integer. Then there exists a linear continuous extension operator $\Pi : H^m(\Omega) \to H^m(\mathbb{R}^d)$ such that for each $u \in H^m(\Omega)$, $\Pi u = u$ a.e. in $\Omega$ and $\|\Pi u\|_{H^m(\mathbb{R}^d)} \leq C\, \|u\|_{H^m(\Omega)}$, where the positive constant $C$ depends only on the cone embedded in $\Omega$.
We now proceed as follows.
Firstly, $S(t)$ is non-empty since we can always construct a linear and continuous extension operator $\Pi : H^2(\Omega) \to H^2(\mathbb{R}^2)$ by Proposition 2, and analogously $\Pi_t : H^2(\Omega_t) \to H^2(\mathbb{R}^2)$. Using these maps, we define the extensions $\tilde{\Phi}_t = \Pi_t \hat{u}_t$ and $\tilde{\Psi}_t = \Pi_t \hat{p}_t$ of $\hat{u}_t$ and $\hat{p}_t$, respectively. Then $\tilde{\Phi}_t \in X(t)$ and $\tilde{\Psi}_t \in Y(t)$, and these show the existence of a saddle point, i.e., $S(t) \neq \emptyset$. Thus, (H1) is satisfied. Now, the regularity of the state and adjoint state allows us to use Hadamard's domain and boundary differentiation formulas [17],
$$\frac{d}{dt}\int_{\Omega_t} f\, dx = \int_{\partial\Omega_t} f\, \mathbf{V} \cdot n_t\, ds, \qquad \frac{d}{dt}\int_{\Sigma_t} f\, ds = \int_{\Sigma_t} \left(\partial_{n_t} f + \kappa_t f\right)\mathbf{V} \cdot n_t\, ds,$$
valid whenever $f$ is a sufficiently smooth function independent of $t$, to compute the partial derivative of (24); the resulting expression is denoted by $\partial_t G(t, \tilde{\Phi}, \tilde{\Psi})$ (25), where $n_t$ denotes the outward unit normal to the boundary $\Sigma_t$. This expression for $\partial_t G(t, \tilde{\Phi}, \tilde{\Psi})$ exists everywhere in $[0, \tau]$ ($\tau$ sufficiently small, i.e., $\tau \in (0, t_V)$) for all $(\tilde{\Phi}, \tilde{\Psi})$ since $\mathbf{V} \in \mathcal{D}^1(\mathbb{R}^2, \mathbb{R}^2)$. Thus, (H2) is also satisfied. Next, we verify (H3)(i) and (H4)(i), and to this end we need the following lemma.

Lemma 3.3 ([17]). For any integer $m \geq 1$, the velocity field $\mathbf{V}$ satisfies the continuity property stated in [17]; one can also show that this result holds in the weak topology of $H^m(\mathbb{R}^d)$.
We check (H3)(ii) and (H4)(ii) as follows. For $(\tilde{\Phi}, \tilde{\Psi}) \in H^2(\mathbb{R}^2) \times H^2(\mathbb{R}^2)$, we can use Stokes' formula to rewrite (25) as a boundary integral on $\Sigma_t$; here we have used the fact that $\partial\Omega_t = \Gamma \cup \Sigma_t$ and that $\mathbf{V}|_\Gamma = 0$. Now, the map $(\tilde{\Phi}, \tilde{\Psi}) \mapsto K(\tilde{\Phi}, \tilde{\Psi})\mathbf{V}$ so introduced is bilinear and continuous. Notice that the expression (25) is a boundary integral on $\Sigma$ which does not depend on $(\tilde{\Phi}, \tilde{\Psi})$ outside of $\Omega_t$, so the inf and the sup in (26) can be dropped. Hence, we have $dJ(\Sigma)[\mathbf{V}] = \int_\Sigma K(\tilde{\Phi}, \tilde{\Psi})\, \mathbf{V} \cdot n\, ds$. We now proceed to simplify the expression for $dJ(\Sigma)[\mathbf{V}]$. Note that the saddle point $(\tilde{\Phi}, \tilde{\Psi})$ restricted to $\Omega$ is, in fact, given by the pair $(u, p)$, which is the unique solution of the systems (21) and (22). Thus,

$dJ(\Sigma)[\mathbf{V}]$ takes the form of a boundary integral over $\Sigma$, and rearranging the integrand leads to the same expression given in Theorem 3.2, as desired.

Remark 1. The computed expression (23) for the shape gradient $dJ(\Sigma)[\mathbf{V}]$ can be written in another way. In view of (15), we know that $p = \partial_n u - \lambda$ on $\Sigma$, and so, using this identity, we can rewrite (23) in the equivalent form (27), where $u$ and $p$ satisfy (14) and (15), respectively, at $t = 0$. We shall utilize this form in the numerical realization of the optimization problem (2)-(3) in the next section.
Remark 2. We have mentioned in the Introduction that the regularity assumption on $\Sigma$ is needed so that we may apply Lemma 3.1. Nevertheless, it suffices to assume that $\Omega$ is $C^{1,1}$ so that the weak solution $u \in H^1(\Omega)$ of (11) also belongs to $H^2(\Omega)$ (cf. [11, Thm. 29]). In this case, however, the integral $\int_\Omega A_t \nabla u \cdot \nabla p\, dx$ has to be expanded by evaluating its derivative with respect to $t$. Taking this derivative is difficult because $p$ only belongs to $H^1(\Omega)$ when $\Omega$ is only $C^{1,1}$. Nonetheless, this difficulty can be overcome by approximating $u$ by functions $\{u_k\}_{k=1}^{\infty} \subset C^{\infty}(\bar{\Omega})$ and $p$ by smoother functions $p_k$ satisfying the adjoint equation (12), and then inferring that $p_k \in H^2(\Omega)$ and $\lim_{k\to\infty} p_k = p$ in $H^1(\Omega)$, as done in [33]. On the other hand, $\Sigma$ being $C^{2,1}$ allows us to conclude that the state solution $u$ is in $H^3(\Omega)$. This, in turn, implies that $p = \partial_n u - \lambda$ belongs to $H^{3/2}(\Sigma)$ by the trace theorem, and thus $p \in H^2(\Omega)$ (cf. [27, Thm. 2.4.2.5]). This gives us the freedom to employ Lemma 3.1 for the computation of the boundary expression for the shape gradient. In addition, this regularity of the free boundary ensures the existence of the shape gradient of $J$, as it guarantees sufficient smoothness of the adjoint state.

4. Numerical examples. The existence of an optimal solution of the shape optimization problem (2)-(3) has already been studied in [34], where the $C^2$-regularity of the free boundary was used to construct a $C^1$-diffeomorphism of a uniform tubular neighborhood of the boundary. Having this result at our disposal, we carry out here a numerical realization of the optimization problem. To numerically solve the problem, we employ an iterative algorithm based on the $H^1$ gradient method. This method was introduced in [3] and stems from the idea of the traction method (see [4]). It was later referred to as the $H^1$ gradient method in [7], and a comparison with other techniques is given in [8].
The basic idea of the gradient method in a Hilbert space was presented in [44]. Although the gradient method was originally defined for functionals on a Hilbert space [13], it was later extended to functionals defined on a Banach space [4,5,6]. For more details on this method, we refer the reader to the aforementioned papers.
As mentioned above, the numerical solution of (2)-(3) is obtained by adopting an iterative process that decreases the value of the cost $J$ at each iteration. Let us denote by $\Sigma_k$ the free boundary at the $k$-th iteration. Then, at the $(k+1)$-th iterative step, the free boundary becomes $\Sigma_{k+1} = \{x + t\mathbf{V}_k(x) : x \in \Sigma_k\}$, where $t > 0$ is a sufficiently small step size parameter and $\mathbf{V}_k$ is chosen so that it provides a descent direction for the cost functional $J$. In reference to [6] (see also [40]), if such a $\mathbf{V}$ exists, then (by the traction method) it should satisfy the variational equation (28), where $G$ denotes the kernel of the shape gradient given in (23), for all $\varphi$ taken from an appropriately chosen function space $\mathcal{X}$. If we choose $\mathcal{X} := L^2(\Sigma)$, then $\mathbf{V}|_\Sigma = -Gn$, and for this choice, $dJ(\Sigma)[\mathbf{V}] < 0$ [40]. However, this choice of $\mathbf{V}$ may lead to a subsequent loss of regularity of $\Sigma$, hence creating oscillations of $\Sigma$ [48]. It is well known that a direct application of the gradient method often results in oscillating shapes [39], and these oscillations are caused by a lack of smoothness of the shape gradient [5,48]. In this work, we shall instead employ the modified $H^1$ gradient method introduced in [6]. That is, we take $\mathcal{X} := H^1(\Omega)$ and utilize the equation (29), where $\alpha > 0$ is a penalization parameter, to compute $\mathbf{V}$. We remark that the resulting vector field $\mathbf{V}$ (also called in some literature the Sobolev gradient [46]), computed using (28), provides an extension of $Gn$ over the entire domain, which can be shown not only to have a regularizing effect on the boundary $\Sigma$ (cf. [40,48]) but also to precondition the descent direction. The same holds when equation (29) is used to compute $\mathbf{V}$, as seen in [6], where an intuitive interpretation of the equation is also provided. For an alternative numerical approach to solving problem (2)-(3), using, for example, fictitious domain methods, we refer the reader to [33,35].
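The regularizing effect of the Sobolev-gradient idea can be illustrated with a one-dimensional analogue on a closed boundary curve. This is a sketch, not the paper's $H^1(\Omega)$ formulation: instead of taking the raw $L^2$ direction $V = -G$, we solve $(I + \alpha L)V = -G$, where $L$ is the discrete periodic $-d^2/ds^2$ operator, which damps high-frequency components of the shape gradient kernel $G$ while nearly preserving the smooth ones.

```python
import numpy as np

def smoothed_descent(G, alpha, h):
    """Solve (I + alpha*L) V = -G on a uniform periodic grid of spacing h,
    where L is the positive semidefinite discrete -d^2/ds^2 operator."""
    m = len(G)
    L = (2 * np.eye(m) - np.roll(np.eye(m), 1, axis=0)
         - np.roll(np.eye(m), -1, axis=0)) / h**2
    return np.linalg.solve(np.eye(m) + alpha * L, -G)

m = 128
h = 2 * np.pi / m
theta = h * np.arange(m)
G = np.cos(theta) + 0.5 * np.cos(20 * theta)  # smooth part + oscillation
V = smoothed_descent(G, alpha=0.1, h=h)

# Fourier coefficients of the descent direction: the smooth mode passes
# through almost unchanged while the oscillatory mode is strongly damped.
c1 = 2 * np.mean(V * np.cos(theta))
c20 = 2 * np.mean(V * np.cos(20 * theta))
```

Since $L$ is circulant, each Fourier mode $\cos(k\theta)$ is scaled by $-1/(1 + \alpha\lambda_k)$ with $\lambda_k \approx k^2$, which is precisely the preconditioning effect described above.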
The optimization algorithm using the $H^1$ gradient method with Robin condition (see [6]) can be summarized as follows:

Algorithm (the boundary variation algorithm for problem (2)-(3)):
1: Choose an initial shape $\Omega_0$;
2: Solve the state equation (3) and the adjoint state equation (12) on the current domain $\Omega_k$ using the finite element method;
3: Compute the descent direction $\mathbf{V}_k$ by using (29), which amounts to solving the corresponding system with $\Omega = \Omega_k$;
4: Update the current domain along $\mathbf{V}_k$ to obtain a new domain $\Omega_{k+1}$, i.e., set $\Omega_{k+1} := \{x + t_k \mathbf{V}_k(x) : x \in \Omega_k\}$, for some sufficiently small scalar $t_k > 0$.
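A radially symmetric toy reduction of this loop can be sketched as follows. This is an illustration only (the paper's actual computations use FreeFem++ on general domains): with concentric circles $\Gamma = \{|x| = a\}$ and $\Sigma = \{|x| = R\}$, the state of (3) is known in closed form, so the cost (2) collapses to a function of the single design variable $R$, and the descent loop mimics the algorithm's step-size control by shrinking $t_k$ whenever a trial update fails to decrease the cost.

```python
import math

a, lam = 0.5, -1.0                   # data of Example 1

def J(R):
    """Radial reduction of the cost (2): J(R) = pi*R*(u_n(R) - lam)^2,
    with normal derivative u_n(R) = -1/(R*log(R/a)) on Sigma."""
    if R <= a:
        return math.inf              # outside the admissible range
    u_n = -1.0 / (R * math.log(R / a))
    return math.pi * R * (u_n - lam) ** 2

R, t = 2.0, 0.5                      # initial free radius and step size
for _ in range(200):
    h = 1e-7
    grad = (J(R + h) - J(R - h)) / (2 * h)   # 1-D "shape gradient"
    while J(R - t * grad) > J(R) and t > 1e-12:
        t *= 0.5                     # step-size recalibration
    R -= t * grad
```

The loop drives $R$ to the radius at which the overdetermined condition $\partial_n u = \lambda$ holds, i.e., the root of $R\log(R/a) = -1/\lambda$.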
For a concrete example of the problem, we consider the shape optimization reformulation (2)-(3) of (1) with $\lambda = -1$. That is, we consider the optimization problem (2) with $\lambda = -1$, where $u$ satisfies (3). Thus, in view of Theorem 3.2, the kernel $G$ is given by the boundary expression in (23). In the numerical computations below, however, we shall utilize its equivalent form given by (27). We remark that the kernel $G$ involves the mean curvature $\kappa$ of the free boundary $\Sigma$. This may be evaluated as $\kappa \approx \operatorname{div}_\Sigma n_\varepsilon$, where $n_\varepsilon \in H^1(\Sigma)$ is the smoothed normal vector field on $\Sigma$ satisfying the variational equation (30), in which $\varepsilon > 0$ is some fixed small parameter (see [40]).
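A discrete analogue of this curvature recovery can be sketched on a polygonal closed curve. This is illustrative only (the paper computes $n_\varepsilon$ from the variational equation (30) in FreeFem++): here the vertex normals are regularized by solving $(I + \varepsilon L)n_\varepsilon = n$ with a periodic graph Laplacian $L$, and the curvature is taken as the tangential divergence $\kappa = t \cdot (dn_\varepsilon/ds)$.

```python
import numpy as np

def curvature_from_normals(pts, eps):
    """Curvature of a closed ccw polygonal curve from regularized normals."""
    m = len(pts)
    nxt, prv = np.roll(pts, -1, axis=0), np.roll(pts, 1, axis=0)
    tang = nxt - prv
    ds = np.linalg.norm(tang, axis=1, keepdims=True)  # ~ 2 * local spacing
    tang = tang / ds
    normal = np.stack([tang[:, 1], -tang[:, 0]], axis=1)  # outward normal
    L = 2 * np.eye(m) - np.roll(np.eye(m), 1, 0) - np.roll(np.eye(m), -1, 0)
    n_eps = np.linalg.solve(np.eye(m) + eps * L, normal)  # smoothed normals
    dn = (np.roll(n_eps, -1, axis=0) - np.roll(n_eps, 1, axis=0)) / ds
    return np.sum(dn * tang, axis=1)   # kappa = t . (d n_eps / ds)

# Circle of radius 2 sampled at 200 points: expect kappa close to 1/2.
theta = 2 * np.pi * np.arange(200) / 200
circle = 2.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
kappa = curvature_from_normals(circle, eps=1e-3)
```

For a circle the smoothed normals are nearly unchanged (they are low-frequency eigenvectors of $L$), so the recovered curvature matches $1/R$ closely.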
We shall now provide two numerical examples of the given optimization problem by considering two different fixed boundaries in the next two subsections.

4.1. Example 1. First, we consider a circular fixed boundary centered at the origin with radius $r = 0.5$; that is, we describe it as $\Gamma = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 0.5^2\}$. For the initial shape of the free boundary, we let $\Sigma = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 2^2\}$. The initial geometric profile of the annular domain $\Omega$ is shown in Figure 1. We then implement the boundary variation algorithm presented above in FreeFem++ [37] with the following setup. The state and adjoint equations are solved by the finite element method. During each iteration, the step size parameter $t_k$ is chosen on the basis of the scalar product $\langle \mathbf{V}_k, \mathbf{V}_{k-1}\rangle_{H^1(\Omega)} =: \eta$. If $\eta < 0$, we suspect that the algorithm is becoming unstable, so we recalibrate $t_k$ by reducing its size and initialize the next iteration using the previous shape $\Omega_{k-1}$. If, on the other hand, $\eta > 0$ (and is very small), the step size $t_k$ is increased. We also decrease $t_k$ if reversed triangles appear in the mesh after the update.
In the remeshing process, we choose the maximum mesh size $h_{\max} = 0.1$. Meanwhile, we set $\varepsilon = 0.1$ in (30) for the computation of the mean curvature $\kappa$. For the parameter $\alpha$ in Step 3 of the algorithm, we consider two values, namely $\alpha_1 = 0.001$ and $\alpha_2 = 0.01$. The computed values of the cost up to the fourth iteration are shown in Table 1. We experimented with other initial guesses for the free boundary $\Sigma$, such as concentric/eccentric circles and ellipses, and also with different values of $\alpha$. In every case, the resulting (approximate) optimal shape after a modest number of iterations converges to a domain which is graphically indistinguishable from the ones shown in Figure 1.

4.2. Example 2. For the second example, we consider a fixed boundary $\Gamma$ of a different shape. Meanwhile, we take a circular free boundary $\Sigma$ with radius $R = 2.5$ as an initial guess. Again, we implement the optimization process in FreeFem++ as in Example 4.1, but with slightly different parameters. Furthermore, we run the algorithm until $J(\Sigma_k) < 10^{-5}$. The parameter $\varepsilon$ in equation (30) is again set to $0.1$. This time, however, we consider three values of $\alpha$ which are much smaller compared to the values considered in Example 4.1; in particular, we take $\alpha = 10^{-9}$, $5 \times 10^{-9}$, and $5 \times 10^{-8}$. The resulting (approximate) optimal shapes for each of these values of $\alpha$ after several iterations are shown in Figure 2. Meanwhile, the histories of the values of the cost functional $J$ corresponding to these parameter values are depicted in Figure 3. In the above setup, the computed initial value of the cost $J$ is approximately $115.2051$. In the case $\alpha = 10^{-9}$, this cost value was reduced to approximately $3.34 \times 10^{-6}$ after 11 iterations. Moreover, in the case $\alpha = 5 \times 10^{-9}$, we obtained the value $J = 1.39 \times 10^{-7}$ at the end of nine iterations. Lastly, for $\alpha = 5 \times 10^{-8}$, we obtained the value $J = 3.63 \times 10^{-6}$ after 10 iterations.
In this example, we also performed several simulations, taking various shapes for the free boundary and different values of $\alpha$; in every case, as expected, the computed free boundaries are almost indistinguishable from those shown in Figure 3.

5. Concluding remark. We have verified the shape differentiability of the cost functional $J$ given in (2) and established the boundary expression for its shape derivative through a minimax formulation, employing Correa and Seeger's theorem combined with the function space parametrization technique and also with the function space embedding technique. It is worth noting that there is another method closely related to the one we have used here. More precisely, the method, known as Céa's Lagrange method [14], utilizes the same Lagrangian as the minimax formulation. However, if one uses this approach without caution, the computations may lead to a wrong expression for the shape derivative. The method requires that the shape derivatives of the state and the adjoint state exist and belong to the solution space of the PDE. Indeed, one may define $G(t, \varphi, \psi) := G(\Omega_t, \varphi, \psi)$, where $G$ is given by (13), and assume that $G$ is sufficiently differentiable with respect to $t$, $\varphi$, and $\psi$. Since the strong material derivative $\dot{u}$ exists in $H^1_0(\Omega)$ (cf. [11]), we may calculate the shape gradient by the chain rule; the term involving the derivative of $G$ with respect to $\varphi$ vanishes because $\dot{u} \in H^1_0(\Omega)$, and therefore we are left with $dJ(\Sigma; \mathbf{V}) = \frac{d}{dt} G(t, u_t, p)\big|_{t=0}$. The boundary expression of the shape derivative is then easily calculated, following the line of computations in Section 3 (see [49]). The computed expression for the shape derivative agrees with the result in [33]. Consequently, the shape derivative of the cost functional considered here was established independently of [33].
Also, we observed that the computed expression for the shape derivative of the cost functional $J$ depends only on the normal component of the deformation field $\mathbf{V}$ on the free boundary $\Sigma$; that is, there exists a function $g_\Omega$ defined on the free boundary $\Sigma$ such that $dJ(\Sigma; \mathbf{V}) = \int_\Sigma g_\Omega\, \mathbf{V} \cdot n\, ds$. This result agrees with the Hadamard-Zolésio structure theorem (cf. [32] and [17, Rem. 3.2, p. 481]). In addition, the computed expression for the shape derivative enabled us to apply an efficient boundary variation algorithm based on the modified $H^1$ gradient method to numerically solve two concrete examples of the shape optimization problem (2)-(3). The numerical results show the convergence of the proposed algorithm to an approximate solution of the free boundary problem. As a result, the proposed iterative algorithm provides an alternative efficient numerical procedure for solving the free boundary problem through the shape optimization approach.