Optimal reinforcing networks for elastic membranes

In this paper we study the optimal reinforcement of an elastic membrane, fixed at its boundary, by means of a network (a connected one-dimensional structure) that has to be found in a suitable admissible class. We show the existence of an optimal solution, which may present multiplicities, that is, regions where the optimal structure overlaps itself. Some numerical simulations are shown to confirm this issue and to illustrate the complexity of the optimal networks when their total length becomes large.

1. Introduction. In the present paper we consider the vertical displacement of an elastic membrane under the action of an exterior load f and fixed at its boundary; this amounts to solving the variational problem

\[ \min\Big\{\int_\Omega\Big(\frac12|\nabla u|^2-fu\Big)\,dx \;:\; u\in H^1_0(\Omega)\Big\} \tag{1} \]

or, equivalently, the elliptic PDE

\[ -\Delta u = f \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \]

Here Ω is a bounded Lipschitz domain of R^d (d = 2 in the physical situation), f ∈ L^2(Ω), and H^1_0(Ω) is the usual Sobolev space of functions with zero trace on the boundary ∂Ω.
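As a toy illustration of problem (1) (our sketch, not part of the paper), the one-dimensional analogue -u'' = f on (0,1) with zero boundary values can be solved by a standard finite-difference discretization and a tridiagonal (Thomas) solve; all names below are ours.

```python
# Minimal 1-D analogue of the membrane problem: solve -u'' = f on (0,1),
# u(0) = u(1) = 0, by second-order finite differences.

def solve_membrane_1d(f, n=200):
    """Return grid and nodal values of the solution of -u'' = f(x)."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    # tridiagonal system for interior nodes: (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i
    a = [-1.0] * (n - 1)          # sub-diagonal
    b = [2.0] * (n - 1)           # diagonal
    c = [-1.0] * (n - 1)          # super-diagonal
    d = [f(x[i]) * h * h for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u_int = [0.0] * (n - 1)
    u_int[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u_int[i] = (d[i] - c[i] * u_int[i + 1]) / b[i]
    return x, [0.0] + u_int + [0.0]

x, u = solve_membrane_1d(lambda t: 1.0)   # f = 1: exact solution x(1-x)/2
```

For the constant load f = 1 the scheme reproduces the exact parabolic profile x(1-x)/2 at the nodes, with maximal displacement 1/8 at the midpoint.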
Our goal is now to rigidify the membrane by adding a one-dimensional reinforcement in the most efficient way; the reinforcement is described by a one-dimensional set S ⊂ Ω which varies in a suitable class of admissible choices. The effect of S on the membrane is described by the energy value

\[ E_f(S) = \inf\Big\{\int_\Omega\Big(\frac12|\nabla u|^2-fu\Big)\,dx + \frac m2\int_S|\nabla u|^2\,d\mathcal H^1 \;:\; u\in C^\infty_c(\Omega)\Big\} \tag{2} \]

that has to be maximized in the class of admissible choices for S. Here m > 0 is a fixed parameter that represents the stiffness coefficient of the one-dimensional reinforcement, the notation H^1 indicates the one-dimensional Hausdorff measure, while C^∞_c(Ω) denotes the class of smooth functions with compact support in Ω. It has to be noticed that, in view of the fact that the infimum of a functional coincides with the minimum of its relaxation, we can write the energy E_f(S) in an equivalent form using functions u in the Sobolev space H^1_0(Ω) ∩ H^1(S). Indeed, in [4] it is shown that the relaxed functional associated to (2) is given by

\[ \int_\Omega\Big(\frac12|\nabla u|^2-fu\Big)\,dx + \frac m2\int_S|\nabla_\tau u|^2\,d\mathcal H^1, \qquad u\in H^1_0(\Omega)\cap H^1(S), \]

where ∇_τ denotes the tangential derivative along S. Therefore (2) can be equivalently written as

\[ E_f(S) = \min\Big\{\int_\Omega\Big(\frac12|\nabla u|^2-fu\Big)\,dx + \frac m2\int_S|\nabla_\tau u|^2\,d\mathcal H^1 \;:\; u\in H^1_0(\Omega)\cap H^1(S)\Big\}. \tag{3} \]

In our model, the energy of the stiffener S is described by the line integral (m/2)∫_S |∇_τ u|^2 dH^1; this is consistent (see for instance [15] and [1]) with the fact that this integral is the variational limit (as ε → 0) of energies where the stiffener S is replaced by the quantity (m/2ε)∫_{S_ε} |∇u|^2 dx, S_ε being the thin strip around S of thickness ε. In other words, the one-dimensional stiffener S is seen as the limit structure of two-dimensional thin strips of thickness ε and elastic constant m/ε. In the present paper, the optimization problem we deal with consists in finding the "best" reinforcement S among all the networks (i.e. closed connected one-dimensional subsets) of Ω having a prescribed bound L on their total length, measured by means of the Hausdorff measure H^1(S). We then consider the maximization of the energy functional E_f(S) in (2) over the admissible class above, that is,

\[ \max\big\{E_f(S) \;:\; S\subset\Omega,\ S \text{ closed connected},\ \mathcal H^1(S)\le L\big\}. \tag{4} \]
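The stiffening effect behind (2) can be illustrated on a hypothetical one-dimensional analogue (our construction, not the paper's two-dimensional model), where the stiffener simply raises the elastic coefficient from 1 to 1 + m on a subset of the interval; adding the stiffener can only increase the compliance-type energy E_f.

```python
# Toy 1-D analogue of the reinforced energy
#   E_f(S) = min_u  1/2 * int (1 + m*chi_S) |u'|^2 dx  -  int f u dx,
# with u(0) = u(1) = 0 and load f = 1, discretized by finite differences.

def reinforced_energy(chi, m=5.0, n=400):
    """E_f for the coefficient a = 1 + m*chi(x) on (0,1)."""
    h = 1.0 / n
    # cell-centered coefficients a_{i+1/2}
    a = [1.0 + m * chi((i + 0.5) * h) for i in range(n)]
    # tridiagonal system for -(a u')' = 1 at interior nodes
    diag = [(a[i - 1] + a[i]) / h / h for i in range(1, n)]
    off = [-a[i] / h / h for i in range(1, n - 1)]   # couples nodes i, i+1
    b, d = diag[:], [1.0] * (n - 1)
    for i in range(1, n - 1):                        # Thomas elimination
        w = off[i - 1] / b[i - 1]
        b[i] -= w * off[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * (n + 1)
    u[n - 1] = d[-1] / b[-1]
    for i in range(n - 2, 0, -1):                    # back substitution
        u[i] = (d[i - 1] - off[i - 1] * u[i + 1]) / b[i - 1]
    # energy value: 1/2 * sum a (du/h)^2 h  -  sum u h
    grad = sum(a[i] * (u[i + 1] - u[i]) ** 2 / h for i in range(n))
    return 0.5 * grad - h * sum(u)

E_plain = reinforced_energy(lambda x: 0.0)               # no stiffener
E_mid = reinforced_energy(lambda x: 0.25 <= x <= 0.75)   # stiffener of length 1/2
```

Without reinforcement the discrete value approaches the exact energy -1/24 of the unreinforced problem, and any stiffened configuration yields a larger (less negative) energy, which is the monotonicity used repeatedly in the paper.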
We may consider, besides the case of distributed loads, which consists in assuming the load f in some Lebesgue class L^p(Ω), also the case of concentrated loads, in which f may have a more singular behavior. More precisely, we assume f is a signed measure, so that the linear term ∫_Ω fu dx in (2) has to be written as ∫_Ω u df. It is well known that if f is a measure we may have E_f(S) = -∞ for some choices of S; in many cases, however, these "singular" configurations are ruled out from our analysis because of the maximization problem (4) we are dealing with.
Another model requiring a similar analytical approach is the one concerned with the reduction of traffic congestion in a given geographic region. Here the minimum problem is

\[ \min\Big\{\int_\Omega H(\sigma)\,dx \;:\; -\operatorname{div}\sigma = f \text{ in } \Omega\Big\}, \]

where in the region Ω the function f^+ represents the density of residents while f^- is the density of working places. The vector σ is the traffic flux and the function H describes the transportation cost: the case H(s) = |s| is the classical Monge situation, while the transport is said to be congested if the function H is superlinear, that is,

\[ \lim_{|s|\to\infty}\frac{H(s)}{|s|} = +\infty. \]

We refer to [3,5,6,16] and to the references therein for a detailed description of this model. In our case H(s) = |s|^2/2, which reduces, via a duality argument, to a problem of the form (1). The optimization problem arises when a new road or network of roads S has to be built to reduce the congestion; the total length L is prescribed, and on the new road the congestion function is lower than |s|^2/2, say α|s|^2/2 with α < 1. The problem consists in locating the one-dimensional set S in an optimal way, and we end up, via a duality argument, with a problem similar to the previous one, with m = 1/α. Since limits of measures of the form H^1⌊S can be general measures, it is convenient to define the energy functional (2) even for a measure µ, by setting

\[ E_f(\mu) = \inf\Big\{\frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int|\nabla u|^2\,d\mu - \int_\Omega u\,df \;:\; u\in C^\infty_c(\Omega)\Big\}, \tag{5} \]

where also the load f is assumed to be a general measure. Passing to the limit along a maximizing sequence S_n of one-dimensional sets, and using the fact that they are connected, we end up with an optimal measure µ which is concentrated on a one-dimensional closed connected set S having H^1(S) ≤ L. On the other hand, since the rigidity of the entire structure improves when the reinforcement S increases, we may assume the maximizing sequence S_n satisfies H^1(S_n) = L, which implies that the total mass of the measure µ is equal to L.
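For completeness, the duality computation behind the reduction to (1) in the quadratic case H(s) = |s|^2/2 can be sketched as follows (a standard convex-duality argument, summarized by us with no claim of matching the exact derivation in [3,5,6,16]; the min-sup exchange is justified by convexity):

```latex
\begin{aligned}
\min\Big\{\int_\Omega\tfrac{|\sigma|^2}{2}\,dx \;:\; -\operatorname{div}\sigma=f\Big\}
 &= \min_{\sigma}\ \sup_{u}\ \Big\{\int_\Omega\tfrac{|\sigma|^2}{2}\,dx
      -\int_\Omega\sigma\cdot\nabla u\,dx+\int_\Omega u\,df\Big\}\\
 &= \sup_{u}\ \Big\{\int_\Omega u\,df
      -\sup_{\sigma}\int_\Omega\Big(\sigma\cdot\nabla u-\tfrac{|\sigma|^2}{2}\Big)\,dx\Big\}\\
 &= \sup_{u}\ \Big\{\int_\Omega u\,df-\int_\Omega\tfrac{|\nabla u|^2}{2}\,dx\Big\}
  = -\min_{u}\ \Big\{\int_\Omega\tfrac{|\nabla u|^2}{2}\,dx-\int_\Omega u\,df\Big\},
\end{aligned}
```

where the inner supremum in σ is attained pointwise at σ = ∇u, so the dual problem is exactly (1) up to a sign.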
We stress the fact that the connectedness assumption on S is crucial; indeed, removing this constraint allows maximizing sequences (S_n) to spread over Ω and leads to consider a relaxed problem of the form

\[ \max\big\{E_f(\mu) \;:\; \mu\in\mathcal M^+(\Omega)\big\}, \]

where E_f is the functional in (5) and M^+(Ω) is the class of all nonnegative measures on Ω with µ(Ω) ≤ L. This optimization problem has been studied in [11] and in [7], where it is shown that the optimal measure µ actually belongs to L^p(Ω), the exponent p depending on the summability of the right-hand side f. Similar problems, in the extreme case when a Dirichlet condition is imposed on the reinforcing region, have been considered in [9,10].
In our case we show that the optimal measure µ is absolutely continuous with respect to H^1⌊S; hence it can be written as µ = θ(x)H^1⌊S with θ(x) ≥ 1 and ∫_S θ dH^1 = L. Therefore, when the optimal reinforcing structure S verifies H^1(S) = L, we must necessarily have θ ≡ 1; however, in some situations, we show that the case θ(x) > 1 may actually occur. In Section 5 we present some numerical simulations, which show unexpected behaviors of the optimal reinforcing networks.
2. Formulation of the problem. Throughout the paper Ω is a bounded Lipschitz domain of R^d and f belongs to the space M(Ω) of signed measures on Ω. The admissible class we consider consists of all networks (one-dimensional closed connected sets) S ⊂ Ω having total length bounded by a given number L. More precisely, we consider the class

\[ \mathcal A_L = \big\{S\subset\Omega \;:\; S \text{ closed connected},\ \mathcal H^1(S)\le L\big\}, \]

where H^1 is the one-dimensional Hausdorff measure. For every S ∈ A_L we define the energy functional

\[ E_f(S) = \inf\Big\{\frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int_S|\nabla u|^2\,d\mathcal H^1 - \int_\Omega u\,df \;:\; u\in C^\infty_c(\Omega)\Big\}. \tag{6} \]

Again, when needed in the following, we can replace the term ∫_S |∇u|^2 dH^1 by its relaxed version ∫_S |∇_τ u|^2 dH^1. The optimization problem we deal with is then

\[ \max\big\{E_f(S) \;:\; S\in\mathcal A_L\big\}. \tag{7} \]

It has to be noticed that when f does not belong to the dual Sobolev space H^{-1}(Ω) we may have E_f(S) = -∞ for some choices of S; in many cases, however, these "singular" configurations are ruled out from our analysis because of the maximization problem (7). However, it may happen that E_f(S) = -∞ for every S in the admissible class A_L; this for instance occurs if f = δ_A - δ_B, where A, B are two points in Ω and L < |A - B|. In our results we therefore always assume that E_f(S) is finite for at least one set S in the admissible class A_L.
As explained in the introduction, limits of maximizing sequences (S_n) in A_L (intended as measures of the form H^1⌊S_n) may lead to measures; it is therefore convenient to define the energy functional for a general nonnegative measure µ ∈ M^+(Ω) by (5). When µ = H^1⌊S the energy E_f(µ) simply reduces to E_f(S). The choice of C^∞_c(Ω) in (6) and in (5) has been made to have meaningful integral terms in the definition of the energy functional; however, instead of C^∞_c(Ω) we may equivalently use functions which are continuous, smooth near the support of µ, and in H^1_0(Ω). An argument similar to the one illustrated above, to pass from the original formulation (5) to its relaxed version, can be used for a general measure µ, providing in this way an energy functional defined on a suitable Sobolev space H^1_0(Ω) ∩ H^1_µ. Integral functionals of the form (5) and the corresponding Sobolev spaces H^1_µ have been studied in [4], to which we refer for all the details. In our case the infimum in (5) becomes a minimum in

\[ E_f(\mu) = \min\Big\{\frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int|\nabla_\mu u|^2\,d\mu - \int_\Omega u\,df \;:\; u\in H^1_0(\Omega)\cap H^1_\mu\Big\}, \tag{8} \]

where ∇_µ denotes the tangential gradient with respect to µ and H^1_µ the Sobolev space of functions u in H^1_0(Ω) whose tangential gradient ∇_µ u is in L^2_µ. When µ is of the form θH^1⌊S, with S rectifiable (we recall that all connected one-dimensional sets are rectifiable) and θ ≥ 1, the tangential gradient ∇_µ reduces to the tangential gradient ∇_τ with respect to S, and (8) becomes

\[ E_f(\mu) = \min\Big\{\frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int_S\theta|\nabla_\tau u|^2\,d\mathcal H^1 - \int_\Omega u\,df \;:\; u\in H^1_0(\Omega)\cap H^1(S)\Big\}. \tag{9} \]

It is convenient to denote by E_{µ,f}(u) the quantity

\[ E_{\mu,f}(u) = \frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int|\nabla_\mu u|^2\,d\mu - \int_\Omega u\,df. \]

Moreover, we denote by M^+_L(Ω) the class of nonnegative measures µ on Ω, with µ(Ω) ≤ L, which are concentrated on some closed connected set S with H^1(S) ≤ L and satisfy µ ≥ H^1⌊S. A first existence result relies on a compactness property with respect to the Hausdorff convergence and is given below.
Theorem 2.1. The maximization problem

\[ \max\big\{E_f(\mu) \;:\; \mu\in\mathcal M^+_L(\Omega)\big\} \tag{10} \]

admits at least a solution.
The optimal measures µ solving the maximization problem (10) are general measures in M^+_L(Ω) supported by a one-dimensional closed connected set S; moreover, since the energy E_f(µ) increases when µ increases, we may reduce to consider only measures with µ(Ω) = L. However, the optimal measures µ above may in principle have singular parts with respect to the Hausdorff measure H^1⌊S. The next result shows that this does not occur.

Theorem 2.2. Let µ be a solution of the maximization problem (10). Then there exists a one-dimensional closed connected set S such that the absolutely continuous part of µ with respect to H^1⌊S is also a solution of (10). In other words, problem (10) admits a solution µ of the form µ = θ(x)H^1⌊S, with S ∈ A_L and θ(x) ≥ 1.
By the theorem above it follows that there exists an optimal measure µ which is absolutely continuous with respect to H^1⌊S; hence it can be written as µ = θ(x)H^1⌊S. In the following example we show that the case θ(x) > 1 may actually occur.
Example. Let f = δ_A - δ_B, where A, B are two points of Ω with |A - B| = 1. Taking into account the maximization problem (10), three possibilities may occur, depending on the choice of L.
• If L < 1 we have E_f(S) = -∞ for every S in the class A_L. In fact, since a connected set S of length L < 1 cannot contain both A and B, and since in R^2 the capacity of a point is zero, we may construct a sequence (u_n) in C^∞_c(Ω) with equibounded Dirichlet energies, vanishing on S, and with u_n(A) - u_n(B) arbitrarily large.
• If L = 1 the unique configuration for which the energy is not -∞ is the segment S joining A and B, which is then the unique solution of the maximization problem (7).
• If L > 1 we have θ(x) > 1 on some parts of the set S (see the numerical simulations of Section 5).

3. Proofs. In the rest of the paper we use the following result.
Proposition 1. Let (µ_n) be a bounded sequence of measures on R^d weakly* converging to a measure µ, and let (S_n) be a sequence of closed connected subsets of R^d converging to a set S in the Hausdorff sense. Assume that, for every n,
1. µ_n ≥ H^1⌊S_n;
2. H^1(S_n) ≤ L.
Then µ is supported by S, H^1(S) ≤ L, and µ ≥ H^1⌊S.
Proof. The fact that µ is supported by S easily follows from the weak* convergence of µ_n to µ and from the Hausdorff convergence of S_n to S. The fact that H^1(S) ≤ L follows from the Gołąb theorem. To prove that µ ≥ H^1⌊S, take a point x_0 ∈ S and a ball B(x_0, r); for all r except at most countably many we have

\[ \mu\big(\overline B(x_0,r)\big) \;\ge\; \limsup_n \mu_n\big(\overline B(x_0,r)\big) \;\ge\; \limsup_n \mathcal H^1\big(S_n\cap\overline B(x_0,r)\big) \;\ge\; \mathcal H^1\big(S\cap\overline B(x_0,r)\big), \]

where the last inequality follows again from the Gołąb theorem. Since x_0 and r are arbitrary, this concludes the proof.
Proof of Theorem 2.1. Let (µ_n) be a maximizing sequence for the optimization problem (10). Since the µ_n are bounded as measures, we may extract a subsequence weakly* converging to a measure µ; at the same time, the supports S_n of the µ_n are compact for the Hausdorff convergence, hence we may suppose they converge to a closed connected set S. By Proposition 1 we have that µ ∈ M^+_L(Ω). The functional E_f(µ), being the infimum of weakly* continuous functionals, is weakly* upper semicontinuous, so that

\[ E_f(\mu) \;\ge\; \limsup_n E_f(\mu_n) \;=\; \sup\big\{E_f(\nu) \;:\; \nu\in\mathcal M^+_L(\Omega)\big\}. \]

Therefore µ is a solution of the optimization problem (10).
In order to prove Theorem 2.2 we need some preliminary results.
Lemma 3.1. Let K ⊂ R be a compact set with |K| = 0. For all ε > 0 there exist a smooth function f_ε : R → R and an open set A_ε ⊃ K such that f'_ε = 0 on A_ε, 0 ≤ f'_ε ≤ 1 on R, and |f_ε(t) - t| ≤ ε for every t ∈ R.

Proof. For every δ > 0 let
• K_δ be the open δ-neighbourhood of K;
• ρ_δ be a smooth regularizing kernel with support contained in [-δ, δ];
• g_δ := ρ_δ * 1_{K_{2δ}} and f_δ be a primitive of 1 - g_δ such that f_δ(0) = 0.
One easily checks that g_δ and f_δ are smooth functions and satisfy the following properties:
(i) 0 ≤ g_δ ≤ 1 on R, with g_δ = 1 on K_δ and g_δ = 0 outside K_{3δ};
(ii) 0 ≤ f'_δ ≤ 1 on R, with f'_δ = 0 on K_δ;
(iii) |f_δ(t) - t| ≤ |K_{3δ}| for every t ∈ R.
Notice now that, since K is compact, |K_δ| converges to |K| = 0 as δ → 0, and therefore we can find δ̄ such that |K_{3δ̄}| ≤ ε. We conclude by taking f_ε equal to f_δ̄ and A_ε equal to K_δ̄.

Lemma 3.2. Let K ⊂ R^d be a compact set with H^1(K) = 0. For all ε > 0 there exists a smooth map φ_ε : R^d → R^d such that: (1) φ_ε is locally constant on an open neighbourhood of K; (2) |φ_ε(x) - x| ≤ ε for every x ∈ R^d; (3) the Jacobian of φ_ε is diagonal with entries in [0, 1].

Proof. Let K_i be the projections of K on the coordinate axes and denote by f_{i,ε} : R → R, i = 1, ..., d, the functions constructed in Lemma 3.1 on K_i. It is easy to see that the map φ_ε(x) = (f_{1,ε}(x_1), ..., f_{d,ε}(x_d)) has the required properties.

Lemma 3.3. Let Ω ⊂ R^d and let µ be a measure concentrated on a one-dimensional set Σ ⊂ Ω, with µ_a and µ_s the absolutely continuous part and the singular part with respect to H^1⌊Σ, respectively. Then we have E_f(µ) = E_f(µ_a).

Proof. Taking into account the expressions of E_f(µ) in (5) and (8), we have to show that for every v ∈ C^∞_c(Ω) and ε > 0 there exists u ∈ H^1_0(Ω) ∩ H^1_µ such that

\[ E_{\mu,f}(u) \le E_{\mu_a,f}(v) + C\varepsilon \tag{12} \]

for a suitable constant C. Let us denote by d(x) the function d(x) = dist(x, Σ) and, for every ε > 0, by θ_ε : R → R a smooth function such that θ_ε(t) = 1 for t ≤ ε, θ_ε(t) = 0 for t ≥ 2ε, and |θ'_ε| ≤ C/ε. We denote by E ⊂ Σ the subset of Σ on which the singular part µ_s is concentrated, with H^1(E) = 0. The set E can be approximated by compact sets K_ε ⊂ E with µ_s(E \ K_ε) < ε; denoting by φ_ε the map given by Lemma 3.2 relative to K_ε, we set

\[ u(x) = \theta_\varepsilon\big(d(x)\big)\,v\big(\varphi_\varepsilon(x)\big) + \Big(1-\theta_\varepsilon\big(d(x)\big)\Big)\,v(x); \]

the function u is clearly in H^1_0(Ω) and u(x) = v∘φ_ε(x) on Σ, so u also belongs to H^1_µ. We prove (12) in three steps, taking into account that

\[ \nabla u = \theta_\varepsilon(d)\,D\varphi_\varepsilon^T\nabla v(\varphi_\varepsilon) + \big(1-\theta_\varepsilon(d)\big)\nabla v + \theta'_\varepsilon(d)\,\nabla d\,\big(v(\varphi_\varepsilon)-v\big). \tag{13} \]

Step 1. We prove the inequality for the term ∫_Ω u df in (12). Notice that, since v is smooth, we have

\[ \|u-v\|_{L^\infty} \le C\,\sup_x|\varphi_\varepsilon(x)-x| \le C\varepsilon, \tag{14} \]

where the last inequality follows by Lemma 3.2. Hence (the constant C in the following may vary from line to line), using (14) gives

\[ \int_\Omega u\,df \ge \int_\Omega v\,df - C\varepsilon. \]

Step 2. For the Lebesgue term ∫_Ω |∇u|^2 dx in (12), again using (14) and the formula (13) for ∇u, we have, by the properties of φ_ε given by Lemma 3.2 and the smoothness of v,

\[ \int_\Omega|\nabla u|^2\,dx \le \int_\Omega|\nabla v|^2\,dx + C\varepsilon. \]

Step 3.
It remains to prove the result for the term in (12) involving µ. Observing that θ_ε = 1 near Σ, we have u = v∘φ_ε on Σ and hence

\[ \int|\nabla_\mu u|^2\,d\mu = \int_\Sigma|\nabla_\tau(v\circ\varphi_\varepsilon)|^2\,d\mu_a + \int_\Sigma|\nabla_\tau(v\circ\varphi_\varepsilon)|^2\,d\mu_s. \]

The first term in the last line can be estimated as

\[ \int_\Sigma|\nabla_\tau(v\circ\varphi_\varepsilon)|^2\,d\mu_a \le \int_\Sigma|\nabla_\tau v|^2\,d\mu_a + C\varepsilon, \]

where we have used the property (2) of Lemma 3.2. It remains to prove that

\[ \int_\Sigma|\nabla_\tau(v\circ\varphi_\varepsilon)|^2\,d\mu_s \le C\varepsilon. \]

Since µ_s(E \ K_ε) < ε and φ_ε is locally constant on K_ε, we have ∇_τ(v∘φ_ε) = 0 on K_ε, so that the integral above reduces to E \ K_ε and is bounded by Cε.

Proof of Theorem 2.2. The proof follows straightforwardly from Lemma 3.3 and the existence Theorem 2.1.
4. Some necessary conditions of optimality. In this section we illustrate some necessary conditions of optimality that a solution µ = θH^1⌊S of the optimization problem (10) has to satisfy, in the case where S is smooth. First of all, let µ = θH^1⌊S be fixed and let u ∈ H^1_0(Ω) ∩ H^1_µ be a solution of (9), with f ∈ L^2(Ω). The Euler–Lagrange equation in its weak form is

\[ \int_\Omega\nabla u\cdot\nabla\phi\,dx + m\int_S\theta\,\nabla_\tau u\cdot\nabla_\tau\phi\,d\mathcal H^1 = \int_\Omega f\phi\,dx \qquad\text{for every }\phi\in H^1_0(\Omega)\cap H^1_\mu. \tag{16} \]

In order to integrate by parts the terms above, we assume that S is smooth enough, except for a finite number of branching points, and that θ is continuous on S. We then have

\[ \int_\Omega\nabla u\cdot\nabla\phi\,dx = -\int_{\Omega\setminus S}\phi\,\Delta u\,dx + \int_S\phi\Big[\frac{\partial u}{\partial n}\Big]\,d\mathcal H^1, \]

where, denoting by u^+ and u^- the function u on the two sides of S, and by n^+ and n^- the normal versor to S in the direction of u^+ and u^- respectively, we have set

\[ \Big[\frac{\partial u}{\partial n}\Big] = \frac{\partial u^+}{\partial n^+} + \frac{\partial u^-}{\partial n^-}. \]

Note that the quantity above does not depend on the choice of the two sides of S. We now integrate by parts the integral on S and obtain

\[ \int_S\theta\,\nabla_\tau u\cdot\nabla_\tau\phi\,d\mathcal H^1 = -\int_S\phi\,\partial_\tau(\theta\,\partial_\tau u)\,d\mathcal H^1 + \text{boundary terms at the points of }S^{\#}, \]

where we denoted by S^# the set of points of S which are of terminal type or of branching type. Putting all terms together we obtain from (16)

\[ -\Delta u = f \ \text{ in }\Omega\setminus S, \qquad m\,\partial_\tau(\theta\,\partial_\tau u) = \Big[\frac{\partial u}{\partial n}\Big] \ \text{ on }S\setminus S^{\#}. \]

Concerning the conditions at a point x_0 ∈ S^# we obtain:
– if x_0 ∈ S^# ∩ ∂Ω we have the Dirichlet condition u(x_0) = 0;
– if x_0 ∈ S^# ∩ Ω is a terminal or branching point of S we have the Neumann-type condition

\[ \sum_i\theta_i\,\partial_{\tau_i}u_i(x_0) = 0, \]

where u_i are the traces of u on the various branches meeting at x_0 and τ_i the corresponding tangent vectors (for a terminal point the sum reduces to a single term).

Proof of Proposition 2. We compute the first variation with respect to θ, keeping S fixed, considering the density µ_ε = (θ + εη)H^1⌊S, with η ∈ L^∞(S). In order that µ_ε be admissible we require

\[ \eta \ge 0 \ \text{ on }\{\theta = 1\}, \qquad \int_S\eta\,d\mathcal H^1 \le 0. \tag{17} \]

GIOVANNI ALBERTI, GIUSEPPE BUTTAZZO, SERENA GUARINO LO BIANCO, ÉDOUARD OUDET

By the optimality of µ we have E_f(µ_ε) ≤ E_f(µ) and, taking the optimal state u_ε of µ_ε as a test function for µ, we deduce that E_{µ_ε,f}(u_ε) ≤ E_{µ,f}(u_ε), that is,

\[ \int_S\eta\,|\nabla_\tau u_\varepsilon|^2\,d\mathcal H^1 \le 0. \tag{18} \]

By the inequality E_{µ_ε,f}(u_ε) ≤ E_{µ,f}(u) we deduce that u_ε is uniformly bounded in H^1_0(Ω) ∩ H^1(S) and converges weakly in H^1_0(Ω) ∩ H^1(S) to a function u that is necessarily the optimal state function associated to µ. In addition, this convergence is actually strong; indeed, by the convergences (as ε → 0)

\[ E_{\mu_\varepsilon,f}(u_\varepsilon)\to E_{\mu,f}(u), \qquad \int_\Omega u_\varepsilon f\,dx \to \int_\Omega u f\,dx, \]

we
deduce that

\[ \frac12\int_\Omega|\nabla u_\varepsilon|^2\,dx + \frac m2\int_S\theta|\nabla_\tau u_\varepsilon|^2\,d\mathcal H^1 \;\to\; \frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int_S\theta|\nabla_\tau u|^2\,d\mathcal H^1. \]

Hence the strong convergence in H^1_0(Ω) ∩ H^1(S) follows, since the quantity above is the square of a Hilbertian norm in H^1_0(Ω) ∩ H^1(S). Passing now to the limit as ε → 0 in (18), we have ∫_S η|∇_τ u|^2 dH^1 ≤ 0 for all η verifying (17).

5. Numerical approximation of optimal reinforcing networks. Let Ω be a convex domain of R^2, L, m > 0 and f ∈ L^2(Ω). We introduce in this section a numerical strategy to approximate solutions of the following reinforcement problem:

\[ \max_{(S,\theta)}\ \min_{u\in H^1_0(\Omega)\cap H^1(S)}\ \Big\{\frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int_S\theta|\nabla_\tau u|^2\,d\mathcal H^1 - \int_\Omega fu\,dx\Big\}, \tag{20} \]

where ∇_τ u denotes the tangential derivative of u along S and the couple (S, θ) satisfies the constraints: S ⊂ Ω closed and connected, θ ≥ 1 on S, and ∫_S θ dH^1 = L. The previous sections establish that this problem is well posed and that its solutions satisfy the optimality conditions of Proposition 2. First, we introduce the following simplification, replacing the tangential contribution in the cost function (20) by the full gradient of the state function (see [4] or [2], Theorem 2.25, p. 164, for a complete justification). Namely, we consider in the following the cost function:

\[ \max_{(S,\theta)}\ \min_{u}\ \Big\{\frac12\int_\Omega|\nabla u|^2\,dx + \frac m2\int_S\theta|\nabla u|^2\,d\mathcal H^1 - \int_\Omega fu\,dx\Big\}. \tag{21} \]

Since we expect problem (21) to have many local optima, we focus on stochastic optimization algorithms, which only require cost function evaluations to proceed.
5.1. Spanning tree parametrization and discrete functional. To discretize problem (21), we consider a mesh T associated to the domain Ω, made of n_p points and n_t triangles. We denote by K and M respectively the stiffness and mass matrices of dimensions n_p × n_p associated to the finite elements P_1 on T. Moreover, we define K_x and K_y to be the differentiation matrices of P_1 functions. More precisely, K_x and K_y are matrices of dimensions n_t × n_p which evaluate the operators ∂_x and ∂_y on piecewise linear continuous functions on the mesh T. Observe that, due to the linearity of P_1 elements, ∂_x and ∂_y are constant on every triangle of the mesh. Denoting by V_areas the column vector of size n_t × 1 containing the area measures of the triangles, we recall the simple identity

\[ K = K_x^T\,\mathrm{diag}(V_{\mathrm{areas}})\,K_x + K_y^T\,\mathrm{diag}(V_{\mathrm{areas}})\,K_y. \]

We denote by the letter U a real vector of n_p node values representing an element of P_1 ∩ H^1_0(Ω). Problem (21) involves both a connected set and an associated weight function. In order to parametrize connected one-dimensional structures, we follow the strategy developed in [8]. Let n_d ∈ N* and consider a set of n_d points P_1, ..., P_{n_d} ∈ Ω. We associate to such a set its canonical spanning tree SP(P_1, ..., P_{n_d}), which is the polygonal set of minimal length connecting the n_d points without introducing new branching points. Let us point out that SP(P_1, ..., P_{n_d}) is generically the union of n_d - 1 arcs. It is straightforward to establish that the union over n_d ∈ N* of such spanning trees is dense, with respect to the Hausdorff distance, among compact connected subsets of Ω. To describe an L^1 density on SP(P_1, ..., P_{n_d}), we simply consider a vector θ_weights of n_d - 1 values greater than 1, which represents a function that is constant on every arc of the tree.
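To make the parametrization concrete, here is a minimal stand-alone sketch (our code with hypothetical names, not the implementation of [8]) computing the canonical spanning tree of a point set as a Euclidean minimum spanning tree, using Prim's algorithm:

```python
import math

# Euclidean minimum spanning tree of n_d points: the polygonal set of minimal
# length connecting the points without introducing new branching points.

def spanning_tree(points):
    """Return (edges, total_length) of a Euclidean minimum spanning tree."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    in_tree = {0}
    edges, total = [], 0.0
    while len(in_tree) < n:
        # cheapest edge leaving the current tree (Prim's greedy step)
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
        in_tree.add(j)
        edges.append((i, j))
        total += dist(points[i], points[j])
    return edges, total

# four corners of the unit square: the tree keeps three unit-length sides
edges, L_tree = spanning_tree([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
```

As in the text, the tree on n_d points is the union of n_d - 1 arcs, and moving the points deforms the whole connected structure continuously.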
Algorithm 1 Projection on weighted length and bound constraints.
Input: L, SP(P_1, ..., P_{n_d}), θ_weights, h_s.
step 1: Compute the length L̃ of SP(P_1, ..., P_{n_d}) and the center of mass C of the points P_1, ..., P_{n_d}.
step 2: Define (P̃_1, ..., P̃_{n_d}) to be the image of (P_1, ..., P_{n_d}) by the homothety of center C and ratio h_s L/L̃.
step 3: Project the weight vector θ_weights on the convex set which is the intersection of the linear constraint (22) with respect to SP(P̃_1, ..., P̃_{n_d}) and the bound constraints θ_weights ≥ 1. The projected vector is denoted by θ̃_weights.
Output: SP(P̃_1, ..., P̃_{n_d}), θ̃_weights.

Let V_lengths(P_1, ..., P_{n_d}, θ_weights) be the vector of size n_t × 1 which contains the weighted lengths of SP(P_1, ..., P_{n_d}) intersected with every triangle of the mesh T. With the previous notations, we can now introduce a discrete functional to approximate the cost function (21):

\[ \max_{(SP(P_1,\dots,P_{n_d}),\,\theta_{\mathrm{weights}})}\ \min_U\ \frac12\,U^T\Big(K + m\big(K_x^T\,\mathrm{diag}(V_{\mathrm{lengths}})\,K_x + K_y^T\,\mathrm{diag}(V_{\mathrm{lengths}})\,K_y\big)\Big)U - U^T M F, \]

where F is the linear interpolation of the given function f at the vertices of the mesh T. The pair (SP(P_1, ..., P_{n_d}), θ_weights) must satisfy the constraints that every value of θ_weights is greater than 1 and that the following measure equality constraint is satisfied:

\[ \sum_{i=1}^{n_d-1}\theta_{\mathrm{weights},i}\,\mathcal H^1(S_i) = L, \tag{22} \]

where the (S_i)_{1≤i≤n_d-1} are the n_d - 1 edges of SP(P_1, ..., P_{n_d}). Since the inner minimization problem is a strictly convex quadratic problem, it reduces to solving the linear system

\[ \Big(K + m\big(K_x^T\,\mathrm{diag}(V_{\mathrm{lengths}})\,K_x + K_y^T\,\mathrm{diag}(V_{\mathrm{lengths}})\,K_y\big)\Big)U = MF. \tag{23} \]

5.2. Parametrization of the constraints. As explained in the previous sections, we need the couple (SP(P_1, ..., P_{n_d}), θ_weights) to have weights greater than one and to satisfy the equality constraint (22). To parametrize such admissible couples we introduce a last scale parameter denoted by h_s ∈ ]0, 1[. We introduce in Algorithm 1 a three-step procedure to produce an admissible pair (SP(P̃_1, ..., P̃_{n_d}), θ̃_weights) from a given triplet of parameters (SP(P_1, ..., P_{n_d}), θ_weights, h_s).
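For illustration, the projection of step 3 of Algorithm 1, onto the set {θ : θ_i ≥ 1 and Σ_i θ_i ℓ_i = L}, can also be computed by a simple bisection on the Lagrange multiplier; this is a didactic stand-in for the Dai–Fletcher solver used in the experiments (function and variable names are ours):

```python
# Euclidean projection of a weight vector onto { t : t_i >= 1, sum(t_i*l_i) = L }.
# KKT conditions give t_i = max(1, theta_i - lam*l_i) for a multiplier lam,
# found here by bisection on the (monotone) weighted mass.

def project_weights(theta, ell, L, tol=1e-12):
    assert L >= sum(ell), "L must allow all weights >= 1"
    proj = lambda lam: [max(1.0, t - lam * l) for t, l in zip(theta, ell)]
    mass = lambda lam: sum(t * l for t, l in zip(proj(lam), ell))
    lo, hi = -1.0, 1.0
    while mass(lo) < L: lo *= 2.0      # mass(lam) is nonincreasing in lam
    while mass(hi) > L: hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(mid) >= L else (lo, mid)
    return proj(0.5 * (lo + hi))

theta_p = project_weights([2.0, 0.5, 1.5], [0.4, 0.3, 0.3], L=1.3)
```

The returned vector satisfies both the bound and the equality constraint; entries pushed below 1 by the multiplier are clamped, exactly as in the box-plus-hyperplane projection discussed in Section 5.3.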
5.3. Technical details and complexity. We summarize in Algorithm 2 the different steps required to compute the cost associated to a given set of parameters (SP(P_1, ..., P_{n_d}), θ_weights, h_s). We give below some technical details and underline the computational complexity of every step.
In the first projection phase, only the final step of the procedure is not computationally trivial. Whereas the projection of a point onto a hyperplane can be described analytically, the projection onto the intersection of a hyperplane with a box requires specific attention. In all our experiments, we used the Dai–Fletcher algorithm [12] to obtain a fast and precise approximation of this projection. Observe that, by construction, the spanning tree SP(P̃_1, ..., P̃_{n_d}) is precisely of length h_s L < L, so that the length constraint (22) and the bound constraints θ_weights ≥ 1 are compatible.

Algorithm 2 Summary of one cost evaluation.
Input: m, L, SP(P_1, ..., P_{n_d}), θ_weights, h_s.
step 1: Project (SP(P_1, ..., P_{n_d}), θ_weights) with Algorithm 1 to obtain an admissible couple (SP(P̃_1, ..., P̃_{n_d}), θ̃_weights).
step 2: Locate the points P̃_1, ..., P̃_{n_d} in the mesh T.
step 3: Compute the intersection of every arc of SP(P̃_1, ..., P̃_{n_d}) with every triangle of T to evaluate V_lengths(P̃_1, ..., P̃_{n_d}, θ̃_weights).
step 4: Assemble the matrix K_x^T diag(V_lengths) K_x + K_y^T diag(V_lengths) K_y and solve the linear system (23) to compute its solution U.
Return: the value of the discrete cost function at U.

In our situation, an order of only n_d iterations was required to reach a relative error of 10^-6 on the first order optimality conditions with respect to the infinity norm, which reduces to a complexity of order n_d^2. The second and third steps have been carried out using a hash-structure representation of the mesh T combined with a quad-tree associated to its vertices. Using this precomputed information, these operations required in practice an order of (n_d + n_p) log(n_d + n_p) operations.
Finally, assembling and solving the linear system has been performed by a standard Cholesky decomposition, which concentrated the main part of the computational effort in our experiments, where the number of parameters 3n_d was negligible with respect to n_p, which was of order 10^4.
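As a minimal illustration of this last step (a dense toy version written by us; production codes use sparse Cholesky factorizations), a symmetric positive definite system such as (23) can be solved as follows:

```python
# Solve A x = b for symmetric positive definite A via A = L L^T (dense, O(n^3)).

def cholesky_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((A[i][i] - s) ** 0.5 if i == j
                       else (A[i][j] - s) / L[j][j])
    # forward substitution L y = b, then back substitution L^T x = y
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# tiny SPD example: a 1-D stiffness matrix
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
x = cholesky_solve(A, [1.0, 0.0, 0.0])
```

One factorization can then be reused for several right-hand sides, which is the reason Cholesky is the natural choice when the same structure is evaluated under different loads.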

Numerical experiments.
Based on the previous discretization, we approximate optimal triplets solving problem (21) using a stochastic algorithm. We focus our study on the homogeneous load case, corresponding to f constantly equal to 1, and on the combination of two Dirac masses f = δ_{(-1/2,0)} - δ_{(1/2,0)}. In all our experiments, we used the NLopt library (see [13]) and its implementation of the ISRES algorithm with its default parameters, which combines local and global stochastic optimization. We carried out optimization runs limited to five hours of computation, leading to an order of 2 × 10^6 cost function evaluations based on Algorithm 2, on a standard computer, for a mesh made of 10^4 triangles. In Figures 1 and 3 we describe the optimal configurations obtained for L = 1 to L = 6 with n_d = 20. Observe that the resulting number of parameters in the triplet is exactly 3n_d. Moreover, in order to obtain a fine and stable description of the optimal structures, we performed a local optimization step on the obtained structure, increasing the number of points to n_d = 50. We used the NLopt implementation of the BOBYQA algorithm for this final step, which does not require gradient-based information.
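ISRES itself ships with NLopt; as a self-contained toy of the same derivative-free family, we sketch a simple (1+1) evolution strategy with shrinking mutation size (our names and toy objective, meant only to illustrate the "cost evaluations only" workflow, not the actual solver of the experiments):

```python
import random

# (1+1) evolution strategy: mutate the parameter vector with a Gaussian step,
# keep the candidate only if it improves the cost, slowly anneal the step size.

def evolution_strategy(cost, x0, lower, upper, iters=2000, seed=0):
    rng = random.Random(seed)
    x, best = list(x0), cost(x0)
    step = [(u - l) * 0.3 for l, u in zip(lower, upper)]
    for _ in range(iters):
        cand = [min(u, max(l, xi + rng.gauss(0.0, s)))   # clamp to the box
                for xi, s, l, u in zip(x, step, lower, upper)]
        c = cost(cand)
        if c > best:
            x, best = cand, c
        step = [s * 0.999 for s in step]                 # anneal mutation size
    return x, best

# toy concave objective with maximum 0 at (0.3, 0.7)
f = lambda p: -(p[0] - 0.3) ** 2 - (p[1] - 0.7) ** 2
x_best, v = evolution_strategy(f, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
```

Since candidates are accepted only when they improve the cost, the best value is monotone along the run, mirroring the behavior of the maximization runs described above.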
Finally, we give in Table 1 several numerical estimates, obtained on a fine mesh with 10^5 elements, of our computed sets and also of natural networks which could be guessed to be optimal. As illustrated by these numerical values, neither a radius (for L = 1), nor a diameter (for L = 2), nor a triple junction (for L = 3), nor a cross (for L = 4) seems to be optimal.
As described in Proposition 2, we recover the fact that, for optimal structures, the tangential gradient is almost constant where θ > 1, whereas we can observe drastic changes of magnitude where θ = 1 (see Figures 1, 3 and 2).

Table 1. Reinforcement values computed on a fine mesh of 10^6 elements for classical and computed connected sets, for m = 0.5.

6. Some open questions. There are several interesting open questions related to the optimization problem (10); we try to list some of them below. We denote by µ = θH^1⌊S an optimal solution of (10), whose existence was proved in Theorem 2.1 and in Theorem 2.2.
• According to Theorem 2.2 the optimal density θ is in L^1(S) with respect to the one-dimensional Hausdorff measure H^1⌊S. It would be interesting to investigate the cases in which θ is bounded and, possibly refining the assumptions on the data, when θ enjoys some regularity properties.
• In the numerical simulations we performed, the optimal set S never contains closed curves; it would be interesting to prove this fact under fairly general assumptions.
• The optimal set S is not in general a curve, since it may present branching points; this seems to occur only for values of L large enough. Moreover, when a branching occurs, it is not clear what the necessary condition of optimality for the related angles is.
• The regularity of the optimal set S seems a difficult issue: is it true that, under suitable assumptions on the data, the set S is smooth except at a finite number of branching points?
• When the admissible total length L tends to +∞, the optimal set S tends to fill the entire domain Ω. Is there an asymptotic behavior of the set S as L → +∞? This is reminiscent of a Γ-convergence result for the irrigation problem studied in [14].

Proposition 2. Let µ = θH^1⌊S be a solution of the optimization problem (10) and let u be the optimal state function associated to µ. Then, for a suitable constant c, we have

\[ |\nabla_\tau u| = c \ \ \mathcal H^1\text{-a.e. on }\{\theta > 1\}, \qquad |\nabla_\tau u| \le c \ \ \mathcal H^1\text{-a.e. on }\{\theta = 1\}. \]

Figure 1. Approximation of globally optimal reinforcement structures for m = 0.5 and L = 1, 2 and 3. The upper colorbar refers to the weights θ, which color the optimal reinforcement set on the left, whereas the lower colorbar refers to the tangential gradient plotted on the connected set in the right picture.

Figure 2. Approximation of globally optimal reinforcement structures for m = 0.5 and L = 1.5, 2.5 and 5, for a source consisting of two Dirac masses. The upper colorbar refers to the weights θ, which color the optimal reinforcement set on the left, whereas the lower colorbar refers to the tangential gradient plotted on the connected set in the right picture.

Figure 3. Approximation of globally optimal reinforcement structures for m = 0.5 and L = 4, 5 and 6. The upper colorbar refers to the weights θ, which color the optimal reinforcement set on the left, whereas the lower colorbar refers to the tangential gradient plotted on the connected set in the right picture.