GLOBAL OPTIMIZATION REDUCTION OF GENERALIZED MALFATTI’S PROBLEM

In this paper, we generalize Malfatti's problem as a continuation of the works [6, 7]. The problem is formulated as a global optimization problem. To solve Malfatti's problem numerically, we propose the so-called "Hill method", which is based on a heuristic approach. Computational results for two- and three-dimensional test problems are provided.


1. Introduction. In 1803, Malfatti (1731-1807) of the University of Ferrara posed the problem of determining the three circular columns of marble, of possibly different sizes, which, when carved out of a right triangular prism, would have the largest possible total cross section [14]. This is equivalent to finding the maximum total area of three circles which can be packed inside a right triangle of any shape without overlapping. Malfatti gave the solution as three circles (the Malfatti circles) tangent to each other and to two sides of the triangle. In [12], it was shown that the Malfatti circles are not optimal. The most common methods used for finding the best solutions to Malfatti's problem were algebraic and geometric approaches [1, 11, 9]. In 1994, Zalgaller and Los [24, 13] proved that the greedy arrangement solves Malfatti's problem. Melissen conjectured in [15]:
Conjecture 1. The greedy arrangement has the largest total area among all arrangements of n (n ≥ 4) non-overlapping circles in a triangle.
Most recently, Malfatti's problem was examined from the standpoint of global optimization theory using the algorithms from [6, 7]. In these works, Malfatti's problem was formulated as a convex maximization problem, and the global optimality conditions developed by A.S. Strekalovsky [21], together with Algorithm MAX [5], were applied to solve the original Malfatti's problem for the case n = 4. In this paper, we formulate Malfatti's generalized problem and propose a global optimization method and an algorithm for its numerical solution. The paper is organized as follows. In Section 2, we present a formulation of Malfatti's generalized problem as well as its reduction to a nonconvex optimization problem. Section 3 discusses the global optimization method and the algorithm. Numerical results are presented in Section 4.

2. Malfatti's Generalized Problem. We generalize Malfatti's problem as follows: how does one pack K non-overlapping balls of maximum total volume into a given bounded polyhedral set D ⊂ R^n?
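The explicit statement of problem (4)-(7), referenced in the next section, appears to have been lost in extraction. The following is a sketch of a formulation consistent with the description above; the objective Σ r_k^n (proportional to total ball volume in R^n) and the assignment of the numbers (4)-(7) are our assumptions, not the paper's verbatim text:

```latex
\max_{x^1,\dots,x^K,\; r_1,\dots,r_K} \; \sum_{k=1}^{K} r_k^{\,n}
\qquad \text{subject to}
\begin{aligned}
&\langle a_i, x^k\rangle + r_k \,\|a_i\| \le b_i, && i=1,\dots,m,\ k=1,\dots,K,\\
&\|x^k - x^l\|^2 \ge (r_k + r_l)^2, && 1 \le k < l \le K,\\
&r_k \ge 0, && k=1,\dots,K.
\end{aligned}
```

The first group of constraints keeps each ball inside D (see condition (3) below), and the second group prevents any two balls from overlapping.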
We introduce the following sets. Denote by B(x^0, r) = {x ∈ R^n : ‖x − x^0‖ ≤ r} a ball with center x^0 ∈ R^n and radius r ∈ R, and let D = {x ∈ R^n : ⟨a_i, x⟩ ≤ b_i, i = 1, …, m}; here ⟨·,·⟩ denotes the scalar product of two vectors in R^n, ‖·‖ is the Euclidean norm, and int D ≠ ∅.
Lemma. B(x^0, r) ⊂ D if and only if
⟨a_i, x^0⟩ + r‖a_i‖ ≤ b_i, i = 1, …, m. (3)
Proof. Necessity. Let y ∈ B(x^0, r); then y ∈ D. The point y ∈ B(x^0, r) can be written as y = x^0 + rh, h ∈ R^n, ‖h‖ ≤ 1. It follows from the condition y ∈ D that ⟨a_i, y⟩ ≤ b_i, i = 1, …, m, or, equivalently, ⟨a_i, x^0⟩ + r⟨a_i, h⟩ ≤ b_i, i = 1, …, m, for all h ∈ R^n with ‖h‖ ≤ 1. Taking the supremum over such h, which is attained at h = a_i/‖a_i‖, we obtain (3).
Sufficiency. Let condition (3) be satisfied and, on the contrary, assume that there exists ỹ ∈ B(x^0, r) such that ỹ ∉ D. Clearly, there exists h̃ ∈ R^n such that ỹ = x^0 + rh̃, ‖h̃‖ ≤ 1. Since ỹ ∉ D, there exists j ∈ {1, 2, …, m} for which ⟨a_j, ỹ⟩ > b_j, that is, ⟨a_j, x^0⟩ + r⟨a_j, h̃⟩ > b_j. Since ⟨a_j, h̃⟩ ≤ ‖a_j‖‖h̃‖ ≤ ‖a_j‖, we have ⟨a_j, x^0⟩ + r‖a_j‖ > b_j, which contradicts (3). □
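Condition (3) is straightforward to test computationally: a ball lies inside the polytope exactly when every facet inequality holds with the margin r‖a_i‖. A minimal sketch (the function name `ball_in_polytope` and the numerical tolerance are ours, not from the paper):

```python
import math

def ball_in_polytope(center, r, A, b):
    """Check B(center, r) ⊆ D = {x : <a_i, x> <= b_i} via condition (3):
    <a_i, x0> + r * ||a_i|| <= b_i for every facet (a_i, b_i)."""
    for a_i, b_i in zip(A, b):
        lhs = sum(ai * xi for ai, xi in zip(a_i, center))
        norm = math.sqrt(sum(ai * ai for ai in a_i))
        if lhs + r * norm > b_i + 1e-12:
            return False
    return True

# Unit square [0,1]^2 written as four half-planes <a_i, x> <= b_i.
A = [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, 1.0)]
b = [0.0, 1.0, 0.0, 1.0]

print(ball_in_polytope((0.5, 0.5), 0.5, A, b))  # True: inscribed circle fits
print(ball_in_polytope((0.5, 0.5), 0.6, A, b))  # False: radius too large
```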
3. Global Optimization Method and Algorithm. Problem (4)-(7) is a convex maximization problem over a nonconvex set and belongs to the class of global optimization problems. It can be verified that the problem is NP-hard. The problem was solved globally by the algorithm in [7], which seems to us computationally expensive in the high-dimensional case. To solve the problem, we therefore propose the "Hill method", which is heuristic.
The algorithm eliminates several local minima from the set of local extrema by moving to a saddle point, like climbing the hill with the gentlest slope. This means that a new local descent starts not from a neighborhood of a local minimum, as is done in the method of K.L. Teo [22], but from a saddle point in that neighborhood with the lowest function value. The idea is based on the same hypothesis as the MSBH method [4, 3], which has been shown to perform well on global optimization problems with a large number of local extrema. The "Hill method" includes a systematic random multistart, which underlies most global optimization techniques. The proposed algorithm consists of the multistart, the local descent (a specific version of the conjugate gradient method), and a phase of withdrawal from the local extremum to the "hill" for a new local descent. The full iterative procedure is described below (the eigenvalues of the Hessian matrix at Step 11 are computed approximately).
As a stopping criterion of the proposed algorithm, one can use the number of extrema found by the local search or the elapsed computing time. Before applying the "Hill method", all constraints of problem (4)-(7) are penalized into a single objective function; problem (4)-(7) then reduces to a global optimization problem over box constraints. To solve the subproblem of the local minimum search, we implement a combination of two algorithms, which allows us to retain superlinear convergence without employing the search-space reduction procedure when the constraints become active. In the reduced gradient algorithm, an auxiliary point is constructed at the k-th iteration; then we solve a one-dimensional minimization problem to obtain z^{k+1}.
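The penalization step described above can be sketched as follows; this is a minimal illustration, and the quadratic penalty form and the weight `mu` are our assumptions, not necessarily the authors' exact choice:

```python
def penalized_objective(f, constraints, mu):
    """Fold inequality constraints g_j(z) <= 0 into the objective with a
    quadratic penalty, so that only box constraints remain explicit."""
    def F(z):
        penalty = sum(max(0.0, g(z)) ** 2 for g in constraints)
        return f(z) + mu * penalty
    return F

# Toy example: minimize -(z0 + z1) subject to z0 + z1 <= 1 inside the box [0,1]^2.
f = lambda z: -(z[0] + z[1])
g = [lambda z: z[0] + z[1] - 1.0]
F = penalized_objective(f, g, mu=100.0)

print(F([0.5, 0.5]))  # feasible point: penalty is zero, F = -1.0
print(F([1.0, 1.0]))  # infeasible point: F = -2.0 + 100 * 1.0 = 98.0
```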
1 np is the number of algorithm iterations;
2 nd is the number of random directions.
Let z*_i be the stationary point obtained by the conjugate gradient method started from the point z_trial.
8 if (stop criterion = true), then
9 end of the algorithm.
10 for k ← 1 to nd do
11 Compute the Hessian matrix of the function f at the point z*_i and its eigenvalues λ_k.
12 Find the eigenvalue λ_min = min{λ_1, …, λ_s}. Assume that λ_min ≥ 0.
13 Let k be the index of λ_min and u_k a corresponding eigenvector. Let S be the intersection of the ray z*_i + τu_k, τ ∈ R, with Z.
14 Determine the strict local maximizer z̄ of f nearest to z*_i in S.
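The overall loop — local descent, then withdrawal over the lowest nearby "hill" for a new descent — can be sketched on a one-dimensional multimodal function. This is a stand-in, not the paper's implementation: plain gradient descent replaces the conjugate gradient phase, and a direct crest search replaces the Hessian eigen-decomposition of Steps 11-13 (in 1-D the only candidate directions are ±1):

```python
import math

def grad(f, z, h=1e-6):
    # Central finite-difference derivative.
    return (f(z + h) - f(z - h)) / (2 * h)

def local_descent(f, z, step=1e-2, iters=5000):
    """Plain gradient descent standing in for the conjugate gradient phase."""
    for _ in range(iters):
        z -= step * grad(f, z)
    return z

def nearest_local_max(f, z, direction, step=1e-3, lo=-3.0, hi=3.0):
    """Walk uphill from a minimizer until f starts to fall: the crest is
    the 'hill' separating the current basin from the next one."""
    prev = f(z)
    while lo <= z <= hi:
        z += direction * step
        cur = f(z)
        if cur < prev:          # just passed the crest
            return z, prev
        prev = cur
    return None, math.inf       # left the search interval

def hill_method(f, z0, rounds=5):
    best = local_descent(f, z0)
    for _ in range(rounds):
        # Withdrawal: pick the direction whose nearest crest is lowest.
        cands = [nearest_local_max(f, best, d) for d in (-1.0, 1.0)]
        z_hill, height = min(cands, key=lambda t: t[1])
        if z_hill is None:
            break
        # Step just past the crest, away from the current basin, and descend.
        z_new = local_descent(f, z_hill + math.copysign(0.05, z_hill - best))
        if f(z_new) < f(best):
            best = z_new
    return best

# Multimodal test function: local minimum near z ≈ 1.54, global near z ≈ -0.51.
f = lambda z: math.sin(3.0 * z) + 0.1 * z * z
z_star = hill_method(f, 1.0)   # starts in the non-global basin
```

Starting from z0 = 1.0, the first descent lands in the local basin near z ≈ 1.54; the withdrawal phase then crosses the lower crest near z ≈ 0.56 and the next descent reaches the global minimizer near z ≈ -0.51.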
The conjugate gradient method is implemented using a sinusoidal space transformation known as the Gernet-Valentine transformation [10, 23, 8]. The precomputed gradient of the function is transformed component-wise to ensure that the variations satisfy the direct constraints.
A conjugate direction can be chosen according to the Polak-Polyak-Ribiere method [2, 16]. The variation step length is obtained by solving min_{α ≥ 0} f(z(α)) = f(z^{k+1}), where the variation is z(α) = (1/2)(z_g − z_l) cos(arcsin((2z^k − z_l − z_g)/(z_g − z_l)) + αq^k). It is well known that this version of the conjugate gradient algorithm has some shortcomings associated with the "stick to the border" effect; it can be eliminated by periodically running the reduced gradient algorithm described above.
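The sinusoidal change of variables described above keeps every iterate inside the box z_l ≤ z ≤ z_g: substituting z = (z_g + z_l)/2 + ((z_g − z_l)/2)·sin(y) makes any unconstrained step in y map to a feasible z, and by the chain rule dz/dy = (1/2)(z_g − z_l)·cos(arcsin((2z − z_l − z_g)/(z_g − z_l))), which is the component-wise gradient scaling factor appearing in the step formula. A minimal sketch of the generic sinusoidal substitution (we do not claim this is the exact Gernet-Valentine form):

```python
import math

def to_box(y, zl, zg):
    """Map unconstrained y to z in [zl, zg] via the sinusoidal substitution
    z = (zg + zl)/2 + (zg - zl)/2 * sin(y)."""
    return 0.5 * (zg + zl) + 0.5 * (zg - zl) * math.sin(y)

def from_box(z, zl, zg):
    """Inverse map on the principal branch: y = arcsin((2z - zl - zg)/(zg - zl))."""
    return math.asin((2.0 * z - zl - zg) / (zg - zl))

zl, zg = 0.0, 2.0
z = 1.5
y = from_box(z, zl, zg)
assert abs(to_box(y, zl, zg) - z) < 1e-12       # round trip recovers z
assert zl <= to_box(y + 10.0, zl, zg) <= zg     # any y-step stays feasible
```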
The proposed "Hill method" can be used for solving various nonconvex optimization problems, not only Malfatti's generalized problem but also applied problems such as packing problems, transportation problems, and inventory control problems; see, e.g., [19, 20, 17, 18].
4. Numerical Results. The proposed algorithm was tested on the generalized Malfatti problems described below. The algorithm was coded in the C language. Problem (4)-(7) was solved numerically for the following feasible sets and given numbers of circles.
Test Problem 1. The set D is given by:
The solution for K = 3 is f*_3 = 48.5424, and the circle parameters are presented in Table 1.
Table 3. Test Problem 1 for K = 5.

Test Problem 2
The set D is defined as:
The problem was solved for three, four, and five circles. Table 4 shows that the greedy algorithm [15] works correctly for Problem 2 with K = 3, 4, 5. The computed best values of the total area are f*_3 = 46.2041, f*_4 = 46.9827, f*_5 = 47.7479. The numerical results for Problem 2 are shown in Table 4 and in Fig. 2.
Table 4. Test Problem 2 for K = 3, 4, 5.