Outcome space algorithm for generalized multiplicative problems and optimization over the efficient set

In this paper, an algorithm of the branch and bound type in outcome space is proposed for solving a global optimization problem that includes generalized multiplicative problems as a special case. As an application, we solve the problem of optimizing over the efficient set of a bicriteria concave maximization problem. Preliminary computational experiments show that this algorithm works well for problems in which the dimension of the decision space is fairly large.


1. Introduction. The problem of central interest in this paper is

max Φ(x) = ϕ(f(x)) s.t. x ∈ X, (GP)

where f(x) = (f_1(x), f_2(x)), f_1, f_2 are two concave functions defined on R^n, X ⊂ R^n is a nonempty, compact convex set, and ϕ(y_1, y_2) is continuous and increasing in the sense that ϕ(y^1) > ϕ(y^2) whenever y^1 ≥ y^2, y^1 ≠ y^2. An application of Problem (GP) follows from the observation that the Generalized Concave Multiplicative Programming problem

max g_0(x) + ∏_{i=1}^k g_i(x) s.t. x ∈ X, (GCMP)

where k ≥ 2, X is as in Problem (GP), and the functions g_i are concave on R^n (i = 0, 1, . . . , k) with g_i(x) > 0 for all x ∈ X, i = 1, 2, . . . , k, is a special case of Problem (GP). Indeed, in Problem (GP), let f_1(x) = g_0(x), f_2(x) = ∑_{i=1}^k ln g_i(x) and ϕ(y_1, y_2) = y_1 + e^{y_2}; then ϕ(f(x)) = g_0(x) + ∏_{i=1}^k g_i(x), and f_2 is concave since each ln g_i is the composition of the increasing concave function ln with the positive concave function g_i. A direct reformulation of Problem (GP) as an outcome space problem is

max ϕ(y) s.t. y ∈ Y := f(X) = {(f_1(x), f_2(x)) | x ∈ X}. (OP_1)

By the definition, it is simple to prove that if y* is a global optimal solution to Problem (OP_1), then any x* ∈ X such that f(x*) ≥ y* is a global optimal solution to Problem (GP).
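As a concrete illustration of this reduction (a minimal sketch with hypothetical data: g_0, g_1, g_2 below are simple concave functions, positive where required, and are not taken from the paper), the composite objective of (GP) can be assembled from a (GCMP) instance as follows; the resulting phi and f are exactly the objects the algorithm of Section 4 manipulates.

import numpy as np

# Hypothetical (GCMP) data with k = 2 on X = [0, 1]^2:
# g_0 concave (any sign), g_1, g_2 concave and positive on X.
g0 = lambda x: -np.sum((x - 0.5)**2)
g = [lambda x: 2.0 - x[0]**2, lambda x: 3.0 - x[1]**2]

# Reduction to (GP): f_1 = g_0, f_2 = sum_i ln g_i, phi(y) = y_1 + e^{y_2}.
f = lambda x: np.array([g0(x), sum(np.log(gi(x)) for gi in g)])
phi = lambda y: y[0] + np.exp(y[1])

# Sanity check of the identity phi(f(x)) = g_0(x) + prod_i g_i(x).
x = np.array([0.2, 0.7])
assert np.isclose(phi(f(x)), g0(x) + np.prod([gi(x) for gi in g]))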
To globally solve Problem (GP), we present an algorithm of branch and bound type in the outcome space R^2 for solving a problem (OP_2) that is equivalent to Problem (OP_1). The proposed algorithm is built on the relationship between the global optimal solutions to Problem (OP_1) and the efficient set of the outcome set Y. As an application, we solve the problem

max Φ(x) = ϕ(f(x)) s.t. x ∈ X_E, (P_{X_E})

where X_E is the efficient set of the bicriteria programming problem

max f(x) = (f_1(x), f_2(x)) s.t. x ∈ X, (BOP)

with ϕ, f_1, f_2 and X as in Problem (GP). Problem (P_{X_E}) arises in a number of common situations and has attracted the attention of many researchers (see e.g. [2,3,5,6,7,9,10,11,14,18,19,21]); see [6] for a detailed discussion of Problem (P_{X_E}). The paper is organized as follows. In Section 2, Problem (OP_1) is reformulated as a nonconvex optimization problem (OP_2) in the outcome space R^2. Section 3 presents the basic operations used to establish our branch and bound algorithm in Section 4. Section 5 shows that an optimal solution to Problem (P_{X_E}) can be obtained by solving Problem (OP_2). In the last section, preliminary computational experiments are reported; they show that our algorithm works well for problems in which the dimension of the decision space is fairly large.
2. Reformulation in the outcome space. Let a^1, a^2 ∈ R^2. As a matter of notation, we write a^1 ≥ a^2 to mean that a^1_i ≥ a^2_i for i = 1, 2, and a^1 ≫ a^2 to mean that a^1_i > a^2_i for i = 1, 2. Let A be a nonempty subset of R^2. We denote the interior of A by int A and the boundary of A by ∂A. A point a^0 ∈ A is said to be an efficient point of A if there is no point a ∈ A such that a ≥ a^0 and a ≠ a^0, i.e., (a^0 + R^2_+) ∩ A = {a^0}. Analogously, a point a^0 ∈ A is said to be a weakly efficient point of A if there is no point a ∈ A such that a ≫ a^0, i.e., (a^0 + int R^2_+) ∩ A = ∅. Denote by E(A) and WE(A) the set of all efficient points of A and the set of all weakly efficient points of A, respectively. By definition, E(A) ⊆ WE(A). The following proposition states a property of the optimal solutions to Problem (OP_1).
Proposition 1. Any global optimal solution to Problem (OP_1) must belong to the efficient set E(Y).
Proof. Let y^0 be a global optimal solution to Problem (OP_1). Assume, to the contrary, that y^0 ∉ E(Y). By definition, there is y* ∈ Y such that y* ≥ y^0 and y* ≠ y^0. Since ϕ(y) is increasing, we have ϕ(y*) > ϕ(y^0). This contradicts the hypothesis that y^0 ∈ Argmax{ϕ(y) | y ∈ Y}, and the proof is complete.

Now, consider the set Z given by

Z = Y − R^2_+ = {z ∈ R^2 | z ≤ f(x) for some x ∈ X}.

It is clear that Z is a nonempty, full-dimensional closed convex set. Below are some properties of Z (see [13,17,22]) which will be useful in the sequel.

Proposition 2. (i) E(Z) is a nonempty compact subset of the boundary ∂Z; (ii) E(Y) = E(Z).
Combining Proposition 1 and Proposition 2(ii), to globally solve Problem (OP_1) the algorithm will instead solve the problem

max ϕ(z) s.t. z ∈ E(Z). (OP_2)

Recall that a point z^I = (z^I_1, z^I_2) ∈ R^2 is called the ideal point of the set Z if z^I_i = max{z_i | z ∈ Z}, i = 1, 2. Notice that the ideal point z^I need not belong to Z. If z^I ∈ Z, then E(Z) = {z^I} and z^I is the unique optimal solution to Problem (OP_2). We therefore assume henceforth that z^I ∉ Z. In this case, by Proposition 2(i), the efficient set E(Z) is a connected curve on the boundary of Z with starting efficient point z^start and end efficient point z^end (see Fig. 1). By the geometry of the efficient set E(Z) ⊂ R^2, we can see that the efficient extreme point z^start is the unique optimal solution of the problem

max{z_1 | z ∈ Z, z_2 = z^I_2}, (P_1)

and the efficient extreme point z^end is the unique optimal solution of the problem

max{z_2 | z ∈ Z, z_1 = z^I_1}. (P_2)

By definition, it is easily seen that for each i = 1, 2, Problem (P_i) has the explicit formulation

max{z_i | f(x) ≥ z, x ∈ X, z_j = z^I_j for j ≠ i}. (IP_i)

By the geometry, if z* ∈ (z^I − R^2_+) \ Z and p* is the projection of z* on Z, then p* is an efficient point of Z. It is easily seen that p* is the optimal solution of the convex programming problem

min ‖z − z*‖^2 s.t. z ∈ Z, (Pro(z*))

which has the explicit formulation

min ‖z − z*‖^2 s.t. f(x) ≥ z, x ∈ X.
Remark 1. According to [15], d* = z* − p* is a normal vector of the supporting hyperplane of Z at p*.
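For concreteness, the following sketch solves Problem (Pro(z*)) in its explicit form with SciPy's SLSQP solver and returns both the projection p* and the normal vector d* of Remark 1. It is only an illustration, not the authors' code: the box set X and the concave quadratics f_1, f_2 below are hypothetical stand-ins, and any smooth convex solver could replace SLSQP.

import numpy as np
from scipy.optimize import minimize

# Hypothetical data for illustration: X = {x in R^n : lb <= x <= ub} and
# f = (f_1, f_2) concave (two separable concave quadratics).
lb, ub = np.zeros(2), np.ones(2)
f = lambda x: np.array([1.0 - (x[0] - 0.3)**2, 1.0 - (x[1] - 0.7)**2])

def project_onto_Z(z_star):
    """Solve (Pro(z*)): min ||z - z*||^2 s.t. f(x) >= z, x in X.
    The variables are stacked as w = (x, z) in R^(n+2)."""
    n = len(lb)
    obj = lambda w: np.sum((w[n:] - z_star)**2)
    cons = [{'type': 'ineq', 'fun': lambda w: f(w[:n]) - w[n:]}]  # f(x) - z >= 0
    bnds = [(lb[i], ub[i]) for i in range(n)] + [(None, None)] * 2
    x0 = 0.5 * (lb + ub)
    w0 = np.concatenate([x0, f(x0)])
    res = minimize(obj, w0, method='SLSQP', bounds=bnds, constraints=cons)
    p_star = res.x[n:]                # projection of z* onto Z
    d_star = z_star - p_star          # normal vector at p* (Remark 1)
    return p_star, d_star

For instance, project_onto_Z(np.array([1.2, 1.2])) projects a point lying outside Z.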
In the remaining part of this paper, an algorithm of branch and bound type will be established to solve Problem (OP_2).

3. Basic operations.

3.1. Upper bound for a subproblem. Let z^ℓ, z^r ∈ E(Z) satisfy

z^r_1 > z^ℓ_1 and z^ℓ_2 > z^r_2, (2)

and let E ⊆ E(Z) be the unique curve connecting z^ℓ and z^r. Let d^ℓ and d^r be two normal vectors associated with z^ℓ and z^r, respectively. It is clear that if z^ℓ = z^start and z^r = z^end, then E ≡ E(Z) and we may take d^ℓ = (0, 1)^T, d^r = (1, 0)^T. Otherwise, E ⊂ E(Z). In this section, we present a procedure for computing an upper bound α = α(E) for the subproblem

max ϕ(z) s.t. z ∈ E. (SP(E))

There are the following three cases.

Case 1. The two vectors d^ℓ and d^r are dependent (see Fig. 2(a)). In this case, it is plain that E is the line segment joining the two points z^ℓ and z^r, denoted by [z^ℓ, z^r]. By definition, each z ∈ [z^ℓ, z^r] can be written as

z = z^r + t(z^ℓ − z^r), t ∈ [0, 1].

Then Problem (SP(E)) becomes the one-variable optimization problem

max{ϕ(z^r + t(z^ℓ − z^r)) | t ∈ [0, 1]}. (P^1_sub)

Let t_opt denote an optimal solution of this problem. Then ẑ = z^r + t_opt(z^ℓ − z^r) ∈ E(Z) is a feasible solution of Problem (OP_2) and can be used to improve the lower bound of Problem (OP_2). Moreover, α = α(E) = ϕ(ẑ) is an exact upper bound of Problem (SP(E)).
Case 2. The two vectors d^ℓ and d^r are independent and the linear system

⟨d^ℓ, z − z^ℓ⟩ = 0, ⟨d^r, z − z^r⟩ = 0 (3)

has a solution z* ∈ {z^ℓ, z^r} (see Fig. 2(b)). In this case, it is easily seen that E = [z^ℓ, z^r], and Problem (SP(E)) is solved as in Case 1.

Case 3. The two vectors d^ℓ and d^r are independent and the linear system (3) has a solution z* ∉ {z^ℓ, z^r} (see Fig. 3). Then z* ∈ (z^I − R^2_+) \ Z. Let S be the 2-simplex with the three vertices z^ℓ, z^r and z*. It is clear that E ⊂ S. Let ẑ be an optimal solution of the relaxation problem

max ϕ(z) s.t. z ∈ S. (RP(E))

Then α = α(E) = ϕ(ẑ) is an upper bound of Problem (SP(E)).

Remark 2. By Proposition 1, any global optimal solution to Problem (RP(E)) must belong to the efficient set E(S) of the simplex S. By the geometry, it is easy to check that the weakly efficient set of S is

WE(S) = [z^ℓ, z*] ∪ [z*, z^r].

By definition, each z ∈ [z^ℓ, z*] ∪ [z*, z^r] can be written as

z = z* + t(z^ℓ − z*) or z = z^r + t(z* − z^r), t ∈ [0, 1].

Then Problem (RP(E)) becomes the one-variable optimization problem

max{ϕ(z* + t(z^ℓ − z*)), ϕ(z^r + t(z* − z^r)) | t ∈ [0, 1]}. (P^2_sub)

The procedure for computing the upper bound for subproblem (SP(E)) can be described as follows.

Procedure Solve(RP(E))
Input: Two efficient points z^ℓ, z^r satisfying (2) and two associated normal vectors d^ℓ, d^r.
Step 1. If d^ℓ = t d^r for some t > 0, then go to Step 3; otherwise, find the solution z* of the linear system (3).
Step 2. If z* ∈ {z^ℓ, z^r}, then go to Step 3; otherwise, set id = 1, solve Problem (P^2_sub) to obtain the optimal solution t_opt and the corresponding point ẑ ∈ WE(S), and stop with α = ϕ(ẑ).
Step 3. Set id = 0. Solve Problem (P^1_sub) to obtain the optimal solution t_opt, let ẑ = z^r + t_opt(z^ℓ − z^r), and stop with α = ϕ(ẑ).
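The procedure admits a compact implementation; the sketch below (illustrative, with the same caveats as before) solves the one-variable problems (P^1_sub) and (P^2_sub) with SciPy's bounded scalar search. Note that this search is guaranteed to find the global maximizer only when the restriction of ϕ to a segment is unimodal; for a general increasing ϕ, a global one-dimensional method should be substituted. Inputs are NumPy arrays, and z_l, z_r play the roles of z^ℓ, z^r.

import numpy as np
from scipy.optimize import minimize_scalar

def max_on_segment(phi, a, b):
    """Maximize phi over the segment z(t) = b + t*(a - b), t in [0, 1]."""
    res = minimize_scalar(lambda t: -phi(b + t * (a - b)),
                          bounds=(0.0, 1.0), method='bounded')
    z_hat = b + res.x * (a - b)
    return z_hat, -res.fun

def solve_RP(phi, z_l, z_r, d_l, d_r, tol=1e-9):
    """Sketch of Procedure Solve(RP(E)): returns (id, z_hat, alpha).
    id = 0: E = [z_l, z_r] and alpha is exact (Cases 1 and 2);
    id = 1: alpha comes from the simplex relaxation (Case 3)."""
    det = d_l[0] * d_r[1] - d_l[1] * d_r[0]
    if abs(det) > tol:                           # Step 1: normals independent
        # z* solves system (3): <d_l, z - z_l> = 0 and <d_r, z - z_r> = 0.
        A = np.vstack([d_l, d_r])
        z_star = np.linalg.solve(A, np.array([d_l @ z_l, d_r @ z_r]))
        if min(np.linalg.norm(z_star - z_l),
               np.linalg.norm(z_star - z_r)) > tol:
            # Step 2, Case 3: maximize phi over WE(S) = [z_l, z*] U [z*, z_r].
            cands = [max_on_segment(phi, z_l, z_star),
                     max_on_segment(phi, z_star, z_r)]
            z_hat, alpha = max(cands, key=lambda c: c[1])       # (P2_sub)
            return 1, z_hat, alpha
    # Step 3, Cases 1 and 2: E is the segment [z_l, z_r].        (P1_sub)
    z_hat, alpha = max_on_segment(phi, z_l, z_r)
    return 0, z_hat, alpha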

3.2. Partitioning and branching. Let Q be a subset of R^2. Let us recall that a collection {Q_1, . . . , Q_m} of subsets of Q is said to be a partition of Q if Q = ∪_{i=1}^m Q_i and Q_i ∩ Q_j = ∂Q_i ∩ ∂Q_j for i ≠ j. At the beginning of iteration k of the algorithm, we have available a partition D_k of the efficient set E(Z). Denote by R_k the set consisting of the elements of the partition D_k that cannot yet be excluded from containing a global optimal solution for Problem (OP_2). As usual, we find E_k ∈ R_k, the unique efficient curve connecting z^ℓ_k ∈ E(Z) and z^r_k ∈ E(Z), such that

α(E_k) = max{α(E) | E ∈ R_k},

where α(E) is the upper bound for Problem (SP(E)). Then α_k = α(E_k) is an upper bound for Problem (OP_2). Let ẑ_k be the optimal solution for Problem (RP(E)) with E = E_k. By construction, Problem (RP(E)) with E = E_k belongs to the third case, i.e. id = 1 and ẑ_k ∈ (z^I − R^2_+) \ Z. By Remark 1, the optimal solution p̄_k to Problem (Pro(z*)) with z* = ẑ_k is an efficient point of Z, and d_k = ẑ_k − p̄_k ≠ 0 is a normal vector of the supporting hyperplane of Z at p̄_k (see Fig. 4). The point p̄_k can then be used to improve the lower bound of Problem (OP_2). Let E_{k1} ⊂ E(Z) be the efficient curve connecting z^ℓ_k and p̄_k, and E_{k2} ⊂ E(Z) be the efficient curve connecting p̄_k and z^r_k. Then

E_k = E_{k1} ∪ E_{k2},

and the point p̄_k is called a bisection point for E_k. Let

D_{k+1} = (D_k \ {E_k}) ∪ {E_{k1}, E_{k2}}.

Then D_{k+1} is a partition of E(Z) that is a refinement of the partition D_k.
4. The algorithm. Let ε > 0 be a given sufficiently small number and let z_opt ∈ E(Z). It is clear that ϕ(z_opt) is a lower bound of Problem (OP_2). The point z_opt is called an ε-optimal solution to Problem (OP_2) if ϕ(z_opt) + ε ≥ ϕ(z) for all z ∈ E(Z). Using the notation and basic operations given in Section 3, the branch and bound algorithm in outcome space for solving Problem (OP_2) may be stated as follows.
Algorithm OS
Initialization step.
(0.1) Solve problem (IP_i), i = 1, 2, to find the starting point z^start and the end point z^end of the efficient set E(Z).
(0.2) Set β_0 = max{ϕ(z^start), ϕ(z^end)} (the currently best lower bound) and take z̃_0 ∈ {z^start, z^end} with ϕ(z̃_0) = β_0 (the currently best feasible solution).
(0.3) Solve Problem (RP(E_0)), where E_0 is the efficient curve connecting the two points z^ℓ = z^start and z^r = z^end, to obtain the optimal solution ẑ_0. Set α(E_0) = ϕ(ẑ_0), D_0 = R_0 = {E_0} and k := 0.
Step 1. (Selection)
(1.1) Find E_k ∈ R_k such that α(E_k) = max{α(E) | E ∈ R_k}, where E_k is the efficient curve connecting two efficient points z^ℓ_k and z^r_k.
(1.2) Let α_k := α(E_k) (the currently best upper bound).
Step 2. (Branching)
(2.1) Solve the problem (Pro(z*)) with z* = ẑ_k to obtain the projection p̄_k ∈ E(Z).
(2.2) If ϕ(p̄_k) > β_k, then set β_{k+1} = ϕ(p̄_k) (the currently best lower bound) and z̃_{k+1} = p̄_k (the currently best feasible solution); otherwise set β_{k+1} = β_k and z̃_{k+1} = z̃_k.
Step 3. (Bounding)
(3.1) For i = 1, 2, solve Problem (RP(E_{ki})) to obtain Id_i, the optimal solution ẑ_{ki} and the upper bound α(E_{ki}), where E_{k1} is the efficient curve connecting the two efficient points z^ℓ_k and p̄_k, and E_{k2} is the efficient curve connecting the two efficient points p̄_k and z^r_k. If Id_i = 0 and β_{k+1} < ϕ(ẑ_{ki}), then set β_{k+1} = ϕ(ẑ_{ki}) (the currently best lower bound) and z̃_{k+1} = ẑ_{ki} (the currently best feasible solution).
(3.2) Set R_{k+1} = {E ∈ (R_k \ {E_k}) ∪ {E_{k1}, E_{k2}} | α(E) − β_{k+1} > ε}.
Step 4. If R_{k+1} = ∅, then stop: z_opt = z̃_{k+1} is an ε-optimal solution to Problem (OP_2). Otherwise, set k := k + 1 and go to Step 1.
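Putting the pieces together, a skeleton of Algorithm OS in the same hypothetical setting might read as follows; it reuses solve_RP and project_onto_Z from the sketches above, and the data structures and names are ours, not the paper's.

def algorithm_OS(phi, z_start, z_end, eps=1e-6):
    """Branch and bound skeleton of Algorithm OS.  Each element of R is a
    tuple (z_l, z_r, d_l, d_r, id, z_hat, alpha) describing a curve E."""
    d_s, d_e = np.array([0.0, 1.0]), np.array([1.0, 0.0])
    beta = max(phi(z_start), phi(z_end))        # currently best lower bound
    z_opt = z_start if phi(z_start) >= phi(z_end) else z_end
    idr, z_hat, alpha = solve_RP(phi, z_start, z_end, d_s, d_e)
    if idr == 0 and alpha > beta:               # exact bound: E(Z) is a segment
        beta, z_opt = alpha, z_hat
    R = [(z_start, z_end, d_s, d_e, idr, z_hat, alpha)]
    while R:
        # Step 1 (selection): take the element with the largest upper bound.
        i = max(range(len(R)), key=lambda j: R[j][6])
        z_l, z_r, d_l, d_r, idr, z_hat, alpha = R.pop(i)
        if alpha - beta <= eps:                 # Step 4: z_opt is eps-optimal
            break
        # Step 2 (branching): the bisection point is the projection of z_hat.
        p, d = project_onto_Z(z_hat)
        if phi(p) > beta:                       # improve the lower bound
            beta, z_opt = phi(p), p
        # Step 3 (bounding): bound the children E_k1, E_k2 and prune.
        for a, b, da, db in ((z_l, p, d_l, d), (p, z_r, d, d_r)):
            idi, zh, al = solve_RP(phi, a, b, da, db)
            if idi == 0 and phi(zh) > beta:     # zh is feasible when id = 0
                beta, z_opt = phi(zh), zh
            if al - beta > eps:                 # keep only promising curves
                R.append((a, b, da, db, idi, zh, al))
    return z_opt, beta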
Remark 3. By construction, when the algorithm terminates at Step 4, we obtain an ε-optimal solution z_opt to Problem (OP_2). Then a feasible solution x_opt to Problem (GP) that satisfies f(x_opt) ≥ z_opt is an approximate optimal solution to Problem (GP). It is easy to see that the point x_opt can be obtained by solving a convex programming problem with a linear objective function over the nonempty compact convex feasible set {x ∈ X | f(x) ≥ z_opt}.
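Under the same illustrative assumptions as before (box set X and the helpers above), this recovery step can be sketched as a single call to a convex solver; the choice c = (1, . . . , 1) of the linear objective is arbitrary, since any linear objective yields a feasible point of the set.

def recover_x(z_opt):
    """Remark 3: find x_opt with f(x_opt) >= z_opt by maximizing the linear
    objective <c, x> over the convex set {x in X : f(x) >= z_opt}."""
    n = len(lb)
    c = np.ones(n)                               # any linear objective works
    res = minimize(lambda x: -c @ x, 0.5 * (lb + ub),
                   method='SLSQP', bounds=list(zip(lb, ub)),
                   constraints=[{'type': 'ineq',
                                 'fun': lambda x: f(x) - z_opt}])
    return res.x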
Proposition 3. If the algorithm is infinite, then any cluster point z* of the generated sequence {ẑ_k} is a global optimal solution to Problem (OP_2).
Proof. Suppose that the algorithm is infinite and, without loss of generality, that ẑ_k → z*, where p̄_k is the projection of ẑ_k obtained at Step 2.1. By continuity of projections, there exists p* ∈ E(Z) satisfying p̄_k → p*. Now let us show that z* ≡ p*. On the contrary, suppose that z* ≠ p*. Then, for k sufficiently large, one has

⟨ẑ_k − p̄_k, ẑ_{k+1} − p̄_k⟩ > 0,

which means that ẑ_{k+1} ∉ {z ∈ z^I − R^2_+ | ⟨ẑ_k − p̄_k, z − p̄_k⟩ ≤ 0}, contrary to the selection of ẑ_{k+1} at Step 1.1 and Step 3.1.
Further, the sequence {z̃_k} generated by the algorithm belongs to the compact set E(Z) (Proposition 2), so it has a cluster point z̄ ∈ E(Z). There is no loss of generality in assuming that {z̃_k} is a subsequence of the above sequence {p̄_k}. Since p̄_k → z*, we have z̄ ≡ z*. Since ϕ is continuous, ϕ(z*) = lim_{k→∞} ϕ(ẑ_k) = lim_{k→∞} α(E_k) is an upper bound of Problem (OP_2). This means that z* is an optimal solution of Problem (OP_2). The proof is complete.
5. Application to optimization over the efficient set. Consider the problem

max Φ(x) = ϕ(f(x)) s.t. x ∈ X_E, (P_{X_E})

where X_E is the efficient set of the bicriteria programming problem

max f(x) = (f_1(x), f_2(x)) s.t. x ∈ X, (BOP)

with ϕ, f_1, f_2 and X as in Problem (GP). Recall that a point x^0 ∈ X is called an efficient solution of problem (BOP) if there is no point x ∈ X such that f(x) ≥ f(x^0) and f(x) ≠ f(x^0). As usual, Y_E = f(X_E) is called the outcome efficient set for problem (BOP). A direct outcome space formulation of Problem (P_{X_E}) is

max ϕ(y) s.t. y ∈ Y_E.

By the definition, it is easy to see that the set of all efficient points of Y equals the outcome efficient set of problem (BOP), i.e.

E(Y) = Y_E. (5)
The following proposition states the relationship between Problems (P_{X_E}) and (OP_2). Proposition 4. Problem (P_{X_E}) is equivalent to Problem (OP_2) in the following sense: if z* is a global optimal solution of Problem (OP_2), then any x* ∈ X such that f(x*) ≥ z* is a global optimal solution of Problem (P_{X_E}).
Proof. Suppose that z* is a global optimal solution of Problem (OP_2), i.e. ϕ(z*) ≥ ϕ(z) for all z ∈ E(Z). From Proposition 2(ii) and (5), ϕ(z*) ≥ ϕ(y) for all y ∈ Y_E. Let x* ∈ X satisfy f(x*) ≥ z*. Since f(x*) ∈ Y, z* ∈ E(Y) and f(x*) ≥ z*, we must have f(x*) = z*, and hence x* ∈ X_E. Moreover, combining the monotonicity of ϕ(z) with the definition of Y_E gives

ϕ(f(x*)) ≥ ϕ(z*) ≥ ϕ(f(x)) for all x ∈ X_E.

Therefore, x* must be a global optimal solution for Problem (P_{X_E}).
By Proposition 4, we can see that an optimal solution x* to Problem (P_{X_E}) can be obtained by finding an optimal solution z* to Problem (OP_2) and then, as in Remark 3, optimizing a linear objective function over the nonempty compact convex set {x ∈ X | f(x) ≥ z*}.
6. Illustrative example and computational experiments. Now we consider some examples to illustrate the algorithm. Example 1. Consider the following problem with the tolerance ε = 10^{-6}.
Since R_0 = {E_0}, where E_0 is the efficient curve connecting z^ℓ_0 = z^start and z^r_0 = z^end, we select E_0 for branching at Step 2. Set α_0 = α(E_0) = 24.7471.
Since β_0 > ϕ(p̄_0) = 16.4354, the currently best lower bound is unchanged: β_1 = β_0 and z̃_1 = z̃_0. Set D_1 = {E_{01}, E_{02}}, where E_{01} is the efficient curve connecting the two efficient points z^start and p̄_0, and E_{02} is the efficient curve connecting the two efficient points p̄_0 and z^end.
Since β_0 > ϕ(p̄_0) = 156.2253, the currently best lower bound is unchanged: β_1 = β_0 and z̃_1 = z̃_0. Set D_1 = {E_{01}, E_{02}}, where E_{01} is the efficient curve connecting the two efficient points z^start and p̄_0, and E_{02} is the efficient curve connecting the two efficient points p̄_0 and z^end. Step 4. Since R_1 ≠ ∅, set k := 1 and go to Step 1.
Since β_1 > ϕ(p̄_1) = 156.4646, the currently best lower bound is unchanged: β_2 = β_1. Set D_2 = {E_{21}, E_{22}}, where E_{21} is the efficient curve connecting the two efficient points z^ℓ_1 and p̄_1, and E_{22} is the efficient curve connecting the two efficient points p̄_1 and z^r_1.
Then the optimal solution to Problem (GP) is x̃_2 = (4.0000, 3.0000) and the ε-optimal value of Problem (GP) is ϕ(z̃_2) = 156.5000. This computational result coincides with the one in [1]. Furthermore, our algorithm terminated after only 2 iterations, while the algorithm in [1] terminated after 5 iterations.

In the next example, the function ϕ(y) is increasing on R^2_+ but neither quasiconcave nor quasiconvex on R^2_+. Let ε = 10^{-6}. At the initialization step, by solving the two convex problems (IP_i), i = 1, 2, we obtain the optimal solutions z^start and z^end. Since ϕ(z^start) = 2.5029 < ϕ(z^end) = 3.7476, set the currently best lower bound β_0 = ϕ(z^end) = 3.7476 and the currently best feasible solution z̃_0 = z^end.
Since β_0 > ϕ(p̄_0) = 3.7340, the currently best lower bound is unchanged: β_1 = β_0 = 3.7476 and z̃_1 = z̃_0. Set D_1 = {E_{01}, E_{02}}, where E_{01} is the efficient curve connecting the two efficient points z^start and p̄_0, and E_{02} is the efficient curve connecting the two efficient points p̄_0 and z^end.
The following tests were implemented in Matlab 2009a and performed on an HP Pavilion laptop (1.8 GHz CPU, 2 GB RAM). Example 4. Problem (GCMP) with randomly generated data is considered in two types (see [8]):

Type I: g_i(x) = ⟨α_i, x⟩ + β_i, where β_i = 1 − min_{x∈X} ⟨α_i, x⟩ for i = 1, . . . , k;

Type II: g_i(x) = ⟨α_i, x⟩ + x^T D_i x + β_i, where β_i = 1 − min_{x∈X} (⟨α_i, x⟩ + x^T D_i x) for i = 1, . . . , k.
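A possible generator for Type I instances is sketched below. The polytope X = {x ≥ 0 | Ax ≤ b} and the sampling ranges are our assumptions (the exact generator of [8] is not reproduced here); β_i is computed with SciPy's linprog exactly as in the formula above. Type II data would additionally draw a negative semidefinite matrix D_i (so that g_i stays concave) and subtract the minimum of the resulting concave quadratic.

import numpy as np
from scipy.optimize import linprog

def make_type_I(k, m, n, seed=0):
    """Random Type I instance of (GCMP): g_i(x) = <alpha_i, x> + beta_i with
    beta_i = 1 - min{<alpha_i, x> : x in X}, which guarantees g_i > 0 on X."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, size=(m, n))   # X = {x >= 0 : A x <= b} ...
    b = A @ np.ones(n)                       # ... nonempty and bounded
    alphas, betas = [], []
    for _ in range(k):
        alpha = rng.uniform(-1.0, 1.0, size=n)
        res = linprog(alpha, A_ub=A, b_ub=b, bounds=[(0, None)] * n)
        alphas.append(alpha)
        betas.append(1.0 - res.fun)          # beta_i = 1 - min <alpha_i, x>
    return A, b, alphas, betas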
For ε = 0.005, a set of 10 problems is solved for each choice of the parameters k, m and n. The numerical results are reported in Table 2, where the number of iterations and the computing time in seconds are denoted by #Iter and Time, respectively. We also compare the two types of problems: the linear case (Type I) and the convex case (Type II). The computational experiments show that the algorithm works well for problems with fairly large dimensions of the decision space.

7. Conclusion. A global optimization approach for maximizing the composition of an increasing function with a concave vector function was proposed in this paper. By using suitable transformations, the original problem was reformulated in the outcome space as a problem over the efficient curve, which can be solved by partitioning and branching. The algorithm utilizes the properties of the efficient curve and solves the relaxation problems efficiently by optimizing one-variable functions. As an application, we also solved the Generalized Concave Multiplicative Programming problem and the problem of optimizing over the efficient set of a bicriteria concave maximization problem. The proposed algorithm can be implemented easily with available optimization packages, and the numerical experiments have demonstrated its efficiency.