Perturbation of image and conjugate duality for vector optimization

This paper employs the image space approach to investigate the conjugate duality theory for general constrained vector optimization problems. We introduce the concepts of conjugate map and subdifferential by using two types of maximums, and we construct the conjugate dual problems via a perturbation method. Moreover, a separation condition is proposed by means of vector weak separation functions, and it is proved to be a new sufficient condition ensuring the strong duality theorem. This separation condition is different from the classical regularity conditions in the literature. Finally, an application to a nonconvex multi-objective optimization problem is given to verify our main results.


1. Introduction. Image space analysis (ISA for short) was initiated in [9] and further developed in [3,6,7,12,19,22,24]. Since then, it has been extensively used as a preliminary and auxiliary tool for investigating various mathematical topics such as constrained extremum problems and variational inequalities. More generally, it can be used for any problem that can be expressed in terms of the infeasibility of a parametric system. The infeasibility of a parametric system is characterized by the disjunction of two suitable subsets in the image space. Separation plays a key role in the image space. By virtue of separation, several theoretical aspects can be developed, such as alternative and saddle point optimality conditions [4,9,12-16,26], scalarization [5], duality [8,27,28], regularity [6,20,21], Courant penalty methods [18], and gap functions and error bounds [25].
Duality theories for constrained extremum problems and vector optimization problems based on the image space approach have attracted much attention [8,11,12,15,17,27,28]. Giannessi et al. [12,15] defined a family of vector Lagrangian dual problems and established a strong duality theorem. Besides the Lagrangian duality of vector optimization problems in the image space, conjugate duality is also an important aspect. The core technique of conjugate duality is the perturbation method.
Bot [1] constructed conjugate dual problems for general scalar optimization problems. Under convexity assumptions, the stability of the primal problem (or, equivalently, the subdifferentiability of the perturbation function at the origin) was proven to be a sufficient condition for the strong duality theorem. Therefore, it becomes a main issue to formulate sufficient conditions, called regularity conditions, which ensure that the primal problem is stable. For unconstrained vector optimization problems, Tanino [23] introduced new concepts of conjugate map and subdifferential for a set-valued map. Under convexity hypotheses, he proved that a regularity condition, which resembles a generalized interior-point condition, ensures stability.
Motivated by [1,12,15,23], we investigate conjugate duality theory for general vector optimization problems via the image space approach. We introduce conjugate maps, biconjugate maps and subdifferentials of a set-valued map. First, we define a family of perturbation problems and introduce the optimal value maps of these perturbation problems. Then, we construct the corresponding conjugate dual problems by applying the conjugate maps of the optimal value maps. We also define what it means for a set-valued map to be weakly subdifferentiable; this concept is obtained by weakening the subdifferentiability of a set-valued map used in [23]. The existing regularity conditions in the literature [1,2,23] are generally related to convexity assumptions. In order to establish the strong duality theorem for some nonconvex optimization problems, we construct a new condition based on the separation argument in the image space, which we call the separation condition. The separation condition is proved to imply that the optimal value map of a perturbation problem is weakly subdifferentiable. By virtue of this result, we further derive the strong duality theorem. In fact, the separation condition is proposed from a new point of view, which is completely different from the regularity conditions used in [1,2,23] and which reflects the main features of the image space approach.
The rest of this paper is organized as follows. In Section 2, we recall some notation, basic concepts and the main features of the image space approach. In Section 3, we introduce the concepts of conjugate maps, biconjugate maps and subdifferentials according to two classes of maximums. In Section 4, we construct the conjugate dual problems by the perturbation approach. Moreover, we use the separation condition to establish the strong duality theorem and illustrate it with a nonconvex example.

2. Preliminaries. Let X, Y and Z be finite dimensional linear topological spaces, where Y is partially ordered by a pointed closed convex cone C. If there is no confusion, we always denote by 0 the zero element of any finite dimensional linear topological space. We set C_0 := C \ {0} and denote by int C the interior of C. We always use the following symbols, which describe three order relations: for any a, b ∈ Y,

a ≤_C b ⟺ b − a ∈ C,   a ≤_{C_0} b ⟺ b − a ∈ C_0,   a ≤_{int C} b ⟺ b − a ∈ int C,

and the relations ≥_C, ≥_{C_0}, ≥_{int C} are understood accordingly. In this paper, we pay attention to the following constrained vector optimization problem (P):

min f(x)   subject to   x ∈ S := {x ∈ X : g(x) ∈ −D},

where f : X → Y, g : X → Z and D ⊂ Z is a closed convex cone. Motivated by the definition of the scalar indicator function, we introduce the positive infinity +∞ and the negative infinity −∞ in the linear topological space Y. By convention, +∞ and −∞ satisfy the natural order and addition rules in Y ∪ {+∞, −∞}. Then, for a nonempty set G ⊂ X, we define the vector-valued indicator function δ_G : X → Y ∪ {+∞} by δ_G(x) := 0 if x ∈ G and δ_G(x) := +∞ if x ∉ G. For a nonempty set A ⊂ Y, we denote by IMin A := {a ∈ A : (a − int C) ∩ A = ∅} and IIMin A := {a ∈ A : (a − C_0) ∩ A = ∅} the sets of weak minimal points and minimal points of A, respectively; the sets of weak maximal points IMax A and maximal points IIMax A are defined analogously. According to the concepts of efficient solutions and weak efficient solutions (see reference [10]), searching for a weak efficient solution of the vector optimization problem (P) means finding a feasible point x̄ ∈ S such that f(x̄) ∈ IMin (P), where IMin (P) := IMin{f(x) : x ∈ S}, and searching for an efficient solution of (P) means finding a feasible point x̄ ∈ S such that f(x̄) ∈ IIMin (P), where IIMin (P) := IIMin{f(x) : x ∈ S}.
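As a quick illustration of these order relations and of the vector-valued indicator function (the concrete choice Y = ℝ², C = ℝ²₊ is made only for this sketch and is not assumed elsewhere):

\[
a \le_C b \iff b - a \in \mathbb{R}^2_+, \qquad
a \le_{C_0} b \iff b - a \in \mathbb{R}^2_+ \setminus \{0\}, \qquad
a \le_{\operatorname{int} C} b \iff b_1 > a_1,\ b_2 > a_2,
\]
\[
\delta_G(x) =
\begin{cases}
(0,0), & x \in G,\\
+\infty, & x \notin G.
\end{cases}
\]

For instance, (1, 0) ≤_{C_0} (2, 0) holds, while (1, 0) ≤_{int C} (2, 0) fails because the second components coincide.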
Next, we recall the main features of image space analysis associated with (P). In general, for a given x̄ ∈ X, we denote by

K_x̄ := {(y, z) ∈ Y × Z : y = f(x̄) − f(x), z = −g(x), x ∈ X}

the image of the problem (P), while Y × Z is the image space. Denote

H_1 := int C × D   and   H_2 := C_0 × D.

Considering K_x̄ and H_1, we have

x̄ is a weak efficient solution of (P)   (1)

if and only if

H_1 ∩ K_x̄ = ∅.   (2)

Similarly, x̄ is an efficient solution of (P) if and only if H_2 ∩ K_x̄ = ∅. Obviously, the formula H_i ∩ K_x̄ = ∅ (i ∈ {1, 2}) gives a geometric way to characterize the optimality of a feasible point x̄.
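The following minimal sketch shows how the image set works in the simplest scalar setting; the data X = Y = Z = ℝ, C = D = ℝ₊, f(x) = −x, g(x) = x − 1 and x̄ = 1 are chosen only for illustration and are not taken from the paper:

\[
S = \{x \in \mathbb{R} : x - 1 \le 0\}, \qquad
K_{\bar{x}} = \{(f(\bar{x}) - f(x),\, -g(x)) : x \in \mathbb{R}\}
            = \{(x - 1,\ 1 - x) : x \in \mathbb{R}\},
\]
\[
H_1 = \operatorname{int} C \times D = (0, +\infty) \times [0, +\infty).
\]

A point of K_x̄ can lie in H_1 only if x − 1 > 0 and 1 − x ≥ 0 simultaneously, which is impossible; hence K_x̄ ∩ H_1 = ∅, and x̄ = 1 is indeed a minimizer of −x over S, in accordance with the equivalence of (1) and (2).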
3. Conjugate maps and subgradients. Conjugate maps, biconjugate maps, subgradients and subdifferentials have been proposed in the literature and are important concepts in the analysis of set-valued maps. Let L(X, Y) be the space of all linear operators mapping X into Y. Since X and Y are finite dimensional spaces, every linear operator in L(X, Y) is continuous. L(X, Y) is regarded as a dual space of X with respect to Y. For x ∈ X and T ∈ L(X, Y), Tx represents the value of T at x. Also, we take F to be a set-valued map from X to Y. Naturally, a vector-valued function f from X to Y can be regarded as a set-valued map from X to Y such that each x is mapped to the singleton {f(x)}.
Definition 3.1. The set-valued map F* from L(X, Y) to Y defined by

F*(T) := IMax ⋃_{x∈X} [Tx − F(x)], T ∈ L(X, Y), where Tx − F(x) := {Tx − y : y ∈ F(x)},

is called the conjugate map of F in the sense of weak maximal points. Furthermore, the set-valued map F** from X to Y defined by

F**(x) := IMax ⋃_{T∈L(X,Y)} [Tx − F*(T)], x ∈ X,

is called the biconjugate map of F in the sense of weak maximal points. The conjugate map and the biconjugate map of F in the sense of maximal points are defined analogously by changing IMax into IIMax. Moreover, we recall the definitions of subgradients and subdifferentials.

Definition 3.2. Let x̄ ∈ X and ŷ ∈ F(x̄). An element T ∈ L(X, Y) is said to be a subgradient of F at (x̄; ŷ) in the sense of weak maximal points or maximal points if

T x̄ − ŷ ∈ IMax ⋃_{x∈X} [Tx − F(x)]   or   T x̄ − ŷ ∈ IIMax ⋃_{x∈X} [Tx − F(x)],

respectively. The set of all subgradients of F at (x̄; ŷ) in the sense of weak maximal points or maximal points is called the corresponding subdifferential of F at (x̄; ŷ) and is denoted by ∂_I F(x̄; ŷ) or ∂_II F(x̄; ŷ), respectively. If there exists ŷ ∈ F(x̄) such that the corresponding subdifferential is nonempty, then F is said to be weakly subdifferentiable at x̄ in the sense of weak maximal points or maximal points.
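To connect these notions with the classical ones, consider the scalar case Y = ℝ, C = ℝ₊, in which IMax and IIMax both reduce to the ordinary maximum (assumed here to be attained); the following worked computation with f(x) = x² is only an illustration, not part of the paper's development:

\[
F^*(t) = \max_{x \in \mathbb{R}} \,[t x - x^2] = \frac{t^2}{4}, \qquad
t \in \partial F(\bar{x}; f(\bar{x})) \iff t\bar{x} - \bar{x}^2 = \max_{x \in \mathbb{R}} \,[t x - x^2] \iff t = 2\bar{x},
\]

which is exactly the classical Fenchel conjugate of x² and the classical subgradient (here the derivative) at x̄, so the set-valued conjugate map and subdifferential extend the usual scalar objects.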
4. Conjugate duality. In this section, we define the perturbation functions and the corresponding optimal value maps and optimal solution maps. We analyze some properties of the optimal value maps based on the perturbation of the image. Moreover, we formulate the perturbation problems of the primal problem (P). By virtue of the optimal value maps of the perturbation problems, we construct the dual problem of the primal problem (P). Without convexity assumptions, the strong duality theorem is derived under the separation condition, which ensures the disjunction of the two suitable subsets K_x̄ and H_i (i ∈ {1, 2}) in the image space.
Problem (P) has the following equivalent form:

min f(x) + δ_S(x), x ∈ X.

We denote F(x) := f(x) + δ_S(x) for convenience. As we know, searching for a weak efficient solution means finding a weak minimal point of the set f(S), and searching for an efficient solution means finding a minimal point of the set f(S); then we have IMin (P) = IMin f(S) and IIMin (P) = IIMin f(S). We introduce a perturbation parameter u ∈ Z in the feasible set S. The corresponding perturbation feasible sets S(u) are defined as

S(u) := {x ∈ X : g(x) ∈ u − D}, u ∈ Z,

and the perturbation function F_0 : X × Z → Y ∪ {+∞} is given by F_0(x, u) := f(x) + δ_{S(u)}(x). It can be easily observed that S(0) = S and F_0(x, 0) = F(x). Based on the perturbation function, we obtain a series of perturbation problems (P_u) associated with the perturbation parameter u ∈ Z, formulated by

(P_u)   min F_0(x, u), x ∈ X,

where the meaning of "min" contains two cases, as we have already taken into account for the primal problem (P); that is, (P_u) means to solve IMin{F_0(x, u) : x ∈ X} or IIMin{F_0(x, u) : x ∈ X}. Now we define the following optimal value maps for (P_u) in the sense of weak minimal points or minimal points, which are set-valued maps from Z to Y given by

W(u) := IMin{F_0(x, u) : x ∈ X}   and   V(u) := IIMin{F_0(x, u) : x ∈ X}.

The optimal solution map in the sense of weak minimal points or minimal points, which is a set-valued map from Z to X, is also defined by

Ψ(u) := {x ∈ S(u) : f(x) ∈ W(u)}   or   Ψ(u) := {x ∈ S(u) : f(x) ∈ V(u)}, respectively.

Then, the set of optimal values for the primal problem (P) in the sense of weak minimal points or minimal points is equal to W(0) or V(0). In fact, we calculate

W(0) = IMin{F_0(x, 0) : x ∈ X} = IMin{f(x) + δ_S(x) : x ∈ X} = IMin{f(x) : x ∈ S} = IMin (P).

Similarly, we obtain IIMin (P) = V(0). There are some properties of the optimal value maps. To this end, we define the perturbation image and the extended perturbation image for (P) associated with efficient solutions and weak efficient solutions. The relation between the optimal value maps and the extended perturbation image is stated in the following proposition.
(i): For x̄ ∈ X and any u ∈ Z, we have
If, in addition, x̄ is a weak efficient solution of (P), then
(ii): For x̄ ∈ X and any u ∈ Z, we have
If, in addition, x̄ is an efficient solution of (P), then
Proof. We prove the first statement; the second is similar. Since
Additionally, if x̄ is a weak efficient solution of (P), then f(x̄) ∈ W(0_Z) and
Both the efficient solutions and the weak efficient solutions of the perturbation problem (P_u) can be characterized by the perturbation extended image.
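For orientation, in the scalar case Y = ℝ, C = ℝ₊, Z = ℝ, D = ℝ₊, the optimal value map W(u) reduces to the classical perturbation function of the constraint g(x) ≤ u; the following worked instance, with f(x) = x² and g(x) = 1 − x, is only a sketch and is not the problem studied in Example 4.1:

\[
W(u) = \min\{x^2 : 1 - x \le u\} =
\begin{cases}
(1-u)^2, & u < 1,\\
0, & u \ge 1,
\end{cases}
\qquad
\Psi(u) =
\begin{cases}
\{1-u\}, & u < 1,\\
\{0\}, & u \ge 1,
\end{cases}
\]

so W(0) = 1 is the optimal value of the unperturbed problem, and enlarging u relaxes the feasible set and does not increase W(u).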
The proofs of the two statements are similar; we only prove the first one. In order to prove the necessity, we argue by contradiction, i.e., we suppose that (3) does not hold; then there exists
Now we prove the sufficiency. To this end, we also argue by contradiction, i.e.,
holds, or equivalently, y ⊀ ŷ holds. The proof is complete. Before deriving the strong duality theorem, we need to introduce a class of vector weak separation functions in the image space, which is a crucial tool to realize the separation between the two subsets K_x̄ and H_i (i ∈ {1, 2}).
In order to introduce a suitable class of vector weak separation functions in Definition 4.1, we need to extend the classical concept of the positive polar cone (also called the dual cone) to the vector case.
Definition 4.2. The vector positive polar cone of D with respect to C is defined as D*_C := {Λ ∈ L(Z, Y) : Λv ∈ C, ∀ v ∈ D}; similarly, the vector positive polar cone of C with respect to C is defined as C*_C := {Θ ∈ L(Y, Y) : Θu ∈ C, ∀ u ∈ C}. Based on Definition 4.2, we easily observe that linear vector separation functions form a class of vector weak separation functions corresponding to both weak efficient solutions and efficient solutions. We give a simple proof here. Let Ω := C*_C × D*_C and define the linear vector function ω : Y × Z × Ω → Y by ω(u, v; Θ, Λ) := Θu + Λv, Θ ∈ C*_C, Λ ∈ D*_C. Since Θ ∈ C*_C and Λ ∈ D*_C, we get Θu ≥_C 0 and Λv ≥_C 0 for any (u, v) ∈ H_1 = int C × D and any (u, v) ∈ H_2 = C_0 × D, according to (6) and (7). Then, (4) and (5) hold.
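Under Definition 4.2, concrete elements of a vector positive polar cone are easy to exhibit; for instance (an illustration only, with Y = ℝ², C = ℝ²₊, Z = ℝ² and D = ℝ²₊ assumed only for this sketch), every 2 × 2 matrix with nonnegative entries belongs to D*_C:

\[
\Lambda = \begin{pmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{pmatrix},\ \lambda_{ij} \ge 0
\ \Longrightarrow\
\Lambda v = \begin{pmatrix} \lambda_{11} v_1 + \lambda_{12} v_2 \\ \lambda_{21} v_1 + \lambda_{22} v_2 \end{pmatrix} \in \mathbb{R}^2_+
\quad \text{for every } v \in \mathbb{R}^2_+,
\]

so such matrices can serve as the operators Λ appearing in the linear vector separation functions above.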
In particular, if we take Θ = I ∈ C*_C, where I is the identity operator from Y to Y, then we obtain the following sufficient optimality conditions.
Theorem 4.2. (i) If there exists Λ ∈ D*_C such that

f(x̄) − f(x) − Λg(x) ∉ int C, ∀ x ∈ X,   (8)

then x̄ is a weak efficient solution of (P).
(ii) If there exists Λ ∈ D*_C such that

f(x̄) − f(x) − Λg(x) ∉ C_0, ∀ x ∈ X,   (9)

then x̄ is an efficient solution of (P).
It results from (10) that (11) holds. Thus, we obtain K_x̄ ∩ H_1 = ∅ due to (8) and (11). Then it follows from the equivalence between (1) and (2) that x̄ is a weak efficient solution of (P). In fact, from the proof above, condition (8) implies K_x̄ ∩ H_1 = ∅ and condition (9) implies K_x̄ ∩ H_2 = ∅. So conditions (8) and (9) ensure the disjunction of the two subsets K_x̄ and H_i (i ∈ {1, 2}) in the image space. For this reason, we call them separation conditions. In fact, the separation conditions are not restrictive. We give a nonconvex constrained multi-objective optimization problem satisfying such separation conditions to illustrate Theorem 4.2.
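In the scalar setting Y = Z = ℝ, C = D = ℝ₊ (chosen only for this minimal sketch), condition (8) reads: there exists λ ≥ 0 with

\[
f(\bar{x}) - f(x) - \lambda g(x) \le 0 \quad \text{for all } x \in X.
\]

If some x gave a point of K_x̄ in H_1, i.e. f(x̄) − f(x) > 0 and −g(x) ≥ 0, then f(x̄) − f(x) − λg(x) ≥ f(x̄) − f(x) > 0, contradicting the displayed inequality; hence K_x̄ ∩ H_1 = ∅ and x̄ is a global minimizer of f over S. This is the separation mechanism of the theorem in its simplest form.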

Now we pay attention to the following equations
Since −x² ≤ 0 for all x ∈ ℝ, it results that (13) holds. When −x² = 0, we have x = 0 and 1 − (1/16)^x + x² = 0. As a consequence, (13) and (14) show that the separation conditions (8) and (9) hold. Moreover, the set given in (12) is a vector positive polar cone of ℝ²₊ with respect to ℝ²₊, and if x̄ = 1, we obtain the analogous relations. Therefore, we have verified that M̄ satisfies (8) and (9). According to Theorem 4.2, x̄ = 1 is also both an efficient solution and a weak efficient solution. In fact, we can easily observe from Figure 1 that the points 0 and 1 are both efficient solutions and weak efficient solutions. Now, we prove a lemma which states that the optimal value maps for (P_u) are weakly subdifferentiable under the separation conditions. This lemma is the basis for establishing the strong duality theorem.
Lemma 4.1. (i) If there exists Λ̄ ∈ D*_C such that (15) holds, then W is weakly subdifferentiable at u = 0 in the sense of weak maximal points.
(ii) If there exists Λ̄ ∈ D*_C such that the corresponding separation condition holds, then V is weakly subdifferentiable at u = 0 in the sense of maximal points.
Proof. We just prove (i); the proof of (ii) is similar. If there exists Λ̄ ∈ D*_C such that (15) holds, then (16) follows. For any u ∈ Z and any x ∈ S(u), we have g(x) ∈ u − D, i.e., u − g(x) ∈ D. Since Λ̄ ∈ D*_C, by the definition of the vector positive polar cone, we get Λ̄(u − g(x)) ∈ C.
It follows from (16) and (17) that, for any u ∈ Z and any x ∈ S(u), (18) holds. We prove this by contradiction: suppose that there exist u ∈ Z and x ∈ S(u) such that (18) does not hold. Immediately, (17) and (19) yield a relation between f(x) − f(x̄) and Λ̄g(x) that contradicts (16).
Since we have proven that (18) holds for any u ∈ Z and any x ∈ S(u), (20) holds. It then follows from W(u) = IMin{f(x) : x ∈ S(u)} and (20) that (21) holds. Moreover, by Theorem 4.2, if there exists Λ̄ ∈ D*_C such that (15) holds, then x̄ is a weak efficient solution of (P), that is, f(x̄) ∈ IMin (P) = W(0). So (21) indicates (22). According to the definition of subgradient, (22) implies that −Λ̄ is a subgradient of W at (0; f(x̄)) in the sense of weak maximal points, namely −Λ̄ ∈ ∂_I W(0; f(x̄)). Therefore, ∂_I W(0) ≠ ∅, and we have proved that W is weakly subdifferentiable at u = 0 in the sense of weak maximal points.

Now, we reconsider Example 4.1 to verify Lemma 4.1. To avoid redundancy, we only discuss the case x̄ = 0. We have shown that the assumptions (i.e., the separation conditions) of Lemma 4.1 are satisfied for the problem given in Example 4.1. Let us verify that the conclusions of Lemma 4.1 are correct. Since x̄ = 0 is both a weak efficient solution and an efficient solution, as shown in Example 4.1, we have f(0) ∈ IMin (P) = W(0) and f(0) ∈ IIMin (P) = V(0), where 0 = (0, 0) ∈ ℝ². Taking the matrix Λ̄ as in (12), we calculate (23) and (24), which indicate (25). For convenience, we introduce the following condition (C_1):
From (23) and (25), we obtain
Since −x² < 0 for x ∈ ℝ \ {0}, and −x² = 0 and −u_1 − (1/16)^x + 1 = −u_1 for x = 0, we conclude that if the condition (C_1) holds, then −Λ̄0 − f(0) ∈ IIMax_{u∈ℝ²} [−Λ̄u − V(u)] and −Λ̄ ∈ ∂_II V(0; f(0)). Thus, we get ∂_II V(0) ≠ ∅. Moreover, the condition (C_1) is equivalent to the following condition (C_2):
So the remaining work is to prove that the condition (C_2) holds. For this purpose, we first calculate the optimal solution map Ψ(u) for all the cases such that u_1 < 0. Case 1: u_1 < 0, u_2 < 0; for S(u) ≠ ∅, there are four subcases. Then, from the above discussion, we can see that for any u ∈ ℝ² such that u_1 < 0, we have 0 ∉ Ψ(u). Thus, the condition (C_2) holds.

Now, we prove an important property, which states that a zero duality gap is equivalent to the weak subdifferentiability of the optimal value maps at u = 0. Proof. We just prove (i); the proof of (ii) is analogous. We first prove the sufficiency. If W is weakly subdifferentiable at u = 0 in the sense of weak maximal points, then there exist ŷ ∈ W(0) and T̂ ∈ ∂_I W(0; ŷ), namely,
Then ŷ ∈ W(0) and (28) indicate T̂ ∈ ∂_I W(0; ŷ), and so W is weakly subdifferentiable at u = 0 in the sense of weak maximal points. Finally, Lemma 4.1 and Proposition 4.3 immediately yield the strong duality theorem, which shows that the separation conditions are sufficient to ensure zero duality gaps.
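In the scalar case Y = ℝ, C = ℝ₊ (identifying the singleton W(u) with its unique element), the weak subdifferentiability in Lemma 4.1 and the zero duality gap in Proposition 4.3 take the familiar Lagrange multiplier form; this is a sketch under the scalar specialization used in the earlier illustrations, not a statement from the paper:

\[
-\bar{\lambda} \in \partial_I W(0; \hat{y})
\iff -\bar{\lambda}\cdot 0 - \hat{y} = \max_{u}\,[-\bar{\lambda} u - W(u)]
\iff W(u) \ge \hat{y} - \bar{\lambda} u \quad \text{for all } u,
\]

i.e. the affine function ŷ − λ̄u supports the perturbation function at u = 0, where ŷ = W(0). For the worked instance given after the perturbation problems (f(x) = x², g(x) = 1 − x), W(u) = (1 − u)² for u < 1 and W(u) = 0 otherwise, and λ̄ = 2 satisfies W(u) ≥ 1 − 2u for all u; hence −λ̄ is a subgradient of W at (0; W(0)), and 2 is precisely the classical Lagrange multiplier of the constraint 1 − x ≤ 0 at x̄ = 1.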
It is worth mentioning that the strong duality theorem above enables us to find optimal solutions and optimal values of the dual problems. We still consider the problem in Example 4.1. Since there exists Λ = [0, 0; 1, 0] such that the separation conditions (8) and (9) hold, the strong duality theorem can be applied to this problem.

5. Conclusions. The image space approach is a powerful tool for dealing with vector optimization problems, and conjugate duality is a classical duality method. To the best of our knowledge, there have been no related works investigating conjugate duality theory via this approach. Based on the features of the image space, we constructed a new condition, the separation condition, and proved that it is a sufficient condition for establishing the strong duality theorem without convexity assumptions. Furthermore, we illustrated our main results with a nonconvex multi-objective optimization problem.