SECOND-ORDER OPTIMALITY CONDITIONS FOR CONE CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

Abstract. The aim of this paper is to develop second-order necessary and second-order sufficient optimality conditions for cone constrained multi-objective optimization. First, we derive, for an abstract constrained multi-objective optimization problem, two basic necessary optimality theorems for weak efficient solutions and a second-order sufficient optimality theorem for efficient solutions. Second, based on the optimality results for the abstract problem, we demonstrate, for cone constrained multi-objective optimization problems, first-order and second-order necessary optimality conditions under Robinson constraint qualification, as well as second-order sufficient optimality conditions under upper second-order regularity of the conic constraint. Finally, using the optimality conditions obtained for cone constrained multi-objective optimization, we establish optimality conditions for polyhedral cone, second-order cone and semi-definite cone constrained multi-objective optimization problems.


1. Introduction. For a and b in ℝ^l, we use the following conventions: a ≧ b iff a_i ≥ b_i, i = 1, . . . , l; a ≥ b iff a ≧ b and a ≠ b; a > b iff a_i > b_i, i = 1, . . . , l.
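Operationally, these three orderings are simple componentwise tests; the sketch below (function names are ours, for illustration only) implements them with NumPy.

```python
import numpy as np

# The three componentwise orderings on R^l used throughout the paper.
# Function names are illustrative, not from the paper.

def weakly_dominates(a, b):
    """a >=_ b: every component of a is >= the corresponding component of b."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b))

def dominates(a, b):
    """a >= b: a weakly dominates b and a != b (at least one strict component)."""
    return weakly_dominates(a, b) and not np.array_equal(np.asarray(a), np.asarray(b))

def strictly_dominates(a, b):
    """a > b: every component of a is strictly greater."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a > b))
```

For example, `dominates([1, 2], [1, 2])` is false (no strict component), while `dominates([2, 2], [1, 2])` is true.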
We consider the following multi-objective optimization problem
(1)    min f(x)  s.t.  x ∈ Φ,
where f : ℝ^n → ℝ^l and Φ ⊂ ℝ^n is the feasible set.
Definition 1.1. A point x̄ ∈ Φ is said to be a locally efficient solution to Problem (1) if there is a neighbourhood V of x̄ such that there is no x ∈ Φ ∩ V with f(x) ≤ f(x̄).

Definition 1.2. A point x̄ ∈ Φ is said to be a locally weak efficient solution to Problem (1) if there is a neighbourhood V of x̄ such that there is no x ∈ Φ ∩ V with f(x) < f(x̄).
First-order necessary optimality conditions for multi-objective programming have been studied since the mid-seventies of the last century; see the interesting results in [8], [14], [2], [9]. These conditions also turn out to be sufficient when the problem satisfies suitable convexity assumptions; see [12] and [10]. Results on second-order necessary optimality conditions and second-order sufficient optimality conditions for multi-objective programming can be found in [7], [13], [1], [4] and [3]. For the abstract constrained optimization problem (1), the second-order tangent set from variational analysis (see [5]) is used to develop second-order necessary optimality conditions in [4] and second-order sufficient optimality conditions in [3].
The literature on optimality conditions for multi-objective optimization is enormous, and even a short summary of the most important results achieved would be far beyond our reach. Most works in the literature deal with a constraint set defined by a finite number of equality and inequality constraints, namely the case where Φ = {x ∈ ℝ^n : h(x) = 0, g(x) ≤ 0}. To our knowledge, however, there are few results on second-order optimality conditions for other cone constrained multi-objective optimization problems. For instance, there are no results about Problem (1) when Φ is defined by the second-order cone or the cone of positive semi-definite matrices.
In this paper, we consider the cone constrained multi-objective optimization problem
(2)    min f(x)  s.t.  G(x) ∈ K,
where K is a closed convex cone in a finite dimensional Hilbert space. Instead of using the Motzkin theorem of the alternative [11], as in most papers on first-order necessary optimality for multi-objective optimization, we use the duality theorem for convex optimization to develop both first-order and second-order necessary optimality conditions for cone constrained multi-objective optimization problems. As in cone constrained single-objective optimization, where the upper second-order regularity of a convex cone is required in "no-gap" second-order optimality conditions (see [5]), we also use this property of convex cones to derive second-order sufficient optimality conditions. The paper is organized as follows. In Section 2, for the abstract constrained multi-objective optimization problem (1), we demonstrate two basic necessary optimality theorems for weak efficient solutions and a second-order sufficient optimality theorem for efficient solutions. In Section 3, for the cone constrained multi-objective optimization problem (2), we use the duality theorem for convex optimization to establish first-order and second-order necessary optimality conditions under Robinson constraint qualification; under upper second-order regularity of the conic constraint, we demonstrate second-order sufficient optimality conditions. In Section 4, based on the variational geometry of the polyhedral cone, the second-order cone and the cone of positive semi-definite matrices, we obtain optimality conditions for polyhedral cone, second-order cone and semi-definite cone constrained multi-objective optimization problems. Section 5 draws a conclusion.
2. Basic theorems for optimality. In this section, we consider Problem (1), the multi-objective optimization problem with an abstract constraint set. For

SECOND-ORDER OPTIMALITY FOR CONIC MULTI-OBJECTIVE OPTIMIZATION 1043
developing necessary and sufficient optimality conditions, we need two concepts: tangent cone and second-order tangent set.
Definition 2.1. The tangent cone of Φ at x̄ ∈ Φ is defined as
T_Φ(x̄) = {d ∈ ℝ^n : ∃ t_k ↓ 0, d_k → d such that x̄ + t_k d_k ∈ Φ}.
Definition 2.2. For d ∈ T_Φ(x̄), the second-order tangent set of Φ at x̄ with respect to d is defined as
T²_Φ(x̄, d) = {w ∈ ℝ^n : ∃ t_k ↓ 0, w_k → w such that x̄ + t_k d + (1/2) t_k² w_k ∈ Φ}.
The following theorem is a basic result about first-order necessary optimality conditions for the multi-objective problem with an abstract constraint set. This theorem can be regarded as an alternative version of Corollary 3.1 in [4].
Theorem 2.3. Let x̄ ∈ Φ be a locally weak efficient solution to Problem (1) and let f : ℝ^n → ℝ^l be continuously differentiable around x̄. Then the following optimization problem
(3)    min_{d ∈ T_Φ(x̄)}  max_{1 ≤ i ≤ l}  ∇f_i(x̄)^T d
has zero optimal value.
Proof. By contradiction. Suppose that there exists a vector d ∈ T_Φ(x̄) such that max_{1≤i≤l} ∇f_i(x̄)^T d < 0. From the definition of the tangent cone T_Φ(x̄), there exist a sequence {t_k} ⊂ ℝ_+ and a sequence {d_k} ⊂ ℝ^n such that t_k ↓ 0, d_k → d and x̄ + t_k d_k ∈ Φ. Then, for j = 1, . . . , l,
f_j(x̄ + t_k d_k) = f_j(x̄) + t_k ∇f_j(x̄)^T d_k + o(t_k) < f_j(x̄)
when k is large enough, which contradicts the assumption that x̄ is a locally weak efficient solution to Problem (1).
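As a quick numerical sanity check of Theorem 2.3 (the toy instance and code are ours, not part of the paper's development): take Φ = ℝ²_+, f(x) = (x₁ − x₂, x₂) and x̄ = 0, which is locally weak efficient; then T_Φ(x̄) = ℝ²_+, and max_i ∇f_i(x̄)^T d should be non-negative for every tangent direction d, so that Problem (3) has zero optimal value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bi-objective instance: f(x) = (x1 - x2, x2) on Phi = R^2_+, x_bar = 0.
grads = np.array([[1.0, -1.0],   # grad f_1 at x_bar
                  [0.0,  1.0]])  # grad f_2 at x_bar

# Sample directions in the tangent cone T_Phi(x_bar) = R^2_+ and check that
# max_i <grad f_i(x_bar), d> >= 0, i.e. no tangent direction improves all objectives.
worst = np.inf
for _ in range(10000):
    d = rng.random(2)                      # d >= 0 componentwise
    worst = min(worst, np.max(grads @ d))
print(worst >= -1e-12)                     # empirical confirmation of zero optimal value
```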
Define the critical cone of Problem (1) at x̄ by
C(x̄) = {d ∈ T_Φ(x̄) : ∇f_i(x̄)^T d ≤ 0, i = 1, . . . , l}.
Remark 1. The critical cone is the solution set of Problem (3). Now we give the basic result about second-order necessary optimality conditions for Problem (1), in terms of the second-order tangent set and the second-order derivatives of f_i for i = 1, . . . , l. This theorem can be regarded as an alternative version of Theorem 3.1 in [4].
Theorem 2.4. Let x̄ ∈ Φ be a locally weak efficient solution to Problem (1) and let f : ℝ^n → ℝ^l be twice continuously differentiable around x̄. Then for any d ∈ C(x̄), the following optimization problem
(4)    min_{w ∈ T²_Φ(x̄, d)}  max_{j ∈ J(x̄, d)}  { ∇f_j(x̄)^T w + d^T ∇²f_j(x̄) d }
has non-negative optimal value, where J(x̄, d) = {i : ∇f_i(x̄)^T d = 0}.
Proof. By contradiction. Let d ∈ C(x̄) be given and suppose that there exists w ∈ T²_Φ(x̄, d) such that
(5)    max_{j ∈ J(x̄, d)} { ∇f_j(x̄)^T w + d^T ∇²f_j(x̄) d } < 0.
By the definition of T²_Φ(x̄, d), there exist t_k ↓ 0, w_k → w such that x̄ + t_k d + (1/2) t_k² w_k ∈ Φ. Since f_j is twice continuously differentiable around x̄, we obtain
f_j(x̄ + t_k d + (1/2) t_k² w_k) = f_j(x̄) + t_k ∇f_j(x̄)^T d + (1/2) t_k² (∇f_j(x̄)^T w_k + d^T ∇²f_j(x̄) d) + o(t_k²).
Thus, for j ∈ J(x̄, d), we obtain from (5) that f_j(x̄ + t_k d + (1/2) t_k² w_k) < f_j(x̄) when k is large enough; for j ∉ J(x̄, d), the same inequality follows from ∇f_j(x̄)^T d < 0. This contradicts the assumption that x̄ is a locally weak efficient solution to Problem (1).
Definition 2.5. Let x̄ ∈ Φ be a feasible solution for Problem (1) and let f : ℝ^n → ℝ^l be continuously differentiable around x̄. We say that x̄ is a weak stationary point for Problem (1) if the optimization problem (3) has zero optimal value.
For developing second-order sufficient optimality conditions, we need the upper second-order regularity of the constraint set; see [5].
Definition 2.6. We say that Φ is upper second-order regular at x̄ with respect to d ∈ T_Φ(x̄) if, for any sequence x_k = x̄ + t_k d + (1/2) t_k² r_k ∈ Φ with t_k ↓ 0 and t_k r_k → 0, it holds that dist(r_k, T²_Φ(x̄, d)) → 0 as k → ∞.
Now we give the second-order sufficient optimality conditions for an efficient solution to Problem (1) when Φ satisfies the upper second-order regularity property.
Theorem 2.7. Let x̄ ∈ Φ be a weak stationary point for Problem (1) and let f : ℝ^n → ℝ^l be twice continuously differentiable around x̄. Suppose that, for any d ∈ C(x̄) with d ≠ 0, Φ is upper second-order regular at x̄ with respect to d and
inf_{w ∈ T²_Φ(x̄, d)}  max_{j ∈ J(x̄, d)}  { ∇f_j(x̄)^T w + d^T ∇²f_j(x̄) d } > 0.
Then x̄ is a locally efficient solution to Problem (1).

Proof. By contradiction. Suppose that there exists a sequence {x_k} ⊂ Φ with x_k → x̄, x_k ≠ x̄ and f(x_k) ≤ f(x̄). Set t_k = ‖x_k − x̄‖ and d_k = (x_k − x̄)/t_k. Taking a subsequence if necessary, we may assume that d_k → d, where d ∈ C(x̄) and d ≠ 0. Since Φ is upper second-order regular at x̄ with respect to d, writing x_k = x̄ + t_k d + (1/2) t_k² r_k, we have dist(r_k, T²_Φ(x̄, d)) → 0. Expanding each f_j at x̄ along this sequence and letting β_0 > 0 denote the infimum in the second-order sufficient condition for this d, we obtain that 0 ≥ β_0, a contradiction.
3. Cone constrained multi-objective optimization. Let Y be a finite dimensional Hilbert space and K ⊂ Y be a closed convex cone. We consider the following cone constrained multi-objective optimization problem
(10)    min f(x)  s.t.  G(x) ∈ K,
where f : ℝ^n → ℝ^l and G : ℝ^n → Y. Problem (10) is a special case of Problem (1) with Φ = {x ∈ ℝ^n : G(x) ∈ K}. For developing optimality conditions for Problem (10), we need to calculate the tangent cone T_Φ(x̄). For this purpose, we use the Robinson constraint qualification at x̄, which is of the form
(11)    0 ∈ int {G(x̄) + DG(x̄)ℝ^n − K}.
The standard Lagrange function for Problem (10) is defined as
L(x, θ, λ) = Σ_{i=1}^{l} θ_i f_i(x) + ⟨λ, G(x)⟩,
and the set of Lagrange multipliers at x̄ as
Λ(x̄) = {(θ, λ) ∈ ℝ^l_+ × Y : Σ_{i=1}^{l} θ_i = 1, ∇_x L(x̄, θ, λ) = 0, λ ∈ N_K(G(x̄))}.
Now we demonstrate the following necessary optimality conditions for a locally weak efficient solution to Problem (10) under Robinson constraint qualification.
Theorem 3.1. Let x̄ ∈ Φ be a locally weak efficient solution to Problem (10). Assume that f and G are continuously differentiable around x̄ and Robinson constraint qualification (11) holds at x̄. Then the set of multipliers Λ(x̄) is non-empty and compact.
Proof. Since Robinson constraint qualification (11) holds at x̄, we have from Corollary 2.91 of [5] that
T_Φ(x̄) = {d ∈ ℝ^n : DG(x̄)d ∈ T_K(G(x̄))}.
Then we have from Theorem 2.3 that the following convex optimization problem
(12)    min z  s.t.  ∇f_i(x̄)^T d − z ≤ 0, i = 1, . . . , l,  DG(x̄)d ∈ T_K(G(x̄))
has zero optimal value. Since Robinson constraint qualification holds at x̄, we get
0 ∈ int {DG(x̄)ℝ^n − T_K(G(x̄))},
which implies that the Slater condition holds for Problem (12). Therefore, the duality gap between Problem (12) and its Lagrange dual is zero, and the solution set of the dual is non-empty and compact. It is not difficult to obtain the Lagrange dual of Problem (12) as
max 0  s.t.  (θ, λ) ∈ Λ(x̄).
Hence the set of multipliers Λ(x̄) is non-empty and compact.
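For intuition, the system defining Λ(x̄) can be solved directly on a scalar-constraint instance (the instance and code are ours, purely for illustration): min (x₁, x₂) s.t. x₁ + x₂ − 1 ∈ K = ℝ_+ at x̄ = (1/2, 1/2). Stationarity together with the normalization Σ θ_i = 1 is a small linear system, whose solution θ = (1/2, 1/2), λ = −1/2 indeed satisfies λ ∈ N_K(0) = ℝ_−.

```python
import numpy as np

# Instance (ours): f(x) = (x1, x2), G(x) = x1 + x2 - 1, K = R_+, x_bar = (0.5, 0.5).
grads_f = np.array([[1.0, 0.0],
                    [0.0, 1.0]])      # rows: grad f_i(x_bar)
grad_G = np.array([1.0, 1.0])         # DG(x_bar)

# Stationarity: theta_1 grad f_1 + theta_2 grad f_2 + lam * grad_G = 0,
# plus the normalization theta_1 + theta_2 = 1 (three unknowns, three equations).
A = np.column_stack([grads_f.T, grad_G])   # 2x3: columns grad f_1, grad f_2, grad_G
A = np.vstack([A, [1.0, 1.0, 0.0]])        # normalization row
sol = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
theta, lam = sol[:2], sol[2]
print(theta, lam)  # theta = (0.5, 0.5), lam = -0.5, and lam <= 0 lies in N_K(0) = R_-
```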
Actually, the compactness of the set of multipliers at a stationary point also implies Robinson constraint qualification. Combining this with Theorem 3.1, we obtain the following proposition.
Proposition 3.2. Let x̄ ∈ Φ be a weak stationary point for Problem (10) and let f and G be continuously differentiable around x̄. Then Robinson constraint qualification (11) holds at x̄ if and only if Λ(x̄) is non-empty and compact.
Proof. The necessity comes from Theorem 3.1; we only need to prove the sufficiency. Let (θ_0, λ_0) ∈ Λ(x̄) be any element. Suppose that Robinson constraint qualification fails at x̄. Then, for Z(x̄) := T_K(G(x̄)) − DG(x̄)ℝ^n, one has cl Z(x̄) ≠ Y, which implies that there exists a non-zero element λ̃ ∈ Z(x̄)°. The non-zero element λ̃ satisfies
DG(x̄)*λ̃ = 0,  λ̃ ∈ N_K(G(x̄)).
It is easy to check that (θ_0, λ_0 + t λ̃) ∈ Λ(x̄) for all t ∈ ℝ_+,

which contradicts the compactness of Λ(x̄).
Based on Theorem 2.4 and the expression of T²_Φ(x̄, d) when Robinson constraint qualification holds at x̄, we may demonstrate the following second-order necessary optimality conditions for Problem (10).
Theorem 3.3. Let x̄ ∈ Φ be a locally weak efficient solution to Problem (10). Assume that f and G are twice continuously differentiable around x̄ and Robinson constraint qualification (11) holds at x̄. Suppose that for any d ∈ C(x̄), T²_K(G(x̄), DG(x̄)d) is a non-empty convex set. Then
(i) the set of multipliers Λ(x̄) is non-empty and compact;
(ii) for any d ∈ C(x̄),
sup_{(θ,λ) ∈ Λ(x̄)} { d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_K(G(x̄), DG(x̄)d)) } ≥ 0,
where σ(·, S) denotes the support function of the set S.
Proof. Assertion (i) comes from Theorem 3.1. Now we prove Assertion (ii). Since Robinson constraint qualification (11) holds at x̄, we have from Proposition 3.33 of [5] that
T²_Φ(x̄, d) = {w ∈ ℝ^n : DG(x̄)w + D²G(x̄)(d, d) ∈ T²_K(G(x̄), DG(x̄)d)}.
Then we have from Theorem 2.4 that, for any d ∈ C(x̄), the following optimization problem
(13)    min_w  max_{j ∈ J(x̄, d)} { ∇f_j(x̄)^T w + d^T ∇²f_j(x̄) d }  s.t.  DG(x̄)w + D²G(x̄)(d, d) ∈ T²_K(G(x̄), DG(x̄)d)
has non-negative optimal value, where J(x̄, d) = {i : ∇f_i(x̄)^T d = 0}. It is obvious that Problem (13) can be reformulated as
(14)    min z  s.t.  ∇f_j(x̄)^T w + d^T ∇²f_j(x̄) d − z ≤ 0, j ∈ J(x̄, d),  DG(x̄)w + D²G(x̄)(d, d) ∈ T²_K(G(x̄), DG(x̄)d).
Since T²_K(G(x̄), DG(x̄)d) is a convex set, Problem (14) is a convex optimization problem. It is not difficult to check that the Slater condition holds for Problem (14), so the duality gap between Problem (14) and its conjugate dual is zero. Computing the conjugate dual of Problem (14) and using the non-negativity of its optimal value yields the inequality in (ii). The proof is completed.
Under the upper second-order regularity of K, like Theorem 3.86 in [5], we obtain the following second-order sufficient optimality theorem.
Theorem 3.4. Let x̄ ∈ Φ be a feasible point for Problem (10). Assume that f and G are twice continuously differentiable around x̄, Λ(x̄) ≠ ∅, and that for any d ∈ C(x̄), T²_K(G(x̄), DG(x̄)d) is a non-empty convex set and K is upper second-order regular at G(x̄) with respect to DG(x̄)d. Suppose that for any d ∈ C(x̄) with d ≠ 0,
(15)    sup_{(θ,λ) ∈ Λ(x̄)} { d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_K(G(x̄), DG(x̄)d)) } > 0.
Then x̄ is a locally efficient solution to Problem (10).
Proof. By contradiction. Suppose that there exists a sequence {x_k} ⊂ Φ with x_k → x̄, x_k ≠ x̄ and f(x_k) ≤ f(x̄). Set t_k = ‖x_k − x̄‖ and d_k = (x_k − x̄)/t_k. Taking a subsequence if necessary, we may assume that d_k → d. Then we can easily obtain that d ∈ T_Φ(x̄) and ∇f_i(x̄)^T d ≤ 0, i = 1, . . . , l. This implies d ∈ C(x̄) and d ≠ 0.
It follows from (15) that there exists ε_0 > 0 such that
(16)    d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_K(G(x̄), DG(x̄)d)) ≥ ε_0
for some (θ, λ) ∈ Λ(x̄). Define w_k = 2(x_k − x̄ − t_k d)/t_k²; then, using the Taylor expansion, we have
f_j(x_k) = f_j(x̄) + t_k ∇f_j(x̄)^T d + (1/2) t_k² (∇f_j(x̄)^T w_k + d^T ∇²f_j(x̄) d) + o(t_k²)
for j = 1, . . . , l, and
(17)    G(x_k) = G(x̄) + t_k DG(x̄)d + (1/2) t_k² (DG(x̄)w_k + D²G(x̄)(d, d)) + o(t_k²) ∈ K.
Since K is upper second-order regular at G(x̄) with respect to DG(x̄)d, we obtain from (17) that
dist(DG(x̄)w_k + D²G(x̄)(d, d), T²_K(G(x̄), DG(x̄)d)) → 0.
Thus, we have from (17) that there exists a sequence ε_k → 0 such that
(18)    ⟨λ, DG(x̄)w_k + D²G(x̄)(d, d)⟩ ≤ σ(λ, T²_K(G(x̄), DG(x̄)d)) + ε_k.
For (θ, λ) ∈ Λ(x̄), noting that d ∈ C(x̄) satisfies θ_i ∇f_i(x̄)^T d = 0 for i = 1, . . . , l, we obtain from (18) and the stationarity condition ∇_x L(x̄, θ, λ) = 0 that
Σ_i θ_i f_i(x_k) − Σ_i θ_i f_i(x̄) ≥ (1/2) t_k² ( d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_K(G(x̄), DG(x̄)d)) − ε_k ) + o(t_k²).
Since f(x_k) ≤ f(x̄) implies Σ_i θ_i f_i(x_k) ≤ Σ_i θ_i f_i(x̄), this contradicts (16) when k is large enough.
4. Three cone constrained multi-objective optimization problems. In this section, we consider three important cases: when K is a polyhedral cone, a finite product of second-order cones, and the cone of positive semi-definite symmetric matrices. It follows from Example 3.138 and Example 3.140 of [5] that the polyhedral cone and the cone of positive semi-definite symmetric matrices are upper second-order regular, and it is not difficult to verify that the second-order cone is also upper second-order regular. Thus we are able to develop not only necessary optimality conditions but also second-order sufficient optimality conditions for the corresponding three conic multi-objective optimization problems.

4.1. Polyhedral cone constrained multi-objective optimization. When K = {0_q} × ℝ^p_−, Problem (10) becomes the nonlinear multi-objective optimization problem
(19)    min f(x)  s.t.  h(x) = 0,  g(x) ≤ 0,
where f : ℝ^n → ℝ^l, h : ℝ^n → ℝ^q and g : ℝ^n → ℝ^p. The feasible set for Problem (19) is denoted by
Φ_nlo = {x ∈ ℝ^n : h(x) = 0, g(x) ≤ 0}.
The standard Lagrange function for Problem (19) is defined as
L(x, θ, µ, λ) = Σ_{i=1}^{l} θ_i f_i(x) + Σ_{j=1}^{q} µ_j h_j(x) + Σ_{i=1}^{p} λ_i g_i(x).

Robinson constraint qualification for K = {0_q} × ℝ^p_− at a feasible point reduces to the Mangasarian–Fromovitz constraint qualification (MFCQ). MFCQ is said to hold at the feasible point x̄ for Problem (19) if
(a) ∇h_j(x̄), j = 1, . . . , q, are linearly independent;
(b) there exists a vector d_0 ∈ ℝ^n such that Jh(x̄)d_0 = 0 and ∇g_i(x̄)^T d_0 < 0 for i ∈ I(x̄),
where I(x̄) = {i : g_i(x̄) = 0}. If MFCQ holds at a feasible point x̄ ∈ Φ_nlo, then the tangent cone of Φ_nlo at x̄ is expressed as
T_{Φ_nlo}(x̄) = {d ∈ ℝ^n : Jh(x̄)d = 0, ∇g_i(x̄)^T d ≤ 0, i ∈ I(x̄)}.
The set of multipliers for Problem (19) at x̄ is defined as
Λ_nlo(x̄) = {(θ, µ, λ) ∈ ℝ^l_+ × ℝ^q × ℝ^p_+ : Σ_{i=1}^{l} θ_i = 1, ∇_x L(x̄, θ, µ, λ) = 0, λ_i g_i(x̄) = 0, i = 1, . . . , p}.
Definition 4.1. We say a feasible point x̄ ∈ Φ_nlo is a stationary point for Problem (19) if Λ_nlo(x̄) ≠ ∅.
For a feasible point x̄ of Problem (19), the critical cone of Problem (19) at x̄ is defined as
C_nlo(x̄) = {d ∈ T_{Φ_nlo}(x̄) : ∇f_i(x̄)^T d ≤ 0, i = 1, . . . , l}.
If x̄ is a stationary point for Problem (19) and (θ, µ, λ) ∈ Λ_nlo(x̄), then every d ∈ C_nlo(x̄) satisfies θ_i ∇f_i(x̄)^T d = 0, i = 1, . . . , l, and λ_i ∇g_i(x̄)^T d = 0, i ∈ I(x̄).
Proposition 4.2. Let x̄ ∈ Φ_nlo be a locally weak efficient solution to Problem (19). Assume that f, h and g are twice continuously differentiable around x̄ and MFCQ holds at x̄. Then
(i) the set of multipliers Λ_nlo(x̄) is non-empty and compact;
(ii) for any d ∈ C_nlo(x̄),
sup_{(θ,µ,λ) ∈ Λ_nlo(x̄)} d^T ∇²_xx L(x̄, θ, µ, λ) d ≥ 0.
Proposition 4.4. Let x̄ ∈ Φ_nlo be a feasible solution for Problem (19). Assume that f, h and g are twice continuously differentiable around x̄. Assume that
(i) the set of multipliers Λ_nlo(x̄) is non-empty;
(ii) for any d ∈ C_nlo(x̄) with d ≠ 0,
sup_{(θ,µ,λ) ∈ Λ_nlo(x̄)} d^T ∇²_xx L(x̄, θ, µ, λ) d > 0.
Then x̄ is a locally efficient solution to Problem (19).
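Conditions (a)–(b) of MFCQ are directly checkable on small instances. A sketch (the instance is ours, purely for illustration): h(x) = x₁ + x₂ − 1, g(x) = −x₁ at x̄ = (0, 1), where the direction d₀ = (1, −1) certifies condition (b).

```python
import numpy as np

# Instance (ours): h(x) = x1 + x2 - 1, g(x) = -x1, x_bar = (0, 1).
Jh = np.array([[1.0, 1.0]])      # Jacobian of h at x_bar (one row)
grad_g = np.array([-1.0, 0.0])   # gradient of the active constraint g_1 (g_1(x_bar) = 0)

# (a) rows of Jh linearly independent  <=>  Jh has full row rank.
assert np.linalg.matrix_rank(Jh) == Jh.shape[0]

# (b) a direction d0 in the null space of Jh with grad_g . d0 < 0.
d0 = np.array([1.0, -1.0])
assert np.allclose(Jh @ d0, 0.0) and grad_g @ d0 < 0
print("MFCQ holds at x_bar")
```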

4.2. Second-order cone constrained multi-objective optimization. When K = Q with Q = Q^{m_1+1} × · · · × Q^{m_J+1}, where Q^{m_j+1} is the second-order cone in ℝ^{1+m_j} defined by
Q^{m_j+1} = {s = (s_0, s̄) ∈ ℝ × ℝ^{m_j} : s_0 ≥ ‖s̄‖},
Problem (10) becomes the following second-order cone constrained multi-objective optimization problem
(20)    min f(x)  s.t.  g_j(x) ∈ Q^{m_j+1},  j = 1, . . . , J,
where f : ℝ^n → ℝ^l and g_j : ℝ^n → ℝ^{m_j+1} for j = 1, . . . , J. We write g = (g_1, . . . , g_J) and λ = (λ_1, . . . , λ_J) with λ_j ∈ ℝ^{m_j+1}. The feasible set for Problem (20) is denoted by
Φ_sco = {x ∈ ℝ^n : g_j(x) ∈ Q^{m_j+1}, j = 1, . . . , J}.
Robinson constraint qualification for Q = Q^{m_1+1} × · · · × Q^{m_J+1} is expressed as
0 ∈ int {g(x̄) + Dg(x̄)ℝ^n − Q}.
For a feasible point x̄ ∈ Φ_sco, we introduce three index sets, according to whether g_j(x̄) lies in the interior of Q^{m_j+1}, on its boundary away from the origin, or at the origin. If Robinson constraint qualification holds at x̄ ∈ Φ_sco, then the tangent cone of Φ_sco at x̄ is expressed as
T_{Φ_sco}(x̄) = {d ∈ ℝ^n : Dg_j(x̄)d ∈ T_{Q^{m_j+1}}(g_j(x̄)), j = 1, . . . , J}.
The standard Lagrange function for Problem (20) is defined as
L(x, θ, λ) = Σ_{i=1}^{l} θ_i f_i(x) + Σ_{j=1}^{J} ⟨λ_j, g_j(x)⟩,
where λ ∈ ℝ^m̄ with m̄ = Σ_{j=1}^{J} (m_j + 1). The set of multipliers for Problem (20) at x̄ is defined as
Λ_sco(x̄) = {(θ, λ) ∈ ℝ^l_+ × ℝ^m̄ : Σ_{i=1}^{l} θ_i = 1, ∇_x L(x̄, θ, λ) = 0, λ_j ∈ N_{Q^{m_j+1}}(g_j(x̄)), j = 1, . . . , J}.
For a stationary point x̄ for Problem (20), the critical cone at x̄ is defined as
C_sco(x̄) = {d ∈ T_{Φ_sco}(x̄) : ∇f_i(x̄)^T d ≤ 0, i = 1, . . . , l}.
For (θ, λ) ∈ Λ_sco(x̄), an explicit expression of the sigma term σ(λ, T²_Q(g(x̄), Dg(x̄)d)) follows from [6].
Proposition 4.7. Let x̄ ∈ Φ_sco be a locally weak efficient solution to Problem (20). Assume that f and g are twice continuously differentiable around x̄ and Robinson constraint qualification holds at x̄. Then
(i) the set of multipliers Λ_sco(x̄) is non-empty and compact;
(ii) for any d ∈ C_sco(x̄),
sup_{(θ,λ) ∈ Λ_sco(x̄)} { d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_Q(g(x̄), Dg(x̄)d)) } ≥ 0.
Proposition 4.8. Let x̄ ∈ Φ_sco be a feasible solution for Problem (20). Assume that f and g are twice continuously differentiable around x̄. Assume that
(i) the set of multipliers Λ_sco(x̄) is non-empty;
(ii) for any d ∈ C_sco(x̄) with d ≠ 0,
sup_{(θ,λ) ∈ Λ_sco(x̄)} { d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_Q(g(x̄), Dg(x̄)d)) } > 0.
Then x̄ is a locally efficient solution to Problem (20).
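The second-order cone is computationally convenient: membership in Q^{m+1} and the metric projection onto it admit closed forms (standard formulas; the code is ours, for illustration).

```python
import numpy as np

def in_soc(s, tol=1e-12):
    """Membership in Q^{m+1} = {(s0, s_bar) : s0 >= ||s_bar||}."""
    return s[0] >= np.linalg.norm(s[1:]) - tol

def project_soc(s):
    """Metric projection onto Q^{m+1}, by cases on (s0, ||s_bar||)."""
    s0, sbar = s[0], s[1:]
    r = np.linalg.norm(sbar)
    if s0 >= r:                    # already in the cone
        return s.copy()
    if s0 <= -r:                   # in the polar cone: projects to the origin
        return np.zeros_like(s)
    coef = (s0 + r) / 2.0          # boundary case: average of s0 and r
    return np.concatenate(([coef], coef * sbar / r))

s = np.array([0.0, 3.0, 4.0])      # here r = 5 > s0 = 0: boundary case
p = project_soc(s)
print(in_soc(p), p)                # the projection lands in the cone
```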

4.3. Semidefinite cone constrained multi-objective optimization. Let S^p denote the space of p × p symmetric matrices and S^p_+ the cone of positive semi-definite symmetric matrices in S^p. When K = S^p_+, Problem (10) becomes the following semi-definite multi-objective optimization problem
(21)    min f(x)  s.t.  G(x) ∈ S^p_+,
where f : ℝ^n → ℝ^l and G : ℝ^n → S^p. The feasible set for Problem (21) is denoted by Φ_sdo = {x ∈ ℝ^n : G(x) ∈ S^p_+}. Robinson constraint qualification for S^p_+ is expressed as
0 ∈ int {G(x̄) + DG(x̄)ℝ^n − S^p_+}.
If Robinson constraint qualification holds at x̄ ∈ Φ_sdo, then the tangent cone of Φ_sdo at x̄ is expressed as
T_{Φ_sdo}(x̄) = {d ∈ ℝ^n : E^T (DG(x̄)d) E ⪰ 0},
where E is the matrix whose columns form an orthonormal basis for Ker G(x̄).
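The cone S^p_+ is likewise easy to handle numerically: the metric projection of a symmetric matrix onto S^p_+ keeps its eigenvector basis and clips the negative eigenvalues at zero (standard formula; the code is ours, for illustration).

```python
import numpy as np

def project_psd(A):
    """Metric projection of a symmetric matrix onto S^p_+ :
    diagonalize, clip negative eigenvalues at zero, reassemble."""
    w, P = np.linalg.eigh(A)               # A = P diag(w) P^T, w ascending
    return (P * np.maximum(w, 0.0)) @ P.T  # scale columns of P by clipped eigenvalues

A = np.diag([1.0, -2.0])
X = project_psd(A)
print(X)                                   # diag(1, 0): the negative eigenvalue is removed
```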
The standard Lagrange function for Problem (21) is defined as
L(x, θ, λ) = Σ_{i=1}^{l} θ_i f_i(x) + ⟨λ, G(x)⟩,  λ ∈ S^p,
where ⟨λ, G(x)⟩ = tr(λ G(x)). The set of multipliers for Problem (21) at x̄ is defined as
Λ_sdo(x̄) = {(θ, λ) ∈ ℝ^l_+ × S^p : Σ_{i=1}^{l} θ_i = 1, ∇_x L(x̄, θ, λ) = 0, λ ∈ N_{S^p_+}(G(x̄))}.
Definition 4.9. We say a feasible point x̄ ∈ Φ_sdo is a stationary point for Problem (21) if Λ_sdo(x̄) ≠ ∅.
Proposition 4.10. Let x̄ be a locally weak efficient solution to Problem (21). Assume that f and G are continuously differentiable around x̄. Then Robinson constraint qualification holds at x̄ if and only if Λ_sdo(x̄) is non-empty and compact.
For a feasible point x̄ for Problem (21), the critical cone of Problem (21) at x̄ is defined as
C_sdo(x̄) = {d ∈ T_{Φ_sdo}(x̄) : ∇f_i(x̄)^T d ≤ 0, i = 1, . . . , l}.
For a stationary point x̄ with (θ, λ) ∈ Λ_sdo(x̄), set A = G(x̄) + λ and let λ_i = λ_i(A), i = 1, . . . , p, denote the eigenvalues of A. Suppose that A has the spectral decomposition
A = P diag(Λ_α, 0, Λ_γ) P^T,
where Λ_α is a |α| × |α| positive definite diagonal matrix and Λ_γ is a |γ| × |γ| negative definite diagonal matrix. Then the critical cone at x̄ can be expressed explicitly in terms of the blocks of this decomposition.

Proposition 4.11. Let x̄ ∈ Φ_sdo be a locally weak efficient solution to Problem (21). Assume that f and G are twice continuously differentiable around x̄ and Robinson constraint qualification holds at x̄. Then
(i) the set of multipliers Λ_sdo(x̄) is non-empty and compact;
(ii) for any d ∈ C_sdo(x̄),
sup_{(θ,λ) ∈ Λ_sdo(x̄)} { d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_{S^p_+}(G(x̄), DG(x̄)d)) } ≥ 0.
Proposition 4.12. Let x̄ ∈ Φ_sdo be a feasible solution for Problem (21). Assume that f and G are twice continuously differentiable around x̄. Assume that
(i) the set of multipliers Λ_sdo(x̄) is non-empty;
(ii) for any d ∈ C_sdo(x̄) with d ≠ 0,
sup_{(θ,λ) ∈ Λ_sdo(x̄)} { d^T ∇²_xx L(x̄, θ, λ) d − σ(λ, T²_{S^p_+}(G(x̄), DG(x̄)d)) } > 0.
Then x̄ is a locally efficient solution to Problem (21).

5. Conclusion. This paper focuses on optimality conditions for cone constrained multi-objective optimization, including first-order and second-order necessary optimality conditions as well as second-order sufficient optimality conditions. The results obtained are extensions of optimality conditions for conic single-objective optimization from [5]. Different from the classical results developed from the Motzkin theorem of the alternative, we adopt the classical duality theorem for convex optimization to derive necessary optimality conditions. For second-order sufficient optimality conditions, different from Theorem 2 of [3], we assume here that K satisfies upper second-order regularity, because many important cones satisfy this property. For three important classes of problems, namely polyhedral cone, second-order cone and semi-definite cone constrained multi-objective optimization problems, we establish specific first-order and second-order necessary optimality conditions as well as second-order sufficient optimality conditions.