SECOND ORDER MODIFIED OBJECTIVE FUNCTION METHOD FOR TWICE DIFFERENTIABLE VECTOR OPTIMIZATION PROBLEMS OVER CONE CONSTRAINTS

Abstract. In this paper, a vector optimization problem with twice differentiable functions and cone constraints is considered. The second order modified objective function method is used for solving such a multiobjective programming problem. In this method, for the considered twice differentiable multicriteria optimization problem, an associated second order vector optimization problem with a modified objective function is constructed at a given arbitrary feasible solution. Then, the equivalence between the sets of (weakly) efficient solutions of the original twice differentiable vector optimization problem with cone constraints and of its associated modified vector optimization problem is established. Further, the relationship between a (weakly) efficient solution of the original vector optimization problem and a saddle-point of the second order Lagrange function defined for the modified vector optimization problem is also analyzed.

1. Introduction. An extremum problem in which more than one objective function is optimized simultaneously is known in the optimization literature as a vector optimization problem or a multiobjective programming problem. Vector optimization problems have wide applications, for example, in product and process design, finance, aircraft design, climate change, the oil and gas industry, automobile design, and the evaluation of equity portfolios. In a vector optimization problem, the objectives are often conflicting and, consequently, there is no single solution which optimizes all objectives simultaneously. The concept of efficiency therefore plays a vital role in solving such optimization problems.
The notion of convexity is a useful tool for proving fundamental results in optimization theory for vector optimization problems in which the involved functions are convex. However, there exist nonconvex operations research (O.R.) problems and, therefore, convexity cannot be used for such practical optimization problems in proving, for example, sufficient optimality conditions for (weak) efficiency of a feasible solution. Hence, in recent years, several generalizations of convexity have been proposed in the literature, also for the case when the functions involved in an optimization problem are twice differentiable (see, for example, [5,6]). Aghezzaf and Hachimi [1] developed second order necessary optimality conditions for a vector optimization problem. Under second order invexity assumptions, Gulati et al. [9] established duality theorems for a class of multiobjective programming problems with twice differentiable functions. Recently, Suneja et al. [13] introduced the concept of second order ρ-invexity over cones and established some second order multiobjective symmetric duality results over arbitrary cones.
Saddle-point criteria, which play an important role in finding (weakly) efficient solutions of vector optimization problems, have been studied by several authors (see, for example, [3,4,8,11,12]). An approach based on a so-called η-modified objective function for solving a smooth vector optimization problem was first introduced by Antczak [2]. Thereafter, based on this method, Chen et al. [7] established the equivalence between efficient solutions of the original vector optimization problem and η-saddle-points of its associated vector optimization problem with a modified objective function, under the assumption that the involved functions are generalized invex over cones. Recently, Suneja et al. [14] constructed an η-modified vector optimization problem by modifying the objective function of a nondifferentiable vector optimization problem and discussed the relationship between efficient solutions and η-saddle-points of these two multiobjective programming problems over cone constraints.
Inspired and motivated by the above research works, we use a second order modified objective function method for a vector optimization problem involving second order cone convex functions. In this method, we select a point from the feasible set of the original vector optimization problem and construct its associated second order vector optimization problem with the modified objective function at this fixed feasible point. Then we analyze the relationship between (weakly) efficient solutions of the original vector optimization problem and of the associated vector optimization problem constructed in this approach, under second order cone convexity assumptions. Further, the relationship between a (weakly) efficient solution of the original vector optimization problem and a saddle-point of the second order Lagrange function defined for the associated vector optimization problem with the second order modified objective function is also investigated.

This paper is organized as follows. In Section 2, we recall some preliminary results which are useful in proving the main results. We also present the formulation of the considered vector optimization problem with cone constraints, in which the involved functions are twice differentiable, and the second order necessary optimality conditions for such multicriteria optimization problems. In Section 3, we use the second order modified objective function method for solving the considered twice differentiable vector optimization problem with cone constraints: we construct a second order vector optimization problem with the modified objective function by modifying the objective function of the original twice differentiable multiobjective programming problem at a given arbitrary, but fixed, feasible solution. Then, under the concept of second order cone convexity, we establish the equivalence between the sets of (weakly) efficient solutions of the aforesaid vector optimization problems.
Further, the relationship between (weakly) efficient solutions of the original twice differentiable vector optimization problem and second order saddle-points of the second order Lagrange function defined for the associated vector optimization problem with the modified objective function is analyzed in Section 4. Finally, we conclude the paper in Section 5.

2. Notations and preliminaries.
For any x = (x_1, x_2, . . . , x_n)^T, y = (y_1, y_2, . . . , y_n)^T ∈ R^n, inequalities between vectors are understood componentwise. Let K ⊂ R^m be a closed convex cone with nonempty interior and let int K denote the interior of K. The positive dual cone K* of K is defined by K* = {y ∈ R^m : x^T y ≥ 0 for all x ∈ K}, where the symbol "T" denotes the transpose of a vector.

Lemma 2.1 (Chen et al. [7]). Let K ⊂ R^m be a convex cone with nonempty interior and let K* be the dual cone of K. Then the following statements hold:
(i) if λ ∈ K*\{0} and a ∈ int K, then λ^T a > 0;
(ii) if λ ∈ int K* and a ∈ K\{0}, then λ^T a > 0.

Now, we define the second order convex functions with respect to a cone which will be used in proving the main results of the paper.

Definition 2.2 (Suneja et al. [15]). A twice differentiable function f : R^n → R^m is said to be second order K-convex at x̄ ∈ R^n on R^n if the defining inclusion of [15] holds for every x ∈ R^n and every d ∈ R^n, where ∇f(x̄) is the (m × n) Jacobian matrix of f at x̄ and ∇²f(x̄) denotes its second derivative.

Definition 2.3 (Suneja et al. [15]). A twice differentiable function f : R^n → R^m is said to be strictly second order K-convex at x̄ ∈ R^n on R^n if, for every x ∈ R^n with x ≠ x̄, the corresponding strict inclusion holds.

In the paper, we consider the following vector optimization problem:

(VOP)    K-minimize f(x) subject to −g(x) ∈ Q,

where f : R^n → R^m and g : R^n → R^l are twice differentiable functions on R^n, and K and Q are closed convex cones with nonempty interiors in R^m and R^l, respectively. Let S denote the set of all feasible solutions of (VOP), i.e., S = {x ∈ R^n : −g(x) ∈ Q}. The set of active constraints at a feasible point x̄ is denoted by J(x̄).

For such optimization problems, minimization means obtaining (weakly) efficient solutions in the following sense [14]:

Definition 2.4. (a) A feasible point x̄ ∈ S is said to be an efficient solution of (VOP) if there exists no x ∈ S such that f(x̄) − f(x) ∈ K\{0}.
(b) A feasible point x̄ ∈ S is said to be a weakly efficient solution of (VOP) if there exists no x ∈ S such that f(x̄) − f(x) ∈ int K.

Definition 2.5 (Aghezzaf and Hachimi [1]). A direction d ∈ R^n is said to be a critical direction at a feasible point x̄ ∈ S if it satisfies the conditions stated in [1]. The set of all critical directions at x̄ is denoted by A(x̄).
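The dual cone K* = {y : x^T y ≥ 0 for all x ∈ K} can be tested numerically when K is finitely generated, since it then suffices to check the generators. A minimal sketch; the generators below are hypothetical and not taken from the paper:

```python
import numpy as np

# Hypothetical finitely generated cone K = cone{g1, g2} in R^2.
# Its positive dual is K* = {y : x^T y >= 0 for all x in K}; for a finitely
# generated cone it is enough to test y against each generator.
G = np.array([[1.0, 0.0],   # generator g1 (rows of G)
              [1.0, 1.0]])  # generator g2

def in_dual_cone(y, generators, tol=1e-12):
    """y lies in K* iff g^T y >= 0 for every generator g of K."""
    return bool(np.all(generators @ y >= -tol))

print(in_dual_cone(np.array([0.0, 1.0]), G))   # both inner products >= 0
print(in_dual_cone(np.array([-1.0, 0.0]), G))  # g1^T y = -1 < 0
```

Testing only the generators suffices because every element of K is a nonnegative combination of them, and the inner product is linear.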
Motivated by Aghezzaf and Hachimi [1], we present the following modified version of the second order necessary optimality conditions for efficiency of a feasible point x̄ in (VOP).

Theorem 2.6. Let f and g be twice differentiable functions on R^n. Assume that x̄ ∈ S is a (weakly) efficient solution of (VOP) at which the second order Abadie constraint qualification (ACQ) [1] is satisfied. Then, for every d ∈ A(x̄), there exist λ̄ ∈ K*\{0} and μ̄ ∈ Q* such that the second order necessary optimality conditions (1)–(5) are satisfied.

We need the following lemma in order to prove the main results of the subsequent sections.
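The displayed conditions (1)–(5) are not reproduced here; two standard ingredients of such second order necessary conditions, stationarity of the Lagrangian and complementary slackness, can nevertheless be verified numerically at a candidate point. The sketch below uses hypothetical data (f(x) = (x1² + x2², x1) with constraint x1 ≥ 0), not the paper's conditions:

```python
import numpy as np

# Hypothetical smooth data: f : R^2 -> R^2 with f(x) = (x1^2 + x2^2, x1),
# g : R^2 -> R with g(x) = -x1 (feasible when x1 >= 0).  Both components of f
# are minimized on the feasible set at xbar = (0, 0).

def grad_f(x):   # Jacobian of f
    return np.array([[2 * x[0], 2 * x[1]], [1.0, 0.0]])

def g(x):
    return np.array([-x[0]])

def grad_g(x):
    return np.array([[-1.0, 0.0]])

def check_stationarity(x, lam, mu, tol=1e-8):
    """lam^T ∇f(x) + mu^T ∇g(x) = 0 (stationarity of the Lagrangian)."""
    r = lam @ grad_f(x) + mu @ grad_g(x)
    return bool(np.linalg.norm(r) < tol)

def check_complementarity(x, mu, tol=1e-8):
    """mu^T g(x) = 0 (complementary slackness)."""
    return bool(abs(mu @ g(x)) < tol)

xbar = np.array([0.0, 0.0])
lam = np.array([1.0, 0.0])   # multiplier vector, lam != 0
mu = np.array([0.0])
print(check_stationarity(xbar, lam, mu), check_complementarity(xbar, mu))
```

Both checks pass at x̄ = (0, 0), consistent with this point minimizing each objective component over the feasible set.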
Lemma 2.7 (Li and Li [10]). Let K be a closed convex cone with nonempty interior in a topological vector space X. Then, for any x, y ∈ X, the statements (i)–(v) given in [10] are true.

3. Vector optimization problem with the second order modified objective function and optimality conditions. Let x̄ be an arbitrary given feasible solution of the considered vector optimization problem (VOP). Now, for (VOP), we construct a vector optimization problem (VOP)₂(x̄) with the second order modified objective function, in which the objective function of (VOP) is replaced by its second order modification at x̄, and where the functions f, g and the cones K and Q are defined in the same way as for (VOP).
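The paper's displayed formula for the modified objective is not reproduced above. The sketch below therefore assumes a natural choice for a second order method, the componentwise second order Taylor model of f at x̄; both this form and the data are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

# Assumed second order model of a vector objective at a fixed feasible xbar:
# F_i(x) = f_i(xbar) + ∇f_i(xbar)^T d + (1/2) d^T ∇²f_i(xbar) d,  d = x - xbar.

def second_order_model(f_val, jac, hessians, xbar):
    """Return F(x): the quadratic model of each component of f at xbar."""
    def F(x):
        d = np.asarray(x) - xbar
        quad = np.array([0.5 * d @ H @ d for H in hessians])
        return f_val + jac @ d + quad
    return F

# Hypothetical data: f(x) = (x1^2, x1*x2) at xbar = (1, 1).
xbar = np.array([1.0, 1.0])
f_val = np.array([1.0, 1.0])
jac = np.array([[2.0, 0.0],    # ∇f1 = (2*x1, 0) at xbar
                [1.0, 1.0]])   # ∇f2 = (x2, x1) at xbar
hessians = [np.array([[2.0, 0.0], [0.0, 0.0]]),   # ∇²f1
            np.array([[0.0, 1.0], [1.0, 0.0]])]   # ∇²f2
F = second_order_model(f_val, jac, hessians, xbar)
print(F(np.array([2.0, 0.0])))  # exact for these quadratic components: [4. 0.]
```

Because both components of the hypothetical f are quadratic, the model reproduces f exactly, which makes the construction easy to sanity-check.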
Remark 1. Note that the set of all feasible solutions of (VOP)₂(x̄) is the same as that of (VOP). Now, we prove the equivalence between a (weakly) efficient solution of the original multiobjective programming problem (VOP) and a (weakly) efficient solution of its associated vector optimization problem (VOP)₂(x̄) with the second order modified objective function.
Theorem 3.1. Let x̄ be a weakly efficient solution of (VOP) at which the second order (ACQ) is satisfied. Assume that the constraint function g is second order Q-convex at x̄ on S. Then x̄ is also a weakly efficient solution of (VOP)₂(x̄).
Proof. Since x̄ is a weakly efficient solution of (VOP) at which the second order (ACQ) is satisfied, the second order necessary optimality conditions (1)–(5) hold with multipliers λ̄ ∈ K*\{0} and μ̄ ∈ Q*. Suppose, contrary to the result, that x̄ is not a weakly efficient solution of (VOP)₂(x̄). Then, by Definition 2.4 (b), there exists another solution x̃ ∈ S for which the corresponding inclusion holds. Since λ̄ ∈ K*\{0}, using Lemma 2.1 (i), this relation yields inequality (6). On the other hand, since μ̄ ∈ Q*, by the feasibility of x̃ and condition (3), we obtain (7). By assumption, g is second order Q-convex at x̄ ∈ S. Hence, by Definition 2.2, the defining inclusion holds; in particular, it is also true for d = x̃ − x̄ ∈ R^n. Since μ̄ ∈ Q*, this gives a corresponding inequality, which, by (7), yields (8). Combining (6) and (8) and using condition (1), we obtain an inequality which contradicts (2) for d = x̃ − x̄. This completes the proof.

Theorem 3.2. Let x̄ be an efficient solution of (VOP) at which the second order (ACQ) is satisfied. Assume that the constraint function g is second order Q-convex at x̄ on S. If the Lagrange multiplier λ̄ satisfies λ̄ ∈ int K*, then x̄ is also an efficient solution of the associated vector optimization problem (VOP)₂(x̄) with the second order modified objective function.
Proof. By assumption, x̄ is an efficient solution of (VOP) at which the second order (ACQ) is satisfied. Hence, the second order necessary optimality conditions (1)–(5) are fulfilled at x̄ with Lagrange multipliers λ̄ ∈ K*\{0} and μ̄ ∈ Q*. Suppose, contrary to the result, that x̄ is not an efficient solution of (VOP)₂(x̄). Then, by Definition 2.4 (a), there exists a point x̃ ∈ S for which the corresponding relation holds. Since the Lagrange multiplier λ̄ is assumed to satisfy λ̄ ∈ int K*, using Lemma 2.1 (ii), this relation can be written as a strict inequality. The rest of the proof runs as the proof of Theorem 3.1 and is therefore omitted.

Now, we prove the converse of the results established in Theorems 3.1 and 3.2.
Theorem 3.3. Let the objective function f be second order K-convex at x̄ on S. If x̄ is a weakly efficient solution of the vector optimization problem (VOP)₂(x̄) with the second order modified objective function, then x̄ is also a weakly efficient solution of the original twice differentiable vector optimization problem (VOP).
Proof. Let x̄ be a weakly efficient solution of (VOP)₂(x̄). Suppose, contrary to the result, that x̄ is not a weakly efficient solution of (VOP). Then, by Definition 2.4 (b), there exists a point x̃ ∈ S such that relation (9) holds. Since the objective function f is second order K-convex at x̄ on S, by Definition 2.2, the defining inclusion holds; in particular, it is also true for d = x̃ − x̄ ∈ R^n, which yields (10). Combining (9) and (10) and using Lemma 2.7 (ii), we obtain a relation which contradicts the assumption that x̄ is a weakly efficient solution of (VOP)₂(x̄). This completes the proof.

Theorem 3.4. Let the objective function f be second order K-convex at x̄ on S. If x̄ is an efficient solution of (VOP)₂(x̄), then x̄ is also an efficient solution of (VOP).

Proof. Let x̄ be an efficient solution of (VOP)₂(x̄), but suppose that it is not efficient in (VOP). Then, by Definition 2.4 (a), there exists a point x̃ ∈ S such that relation (11) holds. Since the objective function f is second order K-convex at x̄ on S, by Definition 2.2, the defining inclusion holds; in particular, it is also true for d = x̃ − x̄ ∈ R^n, which gives (12). On combining (11) and (12) and using Lemma 2.7 (iv), we obtain a relation which contradicts the assumption that x̄ is an efficient solution of (VOP)₂(x̄). This completes the proof of the theorem.

Now, we give an example to illustrate the result established in Theorem 3.4.
Example 3.1. Let K and Q be two closed convex cones in R², and consider the following vector optimization problem:

(VOP1)    K-minimize f(x) subject to −g(x) ∈ Q,

where f : R → R², g : R → R². Note that the set of all feasible solutions of (VOP1) is given by S = {x ∈ R : −1 ≤ x ≤ 0}. One can easily verify at x̄ = 0 that the inclusion of Definition 2.2 holds; hence, f is second order K-convex at x̄ = 0 on S. Now, the corresponding vector optimization problem (VOP1)₂(x̄) with the second order modified objective function for (VOP1) is constructed as above. Clearly, x̄ = 0 is an efficient solution of (VOP1)₂(x̄). Since all hypotheses of Theorem 3.4 are fulfilled at x̄ = 0, x̄ = 0 is also an efficient solution of the considered vector optimization problem (VOP1).

Remark 2.
As Example 3.1 illustrates, the associated vector optimization problem (VOP)₂(x̄) with the second order modified objective function is, in general, less complex than the original vector optimization problem (VOP). Therefore, by using the second order modified objective function approach, it is possible to solve a complex nonlinear vector optimization problem by solving its associated vector optimization problem constructed in this method, which is easier to solve than the original one.
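The reduction in complexity can be made concrete: once the objective is replaced by a quadratic model at x̄ and scalarized with a weight vector λ, minimizing the model amounts to one linear solve. All data below are hypothetical, and the cone constraints are dropped purely to keep the sketch short:

```python
import numpy as np

# Hypothetical data for f(x) = (x1^2, x2^2), evaluated at xbar = (1, 1).
xbar = np.array([1.0, 1.0])
lam = np.array([1.0, 1.0])                      # weight vector, lam != 0
jac = np.array([[2.0, 0.0],                     # ∇f1 at xbar
                [0.0, 2.0]])                    # ∇f2 at xbar
hessians = [np.array([[2.0, 0.0], [0.0, 0.0]]), # ∇²f1
            np.array([[0.0, 0.0], [0.0, 2.0]])] # ∇²f2

# Scalarized quadratic model: lam^T F(x) = const + b^T d + (1/2) d^T H d,
# with d = x - xbar.
b = jac.T @ lam
H = sum(l * Hi for l, Hi in zip(lam, hessians))
d = np.linalg.solve(H, -b)  # H is positive definite here, so this is the minimizer
print(xbar + d)             # both original objectives are minimized at (0, 0)
```

For genuinely nonlinear f, the original problem would require an iterative nonlinear solver, while the quadratic subproblem above is solved in closed form; this is the practical point of Remark 2.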

4. Relationship between second order saddle-points and (weakly) efficient solutions. In this section, we first define the second order Lagrange function and its saddle-point for problem (VOP)₂(x̄). Then we establish the relationship between second order saddle-points of (VOP)₂(x̄) and (weakly) efficient solutions of (VOP).
Definition 4.1. The second order Lagrange function for (VOP)₂(x̄), denoted by L₂ : S × K*\{0} × Q* → R, is defined by means of the second order modified objective function and the constraint function.

Definition 4.2. A point (x̄, λ̄, μ̄) ∈ S × K*\{0} × Q* is said to be a second order saddle-point of the second order Lagrange function L₂ if
(i) L₂(x̄, λ̄, μ) ≤ L₂(x̄, λ̄, μ̄) for all μ ∈ Q*,
(ii) L₂(x̄, λ̄, μ̄) ≤ L₂(x, λ̄, μ̄) for all x ∈ S.

Theorem 4.3. Let x̄ be a (weakly) efficient solution of (VOP) at which the second order (ACQ) is satisfied. Further, assume that the constraint function g is second order Q-convex at x̄ on S. Then there exist λ̄ ∈ K*\{0} and μ̄ ∈ Q* such that (x̄, λ̄, μ̄) is a second order saddle-point of the second order Lagrange function for the modified vector optimization problem (VOP)₂(x̄) with the second order modified objective function.
Proof. Since x̄ is a (weakly) efficient solution of (VOP) at which the second order (ACQ) is satisfied, the necessary optimality conditions (1)–(5) are fulfilled with Lagrange multipliers λ̄ ∈ K*\{0} and μ̄ ∈ Q*. Further, the constraint function g is second order Q-convex at x̄ on S. Hence, by Definition 2.2, the defining inclusion holds; in particular, it is also true for d = x − x̄ ∈ R^n. Since μ̄ ∈ Q*, this gives (13). Using μ̄ ∈ Q* together with the feasibility of x and the necessary optimality condition (3), we obtain (14). Hence, by the necessary optimality conditions (1) and (2), relations (13) and (14) yield, by the definition of the second order Lagrange function (Definition 4.1), inequality (15): L₂(x̄, λ̄, μ̄) ≤ L₂(x, λ̄, μ̄) for all x ∈ S. Further, by the necessary optimality condition (3) and the feasibility of x̄, inequality (16) holds for any μ ∈ Q*. Thus, by Definition 4.1, (16) gives (17): L₂(x̄, λ̄, μ) ≤ L₂(x̄, λ̄, μ̄) for all μ ∈ Q*.
Thus, by (15) and (17), we conclude that (x̄, λ̄, μ̄) is a second order saddle-point of the second order Lagrange function defined for problem (VOP)₂(x̄). This completes the proof.
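The two saddle-point inequalities can be sanity-checked numerically on grids of multipliers and feasible points. Since the exact form of L₂ is not reproduced above, the sketch uses an assumed Lagrangian λᵀf(x) + μᵀg(x) with hypothetical one-dimensional data:

```python
import numpy as np

# Assumed scalar Lagrangian L(x, lam, mu) = lam^T f(x) + mu^T g(x) for the
# hypothetical data f(x) = (x^2, x), g(x) = -x (feasible set: x >= 0).

def f(x):
    return np.array([x**2, x])

def g(x):
    return np.array([-x])

def L(x, lam, mu):
    return lam @ f(x) + mu @ g(x)

xbar, lam_bar, mu_bar = 0.0, np.array([1.0, 1.0]), np.array([1.0])

# (i)  L(xbar, lam_bar, mu) <= L(xbar, lam_bar, mu_bar) for sampled mu >= 0
ok_i = all(L(xbar, lam_bar, np.array([m])) <= L(xbar, lam_bar, mu_bar) + 1e-12
           for m in np.linspace(0.0, 10.0, 101))
# (ii) L(xbar, lam_bar, mu_bar) <= L(x, lam_bar, mu_bar) for sampled feasible x
ok_ii = all(L(xbar, lam_bar, mu_bar) <= L(x, lam_bar, mu_bar) + 1e-12
            for x in np.linspace(0.0, 10.0, 101))
print(ok_i, ok_ii)
```

Here L(x̄, λ̄, μ) = 0 for every μ, while L(x, λ̄, μ̄) = x² ≥ 0 on the feasible set, so both inequalities hold at (x̄, λ̄, μ̄) = (0, (1, 1), (1)). A grid check of this kind only falsifies, never proves, a saddle-point; it is a debugging aid, not a certificate.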
Theorem 4.4. Let x̄ ∈ S and let the objective function f be second order strictly K-convex at x̄ on S. If (x̄, λ̄, μ̄) ∈ S × K*\{0} × Q* is a second order saddle-point of the second order Lagrange function for the vector optimization problem (VOP)₂(x̄) with the second order modified objective function, then x̄ is an efficient solution of (VOP).
Proof. Assume that (x̄, λ̄, μ̄) is a second order saddle-point of the second order Lagrange function defined for the vector optimization problem (VOP)₂(x̄) with the second order modified objective function. Hence, by Definition 4.2 (i), we have L₂(x̄, λ̄, μ) ≤ L₂(x̄, λ̄, μ̄) for all μ ∈ Q*. By the definition of the Lagrange function for (VOP)₂(x̄), this inequality gives (18). If we set μ = 0 in this inequality, then we obtain (19). Since μ̄ ∈ Q*, the feasibility of x̄ gives (20). On combining (19) and (20), we get μ̄ᵀg(x̄) = 0. Now, suppose, contrary to the result, that x̄ is not an efficient solution of the original twice differentiable vector optimization problem (VOP). Then, by Definition 2.4 (a), there exists x ∈ S such that (21) holds. Since the objective function f is second order strictly K-convex at x̄ on S, by Definition 2.3, relation (22) holds for all x ∈ S; in particular, it is fulfilled for d = x − x̄ ∈ R^n, which yields (23). Combining (21), (23) and using Lemma 2.7 (v), we get (24). Since λ̄ ∈ K*\{0}, by Lemma 2.1 (i), (24) gives a strict inequality. Now, using the definition of the second order Lagrange function (Definition 4.1) and μ̄ᵀg(x̄) = 0, we get L₂(x, λ̄, μ̄) < L₂(x̄, λ̄, μ̄). This contradicts inequality (ii) in the definition of a second order saddle-point of the second order Lagrange function defined for (VOP)₂(x̄). Hence, x̄ is an efficient solution of (VOP). This completes the proof.

Theorem 4.5. Let x̄ ∈ S and let the objective function f be second order K-convex at x̄ on S. If (x̄, λ̄, μ̄) ∈ S × K*\{0} × Q* is a second order saddle-point of the second order Lagrange function for (VOP)₂(x̄), then x̄ is a weakly efficient solution of (VOP).

Proof. The proof of this theorem is similar to that of Theorem 4.4 and is therefore omitted.

Now, we give an example of a twice differentiable vector optimization problem with cone constraints in order to illustrate the result established in Theorem 4.4.

Example 4.1. Let K = {(x, y) ∈ R² : 2x − y ≤ 0, y ≥ 0} and Q = {(x, y) ∈ R² : x ≥ 0, y ≤ 0} be two closed convex cones in R². Consider the following vector optimization problem with cone constraints:

(VOP2)    K-minimize f(x) subject to −g(x) ∈ Q,

where f : R → R², g : R → R². Note that the set of all feasible solutions of (VOP2) is given by S = {x ∈ R : 0 ≤ x ≤ 1}.
Now, we show by Definition 2.3 that the objective function f is second order strictly K-convex at x̄ = 0 on S. Indeed, the strict inclusion holds for all d ∈ R. Hence, by Definition 2.3, f is second order strictly K-convex at x̄ = 0 on S. Now, the corresponding vector optimization problem (VOP2)₂(x̄) with the second order modified objective function is constructed as in Section 3, and the second order Lagrangian L₂ : S × K*\{0} × Q* → R defined for the modified vector optimization problem (VOP2)₂(x̄) is given accordingly. Let λ̄ = (−1, 1) ∈ K*\{0} and μ̄ = (0, −1) ∈ Q*. Now, by Definition 4.2, we show that (x̄, λ̄, μ̄) is a second order saddle-point of the second order Lagrangian L₂ defined for (VOP2)₂(x̄): one verifies that both saddle-point inequalities (i) and (ii) hold. Hence, we conclude that (x̄, λ̄, μ̄) = (0, (−1, 1), (0, −1)) is a second order saddle-point of the second order Lagrangian L₂ defined for (VOP2)₂(x̄). Since all hypotheses of Theorem 4.4 are fulfilled at x̄ = 0, by Theorem 4.4, x̄ = 0 is an efficient solution of the considered vector optimization problem (VOP2).

5. Conclusion. In this paper, the second order modified objective function method has been used for solving a vector optimization problem with twice differentiable functions and cone constraints. The equivalence between a (weakly) efficient solution of the original vector optimization problem with cone constraints and a (weakly) efficient solution of its associated vector optimization problem with the second order modified objective function has been established under second order cone convexity hypotheses. Further, the relationship between a (weakly) efficient solution of the original vector optimization problem with cone constraints and a saddle-point of the second order Lagrange function defined for the associated vector optimization problem with the second order modified objective function has also been investigated under the assumption that the involved functions are second order cone convex. It has been illustrated that, in the second order modified objective function method, the original vector optimization problem is, in general, reduced to a simpler one, which is therefore easier to solve. This property of the method is important from the practical point of view. Hence, the second order modified objective function approach for vector optimization problems with twice differentiable functions and cone constraints turns out to be useful for determining (weakly) efficient solutions of a vector optimization problem with a complex objective function with the help of (weakly) efficient solutions and/or saddle-points of its associated vector optimization problem with the second order modified objective function.
In future work, we plan to extend this idea to an η-approximation approach.
6. Acknowledgements. The first author is grateful to CSIR, New Delhi, India, for financial support through Grant no. 25(0266)/17/EMR-11. The authors are also thankful to the reviewer for the careful reading and valuable suggestions which improved the presentation of the paper.