On fractional vector optimization over cones with support functions

In this paper we give necessary and sufficient optimality conditions for a fractional vector optimization problem over cones involving support functions in the objectives as well as the constraints, using cone-convex functions. We also associate Mond-Weir type and Schaible type duals with the primal problem and establish weak and strong duality results under cone-convexity, pseudoconvexity and quasiconvexity assumptions. A number of previously studied problems appear as special cases.

1. Introduction. Optimization problems in which the function to be maximized or minimized is a ratio of functions are called fractional programming problems. Problems with two or more such fractional objectives are called multiobjective fractional programming problems. These problems arise in different areas of modern research such as game theory, goal programming, minimum-risk problems, portfolio selection, production, information theory, heat exchange networking and numerous decision-making problems in management science. More specifically, fractional programming is used in engineering and economics to minimize a ratio of physical or economic quantities, or both, such as cost/time, cost/volume or cost/benefit, in order to measure the efficiency or productivity of a system. Many economic, non-economic and indirect applications of fractional programming problems have also been given by Schaible and Ibaraki [23].
Fractional programming problems in which the functions concerned are linear were introduced by Charnes and Cooper [3] in their classical 1962 paper, where a transformation converts a linear fractional objective function into an ordinary linear program. Dinkelbach [7] then introduced the best-known approach for solving nonlinear fractional programming problems, the parametric approach. Later, in 1976, Schaible [21] developed duality theory for linear and concave-convex fractional programs, and in the same year he [22] proposed a revised version of Dinkelbach's algorithm using the duality theory introduced in [21]. Liang et al. [15] introduced the concept of (F, α, ρ, d)-convexity and presented optimality and duality results for a class of nonlinear fractional programming problems based on the properties of sublinear functionals and generalized convex functions. Fractional programming problems have also been studied in nonsmooth settings, where either the quotients are nondifferentiable or the support function of a compact convex set is added to the numerator or denominator of the objective function, or both. Husain and Jabeen [8] worked in this direction and gave optimality conditions and duality results for a nonlinear fractional program in which a support function appears in the numerator and denominator of the objective function as well as in each constraint function.
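For orientation, the parametric approach mentioned above can be sketched as follows in the scalar case; this is a standard reformulation stated under the assumption h > 0 on the feasible set, not quoted verbatim from [7]:

```latex
% Scalar fractional program: minimize f(x)/h(x) over x in S, with h > 0 on S.
% Dinkelbach's parametric approach studies, for a parameter v in R,
%   F(v) = \min_{x \in S} \{ f(x) - v\,h(x) \},
% which is strictly decreasing in v since h > 0.  A point x* solves the
% fractional program iff the parametric value vanishes at v* = f(x*)/h(x*):
\[
  \min_{x\in S}\frac{f(x)}{h(x)} = v^{*}
  \quad\Longleftrightarrow\quad
  \min_{x\in S}\bigl(f(x) - v^{*}h(x)\bigr) = 0 .
\]
% The iteration v_{k+1} = f(x_k)/h(x_k), where x_k attains F(v_k),
% generates a sequence v_k converging to v*.
```

This is the device used later in Section 3, where the fractional problem is paired with an associated parametric vector problem.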
Later, Liang et al. [16] gave efficiency conditions and duality results for a class of multiobjective fractional programming problems using the concept of (F, α, ρ, d)-convexity introduced in [15]. Antczak [1] used a new method for solving nonlinear multiobjective fractional programming problems having V-invex objective and constraint functions with respect to the same function η. Recently, Jayswal et al. [9] established sufficient optimality conditions and duality results for a multiobjective fractional programming problem under the assumption of (p, r)-ρ-(η, θ)-invexity.
Many authors have also studied nonsmooth multiobjective fractional programming problems. For example, Kuk et al. [14] considered a class of such problems in which the functions are taken to be locally Lipschitz, and obtained optimality conditions and duality results using (V, ρ)-invex functions. Kim et al. [12] studied a class of nondifferentiable multiobjective fractional programs in which the numerator of each component of the objective function contains a term involving the support function of a compact convex set. Recently, Kim and Kim [13] considered a generalized nondifferentiable fractional optimization problem whose objective is the maximum of finitely many differentiable fractional functions involving support functions; they obtained optimality conditions and duality results by modifying the approach of Kim et al. [12].
In 2005, Kim [10] considered a multiobjective fractional programming problem in which all the functions concerned were taken to be continuously differentiable and the denominator of each objective component was the same scalar function. He gave necessary and sufficient optimality conditions and saddle-point theorems under generalized invexity assumptions. Then, in 2006, he [11] considered the same problem with the only difference that all the functions concerned were taken to be locally Lipschitz. He also introduced a notion of generalized invexity for fractional functions and presented necessary and sufficient optimality conditions and duality relations under suitable generalized invexity assumptions.
In this direction we take a step forward and formulate a fractional vector optimization problem over arbitrary cones in which the constraint function and the numerator and denominator of each component of the objective function include support functions of compact convex sets, and each component of the objective function contains the same scalar function in the denominator. We give Fritz-John type necessary optimality conditions by applying the parametric approach and a result given by Craven [6]. Using the constraint qualification given by Suneja et al. [25], we prove KKT type necessary optimality conditions. We establish sufficient optimality conditions under cone-convexity assumptions and illustrate them with examples. We formulate Mond-Weir type and Schaible type duals and give weak and strong duality results. Finally, we relate our primal and dual problems to special cases that often occur in the literature, in which the support function is the square root of a positive semi-definite quadratic form or an Lp norm.

2. Notations and definitions. In this section we recall some basic definitions and results that are used throughout the paper. Let K ⊆ R^m be a closed convex pointed (K ∩ (−K) = {0}) cone such that intK ≠ ∅, where intK denotes the interior of K. The positive dual cone K^+ of K is defined as

K^+ = {y ∈ R^m : x^T y ≥ 0 for all x ∈ K}.
Let h : R^n −→ R and f : R^n −→ R^m.

Definition 2.1. The function h is said to be convex at x̄ ∈ R^n if, for all x ∈ R^n and t ∈ [0, 1],

h(tx + (1 − t)x̄) ≤ th(x) + (1 − t)h(x̄).

The function h is said to be convex if it is convex at each x̄ ∈ R^n.

Definition 2.2. [19] The function f is said to be K-convex at x̄ ∈ R^n if, for all x ∈ R^n and t ∈ [0, 1],

tf(x) + (1 − t)f(x̄) − f(tx + (1 − t)x̄) ∈ K.

The function f is said to be K-convex if it is K-convex at each x̄ ∈ R^n.
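When K = R^m_+, Definition 2.2 reduces to the usual componentwise convexity; this standard observation, recorded here for orientation, is what makes the special cases with K = R^k_+ below consistent with the classical definitions:

```latex
% For K = R^m_+, membership in the cone is a componentwise inequality, so
%   t f(x) + (1-t) f(\bar{x}) - f(t x + (1-t)\bar{x}) \in \mathbb{R}^m_+
% holds for all x and t iff each component f_i is convex in the usual sense:
\[
  f \ \text{is } \mathbb{R}^m_+\text{-convex at } \bar{x}
  \iff
  f_i(tx + (1-t)\bar{x}) \le t f_i(x) + (1-t) f_i(\bar{x}),
  \quad i = 1,\dots,m,\ t \in [0,1].
\]
```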
Here ∇f_i(x̄) denotes the gradient vector of f_i at x̄.
The function f is said to be strictly K-convex at x̄ ∈ R^n if, for all x ∈ R^n \ {x̄} and t ∈ (0, 1),

tf(x) + (1 − t)f(x̄) − f(tx + (1 − t)x̄) ∈ intK.

The function f is said to be strictly K-convex if it is strictly K-convex at each x̄ ∈ R^n.

Remark 2. Every strictly K-convex function is K-convex, but the converse is not true.
Proof. Since f is strictly K-convex, for all x, x̄ ∈ R^n such that x ≠ x̄ and t ∈ (0, 1),

tf(x) + (1 − t)f(x̄) − f(tx + (1 − t)x̄) ∈ intK.   (1)

By Remark 2 it follows that f is K-convex, and by Remark 1 we have that, for all x ∈ R^n,

f(x) − f(x̄) − ∇f(x̄)(x − x̄) ∈ K.

In the above relationship we replace x by tx + (1 − t)x̄ for any t ∈ (0, 1) and get

f(tx + (1 − t)x̄) − f(x̄) − t∇f(x̄)(x − x̄) ∈ K.   (2)

Adding (1) and (2), we get

t[f(x) − f(x̄) − ∇f(x̄)(x − x̄)] ∈ intK.

Since t > 0, we have f(x) − f(x̄) − ∇f(x̄)(x − x̄) ∈ intK.

Definition 2.5. The function h is said to be pseudoconvex at x̄ ∈ R^n if, for every x ∈ R^n, ∇h(x̄)^T(x − x̄) ≥ 0 implies h(x) ≥ h(x̄).

Definition 2.6. The function h is said to be quasiconvex at x̄ ∈ R^n if, for every x ∈ R^n, h(x) ≤ h(x̄) implies ∇h(x̄)^T(x − x̄) ≤ 0.

We now briefly describe Clarke's [5] notions of generalized directional derivative and subdifferential of a locally Lipschitz function.
The function h is said to be locally Lipschitz at x̄ ∈ R^n if there exist a nonnegative constant L and a neighborhood N(x̄) of x̄ such that, for all x, y ∈ N(x̄),

|h(x) − h(y)| ≤ L‖x − y‖.

The Clarke generalized directional derivative of a locally Lipschitz function h at x̄ ∈ R^n in the direction d ∈ R^n is given as

h°(x̄; d) = lim sup_{y→x̄, t↓0} [h(y + td) − h(y)]/t,

where y ∈ R^n and t > 0. The Clarke generalized gradient, or Clarke subdifferential, of a locally Lipschitz function h at x̄ ∈ R^n is given as

∂_c h(x̄) = {ξ ∈ R^n : h°(x̄; d) ≥ ξ^T d for all d ∈ R^n}.

If h is a convex function, then for any x̄ ∈ R^n, h is locally Lipschitz at x̄ and in this case ∂_c h(x̄) coincides with the subdifferential of convex analysis, ∂h(x̄) = {ξ ∈ R^n : h(x) − h(x̄) ≥ ξ^T(x − x̄) for all x ∈ R^n}.

We now review some well known facts about support functions. Let C ⊆ R^n be a compact convex set. The support function of C is defined by

s(x|C) = max{x^T y : y ∈ C}.

The support function of a compact convex set, being convex and finite everywhere, has a subgradient at every x ∈ R^n, and the set of all subgradients at x, that is, the subdifferential at x, is given by

∂s(x|C) = {z ∈ C : z^T x = s(x|C)}.

3. Optimality conditions. Consider the following fractional vector optimization problem:

(FVP)   K-minimize  ( (f_1(x) + s(x|C)k_1)/(h(x) − s(x|E)), . . . , (f_m(x) + s(x|C)k_m)/(h(x) − s(x|E)) )
        subject to  −(g(x) + s(x|D)q) ∈ Q,

where f : R^n −→ R^m, h : R^n −→ R and g : R^n −→ R^p are continuously differentiable functions, C, D and E are non-empty compact convex subsets of R^n, K and Q are closed convex pointed cones in R^m and R^p, respectively, with non-empty interiors, and k = (k_1, k_2, . . . , k_m)^T ∈ intK and q = (q_1, q_2, . . . , q_p)^T ∈ Q are arbitrary but fixed vectors. It is assumed that h(x) − s(x|E) > 0 on the feasible set. The feasible set of (FVP) is given by

S_0 = {x ∈ R^n : −(g(x) + s(x|D)q) ∈ Q}.

Remark 3.
1. If we replace m by k and p by m, replace the function h by g : R^n −→ R, and take K = R^k_+, Q = R^m_+ and C = D = E = {0}, then our problem (FVP) reduces to the problem (MFP) considered by Kim [10] with X = R^n.
2. If we interchange the roles of m and p, replace the function h by g : R^n −→ R, and take K = R^k_+, Q = R^m_+ and C = D = E = {0}, then our problem (FVP) reduces to the problem (NMFP) considered by Kim [11] with X_0 = R^n.
3. If we replace m by k and p by m, replace the function h by q : R^n −→ R, and take K = R^k_+, Q = S and C = D = E = {0}, then our problem (FVP) reduces to the problem (MFP) considered by Chen et al. [4].
4. If we replace m by k and p by m, replace g_j by h_j, and take K = R^k_+, Q = R^m_+ and C = D = E = {0}, then our problem (FVP) reduces to the problem (FP) considered by Suneja and Gupta [24] with g_i = h and S = R^n, and also to the problem (FP) given by Antczak [1] with g_i = h and X = R^n.
5. If we interchange the roles of m and p, replace g_j by the function h_j : R^n −→ R and take k = (1, 1, . . . , 1)^T ∈ R^p, then our problem (FVP) reduces to (MFP) considered by Kim et al. [12] with X_0 = R^n, C_i = C and g_i = h for each i = 1, 2, . . . , p.
6. If we replace m by k and p by m, replace g_i by h_i, and take K = R^k_+, Q = R^m_+ and C = D = E = {0}, then our problem (FVP) reduces to the problem (FP) given by Jayswal et al. [9] with X = R^n.
7. If we take m = 1, interchange p and m, replace g_j by the function h_j : R^n −→ R for each j = 1, 2, . . . , m and the function h by g : R^n −→ R, and take K = R_+, k = 1, Q = R^m_+, C = C_1 and E = C_2, then our problem (FVP) reduces to the problem (FP) considered by Husain et al. [8] with D_j = D for j = 1, 2, . . . , p.
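The special cases mentioned in the Introduction, where the support function is the square root of a positive semi-definite quadratic form or an Lp norm, arise from well-known closed forms of the support function. These are standard facts, recorded here only for reference:

```latex
% For a positive semi-definite matrix B, the set C = { Bu : u^T B u \le 1 }
% is compact and convex, and its support function is the square root of a
% quadratic form:
\[
  s(x \mid C) = \max\{\, x^{T} y : y \in C \,\}
              = \max\{\, x^{T} B u : u^{T} B u \le 1 \,\}
              = (x^{T} B x)^{1/2}.
\]
% Similarly, if C is the unit ball of the \ell_q norm with 1/p + 1/q = 1,
% then Hölder's inequality gives s(x \mid C) = \|x\|_p, which yields the
% L_p-norm special case.
```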
Consider the following vector optimization problem, associated with (FVP) at a point x̄ ∈ S_0 via the parametric approach:

(FVP)_v   K-minimize  f(x) + s(x|C)k − (h(x) − s(x|E))v
          subject to  −(g(x) + s(x|D)q) ∈ Q,

where v = (v_1, v_2, . . . , v_m)^T with v_i = (f_i(x̄) + s(x̄|C)k_i)/(h(x̄) − s(x̄|E)) for i = 1, 2, . . . , m.
Since the constraint in the problems (FVP) and (FVP)_v is the same, their feasible sets coincide.

Lemma 3.2. A point x̄ ∈ S_0 is a weak minimum of (FVP) if and only if it is a weak minimum of (FVP)_v.
Proof. Suppose x̄ is a weak minimum of (FVP) but, if possible, not a weak minimum of (FVP)_v. Then there exists x ∈ S_0 such that

[f(x̄) + s(x̄|C)k − (h(x̄) − s(x̄|E))v] − [f(x) + s(x|C)k − (h(x) − s(x|E))v] ∈ intK,

which, since f(x̄) + s(x̄|C)k − (h(x̄) − s(x̄|E))v = 0 by the definition of v, implies that

(h(x) − s(x|E))v − (f(x) + s(x|C)k) ∈ intK.

Since h(x) − s(x|E) > 0, multiplying the above relation by 1/(h(x) − s(x|E)) gives

(f(x̄) + s(x̄|C)k)/(h(x̄) − s(x̄|E)) − (f(x) + s(x|C)k)/(h(x) − s(x|E)) ∈ intK,

which is a contradiction to the fact that x̄ is a weak minimum of (FVP). Therefore, x̄ is also a weak minimum of (FVP)_v.

Conversely, suppose x̄ is a weak minimum of (FVP)_v and not a weak minimum of (FVP). Then there exists x ∈ S_0 such that

(f(x̄) + s(x̄|C)k)/(h(x̄) − s(x̄|E)) − (f(x) + s(x|C)k)/(h(x) − s(x|E)) ∈ intK.

Since h(x) − s(x|E) > 0 for every x ∈ S_0, multiplying the above relation by h(x) − s(x|E) and using the fact that f(x̄) + s(x̄|C)k = (h(x̄) − s(x̄|E))v, we get

[f(x̄) + s(x̄|C)k − (h(x̄) − s(x̄|E))v] − [f(x) + s(x|C)k − (h(x) − s(x|E))v] ∈ intK,

which is a contradiction to the fact that x̄ is a weak minimum of (FVP)_v. Therefore, x̄ is also a weak minimum of (FVP).
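In the scalar case (m = 1, K = R_+, C = E = {0}) the lemma reduces to the familiar Dinkelbach equivalence. A minimal illustration of this special case, with data chosen purely for exposition:

```latex
% Take f(x) = x^2 + 1 and h(x) = x (so h > 0) on S_0 = [1, 2].
% At \bar{x} = 1 we have v = f(1)/h(1) = 2, and
\[
  \frac{f(x)}{h(x)} = x + \frac{1}{x} \;\ge\; 2 = \frac{f(1)}{h(1)},
  \qquad
  f(x) - 2\,h(x) = (x-1)^{2} \;\ge\; 0 = f(1) - 2\,h(1),
  \qquad x \in [1,2],
\]
% so \bar{x} = 1 minimizes the ratio iff it minimizes the parametric
% objective f - v\,h, exactly as the lemma asserts.
```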
Proof. This lemma can be proved by taking F = Φ, G = Ψ, Q = K, S = Q and T = ∅ (the empty set) in the problem (P) considered by Craven [6] and applying Theorem 2 given by him.
We now establish Fritz-John type necessary optimality conditions for the problem (FVP), based on the above lemma.
Theorem 3.4. Let x̄ ∈ S_0 be a weak minimum of (FVP). Then there exist λ ∈ K^+, µ ∈ Q^+ with (λ, µ) ≠ 0, and z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that

Σ_{i=1}^m λ_i ∇[(f_i(x) + (x^T z)k_i)/(h(x) − x^T y)]|_{x=x̄} + Σ_{j=1}^p µ_j (∇g_j(x̄) + q_j w) = 0,   (3)

µ^T g(x̄) + (µ^T q) s(x̄|D) = 0.   (4)

Proof. Since x̄ is a weak minimum of (FVP), by Lemma 3.2, x̄ is also a weak minimum of (FVP)_v. Let Φ : R^n −→ R^m and Ψ : R^n −→ R^p be such that

Φ(x) = f(x) + s(x|C)k − (h(x) − s(x|E))v   and   Ψ(x) = g(x) + s(x|D)q.
Since s(·|C), s(·|E) and s(·|D) are convex functions, they are locally Lipschitz at any x ∈ R^n, and hence s(·|C)k, s(·|E)v and s(·|D)q are vector-valued locally Lipschitz functions at any x ∈ R^n.
Also, −h is continuously differentiable and therefore locally Lipschitz at any x ∈ R^n, so −h(·)v is a vector-valued locally Lipschitz function at any x ∈ R^n; f and g, being vector-valued continuously differentiable functions, are also vector-valued locally Lipschitz at any x ∈ R^n. Since a sum of locally Lipschitz functions is locally Lipschitz, Φ and Ψ are vector-valued locally Lipschitz functions. Now, by Lemma 3.3, there exist λ ∈ K^+ and µ* ∈ Q^+ with (λ, µ*) ≠ 0 such that

0 ∈ ∂_c(λ^T Φ)(x̄) + ∂_c(µ*^T Ψ)(x̄)   and   (µ*^T Ψ)(x̄) = 0,

that is, (µ*^T g)(x̄) + s(x̄|D)(µ*^T q) = 0. By the convexity of the support function, ∂_c s(x̄|C) = ∂s(x̄|C), ∂_c s(x̄|E) = ∂s(x̄|E) and ∂_c s(x̄|D) = ∂s(x̄|D). Thus there exist z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that

Σ_{i=1}^m λ_i[∇f_i(x̄) + k_i z − v_i(∇h(x̄) − y)] + Σ_{j=1}^p µ*_j[∇g_j(x̄) + q_j w] = 0,

and, substituting the value of v_i for i = 1, 2, . . . , m and then the value µ* = (h(x̄) − x̄^T y)µ, we get that there exist λ ∈ K^+, µ ∈ Q^+ with (λ, µ) ≠ 0, and z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that equations (3) and (4) hold.
On the lines of Suneja et al. [25], we establish the following Kuhn-Tucker type necessary optimality conditions for the problem (FVP).
Theorem 3.5. Let x̄ ∈ S_0 be a weak minimum of (FVP). Then there exist λ ∈ K^+, µ ∈ Q^+ with (λ, µ) ≠ 0, and z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that (3) and (4) hold. Moreover, if g is Q-convex at x̄ and there exists x* ∈ R^n such that

−(g(x*) + s(x*|D)q) ∈ intQ,   (7)

then λ ≠ 0.
Proof. Let x̄ be a weak minimum of (FVP). We invoke Theorem 3.4 to deduce that there exist λ ∈ K^+ and µ ∈ Q^+ with (λ, µ) ≠ 0, and z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that (3) and (4) hold. Suppose now that g is Q-convex at x̄ and that there exists x* ∈ R^n satisfying (7). We have to prove that λ ≠ 0. Let, if possible, λ = 0; then µ ≠ 0 and (3) reduces to

Σ_{j=1}^p µ_j(∇g_j(x̄) + q_j w) = 0.   (8)

Now, as g is Q-convex at x̄,

g(x*) − g(x̄) − ∇g(x̄)(x* − x̄) ∈ Q.   (9)

Since w ∈ ∂s(x̄|D), we have

s(x*|D) ≥ x*^T w.   (10)

Adding relations (9) and (10) and using the facts that µ ∈ Q^+ and w ∈ ∂s(x̄|D), that is, x̄^T w = s(x̄|D), we get

µ^T g(x*) + (µ^T q)s(x*|D) − µ^T g(x̄) − (µ^T q)s(x̄|D) ≥ [Σ_{j=1}^p µ_j(∇g_j(x̄) + q_j w)]^T(x* − x̄).

Using (4) and (8) in the above inequality, we get

µ^T(g(x*) + s(x*|D)q) ≥ 0,

which is a contradiction to (7), since µ ∈ Q^+ \ {0} and −(g(x*) + s(x*|D)q) ∈ intQ together give µ^T(g(x*) + s(x*|D)q) < 0. Hence, λ ≠ 0.
Next we give sufficient optimality conditions for a point to be a weak minimum of the problem (FVP).

Theorem 3.6. Let f be K-convex, −h be convex and g be Q-convex at x̄ ∈ S_0. Suppose that there exist λ ∈ K^+ \ {0}, µ ∈ Q^+, z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that (3) and (4) hold and f(x̄) + (x̄^T z)k ∈ K. Then x̄ is a weak minimum of (FVP).
Proof. Let, if possible, x̄ not be a weak minimum of (FVP). Then there exists x ∈ S_0 such that

(f(x̄) + s(x̄|C)k)/(h(x̄) − s(x̄|E)) − (f(x) + s(x|C)k)/(h(x) − s(x|E)) ∈ intK.

Since z ∈ ∂s(x̄|C) and y ∈ ∂s(x̄|E), we have x̄^T z = s(x̄|C) and x̄^T y = s(x̄|E). Using the fact that h(x̄) − s(x̄|E) > 0 in the above relation, then the fact that z ∈ C, so that s(x|C) ≥ x^T z, and then the fact that y ∈ E, so that s(x|E) ≥ x^T y, together with f(x̄) + (x̄^T z)k ∈ K, we obtain relations (11), (12) and (13). Adding (11), (12) and (13), and noting that y ∈ E gives s(x|E) ≥ x^T y and hence h(x) − x^T y ≥ h(x) − s(x|E) > 0, we may rewrite the resulting relation with the modified denominators. Similarly, since s(x̄|E) ≥ x̄^T y, we have h(x̄) − x̄^T y ≥ h(x̄) − s(x̄|E) > 0; multiplying by h(x̄) − x̄^T y and using the K-convexity of f and the convexity of −h at x̄ yields (14), (15) and (16). Adding (14), (15) and (16), and again using h(x̄) − x̄^T y > 0, we arrive at

Σ_{i=1}^m λ_i [∇((f_i(x) + (x^T z)k_i)/(h(x) − x^T y))|_{x=x̄}]^T (x − x̄) < 0.

Using (3) in the above inequality, we get

[Σ_{j=1}^p µ_j(∇g_j(x̄) + q_j w)]^T (x − x̄) > 0.   (17)

Now, since w ∈ ∂s(x̄|D), we have

s(x|D) ≥ x^T w and x̄^T w = s(x̄|D),   (18)

and since g is Q-convex at x̄,

g(x) − g(x̄) − ∇g(x̄)(x − x̄) ∈ Q.   (19)

Adding (18) and (19) and using µ ∈ Q^+, we get

µ^T g(x) + (µ^T q)s(x|D) − µ^T g(x̄) − (µ^T q)s(x̄|D) ≥ [Σ_{j=1}^p µ_j(∇g_j(x̄) + q_j w)]^T (x − x̄).   (20)

Adding (17) and (20) and using (4), we get

µ^T(g(x) + s(x|D)q) > 0.   (21)

Since x ∈ S_0, we have −g(x) − s(x|D)q ∈ Q.
Since µ ∈ Q^+, this gives µ^T(g(x) + s(x|D)q) ≤ 0, which is a contradiction to inequality (21). Hence x̄ is a weak minimum of (FVP). Now we give an example to illustrate the above theorem.
Example 1. Consider the problem (NVP). The feasible set of the problem (NVP) is S_0. Then f is K-convex at x̄, because for every x ∈ R the defining relation of Definition 2.2 holds; g is Q-convex at x̄ by the same verification; and it is clear that −h is convex at x̄ and h(x) − s(x|E) > 0 for all x ∈ S_0. Also, there exists λ = (−1, 3)^T ∈ K^+, together with µ, z, y and w as required, such that conditions (3) and (4) hold. Therefore, x̄ is a weak minimum of problem (NVP). Now, proceeding on the lines of the proof of Theorem 3.6, we give the following sufficient optimality conditions for a point to be a minimum of (FVP).
Theorem 3.7. Let f be strictly K-convex, −h be convex and g be Q-convex at x̄ ∈ S_0. Suppose that there exist λ ∈ K^+ \ {0}, µ ∈ Q^+, z ∈ ∂s(x̄|C), y ∈ ∂s(x̄|E) and w ∈ ∂s(x̄|D) such that (3) and (4) hold and f(x̄) + (x̄^T z)k ∈ K. Then x̄ is a minimum of (FVP).
Remark 4. If we take m = 1, interchange p and m, replace g_j by the function h_j : R^n −→ R for each j = 1, 2, . . . , m and the function h by g : R^n −→ R, and take K = R_+, k = 1, Q = R^m_+, C = C_1 and E = C_2, then our dual (FMD) reduces to the dual (FD) considered by Husain et al. [8] with D_j = D for j = 1, 2, . . . , p.
Next we establish the weak duality relation between the primal problem (FVP) and its Mond-Weir type dual (FMD).