Solving the interval-valued optimization problems based on the concept of null set

We introduce the concept of null set in the space of all bounded closed intervals. Based on this concept, we can define two partial orderings according to the subtraction and the Hukuhara difference between any two bounded closed intervals, which are used to define the solution concepts of interval-valued optimization problems. On the other hand, we transform the interval-valued optimization problem into a conventional vector optimization problem. Under these settings, we can apply the technique of scalarization to solve this transformed vector optimization problem. Finally, we show that the optimal solution of the scalarized problem is also the optimal solution of the original interval-valued optimization problem.


1. Introduction. Imposing uncertainty upon optimization problems is an interesting research topic. The uncertainty may be interpreted as randomness or fuzziness. Optimization problems involving randomness are categorized as stochastic optimization problems. The books written by Birge and Louveaux [2], Kall [11], Prékopa [13], Stancu-Minasian [16] and Vajda [17] provide many interesting ideas and useful techniques for tackling stochastic optimization problems. On the other hand, optimization problems involving fuzziness are categorized as fuzzy optimization problems. The collections of papers on fuzzy optimization edited by Słowiński [14] and Delgado et al. [8] give many interesting topics. The fusion of randomness and fuzziness in optimization problems is an even more challenging research topic. The book edited by Słowiński and Teghem [15] gives comparisons between fuzzy optimization and stochastic optimization for multiobjective programming problems. Inuiguchi and Ramík [9] also give a brief review of fuzzy optimization and a comparison with stochastic optimization in the portfolio selection problem.
In the stochastic optimization problems, the coefficients of the problem are assumed to be random variables with known probability distributions. On the other hand, in the fuzzy optimization problems, the coefficients of the problem are assumed to be fuzzy numbers with known membership functions. However, the specifications of probability distributions and membership functions in the stochastic optimization problems and fuzzy optimization problems, respectively, are very subjective. For example, many researchers adopt the Gaussian (normal) distributions with different parameters in the stochastic optimization problems, and the bell-shaped or S-shaped membership functions in the fuzzy optimization problems. These specifications may make the optimization problems more complicated to solve. Therefore interval-valued optimization problems provide an alternative choice for incorporating uncertainty into optimization problems. That is to say, the coefficients in the interval-valued optimization problems are assumed to be bounded closed intervals. Although the specifications of bounded closed intervals may still be judged as subjective, we may argue that the bounds of uncertain data (i.e., determining the bounded closed intervals that bound the possible observed data) are easier to handle than specifying the probability distributions and membership functions in stochastic optimization and fuzzy optimization problems, respectively.
The duality theorems and optimality conditions for interval-valued optimization problems have been studied in Wu [18,19,20,21] using the so-called Hukuhara derivative. Recently, Chalco-Cano et al. [4] and Osuna-Gomez et al. [12] extended this study of the optimality conditions by using the so-called generalized Hukuhara derivative. Also, Jayswal et al. [10] studied the duality theorems and optimality conditions by using the concept of generalized convexity. Bhurjee and Panda [1] studied the efficient solution of the interval optimization problem by using the parametric form of interval-valued functions. Many other interesting articles regarding the interval-valued optimization problems can also be consulted from the references therein.
In this paper, we are going to solve the interval-valued optimization problems based on the concept of null set. Let I denote the space of all bounded closed intervals. We also assume that I is endowed with the interval addition and scalar multiplication as follows. Given A = [a^L, a^U] and B = [b^L, b^U] in I, the interval addition is defined to be the set addition A ⊕ B = {a + b : a ∈ A, b ∈ B}. Then it is clear to see that A ⊕ B = [a^L + b^L, a^U + b^U]. For k ∈ R and A ∈ I, the scalar multiplication is defined by kA = {ka : a ∈ A}. Then it is clear to see that kA = [ka^L, ka^U] for k ≥ 0 and kA = [ka^U, ka^L] for k < 0. We also have A ⊖ A = [a^L − a^U, a^U − a^L] ≠ [0, 0] when a^L < a^U. This says that the difference between a bounded closed interval and itself cannot be the zero element. In other words, each element in I cannot have an additive inverse element. This also says that the set I cannot form a real vector space. Although I cannot be a real vector space, the interval addition and scalar multiplication described above can still be used to define the concept of convex cone given in Definition 2.4 below. Based on a convex cone, we can define two different types of partial orderings on I, given in Definition 3.2 below. In order to define the partial orderings via the convex cone, two kinds of difference, called the subtraction A ⊖ B and the Hukuhara difference A ⊖_H B between any two bounded closed intervals A and B, will be defined in Definition 2.1 below.
Since the difference A ⊖ A cannot be the zero element for any A ∈ I as described above, we define the so-called null set, which can be regarded as a kind of "zero element" of I, for the purpose of defining two partial orderings: A ⪯ B based on the subtraction A ⊖ B, and A ⪯_H B based on the Hukuhara difference A ⊖_H B. Using these two partial orderings, the solution concepts of interval-valued optimization problems can be defined.
In order to solve the interval-valued optimization problem, we are going to transform it into a vector optimization problem. Given an interval-valued function f : U → I defined on a vector space U, we are going to minimize f subject to some constraints. This is categorized as the interval-valued optimization problem. The solution concepts of interval-valued optimization problems can be defined as described above. Let L : I → V be a function from I into another vector space V. Then we can consider the composition function L • f : U → V of functions L and f. In this case, we are going to minimize the vector-valued function L • f subject to the same constraints. This transformed problem is categorized as the vector optimization problem. Considering the vector optimization problem to solve the interval-valued optimization problem is a new attempt, to the best of the author's knowledge. By providing a suitable linear functional φ : V → R, we are going to solve the scalar optimization problem of minimizing (φ • L • f) : U → R subject to the same constraints. The solution concepts of the vector optimization problem and the scalar optimization problem are defined in the conventional way. The key issue is to show that the optimal solution of the scalar optimization problem is also the optimal solution of the original interval-valued optimization problem. This will be presented in section 5.
In section 2, we present many properties of the space of all bounded closed intervals that will be used in the further study. In section 3, by introducing a vector-valued function defined on the space of all bounded closed intervals, we shall present the order-preserving properties that will be used to study the optimal solutions. In section 4, we formulate a vector optimization problem that can be regarded as an auxiliary problem for the purpose of solving the original interval-valued optimization problems. In section 5, we apply the scalarization technique to solve the vector optimization problem proposed in section 4. Finally, in section 6, we apply the proposed methodology to solve the interval-valued linear programming problems.
2. Interval spaces. Throughout this paper, let I be the set of all bounded and closed intervals in R. Since any real number x ∈ R can be regarded as the bounded closed interval [x, x], it means that R is contained in I. Given A = [a^L, a^U] and B = [b^L, b^U] in I, the interval addition is given by A ⊕ B = [a^L + b^L, a^U + b^U], and the scalar multiplication in I is given by kA = [ka^L, ka^U] for k ≥ 0 and kA = [ka^U, ka^L] for k < 0. By the above definition, we have −A = (−1)A = [−a^U, −a^L].
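As a concrete illustration (a sketch; the tuple representation and function names are assumptions for illustration, not part of the paper), the interval addition and scalar multiplication on I can be computed as follows:

```python
# Intervals [aL, aU] are represented as tuples (aL, aU).

def iadd(A, B):
    """Interval addition A ⊕ B = [aL + bL, aU + bU]."""
    return (A[0] + B[0], A[1] + B[1])

def smul(k, A):
    """Scalar multiplication kA = {ka : a in A}."""
    if k >= 0:
        return (k * A[0], k * A[1])
    return (k * A[1], k * A[0])  # endpoints swap for negative k

A, B = (1, 3), (2, 5)
print(iadd(A, B))    # (3, 8)
print(smul(2, A))    # (2, 6)
print(smul(-1, A))   # (-3, -1): this is −A = [−aU, −aL]
```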

HSIEN-CHUNG WU
The concepts of subtraction and Hukuhara difference between any two bounded closed intervals are clarified below.
Definition 2.1. Let A = [a^L, a^U] and B = [b^L, b^U] be two bounded closed intervals.
• The subtraction of B from A is denoted by A ⊖ B and defined by A ⊖ B = A ⊕ (−B) = [a^L − b^U, a^U − b^L].
• If there exists a bounded closed interval C such that A = B ⊕ C, then we say that C is the Hukuhara difference between A and B. In this case, we write C = A ⊖_H B.
It is clear that the subtraction A ⊖ B always exists. However, the Hukuhara difference A ⊖_H B does not necessarily exist in general.
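Definition 2.1 can be sketched as follows (the helper names and tuple representation are assumptions for illustration; note that the Hukuhara difference exists only when the width of A is at least the width of B):

```python
# Intervals [aL, aU] are represented as tuples (aL, aU).

def isub(A, B):
    """Subtraction A ⊖ B = A ⊕ (−B) = [aL − bU, aU − bL]; always exists."""
    return (A[0] - B[1], A[1] - B[0])

def hdiff(A, B):
    """Hukuhara difference A ⊖_H B: the interval C with A = B ⊕ C, if it exists."""
    C = (A[0] - B[0], A[1] - B[1])
    return C if C[0] <= C[1] else None  # exists only when width(A) >= width(B)

A, B = (1, 5), (0, 2)
print(isub(A, B))    # (-1, 5)
print(hdiff(A, B))   # (1, 3): indeed B ⊕ (1, 3) = (1, 5) = A
print(hdiff(B, A))   # None: width(B) < width(A), so B ⊖_H A does not exist
```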
Suppose that A = B ⊕ C. Then a^L = b^L + c^L and a^U = b^U + c^U, which says that the Hukuhara difference, when it exists, is given by A ⊖_H B = C = [a^L − b^L, a^U − b^U]. Now we have A ⊖ A = [a^L − a^U, a^U − a^L] ≠ [0, 0] when a^L < a^U, which says that the additive inverse element in I does not exist. One of the reasons is that the concept of "zero element" of I is not defined. This also says that I cannot form a vector space under the above interval addition and scalar multiplication.
The following set Ω = {A ⊖ A : A ∈ I} is called the null set of I, which can be regarded as a kind of "zero element" of I. We also see that the true zero element of I is [0, 0], since it is clear that A ⊕ [0, 0] = A for any A ∈ I. This also says that A ⊖_H A = [0, 0]. Remark 1. We have the following observations: ω ∈ Ω if and only if ω = [−k, k] for some k ≥ 0; in particular, −ω = ω for any ω ∈ Ω, and Ω is closed under the interval addition and scalar multiplication.
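The observation that the null set consists exactly of the symmetric intervals [−k, k] can be checked numerically (a sketch; the tuple representation and helper names are assumptions for illustration):

```python
# The null set Ω = {A ⊖ A : A ∈ I} consists of the symmetric intervals [−k, k], k ≥ 0.

def isub(A, B):
    """Subtraction A ⊖ B = [aL − bU, aU − bL]."""
    return (A[0] - B[1], A[1] - B[0])

def in_null_set(A):
    """A ∈ Ω iff A = [−k, k] for some k ≥ 0, i.e., its endpoints sum to zero."""
    return A[0] + A[1] == 0 and A[0] <= A[1]

for A in [(1, 3), (-2, 4), (0, 0)]:
    print(isub(A, A), in_null_set(isub(A, A)))  # every A ⊖ A lies in Ω
```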
Since the null set Ω can be regarded as a kind of "zero element", we can propose the concept of almost identical elements in I: we write A =_Ω B if and only if A ⊕ ω_1 = B ⊕ ω_2 for some ω_1, ω_2 ∈ Ω. Suppose that A =_Ω B, i.e., A ⊕ [−k_1, k_1] = B ⊕ [−k_2, k_2] for some k_1, k_2 ≥ 0. We consider the following cases.
• If k_1 ≥ k_2, then we can obtain A ⊕ ω = B, where ω is taken as ω = [−(k_1 − k_2), k_1 − k_2].
• If k_1 < k_2, then we can obtain A = B ⊕ ω, where ω is taken as ω = [−(k_2 − k_1), k_2 − k_1].
To prove part (ii), it is clear that A ∈ Ω implies A =_Ω ω for some ω ∈ Ω. Now we assume that A ⊕ ω_1 = ω ⊕ ω_2 for some ω_1, ω_2 ∈ Ω. Since ω ⊕ ω_2 ∈ Ω, we can say that A ⊕ [−k_1, k_1] = [−k_2, k_2] for some k_1, k_2 ≥ 0, which implies a^L = k_1 − k_2 and a^U = k_2 − k_1. Since a^L ≤ a^U, it follows that k_1 ≤ k_2 and A ∈ Ω. This completes the proof. Proposition 2. The following statements hold true.
the function L is linear. We can also define a function L that is not linear but can be shown to be additive and positively homogeneous.
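A concrete candidate (an illustrative assumption, consistent with the construction used in section 6) is the map L : I → R defined by L([a^L, a^U]) = a^L + a^U, which is additive and positively homogeneous and vanishes exactly on Ω:

```python
# Sketch of an additive and positively homogeneous L with ker L = Ω.

def L(A):
    """L([aL, aU]) = aL + aU."""
    return A[0] + A[1]

def iadd(A, B):
    return (A[0] + B[0], A[1] + B[1])

def smul(k, A):
    return (k * A[0], k * A[1]) if k >= 0 else (k * A[1], k * A[0])

A, B = (1, 3), (-2, 5)
assert L(iadd(A, B)) == L(A) + L(B)   # additivity
assert L(smul(2.5, A)) == 2.5 * L(A)  # positive homogeneity
assert L((-4, 4)) == 0                # Ω ⊆ ker L: L(ω) = 0 for ω = [−k, k]
print("L is additive and positively homogeneous with ker L = Ω")
```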
Since −ω = ω for any ω ∈ Ω, we can obtain the following interesting results that will be used in the further discussion. Proposition 3. The following statements hold true. (i) Suppose that L : I → V is linear. Then L(ω) = θ_V for any ω ∈ Ω, where θ_V is the zero element of the vector space V. (ii) Suppose that L(ω) = θ_V for any ω ∈ Ω, and that L is additive. Then L(A ⊕ ω) = L(A) for any A ∈ I and ω ∈ Ω, and L(A ⊖ B) = L(A) − L(B) for any A, B ∈ I. Although the interval space I is not a vector space, we can also consider the concept of convexity based on the interval addition and scalar multiplication.
Definition 2.4. Let C be a subset of I. We say that C is convex if and only if λA ⊕ (1 − λ)B ∈ C for A, B ∈ C and λ ∈ [0, 1]. We say that C is a cone if and only if λA ∈ C for A ∈ C and λ > 0. A cone C is said to be a convex cone if and only if it is also convex.
We remark that a cone C does not necessarily contain [0, 0], since λ = 0 is not considered in the definition. From Remark 1, it is easy to see that the null set Ω is a convex cone.
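The cone and convexity properties of Ω can be checked numerically (a sketch; the tuple representation and helper names are assumptions for illustration):

```python
# Checking that Ω = {[−k, k] : k ≥ 0} behaves as a convex cone
# under the interval operations.

def iadd(A, B):
    return (A[0] + B[0], A[1] + B[1])

def smul(k, A):
    return (k * A[0], k * A[1]) if k >= 0 else (k * A[1], k * A[0])

def omega(k):
    return (-k, k)

w1, w2 = omega(2), omega(5)
lam = 0.3
cone_elem = smul(4.0, w1)                             # λA with λ > 0
convex_comb = iadd(smul(lam, w1), smul(1 - lam, w2))  # λA ⊕ (1−λ)B
print(cone_elem)    # (-8.0, 8.0): still symmetric, hence in Ω
print(convex_comb)  # (-4.1, 4.1): still symmetric, hence in Ω
```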
For example, consider the set C = {[c^L, c^U] ∈ I : c^L + c^U ≥ 0}. For A, B ∈ C and λ ∈ [0, 1], the endpoint sums of λA and of λA ⊕ (1 − λ)B remain nonnegative, which says that C is a convex cone. We also see that Ω ⊆ C.
Proposition 4. Let C be a convex cone in I. Then A ⊕ B ∈ C for any A, B ∈ C.
Proof. Since C is convex, we have (1/2)A ⊕ (1/2)B ∈ C. Then A ⊕ B = 2((1/2)A ⊕ (1/2)B) ∈ C, since C is a cone. This completes the proof.
The following interesting and useful result is clear. Proposition 5. Let L : I → V be an additive and positively homogeneous function from I into a vector space V, and let C be a convex cone in I. Then L(C) is a convex cone in V. 3. Order-preserving. In this section, we shall consider many kinds of partial orderings and present the order-preserving properties under a transformation.
Definition 3.2. Let C be a convex cone in I. For A, B ∈ I, we define two binary relations on I as follows: A ⪯ B if and only if (B ⊖ A) ⊕ ω_1 = C ⊕ ω_2 for some C ∈ C and ω_1, ω_2 ∈ Ω; and A ⪯_H B if and only if the Hukuhara difference B ⊖_H A exists and (B ⊖_H A) ⊕ ω ∈ C for some ω ∈ Ω. Proof. To prove part (i), we have A ⊖ A ∈ Ω ⊆ C for any A ∈ I by the assumption, which shows A ⪯ A by Definition 3.2.
To prove the transitivity, suppose that A ⪯ B and B ⪯ D. By definition, we have (B ⊖ A) ⊕ ω_1 = C_1 ⊕ ω_2 and (D ⊖ B) ⊕ ω_3 = C_2 ⊕ ω_4 for some C_1, C_2 ∈ C and ω_i ∈ Ω for i = 1, · · · , 4. By adding the above two equalities side by side, we obtain (D ⊖ A) ⊕ (B ⊖ B) ⊕ ω_1 ⊕ ω_3 = (C_1 ⊕ C_2) ⊕ ω_2 ⊕ ω_4. Since Ω is closed under the interval addition and C_1 ⊕ C_2 ∈ C by Proposition 4, this equality says that A ⪯ D, which shows the transitivity.
To prove part (iv), by definition, we have (B ⊖ A) ⊕ ω_1 = C_1 ⊕ ω_2 and (E ⊖ D) ⊕ ω_3 = C_2 ⊕ ω_4 for some C_1, C_2 ∈ C and ω_i ∈ Ω for i = 1, · · · , 4. By adding the above equalities side by side, we obtain ((B ⊕ E) ⊖ (A ⊕ D)) ⊕ ω_1 ⊕ ω_3 = (C_1 ⊕ C_2) ⊕ ω_2 ⊕ ω_4. Since Ω is closed under the interval addition, by using Proposition 4 again, we conclude that A ⊕ D ⪯ B ⊕ E. This completes the proof.
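For the particular cone C = {[c^L, c^U] : c^L + c^U ≥ 0}, the partial ordering ⪯ of Definition 3.2 reduces to a comparison of endpoint sums, since (B ⊖ A) has endpoint sum (b^L + b^U) − (a^L + a^U) and every ω ∈ Ω has endpoint sum 0. A sketch under this reading (the helper names and tuple representation are assumptions for illustration):

```python
# A ⪯ B for the cone C = {[cL, cU] : cL + cU >= 0} reduces to aL + aU <= bL + bU.

def isub(A, B):
    """Subtraction A ⊖ B = [aL − bU, aU − bL]."""
    return (A[0] - B[1], A[1] - B[0])

def in_cone(C):
    """Membership in the cone C = {[cL, cU] : cL + cU >= 0}."""
    return C[0] + C[1] >= 0

def preceq(A, B):
    """A ⪯ B for this particular cone."""
    return in_cone(isub(B, A))

A, B = (1, 2), (0, 4)
print(preceq(A, B))   # True:  1 + 2 <= 0 + 4
print(preceq(B, A))   # False: 0 + 4 >  1 + 2
```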
Proposition 7. Let C be a convex cone in I. We have the following properties.
that is, the binary relation ⪯_H is compatible with the set addition.
Proof. To prove part (i), since A = A ⊕ [0, 0], we see that A ⊖_H A = [0, 0] ∈ Ω ⊆ C, which shows A ⪯_H A. To prove the transitivity, suppose that A ⪯_H B and B ⪯_H C. Then E = B ⊖_H A and F = C ⊖_H B exist with E ⊕ ω_1 ∈ C and F ⊕ ω_2 ∈ C for some ω_1, ω_2 ∈ Ω, and we have C = A ⊕ E ⊕ F, which says that C ⊖_H A = E ⊕ F. Moreover, (E ⊕ F) ⊕ (ω_1 ⊕ ω_2) = (E ⊕ ω_1) ⊕ (F ⊕ ω_2) ∈ C by Proposition 4. Since Ω is closed under the interval addition, it follows that A ⪯_H C, which shows that the binary relation ⪯_H is transitive.
To prove part (iii), since A ⪯_H B, by definition, we have that C = B ⊖_H A exists and C ⊕ ω ∈ C for some ω ∈ Ω. Then B = A ⊕ C, which implies λB = λA ⊕ λC since λ > 0. This shows that λC = λB ⊖_H λA. Since C is a cone, we see that λC ⊕ λω = λ(C ⊕ ω) ∈ C, which shows λA ⪯_H λB. To prove part (iv), by definition, we first have that E = B ⊖_H A and F = D ⊖_H C exist, and that E ⊕ ω_1 ∈ C and F ⊕ ω_2 ∈ C for some ω_1, ω_2 ∈ Ω. We also have B ⊕ D = (A ⊕ C) ⊕ (E ⊕ F), which says that (B ⊕ D) ⊖_H (A ⊕ C) = E ⊕ F and (E ⊕ F) ⊕ (ω_1 ⊕ ω_2) = (E ⊕ ω_1) ⊕ (F ⊕ ω_2) ∈ C by Proposition 4. Since Ω is closed under the interval addition, it follows that A ⊕ C ⪯_H B ⊕ D. This completes the proof.
Let C be a subset of I. We define C⁻ = {A ∈ I : A ⊕ B ⊕ ω_1 = ω_2 for some B ∈ C and some ω_1, ω_2 ∈ Ω}. From part (ii) of Proposition 1, we also see that A ∈ C⁻ if and only if A ⊕ B =_Ω ω for some B ∈ C and ω ∈ Ω; that is, there exist k_1, k_2 ∈ R_+ such that (A ⊕ B) ⊕ [−k_1, k_1] = [−k_2, k_2]. The following useful result is not hard to prove.
Since C ⊆ C ⊕ Ω, it is clear that if C is strongly pointed-like then it is also pointed-like.
Consider again the set C = {[c^L, c^U] ∈ I : c^L + c^U ≥ 0}. Suppose that [a^L, a^U] ∈ (C ⊕ Ω) ∩ C⁻. Then a^L = c^L − k, a^U = c^U + k and a^L + a^U ≤ 0 for some [c^L, c^U] ∈ C and k ≥ 0. Since c^L + c^U ≥ 0, it follows that a^L + a^U = (c^L − k) + (c^U + k) = c^L + c^U ≥ 0. Therefore we must have a^L + a^U = 0, i.e., a^L = −a^U, which says that [a^L, a^U] ∈ Ω. Therefore we conclude that C is strongly pointed-like, and is also pointed-like. Proposition 9. Let C be a convex cone in I. Suppose that C is strongly pointed-like. Then we have the following properties.

This shows that (B ⊖_H A) ⊕ ω_2 ∈ C⁻, since (A ⊖_H B) ⊕ ω_1 ∈ C. In other words, we have

This shows that A =_Ω B.
To prove part (ii), suppose that A ⪯_H ω̄_1 and ω̄_2 ⪯_H A for some ω̄_1, ω̄_2 ∈ Ω. By definition, we have that E = ω̄_1 ⊖_H A and F = A ⊖_H ω̄_2 exist, and that E ⊕ ω_1 ∈ C and F ⊕ ω_2 ∈ C for some ω_1, ω_2 ∈ Ω. We also have ω̄_1 = A ⊕ E and A = ω̄_2 ⊕ F, which imply ω̄_1 = ω̄_2 ⊕ F ⊕ E. Since Ω is closed under the interval addition, this says that A =_Ω ω̄_1. This completes the proof. Proposition 11. Let ⪰ be an arbitrary binary relation on I. If ⪰ is compatible with the interval addition and scalar multiplication, then the following set C = {A ∈ I : A ⪰ ω for some ω ∈ Ω} is a convex cone. If we further assume that A ⪰ ω_1 and ω_2 ⪰ A for some ω_1, ω_2 ∈ Ω implies A =_Ω ω for some ω ∈ Ω, then C is also strongly pointed-like.
Proof. Suppose that A ∈ C, i.e., A ⪰ ω for some ω ∈ Ω. Then λA ⪰ λω for λ > 0 by the compatibility with scalar multiplication. Since Ω is a cone, i.e., λω ∈ Ω, it shows that λA ∈ C. Suppose that A, B ∈ C, i.e., A ⪰ ω_1 and B ⪰ ω_2 for some ω_1, ω_2 ∈ Ω. Therefore, by the compatibility with interval addition and scalar multiplication, we have λA ⊕ (1 − λ)B ⪰ λω_1 ⊕ (1 − λ)ω_2 ∈ Ω for λ ∈ [0, 1], which shows that C is convex. Under the further assumption, we want to show that C is also strongly pointed-like. For A ∈ (C ⊕ Ω) ∩ C⁻, we have A ∈ C⁻ and A = C ⊕ ω_1 for some C ∈ C and ω_1 ∈ Ω. Therefore we have C ⪰ ω_2 for some ω_2 ∈ Ω, which implies C ⊕ ω_1 ⪰ ω_1 ⊕ ω_2 by adding ω_1 on both sides. This shows A ⪰ ω_1 ⊕ ω_2. On the other hand, since A ∈ C⁻, by definition, there exist B ∈ C and ω_3, ω_4 ∈ Ω such that A ⊕ B ⊕ ω_3 = ω_4. Since B ∈ C, we also have B ⪰ ω_5 for some ω_5 ∈ Ω, which implies A ⊕ B ⊕ ω_3 ⪰ A ⊕ ω_3 ⊕ ω_5 by adding A ⊕ ω_3 on both sides, i.e., ω_4 ⪰ A ⊕ ω_3 ⊕ ω_5. By adding ω_3 ⊕ ω_5 on both sides of the earlier relation A ⪰ ω_1 ⊕ ω_2, we can obtain A ⊕ ω_3 ⊕ ω_5 ⪰ ω_1 ⊕ ω_2 ⊕ ω_3 ⊕ ω_5. Since Ω is closed under the interval addition, using the further assumption, it follows that these two relations imply A ⊕ ω_3 ⊕ ω_5 =_Ω ω_6 for some ω_6 ∈ Ω, which also says that A =_Ω ω for some ω ∈ Ω. Using Remark 2, we complete the proof. The difference between ⪯ and ⪯_H will become clearer when we investigate the order-preserving properties.
Proposition 13. Let L : I → V be an additive and positively homogeneous function from I into a vector space V, and let C be a convex cone in I. Then the binary relation ⪯ on L(I), defined by L(A) ⪯ L(B) if and only if L(B) − L(A) ∈ L(C), is transitive and compatible with the vector addition and scalar multiplication on L(I). If we further assume that θ_V ∈ L(C), then the binary relation ⪯ is reflexive.
Proof. Since L(C) is a convex cone in the vector space V by Proposition 5, the conventional argument is applicable for proving the desired results. Proof. To prove part (i), by definition, we have (B ⊖ A) ⊕ ω_1 ∈ C ⊕ Ω for some ω_1 ∈ Ω, which also says that (B ⊖ A) ⊕ ω_1 = C ⊕ ω_2 for some C ∈ C and ω_2 ∈ Ω. By adding A on both sides, we obtain B ⊕ ω_3 ⊕ ω_1 = A ⊕ C ⊕ ω_2, where ω_3 = A ⊖ A ∈ Ω. Using the additivity, from part (i) of Proposition 3, we have L(B) = L(A) + L(C).

This shows that L(A) ⪯ L(B).
To prove part (ii), part (ii) of Proposition 3 says that L(A) ⪯ L(B) implies L(B ⊖ A) ∈ L(C). Therefore there exists C ∈ C such that L(B ⊖ A) = L(C), i.e., L(B ⊖ A) − L(C) = θ_V, the zero element of the vector space V. Using part (ii) of Proposition 3 again, we obtain B ⊖ A ⊖ C ∈ ker L, which says that B ⊖ A ⊖ C = D for some D ∈ ker L. By adding C on both sides, we obtain (B ⊖ A) ⊕ ω = C ⊕ D, where ω = C ⊖ C. Since ker L ⊆ C, from Proposition 4, it follows that C ⊕ D ∈ C, which says that A ⪯ B. This completes the proof. Since the binary relation ⪯ is not a total ordering (it may just be a partial ordering by referring to Proposition 6), if we consider the concept of minimal element A* in the sense of A* ⪯ A for all A ∈ F, then, frequently, the minimal element A* does not exist. It is too strong to require A* ⪯ A for all A ∈ F when the binary relation ⪯ is just a partial ordering. We also remark that if the binary relation ⪯ is antisymmetric-like, then A* ∈ F is a minimal element of F if and only if A ⪯ A* for A ∈ F implies A =_Ω A* by Definition 3.4. Moreover, if the binary relation ⪯ happens to be antisymmetric, then A* ∈ F is a minimal element of F if and only if A ⪯ A* for A ∈ F implies A = A*. The same situation also applies to the binary relation ⪯_H regarding the H-minimal element. Proof. Applying Proposition 14 to the proof of Proposition 16, we can similarly obtain the desired result.
Let U be another vector space. By referring to Chalco-Cano et al. [4], Osuna-Gomez et al. [12] and Wu [18,19,20,21], the interval-valued function and interval-valued optimization problem are considered below. A function f : U → I defined on U is called an interval-valued function. Now we consider the following interval-valued optimization problem: (IOP) min f(u) subject to u ∈ G, where G is a subset of U. Let F = {f(u) : u ∈ G} denote the set of all objective values of problem (IOP). In order to solve problem (IOP), we are going to introduce an auxiliary optimization problem that is solvable by well-known techniques. Let L : I → V be a function from I into a vector space V. Then we can consider the composition function L • f : U → V of functions L and f. Now we consider the following vector optimization problem: (VOP) min (L • f)(u) subject to u ∈ G. It is clear that the set of all objective values of (VOP) is L(F).
• We say that u * is an optimal solution of problem (VOP) if and only if (L • f )(u * ) ∈ MIN L(C) (L(F)). • We say that u * is an H-optimal solution of problem (VOP) if and only if (L • f )(u * ) ∈ H-MIN L(C) (L(F)).
Proposition 18. Let L : I → V be an additive and positively homogeneous function from I to V , and let C be a convex cone in I. Suppose that Ω ⊆ ker L ⊆ C.
(i) u * is an optimal solution of problem (IOP) if and only if u * is an optimal solution of problem (VOP). (ii) u * is an H-optimal solution of problem (IOP) if and only if u * is an H-optimal solution of problem (VOP).
Proof. Since the feasible sets of problems (IOP) and (VOP) are identical, the result follows from Propositions 16 and 17 immediately by taking A * = f (u * ).
Inspired by Proposition 18, in order to solve problem (IOP), it suffices to solve problem (VOP), where the domain U and range V of the objective function L • f in problem (VOP) are vector spaces. Therefore we can apply the well-known techniques in vector optimization to solve problem (VOP). For example, the scalarization technique in vector optimization is invoked in this paper to solve problem (VOP).

5. Scalarization. In order to present the scalarization results, we first present the following interesting and useful result. Suppose that A ∈ C and L(A ⊕ B) = θ_V for some B ∈ C, which implies A ⊕ B ∈ Ω by the assumption of ker L = Ω. It also shows that A ∈ C⁻, since B ∈ C. In other words, we have A ∈ C ∩ C⁻, i.e., A =_Ω ω for some ω ∈ Ω, since C is pointed-like. Therefore we have A ⊕ ω_1 = ω ⊕ ω_2 for some ω_1, ω_2 ∈ Ω.
Since Ω is closed under the interval addition, it follows that A ⊕ ω_1 ∈ Ω. Using the assumption of ker L = Ω, we obtain L(A) = L(A ⊕ ω_1) = θ_V, which says that A ∈ ker L = Ω. This completes the proof. Let the function L : I → V be additive and positively homogeneous, and let C be a convex cone in I. Then C̄ ≡ L(C) is a convex cone in V by Proposition 5. We write V′ to denote the set of all linear functionals from V to R. The dual cone of C̄ is defined by C̄′ = {φ ∈ V′ : φ(c̄) ≥ 0 for all c̄ ∈ C̄}. Now we are in a position to present the scalarization results.
Theorem 5.1. Let the function L : I → V be additive and positively homogeneous. Let C be a convex cone in I that is pointed-like. Assume that ker L = Ω ⊆ C. If there exist a linear functional φ ∈ C̄′ and an element u* ∈ G such that φ((L • f)(u*)) < φ((L • f)(u)) for all u ∈ G with u ≠ u*, then u* is an H-optimal solution of problem (IOP).
Proof. Suppose that u* is not an H-optimal solution of problem (VOP). We are going to derive a contradiction. By definition, we have (L • f)(u*) ∉ H-MIN_{L(C)}(L(F)), which says that (L • f)(u*) is not an H-minimal element of L(F) based on the binary relation ⪯_H. By Remark 3, there exists u ∈ G such that (L • f)(u) ⪯_H (L • f)(u*) with (L • f)(u) ≠ (L • f)(u*). We also see that u ≠ u*. Since φ ∈ C̄′, we obtain φ((L • f)(u)) ≤ φ((L • f)(u*)), which contradicts the assumption that φ((L • f)(u*)) < φ((L • f)(u)) for u ≠ u*. This contradiction says that u* is an H-optimal solution of problem (VOP). Using part (ii) of Proposition 18, we complete the proof.
Theorem 5.2. Let the function L : I → V be additive and positively homogeneous. Let C be a convex cone in I that is pointed-like. Assume that ker L = Ω ⊆ C. If there exist a linear functional φ ∈ C̄′ and an element u* ∈ G such that φ((L • f)(u*)) < φ((L • f)(u)) for all u ∈ G with u ≠ u*, then u* is an optimal solution of problem (IOP).
Proof. The result follows from arguments similar to those of Theorem 5.1 by considering the binary relation ⪯ and using part (i) of Proposition 18.
Given φ ∈ C̄′, we consider the following real-valued (scalar) optimization problem: (SAOP) min (φ • L • f)(u) subject to u ∈ G. Then we have the following interesting result. Theorem 5.3. Let the function L : I → V be additive and positively homogeneous. Let C be a convex cone in I that is pointed-like. Assume that ker L = Ω ⊆ C. If u* is the unique optimal solution of problem (SAOP), then u* is both an optimal solution and an H-optimal solution of problem (IOP).
Proof. The desired result follows from Theorems 5.1 and 5.2 immediately. Now we consider the quasi-interior of the dual cone of C̄, defined by C̄′• = {φ ∈ V′ : φ(c̄) > 0 for all c̄ ∈ C̄ with c̄ ≠ θ_V}. Theorem 5.4. Let the function L : I → V be additive and positively homogeneous. Let C be a convex cone in I that is pointed-like. Assume that ker L = Ω ⊆ C. If there exist a linear functional φ• ∈ C̄′• and an element u* ∈ G such that φ•((L • f)(u*)) ≤ φ•((L • f)(u)) for all u ∈ G, then u* is an H-optimal solution of problem (IOP).
Proof. From the proof of Theorem 5.1, there exists u ∈ G such that (L • f)(u) ⪯_H (L • f)(u*) with (L • f)(u) ≠ (L • f)(u*).
Since φ• ∈ C̄′•, we obtain φ•((L • f)(u)) < φ•((L • f)(u*)), which contradicts the assumption that φ•((L • f)(u*)) ≤ φ•((L • f)(u)) for all u ∈ G. This contradiction says that u* is an H-optimal solution of problem (VOP). Using part (ii) of Proposition 18, we complete the proof.
Theorem 5.5. Under the same assumptions as Theorem 5.4, if there exist a linear functional φ• ∈ C̄′• and an element u* ∈ G such that φ•((L • f)(u*)) ≤ φ•((L • f)(u)) for all u ∈ G, then u* is an optimal solution of problem (IOP).
Proof. The result follows from arguments similar to those of Theorem 5.4 by considering the binary relation ⪯ and using part (i) of Proposition 18.
Given φ• ∈ C̄′•, we consider the following real-valued (scalar) optimization problem: (SOP•) min (φ• • L • f)(u) subject to u ∈ G. Then we have the following interesting result.
Theorem 5.6. Let the function L : I → V be additive and positively homogeneous. Let C be a convex cone in I that is pointed-like. Assume that ker L = Ω ⊆ C. If u* is an optimal solution of problem (SOP•), then u* is both an optimal solution and an H-optimal solution of problem (IOP).
Proof. The desired result follows from Theorems 5.4 and 5.5 immediately.
6. Interval-valued linear programming problems. From Example 4, we see that the set C = {[c^L, c^U] ∈ I : c^L + c^U ≥ 0} is a convex cone and is pointed-like in I satisfying Ω ⊆ C. This special type of convex cone C will be taken in this section for studying the interval-valued linear programming problems. In order to convey the essential idea of the general problem, we first present a particular problem.
6.1. Particular Problem. Let A_i = [a^L_i, a^U_i] be bounded closed intervals for i = 1, · · · , n. We consider the following interval-valued linear programming problem: (ILP) min f(x_1, · · · , x_n) = x_1 A_1 ⊕ x_2 A_2 ⊕ · · · ⊕ x_n A_n subject to x = (x_1, · · · , x_n) ∈ G ⊂ R^n and x ∈ R^n_+, where G is a feasible set consisting of linear constraints.
Using Theorem 5.6, if x* = (x*_1, · · · , x*_n) is an optimal solution of the linear programming problem (LP•), then x* is both an optimal solution and an H-optimal solution of problem (ILP).
It is clear that solving the linear programming problem (LP•) is equivalent to solving the following linear programming problem: (LP) min (a^L_1 + a^U_1)x_1 + · · · + (a^L_n + a^U_n)x_n subject to x = (x_1, · · · , x_n) ∈ G ⊂ R^n and x ∈ R^n_+, where the constant k can be ignored. Therefore, in order to solve the interval-valued linear programming problem (ILP) by obtaining the optimal solution and H-optimal solution, we can simply solve the conventional linear programming problem (LP) shown above. 6.2. General Problem. Given any fixed 0 ≠ σ_i ∈ R for i = 1, · · · , n, we consider the following generalized interval-valued linear programming problem: (GILP) min f(x_1, · · · , x_n) = σ_1 A_1 x_1 ⊕ σ_2 A_2 x_2 ⊕ · · · ⊕ σ_n A_n x_n subject to x = (x_1, · · · , x_n) ∈ G ⊂ R^n and x ∈ R^n_+, where G is a feasible set consisting of linear constraints.
Given A = [a^L, a^U] ∈ I, let a = a^L + a^U. Given any fixed 0 ≠ α_i ∈ R for i = 1, · · · , m, let L : I → R^m be defined by L([a^L, a^U]) = (α_1 a, α_2 a, · · · , α_m a).
It is clear that solving the linear programming problem (LP•) is equivalent to solving the following linear programming problem: (LP) min g^L(x_1, · · · , x_n) + g^U(x_1, · · · , x_n) subject to x = (x_1, · · · , x_n) ∈ G ⊂ R^n and x ∈ R^n_+, where the constants γ and k can be ignored. We define the following sets of indices: P = {i : σ_i ≥ 0} and N = {j : σ_j < 0}.
Then P ∪ N = {1, 2, · · · , n}. Therefore we obtain g^L(x_1, · · · , x_n) = Σ_{i∈P} σ_i a^L_i x_i + Σ_{j∈N} σ_j a^U_j x_j and g^U(x_1, · · · , x_n) = Σ_{i∈P} σ_i a^U_i x_i + Σ_{j∈N} σ_j a^L_j x_j,
which says that problem (LP) is given by (LP) min σ_1(a^L_1 + a^U_1)x_1 + σ_2(a^L_2 + a^U_2)x_2 + · · · + σ_n(a^L_n + a^U_n)x_n subject to x = (x_1, · · · , x_n) ∈ G ⊂ R^n and x ∈ R^n_+. Therefore, in order to solve the generalized interval-valued linear programming problem (GILP) by obtaining the optimal solution and H-optimal solution, we can simply solve the conventional linear programming problem (LP) shown above.
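To illustrate the reduction (a hypothetical instance; the data, feasible set, and helper names are assumptions, not from the paper), the reduced (LP) objective coefficient of x_i is σ_i(a^L_i + a^U_i), after which any conventional LP solver applies. Here a tiny instance is solved by enumerating the vertices of a simple feasible set:

```python
# Hypothetical (GILP) instance: minimize σ1*A1*x1 ⊕ σ2*A2*x2 over a box,
# via the reduced (LP) with objective coefficients σi * (aLi + aUi).

sigmas = [2.0, -1.0]
intervals = [(1, 3), (0, 4)]  # Ai = [aLi, aUi]
coeffs = [s * (a[0] + a[1]) for s, a in zip(sigmas, intervals)]
print(coeffs)                 # [8.0, -4.0]

# assumed feasible set: 0 <= x1 <= 2, 0 <= x2 <= 3 (a box, so the optimum
# of a linear objective is attained at one of the four vertices)
vertices = [(0, 0), (2, 0), (0, 3), (2, 3)]
x_star = min(vertices, key=lambda x: sum(c * xi for c, xi in zip(coeffs, x)))
print(x_star)                 # (0, 3): objective value -12.0
```

For larger instances one would hand the same coefficient vector to a standard LP routine; by Theorem 5.6, the solution obtained is both an optimal solution and an H-optimal solution of the original interval-valued problem.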