Some characterizations of robust solution sets for uncertain convex optimization problems with locally Lipschitz inequality constraints

In this paper, we consider an uncertain convex optimization problem whose robust convex feasible set is described by locally Lipschitz constraints. Using the robust optimization approach, we give some new characterizations of the robust solution set of the problem. These characterizations are expressed in terms of convex subdifferentials, Clarke subdifferentials, and Lagrange multipliers. In order to characterize the solution set, we first introduce the so-called pseudo Lagrangian function and establish a constant pseudo Lagrangian-type property for the robust solution set. We then use this property to derive Lagrange multiplier-based characterizations of the robust solution set. By means of linear scalarization, the results are applied to derive characterizations of the weakly and properly robust efficient solution sets of convex multi-objective optimization problems with data uncertainty. Some examples are given to illustrate the significance of the results.

1. Introduction. The study of characterizations of solution sets has become an important research direction for many mathematical programming problems. Based on an understanding of the structure of solution sets, solution methods for mathematical programs that have multiple solutions can be developed. The notion of characterizing solution sets was first introduced and studied by Mangasarian for a convex program with a differentiable objective function [29]. Some useful examples clarifying such characterizations of solution sets can be found in [7], where problems possessing a weak sharp minimum are characterized. This is one reason why several characterizations of solution sets for various classes of constrained optimization problems have appeared in the literature (see [6,8,13,14,19,23,32,33,36,38,39] and other references therein).
However, in real-world optimization problems, the input data associated with the objective function and the constraints are often uncertain due to prediction or measurement errors (see [1,2,3,4]). Moreover, in many situations we need to make decisions now, before we can know the true values of the parameters or obtain better estimates of them. Robust optimization is one of the basic methodologies for guarding against an optimal solution becoming infeasible once the actual values of the parameters are realized. This means that any feasible point must satisfy all constraints, including each set of constraints corresponding to a possible realization of the uncertain parameters from the uncertainty sets. Precisely stated, let us first consider the following optimization problem:

(P)  Minimize f(x) subject to g_i(x) ≤ 0, i = 1, . . . , m,

where f, g_i : R^n → R, i = 1, . . . , m, are functions. The problem (P) in the face of data uncertainty in both the objective and the constraints can be written as the following optimization problem:

(UP)  Minimize f(x, u) subject to g_i(x, v_i) ≤ 0, i = 1, . . . , m,

where f : R^n × R^{q_0} → R and g_i : R^n × R^{q_i} → R, i = 1, . . . , m, are functions, and u and v_i are uncertain parameters belonging to the specified nonempty convex and compact uncertainty sets U ⊆ R^{q_0} and V_i ⊆ R^{q_i}, respectively. The robust (worst-case) counterpart of (UP), following the construction in [3], is obtained by solving the single problem:

(RP)  Minimize max_{u∈U} f(x, u) subject to g_i(x, v_i) ≤ 0 for all v_i ∈ V_i, i = 1, . . . , m,

where the objective and the constraints are enforced for every possible value of the parameters within their prescribed uncertainty sets U and V_i. The set of feasible solutions of (RP) is referred to as the robust feasible set of (UP), and an optimal solution of (RP) is known as a robust optimal solution of (UP). Successful treatments of robust optimization approaches for convex optimization programs with data uncertainty, deriving characterizations of robust optimal solution sets, were given in [15,27,34,35].
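To make the worst-case construction concrete, here is a minimal numerical sketch on hypothetical data (the quadratic objective, the linear constraint, and the interval uncertainty sets below are all illustrative choices, not taken from the paper), approximating U and V by finite grids:

```python
# A minimal numerical sketch of the robust (worst-case) counterpart (RP) on
# hypothetical data (not taken from the paper): f(x, u) = (1 + u) * x**2 with
# U = [-0.1, 0.1], and g(x, v) = v - x with V = [0, 1]. The uncertainty sets
# are approximated by finite grids.
U = [-0.1 + k / 100 for k in range(21)]   # grid for U = [-0.1, 0.1]
V = [k / 20 for k in range(21)]           # grid for V = [0, 1]

def robust_objective(x):
    # worst-case objective: max_{u in U} f(x, u)
    return max((1 + u) * x ** 2 for u in U)

def robust_feasible(x):
    # robust feasibility: g(x, v) <= 0 must hold for every v in V
    return all(v - x <= 0 for v in V)

# The robust feasible set here is {x : x >= 1}; scan a grid of x for the optimum.
candidates = [k / 100 - 2 for k in range(501) if robust_feasible(k / 100 - 2)]
x_star = min(candidates, key=robust_objective)
```

With these data the robust feasible set is {x : x ≥ 1} and the robust optimum is attained at x = 1, where the worst-case objective 1.1·x² is smallest.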
For issues related to optimality conditions and duality properties, see [5,11,16,17,18,25,26] and other references therein. This paper is an attempt to investigate optimality conditions and to derive characterizations of robust solution sets of (UP). Unlike various related works in the literature mentioned above, in the present paper the constraint functions are not necessarily convex, while the robust feasible set F is convex. In this way, we deal with convex problems without a convex representation, in the sense that the constraint functions representing the convex feasible set are not necessarily convex. Optimality conditions and characterizations of convexity of the feasible set for such problems in the absence of data uncertainty can be found in [24] for the differentiable case, and in [10,21,30] for the non-differentiable case.
To the best of our knowledge, complete characterizations of robust solutions for uncertain scalar and multi-objective optimization problems over a robust convex feasible set described by not necessarily convex functions, within the framework of the robust optimization approach, are not available in the literature. So, in this paper we adopt a robust optimization framework to study characterizations of the robust optimal solution set of uncertain convex optimization problems with a robust convex feasible set described by locally Lipschitz constraints. First, complete optimality conditions for uncertain convex optimization problems are given. In order to characterize the robust optimal solution set of a given problem, we introduce the so-called pseudo-Lagrange function and show that it is constant on the robust optimal solution set. We then use this property to derive various characterizations of the robust optimal solution set, expressed in terms of convex subdifferentials, Clarke subdifferentials, and Lagrange multipliers. Finally, the results are applied to derive characterizations of the weakly robust efficient solution set and the properly robust efficient solution set of uncertain convex multi-objective optimization problems without convexity assumptions on the constraint functions.
The remainder of the present paper is organized as follows. In Sect. 2, we give some notations, definitions and preliminary results. In Sect. 3, we establish a multiplier characterization for the robust optimal solution of an uncertain convex optimization problem. Sect. 4 provides characterizations of the robust solution set of uncertain convex optimization problems without convexity assumptions on the constraint functions. In Sect. 5, we give a sufficient condition under which a robust efficient solution of an uncertain multi-objective convex optimization problem is a properly robust efficient solution. Moreover, characterizations of the weakly robust efficient solution set and the properly robust efficient solution set of such problems are given.

2. Preliminaries. We begin this section by fixing certain notations, definitions and preliminary results that will be used throughout the paper. We denote by R^n the n-dimensional Euclidean space, whose norm is denoted by ‖·‖, and ⟨x, y⟩ denotes the usual inner product between two vectors x, y ∈ R^n, that is, ⟨x, y⟩ = x^T y. Let R^n_+ := {x := (x_1, . . . , x_n) ∈ R^n : x_i ≥ 0, i = 1, . . . , n} be the non-negative orthant of R^n. Note also that the interior of the non-negative orthant of R^n is denoted by int R^n_+ and is defined by int R^n_+ := {x ∈ R^n : x_i > 0, i = 1, . . . , n}. Given a set A ⊆ R^n, we recall that A is convex whenever λx + (1 − λ)y ∈ A for all λ ∈ [0, 1] and x, y ∈ A. A set A is said to be a cone if λA ⊆ A for all λ ≥ 0. We denote the convex hull and the conical hull generated by A by conv A and cone A, respectively. The normal cone to a closed convex set A at x ∈ A, denoted by N(A, x), is defined by N(A, x) := {ξ ∈ R^n : ⟨ξ, y − x⟩ ≤ 0 for all y ∈ A}. A function f : R^n → R is said to be convex if for all λ ∈ [0, 1] and x, y ∈ R^n, f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y). It is a well-known fact that a convex function need not be differentiable everywhere. However, if f : R^n → R is a convex function, then the one-sided, or rather right-sided, directional derivative always exists and is finite. The right-sided directional derivative of f at x ∈ R^n in the direction d ∈ R^n, denoted by f′(x; d), is defined as f′(x; d) := lim_{t→0+} (f(x + td) − f(x))/t. It is important to note that for every fixed x, the function f′(x; ·) is a positively homogeneous convex function. The subdifferential of the convex function f at x is defined as ∂f(x) := {ξ ∈ R^n : f(y) − f(x) ≥ ⟨ξ, y − x⟩ for all y ∈ R^n}. We now recall the following useful result, which is a subdifferential max-function rule for convex functions over a compact set, that will be used later in the paper.
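The definitions above can be illustrated numerically. The following sketch (illustrative data only) approximates the right-sided directional derivative f′(x; d) = lim_{t→0+} (f(x + td) − f(x))/t for a convex, nondifferentiable function:

```python
# Finite-difference sketch of the right-sided directional derivative for the
# convex, nondifferentiable function f(x) = |x_1| + |x_2| (illustrative data).
def f(x):
    return abs(x[0]) + abs(x[1])

def dir_deriv(f, x, d, t=1e-8):
    # one-sided difference quotient approximating f'(x; d)
    return (f((x[0] + t * d[0], x[1] + t * d[1])) - f(x)) / t

# At the kink x = (0, 0), f is not differentiable, yet f'(x; d) = |d_1| + |d_2|
# exists for every direction d and is positively homogeneous and convex in d.
slope = dir_deriv(f, (0.0, 0.0), (1.0, -2.0))
```

Here `slope` is approximately |1| + |−2| = 3, confirming that the directional derivative exists at a point of nondifferentiability.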
Lemma 2.1. [15, Lemma 2.1] Let U ⊆ R^{q_0} be a convex compact set, and let f : R^n × R^{q_0} → R be a function such that for each fixed u ∈ U, f(·, u) is a convex function on R^n, and for each fixed x ∈ R^n, f(x, ·) is a concave function on R^{q_0}. Then, ∂(max_{u∈U} f(·, u))(x) = conv ⋃_{ū∈U(x)} ∂f(·, ū)(x), where U(x) := {ū ∈ U : f(x, ū) = max_{u∈U} f(x, u)}.

Definition 2.2. A function h : R^n → R is said to be locally Lipschitz at x ∈ R^n if there exist a positive scalar L and a neighborhood N of x such that, for all y, z ∈ N, one has |h(y) − h(z)| ≤ L‖y − z‖.
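The max-function rule of Lemma 2.1 can be checked on toy data. In the sketch below (all data illustrative, not from the paper), f(x, u) = u·x with U = [0, 1], so ψ(x) = max_{u∈U} f(x, u) = max(0, x), and at the kink x = 0 every u ∈ U is active:

```python
# Illustrative check of the max-function rule: psi(x) = max_{u in [0,1]} u * x
# equals max(0, x), and at x = 0 the subdifferential is the whole interval
# [0, 1] = conv{ d/dx (u * x) : u active } = conv{ u : u in U }.
U = [k / 100 for k in range(101)]          # grid approximation of U = [0, 1]

def psi(x):
    return max(u * x for u in U)

# one-sided difference quotients at the kink recover the endpoints of [0, 1]
t = 1e-9
slope_right = (psi(t) - psi(0.0)) / t      # largest subgradient -> 1.0
slope_left = (psi(0.0) - psi(-t)) / t      # smallest subgradient -> 0.0
```

The two one-sided slopes at 0 are the endpoints of the subdifferential interval [0, 1], matching the convex hull of the active gradients in Lemma 2.1.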
Recall that the Clarke generalized directional derivative of a locally Lipschitz function h at x in the direction d is h^o(x; d) := limsup_{y→x, t→0+} (h(y + td) − h(y))/t, and the Clarke generalized subdifferential of h at x is ∂^o h(x) := {ξ ∈ R^n : h^o(x; d) ≥ ⟨ξ, d⟩ for all d ∈ R^n}. From the definition of the Clarke generalized subdifferential, it follows that h^o(x; d) = max{⟨ξ, d⟩ : ξ ∈ ∂^o h(x)}.

Definition 2.5. Let h : R^n → R be locally Lipschitz at a given point x ∈ R^n. The function h is said to be regular at x ∈ R^n if, for each d ∈ R^n, the directional derivative h′(x; d) exists and coincides with h^o(x; d).
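Regularity in the sense of Definition 2.5 can be probed numerically. The sketch below (illustrative data) approximates the Clarke generalized directional derivative h^o(x; d) by taking difference quotients at base points near x; for a convex function such as h(x) = |x|, regularity holds and h^o(x; d) coincides with h′(x; d):

```python
# Grid approximation of the Clarke generalized directional derivative
# h^o(x; d) = limsup_{y -> x, t -> 0+} (h(y + t d) - h(y)) / t,
# for the convex (hence regular) function h(x) = |x|. Illustrative data only.
def h(x):
    return abs(x)

def clarke_dd(h, x, d, eps=1e-6, n=200):
    quotients = []
    for i in range(n):
        y = x + eps * (2 * i / (n - 1) - 1)   # base points y near x
        t = eps / 10
        quotients.append((h(y + t * d) - h(y)) / t)
    return max(quotients)                      # approximate limsup

# at the kink x = 0 with d = 1: h'(0; 1) = 1 and h^o(0; 1) = 1 as well,
# so the two derivatives agree, as regularity requires
approx = clarke_dd(h, 0.0, 1.0)
```

Here `approx` is close to 1.0, in agreement with h′(0; 1) = 1 for h(x) = |x|.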
For a given compact subset V of R q and a given function g : R n × R q → R, the following conditions will be considered in this paper.
there exist an open neighborhood U of x and a constant L > 0 such that, for all y and z in U and all v ∈ V, one has |g(y, v) − g(z, v)| ≤ L‖y − z‖.

Remark 1. In a suitable setting, if the function g is convex in x and continuous in v, then the conditions (C2), (C3), and (C4) are automatically satisfied. These conditions also hold whenever the partial derivative ∇_x g(x, v) with respect to x exists and is continuous in (x, v).

Remark 2. [25] Under the conditions (C1) and (C2), the function ψ : R^n → R defined by ψ(x) := max_{v∈V} g(x, v) is well defined and finite. Further, ψ is locally Lipschitz on R^n, and hence, for each x ∈ R^n, the set V(x) := {v ∈ V : g(x, v) = ψ(x)} is a nonempty closed subset of R^q.
We conclude this section with the following lemmas, which will be useful in our later analysis.
Lemma 2.6. [9] Let the function ψ be defined as in Remark 2. Suppose that the conditions (C1)–(C4) are fulfilled. Then the usual one-sided directional derivative ψ′(x; d) exists and satisfies, for each x, d ∈ R^n, ψ′(x; d) = max_{v∈V(x)} g^o_x(x, v; d).

Lemma 2.7. For a given compact convex subset V of R^q and a given function g : R^n × R^q → R, suppose that the basic conditions (C1)–(C4) are fulfilled. Further, suppose that g(x, ·) is concave on V for each x ∈ R^n. Then ψ is regular in the sense of Clarke on R^n and ∂^oψ(x) = conv ⋃_{v∈V(x)} ∂^o g(·, v)(x) for each x ∈ R^n.

3. Multiplier characterization for the robust solution. In this section, we give a multiplier characterization for the robust optimal solution of (UP), which will play an important role in deriving characterizations of the robust optimal solution sets in the next section. Let us recall the following robust (worst-case) counterpart optimization problem of (UP):

(RP)  Minimize max_{u∈U} f(x, u) subject to g_i(x, v_i) ≤ 0 for all v_i ∈ V_i, i = 1, . . . , m,

where f : R^n × R^{q_0} → R and g_i : R^n × R^{q_i} → R, i = 1, . . . , m, are given functions and, for each i = 1, . . . , m, ψ_i(x) := max_{v_i∈V_i} g_i(x, v_i), where U and V_i are the specified nonempty convex and compact uncertainty sets. The robust feasible set of (UP) is defined by F := {x ∈ R^n : g_i(x, v_i) ≤ 0 for all v_i ∈ V_i, i = 1, . . . , m}.

Assumption 3.1. Throughout this paper, we always assume that F ≠ ∅ and that f : R^n × U → R is convex-concave in the sense that f(·, u) is a convex function for any u ∈ U and f(x, ·) is a concave function for any x ∈ R^n, while g_i(x, ·), i = 1, . . . , m, are concave functions for any x ∈ R^n. Further, let the functions g_i, i = 1, . . . , m, satisfy the conditions (C1) and (C2).
Definition 3.1. We say that x̄ ∈ F is a robust optimal solution of (UP) if and only if x̄ is an optimal solution of (RP).
By using Proposition 2.2 in [10], we can derive the following characterization of convexity of the robust feasible set of (UP) in terms of the Clarke directional derivative. Before doing so, let us denote, for each x ∈ F, I(x) := {i ∈ {1, . . . , m} : ψ_i(x) = 0} and, for all i = 1, . . . , m, V_i(x) := {v_i ∈ V_i : g_i(x, v_i) = ψ_i(x)}.

Proposition 1. Let the functions g_i, i = 1, . . . , m, satisfy the robust Slater constraint qualification, that is, there exists x_0 ∈ R^n such that g_i(x_0, v_i) < 0 for all v_i ∈ V_i, i = 1, . . . , m. For each x ∈ F and i ∈ I(x), let the function g_i satisfy the conditions (C3), (C4), and 0 ∉ ∂^oψ_i(x). Then the robust feasible set F is convex.

Proof. Applying the conditions (C1) and (C2), we have that, for each i = 1, . . . , m, ψ_i is locally Lipschitz on R^n. To achieve the result, we will use Proposition 2.2 in [10], and so we need to justify that, for any x ∈ F, the functions ψ_i, i ∈ I(x), are regular in the sense of Clarke with 0 ∉ ∂^oψ_i(x), and that the system ψ_i(x) ≤ 0, i = 1, . . . , m, satisfies the Slater condition. The first and second requirements follow from Lemma 2.6 and Lemma 2.7: for any x ∈ F and each i ∈ I(x), ψ_i is regular and 0 ∉ ∂^oψ_i(x). Finally, the robust Slater constraint qualification leads to the strict inequality ψ_i(x_0) = max_{v_i∈V_i} g_i(x_0, v_i) < 0, i = 1, . . . , m, which means that the system x ∈ R^n, ψ_i(x) ≤ 0 (i = 1, . . . , m) satisfies the Slater condition. Now, applying [10, Proposition 2.2] and taking (1) into consideration, we obtain the desired results.
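The phenomenon behind Proposition 1 (a nonconvex constraint whose robust feasible set is nevertheless convex, with a robust Slater point) can be seen on a one-dimensional toy instance. All data below are hypothetical, not from the paper:

```python
# A nonconvex constraint whose robust feasible set is convex: g(x, v) = x**3 - v
# with v in V = [0, 1]. Then F = {x : max_v g(x, v) <= 0} = {x : x**3 <= 0}
# = (-inf, 0], a convex set, even though g(., v) is nonconvex for each v.
V = [k / 50 for k in range(51)]           # grid approximation of V = [0, 1]

def worst_g(x):
    # psi(x) = max_{v in V} g(x, v); here the worst case is attained at v = 0
    return max(x ** 3 - v for v in V)

slater_value = worst_g(-1.0)              # strict feasibility at x_0 = -1
boundary_value = worst_g(0.0)             # x = 0 lies on the boundary of F
outside_value = worst_g(0.1)              # x = 0.1 is robust infeasible
```

Here x_0 = −1 satisfies the robust Slater constraint qualification (worst-case constraint value strictly negative), while x = 0 is active and x = 0.1 lies outside F.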
Remark 3. It should be noted that, without the robust Slater constraint qualification and the condition 0 ∉ ∂^oψ_i(x), the conclusion of Proposition 1 may fail. Furthermore, for every x ∈ F one has g^o_{ix}(x, v_i; y − x) ≤ 0 for all y ∈ F, i ∈ I(x) and v_i ∈ V_i(x). In order to establish a multiplier characterization for the robust optimal solution of (UP), we first recall a robust basic constraint qualification which was introduced in [5], where the uncertain constraint functions g_i(·, v_i), i = 1, . . . , m, are assumed to be convex for each v_i ∈ V_i.
Now the following theorem states that the robust basic constraint qualification defined in Definition 3.2 is a necessary and sufficient constraint qualification for a robust optimal solution of the given problem; that is, the robust basic constraint qualification holds if and only if the Lagrange multiplier conditions are satisfied at a robust optimal solution.

Theorem 3.3 (Characterizing the robust basic constraint qualification).
Suppose that for each x ∈ F and i ∈ I(x), the function g_i satisfies the conditions (C3) and (C4). Then, the following statements are equivalent: (i) the robust basic constraint qualification holds at x̄ ∈ F; (ii) for each real-valued convex-concave function f on R^n × U, the following statements are equivalent: (a) x̄ is a robust optimal solution of (UP); (b) there exist ū ∈ U, λ̄_i ≥ 0, and v̄_i ∈ V_i, i = 1, . . . , m, such that (2) and (3) hold. Proof. [(i) ⇒ (ii)] Suppose that (i) holds. Let f be a real-valued convex-concave function on R^n × U. Firstly, we assume that (a) holds. Then x̄ is a solution of the following constrained convex optimization problem: Minimize max_{u∈U} f(x, u) subject to x ∈ F. Then, it follows from Lemma 2.1 that there exists ū ∈ U such that (2) and (3) hold.
To prove sufficiency, assume that there exist ū ∈ U, λ̄_i ≥ 0, and v̄_i ∈ V_i, i = 1, . . . , m, such that (2) and (3) hold. According to (2), we can find ξ ∈ ∂f(·, ū)(x̄) and ξ_i ∈ ∂^o g_i(·, v̄_i)(x̄), i = 1, . . . , m. It follows from ξ ∈ ∂f(·, ū)(x̄) and ξ_i ∈ ∂^o g_i(·, v̄_i)(x̄) that the corresponding subgradient inequalities hold for any x ∈ R^n. Multiplying each of the inequalities in (6) by λ̄_i and summing the resulting inequalities together with (5), we obtain, for all x ∈ R^n, the desired estimate. Take (4) into account together with the condition on λ̄. Note that for each i ∈ I(x̄) with g_i(x̄, v̄_i) < 0, we have λ̄_i = 0. So we need only consider the case g_i(x̄, v̄_i) = 0 for i ∈ I(x̄), and hence v̄_i ∈ V_i(x̄). By Remark 3, the last inequality becomes the required one. Thus, together with max_{u∈U} f(x, u) ≥ f(x, ū) for all x ∈ R^n and (3), we obtain that max_{u∈U} f(x̄, u) ≤ max_{u∈U} f(x, u) for all x ∈ F. This means that x̄ is a robust optimal solution of the problem (UP).
[(ii) ⇒ (i)] The proof is similar to the one in [35, Theorem 3.1], and so is omitted.
In the uncertainty-free case, we can easily obtain the following result, which was obtained by Yamamoto and Kuroiwa in [37].
Corollary 1. Let x̄ ∈ F := {x ∈ R^n : g_i(x) ≤ 0, i = 1, . . . , m} be a feasible solution, and let g_i : R^n → R, i = 1, . . . , m, be locally Lipschitz on R^n. Assume further that for any x ∈ F and any i = 1, . . . , m with g_i(x) = 0, the function g_i is regular, and that F is convex. Then the following statements are equivalent: (i) the basic constraint qualification holds at x̄; (ii) for each real-valued convex function f on R^n, the corresponding optimality statements are equivalent.

Remark 4. The robust Slater constraint qualification together with robust non-degeneracy at x̄, i.e., 0 ∉ ∂^o g_i(·, v_i)(x̄) whenever i = 1, . . . , m and v_i ∈ V_i are such that g_i(x̄, v_i) = 0, is a sufficient condition for the robust basic constraint qualification to hold at x̄. Indeed, according to Remark 3, we only have to verify the required inclusion.
The following example is given to illustrate that the condition (i) of Theorem 3.3 is essential.

Remark 5. According to Remark 4, Example 3.2 demonstrates that the robust Slater constraint qualification alone is not sufficient to ensure that the robust basic constraint qualification holds at the point under consideration. The reason is that the robust non-degeneracy condition fails at that point.

4. Characterizations of the robust solution sets. In this section, we establish some characterizations of the robust optimal solution set in terms of a given robust solution point of the given problem.
We begin by recalling the constrained convex optimization problem in the face of data uncertainty (UP), where f : R^n × U → R is a convex-concave function, the functions g_i, i ∈ I, satisfy the conditions (C1) and (C2), g_i(x, ·) : V_i → R, i ∈ I, are concave functions for any x ∈ R^n, and the robust feasible set F is convex. Assume that the robust solution set of the problem (UP), denoted by S, is nonempty. In what follows, for any given y ∈ R^n, λ := (λ_1, . . . , λ_m) ∈ R^m_+, u ∈ U, v_i ∈ V_i, i ∈ I, and v := (v_1, . . . , v_m), we introduce the so-called pseudo Lagrangian-type function L_P(·, y, λ, u, v), defined for all x ∈ R^n. Now, we show that the pseudo Lagrangian-type function associated with a Lagrange multiplier vector and uncertainty parameters corresponding to a solution is constant on S.
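For orientation, a form of the pseudo Lagrangian-type function that is consistent with Proposition 2, Proposition 3, and Remark 6 below (stated here as a sketch: the Clarke directional derivatives of the constraints, rather than the constraint values themselves, enter the sum) is

$$
L_P(x, y, \lambda, u, v) \;=\; f(x, u) \;+\; \sum_{i \in I} \lambda_i \, g^{o}_{ix}(y, v_i; x - y), \qquad x \in \mathbb{R}^n,
$$

where $g^{o}_{ix}(y, v_i; x - y)$ denotes the Clarke directional derivative of $g_i(\cdot, v_i)$ at $y$ in the direction $x - y$. When each $g_i(\cdot, v_i)$ is convex, this directional-derivative term bounds $g_i(x, v_i) - g_i(y, v_i)$ from above, which is how the pseudo Lagrangian collapses to the ordinary Lagrangian in Remark 6.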
Proposition 2. Assume that all conditions of Theorem 3.3 hold. Let a ∈ S be a robust optimal solution fulfilling the robust basic constraint qualification. Then there exist a Lagrange multiplier vector λ^a := (λ^a_1, . . . , λ^a_m) ∈ R^m_+ and uncertainty parameters u^a ∈ U, v^a_i ∈ V_i, i ∈ I, such that, for any x ∈ S, λ^a_i g^o_{ix}(a, v^a_i; x − a) = 0, ∀i ∈ I(a), f(x, u^a) = max_{u∈U} f(x, u), and L_P(·, a, λ^a, u^a, v^a) is constant on S.
Proof. It follows from a ∈ S and Theorem 3.3 that there exist a Lagrange multiplier vector λ^a := (λ^a_1, . . . , λ^a_m) ∈ R^m_+ and uncertainty parameters u^a ∈ U, v^a_i ∈ V_i, i ∈ I, satisfying the conditions (2) and (3). Then, from (2) and the fact that λ^a_i = 0 for i ∉ I(a), we obtain an inequality which is nothing else than (7). Notice that max_{u∈U} f(a, u) = max_{u∈U} f(x, u) for any a ∈ S and x ∈ S, (8) and, taking this into account, (7) yields Σ_{i∈I(a)} λ^a_i g^o_{ix}(a, v^a_i; x − a) ≥ 0 for any x ∈ S. Let us notice that, for indices i ∈ I(a) such that λ^a_i > 0, we have g_i(a, v^a_i) = 0 and, consequently, v^a_i ∈ V_i(a). This in turn, by Remark 3, implies that Σ_{i∈I(a)} λ^a_i g^o_{ix}(a, v^a_i; x − a) ≤ 0, (9) and hence λ^a_i g^o_{ix}(a, v^a_i; x − a) = 0 for all i ∈ I(a). Now, we prove that f(x, u^a) = max_{u∈U} f(x, u). (10) In fact, by (7) and (9), we get the assertion f(x, u^a) ≥ max_{u∈U} f(a, u). This, together with (8), shows that (10) holds. Therefore, for any x ∈ S, the relations (3), (8), (9) and (10) entail that L_P(·, a, λ^a, u^a, v^a) is constant on S, and this completes the proof.

Remark 6. It is worth noting that if g_i(·, v_i), i ∈ I, are convex functions for any v_i ∈ V_i, then Proposition 2 gives, for each i ∈ I, λ^a_i g^o_{ix}(a, v^a_i; x − a) = 0 for any x ∈ S. This, together with x ∈ F and λ^a_i g_i(a, v^a_i) = 0, i ∈ I, yields λ^a_i g_i(x, v^a_i) = 0, i ∈ I. This shows that the pseudo Lagrangian-type function collapses to the well-known Lagrangian-type function on the robust solution set S.
In the sequel, we are now in a position to establish the characterizations of the robust solution set for problem (UP) in terms of convex subdifferentials, Clarke subdifferentials and Lagrange multipliers. Before doing so, it will be convenient to introduce the following notation.

Theorem 4.1 (Characterizing the robust solution set). Assume that all conditions of Theorem 3.3 hold. Let a ∈ S be a robust optimal solution fulfilling the robust basic constraint qualification. Then there exist a Lagrange multiplier vector λ^a := (λ^a_1, . . . , λ^a_m) ∈ R^m_+ and uncertainty parameters u^a ∈ U, v^a_i ∈ V_i, i ∈ I, such that the robust solution set for the problem (UP) is characterized by S = S_1 = · · · = S_7, where, in particular,

S_1 := {x ∈ F : ⟨η_i, x − a⟩ = 0 for some η_i ∈ ∂^o g_i(·, v^a_i)(a), ∀i ∈ I(a); ⟨ξ, x − a⟩ = ⟨ζ, a − x⟩ = 0 for some ζ ∈ ∂f(·, u^a)(x) and ξ ∈ C(x); f(x, u^a) = max_{u∈U} f(x, u)},

S_2 := {x ∈ F : ⟨η_i, x − a⟩ = 0 for some η_i ∈ ∂^o g_i(·, v^a_i)(a), ∀i ∈ I(a); ⟨ξ, x − a⟩ = ⟨ζ, a − x⟩ for some ζ ∈ ∂f(·, u^a)(x) and ξ ∈ C(x); f(x, u^a) = max_{u∈U} f(x, u)}.

Proof. Evidently, the containments S_1 ⊆ S_2 ⊆ · · · ⊆ S_7 hold:

Hence, we only have to show that S ⊆ S_1 and S_7 ⊆ S. In order to establish S ⊆ S_1, let x ∈ S be arbitrarily given. It follows from (2) that we obtain vectors ζ ∈ ∂f(·, u^a)(a) and ξ_i ∈ ∂^o g_i(·, v^a_i)(a), i ∈ I(a), satisfying (11) (since λ^a_i = 0 for i ∉ I(a)). According to ζ ∈ ∂f(·, u^a)(a), ξ_i ∈ ∂^o g_i(·, v^a_i)(a), i ∈ I(a), and x, a ∈ S, one has (12) and (13). Since we have shown, in Proposition 2, that λ^a_i g^o_{ix}(a, v^a_i; x − a) = 0, ∀i ∈ I(a), after multiplying both sides of (13) by λ^a_i, i ∈ I(a), we get 0 ≥ λ^a_i ⟨ξ_i, x − a⟩, ∀i ∈ I(a). Summing up these inequalities and using (11), we obtain (14). Again, it follows from Proposition 2 that f(x, u^a) = max_{u∈U} f(x, u) and, for each i ∈ I(a), max_{η_i∈∂^o g_i(·, v^a_i)(a)} ⟨η_i, x − a⟩ = g^o_{ix}(a, v^a_i; x − a) = 0, which in turn implies that there exists η_i ∈ ∂^o g_i(·, v^a_i)(a) such that ⟨η_i, x − a⟩ = 0. (15)
On the other hand, taking (3) and (15) into account in (12), we obtain the reverse inequality, which together with (14) yields ⟨ζ, x − a⟩ = 0. Now, we only need to prove that ζ ∈ ∂f(·, u^a)(x). In fact, since f(x, u^a) = max_{u∈U} f(x, u) = max_{u∈U} f(a, u) = f(a, u^a), we have, for any y ∈ R^n, f(y, u^a) − f(x, u^a) = f(y, u^a) − f(a, u^a) ≥ ⟨ζ, y − a⟩ = ⟨ζ, y − x⟩ + ⟨ζ, x − a⟩ = ⟨ζ, y − x⟩, which means ζ ∈ ∂f(·, u^a)(x) and so x ∈ S_1. This proves S ⊆ S_1.
To obtain S_7 ⊆ S, let x be an arbitrary point of S_7. It follows that x ∈ F, and it is easy to see that max_{u∈U} f(x, u) ≤ max_{u∈U} f(a, u). This inequality, together with the fact that a ∈ S, gives x ∈ S, and the proof is complete.

NITHIRAT SISARAT, RABIAN WANGKEEREE AND GUE MYUNG LEE
Now, we give the following example to illustrate the significance of Theorem 4.1, in which at least one of the constraint functions g_i(·, v_i), for some v_i ∈ V_i, is not convex while the robust feasible set is convex; consequently, the results in [15,27,34,35] may not be applicable to this example. Consider the following constrained optimization problem with uncertain data (UP). A robust solution of (UP) is obtained by solving its robust (worst-case) counterpart (RP): Minimize max_{u∈U} f(x, u) subject to the robust constraints. Evidently, the function f : R^2 × U → R is a convex-concave function. Let us notice the forms of the constraint functions and their worst-case values. Thus a := (a_1, a_2) = (0, 0) ∈ S, and the robust basic constraint qualification holds at a. Also, for each u ∈ U, the convex subdifferential of f(·, u) at any point x is given by ∂f(·, u)(x) = {(u_1 − 1, u_2 − 1)}.

With the help of Proposition 2, we now see how the robust solution set can be characterized in terms of the pseudo Lagrangian-type function.
Proposition 3. Assume that all conditions of Theorem 3.3 hold. Let a ∈ S be a robust optimal solution fulfilling the robust basic constraint qualification. Then there exist a Lagrange multiplier vector λ^a := (λ^a_1, . . . , λ^a_m) ∈ R^m_+ and uncertainty parameters u^a ∈ U, v^a_i ∈ V_i, i ∈ I, such that S coincides with the set S* below. Proof. It will be convenient to denote S* := {x ∈ F : ⟨η_i, x − a⟩ = 0 for some η_i ∈ ∂^o g_i(·, v^a_i)(a), ∀i ∈ I(a); 0 ∈ ∂L_P(·, a, λ^a, u^a, v^a)(x) and f(x, u^a) = max_{u∈U} f(x, u)}.
By Proposition 2, we have that, for each x ∈ S, λ^a_i g^o_{ix}(a, v^a_i; x − a) = 0, ∀i ∈ I(a), f(x, u^a) = max_{u∈U} f(x, u), and L_P(·, a, λ^a, u^a, v^a) is constant on S. The latter implies that 0 ∈ ∂L_P(·, a, λ^a, u^a, v^a)(x), and so S ⊆ S*. To obtain the converse inclusion, let x ∈ S* be given. Then, by the definition of S*, x ∈ F and there exist η_i ∈ ∂^o g_i(·, v^a_i)(a), ∀i ∈ I(a), such that (16) holds and L_P(y, a, λ^a, u^a, v^a) ≥ L_P(x, a, λ^a, u^a, v^a) = f(x, u^a) for all y ∈ R^n.
Using (16) and taking y = a in the last inequality, we get that max_{u∈U} f(x, u) ≤ max_{u∈U} f(a, u). Hence, since a ∈ S, max_{u∈U} f(x, u) ≤ max_{u∈U} f(y, u) for all y ∈ F, which is nothing else than x ∈ S.
Corollary 3. For the problem (P), let f : R^n → R be a convex function and F := {x ∈ R^n : g_i(x) ≤ 0, i ∈ I} be convex. Assume that for any x ∈ F and i ∈ I(x), the functions g_i are locally Lipschitz and regular in the sense of Clarke, that a ∈ S is an optimal solution fulfilling N(F, a) = cone ⋃_{i∈I(a)} ∂^o g_i(a), and that the optimality conditions hold.

(vi) Similarly, if θ = (2/3, 1/3), then we can take a_θ = (1/4, 1)^T ∈ S_θ and λ_θ = (0, 0, 1/6, 0). Therefore, by Theorem 5.2, the weakly and properly robust efficient solution sets of (UMP) are as follows.

6. Conclusions. In this paper, following the framework of robust optimization, we consider an uncertain convex optimization problem without convexity assumptions on the constraint functions. We provide a new pseudo Lagrangian-type function which is constant on the robust optimal solution set, and we use it to obtain some characterizations of the set of all robust optimal solutions of a given problem. Furthermore, as applications, we obtain some characterizations of both the weakly robust efficient solution set and the properly robust efficient solution set for a convex multi-objective optimization problem with data uncertainty.