Characterizing robust weak sharp solution sets of convex optimization problems with uncertainty

We introduce robust weak sharp and robust sharp solutions to convex programming problems whose objective and constraint functions involve uncertainty. Characterizations of the sets of all robust weak sharp solutions are obtained by means of subdifferentials of convex functions, DC functions, the Fermat rule, and the robust-type subdifferential constraint qualification introduced in X.K. Sun, Z.Y. Peng and X. Le Guo, Some characterizations of robust optimal solutions for uncertain convex optimization problems, Optim. Lett. 10 (2016), 1463–1478. In addition, some applications to the multi-objective case are presented.

1. Introduction. The notion of a weak sharp minimizer in general mathematical programming problems was first introduced in [15]. It extends the notion of a sharp minimizer (equivalently, a strongly unique minimizer) in [22] to allow for non-unique solution sets. It is well acknowledged that weak sharp minimizers play an important role in stability/sensitivity analysis and in the convergence analysis of a wide range of numerical algorithms in mathematical programming (see [6,7,8,9,19,21] and the references therein).
In the context of optimization, much attention has been paid to sufficient and/or necessary conditions for weak sharp minimizers/solutions and to characterizing the weak sharp solution sets (the sets of such weak sharp minimizers) in various types of problems. In particular, the study of characterizations of weak sharp solution sets covers both single-objective and multi-objective optimization problems (see [10,11,12,30] and the references therein) and has recently been extended to mathematical programs with inequality constraints and to semi-infinite programs (see, e.g., [27,28]). While such characterizations have been obtained for many classes of optimization problems, what is known about this issue in robust optimization?
Robust (convex) optimization is an important class of convex optimization that deals with uncertainty in the data of the problems [2,3]. The goal of robust optimization is to immunize an optimization problem against uncertain parameters in the problem. In the last two decades, it has undergone rapid development, owing to practical requirements and its effective implementation in real-world applications of optimization (see, e.g., [17,18,20,23] and the references therein). A successful treatment of robust optimization approaches to convex optimization problems under data uncertainty was given in [2,3,4,5,24].
While characterizations of optimal solution sets have recently been in the limelight, there has been no research concerning characterizations of robust weak sharp solution sets for such problems. Indeed, a robust weak sharp solution of an uncertain optimization problem is a weak sharp minimizer of the robust counterpart of that problem. Our main goal in this paper is to establish characterizations of the robust weak sharp solution set of a convex optimization problem under data uncertainty.
This paper is organized as follows. In Section 2, we recall the basic definitions. In Section 3, we establish necessary conditions for a robust weak sharp solution, the constancy of a Lagrangian-type function on the robust weak sharp solution set, and some characterizations of robust weak sharp solution sets. Some properties of subdifferentials of convex functions and the (RSCQ), which was introduced in [24], are employed in that section. Finally, in Section 4, we consider characterizations of the robust weak sharp weakly efficient solution sets for the multi-objective optimization problem under data uncertainty.
2. Preliminaries. Throughout the paper, let R^n, n ∈ N, be the n-dimensional Euclidean space, whose inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. The symbol B(x, r) stands for the open ball centered at x ∈ R^n with radius r > 0, while B_{R^n} stands for the closed unit ball in R^n. For a nonempty subset A ⊆ R^n, we denote the closure, boundary and convex hull of A by cl A, bd A, and co A, respectively. A set E ⊆ R^n is said to be a cone if λx ∈ E for every λ ≥ 0 and every x ∈ E. The dual cone E* of a cone E is given by E* := {x ∈ R^n : ⟨x, y⟩ ≥ 0 for all y ∈ E}. Observe that the dual cone E* is always closed and convex (regardless of E).
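As a quick illustration of the dual cone (our example, not part of the original development), the nonnegative orthant is self-dual:

```latex
E = \mathbb{R}^n_{+} \quad\Longrightarrow\quad
E^{*} = \{\, y \in \mathbb{R}^n : \langle x, y\rangle \ge 0 \ \text{for all } x \in \mathbb{R}^n_{+} \,\}
      = \mathbb{R}^n_{+},
```

since ⟨x, y⟩ = ∑_i x_i y_i is nonnegative for every x ≥ 0 exactly when every coordinate y_i is nonnegative.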
In general, for a given nonempty set A ⊆ R^n, the indicator function δ_A : R^n → R ∪ {+∞} of A and the support function σ_A : R^n → R ∪ {+∞} of A are, respectively, defined by

δ_A(x) := 0 if x ∈ A, δ_A(x) := +∞ otherwise,  and  σ_A(x) := sup_{a∈A} ⟨x, a⟩.

The distance function d_A : R^n → [0, +∞) is defined by

d_A(x) := inf_{a∈A} ‖x − a‖.

The normal cone of the set A at a point x ∈ A is the set

N_A(x) := {x* ∈ R^n : ⟨x*, y − x⟩ ≤ 0 for all y ∈ A}.

The normal cone N_A(x) is always closed and convex for any set A.
For any extended real-valued function h : R^n → R̄ := [−∞, +∞], the effective domain and the epigraph are, respectively,

dom h := {x ∈ R^n : h(x) < +∞}  and  epi h := {(x, r) ∈ R^n × R : h(x) ≤ r}.

The function h is said to be proper if and only if h(x) > −∞ for every x ∈ R^n and dom h is nonempty. Further, it is said to be convex if for any x, y ∈ R^n and λ ∈ [0, 1],

h(λx + (1 − λ)y) ≤ λh(x) + (1 − λ)h(y),

or, equivalently, epi h is convex. On the other hand, h is said to be concave if and only if −h is convex. In the case of a vector-valued function, let h : R^n → R^p be given and let D ⊆ R^p be a convex set. The function h is said to be D-convex if and only if for any x, y ∈ R^n and λ ∈ [0, 1],

λh(x) + (1 − λ)h(y) − h(λx + (1 − λ)y) ∈ D.

The function h is called lower semicontinuous at x ∈ R^n if

h(x) ≤ lim inf_{k→∞} h(x_k)

for every sequence {x_k} ⊆ R^n converging to x, where the term on the right-hand side denotes the lower limit of the sequence {h(x_k)}, defined as lim inf_{k→∞} h(x_k) := sup_{k≥1} inf_{j≥k} h(x_j). For any proper convex function h : R^n → R̄, the subdifferential of h at x̄ ∈ dom h is defined by

∂h(x̄) := {x* ∈ R^n : h(x) ≥ h(x̄) + ⟨x*, x − x̄⟩ for all x ∈ R^n}.

More generally, for each ε ≥ 0, the ε-subdifferential of h at x̄ ∈ dom h is defined by

∂_ε h(x̄) := {x* ∈ R^n : h(x) ≥ h(x̄) + ⟨x*, x − x̄⟩ − ε for all x ∈ R^n}.

In particular, if h is a proper lower semicontinuous convex function, then for every x̄ ∈ dom h and every ε > 0, the ε-subdifferential ∂_ε h(x̄) is a nonempty closed convex set, and

∂h(x̄) = ⋂_{ε>0} ∂_ε h(x̄).

If x̄ ∉ dom h, then we set ∂h(x̄) = ∅. Moreover, for a nonempty closed convex subset A of R^n and any x ∈ A, we have ∂δ_A(x) = N_A(x) and ∂d_A(x) = B_{R^n} ∩ N_A(x).
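To make these formulas concrete, here is a small worked example of ours: take A = (−∞, 0] ⊆ R and the boundary point x̄ = 0, so that d_A(x) = max{x, 0}. Then

```latex
N_A(0) = \{\, x^{*} \in \mathbb{R} : x^{*}(y - 0) \le 0 \ \text{for all } y \le 0 \,\} = [0, +\infty),
\qquad
\partial d_A(0) = B_{\mathbb{R}} \cap N_A(0) = [0, 1],
```

which agrees with subdifferentiating d_A(x) = max{x, 0} directly: exactly the slopes between 0 and 1 support the graph at the kink.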
The conjugate function h* : R^n → R̄ of any h : R^n → R̄ is defined by

h*(x*) := sup_{x∈R^n} {⟨x*, x⟩ − h(x)} for all x* ∈ R^n.

The function h* is lower semicontinuous and convex irrespective of the nature of h, but for h* to be proper we need h to be a proper convex function.
Next, let us recall some basic concepts of DC programming. A DC function is the difference of two convex functions. The minimization (or maximization) problem of a DC function is called a DC problem; i.e., the DC problem is concerned with finding

inf_{x∈R^n} {h(x) := f(x) − φ(x)},

where f, φ : R^n → R̄ are convex. Note that the function h is DC and, in general, is not convex.
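For instance (our example), the nonconvex function h(x) = |x| − x² is DC with the convex decomposition

```latex
h(x) \;=\; \underbrace{|x|}_{f(x)} \;-\; \underbrace{x^{2}}_{\varphi(x)},
```

and minimizing h over a bounded feasible set is a DC problem even though h itself is neither convex nor concave.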
Later on, some DC problems will be considered, and their properties, in particular the following lemma, will be employed.
Lemma 2.2. [18] Let U ⊆ R^p be a convex compact set, and let f : R^n × R^p → R be a function such that f(·, u) is convex for any u ∈ U and f(x, ·) is concave for any x ∈ R^n. Then, for any x ∈ R^n,

∂( max_{u∈U} f(·, u) )(x) = co ( ⋃_{u∈U(x)} ∂f(·, u)(x) ),

where U(x) := {u ∈ U : f(x, u) = max_{u'∈U} f(x, u')}.

Let C ⊆ R^n be a nonempty closed convex set, and let D ⊆ R^m be a nonempty closed convex cone. Consider the following convex optimization problem:

(P)  min f(x)  subject to  g(x) ∈ −D, x ∈ C,

where f : R^n → R is a convex function and g : R^n → R^m is a D-convex function. The feasible set of (P) is defined by

F := {x ∈ C : g(x) ∈ −D}.

The problem (P) in the face of data uncertainty in both the objective and the constraints can be captured by the following uncertain optimization problem:

(UP)  min f(x, u)  subject to  g(x, v) ∈ −D, x ∈ C,

where U ⊆ R^p and V ⊆ R^q are convex and compact uncertainty sets, f : R^n × U → R is a given real-valued function such that, for any uncertain parameter u ∈ U, f(·, u) is convex and, for any x ∈ R^n, f(x, ·) is concave, and g : R^n × V → R^m is a vector-valued function such that, for any uncertain parameter v ∈ V, g(·, v) is D-convex and, for any x ∈ R^n, g(x, ·) is D-concave. The uncertainty sets reflect the fact that the parameter vectors u and v are not known exactly at the time of the decision.

For examining the uncertain optimization problem (UP), one usually associates with it its robust (worst-case) counterpart, which is the following problem:

(RUP)  min max_{u∈U} f(x, u)  subject to  g(x, v) ∈ −D for all v ∈ V, x ∈ C.

It is worth observing here that the robust counterpart, which is termed the robust optimization problem, finds a worst-case solution that is immunized against the data uncertainty. The problem (RUP) is said to be feasible if the robust feasible set K is nonempty, where

K := {x ∈ C : g(x, v) ∈ −D for all v ∈ V}.

Now, we recall the following concept of solutions, which was introduced in [1].

Definition 2.3. [1] A point x̄ ∈ K is said to be a robust optimal solution of (UP) if it is an optimal solution of (RUP), i.e., for all x ∈ K,

max_{u∈U} f(x̄, u) ≤ max_{u∈U} f(x, u).

The robust optimal solution set of (UP) is the set consisting of all robust optimal solutions of (UP).

In this paper, using the ideas of a weak sharp minimizer and of a robust optimal solution, we introduce a new concept of solution for (UP) related to sharpness, namely the robust weak sharp solution.
Definition 2.4. A point x̄ ∈ K is said to be an (optimal) weak sharp solution of (RUP) if there exists a real number η > 0 such that for all x ∈ K,

Definition 2.5. A point x̄ ∈ K is said to be an (optimal) robust weak sharp solution of (UP) if it is a weak sharp solution of (RUP). The robust weak sharp solution set of (UP) is denoted by S. Throughout the paper, we assume that S is nonempty.
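To illustrate Definitions 2.4–2.5 numerically, the following sketch (ours; it assumes the common form of weak sharpness in which the worst-case objective grows at least linearly with the distance to the solution set S) checks the inequality max_{u∈U} f(x, u) − max_{u∈U} f(x̄, u) ≥ η d_S(x) on a grid for the toy problem f(x, u) = x² + ux with U = [−1, 1] and K = R, whose robust solution set is S = {0}:

```python
# Toy robust problem (hypothetical example): f(x, u) = x**2 + u*x, U = [-1, 1].
# Worst-case objective: F(x) = max_{u in U} f(x, u) = x**2 + |x|,
# so the robust solution set is S = {0} and d_S(x) = |x|.
# We verify the (assumed) weak-sharpness inequality F(x) - F(0) >= eta * d_S(x)
# with eta = 1 on a grid of sample points.

def worst_case(x, u_grid):
    """Worst-case value max_{u in u_grid} f(x, u) for f(x, u) = x**2 + u*x."""
    return max(x * x + u * x for u in u_grid)

u_grid = [i / 100.0 for i in range(-100, 101)]   # discretization of U = [-1, 1]
xs = [i / 100.0 for i in range(-200, 201)]       # sample points in K

eta = 1.0
F0 = worst_case(0.0, u_grid)                     # robust optimal value, here 0
sharp = all(worst_case(x, u_grid) - F0 >= eta * abs(x) - 1e-12 for x in xs)
print(sharp)
```

On this grid the check prints `True`: the worst-case gap x² + |x| dominates η d_S(x) = |x| everywhere, so x̄ = 0 is robust weak sharp with modulus η = 1 under the assumed definition.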

Remark 1. It is worth noting that every robust weak sharp solution of (UP) is a robust optimal solution. In general, the reverse implication need not hold.
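A one-dimensional illustration of this gap (our example, with singleton uncertainty sets so that (UP) and (RUP) coincide): take K = R and f(x) = x². Then x̄ = 0 is the unique robust optimal solution, but for any η > 0,

```latex
f(x) - f(\bar{x}) \;=\; x^{2} \;<\; \eta\,|x| \;=\; \eta\, d_{S}(x)
\qquad \text{whenever } 0 < |x| < \eta,
```

so no η > 0 can witness the sharpness inequality, and x̄ = 0 is not a robust weak sharp solution.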

3. Characterizations of robust weak sharp solutions.
In this section, we establish some optimality conditions for robust weak sharp solutions of convex uncertain optimization problems and obtain characterizations of the robust weak sharp solution sets of the considered problems. For any x̄ ∈ R^n, we use the following notation:

U(x̄) := {u ∈ U : f(x̄, u) = max_{u'∈U} f(x̄, u')}.

The following definition, which was introduced in [24], plays a vital role in determining characterizations of robust weak sharp solution sets.

Remark 2. In an excellent work [24], Sun et al. introduced the (RSCQ) and then obtained some characterizations of the robust optimal solution set for an uncertain convex optimization problem. Although that work serves as a guideline for dealing with (UP), our attention is paid to characterizing the sets of robust weak sharp solutions of such a problem. Furthermore, the presence of the term d_K(x) in this paper requires tools and methods different from those in the work of Sun et al.
The following theorem shows that the robust-type subdifferential constraint qualification (RSCQ) defined in Definition 3.1 is fulfilled if and only if the optimality conditions characterizing a robust weak sharp solution of (UP) are satisfied.
Theorem 3.2. Let f : R^n × R^p → R and g : R^n × R^q → R^m satisfy the following properties: (i) for any u ∈ U and v ∈ V, f(·, u) is convex and continuous and g(·, v) is D-convex on R^n; (ii) for any x ∈ R^n, f(x, ·) is concave on U and g(x, ·) is D-concave on V. Then, the following statements are equivalent:
(a) The (RSCQ) is fulfilled at x̄ ∈ K;
(b) x̄ ∈ R^n is a robust weak sharp solution of (UP) if and only if there exists a positive constant η such that (2) holds.

Proof. (a) ⇒ (b): Assume that the (RSCQ) is satisfied at x̄ ∈ K, and let x̄ be a robust weak sharp solution of (UP). Consequently, there exists η > 0 such that (3) holds. By (3), we obtain that for all x ∈ K,
Since max_{u∈U} f(·, u) is continuous on R^n and δ_K is proper, lower semicontinuous and convex, and noting that ∂d_K(x̄) = B_{R^n} ∩ N_K(x̄) and that the (RSCQ) is satisfied at x̄, we have the following:
which implies that (2) holds.

Conversely, assume that there is a positive number η such that (2) holds. Since N_K(x̄) ∩ ηB_{R^n} always contains 0, it is nonempty, and so is ⋂_{ε>0} ∂_ε(ηd_K)(x̄). Thus, for any ε ≥ 0, ∂_ε(ηd_K)(x̄) ≠ ∅. Let ε > 0 be arbitrary and let ξ ∈ ∂_ε(ηd_K)(x̄). Then, for any x ∈ K,
Note that 0 ∈ ∂_ε(ηd_K)(x̄). It follows that
The above inequality and (4) imply that
and, for any x ∈ R^n, we have
Adding these inequalities implies that, for each
Since û belongs to U(x̄), for each x ∈ K, the above inequality becomes
This, together with (μ̄g)(x, v̄) ≤ 0, (μ̄g)(x̄, v̄) = 0, and (6), implies that, for all x ∈ K,
Combining inequalities (5) and (7) leads to
Since this inclusion holds for arbitrary ε ≥ 0, it follows from Lemma 2.1 that x̄ is a minimizer of the DC problem inf_{x∈R^n} {max_{u∈U} f(x, u) − ηd_K(x)}, and hence, for any
Therefore, for any x ∈ K,
This means that x̄ is a robust weak sharp solution of (UP).
(b) ⇒ (a): Let ξ_δ ∈ ∂δ_K(x̄) be given. Then

holds for all x ∈ K. Let η̄ > 0 be given, and set f(x, u) := −⟨ξ_δ, x⟩ + η̄d_K(x). Thus, for any x ∈ K,
so x̄ is a robust weak sharp solution of (UP). By hypothesis, there is η := η̄ such that (2) is fulfilled. Since, for any
It follows that
and so we obtain the desired inclusion. Therefore, the proof is complete.
Remark 3. In [26], necessary conditions for weak sharp minima in cone-constrained optimization problems, which cover weak sharp minima in cone-constrained robust optimization problems, were established by means of upper Studniarski or Dini directional derivatives. Theorem 3.2 establishes the mentioned necessary conditions by an alternative method, different from that of the referred work.
The following result is established easily by means of the basic concepts of variational analysis.
Corollary 1. Let f : R^n × R^p → R and g : R^n × R^q → R^m satisfy the following properties: 1. for any u ∈ U and v ∈ V, f(·, u) is convex and continuous and g(·, v) is D-convex on R^n; 2. for any x ∈ R^n, f(x, ·) is concave on U and g(x, ·) is D-concave on V. Then the following statements are equivalent:
(a) The (RSCQ) is fulfilled at x̄ ∈ K;
(b) x̄ ∈ R^n is a robust weak sharp solution of (UP) if and only if there exists a real number η > 0 such that, for any

The result for the special case in which U and V are singleton sets can be obtained easily and is presented as follows:

Corollary 2. Let f : R^n → R be convex and continuous and let g : R^n → R^m be D-convex. Then the following statements are equivalent: 1. The (SCQ) is fulfilled at x̄ ∈ K; 2. x̄ ∈ R^n is a weak sharp solution of (P) if and only if there exists a real number η > 0 such that, for any x* ∈ N_K(x̄) ∩ ηB_{R^n}, there exists μ̄ ∈ D* such that x* ∈ ∂f(x̄) + ∂δ_C(x̄) + ∂(μ̄g)(x̄) and (μ̄g)(x̄) = 0.
Next, a characterization of robust weak sharp solution sets in terms of a given robust weak sharp solution of our considered problem is also presented in this section. To this end, we first prove that the Lagrangian-type function associated with fixed Lagrange multiplier and uncertainty parameters corresponding to a robust weak sharp solution is constant on the robust weak sharp solution set under suitable conditions. In what follows, let u ∈ U, v ∈ V and μ ∈ D*. The Lagrangian-type function L(·, μ, u, v) is given by

L(x, μ, u, v) := f(x, u) + ⟨μ, g(x, v)⟩ for all x ∈ R^n.

Now, we denote by S the robust weak sharp solution set of (UP), and we prove that the Lagrangian-type function associated with a Lagrange multiplier corresponding to a robust weak sharp solution is constant on this set.

Theorem 3.3. Let x̄ ∈ S be given, and suppose that the (RSCQ) is satisfied at x̄. Then there exist uncertainty parameters û ∈ U, v̄ ∈ V and a Lagrange multiplier μ̄ ∈ D* such that, for any x ∈ S, (μ̄g)(x, v̄) = 0 and û ∈ U(x), and L(·, μ̄, û, v̄) is constant on S.
Proof. Since x̄ ∈ S, there is a real number η₁ > 0 as in Definition 2.4; since the (RSCQ) is satisfied at x̄, Theorem 3.2 yields that (2) holds with η := η₁. Clearly, N_K(x̄) ∩ ηB_{R^n} contains 0, so it is nonempty, and so is ∂_ε(ηd_K)(x̄) for every ε > 0. Let ε > 0 and x* ∈ ∂_ε(ηd_K)(x̄) be arbitrary. Again, we obtain that there exist û ∈ U, v̄ ∈ V and μ̄ ∈ D* such that (2) is fulfilled. Let x ∈ S be arbitrary; then we have
Since f(·, u) and g(·, v) are convex for all u ∈ U and v ∈ V, respectively,
and so
Note that, as x ∈ S, there exists η₂ > 0 such that
and so
From μ̄ ∈ D*, g(x, v̄) ∈ −D, and (11), it is not hard to see that
Then, by (11) and the nonnegativity of ηd_K(x), we see that
which together with (12) leads to
It follows that L(x, μ̄, û, v̄) = f(x, û), which is constant. Since x ∈ S was arbitrary, the proof is complete.
In the case where D := R₊, which is a closed convex (and pointed) cone in R, the problem reduces to an inequality-constrained problem. Suppose that f : R^n × U → R is a function such that f(·, u) is convex for any u ∈ U and f(x, ·) is concave for any x ∈ R^n, and that g : R^n × V → R is a function such that g(·, v) is convex for any v ∈ V and g(x, ·) is concave for any x ∈ R^n. Here, the problem (UP) is represented as

min {f(x, u) : g(x, v) ≤ 0, ∀v ∈ V},

and its robust counterpart is

min {max_{u∈U} f(x, u) : g(x, v) ≤ 0, ∀v ∈ V}.

In this case, the robust feasible set K is given by

K := {x ∈ C : g(x, v) ≤ 0 for all v ∈ V}.

Corollary 3. Let f : R^n × R^p → R and g : R^n × R^q → R satisfy the following properties: 1. for any u ∈ U and v ∈ V, f(·, u) is convex and continuous and g(·, v) is convex on R^n; 2. for any x ∈ R^n, f(x, ·) and g(x, ·) are concave on U and V, respectively.
Corollary 5. For the problem (UP), let S be the robust weak sharp solution set of (UP) and let x̄ belong to it. Suppose that the (RSCQ) is satisfied at x̄ ∈ S. Then there exist uncertainty parameters û ∈ U, v̄ ∈ V and a Lagrange multiplier μ̄ ≥ 0 such that

4. Applications to multi-objective optimization. In this section, in order to apply the general results of the previous section, we investigate the class of multi-objective optimization problems (MP), where C ⊆ R^n is a nonempty convex set, D ⊆ R^m is a closed convex cone, f_i : R^n → R is a convex function for any i ∈ I := {1, . . . , l}, and g : R^n → R^m is a D-convex function. The feasible set of (MP) is defined by

F := {x ∈ C : g(x) ∈ −D}.

The problem (MP) in the face of data uncertainty in both the objectives and the constraint can be captured by the uncertain multi-objective optimization problem (UMP), where f_i : R^n × R^p → R, i = 1, . . . , l, and g : R^n × R^q → R^m; u_i, i = 1, . . . , l, and v are uncertain parameters belonging to the corresponding convex and compact uncertainty sets U_i ⊆ R^p and V ⊆ R^q. Suppose that for any u_i ∈ U_i, i ∈ I, the function f_i(·, u_i) is convex on R^n and, for any x ∈ R^n, f_i(x, ·) is concave on U_i, i ∈ I. Besides, suppose that for any v ∈ V, the function g(·, v) is D-convex on R^n and, for any x ∈ R^n, g(x, ·) is D-concave on V.
Similarly, we obtain some characterizations of the robust weak sharp weakly efficient solutions of (UMP) by investigating its robust (worst-case) counterpart (RUMP), whose robust feasible set is again denoted by K. Now, we recall the following concepts of robust weakly efficient solutions in multi-objective optimization, which can be found in the literature; see, e.g., [20] and [30].
Now, we introduce a new concept of solution related to sharpness, namely the robust weak sharp weakly efficient solution.
Definition 4.4. A point x̄ ∈ K is said to be a robust weak sharp weakly efficient solution of (UMP) if it is a weak sharp weakly efficient solution of (RUMP).
The following lemma is useful for establishing our results in this section.

Lemma 4.5. [25] Let U_1, . . . , U_l be nonempty convex and compact subsets of R^p, and for any u_i ∈ U_i, i ∈ I, let the function f_i(·, u_i) : R^n → R be convex, and for any x ∈ R^n let f_i(x, ·) be concave on U_i. Then,

Now, by using methods similar to those of Section 3, we can characterize the corresponding robust weak sharp weakly efficient solutions of (UMP).
Theorem 4.6. Let f : R^n × R^p → R^l and g : R^n × R^q → R^m satisfy the following properties: 1. for any u_i ∈ U_i, i ∈ I, and v ∈ V, f_i(·, u_i) is convex and continuous and g(·, v) is D-convex on R^n; 2. for any x ∈ R^n, f_i(x, ·) is concave on U_i, i ∈ I, and g(x, ·) is D-concave on V. Then, the following statements are equivalent:
(a) The (RSCQ) is fulfilled at x̄ ∈ K;
(b) x̄ ∈ R^n is a robust weak sharp weakly efficient solution of (UMP) if and only if there exists η > 0 such that, for any x* ∈ N_K(x̄) ∩ ηB_{R^n}, there exist û_i ∈ U_i(x̄), σ̄_i ≥ 0, i ∈ I, not all zero, v̄ ∈ V, and μ̄ ≥ 0 such that (23)–(25) hold.

Proof. (a) ⇒ (b): Assume that the (RSCQ) is satisfied at x̄ ∈ R^n. Let x̄ be a robust weak sharp weakly efficient solution of (UMP), i.e., there exists η > 0 such that there does not exist y ∈ K \ {x̄} satisfying
or, equivalently, for any x ∈ K,
By (26), there is s ∈ I such that, for all x ∈ K,
Besides, according to (27), following the techniques used in Theorem 3.2, we obtain that for any ξ ∈ ∂(ηd_K)(x̄),
Therefore,
Note that the term on the right-hand side of the above inclusion lies in the subdifferential of the max function
By the well-known fact that the subdifferential of a maximum of convex functions at x̄ is the convex hull of the union of the subdifferentials of the functions active at x̄, the inclusion (29) becomes
Further, setting σ̄_i := σ_i for i ∈ I(x̄) and σ̄_i := 0 otherwise leads to
By the definition of φ_i, i ∈ I, the continuity of max_{u_i∈U_i} f_i(·, u_i), i ∈ I, and the lower semicontinuity and convexity of δ_K, we have
It follows from Lemma 4.5 and the hypothesis that the (RSCQ) is satisfied at x̄ ∈ K that
Because σ̄_i ≥ 0, i = 1, . . . , l, are not all zero,
As ∂d_K(x̄) = N_K(x̄) ∩ B_{R^n}, we obtain (23), as desired.
Conversely, assume that there is η > 0 such that (23)–(25) hold. Then, for any ε > 0, ∂_ε(ηd_K)(x̄) is nonempty. Let ε > 0 and ξ ∈ ∂_ε(ηd_K)(x̄) be arbitrary; then, for any x ∈ K,
Therefore, we obtain
The above inequality and (31) imply that
Further, since 0 ∈ N_K(x̄) ∩ ηB_{R^n}, there exist ξ_f ∈ ∑_{i∈I} σ̄_i ∂f_i(·, û_i)(x̄), ξ_δ ∈ ∂δ_C(x̄), and ξ_{μ̄g} ∈ ∂((μ̄g)(·, v̄))(x̄) such that
As
Then, adding these inequalities yields
Since û_i belongs to U_i(x̄), the above inequality becomes the following one:
This, together with (μ̄g)(x, v̄) ≤ 0, (μ̄g)(x̄, v̄) = 0, and (33), gives, for any x ∈ K,
By summing (34) with (31), for any x ∈ K, we obtain
which is equivalent to the following inequality for all x ∈ K:
It follows that, for any x ∈ K,
which yields, for any i ∈ I,
Therefore, for any x ∈ K,
This means that x̄ is a robust weak sharp weakly efficient solution of (UMP).
(b) ⇒ (a): Let η̄ > 0 be given and let ξ_δ ∈ ∂δ_K(x̄). Consider f_i(x, u_i) := −⟨ξ_δ, x⟩ + η̄d_K(x), i ∈ I. Thus, for any x ∈ K,
so x̄ is a robust weak sharp weakly efficient solution of (UMP). By hypothesis, there is η := η̄ such that (23) is fulfilled. Since, for any
where σ̄_i ≥ 0, i ∈ I, are not all zero, we obtain that, for any
As 0 ∈ N_K(x̄) ∩ ηB_{R^n}, we obtain
It follows that
and so we obtain the desired inclusion. Therefore, the proof is complete.
Remark 4. (i) In [13] and [14], the authors presented necessary conditions for local sharp efficiency in semi-infinite vector optimization problems by a method different from that of Theorem 4.6; in fact, they employed the exact sum rule for Fréchet subdifferentials to obtain their results. (ii) In [29], the exact sum rule for Mordukhovich subdifferentials was used as a vital tool, under some regularity and differentiability assumptions, to establish the corresponding results. Thus, Theorem 4.6 uses a method different from those of the mentioned works.
for any σ̄_i ≥ 0, i ∈ I, not all zero. Therefore, we obtain
Since x̄ ∈ S and x ∈ K, we conclude that x ∈ S.
5. Conclusion. In this paper, we examined convex optimization problems with uncertain data and defined a robust weak sharp solution by studying weak sharp solutions of robust convex optimization problems, where the uncertain constraints are enforced for all possible uncertainties within prescribed uncertainty sets. By employing tools of convex analysis and suitable constraint qualifications, we established necessary and sufficient conditions for robust weak sharp solutions, as well as characterizations of the robust weak sharp solution set. As an application, we provided characterizations of the robust weak sharp weakly efficient solution sets for multi-objective convex optimization problems with uncertain constraints.