Hadamard directional differentiability of the optimal value of a linear second-order conic programming problem

In this paper, we consider perturbation properties of a linear second-order conic optimization problem and its Lagrange dual in which all parameters of the problem are perturbed. We prove the upper semicontinuity of the solution mappings of the perturbed problem and of its Lagrange dual. We show that the optimal value function can be expressed as a min-max optimization problem over two compact convex sets, and we prove that it is Lipschitz continuous and Hadamard directionally differentiable.

1. Introduction. It is well known that stability theory plays an important role in studying the linear two-stage stochastic optimization problem
$$\min_{x \in \mathbb{R}^n} \; d^T x + \mathbb{E}\bigl[\theta(x,\xi)\bigr], \qquad \theta(x,\xi) := \min_{y \in \mathbb{R}^m} \bigl\{ c^T y : W y = h - T x,\; y \ge 0 \bigr\},$$
where $x \in \mathbb{R}^n$ is the first-stage decision variable with cost vector $d \in \mathbb{R}^n$, $y \in \mathbb{R}^m$ is the second-stage decision variable, and $c \in \mathbb{R}^m$, $W \in \mathbb{R}^{l \times m}$, $T \in \mathbb{R}^{l \times n}$, $h \in \mathbb{R}^l$. Here $\xi$ is a random variable composed of some of the elements of $\{c, W, T, h\}$. The continuity and differentiability properties of $\theta(x, \xi)$ are particularly important in the stability analysis of the linear two-stage problem when the probability distribution is perturbed. There are many publications on the stability of two-stage optimization, but only a few of them consider the case $\xi = (c, W, T, h)$, namely the case in which all parameters of the second-stage linear program are random. For example, in Section 3 of [12], Römisch and Wets obtained the Lipschitz continuity of the optimal value $\theta(x, \xi)$, and Han and Chen [8] investigated continuity properties of parametric linear programs.
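As a quick numerical illustration of the second-stage value function $\theta(x,\xi)$ above, the following sketch evaluates $\theta$ for a tiny instance using `scipy.optimize.linprog`. The data below are toy values chosen purely for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def theta(x, c, W, T, h):
    """Second-stage optimal value theta(x, xi) = min{c^T y : W y = h - T x, y >= 0}."""
    res = linprog(c, A_eq=W, b_eq=h - T @ x, bounds=[(0, None)] * len(c))
    return res.fun

# Toy data (illustrative only): l = 1, m = 2, n = 1.
c = np.array([1.0, 2.0])
W = np.array([[1.0, 1.0]])
T = np.array([[1.0]])
h = np.array([1.0])
x = np.array([0.5])

# Recourse constraint y1 + y2 = 0.5, y >= 0; the cheapest choice is y = (0.5, 0).
print(theta(x, c, W, T, h))
```

Varying `x` here traces out the parametric value function whose continuity and differentiability properties the paper studies.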
The literature on perturbation analysis of optimization problems is enormous, and even a short summary of the most important results would be far beyond our reach. For the perturbation analysis of general optimization problems one may refer to [1] and [4]; for structured optimization problems, to [9]; and for stability results on linear complementarity and affine variational inequality problems, to [5] and [6].
In this paper, instead of a linear program in the second stage, we consider a linear second-order conic optimization problem. A variety of engineering applications of second-order cone optimization are collected in [10], such as filter design, antenna array weight design, truss design, and grasping-force optimization in robotics. It is also worth mentioning that the classical portfolio problem with $n$ assets held over one period can be transformed into a second-order cone optimization problem. The problem we study can be stated as follows. Given a closed convex set $X \subset \mathbb{R}^n$ and a point $x \in X$, the second-order conic optimization problem is defined by
$$\mathrm{P}(x,\xi): \quad \theta(x,\xi) = \min_{y \in \mathbb{R}^m} \bigl\{ c^T y : a_i^T y + q_i^T x - b_i \ge \|B_i y\|,\; i = 1,\ldots,l \bigr\},$$
where $\xi = (c; A; Q; B; b)$ is a given parameter and $\|\cdot\|$ denotes the 2-norm throughout this paper unless otherwise stated. Here $c \in \mathbb{R}^m$, $A = (a_1, \cdots, a_l)^T \in \mathbb{R}^{l \times m}$, $Q = (q_1, \cdots, q_l)^T \in \mathbb{R}^{l \times n}$, $b \in \mathbb{R}^l$, and $B = (B_1; \ldots; B_l)$ with $B_i \in \mathbb{R}^{J \times m}$, $i = 1, \ldots, l$.
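A minimal numerical sketch of such a problem can be solved with a general nonlinear solver, since SciPy has no dedicated SOCP routine; here the conic constraint is handled as a smooth inequality away from the kernel of $B$. All data below are an illustrative toy instance assumed for this example.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of P(x, xi): min c^T y  s.t.  a^T y + q^T x - b >= ||B y||,
# with l = 1, m = 2, J = 2.  The data are illustrative only.
c = np.array([1.0, 0.0])
a = np.array([0.0, 0.0])
q = np.array([1.0])
b = 0.0
B = np.eye(2)
x = np.array([1.0])          # first-stage point, so a^T y + q^T x - b = 1

cons = [{"type": "ineq",
         "fun": lambda y: a @ y + q @ x - b - np.linalg.norm(B @ y)}]
res = minimize(lambda y: c @ y, np.array([0.5, 0.1]),
               method="SLSQP", constraints=cons)
# The constraint reads ||y|| <= 1, so minimizing y_1 gives y = (-1, 0).
print(res.x, res.fun)
```

The single conic constraint here is exactly the form $a_i^T y + q_i^T x - b_i \ge \|B_i y\|$ used throughout the paper.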
In this paper we discuss the stability properties of $\mathrm{P}(x,\xi)$ when $u = (x,\xi)$ is perturbed to $\tilde{u} = (\tilde{x},\tilde{\xi})$, together with the differentiability properties of $\theta(\cdot,\cdot)$. We denote the perturbed problem by $\mathrm{P}(\tilde{x},\tilde{\xi})$. The remaining parts of this paper are organized as follows. Section 2 collects some preliminary results. In Section 3, we establish the upper semicontinuity of the solution mapping of $\mathrm{P}(\tilde{x},\tilde{\xi})$ at a point $(x, \xi)$. In Section 4, we study the upper semicontinuity of the solution mapping of the Lagrange dual of $\mathrm{P}(\tilde{x},\tilde{\xi})$ at a point $(x, \xi)$. The local Lipschitz continuity of $\theta$ and its Hadamard directional differentiability at a point $(x, \xi)$ are established in Section 5. We conclude the paper in Section 6.

2. Preliminaries.
Definition 2.1. [11] (inner limits and continuity of set-valued mappings) A set-valued mapping $S : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ is inner semicontinuous at $\bar{x}$ if $S(\bar{x}) \subset \liminf_{x \to \bar{x}} S(x)$, where $\liminf_{x \to \bar{x}} S(x)$ is the inner limit of $S$ at $\bar{x}$:
$$\liminf_{x \to \bar{x}} S(x) = \bigl\{ y : \text{for every } x^k \to \bar{x} \text{ there exist } y^k \to y \text{ with } y^k \in S(x^k) \bigr\}.$$
The mapping $S$ is outer semicontinuous at $\bar{x}$ if $\limsup_{x \to \bar{x}} S(x) \subset S(\bar{x})$, and $S$ is continuous at $\bar{x}$ if it is both inner and outer semicontinuous there.

Definition 2.2. [11] (upper limits and upper semicontinuity) The upper limit of a function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ at $\bar{x}$ is the value in $\overline{\mathbb{R}}$ defined by
$$\limsup_{x \to \bar{x}} f(x) = \lim_{\delta \downarrow 0} \; \sup_{x \in \mathbb{B}_{\delta}(\bar{x})} f(x),$$
and $f$ is upper semicontinuous at $\bar{x}$ if $\limsup_{x \to \bar{x}} f(x) \le f(\bar{x})$.
Definition 2.3. [11] (lower limits and lower semicontinuity) The lower limit of a function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ at $\bar{x}$ is the value in $\overline{\mathbb{R}}$ defined by
$$\liminf_{x \to \bar{x}} f(x) = \lim_{\delta \downarrow 0} \; \inf_{x \in \mathbb{B}_{\delta}(\bar{x})} f(x),$$
and $f$ is lower semicontinuous at $\bar{x}$ if $\liminf_{x \to \bar{x}} f(x) \ge f(\bar{x})$.
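Both one-sided limits can be approximated numerically by sampling shrinking balls around $\bar{x}$; the sketch below does this for an illustrative jump function (the function and the sampling scheme are assumptions for this example, not from the paper).

```python
import numpy as np

# f has a jump at 0: f(x) = 1 for x <= 0 and f(x) = 0 for x > 0.
f = lambda x: 1.0 if x <= 0 else 0.0

def upper_limit(f, xbar, deltas=(1e-1, 1e-2, 1e-3)):
    """Approximate limsup_{x -> xbar} f(x) via suprema over shrinking balls."""
    vals = []
    for d in deltas:
        xs = np.linspace(xbar - d, xbar + d, 10001)
        vals.append(max(f(x) for x in xs))
    return vals[-1]

def lower_limit(f, xbar, deltas=(1e-1, 1e-2, 1e-3)):
    """Approximate liminf_{x -> xbar} f(x) via infima over shrinking balls."""
    vals = []
    for d in deltas:
        xs = np.linspace(xbar - d, xbar + d, 10001)
        vals.append(min(f(x) for x in xs))
    return vals[-1]

# limsup at 0 is 1 = f(0): f is upper semicontinuous at 0;
# liminf at 0 is 0 < f(0): f is not lower semicontinuous at 0.
print(upper_limit(f, 0.0), lower_limit(f, 0.0))
```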
Definition 2.5. [11] (horizon cones) For a set $C \subset \mathbb{R}^n$, the horizon cone is the closed cone $C^{\infty} \subset \mathbb{R}^n$ representing the direction set $\operatorname{hzn} C$, this cone being given by
$$C^{\infty} = \bigl\{ x : \exists\, x^{\nu} \in C,\; \lambda^{\nu} \downarrow 0, \text{ with } \lambda^{\nu} x^{\nu} \to x \bigr\} \ \text{ when } C \ne \emptyset, \qquad C^{\infty} = \{0\} \ \text{ when } C = \emptyset.$$
[11] (horizon criterion for boundedness) A set $C \subset \mathbb{R}^n$ is bounded if and only if its horizon cone is just the zero cone: $C^{\infty} = \{0\}$. Throughout, we write $\operatorname{lev}_{\le \alpha} f(\cdot,\tilde{u}) = \{ y \in \mathbb{R}^m : f(y,\tilde{u}) \le \alpha \}$, $\alpha \in \mathbb{R}$, for the level sets of $f(\cdot,\tilde{u})$.
In the following discussions, we need Proposition 4.4 of Bonnans and Shapiro (2000) [1]. Consider the parameterized optimization problem
$$\min_{x \in X} \; f(x,u) \quad \text{subject to} \quad G(x,u) \in K,$$
where $u \in U$, the spaces $X$, $Y$ and $U$ are Banach spaces, $G : X \times U \to Y$, and $K$ is a closed convex subset of $Y$. Denote by $\Phi(u)$ the feasible set, by $\nu(u)$ the optimal value and by $S(u)$ the solution set of this problem.

Proposition 2.4. [1, Proposition 4.4] Suppose that (i) the function $f(x,u)$ is continuous on $X \times U$; (ii) the multifunction $\Phi(\cdot)$ is closed; (iii) there exist $\alpha \in \mathbb{R}$ and a compact set $C \subset X$ such that for every $u$ in a neighborhood of $u_0$, the level set $\{ x \in \Phi(u) : f(x,u) \le \alpha \}$ is nonempty and contained in $C$; (iv) for any neighborhood $V_X$ of the set $S(u_0)$ there exists a neighborhood $V_U$ of $u_0$ such that $V_X \cap \Phi(u)$ is nonempty for all $u \in V_U$. Then: (a) the optimal value function $\nu(u)$ is continuous at $u = u_0$, and (b) the multifunction $S(u)$ is upper semicontinuous at $u_0$.
For the subsequent discussions, we need the result in Theorem 7.24 of [13]. Consider the minimax problem
$$\min_{x \in X} \max_{y \in Y} f(x,y), \quad (5)$$
where $X \subset \mathbb{R}^n$ and $Y \subset \mathbb{R}^m$ are convex and compact and the function $f : X \times Y \to \mathbb{R}$ is continuous. Consider the perturbation of the minimax problem (5):
$$\min_{x \in X} \max_{y \in Y} \bigl\{ f(x,y) + t\, \eta_t(x,y) \bigr\}, \quad t \ge 0. \quad (6)$$
Denote by $\upsilon(t)$ the optimal value of problem (6). Clearly $\upsilon(0)$ is the optimal value of the unperturbed problem (5). Then the following lemma holds.
Lemma 2.6. [13, Theorem 7.24] Suppose that the following conditions hold: (i) the sets $X \subset \mathbb{R}^n$ and $Y \subset \mathbb{R}^m$ are convex and compact, (ii) for all $t \ge 0$, the function $\zeta_t := f + t \eta_t$ is continuous on $X \times Y$, convex with respect to $x \in X$ and concave with respect to $y \in Y$, and (iii) $\eta_t$ converges uniformly as $t \downarrow 0$ to a function $\gamma \in C(X \times Y)$. Then
$$\upsilon(t) = \upsilon(0) + t \min_{x \in X^*} \max_{y \in Y^*} \gamma(x,y) + o(t), \quad t \ge 0,$$
where $X^*$ and $Y^*$ respectively represent the optimal solution sets of the unperturbed problem (5).
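The expansion in Lemma 2.6 can be observed numerically on a grid. The saddle function, perturbation, and grid below are illustrative assumptions: with $f(x,y) = (x-0.3)^2 - (y-0.6)^2$ on $[0,1]^2$ we have $X^* = \{0.3\}$, $Y^* = \{0.6\}$, and for $\gamma(x,y) = x + y$ the lemma predicts $\upsilon'(0) = \gamma(0.3, 0.6) = 0.9$.

```python
import numpy as np

# Unperturbed saddle function on X = Y = [0, 1]:
#   f(x, y) = (x - 0.3)^2 - (y - 0.6)^2,  X* = {0.3}, Y* = {0.6}.
# Perturbation gamma(x, y) = x + y, so Lemma 2.6 predicts v'(0) = 0.9.
xs = np.linspace(0.0, 1.0, 1001)
ys = np.linspace(0.0, 1.0, 1001)
X, Y = np.meshgrid(xs, ys, indexing="ij")

def v(t):
    """Optimal value of the perturbed minimax problem, computed on the grid."""
    F = (X - 0.3) ** 2 - (Y - 0.6) ** 2 + t * (X + Y)
    return np.min(np.max(F, axis=1))

t = 1e-2
rate = (v(t) - v(0.0)) / t
print(rate)
```

Here the difference quotient matches the predicted derivative because both perturbed optimizers happen to fall on grid points; in general the grid resolution controls the approximation error.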
3. Upper semicontinuity of the primal solution mapping. Let $f(y,\tilde{u}) = \tilde{c}^T y$. We denote by $\Phi(\tilde{u})$ the feasible set of $\mathrm{P}(\tilde{x},\tilde{\xi})$, namely
$$\Phi(\tilde{u}) = \bigl\{ y \in \mathbb{R}^m : \tilde{a}_i^T y + \tilde{q}_i^T \tilde{x} - \tilde{b}_i \ge \|\tilde{B}_i y\|,\; i = 1,\ldots,l \bigr\},$$
and by $Y^*(\tilde{u})$ the set of optimal solutions of $\mathrm{P}(\tilde{x},\tilde{\xi})$. For a given parameter $u = (x,\xi)$, we analyze properties of the optimal value function $\theta(\cdot,\cdot)$ when $u$ is perturbed to $\tilde{u}$. For this purpose we make the following assumptions.
Assumption 3.1. The set $X \subset \mathbb{R}^n$ is a nonempty compact convex set.
Assumption 3.2. For each $x \in X$, the optimal value of $\mathrm{P}(x, \xi)$ is finite and the solution set of $\mathrm{P}(x, \xi)$ is compact.
Assumption 3.3. The Slater condition for $\mathrm{P}(x, \xi)$ holds for each $x \in X$, namely for each $x \in X$ there exists $y_x$ such that $(B_i y_x;\; a_i^T y_x + q_i^T x - b_i) \in \operatorname{int} Q_{J+1}$, $i = 1,\ldots,l$, which can be written as
$$a_i^T y_x + q_i^T x - b_i > \|B_i y_x\|, \quad i = 1, \ldots, l.$$
Assumption 3.2 is guaranteed, for instance, by the boundedness of the feasible set of the second-order cone optimization problem. Meanwhile, it is easy to see that if $B_i = 0$, $i = 1, \cdots, l$, and $(a_1, \cdots, a_l)$ has full row rank, then Assumption 3.3 is satisfied.
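Strict feasibility at a candidate point is easy to check numerically. The sketch below verifies Assumption 3.3 for an illustrative instance (all data assumed for the example) in the special case $B_i = 0$ with $(a_1, a_2)$ of full row rank, as remarked above.

```python
import numpy as np

def slater_holds(y, x, a, q, b, B, eps=1e-8):
    """Check strict feasibility a_i^T y + q_i^T x - b_i > ||B_i y|| for all i."""
    return all(a[i] @ y + q[i] @ x - b[i] - np.linalg.norm(B[i] @ y) > eps
               for i in range(len(b)))

# Toy data with l = 2, m = 2, n = 1, J = 2 (illustrative only).
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
q = [np.array([0.0]), np.array([0.0])]
b = [0.0, 0.0]
B = [np.zeros((2, 2)), np.zeros((2, 2))]
x = np.array([0.0])

# With B_i = 0 and (a_1, a_2) of full row rank, any y with a_i^T y > 0 works.
print(slater_holds(np.array([1.0, 1.0]), x, a, q, b, B))
```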
Lemma 3.1. Let Assumptions 3.1–3.3 hold. Then there exists $\delta_0 > 0$ such that the Slater condition holds for $\mathrm{P}(\tilde{x},\tilde{\xi})$ whenever $\tilde{x} \in X$ and $\|\tilde{\xi} - \xi\| \le \delta_0$.

Proof. From Assumption 3.3, for each $\bar{x} \in X$, there exist $y_{\bar{x}} \in \mathbb{R}^m$ and $\varepsilon_{\bar{x}} > 0$ such that
$$a_i^T y_{\bar{x}} + q_i^T \bar{x} - b_i - \|B_i y_{\bar{x}}\| \ge \varepsilon_{\bar{x}}, \quad i = 1, \ldots, l.$$
Since the constraint functions depend continuously on $(\tilde{x}, \tilde{\xi})$, the point $y_{\bar{x}}$ remains strictly feasible, with constant $\varepsilon_{\bar{x}}/2$, for all $(\tilde{x}, \tilde{\xi})$ in a neighborhood of $(\bar{x}, \xi)$. Due to Assumption 3.1, $X$ is compact, so a finite covering argument yields a uniform $\delta_0 > 0$.
Given $\xi$, let $\delta_0 > 0$ be the positive number in Lemma 3.1 such that the Slater condition holds for $\mathrm{P}(\tilde{x},\tilde{\xi})$ when $\tilde{x} \in X$ and $\|\tilde{\xi} - \xi\| \le \delta_0$. For $r > 0$, let us denote
$$U_r(\xi) = \bigl\{ \tilde{u} = (\tilde{x},\tilde{\xi}) : \tilde{x} \in X,\; \|\tilde{\xi} - \xi\| \le r \bigr\}.$$
Lemma 3.2. The mapping $\Phi$ is continuous at any $\hat{u} = (\hat{x}, \hat{\xi}) \in U_{\delta_0}(\xi)$.

Proof. As the inclusion $\limsup_{\tilde{u} \to \hat{u}} \Phi(\tilde{u}) \subset \Phi(\hat{u})$ is obvious, we only need to verify that $\Phi(\hat{u}) \subset \liminf_{\tilde{u} \to \hat{u}} \Phi(\tilde{u})$. For arbitrary $\hat{y} \in \Phi(\hat{u})$, we now prove $\hat{y} \in \liminf_{\tilde{u} \to \hat{u}} \Phi(\tilde{u})$. By Lemma 3.1, there exist $y$ and $\hat{\varepsilon} > 0$ such that
$$\hat{a}_i^T y + \hat{q}_i^T \hat{x} - \hat{b}_i - \|\hat{B}_i y\| \ge \hat{\varepsilon}, \quad i = 1, \ldots, l.$$
Proof. Without loss of generality, we assume that $\Psi(\tilde{u}, \alpha) \ne \emptyset$. Because $\Psi(\tilde{u}, \alpha') \subset \Psi(\tilde{u}, \alpha)$ for all $\alpha' \le \alpha$, we only need to prove $\Psi(\tilde{u}, \alpha) \subset B$. We prove the result by contradiction. Suppose that there exist a sequence $\tilde{u}^k = (\tilde{x}^k, \tilde{\xi}^k)$ with $\tilde{x}^k \in X$ and $\tilde{\xi}^k \to \xi$, and $y^k \in \Psi(\tilde{u}^k, \alpha)$ with $\|y^k\| \to \infty$ as $k \to +\infty$. Let $d_y^k = y^k / \|y^k\|$; since $X$ is compact, we can find a subsequence $k_j$ such that $\tilde{x}^{k_j} \to x^*$ and $d_y^{k_j} \to d_y$ for some $x^* \in X$ and $d_y$ with $\|d_y\| = 1$. In view of $y^{k_j} \in \Psi(\tilde{u}^{k_j}, \alpha)$, one has
$$(\tilde{c}^{k_j})^T y^{k_j} \le \alpha, \qquad (\tilde{a}_i^{k_j})^T y^{k_j} + (\tilde{q}_i^{k_j})^T \tilde{x}^{k_j} - \tilde{b}_i^{k_j} \ge \|\tilde{B}_i^{k_j} y^{k_j}\|, \quad i = 1, \ldots, l.$$
Dividing both sides of the above inequalities by $\|y^{k_j}\|$ and taking the limits as $j \to \infty$, we obtain
$$c^T d_y \le 0, \qquad a_i^T d_y \ge \|B_i d_y\|, \quad i = 1, \ldots, l,$$
so that $d_y$ is a direction in the horizon cone of the solution set of $\mathrm{P}(x^*, \xi)$. Since the solution set of $\mathrm{P}(x^*, \xi)$ is compact by Assumption 3.2, such a $d_y$ must be zero, which contradicts $\|d_y\| = 1$.
Theorem 3.4. For given $\xi$, let Assumptions 3.1–3.3 hold. For any $u \in U_{\delta_1}(\xi)$ with $\delta_1$ defined in Lemma 3.3, the optimal value function $\theta$ is continuous at $u$ and the solution set mapping $Y^*$ is upper semicontinuous at $u$, namely for any $\varepsilon > 0$ there exists a number $\delta > 0$ such that $Y^*(\tilde{u}) \subset Y^*(u) + \varepsilon \mathbb{B}$ whenever $\|\tilde{u} - u\| \le \delta$.

Proof. We only need to show that conditions (i)–(iv) of Proposition 4.4 in [1] hold. Let $G(y,\tilde{u}) := (g_1(y,\tilde{u}), \cdots, g_l(y,\tilde{u}))$ with $g_i(y,\tilde{u}) = (\tilde{B}_i y;\; \tilde{a}_i^T y + \tilde{q}_i^T \tilde{x} - \tilde{b}_i)$, and $K := \prod_{i=1}^l Q_{J+1}$. Obviously $f(y,\tilde{u})$ is continuous on $\mathbb{R}^m \times U_{\delta_1}(\xi)$, namely condition (i) holds. From Lemma 3.2, and noticing the equivalence between outer semicontinuity and closedness for set-valued mappings, $\Phi$ is a closed set-valued mapping, so condition (ii) holds. Condition (iii) comes from Lemma 3.3. From Lemma 3.1, the Slater condition holds for $\mathrm{P}(\tilde{x},\tilde{\xi})$ whenever $\|\tilde{\xi} - \xi\| \le \delta_0$ and $\tilde{x} \in X$. This implies the Robinson constraint qualification for $\Phi(u)$ at any point $y \in Y^*(u)$. It then follows from Theorem 2.87 in [1] that
$$\operatorname{dist}(y, \Phi(\tilde{u})) \le \kappa \operatorname{dist}(G(y,\tilde{u}), K) \quad \text{for } \tilde{u} \in V_U,$$
where $V_U$ is some neighborhood of $u$ and $\kappa > 0$. Since $G$ is Lipschitz continuous, condition (iv) holds. Therefore, by Proposition 4.4 of [1], the optimal value function $\theta$ is continuous at $u$ and the solution mapping $Y^*(\tilde{u})$ is upper semicontinuous at $u$, namely for any $\varepsilon > 0$ there exists a number $\delta_2 > 0$ such that $Y^*(\tilde{u}) \subset Y^*(u) + \varepsilon \mathbb{B}$ whenever $\|\tilde{u} - u\| \le \delta_2$. The proof is completed.

4. Upper semicontinuity of the dual solution mapping. First of all, we derive the Lagrange dual of $\mathrm{P}(\tilde{x},\tilde{\xi})$. Let $\tilde{\lambda} = (\tilde{\lambda}_1, \cdots, \tilde{\lambda}_l) \in \mathcal{Q} := Q_{J+1}^l$. The Lagrange function of problem $\mathrm{P}(\tilde{x},\tilde{\xi})$ is defined by
$$L(y, \tilde{\lambda}, \tilde{u}) = \tilde{c}^T y - \sum_{i=1}^l \tilde{\lambda}_i^T \bigl( \tilde{B}_i y;\; \tilde{a}_i^T y + \tilde{q}_i^T \tilde{x} - \tilde{b}_i \bigr),$$
where $\tilde{A} : \mathbb{R}^m \to \mathbb{R}^{(J+1) \times l}$ is the linear operator defined by
$$(\tilde{A} y)_i = (\tilde{B}_i y;\; \tilde{a}_i^T y), \quad i = 1, \ldots, l.$$
Then the Lagrange dual of $\mathrm{P}(\tilde{x},\tilde{\xi})$ becomes
$$\mathrm{D}(\tilde{x},\tilde{\xi}): \quad \max_{\tilde{\lambda} \in \mathcal{Q}} \; \sum_{i=1}^l \tilde{\lambda}_{i,J+1} \bigl( \tilde{b}_i - \tilde{q}_i^T \tilde{x} \bigr) \quad \text{subject to} \quad \tilde{c} - \tilde{A}^* \tilde{\lambda} = 0,$$
where $\tilde{A}^*$ is the adjoint of $\tilde{A}$ and $\tilde{A}^* \tilde{\lambda}$ is calculated by
$$\tilde{A}^* \tilde{\lambda} = \sum_{i=1}^l \bigl( \tilde{B}_i^T \tilde{\lambda}_i^{1:J} + \tilde{\lambda}_{i,J+1} \tilde{a}_i \bigr), \qquad \tilde{\lambda}_i = (\tilde{\lambda}_i^{1:J}; \tilde{\lambda}_{i,J+1}).$$
We denote the feasible set and the objective function of $\mathrm{D}(\tilde{x},\tilde{\xi})$ by
$$E(\tilde{c}, \tilde{A}, \tilde{B}) = \{ \tilde{\lambda} = (\tilde{\lambda}_1; \cdots; \tilde{\lambda}_l) \in \mathcal{Q} : \tilde{c} - \tilde{A}^* \tilde{\lambda} = 0 \} \quad (7)$$
and $\varphi(\tilde{\lambda}, \tilde{u}) = \sum_{i=1}^l \tilde{\lambda}_{i,J+1} ( \tilde{b}_i - \tilde{q}_i^T \tilde{x} )$, respectively. Let $\Lambda^*(\tilde{u})$ denote the set of optimal solutions of $\mathrm{D}(\tilde{x},\tilde{\xi})$.
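The adjoint identity $\langle \tilde{A} y, \tilde{\lambda} \rangle = \langle y, \tilde{A}^* \tilde{\lambda} \rangle$ behind this dual derivation is easy to sanity-check numerically; the dimensions and random data below are assumptions for the check only.

```python
import numpy as np

rng = np.random.default_rng(0)
l, m, J = 3, 4, 2
B = [rng.standard_normal((J, m)) for _ in range(l)]
a = [rng.standard_normal(m) for _ in range(l)]

def A_op(y):
    """(A y)_i stacks (B_i y; a_i^T y) in R^{J+1} for each i."""
    return np.concatenate([np.append(B[i] @ y, a[i] @ y) for i in range(l)])

def A_adj(lam):
    """A* lam = sum_i (B_i^T lam_i^{1:J} + lam_{i,J+1} a_i)."""
    lam = lam.reshape(l, J + 1)
    return sum(B[i].T @ lam[i, :J] + lam[i, J] * a[i] for i in range(l))

y = rng.standard_normal(m)
lam = rng.standard_normal(l * (J + 1))
# The two inner products agree up to rounding, confirming the adjoint formula.
print(np.dot(A_op(y), lam), np.dot(y, A_adj(lam)))
```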
Moreover, since the Slater condition of Lemma 3.1 is satisfied, the dual problem $\mathrm{D}(\tilde{x},\tilde{\xi})$ has a nonempty compact solution set and the duality gap between $\mathrm{P}(\tilde{x},\tilde{\xi})$ and $\mathrm{D}(\tilde{x},\tilde{\xi})$ is zero.

Proof. By Theorem 5.81 in [1] and Assumption 3.2, the Slater condition for the dual problem of $\mathrm{P}(x, \xi)$ holds, namely there exists a $\lambda$ such that
$$c - A^* \lambda = 0, \quad \lambda \in \operatorname{int} \mathcal{Q}. \quad (8)$$
Firstly we prove that the operator $A^*$ is onto. In fact, suppose that there exists $d_y \in \mathbb{R}^m$ such that $A d_y = 0$, which implies that $B_i d_y = 0$ and $a_i^T d_y = 0$, $i = 1, \ldots, l$. From (8) we obtain that $c^T d_y = 0$. Therefore $d_y \in Y^*(u)^{\infty}$, and this implies $d_y = 0$, because otherwise $Y^*(u)$ would be unbounded, a contradiction with Assumption 3.2. Thus we have $\ker A = \{0\}$ and the operator $A^*$ is onto. Define $M = [((B_1)^T, a_1), \ldots, ((B_l)^T, a_l)]$; in view of the expression of $A^*$, the matrix $M$ has full row rank.
We first prove by contradiction that every $\tilde{\lambda} \in \Gamma(\tilde{u}, \alpha)$ is bounded. Suppose that there exist a sequence $\tilde{u}^k = (\tilde{x}^k, \tilde{\xi}^k)$ with $\tilde{x}^k \in X$ and $\tilde{\xi}^k \to \xi$, and $\tilde{\lambda}^k \in \Gamma(\tilde{u}^k, \alpha)$ with $\|\tilde{\lambda}^k\| \to \infty$. Let $d_{\lambda}^k = \tilde{\lambda}^k / \|\tilde{\lambda}^k\|$; since $X$ is compact, we can find a subsequence $k_j$ such that $\tilde{x}^{k_j} \to x^*$ for some $x^* \in X$ and $d_{\lambda}^{k_j} \to d_{\lambda}$ with $\|d_{\lambda}\| = 1$. In view of $\tilde{\lambda}^{k_j} \in \Gamma(\tilde{u}^{k_j}, \alpha)$, we have that
$$\tilde{c}^{k_j} - (\tilde{A}^{k_j})^* \tilde{\lambda}^{k_j} = 0, \qquad \varphi(\tilde{\lambda}^{k_j}, \tilde{u}^{k_j}) \ge \alpha.$$
Dividing the above relations by $\|\tilde{\lambda}^{k_j}\|$ and taking the limits as $j \to \infty$, we obtain
$$A^* d_{\lambda} = 0, \qquad \sum_{i=1}^l (d_{\lambda})_{i,J+1} \bigl( b_i - q_i^T x^* \bigr) \ge 0, \qquad d_{\lambda} \in \mathcal{Q},$$
so that $d_{\lambda}$ is a nonzero direction in the horizon cone of the optimal solution set of the dual problem, which contradicts the compactness of the optimal solution set of $\mathrm{D}(\tilde{x},\tilde{\xi})$. The latter is implied by the Slater condition for the problem $\mathrm{P}(\tilde{x},\tilde{\xi})$ proved in Lemma 3.1.
Proof. The results of this theorem follow from Lemma 4.2 and Lemma 4.3; the argument is similar to the proof of Theorem 3.4, so we omit it here.
Without loss of generality, we assume that $\theta(\tilde{u}) \le \theta(u')$. Since $Q$ is given as one part of $\xi$, we choose $\delta_5 \le \min\{\delta_4, \|Q\|\}$ and let $\|\tilde{u} - u'\| \le \delta_5$. Due to the boundedness of $B$, $\mathcal{Q} \cap D$ and $X$, there exist three constants $D_y, D_{\lambda}, D_x > 0$ such that $\|y\| \le D_y$, $\|\lambda\| \le D_{\lambda}$ and $\|x\| \le D_x$ for any $y \in B$, $\lambda \in \mathcal{Q} \cap D$ and $x \in X$. Furthermore, since $x'$ and $Q'$ lie in neighborhoods of $x$ and $Q$ respectively, we obtain $\|x'\| \le D_x + 1$ and $\|Q'\| \le \|Q\| + 1$. Thus the above difference is bounded by $\kappa \|\tilde{u} - u'\|$, where $\kappa = \max\{D_y, D_{\lambda}, D_x + 1, \|Q\| + 1\}$. Combining this inequality with (19), we obtain the inequality (18) when $\tilde{u}, u' \in B_{\delta_5}(u, x)$.
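The Lipschitz continuity and one-sided differentiability of the optimal value can be observed numerically on a one-cone instance where the optimal value is known in closed form: for the problem $\min\, y_1$ subject to $\|y\| \le r$ one has $\theta(r) = -r$, so difference quotients of $\theta$ stay bounded and converge to $-1$. The instance and solver choice (SLSQP) are assumptions for this illustration.

```python
import numpy as np
from scipy.optimize import minimize

def theta(r):
    """Optimal value of min y_1 s.t. ||y|| <= r (a one-cone instance; theta(r) = -r)."""
    cons = [{"type": "ineq", "fun": lambda y: r - np.linalg.norm(y)}]
    res = minimize(lambda y: y[0], np.array([0.5 * r, 0.1 * r]),
                   method="SLSQP", constraints=cons)
    return res.fun

# Difference quotients of the optimal value approach the directional derivative -1.
for t in (1e-1, 1e-2):
    print((theta(1.0 + t) - theta(1.0)) / t)
```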
From its definition, $L(y, \lambda, u_t)$ is continuous, convex with respect to $y \in B$ and concave with respect to $\lambda \in \mathcal{Q} \cap D$. For the convex and compact set of saddle points $Y^*(u) \times \Lambda^*(u)$, the directional derivative of $\theta$ at $u$ in the direction $\Delta u$ can be derived from Lemma 2.6 as follows:
$$\theta'(u; \Delta u) = \lim_{t \downarrow 0} \frac{\theta(u + t \Delta u) - \theta(u)}{t} = \min_{y \in Y^*(u)} \max_{\lambda \in \Lambda^*(u)} L'_u(y, \lambda, u)(\Delta u).$$
Combining this with the Lipschitz continuity of $\theta(\tilde{u})$ from Proposition 5, we have that $\theta(\tilde{u})$ is Hadamard directionally differentiable at $u$. Therefore the first-order expansion of $\theta(\tilde{u})$ at $u$ is obtained:
$$\theta(u + \Delta u) = \theta(u) + \theta'(u; \Delta u) + o(\|\Delta u\|).$$
The proof is completed.

6. Conclusions. We consider the stability of a second-order conic optimization problem when all parameters in the problem are perturbed. Under the Slater constraint qualification, we prove the upper semicontinuity of the solution sets of both the original problem and its dual. Furthermore, we show that the optimal value function is locally Lipschitz continuous and Hadamard directionally differentiable. Interestingly, when the optimal value function is expressed as a min-max optimization problem over two compact convex sets, the asymptotic distribution of the optimal value function can be discussed via Theorem 7.59 of [13] when $\xi$ is a random variable and the sample average approximation approach is adopted.