ROBUST SENSITIVITY ANALYSIS FOR LINEAR PROGRAMMING WITH ELLIPSOIDAL PERTURBATION

Abstract. Sensitivity analysis is applied to the robust linear programming problem in this paper. The coefficients of the linear program are assumed to be perturbed within ellipsoidal sets in three perturbation manners. Our robust sensitivity analysis computes the maximal radii of the perturbation sets for which certain properties of the robust feasible set are preserved. Mathematical models are formulated for the robust sensitivity analysis problems, and all models are reformulated either into linear programs or into convex quadratic programs, except for the bi-convex programs arising when more than one row of the constraint matrix is perturbed. For these bi-convex programs, we develop a binary search algorithm.


1. Introduction. Generally, the robust counterpart of linear programming is formulated as follows:

    min c^T x
    s.t. Ax ≤ b, ∀(A, b) ∈ U(l),      (1)
         x ≥ 0,

where x ∈ R^n is the decision variable, c ∈ R^n denotes the objective vector, (A, b) ∈ R^{m×n} × R^m denotes the coefficients perturbed in a given ellipsoidal uncertainty set U(l), and l denotes the ellipsoidal radius (or radii) of the uncertainty set, depending on the perturbation manner considered later. The reason why we emphasize l in U(l) is that we intend to let this parameter l, which is fixed in the concept of robust optimization, become a variable in our sensitivity analysis. Denote the feasible set of (1) as

    F(l) = {x ∈ R^n_+ | Ax ≤ b, ∀(A, b) ∈ U(l)}.

Without loss of generality, we do not consider uncertainty of the objective function in (1), since we can equivalently reformulate the problem by introducing a new variable t, replacing the objective with min t and adding a new constraint c^T x ≤ t. Nor do we consider simultaneous perturbation of the matrix A and the right-hand-side vector b, because it is equivalent to the following reformulation:

    min c^T x
    s.t. Ax + by ≤ 0, ∀(A, b) ∈ U(l),
         y = −1,
         x ≥ 0,

where the perturbation only occurs on the left-hand side of the constraints.
In this paper, our sensitivity analysis for the robust linear programming problem, also called robust sensitivity analysis for linear programming (RSALP), is to find the maximal perturbation radius (or radii) l such that certain properties of the robust feasible set F(l̄) of (1), where l̄ is a given fixed nonnegative parameter, are kept in F(l) when l ≥ l̄. In this paper, we assume that (1) is feasible when the uncertainty set is U(l̄). We use a deterministic polytope S = {x ∈ R^n | Ex ≤ f}, with given E ∈ R^{w×n} and f ∈ R^w, to represent the properties of x that we expect to be maintained in F(l); for example, some entries of x must be zero. We require S ∩ F(l) ≠ ∅, which means there exists at least one robust feasible solution in F(l) satisfying these properties. Then the model is formulated as follows:

    max l
    s.t. x ∈ F(l),      (2)
         x ∈ S,
         l ≥ l̄,

where x and l are decision variables and l ≥ l̄ is defined component-wise. Since S ∩ F(l̄) ≠ ∅ is assumed, the model (2) must be feasible. We use the activity-analysis problem [9] to illustrate the motivation of our model. The nonnegative production level x ∈ R^n_+ consumes resources Ax ∈ R^m subject to the resource supply b ∈ R^m. For a given current activity level x*, x*_i > 0 means that the i-th item is indeed produced. We set P = {j | x*_j = 0} as the set of indices corresponding to the current zero entries. In real applications, the class of items produced should be maintained over a certain time period, because it is difficult to react in time owing to new setup costs.
For example, consider the resource b fluctuating over a time period with the market. Based on the perturbation data at hand, we find the uncertainty set has an ellipsoidal form, and (1) is then formulated with the uncertain data in an ellipsoidal set for a fixed radius l̄. An activity level x* was scheduled before the time period. Ideally, the manager expects the activity level not to change during the period; the property set is then S = {x*}. Our model gives the maximal radius l* (l* ≥ l̄) of the perturbation of b which keeps x* feasible. If l* > l̄, then x* can be kept when b is perturbed during this period in the larger uncertainty set U(l*). Alternatively, the manager can select the property set S = {x | x_j = 0, j ∈ P} in our model, expecting the items not produced before to remain unproduced during this period.
The uncertainty set U(l) is considered in the following three ellipsoidal manners: b is perturbed holistically, and A is perturbed row-wisely or holistically, leading to the following three forms.
Case 1. b-perturbation. For any l ∈ R_+,

    U(l) = {(A_0, b) | b = b_0 + Σ_{j=1}^t β_j v_j, ||β|| ≤ l}.      (3)

Case 2. A-row-wise perturbation. For any l = (l_1, l_2, ..., l_m)^T ∈ R^m_+,

    U(l) = {(A, b_0) | A^i = A_0^i + Σ_{j=1}^{n_i} α_j^i (u_j^i)^T, ||α^i|| ≤ l_i, i = 1, ..., m}.      (4)

Case 3. A-holistic perturbation. For any l ∈ R_+,

    U(l) = {(A, b_0) | A = A_0 + Σ_{j=1}^s γ_j A_j, ||γ|| ≤ l}.      (5)

In the above three cases, β = (β_1, ..., β_t)^T ∈ R^t, α^i = (α_1^i, ..., α_{n_i}^i)^T ∈ R^{n_i}, i = 1, ..., m, and γ = (γ_1, ..., γ_s)^T ∈ R^s are perturbation parameters; v_j ∈ R^m, j = 1, ..., t, and u_j^i ∈ R^n, j = 1, ..., n_i, i = 1, ..., m, are given perturbation vectors; and A_j ∈ R^{m×n}, j = 1, ..., s, are given perturbation matrices. A_0^i is the i-th row of A_0, A^i is the i-th row of A, and ||·|| refers to the 2-norm ||·||_2; causing no confusion, we omit the subscript 2. All vectors mentioned in this paper are column vectors. All the perturbation sets considered are ellipsoids controlled by a nonnegative scalar, or a nonnegative m-dimensional vector, l, called the "perturbation radius (radii)"; we thus perform sensitivity analysis on the uncertain data through its radius (radii). When the uncertainty set is of polytope form, similar conclusions can be obtained by the methods provided in this paper. The model (2) is a multi-objective optimization problem in the A-row-wise perturbation case and a single-objective one in the other two cases.
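In the b-perturbation case, the worst case of the i-th constraint over the ellipsoid ||β|| ≤ l is attained at β = −l v_{·i}/||v_{·i}||, where v_{·i} collects the i-th entries of v_1, ..., v_t; this is the Cauchy–Schwarz inequality. A small numpy check on randomly generated toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
t, l = 4, 2.0
v_i = rng.normal(size=t)   # the i-th entries of v_1, ..., v_t (toy data)

# By Cauchy-Schwarz, the minimum of v_i . beta over ||beta|| <= l is
# -l * ||v_i||, attained at beta = -l * v_i / ||v_i||.
worst = -l * np.linalg.norm(v_i)

for _ in range(1000):
    beta = rng.normal(size=t)
    beta *= l / max(np.linalg.norm(beta), 1e-12)  # scale onto the ball of radius l
    assert v_i @ beta >= worst - 1e-9             # never below the analytic worst case
```

Random points in the ball never beat the analytic worst case, which is what makes the semi-infinite constraint reducible to a single deterministic inequality per row.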
From the viewpoint of computability, the set S is a polytope in the variable x, in line with linear programming. The simplest pattern is S = {x*}, a singleton, where x* ∈ F(l̄) is a pre-decision. In this situation, E = (I, −I)^T and f = ((x*)^T, −(x*)^T)^T, where I ∈ R^{n×n} is the identity matrix. Then (2) yields the maximal radius (radii) for which x* remains in F(l). In particular, if x* is optimal to (1) with l = l̄, then it remains not only feasible but also optimal; see details in Section 3. More generally, S can be a polytope containing more than one point; for example, to keep some x-variables at zero, S is described by {x ∈ R^n | x_j = 0, j ∈ P}, where P is a given index set.
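The singleton encoding E = (I, −I)^T, f = ((x*)^T, −(x*)^T)^T can be written out directly; a short numpy sketch (x* is toy data):

```python
import numpy as np

x_star = np.array([0.05, 0.4, 0.0])   # a pre-decision (toy data)
n = x_star.size
I = np.eye(n)

# S = {x | Ex <= f} with E = (I, -I)^T, f = (x*^T, -x*^T)^T encodes {x*}:
# x <= x* together with -x <= -x* forces x = x*.
E = np.vstack([I, -I])
f = np.concatenate([x_star, -x_star])

assert np.all(E @ x_star <= f + 1e-12)   # x* itself satisfies Ex <= f
y = x_star + np.array([0.01, 0.0, 0.0])
assert not np.all(E @ y <= f + 1e-12)    # any other point violates it
```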
The concept of robust sensitivity analysis comes from evaluating whether a pre-decision retains optimality in an uncertain system. For this pre-decision, or for some of its properties, we want to know the maximal perturbation set over which the pre-decision or its properties remain. For example, in the case of b-perturbation, if x* is an optimal solution of (1) with l = l̄ ≥ 0, can this solution still be optimal for some l* > l̄? Of course, it is best if x* is still optimal under b-perturbation with radius l*, since then we need not change the decision at all.
In the classical sensitivity analysis for linear programming, the ranges of the objective coefficients and the ranges of the right-hand-side vector are analyzed to keep the optimal solution or the optimal basis unchanged, respectively, which can be easily found in books about linear programming. In both situations, the ranges of the coefficients are given in a box form.
Robust optimization, which came from the robust control community (refer to [7] and [16]), dates back to the work in [14] and has developed rapidly in the past four decades; see, e.g., [6], [2] and the references therein. The construction of uncertainty sets, the performance assessment of robust optimal solutions and the computational complexity of the models are three main topics in robust optimization modeling. With poorly chosen uncertainty sets, robust models may be computationally intractable, or their robust solutions may be too conservative, so how to choose a good uncertainty set is crucial. By the basic concept of robust optimization, the uncertainty set should contain all the observed data. The simplest choice is an interval given by the minimum and the maximum of the data for each individual parameter; uncertainty sets like ellipsoids or polytopes are then used to describe the data. For interval sets, Bertsimas and Sim [5] found that the traditional robust optimization model is rather conservative, because it is sensitive to the uncertain data and sometimes even has no feasible solution. They gave a modified model with a probabilistic guarantee of feasibility for its robust optimal solution by counting partial uncertain data of the interval form. With the same concerns about feasibility, Bertsimas and Brown [3] constructed convex uncertainty sets like polytopes and ellipsoids by using distortion measures to evaluate feasibility. All models in [3] and [5] preserve computational tractability and are polynomially solvable. But a robust optimal solution of the models in [3] and [5] may be infeasible with some probability for the original robust problems, which does not quite satisfy the concept of robust optimization. Motivated by the growing availability of data, Bertsimas et al. [4] proposed a scheme to design uncertainty sets by using statistical hypothesis tests.
By utilizing asymptotic confidence levels, Ben-Tal et al. [1] discussed the construction of uncertainty sets as confidence sets based on φ-divergences when the uncertain parameters contain elements of a probability vector. Furthermore, for intractable robust counterparts, approximation methods have been considered. Marandi et al. [12] derived tractable inner and outer approximations for quadratic constraints with convex uncertainty, whose robust counterpart is generally intractable.
Xu and Burer [15] presented copositive programs to state the best-case and worst-case optimal values when the parameters in the objective function and the right-hand side are perturbed. For fixed parameters, they obtained the optimal solution and optimal value, which are not robustly optimal for the perturbed problem, although they called their work "robust sensitivity analysis of the optimal value of linear programming". They developed tight, tractable conic relaxations to provide lower and upper bounds.
Our main contributions are as follows. First, to the best of our knowledge, this concept of robust sensitivity analysis is brought up for the first time and can be applied to evaluate the robustness of a pre-decision. Second, we establish a unified model and give analyses for three different perturbation manners. The radius (radii) in robust optimization is generally assumed to be fixed, whereas it becomes a decision variable in our setting. Third, the programs in this paper are either linear or bi-convex. The bi-convex program with only one perturbed row is equivalently reformulated into a convex quadratic programming problem. However, for the bi-convex programs where more than one row is perturbed, we have not found equivalent polynomial-time solvable reformulations and thus provide a binary search algorithm, which further triggers our interest in bi-convex optimization.
The rest of the paper is organized as follows. In Section 2, we analyze the model (2) under the three different perturbation manners. In Section 3, we discuss the selection of the pre-decision. In Section 4, numerical experiments are shown for our algorithm. In addition, a note and the conclusions are provided in Section 5 and Section 6, respectively.
2. The maximum radii. This section gives equivalent reformulations of (2) to obtain the maximal radius (radii) in the three perturbation cases. To solve (2), the key points are how to describe the semi-infinite constraint Ax ≤ b, ∀(A, b) ∈ U(l), and how to find equivalent tractable models. In this section, x* is supposed to be a given pre-decision feasible for (1) with l = l̄.
2.1. b-perturbation. In the b-perturbation case (3), for any l ≥ l̄ with given l̄ ∈ R_+, x ∈ F(l) if and only if

    A_0^i x + l ||v_{·i}|| ≤ b_0^i, i = 1, ..., m,

where v_{·i} ∈ R^t consists of the i-th entries of v_j, j = 1, ..., t. Then (2) becomes the following linear programming problem:

    max l
    s.t. A_0^i x + l ||v_{·i}|| ≤ b_0^i, i = 1, ..., m,      (6)
         Ex ≤ f,
         x ≥ 0, l ≥ l̄.

In particular, when S = {x*}, we can write down the maximal radius of (6) directly in the following theorem.
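Since the b-perturbation reformulation is a plain LP in (x, l), it can be handed to any LP solver. A minimal sketch with scipy on a toy instance (A_0, b_0, v_1, v_2 all assumed for illustration; S is taken as the nonnegative orthant, i.e., no extra rows Ex ≤ f):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 2 constraints, 2 variables, 2 perturbation vectors.
A0 = np.array([[1.0, 2.0],
               [3.0, 1.0]])
b0 = np.array([4.0, 6.0])
V = np.column_stack([[0.2, -0.1], [0.1, 0.1]])  # perturbation vectors v_1, v_2 as columns
w = np.linalg.norm(V, axis=1)                   # w_i = ||v_{.i}||, the row norms of V
l_bar = 0.0

# Variables z = (x_1, x_2, l); maximizing l means minimizing -l.
c = np.array([0.0, 0.0, -1.0])
A_ub = np.hstack([A0, w.reshape(-1, 1)])        # A0 x + l w <= b0
res = linprog(c, A_ub=A_ub, b_ub=b0,
              bounds=[(0, None), (0, None), (l_bar, None)])
l_star = res.x[2]
```

For this instance the optimum sits at x = 0, so l_star equals the smallest ratio b_0^i / ||v_{·i}||.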
Theorem 2.1. Denote by l* the maximal radius of (6) with S = {x*}, and let I = {i | ||v_{·i}|| > 0}. Then

    l* = min_{i ∈ I} (b_0^i − A_0^i x*)/||v_{·i}||

if I ≠ ∅, and l* = +∞ otherwise.

Remark 1. From the above theorem, we get l* − l̄ = 0 if there exists i ∈ I such that A_0^i x* + l̄ ||v_{·i}|| = b_0^i. In this case, no larger perturbation is allowed. This is the general case for a robustly optimal solution x* of the robust linear programming problem (1). How can the system (1) be reconstructed to get a more robust x*? We discuss this issue in Section 3.
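For S = {x*}, the maximal radius is the smallest slack-to-norm ratio over the rows with ||v_{·i}|| > 0: the worst-case feasibility condition A_0^i x* + l ||v_{·i}|| ≤ b_0^i is linear in l row by row. A numpy sanity check on toy data (A_0, b_0, v_j, x* all assumed for illustration):

```python
import numpy as np

A0 = np.array([[1.0, 2.0],
               [3.0, 1.0]])
b0 = np.array([4.0, 6.0])
V = np.column_stack([[0.2, -0.1], [0.1, 0.1]])  # v_1, v_2 as columns (toy data)
w = np.linalg.norm(V, axis=1)                   # ||v_{.i}|| for each row i
x_star = np.array([0.5, 0.5])                   # a feasible pre-decision: A0 x* <= b0

slack = b0 - A0 @ x_star
assert np.all(slack >= 0) and np.all(w > 0)
l_star = np.min(slack / w)                      # smallest slack-to-norm ratio

# l_star is exactly the largest l with A0 x* + l w <= b0:
assert np.all(A0 @ x_star + l_star * w <= b0 + 1e-12)
assert not np.all(A0 @ x_star + (l_star + 1e-3) * w <= b0)
```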

2.2. A-row-wise perturbation. By (4), for any l ≥ l̄ with fixed l̄ ∈ R^m_+, x ∈ F(l) if and only if

    A_0^i x + l_i ||H_i x|| ≤ b_0^i, i = 1, ..., m,

where H_i = (u_1^i, ..., u_{n_i}^i)^T ∈ R^{n_i×n}. Then (2) becomes

    max l
    s.t. A_0^i x + l_i ||H_i x|| ≤ b_0^i, i = 1, ..., m,      (7)
         Ex ≤ f,
         x ≥ 0, l ≥ l̄.

Obviously, (7) is no longer easy because of the cross terms l_i ||H_i x||; (7) is a linearly constrained multi-objective programming problem. We first consider the singleton case S = {x*}. Since l_i, i = 1, ..., m, are independent, maximizing l is equivalent to maximizing each l_i individually. Then we can divide (7) into m subproblems and treat the following subproblem for each constraint i:

    max {l_i | A_0^i x* + l_i ||H_i x*|| ≤ b_0^i}.      (8)

Theorem 2.2. l* = (l_1*, ..., l_m*)^T, where l_i* is the optimal value of (8), is an absolute optimal solution to the multi-objective programming problem (7), i.e., for any feasible solution l, we have l* ≥ l.
Next we consider (7) for the general polytope case S = {x ∈ R^n | Ex ≤ f}, where E ∈ R^{w×n} and f ∈ R^w are given. We start with a special situation where only one row is perturbed, which will be proved polynomially computable. Assuming only the k-th row is perturbed, (7) takes the form

    max l_k
    s.t. A_0^k x + l_k ||H_k x|| ≤ b_0^k,
         A_0^i x ≤ b_0^i, i ≠ k,      (9)
         Ex ≤ f,
         x ≥ 0, l_k ≥ l̄_k.

Remark 2. The feasible region of (9) is nonempty if and only if (9) is feasible with l_k = l̄_k.
Remark 3. For convenience, we do not consider the constraint l_k ≥ l̄_k in the subsequent analysis of (9). If the optimal value l_k* obtained without the constraint l_k ≥ l̄_k satisfies l_k* ≥ l̄_k, then (9) is feasible and its optimal value equals the one with the constraint retained; otherwise (9) is infeasible.

Proposition 1. The optimal value of (9) is infinite if the following linear system is feasible:

    {H_k x = 0, A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+}.      (10)

Proof. If the linear system is feasible, then there exists an x̄ such that A_0 x̄ ≤ b_0, Ex̄ ≤ f, x̄ ∈ R^n_+ and H_k x̄ = 0. Thus (x̄, l) is feasible for (9) for any l ≥ 0, leading to an infinite optimal value of (9).
If the set {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+} is bounded, then the converse of Proposition 1 also holds.
Proof. If the optimal value of (9) is infinite, let l^(p), p = 1, 2, ..., be a positive sequence with l^(p) → +∞. Then for each l^(p) there exists x^(p) such that (x^(p), l^(p)) is feasible for (9); in particular, l^(p) ||H_k x^(p)|| ≤ b_0^k − A_0^k x^(p), so ||H_k x^(p)|| → 0 by the boundedness of the right-hand side. Because of the boundedness of x^(p), p = 1, 2, ..., we can select a convergent subsequence, still denoted by x^(p) for convenience, with H_k x^(p) → 0 and x^(p) → x_0. It is obvious that x_0 is feasible for the linear system (10).
If the set {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+} is unbounded, then the converse of Proposition 1 does not necessarily hold. Here we give an example.
Problem (9) is a bi-convex program [10]: it is a second-order cone program when l_k is fixed and a linear program when x is fixed. In general, a bi-convex optimization problem is intractable. But (9) in particular is a fractional programming problem and can be equivalently transformed into a convex quadratic program; see the following theorem. Fractional programming problems were studied in [13], and the idea of the proof below can be found therein.
Theorem 2.3. (1) The optimal value of (9) is infinite (i.e., the perturbation can be arbitrarily large) if the linear system (10) is feasible. (2) The optimal value is 0 (i.e., the problem cannot be perturbed at all) if and only if the linear system (10) is infeasible and A_0^k x = b_0^k for every x ∈ {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+}. (3) If the linear system (10) is infeasible and there exists a feasible x with b_0^k − A_0^k x > 0, then the following convex quadratic programming problem gives the square of the reciprocal of the optimal value of (9):

    min ||H_k z||^2
    s.t. b_0^k t − A_0^k z = 1,
         A_0 z ≤ b_0 t,      (11)
         Ez ≤ f t,
         z ∈ R^n_+, t ≥ 0,

where z and t are variables.
Proof. The first two claims are obvious. If the linear system (10) is infeasible, then ||H_k x|| > 0 for any x ∈ {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+}; in particular, ||H_k x|| > 0 holds for any feasible solution x of (9), since {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+} contains the feasible domain of (9). Therefore the following fractional program is well defined and is equivalent to (9):

    max (b_0^k − A_0^k x)/||H_k x||
    s.t. A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+.
Reverse the fractional objective as below:

    min ||H_k x||/y
    s.t. b_0^k − A_0^k x = y,      (12)
         A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+,

where we define α/0 = +∞ for α > 0. Next we show that the square of the optimal value of (12) equals the optimal value of (11). Obviously, both problems are feasible. For any feasible solution (z, t) of (11), if t = 0, then (x, y) = (x_0 + dz, y_0 + d) is feasible for (12) for any d ≥ 0, where (x_0, y_0) is a feasible solution to (12).
This means we can always find a feasible solution of (12) with objective value approaching ||H_k z|| as closely as desired. If t > 0, then (x, y) = (z/t, 1/t) is well defined and feasible for (12) with ||H_k x||/y = ||H_k z||. Hence the optimal value of (12) is not larger than the positive square root of the optimal value of (11).
Conversely, for any feasible solution (x, y) of (12) with y ≠ 0, (z, t) = (x/y, 1/y) is well defined and feasible for (11) with ||H_k z|| = ||H_k x||/y. When y = 0, ||H_k x||/y is infinite, and (z, t) = (z_0 + dx, t_0 + d) is feasible for (11) for any d ≥ 0, where (z_0, t_0) is a feasible solution to (11); moreover, ||H_k z|| = ||H_k (z_0 + dx)|| ≥ | d ||H_k x|| − ||H_k z_0|| | → +∞ as d → +∞. It follows that the positive square root of the optimal value of (11) is not larger than the optimal value of (12).
Therefore the square of the optimal value of (12) is equal to the optimal value of (11).
We have not found an equivalent polynomial-time solvable reformulation when S is a general polytope and more than one row of A is perturbed, but we can design an algorithm to solve this case. In general, a multi-objective program does not necessarily admit an absolute optimal solution; refer to [11]. We illustrate with the example below that problem (7) may fail to have an absolute optimal solution.
Example 2. An instance of (7) is formulated as follows: As (7) is a multi-objective optimization problem, we tried techniques that transfer a multi-objective function into a single-objective one, obtaining an optimization problem with only one radius variable remaining, but no polynomially computable result was obtained. Two techniques were tried: one lets l = l_k g, where g = (g_1, g_2, ..., g_m)^T ∈ R^m_+ is a given vector with g_k = 1, and takes the objective max l_k; the other takes the objective max min_{i=1,...,m} l_i.
For the case l = l_k g with given g ∈ R^m_+ and variable l_k ∈ R_+, we consider the following bi-convex programming problem:

    max l_k
    s.t. A_0^i x + l_k g_i ||H_i x|| ≤ b_0^i, i = 1, ..., m,      (14)
         Ex ≤ f,
         x ≥ 0.

Remark 6. In the above model, we do not add the constraint l = l_k g ≥ l̄. By the same arguments as in Remark 3, we can check it after computing (14).
We have the following proposition which is a generalized version of Proposition 1.
Proposition 2. The optimal value of (14) is infinite if the following linear system is feasible:

    {H_i x = 0, i ∈ W, A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+}.      (15)

Here W = {i | g_i > 0}. Similarly, if the set {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+} is bounded, then the converse also holds. The proof is almost the same as before and is therefore omitted.

Proposition 3. The optimal value of (14) is 0 if and only if for any x ∈ {A_0 x ≤ b_0, Ex ≤ f, x ∈ R^n_+} there exists an index i with A_0^i x = b_0^i and g_i ||H_i x|| > 0.
If the condition above does not hold, then the optimal value is strictly larger than 0, and we can provide a positive lower bound for it.

Remark 7. If the optimal value of (14) is strictly larger than 0, then there exists a feasible point x̄ of (14) with A_0^i x̄ < b_0^i for every i such that g_i ||H_i x̄|| > 0. Let Supp = {i | ||H_i x̄|| > 0}. If Supp ∩ W ≠ ∅, set

    l_b = min_{i ∈ Supp ∩ W} (b_0^i − A_0^i x̄)/(g_i ||H_i x̄||),      (16)

and obviously l_b > 0 is feasible for (14). If Supp ∩ W = ∅, then (x̄, l) is feasible for (14) for any l > 0. Therefore, l_b provides a positive lower bound.
Obviously, the feasible region of l_k in (14) is connected, and 0 is a feasible value. Notice that once we obtain such a point x̄, we can derive a positive lower bound l_b for the optimal value. For convenience of discussion and computation, we assume there exists x̄ such that A_0 x̄ < b_0, Ex̄ ≤ f and x̄ ∈ R^n_+, which ensures the maximal perturbation radius is strictly positive. We are now in a position to design an algorithm as follows:

Algorithm:
Step 1: If the linear system (15) is feasible, then the maximal radius is l_k = +∞; go to Step 4. Otherwise go to Step 2.
Step 2: Obtain a point x̄ ∈ R^n_+ with A_0 x̄ < b_0 and Ex̄ ≤ f, and a positive lower bound l_b for the maximal radius given by (16). Let p = 0 and q = 1/l_b.
Step 3: While q − p > ε, set r = (p + q)/2 and test, by solving a second-order cone feasibility problem, whether (14) is feasible with l_k = 1/r; if it is, set q = r, otherwise set p = r.
Step 4: Output the maximal radius l_k = 1/q.
To obtain such a point x̄, we just solve (14) with l_k = 0 and a constant-zero objective. Since the built-in linear programming algorithm in MATLAB is an interior-point algorithm and the objective is the constant 0, it returns the analytic center of the feasible region, which is a relative interior point.
Here ε is a given precision. The number of iterations is at most log_2(1/(l_b ε)), and each iteration costs polynomial time to solve a second-order cone programming problem.
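The bisection above can be sketched in a few lines. In this minimal Python sketch, a placeholder oracle with a known threshold stands in for the second-order cone feasibility check, and for simplicity we bisect on l directly rather than on its reciprocal; the threshold value and the lower bound are toy data.

```python
def binary_search_radius(feasible, l_lb, eps=1e-6):
    """Bisection for the maximal radius when the feasible radii form an
    interval [0, l*]. `feasible(l)` is a placeholder oracle standing in
    for the SOCP feasibility check of (14); l_lb is a known positive
    feasible radius (the lower bound l_b)."""
    hi = max(l_lb, 1.0)
    while feasible(hi):          # grow an infeasible upper bound by doubling
        hi *= 2.0
    lo = l_lb                    # feasible by assumption
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy oracle with a known threshold standing in for the SOCP solver.
l_true = 3.2106
l_star = binary_search_radius(lambda l: l <= l_true, l_lb=0.5)
assert abs(l_star - l_true) < 1e-5
```

The doubling phase is only needed when no finite upper bound is known in advance; with p = 0 and q = 1/l_b as in Step 2, the reciprocal interval is already bounded.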
Under the second technique, we get

    max l_0
    s.t. A_0^i x + l_0 ||H_i x|| ≤ b_0^i, i = 1, ..., m,      (17)
         Ex ≤ f,
         x ≥ 0.

By Remark 8 stated below, we can design almost the same algorithm to solve (17).

2.3. A-holistic perturbation. In this case, by (5), x ∈ F(l) if and only if

    A_0^i x + l ||(A_1^i x, ..., A_s^i x)^T|| ≤ b_0^i, i = 1, ..., m, x ≥ 0,

and the model corresponding to (2) becomes:

    max l
    s.t. A_0^i x + l ||(A_1^i x, ..., A_s^i x)^T|| ≤ b_0^i, i = 1, ..., m,      (18)
         Ex ≤ f,
         x ≥ 0, l ≥ l̄.

In particular, when S = {x*}, we obtain the maximal radius in closed form (Theorem 2.4), analogously to Theorem 2.1. It is interesting that problem (18) looks similar to problem (7). Actually, problem (18), where A is perturbed holistically, is a specialization of (7), where A is perturbed row-wisely: in (7), let every row have s perturbation vectors, let the perturbation direction set of the j-th row consist of all the j-th rows of A_i, i = 1, ..., s, and additionally force all the perturbation radii to be equal; then problem (7) coincides with (18). Hence we omit the analysis for this case.
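The reduction from holistic to row-wise perturbation can be checked numerically: perturbing A holistically by coefficients γ is the same as perturbing each row i along the directions given by the i-th rows of the A_j, with all rows sharing the parameter vector γ. A small numpy sketch with random toy data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 3, 4, 2
A0 = rng.normal(size=(m, n))
A_mats = [rng.normal(size=(m, n)) for _ in range(s)]  # perturbation matrices A_1, ..., A_s
gamma = rng.normal(size=s)                            # shared perturbation parameters

# Holistic perturbation: A = A0 + sum_j gamma_j A_j.
A_hol = A0 + sum(g * Aj for g, Aj in zip(gamma, A_mats))

# Row-wise perturbation where row i uses directions u^i_j = (A_j)^i and all
# rows share the same parameter vector gamma.
A_row = A0.copy()
for i in range(m):
    for j in range(s):
        A_row[i] += gamma[j] * A_mats[j][i]

assert np.allclose(A_hol, A_row)
```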
At the end of this section, we provide a concrete example to demonstrate our results.
Example 3. To simplify the analysis, let the fixed l̄ = 0. The original problem is as follows: Here we let S take the two special forms mentioned in Section 1. The first is S = {x*}, where x* = (0.05, 0.4, 0)^T is a pre-decision, and the second is S = {x | x_3 = 0}. Notice that x* lies in the latter polytope, so the maximal radius for the second form should be larger. In the b-perturbation case, let v_1 = (0.2, −0.1)^T and v_2 = (0.1, 0.1)^T be two perturbation vectors for the right-hand-side vector. Solving (6) yields the maximal radius L_1 = 1.1180 for S = {x*}: x* = (0.05, 0.4, 0)^T remains feasible in (1) as long as l ≤ 1.1180, and is infeasible otherwise. For S = {x | x_3 = 0}, we get the maximal radius L_2 = 1.6175. It follows that L_2 > L_1.
In the A-row-wise perturbation case, let u_1^1 = (0.1, 0.1, 0.2)^T and u_2^1 = (0.3, 0.2, −0.1)^T be two perturbation vectors for the first row, and let u_1^2 = (−0.1, −0.2, 0.1)^T be the perturbation vector for the second row. Then for S = {x*}, the maximal radii are L_3 = (2.3783, 2.9412)^T: when l ≤ L_3 in (1), x* = (0.05, 0.4, 0)^T is feasible, and the radii cannot be enlarged. For S = {x | x_3 = 0}, using the first technique, we reformulate problem (13) into a single-objective one and let g = (1, 1)^T to make the radii of the two rows equal. With the binary search algorithm, we get the maximal radius L_4 = 3.2106 for both rows.
If we let the two radii be equal in the first form as well, then the maximal radius is 2.3783, which is less than L_4.
In the A-holistic perturbation case, let the two perturbation matrices be

3. Discussions on the pre-decision. Section 2 deals with (2) and yields the maximal perturbation radius (radii) when the polytope S is given. In this section, we discuss how the pre-decision set S affects the maximal perturbation radius and show the trade-off between the optimality of the pre-decision and its robustness.
3.1. Optimality preference. Generally, we first solve (1) and obtain an optimal solution x*; by the following proposition, the value c^T x* remains the minimum for any l ≥ l̄ as long as x* stays feasible.
Proposition 4. F(l_2) ⊆ F(l_1) whenever l_2 ≥ l_1. If x* is a robustly optimal solution of (1) with l = l̄, then x* remains robustly optimal for l ≥ l̄ if and only if x* ∈ F(l).
Proof. The first claim is obvious. Since F(l) shrinks as l increases, for any x ∈ F(l) ⊆ F(l̄) with l ≥ l̄, c^T x ≥ c^T x* must hold. Hence, if x* ∈ F(l), then x* is robustly optimal for l, and the second claim is true.
For sensitivity analysis, one way is to keep this optimal solution unchanged, which retains the optimal value c^T x* of (1). By setting S = {x*} or S = {x | c^T x = c^T x*}, we indeed keep the optimality of the value c^T x* of (1).
To simplify the discussion, we only consider the case l̄ = 0; similar results can be obtained for l̄ ≥ 0. According to linear programming theory, there generally exists at least one active row constraint of A_0 x* ≤ b_0, i.e., at least one i with A_0^i x* = b_0^i. Then, by Theorem 2.1, Theorem 2.2 and Theorem 2.4, no radius l > l̄ = 0 exists with x* ∈ F(l). But this is not absolute: only in the extreme situation where all rows of A_0 x* ≤ b_0 are inactive, or the active ones are not involved in the perturbation, is the maximal perturbation radius positive. An example is shown below.
Obviously, the optimal solution is x * = (0, 0) T and the corresponding maximal radius is positive no matter how the problem is perturbed.
When a larger perturbation is not available, one way out is to relax the right-hand sides b_0^i to b_0^i + δ_i for all the active constraints in A_0 x* ≤ b_0, where the tolerance δ_i > 0; then we obtain a larger radius l > l̄ = 0 keeping the optimal value c^T x*.
The other way is to relax the singleton {x*} to a more flexible set S, which may admit more robustness with l > l̄. Different aims lead to different relaxations. To keep the optimal value close to c^T x*, we can relax S = {x*} to S = {x ∈ R^n | c^T x ≤ c^T x* + δ} with δ > 0. To limit new setup costs, we can relax it to S = {x ∈ R^n | x_j = 0, j ∈ P}, where P ⊆ I = {j | x*_j = 0}. These relaxed sets are polytopes, and their corresponding robust sensitivity analysis models have been studied in Section 2.
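Both relaxed property sets fit the polytope form S = {x | Ex ≤ f}; a minimal numpy sketch building the two encodings (c, δ and x* are toy data assumed for illustration):

```python
import numpy as np

x_star = np.array([0.05, 0.4, 0.0])   # pre-decision (toy data)
c = np.array([1.0, 2.0, 3.0])         # toy objective vector
delta = 0.1
n = x_star.size

# S = {x | c^T x <= c^T x* + delta}: a single row of Ex <= f.
E1 = c.reshape(1, -1)
f1 = np.array([c @ x_star + delta])

# S = {x | x_j = 0, j in P}: paired rows x_j <= 0 and -x_j <= 0.
P = [j for j in range(n) if x_star[j] == 0]
E2 = np.zeros((2 * len(P), n))
for k, j in enumerate(P):
    E2[2 * k, j], E2[2 * k + 1, j] = 1.0, -1.0
f2 = np.zeros(2 * len(P))

assert np.all(E1 @ x_star <= f1) and np.all(E2 @ x_star <= f2)
```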
3.2. Robustness preference. Relaxing S to all of R^n, we solve (2) to obtain an optimal solution (x*, l*). Then x* is the most robust choice, with the greatest maximal perturbation radius (radii) l*, if we take x* as the pre-decision.
Different selections of the pre-decision set S may affect the maximal perturbation radius (radii) l*. In our model (2), the pre-decision set is deterministic.

4. Numerical experiments. In this section, we implement the algorithm for problem (14) with l̄ = 0. Whether all entries of g are equal has no effect on the computational complexity, so we take g = (1, ..., 1)^T for simplicity. The experiments are run in MATLAB R2015b on a laptop with an Intel Core i5 CPU at 2.7 GHz and 8 GB memory. CVX-64 (version 2.0) (http://cvxr.com/cvx/) is used to solve all convex programming problems.
Since the concrete form of the general polytope does not affect the complexity, we take S = {x | x_j = 0, j ∈ P}, with the cardinality of the index set P set to 30% of n. The point x̄ is taken as a strict interior point with x̄_j = 0, j ∈ P, and x̄_j = 1, j ∉ P. A_0 and c are uniformly distributed in [−10, 10]^{m×n} and [−10, 10]^n respectively, and b_0 − A_0 x̄ | A_0 is uniformly distributed in [2, 10]^m to ensure that A_0 x̄ < b_0 holds strictly; the symbol | A_0 refers to the conditional distribution. Besides, we impose a box constraint x ∈ [−100, 100]^n to make the feasible region bounded, so the feasibility of the linear system (15) is a sufficient and necessary condition for an infinite perturbation radius. The number n_i of perturbation vectors for the i-th row is set at two levels, 25% and 50% of n + m, and the perturbation vectors u_j^i, j = 1, ..., n_i, i = 1, ..., m, are independently uniformly distributed in [−0.5, 0.5]^n or [−1, 1]^n. (m, n) is set to (20, 20), (20, 100) and (80, 100), and the precision is 10^{−6}.
We show the improvement from the lower bound (16) to the maximal radius, and the effect on CPU time of different combinations of (m, n, pert, scale), where m is the number of constraints, n is the dimension of the variable, pert is the perturbation ratio (25% or 50%), and scale is the perturbation scale of u_j^i (0.5 or 1). We let the numbers of perturbation vectors of all rows be equal for simplicity. For every setting (m, n, pert, scale), we run 10 random instances; Ave and Std stand for the average value and the standard deviation over the 10 runs.
In the numerical experiments of Table 1, an infinite perturbation radius does not occur, since it is very rare and special. We can see that the CPU time increases as the problem size (m, n) increases, and m plays a more important role than n. Besides, the more perturbation vectors there are, the more CPU time is needed, whereas the scale of the perturbation vectors has essentially no effect on CPU time. As for the maximal radius, it is mainly affected by the scale of the perturbation vectors: the maximal radius at level [−0.5, 0.5] is about twice that at level [−1, 1]. The improvement ratio from the lower bound to the maximal radius is between about 2 and 6, which shows that the naive lower bound is rather weak.

5. A note. At the end of the paper, we point out why we do not use another formulation of (2). Denote the robustly optimal solution set of (1) by OPT(l). If we replace F(l) with OPT(l) in (2), we obtain the new model

    max l
    s.t. x ∈ OPT(l),      (19)
         x ∈ S = {x ∈ R^n | Ex ≤ f},
         l ≥ l̄, l ≥ 0.
We take the case where the perturbation occurs only in the right-hand-side vector b as an example. According to the strong duality theory of linear programming, (19) is equivalent to the following bilinear problem: max l s.t.
When l is fixed, all constraints are linear, so whether a given l is feasible for (20) can be determined in polynomial time. Assume the problem above is feasible. Is the feasible region of l connected? In other words, when l_1, l_2 are both feasible for (20) with l̄ ≤ l_1 < l_2, is every l ∈ (l_1, l_2) feasible? If yes, we could use the binary search algorithm. Unfortunately, OPT(l) does not necessarily intersect the polytope {x ∈ R^n | Ex ≤ f} for l ∈ (l_1, l_2). Thus this model is problematic, and the binary search algorithm is invalid here. We present the following example to illustrate this.
The optimal solution is (4, 2, 0)^T. Let the perturbation direction vector of b be (2, 0)^T and S = {x | x_3 = 0}. For l = 3, (0.4, 3.2, 0)^T is feasible for (20) and optimal to (1), the robust counterpart of the problem above. For l = 6, (20) is infeasible, and there is no optimal solution to (1) with the third entry equal to 0. For l = 10, (0, 0, 0)^T is feasible for (20) and optimal to (1); now the feasible set of (1) is the singleton {(0, 0, 0)^T}.
Thus l = 3 and l = 10 are both feasible for (19) while l = 6 is not.
In particular, when S = {x*}, where x* is an optimal solution of (1), replacing F(l) with OPT(l) makes no difference.

6. Conclusions. In this study, we introduce a new concept, robust sensitivity analysis, which is genuinely different from classical sensitivity analysis and from recent variants of robust optimization. We have established a unified model based on this concept and provided analyses in three perturbation cases. All the programs in our study can be equivalently reformulated into polynomial-time solvable models, except those where more than one row of the constraint matrix is perturbed and the feasible solution is not fixed (i.e., the polytope S is not a singleton). For those, we have developed a binary search algorithm to obtain the maximal perturbation radius, and it remains open whether equivalent polynomial-time solvable models exist for them. Besides, this concept can also be applied to the convex quadratically constrained quadratic programming (CQCQP) problem, which is considered in another paper. The robust sensitivity analysis of the semidefinite programming (SDP) problem is also a potential research direction.