SEMIDEFINITE PROGRAMMING VIA IMAGE SPACE ANALYSIS

Abstract. In this paper, we investigate semidefinite programming by using the image space analysis and present some equivalences between the (regular) linear separation and the saddle points of the Lagrangian functions related to semidefinite programming. Some necessary and sufficient optimality conditions for semidefinite programming are also given under suitable assumptions. As an application, we obtain some equivalent characterizations of necessary and sufficient optimality conditions for linear semidefinite programming under the Slater assumption.

1. Introduction. As pointed out by Vandenberghe and Boyd [12], there are good reasons for studying semidefinite programming. First, positive semidefinite (or definite) constraints arise directly in a number of important applications. Second, many convex optimization problems, e.g., linear programming and (convex) quadratically constrained quadratic programming, can be cast as semidefinite programs. Semidefinite programming therefore offers a unified way to study the properties of, and derive algorithms for, a wide variety of convex optimization problems. Most importantly, semidefinite programs can be solved very efficiently, both in theory and in practice. Extensive lists of applications from various areas can be found in [9,12]. In recent years, many authors have investigated semidefinite programming (see, for example, [5,11,13,15]).
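To fix ideas, the following minimal sketch solves a small linear semidefinite program of the kind considered below. It assumes the Python packages numpy and cvxpy, which are not part of this paper, and the data c0, A0, A1, A2 are illustrative only.

```python
# A minimal sketch (cvxpy and numpy assumed; data illustrative only) of a
# linear SDP: minimize <c0, x> subject to A0 + x1*A1 + x2*A2 being
# positive semidefinite.
import cvxpy as cp
import numpy as np

k, l = 2, 3
c0 = np.array([1.0, 2.0])
A0 = np.eye(l)
A = [np.diag([1.0, 0.0, -1.0]), np.diag([0.0, 1.0, 1.0])]

x = cp.Variable(k)
G = A0 + sum(x[i] * A[i] for i in range(k))   # g(x), affine in x
problem = cp.Problem(cp.Minimize(c0 @ x), [G >> 0])
problem.solve()                               # handled by a conic solver
print(problem.value, x.value)                 # optimal value -3 at x = (-1, -1)
```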
The image of a constrained extremum problem was introduced by Giannessi [2] by exploiting previous results on theorems of the alternative. Recently, there has been an increasing interest in the Image Space Analysis (for short, ISA) of constrained variational inequalities and constrained optimization problems (see, for example, [1,3,4,6,7,14]).
The ISA is a powerful tool and a unifying scheme for studying both variational inequalities and optimization problems. This approach can be applied to any kind of problem that can be expressed in the form of the impossibility of a parametric system. The impossibility of such a system is reduced to the disjunction of two suitable subsets of the image space, and the disjunction is proved by showing that the two subsets lie in two disjoint level sets of a separating functional.
The purpose of this paper is to carry out the ISA of semidefinite programming. We present some equivalences between the (regular) linear separation and the saddle points of the Lagrangian functions related to semidefinite programming. We give necessary and sufficient optimality conditions for semidefinite programming under suitable assumptions. As an application, under the Slater assumption we obtain some equivalent characterizations of necessary and sufficient optimality conditions for linear semidefinite programming.
The paper is organized as follows. In Section 2, we recall the notation that will be used in the sequel. In Section 3, we define the image of semidefinite programming and the conical extension of the image, and establish the equivalence between the solvability of semidefinite programming and the emptiness of the intersection of H with the conical extension of the image. In Section 4, we characterize the (regular) linear separation for semidefinite programming and give the equivalence between the (regular) linear separation and the saddle points of the Lagrangian functions for semidefinite programming. We present necessary and sufficient conditions for the solvability of semidefinite programming in Section 5. Section 6 investigates some equivalent formulations of necessary and sufficient optimality conditions for linear semidefinite programming.
2. Notations and problem formulation. Let R^k be the k-dimensional Euclidean space, where k is a given positive integer, whose elements are regarded as column vectors x = (x_1, ..., x_k)^⊤, where ⊤ denotes the transpose. Let R_+ := R^1_+ and R_++ := R^1_++. A nonempty subset P of R^k is said to be a cone with apex at the origin if λP ⊆ P for all λ ≥ 0. P is said to be a convex cone if P is a cone and P + P = P. The dual cone (or positive polar cone) of a convex cone P is given by

P* := {y ∈ R^k : ⟨y, x⟩ ≥ 0, ∀x ∈ P},

where ⟨·, ·⟩ denotes the inner product.
Denote by dom h := {x ∈ R^k : h(x) < +∞} the effective domain of h : R^k → R ∪ {+∞}. A function h is said to be convex on a convex set K ⊆ R^k if

h(tx + (1 − t)y) ≤ th(x) + (1 − t)h(y), ∀x, y ∈ K, ∀t ∈ [0, 1].

The subdifferential of a proper convex function h : R^k → R ∪ {+∞} at x ∈ dom h is given by

∂h(x) := {ξ ∈ R^k : h(y) ≥ h(x) + ⟨ξ, y − x⟩, ∀y ∈ R^k}.

It is well known that if h is differentiable at x, then ∂h(x) = {∇h(x)}, where ∇ denotes the gradient. Let K be a nonempty subset of R^k. The indicator function of K is defined by

i_K(x) := 0 if x ∈ K, and i_K(x) := +∞ otherwise.

The normal cone to K at x ∈ K, denoted by N_K(x), is defined by

N_K(x) := {ξ ∈ R^k : ⟨ξ, y − x⟩ ≤ 0, ∀y ∈ K}.

Denote by S^l the linear space of the symmetric l × l matrices with real entries, i.e., S^l := {A ∈ R^{l×l} : A = A^⊤}. Denote by S^l_+ the cone of the symmetric positive semidefinite l × l matrices with real entries, i.e., S^l_+ := {A ∈ S^l : x^⊤Ax ≥ 0, ∀x ∈ R^l}, and by S^l_++ the set of the symmetric positive definite l × l matrices with real entries (which actually coincides with int S^l_+, see [11]), i.e., S^l_++ := {A ∈ S^l : x^⊤Ax > 0, ∀x ∈ R^l with x ≠ 0}. Then the so-called Löwner partial order can be introduced as follows:

A ⪰ B if and only if A − B ∈ S^l_+, and A ≻ B if and only if A − B ∈ S^l_++.

The scalar product and the Frobenius norm in S^l are given by

⟨A, B⟩ := tr(AB) = Σ_{i,j=1}^l A_ij B_ij and ‖A‖_F := √⟨A, A⟩,

respectively, where A_ij and B_ij are the (i, j) entries of A and B, respectively. It is well known that the space S^l is self-dual, i.e., (S^l)* = S^l. It is proved in [15] that the convex cone S^l_+ is also self-dual, i.e., (S^l_+)* = S^l_+. Also, one can easily check that S^l_+ is a closed subset of S^l, i.e., cl S^l_+ = S^l_+, where cl denotes the closure. Notice that S^l can be identified with the l(l+1)/2-dimensional Euclidean space. We say that g : R^k → S^l is S^l_+-convex (resp., S^l_+-convexlike) on a convex set K if

tg(x) + (1 − t)g(y) − g(tx + (1 − t)y) ∈ S^l_+, ∀x, y ∈ K, ∀t ∈ [0, 1]

(resp., g(K) + S^l_+ is convex). Clearly, if g is S^l_+-convex on K, then g(K) + S^l_+ is convex. But the converse is not true in general.
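As a quick numerical illustration of the scalar product and the Löwner order (a sketch assuming the numpy package; the matrices are illustrative only):

```python
# Illustrating <A, B> = tr(AB) = sum_ij A_ij B_ij and the Löwner order.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3
B = np.eye(2)

inner = np.trace(A @ B)                  # <A, B> via the trace
assert np.isclose(inner, np.sum(A * B))  # equals the entrywise sum

# A - B has eigenvalues 0 and 2, so A - B ∈ S^2_+, i.e., A ⪰ B:
print(np.all(np.linalg.eigvalsh(A - B) >= -1e-12))  # True
```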
In this paper, without other specifications, let K be a nonempty convex subset of R^k and let f : R^k → R and g : R^k → S^l. We consider the following semidefinite programming problem (for short, SDP):

min f(x) s.t. x ∈ K, g(x) ⪰ 0,

whose feasible set we denote by F_p := {x ∈ K : g(x) ⪰ 0}. When K = R^k, f(x) = ⟨c_0, x⟩ with c_0 ∈ R^k and g(x) = A_0 + Σ_{i=1}^k x_i A_i with A_0, A_1, ..., A_k ∈ S^l, SDP collapses to the following linear semidefinite programming problem (for short, LSDP):

min ⟨c_0, x⟩ s.t. A_0 + Σ_{i=1}^k x_i A_i ⪰ 0.

3. Preliminary results on ISA for SDP. In this section, we shall carry out the ISA for SDP. Observe that x̄ ∈ F_p solves SDP if and only if the system (in the unknown x)

f(x̄) − f(x) > 0, g(x) ⪰ 0, x ∈ K (3.1)

is impossible. We can associate with SDP the following sets:

H := {(u, A) ∈ R × S^l : u > 0, A ⪰ 0} = R_++ × S^l_+,
K(x̄) := {(u, A) ∈ R × S^l : u = f(x̄) − f(x), A = g(x), x ∈ K}.

We call the set K(x̄) the image of SDP at x̄ ∈ F_p and R × S^l the image space. Define the mapping G_x̄ : R^k → R × S^l by

G_x̄(x) := (f(x̄) − f(x), g(x)), ∀x ∈ R^k.

Clearly, K(x̄) = G_x̄(K), and the impossibility of system (3.1) is equivalent to

H ∩ K(x̄) = ∅. (3.2)

Consequently, x̄ ∈ F_p is a solution of SDP if and only if (3.2) is true.
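The following toy sketch (numpy assumed; the one-dimensional data f, g, K are hypothetical) illustrates (3.2) by sampling the image K(x̄):

```python
# Toy instance with k = l = 1: f(x) = x, g(x) = [x + 1], K = [-1, 1].
# Here x̄ = -1 solves SDP, so no sampled image point (f(x̄) - f(x), g(x))
# should land in H.
import numpy as np

f = lambda x: x
g = lambda x: x + 1.0          # a 1x1 "matrix": PSD just means g(x) >= 0
x_bar = -1.0

xs = np.linspace(-1.0, 1.0, 10001)   # dense sample of K
u = f(x_bar) - f(xs)                 # first coordinate of the image
v = g(xs)                            # second coordinate of the image
print(np.any((u > 0) & (v >= 0)))    # False: H ∩ K(x̄) = ∅, so (3.2) holds
```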
As pointed out by Giannessi, to prove directly whether (3.2) holds or not is generally too difficult. The reason is that, in general, the image of SDP is not convex even when the functions involved enjoy some convexity properties. To overcome this difficulty, similarly to [3,4] (see also [6,7,14]), we introduce a regularization of the image K(x̄), namely, the extension of the image with respect to the cone cl H, denoted by E:

E := K(x̄) − cl H = {(u, A) ∈ R × S^l : u ≤ f(x̄) − f(x), g(x) − A ∈ S^l_+, x ∈ K}.

We have the following result.

Proposition 3.1. The following statements are true:
(1) The mapping −G_x̄ is R_+ × S^l_+-convexlike on K, i.e., −G_x̄(K) + R_+ × S^l_+ is convex, if and only if the set E is convex;
(2) (3.2) holds if and only if

H ∩ E = ∅, (3.3)

or equivalently,

H ∩ (K(x̄) − cl H) = ∅. (3.4)

Proof. (1) follows directly from the definitions. (2) Since K(x̄) ⊆ E, (3.3) implies (3.2). Conversely, suppose that (3.3) does not hold. Then there is (u_0, A_0) ∈ H ∩ E, i.e., there exists x_0 ∈ K such that

0 < u_0 ≤ f(x̄) − f(x_0) and 0 ⪯ A_0 ⪯ g(x_0).

As a consequence, f(x̄) − f(x_0) > 0 and g(x_0) ⪰ 0, so that G_x̄(x_0) ∈ H ∩ K(x̄), which contradicts (3.2). We now prove that (3.3) and (3.4) are equivalent. We only need to prove that (3.4) implies (3.3), which follows at once from the definition E = K(x̄) − cl H.
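The same toy instance as above can be used to check statement (2) of Proposition 3.1 by sampling the extension E (numpy assumed; data as before):

```python
# Checking (3.3) on the toy instance: sample E = K(x̄) - cl H by
# subtracting nonnegative offsets (du, dv) from sampled image points.
import numpy as np

f = lambda x: x
g = lambda x: x + 1.0
x_bar = -1.0

du = np.linspace(0.0, 2.0, 101)       # offsets in cl H, first coordinate
dv = np.linspace(0.0, 2.0, 101)       # offsets in cl H, second coordinate
hit = False
for x in np.linspace(-1.0, 1.0, 201):
    U = f(x_bar) - f(x) - du[:, None] # u-coordinates of sampled points of E
    V = g(x) - dv[None, :]            # v-coordinates of sampled points of E
    if np.any((U > 0) & (V >= 0)):
        hit = True
        break
print(not hit)   # True: H ∩ E = ∅, as Proposition 3.1 predicts
```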

4. Linear separation and saddle points of Lagrangian functions for SDP.
In this section, we shall characterize the (regular) linear separation of SDP and investigate the saddle points of the Lagrangian functions associated with SDP.

Definition 4.1. Let x̄ ∈ F_p. The sets K(x̄) and H are said to:
(1) be linearly separable, if there is (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ (0, 0), such that

λ̄(f(x̄) − f(x)) + ⟨Ā, g(x)⟩ ≤ 0, ∀x ∈ K; (4.1)

(2) admit a regular linear separation, if there is (λ̄, Ā) ∈ R_++ × S^l_+ such that (4.1) holds.
Similarly, we can define the (regular) linear separation of the sets K(x̄) and E. The following lemma is well known (see, for example, [6]).

Lemma 4.2. Let x̄ ∈ F_p. Consider the following statements:
(1) The sets K(x̄) and H are linearly separable;
(2) The sets E and H are linearly separable;
(3) The sets K(x̄) and H admit a regular linear separation;
(4) The sets E and H admit a regular linear separation.
Then we have (1) ⇔ (2) ⇐ (3) ⇔ (4).

Theorem 4.3. Let x̄ ∈ F_p. If the following Slater condition holds:

there exists x̂ ∈ K such that g(x̂) ∈ S^l_++, (4.2)

then statements (1)-(4) of Lemma 4.2 are equivalent.
Let x̄ ∈ F_p. Consider the generalized Lagrangian function associated with SDP, defined by L : R^k × R_+ × S^l_+ × K → R,

L(x̄; λ, A, x) := λ(f(x) − f(x̄)) − ⟨A, g(x)⟩, ∀(λ, A, x) ∈ R_+ × S^l_+ × K.

We also consider the following Lagrangian function associated with SDP, defined by L_0 : R^k × S^l_+ × K → R,

L_0(x̄; A, x) := f(x) − f(x̄) − ⟨A, g(x)⟩, ∀(A, x) ∈ S^l_+ × K.

Definition 4.4. The point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is said to be a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K, if the following inequalities hold:

L(x̄; λ, A, x̄) ≤ L(x̄; λ̄, Ā, x̄) ≤ L(x̄; λ̄, Ā, x), ∀(λ, A, x) ∈ R_+ × S^l_+ × K. (4.3)

The point (Ā, x̄) ∈ S^l_+ × K is said to be a saddle point of the Lagrangian function L_0 on S^l_+ × K, if the following inequalities hold:

L_0(x̄; A, x̄) ≤ L_0(x̄; Ā, x̄) ≤ L_0(x̄; Ā, x), ∀(A, x) ∈ S^l_+ × K.

It is easy to verify the following proposition.

Proposition 4.5. Let x̄ ∈ F_p and (λ̄, Ā) ∈ R_++ × S^l_+. Then the point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K if and only if (Ā/λ̄, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K.

We now characterize the linear separation for SDP by using the saddle points of the generalized Lagrangian function related to SDP.

Theorem 4.6. Let x̄ ∈ F_p. Then the sets K(x̄) and H are linearly separable if and only if there exists (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ (0, 0), such that the point (λ̄, Ā, x̄) is a saddle point for L on R_+ × S^l_+ × K.

Proof. Necessity. Suppose that K(x̄) and H are linearly separable. Then there exists (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ (0, 0), such that (4.1) holds. Letting x := x̄ in (4.1) yields ⟨Ā, g(x̄)⟩ ≤ 0. Since x̄ ∈ F_p, we have g(x̄) ⪰ 0 and therefore ⟨Ā, g(x̄)⟩ ≥ 0. As a consequence, ⟨Ā, g(x̄)⟩ = 0, and again from (4.1) one has

L(x̄; λ̄, Ā, x̄) = −⟨Ā, g(x̄)⟩ = 0 ≤ λ̄(f(x) − f(x̄)) − ⟨Ā, g(x)⟩ = L(x̄; λ̄, Ā, x), ∀x ∈ K.

Again from g(x̄) ⪰ 0 we have

L(x̄; λ, A, x̄) = −⟨A, g(x̄)⟩ ≤ 0 = L(x̄; λ̄, Ā, x̄), ∀(λ, A) ∈ R_+ × S^l_+.

Hence (λ̄, Ā, x̄) is a saddle point for L on R_+ × S^l_+ × K.

Sufficiency. Suppose that (λ̄, Ā, x̄) is a saddle point for L, i.e., (4.3) holds. Letting A := 0 in the first inequality of (4.3) leads to ⟨Ā, g(x̄)⟩ ≤ 0. Since x̄ ∈ F_p, one has g(x̄) ⪰ 0 and so ⟨Ā, g(x̄)⟩ ≥ 0 since Ā ∈ S^l_+. Thus ⟨Ā, g(x̄)⟩ = 0, and it follows from the second inequality of (4.3) that

λ̄(f(x̄) − f(x)) + ⟨Ā, g(x)⟩ ≤ 0, ∀x ∈ K,

which yields that the sets K(x̄) and H are linearly separable.
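For a scalar instance, the saddle-point inequalities of Definition 4.4 can be verified directly on a grid. The sketch below assumes numpy, and the instance f(x) = x², g(x) = x − 1, K = R is hypothetical:

```python
# Grid check of the saddle-point inequalities for L0 with k = l = 1:
# f(x) = x**2, g(x) = x - 1 (so F_p = [1, +inf)), K = R, and x̄ = 1.
import numpy as np

x_bar, A_bar = 1.0, 2.0                  # candidate saddle point (Ā, x̄)
L0 = lambda A, x: (x**2 - x_bar**2) - A * (x - 1.0)

As = np.linspace(0.0, 10.0, 1001)        # sample of S^1_+ = R_+
xs = np.linspace(-3.0, 3.0, 1001)        # sample of K (truncated for the test)
left  = np.all(L0(As, x_bar) <= L0(A_bar, x_bar) + 1e-12)
right = np.all(L0(A_bar, x_bar) <= L0(A_bar, xs) + 1e-12)
print(left and right)                    # True: (2, 1) is a saddle point
```

Note that ⟨Ā, g(x̄)⟩ = 2 · 0 = 0 here, in line with the complementarity condition appearing in the proofs above.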
Similarly, we can prove the following result.

Theorem 4.7. Let x̄ ∈ F_p. Then the sets K(x̄) and H admit a regular linear separation if and only if there exists (λ̄, Ā) ∈ R_++ × S^l_+ such that the point (λ̄, Ā, x̄) is a saddle point for L on R_+ × S^l_+ × K.

From Proposition 4.5 and Theorem 4.7 we have:

Theorem 4.8. Let x̄ ∈ F_p. Then the sets K(x̄) and H admit a regular linear separation if and only if there exists (λ̄, Ā) ∈ R_++ × S^l_+ such that (Ā/λ̄, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K.

Under some convexity assumptions on f and −g we also have the following proposition.

Proposition 4.9. Let x̄ ∈ F_p. Suppose that f is convex on K and −g is S^l_+-convex on K. Then the point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K if and only if it is a solution of the following system:

0 ∈ λ̄ ∂f(x̄) + ∂(−⟨Ā, g(·)⟩)(x̄) + N_K(x̄), ⟨Ā, g(x̄)⟩ = 0, x̄ ∈ F_p, (λ̄, Ā) ∈ R_+ × S^l_+. (4.4)

Proof. Necessity. Suppose that the point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K. Then, from the proof of sufficiency in Theorem 4.6, we have ⟨Ā, g(x̄)⟩ = 0. It follows from (4.3) that x̄ minimizes L(x̄; λ̄, Ā, ·) on K. Then x̄ is a minimum point on R^k of the function

f̃ := λ̄(f(·) − f(x̄)) − ⟨Ā, g(·)⟩ + i_K,

that is, 0 ∈ ∂f̃(x̄). Since K is convex, f is convex on K and −g is S^l_+-convex on K, we have that −⟨Ā, g(·)⟩ is convex on K (see, for example, [8]) and, as a consequence, f̃ is convex on R^k. Since dom(λ̄(f(·) − f(x̄))) ∩ dom(−⟨Ā, g(·)⟩) ∩ dom i_K = K and ri K ≠ ∅, it follows from [10] that

∂f̃(x̄) = λ̄ ∂f(x̄) + ∂(−⟨Ā, g(·)⟩)(x̄) + N_K(x̄),

which implies that (λ̄, Ā, x̄) solves system (4.4).
Sufficiency. Let (λ̄, Ā, x̄) solve system (4.4). Again, since dom(λ̄(f(·) − f(x̄))) ∩ dom(−⟨Ā, g(·)⟩) ∩ dom i_K = K and ri K ≠ ∅, it follows from [10] that ∂f̃(x̄) = λ̄ ∂f(x̄) + ∂(−⟨Ā, g(·)⟩)(x̄) + N_K(x̄), and hence 0 ∈ ∂f̃(x̄), i.e., x̄ minimizes f̃ on R^k. Thus the second inequality of (4.3) holds and, since ⟨Ā, g(x̄)⟩ = 0, we have

L(x̄; λ̄, Ā, x̄) = −⟨Ā, g(x̄)⟩ = 0 ≤ L(x̄; λ̄, Ā, x), ∀x ∈ K.

Since x̄ ∈ F_p, i.e., g(x̄) ⪰ 0, we have ⟨A, g(x̄)⟩ ≥ 0 for any A ∈ S^l_+. As a consequence,

L(x̄; λ, A, x̄) = −⟨A, g(x̄)⟩ ≤ 0 = L(x̄; λ̄, Ā, x̄), ∀(λ, A) ∈ R_+ × S^l_+.

Hence (λ̄, Ā, x̄) is a saddle point of L on R_+ × S^l_+ × K.

Similarly, we have:

Proposition 4.10. Let x̄ ∈ F_p. Suppose that f is convex on K and −g is S^l_+-convex on K. Then the point (Ā, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K if and only if it is a solution of the following system:

0 ∈ ∂f(x̄) + ∂(−⟨Ā, g(·)⟩)(x̄) + N_K(x̄), ⟨Ā, g(x̄)⟩ = 0, x̄ ∈ F_p, Ā ∈ S^l_+. (4.5)

Corollary 4.11. Let x̄ ∈ F_p. Suppose that K is open, f is convex and differentiable on K, −g is S^l_+-convex on K, and ⟨A, g(·)⟩ is differentiable on K for each A ∈ S^l_+. Then the point (Ā, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K if and only if it is a solution of the following system:

∇f(x̄) − ∇⟨Ā, g(·)⟩(x̄) = 0, ⟨Ā, g(x̄)⟩ = 0, x̄ ∈ F_p, Ā ∈ S^l_+. (4.6)

Proof. Since K is open, we have N_K(x) = {0} for any x ∈ K. The conclusion follows immediately from Proposition 4.10.
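For the same hypothetical scalar instance as above, the system of Corollary 4.11 reduces to a plain KKT check:

```python
# KKT check at (Ā, x̄) = (2, 1) for f(x) = x**2, g(x) = x - 1, K = R:
# f'(x̄) - Ā g'(x̄) = 0, <Ā, g(x̄)> = 0, x̄ ∈ F_p, Ā >= 0.
x_bar, A_bar = 1.0, 2.0
stationarity    = abs(2.0 * x_bar - A_bar * 1.0) < 1e-12  # f'(x̄) = 2x̄, g'(x̄) = 1
complementarity = abs(A_bar * (x_bar - 1.0)) < 1e-12      # Ā * g(x̄)
feasible        = (x_bar - 1.0 >= 0.0) and (A_bar >= 0.0)
print(stationarity and complementarity and feasible)      # True
```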

5. Optimality conditions for SDP. In this section, we shall present necessary and sufficient optimality conditions for SDP. First we present the following necessary optimality condition for SDP.

Theorem 5.1. Let x̄ ∈ F_p be a solution of SDP. If the mapping −G_x̄ is R_+ × S^l_+-convexlike on K (equivalently, by Proposition 3.1, the set E is convex), then the sets K(x̄) and H are linearly separable.
From Theorems 5.1 and 4.6 we have the following corollary:

Corollary 5.2. Let x̄ be a solution of SDP such that the set E is convex. Then there exists (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ (0, 0), such that the point (λ̄, Ā, x̄) is a saddle point for L on R_+ × S^l_+ × K.

Now we present the following sufficient optimality condition for SDP.

Theorem 5.3. Let x̄ ∈ F_p. If the sets K(x̄) and H admit a regular linear separation, then x̄ is a solution of SDP.

From Theorems 5.3 and 4.3 we have the following corollary:
Corollary 5.4. Let x̄ ∈ F_p. If the sets K(x̄) and H are linearly separable and the Slater condition (4.2) holds, then x̄ is a solution of SDP.
6. Applications to LSDP. In this section, we shall apply the results obtained above to characterize necessary and sufficient optimality conditions for LSDP. Note that for LSDP we have K = R^k, f = ⟨c_0, ·⟩ is convex, and −g is S^l_+-convex, since both functions are affine, so the results of Section 4 apply.
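The optimality system of Theorem 6.1 below can also be checked numerically on the toy LSDP instance from the introduction, as in the following sketch. It assumes cvxpy and numpy, and both the instance and the solver's sign convention for dual variables (whose dual matrix plays the role of Ā here) are assumptions, not part of the paper:

```python
# Numerical check of the LSDP optimality system:
# c0 = (<Ā, A_1>, ..., <Ā, A_k>)^T and <Ā, g(x̄)> = 0.
import cvxpy as cp
import numpy as np

k, l = 2, 3
c0 = np.array([1.0, 2.0])
A0 = np.eye(l)
A = [np.diag([1.0, 0.0, -1.0]), np.diag([0.0, 1.0, 1.0])]

x = cp.Variable(k)
G = A0 + sum(x[i] * A[i] for i in range(k))
psd = G >> 0
cp.Problem(cp.Minimize(c0 @ x), [psd]).solve()

A_bar = psd.dual_value                               # solver's multiplier for g(x) ⪰ 0
g_bar = A0 + sum(x.value[i] * A[i] for i in range(k))
dual_feas = np.allclose([np.trace(A_bar @ Ai) for Ai in A], c0, atol=1e-4)
compl     = abs(np.trace(A_bar @ g_bar)) < 1e-4
print(dual_feas, compl)                              # True True (up to solver tolerance)
```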
Theorem 6.1. Let x̄ ∈ F_p. Then the point (Ā, x̄) ∈ S^l_+ × R^k is a saddle point of the Lagrangian function L_0 on S^l_+ × R^k if and only if it is a solution of the following system:

c_0 − (⟨Ā, A_1⟩, ..., ⟨Ā, A_k⟩)^⊤ = 0, ⟨Ā, g(x̄)⟩ = 0, x̄ ∈ F_p, Ā ∈ S^l_+.

Proof. The conclusion follows immediately from Corollary 4.11, together with the facts that ∂f(x̄) = {c_0} and ∂(−⟨Ā, g(·)⟩)(x̄) = {−(⟨Ā, A_1⟩, ..., ⟨Ā, A_k⟩)^⊤}.

7. Conclusion. Recently, there has been an increasing interest in the ISA, in which separation plays a vital role. In this paper, the ISA was employed to investigate semidefinite programming. Some equivalences between the (regular) linear separation and the saddle points of the Lagrangian functions related to the problem were characterized. Some necessary and sufficient optimality conditions for semidefinite programming were given under suitable assumptions. An application to linear semidefinite programming was also given to illustrate the obtained results.