ON THE EXTENSION OF AN ARC-SEARCH INTERIOR-POINT ALGORITHM FOR SEMIDEFINITE OPTIMIZATION

Abstract. This paper concerns an extension of the arc-search strategy proposed by Yang [26] for linear optimization to the semidefinite optimization case. Based on the Nesterov-Todd direction as the Newton search direction, it is shown that the complexity bound of the proposed algorithm is of the same order as that of the corresponding algorithm for linear optimization. Preliminary numerical results indicate that our primal-dual arc-search path-following method is promising for solving semidefinite optimization problems.


1. Introduction. The semidefinite optimization (SDO) problem is a generalization of the linear optimization (LO) problem over an affine subspace of symmetric matrices. SDO has wide applications in diverse areas, such as continuous and combinatorial optimization [1,22], system and control theory [4], and mechanical and electrical engineering [21].
Interior-point methods (IPMs) are now among the most effective methods for solving SDO problems. Due to their polynomial complexity and practical efficiency, many IPMs for LO problems have been successfully extended to the SDO case [9,11,19,29]. Pioneering research on this extension was done independently by Alizadeh [2] and Nesterov and Nemirovskii [16], in which SDO problems were solved by primal-dual interior-point algorithms with polynomial time complexity. Kojima et al. [9] and Potra and Sheng [19] independently discussed a generalization to SDO of the Mizuno-Todd-Ye (MTY) predictor-corrector algorithm [12] for infeasible starting points and derived the best known iteration bound under some mild assumptions. The convergence analysis of IPMs for SDO problems is more complicated than for the LO case. A large part of the theoretical difficulty is due to the issue of maintaining symmetry in the linearized complementarity condition. A strategy to overcome this problem was proposed by Alizadeh et al. [3], but one of the most common strategies was developed by Monteiro [14]. This formulation reveals a deep similarity between the LO and SDO cases and allows an insightful complexity analysis for SDO problems (see [29]). It turned out that the directions resulting from this approach could be seen as two special cases of the class of directions introduced earlier by Kojima et al. [8]. Nesterov and Todd [17,18] introduced the so-called Nesterov-Todd (NT) direction in their attempt to generalize primal-dual IPMs beyond LO. In [29], based on Monteiro's idea, Zhang generalized all these aforementioned approaches to a unified scheme parameterized by a nonsingular scaling matrix. This family of search directions is referred to as the Monteiro-Zhang (MZ) family of search directions.
Yang [25,26] proposed the arc-search approach for LO and convex quadratic optimization problems, respectively, which searches for optimizers along an ellipse that approximates the entire central path. Based on the arc-search strategy, the author proved that the complexity bound can be improved for a higher-order feasible IPM. Yang et al. [27] used the same idea and proposed an arc-search infeasible IPM with a complexity bound O(n^{5/4} log ε^{-1}) for LO problems, where n is the larger dimension of a standard LO problem and ε is the required precision. Later, Yang et al. [28] extended the algorithm to symmetric optimization (SO). Very recently, Kheirfam [7] proposed an arc-search infeasible-interior-point algorithm for SO in a wide neighborhood N_∞^- of the central path and derived the complexity bound O(r^{3/2} log ε^{-1}), where r is the rank of the associated Euclidean Jordan algebra.
The purpose of this paper is to establish an iteration-complexity bound for an extension of the arc-search strategy introduced by Yang in [26] for LO problems to the SDO case. Based on the NT direction, the derived complexity bound for our algorithm is O(√n log(µ_0/ε)), which is analogous to the LO case. Numerical results show that our algorithm performs significantly better than the MTY predictor-corrector algorithm.
This paper is structured as follows. In Section 2, we first introduce the SDO problems and discuss the arc-search approach for SDO problems. Then we present a primal-dual arc-search algorithm. In Section 3, we provide some technical lemmas, which will be needed in Section 4. Section 4 is devoted to complexity analysis of the proposed algorithm. Numerical results are given in Section 5. Finally, we end the paper with some concluding remarks in Section 6.
The following notations are used throughout the paper. Given m, n ∈ N, R^n and R^{m×n} denote the n-dimensional Euclidean space and the space of all real m × n matrices, respectively. The set of all n × n symmetric matrices is denoted by S^n. S^n_+ and S^n_{++} denote the sets of all matrices in S^n which are positive semidefinite and positive definite, respectively. For Q ∈ S^n, Q ⪰ 0 (Q ≻ 0) means that Q is positive semidefinite (positive definite). For a matrix Q ∈ S^n, with all real eigenvalues, we denote its eigenvalues by λ_i(Q), i = 1, …, n, and its smallest and largest eigenvalues by λ_min(Q) and λ_max(Q), respectively. ‖Q‖_F denotes the Frobenius norm of Q ∈ R^{n×n}, i.e., ‖Q‖_F = √(Tr(Q^T Q)). The Kronecker product of two matrices A and B of arbitrary size is denoted by A ⊗ B. Finally, for A ∈ R^{m×n}, vec(A) denotes the mn-vector obtained by stacking the columns of A one by one from the first to the last.
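The vec operator and the Kronecker product interact through the standard identity vec(AXB) = (B^T ⊗ A) vec(X), which underlies the Kronecker reformulations of matrix equations used later. A small numpy check of this identity and of the Frobenius-norm formula (the variable names and random data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

def vec(M):
    # stack the columns of M, first to last (column-major order)
    return M.flatten(order="F")

# Frobenius norm: ||Q||_F = sqrt(Tr(Q^T Q))
Q = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.norm(Q, "fro"), np.sqrt(np.trace(Q.T @ Q)))

# standard Kronecker identity: vec(A X B) = (B^T kron A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```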
2. SDO problem and search direction. In this paper, we deal with the primal SDO problem in the form

(P)   min { Tr(CX) : Tr(A_i X) = b_i, i = 1, …, m, X ⪰ 0 },

along with its dual problem, which can be given as follows:

(D)   max { b^T y : Σ_{i=1}^m y_i A_i + S = C, S ⪰ 0 },

where C, X, S ∈ S^n, b, y ∈ R^m and A_i ∈ S^n, i = 1, …, m, are, without loss of generality, linearly independent. Throughout the paper, we assume that both problems (P) and (D) satisfy the interior-point condition (IPC), that is,

F^0 := { (X, y, S) : Tr(A_i X) = b_i, i = 1, …, m, Σ_{i=1}^m y_i A_i + S = C, X ≻ 0, S ≻ 0 } ≠ ∅.

It is well known [5] that under the assumptions that F^0 ≠ ∅ and the matrices A_i, i = 1, …, m, are linearly independent, the set of all primal and dual optimal solutions consists of all solutions (X, y, S) ∈ S^n × R^m × S^n of the following optimality system:

Tr(A_i X) = b_i, i = 1, …, m,   X ⪰ 0,
Σ_{i=1}^m y_i A_i + S = C,      S ⪰ 0,                    (1)
XS = 0.

The basic idea of primal-dual IPMs is to replace the third equation in (1), the so-called complementarity condition for (P) and (D), by the parameterized equation XS = µI, where µ > 0 and I denotes the n × n identity matrix. Thus, we consider the following perturbed system:

Tr(A_i X) = b_i, i = 1, …, m,   X ≻ 0,
Σ_{i=1}^m y_i A_i + S = C,      S ≻ 0,                    (2)
XS = µI.

It is proven in [8,16] that system (2) has a unique solution (X(µ), y(µ), S(µ)), the µ-center of (P) and (D), for any µ > 0, under the assumption that F^0 ≠ ∅ and the A_i, i = 1, …, m, are linearly independent. The set of µ-centers with µ > 0 is called the central path of (P) and (D), i.e., an arc parameterized as a function of µ and denoted by C = { (X(µ), y(µ), S(µ)) : µ > 0 }.
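For any pair of feasible points of (P) and (D), the duality gap Tr(CX) − b^T y collapses to Tr(XS), which is why the barrier parameter µ = Tr(XS)/n tracks progress toward optimality. A minimal numpy sketch verifying this, with feasible data constructed backwards from randomly chosen X, S and y (all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3

def random_spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

# Build a strictly feasible primal-dual pair by working backwards:
# pick X, S positive definite and y, A_i arbitrary symmetric, then choose
# b and C so that Tr(A_i X) = b_i and sum_i y_i A_i + S = C by construction.
X, S = random_spd(n), random_spd(n)
y = rng.standard_normal(m)
A = [rng.standard_normal((n, n)) for _ in range(m)]
A = [(Ai + Ai.T) / 2 for Ai in A]   # each A_i symmetric
b = np.array([np.trace(Ai @ X) for Ai in A])
C = sum(yi * Ai for yi, Ai in zip(y, A)) + S

# For feasible points the duality gap Tr(CX) - b^T y equals Tr(XS):
gap = np.trace(C @ X) - b @ y
assert np.isclose(gap, np.trace(X @ S))
mu = np.trace(X @ S) / n   # the barrier parameter mu = Tr(XS)/n
```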
As µ → 0, the limit of (X(µ), y(µ), S(µ)) is a primal-dual optimal solution of the corresponding SDO problem. We consider a family of neighborhoods N_F(θ) of the central path, where θ ∈ (0, 1) and µ := Tr(XS)/n is associated with the actual duality gap. As we reduce the duality gap to zero, N_F(θ) will be a neighborhood of the central path that approaches the optimizer(s) of the SDO problem; thus, all points inside N_F(θ) will approach the optimizer(s) of the SDO problem.
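The exact displayed definition of N_F(θ) is not reproduced above. Purely as an illustration of the idea, the sketch below computes one common Frobenius-norm proximity measure used in NT-based analyses, δ(X, S) = ‖X^{1/2} S X^{1/2} − µI‖_F / µ, which vanishes exactly on the central path; this particular measure is an assumption for illustration and may differ from the paper's definition:

```python
import numpy as np

rng = np.random.default_rng(3)

def spd_sqrt(M):
    # square root of a symmetric positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def proximity(X, S):
    """delta(X, S) = ||X^{1/2} S X^{1/2} - mu I||_F / mu with mu = Tr(XS)/n.
    NOTE: one common symmetric proximity measure, used here only as an
    illustration; the paper's exact N_F(theta) is not reproduced."""
    n = X.shape[0]
    mu = np.trace(X @ S) / n
    M = spd_sqrt(X) @ S @ spd_sqrt(X)
    return np.linalg.norm(M - mu * np.eye(n), "fro") / mu

n = 4
X = np.diag(rng.uniform(1, 2, n))
mu0 = 1.5
S = mu0 * np.linalg.inv(X)   # on the central path: XS = mu0 * I
assert np.isclose(proximity(X, S), 0.0)
```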
A promising way to obtain a search direction is to apply Newton's method to system (2). Observe that for X, S ∈ S^n, the product XS is generally not in S^n. Hence, the left-hand side of (2) is a map from S^n × R^m × S^n to R^m × S^n × R^{n×n}. Thus, system (2) is not a square system when X and S are restricted to S^n, which is needed for applying Newton-like methods. To remedy this, we use the so-called similar symmetrization operator H_P : R^{n×n} → S^n introduced by Zhang [29], defined as

H_P(M) := (1/2) [ PMP^{-1} + (PMP^{-1})^T ],

where P ∈ R^{n×n} is some nonsingular matrix. Zhang [29] also observed that

H_P(M) = µI  if and only if  M = µI

for any nonsingular matrix P, any matrix M similar to a (symmetric) positive definite matrix (e.g., M = XS with X, S ∈ S^n_{++}) and any µ ∈ R. Consequently, for any given nonsingular matrix P, system (2) is equivalent to the following square system from R^m × S^n × S^n to itself:

Tr(A_i X) = b_i, i = 1, …, m,
Σ_{i=1}^m y_i A_i + S = C,                                (4)
H_P(XS) = µI.

A Newton-like method applied to system (4) leads to the following linear system for the direction (∆X, ∆y, ∆S) ∈ S^n × R^m × S^n:

Tr(A_i ∆X) = 0, i = 1, …, m,
Σ_{i=1}^m ∆y_i A_i + ∆S = 0,                              (5)
H_P(∆X S + X ∆S) = µI − H_P(XS).

Todd et al. [20] showed that system (5) has a unique solution for any (X, y, S) ∈ S^n_{++} × R^m × S^n_{++} whenever the scaling matrix P satisfies PXSP^{-1} ∈ S^n. The choice

W := X^{-1/2} (X^{1/2} S X^{1/2})^{1/2} X^{-1/2}

leads to the NT direction [18]. For the NT scaling matrix P = W^{1/2}, we have PXP = P^{-1}SP^{-1} and hence PXSP^{-1} ∈ S^n.
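A numerical sketch of the NT scaling, assuming the standard definition W = X^{-1/2}(X^{1/2} S X^{1/2})^{1/2} X^{-1/2} (the unique symmetric positive definite solution of WXW = S) with P = W^{1/2}; the helper names are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def spd_power(M, p):
    # M^p for a symmetric positive definite M via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * w**p) @ V.T

def random_spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

n = 4
X, S = random_spd(n), random_spd(n)

# NT scaling point: W = X^{-1/2} (X^{1/2} S X^{1/2})^{1/2} X^{-1/2},
# and the scaling matrix P = W^{1/2}.
Xh = spd_power(X, 0.5)
W = spd_power(X, -0.5) @ spd_power(Xh @ S @ Xh, 0.5) @ spd_power(X, -0.5)
P = spd_power(W, 0.5)
Pinv = np.linalg.inv(P)

assert np.allclose(W @ X @ W, S)              # defining property of W
assert np.allclose(P @ X @ P, Pinv @ S @ Pinv)  # PXP = P^{-1} S P^{-1}

def H(M):
    # Zhang's symmetrization H_P(M) = (P M P^{-1} + (P M P^{-1})^T) / 2
    T = P @ M @ Pinv
    return (T + T.T) / 2

assert np.allclose(H(X @ S), H(X @ S).T)      # output is symmetric
```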
Recently, Yang [24,25,26] proposed an interesting approach based on approximating the central path by an ellipse and presented some polynomial feasible interior-point algorithms for LO problems that search for optimizers along an ellipsoidal estimation of the central path. Yang [26] showed that arc-search along an ellipse may be a better method than line-search methods, because the algorithm is proved to be polynomial with a better bound than the bounds of all existing higher-order algorithms. Based on the NT direction as the Newton search direction, we extend the arc-search strategy proposed by Yang [26] for LO problems to the SDO case. For this purpose, we define an ellipse E in S^n × R^m × S^n to estimate the central path C as follows:

E = { (X(α), y(α), S(α)) : (X(α), y(α), S(α)) = B_3 + B_1 cos(α) + B_2 sin(α) },          (7)

where B_1 and B_2 are the axes of the ellipse and B_3 is its center. Let (Ẋ, ẏ, Ṡ) and (Ẍ, ÿ, S̈) respectively denote the first and second derivatives of (X(µ), y(µ), S(µ)), and let Z = Z(α_0) ∈ E be close to or on the central path. We proceed to determine the matrices B_1, B_2 and B_3 such that the first and second derivatives of (X(µ), y(µ), S(µ)) satisfy

Tr(A_i Ẋ) = 0, i = 1, …, m,
Σ_{i=1}^m ẏ_i A_i + Ṡ = 0,                                (8)
H_P(Ẋ S + X Ṡ) = H_P(XS),

and

Tr(A_i Ẍ) = 0, i = 1, …, m,
Σ_{i=1}^m ÿ_i A_i + S̈ = 0,                                (9)
H_P(Ẍ S + X S̈) = −2 H_P(Ẋ Ṡ).

For the scaling matrix P, we scale systems (8) and (9) and obtain the scaled systems (11) and (12), where H(·) = H_I(·) and the scaled first and second derivatives are PẊP, P^{-1}ṠP^{-1}, PẌP and P^{-1}S̈P^{-1}, respectively. Furthermore, similar to [10,15], the third equations in (11) and (12) can be expressed in terms of the Kronecker product, with an associated coefficient matrix Ê.

BEHROUZ KHEIRFAM AND MORTEZA MOSLEMI
By taking the first and second derivatives of E and using (11) and (12), after some calculations, we obtain the matrices B_1, B_2 and B_3. Let α ∈ (0, π/2]. The following lemma gives the mathematical formulation of the ellipsoidal approximation of the central path C; it follows by substituting B_1, B_2 and B_3 into (7).
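In Yang's LO setting [26] the resulting arc has the closed form x(α) = x − ẋ sin(α) + ẍ(1 − cos(α)). A quick finite-difference check with scalar stand-ins (an entrywise illustration of the LO formula only, not the paper's exact matrix statement) that this arc passes through the current iterate and matches its first two derivatives at α = 0:

```python
import math

# scalar stand-ins: x0 = iterate, xd = first derivative, xdd = second derivative
x0, xd, xdd = 2.0, 0.7, -0.3

def x(alpha):
    # Yang's ellipsoidal arc (LO case): x(alpha) = x0 - xd*sin(a) + xdd*(1 - cos(a))
    return x0 - xd * math.sin(alpha) + xdd * (1 - math.cos(alpha))

h = 1e-5
# the arc starts at the current iterate:
assert abs(x(0.0) - x0) < 1e-12
# central finite differences recover x'(0) = -xd and x''(0) = xdd:
assert abs((x(h) - x(-h)) / (2 * h) + xd) < 1e-8
assert abs((x(h) - 2 * x(0.0) + x(-h)) / h**2 - xdd) < 1e-4
```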
Step 3 of the algorithm finds a step size sin(α) such that (X(α), y(α), S(α)) ∈ N_F(2θ). Lemma 4.1 will show that the step size sin(α) is determined by the smallest positive root of q(α). Since q is a monotonically increasing function of α ∈ [0, π/2] for (X^k, S^k) ≻ 0 (as will be shown in Lemma 4.2), with q(0) < 0 and q(π/2) ≥ 0, q(α) has exactly one positive root in [0, π/2]. Moreover, this root is available analytically, and the computational cost of finding it is independent of the size of A (i.e., of n and m) and is negligible [6]. We will show that the correction step brings the iterate from N_F(2θ) back to N_F(θ), and we will also estimate the step size.
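Although the root of q is available analytically, the well-posedness argument alone (q monotonically increasing with q(0) < 0 ≤ q(π/2)) already pins the step size down, e.g. by bisection. A sketch with a toy monotone function standing in for q (the function and names are illustrative, not the paper's q):

```python
import math

def smallest_root(q, lo=0.0, hi=math.pi / 2, tol=1e-12):
    """Bisection for the unique root of a monotonically increasing q on
    [lo, hi], assuming q(lo) < 0 <= q(hi), as established for the step size."""
    assert q(lo) < 0 <= q(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if q(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# toy monotone stand-in with q(0) < 0 and q(pi/2) >= 0; root is pi/6
q = lambda a: math.sin(a) - 0.5
root = smallest_root(q)
assert abs(root - math.pi / 6) < 1e-9
```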

where the last equality follows from the first equations of (11) and (12). Similarly, we obtain the second equality. Therefore, the proof is complete.
The next lemma shows that searching along the ellipse reduces the duality gap. where the fourth equality is due to the identity Tr(H P (M )) = Tr(M ) and the third equations of (11) and (12). The fifth equality is due to Lemma 3.1(i) and the last equality follows from part (i). This proves part (ii).
Here, we cite the following two lemmas that will be frequently used in the convergence analysis.
The following lemma will be useful in the next sections.
Taking the squared norm of both sides of (20) and using vec(X̂)^T vec(Ŝ) = Tr(X̂Ŝ) = 0, we get Thus, we have Using (X, y, S) ∈ N_F(θ) and proceeding exactly as in (22) and (23) on (21), we have and The inequalities (23) and (25) imply part (i). To prove (ii), we have where the last inequality is due to (23) and (25). Taking the square root of both sides of the above inequality implies part (ii). Therefore, the proof is complete.

Hence, by part (i), we have where the third equality is obtained by the third equation of (5) at (X(α), y(α), S(α)), the second inequality is due to the triangle inequality and the last inequality follows from (29) and (X(α), y(α), S(α)) ∈ N_F(2θ). Taking t = 1 gives It is easy to see that where the first equality is due to Lemma 3.2(i), the second equality is due to part (i) and the last equality follows from Lemma 3.2. This implies that X^{k+1} ≻ 0 and S^{k+1} ≻ 0. Therefore (X^{k+1}, y^{k+1}, S^{k+1}) ∈ N_F(θ). This completes the proof.
The next lemma gives an upper bound for the number of iterations produced by our algorithm.
We are ready to state our main result.

5. Numerical results. In this section, we compare the proposed primal-dual arc-search algorithm with the MTY predictor-corrector algorithm [5]. These two algorithms are denoted by NewAlgor and MTYAlgor, respectively. Numerical results were obtained using MATLAB R2014b on an Intel Core i3 (3.40 GHz) with 4 GB RAM.
We stop the iteration of each algorithm when µ = Tr(XS)/n ≤ ε, where ε = 10^{-6}, 10^{-8} for Example 1 and ε = 10^{-5} for Example 2. Table 1 presents the duality gap (DGAP), the CPU time (in seconds) and the required number of iterations (Iter.).

Example 2. In this example, we test primal-dual pairs (P) and (D) of various dimensions in which the coefficient matrices A_i are generated randomly. Table 2 lists the number of constraints (m), the dimension of the blocks (n), the average number of iterations and the average CPU time over ten randomly generated SDO problems with the same m and n. From Tables 1 and 2, we can see that our primal-dual arc-search algorithm for SDO is robust and promising.

6. Conclusion. In this paper, we extended the arc-search strategy introduced by Yang [26] for LO problems to the SDO case. The proposed algorithm searches for optimizers along ellipses that approximate the entire central path. The derived complexity bound is of the same order as the one for the LO case. Moreover, preliminary numerical results illustrate the effectiveness of our primal-dual arc-search path-following method for solving SDO problems. The test problems are small-sized; therefore, as further research, the efficiency of our algorithm could be analyzed for sparse problems as well.