A SEMIDEFINITE RELAXATION ALGORITHM FOR CHECKING COMPLETELY POSITIVE SEPARABLE MATRICES

Abstract. A symmetric matrix A is completely positive (CP) if there exists an entrywise nonnegative matrix V such that A = V V^T. A real symmetric matrix is called completely positive separable (CPS) if it can be written as a sum of rank-1 Kronecker products of completely positive matrices. This paper studies the CPS problem. A criterion is given to determine whether a given matrix is CPS, and a specific CPS decomposition is constructed if the matrix is CPS.


1.
Introduction. Let n be a positive integer. Denote by S^n the space of n × n real symmetric matrices. For A ∈ S^n, A ≥ 0 (resp., A ⪰ 0) means that A is a symmetric entrywise nonnegative (resp., positive semidefinite) matrix. A symmetric matrix A ∈ S^n is completely positive (CP) if there exist nonnegative vectors v_1, . . . , v_r ∈ R^n_+ such that

A = v_1 v_1^T + · · · + v_r v_r^T.    (1)

The number r is called the length of the decomposition (1), and the smallest length among all such decompositions is called the CP-rank of A. If A is CP, we call (1) a CP-decomposition of A. So, A is CP if and only if A = V V^T for an entrywise nonnegative matrix V (whose columns are v_1, . . . , v_r).
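As a small numeric illustration (a numpy sketch with made-up data, not taken from the paper), the factorization A = V V^T with nonnegative V immediately yields a doubly nonnegative matrix:

```python
import numpy as np

# Hypothetical nonnegative vectors v1, v2; A = v1 v1^T + v2 v2^T = V V^T.
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 3.0])
V = np.column_stack([v1, v2])
A = V @ V.T

# A CP matrix is automatically doubly nonnegative:
assert np.all(A >= 0)                           # entrywise nonnegative
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)  # positive semidefinite
```

Here the CP-rank of A is at most 2, the number of columns of V.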
Clearly, a CP matrix is doubly nonnegative (DNN), i.e., it is not only positive semidefinite but also entrywise nonnegative. Denote by CP_n the cone of n × n completely positive matrices and by DNN_n the cone of n × n doubly nonnegative matrices. It was shown in [2] that CP_n = DNN_n holds only for n ≤ 4. The cone CP_n is a proper cone (i.e., closed, convex, pointed, and full-dimensional). It is also well known that checking membership in CP_n is NP-hard, while checking membership in its dual cone is co-NP-hard [8, 19].
Completely positive matrices have wide applications in combinatorics and statistics. For example, block designs, maximin efficiency-robust tests, and the Markovian model of DNA evolution all require checking the complete positivity of a given matrix [2]. Many problems in applications can be formulated as completely positive programming problems. Interested readers are referred to [1, 3, 9, 33].
Let p and q be two positive integers. For matrices B ∈ S^p and C ∈ S^q, B ⊗ C denotes their Kronecker product, i.e., B ⊗ C is the block matrix

B ⊗ C := (B_{ik} C)_{1≤i,k≤p}.
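The block structure of the Kronecker product can be checked directly in numpy (an illustrative sketch, not part of the paper):

```python
import numpy as np

B = np.array([[1.0, 2.0], [2.0, 5.0]])  # B in S^2
C = np.array([[3.0, 1.0], [1.0, 4.0]])  # C in S^2
K = np.kron(B, C)                        # the block matrix (B_ik * C)

# The (i, k) block of B ⊗ C equals B[i, k] * C.
for i in range(2):
    for k in range(2):
        assert np.array_equal(K[2*i:2*i+2, 2*k:2*k+2], B[i, k] * C)
```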
Let K_{p,q} be the subspace of S^{pq} spanned by all such Kronecker products:

K_{p,q} := span{B ⊗ C : B ∈ S^p, C ∈ S^q}.    (2)

The set K_{p,q} is a proper subspace of S^{pq}, and its dimension is dim K_{p,q} = (1/4) p(p+1) q(q+1).
If both B ∈ S^p and C ∈ S^q are rank-1, we call B ⊗ C a rank-1 Kronecker product. A matrix is called separable if it can be written as a nonnegative linear combination of rank-1 Kronecker products of positive semidefinite matrices. Checking whether or not a density matrix is separable is important in quantum information theory [5]. A quantum system is separable (resp., entangled) if its density matrix is separable (resp., not separable). We refer to [7, 11, 24] for recent work on entanglement and separability.
Given two positive integers p and q with pq = n and 1 < p, q < n, a matrix A ∈ K_{p,q} is said to be completely positive separable (CPS) if there exist B_j ∈ CP_p, C_j ∈ CP_q (j = 1, 2, . . . , r) such that

A = B_1 ⊗ C_1 + · · · + B_r ⊗ C_r.    (3)

The equation (3) is called a CPS-decomposition of A. Denote by CP_{p,q} the cone of all such completely positive separable matrices:

CP_{p,q} := {A ∈ K_{p,q} : A is CPS}.    (4)

Completely positive separable matrices have many applications in biquadratic polynomial optimization (see Section 6).
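A CPS matrix can be assembled directly from the definition; the sketch below (illustrative data, not from the paper) also exhibits the CP-decomposition in R^{pq} that underlies Proposition 1(i) later on:

```python
import numpy as np

# Rank-1 CP factors a_j >= 0, b_j >= 0 (hypothetical data), p = 2, q = 3.
a = [np.array([1.0, 2.0]), np.array([0.5, 1.0])]
b = [np.array([1.0, 0.0, 1.0]), np.array([2.0, 1.0, 0.0])]

# A = sum_j (a_j a_j^T) ⊗ (b_j b_j^T) is CPS by construction.
A = sum(np.kron(np.outer(aj, aj), np.outer(bj, bj)) for aj, bj in zip(a, b))

# Since (a a^T) ⊗ (b b^T) = (a ⊗ b)(a ⊗ b)^T, we get A = W W^T with W
# entrywise nonnegative, i.e., A is also completely positive in S^{pq}.
W = np.column_stack([np.kron(aj, bj) for aj, bj in zip(a, b)])
assert np.allclose(A, W @ W.T)
```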
In this paper, we study how to determine whether a given symmetric matrix is CPS or not. We show that the CPS problem is equivalent to a truncated moment problem with special structures, and then construct a hierarchy of semidefinite relaxations for solving it. If a matrix is not CPS, we can give a certificate of this fact; if it is, we can give a CPS-decomposition for it.
The paper is organized as follows. In Section 2, we present some preliminaries in the field of polynomial optimization and truncated moment problems. In Section 3, we reformulate the problem of checking CPS matrices as a truncated moment problem and study the properties of the CPS cone. A semidefinite relaxation algorithm is proposed to check whether a matrix is CPS or not in Section 4, and the convergence of the algorithm is also analysed. Some computational results are given in Section 5. Finally, we summarize the paper and discuss the applications of CPS matrices in biquadratic polynomial optimization in Section 6.

2.
Preliminaries. Notation. The symbol N (resp., R) denotes the set of nonnegative integers (resp., real numbers). For t ∈ R, ⌈t⌉ denotes the smallest integer that is greater than or equal to t. Denote [n] := {1, . . . , n}. Let p, q be positive integers.
Given variables x ∈ R^p, y ∈ R^q, denote (x, y) = (x_1, . . . , x_p, y_1, . . . , y_q) ∈ R^p × R^q. Let R[x, y] := R[x_1, . . . , x_p, y_1, . . . , y_q] be the ring of polynomials in (x, y) with real coefficients, and let M[x, y] ⊆ R[x, y] be the set of all real monomials in (x, y). For a monomial set Ω ⊆ M[x, y] and a pair (a, b) ∈ R^p × R^q, [(a, b)]_Ω denotes the vector of all monomials in Ω evaluated at the point (a, b).
M[x, y]_d (resp., R[x, y]_d) denotes the set of all monomials (resp., polynomials) with degree at most d. For a set S ⊆ R^n, |S| denotes its cardinality, and int(S) denotes its interior. Clearly, the dimension of the vector space R[x, y]_d is the binomial coefficient (p+q+d choose d). In the following, we introduce some concepts, terminology, and notation in polynomial optimization, which are inherited from [14].
Let h := (h_1, . . . , h_s) be a tuple of polynomials in R[x, y]. The ideal generated by h is

I(h) := h_1 · R[x, y] + · · · + h_s · R[x, y].

We also denote the k-th truncation of the ideal I(h) as

I_k(h) := h_1 · R[x, y]_{k−deg(h_1)} + · · · + h_s · R[x, y]_{k−deg(h_s)}.

Clearly, I(h) = ∪_{k∈N} I_k(h). A polynomial f ∈ R[x, y] is called a sum of squares (SOS) if there exist f_1, . . . , f_l ∈ R[x, y] for some l ∈ N such that f = f_1^2 + · · · + f_l^2. The set of all SOS polynomials in (x, y) is denoted by Σ[x, y]. For a degree d, denote the truncation

Σ[x, y]_d := Σ[x, y] ∩ R[x, y]_d.

It is a closed convex cone for all even d > 0. For a tuple g := (g_1, . . . , g_t) of polynomials in R[x, y], the quadratic module generated by g is the set

Q(g) := Σ[x, y] + g_1 · Σ[x, y] + · · · + g_t · Σ[x, y].

The 2k-th truncation of Q(g) is the set

Q_{2k}(g) := Σ[x, y]_{2k} + g_1 · Σ[x, y]_{d_1} + · · · + g_t · Σ[x, y]_{d_t},

where each d_i = 2k − deg(g_i). Then, Q(g) = ∪_{k∈N} Q_{2k}(g). Let h and g be the polynomial tuples as above. Denote

S(h, g) := {(x, y) ∈ R^p × R^q : h(x, y) = 0, g(x, y) ≥ 0}.    (8)

It is clear that if f ∈ I(h) + Q(g), then f ≥ 0 on S(h, g). In fact, the converse is also true under some general conditions. When I(h) + Q(g) is archimedean (i.e., there exists N > 0 such that N − ‖x‖_2^2 − ‖y‖_2^2 ∈ I(h) + Q(g)), if f > 0 on S(h, g), then f ∈ I(h) + Q(g). This is called Putinar's Positivstellensatz in the literature (cf. [25]). Moreover, as shown recently in [23], if f ≥ 0 on S(h, g), then we also have f ∈ I(h) + Q(g) under some general optimality conditions. We refer to [16, 17] for more details on polynomial optimization.
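The SOS property is a semidefinite condition: f is SOS if and only if f = m^T G m for some positive semidefinite Gram matrix G over a monomial vector m. A minimal numeric sketch (a toy example, not from the paper):

```python
import numpy as np

# f(x, y) = x^2 + 2xy + y^2 = (x + y)^2, with monomial vector m = (x, y):
# a Gram matrix G satisfying f = m^T G m.
G = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# G is positive semidefinite, so f is SOS (here f = f_1^2 with f_1 = x + y).
assert np.all(np.linalg.eigvalsh(G) >= -1e-12)

# Sanity check at (x, y) = (2, 3): f = (2 + 3)^2 = 25.
m = np.array([2.0, 3.0])
assert np.isclose(m @ G @ m, 25.0)
```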
Let R^{M[x,y]_d} be the space of real vectors indexed by monomials in the set M[x, y]_d; i.e., a vector w ∈ R^{M[x,y]_d} is indexed as

w = (w_{α,β}),  with x^α y^β ∈ M[x, y]_d.

Such a vector w is called a truncated moment sequence (tms) of degree d. For example, let p = 1, q = 2, d = 2, x ∈ R^p, y = (y_1, y_2) ∈ R^q; then

M[x, y]_d = {1, x, y_1, y_2, x^2, x y_1, x y_2, y_1^2, y_1 y_2, y_2^2}.

A vector w = (1, 2, . . . , 10)^T ∈ R^{M[x,y]_d} is then indexed as w_1 = 1, w_x = 2, w_{y_1} = 3, w_{y_2} = 4, w_{x^2} = 5, w_{x y_1} = 6, w_{x y_2} = 7, w_{y_1^2} = 8, w_{y_1 y_2} = 9, w_{y_2^2} = 10. For a polynomial f ∈ R[x, y]_d and a tms w ∈ R^{M[x,y]_d}, define the scalar product

⟨f, w⟩ := Σ_{x^α y^β ∈ M[x,y]_d} f_{α,β} w_{α,β},

where the f_{α,β} are the coefficients of the polynomial f ∈ R[x, y]_d. We say that the tms w admits an S(h, g)-representing measure if there exists a nonnegative Borel measure µ such that its support is contained in S(h, g) and

w_{α,β} = ∫ x^α y^β dµ  for all x^α y^β ∈ M[x, y]_d.

If so, such a µ is called an S(h, g)-representing measure for w.
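The monomial indexing in the example above can be reproduced with a short enumeration (a sketch; the graded ordering is an assumption matching the listed monomials):

```python
from itertools import combinations_with_replacement

# Enumerate M[x, y]_2 for p = 1, q = 2 in graded order:
# 1, x, y1, y2, x^2, x*y1, x*y2, y1^2, y1*y2, y2^2.
variables = ["x", "y1", "y2"]
monomials = [()]  # degree 0: the constant monomial 1
for d in (1, 2):
    monomials += list(combinations_with_replacement(variables, d))
assert len(monomials) == 10

# A tms w = (1, 2, ..., 10) assigns one real number to each monomial.
w = {mono: float(i + 1) for i, mono in enumerate(monomials)}
assert w[()] == 1.0 and w[("y2", "y2")] == 10.0
```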
For a polynomial f ∈ R[x, y]_{2k}, the k-th localizing matrix of f generated by a tms w ∈ R^{M[x,y]_{2k}} is the symmetric matrix L_f^{(k)}(w) defined by

(L_f^{(k)}(w))_{i,j} := ⟨f φ_i φ_j, w⟩,

where φ_i, φ_j run over all monomials with deg(φ_i), deg(φ_j) ≤ k − ⌈deg(f)/2⌉. In the above, vec(φ_i) denotes the coefficient vector of the polynomial φ_i in the graded lexicographical ordering. In particular, when f = 1 (the constant one polynomial), L_1^{(k)}(w) is called a moment matrix and is denoted as M_k(w) := L_1^{(k)}(w). The columns and rows of L_f^{(k)}(w) are indexed by the monomials φ_i. Recall that h = (h_1, . . . , h_s) and g = (g_1, . . . , g_t) are two polynomial tuples. For the tuple g, we write L_g^{(k)}(w) := diag(L_{g_1}^{(k)}(w), . . . , L_{g_t}^{(k)}(w)), where diag(X_1, . . . , X_r) denotes the block diagonal matrix whose diagonal blocks are X_1, . . . , X_r. Let S(h, g) be given as in (8). In applications, an interesting question is how to check whether or not a tms w ∈ R^{M[x,y]_{2k}} admits an S(h, g)-representing measure. For this to be true, a necessary condition (cf. [4, 21]) is that

L_{h_i}^{(k)}(w) = 0 (i = 1, . . . , s),  M_k(w) ⪰ 0,  L_{g_j}^{(k)}(w) ⪰ 0 (j = 1, . . . , t).    (13)

However, the converse is typically not true. Let d_0 := max{1, ⌈deg(h)/2⌉, ⌈deg(g)/2⌉}. If w satisfies (13) and the rank condition

rank M_{k−d_0}(w) = rank M_k(w),    (14)

then w admits a unique S(h, g)-representing measure µ, which is supported on r := rank M_k(w) distinct points in S(h, g) (cf. Curto and Fialkow [4]). We say that w is flat with respect to h = 0 and g ≥ 0 if (13) and (14) are both satisfied.
For convenience, for a tms z we denote by z|_d the subvector of z whose entries are indexed by x^α y^β ∈ M[x, y]_d. If z|_{2k} = w and z is flat, we say that z is a flat extension of w. Flat extensions are very useful for checking whether a tms w admits an S(h, g)-representing measure or not.
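For a measure with a single atom the moment matrix is rank-1, which illustrates the rank condition behind flatness (a toy sketch, not from the paper):

```python
import numpy as np

# For µ = δ_(a,b) with p = q = 1, the order-1 moment matrix is
# M_1(w) = m m^T, where m = (1, a, b) is the monomial vector at the atom.
a, b = 0.6, 0.8
m = np.array([1.0, a, b])
M1 = np.outer(m, m)

# M_1(w) is PSD and its rank equals the number of atoms (here 1).
assert np.all(np.linalg.eigvalsh(M1) >= -1e-12)
assert np.linalg.matrix_rank(M1, tol=1e-8) == 1
```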
3. Reformulation and properties of completely positive separable matrices. In this section, we formulate the problem of checking completely positive separable matrices as a special truncated moment problem. The properties of the cone of completely positive separable matrices are also discussed.
3.1. Reformulation. Recall the matrix space K_{p,q} as in (2). Denote the index set

E := {(i, j, k, l) : 1 ≤ i ≤ k ≤ p, 1 ≤ j ≤ l ≤ q}.    (15)

Then the dimension of the space K_{p,q} is the cardinality |E| = (1/4) p(p+1) q(q+1). Thus, each A ∈ K_{p,q} can be identified with the vector

a := (a_{ijkl})_{(i,j,k,l)∈E}    (16)

such that a_{ijkl} = A_{(i−1)q+j,(k−1)q+l} for all (i, j, k, l) ∈ E. Such a vector a ∈ R^E is called the identifying vector of A ∈ K_{p,q}. By the definition, each completely positive separable matrix in CP_{p,q} is a nonnegative linear combination of rank-1 Kronecker products like (aa^T) ⊗ (bb^T) with a ∈ R^p_+, b ∈ R^q_+. Denote

∆ := {(x, y) ∈ R^p_+ × R^q_+ : x^T x = 1, y^T y = 1}.    (17)

Thus, A ∈ CP_{p,q} if and only if

A = Σ_{i=1}^r ρ_i (a_i a_i^T) ⊗ (b_i b_i^T)    (18)

for ρ_1, . . . , ρ_r > 0 and (a_1, b_1), . . . , (a_r, b_r) ∈ ∆. Furthermore, the above is also equivalent to its identifying vector a satisfying

a = Σ_{i=1}^r ρ_i [(a_i, b_i)]_𝔼.    (19)

If we denote the monomial set

𝔼 := {x_i y_j x_k y_l : (i, j, k, l) ∈ E},    (20)

then each monomial x_i y_j x_k y_l ∈ 𝔼 can be uniquely identified with the tuple (i, j, k, l) ∈ E as in (15). Since there is a one-to-one correspondence between E and 𝔼, we can also index the identifying vector a ∈ R^𝔼 of A ∈ K_{p,q} equivalently by monomials in 𝔼:

a = (a_{ijkl}),  with x_i y_j x_k y_l ∈ 𝔼.    (21)

We call such an a an 𝔼-truncated moment sequence (𝔼-tms) (cf. [21]). The 𝔼-truncated ∆-moment problem (𝔼-T∆MP) studies whether or not a given 𝔼-tms a admits a ∆-representing measure µ. A measure is called finitely atomic if its support is a finite set, and is called r-atomic if its support consists of at most r distinct points. Let µ be the weighted sum of Dirac measures µ := Σ_{i=1}^r ρ_i δ_{(a_i,b_i)}. Then, (19) is equivalent to

a = ∫ [(x, y)]_𝔼 dµ,  supp(µ) ⊆ ∆.

Thus, a matrix A ∈ K_{p,q} is completely positive separable (i.e., A ∈ CP_{p,q}) if and only if its identifying vector a ∈ R^𝔼 admits a ∆-representing measure. In other words, we have

CP_{p,q} = {A ∈ K_{p,q} : a admits a ∆-representing measure},    (22)

where 𝔼 and ∆ are given as in (20) and (17), respectively.
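The index set E and the identifying vector can be written out in a few lines (a numpy sketch with a sample matrix; the 1-based index bookkeeping is spelled out in comments):

```python
import numpy as np

p, q = 2, 2
# A sample matrix in K_{2,2}: a Kronecker product of symmetric matrices.
A = np.kron(np.array([[1.0, 2.0], [2.0, 5.0]]),
            np.array([[1.0, 1.0], [1.0, 2.0]]))

# E = {(i, j, k, l) : 1 <= i <= k <= p, 1 <= j <= l <= q}.
E = [(i, j, k, l)
     for i in range(1, p + 1) for k in range(i, p + 1)
     for j in range(1, q + 1) for l in range(j, q + 1)]
assert len(E) == p * (p + 1) * q * (q + 1) // 4

# Identifying vector: a_{ijkl} = A[(i-1)q + j, (k-1)q + l] (1-based indices).
a_vec = np.array([A[(i - 1) * q + j - 1, (k - 1) * q + l - 1]
                  for (i, j, k, l) in E])
assert a_vec.size == 9
```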

3.2.
Properties of the cone CP_{p,q}. The completely positive separable matrix cone CP_{p,q} can be regarded as a subset of the vector space R^𝔼, for 𝔼 as in (20). Denote the polynomial subspace

R[x, y]_𝔼 := span{x_i y_j x_k y_l : (i, j, k, l) ∈ E}.

The subspace R[x, y]_𝔼 is ∆-full, i.e., it contains a polynomial that is strictly positive on ∆ (for instance, Σ_{i∈[p], j∈[q]} x_i^2 y_j^2, which is identically one on ∆). The ∆-fullness is crucial for checking the complete positive separability to be discussed later.
Proposition 1. For any positive integers p, q, we always have: (i) CP_{p,q} ⊆ CP_{pq}; (ii) the cone CP_{p,q} is a proper cone (i.e., it is closed, convex, pointed, and full-dimensional).
Proof. (i) Let A = B ⊗ C with B ∈ CP_p and C ∈ CP_q, and let B = Σ_i λ_i b_i b_i^T and C = Σ_j τ_j c_j c_j^T be CP-decompositions with λ_i, τ_j ≥ 0, b_i ∈ R^p_+, c_j ∈ R^q_+. Using basic algebraic rules for tensor products, we obtain

B ⊗ C = Σ_{i,j} λ_i τ_j (b_i ⊗ c_j)(b_i ⊗ c_j)^T.

It is clear that each b_i ⊗ c_j ∈ R^{pq}_+ and λ_i τ_j ≥ 0 for all i, j. So A ∈ CP_{pq}. Since a completely positive separable matrix is a nonnegative linear combination of such matrices, it follows that CP_{p,q} ⊆ CP_{pq}.
(ii) It is easy to check that CP_{p,q} is pointed. Note that ∆ is a nonempty compact set, E is finite, and R[x, y]_𝔼 is ∆-full. Hence, by [22, Proposition 3.2], the cone CP_{p,q} is proper.
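The tensor-algebra identity used in the proof of (i) can be verified numerically (illustrative CP factors, not from the paper):

```python
import numpy as np

# CP-decompositions B = sum_i λ_i b_i b_i^T and C = sum_j τ_j c_j c_j^T.
b = [np.array([1.0, 1.0]), np.array([2.0, 0.0])]
c = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])]
lam, tau = [1.0, 0.5], [2.0, 1.0]
B = sum(li * np.outer(bi, bi) for li, bi in zip(lam, b))
C = sum(tj * np.outer(cj, cj) for tj, cj in zip(tau, c))

# B ⊗ C = sum_{i,j} λ_i τ_j (b_i ⊗ c_j)(b_i ⊗ c_j)^T: a CP-decomposition
# of B ⊗ C with nonnegative vectors b_i ⊗ c_j.
lhs = np.kron(B, C)
rhs = sum(li * tj * np.outer(np.kron(bi, cj), np.kron(bi, cj))
          for li, bi in zip(lam, b) for tj, cj in zip(tau, c))
assert np.allclose(lhs, rhs)
```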

4.
A semidefinite relaxation algorithm for checking completely positive separability. In this section, we propose a semidefinite relaxation algorithm to check whether a given matrix A ∈ K_{p,q} is completely positive separable or not. The convergence of the algorithm is also analysed.

4.1. A semidefinite relaxation algorithm. As shown in Section 3, a matrix A ∈ K_{p,q} is completely positive separable if and only if its identifying vector a ∈ R^𝔼 admits a ∆-measure. Let

h := (x^T x − 1, y^T y − 1),  g := (x_1, . . . , x_p, y_1, . . . , y_q).    (24)

Then the set ∆ as in (17) can be rewritten as

∆ = S(h, g) = {(x, y) ∈ R^p × R^q : h(x, y) = 0, g(x, y) ≥ 0}.    (25)

Suppose w ∈ R^{M[x,y]_{2t}} is an extension of a, i.e., w|_𝔼 = a. As we showed above, if w is flat with respect to h = 0 and g ≥ 0, i.e., it satisfies

L_{h_i}^{(t)}(w) = 0 (i = 1, 2),  M_t(w) ⪰ 0,  L_{g_j}^{(t)}(w) ⪰ 0 (j = 1, . . . , p + q),    (26)

and the rank condition

rank M_{t−1}(w) = rank M_t(w),    (27)

then w admits a unique ∆-measure µ, which is r-atomic with r := rank M_t(w). In other words, there exist ρ_1, . . . , ρ_r > 0 and (a_1, b_1), . . . , (a_r, b_r) ∈ ∆ such that

w = Σ_{i=1}^r ρ_i [(a_i, b_i)]_{2t}.    (28)

The extension condition w|_𝔼 = a and (28) imply that a = Σ_{i=1}^r ρ_i [(a_i, b_i)]_𝔼. Furthermore, we can get the following CPS-decomposition for A:

A = Σ_{i=1}^r ρ_i (a_i a_i^T) ⊗ (b_i b_i^T).    (29)

Let h, g be as in (24). For relaxation orders k ≥ d/2, where d := deg(F) = 6, consider the semidefinite relaxation

min ⟨F, w⟩  s.t. w|_𝔼 = a, L_{h_i}^{(k)}(w) = 0 (i = 1, 2), M_k(w) ⪰ 0, L_{g_j}^{(k)}(w) ⪰ 0 (j = 1, . . . , p + q), w ∈ R^{M[x,y]_{2k}}.    (30)

The dual problem of (30) is

max ⟨a, f⟩  s.t. F − f ∈ I_{2k}(h) + Q_{2k}(g), f ∈ R[x, y]_𝔼.    (31)

The variable in (31) is the vector of coefficients of f. Based on solving the hierarchy of (30), we propose the following semidefinite relaxation algorithm for checking membership in the cone CP_{p,q}.

Algorithm 4.1. Step 0. Input A ∈ K_{p,q} and ∆ as in (17). Choose a generic F ∈ Σ[x, y]_6. Let k := 3.
Step 1. If (30) is infeasible, then A ∉ CP_{p,q}; stop. Otherwise, solve it for a minimizer w^{*,k}. Let t := 2.
Step 2. Let z := w^{*,k}|_{2t}. If the rank condition (27) is not satisfied, go to Step 4.
Step 3. Compute ρ_i > 0 and (a_i, b_i) ∈ ∆ satisfying (28) for z. Output the CPS-decomposition of A as in (29) and stop.
Step 4. If t < k, set t := t + 1 and go to Step 2; otherwise, set k := k + 1 and go to Step 1.
Remark 1. Algorithm 4.1 can be easily implemented with the software GloptiPoly 3 [13], which solves the generalized problem of moments. In Step 0, a generic F ∈ Σ[x, y]_6 is chosen. In Step 1, we solve the semidefinite relaxation problem (30) by the semidefinite programming solver SeDuMi, a Matlab toolbox for optimization over symmetric cones based on primal-dual interior point methods [30]. The infeasibility of (30) can be certified by the Farkas dual solution computed by SeDuMi (cf. [30, 31]). In Step 2, we evaluate the rank of a matrix as the number of its singular values that are larger than 10^{−6}, which is a standard procedure in numerical linear algebra [6]. In Step 3, the method in Henrion and Lasserre [12] is used to compute the finitely atomic ∆-measure µ admitted by z, i.e., the ρ_i and (a_i, b_i).
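The rank evaluation of Step 2 amounts to counting singular values above the threshold; here is a minimal sketch (a numpy stand-in for the MATLAB routine described above):

```python
import numpy as np

def numerical_rank(M, tol=1e-6):
    """Number of singular values of M larger than tol."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

# A matrix whose exact rank is 3 but whose numerical rank is 2:
M = np.diag([1.0, 0.5, 1e-9])
assert numerical_rank(M) == 2
```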

4.2.
Convergence of the algorithm. We first show how to decide that A is not completely positive separable, and then prove the asymptotic convergence of Algorithm 4.1.
Theorem 4.2. Let A ∈ K_{p,q} with its identifying vector a as in (21). Then Algorithm 4.1 has the following properties: (1) If (30) is infeasible for some k, then A is not completely positive separable, i.e., A ∉ CP_{p,q}. (2) If A ∉ CP_{p,q}, then (30) is infeasible when k is big enough. (3) If A ∈ CP_{p,q}, then for almost all generic F, we can asymptotically get a flat extension of a by solving the hierarchy of (30).

Proof. Since R[x, y]_𝔼 is ∆-full for the 𝔼 and ∆ given in (20) and (25), respectively, the conclusions can be deduced from Nie [21, Section 5].
Next, we investigate when Algorithm 4.1 converges within finitely many steps. Recall that the set ∆ is given in (25). Denote the polynomial cone

P_6(∆) := {f ∈ R[x, y]_6 : f(x, y) ≥ 0 for all (x, y) ∈ ∆}.

Since ∆ is compact, its dual cone is

R_6(∆) := {z ∈ R^{M[x,y]_6} : z admits a ∆-representing measure}.

Consider the optimization problem

max ⟨a, f⟩  s.t. F − f ∈ P_6(∆), f ∈ R[x, y]_𝔼.    (32)

Then the dual problem of (32) is

min ⟨F, z⟩  s.t. z|_𝔼 = a, z ∈ R_6(∆).    (33)
Assumption 4.3. The problem (32) has a maximizer f^* such that the polynomial f̃ := F − f^* belongs to I(h) + Q(g) and has finitely many critical zeros on ∆.

As shown in [23], if f is nonnegative on ∆, then we often have f ∈ I(h) + Q(g) under some general optimality conditions. So, Assumption 4.3 is generically satisfiable. Now we prove the finite convergence of Algorithm 4.1 under Assumption 4.3.
Theorem 4.4. Let A ∈ CP_{p,q} and a be as in (21). Suppose F ∈ int(Σ[x, y]_6), f^* is a maximizer of (32), and Assumption 4.3 holds. Then, for all k sufficiently large, if w^{*,k} is a minimizer of (30), the condition (27) must be satisfied for some t ≥ 2.
Proof. Since F ∈ int(Σ[x, y]_6), the feasible set of (31) has an interior point. Then, for all k ≥ 3, (30) is feasible and has a minimizer, and the optimization problems (30) and (31) have the same optimal value. By Assumption 4.3, f̃ := F − f^* ∈ I(h) + Q(g) = ∪_{k∈N} (I_{2k}(h) + Q_{2k}(g)), so there exists k_1 such that f̃ ∈ I_{2k_1}(h) + Q_{2k_1}(g). So, for all k ≥ k_1, f^* is a maximizer of (31), and

⟨F, w^{*,k}⟩ = ⟨a, f^*⟩.

This implies that ⟨f̃, w^{*,k}⟩ = 0 for all k ≥ k_1. The strong duality holds between (32) and (33), because F ∈ int(Σ[x, y]_6). Since A ∈ CP_{p,q}, a admits a ∆-measure and (33) must have a minimizer z^*. Then, we have

⟨f̃, z^*⟩ = ⟨F, z^*⟩ − ⟨f^*, z^*⟩ = ⟨a, f^*⟩ − ⟨a, f^*⟩ = 0 = ∫ f̃ dµ,

where µ is a ∆-measure for z^*. Note that f̃ is nonnegative on ∆. This implies that the minimum value of f̃ on ∆ is zero. Consider the polynomial optimization problem

min_{(x,y)∈R^p×R^q} f̃(x, y)  s.t. h(x, y) = 0, g(x, y) ≥ 0.    (34)

The k-th order SOS relaxation for (34) is

max γ  s.t. f̃ − γ ∈ I_{2k}(h) + Q_{2k}(g).    (35)

Its dual problem is

min ⟨f̃, w⟩  s.t. w_1 = 1, L_{h_i}^{(k)}(w) = 0 (i = 1, 2), M_k(w) ⪰ 0, L_{g_j}^{(k)}(w) ⪰ 0 (j = 1, . . . , p + q).    (36)

Let f_{1,k} denote the optimal value of (35). Since f̃ ∈ I_{2k_1}(h) + Q_{2k_1}(g), we have f_{1,k} ≥ 0 for all k ≥ k_1. On the other hand, the minimum value of f̃ on ∆ is 0, so f_{1,k} ≤ 0 for all k. Hence,

f_{1,k} = 0 for all k ≥ k_1.

This implies that the problem (35) achieves its optimal value for k ≥ k_1 and the sequence {f_{1,k}} has finite convergence. By Assumption 4.3, f̃ has finitely many critical zeros on ∆ and Assumption 2.1 in [20] for (34) is satisfied. Hence, by the flat truncation results of Nie [20], the condition (27) is satisfied for all k sufficiently large.
By Theorem 4.4, Algorithm 4.1 is very likely to have finite convergence. Indeed, this always happened in our numerical experiments.

5. Numerical experiments.
In this section, we present some numerical experiments of checking the complete positive separability of matrices by Algorithm 4.1. The computation is implemented on a Lenovo laptop with an Intel Core i5-3210M CPU@2.50 GHz and 4.0 GB RAM, using MATLAB R2014a. We use the software GloptiPoly 3 [13] and SeDuMi [30] to solve the semidefinite relaxations (30). Only 4 decimal digits are displayed for the computational results.

Example 5.2. Consider the matrix

A = (e_1 e_2^T + e_2 e_1^T) ⊗ (e_1 e_2^T + e_2 e_1^T) + (e_1 e_3^T + e_3 e_1^T) ⊗ (e_1 e_3^T + e_3 e_1^T) + (e_2 e_3^T + e_3 e_2^T) ⊗ (e_2 e_3^T + e_3 e_2^T),

where e_i denotes the i-th standard unit vector in R^3. The semidefinite relaxation (30) is infeasible for k = 3, so A is not completely positive separable, i.e., A ∉ CP_{3,3}. It takes about 8 seconds.
Example 5.3. Consider the Hilbert matrix A ∈ S^6, where A_{ij} = 1/(i + j − 1). It was shown in [2] that every Hilbert matrix is completely positive, i.e., A ∈ CP_6. (i) Firstly, we check whether A ∈ CP_{3,2} or not. By Algorithm 4.1, we got A ∈ CP_{3,2} and obtained a CPS-decomposition. It takes about 7 seconds, and the rank of the moment matrix is 6.
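The Hilbert matrix of this example is easy to reproduce and check for double nonnegativity (a numpy sketch, not part of the experiment itself):

```python
import numpy as np

# 6x6 Hilbert matrix: A_ij = 1/(i + j - 1), with 1-based indices.
n = 6
A = np.array([[1.0 / (i + j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

# Doubly nonnegative: entrywise positive and positive definite.
assert np.all(A > 0)
assert np.all(np.linalg.eigvalsh(A) > 0)
```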
(ii) Next, we check whether A ∈ CP_{2,3} or not. By Algorithm 4.1, we got A ∈ CP_{2,3} and obtained a CPS-decomposition.

Example 5.4. Consider the Pascal matrix A ∈ S^4. It is easy to check that A is a doubly nonnegative matrix of order 4, so A ∈ CP_4 = CP_{1,4} = CP_{4,1}. Indeed, all Pascal matrices A ∈ S^n are completely positive matrices [18].
(i) Firstly, we verify that A ∈ CP_{4,1} by Algorithm 4.1. We got A ∈ CP_{4,1} and obtained a CPS-decomposition. It takes about 28 seconds.
(ii) Next, we check whether A ∈ CP_{2,2} or not. By Algorithm 4.1, the semidefinite relaxation (30) is infeasible for k = 3, so A is not completely positive separable, i.e., A ∉ CP_{2,2}. It takes about 4 seconds.
Example 5.5. Consider the Lehmer matrix A ∈ S^n, where A_{ij} = min{i, j}/max{i, j}. It was shown in [18] that all Lehmer matrices are completely positive. Here, we consider the case n = 6.
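Likewise, the Lehmer matrix can be generated and checked in a few lines (numpy sketch):

```python
import numpy as np

# n x n Lehmer matrix: A_ij = min(i, j) / max(i, j), here n = 6.
n = 6
A = np.array([[min(i, j) / max(i, j) for j in range(1, n + 1)]
              for i in range(1, n + 1)])

# The Lehmer matrix is entrywise positive and positive definite.
assert np.all(A > 0)
assert np.all(np.linalg.eigvalsh(A) > 0)
```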
(i) Firstly, we verify that A ∈ CP_{6,1} by Algorithm 4.1. We got A ∈ CP_{6,1} and obtained a CPS-decomposition. It takes about 1363 seconds, and the rank of the moment matrix is 13.
(ii) Next, we check whether A ∈ CP_{2,3} or not. By Algorithm 4.1, the semidefinite relaxation (30) is infeasible for k = 3, so A is not completely positive separable, i.e., A ∉ CP_{2,3}. The computational time is around 6 seconds.
Example 5.6. Consider the matrix A in the space K_{2,3}. Obviously, A is completely positive separable. By Algorithm 4.1, we got a CPS-decomposition. It takes about 10 seconds.
In the following, we consider some randomly generated separable matrices.
Example 5.7. Consider A = Σ_{i=1}^6 (u_i u_i^T) ⊗ (v_i v_i^T), where the nonnegative vectors u_i ∈ R^p_+ and v_i ∈ R^q_+, i = 1, . . . , 6, are randomly generated. Clearly, A is completely positive separable. By Algorithm 4.1, we get a CPS-decomposition. It takes about 140 seconds, and the rank of the moment matrix is 6. The computed CPS-decomposition is the same as the input one, up to a permutation and scaling of the u_i, v_i. That is, there exist real numbers τ_{i,j}, with i = 1, . . . , 6 and j = 1, 2, such that each |τ_{i,1} τ_{i,2}| = 1 and the i-th computed pair equals (τ_{i,1} u_{σ(i)}, τ_{i,2} v_{σ(i)}). In the above, the permutation vector is σ = (2, 3, 1, 5, 4, 6).

Example 5.8. Consider another randomly generated separable matrix A = Σ_{i=1}^4 (u_i u_i^T) ⊗ (v_i v_i^T). Clearly, A is completely positive separable. By Algorithm 4.1, we got a CPS-decomposition. It takes about 613 seconds, and the rank of the moment matrix is 4. The computed CPS-decomposition is the same as the input one, up to a scaling of the u_i, v_i. That is, there exist real numbers τ_{i,j}, with i = 1, . . . , 4 and j = 1, 2, such that each |τ_{i,1} τ_{i,2}| = 1.

6.

Summary and discussions. In this paper, we introduce completely positive separable matrices, which have wide applications in biquadratic polynomial optimization problems. A semidefinite algorithm is proposed for checking whether a matrix is CPS or not. If it is not CPS, a certificate of this fact can be obtained; if it is CPS, a CPS-decomposition can be obtained. Computational results show that the algorithm is efficient at checking CPS. As mentioned in Section 1, completely positive separable matrices have important applications in biquadratic polynomial optimization, which arises from the strong ellipticity condition problem in solid mechanics (for p = q = 3) [28, 29, 32], the entanglement problem in quantum physics [5, 10], as well as the best rank-one tensor approximation [26, 27]. In practice, it is often desirable to solve the biquadratic polynomial optimization problem with nonnegative variable constraints, stated as

v^*_BIQ := min Σ_{1≤i,k≤p, 1≤j,l≤q} b_{ijkl} x_i y_j x_k y_l  s.t. x^T x = 1, y^T y = 1, x ∈ R^p_+, y ∈ R^q_+.    (37)

Without loss of generality, we assume that the coefficients b_{ijkl} satisfy the symmetry property, i.e., b_{ijkl} = b_{kjil} = b_{ilkj} for i, k = 1, . . . , p and j, l = 1, . . . , q.
Let B ∈ S^{pq} be the symmetric matrix with elements B_{(i−1)q+j,(k−1)q+l} = b_{ijkl}. Then we have

Σ_{1≤i,k≤p, 1≤j,l≤q} b_{ijkl} x_i y_j x_k y_l = ⟨B, xx^T ⊗ yy^T⟩.
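The identity relating the biquadratic form and the matrix B can be spot-checked numerically (random data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3
B = rng.standard_normal((p * q, p * q))
B = (B + B.T) / 2                        # make B symmetric
x = rng.standard_normal(p)
y = rng.standard_normal(q)

# Since x x^T ⊗ y y^T = (x ⊗ y)(x ⊗ y)^T, the biquadratic form equals
# the matrix inner product <B, x x^T ⊗ y y^T>.
xy = np.kron(x, y)
lhs = xy @ B @ xy
rhs = np.sum(B * np.kron(np.outer(x, x), np.outer(y, y)))
assert np.isclose(lhs, rhs)
```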
Clearly, the problem (37) is equivalent to

min ⟨B, xx^T ⊗ yy^T⟩  s.t. ⟨I_{pq}, xx^T ⊗ yy^T⟩ = 1, x^T x = 1, y^T y = 1, x ∈ R^p_+, y ∈ R^q_+,    (38)

where I_{pq} ∈ S^{pq} is the identity matrix. Then, the relaxation problem of (38) can be obtained as follows:

v^*_CPS = min ⟨B, Z⟩  s.t. ⟨I_{pq}, Z⟩ = 1, Z ∈ CP_{p,q}.    (39)
It is clear that v^*_BIQ ≥ v^*_CPS. In fact, we can prove that the problems (37) and (39) have the same optimal value, i.e., v^*_BIQ = v^*_CPS.

Theorem 6.1. Let ∆ be as in (25) and let Z^* ∈ CP_{p,q} be an optimal solution of the problem (39). Then there exists (x^*, y^*) ∈ ∆ such that ⟨B, Z^*⟩ = ⟨B, x^*(x^*)^T ⊗ y^*(y^*)^T⟩; that is, we have v^*_BIQ = v^*_CPS.

Proof. Since Z^* ∈ CP_{p,q} is an optimal solution of the problem (39), we always have ⟨B, Z^*⟩ ≤ ⟨B, xx^T ⊗ yy^T⟩ for any (x, y) ∈ ∆. In the following, we show that there exists a vector (x^*, y^*) ∈ ∆ such that ⟨B, Z^*⟩ ≥ ⟨B, x^*(x^*)^T ⊗ y^*(y^*)^T⟩.