ON BOUNDS OF THE PYTHAGORAS NUMBER OF THE SUM OF SQUARE MAGNITUDES OF LAURENT POLYNOMIALS

Abstract. This paper presents lower and upper bounds for the Pythagoras number of sums of square magnitudes of Laurent polynomials (sosm-polynomials). To prove these bounds, properties of the corresponding system of quadratic polynomial equations are used. Applying this method, a new proof of the best known upper bound for the Pythagoras number of real polynomials is also presented.


THANH HIEU LE AND MARC VAN BAREL
The positive integer π(f) := min{r ∈ N : f is a sum of r squares} is called the Pythagoras number or the length of f [4,6,19]. It is well known that a polynomial f is an sos-polynomial if and only if there exists a positive semidefinite real symmetric matrix F such that f can be expressed as

f(x) = v_d(x)^T F v_d(x),

where v_d(x) is the column vector of all monomials x^α = x_1^{α_1} ··· x_n^{α_n} in R[x]_{n,d}.
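As an illustration of this Gram-matrix characterization, the following sketch (not from the paper; all names are ours) builds a random PSD matrix F for n = 1, d = 2 and recovers an explicit sos representation from its eigendecomposition:

```python
import numpy as np

# Sketch (not from the paper): f(x) = v_d(x)^T F v_d(x) with F PSD.
# The eigendecomposition F = sum_i w_i u_i u_i^T, w_i >= 0, turns f into
# the explicit sum of squares f = sum_i w_i * (u_i^T v_d(x))^2.
rng = np.random.default_rng(0)

def v(x, d=2):
    # v_d(x) for n = 1: the monomials 1, x, ..., x^d
    return np.array([x**k for k in range(d + 1)])

B = rng.standard_normal((3, 3))
F = B @ B.T                                  # a random PSD Gram matrix

w, U = np.linalg.eigh(F)                     # eigenvalues w_i >= 0
x = 0.7
f_gram = v(x) @ F @ v(x)                     # f evaluated via the Gram matrix
f_sos = sum(w[i] * (U[:, i] @ v(x))**2 for i in range(3))  # as a sum of squares
assert abs(f_gram - f_sos) < 1e-10
```

Here the number of squares equals rank(F); bounding this rank is exactly what the results below are about.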
To define sum of square magnitude (sosm) polynomials, we need the following notation. Let C[x]_{n,d} denote the set of all complex n-variable polynomials of degree at most d. In this case the polynomials are considered on the n-torus T^n := {z ∈ C^n : |z_i| = 1, i = 1, ..., n}.
The set Σ′(n,d) of sum of square magnitude (sosm) polynomials in n variables of degree d is defined as

Σ′(n,d) := { g : g(x) = Σ_{i=1}^r |q_i(x)|², ∀x ∈ T^n; q_i ∈ C[x]_{n,d}, i = 1, ..., r; r ∈ N }.
We note that no formula for the cardinality k = |Γ′(n,d)| has appeared in the literature. To formulate the following theorems on lower and upper bounds for the Pythagoras numbers of Σ(n,d) and Σ′(n,d), we write ê = |Ω(n,d)|. Lower and upper bounds for the Pythagoras number of either sos-polynomials or sosm-polynomials are given by the following theorems.
Theorem 2. For any g ∈ Σ′(n,d), we have

ê − √(ê² − k) ≤ π(g) ≤ min{ ê, U(n,d) }.

The paper is organized as follows. Section 2 summarizes some important properties of the cones of positive semidefinite real symmetric and complex Hermitian matrices, which will be used in subsequent sections. The bounds in Theorem 1 were given in [6] with the corresponding proof. Section 3 presents a new proof of the upper bound U(n,d); the key result behind the proofs of the upper bound for both sos- and sosm-polynomials, Proposition 3, is also given in that section. Section 4 deals with the proof of Theorem 2. In Section 5, a formula is derived for k = |Γ′(n,d)| with n = 2, 3, ..., 6. Some examples for different values of n and d are also presented, showing that U(n,d) can be less than e(n,d) for certain values of n and d, and vice versa. Section 6 gives the conclusions.
2. Cones of positive semidefinite matrices. This section summarizes some properties of the cones S^µ_+ and H^µ_+ of real symmetric and complex Hermitian positive semidefinite µ × µ matrices, respectively. The results in this section are well-known in the literature and are listed here without proofs.

Proposition 1. [5,10] The cones S^µ_+ and H^µ_+ are
• proper, i.e., they are closed, convex, have nonempty interior (solid), and contain no line (pointed);
• self-dual.
The following proposition summarizes the fact that both the space S^µ of real symmetric matrices and the space H^µ of complex Hermitian matrices can be identified with a suitable real Hilbert space. Throughout this paper, unless otherwise stated, ⟨·, ·⟩ denotes either the "trace" inner product ⟨A, B⟩ = Trace(A^H B) of matrices or the standard inner product on C^r. We use the notation (x_ij)_{condition on indices i and j} for a vector containing the elements x_ij in which the index j varies faster than the index i.

Proposition 2. i) (See, e.g., [9]) Suppose S^µ is endowed with the "trace" inner product ⟨·, ·⟩ and R^{µ(µ+1)/2} is endowed with the inner product ⟨x, y⟩_D = x^T D y, where D = diag(d_11, d_12, ..., d_1µ, ..., d_µµ) is the diagonal matrix with d_ii = 1 and d_ij = 2 for 1 ≤ i < j ≤ µ. Then the space S^µ is isometrically isomorphic to R^{µ(µ+1)/2} under the map that sends each matrix [a_ij] ∈ S^µ to the vector (a_11, a_12, ..., a_1µ, a_22, ..., a_µµ)^T.
ii) (See, e.g., [10]) The space H^µ is isometrically isomorphic to R^{µ²}, endowed with the standard inner product, under the map that sends each matrix [a_ij] ∈ H^µ to the following vector in R^{µ²}:
(a_11, √2 Re(a_12), √2 Im(a_12), ..., a_22, √2 Re(a_23), √2 Im(a_23), ..., a_µµ)^T.
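The isometry in part i) can be checked numerically; the sketch below (ours, assuming the row-wise upper-triangular vectorization stated above) verifies that the trace inner product on S^µ agrees with the D-weighted inner product on the vectorized matrices:

```python
import numpy as np

# Sketch: <A, B> = Trace(A^T B) on symmetric matrices equals the weighted
# inner product <a, b>_D = a^T D b on the upper-triangular vectorizations,
# with weight 1 on diagonal entries and 2 on off-diagonal ones.
rng = np.random.default_rng(1)
mu = 4

def sym_vec(A):
    # (a_11, a_12, ..., a_1mu, a_22, ..., a_mumu): upper triangle, row-wise
    return A[np.triu_indices(len(A))]

A = rng.standard_normal((mu, mu)); A = A + A.T
B = rng.standard_normal((mu, mu)); B = B + B.T

rows, cols = np.triu_indices(mu)
d = np.where(rows == cols, 1.0, 2.0)          # the diagonal of D
lhs = np.trace(A.T @ B)                       # trace inner product on S^mu
rhs = np.sum(d * sym_vec(A) * sym_vec(B))     # D-weighted product on R^{mu(mu+1)/2}
assert abs(lhs - rhs) < 1e-10
```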

For any A = X + ıY, B = U + ıV ∈ H^µ, with X, U symmetric and Y, V skew-symmetric, one has ⟨A, B⟩ = ⟨X, U⟩ + ⟨Y, V⟩, where the corresponding vectors ỹ, ṽ of the skew-symmetric matrices Y, V are defined, respectively, via a map analogous to the one in Proposition 2 i).

3. Upper bounds on the Pythagoras number of sos-polynomials. In this section, we give a new proof for the upper bound of the Pythagoras number of sos-polynomials given in Theorem 1. Some lower and upper bounds on the Pythagoras number of such polynomials were also presented in [19,4]. The upper bound U(n,d) in Theorem 1 is the sharpest known so far and was given in [6]. The authors proved it using the "method of cages", which is based on the Newton polytope of the sets of exponents Ω(n,d) and Γ(n,d). A polynomial f ∈ R[x]_{n,2d} can always be represented as a linear combination of monomials. Moreover, if it is a sum of squares of polynomials f_i ∈ R[x]_{n,d}, then each of its coefficients can be expressed as a quadratic polynomial in the coefficients of the f_i's. Each of these quadratic polynomials is called a "vectorial quadratic form" [3]. Given the coefficients of f, determining the coefficients of the polynomials f_i is equivalent to solving a system of quadratic equations.
Theorem 1 is a direct consequence of Theorems 4.4 and 6.1 in [6]. It says that every sum-of-squares polynomial can be expressed as a sum of at most U(n,d) squares, where ⌊·⌋ denotes the integer part of a real number. We now prove the upper bound using the theory of systems of "vectorial quadratic form" equations.
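The quantitative content of the rank reduction used below can be sketched as follows (an illustration of ours, not the paper's code): if a PSD matrix satisfying l linear equations exists, then one of rank r with r(r+1)/2 ≤ l exists as well, and ⌊(√(8l+1)−1)/2⌋ is exactly the largest such r:

```python
import math

# Sketch: the rank bound behind U(n, d).  From r(r+1)/2 <= l one gets
# r <= floor((sqrt(8l+1) - 1)/2); we check that this formula returns the
# largest integer r satisfying the triangular-number inequality.
def rank_bound(l: int) -> int:
    return (math.isqrt(8 * l + 1) - 1) // 2

for l in range(1, 200):
    r = rank_bound(l)
    # r is feasible, and r + 1 already is not
    assert r * (r + 1) // 2 <= l < (r + 1) * (r + 2) // 2
```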

3.1. A new proof of the upper bound U(n,d) of Theorem 1. We first recall some facts about vectorial quadratic forms from [3]. One can view each m × r real matrix H as an m-tuple of vectors h_1, ..., h_m ∈ R^r (its rows), i.e., H^T = [h_1 ··· h_m]. Given a symmetric matrix Q of order m, the associated vectorial quadratic form is the map q : R^{m×r} → R defined by q(H) = ⟨Q, HH^T⟩.
Notice that the (i,j) entry of HH^T is ⟨h_i, h_j⟩ for all i, j = 1, ..., m. Before giving our proof, we recall the following result from [3].
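A small numerical check of these two facts (our sketch; the matrices are random):

```python
import numpy as np

# Sketch: a vectorial quadratic form q(H) = <Q, H H^T>, and the fact that
# the (i, j) entry of H H^T is the inner product <h_i, h_j> of rows of H.
rng = np.random.default_rng(2)
m, r = 4, 3
Q = rng.standard_normal((m, m)); Q = Q + Q.T     # a symmetric matrix
H = rng.standard_normal((m, r))

G = H @ H.T
q_H = np.trace(Q.T @ G)                          # q(H) = <Q, H H^T>
for i in range(m):
    for j in range(m):
        assert abs(G[i, j] - H[i] @ H[j]) < 1e-12
```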
Proposition 3. Suppose Q_1, ..., Q_l are symmetric matrices of order n and a_1, ..., a_l are real numbers. If a positive semidefinite matrix X exists such that ⟨Q_i, X⟩ = a_i for i = 1, ..., l, then there exists a positive semidefinite matrix X_0 satisfying the l equations above and rank(X_0)(rank(X_0) + 1) ≤ 2l.

Now, suppose f is an sos-polynomial in n real variables and of degree 2d, say f = Σ_{i=1}^r p_i². Suppose furthermore that f is expressed in the classical basis as f(x) = Σ_{γ∈Γ(n,d)} f_γ x^γ. Let V be the matrix whose columns are the coefficient vectors of the p_i's.
To apply Proposition 3, we define a vectorial quadratic form as follows. For each γ ∈ Γ(n,d), let Q_γ ∈ S^ê be the symmetric matrix whose (α,β) entry is nonzero exactly when α + β = γ, and let the corresponding vectorial quadratic form q_γ : R^{ê×r} → R be defined by q_γ(H) = ⟨Q_γ, HH^T⟩, so that q_γ(V) = f_γ. From (3), (4) and (5), it follows that the associated matrix VV^T of f satisfies the hypothesis of Proposition 3, and hence a positive semidefinite matrix X_0 exists such that

rank(X_0) ≤ ⌊(√(8â + 1) − 1)/2⌋ = U(n,d),

where â = |Γ(n,d)| is the number of equations.

The conclusion is obtained from the fact that any factorization X_0 = WW^T, with W ∈ R^{ê×s} and s = rank(X_0), yields a representation of f as a sum of s squares.

Remarks.
Remark 1. Results on the facial structure of linear and semidefinite programs [17,16,18] also give an upper bound on the Pythagoras number of real polynomials, but it is not as sharp as U(n,d). Several nice properties of faces of the cone of positive semidefinite matrices can be found in [20,18,17,2,10,1,16]. This weaker upper bound is derived by considering the following primal and dual semidefinite programs, respectively,

min { ⟨C, X⟩ : ⟨Q_γ, X⟩ = f_γ, γ ∈ Γ(n,d), X ⪰ 0 }    (6)

and

max { Σ_{γ∈Γ(n,d)} f_γ y_γ : C − Σ_{γ∈Γ(n,d)} y_γ Q_γ ⪰ 0 },    (7)

where C ∈ S^ê. Pataki [16,17,18] proved a rank inequality for any feasible point X (of rank r) of the primal semidefinite program (6), formulated in terms of the smallest face F of the feasible set containing X. This certainly gives a weaker upper bound than the one given in Theorem 1.

Remark 2. In [3] it is shown that there always exists a positive definite matrix C for which the relevant inequality holds for all {x_γ}_{γ∈Γ(n,d)} ⊂ R. A consequence of the existence of such a C is that both the primal and the dual semidefinite programs (6) and (7) have an optimal solution. The key fact is the following.

Proposition 4. The matrices {Q_γ}_{γ∈Γ(n,d)} are linearly independent.

Proof. Notice that for γ, γ′ ∈ Γ(n,d) with γ ≠ γ′, any α, α′, β, β′ ∈ Ω(n,d) such that α + β = γ and α′ + β′ = γ′ satisfy (α, β) ≠ (α′, β′). This implies that no nonzero entry of the matrix Q_γ occupies the same position as a nonzero entry of Q_{γ′}. This gives the conclusion of the proposition.
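The disjoint-support argument can be illustrated in the univariate case (a toy instance of ours; the normalization (Q_γ)_{αβ} = 1 for α + β = γ is an assumption made for the demonstration):

```python
import numpy as np
from itertools import product

# Sketch (univariate toy case): the matrices Q_gamma with
# (Q_gamma)_{alpha,beta} = 1 iff alpha + beta = gamma have pairwise
# disjoint supports, hence are linearly independent.
d = 3
exponents = range(d + 1)                  # Omega(1, d) = {0, ..., d}
Qs = {}
for g in range(2 * d + 1):                # Gamma(1, d) = {0, ..., 2d}
    Q = np.zeros((d + 1, d + 1))
    for a, b in product(exponents, repeat=2):
        if a + b == g:
            Q[a, b] = 1.0
    Qs[g] = Q

# disjoint supports: no position is nonzero in two different Q_gamma
supports = [set(zip(*np.nonzero(Q))) for Q in Qs.values()]
for i in range(len(supports)):
    for j in range(i + 1, len(supports)):
        assert not (supports[i] & supports[j])

# hence linear independence: the vectorized Q_gamma have full rank
M = np.array([Q.ravel() for Q in Qs.values()])
assert np.linalg.matrix_rank(M) == len(Qs)
```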

4. Bounds on the Pythagoras number of sosm-polynomials. We start this section with the following proposition, which allows us to consider only polynomials that are sums of square magnitudes of linearly independent polynomials. Note that in a sosm-representation of a polynomial, if one sosm-term polynomial is a linear combination of the others, its square magnitude is not necessarily a linear combination of the other square magnitudes.
Proposition 5. If g(z) is a sum of r square magnitudes of polynomials q_i(z) ∈ C[z]_{n,d}, i = 1, ..., r, and the coefficient vectors of the polynomials q_i(z) are linearly dependent, then g can be expressed as a sum of at most ê square magnitudes of linearly independent polynomials.
Proof. Suppose g(z) = Σ_{i=1}^r |q_i(z)|². Then g has a matrix representation (see, e.g., [13])

g(z) = v_d(z)^H (Ḡ G^T) v_d(z),

where q_i denotes the column vector of coefficients of the polynomial q_i(z), G = [q_1, ..., q_r], and Ḡ is the element-wise conjugate of G. Since the sosm-term polynomials q_i(z) are linearly dependent, rank(G) = s < r. Applying the Cholesky factorization, we have Ḡ G^T = L̄ L^T, where L ∈ C^{ê×s} is lower triangular of rank s. We obtain the new representation g(z) = v_d(z)^H (L̄ L^T) v_d(z). This implies that the polynomial is a sum of s ≤ ê square magnitudes of polynomials, and these sosm-term polynomials are linearly independent.
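The proof can be illustrated numerically in the univariate case (our sketch; an eigendecomposition stands in for the Cholesky factorization used above, and all dimensions are chosen for the example):

```python
import numpy as np

# Sketch of Proposition 5 (univariate): if the coefficient vectors of the
# sosm terms are linearly dependent, factoring the rank-s Gram matrix yields
# a representation with only s linearly independent terms.
rng = np.random.default_rng(3)
e_hat = 4                                  # number of coefficients per polynomial
Q = rng.standard_normal((e_hat, 2)) + 1j * rng.standard_normal((e_hat, 2))
Q = np.hstack([Q, Q[:, :1] + Q[:, 1:2]])   # third term dependent: rank(Q) = 2

def monos(z):
    return np.array([z**k for k in range(e_hat)])   # v_d(z), n = 1

def g_value(cols, z):
    return sum(abs(c @ monos(z))**2 for c in cols.T)  # sum of square magnitudes

M = Q.conj() @ Q.T                         # Gram matrix: Hermitian PSD, rank 2
w, U = np.linalg.eigh(M)
keep = w > 1e-10
L = U[:, keep] * np.sqrt(w[keep])          # M = L L^H, L has s = 2 columns
assert L.shape[1] == 2

z = np.exp(1j * 0.4)                       # a point on the torus T^1
old = g_value(Q, z)                        # 3 dependent terms
new = sum(abs(L[:, j].conj() @ monos(z))**2 for j in range(L.shape[1]))  # 2 terms
assert abs(old - new) < 1e-8
```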
Because of Proposition 5, one can assume in the rest of this paper that the sosm-term polynomials of a sosm-polynomial are linearly independent. Now, suppose g(z) is a sum of r square magnitudes of polynomials with a matrix representation as in (9); note that its sosm-term polynomials are linearly independent. For each i = 1, ..., r, denote by q_αi the α-th coefficient of the polynomial q_i(z). Identifying the coefficients of g(z) in the matrix representation (9) and in the canonical-basis representation (10), we get

Σ_{α,β∈Ω(n,d), β−α=γ} Σ_{i=1}^r q̄_αi q_βi = g_γ, ∀γ ∈ Γ′(n,d).
One also notices from the matrix representation of sosm-polynomials that the Pythagoras number of sosm-polynomials is bounded above by ê, i.e., r ≤ ê. We now give another upper bound for the Pythagoras number of such polynomials in the next subsection.
4.1. The upper bound. In this subsection, we convert the system of complex quadratic equations (12) to one of real quadratic equations. Then we apply Proposition 3 to obtain an upper bound for sosm-polynomials.
By Proposition 3, there is a matrix A_0 ∈ S^{2ê}_+ satisfying the conditions above whose rank s satisfies s ≤ U(n,d). Finally, let A_0 = A_1 A_1^T be the Cholesky decomposition of A_0. Then A_1 ∈ R^{2ê×s} with rank(A_0) = rank(A_1) = s. Let X_0 and Y_0 be the matrices formed by the first and last ê rows of A_1, respectively. Then A_0 = sym(X_0, Y_0). The matrix G_0 = X_0 + ıY_0 ∈ C^{ê×s} is also an associated matrix of the polynomial g(z), and if s ≤ r then we obtain a representation of g with at most s square magnitudes. The second inequality can be found in [15].
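A plausible reading of the map sym(X, Y) can be checked numerically; the block below embeds H = X + ıY as the real symmetric matrix [[X, −Y], [Y, X]] (that this is the paper's sym is our assumption) and verifies that the embedding preserves positive semidefiniteness:

```python
import numpy as np

# Sketch (assumption about sym): a Hermitian matrix H = X + iY embeds into
# the real symmetric matrix [[X, -Y], [Y, X]] of twice the order; the
# embedding is symmetric and keeps all eigenvalues nonnegative when H is PSD.
rng = np.random.default_rng(4)
e_hat = 3
B = rng.standard_normal((e_hat, e_hat)) + 1j * rng.standard_normal((e_hat, e_hat))
H = B @ B.conj().T                       # Hermitian PSD
X, Y = H.real, H.imag

S = np.block([[X, -Y], [Y, X]])          # the real embedding
assert np.allclose(S, S.T)               # real symmetric
assert np.linalg.eigvalsh(S).min() > -1e-8   # still PSD
```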
Proposition 6. If the Laurent polynomial g(z) of degree d is sosm on the n-torus T^n, then π(g) ≤ min{ê, U(n,d)}.

4.2. The lower bound.
To prove the lower bound, we need results from function theory and dimension theory; in particular, we are concerned with how dimension behaves under polynomial mappings. "Dimension" here means the dimension of topological spaces, see, e.g., [11,6]. More precisely, we say that a subset of R^µ has dimension µ if its interior is nonempty.
Below we prove that the set of sosm-polynomials can be embedded in the range of a polynomial mapping. Indeed, consider the polynomial mapping Φ of (14). Since for any g(z) ∈ Σ′(n,d) there exist two real matrices X, Y such that (13) is satisfied, Σ′(n,d) is isomorphic to a subset of Im(Φ). We will prove that int(Σ′(n,d)) ≠ ∅; then int(Im(Φ)) ≠ ∅ as well, and we can apply Proposition 7 to obtain the lower bound in Theorem 2. For any ε ∈ C and any α, β ∈ Ω(n,d), the corresponding perturbation polynomial is sosm. Let f(z) = |Γ′(n,d)| + 1 for all z ∈ T^n; it is certain that f is sosm. We prove that f ∈ int(Σ′(n,d)). Indeed, for any sufficiently small perturbation h, h belongs to the unit ball in R^k and the perturbed polynomial remains sosm. Proposition 7 then implies that r² − 2êr + k ≤ 0. Moreover, if g(z) is sosm, then from g(z) = v_d(z)^H A v_d(z), A = (a_αβ), we have

g_γ = Σ_{β−α=γ} a_αβ, ∀γ ∈ Γ′(n,d).
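The step from the quadratic inequality to the lower bound of Theorem 2 can be checked exhaustively for small parameters (our sketch):

```python
import math

# Sketch: the inequality r^2 - 2*e_hat*r + k <= 0 obtained above forces
# r >= e_hat - sqrt(e_hat^2 - k), the lower bound of Theorem 2.
def lower_bound(e_hat: int, k: int) -> float:
    return e_hat - math.sqrt(e_hat * e_hat - k)

# any r satisfying the quadratic inequality lies above the bound
for e_hat in range(2, 15):
    for k in range(1, e_hat * e_hat + 1):
        lb = lower_bound(e_hat, k)
        for r in range(0, 2 * e_hat + 1):
            if r * r - 2 * e_hat * r + k <= 0:
                assert r >= lb - 1e-9
```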
By the Fejér–Riesz theorem, the Pythagoras number of sosm-polynomials in one variable is one. In the case of two variables, one can prove that U(2,d) ≤ e(2,d) for all d ≥ 2; the same estimate for n = 3 gives U(3,d) ≤ e(3,d) for all d ≥ 2. Table 5 shows some values of n and small d for which e(n,d) < U(n,d).
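The univariate (Fejér–Riesz) case can be illustrated concretely (our example): the Laurent polynomial g(z) = 2 + z + z^{-1} is nonnegative on the unit circle and is a single square magnitude there, g(z) = |1 + z|²:

```python
import numpy as np

# Sketch of the univariate case: on |z| = 1 one has conj(z) = 1/z, so
# |1 + z|^2 = (1 + z)(1 + 1/z) = 2 + z + 1/z, i.e. g is one square
# magnitude and its Pythagoras number is 1.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
z = np.exp(1j * theta)

g = 2 + z + 1 / z                       # the Laurent polynomial on the circle
assert np.allclose(g.imag, 0)           # real-valued on T^1
assert np.all(g.real >= -1e-12)         # nonnegative there
assert np.allclose(g.real, np.abs(1 + z)**2)   # a single square magnitude
```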
Note that the conjectured formula for the Pythagoras number of sosm-polynomials given in [13] satisfies the bounds of Theorem 2; the upper bound turns out to be sharp.

6. Conclusion. A lower bound and a sharp upper bound for the Pythagoras number of sosm-polynomials were presented. These bounds are new and could be useful in practice, leading to a reduction in computational complexity when problems are considered over the cone of such polynomials. A new proof of the known upper bound for the Pythagoras number of sos-polynomials has also been presented.