PARTIAL STABILIZABILITY AND HIDDEN CONVEXITY OF INDEFINITE LQ PROBLEM

Abstract. Generalizations of linear system stability theory and LQ control theory are presented. It is shown that the partial stabilizability problem is equivalent to a linear matrix inequality (LMI). Also, the set of all initial conditions for which the system is stabilizable by an open-loop control (the stabilizability subspace) is characterized in terms of a semidefinite program (SDP). Next, we give a complete theory for an infinite-time horizon linear quadratic (LQ) problem with possibly indefinite weighting matrices on the state and control. Necessary and sufficient convex conditions are given for well-posedness as well as attainability of the proposed LQ problem. There is no prior assumption of a complete stabilizability condition, nor any assumption on the quadratic cost. A generalized algebraic Riccati equation is introduced, and it is shown that it provides all possible optimal controls. Moreover, we show that the solvability of the proposed indefinite LQ problem is equivalent to the solvability of a specific SDP problem.


1. Introduction. In the well-known case of the complete stabilizability condition, efficient tools are available for the numerical computation of the underlying Riccati equation of the optimal definite LQ control problem. Riccati equations have a great practical impact on real-world problems, especially on certain problems related to the theory of dissipative systems [7], robust control [13,16], and filtering and estimation [8,12,5].
The present study brings a novel treatment of the stability and control of linear systems that are not necessarily stabilizable. The theoretical contribution is a complete existence theory with necessary and sufficient conditions and without any prior complete stabilizability assumption on the system. Moreover, no convexity assumption is made on the proposed quadratic optimization problem, and indeed the control variable weight matrix may be singular. In fact, it is shown that such problems, termed here indefinite LQ problems, have a hidden convexity and can be cast as convex optimization problems. The term hidden convexity in connection with indefinite quadratic optimization problems has been used in the optimization literature, for example, in [9,23].
The pedagogical contribution of this work lies in the simplicity of the approach. Here, by appropriate problem formulations, all of the results are derived from first principles. The solutions are obtained by matrix algebra manipulations and/or tackled by convex optimization. Sharing the point of view of [24], and avoiding high-level analysis, this paper is self-contained and provides an accessible and complete approach to a general LQ control theory.
Since the innovative work [18] related to a definite linear quadratic problem in connection with the classical Riccati equation [14], much research has been devoted to the optimization of more general quadratic cost functionals. Such problems have a theoretical interest in their own right, but also have natural applicability in several fields of system theory. Certainly, the seminal paper [29] gives a quite complete account of the subject. However, there is a standing assumption on the positive definiteness of the weighting matrix on the control variable. This is imposed to ensure the existence of solutions to the proposed Riccati equations and thereby to provide the solution of the underlying optimal control problems. Various studies followed on from [29]; see for instance the tutorial paper [19] and the specialized paper [25]. However, there has been no simultaneous relaxation of the complete controllability or stabilizability conditions and of the definiteness of the quadratic cost. Moreover, most papers dealing with the singular LQ problem do not take into account the indefiniteness of the cost function; see for instance the tutorial paper [17] and [20,11]. Recently, in [27] semidefinite programming was proposed for solving a definite LQ problem. In fact, this seems to be a special case of our formulation in this paper. We would like to emphasize that this approach has been used before in a more general framework [1,2,3].
In [15], the stabilizability condition is dropped, but the definiteness of the quadratic cost is assumed and the optimal control law is not necessarily stabilizing. From the control point of view, without stability considerations, any optimality criterion for processes with a long duration may be meaningless. Our first focus is therefore a fundamental stability issue in linear systems. We provide a simple treatment of the partial stabilizability problem: Suppose there is given a fixed initial condition x_0 for the following finite-dimensional linear time-invariant system:

ẋ(t) = Ax(t) + Bu(t), x(0) = x_0, (1)

where A, B are real matrices of dimension n × n and n × n_u, respectively. Then under what conditions does there exist a control law u(·) such that the resulting trajectory x(·) is asymptotically zero?
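As a concrete illustration, the following sketch (a hypothetical two-state example, not taken from the paper) shows a system that is not stabilizable in the classical sense, yet is x_0-stabilizable for every x_0 in a proper subspace:

```python
import numpy as np

# Hypothetical example: the unstable mode (eigenvalue +1) is uncontrollable,
# so no control law stabilizes the system from every initial condition.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# PBH test at the unstable eigenvalue lambda = 1:
# rank [A - lambda*I, B] < n means the mode cannot be moved by feedback.
pbh = np.hstack([A - 1.0 * np.eye(2), B])
rank = np.linalg.matrix_rank(pbh)
print(rank)  # 1 < 2, so the system is not (completely) stabilizable

# Nevertheless, x1(t) = e^t * x1(0) regardless of u, so any x0 with
# x1(0) = 0 is x0-stabilizable (x2 decays on its own, even with u = 0).
# The stabilizability subspace here is span{(0, 1)}.
```

Here only the second state can be influenced, so partial stabilizability is the right notion: the answer to the question above depends on x_0.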
Of course the answer specializes to known results when the system is stabilizable in the classical sense.
Next, we focus on an optimality issue. For a given initial condition x_0 and a quadratic cost function, the set of admissible controls for System (1) is restricted to

U(x_0) = { u(·) ∈ L^2(IR^{n_u}) : lim_{t→+∞} x(t) = 0 }.

It can be shown that U(x_0) is a convex subset of L^2(IR^{n_u}) and, when it is nonempty, it necessarily contains linear state-feedback controls. Here, we associate to each u(·) ∈ U(x_0) the following quadratic (possibly indefinite) cost function

J_{x_0}(u(·)) = ∫_0^{+∞} ( x(t)^T Q x(t) + 2 x(t)^T L u(t) + u(t)^T R u(t) ) dt,

where L ∈ IR^{n×n_u}, and Q = Q^T ∈ IR^{n×n}, R = R^T ∈ IR^{n_u×n_u} are, without loss of generality, assumed to be symmetric. The optimal control problem under consideration is to minimize the cost function J_{x_0}(u(·)) over u(·) ∈ U(x_0). The value function V is defined as

V(x_0) = inf_{u(·) ∈ U(x_0)} J_{x_0}(u(·)).

The optimization problem is called a linear quadratic (LQ) problem and, since the quadratic form x^T Qx + 2x^T Lu + u^T Ru can be indefinite, it is also referred to as a possibly indefinite LQ problem. Moreover, we allow R to be singular, in contrast to the standing assumption R > 0 used commonly in the literature on indefinite LQ problems. In our case, the quadratic cost function J_{x_0}(u(·)) may involve singularities and, since it is possibly nonconvex, it may be unbounded from below or above.
In this paper there is no assumption that System (1) is stabilizable; that is, there may not exist a static state-feedback law which stabilizes System (1) for every x_0 ∈ IR^n. Consequently, for a given initial condition x_0, the set of admissible controls may be empty. The problem of whether or not U(x_0) is empty is addressed. Moreover, we provide a complete characterization of the set of all x_0 ∈ IR^n for which U(x_0) is nonempty, defined as the stabilizability subspace.
In this paper, we first develop preliminary results for the stability of the free System (1) (with u = 0). Then these results are applied to the stability synthesis problem. Specifically, it is shown that the stabilizability problem for a fixed initial condition can be formulated by equivalent convexity conditions, which in turn are expressed in terms of a linear matrix inequality (LMI). The set of all initial conditions for which the system is stabilizable by an open-loop control (the stabilizability subspace) is characterized by its projection operator. There is a deeper meaning behind the proposed formulation, which leads naturally to a procedure for the solution of a more general class of LQ problems than studied hitherto.
We study the infinite-time horizon LQ problem with possibly indefinite weighting matrices and without any complete stabilizability assumption on the system. Necessary and sufficient convex conditions are given for well-posedness as well as attainability of the proposed LQ problem. A generalized algebraic Riccati equation is introduced and it is shown that it provides all possible optimal controls. Moreover, we show that the solvability of the possibly indefinite LQ problem can be tackled by using semidefinite programming (SDP).
The remainder of the paper is organized as follows. Section 2 introduces some definitions and technical preliminaries and is devoted to the stability analysis. Section 3 deals with the stabilizability synthesis problem. In Section 4, we first provide a necessary and sufficient condition for the well-posedness of the LQ problem. Next, a generalized algebraic Riccati equation (GARE) is introduced and its connection to the attainability problem is analyzed. In Section 5, we give a complete treatment of the attainability problem via SDP. We show that, using the SDP formulation or the GARE, we obtain various characterizations of all optimal stabilizing state-feedback control laws. In Section 6, we provide some concluding remarks, and a new challenging problem is proposed.
Notation. We make use of the following notation. IR denotes the set of real numbers. M^T denotes the transpose of the real matrix M. M† denotes the Moore-Penrose pseudoinverse of the matrix M. Tr(M) is the sum of the diagonal elements of a square matrix M. For a real matrix M, M > 0 (resp. M ≥ 0) means M is symmetric and positive definite (resp. positive semidefinite). I denotes the identity matrix, with size determined from the context.

2. Stability Analysis.
Here we develop a simple stability theory of linear systems based on convex optimization. We stress that neither the modal analysis of linear systems nor the classical Lyapunov theorem (which does not apply to stability from a fixed initial condition) is used. The new results are derived from first principles. System (1) with u = 0 is called x_0-stable if the free trajectory starting from x(0) = x_0 vanishes at infinity. The following statements are equivalent:
(i) The System (1) is x_0-stable.
(ii) There exists a positive semidefinite matrix P ≥ 0 such that

AP + PA^T + x_0 x_0^T = 0. (5)

Proof. Let x(t) be the trajectory of System (1) associated with u = 0, x(0) = x_0. The implication (i) ⇒ (ii) is then straightforward by taking P = ∫_0^{+∞} x(s)x(s)^T ds in (5). We stress that ∫_0^{+∞} x(s)x(s)^T ds < +∞, since x(t) goes to zero and the expression of x(t) contains only products of polynomials and exponentials. The implication (ii) ⇒ (i) can be shown as follows. Let P ≥ 0 be a solution to (5) and define Φ(t) = e^{tA} P e^{tA^T}. Since any matrix commutes with its exponential, a simple calculation gives Φ̇(t) = −x(t)x(t)^T. Therefore, Φ(t) is decreasing as time goes to infinity, so the limit l(P) = lim_{t→+∞} Φ(t) exists, since Φ(t) is bounded from below by 0. In the forthcoming results the following set will be used:

S_0 = { x_0 ∈ IR^n : the free trajectory of System (1) with x(0) = x_0 vanishes at infinity }.
Remark 1. It is trivial that S_0 is a linear subspace of IR^n. Also, any free trajectory of System (1) belongs to S_0 whenever its initial condition belongs to S_0.
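To make the criterion concrete, here is a small numerical check (an illustrative instance; the equation AP + PA^T + x_0 x_0^T = 0 is the Lyapunov-type condition suggested by the construction P = ∫_0^∞ x(s)x(s)^T ds in the proof):

```python
import numpy as np

# A is not Hurwitz (one eigenvalue is +1), but x0 lies in the stable
# eigenspace, so the free trajectory x(t) = e^{-t} * (1, 0) decays.
A = np.array([[-1.0, 0.0],
              [0.0, 1.0]])
x0 = np.array([[1.0],
               [0.0]])

# Closed form of P = integral_0^inf x(s) x(s)^T ds = diag(1/2, 0).
P = np.diag([0.5, 0.0])

residual = A @ P + P @ A.T + x0 @ x0.T
print(np.abs(residual).max())  # 0.0: the semidefinite P certifies x0-stability
```

Note that the certificate exists even though the classical Lyapunov theorem fails here (A has an unstable eigenvalue); it is the fixed initial condition that makes the criterion work.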
Along the same line of reasoning, a useful generalization of the previous result can be derived as follows.
(ii) There exists a positive semidefinite matrix P ≥ 0 such that

The following result can be viewed as a generalization of the classical Lyapunov theorem.
Corollary 1. Given a matrix C, the following statements are equivalent. Moreover, when any of the previous items holds, A is a Hurwitz matrix (all its eigenvalues have strictly negative real parts) if and only if ker(C) ⊂ S_0, or equivalently, if and only if the following is feasible.

Proof. The equivalences between (i), (ii) and (iii) are an immediate consequence of Theorem 2.4. The rest of the proof follows from the fact that CC† is the projection onto the range of C and I − C†C is the projection onto the null space of C.

Theorem 2.5. The minima P, X of (10) are always achievable and are unique in the variable X. Moreover, the projection operator X_0 onto the linear subspace S_0 is the only optimal solution of (10), and the optimal index value Tr(X_0) equals the dimension of S_0.
Proof. We first show that the projection X_0 is a feasible solution to (10). It is trivial that the projection X_0 satisfies X_0 ≤ I. Let k be the dimension of S_0; then X_0 = Σ_{i=1}^k x_i x_i^T with x_1, . . . , x_k an orthonormal basis of S_0, so that Theorem 2.4 implies there exists P such that (P, X_0) is a feasible solution to (10). Next, we prove that X_0 is the only minimum of (10). Let X* be any minimum; then necessarily it has only 0 and 1 as eigenvalues. Thus X* is a projection onto a subspace of S_0. Necessarily, X* = X_0, since X_0 is feasible and k = Tr(X_0) ≤ Tr(X*).
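The projection construction used in the proof is easy to verify directly; a minimal sketch with an illustrative 2-dimensional subspace of IR^3:

```python
import numpy as np

# Orthonormal basis of an illustrative subspace S (dimension k = 2).
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
X0 = np.outer(x1, x1) + np.outer(x2, x2)

# X0 is the orthogonal projection onto S: symmetric, idempotent,
# 0 <= X0 <= I, and its trace equals the subspace dimension k.
assert np.allclose(X0, X0.T)
assert np.allclose(X0 @ X0, X0)
assert np.linalg.eigvalsh(np.eye(3) - X0).min() >= -1e-12
print(np.trace(X0))  # 2.0
```

The identity Tr(X0) = k is exactly why the trace serves as the optimal index value in the theorem.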
3. Stability Synthesis. The solution of the x_0-stabilizability problem (which will be defined in the sequel) is essential in our investigation of the possibly indefinite LQ problem. We show that this stabilizability problem can be expressed in terms of an LMI. The following definitions will be essential in our development.

Definition 3.1. System (1) is called x_0-stabilizable if there exists a control law such that the corresponding trajectory with x(0) = x_0 vanishes at infinity. In this case, the control law is called x_0-stabilizing.
Definition 3.2. The stabilizability subspace S_u is defined as

S_u = { x_0 ∈ IR^n : System (1) is x_0-stabilizable }.

Remark 3. It is trivial that the stabilizability subspace S_u is a linear subspace of IR^n. Also, any trajectory x(·) of System (1) with x(0) ∈ S_u belongs to S_u.
Next, we show that the x_0-stabilizability of System (1) can be expressed in terms of an LMI. For this purpose, we need the following key lemmas.

Lemma 3.3 (Schur's Lemma [4]). Let matrices Γ = Γ^T, Θ = Θ^T and ∆ be given with appropriate sizes, and let Θ† denote the pseudoinverse of Θ. Then the following conditions are equivalent:
The following lemma can be found in many references, but its origin is due to Penrose [21,22].

Lemma 3.4 ([21,22]). The linear matrix equation

Γ X Θ = ∆ (12)

has a solution X if and only if Γ Γ† ∆ Θ† Θ = ∆. Moreover, the set of all solutions to (12) is given by

X = Γ† ∆ Θ† + Y − Γ† Γ Y Θ Θ†,

where Y is an arbitrary matrix of appropriate size.
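Lemma 3.4 is easy to verify numerically with the Moore-Penrose pseudoinverse (symbols Γ, Θ, ∆ as in the lemma; the data below are randomly generated so that the equation is consistent by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
# A consistent equation Gamma X Theta = Delta, built from a known solution.
Gamma = rng.standard_normal((4, 3))
Theta = rng.standard_normal((2, 5))
Delta = Gamma @ rng.standard_normal((3, 2)) @ Theta

Gp, Tp = np.linalg.pinv(Gamma), np.linalg.pinv(Theta)

# Solvability test of the lemma: Gamma Gamma^+ Delta Theta^+ Theta = Delta.
assert np.allclose(Gamma @ Gp @ Delta @ Tp @ Theta, Delta)

# Particular solution, and a second one from the Y-parametrization.
X_part = Gp @ Delta @ Tp
Y = rng.standard_normal((3, 2))
X_other = X_part + Y - Gp @ Gamma @ Y @ Theta @ Tp
assert np.allclose(Gamma @ X_part @ Theta, Delta)
assert np.allclose(Gamma @ X_other @ Theta, Delta)
```

The free matrix Y is what later produces the whole family of optimal gains in the LQ results, so the parametrization is not a technicality.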
Now, we are in a position to provide the following result.
Theorem 3.5. Given v_1, . . . , v_k ∈ IR^n, the following conditions are equivalent. Moreover, from (iii) we have that the state-feedback control law is x_0-stabilizing for any arbitrary matrix Y.
Proof. Assume that System (1) is x_0-stabilizable for the given initial conditions v_1, . . . , v_k. Denote by u_{v_1}, . . . , u_{v_k} the associated controls and by x_{v_1}, . . . , x_{v_k} the corresponding trajectories; it is then easily seen that S, U, T satisfy condition (iii). Now, assume that (iii) holds. Using the Schur Lemma, with Y arbitrary, and substituting this expression into (15), we conclude by Theorem 2.4 that the state-feedback control u = Kx is x_0-stabilizing for any x_0 in span(v_1, . . . , v_k).
Next, consider the projection operator X_u onto the linear stabilizability subspace S_u, i.e., X_u x = x for all x ∈ S_u. The following result provides a characterization of X_u.

Theorem 3.6. Consider the following optimization problem (17). The minima P, X of (17) are always achievable and are unique in the variable X. Moreover, the projection operator X_u ≥ 0 onto S_u is the only optimal solution of (17), with optimal index value Tr(X_u) equal to the dimension of S_u.
Proof. It suffices to apply Theorem 3.5 to get the first part of the result. The proof of the second part follows the same reasoning as for Theorem 2.5.
The main contribution of this section is now stated.
Theorem 3.7. Let X_u be the projection onto the stabilizability subspace S_u. Then the following statements are equivalent: (i) There exist S = S^T, T = T^T and U such that the stated conditions hold. In this case, for any arbitrary matrix Y, the state-feedback control law is x_0-stabilizing for any initial condition in span(v_1, . . . , v_k).

Proof. Denote by x_{v_1}, . . . , x_{v_k} the corresponding trajectories. Since the trajectories stay in the stabilizability subspace, we also have X_u S = S and U X_u = U. The rest of the proof is straightforward.

4. Optimality Analysis via GARE.
In this section, we investigate the existence of optimal controls in connection with a generalized algebraic Riccati equation (GARE). The set of admissible controls for System (1) is given by

U(x_0) = { u(·) ∈ L^2(IR^{n_u}) : lim_{t→+∞} x(t) = 0 }.

We associate to each u(·) ∈ U(x_0) the quadratic (possibly indefinite) cost function

J_{x_0}(u(·)) = ∫_0^{+∞} ( x(t)^T Q x(t) + 2 x(t)^T L u(t) + u(t)^T R u(t) ) dt,

where L ∈ IR^{n×n_u}, and Q = Q^T ∈ IR^{n×n}, R = R^T ∈ IR^{n_u×n_u} are, without loss of generality, assumed to be symmetric. The aim here is to find a control law which minimizes the cost function J_{x_0}(u(·)) over u(·) ∈ U(x_0). The value function V is defined as

V(x_0) = inf_{u(·) ∈ U(x_0)} J_{x_0}(u(·)).

In the sequel, we use the following definitions.
Definition 4.1. Let X_u be the projection onto the stabilizability subspace. The following quadratic-linear equation in the unknown matrix P = P^T,

X_u ( A^T P + P A + Q − (P B + L) R† (B^T P + L^T) ) X_u = 0, (24)

is called a generalized algebraic Riccati equation (GARE).
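In the classical stabilizable case (X_u = I, L = 0, R > 0) the GARE reduces to the standard algebraic Riccati equation. A minimal scalar sketch with illustrative numbers (not from the paper) shows the stabilizing root:

```python
import math

# Scalar system xdot = a x + b u with cost integral of q x^2 + r u^2.
# The ARE reads: 2 a p - (b p)^2 / r + q = 0.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Stabilizing (positive) root of the quadratic -(b^2/r) p^2 + 2 a p + q = 0.
p = (a * r + math.sqrt((a * r) ** 2 + b * b * r * q)) / (b * b)

residual = 2 * a * p - (b * p) ** 2 / r + q
closed_loop = a - b * (b * p / r)  # a - b*k with optimal gain k = b p / r
print(residual, closed_loop)       # ~0.0 and negative (stable)
```

Here p = 1 + √2 and the closed-loop pole is −√2; the point of the GARE is to obtain the analogous object when the system is only partially stabilizable and R may be singular.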
Notice that in the following definition the boundedness of V(x_0) from above is guaranteed by U(x_0) ≠ ∅.
Any control u(·) ∈ U (x 0 ) that achieves V (x 0 ) is called optimal. In this case, the LQ problem is called attainable (or achievable).

4.1. Well-posedness. Define the following convex set of symmetric matrices:

P = { P = P^T ∈ IR^{n×n} : M(P) ≥ 0 }, where M(P) = [ A^T P + P A + Q , P B + L ; B^T P + L^T , R ].
A useful property of the set P is introduced by the following definition.
Further, it will be seen that the maximality property is intrinsic to the set P, in the sense that when P is nonempty the set of its maximal elements is nonempty, and necessarily infinite if X_u ≠ I. However, if System (1) is stabilizable, that is X_u = I, then the set P has a unique maximal element. Moreover, we will prove below that the nonemptiness of the set P is necessary and sufficient for the well-posedness of the LQ problem. In addition, the optimal cost is determined by any maximal element of P. To this end we need the following helpful technical lemma, which is a simple adaptation of a well-known result (see, e.g., [6]).
Also, the following technical lemma will be used.
Lemma 4.5. Given matrices Γ = Γ^T, ∆ and Θ = Θ^T with appropriate sizes, consider the quadratic form

q(x, u) = x^T Γ x + 2 x^T ∆ u + u^T Θ u,

where u ∈ IR^m and x ∈ V, with V a linear subspace of IR^p. Denote by P_V the projection onto the subspace V. Then the following conditions are equivalent:
(i) inf_u q(x, u) > −∞ for every x ∈ V.
(ii) There exists a symmetric matrix S = S^T such that inf_u q(x, u) = x^T S x for any vector x ∈ V.
(iii) Θ ≥ 0 and P_V ∆ (I − Θ Θ†) = 0.
(iv) Θ ≥ 0 and ker(Θ) ⊆ ker(P_V ∆).
(v) There exists a symmetric matrix T = T^T such that
Moreover, if any of the above conditions holds, then (ii) is satisfied by S = P_V (Γ − ∆ Θ† ∆^T) P_V. In addition, S ≥ T for any T satisfying (v). Finally, for any vector x ∈ V, the vector u* = −Θ† ∆^T x is optimal, with optimal value q(x, u*) = x^T S x.

Proof. The required equivalences are proved as follows.
First note that the equivalence (iv) ⇔ (iii) is trivial, since I − ΘΘ† represents the projection onto ker Θ. (iii) ⇒ (ii): a simple calculation gives the claimed expression for the infimum. (ii) ⇒ (v): for any x ∈ V and u ∈ IR^m we have q(x, u) ≥ inf_u q(x, u) = x^T S x, so (v) holds with T = S.
Theorem 4.6. The LQ problem is well-posed if and only if the set P is nonempty. In this case, the SDP problem (28), max Tr(P X_u) subject to P ∈ P, is achievable and its optimal solutions consist of all maximal elements P of P. Moreover, given any solution P to problem (28), the cost function of the LQ problem is uniquely determined by V(x_0) = x_0^T P x_0 for all x_0 ∈ S_u.

Proof. Notice that for any symmetric matrix P the identity (29) holds. Now, let P ∈ P; then by using (29) we obtain the corresponding bound, where M(P) denotes the block matrix [ A^T P + P A + Q , P B + L ; B^T P + L^T , R ]. Since X_u x(·) = x(·), by assumption we have, for any (x_0, u(·)) ∈ X_u × U(x_0), that V(x_0) ≥ x_0^T P x_0 holds, and then the original LQ problem is well-posed. Now the necessity part of the result follows. Suppose that the LQ problem is well-posed; then Lemma 4.4 implies the existence of a symmetric matrix P. By the idempotence of the projection (X_u^2 = X_u) we have Tr(P X_u) ≥ Tr(P' X_u) for all P' ∈ P. Therefore, the proof will be complete if we can show that this particular P is effectively an element of P. To prove this claim, let us apply the optimality principle: for any scalar h > 0 and u(·) ∈ U(x_0), combining the resulting inequality with the identity (29), dividing by h and taking the limit h → 0, and noting that u(0) ∈ IR^{n_u} and x(0) ∈ S_u can be chosen arbitrarily, it suffices to apply Lemma 4.5 to see that the resulting inequality is equivalent to M(P) ≥ 0. Therefore, P belongs to P and the proof is complete.

4.2. Attainability.
A special class of solutions to the GARE, related to state-feedback optimal controls, is provided.
Definition 4.7. A solution P to the GARE (24) is called stabilizing if there exists a matrix Y such that the feedback control law u(t) = K(P, Y )x(t) with the following feedback gain is x 0 -stabilizing for any x 0 ∈ S u .
Remark 4. When the system is not necessarily stabilizable, that is X_u ≠ I, there exist an infinite number of stabilizing solutions in the sense of Definition 4.7. Indeed, if P is stabilizing, take any P̃ = P̃^T such that X_u P̃ = 0; then P + P̃ is also a stabilizing solution to the GARE.
In the spirit of Definition 4.7, we show below that any solution to the GARE always provides an optimal solution to the LQ problem. More importantly, it will be shown that given any optimal open-loop control one can construct an optimal feedback control of the form (33). The first claim is proved by the following result.
Theorem 4.8. Assume that the GARE has a stabilizing solution P. Then the control law (33) is an optimal solution to the possibly indefinite LQ problem. Furthermore, V(x_0) = x_0^T P x_0 for all x_0 ∈ S_u.

Proof. Assume that P is a stabilizing solution to the GARE. Then for any admissible pair (x(·), u(·)) of System (1), the corresponding identity holds for all t ≥ 0. Now, by using the identity (29), the above equation and a completion-of-squares argument, we obtain the claimed expression, where K(P, Y) is the static gain defined by (33). Hence, the rest of the proof follows easily.
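The completion-of-squares step can be spelled out; as a sketch, specializing to X_u = I for readability and assuming R R† (B^T P + L^T) = B^T P + L^T (the range condition of Lemma 4.5), one has, since x(t) → 0,

```latex
\frac{d}{dt}\,x^{T}Px = x^{T}(A^{T}P+PA)x + 2u^{T}B^{T}Px,
\qquad
\int_{0}^{\infty}\frac{d}{dt}\,x^{T}Px\,dt = -x_{0}^{T}Px_{0},
```

and therefore

```latex
J_{x_{0}}(u) = x_{0}^{T}Px_{0}
 + \int_{0}^{\infty}\Big[\,x^{T}\big(A^{T}P+PA+Q-(PB+L)R^{\dagger}(B^{T}P+L^{T})\big)x \\
 \qquad\qquad
 + \big(u+R^{\dagger}(B^{T}P+L^{T})x\big)^{T}R\,\big(u+R^{\dagger}(B^{T}P+L^{T})x\big)\Big]\,dt .
```

When P solves the GARE, the first term of the integrand vanishes (on S_u) and the remaining square is nonnegative, which yields both V(x_0) = x_0^T P x_0 and the optimality of the feedback u = −R†(B^T P + L^T)x, modulo the ker(R) freedom parametrized by Y.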
Remark 5. An immediate consequence of the above result is that any stabilizing solution to the GARE is also a maximal element of the set P (in the sense of Definition 4.3). This fact is an immediate consequence of Theorem 4.6.
A first part of the previous claim on the GARE, related to the structure of optimal control laws, is now proved.
where Y(·) and z(·) are chosen arbitrarily such that u(·) is admissible.
Proof. First we show that the GARE admits a solution whenever the LQ problem is achievable. In this case, the LQ problem is obviously well-posed. Then apply Theorem 4.6 to get P ∈ P ≠ ∅ with V(x_0) = x_0^T P x_0 for all x_0 ∈ S_u. Now, let x(·), u(·) be an optimal pair associated with an arbitrary initial condition x_0 ∈ S_u. Then the completion-of-squares argument leads to the corresponding identity; since V(x_0) − x_0^T P x_0 = 0 and both integrals on the right-hand side of the identity are nonnegative, we obtain x_0^T (A^T P + P A − (P B + L) R† (B^T P + L^T) + Q) x_0 = 0. Since the above equation holds for any element x_0 ∈ S_u, P is necessarily a solution to the GARE.
Using the identity (36), we have for any optimal pair x(·), u(·) the corresponding relation, or equivalently, R[u(t) + R†(B^T P + L^T)x(t)] = 0. Then a simple application of Lemma 3.4 shows that any optimal control u(·) possesses the form (35), and the proof is complete.
Remark 6. Notice that any optimal control law includes a time-varying gain with the same structure as the static gain introduced in Definition 4.7. We ask: Does the GARE have a stabilizing solution when the possibly indefinite LQ problem is achievable? The answer is yes. In addition, an expected result is that any optimal open-loop control is necessarily linked to another state-feedback optimal control with static gain of the form (33).
One important further result will establish a connection between the solvability of the GARE and the solvability of a special bilinear system. It turns out that this connection is inherent to the attainability of the possibly indefinite LQ problem. Actually, such a connection will emerge naturally when we explore the duality gap between the SDP problem (28) and its dual. Next, we provide a preliminary analysis of a particular equation related to this connection.
The following result is immediate by applying Lemma 3.4 twice. Moreover, the set of all solutions to (37) is characterized in terms of an arbitrary symmetric matrix Y = Y^T.
The following Lemma plays a key role in the sequel.
Lemma 4.11. Given matrices Γ and ∆ with appropriate sizes, consider the following linear system in the matrix variables X = X^T, Y, Z = Z^T, whose solutions are assumed to satisfy an additional LMI condition. Then any solution satisfies the stated identity, where W = W^T.
Theorem 4.12. Assume that there exist P ∈ P and S, U with ker S ⊂ ker X_u such that (43) holds. Then P is a solution to the GARE.
Proof. Applying the preceding Lemma 4.11, we obtain the expression (44). Notice that the condition ker S ⊂ ker X_u is equivalent to (I − S†S)X_u = 0. Since the projection X_u is idempotent (X_u^2 = X_u), it suffices to multiply the expression (44) by X_u to see that P is actually a solution to the GARE.

Remark 7. In the next section, we will see that the condition (43) is connected to the zero-duality-gap condition in the SDP formulation of the LQ problem.

5. Optimality Synthesis via SDP. The aim of this section is to investigate the attainability of the possibly indefinite LQ problem based on SDP. Our approach shows that the LQ problem has a hidden convexity which emerges naturally from its SDP formulation. We will see that such a formulation allows a relaxation of the usual LQ problem conditions. In fact, this leads to an equivalence between an SDP and the original possibly indefinite LQ problem.

5.1. Basic LMI, SDP Formulations. We have shown that the general indefinite LQ problem can be solved via the proposed GARE. Another main result of this paper is to express and solve the LQ problem as a semidefinite programming (SDP) problem. For this purpose, let us start with the following basic definition.
Definition 5.1. Let symmetric matrices F_0, F_1, . . . , F_m be given. Then inequalities of the form

F(x) = F_0 + x_1 F_1 + · · · + x_m F_m ≥ 0

are called LMIs with respect to the variable x = (x_1, . . . , x_m)^T ∈ IR^m.
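For instance (a toy one-variable LMI with made-up data), feasibility of a point x can be checked through the smallest eigenvalue of F(x):

```python
import numpy as np

# Toy LMI F(x) = F0 + x1*F1 >= 0 in a single scalar variable x1.
F0 = np.array([[1.0, 0.0],
               [0.0, -1.0]])
F1 = np.array([[0.0, 0.0],
               [0.0, 1.0]])

def feasible(x1):
    # F(x) >= 0 iff its smallest eigenvalue is nonnegative.
    return np.linalg.eigvalsh(F0 + x1 * F1).min() >= 0

print(feasible(2.0), feasible(0.0))  # True False
```

The feasible set {x1 : x1 ≥ 1} here is convex, as is always the case for an LMI; this convexity is what the SDP formulations below exploit.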
We exhibit and exploit below a natural relationship between LQ problems and certain SDP problems and their duals [28]. We will use the following definition.
Definition 5.2. The optimization problem

min c^T x subject to F(x) ≥ 0 (46)

is called an SDP problem. In addition, the dual SDP problem is defined as

max −Tr(F_0 Z) subject to Tr(F_i Z) = c_i, i = 1, . . . , m, Z = Z^T ≥ 0. (47)

Let p* denote the optimal value of the SDP (46) and d* the optimal value of the dual SDP problem (47). In our optimality synthesis we will use the following well-known weak duality condition, which is straightforward from Definition 5.2.
Theorem 5.3. The optimal values of the SDP problem and its dual satisfy p* ≥ d*. Moreover, the following condition is necessary and sufficient for both SDP problems to achieve their optimal values; in this case, we have p* = d*.
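The weak duality inequality follows in one line from the definitions (the standard SDP duality argument, using the primal (46) and dual (47) forms): for any primal-feasible x and dual-feasible Z,

```latex
c^{T}x + \mathrm{Tr}(F_{0}Z)
 \;=\; \sum_{i=1}^{m} x_{i}\,\mathrm{Tr}(F_{i}Z) + \mathrm{Tr}(F_{0}Z)
 \;=\; \mathrm{Tr}\big(F(x)\,Z\big) \;\ge\; 0,
```

since the trace of a product of two positive semidefinite matrices is nonnegative. Hence p* ≥ d*, with equality attained exactly when Tr(F(x)Z) = 0 for an optimal pair, which is the zero-duality-gap condition used below.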
We stress that, for the solvability of the proposed LQ problem, we do not need any strict feasibility conditions such as those in [28]. In this paper, both the proposed SDP problem and its dual (related to our LQ problem) may have constraints with an empty interior. However, we will show that there is no duality gap between the proposed SDP problem and its dual. For this purpose, we only utilize the weak duality condition and exploit an inherent property of the LQ problem.

5.2. SDP, Duality and the LQ Problem. Using Definition 5.2, an easy verification shows that the dual of the SDP problem (28) is given by (49). Now, the following constitutes our first main result.

Theorem 5.4. The following statements are equivalent: (ii) there exist P ∈ P and S, T, U such that Tr(S Q X_u) + 2 Tr(L U X_u) + Tr(T R) − Tr(X_u P) = 0; (iii) there exist P ∈ P and S, T, U such that the stated conditions hold. Moreover, both in (ii) and (iii), P is a solution to the GARE.
Proof. The proof is an immediate consequence of the results of Theorem 4.6 and Theorem 5.3.
Theorem 5.5. Assume that, given any initial condition x 0 ∈ S u , there exists an open-loop control u x0 which achieves the optimal cost V (x 0 ) of the possibly indefinite LQ problem. Then the SDP problem (49) is achievable.
Proof. For i = 1, . . . , k, consider the associated optimal control u_{v_i} with its corresponding optimal trajectory x_{v_i}, and define the matrices S_i, T_i, U_i accordingly. It is easily seen that the required relations hold. Now define S, T, U as the corresponding sums. Since X_u S = S and U X_u = U, it is straightforward that (S, T, U) is a feasible solution to the dual problem. It remains to prove that (S, T, U) is optimal. First, as the LQ problem is achievable, by Theorem 4.6 there exists P such that M(P) ≥ 0 and the duality gap vanishes. Hence, by Theorem 5.4 we conclude that (S, T, U) is an optimal solution.
Theorem 5.6. Assume that the SDP problem (49) is attainable then the possibly indefinite LQ problem is attainable. Moreover, the GARE has a stabilizing solution.
Proof. Using Theorem 5.4, there exist a solution P to the GARE and (S, T, U) satisfying the conditions (57), from which we have (B^T P + L^T) X_u S X_u + R U X_u = 0. Then, using Lemma 3.4 twice, we obtain the feedback gain for some matrix Z. Since S, U satisfy (57), Theorem 3.5 shows that the control law is x_0-stabilizing for all x_0 ∈ S_u. The rest of the proof is straightforward by applying Theorem 4.8.

5.3. State-Feedback Optimal Controls. Using the previous results and following the same reasoning as above, we obtain the main result of the paper.
Theorem 5.7. Let k be the dimension of the stabilizability subspace S_u, and assume that the possibly indefinite LQ problem is attainable at k independent initial conditions v_1, . . . , v_k ∈ S_u. Then it is attainable at any initial condition x_0 ∈ S_u. Moreover, any maximal element of the set P is a stabilizing solution to the GARE. In addition, the following constructions ((a), (b) or (c)) generate the same set of all optimal state-feedback control laws.
(a) Let P be any maximal element of P. Then the set of all optimal feedback control laws is given by u(·) = Kx(·), where Y is any matrix such that u = Kx is x_0-stabilizing for all x_0 ∈ S_u. (b) The set of all optimal feedback control laws is given by u(·) = Kx(·), where (S, U) is any optimal solution to the SDP problem (49) and Y is an arbitrary matrix. (c) The set of all optimal feedback control laws is given by u(·) = Kx(·), where Y is an arbitrary matrix and v_1, . . . , v_k ∈ S_u is an arbitrary sequence of independent initial conditions such that, for i = 1, . . . , k, the open-loop control u_{v_i} is optimal at v_i with corresponding optimal trajectory x_{v_i}.
6. Concluding Remarks. In this study we have addressed the problem of designing optimal stabilizing control laws for a possibly indefinite LQ problem involving possibly singular control. To deal with such problems, we first used linear algebra to restate them in the framework of linear optimization over linear matrix inequality (LMI) constraints, well known as semidefinite programming (SDP). Our treatment encompasses the special cases of the definite, the singular and the cheap optimal quadratic control problems. The proposed approach may have practical and theoretical advantages, with possible application to realistic problems arising in many fields of system theory.
Besides coping with the possible indefiniteness of the LQ optimization problem, the novelty of our approach relies on the fact that the systems under consideration are not necessarily stabilizable, in contrast to what is commonly assumed in the literature. This means that there does not necessarily exist a control law which is stabilizing for every initial condition.
In this paper, the partial stabilizability problem is first tackled and its solution is given in terms of LMI. The stabilizability subspace is characterized by its projection operator which is utilized in defining the generalized Riccati equation and also in solving the possibly indefinite LQ problem in terms of an appropriate SDP problem.
To recapitulate, we have solved the LQ problem without any definiteness or stabilizability assumption. Necessary and sufficient conditions are provided for the well-posedness as well as the attainability of the proposed LQ problem, and various characterizations of all possible optimal controls are given. In other words, we have achieved a complete theory for this kind of problem. However, we have used the assumption that the LQ problem is solvable on the whole stabilizability subspace, while it may occur that the LQ problem is well-posed only for a smaller set of initial conditions. A remaining issue is therefore: given an indefinite LQ problem, with or without a stabilizability assumption, what is the set of initial conditions for which it is solvable?
We conjecture that this problem is nonconvex and hence hard to solve.