ASYMPTOTIC PROPERTIES OF AN INFINITE HORIZON PARTIAL CHEAP CONTROL PROBLEM FOR LINEAR SYSTEMS WITH KNOWN DISTURBANCES



(Communicated by Michael Ulbrich)
Abstract. An infinite horizon quadratic control of a linear system with known disturbance is considered. The feature of the problem is that the cost of some (but in general not all) control coordinates in the cost functional is much smaller than the costs of the other control coordinates and the state cost. Using the control optimality conditions, the solution of this problem is reduced to solution of a hybrid set of three equations, perturbed by a small parameter. One of these equations is a matrix algebraic Riccati equation, while two others are vector and scalar differential equations subject to terminal conditions at infinity. For this set of the equations, a zero-order asymptotic solution is constructed and justified. Using this asymptotic solution, a relation between solutions of the original problem and the problem, obtained from the original one by replacing the small control cost with zero, is established. Based on this relation, the best achievable performance in the original problem is derived. Illustrative examples are presented.
1. Introduction. The cheap control problem is an optimal control problem with a small control cost in the cost functional. This problem is of considerable importance in such topics of control theory and its applications as singular optimal control and its regularization [2,14,15,16,23], limitations of optimal regulators and filters [5,25,35], high gain control [22,40], inverse control problems [27], robust control of systems with disturbances [37,38,39], and some others. The cheap control also arises in differential games (see e.g. [19] and references therein).
The smallness of the control cost can yield a singular perturbation [8,9,22,41] in the Hamilton-Jacobi-Bellman equation, as well as in the Hamiltonian boundary-value problem, associated with the original problem by the control optimality conditions.
In the present paper, an infinite horizon linear-quadratic optimal control problem for systems with known disturbances (nonhomogeneous systems) is considered. This problem arises in such topics as an optimal infinite-time suppression of outer disturbances in dynamic systems [20,33] and an optimal infinite-time trajectory tracking [6]. Here, we study this problem in the case where the cost of a part of the control coordinates in the cost functional is much smaller than the costs of the other control coordinates and the state cost, i.e., the considered problem is a partial cheap control problem. In this case, there is no limitation on the effort of a part of the control coordinates, while the effort of the rest of the control coordinates is bounded. The considered problem contains two important features: (1) it is an infinite horizon problem for a system with a known additive disturbance; (2) it is a partial cheap control problem. Problems with only one of these features have been studied in the literature. Thus, in the works [20,33,34], the control optimality conditions were derived and the open-loop and state-feedback optimal controls were designed for the infinite horizon linear-quadratic control problem with a known additive disturbance in the dynamics and with a non-cheap control in the cost functional. In the work [29], an infinite horizon partial cheap control of a system without an additive disturbance was analyzed. In the work [18], a finite horizon zero-sum differential game with a partial cheap control for the minimizer and without a disturbance in the dynamics was studied. In the present paper, we consider the problem containing both of the above mentioned features simultaneously. This makes the considered problem essentially novel, and it yields additional difficulties in its asymptotic analysis as well as novel nontrivial results.
Now, let us briefly compare the problems themselves, as well as their analysis and results, in [29,18] and in the present paper. In [29], an infinite horizon partial cheap control problem without a known additive disturbance in the dynamics was considered. Using the control optimality conditions, the solution of this problem is reduced to a matrix algebraic Riccati equation having multiple solutions. Since the problem in [29] is a cheap control problem, this Riccati equation is perturbed by a small positive parameter. For this equation, an asymptotic solution was formally constructed. Conditions were derived subject to which the lower-dimension parameter-free matrix algebraic Riccati equation, associated with this asymptotic solution, has the unique stabilizing solution. However, in [29] the formally constructed asymptotic solution to the original algebraic Riccati equation is not justified, and the existence of the stabilizing solution to this equation is not established. Based on the formal asymptotic solution to the Riccati equation, the asymptotic expansions of the optimal trajectory, the optimal open-loop control and the optimal value of the cost functional were formally constructed; however, estimates of the remainders were not obtained. Then, in [29] the slow-fast asymptotic decomposition of the original control problem was carried out. Based on this decomposition, two asymptotically suboptimal state-feedback controls were designed. In [18], an auxiliary finite horizon linear-quadratic game with a partial cheap control for the minimizer and without a known additive disturbance in the dynamics was studied. This game is reduced to the solution of a singularly perturbed set of Riccati-type matrix differential equations with given terminal conditions. This set has no more than one solution in the entire finite time-interval. The asymptotic solution to this set is a sum of two terms. The first term is the outer solution.
This term does not satisfy, in general, the given terminal conditions. The second term is the boundary-layer term. This term corrects the outer solution in a small neighbourhood of the terminal time moment such that the sum of both terms does satisfy the terminal conditions. Moreover, the boundary-layer term decays exponentially outside of this neighbourhood. The estimate of the above mentioned asymptotic solution depends only on the corresponding power of the small positive parameter multiplying a part of the derivatives in the set of Riccati-type equations. In contrast with [29], in the present paper the optimal control problem with a known additive disturbance in the dynamics is studied. Its solution is reduced to a set of three equations perturbed by a small parameter. These equations are: a matrix algebraic Riccati equation, a vector differential equation and a scalar differential equation with terminal conditions at infinity. The asymptotic solutions to all these equations are constructed and justified. The existence of the stabilizing solution to the original Riccati equation is proven in the present paper. Moreover, another mathematical technique is proposed in the present paper for the formal construction of the asymptotic solution to the Riccati equation. This technique is based on a preliminary equivalent transformation of the original Riccati equation. This transformation simplifies considerably the formal construction of the asymptotic solution and the corresponding conditions providing its justification. The vector differential equation with terminal condition at infinity, appearing in the present paper, represents a new type of singularly perturbed problems. Its asymptotic solution does not contain a boundary-layer correction term, and consists only of an outer solution term. For the remainder of this asymptotic solution, an estimate of a novel form is derived.
This estimate depends not only on the small parameter but also on the independent variable (time). For any fixed time value, the estimate asymptotically decays as the small parameter tends to zero, and, for any fixed value of the small parameter, the estimate exponentially decays as time tends to infinity. To derive such a form of the estimate, a new mathematical technique is developed in the present paper. For the asymptotic solution to the scalar differential equation, a similar remainder estimate is obtained. Also, in the present paper, the asymptotic expansion of the optimal value of the cost functional is not only constructed formally, but also is justified, i.e., the estimate of its remainder is obtained. In addition to these results, we consider another optimal control problem, obtained from the original one by replacing the small control cost with zero (the degenerate problem). The relation between the original partial cheap control problem and the degenerate control problem is established. Namely, the following is proven: (1) the limit of the optimal value of the cost functional in the original partial cheap control problem coincides with the infimum of the cost functional in the degenerate control problem; (2) the optimal state-feedback control in the original partial cheap control problem constitutes the minimizing sequence of controls in the degenerate problem. The relation between the original partial cheap control problem and the corresponding degenerate control problem was not studied in [29]. Furthermore, in the present paper, in contrast with [18], the infinite horizon optimal control problem with a partial cheap control and with a known additive disturbance in the dynamics is studied. Thus, there are three considerable differences between the present paper and [18].
Namely, (1) in the present paper an optimal control problem is studied, while in [18] a differential game is investigated; (2) in the present paper an infinite horizon problem is studied, while in [18] a finite horizon problem is investigated; (3) the dynamics of the problem in the present paper contains a known additive disturbance, while the dynamics of the problem in [18] does not. What do these differences yield? First, since the problem of the present paper is an optimal control problem (and not a differential game), the Riccati equation arising in its solution contains a positive semi-definite quadratic term, and not an indefinite quadratic term as in the Riccati equation arising in the solution of a zero-sum linear-quadratic differential game. Second, since the problem of the present paper is an infinite horizon one, the Riccati equation arising in its solution is algebraic, and not differential as in a finite horizon problem. Third, since the dynamics of the problem studied in the present paper contains a known additive disturbance, its solution is reduced to the set of three equations (mentioned above), and not to one equation (the Riccati equation) as in a problem without a known additive disturbance in the dynamics. Since the Riccati equation appearing in the present paper is algebraic, the form of its asymptotic solution differs considerably from the form of the asymptotic solution to the differential Riccati equation in [18]. Namely, for the asymptotic solution of the algebraic equation there is no need for a boundary-layer term. However, in contrast with [18], there is a need for conditions providing the existence of the stabilizing solution to a lower-dimension parameter-free matrix algebraic Riccati equation.
These conditions, along with some additional conditions, provide the existence of the unique symmetric positive semi-definite stabilizing solution to the original Riccati equation for all sufficiently small values of the parameter. This result does not appear in [18]. Moreover, it cannot appear in that work, because a finite horizon problem is studied there.
In addition to the above described papers, we would like to mention several works where a total cheap control problem was studied (see e.g. [4,9,10,12,13,21,22,24,26,28,32] and references therein). These works analyze the case where the cost of all control coordinates is small and the controlled dynamics does not contain a known additive disturbance.
Thus, the above comparison of the present paper with the ones known in the literature clearly shows the significant novelty of the problem considered in the present paper, as well as of the proposed mathematical technique and the obtained results.
The paper is organized as follows. In Section 2, the rigorous formulation of the problem and a part of the main assumptions are presented, and the objectives of the paper are stated. The asymptotic solution of the hybrid set of equations, arising from the control optimality conditions of the original problem, is constructed and justified in Section 3. Using the results of this section, parameter-free conditions for the existence and uniqueness of the solution to the original control problem are established in Section 4. A relation between the infimums of the cost functionals in the original and degenerate problems, as well as a relation between optimal controls of these problems, are obtained. Based on these relations, the best achievable performance in the original problem is derived. In Section 5, two illustrative examples are presented. Concluding remarks are placed in Section 6. The technically complicated proof of one important lemma is presented in Section 7 (Appendix).
The following main notations are applied in the paper: (1) E^n is the n-dimensional real Euclidean space; (2) ‖·‖ denotes the Euclidean norm either of a vector or of a matrix; (3) the superscript "T" denotes the transposition of a matrix A, (A^T), or of a vector x, (x^T); (4) L^2[0, +∞; E^n] is the space of n-dimensional vector-valued functions v(t), square-integrable in the interval [0, +∞); (5) O_{n_1×n_2} is used for the zero matrix of the dimension n_1 × n_2, excepting the cases where the dimension of the zero matrix is obvious. In such cases, the notation 0 is used for the zero matrix; (6) I_n is the n-dimensional identity matrix; (7) Re λ is the real part of a complex number λ; (8) col(x, y), where x ∈ E^n, y ∈ E^m, denotes the column block-vector of the dimension n + m with the upper block x and the lower block y, i.e., col(x, y) = (x^T, y^T)^T; (9) diag(A, B), where A and B are matrices of the dimensions n × n and m × m, is a block-diagonal matrix with the upper left-hand block A and the lower right-hand block B; (10) for two n × n-dimensional symmetric matrices A and B, the notation A > B (A ≥ B) means that the matrix A − B is positive definite (positive semi-definite).
2.1. Partial Cheap Control Problem Formulation. Consider the following controlled differential equation:

    dz(t)/dt = A z(t) + B u(t) + F(t),   t ≥ 0,   z(0) = z_0,                (1)

where z(t) ∈ E^n is the state vector; u(t) ∈ E^r, (r ≤ n), is the control; A and B are given constant matrices of corresponding dimensions; F(t), t ≥ 0, is a given vector-valued function; z_0 ∈ E^n is a given vector. The cost functional, to be minimized by u, is

    J_ε(u) = ∫_0^{+∞} [z^T(t) D z(t) + u^T(t) G(ε) u(t)] dt,                 (2)

where D is a given constant symmetric n × n-matrix; the given constant symmetric r × r-matrix G(ε) has the form

    G(ε) = diag(G_1, ε²G_2),                                                 (3)

the matrices G_1 and G_2 are of the dimensions q × q, (0 ≤ q < r), and (r − q) × (r − q), respectively; ε > 0 is a small parameter.
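As a small numerical illustration (with illustrative data, not the paper's), the structure of the cost matrix G(ε) can be sketched as follows; the ε² scaling of the cheap block and the concrete choices of G_1, G_2 are assumptions of this sketch. It shows that the cheap block of G^{-1}(ε), which enters the optimal feedback gain, grows without bound as ε → +0.

```python
import numpy as np
from scipy.linalg import block_diag

def G(eps, G1, G2):
    """Cost matrix of a partial cheap control problem:
    regular block G1, cheap block scaled by eps**2 (assumed scaling)."""
    return block_diag(G1, eps**2 * G2)

G1 = np.array([[2.0]])   # illustrative cost of the regular control coordinate
G2 = np.eye(1)           # cheap block taken as the identity (cf. Remark 1)

for eps in (1e-1, 1e-2, 1e-3):
    # the (r-q) x (r-q) cheap block of G(eps)^{-1} grows like eps**(-2)
    print(eps, np.linalg.inv(G(eps, G1, G2))[1, 1])
```

The unboundedness of this block is precisely why the cheap control coordinates are effectively unlimited in effort, while the regular coordinates remain penalized.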
In what follows, we assume: (A1) the matrix B has full column rank r; where a > 0 and γ > 0 are some constants.

Remark 1. In the sequel of the paper, for the sake of simplicity but without any loss of generality, we take G_2 = I_{r−q}.
Remark 2. Due to the form of the matrix G(ε) and the smallness of the parameter ε, the optimal control problem (1)-(2) is a partial cheap control problem. Therefore, in the sequel, we call this problem the Partial Cheap Control Problem (PCCP).
then this control is optimal in the PCCP.

2.2. Optimal State-Feedback Control of the PCCP. For any given ε > 0, the PCCP is an infinite horizon linear-quadratic optimal control problem with a known disturbance. Problems of this kind were studied in the work [34], where, using Bellman's Dynamic Programming Method and Lyapunov's Second Method for stability, conditions for the existence and the uniqueness of an optimal state-feedback control were established and the control itself was derived. This control is a linear nonhomogeneous function of the state, where the constant gain matrix and the additive time-dependent vector are proper solutions of an algebraic matrix Riccati equation and a linear differential equation, respectively. To apply this result to the PCCP, we consider the algebraic Riccati equation with respect to the matrix P

    PA + A^T P − P B G^{-1}(ε) B^T P + D = 0.                                (6)

Based on the above mentioned results of [34], we have the following proposition.
Proposition 1. Let the assumptions (A1)-(A4) be valid. Let, for a given ε > 0, the equation (6) have a symmetric solution P = P(ε) ≥ 0 such that the matrix

    A_P = A − B G^{-1}(ε) B^T P(ε)                                           (7)

is a Hurwitz one. Then the set of the admissible state-feedback controls M is nonempty. The optimal control of the PCCP exists in this set, is unique and has the form

    u*_ε(z, t) = −G^{-1}(ε) B^T [P(ε)z + H(t)],                              (8)

where the n-dimensional vector-valued function H(t), t ∈ [0, +∞), is the unique solution of the terminal-value problem

    dH(t)/dt = −A_P^T H(t) − P(ε)F(t),   lim_{t→+∞} H(t) = 0.                (9)

The optimal value of the cost functional in the PCCP has the form

    J*_ε = z_0^T P(ε) z_0 + 2H^T(0) z_0 + Q(0),                              (10)

where the scalar function Q(t), t ∈ [0, +∞), is the unique solution of the terminal-value problem

    dQ(t)/dt = H^T(t) B G^{-1}(ε) B^T H(t) − 2H^T(t)F(t),   lim_{t→+∞} Q(t) = 0.   (11)

2.3. Degenerate Optimal Control Problem. Along with the PCCP, we consider the problem obtained from the PCCP by setting formally ε = 0 in its cost functional. Thus, the new optimal control problem consists of the equation of dynamics (1) and the cost functional

    J_0(u) = ∫_0^{+∞} [z^T(t) D z(t) + u^T(t) G(0) u(t)] dt,                 (12)

where, due to (3), G(0) = diag(G_1, O_{(r−q)×(r−q)}). In what follows, we call the problem (1), (12) the Degenerate Optimal Control Problem (DOCP). Since G(0) is a singular matrix, the DOCP is a singular optimal control problem, i.e., it can be solved neither by application of the Pontryagin Maximum Principle [30], nor by using the Hamilton-Jacobi-Bellman equation approach (Dynamic Programming approach) [3].
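For a fixed ε, the construction of Proposition 1 can be sketched numerically. The fragment below uses illustrative data (not the paper's); the ε² cost scaling, the feedback form u* = −G^{-1}(ε)B^T(Pz + H(t)), and the equation dH/dt = −A_P^T H − P F(t) are the assumed [34]-type forms. It solves the Riccati equation with SciPy's CARE solver, checks that A_P is Hurwitz, and uses the closed-form H(t) available when F(t) is a single decaying exponential.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

# Illustrative data (NOT the paper's): n = 2, r = 2, q = 1.
A = np.array([[0.0, 1.0],
              [1.0, -1.0]])
B = np.eye(2)
D = np.eye(2)
eps = 0.05
G_eps = block_diag(np.array([[1.0]]), (eps ** 2) * np.eye(1))

# Stabilizing solution P(eps) >= 0 of  PA + A^T P - P B G(eps)^{-1} B^T P + D = 0.
P = solve_continuous_are(A, B, D, G_eps)
S = B @ np.linalg.inv(G_eps) @ B.T
assert np.allclose(P @ A + A.T @ P - P @ S @ P + D, 0.0, atol=1e-6)

A_P = A - S @ P                                  # closed-loop matrix
assert max(np.linalg.eigvals(A_P).real) < 0      # Hurwitz

# For F(t) = f0*exp(-gamma*t), the assumed terminal-value problem
#   dH/dt = -A_P^T H - P F(t),  H(t) -> 0 as t -> +infinity,
# has the closed-form decaying solution
#   H(t) = exp(-gamma*t) * (gamma*I - A_P^T)^{-1} P f0.
gamma = 0.5
f0 = np.array([1.0, 0.0])
H0 = np.linalg.solve(gamma * np.eye(2) - A_P.T, P @ f0)
# check the ODE at t = 0:  -gamma*H0 == -A_P^T H0 - P f0
assert np.allclose(-gamma * H0, -A_P.T @ H0 - P @ f0)

def u_star(z, t):
    """Sketch of the optimal feedback u*(z,t) = -G^{-1}(eps) B^T (P z + H(t))."""
    return -np.linalg.inv(G_eps) @ B.T @ (P @ z + np.exp(-gamma * t) * H0)

print(u_star(np.array([1.0, 0.0]), 0.0))
```

Note that, since A_P is Hurwitz, the matrix γI − A_P^T is invertible for every γ > 0, so the closed-form H(t) above is always well defined under the exponential-decay assumption on F(t).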
Similarly to the PCCP, we choose the set M as the set of all admissible state-feedback controls in the DOCP.
Remark 4. Degenerate (or singular) optimal control problems were studied in the literature in finite and infinite horizon cases (see e.g. [2,7,14,15,16,23] and references therein). The dynamics of the problems considered in these works does not contain a known additive disturbance. In contrast with these works, the dynamics of the DOCP does contain such a disturbance. This feature considerably complicates the analysis of the relation between the DOCP and the PCCP, and requires the development of a new mathematical technique for such an analysis.

2.4. Objectives of the Paper. The objectives of the paper are: (i) to establish ε-free conditions guaranteeing the existence and the uniqueness of the optimal control in the PCCP for all sufficiently small ε > 0; (ii) to estimate the value J*_ε − J*_0 for all sufficiently small ε > 0, where J*_0 is the infimum of the cost functional in the DOCP, i.e.,

    J*_0 = inf_{u(·)∈M} J_0(u);                                              (13)

(iii) to derive the best achievable performance value in the PCCP, i.e., the value defined as follows for all sufficiently small ε > 0:

    J*_best = inf_ε J*_ε,                                                    (14)

where the infimum is taken over all sufficiently small ε > 0.

3. Preliminary Results. In this section, an asymptotic analysis of the equations (6), (9), (11) is carried out. In the subsequent sections, results of this analysis are used to achieve the above formulated objectives of the paper.

3.1. Transformation of the Equations (6), (9), (11). Let us partition the matrix B into blocks as B = (B_1, B_2), where the blocks B_1 and B_2 are of the dimensions n × q and n × (r − q), respectively. We assume that:

VALERY Y. GLIZER AND OLEG KELIS
(A5) det(B_2^T D B_2) ≠ 0.

By B_c, we denote a complement matrix to the matrix B, i.e., a matrix of the dimension n × (n − r) such that the block matrix (B_c, B) is nonsingular. Hence, Consider the following matrices: and the block matrix (L, B_2). Due to the assumptions (A1) and (A5), the matrix (L, B_2) is nonsingular (see e.g. [17]).
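One concrete way to obtain a complement matrix B_c (a minimal sketch with illustrative data; any matrix completing the columns of B to a basis of E^n works) is to take an orthonormal basis of the null space of B^T, since those columns span the orthogonal complement of the range of B:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative B with full column rank r = 2, n = 3 (not the paper's data).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

Bc = null_space(B.T)        # n x (n - r): orthonormal basis of range(B)^perp
M = np.hstack([Bc, B])      # the block matrix (Bc, B)
assert abs(np.linalg.det(M)) > 1e-10   # (Bc, B) is nonsingular, as required
```

Because B has full column rank r (assumption (A1)), the r columns of B and the n − r columns of B_c together span E^n, so (B_c, B) is automatically nonsingular for this choice.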
In the set (6), (9), (11), we make the following transformation of the variables: where P is a new unknown n × n-matrix, H(t) is a new unknown n-dimensional vector-valued function.
Proof. We prove here that the first transformation in (16) converts the equation (6) to the equation (17). The other statements of the lemma are proven similarly. Substituting the expression for P (see (16)) into (6), multiplying the resulting equation from the left and from the right by (L, B_2)^T and (L, B_2), respectively, and taking into account (20)-(22), we obtain the equation (17). In order to complete the proof on the transformation of the equation (6) to the equation (17), we should prove the expressions for B and D in (21) and (22), and the properties of the matrices D_1, D_2. We start with the expression for B. Since the matrix (L, B_2) is invertible, in order to prove this expression, it is necessary and sufficient to show that L, Let us partition the matrix L into blocks as L = (L_1, L_2), where the matrices L_1 and L_2 are of the dimensions (r − q) × (n − r) and (r − q) × q, respectively. Using this block-form, the expression for L in (15) and the block form of the matrix B_c yields L, Now, we substitute this block matrix and the block matrix in the expression for B (see (21)) into the left-hand side of (25) instead of (L, B_2) and B, respectively. Due to this substitution, we obtain after a simple algebra that the equation (25) is correct if L = L_2. Moreover, since the matrix (L, B_2) is invertible, such a matrix L is unique. Thus, the expression for B in (21) is proven. Proceed to the expression for D in (22). We have Using (15) The latter, along with (26), proves (22). The positive semi-definiteness of D_1 directly follows from the expression for this matrix in (22) and such a property of the matrix D. The positive definiteness of the matrix D_2 directly follows from the expression for this matrix in (22) and the assumption (A5). Thus the lemma is proven.

Corollary 1. (17) such that A_P(ε) is a Hurwitz matrix.
Proof. The statements of the corollary are direct consequences of Lemma 3.1 and the equality A_P(ε) = (L,

Remark 5. Transformation (16) corresponds to the following transformation of the state variable in the PCCP: where Z(t) is a new state variable. Transformation (27) converts the PCCP to the equivalent optimal control problem where Due to the block form of the matrix B (see (21)), the dynamics of the new problem (28)-(29) consists of three modes. The first mode is uncontrolled directly. The second mode is controlled only by the regular coordinates of the control. The third mode is controlled by the entire control. In a total cheap control problem, there are no regular coordinates of the control. Therefore, the transformation similar to (27) converts the total cheap control problem to a new problem with the dynamics consisting only of two modes. These modes are similar to the first and the third modes of the dynamics in the transformed partial cheap control problem (28)-(29).

3.2. Asymptotic Solution of the Equation (17). First of all, let us note that by substitution of the block representations of the matrices G(ε) and B (see the equation (3), Remark 1 and the equation (21)) into the expression for S(ε) (see (20)), we obtain after a routine algebra the following block representation of the matrix S(ε):
We also partition the matrix A into blocks as follows: where the blocks A_1, A_2, A_3 and A_4 have the dimensions (n − r + q) × (n − r + q), Substitution of the block representations for the matrices D, S(ε), P(ε), and A (see (22), (31)-(32), (33), and (34)) into the equation (17) yields, after a routine rearrangement, the following equivalent set of Riccati-type algebraic matrix equations with respect to P_1(ε), P_2(ε) and P_3(ε): We are going to construct a zero-order asymptotic solution {P_10, P_20, P_30} of the system (35)-(37). Equations for this asymptotic solution are obtained by setting formally ε = 0 in (35)-(37) and replacing there P_1(ε), P_2(ε), P_3(ε) with P_10, P_20, P_30. Thus, we have the following system: Solving the equations (40) and (39) with respect to P_30 and P_20, we obtain where the superscript "1/2" denotes the unique symmetric positive definite square root of the corresponding symmetric positive definite matrix, while the superscript "−1/2" denotes the inverse matrix of such a square root. Since D_2^{1/2} is positive definite, there exists a positive number β such that all eigenvalues λ(−D_2^{1/2}) of the matrix −D_2^{1/2} satisfy the inequality Re λ(−D_2^{1/2}) ≤ −2β < 0. Substitution of the expression for P_20 from (41) into (38) yields the algebraic matrix Riccati equation with respect to P_10 where By virtue of the results of [18], the matrix S_0 can be represented in the form Let F_1 be a matrix such that In what follows, we assume: Using the equation (45), the assumptions (A6), (A7), and the results of [1], one directly obtains that the algebraic matrix Riccati equation (43) has the unique symmetric solution P_10 ≥ 0. Moreover, the matrix A_0, given by (48), is a Hurwitz matrix. Therefore, there exists a positive number α, (α ≠ γ), such that all eigenvalues λ(A_0) of the matrix A_0 satisfy the inequality Re λ(A_0) ≤ −2α < 0.
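The square root D_2^{1/2} entering the zero-order blocks can be computed numerically. The sketch below (with an illustrative D_2, not the paper's data) computes the unique symmetric positive definite square root and exhibits a constant β > 0 of the kind used above, so that every eigenvalue of −D_2^{1/2} satisfies Re λ ≤ −2β < 0.

```python
import numpy as np
from scipy.linalg import sqrtm

# Illustrative symmetric positive definite D2 (not the paper's data).
D2 = np.array([[2.0, 0.5],
               [0.5, 1.0]])

D2_half = sqrtm(D2).real          # unique symmetric positive definite square root
assert np.allclose(D2_half @ D2_half, D2)

# -D2^{1/2} is Hurwitz: its eigenvalues are the negatives of the (positive)
# eigenvalues of D2^{1/2}.  Taking 2*beta as the smallest eigenvalue of D2^{1/2}
# gives a bound  Re(lambda) <= -2*beta < 0  for all eigenvalues lambda.
eigs = np.linalg.eigvalsh(-D2_half)
beta = -eigs.max() / 2.0
assert beta > 0
assert all(e <= -2.0 * beta + 1e-12 for e in eigs)
```

This is exactly the mechanism behind the exponential decay estimates used later: a Hurwitz matrix with spectral abscissa below −2β yields transition matrices bounded by a multiple of exp(−2βt).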

Now, using Lemma 3.1 (the positive definiteness of the matrix D_2), the above mentioned features of the equation (43), and the results of [22] (Sections 3.4 and 3.6.1), we directly obtain the following assertion.

Lemma 3.2. Let the assumptions (A1)-(A7) be valid. Then there exists a number ε_1 > 0 such that, for all ε ∈ (0, ε_1], the equation (17) has the unique symmetric solution P = P(ε) ≥ 0. This solution has the block form (33), where the blocks P_1(ε), P_2(ε), P_3(ε) satisfy the inequalities where b > 0 is some constant independent of ε. Moreover, the matrix A_P(ε), given in (20), is a Hurwitz matrix.

Corollary 2. Let the assumptions (A1)-(A7) be valid. Then, for all ε ∈ (0, ε_1], there exists a symmetric solution P(ε) ≥ 0 of the equation (6) such that the matrix A_P, given by (7), is a Hurwitz matrix.
Proof. The statement of the corollary directly follows from Lemma 3.2 and Corollary 1.

3.3. Asymptotic Solution of the Problem (18). Using the expression for the matrix A_P(ε) (see (20)) and the equations (31), (33), (34), we can represent this matrix in the block form

where Also, let us partition the vector-valued function F(t) into blocks as: where the blocks F_1(t) and F_2(t) are of the dimensions n − r + q and r − q, respectively. We look for the solution of the problem (18) in the block form where the blocks H_1(t, ε) and H_2(t, ε) are of the dimensions n − r + q and r − q, respectively. Substitution of the block representations for A_P(ε), P(ε), F(t) and H(t, ε) into the problem (18) yields the following equivalent terminal-value problem: Let us construct a zero-order asymptotic solution {H_10(t), H_20(t)} of the problem (55)-(57). The equations for this asymptotic solution are obtained from the system (55)-(56) by setting there formally ε = 0 and using Lemma 3.2. Thus, we have The terminal condition for H_10(t) is obtained from the condition for H_1(t, ε) (see (57)) by formally replacing there H_1(+∞, ε) with H_10(+∞), i.e., Solving the equation (59) with respect to H_20(t), and taking into account the equalities A_P4(0) = −P_30 = −D_2^{1/2} and A_P2(0) = A_2, we obtain Substitution of (61) into (58) and use of the expressions A_P1(0) = A_1 − S_1P_10, A_P3(0) = −P_20^T, as well as the equations (44), (48) and the expression for P_20 in (41), yield the differential equation for H_10(t) where A_0 is given by (48). Due to the inequalities (24) and (49), this equation, subject to the condition (60), has the unique solution satisfying the inequality where b > 0 is some constant. The equation (61), along with the condition (60) and the inequality (64), yields where b > 0 is some constant. This completes the formal construction of the zero-order asymptotic solution to the problem (55)-(57).
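The shape of the remainder estimate described above (small in ε for every fixed t, exponentially decaying in t for every fixed ε, with no boundary-layer correction) can already be seen on a scalar model problem. The sketch below is a toy analogue of the fast equation, not the paper's system; the constants mu, gamma and the forcing are illustrative assumptions.

```python
import numpy as np

# Scalar model of a fast equation:  eps*y' = -mu*y - f(t),  f(t) = exp(-gamma*t).
mu, gamma = 1.0, 0.5

def y_exact(t, eps):
    """The exact solution driven by the slow exponential forcing."""
    return -np.exp(-gamma * t) / (mu - eps * gamma)

def y_outer(t):
    """Reduced (eps = 0) algebraic solution of  0 = -mu*y - f(t)."""
    return -np.exp(-gamma * t) / mu

for eps in (1e-1, 1e-2, 1e-3):
    t = np.linspace(0.0, 20.0, 201)
    err = np.abs(y_exact(t, eps) - y_outer(t))
    # The remainder obeys  err = c(eps) * eps * exp(-gamma*t):  it decays in eps
    # for fixed t and decays exponentially in t for fixed eps; no boundary layer.
    bound = (gamma / (mu * (mu - eps * gamma))) * eps * np.exp(-gamma * t)
    assert np.all(err <= bound + 1e-12)
```

In this toy setting the remainder equals the bound exactly, which makes the two-variable character of the estimate (in ε and in t) transparent.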
The proof of the lemma is presented in the Appendix.

3.4. Asymptotic Solution of the Problem (19). Using the block form of the matrix S(ε), and the vectors F(t) and H(t, ε) (see (31), (53), (54)), we can rewrite equivalently the problem (19) as follows:

Let us construct a zero-order asymptotic solution Q_0(t) of the problem (69). The equation for this asymptotic solution is obtained from the differential equation in (69) by setting there formally ε = 0 and using Lemma 3.3. Thus, we have Substituting the expression for H_20(t) (see (61)) into the right-hand side of (70), and using (44), we obtain The terminal condition for Q_0(t) is obtained from the condition for Q(t, ε) in (69) by formally replacing there Q(+∞, ε) with Q_0(+∞), i.e., The solution of the problem (71)-(72) has the form Due to the inequalities (24) and (64), the integral in (73) converges. Moreover, where b > 0 is some constant. This completes the formal construction of the zero-order asymptotic solution to the problem (69). Similarly to Lemma 3.3, we obtain the following lemma.
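Since Q_0(t) is given by an integral from t to +∞ of an exponentially decaying integrand, it inherits an exponential decay bound. The sketch below uses an illustrative integrand with an assumed decay rate 2γ (the paper's integrand, built from H_10(t), H_20(t) and F(t), is not reproduced here):

```python
import numpy as np
from scipy.integrate import quad

gamma = 0.5
# Illustrative integrand bounded by 2*exp(-2*gamma*s) (a stand-in for the
# products of exponentially decaying functions appearing in (73)).
g = lambda s: np.exp(-2.0 * gamma * s) * (1.0 + np.sin(s) ** 2)

def Q0(t):
    """Tail integral Q0(t) = int_t^infinity g(s) ds; converges by exponential decay."""
    val, _ = quad(g, t, np.inf)
    return val

# Q0 inherits the decay:  |Q0(t)| <= (2/(2*gamma)) * exp(-2*gamma*t).
for t in (0.0, 1.0, 5.0):
    assert abs(Q0(t)) <= (2.0 / (2.0 * gamma)) * np.exp(-2.0 * gamma * t)
```

The bound is obtained by majorizing the integrand and integrating the majorant in closed form, which is the standard mechanism behind estimates of the form |Q_0(t)| ≤ b·exp(−2γt).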

4.1. ε-Free Conditions for the Existence and Uniqueness of the PCCP Solution. Let us partition the vector Z_0, given by (30), into blocks as: where Q(t, ε) is the solution of the problem (19) mentioned in Lemma 3.4.
Proof. The statements of the theorem follow immediately from Proposition 1, Lemmas 3.1, 3.2, and Corollary 2.

4.2. Relation between J*_ε and J*_0. Recall that J*_ε is the minimum with respect to u(·) ∈ M of the cost functional in the PCCP, while J*_0 is the infimum with respect to u(·) ∈ M of the cost functional in the DOCP.

Remark 6. Due to Theorem 4.1, the set M of the admissible state-feedback controls in the PCCP (and in the DOCP) is nonempty. Hence, due to the assumptions (A2) and (A3), the infimum of the cost functional in the DOCP (see (13)) is finite and nonnegative.

Let us introduce the value
Lemma 4.2. Let the assumptions (A1)-(A7) be valid. Then, where c > 0 is some constant independent of ε; the number ε_2 > 0 has been introduced in Lemma 3.4 and was then used in Theorem 4.1.
Proof. We prove the lemma by contradiction. Namely, let us assume that (80) is wrong. This means the fulfilment of the inequality Let us show that the inequality (81) yields the inequality First of all, note that the control u*_ε(z, t) (see (8)), being optimal in the PCCP for all ε ∈ (0, ε_2] (see Theorem 4.1), belongs to the set M for these values of ε. Now, using the equations (2), (12), (13) and the assumption (A3), we directly obtain the following chain of inequalities and equalities: Moreover, from the inequality (79), we have for all ε ∈ (0, ε_2] Remember that c > 0 is independent of ε. The inequalities (83)-(84) imply the inequality The inequalities (81) and (85) yield immediately the inequality (82).

Since (82) is valid, there exists a state-feedback control ũ(z, t) ∈ M such that Since u*_ε(z, t) is the optimal control in the PCCP, the following inequality is satisfied for any ε ∈ (0, ε_2]: where ũ_2(z, t) is the lower block of the dimension r − q of the vector ũ(z, t); z̃(t), t ≥ 0, is the solution of (1) generated by u(t) = ũ(z, t).
The inequalities (84) and (87) directly lead to the inequality which yields immediately J* ≤ J_0(ũ(z, t)). The latter contradicts the right-hand side of the inequality (86). This contradiction proves the equality (80). Thus, the lemma is proven.
where c > 0 is some constant independent of ε.
Remark 7. The equation (91) means that the optimal state-feedback control u*_ε(z, t) in the PCCP constitutes the minimizing control sequence in the DOCP, i.e., the sequence of admissible state-feedback controls along which the DOCP cost functional (12) tends to its infimum J*_0.

4.3. The Best Achievable Performance in the PCCP.
Theorem 4.5. Let the assumptions (A1)-(A7) be valid. Then J*_best = J*, where J*_best is the best achievable performance value in the PCCP (see (14)) and J* is given by (78).
The following two corollaries are direct consequences of Lemma 4.3, Corollary 3 and Theorem 4.5.
Corollary 4. Let the assumptions (A1)-(A7) be valid. Then, for ε → +0, the effort of the cheap part of the optimal control u*_ε(z, t) in the PCCP cost functional (2) tends to zero; here u*_{ε,2}(z, t) is the lower block of the dimension r − q of the vector u*_ε(z, t), and z*(t, ε), t ≥ 0, is the solution of (1) generated by u(t) = u*_ε(z, t).

Corollary 5. Let the assumptions (A1)-(A7) be valid. Then, for all sufficiently small ε > 0, the following equality is satisfied along trajectories of the system (1):

Remark 8. The best achievable performance (also known in the literature as the maximally achievable accuracy or the performance limitation) by a cheap control was extensively studied in the literature (see e.g. [5,25,31,35] and references therein). In all these papers, the case of a total cheap control is treated, and the best achievable performance value is defined as the limit of the optimal value of the cost functional as the small control cost tends to zero. In the present paper, in contrast to such an approach, we define the best achievable performance as the infimum (with respect to the small control cost) of the optimal value of the cost functional. Then, we show that this infimum coincides with the limit of the optimal value of the cost functional as the small control cost tends to zero.
Remark 9. Note that there is a considerable difference between the physical meaning of the best performance achieved by the total cheap control (see e.g. [5,25,31,35]) and that of the best performance achieved by the partial cheap control derived in the present paper. Namely, the former represents the minimal value of the state part of the cost functional, while the latter represents the minimal value of the sum of the state part of the cost functional and the essential (non-cheap) control part of this functional. In other words, the best performance achieved by the total cheap control means the minimal state regulation error, while the best performance achieved by the partial cheap control means the minimal sum of the state regulation error and the effort of the essential part of the control.
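To make the contrast in Remarks 8 and 9 concrete, the following sketch works out the simplest total cheap control situation in closed form. The scalar system, the cost functional and all numerical values below are illustrative assumptions and are not part of the paper's model: for dx/dt = u with cost ∫(x² + ε²u²)dt, the scalar algebraic Riccati equation −P²/ε² + 1 = 0 has the positive root P = ε, so J*_ε = ε x_0² tends to zero as ε → +0, i.e., the best performance achievable by a total cheap control is a zero state regulation error.

```python
# Illustrative sketch (hypothetical scalar example, not the paper's system):
# total cheap control of  dx/dt = u  with cost  J_eps = ∫ (x^2 + eps^2 u^2) dt.
# The algebraic Riccati equation  -P^2/eps^2 + 1 = 0  gives P = eps, so the
# optimal cost J*_eps = eps * x0^2 tends to 0 as eps -> +0.

def optimal_cost(eps: float, x0: float) -> float:
    """Optimal cost J*_eps = P * x0**2, with P = eps the positive ARE root."""
    P = eps  # unique positive root of  -P**2 / eps**2 + 1 = 0
    return P * x0 ** 2

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, optimal_cost(eps, x0=2.0))
```

In this total cheap control setting the whole control is cheap, so the limit of J*_ε is zero; in the partial cheap control setting of the present paper the essential control part keeps the limit strictly positive.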

5.1. Example 1.
5.1.1. Analytical study. Consider a particular case of the PCCP (1)-(2), namely, the case where n = r = 2 and q = 1. The matrices of the coefficients in (1)-(2) are of the corresponding dimensions. In particular, G_1 > 0 is a scalar and G_2 = 1, while B = (B_1, B_2) becomes a 2 × 2 nonsingular matrix. Note that the matrix B_2^T D B_2, appearing in the assumption (A5), becomes a positive scalar. Since n = r, we have B_c = B_1, while the matrix L, defined in (15), becomes a scalar. Hence B_1 − LB_2 becomes a 2-vector, and the matrix (B_1 − LB_2, B_2) in the transformation (16) is of the dimension 2 × 2. Since det(B_1 − LB_2, B_2) = det B ≠ 0, the matrix (B_1 − LB_2, B_2) is nonsingular. Thus, the nonsingular transformation (16) converts the set (6), (9), (11) in this example to a particular case of the set (17), (18), (19). In this particular case, the matrices A, B, D and S(ε) are of the dimension 2 × 2. The matrices B and D have the forms in which D_1 ≥ 0 and D_2 > 0 are scalars, and the matrix S(ε) has the form (31).

Now, let us construct the zero-order asymptotic solution to the set (17), (18), (19). We start with the equation (17). In our example, the components P_i0, (i = 1, 2, 3), of the asymptotic solution to this equation become scalars and, due to (41), we obtain their expressions, where A_2 is the upper right-hand entry of the matrix A.
Due to (43)-(44), the scalar P_10 satisfies the following algebraic equation (98), where A_1 is the upper left-hand entry of the matrix A.
The 1 × 2 matrix given in (46) has the form (1, A_2), thus providing the fulfilment of the assumption (A6). Moreover, subject to the condition D_1 > 0, the assumption (A7) is fulfilled for any A_1. Therefore, the equation (98) has the unique positive solution P_10 = (A_1 + sqrt(A_1^2 + S_0 D_1))/S_0, and A_0 = A_1 − S_0 P_10 = −sqrt(A_1^2 + S_0 D_1) < 0.

Proceed to the construction of the components of the zero-order asymptotic solution to the problem (18). In order to calculate these components, we choose the 2-dimensional vector-valued function F(t), where f_1, f_2, γ_1 > 0 and γ_2 > 0 are given constants. Now, due to (63) and (61), we obtain the corresponding components. Finally, due to (73), we obtain the zero-order asymptotic solution to the problem (19).

Now, based on the equations (78) and (99)-(101), we obtain the value J* = P_10 X_0^2 + 2X_0 P_10 f_1 … (102). Due to Lemma 4.2, this value is the zero-order term in the asymptotic expansion of the optimal value of the PCCP cost functional in this example. Moreover, due to Lemma 4.3, the value (102) is the infimum of the DOCP cost functional. Finally, due to Theorem 4.5, this value is the best achievable performance value in the PCCP.
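The scalar computation above can be checked numerically. The sketch below assumes that (98) is the scalar Riccati equation 2 A_1 P_10 − S_0 P_10² + D_1 = 0, the form consistent with A_0 = A_1 − S_0 P_10 = −sqrt(A_1² + S_0 D_1) stated above; the numerical values of A_1, D_1 and S_0 are hypothetical.

```python
import math

# Sketch of the scalar Riccati computation in Example 1, assuming (98) reads
#   2*A1*P10 - S0*P10**2 + D1 = 0
# (consistent with A0 = A1 - S0*P10 = -sqrt(A1**2 + S0*D1) in the text).
# The data A1, D1, S0 below are hypothetical, with D1 > 0 as required for (A7).

def solve_p10(A1: float, D1: float, S0: float) -> float:
    """Unique positive root of 2*A1*P - S0*P**2 + D1 = 0 (D1 > 0, S0 > 0)."""
    return (A1 + math.sqrt(A1 ** 2 + S0 * D1)) / S0

A1, D1, S0 = -1.0, 2.0, 1.0               # hypothetical coefficients
P10 = solve_p10(A1, D1, S0)
A0 = A1 - S0 * P10                         # closed-loop coefficient
assert abs(2 * A1 * P10 - S0 * P10 ** 2 + D1) < 1e-12  # Riccati residual
assert A0 < 0                              # A0 = -sqrt(A1**2 + S0*D1) < 0
```

The assertion A0 < 0 mirrors the stability conclusion drawn from (98) in the text.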
5.2. Example 2. We choose the following numerical data for the PCCP: the system (104) and the cost functional (105), where z_1(t), z_2(t), z_3(t), u_1(t) and u_2(t) are scalar functions, meaning that n = 3, r = 2, q = 1.
In this example, in contrast with Example 1, the dimensions of the state vector z(t) = col z 1 (t), z 2 (t), z 3 (t) and of the control vector u(t) = col u 1 (t), u 2 (t) do not coincide with each other. Moreover, in this example the dimension of the state vector z(t) is higher than in Example 1.
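The qualitative behavior exhibited by the numerical results of this example can be reproduced on a toy problem. In the following sketch the plant, the coefficients and the initial state are hypothetical: a decoupled two-dimensional system whose algebraic Riccati equation splits into two scalar Riccati equations, so that J*_ε is available in closed form. As ε decreases, J*_ε decreases monotonically, and the contribution of the cheap control coordinate vanishes, leaving the essential-control part as the limit (cf. Theorem 4.5 and Corollary 4).

```python
import math

# Toy decoupled partial cheap control problem (hypothetical data, not the
# paper's system (104)-(105)):
#   dz/dt = diag(a1, a2) z + u,
#   J_eps = ∫ ( q1*z1^2 + q2*z2^2 + u1^2 + eps^2 * u2^2 ) dt.
# The ARE decouples into two scalar Riccati equations, so
#   J*_eps = p1 * z01^2 + p2(eps) * z02^2,
# and p2(eps) * z02^2 (the cheap coordinate's contribution) -> 0 as eps -> +0.

def scalar_are(a: float, q: float, r: float) -> float:
    """Positive root p of  2*a*p - p**2 / r + q = 0."""
    return r * (a + math.sqrt(a ** 2 + q / r))

def j_star(eps: float, z0=(1.0, 1.0)) -> float:
    a1, a2, q1, q2 = -1.0, -0.5, 1.0, 1.0   # hypothetical coefficients
    p1 = scalar_are(a1, q1, 1.0)            # non-cheap (essential) control
    p2 = scalar_are(a2, q2, eps ** 2)       # cheap control, cost eps^2
    return p1 * z0[0] ** 2 + p2 * z0[1] ** 2

costs = [j_star(eps) for eps in (1.0, 0.1, 0.01, 0.001)]
assert all(c1 > c2 for c1, c2 in zip(costs, costs[1:]))  # J*_eps decreases
```

The monotone decrease of the computed costs mirrors the behavior of J*_ε reported in Table 1 below, with the limit equal to the essential part p1 * z01² rather than zero.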
The matrices of the coefficients in the problem (104)-(105) are: Therefore, in this example the equations (6), (9) and (11) coincide with the equations (17), (18) and (19), respectively. Let us proceed to the asymptotic solution of these equations. We start with (17), i.e., with the 2 × 2 matrix algebraic Riccati equation (43).

In Table 1, the optimal value J*_ε of the cost functional (105) is presented for several values of ε > 0. It is seen that, as ε decreases, the value J*_ε also decreases, approaching J* = 8.36, which is the best achievable performance value in the PCCP and the infimum of the cost functional in the DOCP.

6. Conclusions. In this paper, an infinite horizon linear-quadratic optimal control problem for a system with known time-varying additive disturbance is considered. The weight matrix of the control cost in the cost functional of this problem is block-diagonal with a small positive multiplier in one of the blocks. Both blocks are positive definite. However, when the small multiplier is replaced with zero, the weight matrix of the control cost becomes singular (but in general non-zero). Due to such a structure of this matrix, the considered problem is a partial cheap control problem, i.e., a part of the coordinates of the control is cheap, while the others are non-cheap (essential). Along with the original (partial cheap control) problem, the degenerate (singular) optimal control problem is considered. The latter is obtained from the former by replacing the small multiplier in the weight matrix of the control cost with zero. Using the control optimality conditions, the solution of the original problem is reduced to the solution of a set of three equations: an algebraic matrix Riccati equation, a vector differential equation and a scalar differential equation. The differential equations are subject to zero conditions at infinity. All the equations are perturbed by a small parameter.
The algebraic equation and the scalar differential equation are regularly perturbed, while the vector differential equation is singularly perturbed. The zero-order asymptotic solution to this set of equations is constructed and justified, yielding a new type of the remainders' estimates for the asymptotic solutions of the differential equations. These estimates depend not only on the small parameter but also on the time. The estimates decay asymptotically when either the small parameter tends to zero or the time tends to infinity. Based on the above-mentioned asymptotic solution, two relations between the solutions of the partial cheap control and degenerate problems are established. The first relation shows the closeness of the optimal value of the cost functional in the partial cheap control problem to the infimum of the cost functional in the degenerate problem. The second relation shows that the optimal state-feedback control in the partial cheap control problem constitutes a minimizing sequence of state-feedback controls in the degenerate problem. Using these relations, the best achievable performance value in the partial cheap control problem is obtained. This value represents the minimal sum of the state regulation error and the effort of the essential (non-cheap) part of the control.

7. Appendix: Proof of Lemma 3.3.

Then, there exists a positive number ε̄, (ε̄ ≤ ε̂), such that, for all ε ∈ (0, ε̄], the following inequalities are satisfied, where b > 0 is some constant independent of ε.

Proof. The statement of the proposition directly follows from the results of [11] (Theorem 2.3).