SINGULAR INFINITE HORIZON ZERO-SUM LINEAR-QUADRATIC DIFFERENTIAL GAME: SADDLE-POINT EQUILIBRIUM SEQUENCE

Abstract. We consider an infinite horizon zero-sum linear-quadratic differential game in which the cost functional does not contain a control cost of the minimizing player (the minimizer). This feature means that the game under consideration is singular. For this game, novel definitions of the saddle-point equilibrium and game value are proposed. To obtain this saddle-point equilibrium and game value, we associate the singular game with a new differential game for the same equation of dynamics. The cost functional in the new game is the sum of the original cost functional and an infinite horizon integral of the square of the minimizer's control with a small positive weight coefficient. This new game is regular; moreover, it is a cheap control game. Using the solvability conditions, the solution of the cheap control game is reduced to the solution of a Riccati matrix algebraic equation with an indefinite quadratic term. This equation is perturbed by a small parameter. Subject to a proper assumption, an asymptotic expansion of a stabilizing solution to this equation is constructed and justified. Using this asymptotic expansion, the existence of the saddle-point equilibrium and the value of the original game is established, and their expressions are derived. An illustrative example is presented.

1. Introduction. A singular differential game is a game that cannot be solved by application of the first-order solvability conditions. For instance, a singular zero-sum differential game can be solved neither by the Isaacs MinMax principle nor by the Bellman-Isaacs equation method ([3,5,17,23]). A Nash equilibrium set of controls in a singular non-zero-sum differential game can be obtained neither by application of the first-order variational method nor by application of the generalized Hamilton-Jacobi-Bellman equation method ([3,35]). All this occurs because the problem of minimization (maximization) of the game's variational Hamiltonian with respect to a singular control either has no solution or has infinitely many solutions.
In the open literature, mostly the finite horizon version of singular differential games has been studied. Thus, in [6,26,32,34], higher order optimality conditions were used for the solution of various such zero-sum games. A numerical method for the solution of a singular zero-sum game was proposed in [7]. In [16,31], solutions of singular zero-sum games were derived by geometric analysis of the set of all candidate optimal trajectories. In [1], an open-loop solution of a singular zero-sum differential game was derived in a class of generalized functions. In [12,33], singular zero-sum differential games were analyzed by regularization, yielding for each game its value, an optimal state-feedback control of the maximizing player and a minimizing sequence of state-feedback controls for the opponent. In [12], an optimal trajectory sequence and a generalized optimal trajectory in the considered singular game were also obtained. In [11], using a regularization approach, a Nash equilibrium sequence was obtained in a two-person singular non-zero-sum differential game.
Infinite horizon singular differential games have received much less attention; to the best of our knowledge, such games were studied only in a few works. In [36], the existence of an almost equilibrium of a singular zero-sum game was established using a Riccati matrix inequality. In [13], using a regularization approach, a minimizing sequence of state-feedback controls was designed and an upper value was obtained for a singular zero-sum game. In [41], an asymptotic (with respect to time) Nash equilibrium was designed in a two-person singular non-zero-sum game.
In the present paper, an infinite horizon zero-sum linear-quadratic differential game is considered. The distinguishing feature of this game is that the weight matrix of the minimizer's control cost in the cost functional is zero. Due to this feature, the game can be solved neither by application of the Isaacs MinMax principle nor by the Bellman-Isaacs equation approach, i.e., this game is singular. Because of this singularity, the well-known definitions of the saddle-point equilibrium and the game value are not applicable to this game. Therefore, novel definitions of these notions are proposed. The considered game is treated using the regularization method, which yields a new game (a cheap control game). Using perturbation techniques, the asymptotic behavior of this cheap control game is analyzed. Based on this analysis, the existence of the saddle-point equilibrium and the game value in the original (singular) game is established, and their expressions are obtained.
It is important to note the following. Although the regularization approach to the study of the singular zero-sum game in the present paper is similar to the approach of the works [12,13,33], the main results of the present paper, as well as their derivation, differ considerably from those of these works. Namely, in the works [12,33], finite horizon games were considered in wide classes of the players' admissible feedback controls. Each control of such a class guarantees the existence and uniqueness of the absolutely continuous solution to the equation of the game's dynamics (subject to a given initial condition) against any square-integrable open-loop control of the opponent. Moreover, the time realization of the feedback control along this solution is square-integrable. In these classes of the players' admissible feedback controls, subject to proper conditions, the complete solutions of the games (the optimal control of the maximizing player, the optimal control sequence of the minimizing player and the game value) were obtained. It is well known that a regular finite horizon zero-sum linear-quadratic differential game is also solvable in these classes of the players' admissible controls, subject to the absence of conjugate points in the solution of the corresponding Riccati matrix differential equation (see, e.g., [3]). However, this is not so in the case of a regular infinite horizon zero-sum linear-quadratic differential game. For this game, the value and the optimal feedback control of the minimizing player exist subject to the existence of the minimal positive definite solution of the corresponding Riccati matrix algebraic equation; however, only a suboptimal feedback control of the maximizing player (the maximizer) exists in this game (see, e.g., [3,25]). This feature yields the existence of an almost saddle-point equilibrium, but not an exact equilibrium, in this infinite horizon game.
The exact saddle-point equilibrium in the regular infinite horizon zero-sum linear-quadratic differential game exists in a restricted class of the players' admissible feedback controls [18]. In the short conference paper [13], the singular infinite horizon game was considered in the wide class of the minimizer's admissible feedback controls. Due to the above-mentioned feature of the regular infinite horizon game, only the minimizing control sequence was designed, but neither an optimal maximizer's feedback control nor a saddle point was obtained. In the present paper, we consider the singular infinite horizon game in the restricted class of the players' admissible feedback controls. This consideration allows us to obtain essentially novel results, namely: (i) to define a saddle-point equilibrium sequence of the players' controls in the game; (ii) to define a value of the game; (iii) to derive explicit expressions for this sequence and the game value.
The paper is organized as follows. In Section 2, the rigorous formulation of the problem is presented. Main definitions are formulated. Objectives of the paper are stated. Auxiliary results are placed in Section 3. These results include the regularization of the original (singular) game and an asymptotic analysis of the regularized (cheap control) game. Main results of the paper (the saddle-point equilibrium and the game value) are derived in Section 4. In Section 5, an illustrative example is solved. Conclusions appear in Section 6.
The following main notations are applied in the paper: (1) $E^n$ is the $n$-dimensional real Euclidean space; (2) $\|\cdot\|$ denotes the Euclidean norm either of a vector or of a matrix; (3) the superscript "$T$" denotes the transposition of matrices and vectors; (4) $L^2[0,+\infty; E^n]$ is the space of $n$-dimensional vector-valued functions $f(t)$, square-integrable on the interval $[0,+\infty)$; (5) $I_n$ is the $n$-dimensional identity matrix; (6) $\mathrm{col}(x,y)$, where $x \in E^n$, $y \in E^m$, denotes the column block-vector of dimension $n+m$ with the upper block $x$ and the lower block $y$; (7) for a symmetric matrix $A$, the notation $A > 0$ ($A \ge 0$) means that the matrix $A$ is positive definite (positive semi-definite).

2. Problem Statement and Main Definitions.
2.1. Game Formulation. The dynamics of the game is described by the system

$$\frac{dx(t)}{dt} = A_1 x(t) + A_2 y(t) + C_1 v(t), \quad t \ge 0, \quad x(0) = x_0, \qquad (1)$$
$$\frac{dy(t)}{dt} = A_3 x(t) + A_4 y(t) + u(t) + C_2 v(t), \quad t \ge 0, \quad y(0) = y_0, \qquad (2)$$
VALERY Y. GLIZER AND OLEG KELIS
where $x(t) \in E^n$ and $y(t) \in E^m$ are the state vectors; $u(t) \in E^m$ and $v(t) \in E^s$ are the players' controls; $A_i$ $(i = 1, \dots, 4)$ and $C_j$ $(j = 1, 2)$ are given constant matrices of corresponding dimensions; $x_0 \in E^n$ and $y_0 \in E^m$ are given vectors.
The cost functional, to be minimized by $u$ (the minimizer) and maximized by $v$ (the maximizer), is
$$J(u,v) = \int_0^{+\infty} \Big[ x^T(t) D_1 x(t) + y^T(t) D_2 y(t) - v^T(t) G v(t) \Big] dt, \qquad (3)$$
where $D_j$ $(j = 1, 2)$ and $G$ are given symmetric positive definite matrices of corresponding dimensions.
Since the cost functional $J(u,v)$ does not contain a quadratic control cost of the minimizer, the game (1)-(3) can be solved neither by the Isaacs MinMax principle nor by the Bellman-Isaacs equation method, i.e., it is singular. We call the game (1)-(3) the Singular Differential Game (SDG).
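To illustrate the mechanism of this singularity (a standard observation, written out here for completeness; the costate notation $\psi = \mathrm{col}(\psi_x, \psi_y)$ is ours), note that the variational Hamiltonian of the game (1)-(3) is affine in $u$:

$$\mathcal{H}(x, y, u, v, \psi) = \psi_x^T\big(A_1 x + A_2 y + C_1 v\big) + \psi_y^T\big(A_3 x + A_4 y + u + C_2 v\big) + x^T D_1 x + y^T D_2 y - v^T G v.$$

Since $\mathcal{H}$ contains no quadratic term in $u$, the minimization $\min_{u \in E^m} \mathcal{H}$ yields $-\infty$ whenever $\psi_y \ne 0$, and is attained by every $u \in E^m$ whenever $\psi_y = 0$; hence the first-order MinMax conditions determine no minimizing control.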

2.2. Main Definitions. Let us introduce the block vectors
$$z = \mathrm{col}(x, y), \quad z_0 = \mathrm{col}(x_0, y_0), \qquad (4)$$
where $x \in E^n$, $y \in E^m$; $x_0$ and $y_0$ are given in (1)-(2). Consider the set $\mathcal{U}$ of all functions $u = u(z,t): E^{n+m} \times [0,+\infty) \to E^m$ which are measurable with respect to $t \ge 0$ for any fixed $z \in E^{n+m}$ and satisfy the local Lipschitz condition with respect to $z \in E^{n+m}$ uniformly in $t \ge 0$. Similarly, let $\mathcal{V}$ be the set of all functions $v = v(z,t): E^{n+m} \times [0,+\infty) \to E^s$ which are measurable with respect to $t \ge 0$ for any fixed $z \in E^{n+m}$ and satisfy the local Lipschitz condition with respect to $z \in E^{n+m}$ uniformly in $t \ge 0$.

Definition 2.1. By $\mathcal{UV}$, we denote the set of all pairs $\big(u(z,t), v(z,t)\big)$ such that the following conditions are valid: (i) $u(z,t) \in \mathcal{U}$, $v(z,t) \in \mathcal{V}$; (ii) the initial-value problem (1)-(2) for $u(t) = u(z,t)$, $v(t) = v(z,t)$ and any $x_0 \in E^n$, $y_0 \in E^m$ has the unique locally absolutely continuous solution $z_{uv}(t; z_0) = \mathrm{col}\big(x_{uv}(t; z_0), y_{uv}(t; z_0)\big)$ on the entire interval $[0,+\infty)$.

In what follows, $\mathcal{UV}$ is called the set of all admissible pairs of the players' state-feedback controls (strategies) in the SDG.
For a given $u(z,t) \in \mathcal{U}$, consider the set $F_v\big(u(z,t)\big) = \big\{ v(z,t) \in \mathcal{V} : \big(u(z,t), v(z,t)\big) \in \mathcal{UV} \big\}$, and let $\mathcal{H}_u$ denote the set of all $u(z,t) \in \mathcal{U}$ for which $F_v\big(u(z,t)\big)$ is nonempty.

Definition 2.2. For a given $u(z,t) \in \mathcal{H}_u$, the value
$$J_u\big(u(z,t); z_0\big) = \sup_{v(z,t) \in F_v(u(z,t))} J\big(u(z,t), v(z,t)\big) \qquad (5)$$
is called the guaranteed result of $u(z,t)$ in the SDG.
Similarly, for a given $v(z,t) \in \mathcal{V}$, consider the set $F_u\big(v(z,t)\big) = \big\{ u(z,t) \in \mathcal{U} : \big(u(z,t), v(z,t)\big) \in \mathcal{UV} \big\}$, and let $\mathcal{H}_v$ denote the set of all $v(z,t) \in \mathcal{V}$ for which $F_u\big(v(z,t)\big)$ is nonempty.

Definition 2.3. For a given $v(z,t) \in \mathcal{H}_v$, the value
$$J_v\big(v(z,t); z_0\big) = \inf_{u(z,t) \in F_u(v(z,t))} J\big(u(z,t), v(z,t)\big) \qquad (6)$$
is called the guaranteed result of $v(z,t)$ in the SDG.
Definition 2.4. A sequence of pairs $\big\{\big(u_k^*(z,t), v_k^*(z,t)\big)\big\}_{k=1}^{+\infty} \subset \mathcal{UV}$, with $u_k^*(z,t) \in \mathcal{H}_u$ and $v_k^*(z,t) \in \mathcal{H}_v$, is called a saddle-point equilibrium sequence of the SDG if: (i) the limit value $J^*(z_0) = \lim_{k \to +\infty} J_u\big(u_k^*(z,t); z_0\big)$ exists for any $z_0 \in E^{n+m}$; (ii) for any $z_0 \in E^{n+m}$, the following inequality is satisfied:
$$\lim_{k \to +\infty} J_u\big(u_k^*(z,t); z_0\big) \le \lim_{k \to +\infty} J_v\big(v_k^*(z,t); z_0\big).$$
The value $J^*(z_0)$ is called a value of the SDG.

2.3. Objectives of the Paper. The objectives of this paper are: (I) to establish the existence of a saddle-point equilibrium sequence and a value of the SDG; (II) to derive explicit expressions for this sequence and this value.

3. Auxiliary Results.

3.1. Auxiliary Lemma. Consider the block matrices
$$A = \begin{pmatrix} A_1 & A_2 \\ A_3 & A_4 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ I_m \end{pmatrix}, \quad C = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}, \quad D = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}. \qquad (7)$$
The matrices $A$ and $D$ are of the dimension $(n+m) \times (n+m)$, while the matrices $B$ and $C$ are of the dimensions $(n+m) \times m$ and $(n+m) \times s$, respectively. Let the pair $\{A, B\}$ be stabilizable, and let $M$ be an $m \times (n+m)$-matrix such that the trivial solution of the system
$$dz(t)/dt = (A + BM) z(t), \quad t \ge 0, \qquad (8)$$
is asymptotically stable. Consider the following strategy of the minimizer:
$$u_M(z) = M z. \qquad (9)$$
Along with this strategy, let us consider the Riccati matrix algebraic equation
$$K(A + BM) + (A + BM)^T K + K S_v K + D = 0, \quad S_v = C G^{-1} C^T. \qquad (10)$$

Lemma 3.1. Let the equation (10) have a symmetric solution $K = K_M$ such that the trivial solution of the system
$$dz(t)/dt = \big(A + BM + S_v K_M\big) z(t), \quad t \ge 0, \qquad (11)$$
is asymptotically stable. Then: (i) the pair $\big(u_M(z), v_M(z)\big)$, where
$$v_M(z) = G^{-1} C^T K_M z, \qquad (12)$$
belongs to $\mathcal{UV}$, and therefore $u_M(z) \in \mathcal{H}_u$; (ii) for any $v(z,t) \in F_v\big(u_M(z)\big)$ and any $z_0 \in E^{n+m}$, $J\big(u_M(z), v(z,t)\big) \le z_0^T K_M z_0$; (iii) $J\big(u_M(z), v_M(z)\big) = z_0^T K_M z_0$; (iv) the guaranteed result of $u_M(z)$ in the SDG is $J_u\big(u_M(z); z_0\big) = z_0^T K_M z_0$.

Proof. Let us start with the item (i). To prove this item, it is sufficient to show the inclusion in $\mathcal{UV}$ of the pair $\big(u_M(z), v_M(z)\big)$ given in (9) and (12). Since $v_M(z)$ is linear with respect to $z$ and the gain in this function is independent of $t$, then $v_M(z) \in \mathcal{V}$. Also, it should be noted that the system (11) is obtained from the system (1)-(2) by replacing $u(t)$ and $v(t)$ with $u_M(z)$ and $v_M(z)$, respectively. Let, for any given $z_0 \in E^{n+m}$, $z_M(t; z_0)$ be the solution of (11) subject to the initial condition $z(0) = z_0$. Since the trivial solution of the system (11) is asymptotically stable, $z_M(t; z_0)$ satisfies the inequality $\|z_M(t; z_0)\| \le a \|z_0\| \exp(-\beta t)$, $t \ge 0$, where $a > 0$ and $\beta > 0$ are some constants. This inequality implies the inclusion $z_M(t; z_0) \in L^2[0,+\infty; E^{n+m}]$. Therefore, due to (9) and (12), the pair $\big(u_M(z), v_M(z)\big)$ belongs to $\mathcal{UV}$, which proves the item (i).

Proceed to the item (ii). Consider the Lyapunov-like function
$$V(z) = z^T K_M z. \qquad (13)$$
Let, for any given $v(z,t) \in F_v\big(u_M(z)\big)$ and any given $z_0 \in E^{n+m}$, $z_v(t; z_0)$ be the corresponding solution of (1)-(2) with $u(t) = u_M(z)$. Differentiating $V\big(z_v(t; z_0)\big)$ with respect to $t$ and taking into account (7), we obtain after a simple algebra:
$$dV/dt = z_v^T \big[K_M (A + BM) + (A + BM)^T K_M\big] z_v + 2 z_v^T K_M C v. \qquad (14)$$
Due to (10), we can rewrite the equation (14) as
$$dV/dt = - z_v^T \big(D + K_M S_v K_M\big) z_v + 2 z_v^T K_M C v. \qquad (15)$$
Using (12), the equation (15) can be represented in the form
$$dV/dt = -\big(z_v^T D z_v - v^T G v\big) - \big(v - v_M(z_v)\big)^T G \big(v - v_M(z_v)\big). \qquad (16)$$
Since $G > 0$, the equation (16) yields the inequality $dV/dt + z_v^T D z_v - v^T G v \le 0$, $t \ge 0$. Integrating this inequality with respect to $t$ from $0$ to $+\infty$ and taking into account that $\lim_{t \to +\infty} z_v(t; z_0) = 0$, we obtain the statement of the item (ii). Now, let us proceed to the proof of the items (iii) and (iv).
Equation (16) directly yields the inequality
$$J\big(u_M(z), v(z,t)\big) \le z_0^T K_M z_0, \quad v(z,t) \in F_v\big(u_M(z)\big). \qquad (17)$$
Now, setting $v(z,t) = v_M(z)$ in the equation (16) and treating the obtained equation similarly to the inequality (17), we obtain the equality
$$J\big(u_M(z), v_M(z)\big) = z_0^T K_M z_0. \qquad (18)$$
Comparison of this equality with the inequality (17) immediately implies the validity of the items (iii) and (iv), which completes the proof of the lemma.

3.2. Regularization of the SDG. In the analysis of the SDG, we use a regularization of this game. Namely, we replace the original singular game with a regular differential game, which is close in some sense to the SDG. This new game consists of the same dynamics (1)-(2) as the SDG and the new "regular" cost functional
$$J_\varepsilon(u, v) = \int_0^{+\infty} \Big[ x^T(t) D_1 x(t) + y^T(t) D_2 y(t) + \varepsilon^2 u^T(t) u(t) - v^T(t) G v(t) \Big] dt, \qquad (19)$$
where $\varepsilon > 0$ is a small parameter.
Remark 1. The regularization approach has been widely applied in the literature to the analysis and solution of singular optimal control problems (see, e.g., [4,9,10,24] and references therein). However, to the best of our knowledge, this approach was applied to a rigorous analysis of singular zero-sum linear-quadratic differential games only in the three papers [12,13,33].
Remark 2. Since the parameter $\varepsilon > 0$ is small, the game (1)-(2), (19) is a cheap control differential game, i.e., a differential game in which the control cost of at least one of the players in the cost functional is much smaller than the state cost. In what follows, we call the game (1)-(2), (19) the Cheap Control Differential Game (CCDG). Finite horizon cheap control differential games (zero-sum and non-zero-sum) were studied in a number of works (see, e.g., [8,11,33,35,38,39] and references therein), while infinite horizon cheap control differential games were studied only in the works [13,29]. Since for any $\varepsilon > 0$ the weight matrix of the minimizer's control cost in the cost functional (19) is positive definite, the CCDG is a regular differential game. The set of all admissible pairs of the players' state-feedback controls (strategies) in the CCDG coincides with such a set in the SDG, i.e., it is $\mathcal{UV}$.
Remark 3. Cheap control differential games are closely related to differential games with singularly perturbed dynamics. The latter were considered in a number of works (see e.g. [27,28,40] and references therein).

3.3. Saddle-Point Equilibrium in the CCDG. Consider the following Riccati matrix algebraic equation:
$$P A + A^T P - P S_u(\varepsilon) P + P S_v P + D = 0, \qquad (20)$$
where
$$S_u(\varepsilon) = \frac{1}{\varepsilon^2} B B^T, \quad S_v = C G^{-1} C^T. \qquad (21)$$

Let $\varepsilon > 0$ be given. In the sequel of this subsection, we assume:
(A1) the equation (20) has a symmetric solution $P = P^*(\varepsilon)$ such that the trivial solution of each of the following systems:
$$dz(t)/dt = \big[A - S_u(\varepsilon) P^*(\varepsilon) + S_v P^*(\varepsilon)\big] z(t), \quad t \ge 0, \qquad (22)$$
$$dz(t)/dt = \big[A - S_u(\varepsilon) P^*(\varepsilon)\big] z(t), \quad t \ge 0, \qquad (23)$$
is asymptotically stable. Consider the functions
$$u_\varepsilon^*(z) = -\frac{1}{\varepsilon^2} B^T P^*(\varepsilon) z, \quad v_\varepsilon^*(z) = G^{-1} C^T P^*(\varepsilon) z. \qquad (24)$$

Lemma 3.2. Let, for a given $\varepsilon > 0$, the assumption (A1) be valid. Then: (a) the pair $\big(u_\varepsilon^*(z), v_\varepsilon^*(z)\big)$ belongs to $\mathcal{UV}$; (b) for any $v(z,t) \in F_v\big(u_\varepsilon^*(z)\big)$ and any $z_0 \in E^{n+m}$, $J_\varepsilon\big(u_\varepsilon^*(z), v(z,t)\big) \le z_0^T P^*(\varepsilon) z_0$; (c) this pair is a saddle-point equilibrium in the regular zero-sum CCDG; (d) the value of the CCDG is $J_\varepsilon^*(z_0) = z_0^T P^*(\varepsilon) z_0$.

The item (a) of the lemma directly follows from the equations (21), (24) and the asymptotic stability of the trivial solution to the system (22). The item (b) is proven similarly to the item (ii) of Lemma 3.1 with replacing $M$ by $-\frac{1}{\varepsilon^2} B^T P^*(\varepsilon)$ and $K_M$ by $P^*(\varepsilon)$. Finally, the items (c) and (d) are immediate consequences of the results of [18].
Remark 4. In the next subsection, an asymptotic analysis (for $\varepsilon \to +0$) of the CCDG is carried out. Using this analysis, the existence of the solution $P^*(\varepsilon)$ to the equation (20), mentioned in the assumption (A1), is established, and some additional results on the CCDG are obtained. To the best of our knowledge, an asymptotic analysis of infinite horizon zero-sum linear-quadratic cheap control differential games was carried out only in the two papers [13,29]. In both papers, the case of the wide classes of the players' admissible feedback controls (see [3,25]) was considered. In [29], under the assumption that the transfer function matrix is right invertible and strictly minimum phase, the limit behaviour of the game value was studied; it was shown that this value tends to zero as the control cost of the minimizer tends to zero. In [13], the zero-order asymptotic expansion of the minimal positive definite solution to the corresponding Riccati matrix algebraic equation was constructed and justified. In the present paper, the CCDG is considered in the restricted class of the admissible pairs of the players' feedback controls (the class $\mathcal{UV}$). Therefore, here we analyze the asymptotic behaviour of the solution to the equation (20) which satisfies another requirement, namely, the conditions of the assumption (A1).
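For completeness, we sketch how a stabilizing solution of a Riccati equation of the type (20) can be computed numerically. The sketch below is our own illustration (not part of the paper's development): it assumes the equation has the form $PA + A^TP - P\big(S_u(\varepsilon) - S_v\big)P + D = 0$ and uses the classical stable-invariant-subspace method for the associated Hamiltonian matrix; the toy matrices are arbitrary.

```python
import numpy as np

def stabilizing_riccati_solution(A, S, Q):
    """Stabilizing solution of P A + A^T P - P S P + Q = 0, computed from
    the stable invariant subspace of the Hamiltonian matrix H."""
    n = A.shape[0]
    H = np.block([[A, -S], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # n eigenvectors with Re(lambda) < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    # Assumes X1 is invertible (true when a stabilizing solution exists).
    return np.real(X2 @ np.linalg.inv(X1))

# Toy data with n = m = s = 1, so the full matrices are 2x2 / 2x1 blocks.
eps = 0.1
A = np.array([[0.0, 1.0], [1.0, -1.0]])
B = np.array([[0.0], [1.0]])           # B = col(0, I_m)
C = np.array([[0.5], [0.5]])
Dmat = np.eye(2)                       # D = diag(D1, D2), D1 = D2 = 1
G = np.array([[2.0]])

S_u = (1.0 / eps**2) * B @ B.T         # S_u(eps) = (1/eps^2) B B^T
S_v = C @ np.linalg.inv(G) @ C.T       # S_v = C G^{-1} C^T
P = stabilizing_riccati_solution(A, S_u - S_v, Dmat)

residual = P @ A + A.T @ P - P @ (S_u - S_v) @ P + Dmat
closed_loop = A - (S_u - S_v) @ P      # should be a Hurwitz matrix
print(np.max(np.abs(residual)))
print(np.linalg.eigvals(closed_loop).real)
```

By construction, the spectrum of the closed-loop matrix coincides with the stable eigenvalues of the Hamiltonian matrix, which is exactly the stability requirement of the type imposed in the assumption (A1).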

3.4. Asymptotic Analysis of the CCDG. Similarly to [13], let us make the following block transformation in the equation (20):
$$P(\varepsilon) = \begin{pmatrix} P_1(\varepsilon) & \varepsilon P_2(\varepsilon) \\ \varepsilon P_2^T(\varepsilon) & \varepsilon P_3(\varepsilon) \end{pmatrix}, \qquad (25)$$
where the blocks $P_1(\varepsilon)$, $P_2(\varepsilon)$ and $P_3(\varepsilon)$ have the dimensions $n \times n$, $n \times m$ and $m \times m$, respectively.

Due to this transformation and the equations (7), (21), the equation (20) can be rewritten in the equivalent form:
$$P_1 A_1 + A_1^T P_1 + \varepsilon\big(P_2 A_3 + A_3^T P_2^T\big) - P_2 P_2^T + P_1 S_{v1} P_1 + \varepsilon\big(P_1 S_{v2} P_2^T + P_2 S_{v2}^T P_1\big) + \varepsilon^2 P_2 S_{v3} P_2^T + D_1 = 0, \qquad (26)$$
$$P_1 A_2 + \varepsilon\big(P_2 A_4 + A_1^T P_2 + A_3^T P_3\big) - P_2 P_3 + \varepsilon\big(P_1 S_{v1} P_2 + P_1 S_{v2} P_3\big) + \varepsilon^2\big(P_2 S_{v2}^T P_2 + P_2 S_{v3} P_3\big) = 0, \qquad (27)$$
$$\varepsilon\big(P_2^T A_2 + A_2^T P_2 + P_3 A_4 + A_4^T P_3\big) - P_3^2 + \varepsilon^2\big(P_2^T S_{v1} P_2 + P_3 S_{v2}^T P_2 + P_2^T S_{v2} P_3 + P_3 S_{v3} P_3\big) + D_2 = 0, \qquad (28)$$
where $S_{v1} = C_1 G^{-1} C_1^T$, $S_{v2} = C_1 G^{-1} C_2^T$, $S_{v3} = C_2 G^{-1} C_2^T$. We seek the zero-order asymptotic solution $P_{10}$, $P_{20}$, $P_{30}$ of the system (26)-(28). Equations for the terms of this asymptotic solution are obtained by setting formally $\varepsilon = 0$ in (26)-(28), yielding
$$P_{10} A_1 + A_1^T P_{10} - P_{20} P_{20}^T + P_{10} S_{v1} P_{10} + D_1 = 0, \quad P_{10} A_2 - P_{20} P_{30} = 0, \quad -P_{30}^2 + D_2 = 0. \qquad (29)$$
Solving this system, we obtain similarly to [13]
$$P_{30} = D_2^{1/2}, \qquad P_{20} = P_{10} A_2 D_2^{-1/2}, \qquad (30)$$
where $D_2^{1/2}$ is the unique symmetric positive definite square root of the matrix $D_2$, $D_2^{-1/2}$ is the inverse matrix of this square root, and the matrix $P_{10}$ satisfies the equation
$$P_{10} A_1 + A_1^T P_{10} - P_{10} S_1 P_{10} + D_1 = 0, \qquad S_1 = A_2 D_2^{-1} A_2^T - S_{v1}. \qquad (31)$$
In what follows, we assume:
(A2) The Riccati matrix algebraic equation (31) has a symmetric solution $P_{10} = P_{10}^*$ such that the trivial solution of each of the following systems:
$$d\bar x(t)/dt = \big(A_1 - S_1 P_{10}^*\big) \bar x(t), \quad t \ge 0, \qquad (32)$$
$$d\bar x(t)/dt = \big(A_1 - A_2 D_2^{-1} A_2^T P_{10}^*\big) \bar x(t), \quad t \ge 0, \qquad (33)$$
is asymptotically stable. Using the above mentioned solution of (31) and the equation (30), we obtain the second component of the solution to the system (29) as:
$$P_{20}^* = P_{10}^* A_2 D_2^{-1/2}. \qquad (34)$$

Lemma 3.3. Let the assumption (A2) be valid. Then, there exists a positive number $\varepsilon_0$ such that, for all $\varepsilon \in (0, \varepsilon_0]$, the system (26)-(28) has the solution $\big\{P_1^*(\varepsilon), P_2^*(\varepsilon), P_3^*(\varepsilon)\big\}$ satisfying the inequalities
$$\big\| P_i^*(\varepsilon) - P_{i0}^* \big\| \le a \varepsilon, \quad i = 1, 2, 3, \qquad (35)$$
where $a > 0$ is some constant independent of $\varepsilon$; moreover, for all sufficiently small $\varepsilon > 0$, the matrix
$$P^*(\varepsilon) = \begin{pmatrix} P_1^*(\varepsilon) & \varepsilon P_2^*(\varepsilon) \\ \varepsilon P_2^{*T}(\varepsilon) & \varepsilon P_3^*(\varepsilon) \end{pmatrix} \qquad (36)$$
satisfies the assumption (A1).
Proceed to the proof of the second part of the lemma. Here, we start with the system (22). Substituting the block representation of the vector $z$ (see (4)) and the block representations of the matrices $A$, $S_u(\varepsilon)$, $S_v$, $P^*(\varepsilon)$ (see (7), (21), (36)) into (22), we obtain after a routine algebra the equivalent system
$$dx(t)/dt = \big[A_1 + S_{v1} P_1^*(\varepsilon) + \varepsilon S_{v2} P_2^{*T}(\varepsilon)\big] x(t) + \big[A_2 + \varepsilon S_{v1} P_2^*(\varepsilon) + \varepsilon S_{v2} P_3^*(\varepsilon)\big] y(t),$$
$$\varepsilon\, dy(t)/dt = \big[\varepsilon A_3 + \varepsilon S_{v2}^T P_1^*(\varepsilon) + \varepsilon^2 S_{v3} P_2^{*T}(\varepsilon) - P_2^{*T}(\varepsilon)\big] x(t) + \big[\varepsilon A_4 + \varepsilon^2 S_{v2}^T P_2^*(\varepsilon) + \varepsilon^2 S_{v3} P_3^*(\varepsilon) - P_3^*(\varepsilon)\big] y(t), \qquad (37)$$
$t \ge 0$. Due to the smallness of the parameter $\varepsilon > 0$, the system (37) is singularly perturbed [22]. We prove the asymptotic stability of the trivial solution to this system using the result of [22] (Corollary 3.1). Due to this result, if the trivial solutions of the slow and fast subsystems associated with (37) are asymptotically stable, then for all sufficiently small $\varepsilon > 0$ the trivial solution of (37) is asymptotically stable. Setting formally $\varepsilon = 0$ in (37), using the inequalities (35), and eliminating the state variable $y(t)$ from the resulting system yield after a routine algebra the slow subsystem associated with (37):
$$dx(t)/dt = \big(A_1 - S_1 P_{10}^*\big) x(t), \quad t \ge 0, \qquad (38)$$
where $S_1$ is given in (31). The differential equation (38) coincides with the equation (32). Therefore, due to the assumption (A2), the trivial solution of (38) is asymptotically stable.
The fast subsystem associated with (37) is obtained from the second equation of this system in the following formal way. First, we remove from this equation the term depending on $x(t)$. Second, we make in the obtained equation the transformation of variables $t = \varepsilon \xi$, $y_f(\xi) = y(\varepsilon \xi)$, where $\xi$ and $y_f(\xi)$ are the new independent variable and state variable, respectively. Finally, setting formally $\varepsilon = 0$ in the transformed equation yields the fast subsystem
$$dy_f(\xi)/d\xi = -P_{30}^* y_f(\xi), \quad \xi \ge 0.$$
Since the matrix $P_{30}^* = D_2^{1/2}$ is positive definite, the trivial solution of this differential equation is asymptotically stable. Therefore, by virtue of the above mentioned result of [22], there exists a positive number $\varepsilon_1$ such that, for all $\varepsilon \in (0, \varepsilon_1]$, the trivial solution of the system (37) with $P^*(\varepsilon)$, given by (36), is asymptotically stable. Since the system (22) is equivalent to (37), the trivial solution of the former is also asymptotically stable for all $\varepsilon \in (0, \varepsilon_1]$.
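The two-time-scale structure exploited in this argument can also be seen numerically. The following toy computation (our own illustration; the coefficients are arbitrary) shows the spectrum of a singularly perturbed system splitting into an $O(1)$ eigenvalue approaching the slow-subsystem pole and an $O(1/\varepsilon)$ eigenvalue approaching the fast-subsystem pole:

```python
import numpy as np

# Toy singularly perturbed system:  dx/dt = a11 x + a12 y,
#                                   eps * dy/dt = a21 x + a22 y,  with a22 < 0.
# Written as dz/dt = A(eps) z, its spectrum splits into a "slow" eigenvalue
# near the slow-subsystem pole a11 - a12*a21/a22 and a "fast" eigenvalue
# near a22/eps.
a11, a12, a21, a22 = -1.0, 2.0, 1.0, -4.0

for eps in (1e-1, 1e-2, 1e-3):
    A_eps = np.array([[a11, a12], [a21 / eps, a22 / eps]])
    lam = np.sort(np.linalg.eigvals(A_eps).real)
    fast, slow = lam[0], lam[1]
    # slow -> a11 - a12*a21/a22 = -0.5,  eps*fast -> a22 = -4.0
    print(eps, slow, eps * fast)
```

The printed slow eigenvalue approaches $a_{11} - a_{12} a_{21}/a_{22}$, while $\varepsilon$ times the fast eigenvalue approaches $a_{22}$, mirroring the role of the subsystems (38) and $dy_f/d\xi = -P_{30}^* y_f$ above.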
The asymptotic stability of the trivial solution to the system (23) with $P^*(\varepsilon)$, given by (36), is shown similarly. Thus, the lemma is proven.

Corollary 1. Let the assumption (A2) be satisfied. Then, for all $\varepsilon \in (0, \varepsilon_1]$, all the statements of Lemma 3.2 are valid.
Proof. Due to Lemma 3.3, all the conditions of the assumption (A1) are satisfied for all ε ∈ (0, ε 1 ]. The latter means the validity of Lemma 3.2 for all these ε.

Consider the value
$$\bar J^*(x_0) = x_0^T P_{10}^* x_0. \qquad (39)$$
Corollary 2. Let the assumption (A2) be satisfied. Then, for all $\varepsilon \in (0, \varepsilon_1]$, the value of the CCDG satisfies the inequality
$$\big| J_\varepsilon^*(z_0) - \bar J^*(x_0) \big| \le a \varepsilon \|z_0\|^2,$$
where $a > 0$ is some constant independent of $\varepsilon$.

Proof. The corollary follows immediately from Lemma 3.2 (item (d)), Lemma 3.3 and the equation (39).

3.5. Reduced Differential Game. Consider the zero-sum linear-quadratic differential game, the dynamics of which is
$$d\bar x(t)/dt = A_1 \bar x(t) + A_2 \bar u(t) + C_1 \bar v(t), \quad t \ge 0, \quad \bar x(0) = x_0, \qquad (40)$$
where $\bar x(t) \in E^n$ is the state vector, and $\bar u(t) \in E^m$, $\bar v(t) \in E^s$ are the players' controls. The cost functional, to be minimized by $\bar u(t)$ and maximized by $\bar v(t)$, has the form
$$\bar J(\bar u, \bar v) = \int_0^{+\infty} \Big[ \bar x^T(t) D_1 \bar x(t) + \bar u^T(t) D_2 \bar u(t) - \bar v^T(t) G \bar v(t) \Big] dt. \qquad (41)$$
We call the game (40)-(41) the Reduced Differential Game (RDG). Consider the set $\bar{\mathcal{U}}$ of all functions $\bar u = \bar u(\bar x, t): E^n \times [0,+\infty) \to E^m$ which are measurable with respect to $t \ge 0$ for any fixed $\bar x \in E^n$ and satisfy the local Lipschitz condition with respect to $\bar x \in E^n$ uniformly in $t \ge 0$. Similarly, let $\bar{\mathcal{V}}$ be the set of all functions $\bar v = \bar v(\bar x, t): E^n \times [0,+\infty) \to E^s$ which are measurable with respect to $t \ge 0$ for any fixed $\bar x \in E^n$ and satisfy the local Lipschitz condition with respect to $\bar x \in E^n$ uniformly in $t \ge 0$.
By $\bar{\mathcal{U}}\bar{\mathcal{V}}$, let us denote the set of all pairs $\big(\bar u(\bar x, t), \bar v(\bar x, t)\big)$ such that the following conditions are valid: (1) $\bar u(\bar x, t) \in \bar{\mathcal{U}}$, $\bar v(\bar x, t) \in \bar{\mathcal{V}}$; (2) the initial-value problem (40) for $\bar u(t) = \bar u(\bar x, t)$, $\bar v(t) = \bar v(\bar x, t)$ and any $x_0 \in E^n$ has the unique locally absolutely continuous solution $\bar x_{\bar u \bar v}(t; x_0)$ on the entire interval $[0,+\infty)$. For any given $\bar u_0(\bar x, t) \in \bar{\mathcal{U}}$ and $\bar v_0(\bar x, t) \in \bar{\mathcal{V}}$, consider the sets $\bar F_{\bar v}\big(\bar u_0\big) = \{\bar v \in \bar{\mathcal{V}} : (\bar u_0, \bar v) \in \bar{\mathcal{U}}\bar{\mathcal{V}}\}$ and $\bar F_{\bar u}\big(\bar v_0\big) = \{\bar u \in \bar{\mathcal{U}} : (\bar u, \bar v_0) \in \bar{\mathcal{U}}\bar{\mathcal{V}}\}$; the guaranteed results of $\bar u_0$ and $\bar v_0$ in the RDG are defined similarly to Definitions 2.2 and 2.3. Similarly to Lemma 3.2, we have immediately the following lemma.

Lemma 3.4. Let the assumption (A2) be valid. Then the pair
$$\bar u^*(\bar x) = -D_2^{-1} A_2^T P_{10}^* \bar x, \quad \bar v^*(\bar x) = G^{-1} C_1^T P_{10}^* \bar x$$
is a saddle-point equilibrium of the RDG, and the value of this game is $\bar J^*(x_0)$, given by (39).
Remark 5. Due to Lemma 3.4, the equation (31) is connected with the RDG by the solvability conditions of the latter. This connection provides a game-theoretic interpretation of the equation (31), arising in the asymptotic solution of the equation (20).

4. Main Results. For any given $\varepsilon \in (0, \varepsilon_1]$, consider the functions
$$u_{\varepsilon,0}^*(z) = -\frac{1}{\varepsilon}\big(P_{20}^{*T} x + P_{30}^* y\big), \quad v_0^*(z) = G^{-1} C_1^T P_{10}^* x. \qquad (42)$$
Remember that $z = \mathrm{col}(x, y)$, $x \in E^n$, $y \in E^m$.
Lemma 4.1. Let the assumption (A2) be valid. Then, there exists a positive number $\varepsilon_2 \le \varepsilon_1$ such that, for all $\varepsilon \in (0, \varepsilon_2]$, the pair $\big(u_{\varepsilon,0}^*(z), v_0^*(z)\big)$ is an admissible pair of the players' state-feedback controls in the SDG, i.e.,
$$\big(u_{\varepsilon,0}^*(z), v_0^*(z)\big) \in \mathcal{UV}. \qquad (43)$$

Proof. The players' strategies (42) are linear with respect to $x$ and $y$, with the gain matrices independent of $t$. Therefore, due to Definition 2.1, to prove the inclusion (43) it is sufficient to show the asymptotic stability of the trivial solution to the closed-loop system obtained from (1)-(2) by replacing there $u(t)$ and $v(t)$ with the above-mentioned strategies. This closed-loop system is
$$dx(t)/dt = \big(A_1 + S_{v1} P_{10}^*\big) x(t) + A_2 y(t), \quad t \ge 0,$$
$$\varepsilon\, dy(t)/dt = \big(\varepsilon A_3 + \varepsilon S_{v2}^T P_{10}^* - P_{20}^{*T}\big) x(t) + \big(\varepsilon A_4 - P_{30}^*\big) y(t), \quad t \ge 0.$$
The asymptotic stability of the trivial solution to this system is shown quite similarly to such a stability of the system (37) in the proof of Lemma 3.3. This completes the proof of the lemma.

Lemma 4.2. Let the assumption (A2) be valid. Then, there exists a positive number $\varepsilon_3 \le \varepsilon_2$ such that, for all $\varepsilon \in (0, \varepsilon_3]$, the guaranteed result $J_u\big(u_{\varepsilon,0}^*(z); z_0\big)$ of $u_{\varepsilon,0}^*(z)$ in the SDG satisfies the inequality
$$\big| J_u\big(u_{\varepsilon,0}^*(z); z_0\big) - \bar J^*(x_0) \big| \le \varepsilon\, c(z_0), \qquad (44)$$
where $c(z_0)$ is some positive constant independent of $\varepsilon$, while depending on $z_0$; $\bar J^*(x_0)$ is the RDG value given by (39).
Proof. Using the matrix
$$M = M(\varepsilon) = -\frac{1}{\varepsilon}\big(P_{20}^{*T}, \; P_{30}^*\big), \qquad (45)$$
we can rewrite the minimizer's feedback strategy in (42) as $u_{\varepsilon,0}^*(z) = M z$. By virtue of Lemma 3.1, we can conclude the following. If for some $\varepsilon > 0$ the trivial solution of the system (8), (45) is asymptotically stable and the equation (10), (45) has a symmetric solution $K = K_M$ such that the trivial solution of the system (11) is asymptotically stable, then $u_{\varepsilon,0}^*(z) \in \mathcal{H}_u$ and its guaranteed result in the SDG is
$$J_u\big(u_{\varepsilon,0}^*(z); z_0\big) = z_0^T K_M z_0. \qquad (46)$$
The asymptotic stability of the trivial solution to the system (8), (45) for all $\varepsilon \in (0, \hat\varepsilon_3]$, where $\hat\varepsilon_3 > 0$ is sufficiently small, is shown similarly to such a stability of the system (37) in the proof of Lemma 3.3. Now, we are going to show the existence of the above-mentioned solution to the equation (10), (45) for all sufficiently small $\varepsilon > 0$. For this purpose, we seek this solution in the block form
$$K_M(\varepsilon) = \begin{pmatrix} K_{M1}(\varepsilon) & \varepsilon K_{M2}(\varepsilon) \\ \varepsilon K_{M2}^T(\varepsilon) & \varepsilon K_{M3}(\varepsilon) \end{pmatrix}, \qquad (47)$$
where the matrices $K_{M1}(\varepsilon)$, $K_{M2}(\varepsilon)$ and $K_{M3}(\varepsilon)$ have the dimensions $n \times n$, $n \times m$ and $m \times m$, respectively. Substitution of the block representations for the matrices $A$, $B$, $D$, $S_v = C G^{-1} C^T$, $M$ and $K_M(\varepsilon)$ (see (7), (21), (45) and (47)) into (10) transforms this equation to an equivalent set of three block equations (48)-(50). Looking for the zero-order asymptotic solution $K_{M1,0}$, $K_{M2,0}$, $K_{M3,0}$ of the equations (48)-(50), we obtain similarly to Subsection 3.4 the following set of equations for these terms:
$$K_{M1,0} A_1 + A_1^T K_{M1,0} - K_{M2,0} P_{20}^{*T} - P_{20}^* K_{M2,0}^T + K_{M1,0} S_{v1} K_{M1,0} + D_1 = 0, \qquad (51)$$
$$K_{M1,0} A_2 - K_{M2,0} P_{30}^* - P_{20}^* K_{M3,0} = 0, \qquad (52)$$
$$K_{M3,0} P_{30}^* + P_{30}^* K_{M3,0} = D_2. \qquad (53)$$
It should be noted that the equation (53) differs considerably from the third equation of the system (29). Namely, the latter is a Riccati algebraic equation, while (53) is a Lyapunov algebraic equation.
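The structural difference noted above matters computationally: a Lyapunov equation such as (53) is linear in its unknown and can be solved directly, without the invariant-subspace machinery a Riccati equation requires. A small sketch (our own illustration; the matrix $D_2$ is arbitrary test data, and (53) is assumed to have the form $K_{M3,0} P_{30}^* + P_{30}^* K_{M3,0} = D_2$ with $P_{30}^* = D_2^{1/2}$, as used here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

# Toy symmetric positive definite weight matrix D2 (illustrative data only).
D2 = np.array([[4.0, 1.0], [1.0, 3.0]])
P30 = np.real(sqrtm(D2))                  # P30 = D2^{1/2}, unique SPD square root

# Lyapunov equation  K30 P30 + P30 K30 = D2  (linear in K30):
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q.
K30 = solve_continuous_lyapunov(P30, D2)

# Since D2 = P30^2, the solution is K30 = P30 / 2 = D2^{1/2} / 2; unlike the
# Riccati equation -P30^2 + D2 = 0 from (29), no quadratic term appears.
print(np.max(np.abs(P30 @ K30 + K30 @ P30 - D2)))
```

Uniqueness of the solution is guaranteed here because $P_{30}^* > 0$, so no two eigenvalues of $P_{30}^*$ sum to zero.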
Lemma 4.3. Let the assumption (A2) be valid. Then $v_0^*(z) \in \mathcal{H}_v$, and its guaranteed result in the SDG is $J_v\big(v_0^*(z); z_0\big) = \bar J^*(x_0)$.

Proof. In order to calculate $J_v\big(v_0^*(z); z_0\big)$, let us note that this value is the optimal value of the cost functional in the optimal control problem obtained from the SDG by substitution of $v(t) = v_0^*(z)$ into the equations of dynamics (1)-(2) and the cost functional (3). This problem has the form
$$\frac{dx(t)}{dt} = \big(A_1 + S_{v1} P_{10}^*\big) x(t) + A_2 y(t), \quad \frac{dy(t)}{dt} = \big(A_3 + S_{v2}^T P_{10}^*\big) x(t) + A_4 y(t) + u(t), \qquad (62)$$
$$\tilde J(u) = \int_0^{+\infty} \Big[ x^T(t) \tilde D_1 x(t) + y^T(t) D_2 y(t) \Big] dt \to \min_u, \qquad (63)$$
where $\tilde D_1 = D_1 - P_{10}^* S_{v1} P_{10}^*$. The optimal control problem (62)-(63) is singular, and, due to the mentioned above, its solution is obtained by regularization, yielding the statement of the lemma.

Corollary 3. Let the assumption (A2) be valid. Then in the SDG, the following limit equality is satisfied:
$$\lim_{\varepsilon \to +0} J_u\big(u_{\varepsilon,0}^*(z); z_0\big) = J_v\big(v_0^*(z); z_0\big) = \bar J^*(x_0). \qquad (80)$$

Proof. The limit equality (80) is a direct consequence of Lemma 4.3. This equality is proven similarly to Theorem 5.2 of the work [14].
Now, the inequality (81) is an immediate consequence of the inequalities (83) and (84), which completes the proof of the first statement of the theorem. The second statement of the theorem directly follows from the limit equality (80). Thus, the theorem is proven.
Remark 7. Due to Theorem 4.4 and Lemma 3.4, in order to construct the saddle-point equilibrium sequence of the SDG and to obtain the value of this game, one has to solve the lower-dimensional regular RDG and to calculate the two gain matrices $P_{20}^*$ and $P_{30}^*$ using the equations (34) and (30). Various methods and algorithms, applicable for computing the stabilizing solution $P_{10}^*$ to the Riccati matrix algebraic equation (31), can be found in [2] and references therein.
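The construction described in this remark can be sketched numerically as follows. This is our own illustration only: the toy data are arbitrary, the reduced Riccati equation (31) is taken in the form $P_{10} A_1 + A_1^T P_{10} - P_{10} S_1 P_{10} + D_1 = 0$ with $S_1 = A_2 D_2^{-1} A_2^T - C_1 G^{-1} C_1^T$, and the gain formulas follow the forms of (30), (34) and (42) as used above.

```python
import numpy as np
from scipy.linalg import sqrtm

def stabilizing_riccati_solution(A, S, Q):
    """Stabilizing solution of P A + A^T P - P S P + Q = 0 via the
    stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, -S], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]
    return np.real(stable[n:, :] @ np.linalg.inv(stable[:n, :]))

# Toy data with n = m = s = 1 (illustrative numbers only).
A1, A2 = np.array([[-1.0]]), np.array([[1.0]])
C1 = np.array([[0.3]])
D1, D2, G = np.eye(1), 2.0 * np.eye(1), np.eye(1)

# Step 1: solve the reduced-game Riccati equation (31).
S1 = A2 @ np.linalg.inv(D2) @ A2.T - C1 @ np.linalg.inv(G) @ C1.T
P10 = stabilizing_riccati_solution(A1, S1, D1)

# Step 2: zero-order gains, P30 = D2^{1/2} and P20 = P10 A2 D2^{-1/2}.
P30 = np.real(sqrtm(D2))
P20 = P10 @ A2 @ np.linalg.inv(P30)

# Step 3: the players' strategies of the form (42).
def u_star(eps, x, y):
    """Minimizer's strategy u*_{eps,0}(z) = -(1/eps)(P20^T x + P30 y)."""
    return -(P20.T @ x + P30 @ y) / eps

def v_star(x):
    """Maximizer's strategy v*_0(z) = G^{-1} C1^T P10 x."""
    return np.linalg.inv(G) @ C1.T @ P10 @ x

print(P10, P20, P30)
```

Letting $\varepsilon = \varepsilon_q \to +0$ in `u_star` produces the minimizer's control sequence, while `v_star` stays fixed, mirroring the saddle-point equilibrium sequence of Theorem 4.4.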

6. Conclusions. In this paper, an infinite horizon zero-sum linear-quadratic differential game is considered. The case where the cost functional does not contain a control cost of the minimizing player (the minimizer) is treated. In this case, the game is singular. For this game, novel definitions of the saddle-point equilibrium (the saddle-point sequence) and the game value are proposed. The singular game is solved by a regularization approach, which consists in approximating this game with an auxiliary regular game. The approximating game has the same equation of dynamics, while its cost functional is augmented by an infinite horizon integral of the square of the minimizer's control with a small positive weight (a small positive parameter). Hence, the auxiliary game is an infinite horizon zero-sum linear-quadratic differential game with cheap control of the minimizer. An asymptotic analysis of the auxiliary cheap control differential game is carried out. Using this analysis, the saddle-point sequence in the original singular game is constructed, and the expression for the value of this game is derived. It is shown that obtaining the saddle-point sequence and the value of the singular game is based on the solution of a lower-dimensional regular zero-sum differential game; the solution of the latter is reduced to the solution of a Riccati matrix algebraic equation. A future issue in the topic of the present paper, requiring further investigation, is an algorithm for the numerical solution of the singular differential game. In this investigation, the works [42,43] can be helpful.