Nash equilibrium points of recursive nonzero-sum stochastic differential games with unbounded coefficients and related multidimensional BSDEs

This paper is concerned with a recursive nonzero-sum stochastic differential game problem in a Markovian framework, in which the drift of the state process is no longer bounded but only satisfies a linear growth condition. The costs of the players are given by the initial values of related backward stochastic differential equations which, in our case, are multidimensional with continuous coefficients, whose generators are of linear growth in the volatility processes and stochastically monotone in the value processes. We finally show the well-posedness of the costs and the existence of a Nash equilibrium point for the game under the generalized Isaacs assumption.


1. Introduction. In this article, we discuss a recursive nonzero-sum stochastic differential game (NZSDG for short) in a Markovian framework. Generally speaking, stochastic differential game theory deals with conflict or cooperation problems in a dynamic system influenced by multiple players. Let us briefly introduce the setting of the problem. Assume that we have a system described as follows:

dx_t = σ(t, x_t)dB_t for t ≤ T and x_0 = x, (1)

where B is a Brownian motion. This system can also be controlled by two players, which we represent through the weak formulation of a stochastic differential equation (SDE for short):

dx_t = f(t, x_t, u_t, v_t)dt + σ(t, x_t)dB^{u,v}_t for t ≤ T and x_0 = x. (2)

The process B^{u,v} is a new Brownian motion obtained from B by applying Girsanov's transformation; the precise construction is given below. The processes u = (u_t)_{t≤T} and v = (v_t)_{t≤T} represent the control actions that the two players impose on this system. Indeed, the controls are not free: they bring costs for the players. What we discuss is a recursive type of cost functional, defined through the initial value of the following backward stochastic differential equation (BSDE for short):

y^{i,u,v}_t = g_i(x_T) + ∫_t^T h_i(s, x_s, y^{i,u,v}_s, u_s, v_s)ds − ∫_t^T z^{i,u,v}_s dB^{u,v}_s, t ≤ T, for i = 1, 2. (3)

The costs are defined by J_i(u, v) = y^{i,u,v}_0 for players i = 1, 2, respectively. The objective of this game model is to find a Nash equilibrium point (u^*, v^*) such that

J_1(u^*, v^*) ≤ J_1(u, v^*) and J_2(u^*, v^*) ≤ J_2(u^*, v)

for any admissible control (u, v). This says that both players want to minimize their costs and that neither can do better by unilaterally changing her own control.
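To fix ideas, the uncontrolled dynamics (1) can be simulated by a standard Euler-Maruyama scheme. The diffusion coefficient below is an assumed stand-in chosen to satisfy Assumption 1 (Lipschitz in x, bounded, with bounded inverse); it is not taken from the paper.

```python
import numpy as np

# Euler-Maruyama simulation of the uncontrolled dynamics (1):
#   dx_t = sigma(t, x_t) dB_t,  x_0 = x.
# The coefficient sigma below is an illustrative assumption only.

def sigma(t, x):
    return 1.0 + 0.5 * np.cos(x)   # values in [0.5, 1.5]: bounded, Lipschitz

def simulate_state(x0, T=1.0, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + sigma(k * dt, x[k]) * dB
    return x

path = simulate_state(x0=1.0)
print(path[0], path[-1])
```

The scheme converges strongly at rate 1/2 under the Lipschitz assumption on σ, which is all that is needed for illustration here.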
In the following, let us discuss the main contribution of our work, as well as the main difference between the recursive cost and the classical one. The concept of stochastic differential recursive utility was introduced by Duffie and Epstein in [3]; it extends the classical utility by letting the instantaneous utility depend not only on the instantaneous consumption rate but also on the future utility. The manner of using solutions of BSDEs to describe cost functionals of stochastic differential games was initially inspired by [5], where some formulations of recursive utilities and their properties are also discussed. In BSDE (3), if the functions h_i are independent of the parameters y^i, then by applying Girsanov's transformation, the cost J_i reduces to the accumulation of the instantaneous cost h_i and the terminal cost g_i; this is the classical structure of non-recursive cost functionals as studied in [9], [11]. Some recursive optimal control problems are studied in [19]. There are also works studying the zero-sum case of the recursive game, such as [20]. Readers are referred to a series of works by Hamadène on classical NZSDGs without the recursive part, say [7, 8, 9] and the references therein. Our main contribution is that we study a nonzero-sum game with a recursive cost defined by the initial value of BSDE (3), under irregular assumptions on the coefficients, and we show the existence of a Nash equilibrium point.
The method of BSDEs has proven to be an efficient tool for recursive nonzero-sum stochastic differential games; see, e.g., the works [13], [20]. A complete review of BSDE theory, as well as some applications, is given in the survey paper [18]. The connection of BSDEs with NZSDGs, together with other popular methods for game problems, such as partial differential equations, is presented in the celebrated survey [2].
In the present paper, we study the recursive NZSDG through the BSDE technique, along the same lines as [20]. However, in [20] the drift function f of the state process in (2) is bounded, or essentially bounded. This boundedness is important when considering the related BSDEs, since it guarantees the Lipschitz property of the generator of the corresponding BSDE with respect to the z component. However, this restriction is rather strict. Therefore, the motivation of our work is to relax this limitation on f. Instead, we consider a drift f which is of linear growth in the state process x. This setting has already been considered in some classical game problems without the recursive part in [9] and [10]. To our knowledge, this general recursive case has not been studied in the literature; this is our main innovation. Besides, under appropriate assumptions on the functions h_i (see Assumption 3), we find that the generator of BSDE (3) is not regular: it is of stochastic linear growth in z and stochastically monotone in y. This BSDE is new and has not been studied before. We establish the existence of solutions of BSDE (3), which provides the well-posedness of the cost functionals; this result is summarized as Theorem 3.1, with which we mainly deal in this article. Then, with the help of the generalized Isaacs condition and a kind of multidimensional BSDE (22) below (whose existence of solutions has been shown in [16]), we prove the existence of a NEP for this recursive NZSDG by applying comparison properties between BSDEs.
Finally, we point out that, although this work treats a model with only two players, it can be generalized to the multi-player case in the same way without any difficulty.
The rest of this work is organized as follows. In Section 2, we give the precise statement of the recursive game problem and some assumptions on the coefficients. Section 3 is devoted to the well-posedness of the cost functionals, i.e., the existence of solutions of BSDE (3). The idea is to take a partition of the interval [0, T], first solve this BSDE on a small interval [T − δ, T], and then extend it backward to the whole interval. The existence of Nash equilibria is shown in Section 4. Finally, in Section 5, we give a simple one-dimensional example, from which we can clearly see that a Nash equilibrium point exists for the recursive game under our assumptions.
2. Statement of the problem. In this section, we give some basic notation, the preliminary assumptions in force throughout this paper, and the statement of the recursive nonzero-sum stochastic differential game. Let T > 0 be fixed and let (Ω, F, P) be a probability space on which is defined a d-dimensional Brownian motion B = (B_t)_{0≤t≤T}, with integer d ≥ 1. Let us denote by F = {F_t, 0 ≤ t ≤ T} the natural filtration generated by the process B and augmented by the P-null sets N_P, i.e., F_t = σ{B_s, s ≤ t} ∨ N_P.
Let P be the σ-algebra on [0, T] × Ω of F_t-progressively measurable sets. Let p ∈ [1, ∞) be a real constant and let t ∈ [0, T] be fixed. We then define the following spaces: S^p_{t,T} is the set of continuous P-measurable processes Y = (Y_s)_{t≤s≤T} such that E[sup_{t≤s≤T}|Y_s|^p] < ∞, and H^p_{t,T} is the set of P-measurable processes Z = (Z_s)_{t≤s≤T} such that E[(∫_t^T |Z_s|²ds)^{p/2}] < ∞. Hereafter, S^p_{0,T} and H^p_{0,T} are simply denoted by S^p_T and H^p_T. The following assumptions are in force throughout this paper. Let σ : [0, T] × R^m → R^{m×m} be a function which satisfies the following assumption. Assumption 1.
(i): σ is uniformly Lipschitz w.r.t. x, i.e., there exists a constant C_σ such that |σ(t, x) − σ(t, x′)| ≤ C_σ|x − x′| for any t, x, x′. (ii): σ is invertible and bounded and its inverse is bounded, i.e., there exists a constant C_σ such that |σ(t, x)| + |σ^{-1}(t, x)| ≤ C_σ. Remark 1 (Uniform elliptic condition). Under Assumption 1, we can verify that there exists a real constant ε > 0 such that, for any (t, x), σ(t, x)σ^⊤(t, x) ≥ εI. Suppose that we have a system whose dynamics is described by an SDE as follows: X^{t,x}_s = x + ∫_t^s σ(r, X^{t,x}_r)dB_r for s ∈ [t, T], and X^{t,x}_s = x for s ≤ t. (4) The solution X = (X^{t,x}_s)_{s≤T} exists and is unique under Assumption 1 (cf. [17], p. 289). We recall here two well-known results associated with the integrability of the solution. For any fixed (t, x) and any p ≥ 1, E[sup_{s≤T}|X^{t,x}_s|^p] ≤ C(1 + |x|^p), (5) where the constant C depends only on the Lipschitz coefficient and the bound of σ. In addition, for a constant α ∈ (0, 2), a further exponential-type estimate (6) holds, P-a.s.
We consider a two-player game model in this article for simplicity; the general multi-player case is a straightforward adaptation. Each of the two players imposes a control-type strategy on this system. Let us now denote by U_1 and U_2 two compact metric spaces and let M_1 (resp. M_2) be the set of P-measurable processes taking values in U_1 (resp. U_2). We then set M := M_1 × M_2; hereafter, M is called the set of admissible controls.
We then introduce a Borelian drift function f : [0, T] × R^m × U_1 × U_2 → R^m which satisfies the following assumption. Assumption 2. f is of linear growth in x, i.e., there exists a constant C_f such that |f(t, x, u, v)| ≤ C_f(1 + |x|) for any (t, x, u, v). For (u, v) ∈ M, let P^{u,v}_{t,x} be the measure on (Ω, F) defined as follows:

dP^{u,v}_{t,x} = E_T(∫_0^· σ^{-1}(s, X^{t,x}_s)f(s, X^{t,x}_s, u_s, v_s)dB_s) dP, (7)

where, for any (F_t, P)-continuous local martingale M = (M_t)_{t≤T},

E_t(M) = exp(M_t − (1/2)⟨M⟩_t), t ≤ T. (8)

The notation ⟨·⟩ denotes the quadratic variation process. By Assumptions 1 and 2, we know that P^{u,v}_{t,x} is a new probability on (Ω, F) (see Appendix A of [4] or [17]). Moreover, under P^{u,v}_{t,x}, the process B^{u,v} := B − ∫_0^· σ^{-1}(s, X^{t,x}_s)f(s, X^{t,x}_s, u_s, v_s)ds is an (F_s, P^{u,v}_{t,x})-Brownian motion and the process (X^{t,x}_s)_{s≤T} satisfies the following SDE in weak formulation:

dX^{t,x}_s = f(s, X^{t,x}_s, u_s, v_s)ds + σ(s, X^{t,x}_s)dB^{u,v}_s, t ≤ s ≤ T, X^{t,x}_t = x. (9)

Actually, the process (X^{t,x}_s)_{s≤T} is not adapted to the filtration generated by the Brownian motion (B^{u,v}_s)_{s≤T}; therefore it is known as a weak solution of (9). Besides, properties (5) and (6) also hold for the expectation under the probability P^{u,v}_{t,x} (see [11], Lemma 3.3-(ii)). Now, this system is controlled by the two players through the drift function f. The control actions are not free: they bring the players corresponding costs, or payoffs in some circumstances. Before introducing the costs, we first present the following assumption. Assumption 3. For i = 1, 2, the function h_i is of polynomial growth w.r.t. x, of linear growth w.r.t. y, and satisfies a stochastic monotonicity property in y, i.e., there exist constants C_h, γ ≥ 0 and α ∈ (0, 2) for which the corresponding growth and monotonicity conditions hold; the terminal function g_i is of polynomial growth, i.e., there exist constants C_g and γ ≥ 0 for which the corresponding growth condition holds. Now, let x_0 ∈ R^m be fixed. The costs (or payoffs) of the players for their controls (u, v) ∈ M are given by the initial values of related BSDEs. More precisely, we define:

J_i(u, v) = Y^{i,(u,v)}_0, i = 1, 2, (10)

where the pair (Y^{i,(u,v)}, Z^{i,(u,v)}) solves the following BSDE: for t ≤ T,

Y^{i,(u,v)}_t = g_i(X^{0,x_0}_T) + ∫_t^T [h_i(s, X^{0,x_0}_s, Y^{i,(u,v)}_s, u_s, v_s) + Z^{i,(u,v)}_s σ^{-1}(s, X^{0,x_0}_s)f(s, X^{0,x_0}_s, u_s, v_s)]ds − ∫_t^T Z^{i,(u,v)}_s dB_s. (11)

Actually, BSDE (11) has a solution in an appropriate space, as will be shown in the next section (see Theorem 3.1); therefore the costs (10) are well defined. However, for the integrity of the statement of the problem, we go ahead here and postpone the proof of existence. This manner of defining the cost functional has already been considered in [20], [13]. Indeed, if h_i is independent of the y component, which is the non-recursive case, we can express the value process as a conditional expectation under the probability P^{u,v}_{0,x_0}:

Y^{i,(u,v)}_t = E^{u,v}_{0,x_0}[g_i(X^{0,x_0}_T) + ∫_t^T h_i(s, X^{0,x_0}_s, u_s, v_s)ds | F_t],

where E^{u,v}_{0,x_0} is the expectation under the probability P^{u,v}_{0,x_0}.
It is easy to check that this conditional expectation is well defined under the assumptions on g_i and h_i. Apparently, in this case the cost function is J_i(u, v) = E^{u,v}_{0,x_0}[g_i(X^{0,x_0}_T) + ∫_0^T h_i(s, X^{0,x_0}_s, u_s, v_s)ds], since F_0 contains nothing but some null measure sets. The functions h_i and g_i can then be viewed as the instantaneous cost and the terminal cost, respectively, for player i = 1, 2. This situation coincides with the classical nonzero-sum stochastic differential game model as in the work by Hamadène (see [8]).
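In this non-recursive case, the cost can be evaluated by Monte Carlo directly under the reference measure P, weighting each path by the Girsanov density. The following is a minimal sketch, in which σ ≡ 1 and the ratio σ^{-1}f and the terminal cost g are toy choices assumed only for illustration.

```python
import numpy as np

# Toy illustration of the change of measure behind the non-recursive cost:
# under P the state follows dX = dB (sigma = 1 assumed), and a terminal cost
# under P^{u,v} is computed as E[ E_T * g(X_T) ], where E_T is the
# Doleans-Dade exponential of int theta dB with theta = sigma^{-1} f.
# theta and g below are assumptions, not taken from the paper.

def terminal_cost_under_Puv(n_paths=20000, n_steps=100, T=1.0, x0=0.0, seed=2):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    log_density = np.zeros(n_paths)        # running log of E_t
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        theta = np.tanh(x)                 # bounded drift-to-volatility ratio
        log_density += theta * dB - 0.5 * theta ** 2 * dt
        x = x + dB                         # dynamics under P (sigma = 1)
    g = x ** 2                             # assumed terminal cost g(X_T)
    return float(np.mean(np.exp(log_density) * g))

J = terminal_cost_under_Puv()
print(J)
```

Since theta is bounded, the density weights have moments of all orders and the plain Monte Carlo average is well behaved.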
Hereafter, E^{u,v}_{0,x_0} (resp. P^{u,v}_{0,x_0}) will be simply denoted by E^{u,v} (resp. P^{u,v}). What we are concerned with in this article is to find an admissible control (u^*, v^*) such that

J_1(u^*, v^*) ≤ J_1(u, v^*) and J_2(u^*, v^*) ≤ J_2(u^*, v) for any (u, v) ∈ M.

The control (u^*, v^*) is called a Nash equilibrium point for the recursive NZSDG. It reads that each player chooses her best control: an equilibrium is a pair of controls such that, when applied, no player can lower her cost by unilaterally changing her own control. For i = 1, 2, let us define the Hamiltonian function H_i(t, x, y, z, u, v) := h_i(t, x, y, u, v) + z·σ^{-1}(t, x)f(t, x, u, v).
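The equilibrium notion amounts, in the Hamiltonian picture, to a pair of pointwise best responses. A toy grid search illustrates this; the quadratic Hamiltonians below are assumed stand-ins for H_1, H_2 at fixed (t, x, y, z), not the paper's coefficients.

```python
import numpy as np

# Toy illustration of a pair of minimizing controls (u*, v*): u* minimizes
# H1(., v*) and v* minimizes H2(u*, .), found by best-response iteration on
# finite grids over the compact control sets. H1, H2 are assumed quadratics.

U = np.linspace(-1.0, 1.0, 201)   # grid on the compact control set of player 1
V = np.linspace(-1.0, 1.0, 201)   # grid on the compact control set of player 2

def H1(u, v):
    return (u - 0.3) ** 2 + 0.4 * u * v    # assumed Hamiltonian of player 1

def H2(u, v):
    return (v + 0.2) ** 2 - 0.1 * u * v    # assumed Hamiltonian of player 2

def best_response_equilibrium(n_iter=50):
    u_star, v_star = 0.0, 0.0
    for _ in range(n_iter):
        u_star = U[np.argmin(H1(U, v_star))]   # player 1 best response
        v_star = V[np.argmin(H2(u_star, V))]   # player 2 best response
    return u_star, v_star

u_star, v_star = best_response_equilibrium()
print(u_star, v_star)
```

Here the weak coupling between the two quadratics makes the best-response map a contraction, so the iteration settles on a grid equilibrium: neither player improves by a unilateral grid deviation.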
Obviously, under Assumptions 2 and 3, H_i satisfies, for each (t, x, y, y′, z, z′, u, v), the hypothesis (12) of the same type: stochastic monotonicity in y and stochastic linear growth in z. For the existence of Nash equilibria, we also need the following assumption.
Assumption 4 (Generalized Isaacs condition). (i) There exists a Borelian pair of controls (u^*, v^*)(t, x, y^1, y^2, z^1, z^2) with values in U_1 × U_2 such that u ↦ H_1(t, x, y^1, z^1, u, v^*) attains its minimum at u^* and v ↦ H_2(t, x, y^2, z^2, u^*, v) attains its minimum at v^*; (ii) the mapping (y^1, y^2, z^1, z^2) ↦ (u^*, v^*)(t, x, y^1, y^2, z^1, z^2) is continuous.

3. Well-posedness of costs. In this section, we focus on the well-posedness of the costs J_i(u, v) for admissible controls (u, v) ∈ M and i = 1, 2. Precisely speaking, we need to show the existence of solutions of BSDE (11), which we summarize as Theorem 3.1 below. For simplicity, in this section, we omit the superscript (u, v) and denote the pair (Y^{i,(u,v)}, Z^{i,(u,v)}) by (Y^i, Z^i). We first provide a uniform a priori estimate of the solution of BSDE (11). For this, we need the following result from [12], which relates to the integrability of the Doléans-Dade exponential associated with X^{t,x}. Actually, we only need Lemmas 3.2, 3.3 and 3.7 below; readers who are not interested in the technical proofs can skip the other lemmas and the proofs in the following subsection. Lemma 3.2 ([12]). Under Assumption 1, let ϕ be a P ⊗ B(R^m)-measurable application from [0, T] × Ω × R^m to R^m which is uniformly of linear growth, that is, P-a.s., ∀(s, x) ∈ [0, T] × R^m, |ϕ(s, ω, x)| ≤ C_ϕ(1 + |x|). Then there exist some p_0 ∈ (1, 2) and a constant C, where p_0 depends only on C_σ, C_ϕ, m, while the constant C depends only on m and p_0, but not on ϕ, such that the corresponding p_0-th moment estimate holds, where the process (E_t)_{t≤T} is the density defined in (8).
For the same function ϕ as in Lemma 3.2 and a fixed t ∈ [0, T], let us now define a process (Γ_{t,s})_{t≤s≤T} as follows. Then, the following lemma holds true by the same idea as in Lemma 3.2. We provide the proof here for the reader's convenience. To prove Lemma 3.3, we need the following lemmas.
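In the usual Doléans-Dade exponential form consistent with (8), the display (13) defining Γ would read as follows (a reconstruction under that assumption, not a verbatim quote):

```latex
\Gamma_{t,s}(\varphi) := \exp\Big(\int_t^s \varphi(r, X^{t,x}_r)\cdot dB_r
  - \frac{1}{2}\int_t^s \big|\varphi(r, X^{t,x}_r)\big|^2\,dr\Big),
  \qquad t \le s \le T. \tag{13}
```

This form is consistent with the later usage Γ_{T−δ,s}(−pϕ(r, X_r)), where the argument of Γ is the integrand process.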
Lemma 3.4. Under Assumption 1, let M_t = ∫_0^t σ(s, X^{t,x}_s)dB_s for each t ≤ T. Then for any p > 1 there exists a constant C_0, depending on C_σ, T and p, such that the corresponding moment estimate for sup_{t≤T}|M_t| holds. We are now ready to provide the proof of Lemma 3.3.
Proof of Lemma 3.3. In this proof, the process X^{t,x} is denoted simply by X. For a constant δ ∈ (0, T), let us define a stopping time τ_N; then the process (∫_{T−δ}^{t∧τ_N} Γ_{T−δ,s}(−pϕ(r, X_r))·ϕ(s, X_s)dB_s)_{T−δ≤t≤T} is an F_t-martingale.

RUI MU AND ZHEN WU
Therefore, by Itô's formula, we obtain (14). We now define M_t := ∫_{T−δ}^t σ(s, X_s)dB_s for each t ∈ [T − δ, T]. Then we obtain from the linear growth of ϕ and Lemma 3.4 an estimate in which the constant C depends on T, C_0 and C_ϕ. Let B^N be the process (B_t − ∫_0^t(−p)ϕ(s, X_s)ds)_{t≤T}. Hence the process B^N is a Brownian motion under the probability P^N, and from Assumption 1, the linear growth of ϕ and Lemma 3.4, we obtain a bound with constant C̃ = 2 ∨ (2p²C²_σC²_ϕC_0δ). Thanks to Gronwall's inequality, the desired estimate follows. Returning to (14) and taking expectations on both sides, we obtain that there exists a constant, which we still denote by C̃, depending on C_0, C_σ, C_ϕ, p, m, T, such that (16) holds, where E^N is the expectation under the probability P^N. If σ_i(t) is the i-th row (i = 1, 2, ..., m) of the matrix σ(t, X_t), then by a technique of splitting the stochastic integral into integrals over random intervals, we get the following inequality. It follows from Lemma 3.5 that β^N_i is a Brownian motion on the random interval [T − δ, R_i(T)]. Now Hölder's inequality implies, for a constant λ, an estimate in which β is a scalar Brownian motion on (Ω, F, P^N). Since R_i(T) ≤ δ(C_σ)², by Lemma 3.6, if 2λmδ(C_σ)² < 1, the corresponding exponential moment is finite. Now let λ = (1/2)(p² + p)C̃e^{C̃δ}, with the same C̃ as in (16). Considering inequality (16) and the fact that δ < T, we can conclude that if (p² + p)C̃e^{C̃T} < ε = (mδ|C_σ|²)^{-1}, then the bound holds with a constant C. We can choose δ small enough so that this condition is satisfied. Then Fatou's lemma yields that there exists δ ∈ (0, T) small enough such that E[|Γ_{T−δ,T}(ϕ(s, X_s))|^{-p}] ≤ C holds true for any p > 1, with a constant C depending only on p and m, but not on ϕ.
Finally, the Burkholder-Davis-Gundy (BDG) inequality yields that Lemma 3.3 holds true.
By carefully examining the proof of Lemma 3.3, we can also obtain the following result. The following technique is inspired by [1]. We first take a linearization, considering i = 1 as an example.
Besides, if we assume that BSDE (11) has a solution and that Z^1 belongs to H^p_{T−δ,T} for some δ ∈ (0, T) and any p > 1, then the corresponding estimate holds for any q ∈ (1, p). This enables us to take the conditional expectation of (18) under the probability P^*, i.e., of the process e_tY^1. As analyzed above, this conditional expectation is well defined. More precisely, let us now denote by Γ_{t,s} the process (Γ_{t,s}(b_r))_{t≤s≤T} for fixed t ∈ [0, T], as in (13).
Equation (19) can be rewritten as follows, for t ∈ [T − δ, T]. For any p > 1, applying Young's inequality and the conditional Jensen inequality, and using the fact that the functions g_1 and h_1 are of polynomial growth in x, together with Lemmas 3.3 and 3.7, we obtain (20). The uniform integrability of the process Z^1 follows from Itô's formula and the fact that Y^1 ∈ S^p_{T−δ,T} for any p > 1, by (20). Indeed, for 1 < q < p, we obtain (21), which is finite, with the constant C̃ depending only on C_h, C_σ, C_f, T, p and q. The details are omitted here. Let us summarize the estimates (20) and (21) in the following lemma. Lemma 3.8. Suppose that (Y^{i,(u,v)}, Z^{i,(u,v)}) is a solution of BSDE (11) such that Y^{i,(u,v)} ∈ S^p_{T−δ,T} for some δ ∈ (0, T) and any p > 1. Then, for any q ∈ (1, p), the corresponding norm estimate holds; it is obviously finite, considering that g_i(x) and h_i(s, x, 0, u, v) are of polynomial growth in x and that X^{0,x_0} has moments of any order. The constant C depends only on C_h, C_σ, C_f, T, q, q̃.
We now come back to the proof of Theorem 3.1. For each n ≥ 1, let us define a stopping time τ_n as follows. This stopping time is of stationary type: it converges P-a.s. to T as n tends to infinity. Let us set g_{1n}(x) = g_1(x)1_{g_1(x)≤n}. Then, by the result of [15], we know that there exist a bounded process Y^{1n} and a process Z^{1n} ∈ H²_{T−δ,T} which solve the following BSDE for all t ∈ [T − δ, T]. Indeed, the truncated coefficients satisfy the required conditions for any (s, y, z, u, v). Then Lemma 3.8 yields that (Y^{1n}, Z^{1n}) ∈ S^q_{T−δ,T} × H^q_{T−δ,T}, uniformly with respect to n, for any q > 1.
Let us show that (Y^{1n}, Z^{1n})_{n≥1} is a Cauchy sequence in S^q_{T−δ,T} × H^q_{T−δ,T} for all q > 1. Let m, n be integers such that m > n > 1, and set δY = Y^{1m} − Y^{1n}, δZ = Z^{1m} − Z^{1n}. Then (δY, δZ) solves a BSDE on [T − δ, T] whose data converge to 0 as n tends to infinity, considering that (Y^{1n}, Z^{1n}) ∈ S^q_{T−δ,T} × H^q_{T−δ,T} for q > 1, that X^{0,x_0} has moments of any order, and that τ_n → T, P-a.s. Since g_{1m} − g_{1n} → 0 in L^{q̃} for any q̃ > 1 as n, m → ∞, Lemma 3.8 implies that (δY, δZ) → 0 in S^q_{T−δ,T} × H^q_{T−δ,T} for q < q̃ as n, m → ∞. It is easy to check that the limit of this sequence is a solution of BSDE (11) for t ∈ [T − δ, T].
Applying the same method, we obtain a solution of BSDE (11) for t ∈ [T − 2δ, T − δ]. Repeating this procedure backward finitely many times, we finally obtain a solution of BSDE (11) on the whole interval [0, T].
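The interval-pasting construction above can be mirrored in a deterministic toy setting: with a linear generator h(y) = a·y + c and a deterministic terminal value ξ, the Z component of the BSDE vanishes and Y solves the ODE −dY/dt = aY + c with Y_T = ξ, whose value at 0 is (ξ + c/a)e^{aT} − c/a. The parameters below are assumptions for illustration only.

```python
import math

# Deterministic toy mirroring the pasting argument: solve the backward ODE
#   -dY/dt = a*Y + c,  Y_T = xi,
# on subintervals [T-(k+1)*delta, T-k*delta], each time using the value just
# computed as the new terminal condition, then compare Y_0 with the
# closed-form value. a, c, xi are illustrative assumptions.

def solve_on_subinterval(y_terminal, a, c, delta, n_steps=2000):
    dt = delta / n_steps
    y = y_terminal
    for _ in range(n_steps):       # explicit backward Euler step
        y = y + (a * y + c) * dt
    return y

def solve_by_pasting(xi, a, c, T, n_intervals=4):
    delta = T / n_intervals
    y = xi
    for _ in range(n_intervals):   # paste solutions backward down to t = 0
        y = solve_on_subinterval(y, a, c, delta)
    return y

a, c, xi, T = 0.5, 1.0, 2.0, 1.0
y0_numeric = solve_by_pasting(xi, a, c, T)
y0_exact = (xi + c / a) * math.exp(a * T) - c / a
print(abs(y0_numeric - y0_exact))
```

The pasted value at 0 agrees with the closed form up to the discretization error, which illustrates why solving on small subintervals and matching terminal conditions recovers the global solution.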