THE VALUE OF A MINIMAX PROBLEM INVOLVING IMPULSE CONTROL

Abstract. We consider the minimax impulse control problem in finite horizon, when the cost functions are positive but not bounded from below by a strictly positive constant. We show the existence of the value function of the problem. Moreover, the value function is characterized as the unique viscosity solution of a Hamilton-Jacobi-Bellman-Isaacs equation. This problem is related to an application in mathematical finance.

1. Introduction. In this paper we study a minimax impulse control problem with finite horizon.
Minimax impulse control problems appear in many practical situations. In the game, player-ξ would like to minimize the pay-off by choosing a suitable impulse control ξ(.), whereas player-τ wants to maximize the pay-off by choosing a proper continuous control. In mathematical finance, one may consider the option pricing problem of references [5,6,7]. If the piecewise linear transaction costs are replaced by a more realistic piecewise affine cost, i.e. a fixed cost is charged for any transaction in addition to a variable part, then the problem at hand is exactly the one considered here. We refer the reader to [4] (and the references cited therein) for extensive discussions. For deterministic autonomous systems with infinite horizon, optimal impulse control problems were studied in [2], and optimal control problems with continuous, switching, and impulse controls were studied by the author of [24] (see also [25]). Differential games with switching strategies in finite and infinite duration were also studied [26,27]. J. Yong, in [28], also studies differential games where one player uses an impulse control and the other uses continuous controls. Two examples of application of this game are outlined below. Example 1. Consider a given foreign currency (e.g. the Dollar) and a domestic currency (e.g. the Euro) and define, for each t ∈ [0, T ], the exchange rate at time t as y(t): units of domestic currency for one unit of foreign currency. Player I, the Central Bank, aims at selecting an admissible impulse control defined by a double sequence t_1, ..., t_k, ..., ξ_1, ..., ξ_k, ..., k ∈ IN* = IN \{0}, where the t_k are the intervention times, t_k ≤ t_{k+1}, and ξ_k ∈ IR is the control at time t_k of the jump in y(t_k). Player II is an opponent with the opposite goal who uses a continuous control τ(.).
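To fix ideas, the impulsively controlled trajectory of Example 1 can be sketched numerically. The code below is only an illustration: the mean-reverting drift, the jump times and the jump sizes are assumptions made for this example, not part of the model of the paper.

```python
# Toy simulation of an impulsively controlled trajectory y(t) on [0, T]:
# between impulses, dy/dt = f(t, y); at each time t_k the state jumps by xi_k.
# The drift f, the jump times t_k and the jump sizes xi_k below are
# illustrative assumptions, not the model of the paper.

def simulate(f, x0, jumps, T=1.0, n_steps=1000):
    """Euler scheme for dy/dt = f(t, y) with jumps y(t_k+) = y(t_k-) + xi_k.

    `jumps` is a list of (t_k, xi_k) pairs; returns the terminal state y(T)."""
    dt = T / n_steps
    y, t = x0, 0.0
    pending = sorted(jumps)
    for _ in range(n_steps):
        while pending and pending[0][0] <= t:  # apply impulses that are due
            y += pending.pop(0)[1]
        y += f(t, y) * dt                      # one Euler step of the drift
        t += dt
    while pending and pending[0][0] <= T:      # impulses at the final time
        y += pending.pop(0)[1]
    return y


if __name__ == "__main__":
    # Exchange rate mean-reverting toward rho = 1.0; the Central Bank
    # intervenes twice, pushing the rate down.
    rho = 1.0
    print(simulate(lambda t, y: rho - y, x0=2.0,
                   jumps=[(0.3, -0.4), (0.6, -0.2)]))
```

With the two interventions the terminal rate ends closer to the target ρ than without them; the trade-off against the intervention cost B is precisely what the game below quantifies.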
• G(x) = l(x − ρ)^2, where l > 0, represents the terminal payoff. We then consider the following game: the Central Bank aims at minimizing (1), while player II aims at maximizing (1).

Example 2.
Consider the inventory control problem, or the management of stocks, and define, for each t ∈ [0, T ], the instantaneous demand of the goods at time t through

  ẏ(t) = f(t, y(t), τ(t)),   y(t_0) = x,   y(t_k^+) = y(t_k^-) + ξ_k,

where x denotes the initial inventory, f is a measurable function that corresponds to the instantaneous demand of the goods, the times t_k are the times at which player I decides to replenish the inventory, and the ξ_k are the replenishment levels. Player II is an opponent with the opposite goal who uses a continuous control τ(t). The payoff is given by a functional of the form (6) below, where
• ψ is the running cost for player I (resp. the gain for player II).
• B is the cost of an intervention of size ξ.
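Assembled from these ingredients, the total cost paid by player I in Example 2 has the same structure as the general pay-off (6) introduced below (written here as a sketch, with the notation of the example):

```latex
J\bigl(t_0, x; S, \tau\bigr)
  = \int_{t_0}^{T} \psi\bigl(s, y(s), \tau(s)\bigr)\,ds
  + \sum_{k \ge 1} B\bigl(t_k, y(t_k), \xi_k\bigr)\,\mathbf{1}_{[t_k \le T]}
  + G\bigl(y(T)\bigr),
```

where player I chooses the replenishment strategy S = ((t_k)_{k≥1}, (ξ_k)_{k≥1}) so as to minimize J, while player II chooses τ(·) so as to maximize it.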
The study of minimax impulse control problems with continuous controls gives rise to Hamilton-Jacobi-Bellman equations which are satisfied by the value function of the problem whenever it is smooth. It is known that the value functions of these problems, whenever smooth, satisfy various variational and quasi-variational inequalities. But most of the time these value functions are only continuous and not sufficiently smooth. The notion of viscosity solutions, a kind of generalized solutions introduced by Crandall and Lions [9], is extremely well suited for these problems: the value function satisfies the corresponding equations or inequalities in the viscosity sense. These control problems are studied in the viscosity solution setup, for example, in [2,8]. In all these works the existence and uniqueness results are obtained assuming that the dynamics and cost functionals are bounded and uniformly continuous, and hence the value functions lie in the class of bounded uniformly continuous functions.
In the finite horizon framework, El Farouq et al. [17] extended the work of Yong [28] by allowing general jumps. In that work the existence of the value function of the minimax impulse control problem and the uniqueness of the viscosity solution are obtained assuming that the dynamics and cost functionals are bounded and that the impulse cost function does not depend on x. Recently, El Asri [13] considered this minimax impulse control problem when the dynamics are unbounded, the cost functionals are bounded from below, and the impulse cost function depends on x. In [13] and [17] the impulse cost is not less than some α > 0, so a near-optimal strategy will never attempt to make an infinite number of jumps.
The purpose of this work is to fill in this gap by providing a solution to the minimax impulse control problem using dynamic programming principle tools and partial differential equation approach.
We prove the existence of the value function of the problem when the impulse cost functional satisfies B > 0. We show that the value function of the problem is a deterministic function v which is the unique solution of the HJBI equation below, where K is a compact set. It turns out that this HJBI equation is the deterministic version of the minimax impulse control problem in finite horizon. This paper is organized as follows. In Section 2, we formulate the problem, give the related definitions, and prove that the value function is bounded from below and has linear growth. In Section 3, we give some properties of the value function, especially the dynamic programming principle. Then we introduce the approximating scheme which enables us to construct the value function of the minimax impulse control problem and which, in combination with the dynamic programming principle, plays a crucial role in the proof of its existence. Section 4 is devoted to the connection between the minimax impulse control problem and a quasi-variational inequality. In Section 5, we show that the solution of the QVI is unique in the subclass of bounded-from-below continuous functions which satisfy a linear growth condition.

2. Formulation of the problem and preliminary results.

2.1.
Setting of the problem. Let a two-player differential game system be defined by the solution of the following dynamical equations, where y(t) is the state of the system, with values in IR^m, at time t, and x is the initial state. The time variable t belongs to [t_0, T ], where 0 ≤ t_0 < T, and y(t_k^±) = lim_{t→t_k^±} y(t). We assume that y is left continuous at the times t_k: y(t_k^-) = y(t_k), k ≥ 1. The system is driven by two controls: a continuous control τ(t) ∈ K ⊂ IR^m, where K is a compact set, and an impulsive control defined by a double sequence t_1, ..., t_k, ..., ξ_1, ..., ξ_k, ..., k ∈ IN* = IN \{0}, where the t_k are the intervention times, t_k ≤ t_{k+1}, and ξ_k ∈ IR^m is the control at time t_k of the jump in y(t_k). Let S := ((t_k)_{k≥1}, (ξ_k)_{k≥1}) denote such a strategy; the set of these strategies is denoted by D.
For any initial condition (t_0, x), the controls τ(·) and S generate a trajectory y(·) of this system. The pay-off is given by the following:

  J(t_0, x; S, τ) = ∫_{t_0}^{T} ψ(s, y(s), τ(s)) ds + Σ_{k≥1} B(t_k, y(t_k), ξ_k) 1_{[t_k ≤ T]} + G(y(T)),   (6)

where, if t_k = T for some k, then we take G(y(T)) = G(y(T^+)). The term B(t_k, y(t_k), ξ_k) is called the impulse cost. It is the cost incurred when player-ξ makes an impulse ξ_k at time t_k. In the game, player-ξ would like to minimize the pay-off (6) by choosing a suitable impulse control ξ(.), whereas player-τ wants to maximize the pay-off (6) by choosing a proper control τ(.) ∈ Ω. We shall sometimes write τ ∈ Ω instead of τ(.) ∈ Ω.
We now define the admissible strategies ϕ for the minimizing impulse control D as non-anticipative strategies. We shall let D_a be the set of all such non-anticipative strategies.
Definition 1. A map ϕ : Ω → S is called a non-anticipative strategy if, for any two controls τ_1(.) and τ_2(.) and any t ∈ [t_0, T ], the equality of their restrictions to [t_0, t] implies the equality of the restrictions of ϕ(τ_1) and ϕ(τ_2) to [t_0, t]. Next, we define the value function of the problem v.

Assumptions. Throughout this paper T (resp. m) is a fixed real (resp. integer) positive constant. Let us now consider the following assumptions:
(2) B : [0, T ] × IR^m × IR^m → IR is continuous with respect to t and ξ uniformly in x, and is bounded from below.
(4) G : IR^m → IR is uniformly continuous with linear growth and is bounded from below.
These properties of f and g imply in particular that the solution (y(t))_{0≤t≤T} of the standard differential equation (5) exists and is unique (see [1]), for any t ∈ [0, T ] and x ∈ IR^m.

2.2. Preliminary results.
We want to investigate the problem of minimizing sup_{τ∈Ω} J through the impulse control. We mean to allow closed-loop strategies for the minimizing control. We remark that we are only interested in the inf-sup problem, and not in a possible saddle point. We also state the following definition:

  M[v](t, x) = inf_{ξ∈IR^m} [v(t, x + g(t, x, ξ)) + B(t, x, ξ)].

Remark 1. The above assumption (9) ensures that multiple impulses occurring at the same time are suboptimal.
3. The value function.
3.1. Dynamic programming principle. The dynamic programming principle is a well-known property in optimal impulse control. In our minimax impulse control problem, it is formulated as follows, where ((t_n)_{n≥1}, (ξ_n)_{n≥1}) is an admissible control. Proposition 1. The value function v(., .) has the following property: Proof. Assume first that for some x and t the two sides differ, and let the difference be ε > 0. Then we have for t ≤ t′: . Among the admissible strategies ϕ there are those that place a jump at time t,
which implies that: Choosing now t′ = t yields the relation: We obtain B(t, x, ξ) ≤ −ε, which is a contradiction.
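For the reader's convenience, the dynamic programming principle invoked above can be written in its standard form (a sketch consistent with the pay-off (6); see [17] for the precise statement):

```latex
v(t, x) \;=\; \inf_{\varphi \in D_a} \sup_{\tau \in \Omega}
  \Bigl[ \int_{t}^{t'} \psi\bigl(s, y(s), \tau(s)\bigr)\,ds
  + \sum_{k \,:\, t_k \le t'} B\bigl(t_k, y(t_k), \xi_k\bigr)
  + v\bigl(t', y(t'^{+})\bigr) \Bigr],
  \qquad t \le t' \le T.
```

In words: the value at (t, x) is obtained by running the game on [t, t′], paying the running and impulse costs incurred there, and continuing optimally from the state reached at t′.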

3.2. Continuity of the value function.
In this section we prove the continuity of the value function. The main result of this section can be stated as follows.
Lemma 1 ([13], Lemma 3.3). There exists a constant C such that for any s ∈ [t, T ], x_1, x_2 ∈ IR^m, and k ∈ {1, 2, ..., n}: . We are now ready to give the main theorem of this article. The value function of the problem in which the controller chooses at most n impulse times is defined below. We denote the value of making no impulse by q_0, which we define as

  q_0(t, x) = sup_{τ∈Ω} [∫_t^T ψ(s, y(s), τ(s)) ds + G(y(T))].
For n = 0 the property is obviously true, since it is enough to take t = T in the definition of w_1 to obtain that w_0 ≥ w_1. On the other hand, taking into account that ψ and G have linear growth, the same holds for w_0. Suppose now that, for some n, we have w_n(t, x) ≥ w_{n+1}(t, x). Replacing w_{n+1} by w_n in the definition of w_{n+2}, we obtain that w_{n+1}(t, x) ≥ w_{n+2}(t, x).
On the other hand, since the costs B(t_k, y(t_k), ξ_k) are nonnegative and since ψ and G are bounded from below, it follows by induction on n ≥ 0 that each w_n is bounded from below.
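The monotone scheme w_0 ≥ w_1 ≥ ... can be illustrated on a small discrete caricature of the problem. The sketch below is one-player (the adversary τ is frozen), with an illustrative grid, running cost, impulse cost and drift; it only demonstrates the mechanism of the recursion w_n(t, x) = min(continuation, min_ξ [B(ξ) + w_{n−1}(t, x + ξ)]) and the resulting monotonicity, not the actual game of the paper.

```python
# Caricature of the approximation scheme w_0 >= w_1 >= ... , where w_n is
# the value with at most n impulses. Discrete time/state; the running cost,
# impulse cost and dynamics are illustrative assumptions.

T_STEPS = 8                  # time grid 0..T_STEPS
STATES = list(range(-5, 6))  # state grid
JUMPS = [-3, -2, -1, 1, 2, 3]

def clip(x):
    return max(-5, min(5, x))

def psi(x): return x * x               # running cost per step
def G(x): return x * x                 # terminal cost
def B(xi): return 1.0 + 0.5 * abs(xi)  # fixed + proportional impulse cost
def drift(x): return clip(x + 1)       # uncontrolled drift pushes x upward

def w0():
    """Value with no impulses allowed: pure backward accumulation of costs."""
    w = {(T_STEPS, x): G(x) for x in STATES}
    for t in range(T_STEPS - 1, -1, -1):
        for x in STATES:
            w[(t, x)] = psi(x) + w[(t + 1, drift(x))]
    return w

def next_layer(prev):
    """Given w_{n-1}, compute w_n by backward dynamic programming:
    w_n(t,x) = min( psi(x) + w_n(t+1, drift(x)),
                    min_xi [ B(xi) + w_{n-1}(t, x + xi) ] )."""
    w = {(T_STEPS, x): G(x) for x in STATES}
    for t in range(T_STEPS - 1, -1, -1):
        for x in STATES:
            cont = psi(x) + w[(t + 1, drift(x))]
            imp = min(B(xi) + prev[(t, clip(x + xi))] for xi in JUMPS)
            w[(t, x)] = min(cont, imp)
    return w
```

Stacking layers with `next_layer` reproduces the nonincreasing sequence w_0 ≥ w_1 ≥ ..., whose limit plays the role of the value function in the continuous problem.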
Since B(t, x, ξ) ≥ 0 for any (t, x, ξ) ∈ [0, T ] × IR^m × IR^m, we then obtain a contradiction with the fact that (w_n)_n is nonincreasing.

BRAHIM EL ASRI
Using the uniform continuity of ψ, C, and G in y and property (12), the first and second terms on the right-hand side of (18) converge to 0 as t′ tends to t and x′ tends to x.
Taking the limit as (t′, x′) → (t, x), and since ε_1 is arbitrary, sending ε_1 → 0 we obtain that q_n is upper semicontinuous.
Step 2. Now we show that q_n is lower semicontinuous. Fix an arbitrary ε_2 > 0 and let ϕ_{ε_2} = ((t_n)_{n≥1}, (ξ_n)_{n≥1}) ∈ D_a be an ε_2-optimal strategy. Next, w.l.o.g., we assume that t′ < t. Then we deduce a bound involving the term

  ∫ (ψ(s, y′(s), τ_{ε_2}(s)) − ψ(s, y(s), τ_{ε_2}(s))) 1_{[s≥t]} ds.

Using the uniform continuity of ψ, C, and G in y and property (12), the first and second terms on the right-hand side of (19) converge to 0 as t′ → t and x′ → x.
Taking the limit as (t′, x′) → (t, x), and since ε_2 is arbitrary, letting ε_2 → 0 we obtain that q_n is lower semicontinuous. We have thus proved that q_n is continuous.
Step 3. Let us show that v is continuous. We have:

  |v(t, x) − v(t′, x′)| ≤ |v(t, x) − q_n(t, x)| + |q_n(t, x) − q_n(t′, x′)| + |q_n(t′, x′) − v(t′, x′)|.   (20)

For n large enough, using lim_{n→∞} q_n(t, x) = v(t, x) and the continuity of q_n in t and x, the right-hand side terms of (20) converge to 0 as (t′, x′) → (t, x). Therefore v(t′, x′) → v(t, x) as (t′, x′) → (t, x), so v is continuous.

4. Viscosity characterization of the value function. In this section we prove that the value function v is a viscosity solution of the Hamilton-Jacobi-Bellman-Isaacs equation, which we replace by an equivalent QVI that is easier to investigate.
We now consider the following HJBI equation, with the terminal condition v(T, x) = G(x). Notice that it follows from hypotheses (7) and (10) that the term in square brackets in (21) above is continuous with respect to τ, so that the minimum in τ over the compact set K exists.
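For orientation, one standard form of such an HJBI quasi-variational inequality reads as follows (a sketch: the placement of min and max is the usual one for a minimizing impulse controller facing a continuous adversary, and should be matched against the exact display of (21)):

```latex
\max\Bigl\{\, v(t,x) - M[v](t,x),\;
  -\frac{\partial v}{\partial t}(t,x)
  - \min_{\tau \in K}\bigl[\, \langle f(t,x,\tau), D_x v(t,x)\rangle
  + \psi(t,x,\tau) \,\bigr] \Bigr\} = 0
  \quad \text{on } [t_0, T[\,\times \mathbb{R}^m,
\qquad v(T,x) = G(x),
```

where M[v](t, x) = inf_{ξ∈IR^m} [v(t, x + g(t, x, ξ)) + B(t, x, ξ)] is the obstacle operator; the minimum over the compact set K is the one referred to in the remark above.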
Recall the notion of a viscosity solution of QVI (21). (i) A viscosity supersolution of (21): for any (t, x) ∈ [t_0, T [×IR^m and any function (ii) A viscosity subsolution of (21): for any (t, x) ∈ [t_0, T [×IR^m and any function

(iii) A viscosity solution if it is both a viscosity supersolution and a viscosity subsolution.
Theorem 4. The value function v is the viscosity solution of the quasi-variational inequality (21).
Proof. The viscosity property follows from the dynamic programming principle and is proved in [17]. We now give an equivalent form of the quasi-variational inequality (21). In this section, we consider the new function Γ given by the classical change of variable Γ(t, x) = exp(t)v(t, x), for any t ∈ [t_0, T ] and x ∈ IR^m. Of course, the function Γ is bounded from below and continuous with respect to its arguments.
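The mechanics of this change of variable are elementary and worth recording: differentiating Γ(t, x) = e^t v(t, x) gives

```latex
\frac{\partial \Gamma}{\partial t}(t,x)
  = e^{t} v(t,x) + e^{t}\frac{\partial v}{\partial t}(t,x)
  = \Gamma(t,x) + e^{t}\frac{\partial v}{\partial t}(t,x),
\qquad
D_x \Gamma(t,x) = e^{t} D_x v(t,x),
```

so multiplying the QVI for v by e^t produces the QVI (24) for Γ, with an extra zero-order term Γ and with e^t M[v](t, x) = inf_{ξ∈IR^m} [Γ(t, x + g(t, x, ξ)) + e^t B(t, x, ξ)] = M[Γ](t, x).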
A second property is given by the following QVI, where M[Γ](t, x) = inf_{ξ∈IR^m} [Γ(t, x + g(t, x, ξ)) + exp(t)B(t, x, ξ)]. The terminal condition for Γ is Γ(T, x) = exp(T)G(x). Proof. We will show that if v is a viscosity solution of (21) then Γ is a viscosity solution of (24). The relation between v and Γ is symmetric, and the proof in the other direction would be the same.
Actually, let ϕ(t, x) be a C^1([t_0, T [×IR^m) function such that ϕ − Γ has a minimum at (t, x) and ϕ(t, x) = Γ(t, x). Therefore e^{−t}ϕ − v has a minimum at (t, x) and e^{−t}ϕ(t, x) = v(t, x). As v is a subsolution, we obtain the corresponding max-min inequality, and hence the max-min inequality for Γ. This means that Γ is a viscosity subsolution of (24). In the same way one can show that Γ is a viscosity supersolution of (24), whence the claim. Next, we present the following lemma; for the proof, one is referred to Lemma 4.4 in [28]. Lemma 2. Let v be a viscosity solution of (24) and a uniformly continuous function. Let

5. Uniqueness of the solution of the quasi-variational inequality. We are now going to address the question of uniqueness of the viscosity solution of the quasi-variational inequality (21). We replace (21) by the equivalent QVI (24) defined in Proposition 3. We have the following: Proof. The proof is divided into four steps. We will show by contradiction that if u and w are a subsolution and a supersolution, respectively, of (24), then u ≤ w. Therefore, if we have two solutions of (24), they are obviously equal. Now we fix λ ∈ (0, 1), close to 0, and prove the comparison result for (1 − λ)u and w. Actually, for some R > 0, suppose there exists (t̄, x̄) ∈ (t_0, T ) × B_R (B_R := {x ∈ IR^m ; |x| < R}) such that: Step 1. We can find (t_1, x_1) ∈ [t_0, T ] × B_R and δ > 0 such that (26) and (27) hold for all (t, x) ∈ I × B_δ(x_1), where I is a suitable interval around t_1. If w(t̄, x̄) < M[w](t̄, x̄), then, considering the uniform continuity of u and w on [t_0, T ] × B_R, we obtain (26) and (27) by taking t_1 = t̄ and x_1 = x̄. Suppose it is not the case. Then for any ε > 0 there exists ξ_0 ∈ IR^m for which w(t̄, x̄) ≥ w(t̄, x̄ + g(t̄, x̄, ξ_0)) + exp(t̄)B(t̄, x̄, ξ_0) − ε. We choose t_1 = t̄ and x_1 = x̄ + g(t̄, x̄, ξ_0). Then, by Lemma 2, there exists δ > 0 such that w(t, x) < M[w](t, x) for all (t, x) ∈ I × B_δ(x_1). On the other hand, it is easy to check the corresponding inequality for (1 − λ)u(t_1, x_1). Hence, we obtain (26) and (27).

Taking into account that c + d = 2β(t_0 − t) and plugging into (39), noting that λ > 0, we obtain: sending ε → 0 and θ → 0, and taking into account the continuity of ψ, we get η ≤ 0, which is a contradiction. Now, sending λ → 0, we obtain the required comparison between u and w. The proof of Theorem 5 is now complete.