A COMPARISON PRINCIPLE FOR HAMILTON-JACOBI EQUATION WITH MOVING IN TIME BOUNDARY

Abstract. In this paper we consider a Hamilton-Jacobi equation on a domain moving in time. The boundary is described by a C^1 function. We show how this equation can be derived from the model of [26]. We only prove a comparison principle, since the proofs of the other theoretical results can be found in [20]. At the end of the paper, we present a short homogenization result in order to reinforce the traffic flow interpretation of the equation.

1. Introduction. In this paper, we consider a Hamilton-Jacobi equation posed on a domain moving in time. More precisely, the equation is posed on several intervals of the real axis whose boundary points (called "junction points") move in time. The junction points are denoted by b_i(t) ∈ R at time t and we set, for j ∈ {1, ..., N + 1},

I_j(t) = (b_{j-1}(t), b_j(t)).

We will show in Section 2 that the considered equation can be obtained from a first order bus-vehicles interaction model, introduced in [26], where the authors assumed that buses represent a moving capacity restriction, i.e. the density of vehicles is reduced near the bus zones.

In order to simplify the notations, let us first introduce the flux-limiting function (see [20]). For i ∈ {1, ..., N}, t ∈ R_+ and p = (p_1, p_2) ∈ R^2,

F_{A_i}(t, p_1, p_2) = max( A_i(t), H^+_{i,i}(t, p_1), H^-_{i+1,i}(t, p_2) ),

where A_i is a locally Lipschitz function and H^+_{i,i} (resp. H^-_{i+1,i}) is the nondecreasing (resp. nonincreasing) part of the Hamiltonian H_{i,i} (resp. H_{i+1,i}), whose definition is given later. For j ∈ {1, ..., N + 1} and i ∈ {1, ..., N}, the equation is given by

NICOLAS FORCADEL AND MAMDOUH ZAYDAN
where u_t = ∂u/∂t and u_x = ∂u/∂x denote respectively the time and the space derivatives.

Equation (1) is quite similar to the one introduced by Imbert and Monneau in [20]. The difference here is that we consider a junction which moves in time. Stability, existence of solutions and even the reduction of the class of test functions for (1) can easily be obtained by adapting the proofs of these results in [20]. In this paper, we prove a comparison principle for equation (1). We borrow the idea introduced in [4] and we use a localization procedure in order to insert the "good" test function in the next step of the proof. Let us now clarify the notations used in (1).

Assumptions and notations (A).
• (A1) The functions b_1, ..., b_N are time-dependent differentiable functions such that b_{i+1} > b_i. We also set b_0 = −∞ and b_{N+1} = +∞. Moreover, we assume that for all j ∈ {1, ..., N}, ḃ_j is a locally Lipschitz function.
• (A2) For i ∈ {1, ..., N + 1}, the Hamiltonian H_i is continuous and superlinear, i.e. H_i(p)/|p| → +∞ as |p| → +∞.
• (A3) For i ∈ {1, ..., N} and for k = i, i + 1, H_{k,i}(t, p) = H_k(p) − ḃ_i(t)p. Moreover, we assume that for all i ∈ {1, ..., N}, k = i, i + 1 and for all t ∈ R_+, the Hamiltonian H_{k,i}(t, ·) is quasi-convex. We denote by H^+_{k,i}(t, ·) and H^-_{k,i}(t, ·) respectively the nondecreasing and the nonincreasing part of H_{k,i}(t, ·).
• (A4) For all i ∈ {1, ..., N}, the flux limiter A_i : [0, T] → R is a locally Lipschitz function.

Main results. Our main result is the proof of a comparison principle for equation (1). In [20, 4, 21], a comparison principle for (1) is proved in the case where the b_i are constant. In fact, these works prove the result in more general domains (such as a network, a junction or two half-spaces in R^N) and for more general Hamiltonians (depending on x and t). In [20], the junction condition is expressed through the nondecreasing and nonincreasing parts of the Hamiltonians. Let us also mention the work [28], where the authors consider a Kirchhoff-type Neumann condition at the junction, prove that its solutions satisfy a comparison principle, and then show that flux-limited solutions reduce to Kirchhoff-type viscosity solutions. Finally, concerning comparison principles for Hamilton-Jacobi equations with boundary conditions of Neumann type, let us cite [27, 1, 5, 19, 2, 13, 23].

Combining the comparison principle for (1) with Perron's method, we obtain the following main result.

Theorem 1.1. Assume (A) and that the initial datum u_0 is a Lipschitz continuous function. Then there exists a unique continuous viscosity solution u of (1) such that for all T > 0, there exists a constant C_T > 0 such that for all (t, x) ∈ [0, T] × R,

|u(t, x) − u_0(x)| ≤ C_T.

The second main result of this paper is a homogenization result.
We consider a macroscopic model describing the presence of a bus (or a large truck) and prove that the solution of the Hamilton-Jacobi formulation of this model converges towards the unique solution of equation (1) with one Hamiltonian and one boundary function. As in previous works [15, 14, 16], the proof of convergence relies on the construction of suitable correctors. The difference here is that we do not consider a microscopic model since, to our knowledge, no microscopic model considering the bus as a moving capacity constraint exists.
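To fix ideas, the flux-limited junction condition described in the introduction can be sketched numerically. This is a minimal illustration, not the paper's construction: the quadratic Hamiltonian and all numerical values below are assumptions chosen only for the example.

```python
# Illustrative sketch of a flux-limited junction function in the spirit
# of F_{A_i}: the limiter A(t) competes with the monotone parts of the
# Hamiltonians on each side of the junction point.  The quadratic
# Hamiltonian and all values here are assumptions for the example.

def monotone_parts(H, p_min):
    """Split a quasi-convex H (minimum at p_min) into its nonincreasing
    part H^- and its nondecreasing part H^+."""
    H_minus = lambda p: H(min(p, p_min))  # nonincreasing part
    H_plus = lambda p: H(max(p, p_min))   # nondecreasing part
    return H_minus, H_plus

def F_A(A_t, H_plus_left, H_minus_right, p1, p2):
    """Junction function: max of the flux limiter A(t), the nondecreasing
    part of the left Hamiltonian and the nonincreasing part of the right
    Hamiltonian."""
    return max(A_t, H_plus_left(p1), H_minus_right(p2))

# Example: H(p) = p**2 - 1, quasi-convex with minimum at p = 0.
H = lambda p: p * p - 1.0
H_minus, H_plus = monotone_parts(H, 0.0)

print(F_A(0.5, H_plus, H_minus, 0.0, 0.0))   # limiter active: 0.5
print(F_A(-2.0, H_plus, H_minus, 2.0, 0.0))  # left part active: 3.0
```

Whether the limiter enters through a max or a min depends on the sign convention for the Hamiltonian; here H plays the role of minus the flux, following the convention H(p) = −f(−p) used in Section 2.2.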
2. Traffic flow motivation and derivation of a Hamilton-Jacobi equation.

2.1. A first order bus-vehicles interaction model. In this section, we show how we can obtain equation (1). To simplify the presentation, and since the idea remains the same, we consider the case of one Hamiltonian H and one function b describing the bus trajectory. Before starting, we mention that our model was introduced in [26] in order to study the interaction between buses and the surrounding traffic flow. Several papers about modeling the effect of buses on traffic flow exist, see [24, 11, 12, 9, 17].
The idea is to consider the traffic flow on a single road where a bus is moving. In this model, we assume that the fundamental physical parameters of the model, i.e. the maximum density and the maximum mean velocity, do not depend on the position x if x ≠ b(t), i.e. the characteristics of the infrastructure do not change with the position far from the bus. The traffic flow is assumed to be described by a first order macroscopic model of LWR type if the space variable x ≠ b(t). The bus should be considered as a moving capacity restriction from the other drivers' point of view. The authors in [26] extended the notion of demand and supply introduced in [25] to the moving frame, using the change of variables ζ = x − b(t). The model is given by

ρ_t + (f(ρ))_x = 0 for x ≠ b(t),
f(ρ(t, b(t))) − ḃ(t) ρ(t, b(t)) ≤ B(t),

where ρ is the density of vehicles at time t and position x, f is a strictly concave function (as in the Greenshields model [18]), reaching its unique maximum at a critical density ρ_c, describing the flow, and

f̃(t, ρ) = f(ρ) − ḃ(t)ρ

is the flux seen in the frame moving with the bus. The function B is the limiter of the passing flux through the bus at time t. The definition of f̃ yields that for all t, the function f̃(t, ·) reaches a unique maximum at a point denoted ρ̃_c(t).
The functions f̃_D and f̃_S are respectively the Demand and Supply functions, defined as follows:

f̃_D(t, ρ) = f̃(t, ρ) if ρ ≤ ρ̃_c(t), and f̃_D(t, ρ) = f̃(t, ρ̃_c(t)) if ρ > ρ̃_c(t);
f̃_S(t, ρ) = f̃(t, ρ̃_c(t)) if ρ ≤ ρ̃_c(t), and f̃_S(t, ρ) = f̃(t, ρ) if ρ > ρ̃_c(t).
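The demand/supply construction in the moving frame can be sketched numerically. This is a minimal sketch, assuming a Greenshields flux, a bus at constant speed, and that the moving-frame flux is f(ρ) − ḃρ; all parameter values are illustrative.

```python
# Numerical sketch of the demand/supply decomposition in the frame
# moving with the bus.  The Greenshields flux and all parameter values
# are assumptions made for illustration only.

V_MAX, RHO_MAX, V_BUS = 1.0, 1.0, 0.3

def f(rho):
    """Greenshields flux f(rho) = V_max * rho * (1 - rho / rho_max)."""
    return V_MAX * rho * (1.0 - rho / RHO_MAX)

def f_tilde(rho):
    """Flux seen in the moving frame zeta = x - b(t) (here b' = V_BUS)."""
    return f(rho) - V_BUS * rho

# Critical density of the moving-frame flux, where f_tilde'(rho) = 0.
rho_c_tilde = RHO_MAX * (1.0 - V_BUS / V_MAX) / 2.0

def demand(rho):
    """Demand: nondecreasing envelope of f_tilde."""
    return f_tilde(min(rho, rho_c_tilde))

def supply(rho):
    """Supply: nonincreasing envelope of f_tilde."""
    return f_tilde(max(rho, rho_c_tilde))

def passing_flux(rho_left, rho_right, B_t):
    """Flux through the bus: demand/supply limited by B(t)."""
    return min(B_t, demand(rho_left), supply(rho_right))
```

With these values rho_c_tilde = 0.35, and a small limiter B(t) caps the passing flux: passing_flux(0.9, 0.1, 0.05) returns 0.05, while without the limiter the demand/supply pair would allow f_tilde(0.35) = 0.1225.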

Before passing to the Hamilton-Jacobi formulation, let us present the two points below in order to clarify the model.
• The trajectory of the bus can be approximated by assuming that ḃ = 0 (bus stops), or that ḃ is equal to the desired bus speed V_b (if the bus enjoys special lanes), or that ḃ is the minimum between the desired bus speed V_b and the local traffic speed, i.e.

ḃ(t) = min( V_b, v(ρ(t, b(t))) ),

where v(ρ) denotes the local traffic speed.
In this paper, we will only consider the second case, i.e. when the velocity of the bus is V_b (see Section 4). The case where ḃ = 0 reduces to the work [20]. In the case where the velocity of the bus depends on the density of vehicles, we obtain a strongly coupled PDE-ODE system and we have to introduce a good notion of solution for this system; in that case, we were not able to get a uniqueness result. Note that several papers, like [24, 7, 6, 8, 12], considered the case where ḃ depends on the density of vehicles, but with a different macroscopic model than the one of this paper.

2.2. The Hamilton-Jacobi formulation. In order to derive the Hamilton-Jacobi equation, we proceed as in [22], considering the continuous analogue of the discrete vehicle labels. Formally, we have the following equalities. Recalling the definition of g, we deduce the claimed identity. In fact, the last equality is true because −ġ(t) represents the passing flux at b(t), which is equal to the outgoing flux at b(t), i.e.
We now define the Hamiltonian H(p) = −f(−p). Then we deduce the following equation.

The junction condition. Recalling the definition of U_1 and U_2, and denoting by H̃^+(t, ·) and H̃^-(t, ·) respectively the nondecreasing part and the nonincreasing part of H̃(t, ·), we deduce the following junction condition.

3. Comparison principle for (1). In this section, we present the main result of this paper, which is the comparison principle for (1). We first give the definition of viscosity solutions. As usual, we begin by introducing the class of test functions.
Test functions. We denote by C^1(B) the class of test functions. If ϕ ∈ C^1(B), then
• ϕ is continuous.
Definition 3.1 (Viscosity solutions). We recall the definition of the upper and lower semi-continuous envelopes u^* and u_* of a locally bounded function u on B.
i) We say that u is a sub-solution (resp. super-solution) of (1) if for all test functions ϕ ∈ C^1(B) touching u^* from above (resp. u_* from below) at (t_0, x_0) ∈ B, the corresponding viscosity inequality holds.
ii) We say that u is a viscosity solution of (1) if u is a sub-solution and a super-solution of (1).
Theorem 3.2 (Reduction of test functions). Assume (A). We fix i ∈ {1, ..., N}, let t_0 ∈ (0, T) and let p_0 ∈ R. We consider the following Hamilton-Jacobi equation.
• Let u : (0, T) × R → R be an upper semi-continuous sub-solution of (3) satisfying (4). If for any test function ϕ touching u from above at (t_0, b_i(t_0)), with ϕ defined as in (5), the sub-solution viscosity inequality holds, then u is a sub-solution at (t_0, b_i(t_0)).
• Let u : (0, T) × R → R be a lower semi-continuous super-solution of (3). If for any test function ϕ touching u from below at (t_0, b_i(t_0)), with ϕ defined as in (5), the super-solution viscosity inequality holds, then u is a super-solution at (t_0, b_i(t_0)).
The proof of this theorem is similar to the proof of Theorem 2.7 in [20]. The next proposition is concerned with the supremum of sub-solutions; such a result is used in the Perron process to construct solutions. The proof is standard: the only new idea is to prove that u^* satisfies (4) in order to use the result of the preceding theorem. By Perron's method and the last proposition, we easily obtain the following result.
Theorem 3.4. Assume (A) and that the initial datum u_0 is Lipschitz continuous. Then there exists a viscosity solution u of (1) in [0, T) × R and a constant C_T > 0 such that

|u(t, x) − u_0(x)| ≤ C_T for all (t, x) ∈ [0, T) × R.

Theorem 3.5 (Comparison principle). Assume (A). Let u be a sub-solution and v be a super-solution of (1) in [0, T) × R such that, for some constant C_T > 0, |u(t, x) − u_0(x)| ≤ C_T and |v(t, x) − u_0(x)| ≤ C_T for all (t, x) ∈ [0, T) × R. Then u ≤ v in [0, T) × R.

As we mentioned before, we will adapt the idea introduced in [4]. The main difference here is the localization procedure used in order to choose the good test function. Before starting the proof, we state the following useful remarks.
Remark 3.6. We recall that for all t > 0 and for all i ∈ {1, ..., N + 1}, j ∈ {1, ..., N}, the Hamiltonian H_{i,j}(t, ·) is superlinear (see (A2)). Therefore, there exists a constant C_t > 0 such that for all p ∈ R, we have |p| ≤ max(C_t, H_{i,j}(t, p)). We will denote by C_T the upper bound of C_t for t ∈ [0, T].

Remark 3.7. We have the following inequalities.

Proof. The proof of these inequalities is very simple. We get the first line by the definition of the Hamiltonian H_{k,i}. To prove the second and the third lines, we simply use the continuity of the functions for k = i, i + 1.

Proof of Theorem 3.5. We introduce the supremum M and we want to prove that M ≤ 0. We argue by contradiction and assume that M > 0. Let L and R be two constants such that L < min_{t∈[0,T]} b_1(t) and R > max_{t∈[0,T]} b_N(t). Since we consider the maximum of an upper semi-continuous function on a compact set, we deduce that the maximum is reached at a point that we denote (t_η, x_η).

Case 1: M_η ≤ 0. Then we consider the following supremum. Classically, M_{ε,α} ≥ M/2 > 0 for η and α small enough. Moreover, the maximum is reached at (t, s, x, y) and αx → 0, αy → 0 as α → 0. We denote by x̄ the common limit of x and y as ε goes to zero, and by t̄ the common limit of t and s as ε goes to zero. It is clear that t̄ ≠ 0 since u_0 is Lipschitz. Moreover, taking ε to zero and using the upper semi-continuity property, we deduce that either x < b_1(t) and y < b_1(s), or x > b_N(t) and y > b_N(s). Using the fact that u is a sub-solution and v is a super-solution, we obtain the viscosity inequalities for j = 1 or j = N + 1. Subtracting the two inequalities and taking α to zero, we obtain a contradiction.

Case 2: M_η > 0 and x_η ≠ b_i(t_η) for all i ∈ {1, ..., N}. In this case, we consider a similar supremum. Classically, M_{ε,α} ≥ M_η > 0. Moreover, the maximum is reached at (t, s, x, y) and we denote by x̄ and t̄ respectively the common limit of x and y and the common limit of t and s as ε goes to zero. Moreover, taking ε to zero and using the upper semi-continuity, we pass to the limit in the penalized maximum. If x̄ ∉ [L, R], we proceed as in the case where M_η ≤ 0.
If not, then (7) and the definition of M_η imply that t̄ = t_η and x̄ = x_η. Writing the viscosity inequalities, where j is the index such that b_{j−1}(t_η) < x_η < b_j(t_η), and sending α to zero, we obtain a contradiction.
Case 3: M_η > 0 and there exists i_0 ∈ {1, ..., N} such that x_η = b_{i_0}(t_η). We first introduce an intermediate supremum. The maximum is reached at a point that we denote by (t_ν, s_ν, x_ν), and the second property of this maximum point implies a key bound for ν small enough. We need the following lemma.
Lemma 3.8. Let t̂, ŝ, x̂ be the limits (up to a subsequence) of (t_ν, s_ν, x_ν) as α goes to zero. Then (9) holds.

Proof. The proof is very simple and relies only on the upper semi-continuity property of the function. Since M_{ν,α} ≥ M_η, taking α to zero, we obtain a first inequality. Then, taking ν to zero and recalling the limit of the penalization terms, we obtain a last inequality which implies that, in particular, (9) is true.
We now continue the proof. We have to distinguish two different cases.

Subcase x_ν ≠ b_{i_0}(t_ν). We define the new supremum, with χ defined in (8) and G_ε as in (10). The maximum is reached at (t, s, x, y), and the fact that u_0 is Lipschitz continuous, that b_{i_0} is a continuous function, and the definition of G_ε yield (11). Equation (11) implies that for ε small enough, x ≠ b_{i_0}(t) and y ≠ b_{i_0}(s). We now write the viscosity inequalities, assuming that x_ν < b_{i_0}(t_ν); the case where x_ν > b_{i_0}(t_ν) is similar, only replacing H_{i_0} by H_{i_0+1}. In order to simplify the notations, we will use the notations of (12). Using the fact that u is a sub-solution of (1) and the definition of H_{i_0,i_0}, we deduce (13). Using the fact that v is a super-solution of (1), we obtain (14). Combining (13) and (14), we obtain (15).

The goal is to take first ε, then α and finally ν to zero. Using (13) and Remark 3.6, we deduce that there exists a constant C_T > 0 such that (16) holds, which implies (17). Estimates (16) and (17) imply that p_{ε,ν,α} and p_{ε,ν} converge as ε goes to zero (up to a subsequence). Denoting by p̄_{ν,α} = lim_{ε→0} p_{ε,ν,α} and by p̄_ν = lim_{ε→0} p_{ε,ν}, and taking ε to zero in (15), we obtain a limit inequality. Recalling Remark 3.7 and using (16) (more precisely, we use (16) after taking ε to 0), we deduce the next estimate. First, we send α to zero to get that H_{i_0,i_0}(s_ν, p̄_ν) − H_{i_0,i_0}(s_ν, p̄_{ν,α}) → 0, and then, recalling Lemma 3.8 and the definition of C̃_{ν,T}, we send ν to zero to obtain a contradiction.
Subcase x_ν = b_{i_0}(t_ν). In this case, we will use the following lemma.

Lemma 3.9. We have the following inequality.

Proof. We can assume that the maximum M_{ν,α} is strict (if not, we add a small perturbation term). This function reaches its strict maximum at t_ν. Let φ be a test function built with a constant L > 0 such that the bound below holds for all i ∈ {1, ..., N + 1}, with K_T an upper bound of |ḃ_j| on [0, T] for all j ∈ {1, ..., N}. The constant L is well defined thanks to the superlinearity property of H_i (see (A2)). The maximum of this function is reached at a point (t, x) with t close to s_ν (which implies that t ≠ 0 and t ≠ T). If x ≠ b_{i_0}(t), then writing the sub-solution inequality, we obtain a contradiction using (18). We deduce that x = b_{i_0}(t). Moreover, using that the strict maximum of ω is reached at t_ν, we deduce that t = t_ν and x = b_{i_0}(t_ν). Writing the sub-solution inequality, we obtain the desired estimate. The inequality above implies directly the desired result.
In order to introduce the new supremum M_{ν,α,ε}, we will define two constants λ_1 and λ_2, whose existence is due to the preceding lemma and to the properties of H_{k,i}(t, ·) for k = i, i + 1. Let ν be small enough so that 2(t_η − t_ν) < η/(2T^2). We define λ_1 and λ_2 below; their existence is due to the quasi-convexity of H_{i_0+1,i_0}(t, ·) and H_{i_0,i_0}(t, ·). In fact, using that lim_{p→+∞} H^+_{i_0+1,i_0}(t_ν, p)/p = +∞, we deduce that there exists C_T > 0 such that the corresponding bound holds. Using the fact that the continuous function p_0^{i_0+1,i_0} is bounded on [0, T], we deduce, using Remark 3.7, that for ν small enough the bound on λ_1 holds. Similarly, we also have H^-_{i_0,i_0}(s_ν, λ_2) < (s_ν − t_ν)/ν.
Before defining M_{ν,α,ε}, we recall the definition of the functions G_ε and χ (see (10) and (8)) and the notations used above (see (12)). We set the new supremum; the maximum is reached at a point (t, s, x, y). We distinguish three cases, depending on the sign of x − b_{i_0}(t).

If x > b_{i_0}(t). If y > b_{i_0}(s), we obtain a contradiction proceeding as in the subcase x_ν ≠ b_{i_0}(t_ν). If y ≤ b_{i_0}(s), then using the fact that u is a sub-solution, that H_{i_0+1,i_0}(t, p) ≥ H^+_{i_0+1,i_0}(t, p), that p_{ε,ν,α} > 0, and using (19), we obtain the corresponding inequality. Sending ε to zero, we obtain a contradiction with the definition of λ_1.

If x < b_{i_0}(t). We proceed as in the case where x > b_{i_0}(t), using that H_{i_0,i_0}(t, p) ≥ H^-_{i_0,i_0}(t, p), that p_{ε,ν,α} < 0, and the definition of λ_2.

If x = b_{i_0}(t). Using the fact that u is a sub-solution, we obtain (21). This time, we distinguish three cases, depending on the sign of y − b_{i_0}(s).

If y > b_{i_0}(s). Note first that, using (20), we have (22). Using the fact that v is a super-solution, we obtain a viscosity inequality. We claim that (23) holds; in order to obtain this inequality, we will prove (24). If (24) is true, then combining it with (22), (23) follows. For ε small enough and using the fact that p_{ε,ν} < 0, we have (24); in fact, the above inequality is true for ε small enough thanks to the definition of λ_1. Finally, combining (21) and (23), we deduce (25). As in the subcase x_ν ≠ b_{i_0}(t_ν), we first take ε to zero in (25), and then, taking ν to zero, thanks to Remark 3.7 and Lemma 3.8, we obtain a contradiction.
If y < b_{i_0}(s). Note first that, using (20), we have the analogous bound. As above, we can prove a similar estimate and then we obtain a contradiction.
If y = b_{i_0}(s). In this case, as above, we use the sub-solution inequality and the locally Lipschitz property of A_{i_0}; then we send first ε to zero and then ν to zero to obtain a contradiction.

4. A homogenization problem. The goal of this section is to prove that, after rescaling, the solution of the Hamilton-Jacobi formulation of (27) below converges towards the unique solution of (1) with only one Hamiltonian and one function b. Most of the results are presented without much detail, since they can be found in previous works [15, 14].

4.1. Presentation of the model. We consider the following model, which describes a moving capacity restriction (like a bus, or more generally a "moving bottleneck") for the density of vehicles, for (t, x) ∈ R_+ × R, where ρ is the density of vehicles, b represents the position of the bottleneck, f is the flux function outside the bottleneck region, g is the flux function in the bottleneck region and φ is a transition function. We make the following assumptions.

Assumptions (B).
• (B1) The flux function f is the Greenshields fundamental diagram [18], given by

f(ρ) = V_max ρ (1 − ρ/ρ_max),

where V_max represents the maximal mean velocity of vehicles and ρ_max is the maximal density far from the bus.
• (B2) The flux function around the bus g is given by

g(ρ) = V_max ρ (1 − ρ/σ_max),

where σ_max is the maximal density around the bus. Moreover, σ_max < ρ_max.
• (B3) b is a linear function describing the trajectory of the bus, defined by b(t) = V_b t, and we assume that 0 < V_b < V_max.
• (B4) The function φ is a C^1 transition function satisfying φ(x) = 1 if x < −r − 1 or x > r + 1. We assume that the initial density satisfies the condition below.
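The assumptions (B) can be illustrated with a small numerical sketch. The exact formula for g and the shape of φ below are assumptions for the example (only σ_max < ρ_max and φ = 1 away from the bus come from (B2) and (B4)); the Hamiltonian at the end uses the change of unknown H(p) = −f(−p) from Section 2.2.

```python
# Illustrative sketch of the fluxes in assumptions (B).  The form of g
# and the transition function phi are assumptions for the example.

V_MAX, RHO_MAX, SIGMA_MAX, R = 1.0, 1.0, 0.6, 1.0

def f(rho):
    """Greenshields flux outside the bottleneck (B1)."""
    return V_MAX * rho * (1.0 - rho / RHO_MAX)

def g(rho):
    """Assumed reduced-capacity flux around the bus (sigma_max < rho_max)."""
    return V_MAX * rho * (1.0 - rho / SIGMA_MAX)

def phi(x):
    """C^1 transition: 1 away from the bus, 0 on the bus (assumed shape)."""
    if abs(x) >= R + 1.0:
        return 1.0
    if abs(x) <= R:
        return 0.0
    s = abs(x) - R                   # s in (0, 1)
    return s * s * (3.0 - 2.0 * s)   # C^1 smoothstep

def flux(x, rho):
    """Flux at position x (bus at the origin): interpolation of f and g."""
    return phi(x) * f(rho) + (1.0 - phi(x)) * g(rho)

def H(p):
    """Hamiltonian from the change of unknown in Section 2.2."""
    return -f(-p)

# Capacity is reduced near the bus: max g < max f.
print(max(f(r / 100.0) for r in range(101)))   # 0.25 at rho = 0.5
print(max(g(r / 100.0) for r in range(101)))   # 0.15 at rho = 0.3
```

The printed values confirm that the maximal flux drops from 0.25 far from the bus to 0.15 inside the bottleneck region, which is the "moving capacity restriction" of the model.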
A simple computation yields the Hamilton-Jacobi formulation of the model. Setting H(p) = −f(−p) and F(p) = −g(−p), and recalling the definition of the function b (see assumption (B3)), we obtain the following Hamilton-Jacobi equation. In order to introduce the convergence result, let us define the new Hamiltonians H̄ and F̄. Clearly, F̄ > H̄, and we will use the following notations. The main result of this section is the following theorem.

Theorem 4.4 (Convergence). We assume that the initial condition u_0 is a Lipschitz function. For ε > 0, let u^ε be the unique solution of (28). Then there exists A ∈ [F̄_0, 0] such that u^ε converges locally uniformly to the unique viscosity solution u^0 of the limit equation below.

4.3. Viscosity solutions. In this subsection, we give the definition of viscosity solutions of equation (28) for ε = 1; we then study the space and time oscillations of the solution. The considered equation is given by (31).

4.3.1. Definition. We now introduce the standard notion of viscosity solutions of equation (31).
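For the record, with the Greenshields flux of (B1) the Hamiltonian H(p) = −f(−p) can be computed explicitly; the short derivation below is a direct calculation under (B1) (the formula for F assumes that g has the analogous Greenshields form with σ_max in place of ρ_max):

```latex
H(p) = -f(-p)
     = -V_{\max}\,(-p)\Bigl(1 - \tfrac{-p}{\rho_{\max}}\Bigr)
     = V_{\max}\,p + \frac{V_{\max}}{\rho_{\max}}\,p^{2}.
```

Hence H is convex (in particular quasi-convex), with minimum value −V_max ρ_max/4 = −max_ρ f(ρ), attained at p = −ρ_max/2. The same computation would give F(p) = V_max p + (V_max/σ_max) p², so σ_max < ρ_max yields F ≥ H pointwise, consistent with the comparison between the two Hamiltonians stated above.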

4.4. Results for viscosity solutions of (31). We begin by stating the comparison principle for (31), whose proof is standard [3, 10].

Theorem 4.5 (Comparison principle for (31)). Let u be a sub-solution of (31) and v be a super-solution of (31). Assume also that there exists a constant K > 0 such that for all (t, x) ∈ [0, T] × R, |u(t, x) − u_0(x)| ≤ K and |v(t, x) − u_0(x)| ≤ K. Then we have u(t, x) ≤ v(t, x) for all (t, x) ∈ [0, T] × R.
In order to prove Theorem 4.4, we will study the following simpler equation, since it is invariant by time translation.
The unique solution u of (31) is given by the representation below, where w is the unique viscosity solution of (33).

Lemma 4.6 (Barriers). The functions w^+(t, x) = u_0(x) + C_1 t and w^-(t, x) = u_0(x) − C_2 t are respectively super- and sub-solutions of (33).
Applying Perron's method together with the comparison principle, we obtain the following result.
Proof. We begin by proving inequality (34). Let h > 0. We define v(t, x) = w(t + h, x) and the goal is to prove (36). All terms of inequality (36) are viscosity solutions on (0, +∞) of (33), since equation (33) is invariant by time translation and by addition of constants. Using Lemma 4.6, we have that