A flame propagation model on a network with application to a blocking problem

We consider the Cauchy problem \[\partial_t u+H(x,Du)=0 \quad (x,t)\in\Gamma\times (0,T),\quad u(x,0)=u_0(x) \quad x\in\Gamma\] where $\Gamma$ is a network and $H$ is a convex and positive homogeneous Hamiltonian which may change from edge to edge. In the first part of the paper, we prove that a Hopf-Lax type formula gives the unique viscosity solution of the problem. In the second part of the paper we study a flame propagation model on a network and an optimal strategy to block a fire breaking out in some part of a pipeline; some numerical simulations are provided.


Introduction
We study the Cauchy problem where Γ is a network and the operator H : Γ × R → R is continuous, convex, nonnegative and positive homogeneous in the last variable.
In R^n, problem (1.1) arises in flame propagation models and in the evolution of curves whose speed of propagation depends only on the normal direction. Existence, uniqueness and the evolution of the level sets of the solution of (1.1) have been extensively studied in the framework of viscosity solution theory (see [3,4,17]). The unique viscosity solution of (1.1) is given by the Hopf-Lax formula u(x, t) = min{u_0(y) : S(y, x) ≤ t}, (x, t) ∈ R^n × (0, ∞), (1.2) where S is a distance function characterized as the solution of the associated stationary equation.
In recent years there has been increasing interest in the study of nonlinear differential equations on networks, since they describe various phenomena such as traffic flow, blood circulation, data transmission, electric networks, etc. (see [9,12]). Concerning Hamilton-Jacobi equations on networks, we mention the recent papers [1,6,8,11,14]. In this paper, following the approach in [14], we address existence, uniqueness and regularity for evolutive Hamilton-Jacobi equations on networks. In particular we prove that the Hopf-Lax formula (1.2) can be extended to this framework.
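The Hopf-Lax formula (1.2) selects, at each point x and time t, the smallest initial value over all points y from which the front can reach x within time t. A minimal sketch on a vertex-discretized network; the node names, the distance table S and the datum u0 below are hypothetical (in practice S would solve the associated stationary equation):

```python
# Toy illustration of the Hopf-Lax formula u(x,t) = min{u0(y) : S(y,x) <= t}
# on a discretized network. Here S is given as a precomputed lookup table.

def hopf_lax(u0, S, x, t):
    """Evaluate u(x,t) = min of u0(y) over nodes y with S(y,x) <= t."""
    candidates = [u0[y] for y in u0 if S[(y, x)] <= t]
    return min(candidates)  # nonempty: S(x,x) = 0, so y = x always qualifies

# three collinear nodes a-b-c with unit-length edges (eikonal case: S = path distance)
nodes = ["a", "b", "c"]
S = {(y, x): abs(nodes.index(y) - nodes.index(x)) for y in nodes for x in nodes}
u0 = {"a": 0.0, "b": 1.0, "c": 2.0}

print(hopf_lax(u0, S, "c", 0.0))  # only y = c reachable -> 2.0
print(hopf_lax(u0, S, "c", 2.0))  # the sublevel set now contains a -> 0.0
```

As t grows the admissible set {y : S(y, x) ≤ t} expands, so u(x, ·) is nonincreasing, in accordance with a front that only advances.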
The main issue of our investigation is to handle transition vertices (namely, points of the network where several edges meet). Actually, in our framework a suitable definition of viscosity solution at transition vertices (together with the standard one at points inside the edges) ensures the well-posedness of the problem. Let us recall that the same feature occurs for stationary first order equations (see [1,11,14]), whereas for second order equations some transition conditions (the so-called Kirchhoff conditions) need to be imposed (see [12] and references therein).
In the second part of the paper we illustrate our results with a concrete application: the blocking problem. Suppose that a fire breaks out in some region of an oil pipeline. A central controller can stop the propagation of the fire by closing the junctions of the pipes, represented by the vertices of the network. The controller needs some time to reach the junctions, which become unavailable once they are reached by the fire front. Therefore only a subset of the vertices can be closed in time to bring the fire under control. The aim is to find a strategy which maximizes the part of the network preserved from the fire. We give a characterization of the optimal strategy and we study the corresponding flame propagation in the network. Moreover we describe a numerical scheme for the solution of the problem and we present some numerical examples.
Notations: A network Γ is a finite collection of points V := {x_i}_{i∈I} and edges E := {e_j}_{j∈J} in R^n. The vertices in V are connected by the continuous, non self-intersecting arcs of E. Each arc e_j is parametrized by a smooth function π_j : [0, l_j] → R^n, l_j > 0.
For i ∈ I we set Inc_i := {j ∈ J | e_j is incident to x_i}. We set I_B := {i ∈ I | #Inc_i = 1}, I_T := I \ I_B, and we denote by ∂Γ := {x_i ∈ V | i ∈ I_B} the set of boundary vertices of Γ and by Γ_T := {x_i | i ∈ I_T} the set of transition vertices. For simplicity, we assume ∂Γ = ∅ (otherwise, one can introduce appropriate boundary conditions on ∂Γ).
The network is not oriented, but the parametrization of the arcs induces an orientation which can be expressed by the signed incidence matrix A = {a_ij}_{i∈I, j∈J} with

a_ij := 1 if x_i ∈ e_j and π_j(0) = x_i,
a_ij := −1 if x_i ∈ e_j and π_j(l_j) = x_i,
a_ij := 0 otherwise.
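The signed incidence matrix can be assembled directly from the orientation induced by the parametrizations. A minimal sketch, assuming each edge is stored as the pair of its start and end vertices (the "Y"-shaped example network is hypothetical; loop edges are excluded, consistent with non self-intersecting arcs):

```python
# Sketch of the signed incidence matrix A = (a_ij): a_ij = +1 if edge e_j
# starts at vertex x_i (pi_j(0) = x_i), -1 if it ends there (pi_j(l_j) = x_i),
# and 0 otherwise.

def incidence_matrix(n_vertices, edges):
    """edges[j] = (start_vertex, end_vertex) as induced by the parametrization pi_j."""
    A = [[0] * len(edges) for _ in range(n_vertices)]
    for j, (start, end) in enumerate(edges):
        A[start][j] = 1    # pi_j(0) = x_start
        A[end][j] = -1     # pi_j(l_j) = x_end
    return A

# a "Y"-shaped network: three edges, all parametrized to end at the
# transition vertex 3 (so #Inc_3 = 3 and vertices 0,1,2 are boundary vertices)
edges = [(0, 3), (1, 3), (2, 3)]
A = incidence_matrix(4, edges)
print(A[3])  # row of the transition vertex -> [-1, -1, -1]
```

Each column has exactly one +1 and one −1, so the row of a vertex x_i has #Inc_i nonzero entries.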
In the following we always identify x ∈ e_j with y = π_j^{-1}(x) ∈ [0, l_j]. For any function u : Γ → R and each j ∈ J we denote by u_j : [0, l_j] → R the restriction of u to e_j, i.e. u_j(y) = u(π_j(y)) for y ∈ [0, l_j]. Derivatives are always taken with respect to the parametrization of the arc, i.e. if x ∈ e_j, y = π_j^{-1}(x), then Du(x) := du_j/dy (y). At x = x_i ∈ V, we write D_j u(x) to denote the derivative relative to the arc e_j, j ∈ Inc_i. We denote by d : Γ × Γ → [0, +∞) the path distance on the network.

Evolutive Hamilton-Jacobi equations on networks
In this section we consider the Hamilton-Jacobi equation (2.1) with the initial condition (2.2). The Hamiltonian H : Γ × R → R satisfies the assumptions (2.3)-(2.7); in particular,

H(x, ·) is convex and positive homogeneous in p for any x ∈ Γ; (2.6)
inf{H(x, p) : |p| = 1, x ∈ Γ} > 0. (2.7)

Remark 2.1 Assumptions (2.3)-(2.4) are standard in the framework of viscosity solution theory. Assumption (2.5) expresses the independence of H of the parametrization of the arcs: if we change the parametrization of an arc e_j incident to x_i and invert its orientation, then the derivative du_j/dy (y) at y = π_j^{-1}(x_i) also changes sign. Because of (2.6), the equation is called geometric and is connected with front propagation (see [3,4,17]), while (2.7) implies in particular the coercivity of the Hamiltonian, i.e. lim_{|p|→∞} H(x, p) = +∞ for any x ∈ Γ.

Example 1 A Hamiltonian satisfying the previous assumptions is given by
where A is a compact metric space, b : Γ × A → R is a continuous function such that, for some r > 0, there holds (−r, r) ⊂ co{b(x, a) : a ∈ A} and sup{b(x, a) : a ∈ A} = sup{−b(x, a) : a ∈ A}. In particular, if A = (−1, 1) and b(x, a) = a/c(x) with c bounded and strictly positive, then the Hamiltonian is H(x, p) = |p|/c(x) (called eikonal Hamiltonian) and it satisfies assumptions (2.3)-(2.7).
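The displayed formula for this control-theoretic Hamiltonian is not shown above; a hedged reconstruction of the standard Bellman form, consistent with the eikonal example b(x, a) = a/c(x), is:

```latex
H(x,p) \;=\; \sup_{a\in A}\, b(x,a)\,p .
% Convexity and positive homogeneity in p hold since H is a supremum of
% linear functions of p; the symmetry sup_a b = sup_a(-b) makes H even in p.
% For A = [-1,1] and b(x,a) = a/c(x):
H(x,p) \;=\; \sup_{|a|\le 1}\frac{a\,p}{c(x)} \;=\; \frac{|p|}{c(x)} .
```

The condition (−r, r) ⊂ co{b(x, a) : a ∈ A} then guarantees the positivity assumption (2.7), since sup_a b(x, a) ≥ r > 0.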
In the next definitions we introduce the class of test functions and the notion of viscosity solution for (2.1).
where (a ij ) is the signed incidence matrix.
Remark 2.2 By the continuity of φ and point (ii) we infer

Definition 2.2
i) If (x, t) ∈ e_j × (0, T), j ∈ J, then a function u ∈ USC(Γ × (0, T)) (resp., v ∈ LSC(Γ × (0, T))) is called a subsolution (resp., supersolution) of (2.1) at (x, t) if for any test function φ for which u − φ attains a local maximum (resp., a local minimum) at (x, t), we have

ii) If x = x_i ∈ Γ_T, then u ∈ USC(Γ × (0, T)) is called a subsolution at (x, t) if for any j, k ∈ Inc_i and any (j, k)-test function φ for which u − φ attains a local maximum at (x, t) relatively to (e_j ∪ e_k) × (0, T), we have

v ∈ LSC(Γ × (0, T)) is called a supersolution at (x, t) if for any j ∈ Inc_i there exists k ∈ Inc_i, k ≠ j (said to be i-feasible for j at (x, t)), such that for any (j, k)-test function φ for which v − φ attains a local minimum at (x, t) relatively to (e_j ∪ e_k) × (0, T), we have

Remark 2.3 By Remark 2.2 and assumption (2.5), at a vertex x = x_i, it holds

In the next proposition we exploit the geometric character of (2.1) to give equivalent definitions of subsolution and supersolution.
if and only if for any α ∈ R, for any j, k ∈ Inc_i with j ≠ k, and any (j, k)-test function φ which has a local minimum on {u ≥ α} ∩ ((e_j ∪ e_k) × (0, T)) at (x, t), inequality (2.11) holds.
The proof is postponed to the Appendix.

A comparison theorem
We prove a comparison principle for the equation (2.1). We shall use some techniques introduced for evolutive equations in the Euclidean setting (see [2] and references therein) and for stationary equations on networks (see [14,Theorem 5.1]). Note that for the next result we only exploit assumptions (2.3)-(2.5).
be a subsolution and, respectively, a supersolution of (2.1) such that u(x, 0) ≤ v(x, 0) for any x ∈ Γ. Then u ≤ v in Γ × (0, T).
Proof We proceed by contradiction, assuming
For η sufficiently small, the function u(x, t) − v(x, t) − η/(T − t) attains a positive maximum at some point (x, t) ∈ Γ × (0, T) and we set, with d as in (1.3),
By the choice of δ and η, we have
whence the function Ψ attains a positive maximum with respect to G := Γ × [0, T) × Γ × [0, T] at some point (x̄, t̄, ξ̄, s̄). Moreover, by the last relation, for every α we have sup
For later use, we denote by (x̄_1, t̄_1) a maximum point of Ψ(x, t, x, t) in Γ × [0, T): (2.15)
By the inequality Ψ(x̄, t̄, ξ̄, s̄) ≥ Ψ(x̄, t̄, x̄, t̄) and the boundedness of Γ, we get
for some constant c independent of η, δ and α; in particular, we infer the estimates
Since t̄ < T, it follows that s̄ < T for α sufficiently small. We deduce that, as α → 0⁺ (possibly passing to a subsequence), x̄, ξ̄ → x_0 and t̄, s̄ → t_0 for some (x_0, t_0) ∈ Γ × [0, T). We claim that, in fact, there holds
Indeed, if the claim were false, by the semicontinuity of u and v (recall that (x̄_1, t̄_1) is a maximum point for Ψ(x, t, x, t) in Γ × [0, T)), passing to the limit in the inequality
we would get a contradiction to the definition of (x̄_1, t̄_1) in (2.15). Whence (2.16) is proved.
We now distinguish two cases: t_0 = 0 and t_0 ∈ (0, T). Assume first that t_0 = 0; in order to contradict relation (2.13), observe that
Passing to the limit as α → 0⁺ and taking into account relation (2.14) and u(x, 0) ≤ v(x, 0) on Γ, we infer
which is the desired contradiction to (2.13).
Assume now that t_0 > 0. For α sufficiently small, both t̄ and s̄ are strictly positive. If x_0 ∉ V (i.e., x_0 ∈ e_j for some j ∈ J), one can accomplish the proof by standard arguments, so we consider only the case x_0 = x_i ∈ V. We assume without loss of generality that the unique path of length d(x̄, ξ̄) connecting x̄ with ξ̄ runs through at most one vertex and, if this happens, the vertex is x_i.
We observe that the functions
attain a maximum at (x̄, t̄) and a minimum at (ξ̄, s̄), respectively. We now distinguish several cases according to the positions of x̄ and ξ̄.
Case 1: x̄ ∈ e_j, ξ̄ ∈ e_k for some j, k ∈ Inc_i, j ≠ k. We note that the functions ψ_1 and ψ_2 are admissible test functions at (x̄, t̄) and at (ξ̄, s̄), respectively; therefore, the definitions of sub- and supersolution entail
Subtracting the latter inequality from the former, we deduce

By (2.4), we obtain
By the last two inequalities and relation (2.16), passing to the limit as α → 0⁺, we conclude δ ≤ 0, which is the desired contradiction.
Case 2: x̄ ∈ e_j, ξ̄ = x_i for some j ∈ Inc_i. As before, the function ψ_1 is an admissible test function at (x̄, t̄). We claim that the function ψ_2 is an admissible (j, k)-test function for every k ∈ Inc_i \ {j} (recall that the point x̄ belongs to the edge e_j). Actually, for k ≠ j, we have
The definitions of sub- and supersolution yield again inequalities (2.17) and (2.18). Since the rest of the proof can be achieved following the arguments of the previous case, we omit it.
Case 3: This case is similar to Case 2 (and even simpler, because the definition of subsolution is less restrictive than that of supersolution); hence, we omit it.
namely, ψ_1 (respectively, ψ_2) is an admissible (j, k)-test function at (x̄, t̄) (resp., at (ξ̄, s̄)) for every j, k ∈ Inc_i with j ≠ k. By the same arguments as those in Cases 2 and 3, we infer inequalities (2.17) and (2.18), and we conclude as in Case 1. ✷

A regularity result
This section is devoted to establishing a regularity result for the solution of equation (2.1). We note that the functions
are respectively a super- and a subsolution to problem (2.1)-(2.2). The comparison principle hence ensures
so u is bounded. Moreover, for every h > 0, the functions
are respectively a sub- and a supersolution to problem (2.1)-(2.2). Invoking again the comparison principle, we infer
which amounts to Lipschitz continuity in the variable t. Hence, the function u verifies (in the viscosity sense)
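The barrier functions invoked above are not displayed; a plausible reconstruction of this standard argument (an assumption on our part, using only that H ≥ 0, that H(x, 0) = 0 by positive homogeneity, and the comparison principle) is:

```latex
% Constants solve (2.1) because H(x,0)=0, hence
w^{+}(x,t):=\max_{\Gamma}u_{0}, \qquad w^{-}(x,t):=\min_{\Gamma}u_{0}
% are a super- and a subsolution of (2.1)-(2.2), and comparison yields
\min_{\Gamma}u_{0}\;\le\;u(x,t)\;\le\;\max_{\Gamma}u_{0}.
% For the regularity in time, for h>0 the shifts u(x,t+h)\mp C h, with C a
% suitable constant depending on the Lipschitz bound of u_0, are a sub- and a
% supersolution with the same initial datum up to order h, and comparison gives
|u(x,t+h)-u(x,t)|\;\le\;C\,h .
```

The last estimate is exactly the asserted Lipschitz continuity in t.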

A representation formula
In this section we exploit the geometric character of the equation to give a representation formula for the solution of (2.1)-(2.2). Given the Hamiltonian H, we define the support function of the sublevel set {p ∈ R : H(x, p) ≤ 1} by
The function s : Γ × R → R is continuous, convex, positive homogeneous and nonnegative (see [16]). For example, for the eikonal Hamiltonian
We introduce a distance function related to the Hamiltonian H on the network: for x, y ∈ Γ define
Note that the distance defined by (2.19) coincides with the one defined by (1.3) for H(x, p) = |p|, since in this case s(x, q) = |q|. The next proposition summarizes some properties of S (for the definition of viscosity solution on a network in the stationary case, we refer the reader to [14]).
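When the weight density s(x, 1) is aggregated into one positive weight per edge, the distance S of (2.19) between vertices reduces to a weighted shortest-path distance and can be computed by Dijkstra's algorithm. A hedged sketch under that simplification (the square network and its weights are hypothetical; with unit weights S is just the path distance d):

```python
# Dijkstra sketch of the network distance S restricted to vertices.
import heapq

def network_distance(n_vertices, weighted_edges, source):
    """weighted_edges[j] = (i1, i2, w_j), w_j > 0 the integrated weight of e_j."""
    adj = {i: [] for i in range(n_vertices)}
    for i1, i2, w in weighted_edges:
        adj[i1].append((i2, w))
        adj[i2].append((i1, w))  # the network is not oriented
    dist = {i: float("inf") for i in range(n_vertices)}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue  # stale heap entry
        for nb, w in adj[i]:
            if d + w < dist[nb]:
                dist[nb] = d + w
                heapq.heappush(heap, (d + w, nb))
    return dist

# square network 0-1-2-3-0 with uniform unit weights
dist = network_distance(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)], 0)
print(dist[2])  # opposite corner, two unit edges away -> 2.0
```

The equivalence (2.20) with the path distance d is visible here: with weights bounded between 1/C and C, the computed distances are squeezed accordingly.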

Proposition 2.3 S is a Lipschitz continuous function on Γ × Γ and it is equivalent
to the path distance d, i.e. there exists C > 0 such that
for any x, y ∈ Γ. (2.20)
Moreover, for any closed K ⊂ Γ, S(K, ·) is a subsolution in Γ and a supersolution in Γ \ K of the Hamilton-Jacobi equation
For the proof of the previous proposition we refer to [14, Prop. 4.1].
Theorem 2.2 Let u 0 : Γ → R be a continuous function. Then the solution of (2.1)-(2.2) is given by In order to prove this result, let us first establish a preliminary lemma.
the case x_0 ∈ V being similar. Assume by contradiction that there exist η > 0, j, k ∈ Inc_i and a (j, k)-admissible test function φ at (x_0, t_0) such that u − φ has a local maximum at (x_0, t_0) relatively to (e_j ∪ e_k) × (0, T) and
Moreover, by (2.24) with t = t_0, we get that w(x) − φ(x, t_0) has a local maximum at x_0 relatively to e_j ∪ e_k, hence
By (2.25) and (2.26) we get a contradiction to (2.23). The proof that u is a supersolution at x_0 = x_i ∈ Γ_T is similar: we claim that u verifies the supersolution condition choosing as i-feasible edge for j ∈ Inc_i the same edge e_k which is i-feasible for j for the function w at x_i. The computations follow the same arguments as before, so we omit them. ✷
Proof of Theorem 2.2 i) u is continuous: let (x_n, t_n) ∈ Γ × [0, T] be such that lim_{n→∞} (x_n, t_n) = (x_0, t_0) and set δ_n = |t_n − t_0| + C d(x_n, x_0), where C is as in (2.20). We claim that
and therefore (2.27). Moreover we claim
where the latter inequality is due to the subadditivity of S; hence relation (2.28) follows. Therefore, by (2.27) and (2.28), we deduce
where ω is the modulus of continuity of u_0 in a neighborhood of x_0. This gives lim sup_{n→∞} u(x_n, t_n) ≤ u(x_0, t_0).
By {y ∈ Γ : S(y, x_n) ≤ t_n} ⊂ {y ∈ Γ : S(y, x_0) ≤ t_0 + δ_n} we get the reverse inequality in a similar way.
ii) u is a supersolution: We only prove that u is a supersolution at (x_0, t_0) with
Assume by contradiction that, for some j ∈ Inc_i and for any k ∈ Inc_i \ {j}, there is a (j, k)-admissible test function φ_k such that u − φ_k has a local minimum at (x_0, t_0) relatively to e_j ∪ e_k with φ_k(x_0, t_0) = u(x_0, t_0) = α, and such that
Observe that
We claim that φ_k has a local maximum on the set {w ≤ 0} ∩ ((e_j ∪ e_k) × (0, T)) at (x_0, t_0). In fact, if (x, t) ∈ {w ≤ 0}, then by (2.30), u(x, t) ≤ α and since
the claim is proved. By Lemma 2.1, w is a supersolution to (2.1) and therefore Prop. 2.1 gives a contradiction to (2.29). If S({u_0 ≤ α}, x_0) = 0, we claim that (x_0, t_0) is a local maximum point for u. In fact,
and the claim is proved. By (2.31), (x_0, t_0) is also a local maximum point for φ_k on (e_j ∪ e_k) × (0, T) for any k ≠ j. By (2.7) and (2.29), we get φ_t(x_0, t_0) < 0 and therefore a contradiction to (x_0, t_0) being a local maximum point for φ_k.
iii) u is a subsolution: We only consider the case (x_0, t_0) with x_0 = x_i ∈ V. We first observe that
Given j, k ∈ Inc_i, let φ be a (j, k)-test function such that u − φ has a local maximum at (x_0, t_0) relatively to (e_j ∪ e_k) × (0, T). Arguing as in the supersolution case, we define w(x, t) = S({u_0 ≤ α}, x) − t and we show that φ has a local minimum on the set {w ≥ 0} ∩ ((e_j ∪ e_k) × (0, T)). Then we conclude by applying Prop. 2.1 and Lemma 2.1. ✷

An application: the blocking problem
In this section we provide a concrete application of our results: the network Γ now represents an oil pipeline (or a computer network, the circulatory system, etc.) and at the initial time a fire breaks out in the region R_0 ⊂ Γ (a virus is detected in a subnet, an embolus occurs in some vessel). The speed of propagation of the fire is known but may depend on the state variable. Our aim is to determine an optimal strategy to stop the fire and minimize the burnt region. As in the flame propagation model described in [3], let R_0 be the initial burnt region and R_t the region burnt at time t. Assume that the front ∂R_t propagates in the outward normal direction to the front itself. Then R_t is given by the 0-sublevel set of a viscosity solution of (2.1)-(2.2), where the initial datum u_0 satisfies R_0 = {x ∈ Γ : u_0(x) ≤ 0}.
Recalling the representation formula (2.22), we observe that the 0-sublevel set of the solution of (2.1)-(2.2) is given by
where S is defined in (2.19). Note that, since Γ is composed of a finite number of bounded edges and therefore has finite total length, the burnt region R = {x ∈ Γ : S(R_0, x) < ∞} = ∪_{t≥0} R_t coincides with Γ. In other words, without any external intervention, the pipeline will be completely burnt in finite time.
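The burnt region at time t is thus the sublevel set R_t = {x : S(R_0, x) ≤ t}, which can be computed by a multi-source shortest-path sweep from the initial burnt set. A minimal sketch on a vertex-discretized network with unit propagation speed (the three-node path graph is hypothetical):

```python
# Multi-source Dijkstra: S(R_0, x) is the smallest distance from any burnt
# source, and R_t keeps the nodes with S <= t.
import heapq

def burnt_region(adj, R0, t):
    """adj[v] = [(neighbor, edge_weight), ...]; R0 = initially burnt nodes."""
    dist = {v: float("inf") for v in adj}
    for v in R0:
        dist[v] = 0.0
    heap = [(0.0, v) for v in R0]
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for nb, w in adj[v]:
            if d + w < dist[nb]:
                dist[nb] = d + w
                heapq.heappush(heap, (d + w, nb))
    return {v for v in adj if dist[v] <= t}

# path network 0-1-2, unit edges, fire starting at node 0
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0)]}
print(sorted(burnt_region(adj, {0}, 1.0)))  # -> [0, 1]
```

Letting t grow exhausts the whole (finite-length) network, mirroring the remark that without intervention the union of the R_t is all of Γ.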
We assume that an operator, located at x 0 ∈ V (the "operation center"), can block the fire by closing the junctions of the pipeline (i.e., vertices of the network) and that this operation is effective only after a delay which depends on the distance of the junction from x 0 .
Our problem is reminiscent of other models described in the literature (for instance, see [10,18] and references therein) which concern the control of some diffusion process in a network (e.g. minimizing the spread of a virus or maximizing the spread of a piece of information). In this framework, let us stress the main novelties of our setting: in our model, the diffusion has positive finite speed and affects both vertices and edges; the spread is not reversible (namely, "infected" points cannot become "healthy" again); and the effect of the operator's action has finite speed (in other words, it is effective after a delay depending on the distance from the operation center).

Definition 3.1 An admissible strategy σ is a subset of V such that
where δ is a given nonnegative constant. We denote by V ad the vertices of the network which satisfy the admissibility condition (3.1) and by Σ ad the set of the admissible strategies.
Remark 3.1 Condition (3.1) means that the time needed to reach the vertex x_i ∈ σ from x_0 at velocity 1/δ is less than or equal to the time at which the fire front reaches x_i. Therefore the junction x_i can be blocked before the front goes through it.
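The admissibility test of Remark 3.1 compares two precomputed distances at each vertex. A minimal sketch, assuming condition (3.1) takes the form δ·d(x_0, x_i) ≤ S(R_0, x_i) (the distance tables and the value of δ below are hypothetical):

```python
# A vertex x_i is admissible iff the operator, travelling at speed 1/delta
# from the operation center x_0, arrives no later than the fire front:
# delta * d(x_0, x_i) <= S(R_0, x_i).

def admissible_vertices(d_from_center, S_from_fire, delta):
    """Both arguments map vertex index -> distance (e.g. from Dijkstra runs)."""
    return {i for i in d_from_center
            if delta * d_from_center[i] <= S_from_fire[i]}

d_from_center = {0: 3.0, 1: 2.0, 2: 1.0}   # d(x_0, x_i)
S_from_fire   = {0: 0.0, 1: 1.0, 2: 2.0}   # S(R_0, x_i); vertex 0 is burning
print(sorted(admissible_vertices(d_from_center, S_from_fire, 0.5)))  # -> [1, 2]
```

Note that vertex 0 fails the test for any δ > 0, since it lies in the initial burnt region (S = 0 there).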
with S_σ(y, x) = ∞ if there is no admissible curve joining y to x. We also set
which are respectively the region burnt at time t and the total burnt region under the strategy σ. Observe that:
- if δ is very small, then the optimal strategy is given by the extremes of the edges containing R_0;
- if δ is very large and x_0 ∈ R_0, then every strategy is useless, since the whole pipeline will burn whatever the operator does.
Apart from these simple cases, an optimal strategy for the blocking problem may not be obvious, and we aim to find an efficient way to compute it. To find a strategy which minimizes the burnt region, we first give a characterization of R_σ in terms of a problem satisfied by the distance S_σ(R_0, ·).
ii) u is a viscosity solution of the problem
Proof Note that, for j ∈ J, either e_j ⊂ R or e_j ∩ R = ∅, i.e. an edge is either completely burnt or it cannot be reached by the fire. The function u can be discontinuous at x_i ∈ σ and
Actually, if x_i ∈ σ and u_j(x_i) < ∞, an admissible trajectory for S_σ connecting x_i to R_0 and containing the edge e_j, j ∈ Inc_i, necessarily enters e_j from x_i. Hence
In R \ σ, S_σ locally behaves like the distance S defined in (2.19). Therefore the continuity of u in R \ σ and the sub- and supersolution properties in the open set R \ (R_0 ∪ σ) are obtained by the same arguments as in [14, Prop. 4.1].
To prove iii), assume by contradiction that there exists x_0 ∈ R such that u(x_0) < w(x_0). For any x, y ∈ R such that S_σ(x, y) < ∞, a minimizing trajectory always exists, since (up to reparametrization) there are only finitely many trajectories connecting the two points.
Iterating the same argument on [t_m, t_{m+1}], we finally get w(ξ(T)) ≤ u(ξ(T)) and therefore a contradiction, since ξ(T) = x_0. We conclude that x_0 ∈ R_w and w(x_0) ≤ u(x_0). ✷
We now show that the strategy composed of all the admissible nodes which are adjacent to a non-admissible node is optimal, in the sense that it maximizes the preserved region.

Proposition 3.2 The admissible strategy
Proof Assume by contradiction that there exist σ ∈ Σ_ad and x_0 ∈ (Γ \ R_σ) ∩ R_{σ_opt}. Hence there exists an admissible trajectory ξ for σ_opt connecting x_0 to R_0, i.e. there exist y_0 ∈ R_0, t > 0 and ξ ∈ B^t_{y_0, x_0} such that ξ(r) ∉ σ_opt for r ∈ [0, t]. Note that σ_opt disconnects the subgraph containing the admissible vertices V_ad from the one containing the non-admissible vertices V \ V_ad, and therefore ξ([0, t]) is contained in the subgraph with vertices (V \ V_ad) ∪ σ_opt. Since σ ⊂ V_ad, ξ is also admissible for S_σ(y_0, x_0), and therefore we have a contradiction to x_0 ∈ Γ \ R_σ. ✷
Remark 3.2 One could be interested in a cost functional on Σ_ad which takes into account, besides the length of the burnt region, other terms such as the cost of blocking a junction. For instance, consider the cost
where α_i represents the cost of blocking the vertex x_i, while β_j represents the damage caused by the destruction of edge e_j by the fire. Clearly, the minimum of I exists, since Σ_ad is finite, but it seems more difficult to characterize the optimal strategy, because it strongly depends on the geometry of the network.
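Proposition 3.2 identifies the optimal strategy as the set of admissible vertices adjacent to some non-admissible vertex. A minimal sketch at the level of the vertex adjacency graph (the path network and the admissible set below are hypothetical):

```python
# Optimal strategy of Proposition 3.2: close every admissible vertex that
# shares an edge with a non-admissible one.

def optimal_strategy(adjacency, V_ad):
    """adjacency[i] = vertices sharing an edge with x_i; V_ad = admissible set."""
    return {i for i in V_ad if any(nb not in V_ad for nb in adjacency[i])}

# path network 0-1-2-3 with the fire near vertex 0: only 0 is non-admissible
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(sorted(optimal_strategy(adjacency, {1, 2, 3})))  # -> [1]
```

This captures the separating role of σ_opt in the proof: closing this "boundary layer" disconnects the admissible subgraph from the non-admissible one, and no admissible strategy can preserve more.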

Numerical simulations
In this section we propose a numerical method to compute the optimal strategy for the blocking problem. The scheme is based on a finite difference approximation of the stationary problem (3.2); for simplicity, we only consider the case of the eikonal Hamiltonian H(x, p) = |p|/c(x).
On each interval [0, l_j] parametrizing the arc e_j, we consider a uniform partition y_{j,m} = m h_j with M_j = l_j/h_j ∈ N and m = 0, ..., M_j. In this way we obtain a grid G_h = {x_{j,m} = π_j(y_{j,m}), j ∈ J, m = 0, ..., M_j} on the network Γ. We define R_0^h = G_h ∩ R_0, the set of the nodes in the initial front. For x_1, x_2 ∈ G_h, we say that x_1 and x_2 are adjacent, and we write x_1 ∼ x_2, if and only if they are the images of two adjacent grid points, i.e. x_k = π_j(y_k) for y_k ∈ [0, l_j], k = 1, 2, j ∈ J, and |y_1 − y_2| = h_j. Note that if x_i ∈ V is a vertex of Γ, then the nodes of the grid G_h adjacent to x_i may belong to different arcs. We compute the optimal strategy by means of the following algorithm, based on the results of Prop. 3.2.
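The grid construction can be sketched as follows; the helper `build_grid` and its rounding convention are our assumptions (the text takes M_j = l_j/h_j ∈ N exactly, so here the step is adjusted per edge to make the endpoints grid nodes):

```python
# Sample each arc e_j of length l_j uniformly with M_j + 1 nodes y_{j,m} = m*h_j.

def build_grid(edge_lengths, h):
    """Return, per edge j, the parameters y_{j,m}; h is the target step size."""
    grid = {}
    for j, l in enumerate(edge_lengths):
        M = max(1, round(l / h))   # number of subintervals M_j on e_j
        hj = l / M                 # effective step so that M_j * h_j = l_j
        grid[j] = [m * hj for m in range(M + 1)]
    return grid

grid = build_grid([1.0, 0.5], 0.25)
print(len(grid[0]), len(grid[1]))  # -> 5 3
```

Both endpoints of every arc are grid nodes, so vertices of Γ belong to the grid and inherit neighbours from every incident arc, as noted in the text.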
Blocking strategy [B]
(i) In the first step we solve the front propagation problem on the network, computing the approximate time u_h(x) at which a node x_{j,m} ∈ G_h gets burnt: (3.3)
Note that if x_{j,m} coincides with a vertex x_i ∈ V, the approximating equation reads as
(ii) In the second step we determine the vertices which satisfy the admissibility condition (3.1). We define
where w_h : G_h → R represents the approximate time needed to reach a node x ∈ G_h starting from the operation center x_0 and moving with constant speed 1/δ. The function w_h is computed by means of the finite difference scheme
(iii) We define the approximate optimal strategy by setting
and, for σ = σ_opt^h, we compute the corresponding approximate distance by solving the following finite difference scheme for (3.2). Note that, as in the continuous case, the value of u_h at x_i ∈ σ can depend on the edge e_j, and in general the function is discontinuous at these points.
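The scheme (3.3) is not displayed above; as a hedged substitute, step (i) can be sketched with a standard upwind update for the arrival time, u(x) = min over neighbours x' of u(x') + h/c(x), iterated to its fixed point by Gauss-Seidel sweeps (a first-order stand-in for the actual discretization; the three-node data are hypothetical):

```python
# Upwind sweeping sketch for the arrival time u^h of the fire on the grid G_h.

def arrival_times(neighbors, h, c, R0, n_iter=100):
    """neighbors[m] = grid nodes adjacent to node m; R0 = initially burnt nodes;
    c(m) > 0 is the local propagation speed; h is the grid step."""
    u = {m: (0.0 if m in R0 else float("inf")) for m in neighbors}
    for _ in range(n_iter):          # sweep until the fixed point is reached
        for m in neighbors:
            if m in R0:
                continue             # burnt at time 0
            u[m] = min(u[m], min(u[nb] for nb in neighbors[m]) + h / c(m))
    return u

# three nodes on one arc, spacing h = 0.5, unit speed, fire starting at node 0
nbrs = {0: [1], 1: [0, 2], 2: [1]}
u = arrival_times(nbrs, 0.5, lambda m: 1.0, {0})
print(u[2])  # two steps of h/c = 0.5 -> 1.0
```

The same routine, run from the operation center with speed 1/δ instead of c, yields the function w_h of step (ii), and comparing the two arrays node by node gives the discrete admissible set of (3.4).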

Example 1: a simple network
We consider a network with a simple structure where the fire starts at one vertex, R_0 = {(0, 0)}, and propagates with speed c = 1.
We first perform step (i) of Algorithm [B] and compute the approximate time u_h(x) at which a node x ∈ G_h gets burnt. The results are shown in Fig. 1 together with the graph structure.
Next, we perform step (ii) of Algorithm [B]. We suppose the operation center x_0 is located at the vertex (−1.5, 2.5) and the velocity to reach a node x_i from x_0 is 1/δ = 1. Using (3.4), we compute the set of nodes V_ad^h. The result is shown in Fig. 2, where the nodes in V_ad^h are represented by square markers. Once the set of admissible nodes V_ad^h has been computed, we can compute the optimal strategy, following step (iii). The result is shown in Fig. 3. It is clear from the simple structure of the network that any other choice of σ_opt^h would lead to a larger burnt region and consequently to a smaller preserved region.

Example 2: a more complex network
We consider a more complex network, with 20 vertices and 32 arcs. We suppose the fire starts at two vertices and propagates with the non-constant normal speed c(x) = |x|. We proceed as in the first example and compute the approximate time u_h(x) at which a node x ∈ G_h gets burnt. The results are shown in Fig. 4 together with the graph structure. We suppose the operation center x_0 is located at the vertex x_0 = (3.8, 6.5) and the velocity to reach a node x_i from x_0 is 1/δ = 1/5. Using (3.4), we compute the set of admissible nodes V_ad^h and then the optimal strategy, following step (iii). The result is shown in Fig. 6. In this case we obtain the optimal solution by blocking only three vertices. By changing the set R_0 as in Fig. 7, the region of admissible nodes V_ad^h, shown in Fig. 8, turns out to be much smaller. In this case the optimal strategy is formed by all the vertices in V_ad^h, and the preserved region becomes smaller than in the previous case, see Fig. 9.

Appendix

Lemma A.1 Let u be a subsolution (resp., supersolution) to (2.1). For any continuous nondecreasing function θ : R → R, the function θ ∘ u is still a subsolution (resp., supersolution) to (2.1). In particular, if u solves (2.1), then θ ∘ u also solves the same equation.
Proof For a point inside an edge, the proof follows by the same arguments as in the proof of [7, Theorem 5.2]. For the subsolution case at a vertex x_i, we observe that a USC function is a subsolution if, and only if, it is a standard viscosity subsolution on the arc e_j ∪ e_k for any j, k ∈ Inc_i. Taking into account this observation and using again the arguments of [7, Theorem 5.2], the claim follows. ✷
(ii). Consider a function u ∈ USC(Γ × (0, T)). For x = x_i ∈ V, assume that, for each α ∈ R, each j, k ∈ Inc_i with j ≠ k, and each (j, k)-test function φ which has a local minimum on {u ≥ α} ∩ ((e_j ∪ e_k) × (0, T)) at (x, t), inequality (2.9) holds. Our aim is to prove that u satisfies the subsolution condition at (x, t). To this end, we assume by contradiction that there exists an admissible test function φ which verifies
namely, the function φ attains a local minimum at (x, t) with respect to ∆. Invoking our assumption, we infer inequality (2.9), which is the desired contradiction. Hence, the first implication of the statement is proved. We now prove the reverse implication. Let u be a subsolution to (2.1). Fix α ∈ R, x = x_i ∈ Γ_T and j, k ∈ Inc_i with j ≠ k. Let φ be an admissible (j, k)-test function which attains a local minimum on ∆ := {u ≥ α} ∩ ((e_j ∪ e_k) × (0, T)) at (x, t).
By Lemma A.1, it suffices to prove that there exists a continuous nondecreasing function θ such that θ • u − φ attains a local maximum in (x, t) relatively to (e j ∪ e k ) × (0, T ).
For (y, s) ∈ W ∩ ∆, by the definition of ∆ (recall that φ attains a local minimum on ∆), there holds θ(u(y, s)) = φ(x, t) ≤ φ(y, s). We observe that the sets E_n are nonempty for n sufficiently large, with E_n ⊂ E_{n+1} and E_0 = ∪_n E_n. Moreover, the sequence {β_n} is nondecreasing and, by the compactness of E_n, it fulfills β_n < α and β_n → α as n → +∞. Hence there exists an increasing sequence {n_m} such that β_{n_m} is increasing and E_{n_{m+1}} \ E_{n_m} ≠ ∅. We set

θ(s) := φ(x, t) for s ∈ [α, +∞),
θ(s) := φ(x, t) − 1/n_{m−1} for s = β_{n_m},
θ(s) := inf_W φ for s ∈ (−∞, β_{n_1}],
θ linear on (β_{n_{m−1}}, β_{n_m}).

Whence, inequality (A.3) is established and statement (ii) is completely proved.
(iii). The proof of this part of the statement follows by the same arguments as those in the previous one reversing each inequality and choosing the test functions only along the i-feasible edge. ✷