Construction of the Minimum Time Function for Linear Systems Via Higher-Order Set-Valued Methods

This paper introduces an approach to compute the approximate minimum time function of control problems which is based on reachable set approximation and uses arithmetic operations for convex compact sets. The theoretical justification of the proposed approach is restricted to a class of linear control systems. The error estimate for the fully discrete reachable set is provided in terms of the Hausdorff distance to the continuous-time reachable set. The detailed procedure for solving the corresponding discrete set-valued problem is described. Under standard assumptions, by means of convex analysis and knowledge of the regularity of the true minimum time function, we estimate the error of its approximation. Higher-order discretization of the reachable set of the linear control problem can balance missing regularity (e.g., Hölder continuity) of the minimum time function for smoother problems. To illustrate the error estimates and to demonstrate differences to other numerical approaches we provide a collection of numerical examples which either allow higher order of convergence with respect to time discretization or in which the continuity of the minimum time function cannot be sufficiently guaranteed, i.e., we study cases in which the minimum time function is Hölder continuous or even discontinuous.


1.
Introduction. Reachable sets have long attracted the attention of mathematicians, both in theoretical and in numerical analysis. The approaches for the numerical computation of reachable sets mainly split into two classes: those for reachable sets up to a given time and those for reachable sets at a given end time. We give here only exemplary references, since the literature is very rich (more references can be found in an early version of this paper in [10]). There are methods based on overestimation and underestimation of reachable sets by ellipsoids [30] or zonotopes [2,27], or on approximating the reachable set with support functions resp. supporting points [11,2]. Other popular and well-studied approaches involve level-set methods, semi-Lagrangian schemes and the solution of an associated Hamilton-Jacobi-Bellman equation, see e.g. [12,17,25], or are based on the viability concept [3] and the viability kernel algorithm [38]. Further methods [11,9,6] are set-valued generalizations of quadrature methods and Runge-Kutta methods initiated by the works [23,40,24,43,22].
Here, we will focus on set-valued quadrature methods and set-valued Runge-Kutta methods with the help of support functions or supporting points, since they do not suffer from the wrapping effect or from an exploding number of vertices, and the error introduced by restricting computations to finitely many directions can be easily estimated. Furthermore, they belong to the most efficient and fast methods (see [2, Sec. 3.1], [32, Chap. 9, p. 128]) for linear control problems, to which we restrict the computation of the minimum time function T(x). We refer to [6,11,32] (and references therein) for technical details on the numerical implementation, although we will lay out the main ideas of this approach for the reader's convenience.
In optimal control theory the regularity of the minimum time function is studied intensively, see e.g. [20,21] and references therein. For the error estimates in this paper it will be essential to single out example classes for which the minimum time function is Lipschitz continuous (no order reduction of the set-valued method) or Hölder continuous with exponent 1/2 (order reduction by the square root). Minimum time functions are usually computed by solving the Hamilton-Jacobi-Bellman (HJB) equations and by the dynamic programming principle, see e.g. [16,17,13,14,15,19,28]. In this approach, the minimal requirement on the regularity of T(x) is continuity, see e.g. [13,19,28]. The solution of a HJB equation with suitable boundary conditions gives immediately, after a transformation, the minimum time function, and its level sets provide a description of the reachable sets. A natural question is whether it is also possible to go the other way around, i.e., to reconstruct the minimum time function T(x) if the reachable sets are known. One of the attempts was done in [16,17], where the approach is based on PDE solvers and on the reconstruction of the optimal control and solution via the value function. On the other hand, our approach in this work is completely different. It is based on very efficient quadrature methods for convex reachable sets as described in Section 3. In this article we present a novel approach for calculating the minimum time function. The basic idea is to use set-valued methods for approximating reachable sets at a given end time with computations based on support functions resp. supporting points. By reversing time and starting from the convex target as initial set, we compute the reachable sets for times on a (coarser) time grid. Due to the strictly expanding condition for reachable sets, the corresponding end time is assigned to all boundary points of the computed reachable sets.
Since we discretize in time and in space (by choosing a finite number of outer normals for the computation of supporting points), the vertices of the polytopes forming the fully discrete reachable sets are considered as data points of an irregular triangulated domain. On this simplicial triangulation, a piecewise linear approximation yields a fully discrete approximation of the minimum time function.
The well-known interpolation error and the convergence results for the set-valued method can be applied to yield an easy-to-prove error estimate by taking into account the regularity of the minimum time function. It requires at least Hölder continuity and involves the maximal diameter of the simplices in the used triangulation. A second error estimate is proved without explicitly assuming the continuity of the minimum time function and depends only on the time interval between the computed (backward) reachable sets. In contrast to the Hamilton-Jacobi-Bellman approach, the computation does not need a nonempty interior of the target set; for singletons the error estimate even improves. It is also able to compute discontinuous minimum time functions, since the underlying set-valued method can also compute lower-dimensional reachable sets. There is no explicit dependence of the algorithm and the error estimates on the smoothness of optimal solutions or controls. Further results are devoted to reconstructing discrete optimal trajectories which reach a given target from a set of supporting points for a class of linear control problems, and also to proving the convergence of discrete optimal controls by the use of nonsmooth and variational analysis. The main tool is Attouch's theorem, which allows us to benefit from the convergence of the discrete reachable sets to the time-continuous one.
The plan of the article is as follows: in Section 2 we collect notations, definitions and basic properties of convex analysis, set operations, reachable sets and the minimum time function. The convexity of the reachable set for linear control problems and the characterization of its boundary via the level-set of the minimum time function is the basis for the algorithm formulated in the next section. In Section 3, we briefly introduce the reader to set-valued quadrature methods and Runge-Kutta methods and their implementation and discuss the convergence order for the fully discrete approximation of reachable sets at a given time both in time and in space.
In the next subsection we present the error estimate for the fully discrete minimum time function which depends on the regularity of the minimum time function and on the convergence order of the underlying set-valued method. Another error estimate expresses the error only on the time period between the calculated reachable sets. The last subsection discusses the construction of discrete optimal trajectories and convergence of discrete optimal controls. A series of accompaning examples can be found in Section 5. We compare the error of the minimum time function with respect to time and space discretization studying the influence of its regularity and of the smoothness of the support functions of corresponding set-valued integrands. We first consider several linear examples with various target and control sets and study different levels of regularity of the corresponding minimum time function. The nonlinear example in Subsection 5.2 demonstrates that this approach is not restricted to the class of linear control systems. Although first numerical experiences are gathered there, its theoretical justification has to be gained by a forthcoming paper. In Subsection 5.3 one example demonstrates the need of the strict expanding property of (union of) reachable sets for characterizing boundary points of the reachable set via time-minimal points. The section ends with a collection of examples which either are more challenging for numerical calculations or partially violate our assumptions. Finally, a discussion of our approach and possible improvements can be found in Section 6.

2.
Preliminaries. In this section we recall some notation, definitions and basic facts of convex analysis and control theory for later use. Let C(R^n) be the set of convex, compact, nonempty subsets of R^n, ‖·‖ the Euclidean norm and ⟨·,·⟩ the inner product in R^n; let B_r(x_0) be the closed (Euclidean) ball with radius r > 0 centered at x_0 and S^{n−1} the unit sphere in R^n. Let A be a subset of R^n and M an n × n real matrix; then B_r(A) := ∪_{x∈A} B_r(x), and ‖M‖ denotes the lub-norm of M with respect to ‖·‖, i.e., the spectral norm. The convex hull, the boundary, the interior and the diameter of a set A are denoted by co(A), ∂A, int(A), diam(A) respectively. We define the support function, the supporting points in a given direction and the set arithmetic operations as follows.
The support function and the supporting face of A ∈ C(R^n) in the direction l ∈ R^n are defined as, respectively,

δ*(l, A) := max_{a∈A} ⟨l, a⟩,    Y(l, A) := {y ∈ A : ⟨l, y⟩ = δ*(l, A)}.

An element y(l, A) of the supporting face is called a supporting point.
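For polytopes given by finitely many points, these two objects are easy to evaluate; the following minimal sketch (our illustration, not the implementation of [6,11,32]; all function names are hypothetical) computes δ*(l, A) and one supporting point for A = co(vertices):

```python
import numpy as np

def support_function(vertices, l):
    """delta*(l, A) = max_{a in A} <l, a> for the polytope A = co(vertices)."""
    return float(np.max(vertices @ l))

def supporting_point(vertices, l):
    """One element y(l, A) of the supporting face Y(l, A)."""
    return vertices[int(np.argmax(vertices @ l))]

# unit square co{(0,0), (1,0), (1,1), (0,1)}
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
```

For example, in the direction l = (1, 0) the support function of the unit square equals 1, and l = (1, 1) singles out the vertex (1, 1) as supporting point.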
Known properties of the convex hull, the support function and the supporting points when applied to the set operations introduced above can be found, e.g., in [4, Chap. 0], [3, Sec. 4.6, 18.2], [6,32,2]. Especially, the convexity of the arithmetic set operations becomes obvious. We also recall the definition of the Hausdorff distance, which is the main tool to measure the error of the reachable set approximation.
Then the distance function from x to D is d(x, D) := min_{d∈D} ‖x − d‖ and the Hausdorff distance between C, D ∈ C(R^n) is defined as

d_H(C, D) := max{ max_{c∈C} d(c, D), max_{d∈D} d(d, C) }.

Now we recall some basic notions of control theory, see e.g. [12, Chap. IV] for more details. Consider the following linear time-variant control dynamics in R^n:

(1)  ẏ(t) = A(t)y(t) + B(t)u(t),  u(t) ∈ U,  y(t_0) = y_0.

The coefficients A(t), B(t) are n × n resp. n × m matrices, y_0 ∈ R^n is the initial value and U ∈ C(R^m) is the set of control values. Under standard assumptions, the existence and uniqueness of the solution of (1) are guaranteed for any measurable control function u(·) and any y_0 ∈ R^n. Let S ⊂ R^n, a nonempty compact set, be the target, let U be the set of admissible controls, i.e., of measurable functions u : [t_0, ∞) → U, and let y(t, y_0, u) be the solution of (1). We define the minimum time to reach the target S starting from y_0 ∈ R^n with some u ∈ U as

t(y_0, u) = min{t ≥ t_0 : y(t, y_0, u) ∈ S} ≤ ∞.
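For the polytopal approximations used later, the Hausdorff distance of two finite vertex sets can be evaluated directly from the definition; a minimal sketch (our illustration, with a hypothetical function name):

```python
import numpy as np

def hausdorff_distance(C, D):
    """d_H(C, D) for finite point sets C, D given as arrays of row vectors."""
    # pairwise distance matrix dist[i, j] = ||C_i - D_j||
    dist = np.linalg.norm(C[:, None, :] - D[None, :, :], axis=2)
    # max over C of d(c, D) and max over D of d(d, C)
    return max(float(dist.min(axis=1).max()), float(dist.min(axis=0).max()))
```

For instance, for C = {(0,0)} and D = {(3,4), (0,1)} the two one-sided distances are 1 and 5, so d_H(C, D) = 5.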
The minimum time function to reach S from y_0 is defined as T(y_0) = inf_{u∈U} t(y_0, u), see e.g. [12, Sec. IV.1]. We also define the reachable sets for fixed end time t > t_0, up to time t resp. up to any finite time as follows:

R(t) := {y_0 ∈ R^n : there exists u ∈ U with y(t, y_0, u) ∈ S},
R≤(t) := ∪_{t_0≤s≤t} R(s),    R := ∪_{t≥t_0} R(t).

By definition,

(2)  R≤(t) = {y_0 ∈ R^n : T(y_0) ≤ t}

is a sublevel set of the minimum time function, while for a given maximal time t_f > t_0 and some t ∈ I := [t_0, t_f], R(t) is the set of points reachable from the target in time t by the time-reversed system

(3)  ẏ(t) = Ā(t)y(t) + B̄(t)u(t),  u(t) ∈ U,  y(t_0) ∈ S,

where we write Ā(t), B̄(t) for the time-reversed coefficients for shortening notations. In other words, R(t) equals the set of starting points from which the system can reach the target in time t. Sometimes R(t) is called the backward reachable set; it is also considered in [16] for computing the minimum time function by solving a Hamilton-Jacobi-Bellman equation.
The following standing hypotheses are assumed to be fulfilled in the sequel.
Assumption 2.3. (i) Ā(·), B̄(·) are n × n resp. n × m real-valued matrix functions which are integrable on any compact interval of [t_0, ∞).
(ii) The control set U ⊂ R^m is convex, compact and nonempty, i.e., U ∈ C(R^m).
(iii) The target set S ⊂ R^n is convex, compact and nonempty, i.e., S ∈ C(R^n).
(iv) The reachable sets are strictly expanding on I, i.e., R(s) ⊂ int R(t) for t_0 ≤ s < t ≤ t_f.
Especially, the target set can be a singleton.
Under our standing hypotheses, the control problem (3) can equivalently be replaced by the following linear differential inclusion

(4)  ẏ(t) ∈ Ā(t)y(t) + B̄(t)U for a.e. t ≥ t_0,
(5)  y(t_0) ∈ S,

with absolutely continuous solutions y(·) (see [39, Appendix A.4]). All solutions of (4)-(5) are represented as

y(t) = Φ(t, t_0)y_0 + ∫_{t_0}^{t} Φ(t, s)B̄(s)u(s) ds

for all y_0 ∈ S, u ∈ U and t_0 ≤ t < ∞, where Φ(t, s) is the fundamental solution matrix of the homogeneous system

(6)  ẏ(t) = Ā(t)y(t)

with Φ(s, s) = I_n, the n × n identity matrix. Using the Minkowski addition and Aumann's integral [5], the reachable set can be described by means of Aumann's integral as follows:

(7)  R(t) = Φ(t, t_0)S + ∫_{t_0}^{t} Φ(t, s)B̄(s)U ds.

For time-invariant systems, i.e., Ā(t) ≡ Ā, we have Φ(t, t_0) = e^{Ā(t−t_0)}. For the linear control system under Assumptions 2.3(i)-(iii), the reachable set at a fixed end time is convex, which allows us to apply support functions or supporting points for its approximation. Furthermore, the reachable sets change continuously with respect to the end time. The following proposition provides the connection between R(t) and the level set of T(·) at time t which is essential for this approach. We will benefit from the sublevel representation in (2). The result is related to [16, Theorem 2.3], where the minimum time function at x is the minimal time for which x lies on the zero-level set bounding the backward reachable set.

Proposition 1. Let Assumption 2.3 be fulfilled and t > t_0. Then

(8)  ∂R(t) = {y_0 ∈ R^n : T(y_0) = t}.

Proof. "⊂": Assume that there exists x ∈ ∂R(t) with x ∉ {y_0 ∈ R^n : T(y_0) = t}. Clearly, x ∈ R≤(t) and (2) shows that T(x) ≤ t. By definition there exists s ∈ [t_0, t] with x ∈ R(s). Assuming s < t we get the contradiction x ∈ R(s) ⊂ int R(t) from Assumption 2.3(iv). Hence, T(x) = t.
"⊃": Assume that there exists x ∈ {y_0 ∈ R^n : T(y_0) = t} (i.e., T(x) = t) such that x ∉ ∂R(t). Since x ∈ R(t) by (2) and we assume that x ∉ ∂R(t), then x ∈ int(R(t)).
Hence, there exists ε > 0 with x + εB_1(0) ⊂ R(t). The continuity of R(·) ensures, for t_1 ∈ [t_0, t) sufficiently close to t, that d_H(R(t), R(t_1)) ≤ ε/2. Hence,

x + (ε/2)B_1(0) + (ε/2)B_1(0) = x + εB_1(0) ⊂ R(t) ⊂ R(t_1) + (ε/2)B_1(0).

The order cancellation law in [35, Theorem 3.2.1] can be applied, since R(t_1) is convex and all sets are compact. Therefore, the summand (ε/2)B_1(0) can be cancelled on both sides, which implies x + (ε/2)B_1(0) ⊂ R(t_1). Hence, x ∈ int(R(t_1)) with t_1 < t so that T(x) ≤ t_1 < t, which is again a contradiction. Therefore, {y_0 ∈ R^n : T(y_0) = t} ⊂ ∂R(t). The proof is completed.
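The representation (7) translates into a formula for support functions, δ*(l, R(t)) = δ*(Φ(t, t_0)ᵀl, S) + ∫_{t_0}^{t} δ*(B̄(s)ᵀΦ(t, s)ᵀl, U) ds, which is the basis of the quadrature methods in Section 3. The following sketch (our illustration, not the authors' code) evaluates this formula with a trapezoidal rule for S = {0} and U the Euclidean unit ball (so δ*(v, U) = ‖v‖), assuming a nilpotent Ā so that Φ can be written down explicitly:

```python
import numpy as np

def phi(A, dt):
    """Fundamental matrix e^{A dt} for nilpotent A (A @ A = 0): I + A dt."""
    return np.eye(A.shape[0]) + A * dt

def support_R(l, A, B, t, n_quad=200):
    """Trapezoidal approximation of delta*(l, R(t)) via
    delta*(l, R(t)) = delta*(Phi(t,t0)^T l, S) + int delta*(B^T Phi(t,s)^T l, U) ds
    with S = {0} and U the Euclidean unit ball (delta*(v, U) = ||v||)."""
    s_grid = np.linspace(0.0, t, n_quad + 1)
    vals = np.array([np.linalg.norm(B.T @ phi(A, t - s).T @ l) for s in s_grid])
    h = t / n_quad
    return float(h * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1]))

# double integrator: A @ A = 0, scalar control acting on the second component
A_di = np.array([[0.0, 1.0], [0.0, 0.0]])
B_di = np.array([[0.0], [1.0]])
```

For the double integrator one can check by hand that δ*((0,1), R(1)) = 1 and δ*((1,0), R(1)) = 1/2, which the quadrature reproduces.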
In the previous characterization of the boundary of the reachable set at fixed end time, the monotonicity assumption on the reachable sets played a crucial role. As stated in Remark 1, Assumption 2.3(iv) also guarantees that the union of reachable sets coincides with the reachable set at the largest end time and is trivially convex. If we drop this assumption, we can only characterize the boundary of the union of reachable sets up to a given time, relaxing the expanding property (iv) while demanding convexity, as can be seen in the following proposition.
Proposition 2. Let t > t_0, and let Assumptions 2.3(i)-(iii) and Assumption
(iv)' R≤(·) has convex images and is strictly expanding on the compact interval I, i.e., R≤(s) ⊂ int R≤(t) for t_0 ≤ s < t ≤ t_f,
be fulfilled. Then

(9)  ∂R≤(t) = {y_0 ∈ R^n : T(y_0) = t}.

Proof. The proof can be found in [31, Proposition 7.1.4].
Remark 2. Assumption (iv)' implies that the considered system is small-time controllable, see [12, Chap. IV, Definition 1.1]. Moreover, under the assumption of small-time controllability, the nonemptiness of the interior of R and the continuity of the minimum time function on R follow, see [12, Chap. IV, Propositions 1.2, 1.6]. Assumption (iv)' is essentially weaker than (iv), since the convexity of R≤(t) and the strict expansion of R≤(·) follow from (iv) by Remark 1. The inclusion for R≤(·) in this assumption is equivalent to small-time controllability (STC) for time-invariant systems; sufficient conditions for STC in this case via generalized Petrov and second-order conditions are discussed in [34]. Under one of these two conditions the minimum time function is either Lipschitz continuous or Hölder continuous with exponent 1/2. Extensions of the continuity property to ϕ-convexity can be found in [20].
In the previous proposition we could allow R≤(t) to be lower-dimensional and would still be able to prove the inclusion "⊃" in (9), since the interior of R≤(t) would be empty and x cannot lie in the interior, which also creates the (wanted) contradiction.
For the other inclusion "⊂" the nonemptiness of the interior of R(t) in Proposition 1 resp. of R≤(t) in Proposition 2 is essential. Therefore, the expanding property in Assumption (iv) resp. (iv)' cannot be relaxed to mere monotonicity in the sense R(s) ⊂ R(t) for s < t, as Example 5.6 shows.
3. Approximation of the minimum time function.
3.1. Set-valued discretization methods. Consider the linear control dynamics (1). For a given x ∈ R^n, the problem of computing approximately the minimum time T(x) to reach S by following the dynamics (1) is deeply investigated in the literature. It is usually solved via the associated discrete Hamilton-Jacobi-Bellman (HJB) equation, see, for instance, [13,25,19,28]; neglecting the space discretization, one obtains an approximation of T(x). In this paper, we introduce another approach to treat this problem based on the approximation of the reachable set of the corresponding linear differential inclusion. The approximate minimum time function is not derived from a PDE solver, but from iterative set-valued methods or direct discretization of control problems. Our aim now is to compute R(t) numerically up to a maximal time t_f based on the representation (7) by means of set-valued methods approximating Aumann's integral. There are many approaches to achieve this goal. We describe three known options for discretizing the reachable set which are used in the following.
Consider, for simplicity of notation, an equidistant grid over the interval [t_0, t_f] with step size h = (t_f − t_0)/N and grid points t_i = t_0 + ih, i = 0, . . . , N.
(I) Set-valued quadrature methods with exact knowledge of the fundamental solution matrix of (6) (see e.g. [40,23,11], [6, Sec. 2.2]): as in the pointwise case, we replace the integral ∫_{t_0}^{t_N} Φ(t_N, s)B̄(s)U ds by a quadrature scheme of order p with non-negative weights. Therefore, (7) is approximated by

R_h(t_N) = Φ(t_N, t_0)S + h Σ_{i=0}^{N} c_i Φ(t_N, t_i)B̄(t_i)U

with weights c_i ≥ 0, i = 0, . . . , N. Moreover, the following error estimate holds:

d_H(R(t_N), R_h(t_N)) ≤ C h^p.

(II) Set-valued combination methods (see e.g. [11], [6, Sec. 2.3]): we replace Φ(t_N, t_i) in method (I) by an approximation Φ_h(t_N, t_i) (e.g., via ODE solvers applied to the corresponding matrix equation) such that

‖Φ(t_N, t_i) − Φ_h(t_N, t_i)‖ ≤ C h^p,  i = 0, . . . , N.

Then the discrete reachable set is globally resp. locally recursively represented as

(12)  R_h(t_N) = Φ_h(t_N, t_0)S + h Σ_{i=0}^{N} c_i Φ_h(t_N, t_i)B̄(t_i)U,

resp.

(13)  R_h(t_0) = S,
(14)  R_h(t_{i+1}) = Φ_h(t_{i+1}, t_i)R_h(t_i) + h c_i Φ_h(t_{i+1}, t_i)B̄(t_i)U,  i = 0, . . . , N − 1.

(III) Set-valued Runge-Kutta methods (see e.g. [24,43,41,7]): we can approximate (4)-(5) by set-valued analogues of Runge-Kutta schemes. The discrete reachable set is computed recursively with the starting condition (13), for the set-valued Euler scheme (see e.g. [24]) as

R_h(t_{i+1}) = (I_n + hĀ(t_i))R_h(t_i) + hB̄(t_i)U,

and for the set-valued Heun's scheme with piecewise constant selections (see e.g. [41]) as

R_h(t_{i+1}) = ∪_{u∈U} { y + (h/2)[f(t_i, y, u) + f(t_{i+1}, y + h f(t_i, y, u), u)] : y ∈ R_h(t_i) },  f(t, y, u) := Ā(t)y + B̄(t)u.

An example of R_h for different choices of the numerical method is shown below.
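For the set-valued Euler scheme, support functions can be propagated exactly along the recursion, since δ*(l, (I_n + hĀ)X + hB̄U) = δ*((I_n + hĀ)ᵀl, X) + h δ*(B̄ᵀl, U). A minimal sketch (our illustration; the support functions of S and U are passed in as placeholder callables):

```python
import numpy as np

def euler_support(l, A, B, h, N, supp_S, supp_U):
    """delta*(l, R_h(t_N)) for the set-valued Euler scheme
    R_h(t_{i+1}) = (I + h A) R_h(t_i) + h B U,  R_h(t_0) = S,
    obtained by transporting the direction l backwards through (I + h A)^T."""
    val = 0.0
    for _ in range(N):
        val += h * supp_U(B.T @ l)          # contribution of h B U at this level
        l = (np.eye(len(l)) + h * A).T @ l  # direction for the previous level
    return val + supp_S(l)                  # contribution of the initial set S

# sanity example: A = 0, B = I, S = {0}, U = unit ball, so R_h(t_N) = (N h) B_1(0)
val = euler_support(np.array([1.0, 0.0]), np.zeros((2, 2)), np.eye(2),
                    0.1, 10, lambda v: 0.0, np.linalg.norm)
```

In the sanity example the scheme is exact: the support value in any unit direction equals the radius Nh = 1.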
The purpose of this paper is not to focus on the set-valued numerical schemes themselves, but on the approximative construction of T(·). Thus, without loss of generality, we mainly utilize the scheme described in (II) to present our ideas from now on. In practice, there are several strategies in control problems to discretize the set of controls U, see e.g. [8]. Here we choose a piecewise constant approximation U_h for the sake of simplicity, which corresponds to using only one selection on each subinterval [t_i, t_{i+1}] in the corresponding set-valued quadrature method. Depending on the choice of the method, we can find a subset U_h of the admissible controls, usually the piecewise constant controls, so that in the case (II), for instance, we have

R_h(t_N) = {y ∈ R^n : y is the end point of a discrete trajectory starting in some y_0 ∈ S with a piecewise constant grid control u_h with u_h(t_i) = u_i ∈ U, i = 0, . . . , N}.

For y ∈ R^n, y_0 ∈ S and such a grid control u_h, let t_h(y_0, y, u_h) be the first grid time at which the corresponding discrete trajectory starting in y_0 reaches y. If there does not exist a grid control u_h which reaches y from y_0 by the corresponding discrete trajectory, we set t_h(y_0, y, u_h) = ∞. Then the discrete minimum time function T_h(·) is defined as

T_h(y) := inf{t_h(y_0, y, u_h) : y_0 ∈ S, u_h ∈ U_h}.

Notice that the definitions of R_h and t_h for the remaining cases (I) and (III) can be derived in a similar way by using the corresponding expressions of y.
Proposition 3. In all of the constructions (I)-(III) described above, R_h(t_N) is a convex, compact and nonempty set.

Proof. The key idea of the proof is to employ the linearity of (4) in conjunction with the convexity of S, U and of the arithmetic operations for convex sets. It follows analogously to the proof of [8, Proposition 3.3].
Theorem 3.1. Consider the linear control problem (4)-(5). Assume that the set-valued quadrature method and the ODE solver have the same order p. Furthermore, assume that Ā(·) and δ*(l, Φ(t_f, ·)B̄(·)U) have absolutely continuous (p−2)-nd derivatives and that the (p−1)-st derivatives are of bounded variation uniformly with respect to all l ∈ S^{n−1}. Then

d_H(R(t_f), R_h(t_f)) ≤ C h^p,

where C is a non-negative constant.
The next subsection is devoted to the full discretization of the reachable set, i.e., we consider the space discretization as well. Since we will work with supporting points, we do this implicitly by discretizing the set S n−1 of normed directions. This error will be adapted to the error of the set-valued numerical scheme caused by the time discretization to preserve its order of convergence with respect to time step size as stated in Theorem 3.1. Then we will describe in detail the procedure to construct the graph of the minimum time function based on the approximation of the reachable sets. We will also provide the corresponding overall error estimate.
3.2. Implementation and error estimate of the reachable set approximation. For a particular problem, according to its smoothness in an appropriate sense, we are first able to choose a difference method with a suitable order, say O(h^p) for some p > 0, to solve (6) numerically in an efficient way, for instance the Euler scheme, Heun's scheme or a Runge-Kutta scheme. Then we approximate Aumann's integral in (7) by a quadrature formula of the same order, for instance a Riemann sum, the trapezoidal rule or Simpson's rule, to obtain a discrete scheme of global order O(h^p).
We implement the set arithmetic operations in (14) only approximately, as indicated in [8, Proposition 3.4], and work with finitely many normed directions chosen such that the additional spatial discretization error remains ≤ Ch^p, in order to preserve the order of the considered scheme approximating the reachable set.
It is well-known that convex sets can be described via support functions or supporting points in all directions. With this approximation we generate a finite set of supporting points of R_h(·) and, with its convex hull, the fully discrete reachable set R_h∆(·). To this end, we also discretize the target set S and the control set U appearing in (13) and (14), e.g., along the lines of [8, Proposition 3.4], i.e., S_∆, U_∆ are polytopes approximating S resp. U in the Hausdorff distance with error term O(h^p). Let T_h∆(·) be the fully discrete version of T(·) (it will be defined later in detail). Our aim is to construct the graph of T_h∆(·) up to a given time t_f based on the knowledge of the reachable set approximation. We divide [t_0, t_f] into K subintervals, each of length ∆t := (t_f − t_0)/K = Nh, so that t_f − t_0 = KNh, and compute subsequently the sets of supporting points Y_h∆(∆t), . . . , Y_h∆(t_f) by the algorithm described below, yielding fully discrete reachable sets R_h∆(i∆t), i = 1, . . . , K. Here K determines how many sublevel sets of the graph of T_h∆(·) we would like to have, and h is the step size of the numerical scheme computing Y_h∆(i∆t) starting from Y_h∆((i−1)∆t). Due to (7) and (8), each sublevel set of T(·) can be described by its boundary points only, i.e., by the supporting points of the reachable sets at the corresponding time. For the discrete setting, at each step we determine the value of T_h∆(x) for x ∈ Y_h∆(·). Therefore, we only store this information for constructing the graph of T_h∆(·) on the subset [t_0, t_f] of its range.
step 1: Set i = 0, compute the set of supporting points Y_h∆(t_0) of S_∆ in the directions l_k ∈ S_R^∆ and set T_h∆(x) := t_0 for x ∈ Y_h∆(t_0).
step 2: Starting from R_h∆(t_i), compute R_h∆(t_{i+1}) by N steps of the chosen set-valued method with step size h.
step 3: Compute the set of supporting points Y_h∆(t_{i+1}) := ∪_{l_k∈S_R^∆} {y(l_k, R_h∆(t_{i+1}))} and set T_h∆(x) := t_{i+1} for all x ∈ Y_h∆(t_{i+1}), where y(l_k, R_h∆(t_{i+1})) is an arbitrary element of Y(l_k, R_h∆(t_{i+1})), and set R_h∆(t_{i+1}) := co(Y_h∆(t_{i+1})).
step 4: If i < K − 1, set i := i + 1 and go back to step 2. Otherwise, go to step 5.
step 5: Construct the graph of T_h∆(·) by (piecewise) linear interpolation based on the values t_i at the points Y_h∆(t_i), i = 0, . . . , K.
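To make the loop concrete, the following sketch (our simplified illustration, not the authors' implementation) runs the algorithm for ẏ = u, ‖u‖ ≤ 1, S = {0}, where R(i∆t) is the ball of radius i∆t and hence T(x) = ‖x‖; it stores each computed supporting point together with its assigned time label:

```python
import numpy as np

def build_graph(K, N, h, M=16):
    """Supporting points of the fully discrete reachable sets R(i*dt), dt = N*h,
    with their time labels, for y' = u, ||u|| <= 1, S = {0} (so T(x) = ||x||)."""
    directions = [np.array([np.cos(a), np.sin(a)])
                  for a in np.linspace(0.0, 2 * np.pi, M, endpoint=False)]
    graph = []                           # pairs (supporting point, time label t_i)
    ys = [np.zeros(2) for _ in range(M)]  # step 1: supporting points of S = {0}
    for i in range(1, K + 1):            # time levels t_i = i * dt
        for k, l in enumerate(directions):
            for _ in range(N):           # step 2: N Euler steps of size h
                ys[k] = ys[k] + h * l    # maximizer over the unit ball: u* = l
            graph.append((ys[k].copy(), i * N * h))  # step 3: label with t_i
    return graph
```

Every stored point then satisfies ‖x‖ = T_h∆(x) up to rounding, reflecting that the scheme is exact for this example.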
The algorithm computes the set of vertices Y_h∆(t_i) of the polytope R_h∆(t_i), which are supporting points in the directions l_k ∈ S_R^∆. The following proposition provides the error estimate between the fully discrete reachable set R_h∆(·) and R(·).
Proposition 4. Let the assumptions of Theorem 3.1 for the set-valued combination method (12) in (II) be valid, and let the finitely many directions be chosen as described above. Then

(23)  d_H(R(t_i), R_h∆(t_i)) ≤ (C_s + C_∆ + C_f) h^p,

where C_s, C_∆, C_f are some positive constants and t_i = t_0 + i∆t, i = 0, . . . , K.
Proof. The proof can be found in [31, Proposition 7.2.5].
Remark 4. If S is a singleton, we do not need to discretize the target set. The overall error estimate in (23) even improves in this case, since d_H(R_h∆(t_0), R_h(t_0)) = 0.
As we can see in this subsection the convexity of the reachable set plays a vital role. Therefore, this approach can only be extended to special nonlinear control systems with convex reachable sets.
In the following subsection, we provide the error estimate of T_h∆(·) obtained by the indicated approach under Assumptions 2.3, the regularity of T(·) and the properties of the numerical approximation.
3.3. Error estimate of the minimum time function. After computing the fully discrete reachable sets in Subsection 3.2, we obtain the values

T_h∆(x) = t_i for x ∈ Y_h∆(t_i), i = 1, . . . , K,

together with the initial condition

T_h∆(x) = t_0 for x ∈ S_∆.

The task is now to define a suitable value of T_h∆(x) in the computational domain if x is neither a boundary point of the computed reachable sets nor lies inside the target set. First we construct a simplicial triangulation {Γ_j}_{j=1,...,M} over the set Ω \ int(S) with grid nodes in ∪_{i=0,...,K} Y_h∆(t_i) such that
• the intersection of two different simplices is either empty or a common face,
• all supporting points in the sets {Y_h∆(t_i)}_{i=0,...,K} are vertices of some simplex,
• all vertices of each simplex belong either to the fully discrete reachable set R_h∆(t_i) or to R_h∆(t_{i+1}) for some i = 0, 1, . . . , K − 1.
For a triangulation as in Figure 1, we introduce the maximal diameter of the simplices as

∆x := max_{j=1,...,M} diam(Γ_j).

Assume that x is neither a boundary point of one of the computed discrete reachable sets {R_h∆(t_i)}_{i=0,...,K} nor an element of the target set S, and let Γ_j be a simplex containing x. Then

(25)  T_h∆(x) := Σ_{ν=1}^{n+1} λ_ν T_h∆(x_ν),

where x = Σ_{ν=1}^{n+1} λ_ν x_ν, Σ_{ν=1}^{n+1} λ_ν = 1 with λ_ν ≥ 0 and {x_ν}_{ν=1,...,n+1} being the vertices of Γ_j.
If x lies in the interior of Γ_j, the index j of this simplex is unique. Otherwise, x lies on a common face of two or more simplices due to our assumptions on the simplicial triangulation, and (25) is well-defined. Let i be the index such that the vertices of Γ_j belong to R_h∆(t_i) ∪ R_h∆(t_{i+1}). Then t_i ≤ T_h∆(x) ≤ t_{i+1}. The latter holds, since the convex combination is bounded below by t_i, and equality to t_i only holds if all vertices with positive coefficient λ_ν lie on the boundary of the reachable set R_h∆(t_i).
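The interpolation (25) amounts to solving a small linear system for the barycentric coordinates λ_ν; a minimal sketch for one simplex (our illustration, with hypothetical names):

```python
import numpy as np

def interpolate_T(x, verts, tvals):
    """Piecewise linear value sum(lambda_nu * tvals[nu]), where the barycentric
    coordinates lambda of x in the simplex co(verts) solve
    sum(lambda_nu * verts[nu]) = x and sum(lambda_nu) = 1."""
    n1 = len(verts)                                  # n + 1 vertices in R^n
    A = np.vstack([np.array(verts, dtype=float).T,   # n equations: combination = x
                   np.ones(n1)])                     # 1 equation: coefficients sum to 1
    b = np.append(np.asarray(x, dtype=float), 1.0)
    lam = np.linalg.solve(A, b)
    assert (lam >= -1e-12).all(), "x must lie inside the simplex"
    return float(lam @ np.asarray(tvals, dtype=float))
```

For the triangle with vertices (0,0), (1,0), (0,1) and time values 0, 1, 1, the midpoint of the bottom edge receives the value 1/2.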
The following theorem is about the error estimate of the minimum time function obtained by this approach.

Theorem 3.3. Assume that T(·) is continuous in R with a non-decreasing modulus of continuity ω(·), i.e.,

|T(x) − T(y)| ≤ ω(‖x − y‖) for all x, y ∈ R.

Let Assumptions 2.3 be fulfilled and assume furthermore that the error estimate

(27)  d_H(R(t_i), R_h∆(t_i)) ≤ C h^p,  i = 0, . . . , K,

holds. Then

(28)  ‖T − T_h∆‖_{∞,Ω} ≤ ω(∆x) + ω(Ch^p) + ∆t,

where ‖·‖_{∞,Ω} is the supremum norm taken over Ω.
Proof. We divide the proof into two cases.
Let Γ_j be a simplex containing x with the set of vertices {x_ν}_{ν=1,...,n+1}. Then

|T(x) − T_h∆(x)| ≤ Σ_{ν=1}^{n+1} λ_ν |T(x) − T(x_ν)| + Σ_{ν=1}^{n+1} λ_ν |T(x_ν) − T_h∆(x_ν)| ≤ ω(∆x) + max_{ν} |T(x_ν) − T_h∆(x_ν)|,

where we applied the continuity of T(·) for the first term and the error estimate (29) of case 1 for the second.
Combining the two cases and noticing that T(x) = T_h∆(x) = t_0 if x ∈ S_∆, we get the asserted estimate (28). The proof is completed.
Remark 5. Theorem 3.1 provides sufficient conditions for set-valued combination methods such that (27) holds; see also e.g. [24] for the set-valued Euler method resp. [41] for Heun's method. If the minimum time function is Hölder continuous with exponent α on Ω, (28) becomes

‖T − T_h∆‖_{∞,Ω} ≤ C((∆x)^α + h^{pα}) + ∆t

for some positive constant C. This inequality shows that the error estimate is improved in comparison with the one obtained in [19] and does not explicitly assume the regularity of optimal solutions as in [14]. One possibility to define the modulus of continuity satisfying the required non-decrease property in Theorem 3.3 is ω(δ) := sup{|T(x) − T(y)| : ‖x − y‖ ≤ δ}. An advantage of the methods of Volterra type studied in [19], which benefit from non-standard selection strategies, is that the discrete reachable sets converge with higher order than 2. The order 2 is an order barrier for set-valued Runge-Kutta methods with piecewise constant controls or with independent choices of controls, since many linear control problems with intervals or boxes as control sets are not regular enough for higher order approximations (see [41]). Moreover, notice that there are many different triangulations based on the same data. Among them, we can always choose one with maximal simplex diameter close to the Hausdorff distance of two consecutive reachable sets by applying standard grid generators.
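The modulus ω(δ) = sup{|T(x) − T(y)| : ‖x − y‖ ≤ δ} can also be estimated empirically on a grid; the sketch below (our illustration, one-dimensional, hypothetical function names) recovers the Hölder-1/2 behaviour of T(x) = √|x|:

```python
import numpy as np

def empirical_modulus(T, xs, delta):
    """sup |T(x) - T(y)| over all grid pairs with |x - y| <= delta."""
    vals = T(xs)
    diff_x = np.abs(xs[:, None] - xs[None, :])   # pairwise |x - y|
    diff_T = np.abs(vals[:, None] - vals[None, :])
    return float(diff_T[diff_x <= delta].max())

# grid with binary step 1/1024 so that distances are exact in floating point
xs = np.linspace(0.0, 1.0, 1025)
omega = empirical_modulus(np.sqrt, xs, 0.25)
```

The supremum is attained at the pair (0, δ), so ω(0.25) = √0.25 = 0.5, matching the Hölder exponent 1/2.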
Proposition 5. Let the conditions of Theorem 3.3 be fulfilled. Furthermore, assume that the step size h is so small that Ch^p in (27) is smaller than ε/3, where ε > 0 is the constant from the separation of consecutive reachable sets in (35). Then

(33)  t_{i−1} ≤ T(x_j) ≤ t_{i+1} for every vertex x_j ∈ ∂R_h∆(t_i),

and

(34)  ‖T − T_h∆‖_{∞,Ω} ≤ 2∆t,

where ‖·‖_{∞,Ω} is the supremum norm taken over Ω.
Proof. For each i = 0, . . . , K−1 we choose a constant M_{i+1} > 0 such that R(t_{i+1}) ⊂ M_{i+1}B_1(0). Since R(t_i) does not intersect the complement of int R(t_{i+1}) intersected with M_{i+1}B_1(0), and both sets are compact, there exists ε > 0 such that

(35)  R(t_i) + εB_1(0) ⊂ int R(t_{i+1}),  i = 0, . . . , K−1.

We will show that an inclusion similar to (35) holds for the discrete reachable sets for small step sizes. If the step size h is so small that Ch^p in (27) is smaller than ε/3, then we have the following inclusions:

(36)  R_h∆(t_i) ⊂ R(t_i) + (ε/3)B_1(0) and R(t_i) ⊂ R_h∆(t_i) + (ε/3)B_1(0),  i = 0, . . . , K.

In order to obtain the estimate, we observe that it suffices to prove
1) t_{i−1} ≤ T(x_j) ≤ t_{i+1} for every vertex x_j ∈ ∂R_h∆(t_i) (with t_{−1} := t_0), and
2) t_{i−2} < T(x) ≤ t_{i+1} for every x ∈ int(R_h∆(t_i)) \ R_h∆(t_{i−1}).
To prove 1), the inequality T(x_j) ≥ t_0 is clear. Assume that T(x_j) < t_{i−1} for some i > 1. Then x_j ∈ R(t_{i−1}). By the estimates (27), (36) and Ch^p < ε/3, it follows that

x_j + εB_1(0) ⊂ R(t_{i−1}) + εB_1(0) ⊂ int R(t_i) ⊂ R_h∆(t_i) + (ε/3)B_1(0),

so that x_j + (2ε/3)B_1(0) ⊂ R_h∆(t_i) by the order cancellation law, i.e., x_j ∈ int R_h∆(t_i), which is a contradiction to the assumption x_j ∈ ∂R_h∆(t_i). Now assume T(x_j) > t_{i+1}. Then x_j ∉ R(t_{i+1}). Furthermore, x_j cannot be an element of R_h∆(t_i), since otherwise a contradiction to x_j ∉ R(t_{i+1}) follows:

x_j ∈ R_h∆(t_i) ⊂ R(t_i) + (ε/3)B_1(0) ⊂ int R(t_{i+1}).

Therefore, x_j ∉ R_h∆(t_i), which contradicts x_j ∈ ∂R_h∆(t_i). Hence, the starting assumption T(x_j) > t_{i+1} must be wrong, which proves T(x_j) ≤ t_{i+1}.
To prove 2), if we assume T(x) ≤ t_{i−2} for some i ≥ 2, then x ∈ R(t_{i−2}) and

x + εB_1(0) ⊂ R(t_{i−2}) + εB_1(0) ⊂ int R(t_{i−1}) ⊂ R_h∆(t_{i−1}) + (ε/3)B_1(0)

by estimate (27), so that x ∈ R_h∆(t_{i−1}). But this contradicts x ∉ R_h∆(t_{i−1}). Therefore, T(x) > t_{i−2}. Assuming T(x) > t_{i+1} for some i < K − 1, then x ∉ R(t_{i+1}). Furthermore, if x is an element of R_h∆(t_i), then

x ∈ R_h∆(t_i) ⊂ R(t_i) + (ε/3)B_1(0) ⊂ int R(t_{i+1}),

which is a contradiction to x ∉ R(t_{i+1}). Therefore, x ∉ R_h∆(t_i), which contradicts x ∈ int(R_h∆(t_i)) \ R_h∆(t_{i−1}). Hence, the starting assumption T(x) > t_{i+1} must be wrong, which proves T(x) ≤ t_{i+1}. Consequently, 1) and 2) are proved. Notice that case 1) means t_{i−1} ≤ T(x_j) ≤ t_{i+1} while T_h∆(x_j) = t_i. Therefore, |T(x_j) − T_h∆(x_j)| ≤ 2∆t for i ≥ 2 (with similar estimates for i = 0, 1).
Altogether, (34) is proved.

4.
Convergence and reconstruction of discrete optimal controls. In this section we first prove the convergence of the normal cones of R_h∆(·) to the ones of the continuous-time reachable set R(·) in an appropriate sense. Using this result we will be able to reconstruct discrete optimal trajectories which reach the target from a set of given points, and we also derive the proof of L^1-convergence of discrete optimal controls. In the following, only convergence under weaker assumptions is proved, not convergence with order 1 as in [1] (see the references therein for the classical field of direct discretization methods). We also restrict ourselves to linear minimum time problems. Some basic notions of nonsmooth and variational analysis which are needed in constructing the controls and proving their convergence can be found in [18,37]. Let A be a subset of R^n and f : A → R ∪ {∞} be a function. The indicator function of A and the epigraph of f are defined as

I_A(x) := 0 for x ∈ A, I_A(x) := ∞ otherwise,    epi(f) := {(x, r) ∈ A × R : r ≥ f(x)}.

Then the epi-convergence of (f_i)_{i∈N} to f is equivalent to the graphical convergence of the subdifferential maps (∂f_i)_{i∈N} to ∂f.
The following theorem plays an important role in this reconstruction and deals with the convergence of the normal cones. If the normal vectors of R_{h∆}(·) converge to the corresponding ones of R(·), the discrete optimal controls can be computed via the discrete Pontryagin maximum principle under suitable assumptions. The remainder of the section deals with the reconstruction of discrete optimal trajectories and the proof of convergence of the optimal controls in the L^1-norm on [t_0, t_i] for k = 1, . . . , i. The continuous-time adjoint equation of (40), written for n-row vectors, and its discrete version, approximated by the same method (see [26, Chap. 5]) as the one used to discretize (40), i.e., (41), can be written for k = i − 1, i − 2, . . . , 0 and j = N, N − 1, . . . , 1, where ζ, ζ_h will be clarified later. By the definition of t_{kj} (see Algorithm 3.2) the index k0 can be replaced by (k − 1)N, so that (43) can be solved in backward time. Here, the end condition will be chosen subject to certain transversality conditions; see the latter reference for more details. Due to well-known arguments (see e.g. [33, Sec. 2.2]) the end point of the time-optimal solution lies on the boundary of the reachable set and the adjoint solution η(·) is an outer normal at this end point. Similarly, this also holds in the discrete case. The following proposition formulates this fact as a discrete version of [33, Sec. 2.2, Theorem 2]. The proof is a translation of the proof of the cited theorem in [33] into the discrete setting; for the sake of clarity, we formulate and prove it in detail. Proof. Assume that {u_{jk}} is a control sequence whose response ends at y_{(i−1)N}. Since R_{h∆}(t_i) is a compact and convex set by construction, there exists a supporting hyperplane γ to R_{h∆}(t_i) at y_{(i−1)N}. Let ζ_h be the outer normal vector of R_{h∆}(t_i) at y_{(i−1)N} and define the nontrivial discrete adjoint response by (43).
Now we compute the inner product ⟨η_{(i−1)N}, y_{(i−1)N}⟩ and assume that η_{kj} B u_{kj} < max_{u∈U} {η_{kj} B u} for some indices k, j. Then define another sequence of controls {ũ_{kj}} which attains the maximum at these indices, and let ỹ_{(i−1)N} be the end point of the discrete trajectory following {ũ_{kj}}. A direct computation then contradicts the construction of η_{(i−1)N} = ζ_h as an outer normal vector of R_{h∆}(t_i) at y_{(i−1)N}. Therefore, η_{kj} B u_{kj} = max_{u∈U} {η_{kj} B u}. Conversely, assume that for some nontrivial discrete adjoint response the maximum condition holds for all indices k = 0, . . . , i − 1, j = 0, . . . , N. We will show that the end point y_{(i−1)N} of the corresponding trajectory {y_{kj}} lies on the boundary of R_{h∆}(t_i) and not in its interior. Suppose, by contradiction, that y_{(i−1)N} lies in the interior of R_{h∆}(t_i), and let ỹ_{(i−1)N} be a point of R_{h∆}(t_i) reached by a sequence of controls {ũ_{kj}} such that (45) holds. Our assumption (44) implies the maximum condition for all k, j. As above, due to (46), we obtain a contradiction to (45). Consequently, y_{(i−1)N} ∈ ∂R_{h∆}(t_i). Motivated by the outer normality of the adjoints in continuous resp. discrete time and the maximum conditions, we define the optimal controls û(t), û_h(t) with û_{kj} = sign(η_{kj} B), k = 0, . . . , i − 1, j = 0, . . . , N, where sign(·) denotes the componentwise signum function on R^m (applied to each component µ = 1, . . . , m).
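The bang-bang structure û_{kj} = sign(η_{kj}B) can be sketched numerically. The following is a minimal illustration for the box control set U = [−1, 1]^m; the system matrices, step size and terminal outer normal below are illustrative choices, not the paper's data:

```python
import numpy as np

def bang_bang_controls(eta, B):
    """Discrete bang-bang controls u_k = sign(eta_k B) for the box U = [-1, 1]^m.

    eta : array of shape (K, n), discrete adjoint states (row vectors);
    B   : array of shape (n, m).
    Each control maximizes <eta_k B, u> over u in [-1, 1]^m componentwise.
    """
    return np.sign(eta @ B)          # componentwise signum, shape (K, m)

# toy usage for the double integrator A = [[0, 1], [0, 0]], B = [[0], [1]]:
# the Euler-scheme adjoint iterates backwards via eta_k = eta_{k+1} (I + h A)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
h, K = 0.1, 10
eta = np.zeros((K, 2))
eta[-1] = np.array([1.0, 0.5])       # terminal outer normal (arbitrary choice)
for k in range(K - 2, -1, -1):
    eta[k] = eta[k + 1] @ (np.eye(2) + h * A)
u = bang_bang_controls(eta, B)       # piecewise constant control with values +/-1
```

For this particular terminal normal the adjoint component η_{k}B stays positive for all k, so the reconstructed control is constant; sign changes of η(·)B produce the switching times of the bang-bang control.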

5. Numerical tests.
The following examples serve as a collection of academic test cases for calculating the minimum time function for several, mainly linear control problems which were previously discussed in the literature. The examples also illustrate the error behavior of our proposed approach.
The space discretization follows the approach presented in Subsection 3.2 and uses supporting points in N_R normed directions. The expanding property of the reachable sets, i.e., R(t_1) ⊂ R(t_2) for all t_0 ≤ t_1 < t_2 ≤ t_f, can be verified in several ways. First, from the numerical calculations we can observe this property in the shown figures for the fully discrete reachable sets. Secondly, we can use the available analytical formula for the minimum time function resp. the reachable sets, or check the Kalman rank condition rank(B, AB) = 2 for time-invariant systems if the target is the origin (see [29, Theorems 17.2 and 17.3]).
The control sets in the linear examples are either one- or two-dimensional polytopes (a segment or a square) or balls and are varied to study different regularity, allowing high or low order of convergence for the underlying set-valued quadrature method. In all linear examples, we apply set-valued combination methods of order 1 and 2 (the set-valued Riemann sum combined with Euler's method resp. the set-valued trapezoidal rule combined with Heun's method).
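The order-1 combination method (set-valued Riemann sum plus Euler's method) can be sketched via support functions: for the iteration R_{k+1} = (I + hA)R_k + hU one uses δ*(l, MC + hU) = δ*(M^T l, C) + h δ*(l, U). The helper names and the concrete sets S, U below are illustrative, not the paper's implementation:

```python
import numpy as np

def euler_reachable_support(A, supp_S, supp_U, h, K, l):
    """Return delta*(l, R_h(t_K)) for x' = Ax + u, u in U, starting set S,
    computed for the set-valued Euler iteration R_{k+1} = (I + hA) R_k + h U."""
    M = np.eye(A.shape[0]) + h * A
    val, d = 0.0, np.asarray(l, dtype=float)
    for _ in range(K):
        val += h * supp_U(d)      # contribution of the control set at this step
        d = M.T @ d               # propagate the direction: l -> M^T l
    return val + supp_S(d)        # contribution of the starting set S

# sanity check: with A = 0, S = {0}, U = B_1(0) the reachable set is the ball
# of radius K*h, so the support value in any unit direction equals K*h
val = euler_reachable_support(np.zeros((2, 2)),
                              lambda d: 0.0,
                              lambda d: float(np.linalg.norm(d)),
                              0.1, 10, [1.0, 0.0])
```

Evaluating the recursion in N_R normed directions yields the supporting points and hence the polytopal approximation of the fully discrete reachable set.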
We start with an example having a Lipschitz continuous minimum time function and verify the error estimate in Theorem 3.3. Observe that the numerical error here stems solely from the spatial discretization of the target set or control set.
Example 5.1. Consider the control dynamics studied in [15,28]. We consider either a small ball or the origin as target set S. Since the fundamental solution matrix is the identity, R_h(t_N) = S − NhU = S + (t_N − t_0)U for the symmetric control sets considered here, and the error is only due to the space discretizations S_∆ ≈ S, U_∆ ≈ U and does not depend on h (see Table 1). The error would be the same for finer step sizes h and ∆t in time or if a higher-order method were applied. Note that the error for the origin as target set (no space discretization error) is of the magnitude of the rounding errors of floating point numbers. We choose t_f = 1, K = 10 and N = 2 and the set-valued Riemann sum combined with Euler's method for the computations. It is easy to check that the minimum time function is Lipschitz continuous, since one of the equivalent Petrov conditions in [36], [12, Chap. IV, Theorem 1.12] holds with U = B_1(0) or [−1, 1]^2. Moreover, the support function with respect to the time-reversed dynamics (51) is constant with respect to the time t, so it is trivially arbitrarily often continuously differentiable with respect to t with derivatives bounded uniformly for all l ∈ S^{n−1}.
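In this drift-free situation the fully discrete reachable set is the Minkowski sum S + (t_N − t_0)U, and its supporting point in a direction l is just the sum of the supporting points of S and of (t_N − t_0)U. A minimal sketch for two centered balls (the radii are illustrative):

```python
import numpy as np

def supporting_point_sum(l, r_S, t):
    """Supporting point of B_{r_S}(0) + t*B_1(0) in direction l.
    For centered balls this is simply (r_S + t) * l / |l|."""
    l = np.asarray(l, dtype=float)
    l = l / np.linalg.norm(l)
    return (r_S + t) * l          # y(l, S) + t * y(l, U)

# supporting points in 50 normed directions, S = B_{0.05}(0), t = 1:
angles = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
pts = np.array([supporting_point_sum([np.cos(a), np.sin(a)], 0.05, 1.0)
                for a in angles])
# all points lie on the circle of radius 1.05, independent of the step size h
```

This illustrates why refining h cannot reduce the error here: only the number of directions N_R (the space discretization) matters.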

ROBERT BAIER AND THUY T. T. LE
In Fig. 2 the minimum time function is plotted. Since it is zero in the interior of the target, one has at most Lipschitz continuity at the boundary of S. In Fig. 3 the minimum time function is plotted for the same control set as in Fig. 2 (right), but this time the target set is the origin and not a small ball. We now study well-known dynamics, the double integrator and the harmonic oscillator, in which the control set is one-dimensional. The classical rocket car example with Hölder continuous minimum time function was already computed by the Hamilton-Jacobi-Bellman approach in [25, Test 1] and [19,28], where the numerical calculations are carried out by enlarging the target (the origin) by a small ball.
We consider either the small ball B_{0.05}(0) or the origin as target set S. The minimum time function is 1/2-Hölder continuous for the first choice of S (see [34,19]), and the support function with respect to the time-reversed dynamics is only absolutely continuous with respect to τ for some directions l ∈ S^1 with l_1 = 0. Hence, we can expect that the convergence order for the set-valued quadrature method is at most 2. We fix t_f = 1 as the maximal computed value for the minimum time function and N = 5.
In Table 2 the error estimates for two set-valued combination methods are compared (order 1 versus order 2). Since the minimum time function is only 1/2-Hölder continuous, we expect an overall convergence order of 1/2 resp. 1. A least squares approximation of the error term by the function Ch^p reveals C = 1.37606, p = 0.4940 for the Euler scheme combined with the set-valued Riemann sum resp. C = 22.18877, p = 1.4633 (if p = 1 is fixed, then C = 2.62796) for Heun's method combined with the set-valued trapezoidal rule. Hence, the approximated error term is close to the one expected by Theorem 3.3 and Remark 5. Very similar results are obtained with the Runge-Kutta methods of order 1 and 2 in Table 3, in which the set-valued Euler method is slightly better than the combination method of order 1 in Table 2, and the set-valued Heun's method coincides with the combination method of order 2, since both methods use the same approximations of the given dynamics. Here we have chosen to double the number of directions N_R each time the step size is halved, which is suitable for a first-order method. For a second-order method we should have multiplied N_R by 4 instead. From this point of view it is not surprising that there is no improvement of the error in the fifth row for step size h = 0.0025.
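The empirical order p and constant C in the error model err(h) ≈ Ch^p can be obtained by a least squares fit in log-log coordinates, as done for Table 2. The error data below is synthetic (exact 1/2-order data), not the paper's measured values:

```python
import numpy as np

def fit_order(step_sizes, errors):
    """Fit err(h) = C * h^p by linear least squares on log(err) = p*log(h) + log(C)."""
    slope, intercept = np.polyfit(np.log(step_sizes), np.log(errors), 1)
    return np.exp(intercept), slope      # (C, p)

hs = np.array([0.04, 0.02, 0.01, 0.005])
errs = 1.4 * np.sqrt(hs)                 # synthetic data with err(h) = 1.4 * h^(1/2)
C, p = fit_order(hs, errs)               # recovers C close to 1.4, p close to 0.5
```

Fixing p (e.g. p = 1, as in the table) reduces the fit to a one-parameter least squares problem for log C alone.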
As in Example 5.1 we can consider the dynamics (52) with the origin as target (see the minimum time function in Fig. 4 (left)). In this case, the numerical computation by PDE approaches, i.e., the solution of the associated Hamilton-Jacobi-Bellman equation (see e.g. [25]), requires the replacement of the target point 0 by a small ball B_ε(0) for suitable ε > 0. This replacement clearly increases the error of the calculation (compare the minimum time function in Fig. 4 for ε = 0.05). However, our proposed approach works regardless of whether S is a two-dimensional set or a singleton.
Since the Kalman rank condition rank(B, AB) = 2 holds, the minimum time function T(·) is also continuous. The plot of T(x) for the harmonic oscillator with the origin as target, t_f = 6, N_R = 100, N = 5 and K = 40, is shown in Fig. 5. According to Section 4 we construct open-loop time-optimal controls for the discrete problem with target set S = {0} by Euler's method. In Fig. 6 the corresponding discrete open-loop time-optimal trajectories for Examples 5.2a) (left) and b) (right) are depicted.
Figure 6. Approximate optimal trajectories for Example 5.2a) resp. b).
The following two examples exhibit smoothness of the support functions and would even allow for methods of order higher than two with respect to time discretization. The first one has a special linear dynamics and is smooth, although the control set is a unit square. Example 5.3. In the third linear two-dimensional example the reachable set for various end times t is always a polytope with four vertices and coinciding outer normals at its faces. Therefore, it is a smooth example which would even justify the use of methods of order higher than 2 to compute the reachable sets (see [9,11]). It is similar to Example 5.6, but has an additional column in matrix B and is a variant of [9, Example 2].
Again, we fix t f = 1 as maximal time value and compute the result with N = 2. We choose N R = 50 normed directions, since the reachable set has only four different vertices.
(ẋ_1, ẋ_2)^T = ( 0 −1 ; 2 3 ) (x_1, x_2)^T + B(t)(u_1, u_2)^T, where (u_1, u_2) ∈ [−1, 1]^2. Let the origin be the target set S, and denote by the fundamental solution matrix of the time-reversed dynamics of (54) the matrix used below. This is a smooth example in the sense that the support function for the time-reversed set-valued dynamics of (54) is smooth with respect to τ uniformly in l ∈ S^1. The analytical formula for the (time-continuous) minimum time function is T((x_1, x_2)^T) = max{t : t ≥ 0 is the solution of one of the defining equations}. Similarly, another variant of this example with a one-dimensional control can be constructed by deleting the second column of matrix B. The resulting (discrete and continuous-time) reachable sets would be line segments, and the algorithm would compute the fully discrete minimum time function on this one-dimensional subspace. The absence of interior points in the reachable sets is not problematic for our approach, in contrast to common approaches based on the Hamilton-Jacobi-Bellman equation, as shown in Example 5.6. The next example involves a ball as control set and leads naturally to a smooth problem.
Example 5.4. The following smooth example is very similar to the previous one. It is given in [8, Example 4.2], [11, Example 4.4] with the same system matrix ( 0 −1 ; 2 3 ) and uses a ball as control set. This is a less academic example than Example 5.3 (in which the matrix B(t) was carefully chosen), since a ball as control set often allows the use of higher-order methods for the computation of reachable sets (see [11,7]).
Here, no analytic formula for the minimum time function is available, so we can study the minimum time function only numerically (see Fig. 7 (right)). Obviously, the support function is again smooth with respect to τ uniformly in all normed directions l.

5.2.
A nonlinear example. The following special bilinear example with convex reachable sets gives hope that our approach extends to some classes of nonlinear dynamics. We approximate the time-reversed dynamics of Example 5.5 by Euler's and Heun's method.
Example 5.5. The nonlinear dynamics is one of the examples in [28].
With this dynamics, after computing the true minimum time function, we observe that T(·) is Lipschitz continuous and that its sublevel sets, which are exactly the reachable sets at the corresponding times, satisfy the required properties. The target set S is B_{0.25}(0).
We fix t f = 1 as maximal computed value for the minimum time function and N = 2. Estimating the error term Ch p in Table 5 by least squares approximation yields the values C = 0.3293133, p = 1.8091 for the set-valued Euler method and C = 0.5815318, p = 1.9117 for the Heun method.
The unexpectedly good behavior of Euler's method stems from the specific behavior of the trajectories: although the distance of the end point of the Euler iterates to the true end point is halved when the step size is halved, the distance of the Euler iterates to the boundary of the true reachable set shrinks almost by a factor of 4 due to the specific tangential approximation. In Fig. 8 the Euler iterates are marked with * in red, while Heun's iterates are shown with • marks in blue. The symbol • also marks the end point of the corresponding true solution.
Observe that the dynamics originates from the following system in polar coordinates: ṙ = ru, φ̇ = 1, u ∈ [−1, 1]. Hence, the reachable set grows exponentially with increasing time. The minimum time function for this example is shown in Fig. 8.
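The exponential growth is easy to see in the polar form: for ṙ = ru with u ≡ 1 the maximal radius obeys r(t) = r_0 e^t. A quick explicit-Euler check (step size and horizon are illustrative):

```python
import numpy as np

def euler_max_radius(r0, h, steps):
    """Explicit Euler iteration for r' = r (the extremal choice u = 1 in r' = r*u)."""
    r = r0
    for _ in range(steps):
        r += h * r                # r_{k+1} = (1 + h) * r_k
    return r

r_end = euler_max_radius(1.0, 0.001, 1000)   # approximates exp(1) at t = 1
```

The Euler iterate (1 + h)^{t/h} converges to e^t as h tends to 0, consistent with the exponentially growing reachable sets observed in the figures.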

5.3.
Non-strict expanding property of reachable sets. The next example violates the continuity of the minimum time function (the dynamics is not normal). Nevertheless, the proposed Algorithm 3.2 is able to provide a good approximation of the discontinuous minimum time function.
This example also shows that boundary points of the reachable set can no longer be characterized via time-minimal points (compare Propositions 1 and 2) if the strictly expanding property of (the union of) reachable sets is not satisfied. Example 5.6. Consider a dynamics for which the Kalman rank condition yields rank(B, AB) = 1 < 2, so that the normality of the system is not fulfilled. Hence, the reachable set is an increasing line segment (and always part of the same line in R^2, i.e., it is one-dimensional so that its interior is empty). Clearly, both inclusions in (10) hold, but the strictly expanding property of R(·) on [t_0, t_f] in Assumptions 2.3(iv) and (iv)' fails, both for R(t) in Assumption (iv) and for R_≤(t) in Assumption (iv)'.
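The Kalman rank test used above can be checked numerically. The matrices below are illustrative stand-ins (the paper's A, B for Example 5.6 are not reproduced here), chosen so that B and AB are linearly dependent:

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the Kalman matrix (B, AB) for a planar system (n = 2)."""
    return int(np.linalg.matrix_rank(np.hstack([B, A @ B])))

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # illustrative, not the paper's matrix
B = np.array([[1.0], [0.0]])             # AB is a multiple of B here
rank = kalman_rank(A, B)                 # 1 < 2: normality fails; reachable sets
                                         # stay in a one-dimensional subspace
```

For n-dimensional systems the same test uses the full Kalman matrix (B, AB, ..., A^{n−1}B).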
The strict inclusion only holds in the relative interior. By [12, Sec. IV.1, Proposition 1.2] the minimum time function is discontinuous (it has infinite values outside the line segment).
The plots of the two continuous-time reachable sets R(t) for t = 1, 2 together with the true minimum time function (in red) and its discrete analogue (in green) obtained by the Euler scheme with h = 0.025 are shown in Fig. 9: the two red points are the end points of the line segment for the smaller time t_1 = 1, the two blue points are the end points of the line segment for the larger time t_2 = 2 > t_1. The blue line segment is the reachable set for time t_2 (also the reachable set up to time t_2). All four points lie on the boundary of the blue set R(t_2), but the minimum time to reach the two blue points is t_2, while the minimum time to reach the two red points is t_1 < t_2, which contradicts Proposition 2.

5.4.
Problematic examples. The first two examples show linear systems with hidden stability properties so that the discrete reachable sets converge to a bounded convex set as time goes to infinity (or becomes large enough in numerical experiments). For larger end times the numerical calculation gets more demanding, since the step size must be chosen small enough according to Proposition 5. The remaining part of the subsection contains examples that violate Assumptions 2.3(iv) and (iv)' from Proposition 1. The examples demonstrate that a target or a control set not containing the origin (as a relative interior point) may lead to non-monotone behavior of the (union of) reachable sets. In all of these examples the union of reachable sets is no longer convex.
Example 5.7. We consider a time-dependent linear dynamics whose reachable sets converge towards a final, bounded, convex set due to the scaling factor 1/t^2 in the matrix B(t); see Fig. 10 (left). From a formal point of view the strict expanding condition (32) in Proposition 5 is satisfied, but the positive number ε tends to zero for increasing end time. In practice we would stop the calculations once the Hausdorff distance of two consecutive discrete reachable sets falls below a certain threshold. Example 5.8. Here the system matrix Ā has the negative eigenvalues −1 and −2. Hence, the reachable sets converge towards a final, bounded, convex set, see Fig. 10 (right). We experience the same numerical problems as in Example 5.7.
Example 5.9. For this dynamics, in case a) the reachable sets for a given end time are always balls around the origin (see Fig. 11 (left)), provided the target set is chosen as the origin. In case b) the point (2, 2) is considered as target set, and Fig. 11 (right) shows that the union of reachable sets is no longer convex. Finally, consider the double integrator ẋ_1 = x_2, ẋ_2 = u, u ∈ U.
In the first case, let S = {0}, U = [0, 1]. From the numerical calculations we observe that R(t), R_≤(t) are still convex and satisfy (10) in Remark 2, but violate the strictly expanding property (59), as shown in Fig. 12 (left). In the other case, U = [1, 2] is chosen. The convex reachable set R(t) is not only enlarging but also moving, which results in the nonconvexity of R_≤(t). Moreover, neither (10) in Remark 2 nor (59) is fulfilled in this example, as depicted in Fig. 12 (right).
6. Conclusions. Although the underlying set-valued method approximating reachable sets in linear control problems is very efficient, the numerical implementation is only a first realization and can still be improved considerably. In particular, step 3 in Algorithm 3.2 can be computed more efficiently than in our test implementation. Furthermore, higher-order methods like the set-valued Simpson's rule combined with the Runge-Kutta(4) method are an interesting option in examples where the underlying reachable sets can be computed with higher order of convergence than