Remarks on the convergence of an algorithm for curvature-dependent motions of hypersurfaces

We consider a threshold-type algorithm for curvature-dependent motions (CDM for short) of hypersurfaces. This algorithm was numerically studied by Kimura–Notsu [13], Esedoḡlu–Ruuth–Tsai [7] and Mohammad–Švadlenka [16], where the signed distance function is used as the level set function for CDM. The convergence of this algorithm and its optimal rate were studied in Ishii–Kimura [12]. In this paper we give an approach to the optimal rate of convergence to the smooth and compact CDM that differs from the one in [12]. As for the optimality, we give a more precise estimate than that in [12].

1. Introduction. In this paper we study the convergence of a threshold-type algorithm for curvature-dependent motions (CDM for short) of hypersurfaces. This algorithm was introduced and numerically studied by Kimura–Notsu [13].
Let {Γ(t)}_{0≤t<T_0} be a family of compact hypersurfaces in R^N. We say that this family is a CDM if Γ(t) moves by the following equation:

(1) V = κ + ⟨b, n⟩ + g on Γ(t), 0 < t < T_0.

Here T_0 > 0, n = n(t, x) is the inner unit normal vector field on Γ(t), V = V(t, x) is the velocity of Γ(t) in the direction of n, κ = κ(t, x) (= −div n(t, x)) is ((N − 1) times) the mean curvature of Γ(t), b = b(t, x) denotes a given vector field in R^N, g = g(t, x) is a forcing term and ⟨·, ·⟩ denotes the inner product in R^N. As is well known, in the case of b ≡ 0 and g ≡ 0, {Γ(t)}_{t∈[0,T_0)} is called a mean curvature flow (MCF for short). The CDM and MCF are mathematical models of the motion of an interface by its surface tension, a transport term and an external force, and they arise in various fields such as two-phase flow problems, phase transitions, image processing and so on.
From the viewpoint of these applications, many authors have studied numerical methods for CDM. Among them, we treat the following algorithm. Let C_0 be a compact set in R^N and fix a time step h > 0. For k = 0, 1, 2, . . ., set b_k(t, x) := b(t + kh, x) and g_k(t, x) := g(t + kh, x). Let w_0 = w_0(t, x) be a solution of the initial value problem for the linear parabolic equation (2) with k = 0 and the initial condition (3), whose initial data is the signed distance function d(·, C_0); here

(4) d(x, E) := dist(x, ∂E) for x ∈ E, d(x, E) := −dist(x, ∂E) for x ∉ E

is the signed distance function to ∂E, defined for each closed subset E (≠ ∅) of R^N. We then set

(5) C_1 := {x ∈ R^N | w_0(h, x) ≥ 0}.

Let w_1 be the unique solution of (2)–(3) with k = 1. Again we define C_2 as the set in (5) with w_1 replacing w_0. Repeating this process, we obtain a sequence {C_k}_{k=0}^{+∞} of compact subsets of R^N. We set C_h(t) := C_k for t ∈ [kh, (k + 1)h), k = 0, 1, 2, . . .
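To fix ideas, the iteration just described can be sketched in a few lines of code. The following Python fragment is only an illustration, not the finite element scheme of [13]: it performs one threshold step for mean curvature flow (b ≡ 0, g ≡ 0) of a disk, solving the heat equation by explicit finite differences from signed-distance initial data and then thresholding at zero. The grid, the radius R0 and the time step h are ad hoc choices, and the signed distance to a disk is computed analytically instead of by a redistancing routine.

```python
import numpy as np

# Illustrative grid on [-1, 1]^2; C_0 is a disk of radius R0.  All the
# parameters here (n, R0, h) are ad hoc choices for the sketch.
n, R0, h = 101, 0.6, 0.01
xs = np.linspace(-1.0, 1.0, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
r = np.sqrt(X**2 + Y**2)

def one_step(R_cur, h, dx):
    """One iteration of the scheme with b = 0 and g = 0 for a disk of
    radius R_cur: start from the signed distance w(0, .) = R_cur - |x|
    (positive inside), run the heat equation w_t = Laplacian(w) for time h
    by explicit finite differences, then keep the set {w(h, .) >= 0}."""
    w = R_cur - r                       # signed distance to the disk
    dt = 0.2 * dx**2                    # within the explicit stability limit
    n_sub = max(1, int(round(h / dt)))
    dt = h / n_sub
    for _ in range(n_sub):
        w[1:-1, 1:-1] += dt * (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:]
                               + w[1:-1, :-2] - 4.0 * w[1:-1, 1:-1]) / dx**2
    return w >= 0.0                     # the next set C_1 = {w(h, .) >= 0}

C0 = r <= R0
C1 = one_step(R0, h, dx)
area0 = C0.sum() * dx**2                # roughly pi * 0.36
area1 = C1.sum() * dx**2                # slightly smaller: a disk shrinks
```

For a general compact set C_k one would recompute the signed distance numerically at each step; the structure of the iteration is otherwise the same.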
Letting h → 0, we formally obtain a limit flow {C(t)}_{t≥0} of compact sets in R^N and observe that ∂C(t) moves by (1) starting from ∂C(0) = ∂C_0. The above algorithm was numerically studied by Kimura–Notsu [13]. In their paper Kimura and Notsu proposed a fully discrete finite element scheme based on the level set method with the signed distance function. In [13, Section 4] they gave some numerical examples for MCF with a forcing term. Esedoḡlu–Ruuth–Tsai [7] considered various geometric motions, including CDM, MCF with triple junctions and motion by surface diffusion. Mohammad–Švadlenka [16] extended these approaches to a vector-valued setting for the numerical computation of multiphase problems. On the other hand, our algorithm can be regarded as a variant of the Bence–Merriman–Osher (BMO for short) algorithm and of that by Chambolle (cf. [2] and [3]). Many authors have studied these algorithms and their generalizations (e.g., [9], [1], [17], [15], [4], [5], [8], etc.). Among them, Vivier [17] and Leoni [15] generalized the BMO algorithm using linear/semilinear parabolic equations and proved the convergence of their schemes to the anisotropic CDMs associated with these equations. Chambolle–Novaga [4], [5] considered an extension of [3] to the anisotropic CDM and showed the convergence to the regular flow. Although our algorithm is quite similar to theirs, it differs in the choice of the initial data and in the fact that (2) has a transport term. In the (generalized) BMO algorithm the initial data is chosen to be sgn*(d(·, C_k)) instead of (3), where sgn*(r) := 1 for r ≥ 0 and sgn*(r) := −1 for r < 0. As for the convergence of our algorithm, Ishii–Kimura [12] proved the convergence to the generalized CDM under the nonfattening assumption (cf. [1, Corollary 1.3], [12, (2.16) and Theorem 5.6], etc.).
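The difference in initialization mentioned above can be made concrete. In the sketch below (illustrative parameters, b ≡ g ≡ 0, explicit finite differences; the BMO-type data is taken, as described above, to be sgn* of the signed distance), one threshold step is run twice on the same disk: once from the signed-distance data of (3) and once from BMO-type data. Both runs shrink the disk; only the initial profile fed to the heat equation differs.

```python
import numpy as np

# Illustrative disk of radius R0 on a grid over [-1, 1]^2.
n, R0, h = 101, 0.6, 0.01
xs = np.linspace(-1.0, 1.0, n)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
r = np.sqrt(X**2 + Y**2)
d0 = R0 - r                       # signed distance data of (3), positive inside

def sgn_star(s):
    """sgn*(s) := 1 for s >= 0, -1 for s < 0 (BMO-type initial data)."""
    return np.where(s >= 0.0, 1.0, -1.0)

def heat_then_threshold(w0, h, dx):
    """Heat flow w_t = Laplacian(w) for time h (explicit finite
    differences), then the thresholded set {w(h, .) >= 0}."""
    w = w0.copy()
    dt = 0.2 * dx**2
    n_sub = max(1, int(round(h / dt)))
    dt = h / n_sub
    for _ in range(n_sub):
        w[1:-1, 1:-1] += dt * (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:]
                               + w[1:-1, :-2] - 4.0 * w[1:-1, 1:-1]) / dx**2
    return w >= 0.0

C_dist = heat_then_threshold(d0, h, dx)           # signed-distance initialization
C_bmo = heat_then_threshold(sgn_star(d0), h, dx)  # BMO-type initialization
# Both one-step sets are again (approximately) disks, slightly smaller
# than C_0; the two runs differ only in the initial profile.
```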
In addition they obtained the rate of convergence to the smooth and compact CDM and showed its optimality with respect to the order of h in the case of a circle evolving by curvature. However, the arguments in [12] are complicated, since the proof of the rate of convergence there is divided into two parts: the convergence itself is shown first, and its rate is then derived by using this convergence, because some uniform local estimates and a Taylor expansion of the solution w_k of (2) were needed for the rate. Moreover, the proofs of the convergence and of its rate are based on cumbersome calculations.
The main purpose of this paper is to give a different approach to the optimal rate of convergence to the smooth and compact CDM. As a result, we are able to prove the convergence of the algorithm and its rate simultaneously, and to obtain the asymptotic behavior as h → 0 of the radius of a circle evolving by the algorithm in terms of that of a circle evolving by curvature. The approach in this paper is simpler and more straightforward than that in [12], and our optimality estimate is more precise than the one in [12].
In deriving the rate of convergence of the algorithm, the basic idea is to construct sub- and supersolutions of (2) on each time interval [kh, (k + 1)h) (k = 0, 1, 2, . . .) which approximate the solution w_k of (2)–(3) near Γ(t). For this purpose we employ the signed distance function to the CDM.

To show the optimality of our estimate we use an explicit solution of the level set equation associated with (1). Let Γ(t) be the sphere of radius √(1 − 2(N − 1)t) centered at the origin. Then Γ(t) evolves by mean curvature starting from the unit sphere. Set ρ(t, r) := 1 − √(r² + 2(N − 1)t). Then we see that Γ(t) = {ρ(t, | · |) = 0} and that ρ(t, x) = ρ(t, |x|) satisfies (7). Note that (7) holds in the sense of (8), where F* and F_* denote the upper and lower semicontinuous envelopes, respectively. The inequality (8) is equivalent to (9). We use (9) to construct sub- and supersolutions of (2) with b ≡ 0 and g ≡ 0 in order to prove the optimality of our estimate.

We have mentioned above that our approach is simpler and more straightforward than that in [12]. As a by-product, we obtain an approximation of the solution w_k of (2)–(3) (resp., of (33)–(34)–(35) in section 4 below) by means of the signed distance function to Γ(t) (resp., the function ρ defined above).

This paper is organized as follows. In section 2 we state our assumptions and briefly recall the smooth and compact CDM and the solution w_k of (2)–(3). In section 3 we derive the rate of convergence of the algorithm to the smooth and compact CDM. Section 4 is devoted to the optimality of our estimate in the case of a circle evolving by curvature.

We use the following notation in this paper: ⟨p, q⟩ := the inner product of p, q ∈ R^N. In addition, K_i and M_j (i, j ∈ N) denote positive constants independent of small h > 0 and of k = 0, 1, 2, . . .
2.2. The signed distance function to the CDM. Let {Γ(t)} 0≤t<T0 be the smooth and compact CDM.

2.3. Auxiliary functions. For any fixed
We remark that v_N is used in section 3 to construct sub- and supersolutions of (2). Note that for each R > 0, the set {v_{N−1}(t, · ; 0) = R} is a sphere evolving by mean curvature which shrinks to a point at t = R²/(2(N − 1)).
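The last claim is consistent with the classical fact that a sphere of radius R in R^N moving by mean curvature has radius √(R² − 2(N − 1)t), since the radius solves R'(t) = −(N − 1)/R(t), and therefore vanishes at t = R²/(2(N − 1)). A minimal check of these formulas (the function names are ours, not notation from the paper):

```python
import math

def extinction_time(R, N):
    """Extinction time of a sphere of radius R in R^N under mean curvature
    flow: the radius solves R'(t) = -(N - 1)/R(t), so R(t)^2 = R^2 - 2(N-1)t
    and the sphere shrinks to a point at t = R^2 / (2(N - 1))."""
    return R**2 / (2 * (N - 1))

def radius_at(t, R, N):
    """Radius at time t, valid for t < extinction_time(R, N)."""
    return math.sqrt(R**2 - 2 * (N - 1) * t)

# Example: the unit circle in R^2 vanishes at t = 1/2, matching the
# function R(t) = sqrt(1 - 2t) used in section 4.
```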
3. Rate of convergence to CDM. In this section we give an alternative proof of Theorem 3.1 below, which was already obtained in [12, Theorem 6.1]. For any given smooth and compact hypersurface Γ_0 ⊂ R^N, there is a unique smooth and compact CDM {Γ(t)}_{0≤t<T_0}, local in time, satisfying Γ(0) = Γ_0 (cf. [10] and [12, Appendix]). Let {C(t)}_{0≤t<T_0} be a family of compact sets in R^N such that instead of (6).
This theorem is derived from the following proposition.

Proposition 3.2. For any ε > 0, there exist K_1 > 0 and t_1 > 0 such that for any h ∈ (0, t_1) and

The proof of this proposition in [12] is complicated because it is based on some uniform estimates of solutions of (2) in N_{ε,30δ} and because we need to show the convergence itself of our algorithm in advance in order to apply them. In this section we adopt an approach different from [12] to show Proposition 3.2. Note that the following arguments derive the convergence and its rate simultaneously.
The basic idea to prove the above proposition is a suitable construction of a sub- and a supersolution of (2) which approximate w_k near Γ(t + kh) for t ∈ [0, h] and k = 0, 1, . . . , [(T_0 − ε)/h] − 1. This is the main difference from [12]. For this purpose, we first carry out this construction in a suitable set M_k. Note that by the smoothness of the CDM, we are able to take t_2 > 0 such that for any h ∈ (0, t_2) and k = 0, 1, . . . ,

ω_k := κ s_k + ⟨Db_k Dd_k, Dd_k⟩ + ⟨Dg_k, Dd_k⟩.

Here α_k is nonnegative and will be chosen later. Lemma 3.3 below shows that d̲_k and d̄_k are, respectively, a classical subsolution and a classical supersolution of (2) in M_k. In Lemmas 3.4 and 3.5 below we extend d̲_k and d̄_k to [0, h) × R^N as a viscosity subsolution and a viscosity supersolution of (2), respectively. We apply the comparison principle for viscosity solutions to estimate w_k near Γ(t + kh) for t ∈ [0, h]. We then use this estimate to prove Proposition 3.2.

Now we construct a classical subsolution of (2).

Lemma 3.3. For large K_2 > 0, d̲_k and d̄_k are, respectively, a classical subsolution and a classical supersolution of (2) in M_k for all k = 0, 1, . . . , [(T_0 − ε)/h] − 1.

Proof. We prove only the subsolution case because the other one is proved similarly. It is seen by (17) that the required inequality holds, where L[w] := w_t − Δw + ⟨b, Dw⟩ and ∥ · ∥_∞ := ∥ · ∥_{L^∞(N_{ε,30δ})}. Hence, taking K_2 > 0 sufficiently large, we see that d̲_k is a classical subsolution of (2) in M_k.
If d H (C h (kh), C(kh)) ≤ α k for some α k ∈ [0, 2δ), then it is seen that Thus the proof is completed.
We extend d̲_k and d̄_k to the domain [kh, (k + 1)h) × R^N as, respectively, a viscosity subsolution and a viscosity supersolution of (2). See [6] and [14] for the definitions and the theory of viscosity solutions.
Let η_{k,i} = η_{k,i}(x) (i = 1, 2, 3) be smooth cut-off functions satisfying suitable conditions, where v_N is given in subsection 2.3.
We show only (27), since (28) can be shown similarly. The relation (27) is obvious if d̲_k(0, x) ≥ 10δ. Hence we may assume that 3δ ≤ d̲_k(0, ·) ≤ 10δ. Let x ∈ {d̲_k(0, ·) ≥ 3δ} and set d̲_k(0, x) = αδ for some α ∈ [3, 10]. Then it is seen that for all y ∈ {d̲_k(0, ·) = 10δ} Thus we get (27). Combining these estimates with Lemma 3.4, we get w̲_k(0, ·) ≤ d̲_k(0, ·) ≤ d(·, C_h(kh)) on {|d̲_k(0, ·)| ≥ 3δ}. It directly follows from the definitions of ρ_k and w̲_k that w̲_k(0, ·) ≤ d(·, C_h(kh)) on {|d̲_k(0, ·)| ≤ 3δ}. Therefore we conclude that w̲_k(0, ·) ≤ d̲_k(0, ·) ≤ d(·, C_h(kh)) in R^N, and this is our desired extension of d̲_k. Recall that d̲_k is given by (20) and that η_{k,i} (i = 1, 2, 3) are defined after the proof of Lemma 3.3.

As for a supersolution of (2), we define w̄_k in a similar way, where v_N is given in subsection 2.3. Then we are able to prove, in the same way as in Lemmas 3.4 and 3.5, that w̄_k is a viscosity supersolution of (2) satisfying d(·, C(kh)) ≤ w̄_k(0, ·) in R^N for all h ∈ (0, t_3) and k = 0, 1, . . . , [(T_0 − ε)/h] − 1. Here t_3 is given in Lemma 3.4. Figure 1 shows the outline of the graphs of w̲_k and w̄_k. Note that w̲_k and w̄_k grow at most linearly as |x| → +∞ since so does v_N.

Figure 1. Graphs of w̲_k, w_k and w̄_k (thick curves)

It follows from (10) and the constructions of w̲_k and w̄_k that w̲_k(0, ·) ≤ w_k(0, ·) ≤ w̄_k(0, ·) in R^N for all h ∈ (0, t_3) and k = 0, 1, . . . , [(T_0 − ε)/h] − 1. Thus we use the comparison principle for viscosity solutions and the continuity of these functions to obtain w̲_k ≤ w_k ≤ w̄_k in [0, h] × R^N for all k = 0, 1, . . . , [(T_0 − ε)/h] − 1 and h ∈ (0, t_3).
Step 2. We show the assertion of Proposition 3.2.
Remark 3.6. (1) In [12] we used some local uniform estimates of the solution to (2) in N_{ε,30δ}. However, thanks to the explicit forms of w̲_k and w̄_k on [0, h] × {|d̲_k(0, ·)| ≤ 3δ}, we do not need such estimates.
Remark 3.7. The arguments of this section cannot be applied to the case where {Γ(t)}_{t∈[0,T_0)} is a generalized CDM, because the smoothness of Γ(t) is used in an essential way. We have a partial result in [12] which gives the rate of convergence to the regular part of the generalized CDM; this result is valid even if Γ(t) has some singular points. However, to the best of our knowledge, there is no result such as Theorem 3.1 near singular points.

4. Optimality. This section is devoted to the optimality of the estimate in Theorem 3.1 with respect to the order of h. For this purpose we consider the radial case. For simplicity, we set N = 2, R(t) := √(1 − 2t), T_0 := 1/2, C(t) := {x ∈ R² | |x| ≤ R(t)}, b ≡ 0 and g ≡ 0. Since it suffices to consider radial solutions, we treat the following problem:

KATSUYUKI ISHII AND TAKAHIRO IZUMI
The following proposition says that for each h > 0, C h (t) evolves faster than C(t). Combining Theorem 3.1 with this proposition, we see that for any ε ∈ (0, 1/100), there exists h 0 > 0 such that for all t ∈ [0, T 0 − ε] and h ∈ (0, h 0 ). Under these settings, we are able to prove the following estimate.
Theorem 4.2. For any ε ∈ (0, 1/100), there exist L_1 > 0 and h_1 > 0 such that for all h ∈ (0, h_1), This implies the optimality of the estimate in Theorem 3.1 with respect to the order of h and is more precise than the corresponding estimate in [12].
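As a purely numerical illustration of this radial setting (a sketch with ad hoc discretization parameters, not the analysis of this section), one can run the algorithm on the radial heat equation w_t = w_rr + w_r/r, restart each step from the signed distance R_k − r, read off the next radius R_{k+1} as the zero crossing of w(h, ·), and compare R_h(t) with R(t) = √(1 − 2t):

```python
import numpy as np

def radial_threshold_radii(R0=1.0, h=0.01, steps=10, L=2.0, m=401):
    """Threshold scheme for a disk in R^2, reduced to the radial heat
    equation w_t = w_rr + w_r / r on (0, L).  Each step restarts from the
    signed distance w(0, r) = R_k - r (positive inside the disk) and the
    next radius R_{k+1} is the zero crossing of w(h, .)."""
    r = np.linspace(0.0, L, m)
    dr = r[1] - r[0]
    R, radii = R0, [R0]
    for _ in range(steps):
        w = R - r
        dt = 0.2 * dr**2                    # within the explicit stability limit
        n_sub = max(1, int(round(h / dt)))
        dt = h / n_sub
        for _ in range(n_sub):
            lap = np.zeros_like(w)
            lap[1:-1] = ((w[2:] - 2.0 * w[1:-1] + w[:-2]) / dr**2
                         + (w[2:] - w[:-2]) / (2.0 * dr * r[1:-1]))
            lap[0] = 4.0 * (w[1] - w[0]) / dr**2   # 2D radial Laplacian at r = 0
            w[:-1] += dt * lap[:-1]                # far boundary value kept fixed
        k = np.where(w[:-1] * w[1:] <= 0.0)[0][0]  # first sign change of w(h, .)
        R = r[k] + dr * w[k] / (w[k] - w[k + 1])   # linear interpolation
        radii.append(R)
    return radii

radii = radial_threshold_radii()
R_h = radii[-1]
R_exact = float(np.sqrt(1.0 - 2.0 * 10 * 0.01))    # R(t) = sqrt(1 - 2t) at t = 0.1
```

By the preceding proposition R_h(t) ≤ R(t) for the exact scheme; on a coarse grid the finite difference error can mask this small gap, so the sketch only illustrates the monotone shrinking and the closeness of R_h to R.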
Remark 4.6. The point of the constructions of w̲_k and w̄_k is the modification of ρ_k near r = 0. Hence it is necessary that R(t) > 4ε, because otherwise we cannot estimate R(t) − R_h(t) precisely. The choice of ε ∈ (0, 1/100) and the estimate (36) guarantee this. In the following arguments we always assume ε ∈ (0, 1/100).