A family of self-avoiding random walks interpolating the loop-erased random walk and a self-avoiding walk on the Sierpinski gasket

We show that the `erasing-larger-loops-first' (ELLF) method, first introduced for erasing loops from the simple random walk on the Sierpiński gasket, also works for non-Markov random walks, in particular for self-repelling walks, and we use it to construct a new family of self-avoiding walks on the Sierpiński gasket. The one-parameter family constructed by this method continuously connects the loop-erased random walk and a self-avoiding walk which has the same asymptotic behavior as the `standard' self-avoiding walk. We prove the existence of the scaling limit and study some path properties: the exponent governing the short-time behavior of the scaling limit varies continuously in the parameter. The limit process is almost surely self-avoiding, while its path Hausdorff dimension is the reciprocal of that exponent, hence strictly greater than 1.

Two of the basic questions concerning random walks are the following. (1) What is the asymptotic behavior of the walk as the number of steps tends to infinity? To be more specific, if X(n) denotes the location of the walker starting at the origin after n steps, does the mean square displacement show power-law behavior? In other words, does

E[ |X(n)|^2 ] ~ n^{2ν}, n → ∞,

hold in some sense, where |X(n)| denotes the Euclidean distance from the starting point and ν is a positive constant? If it is the case, what is the value of the displacement exponent ν?
(2) Does the walk have a scaling limit? A scaling limit is the limit as the edge length of the underlying graph tends to 0. For example, Brownian motion on R^d and Brownian motion on the Sierpiński gasket are obtained as scaling limits of the simple random walks on their respective graph approximations. The displacement exponent ν also governs the short-time behavior of the scaling limit. Question (1) originated from the problem of the end-to-end distance of long polymers. Since no two monomers can occupy the same place, a self-avoiding walk is expected to model polymers. There have been many works, not only mathematical ones but also computer simulations and heuristic arguments, aimed at answering the question; however, for the 'standard' self-avoiding walk on Z^d with d = 2, 3, 4, it has not been solved rigorously yet. Question (2) for Z^d, d = 2, 3, 4, has not been given a rigorous answer either, while for Z^d with d > 4 the answers are known: the scaling limit is d-dimensional Brownian motion and ν = 1/2. The difficulty for d = 2, 3, 4 lies in the strong self-avoiding effect in low dimensions. For what is known about 'standard' self-avoiding walks on Z^d, see [19].
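As a numerical illustration of question (1), the following sketch estimates ν for the simple (not self-avoiding) random walk on Z^2, for which E[|X(n)|^2] = n exactly, so ν = 1/2. The sample sizes, step counts and seed are arbitrary illustrative choices.

```python
import math
import random

def mean_square_displacement(n_steps, n_walks, rng):
    """Average |X(n)|^2 over independent simple random walks on Z^2."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0.0
    for _ in range(n_walks):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            x += dx
            y += dy
        total += x * x + y * y
    return total / n_walks

rng = random.Random(0)
m1 = mean_square_displacement(200, 2000, rng)
m2 = mean_square_displacement(800, 2000, rng)
# If E[|X(n)|^2] behaves like n^{2 nu}, then the log-log slope
# between n = 200 and n = 800 estimates 2 nu.
nu = math.log(m2 / m1) / (2 * math.log(4))
print(nu)  # close to 1/2 for the simple random walk
```

For a genuinely self-avoiding walk the analogous estimate is far harder, which is precisely the difficulty described above.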
The situation is quite different for the LERW on Z d . The existence of the scaling limit has been proved for all d, and the asymptotic behavior has been studied in terms of the growth exponent (the reciprocal of the displacement exponent). For d = 2 Schramm-Loewner evolution (SLE) has played an essential role. For some further discussion of the LERW on Z d , see [18], [20], [14] and [17].
The Sierpiński gasket provides a space which is 'low-dimensional' but permits rigorous analysis. For this fractal space, the displacement exponent ν of the 'standard' SAW is obtained in [10]. The scaling limit is studied in [6], and it is proved that the same ν governs the short-time behavior of the limit process X_t; that is, there exist positive constants C_1 and C_2 such that

C_1 t^ν ≦ E[ sup_{0≦s≦t} |X_s| ] ≦ C_2 t^ν

holds for small enough t ([3]). As for the LERW, the scaling limit was obtained by two groups independently, using different methods ([21], [11]). SLE, mentioned above, is a profound theory which goes far beyond the investigation of the scaling limit of the LERW on Z^2. It is a unified theory for a variety of random curves in R^2 that involves a parameter κ, with different values of κ corresponding to different models: κ = 2 corresponds to the scaling limit of the LERW, and κ = 8/3 is conjectured to correspond to the scaling limit of the SAW. Thus, SLE is expected to connect the SAW and the LERW on R^2.
There arises a natural question: Is it possible to construct a model that connects the SAW and the LERW on the Sierpiński gasket continuously in some parameter? In this case we cannot use SLE, for which the conformal invariance of models in R 2 plays an essential role.
In this paper, we construct a one-parameter family of self-avoiding random walks on the Sierpiński gasket continuously connecting the LERW and a SAW which has the same asymptotic behavior as the 'standard' SAW. We prove the existence of the scaling limit and show some path properties: The exponent ν governing the short-time behavior of the scaling limit varies continuously in the parameter. The limit process is almost surely self-avoiding, while it has path Hausdorff dimension 1/ν, which is strictly greater than 1.
The main ingredients of the model are the one-parameter family of self-repelling walks on the Sierpiński gasket studied in [3] and [7], and the 'erasing-larger-loops-first' (ELLF) method employed in the study of the LERW [11]. A self-repelling walk is a walk that is discouraged, if not prohibited, from returning to points it has visited before. There is a variety of such models on Z; see, for example, the survey paper [12] and the references therein. The model we use here is distinctive in how it discourages returns: penalties are given for backtracks and sharp turns, rather than for revisits to the same points or the same edges.
For the 'standard' LERW on graphs, the uniform spanning tree proves to be a powerful tool ([21]). By 'standard', we mean that the loops are erased chronologically, as first introduced by G. Lawler ([16]). On the other hand, [11] constructed a LERW on the Sierpiński gasket by ELLF, that is, by erasing loops in descending order of size, and proved that the resulting LERW has the same distribution as the 'standard' LERW. The uniform spanning tree is powerful in the sense that it can be used on any graph; however, it is valid only for loop erasure from simple random walks. We prove that ELLF also works for other kinds of random walks on some fractals, in particular for self-repelling walks on the Sierpiński gasket, since the method is based on self-similarity. Thus, our construction is performed by erasing loops from the family of self-repelling walks by the ELLF method.
In Section 2, we describe the set-up and recall the family of self-repelling walks introduced in [3] and [7] in a more concise manner. In Section 3, we describe the ELLF method of loop-erasing in a more organized manner than [11], and apply it to the self-repelling walks to obtain a new family of self-avoiding walks interpolating LERW and SAW. In Section 4 we study the scaling limit. In Section 5 we prove some properties of the limit process concerning the short-time behavior. In Section 6, we give the conclusion and some remarks.
2 Self-repelling walk on the pre-Sierpiński gaskets

Let us first recall the definition of the pre-Sierpiński gaskets, the graph approximations of the Sierpiński gasket, a fractal with Hausdorff dimension log 3/log 2. Let O = (0, 0), a = (1/2, √3/2), b = (1, 0), and define F'_0 to be the graph that consists of the three vertices and the three edges of △Oab. Define similarity maps f_i : R^2 → R^2, i = 1, 2, 3, by

f_1(x) = x/2, f_2(x) = (x + a)/2, f_3(x) = (x + b)/2,

and a recursive sequence of graphs by F'_{N+1} = f_1(F'_N) ∪ f_2(F'_N) ∪ f_3(F'_N), N ∈ Z_+. Let F_N be the union of F'_N and its reflection with respect to the y-axis, and let G_N and E_N be the sets of vertices and edges of F_N, respectively. F_3 is shown in Fig. 1.
Let T_M be the set of all upward (closed and filled) triangles which are translates of 2^{-M}△Oab and whose vertices are in G_M; an element of T_M is called a 2^{-M}-triangle.
For each N ∈ Z_+ = {0, 1, 2, ...}, denote by W_N the set of finite paths on F_N starting from O, not hitting any vertex in G_0 other than O on the way, and stopped at the first hitting time of a. For a path w = (w(0), w(1), ..., w(n)) ∈ W_N, denote the number of steps by ℓ(w) := n.
If we assign probability (1/4)^{ℓ(w)−1} to each w ∈ W_N, this determines a probability measure on paths which coincides with the one induced by the simple random walk on F_N started at O. We shall instead assign probabilities in such a way that the resulting random walks are discouraged from revisiting the same points. Let us start with paths in W_1. The idea is to give w ∈ W_1 a penalty every time it makes a sharp turn or a backtrack at G_1 \ G_0, or revisits O. We realize this by means of N(w), the total number of sharp turns and backtracks, and M(w), the total number of revisits to O, assigning probability u^{N(w)+M(w)} x_u^{ℓ(w)−1}, where u is a parameter taking values in [0, 1] and x_u is a positive constant determined so that the sum of the probabilities over W_1 equals 1. This is a natural way to define a self-repelling walk on F_1: if u = 1, then x_1 = 1/4 and we recover the simple random walk given above, and if u = 0, then the probability is supported on a set of self-avoiding paths. On a general W_N, we define the probability recursively.
To give a precise definition, we shall make some preparations. For a path w ∈ ∪_{N=1}^∞ W_N and A ⊂ R^2, we define the hitting time of A by

T_A(w) = inf{ n ≧ 0 : w(n) ∈ A },

where we set inf ∅ = ∞. For w ∈ W_N and 0 ≦ M ≦ N, we define a recursive sequence {T^M_i(w)}_{i=0}^m of hitting times of G_M as follows: let T^M_0(w) = 0, and for i ≧ 1,

T^M_i(w) = inf{ n > T^M_{i−1}(w) : w(n) ∈ G_M, w(n) ≠ w(T^M_{i−1}(w)) },

where we take m to be the smallest integer such that T^M_{m+1}(w) = ∞. Thus T^M_i(w) is the time (in steps) at which the path w hits a vertex in G_M for the (i + 1)-th time, with the convention that if w hits the same vertex in G_M more than once in a row, we count it only once.
For each M ∈ Z_+, we define a coarse-graining map Q_M by

(Q_M w)(i) = w(T^M_i(w)), i = 0, 1, ..., m,

where m is as above. Note that ℓ(Q_M w) = m holds and that if w ∈ W_N and M ≦ N, then Q_M w ∈ W_M. For w ∈ W_1, define the reversing number N(w) and the revisiting number M(w) to be the total number of sharp turns and backtracks and the total number of revisits to O, respectively, as above. For each u, within the radius of convergence r_u > 0 as a power series in x, an explicit form of the generating function Φ of these weights is given in [3].
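The coarse-graining map Q_M can be sketched on abstract vertex sequences as follows; the integer labels and the set G standing in for G_M are illustrative only.

```python
def coarse_grain(path, G):
    """Q_M on a discrete path: record the successive hits of the vertex
    set G, counting immediate repeats of the same vertex only once, as in
    the definition of the hitting times T^M_i."""
    out = [path[0]]              # the path starts at a vertex of G (at O)
    for v in path[1:]:
        if v in G and v != out[-1]:
            out.append(v)
    return out

# Toy example with integer vertex labels and G standing in for G_M.
w = [0, 1, 2, 1, 2, 3, 4]
print(coarse_grain(w, {0, 2, 4}))  # [0, 2, 4]
```

Note that the repeated hits of vertex 2 are recorded only once, matching the convention for T^M_i above.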
As a function of u, x_u is continuous and strictly decreasing on [0, 1].
To define a family of probability measures {P^u_N, u ∈ [0, 1]} on each W_N, we consider decompositions of a path based on the self-similarity and the symmetries of the pre-Sierpiński gaskets. Assume w ∈ W_N and 0 ≦ M < N, and denote w̄ = Q_M w. Since the pair of adjacent 2^{-M}-triangles containing w̄(i − 1), w̄(i) and w̄(i + 1) is similar to F_{N−M}, there is a unique decomposition

(w̄; w_1, ..., w_{ℓ(w̄)}), w̄ ∈ W_M, w_i ∈ W_{N−M}, i = 1, ..., ℓ(w̄), (2.4)

such that the path segment (w(T^M_{i−1}(w)), w(T^M_{i−1}(w) + 1), ..., w(T^M_i(w))) of w is identified with w_i ∈ W_{N−M} by an appropriate similarity, rotation, translation and reflection, so that w(T^M_{i−1}(w)) is identified with O and w(T^M_i(w)) with a. We shall use this kind of identification throughout this paper. We illustrate a simple example of the decomposition in Fig. 2. First, for each w ∈ W_1, let P^u_1[w] = u^{N(w)+M(w)} x_u^{ℓ(w)−1}, and define P^u_N on W_N recursively by means of the decomposition (2.4). In [3], it is proved that for each u, the sequence {Z^u_N(λ_u^N ·)}_{N=1}^∞ of time-scaled self-repelling walks converges to a continuous process as N → ∞. The one-parameter family of limit processes {Z^u(·), u ∈ [0, 1]} continuously interpolates between a self-avoiding process (u = 0) and Brownian motion (u = 1) on the Sierpiński gasket.
In the next section, we erase loops from this family of self-repelling walks to obtain a one-parameter family of self-avoiding walks. For this purpose, we introduce an auxiliary family of self-repelling walks. Let F^V_N be the graph consisting of three adjoining copies of F'_N, and let G^V_N and E^V_N be the sets of vertices and edges of F^V_N, respectively. Denote by V_N the set of finite paths on F^V_N starting from O and stopped at the first hitting time of a, and define the hitting times T^M_i and the coarse-graining maps Q_M in the same way as for W_N. Paths in V_N are allowed to leak into the 'interior' of the third copy of F'_N. A path w ∈ V_N defined in this way consists of two parts, (w(0), w(1), ..., w(T^0_1(w))) and (w(T^0_1(w)), w(T^0_1(w) + 1), ..., w(T^0_2(w))), and they can be identified with some w', w'' ∈ W_N, respectively. Define a probability measure P'^u_N on V_N accordingly. This is a family of self-repelling walks that hit b 'once', in the sense that Q_0 w = (O, b, a).
For i < j with w(i) = w(j) = c, we call the path segment [w(i), w(i + 1), ..., w(j)] a loop formed at c, and define its diameter by d = sup_{i≦k_1<k_2≦j} |w(k_1) − w(k_2)|, where |·| denotes the Euclidean distance. Note that a loop can be part of another, larger loop formed at some other vertex. By definition, the paths in W_N ∪ V_N do not have any loops with diameter greater than or equal to 1. Let Γ_N be the set of loopless paths on F_N from O to a. Note that any loopless path from O to a is confined to △Oab.
We shall now describe the loop-erasing procedure for paths in W 1 ∪ V 1 : (i) Erase all the loops formed at O; (ii) Progress one step forward along the path, and erase all the loops at the new position; (iii) Iterate this process, taking another step forward along the path and erasing the loops there, until reaching a.
To be precise, for w ∈ W_1 ∪ V_1, define the recursive sequence {s_i}_{i=0}^n by s_0 = max{ k ≦ ℓ(w) : w(k) = w(0) } and, for i ≧ 1,

s_i = max{ k ≦ ℓ(w) : w(k) = w(s_{i−1} + 1) }.

The path segment (w(s_{i−1} + 1), ..., w(s_i)) forms a loop or multiple loops at w(s_{i−1} + 1) = w(s_i), so we erase this part by removing w(s_{i−1} + 1), w(s_{i−1} + 2), ..., w(s_i − 2) and w(s_i − 1). If w(s_n) = a, then we have obtained a loop-erased path, Lw = (w(s_0), w(s_1), ..., w(s_n)). Note that w ∈ W_1 implies Lw ∈ W_1 ∩ Γ_1, but that w ∈ V_1 can also result in Lw ∈ W_1 ∩ Γ_1, with b being erased together with a loop. So far, our loop-erasing procedure is the same as the chronological method defined for paths on Z^d in [16].
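The chronological procedure (i)-(iii) is, in essence, the classical loop-erasure algorithm; a minimal sketch on abstract vertex sequences:

```python
def erase_loops(path):
    """Chronological loop erasure: advance along the path; at each revisit
    of a vertex, delete the loop just closed (steps (i)-(iii) above)."""
    out = []
    pos = {}                             # vertex -> its index in `out`
    for v in path:
        if v in pos:
            for x in out[pos[v] + 1:]:   # erase the loop formed at v
                del pos[x]
            del out[pos[v] + 1:]
        else:
            pos[v] = len(out)
            out.append(v)
    return out

w = [0, 1, 2, 3, 1, 4]      # the segment 1-2-3-1 is a loop formed at 1
print(erase_loops(w))       # [0, 1, 4]
```

Erasing the last-closed loop at each revisit is equivalent to the sequence {s_i} above, since s_i picks the last visit to the current vertex.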
For a general N, we erase loops from the largest-scale loops down, repeatedly applying the loop-erasing procedure on F^V_1.

First step of the induction: erasing largest-scale loops
We shall illustrate the first step of loop erasure. Decompose a path w ∈ W_N ∪ V_N into (Q_1 w; w_1, ..., w_{ℓ(Q_1 w)}), w_i ∈ W_{N−1} ∪ V_{N−1}, i = 1, ..., ℓ(Q_1 w), as in (2.4). Fig. 4(a) shows w ∈ W_N ∪ V_N and Fig. 4(b) shows Q_1 w. Erase all the loops in chronological order from Q_1 w ∈ W_1 ∪ V_1 to obtain LQ_1 w, as in Fig. 4(c), then restore the original fine structures to the remaining parts, as shown in Fig. 4(d). We call the path obtained at this stage L̃w. Notice that at this stage all the loops with diameter greater than 1/2 have been erased. Let Q̃_1 w = LQ_1 w. This completes the first induction step. ✷

The idea is to repeat a similar procedure within each 2^{-1}-triangle to erase all loops with diameter greater than 1/4, then within each 2^{-2}-triangle, and so on, until there remain no loops. To describe the subsequent induction steps more precisely, we make some preparations. For w ∈ W_N and M ≦ N, we shall define the sequence (∆_1, ..., ∆_k) of the 2^{-M}-triangles w 'passes through', together with their exit times. There is a unique element of T_M that contains w(T^M_0) and w(T^M_1), which we denote by ∆_1. For i ≧ 1, let T^{ex,M}_i = T^M_{J(i)} be the time of the last hit of ∆_i before w leaves it, and let ∆_{i+1} be the unique 2^{-M}-triangle that contains both w(T^{ex,M}_i) and w(T^M_{J(i)+1}). We denote the sequence of these triangles by σ_M(w) = (∆_1, ..., ∆_k), and call it the 2^{-M}-skeleton of w. If w ∈ Γ_N and M ≦ N, then its 2^{-M}-skeleton is a collection of distinct 2^{-M}-triangles, and each of them is either Type 1 or Type 2, according as the path segment of w in it is identified with an element of W_{N−M} or of V_{N−M} (see below). Assume w ∈ W_N ∪ V_N and M ≦ N. For each ∆ in σ_M(w), the path segment w|_∆ of w in ∆ is defined as the portion of w between its entrance into and exit from ∆. Note that the definition of T^M_i allows a path segment w|_∆ to leak into the neighboring 2^{-M}-triangles.
If Q_M w ∈ Γ_M, then w|_∆ ∈ W_{N−M} or w|_∆ ∈ V_{N−M} (identification implied), according to the type of ∆ ∈ σ_M(w), where the entrance to ∆ is identified with O and the exit with a. This means that each w satisfying Q_M w ∈ Γ_M can be decomposed uniquely into its skeleton σ_M(w) and the path segments w|_{∆_i}, i = 1, ..., k. Conversely, given a collection of distinct 2^{-M}-triangles {∆_i}_{i=1}^k such that O ∈ ∆_1, a ∈ ∆_k, and ∆_i and ∆_{i+1} are neighbors, together with w'_i ∈ W_{N−M} ∪ V_{N−M}, i = 1, ..., k, we can assemble them to obtain a unique element w of W_N ∪ V_N.
We call a loop [w(i), w(i + 1), ..., w(i + i_0)] a 2^{-M}-scale loop whenever there exists an M ∈ Z_+ such that 2^{-M} < d ≦ 2^{-(M−1)}, where d is the diameter of the loop. Using the above as a base step, we shall now describe the induction step of our operation: assume that all of the 2^{-1}- to 2^{-M}-scale loops have been erased from w, and denote by w' ∈ W_N ∪ V_N the path obtained at this stage.

✷
We then continue this operation until we have erased all of the loops, obtaining Lw = Q̃_N w ∈ Γ_N. In this way, the loop-erasing operator L defined for W_1 ∪ V_1 has been extended to L : W_N ∪ V_N → Γ_N. Notice that the operation described above is essentially a repetition of loop erasure for paths in W_1 ∪ V_1. In the induction step, we observe that Q̃_{M+1} w = Q_{M+1} w''. Although it may occur that σ_{M+1}(w'') ≠ σ_{M+1}(w') because of the erasure of 2^{-(M+1)}-scale loops, it holds that σ_M(w'') = σ_M(w'), which can be extended to σ_K(w') = σ_K(w'') for any K ≦ M. In particular, in the process of loop erasing, once loops of 2^{-K}-scale and greater have been erased, the 2^{-K}-skeleton does not change any more. However, it should be noted that the types of the triangles can change from Type 2 to Type 1.
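A schematic, single-scale version of the ELLF induction step can be sketched as follows. Paths are abstract vertex sequences, the set G stands in for the coarse vertex set G_M, and all labels are illustrative; the gasket geometry and the identification of segments with W/V elements are deliberately abstracted away.

```python
def erase_loops_tracking(path):
    """Chronological loop erasure that also records, for each surviving
    edge, the index of the original step that produced it."""
    out, steps, pos = [path[0]], [], {path[0]: 0}
    for k in range(1, len(path)):
        v = path[k]
        if v in pos:
            j = pos[v]
            for x in out[j + 1:]:
                del pos[x]
            del out[j + 1:]
            del steps[j:]
        else:
            pos[v] = len(out)
            out.append(v)
            steps.append(k - 1)   # edge path[k-1] -> path[k]
    return out, steps

def hit_indices(path, G):
    """Indices of the successive distinct hits of the coarse vertex set G."""
    idx = [0]
    for n in range(1, len(path)):
        if path[n] in G and path[n] != path[idx[-1]]:
            idx.append(n)
    return idx

def ellf_step(path, G):
    """One ELLF induction step: erase the loops visible at the coarse
    scale given by G, keeping the fine structure of surviving segments."""
    idx = hit_indices(path, G)
    coarse = [path[n] for n in idx]
    _, surviving = erase_loops_tracking(coarse)
    out = path[:1]
    for s in surviving:                          # s-th coarse step survives:
        out += path[idx[s] + 1: idx[s + 1] + 1]  # restore its fine segment
    return out

# Toy path: capital letters are coarse vertices, lower-case ones fine detail.
w = ['A', 'a1', 'B', 'b1', 'C', 'c1', 'B', 'b2', 'D']
print(ellf_step(w, {'A', 'B', 'C', 'D'}))  # the coarse loop B-C-B is erased
```

Iterating `ellf_step` over successively finer coarse sets mimics erasing 2^{-1}-scale loops first, then 2^{-2}-scale loops, and so on.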

P̂'^1_N equals the 'standard' LERW studied in [21]. An important observation is that in the process of erasing loops from Z^u_{N+1}, if we stop at the point where we have obtained Q̃_N Z^u_{N+1}, it is nothing but the procedure for obtaining LZ^u_N from Z^u_N. The same holds also for Z'^u_{N+1}. At this stage, what is left to do for obtaining LZ^u_{N+1} from Q̃_N Z^u_{N+1} is a sequence of loop erasures from copies of Z^u_1 or Z'^u_1. Combined with (3.6), this leads to a 'decomposition' of the LERW measures, in which σ_N(v) = (∆_1, ..., ∆_k), w_i = v|_{∆_i} (identification implied), P̂*^u_1 = P̂^u_1 if ∆_i is Type 1, and P̂*^u_1 = P̂'^u_1 if ∆_i is Type 2. A similar decomposition holds also for P̂'^u_{N+1}. This is the key to the recursion relations of the generating functions defined below.

The mean matrix M defined in (3.7) is a strictly positive matrix, and its larger eigenvalue λ = λ(u) is a continuous function of u, satisfying 2 < λ < 3. Let Z^u_N and Z'^u_N be as in (2.7).
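Since the entries of the mean matrix (3.7) are not reproduced here, the following sketch uses a hypothetical positive 2x2 matrix to show how the larger eigenvalue λ and the exponent ν = log 2 / log λ of Section 5 are computed:

```python
import math

def perron_eigenvalue(m):
    """Largest eigenvalue of a 2x2 matrix [[a, b], [c, d]] with positive
    entries; by Perron-Frobenius it is real, simple and positive."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    return (tr + math.sqrt(tr * tr - 4 * det)) / 2

# Hypothetical positive matrix standing in for the mean matrix (3.7);
# in the paper its entries depend on the parameter u.
M = [[2.0, 0.5],
     [1.0, 1.5]]
lam = perron_eigenvalue(M)        # here 2.5, within the stated range (2, 3)
nu = math.log(2) / math.log(lam)
print(lam, nu)
```

For a 2x2 matrix the closed form (tr + sqrt(tr^2 − 4 det))/2 suffices; no numerical eigensolver is needed.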

4 The scaling limit
In this section, we investigate the limit of the loop-erased self-repelling walks constructed in Section 3 as the edge length tends to 0. Since it is easier to deal with continuous functions from the beginning, we regard the F_N's and F^V_N's as closed subsets of R^2 made up of all the points on their edges. We define the Sierpiński gasket by F = cl(∪_{N=0}^∞ F_N), where cl denotes closure. We start with the larger space F^V = cl(∪_{N=0}^∞ F^V_N), and let C be the set of continuous paths w : [0, ∞) → F^V with w(0) = O and lim_{t→∞} w(t) = a. C is a complete separable metric space with the metric d(w_1, w_2) = sup_{t≧0} |w_1(t) − w_2(t)|, where |x − y|, x, y ∈ R^2, denotes the Euclidean distance. Hereafter, for w ∈ ∪_{N=1}^∞ (W_N ∪ V_N), we set w(t) = a for t ≧ ℓ(w) and interpolate the path linearly, so that we can regard w as a continuous function on [0, ∞). We shall regard W_N, V_N and Γ_N as subsets of C. The hitting times {T^M_i(w)} are defined for w ∈ C as in the previous sections, although the infimum is now taken over continuous time. Notice that the condition lim_{t→∞} w(t) = a makes {T^M_i(w)}_{i=0}^m a finite sequence. For N ∈ Z_+, we define a coarse-graining map Q_N : C → C by (Q_N w)(i) = w(T^N_i(w)) for i = 0, 1, ..., m, together with linear interpolation. We define also the 2^{-M}-skeleton σ_M(w) (the sequence of 2^{-M}-triangles w passes through), the exit times {T^{ex,M}_i}_{i=1}^k and the types of triangles in a manner similar to their counterparts in Section 3. The loop-erasing operator L is regarded accordingly as acting on these subsets of C, with resulting paths in Γ_N. P^u_N, P'^u_N, P̂^u_N and P̂'^u_N are regarded as probability measures on C.
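The linear interpolation used to regard a discrete path as an element of C can be sketched as follows; the vertices are points of R^2 and the example path is arbitrary.

```python
def path_value(w, t):
    """Evaluate the piecewise-linear interpolation of a discrete path w
    (a list of points of R^2) at time t, frozen at the endpoint for
    t >= ell(w), as in the embedding of W_N into C."""
    n = len(w) - 1
    if t >= n:
        return w[-1]             # w(t) = a for t >= ell(w)
    i = int(t)
    s = t - i
    (x0, y0), (x1, y1) = w[i], w[i + 1]
    return (x0 + s * (x1 - x0), y0 + s * (y1 - y0))

# Arbitrary example path in R^2.
w = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(path_value(w, 0.5))   # (0.5, 0.0)
print(path_value(w, 5.0))   # (1.0, 1.0)
```

With this convention the sup metric d(w_1, w_2) between two such paths is finite, since both are eventually constant.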
In order to consider an almost sure limit, we shall couple walks on different pre-Sierpiński gaskets. For ω_N ∈ Γ_N and ω_{N+1} ∈ Γ_{N+1}, write ω_N ⊲ ω_{N+1} if there is a path v obtained by adding a finer, 2^{-(N+1)}-scale structure (not loopless yet) to ω_N such that erasing the 2^{-(N+1)}-scale loops from v gives ω_{N+1}, and let Ω' be the set of sequences ω = (ω_0, ω_1, ω_2, ...) such that ω_N ∈ Γ_N and ω_N ⊲ ω_{N+1} for every N. We assumed ω_0 = (O, a) here; the case where the path passes through b = (1, 0) can be dealt with in a similar way. Define the projection onto the first N + 1 elements by π_N ω = (ω_0, ω_1, ..., ω_N).
For each u ∈ [0, 1], define a probability measure P̃_N on π_N Ω' via the loop erasure of the self-repelling walks with law P^u_N defined in Section 2. Although P̃_N depends on u, we shall not write the u-dependence explicitly, for simplicity. The following consistency condition is a direct consequence of the loop-erasing procedure:

P̃_N[ (ω_0, ..., ω_N) ] = Σ P̃_{N+1}[ (ω_0, ..., ω_N, ω') ], (4.1)

where the sum is taken over all possible ω' ∈ Γ_{N+1} such that ω_N ⊲ ω'. By virtue of (4.1) and Kolmogorov's extension theorem for a projective limit, there is a probability measure P on Ω_0 = C^N = C × C × ... such that P[Ω'] = 1, where π_N denotes the projection onto the first (N + 1) elements also here.
Consider the two-type branching process of skeleton triangles, where the offspring born from a Type 1 triangle and from a Type 2 triangle are distributed as S and S', the numbers of Type 1 and Type 2 triangles produced at the next scale. If ∆_i is Type 1, the process starts in state (1, 0), and if ∆_i is Type 2, in state (0, 1).
(1) The generating functions Φ̃ and Θ̃ for the offspring distributions are expressed in terms of the expectation E with respect to P.
(2) Let M be the mean matrix given by (3.7).
Proposition 4 suggests that we should consider the time-scaled processes X_N(t) = ω_N(λ^N t), where λ is the larger eigenvalue of the mean matrix.
(1)-(4) in Proposition 6 are straightforward consequences of general limit theorems for supercritical multi-type branching processes (Theorems 1 and 2 in V.6 of [1]). P[B_i > 0] = 1 is a consequence of Φ̃ and Θ̃ having no terms of degree smaller than 2. For the existence of the Laplace transform on the entire C, we need a careful study of the recursions. We omit the details here, since they are lengthy and similar to the proof of Proposition 4.5 in [9].

Theorem 7 X_N converges uniformly in t, a.s. as N → ∞, to a continuous process X.
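At the level of means, the normalization by λ^N behind Proposition 6 can be illustrated deterministically: λ^{-N} M^N v converges to a multiple of the Perron eigenvector. The matrix below is hypothetical, standing in for (3.7); its Perron eigenvalue is 2.5.

```python
def mat_vec(m, v):
    """Multiply a 2x2 matrix by a vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# Hypothetical positive mean matrix standing in for (3.7),
# with Perron eigenvalue lam = 2.5 and Perron eigenvector (1, 1).
M = [[2.0, 0.5],
     [1.0, 1.5]]
lam = 2.5
v = [1.0, 0.0]                             # one Type 1 triangle at scale 0
for _ in range(40):
    v = [x / lam for x in mat_vec(M, v)]   # one generation, rescaled
# lam^{-N} M^N v converges to a multiple of the Perron eigenvector (1, 1).
print(v)
```

The contribution of the smaller eigenvalue decays geometrically under the rescaling, which is the deterministic shadow of the a.s. convergence of λ^{-N} S_N in the branching process.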
Proof. Choose ω ∈ Ω' such that the convergence in Proposition 6 holds for all M ∈ Z_+. Then, if N, N' ≧ N_1, for any t ∈ [0, R] the distance between X_N(t) and X_{N'}(t) is bounded by three terms, of which the third is 0 by (4.5). Since M is arbitrary, we have the uniform convergence. ✷

Proposition 6 (5) implies that E[exp(tB_i)] < ∞ for t > 0, which leads to the following result; the proof is similar to that in [6].

Proposition 9
The following holds for all M ∈ Z_+ almost surely.

Proof. (1) and (2) are direct consequences of Proposition 5, (4.3) and Theorem 7. To prove (3), let v_i = X(T^{*M}_i), i = 1, ..., k_M. We first prove that if T^{*M}_{i−1} < t < T^{*M}_i, then X(t) ∈ {v_{i−1}, v_i}, by showing that none of the following events A_j, j = 1, 2, 3, 4, has positive probability.
such that X(t_1) = v_i and X(t_2) = v_i hold for some i ∈ {1, ..., k_M}. A_4: There exist t_1 and t_2, ... Proposition 8 guarantees that P[A_1] = P[A_2] = 0. Since X is the uniform limit of a sequence of self-avoiding walks, we have P[A_3] = P[A_4] = 0. Let σ = (∆_1, ..., ∆_{k_M}) be a sequence such that P[σ_M(X) = σ] > 0. Let ∆_i be one of the triangles in σ, and denote the third vertex of ∆_i (neither the exit nor the entrance) by v*_i. We prove that the probability that X hits v*_i at some t with T^{*M}_{i−1} < t < T^{*M}_i is zero. We can take a decreasing sequence of triangles containing v*_i and estimate the hitting probabilities in terms of p_i and q_i defined by (3.5). For any K with K ≧ M, (1) implies that these bounds tend to 0 as K → ∞. Thus the probability that X hits any 'third' vertex of the triangles in its skeleton is zero. This completes the proof of (3). ✷

This proposition further leads to:

Theorem 10 (1) X is almost surely self-avoiding, in the sense that X(s) ≠ X(t) for any 0 ≦ s < t ≦ T_a(X), where T_a(X) = inf{t > 0 : X(t) = a} = T^{*0}_1. (2) The Hausdorff dimension of the path X([0, T_a(X)]) is almost surely equal to log λ/log 2, which is a continuous function of u.
(1) is a consequence of Propositions 8 and 9. To calculate the Hausdorff dimension, we use the fact that if a path w is self-avoiding, then σ̄_1(w) ⊃ σ̄_2(w) ⊃ σ̄_3(w) ⊃ ... → w in the Hausdorff metric, where σ̄_M(w) is the union of all the closed 2^{-M}-triangles in σ_M(w). We could call the sample path a 'random' graph-directed recursive construction, for the numbers of similarity maps are random variables. We obtain the Hausdorff dimension by applying Theorem 4.3 in [4]. ✷
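A quick numerical check of Theorem 10 (2): for any λ in the stated range (2, 3), the path dimension log λ/log 2 lies strictly between 1 and log 3/log 2, the dimension of the gasket itself. The sample values of λ below are hypothetical, since λ(u) is not tabulated here.

```python
import math

def hausdorff_dim(lam):
    """Path Hausdorff dimension log(lambda)/log 2 from Theorem 10 (2)."""
    return math.log(lam) / math.log(2)

# For any lambda in (2, 3), the dimension lies strictly between 1
# (a rectifiable curve) and log 3 / log 2, the dimension of the gasket.
for lam in (2.1, 2.5, 2.9):    # sample values; the true lambda depends on u
    assert 1 < hausdorff_dim(lam) < math.log(3) / math.log(2)
print(hausdorff_dim(2.5))
```

This makes the statement "strictly greater than 1" of the abstract concrete: the limit path is a curve, but a fractal one.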

5 Path properties of the limit process
In this section we study some further sample path properties of the limit process. We assume 0 < u ≦ 1, for the case u = 0 is considered in [7]. As in the previous section, we shall not write the u-dependence explicitly. Let ν = ν(u) = log 2 / log λ.
Recall from Proposition 6 (5) that the Laplace transforms of B_1 and B_2 satisfy functional equations involving Φ̃ and Θ̃. The proof of the following proposition uses the explicit forms of Φ̃ and Θ̃, but it basically follows those of [2] and [15].
Remark In [13], supercritical multi-type branching processes are studied and detailed results on the tail behavior of the limit processes are given, but our case does not satisfy the conditions for his results.
Proposition 14 There exist positive constants C_7, C_8 and K such that, for δ t^{−ν} ≧ K,

e^{−C_7 (δ t^{−ν})^{1/(1−ν)}} ≦ P[ sup_{0≦s≦t} |X(s)| ≧ δ ] ≦ e^{−C_8 (δ t^{−ν})^{1/(1−ν)}}.

Proof. For an arbitrarily given 0 < δ < 1, take N ∈ N such that 2^{−N} < δ ≦ 2^{−N+1} holds. Recall that if ∆_1, the first element of σ_N(X), is of Type 1, then T^{ex,N}_1(X) has the same distribution as λ^{−N} B̃_1, and if of Type 2, the same distribution as λ^{−N} B̃_2. For i = 1, 2 denote by A_i the event that ∆_1 is of Type i.
For the upper bound, since sup_{0≦s≦t} |X(s)| ≧ δ implies T^{ex,N}_1(X) ≦ t, we have

P[ sup_{0≦s≦t} |X(s)| ≧ δ ] ≦ e^{−C_6 (λ^N t)^{−ν/(1−ν)}} ≦ e^{−C_8 (δ t^{−ν})^{1/(1−ν)}},

where we assumed that λ^N t ≦ x_0 in the second inequality and set C_8 = 2^{−1/(1−ν)} C_6. For the lower bound, since T^{ex,N−1}_1 < t implies |X(t)| ≧ δ, we can show that there exists a C_7 > 0 such that P[ |X(t)| ≧ δ ] ≧ e^{−C_7 (δ t^{−ν})^{1/(1−ν)}} holds for λ^{N−1} t ≦ x_0. Take K = 2 x_0^{−ν}. ✷

Theorem 15 For any p > 0, there are positive constants C_9 and C_10 such that

C_9 t^{pν} ≦ E[ sup_{0≦s≦t} |X(s)|^p ] ≦ C_{10} t^{pν}

holds for small enough t.

Proof. This follows from Proposition 14 for small enough t. ✷

Corollary 13 and Proposition 14 lead to a law of the iterated logarithm. Since the argument is similar to that in [3], we just give the statement below.

Theorem 16 There are positive constants C_{11} and C_{12} such that

C_{11} ≦ limsup_{t→0} ( sup_{0≦s≦t} |X(s)| ) / ψ(t) ≦ C_{12} a.s.,

where ψ(t) = t^ν (log log(1/t))^{1−ν}.

6 Conclusion and remarks
We constructed a one-parameter family of self-avoiding walks that interpolates the SAW and the LERW on the Sierpiński gasket, and proved that the scaling limit exists. The exponent that governs the short-time behavior, and equals the reciprocal of the path Hausdorff dimension, is a continuous function of the parameter. Our construction has shown that the ELLF method works for non-Markov random walks as well as for the simple random walk. Although we restricted ourselves to u ∈ [0, 1] above, all the results hold also for u > 1, that is, for self-attracting walks. By numerical calculations we observe that λ is a decreasing function of u, and we conjecture that as u → ∞, the limits x* = lim_{u→∞} u x_u, p*_i = lim_{u→∞} p_i(x_u, u) and q*_i = lim_{u→∞} q_i(x_u, u) exist, with x* ≈ 0.351, p*_1 ≈ 0.206, p*_2 ≈ 0.124, p*_3 ≈ 0.206, p*_4 ≈ 0.352, p*_5 ≈ 0.083, p*_6 ≈ 0, p*_7 ≈ 0.029, q*_1 ≈ 0.345, q*_2 ≈ 0.034, q*_3 ≈ 0.242, q*_4 ≈ 0.097, q*_5 ≈ 0.208, q*_7 ≈ 0.073, and q*_i ≈ 0 otherwise.