Random interval diffeomorphisms

We consider a class of step skew product systems of interval diffeomorphisms over shift operators, as a means to study random compositions of interval diffeomorphisms. The class is chosen to present in a simplified setting intriguing phenomena of intermingled basins, master-slave synchronization and on-off intermittency. We provide a self-contained discussion of these phenomena.

These step skew product systems provide a setting to study all possible compositions of the two maps f_1, f_2 in a single framework. Indeed, for initial conditions (ω, x) ∈ Σ^+_2 × I, the coordinate in I iterates as
f^n_ω(x) = f_{ω_{n−1}} ∘ ⋯ ∘ f_{ω_0}(x).    (1)
The maps f_1, f_2 simply move points to either smaller or larger values. We will pick the diffeomorphisms f_1 and f_2 randomly, independently at each iterate, with positive probabilities p_1 and p_2 = 1 − p_1. This corresponds to taking a Bernoulli measure on Σ^+_2 from which we pick ω. The obtained random compositions (1) thus form a (nonhomogeneous) random walk on the interval.
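The random compositions (1) are straightforward to simulate. A minimal sketch follows; the maps f1 and f2 below are illustrative placeholders of the required type (diffeomorphisms of the interval fixing 0 and 1, one moving points left and one moving points right), not the maps of any particular section.

```python
import random

def f1(x):
    """Illustrative diffeomorphism of [0,1] fixing 0 and 1, moving points left."""
    return x - 0.3 * x * (1 - x)

def f2(x):
    """Illustrative diffeomorphism of [0,1] fixing 0 and 1, moving points right."""
    return x + 0.3 * x * (1 - x)

def random_orbit(x0, n, p1=0.5, seed=0):
    """Iterate x_{k+1} = f_{omega_k}(x_k) with i.i.d. symbols omega_k,
    where P(omega_k = 1) = p1; returns the orbit x_0, ..., x_n."""
    rng = random.Random(seed)
    orbit = [x0]
    for _ in range(n):
        f = f1 if rng.random() < p1 else f2
        orbit.append(f(orbit[-1]))
    return orbit

orbit = random_orbit(0.5, 1000)
assert all(0.0 <= x <= 1.0 for x in orbit)
```

The orbit is the nonhomogeneous random walk on the interval described above: each step applies one of the two maps chosen by an independent coin flip.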
The dynamics of the step skew product system depends on the Lyapunov exponents at the boundaries Σ^+_2 × {0} and Σ^+_2 × {1}. We list the possibilities, which will be worked out in the sections below.
Intermingled basins: with negative Lyapunov exponents at the boundaries, these boundaries are attracting. Their basins are intermingled: any open set in Σ^+_2 × I intersects both basins.

Master-slave synchronization: with positive Lyapunov exponents at the boundaries, the boundaries are repelling. We find that for almost all fibers {ω} × I, orbits of points in the same fiber converge to each other, i.e. synchronize.

On-off intermittency: a zero Lyapunov exponent at a boundary makes that boundary neutral. With the other boundary repelling, a typical time series has long laminar phases where the orbit is close to the "off" state (the neutral boundary) and has bursts where the orbit is in the "on" state, i.e. away from the neutral boundary. Orbits spend a portion of their iterates with full density near the neutral boundary. With two neutral boundaries, orbits spend a portion of their iterates with full density near the union of the neutral boundaries.

Random walk with drift: with one attracting and one repelling or neutral boundary, most orbits approach the attracting boundary.

Thus we find the most elementary cases of the more widespread phenomena of intermingled basins [1,25], on-off intermittency [21,38], and master-slave synchronization [36,40].
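The excerpt does not display the boundary exponents explicitly, but for Bernoulli-driven systems of this kind the standard formula is L(0) = p_1 ln f_1'(0) + p_2 ln f_2'(0), and similarly at 1. A sketch classifying the four regimes under that assumption:

```python
import math

def boundary_exponents(df1_0, df2_0, df1_1, df2_1, p1=0.5):
    """Lyapunov exponents at the boundary fibers, assuming the standard
    formula L(b) = p1*ln f1'(b) + (1-p1)*ln f2'(b) for b in {0, 1}."""
    p2 = 1 - p1
    L0 = p1 * math.log(df1_0) + p2 * math.log(df2_0)
    L1 = p1 * math.log(df1_1) + p2 * math.log(df2_1)
    return L0, L1

def regime(L0, L1, tol=1e-12):
    """Classify the dynamics by the signs of the boundary exponents."""
    neg0, neg1 = L0 < -tol, L1 < -tol
    pos0, pos1 = L0 > tol, L1 > tol
    if neg0 and neg1:
        return "intermingled basins"
    if pos0 and pos1:
        return "master-slave synchronization"
    if neg0 or neg1:
        return "random walk with drift"   # one attracting boundary
    return "on-off intermittency"         # neutral boundary, none attracting

# The symmetric random walk below has g1'(0) = 1/e, g2'(0) = e,
# g1'(1) = e, g2'(1) = 1/e, hence zero exponents at both boundaries:
L0, L1 = boundary_exponents(1 / math.e, math.e, math.e, 1 / math.e)
assert abs(L0) < 1e-12 and abs(L1) < 1e-12
```

The classification mirrors the list above; the `tol` parameter only guards against floating point noise when an exponent is exactly zero.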
The setup chosen in this paper is a starting point for research in random dynamics, see e.g. [4], and nonhyperbolic dynamics, see e.g. [10], and has relations to nonautonomous dynamics, see e.g. [30]. The following directions for generalizations give an idea of the many possibilities. We will not give details, but refer to [4,10,30] for more. One may consider other measures than Bernoulli measures to pick random compositions of the interval maps. A natural generalization is also to let the diffeomorphisms on I depend on ω more generally than through ω_0 alone: (ω, x) ↦ (σω, f_ω(x)).
One can further consider parameters ω from other spaces than symbol spaces, with other dynamics than generated by the shift operator. One may then also generalize the skew product structure to maps on fiber bundles, and study perturbations that destroy the skew product structure. A heuristic principle going back to [19] states that phenomena in random dynamics on compact manifolds may also occur for diffeomorphisms of manifolds of higher dimensions.

This paper is organized as follows. We start with a section that contains definitions. The next sections form the heart of the paper, describing possible dynamics for the considered class of step skew product systems. An important role in the study of skew product systems is played by invariant measures. A basic result gives the connection between invariant measures for skew product systems and their natural extensions. In the appendix this is worked out in the simple context of step skew product systems over one-sided and two-sided shifts.
Acknowledgment. We are indebted to Todd Young for discussions on the topics of this paper, and to Abbas Ali Rashid and Vahatra Rabodonandrianandraina for a careful reading and remarks. Frank den Hollander pointed out relevant work by Lamperti on random walks. Detailed comments from referees have been very helpful in improving the presentation.

Step skew product systems
This section serves to present the setup of this paper and to collect necessary definitions.

A skew product system is a dynamical system generated by a map F : Y × X → Y × X of the form
F(y, x) = (g(y), f(y, x));    (2)
if one views X as the state space of interest, one has dynamics of the x variable that is governed by the map f, which depends on the variable y that changes through g. The space Y is the base space, the sets {y} × X are fibers.

We have an interest in skew product systems over full shifts. Write Ω for the finite set of symbols {1, . . . , N}. Let Σ_N = Ω^Z be the set of bilateral sequences ω = (ω_n)_{n=−∞}^{∞} composed of symbols in Ω. Let σ : Σ_N → Σ_N be the shift operator; the map σ shifts every sequence ω ∈ Σ_N one step to the left, (σω)_i = ω_{i+1}. We also consider the shift operator σ acting on the one-sided symbol space Σ^+_N, i.e. the space of sequences ω = (ω_n)_{n=0}^{∞} composed of symbols in Ω. The spaces Σ_N and Σ^+_N are endowed with the product topology. This topology is generated by cylinders
C^{k_1,...,k_n}_{ω_1,...,ω_n} = {ω' ∈ Σ_N ; ω'_{k_i} = ω_i, ∀i = 1, . . . , n}
for Σ_N. As it will not lead to confusion, we use the same notation for cylinders in Σ^+_N.

Now let M be a compact manifold, or a compact manifold with boundary, and for ω ∈ Σ_N, let f_ω : M → M be diffeomorphisms depending continuously on ω. Consider skew product systems F(ω, x) = (σω, f_ω(x)); in a step skew product system the fiber maps f_ω = f_{ω_0} depend on ω_0 alone.
We denote iterates of a skew product system F(ω, x) = (σω, f_ω(x)) as F^n(ω, x) = (σ^n ω, f^n_ω(x)) with f^n_ω = f_{σ^{n−1}ω} ∘ ⋯ ∘ f_ω. For a step skew product system this becomes f^n_ω = f_{ω_{n−1}} ∘ ⋯ ∘ f_{ω_0}. We also consider (step) skew products over the shift on one-sided symbol sequences. We write F^+ for the skew product system F^+(ω, x) = (σω, f_{ω_0}(x)) on Σ^+_N × M. Recall that a natural extension of a continuous map is the smallest invertible extension, up to topological semi-conjugacy. The skew product system F on Σ_N × M is the natural extension of F^+ on Σ^+_N × M.

Definition 2.2. Let F be a family of diffeomorphisms on M. The iterated function system IFS(F) is the action of the semigroup generated by F.

So a collection of diffeomorphisms f_i, 1 ≤ i ≤ N, generates an iterated function system. And an iterated function system IFS{f_1, . . . , f_N} on M corresponds to a step skew product system F^+(ω, x) = (σω, f_{ω_0}(x)) on Σ^+_N × M. Given an iterated function system IFS(F), a sequence {x_n : n ∈ N} is called a branch of an orbit of IFS(F) if for each n ∈ N there is f_n ∈ F such that x_{n+1} = f_n(x_n). We say that IFS(F) is minimal if every orbit has a branch which is dense in M.
The appendix collects definitions and basic results on stationary and invariant measures in the context of step skew product systems over shifts. We will make use of the material from the appendix in the following sections.
So f_1 moves points in (0, 1) to the left, whereas f_2 moves points in (0, 1) to the right. On Σ^+_2 we take the Bernoulli measure ν^+ where the symbols 1, 2 have probabilities p_1, p_2; see Appendix A.
Definition 2.4. The standard measure s on Σ^+_2 × I is the product of the Bernoulli measure ν^+ and Lebesgue measure on I.

Figure 1. The left frame depicts the graphs of g_1, g_2, the diffeomorphisms on I that are conjugate to the maps y ↦ y ± 1 that generate the symmetric random walk. The right frame shows a time series of the iterated function system generated by g_1, g_2, both picked with probability 1/2.
A specific example of a step skew product system from S comes from the symmetric random walk. The symmetric random walk is given by the translations y ↦ y ± 1 on the real line, where both maps are chosen randomly with probability 1/2. Now conjugate the symmetric random walk to maps on I as follows. Consider the coordinate change given by the diffeomorphism h : R → (0, 1), h(y) = 1/(1 + e^{−y}). Define the step skew product system on Σ_2 × I generated by the fiber diffeomorphisms g_1 = h ∘ (y ↦ y − 1) ∘ h^{−1} and g_2 = h ∘ (y ↦ y + 1) ∘ h^{−1}, see Figure 1. We will also refer to the step skew product system generated by g_1, g_2 as the symmetric random walk. Observe g_1'(0) = 1/e, g_2'(0) = e, g_1'(1) = e, g_2'(1) = 1/e, so that the symmetric random walk has zero Lyapunov exponents at the boundaries Σ^+_2 × {0} and Σ^+_2 × {1}. Perturbations of g_1, g_2 that preserve the boundary points 0, 1 lead to diffeomorphisms f_1, f_2 with various signs of Lyapunov exponents at the boundaries: all cases treated in the following sections also occur as small perturbations of the symmetric random walk.
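With h(y) = 1/(1 + e^{−y}) as above, the conjugated maps have closed forms, g_1(x) = x/(x + e(1 − x)) and g_2(x) = ex/(ex + 1 − x). A short sketch checking that g_2 = g_1^{−1} and that the boundary derivatives are 1/e and e (numerically, via difference quotients):

```python
import math

E = math.e

def g1(x):
    # h(h^{-1}(x) - 1) with h(y) = 1/(1 + e^{-y}); moves points left
    return x / (x + E * (1 - x))

def g2(x):
    # h(h^{-1}(x) + 1); moves points right, and g2 = g1^{-1}
    return E * x / (E * x + (1 - x))

# g1 and g2 are mutually inverse: conjugated translations by -1 and +1
x = 0.37
assert abs(g2(g1(x)) - x) < 1e-12

# difference quotient at 0: g1'(0) is close to 1/e, g2'(0) close to e
h = 1e-7
assert abs(g1(h) / h - 1 / E) < 1e-5
assert abs(g2(h) / h - E) < 1e-5
```

Since g_2 is the inverse of g_1, random compositions of g_1, g_2 with probability 1/2 each reproduce the simple symmetric random walk in the y-coordinate.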
The textbook [12] contains a discussion of recurrence properties of random walks on the line with i.i.d. steps. In the same vein one can ask for the iterated function system IFS({f_1, f_2}) to be minimal on (0, 1). The proof of [22, Lemma 3] gives the following result.
Then the iterated function system generated by f 1 , f 2 is minimal on (0, 1). Such minimality is also implied by analogous conditions at the end point 1.
Proof. For the proof we refer to [22]. We add some comments to clarify the conditions. Il'yashenko [22, Lemma 3] considers, for x, y ∈ (0, 1), compositions f_2^l ∘ f_1^k(x) that converge to y for suitable k, l → ∞. Note that this property implies minimality. His analysis uses linearizing coordinates h ∘ f_1 ∘ h^{−1}(x) = λx with x ∈ [0, s] for an s < 1. Here h is a local diffeomorphism. The two cases where ln(λ), ln(µ) are rationally dependent or not are distinguished. In case ln(λ), ln(µ) are rationally dependent, the argument works if the second order derivative of h ∘ f_2 ∘ h^{−1} at 0 is not zero. An explicit calculation shows that this gives the condition in the proposition.
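The mechanism behind this proof can be observed numerically: compositions f_2^l ∘ f_1^k(x) fill out a fine mesh of points. A sketch with illustrative maps (multipliers 0.7 and 1.3 at the fixed point 0; whether their logarithms are rationally independent is not checked here, the sketch only probes density empirically):

```python
def f1(x):  # contracts toward 0: f1'(0) = 0.7
    return x - 0.3 * x * (1 - x)

def f2(x):  # expands away from 0: f2'(0) = 1.3
    return x + 0.3 * x * (1 - x)

def closest_approach(x, target, kmax=200, lmax=200):
    """Smallest |f2^l(f1^k(x)) - target| over 1 <= k <= kmax, 1 <= l <= lmax."""
    best = abs(x - target)
    u = x
    for _ in range(kmax):        # u runs over f1^k(x)
        u = f1(u)
        v = u
        for _ in range(lmax):    # v runs over f2^l(f1^k(x))
            v = f2(v)
            best = min(best, abs(v - target))
    return best

# compositions come close to an arbitrary target in (0, 1)
assert closest_approach(0.9, 0.5) < 0.05
```

Varying k shifts the phase of the f_2-orbit ladder, which is why the reachable points accumulate densely; this is the numerical shadow of the linearization argument above.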
Obviously, the iterated function system generated by g_1 and g_2, where g_2 = g_1^{−1}, is not minimal.

Intermingled basins
Kan [25] describes an example of a skew product system on T×I, over an expanding circle map in the base, where the boundary components T × {0} and T × {1} are attractors so that both basins intersect each open set. We will describe his results in the elementary setting of step skew product systems.
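Kan's phenomenon can be observed in simulation. A hedged sketch with illustrative maps of the form x ∓ p(x), p(x) = 0.3 x(1 − x), which have negative Lyapunov exponents at both boundaries (these maps are chosen here for convenience and are not claimed to be those of the text):

```python
import math
import random

def f1(x):  # moves left; f1'(0) = 0.7, f1'(1) = 1.3
    return x - 0.3 * x * (1 - x)

def f2(x):  # moves right; f2'(0) = 1.3, f2'(1) = 0.7
    return x + 0.3 * x * (1 - x)

# both boundary exponents equal 0.5*(ln 0.7 + ln 1.3) = 0.5*ln(0.91) < 0
L = 0.5 * (math.log(0.7) + math.log(1.3))
assert L < 0

def basin_of(x0, rng, n=2000):
    """Follow one random orbit and report which boundary it approaches."""
    x = x0
    for _ in range(n):
        x = f1(x) if rng.random() < 0.5 else f2(x)
    return 0 if x < 0.5 else 1

rng = random.Random(42)
hits = [basin_of(0.5, rng) for _ in range(200)]
# from the same initial point, different noise realizations end near
# different boundaries: both basins are charged
assert 0 < sum(hits) < len(hits)
```

In the fiber picture this reflects the intermingling: which boundary attracts a given point (ω, x) depends sensitively on the symbol sequence ω, so both basins meet every open set.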
Since the function r is positive almost everywhere, the basin of Σ^+_2 × {0} has positive standard measure. The same holds for the basin of Σ^+_2 × {1}. It is easily seen that any open set in Σ^+_2 × I intersects both basins: forward iterates must accumulate onto both Σ^+_2 × {0} and Σ^+_2 × {1}, using that the shift operator is an expansion and that 0 and 1 occur as attracting fixed points for f_1 and f_2 respectively.

Proof of Theorem 3.1. We prove the theorem by considering the inverse diffeomorphisms, i.e. a step skew product with positive Lyapunov exponents along Σ^+_2 × {0} and Σ^+_2 × {1}. For the duration of this proof, we consider F^+ ∈ S with L(0) > 0 and L(1) > 0. The following lemmas deal with this. The theorem will follow by linking the derived statements on the natural extension F of F^+ with the statements we wish to prove for its inverse.
We write P_I for the space of probability measures on I equipped with the weak star topology. As explained in Appendix A, a stationary measure is a fixed point of T : P_I → P_I given by T m = p_1 f_1 m + p_2 f_2 m, where f_i m denotes the pushforward of m under f_i.

Figure. The inverse maps give positive Lyapunov exponents at the end points. The right frame shows a time series for the iterated function system generated by f_1^{−1}, f_2^{−1}.

Proof. For small 0 < α < 1, q > 0, and positive c, let N_c be the set of probability measures m ∈ P_I with m[0, x) ≤ cx^α and m(1 − x, 1] ≤ cx^α for all 0 ≤ x ≤ q. The conditions exclude stationary measures supported on the end points 0 or 1. Note that N_c depends on α and q, but we do not include this dependence in the notation. We first show that there exist c > 0 and α, q > 0 close to 0 such that T maps N_c into itself. There is a small α > 0, determined by the positivity of the Lyapunov exponents, and a δ > 0, for which the needed estimates hold. For such δ > 0 we are able to choose a sufficiently small q = q(δ) > 0, and finally take c with cq^α > 1.
Note that this implies that in the definition of N_c, m[0, x) ≤ cx^α and m(1 − x, 1] ≤ cx^α for any 0 ≤ x ≤ 1, and not just for 0 ≤ x ≤ q. Take a measure m ∈ N_c. To prove T m ∈ N_c, we must show that for x ≤ q, T m[0, x) ≤ cx^α. Knowing that m[0, x) ≤ cx^α and applying (5), (6) we obtain the required estimates. Estimates near the right boundary point are treated in the same manner. By the Krylov-Bogolyubov averaging method, for a measure m ∈ N_c there is a subsequence of {(1/n) Σ_{r=0}^{n−1} T^r m}_{n∈N} that converges to a probability measure m̂ ∈ N_c such that T m̂ = m̂.
The following additional reasoning shows that there is an ergodic stationary measure in N_c. The set of stationary measures M_I is a convex compact subset of P_I. The ergodic stationary measures are its extreme points. Note that N_c ∩ M_I is a convex compact subset of M_I, which is itself also convex and compact. We claim that the extreme points of N_c ∩ M_I are also extreme points of M_I. Suppose by contradiction that there are n_1, n_2 ∈ M_I \ (N_c ∩ M_I) with a convex combination in N_c ∩ M_I.

Proof. We follow [4, Theorem 1.8.4]. Consider a measure µ_m and its conditional measures µ_{m,ω}. Let X_m(ω) be the smallest median of µ_{m,ω}, i.e. the infimum of all points x for which µ_{m,ω}([0, x]) ≥ 1/2. The set of medians of µ_m is a compact interval and X_m is measurable.

The following lemma shows that the set of stationary measures is the triangle consisting of convex combinations of δ_0, δ_1 and one other ergodic stationary measure m.

Proof. Suppose there are two different such stationary measures m_1, m_2. We may take m_1, m_2 to be ergodic stationary measures. This corresponds to two ergodic invariant measures µ_{m_1}, µ_{m_2} for F. By Proposition A.1 and Lemma 3.3 there are measurable functions X_{m_i}, i = 1, 2. Observe that the supports of m_1 and m_2 are invariant. The convex hulls of the supports of m_1 and m_2 therefore both equal I. We can find generic points (ω, x_1) and (ω, x_2) for m_1 and m_2. Since the diffeomorphisms f_1, f_2 are both strictly increasing, we conclude that X_{m_2}(ω) < X_{m_1}(ω), contradicting our assumption. Thus, µ_{m_1} = µ_{m_2}.

Note that this also shows that for the inverse diffeomorphisms, the union of the basins of attraction of Σ_2 × {0} and Σ_2 × {1} has full standard measure.
Let us give some pointers to further research literature: a discussion of a step skew product system over the full shift on two symbols, with piecewise linear fiber maps, is in [2]. Several articles discuss extensions to skew product systems that are not step skew product systems. We refer the reader in particular to [24,28] and [10, Section 11.1]. Further studies that quantify the phenomenon are [26,35]. See [18,22,23] for related work on so-called thick attractors (attractors of positive standard measure).

Master-slave synchronization
The proof of Theorem 3.1 relies on an analysis of step skew product systems with positive Lyapunov exponents at the boundaries. The following result further discusses such step skew product systems. It describes a synchronization phenomenon that is illustrated in the right frame of Figure 3. We see an example of master-slave synchronization, which refers to synchronization caused by external forcing. It is explained by a single attracting invariant graph for the skew product system [40]. From a slightly different perspective one can also view this as synchronization by noise, where common noise synchronizes orbits with different initial conditions.
In a general context of skew product systems F(y, x) = (g(y), f(y, x)) on a product Y × X of metric spaces Y, X, as in (2), master-slave synchronization is given by the following picture. If {(y, ξ(y)) | y ∈ Y} is a globally attracting graph, then the orbits F^n(y, x_1) and F^n(y, x_2) converge to each other. In particular, if one observes the dynamics of the x-variable, one has
lim_{n→∞} d(f^n(y, x_1), f^n(y, x_2)) = 0,
where d is a metric on X and F^n(y, x) = (g^n(y), f^n(y, x)). An illustrative example, of linear differential equations forced by the Lorenz equations, is given by Pecora and Carroll in [36]. We refer to [37] for an explanation of synchronization in a range of contexts.
The following result describes a similar effect for step skew product systems F^+ ∈ S; the proof employs a measurable invariant graph for the natural extension F of F^+.

Theorem 4.1. Let F^+ ∈ S and assume L(0) > 0 and L(1) > 0. Let x_0, y_0 ∈ (0, 1). Then for ν^+-almost all ω,
lim_{n→∞} |f^n_ω(x_0) − f^n_ω(y_0)| = 0.

Proof. The proof of Theorem 3.1 gives the existence of an invariant measurable graph ξ : Σ_2 → I so that, given any x_0 ∈ (0, 1), for ν-almost all ω,
lim_{n→∞} |f^n_{σ^{−n}ω}(x_0) − ξ(ω)| = 0.    (8)
As ν is invariant for σ, this proves that f^n_ω(x_0) converges to ξ(σ^n ω) in probability. This gives the existence of a subsequence n_k → ∞ as k → ∞ so that |f^{n_k}_ω(x_0) − ξ(σ^{n_k}ω)| → 0 for ν^+-almost all ω (see e.g. [39, Theorem II.10.5]). We have thus obtained the weaker statement that for ν^+-almost all ω, lim inf_{n→∞} |f^n_ω(x_0) − ξ(σ^n ω)| = 0. We provide a sketch of the argument showing that this also holds with the limit replacing the limit inferior, which implies the theorem.

The measure (id, ξ)ν on Σ_2 × I, with conditional measures δ_{ξ(ω)} and marginal ν, is invariant for the natural extension F : Σ_2 × I → Σ_2 × I of F^+. It corresponds to an invariant measure ν^+ × m for F^+ by Proposition A.1 and the observation that ξ(ω) depends on (ω_n)_{n=−∞}^{−1} only.

Lemma 4.1. With respect to the measure ν^+ × m, the system F^+ has a negative Lyapunov exponent.

Proof. One can follow the argument for [29, Theorem 7.1] (an analogue of [8, Theorem 4.2]). We describe the steps. A key idea is the use of the notion of relative entropy; the relative entropy h(m_1|m_2) of a probability measure m_1 on I with respect to a probability measure m_2 on I is given by h(m_1|m_2) = ∫_I ln(dm_1/dm_2) dm_1 if m_1 is absolutely continuous with respect to m_2, and h(m_1|m_2) = ∞ otherwise. The following properties hold [15].
A relation between Lyapunov exponent and relative entropy can be derived for absolutely continuous stationary measures. The argument now involves maps with absolutely continuous noise with shrinking amplitude to approximate the fiber diffeomorphisms. Such a perturbed system admits an absolutely continuous stationary measure. One uses the relation between the Lyapunov exponent and relative entropy for this absolutely continuous stationary measure and considers the limit where the noise amplitude shrinks to zero.
Let ζ be a random variable with values in [0, 1] that is uniformly distributed. For small positive values of ε, let f_{i,ζ} be the fiber diffeomorphisms perturbed by absolutely continuous noise of amplitude ε, parametrized by ζ, with corresponding stationary measure m_ε,
m_ε = p_1 ∫_0^1 f_{1,ζ} m_ε dζ + p_2 ∫_0^1 f_{2,ζ} m_ε dζ.    (9)
Note that m_ε is a fixed point of an operator T_ε, where T_ε m_ε is defined by the right hand side of (9). One can show that m_ε has a smooth density [42]. Moreover, with N_c the closed set of probability measures considered in the proof of Lemma 3.3, one has that for suitable values of α, c, q, m_ε ∈ N_c for all small positive ε. This is true since T_ε maps N_c into itself for suitable values of α, c, q. To see this, follow the proof of Lemma 3.3 with T_ε replacing T; the main calculation is analogous to (7). A similar argument can be employed near the boundary point 1, for ε small.
The Lyapunov exponent λ_ε for the stationary measure m_ε is given by
λ_ε = p_1 ∫_0^1 ∫_I ln (f_{1,ζ})'(x) dm_ε(x) dζ + p_2 ∫_0^1 ∫_I ln (f_{2,ζ})'(x) dm_ε(x) dζ.    (10)
Since m_ε has a smooth and bounded density, one can prove the relation (11) between λ_ε and the relative entropies h(f_{i,ζ} m_ε|m_ε) (see [29, Proposition 7.2]). By (i), λ_ε ≤ 0. By (ii), λ_ε = 0 if and only if f_{i,ζ} m_ε = m_ε for all ζ ∈ [0, 1] and i = 1, 2. As the latter is not possible, λ_ε < 0.

Now take the limit ε → 0. Then m_ε → m, since T_ε is continuous and depends continuously on ε (compare Lemma A.1), convergence is in N_c, and m is the unique stationary measure in N_c. From (10) one sees that λ_ε → λ as ε → 0, and we obtain λ ≤ 0. Since the relative entropy is lower semi-continuous in ε, as the supremum of continuous functionals, one finds from (11) that h(f_i m|m) = 0 for i = 1, 2 in case λ = 0. This is not possible, as by (ii) it would give f_1 m = f_2 m = m. So we have λ < 0.
Because of this lemma, for ν-almost all ω, the point ξ(ω) from (8) has a stable manifold W^s(ω) that is an open neighborhood of ξ(ω) in I. To see this one can refer to the general theory of nonuniformly hyperbolic systems as in [7], or apply reasoning as in Lemma 3.1. For each x ∈ W^s(ω), f^n_ω(x) − ξ(σ^n ω) → 0 as n → ∞. Write r_b(ω) and r_t(ω) for the infimum and the supremum of W^s(ω). Then r_b and r_t are invariant. Hence r_b > 0, ν-almost everywhere, or r_b = 0, ν-almost everywhere, and likewise r_t < 1, ν-almost everywhere, or r_t = 1, ν-almost everywhere. We will derive a contradiction from the assumption that r_t < 1 or r_b > 0, ν-almost everywhere. Assume that e.g. r_t < 1, ν-almost everywhere. Write r for the corresponding invariant graph from Lemma 3.1. As L(1) > 0, we have r(ω) < 1 for ν-almost all ω ∈ Σ_2, compare Lemma 3.1. Since the graphs of r_t and r are invariant graphs and also r_t < 1, we have r ≥ r_t > ξ, ν-almost everywhere.
Under the conditions of Theorem 4.1, the proof of Theorem 3.1 shows that
lim_{n→∞} f^n_{σ^{−n}ω}(x) = ξ(ω)
for ν-almost all ω ∈ Σ_2 and any x ∈ (0, 1). This convergence is called pullback convergence. The proof of Theorem 4.1 shows that
lim_{n→∞} |f^n_ω(x) − ξ(σ^n ω)| = 0
for ν-almost all ω ∈ Σ_2 and any x ∈ (0, 1). This convergence is called forward convergence. So in this case both pullback and forward convergence to ξ hold. In general, however, forward convergence is not a consequence of pullback convergence. The next section provides an example, involving a zero Lyapunov exponent, with pullback convergence but not forward convergence; see in particular Section 5.1. Section 6 contains a related example, related by going to the inverse skew product system, with forward convergence but not pullback convergence. We refer to [30] for more discussion of conditions for convergence in nonautonomous and skew product systems.
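The two convergence notions differ only in the order of composition: in pullback iteration f^n_{σ^{−n}ω}(x) = f_{ω_{−1}} ∘ ⋯ ∘ f_{ω_{−n}}(x), new maps are applied innermost. A sketch with illustrative maps (inverses of x ∓ 0.3x(1 − x), which have positive boundary exponents, so pullback limits exist by the above):

```python
import math
import random

def F1(v):
    """Inverse of x -> x - 0.3*x*(1-x); moves points right."""
    return (-0.7 + math.sqrt(0.49 + 1.2 * v)) / 0.6

def F2(v):
    """Inverse of x -> x + 0.3*x*(1-x); moves points left."""
    return (1.3 - math.sqrt(1.69 - 1.2 * v)) / 0.6

maps = {1: F1, 2: F2}
rng = random.Random(7)
word = [rng.choice([1, 2]) for _ in range(3000)]  # word[k] plays omega_{-1-k}

def pullback(x, n):
    """f^n_{sigma^{-n} omega}(x) = f_{omega_{-1}} o ... o f_{omega_{-n}}(x):
    the map with the most negative index is applied first."""
    for k in reversed(range(n)):
        x = maps[word[k]](x)
    return x

# two different initial points give almost the same pullback value:
# both approximate the graph value xi(omega) for this fixed omega
a, b = pullback(0.2, 3000), pullback(0.8, 3000)
assert abs(a - b) < 1e-3
```

Forward iteration would instead apply the newest map outermost; for these maps both notions converge, in line with the discussion above.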
We finish with some pointers to further literature. In [3,14,27,43] synchronization results similar to Theorem 4.1, for skew product systems with circle diffeomorphisms as fiber maps, are treated without employing negativity of Lyapunov exponents. Motivated by Lemma 4.1, for example, one may wonder about invariant measures other than those with Bernoulli measure as marginal. Reference [20] considers, in this direction, the existence of nonhyperbolic measures for step skew product systems with circle fibers.
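Both Theorem 4.1 and Lemma 4.1 can be illustrated numerically. The sketch below uses the same illustrative inverse maps as before (both boundary exponents positive); it estimates the fiber Lyapunov exponent by a Birkhoff average of log-derivatives along a random orbit, and checks that two orbits driven by the same noise synchronize.

```python
import math
import random

def F1(v):  # inverse of x - 0.3x(1-x): moves right
    return (-0.7 + math.sqrt(0.49 + 1.2 * v)) / 0.6

def F2(v):  # inverse of x + 0.3x(1-x): moves left
    return (1.3 - math.sqrt(1.69 - 1.2 * v)) / 0.6

def dF1(v):  # inverse function rule with f1'(u) = 0.7 + 0.6u
    return 1.0 / (0.7 + 0.6 * F1(v))

def dF2(v):  # inverse function rule with f2'(u) = 1.3 - 0.6u
    return 1.0 / (1.3 - 0.6 * F2(v))

rng = random.Random(3)
x, y = 0.1, 0.9
log_der_sum = 0.0
n = 20000
for _ in range(n):
    if rng.random() < 0.5:
        f, df = F1, dF1
    else:
        f, df = F2, dF2
    log_der_sum += math.log(df(x))
    x, y = f(x), f(y)   # the same symbol drives both orbits

lyap = log_der_sum / n
assert lyap < 0           # negative fiber Lyapunov exponent (Lemma 4.1)
assert abs(x - y) < 1e-6  # common noise synchronizes the orbits (Theorem 4.1)
```

The negative Birkhoff average is the empirical counterpart of Lemma 4.1; the synchronization of x and y is the empirical counterpart of Theorem 4.1.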

On-off intermittency
Intermittency in a dynamical system stands for dynamics that exhibits alternating phases of different characteristics. Typically, intermittent dynamics alternates time series close to equilibrium with bursts of global dynamics [9]. In our context, we say that a step skew product system F^+ ∈ S displays intermittency if the following holds for any sufficiently small neighborhood U of 0:
(1) For all x ∈ (0, 1) and ν^+-almost all ω ∈ Σ^+_2, the orbit spends iterates with full density in U:
lim_{n→∞} (1/n) |{0 ≤ i < n ; f^i_ω(x) ∈ U}| = 1.
(2) For all x ∈ (0, 1) and ν^+-almost all ω ∈ Σ^+_2, f^n_ω(x) ∉ U for infinitely many n.
Here, for a finite set S, we write |S| for its cardinality.
This kind of intermittency that involves a weakly unstable invariant set, here Σ^+_2 × {0} ⊂ Σ^+_2 × I, has been called on-off intermittency [21,38]. The occurrence of intermittency in iterated function systems of logistic maps with zero Lyapunov exponent at the fixed point 0 is treated in [5,6]. See also [11] for a study of specific interval diffeomorphisms over expanding circle maps.

Figure 4. The diffeomorphisms are given by (3), (4) with p(x) = (3/10) x(1 − x). The corresponding step skew product system has a zero Lyapunov exponent along Σ^+_2 × {0} and a positive Lyapunov exponent along Σ^+_2 × {1}. The right frame shows a time series for the iterated function system generated by these diffeomorphisms.
In this section we will discuss on-off intermittency for step skew product systems F^+ ∈ S. Throughout we assume that both diffeomorphisms f_1, f_2 are picked with probability 1/2. This is for convenience; we expect the more general case with probabilities p_1, p_2 to go along the same lines. The following two theorems, Theorems 5.1 and 5.2, demonstrate that F^+ ∈ S with L(0) = 0 and L(1) > 0 displays intermittency. Figure 4 illustrates a typical time series.
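On-off intermittency is easy to see in simulation. A hedged sketch in the coordinate y = ln(x/(1 − x)): the illustrative maps below translate by −1 and +1 near y = −∞ (so L(0) = 0) and by −1 and +1/2 near y = +∞ (so L(1) = (1 − 1/2)/2 > 0, a repelling boundary at x = 1); they are chosen for this sketch and are not the maps of the text.

```python
import math
import random

def h1(y):
    # in y = ln(x/(1-x)) coordinates: translation by -1
    return y - 1.0

def h2(y):
    # translation by +1 near y = -infinity, by +0.5 near y = +infinity:
    # zero Lyapunov exponent at x = 0, positive exponent at x = 1
    return y + 0.75 - 0.25 * math.tanh(y)

rng = random.Random(11)
y = 0.0                            # start at x = 1/2
n = 200_000
beta = 0.1
K = math.log(beta / (1 - beta))    # x < beta  <=>  y < K
near_zero = 0
bursts = 0
for _ in range(n):
    y = h1(y) if rng.random() < 0.5 else h2(y)
    if y < K:
        near_zero += 1             # laminar "off" phase near x = 0
    else:
        bursts += 1                # "on" phase away from x = 0

assert near_zero / n > 0.7   # most iterates lie near the neutral boundary
assert bursts >= 1           # but bursts away from 0 occur
```

The long laminar phases with occasional bursts reproduce the time series of Figure 4 qualitatively: the orbit spends iterates with full density near the "off" state at 0, yet leaves every neighborhood of 0 infinitely often.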
Lamperti, in a sequence of papers [31,32,33], developed a general theory of recurrence for nonhomogeneous random walks on the half-line. His results may be used to prove on-off intermittency in our context; see in particular [33, Theorems 3.1 and 4.1]. We will obtain it by calculating bounds on stopping times, using C^2 differentiability of the generating diffeomorphisms.
Proof. On C, z_n(ω) is constant. As further c_{ω_{n+1}} is independent of c_{ω_n}, it suffices to consider n = 0 and to prove the statement for n = 0. The zero Lyapunov exponent, L(0) = 0, implies ∫_{Σ^+_2} d_{ω_0} dν^+(ω) = 0. Using this and developing the integrand of the last expression in a Taylor expansion, one finds that there exists ū so that h(u) < 0 for u ≥ ū.
We denote y_n = ln(x_n/(1 − x_n)). For β small, K = ln(β/(1 − β)) is a large negative number. For definiteness assume x_0 ≤ β, i.e. y_0 ≤ K. Define stopping times T_0 = 0,
T_{2k+1} = inf{n ∈ N | n > T_{2k} and y_n > K},
T_{2k} = inf{n ∈ N | n > T_{2k−1} and y_n ≤ K},
see Figure 5. Let η_k and ξ_k be the duration of subsequent iterates with y_n ≤ K and the duration of subsequent iterates with y_n > K, respectively. Lemmas 5.2 and 5.4 determine bounds for the expectation of the stopping times η_k (which is shown to be infinite) and ξ_k (which is shown to be finite).
Write L = L̄/ln(d); we consider z_n on (−∞, L]. Let g > 0; g will be chosen large in the sequel. The term ln(1 + r(e^{z_n ln(d)}))/ln(d) may be bounded from above by Ce^{−g ln(d)} on intervals (−∞, L − g], for some C > 0. On (−∞, L − g] ⊂ (−∞, L] we compare the random walk z_n with the random walk (16). If z_0, . . . , z_n ∈ (−∞, L − g), then z_i ≤ v_i for all 0 ≤ i ≤ n + 1. Therefore, for each ω ∈ Σ^+_2, T_z(ω) ≥ T_v(ω). By Wald's identity, see e.g. [39, Section VII.2], (17) holds for some c > 0. Let α > 0 be such that (18) holds for L − g ≤ z_n ≤ L. Note that we may take α to be small if L is large. Consider the random walk given by (16) with initial point z_0 ∈ (L − 1, L]. Define the stopping time T_g = min{n > 0 | z_n < L − g or z_n > L}.

Lemma 5.3. For α > 0 small enough, there are c > 0 and r* = r*(α) < 0, so that the probability that the random walk leaves [L − g, L] through the left boundary point is at least c e^{r* g}.

We finish the proof of Lemma 5.2 using this lemma, and then prove Lemma 5.3. Consider the following reasoning. Start with a point z_{T_{2k}} ∈ (L − 1, L]. Then some iterate of z_{T_{2k}} will have left [L − g, L], either through the right boundary point L or, with probability determined by Lemma 5.3, through the left boundary point L − g. In the latter case there will be a return time to [L − g, L] after which a further iterate may leave through the right boundary point L. Consequently, combining (17) and Lemma 5.3, (19) holds for some c > 0. For L sufficiently large, α is small enough to ensure e^{r*} e^{ln(d)} > 1, because r*(α) → 0 as α → 0. Then the right hand side of (19) goes to infinity as g → ∞. This concludes the proof of Lemma 5.2.
One can check that this equation has a unique solution r* < 0 with r* → 0 as α → 0. Now G_n = e^{r* u_n} is a martingale. This gives ∫_{Σ^+_2} e^{r* u_{T_g}} dν^+(ω) = e^{r* u_0}.
Similar arguments prove the following lemma.
Lemma 5.4. ξ_k has finite expectation.
Proof. Recall that x_{n+1} = f_{ω_n}(x_n) on I is conjugate to y_{n+1} = h_{ω_n}(y_n) on R through y_n = ln(x_n/(1 − x_n)). We split iterates of y_n in [K, ∞) into two sets, namely iterates in [K, K̄] and iterates in (K̄, ∞), for some positive and large K̄. Near x = 1, write f_i(x) = 1 − a_i(1 − x)(1 + r_i(1 − x)) with a_i > 0 and r_i(u) = O(u), u → 0. The positive Lyapunov exponent condition L(1) > 0 means that ln(a_1) + ln(a_2) > 0. Calculate y_{n+1} in terms of y_n, where 1 − x_n = 1/(1 + e^{y_n}). From this expression it is easily seen that for any ε > 0 one can pick K̄ large, so that for y_n > K̄, y_{n+1} ≤ y_n − ln(a_{ω_n}) + ε.
Pick ε small enough so that −ln(a_1) − ln(a_2) + 2ε < 0. For z_0 ∈ (K̄, h_2(K̄)] and z_{n+1} = h_{ω_n}(z_n), let T_{K̄} = min{n ∈ N | z_n ≤ K̄} be the stopping time to leave (K̄, ∞). As in the proof of Lemma 5.2 one shows that the expectation of T_{K̄} is finite. To provide the argument, consider the random walk u_{n+1} = u_n − ln(a_{ω_n}) + ε starting at u_0 = z_0 and let T_u = min{n ∈ N | u_n ≤ K̄}.
It follows from the construction that, as the sequence (ω_i)_{i=0}^∞ is independent and identically distributed, also (ρ_i)_{i=0}^∞ is independent and identically distributed with the same distribution: probability 1/2 for both symbols 1, 2. Because f_1(x) < x, we find h_1(y) < y and thus that there is a number l < 0 with h_1(y) < y + l for y ∈ [K, K̄]. Hence, for any y ∈ [K, K̄] and N = (K − K̄)/l we will have h_1^N(y) < K. The stopping time T_K is therefore smaller than the stopping time min{n ∈ N | ρ_i = 1 for n − N < i ≤ n}. Note that the expected number of throws of symbols 1, 2 that lead to N consecutive 1's is finite. (In fact it equals 2^{N+1} − 2. It is easily bounded by N times the expectation of the first number j so that ω_i = 1 for jN ≤ i < j(N + 1); the latter has a geometric distribution with expectation 2^N.) So the expectation of the stopping time T_K is finite. Finally we combine (22) and (23) to conclude that ξ_k has finite expectation. This proves Lemma 5.4.
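The parenthetical expected value 2^{N+1} − 2 for the waiting time until N consecutive 1's can be checked by a quick Monte Carlo experiment:

```python
import random

def waiting_time(N, rng):
    """Number of fair coin flips until the first run of N consecutive 1's."""
    run = 0
    flips = 0
    while run < N:
        flips += 1
        if rng.random() < 0.5:
            run += 1
        else:
            run = 0
    return flips

rng = random.Random(5)
N = 3
trials = 100_000
mean = sum(waiting_time(N, rng) for _ in range(trials)) / trials
# expected waiting time for N consecutive 1's is 2^(N+1) - 2 = 14 for N = 3
assert abs(mean - 14.0) < 0.5
```

The empirical mean is close to 14, in agreement with the formula; in particular the expectation is finite, which is all the proof above needs.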
The next theorem is an immediate consequence of Theorem 5.2.

Proof. We will only treat the case L(0) = 0 and L(1) > 0. Suppose there is an ergodic stationary measure m with support in (0, 1). By Lemma A.2, ν^+ × m is an ergodic invariant measure for F^+. By Birkhoff's ergodic theorem, (24) holds for ν^+ × m-almost every (ω, x). By Fubini's theorem, there is a subset of I of full m-measure, so that in any Σ^+_2 × {x} with x from this subset, there is a set of full ν^+-measure for which (24) holds. This however contradicts (14), since (14) holds for all β > 0 and applies to all x ∈ I.
The type of reasoning used to prove Theorem 5.2 can be used to obtain the following result on iterated function systems with zero Lyapunov exponents at both end points.
Theorem 5.4. Consider F^+ ∈ S and assume L(0) = L(1) = 0. Let 0 < β < 1 and x_0 ∈ (0, 1). Then for ν^+-almost every ω ∈ Σ^+_2, the orbit spends iterates with full density near the union of the boundaries:
lim_{n→∞} (1/n) |{0 ≤ i < n ; f^i_ω(x_0) ∈ [0, β) ∪ (1 − β, 1]}| = 1.
Figure 1 illustrates a time series of the symmetric random walk, to which this theorem applies.

5.1. Pullback convergence. Theorem 5.1 implies that forward convergence of f^n_ω(x) to 0 does not hold: it is not true that for ν^+-almost all ω ∈ Σ^+_2, f^n_ω(x) → 0 as n → ∞. The next result stipulates that pullback convergence to 0 does hold. See also [4, Section 9.3.4] for a related example where pullback convergence does not imply forward convergence, in a context of stochastic differential equations.
Proof. We reformulate the theorem as the following equivalent statement: for ν-almost all ω ∈ Σ_2 and for all y ∈ (0, 1), (26) holds. Equivalence of the statements (25) and (26) follows from the monotonicity of the interval diffeomorphisms: f^{−n}_ω(y) > x precisely if f^n_{σ^{−n}ω}(x) < y, and thus (27) holds for ε_1, ε_2 small positive numbers. As L(1) > 0, by Lemma 3.1 we know that u exists and u < 1, ν-almost everywhere.
Since u is invariant we get that either u > 0, ν-almost everywhere, or u = 0, ν-almost everywhere. Assume that u is not identically 0. The measure µ = (id, u)ν on Σ_2 × I, with conditional measures δ_{u(ω)} and marginal ν on Σ_2, defines an invariant measure for F. Denote by Π the natural projection Σ_2 × I → Σ^+_2 × I, where Σ_2 = Σ^−_2 × Σ^+_2. Expression (27) gives that u(ω) depends on the past ω^− = (ω_i)_{i=−∞}^{−1} only. Therefore, the measure µ is a product measure ν^+ × ϑ on Σ^+_2 × (Σ^−_2 × I). The projection Πµ is therefore a product measure ν^+ × m on Σ^+_2 × I. That is, µ corresponds to an invariant measure ν^+ × m for F^+, see Proposition A.1. Here m is a stationary measure by Lemma A.2. Since 0 < u < 1, ν-almost everywhere, m assigns positive measure to (0, 1). By Theorem 5.3, the only stationary measures are convex combinations of delta measures at 0 and 1. We have obtained a contradiction and proven (26), and hence the theorem.

5.2. Central limit theorem. Under the assumptions of Theorem 5.5, its conclusion that f^n_{σ^{−n}ω}(x) → 0 for ν-almost all ω implies that f^n_{σ^{−n}ω}(x) converges to 0 in probability. By σ-invariance of ν, f^n_ω(x) converges to 0 in probability. Hence, for any a ∈ (0, 1), ν^+({ω ; f^n_ω(x) > a}) → 0 as n → ∞. We state a central limit theorem that gives convergence of the distribution of the points f^n_ω(x), after an appropriate scaling. The proof is essentially contained in [32], where a central limit theorem for Markov processes on the half-line is stated.
Lemma 5.6. The moments of z_n^2/n satisfy

Proof. It suffices to follow the proofs of [32, Lemma 2.1] and [32, Lemma 2.2]. The proofs in [32] use that the process is null, that is, for any compact interval I ⊂ R,

lim_{n→∞} P(z_n ∈ I) = 0.

As in [32, Theorem 2.1], Lemma 5.6 implies that z_n^2/n has a limiting distribution as n → ∞. Plugging in y_n = −ln(x_n) + ln(x) gives the statement of the theorem.
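The scaling y_n/√n can be probed by Monte Carlo simulation. The maps below are the same kind of hypothetical concrete choices (L(0) = 0, L(1) > 0; not taken from the text); the script only collects an empirical sample of y_n/√n at a fixed large n, which can then be compared with the limiting distribution.

```python
import math
import random

# Hypothetical diffeomorphisms with L(0) = 0 and L(1) > 0 (choices of ours):
def f1(x):
    return x + x * (1.0 - x) * (1.0 - 2.0 * x)   # f1'(0) = 2, f1'(1) = 2

def f2(x):
    return x - 0.5 * x * (1.0 - x) ** 2          # f2'(0) = 1/2, f2'(1) = 1

def scaled_sample(n, x0=0.5, rng=random):
    # One sample of y_n / sqrt(n), with y_n = -ln(x_n) + ln(x0).
    x = x0
    for _ in range(n):
        x = f1(x) if rng.random() < 0.5 else f2(x)
    return (-math.log(x) + math.log(x0)) / math.sqrt(n)

rng = random.Random(3)
samples = [scaled_sample(2000, rng=rng) for _ in range(500)]
mean = sum(samples) / len(samples)
```

A histogram of `samples` gives an empirical picture of the limiting distribution at finite n; larger n and more samples sharpen it.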

6. Random walk with drift
The material in the previous sections treats all possible combinations of signs of L(0) and L(1) except the case where L(0) ≥ 0 and L(1) < 0 (or vice versa). We treat the remaining case in the following result.

Theorem 6.1. Consider F^+ ∈ S and assume L(0) ≥ 0 and L(1) < 0. Let x_0 ∈ (0, 1). Then for ν^+-almost every ω ∈ Σ_2^+,

lim_{n→∞} f^n_ω(x_0) = 1.

Proof. Apply the proof of Theorem 5.5 to the inverse skew product system F^{−1}.
Theorem 6.1 establishes forward convergence of f^n_ω(x_0) to 1 under the given assumptions. Consider F^+ ∈ S with L(0) > 0 and L(1) < 0. Then there is also pullback convergence to 1: for ν-almost all ω ∈ Σ_2,

lim_{n→∞} f^n_{σ^{−n}ω}(x) = 1 for all x ∈ (0, 1).

It follows from the results in Section 5, again by going to the inverse skew product system, that such pullback convergence does not hold in the case L(0) = 0, L(1) < 0. See [13] for considerations on forward versus pullback convergence in an example of random circle dynamics.
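A quick simulation illustrates Theorem 6.1; the maps and parameters below are hypothetical choices of ours. For the Möbius diffeomorphisms f_a(x) = ax/(1 + (a − 1)x), taking a = 2 with probability 0.7 and a = 1/2 with probability 0.3 gives L(0) = 0.4 ln 2 > 0 and L(1) = −0.4 ln 2 < 0, so forward orbits should drift to the attracting boundary 1.

```python
import random

def f(a, x):
    # Moebius diffeomorphism of [0, 1] (hypothetical choice): f_a'(0) = a, f_a'(1) = 1/a.
    return a * x / (1.0 + (a - 1.0) * x)

# L(0) = 0.7 ln 2 + 0.3 ln(1/2) = 0.4 ln 2 > 0 (repelling at 0),
# L(1) = -0.4 ln 2 < 0 (attracting at 1): Theorem 6.1 gives f^n_omega(x0) -> 1.
rng = random.Random(11)
x = 0.5
for _ in range(2000):
    x = f(2.0, x) if rng.random() < 0.7 else f(0.5, x)
# x is now very close to 1 (it may even round to exactly 1.0 in double precision).
```

Swapping the probabilities reverses the drift, sending orbits to the boundary 0 instead.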
Appendix A. Invariant measures for step skew product systems

An iterated function system defines a Markov process and as such may admit stationary measures. Their relation with invariant measures for the corresponding one-sided skew product system and its natural extension, the two-sided skew product system, is explored in this section. This is classical material, originating from Furstenberg [17]. A general account of the constructions is found in [4]. We provide a simplified discussion tailored to the setting of step skew product systems over shifts. The reader may also consult the exposition in [41, Chapter 5].
Assume the context from Section 2: consider Ω = {1, . . . , N} and the family of diffeomorphisms F = {f_1, . . . , f_N} on M. We pick f_i with probability p_i, where 0 < p_i < 1 and Σ_{i=1}^N p_i = 1. We endow Σ_N with the Borel sigma-algebra, denoted by F. Likewise we take the Borel sigma-algebra F^+ on Σ_N^+. Given the probabilities p_i, we take the Bernoulli measure ν on Σ_N, which is determined by its values on cylinders.

As this holds for any ε, we get that f_{i,n} m converges to f_i m. This argument shows that T depends continuously on f_1, . . . , f_N; continuous dependence on the parameters p_1, . . . , p_N is clear.
The same type of argument shows that the map T_ε, appearing in the proof of Theorem 4.1, is continuous and changes continuously with ε. The set of fixed points of T changes upper semi-continuously in the Hausdorff metric as the parameters p_1, . . . , p_N and f_1, . . . , f_N are varied. So if m is the unique fixed point of T, T_ε → T as ε → 0, and T_ε m_ε = m_ε, then m_ε → m as ε → 0.
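Fixed points of T, i.e. stationary measures, can be approximated by the long-run empirical distribution of the Markov chain x_{n+1} = f_{ω_n}(x_n). The diffeomorphisms below are hypothetical choices of ours with f_1'(0) = 3, f_1'(1) = 1/2 and f_2 the mirror image of f_1, so that L(0) = L(1) = (ln 1.5)/2 > 0 and, in line with the synchronization regime, one expects stationary mass in the interior.

```python
import random

# Hypothetical diffeomorphisms of [0, 1] with both boundaries repelling:
def f1(x):
    return x + x * (1.0 - x) * (2.0 - 1.5 * x)   # f1'(0) = 3, f1'(1) = 1/2

def f2(x):
    return 1.0 - f1(1.0 - x)                     # mirror image: f2'(0) = 1/2, f2'(1) = 3

# The long-run empirical distribution of the chain approximates a stationary measure m.
rng = random.Random(5)
x = 0.5
burn, n = 1000, 50000
hist = [0] * 10                                  # ten equal bins on [0, 1]
for k in range(burn + n):
    x = f1(x) if rng.random() < 0.5 else f2(x)
    if k >= burn:
        hist[min(int(10.0 * x), 9)] += 1
weights = [h / n for h in hist]                  # approximate bin masses of m
```

Refining the bins and lengthening the run sharpens the approximation; by the continuity discussion above, the approximate fixed point also varies continuously under small perturbations of the maps and probabilities.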
Lemma A.2. A probability measure m is a stationary measure if and only if µ^+ = ν^+ × m is an invariant measure of F^+ with marginal ν^+ on Σ_N^+.
Proof. Consider the following calculation for product sets C × B ⊂ Σ_N^+ × M of a cylinder C = C^{0,...,n−1}_{i_0,...,i_{n−1}} and a Borel set B: If m is a stationary measure, then the last expression equals ν^+(C) m(B) = ν^+ × m(C × B), so that F^+(ν^+ × m)(C × B) = ν^+ × m(C × B). Since the product sets generate the σ-algebra, this proves F^+-invariance of ν^+ × m. Conversely, if ν^+ × m is F^+-invariant, then the last expression equals ν^+ × m(C × B) = ν^+(C) m(B), and it follows that m is stationary.

Let m be a stationary measure on M. We say that m is ergodic if ν^+ × m is ergodic for F^+. A point (ω, x) is said to be a generic point for an ergodic measure ν^+ × m if the orbit is distributed according to the measure.
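The stationarity condition in Lemma A.2 can be checked by hand for finitely supported measures, since the Markov operator m ↦ Σ_i p_i (f_i)_∗ m then returns a finite sum of point masses. With interval diffeomorphisms fixing the boundary points (the maps below are hypothetical choices of ours), the delta measures at the fixed boundary points are stationary, while a point mass in the interior generally is not.

```python
# Hypothetical interval diffeomorphisms fixing 0 and 1 (choices of ours):
def f1(x):
    return x + x * (1.0 - x) * (1.0 - 2.0 * x)

def f2(x):
    return x - 0.5 * x * (1.0 - x) ** 2

def markov_operator(m, p1=0.5):
    # (T m)(B) = p1 * m(f1^{-1} B) + (1 - p1) * m(f2^{-1} B),
    # evaluated exactly for a finitely supported measure m: {point: weight}.
    out = {}
    for x, w in m.items():
        for fi, p in ((f1, p1), (f2, 1.0 - p1)):
            y = fi(x)
            out[y] = out.get(y, 0.0) + p * w
    return out

print(markov_operator({0.0: 1.0}))   # {0.0: 1.0}: delta_0 is stationary
print(markov_operator({1.0: 1.0}))   # {1.0: 1.0}: delta_1 is stationary
print(markov_operator({0.5: 1.0}))   # splits mass: delta_{1/2} is not stationary
```

This mirrors the computation in the proof of Lemma A.2: T preserves total mass, and a measure is stationary precisely when T leaves it unchanged.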
Write π : Σ_N → Σ_N^+ for the natural projection (ω_n)_{n=−∞}^{∞} → (ω_n)_{n=0}^{∞}. The Borel sigma-algebra F^+ on Σ_N^+ yields a sigma-algebra F_0 = π^{−1} F^+ on Σ_N. A measure µ on Σ_N × M with marginal ν has conditional measures µ_ω on the fibers {ω} × M. A measure µ^+ on Σ_N^+ × M with marginal ν^+ likewise has conditional measures µ^+_ω. It is convenient to consider ν^+ as a measure on Σ_N with sigma-algebra F_0, and µ^+ as a measure on Σ_N × M with sigma-algebra F_0 ⊗ B. When ω ∈ Σ_N we will write µ^+_ω for the conditional measure µ^+_{πω}. The spaces of measures are equipped with the weak-star topology.
Invariant measures for F^+ with marginal ν^+ correspond to invariant measures for F with marginal ν in a one-to-one relationship, as detailed in Proposition A.1 below. This is a special case of [4, Theorem 1.7.2]. The result implies that stationary measures correspond one-to-one to specific invariant measures for F with marginal ν.
Remark A.1. Consider µ^+ as a measure on Σ_N × M with sigma-algebra F_0 ⊗ B.
Remark A.2. From the characterization of ergodic probability measures as extremal points in the set of invariant probability measures, the one-to-one correspondence µ ↔ µ^+ implies that µ is ergodic if and only if µ^+ is ergodic.
Proof of Proposition A.1. Note that F_s = σ^s F_0 are sigma-algebras on Σ_N with F_s ↑ F. For a Borel set B ⊂ M, define υ_t(ω) = f^t_{σ^{−t}ω} µ^+_{σ^{−t}ω}(B).
Here (1) and (5) are by the definition of υ_t, (2) and (4) are by σ-invariance of ν, and (3) is by F^+-invariance of µ^+ (see Lemma A.3 and Corollary A.1 below for a derivation). The above calculation shows that υ_t is a martingale with respect to the filtration F_t. Hence, by the martingale convergence theorem, the limit lim_{t→∞} υ_t(ω) exists for ν-almost all ω. By the Vitali-Hahn-Saks theorem, see [16, Theorem III.10], the limit for varying Borel sets B defines a measure µ_ω. To obtain that the resulting measure µ is F-invariant, we refer to Remark A.1. Since F acts continuously on the space of probability measures, the limit lim_{n→∞} F^n(µ^+) is F-invariant.
Corollary A.1. The lemma implies that for A_0 ∈ F_0, B ∈ B, and 0 ≤ s ≤ t, Note that for the natural extension, F-invariance of µ means f^t_{σ^{s−t}ω} µ_{σ^{s−t}ω} = f^s_ω µ_ω for 0 ≤ s ≤ t and for ν-almost all ω ∈ Σ_N.