Isentropes and Lyapunov exponents

We consider skew tent maps $T_{\alpha,\beta}(x)$ such that $(\alpha,\beta)\in[0,1]^{2}$ is the turning point of $T_{\alpha,\beta}$, that is, $T_{\alpha,\beta}(x)=\frac{\beta}{\alpha}x$ for $0\leq x \leq \alpha$ and $T_{\alpha,\beta}(x)=\frac{\beta}{1-\alpha}(1-x)$ for $\alpha<x\leq 1$. We denote by $\underline{M}=K(\alpha,\beta)$ the kneading sequence of $T_{\alpha,\beta}$, by $h(\alpha,\beta)$ its topological entropy, and by $\Lambda=\Lambda_{\alpha,\beta}$ its Lyapunov exponent. For a given kneading sequence $\underline{M}$ we consider isentropes (also called equi-topological entropy or equi-kneading curves) $(\alpha,\Psi_{\underline{M}}(\alpha))$ such that $K(\alpha,\Psi_{\underline{M}}(\alpha))=\underline{M}$. On these curves the topological entropy $h(\alpha,\Psi_{\underline{M}}(\alpha))$ is constant. We show that $\Psi_{\underline{M}}'(\alpha)$ exists and that the Lyapunov exponent $\Lambda_{\alpha,\beta}$ can be expressed by using the slope of the tangent to the isentrope. Since the latter can be computed from the partial derivatives of an auxiliary function $\Theta_{\underline{M}}$, a series depending on the kneading sequence which converges at an exponential rate, this provides an efficient new method for finding the value of the Lyapunov exponent of these maps.

The first listed author was supported by the Hungarian National Foundation for Scientific Research Grant 124003. During the preparation of this paper this author was a visiting researcher at the Rényi Institute.
The second listed author was supported by the Hungarian National Foundation for Scientific Research Grant 124749.
On the bottom part of the figure one can see the first few entries of the kneading sequence. To visualize the isentrope the computer plotted in black those pixels which correspond to parameter values with a similar initial segment of the kneading sequence. To obtain a region which is not too thick, the length of this initial segment depends on the parameter region. For example, on the left half of Figure 2 there is a thicker region, which can be made thinner by considering longer initial segments. However, if the initial segment is too long, the computer does not find enough pixels from the given equi-kneading region; see for example the right half of Figure 1, where close to the upper left corner of the unit square the plot is too thin. We will see in this paper that the isentropes $(\alpha,\Psi_{\underline{M}}(\alpha))$ are continuously differentiable curves. What we found really interesting is that the derivatives of these curves can be used to compute the Lyapunov exponents of the skew tent maps $T_{\alpha,\beta}$.
To study equi-topological entropy, or equi-kneading curves in the region $U$, in [4] we introduced the auxiliary functions $\Theta_{\underline{M}}$. Suppose that we have a given kneading sequence $\underline{M}$. Here $\underline{M}=\underline{M}^{-}$ if the turning point is not periodic, that is, $T^{k}_{\alpha,\beta}(\alpha)\neq\alpha$ for all $k\in\mathbb{N}$. In this case there is no $C$ in $\underline{M}$. The set of such kneading sequences is denoted by $\mathcal{M}_{\infty}$. The cases when the turning point is periodic, that is, when $C$ appears in $\underline{M}$, will play a very important role in this paper. The set of these kneading sequences is denoted by $\mathcal{M}_{<\infty}$. These are the ones ending with $C$. In this case $\underline{M}^{-}$ can be defined in many ways. One such way was discussed in [4]. However, for our purposes the following construction works: concatenate $\underline{M}$ with itself infinitely many times, then in the resulting right-infinite periodic sequence replace the $C$s in an arbitrary manner with $R$s and $L$s.
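The periodic-extension recipe above can be sketched in code. This is a minimal illustration under our own conventions: the finite kneading sequence is a string over $\{L,R,C\}$ ending in $C$, the function name is ours, and we fix the arbitrary choice of always replacing $C$ by $R$.

```python
def m_minus_prefix(M, n):
    """First n symbols of one admissible choice of M^-: repeat the finite
    kneading sequence M (which ends in 'C') periodically, replacing every
    'C' by 'R' (any fixed choice of 'R'/'L' per occurrence is allowed)."""
    assert M.endswith('C'), "M is expected to end with the symbol C"
    out = []
    i = 0
    while len(out) < n:
        c = M[i % len(M)]
        out.append('R' if c == 'C' else c)
        i += 1
    return ''.join(out)
```

For example, `m_minus_prefix("RLC", 7)` repeats `RLC` and replaces the `C`s, giving `"RLRRLRR"`.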
The other approach is to estimate $\Psi'_{\underline{M}}(\alpha)$ via the Lyapunov exponents. For the skew tent map $T_{\alpha,\beta}$, $(\alpha,\beta)\in U$, there is a unique ergodic acim $\mu_{\alpha,\beta}=\mu$, that is, a measure absolutely continuous with respect to the Lebesgue measure $\lambda$. Its density $f$ is an invariant function, a fixed point of the Frobenius-Perron operator $P_{\alpha,\beta}$, that is, $P_{\alpha,\beta}f=f$. By Birkhoff's ergodic theorem the Lyapunov exponent equals
$$\Lambda_{\alpha,\beta}=\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\log|T'_{\alpha,\beta}(T^{n}_{\alpha,\beta}(x))|\quad\text{for }\mu\text{-a.e. }x.$$
In case of skew tent maps $|T'_{\alpha,\beta}|$ takes only the two values $\beta/\alpha$ on $[0,\alpha]$ and $\beta/(1-\alpha)$ on $(\alpha,1]$.

Figure 2. More tangents to isentropes computed from $\gamma$ and from $\Theta_{\underline{M}}$.

Hence to estimate the Lyapunov exponent we need to estimate $\gamma$. This is usually done by using a computer program: for a sufficiently large $N$ and a "randomly" selected $x$ one computes the sum in (6). We have done this as well in our computer simulations. It turned out that $N=200000$ was sufficiently large to obtain a reasonably good estimate for $\gamma$. In Table 1 there is a column $\gamma$ containing these estimates for the randomly selected parameter values. The main result of this paper is the fact that $\gamma$, and hence $\Lambda_{\alpha,\beta}$, can be expressed by using $\Psi'_{\underline{M}}(\alpha)$; we show this in Proposition 10 and in Theorem 13. Since $\Psi'_{\underline{M}}(\alpha)$ can be calculated by (4) using (5), (6) and (7), we can calculate the Lyapunov exponent for any $T_{\alpha,\beta}$ with $(\alpha,\beta)\in U$. To illustrate the connection between $\Psi'_{\underline{M}}(\alpha)$ and $\Lambda_{\alpha,\beta}$, or $\gamma$, in our computer simulations we followed a reverse approach. This means that the computer program calculated an estimate of $\gamma$ (and hence of $\Lambda$) and this estimate was used for calculating the slope of an approximate tangent to the isentrope. As the images show, this method, based on (7), works: the approximate tangents really seem to be tangent to the isentrope. In Table 1 there is a column labeled $\Psi'_{\underline{M}}$-$\gamma$ which contains the estimates we obtained for $\Psi'_{\underline{M}}(\alpha)$ by using the estimate for $\gamma$ based on (6). As one can see, the estimates we obtained for $\Psi'_{\underline{M}}(\alpha)$ by using the $\Theta_{\underline{M}}$ function in (4), listed in the column labeled $\Psi'_{\underline{M}}$-$\Theta$, are quite close to the ones obtained by using $\gamma$.
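The Monte-Carlo estimation of $\gamma$ and of the Lyapunov exponent described above can be sketched as follows. This is a minimal illustration, not our production code: the function names, the seed, and the burn-in are our choices; it uses the fact that $|T'_{\alpha,\beta}|$ equals $\beta/\alpha$ on $[0,\alpha]$ and $\beta/(1-\alpha)$ on $(\alpha,1]$.

```python
import math
import random

def skew_tent(x, a, b):
    """Skew tent map T_{a,b} with turning point (a, b)."""
    return b * x / a if x <= a else b * (1.0 - x) / (1.0 - a)

def estimate_gamma_and_lyapunov(a, b, n_iter=200_000, burn_in=1_000, seed=0):
    """Estimate gamma = mu([0, a]) and the Lyapunov exponent via a
    Birkhoff average along the orbit of a randomly chosen point."""
    rng = random.Random(seed)
    x = rng.random()
    for _ in range(burn_in):              # discard the transient
        x = skew_tent(x, a, b)
    hits_left, log_sum = 0, 0.0
    for _ in range(n_iter):
        if x <= a:                        # |T'| = b/a on [0, a]
            hits_left += 1
            log_sum += math.log(b / a)
        else:                             # |T'| = b/(1-a) on (a, 1]
            log_sum += math.log(b / (1.0 - a))
        x = skew_tent(x, a, b)
    return hits_left / n_iter, log_sum / n_iter
```

By construction the two returned values satisfy $\Lambda\approx\gamma\log(\beta/\alpha)+(1-\gamma)\log(\beta/(1-\alpha))$, the splitting of the Birkhoff sum used in the text.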
In Figures 1 and 2 we plotted both approximate tangents to the isentropes, the one calculated from $\gamma$ and the one calculated from $\Theta_{\underline{M}}$. In the color pdf version of the paper the first approximate tangent is in red and the second is in blue. If only one, the red, tangent is visible, then the two approximate tangents are on top of each other. It is also visible that they are indeed "tangent" to the isentrope. On the right half of Figure 2 the two approximate tangents are not exactly on top of each other. This is due to the fact that for the parameter values $\alpha=0.49$ and $\beta=0.56$ both $\alpha/\beta$ and $(1-\alpha)/\beta$ are close to one and the convergence of the series giving the partial derivatives of $\Theta_{\underline{M}}$ is slower. To get a better estimate one needs to consider more than the first 200 entries of the kneading sequence. In this figure the tiny black region corresponding to the equi-kneading region is almost completely covered by the blue and red approximate tangents. We would like to emphasize that our new method based on $\Theta_{\underline{M}}$, even if the number of iterates is increased from 200 to a larger number, still requires far fewer iterates than the other method, which needed 1000 times more iterates for about the same accuracy.
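As a purely numerical cross-check of the tangent pictures, one can also approximate $\Psi'_{\underline{M}}(\alpha)$ directly by a finite difference along the isentrope: at $\alpha+\Delta\alpha$, bisect in $\beta$ for a map whose kneading prefix matches. This sketch is ours, not a method used in the paper; it assumes monotonicity of the kneading sequence in $\beta$ with respect to the parity-lexicographic ordering $\prec$, and it ignores the measure-zero event of landing exactly on the turning point.

```python
def itinerary(a, b, n):
    """First n L/R symbols of the kneading sequence of T_{a,b}; the
    measure-zero event of hitting the turning point exactly is ignored."""
    x, out = a, []
    for _ in range(n):
        x = b * x / a if x <= a else b * (1.0 - x) / (1.0 - a)
        out.append('L' if x < a else 'R')
    return ''.join(out)

def plex_less(s, t):
    """Parity-lexicographic order: at the first difference the usual order
    L < R is used if the number of preceding R's in s is even, else reversed."""
    for i, (u, v) in enumerate(zip(s, t)):
        if u != v:
            even = s[:i].count('R') % 2 == 0
            return (u == 'L') if even else (u == 'R')
    return False

def isentrope_slope(a, b, da=1e-3, n_sym=60, steps=200):
    """Bisect in beta at alpha + da for a map with the same kneading prefix,
    and return the finite-difference slope (beta2 - beta) / da."""
    target = itinerary(a, b, n_sym)
    lo, hi = b - 0.05, b + 0.05
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        cur = itinerary(a + da, mid, n_sym)
        if cur == target:
            return (mid - b) / da
        if plex_less(cur, target):   # kneading too small -> beta too small
            lo = mid
        else:
            hi = mid
    return (0.5 * (lo + hi) - b) / da
```

The accuracy is limited by the width of the finite-prefix equi-kneading region, which is exactly the thickness issue of the black regions discussed above.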
Finally, there is one more illustration showing that there is indeed a link between $\gamma$ and $\Psi'_{\underline{M}}(\alpha)$. In Figure 3 the color of the pixels in $U$ was calculated based on the first 10 entries of the kneading sequence. Hence equi-kneading regions containing isentropes are of the same color (modulo screen/pixel resolution). We also plotted three skew tent maps with three different colors and the approximate tangent lines computed by using $\gamma$ from (6) substituted into (7).
As far as we know, in the literature there were two ways to estimate/approximate Lyapunov exponents of skew tent maps. One method is based on computer programs approximating $\gamma$, or the acim, or its density, as we also did in some calculations for our illustrations. In [2], for the Markov case, a histogram of the distribution of the location in the Markov partition of the first 50000 iterates of a "generic" point is used to approximate the piecewise constant invariant density function of the acim. Here again a rather high number of iterates was used. In [7] a central limit theorem is discussed for the convergence in (6). The other method, discussed in [2], is based on the fact that if $K(\alpha,\beta)\in\mathcal{M}_{<\infty}$, that is, when the turning point is periodic for $T_{\alpha,\beta}$, then there is a Markov partition for $T_{\alpha,\beta}$. Based on the Markov partition one can obtain a system of linear equations, and the solution of this system gives us the invariant density function $f_{\alpha,\beta}$ of the acim $\mu_{\alpha,\beta}$ of $T_{\alpha,\beta}$. Then $\gamma=\mu_{\alpha,\beta}([0,\alpha])$. (In [2] a different parametrization and notation was used, but we translated it to our notation.) The drawback of this calculation is that the number of equations equals the number of elements in the Markov partition. If $K(\alpha,\beta)\in\mathcal{M}_{\infty}$ then there is no Markov partition, but isentropes corresponding to skew tent maps with Markov partitions are dense in $U$. It was remarked in [2] that in this case one can also approximate the invariant density by invariant densities of Markov skew tent maps. In this case the number of elements in the Markov partitions of these approximating maps tends to infinity, making it more and more difficult to solve the system of linear equations. It also seems to us that Theorem 10.3.2 from [3] was used in an incorrect way in [2]. By this we mean that the way these Markov skew tent maps approximate the non-Markov one does not satisfy the exact assumptions of Theorem 10.3.2 in [3].
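The "linear system for the invariant density" idea can be illustrated without constructing the exact Markov partition. Here is a sketch of Ulam's method, a standard substitute (not the method of [2]): discretize the Frobenius-Perron operator on a uniform grid, estimate a bin-to-bin transition matrix, and read $\gamma$ off its stationary distribution. The bin and sample counts are arbitrary choices of ours.

```python
import numpy as np

def ulam_gamma(a, b, n_bins=400, n_samples=80, n_power=2000):
    """Approximate gamma = mu_{a,b}([0, a]) by Ulam's method: estimate a
    row-stochastic bin-to-bin transition matrix for T_{a,b} and take the
    mass of [0, a] under its stationary distribution."""
    T = lambda x: b * x / a if x <= a else b * (1.0 - x) / (1.0 - a)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        # sample points inside bin i and record which bins their images hit
        xs = (i + (np.arange(n_samples) + 0.5) / n_samples) / n_bins
        for x in xs:
            j = min(int(T(x) * n_bins), n_bins - 1)
            P[i, j] += 1.0 / n_samples
    pi = np.full(n_bins, 1.0 / n_bins)
    for _ in range(n_power):      # power iteration for the left eigenvector
        pi = pi @ P
        pi /= pi.sum()
    return float(pi[: int(a * n_bins)].sum())
```

Just as with the Markov linear systems described above, the cost grows with the resolution: refining the grid means solving for a longer stationary vector.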
Since in our paper we also need approximations of skew tent maps by other ones, in Proposition 6 we clarify the way these approximations work. For some specific Markov parameter values a central limit behavior is discussed in [10].
Properties of isentropes, especially connectedness in different families of dynamical systems, were also studied, for example, in [1], [9] and [12]. This paper is organized in the following way. In Section 2 we recall some definitions and results concerning skew tent maps and invariant densities. In Section 3 we continue to discuss some known results about absolutely continuous invariant measures and prove Proposition 6, which will be the key lemma about approximations of skew tent maps by other ones. This section concludes with some remarks about uniform Lipschitz properties of isentropes.
The most involved part of the paper is Section 4, in which we prove Proposition 10. This is a special version of the main result of the paper about the relationship between Lyapunov exponents and tangents to isentropes. In this proposition we suppose that the isentrope is differentiable at the point considered and that we work with a Markov map. In later sections, aiming towards Theorem 13, we use approximation arguments to remove the assumptions of differentiability and Markovness.
In Section 5, by using Proposition 10, we first show that isentropes are continuously differentiable for Markov skew tent maps. In this argument we use Proposition 6 and approximations of our skew tent map by other ones with the same topological entropy. Then, by using another approximation argument based on Proposition 6 and approximation of non-Markov maps by Markov maps, we generalize this result to arbitrary maps.
Finally, in Section 6 we prove Theorem 13, which is the main result of our paper. The proof is again an approximation argument, approximating non-Markov maps by Markov maps. This way we obtain the general version of Proposition 10.

Preliminaries
Kneading theory was introduced by J. Milnor and W. Thurston in [8]. For symbolic itineraries and for the kneading sequences we follow the notation of [6].
Suppose $T=T_{\alpha,\beta}$ is fixed for an $(\alpha,\beta)\in U$ and $x\in[0,1]$. The extended kneading sequence $K(\alpha,\beta)=\underline{M}^{ext}=(m_1,m_2,\dots)\in\{L,R,C\}^{\mathbb{N}}$ is defined as follows. If $T^{n}_{\alpha,\beta}(\alpha)<\alpha$ then $m_n=L$, if $T^{n}_{\alpha,\beta}(\alpha)=\alpha$ then $m_n=C$, and if $T^{n}_{\alpha,\beta}(\alpha)>\alpha$ then $m_n=R$. If there is no $C$ in $\underline{M}^{ext}$ then the kneading sequence is $K(\alpha,\beta)=\underline{M}=\underline{M}^{ext}$. If there are $C$s in $\underline{M}^{ext}$ then the kneading sequence $K(\alpha,\beta)=\underline{M}$ is the finite string obtained by stopping at the first $C$ and discarding the rest of the infinite string $\underline{M}^{ext}$.
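This definition translates directly into code. A minimal sketch (names are ours); in floating point, exact equality $T^{n}_{\alpha,\beta}(\alpha)=\alpha$ is replaced by a small tolerance.

```python
def kneading_sequence(a, b, n=20, tol=1e-12):
    """First symbols of K(a, b): compare the orbit T^k(a), k = 1, 2, ...,
    with the turning point a; stop at the first (numerical) C."""
    x, out = a, []
    for _ in range(n):
        x = b * x / a if x <= a else b * (1.0 - x) / (1.0 - a)
        if abs(x - a) < tol:
            out.append('C')       # turning point is periodic: stop here
            break
        out.append('L' if x < a else 'R')
    return ''.join(out)
```

For the full tent map $(\alpha,\beta)=(0.5,1)$ the orbit of the turning point is $1,0,0,\dots$, so the sequence starts $RLLL\dots$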
In [11] a different parametrization of skew tent maps was used, based on functions $F_{\lambda,\mu}$. In [4] we gave the explicit formula for the linear homeomorphism showing that $T_{\alpha,\beta}$ and $F_{\lambda(\alpha,\beta),\mu(\alpha,\beta)}$ are topologically conjugate. We use the notation $K(\lambda,\mu)$ for the kneading sequence of $F_{\lambda,\mu}$. In this parametrization $\mathcal{M}$ corresponds to the kneading sequences of the functions $F_{\mu,\mu}$ with $1<\mu\leq 2$.
We denote by $\prec$ the parity-lexicographical ordering of kneading sequences and symbolic itineraries; for the details see [6].
Without discussing too many details of renormalization we need to say a few words about it. The interested reader is referred to [6] or [11] for more details. For $j=0,1,\dots$ we denote by $\mathcal{M}_j$ the set of those kneading sequences $\underline{M}$ for which there exists $\beta\in(\dots)$. The kneading sequences in $\mathcal{M}_0$ correspond to the non-renormalizable case. We denote by $U_j$ the set of those $(\alpha,\beta)\in U$ for which $K(\alpha,\beta)\in\mathcal{M}_j$. In [11], $D_0$ denotes the region of those $(\lambda,\mu)\in D$ for which $\lambda>\frac{\mu}{\mu^2-1}$. This is the non-renormalizable region in the $\lambda$-$\mu$-parametrization. In [2] and [11] mainly the non-renormalizable region is considered. In Section 5 of [11] renormalization, and the way of extending the results obtained for the non-renormalizable case, is discussed. It turns out that if $K(\lambda,\mu)\in\mathcal{M}_j$ with $j\geq1$ then $F^{2}_{\lambda,\mu}$ can be restricted onto a suitable interval mapped into itself by this map. This restriction is topologically conjugate to $F_{\mu^2,\lambda\mu}$. In this paper we only use that the density of Markov maps in $U_1$, shown in [2], implies via renormalization the density of Markov maps in $U$.
We recall a corollary of Theorem C of [11] adapted to our $\alpha$-$\beta$-parametrization. For the skew tent map $T_{\alpha,\beta}$, $(\alpha,\beta)\in U$, we define the Frobenius-Perron operator by $P_{\alpha,\beta}f(x)=\sum_{y:\,T_{\alpha,\beta}(y)=x}\frac{f(y)}{|T'_{\alpha,\beta}(y)|}$, which in a more explicit form is
$$P_{\alpha,\beta}f(x)=\frac{\alpha}{\beta}f\Big(\frac{\alpha x}{\beta}\Big)+\frac{1-\alpha}{\beta}f\Big(1-\frac{(1-\alpha)x}{\beta}\Big)\quad\text{for }0\le x\le\beta,$$
and $P_{\alpha,\beta}f(x)=0$ for $\beta<x\le1$. We recall, for example from Proposition 4.2.4 of [3], the contraction property of the Frobenius-Perron operator: $\|P_{\alpha,\beta}f_1-P_{\alpha,\beta}f_2\|_1\le\|f_1-f_2\|_1$. We also recall the definition of the variation of a real function $f:[a,b]\to\mathbb{R}$: $\bigvee_{a}^{b}f=\sup\sum_{i=1}^{n}|f(a_i)-f(a_{i-1})|$, where the supremum is taken over all finite partitions $a=a_0<a_1<\dots<a_n=b$. A partition $\mathcal{P}=\{[a_0,a_1],\dots,[a_{n-1},a_n]\}$ of $[a,b]$ is Markov for $T$ if for any $i=1,\dots,n$ the transformation $T|_{(a_{i-1},a_i)}$ is a homeomorphism onto the interior of the connected union of some elements of $\mathcal{P}$, that is, onto an interval $(a_{j(i)},a_{k(i)})$.
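The explicit form of $P_{\alpha,\beta}$ can be checked numerically: a Frobenius-Perron operator preserves integrals, $\int P f=\int f$. A small sketch under our own naming; the quadrature routine is a plain midpoint rule.

```python
def fp_operator(f, a, b):
    """Frobenius-Perron operator of the skew tent map T_{a,b}:
    (Pf)(x) = sum over preimages y of x of f(y) / |T'(y)|."""
    def Pf(x):
        if x > b:                 # no preimages above the maximum value b
            return 0.0
        left = (a / b) * f(a * x / b)                         # preimage in [0, a]
        right = ((1.0 - a) / b) * f(1.0 - (1.0 - a) * x / b)  # preimage in (a, 1]
        return left + right
    return Pf

def integral(g, n=20_000):
    """Midpoint-rule approximation of the integral of g over [0, 1]."""
    return sum(g((i + 0.5) / n) for i in range(n)) / n
```

For instance, `integral(fp_operator(lambda x: 1.0, 0.49, 0.56))` is $1$ up to discretization error, since the image density $1/\beta$ on $[0,\beta]$ still integrates to one.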

Absolutely continuous invariant measures and densities for skew tent maps
We recall some definitions and results from [3], p. 96. We denote by $\mathcal{T}(I)$ the set of those transformations $T:I\to I$ which satisfy the next two properties: I. $T$ is piecewise expanding, that is, there exists a partition $\mathcal{P}=\{I_i=[a_{i-1},a_i],\ i=1,\dots,n\}$ of $I$ such that $T|_{I_i}$ is $C^1$ and $|T'(x)|\geq\alpha>1$ for any $i$ and for all $x\in(a_{i-1},a_i)$. II. $g(x)=\frac{1}{|T'(x)|}$ is a function of bounded variation, where $T'(x)$ is an appropriately calculated one-sided derivative at the endpoints of $\mathcal{P}$. For every $n\geq1$ we define $\mathcal{P}^{(n)}=\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}$. One can easily see that if $T\in\mathcal{T}(I)$ then $T^n$ is piecewise expanding on $\mathcal{P}^{(n)}$. The next theorem is about the existence of absolutely continuous invariant measures (acims); it is Theorem 5.2.1 from [3].
Theorem 3. If $T\in\mathcal{T}(I)$ then it admits an absolutely continuous invariant measure (acim) whose density is of bounded variation.
In the case of skew tent maps this acim is unique. Theorem 8.2.1 of [3] gives an upper bound on the number of distinct ergodic acims for a $T\in\mathcal{T}(I)$.
Theorem 4. Let $T\in\mathcal{T}(I)$ be defined on a partition $\mathcal{P}$. Then the number of distinct ergodic acims for $T$ is at most $\#\mathcal{P}-1$.
In our case, when $(\alpha,\beta)\in U$ and $I_0=[0,1]$, we have $\mathcal{P}=\{[0,\alpha],[\alpha,1]\}$. Since $\#\mathcal{P}=2$ we obtain that for $T_{\alpha,\beta}$ there is only one ergodic acim. Using this and the results about the spectral decomposition of the Frobenius-Perron operator in Chapter 7 of [3], one can see that invariant densities are linear combinations of densities of ergodic acims. Hence in the case of our skew tent maps the following lemma holds: Lemma 5. For every $(\alpha,\beta)\in U$ there is a unique invariant density for $T_{\alpha,\beta}$, and it is the density of the unique ergodic acim.
Using ideas from the proofs of Theorems 10.2.1 and 10.3.2 of [3] we show the next proposition. In a similar situation in [2] there is a direct reference to Theorem 10.3.2 of [3], but after a careful check it seems that this reference is not applicable in the situation of the Markov approximations in [2], nor in our case.
Next we discuss what the problem is with the direct application of Theorem 10.3.2; then, by using the ideas of the proofs of Theorems 10.2.1 and 10.3.1 of [3], we prove our Proposition 6. The main problem with the direct application in [2] of the theorems from [3] to the case of approximations by skew tent maps is the following. In the assumptions of these theorems, given a piecewise expanding transformation $T:I\to I$, a family $\{T_n\}_{n\geq1}$ of approximating Markov transformations associated with $T$ is considered.
Assume $Q^{(0)}$ denotes the endpoints of the intervals belonging to $\mathcal{P}^{(0)}$, where $\mathcal{P}^{(0)}$ is a partition such that $T$ is $C^1$ and expanding on the partition intervals of $\mathcal{P}^{(0)}$.
If one checks in Section 10.3, p. 217 of [3] the definition of the approximating Markov transformations associated with $T$, one can see that there is a sequence of partitions $\mathcal{P}^{(n)}$. It is supposed that the transformations $T_n$ are piecewise expanding and Markov transformations with respect to $\mathcal{P}^{(n)}$.
Moreover, in assumption (a) on p. 217 of [3] it is stated that if $J=[c,d]\in\mathcal{P}^{(n)}$ and $J\cap Q^{(0)}=\emptyset$ then $T_n|_J$ is a $C^1$ monotonic function such that
$$(11)\qquad T_n(c)=T(c),\quad T_n(d)=T(d).$$
Assumption (11) is clearly not satisfied if $(\alpha_n,\beta_n)\to(\alpha_0,\beta_0)$, $(\alpha_n,\beta_n)\neq(\alpha_0,\beta_0)$, $T_n=T_{\alpha_n,\beta_n}$, $T=T_{\alpha_0,\beta_0}$ and $\mathcal{P}^{(n)}$ has subintervals $[c,d]$ which do not contain $0$, $\alpha_0$ or $1$. This means that, contrary to what is claimed by the authors of [2], Theorem 10.3.2 of [3] cannot be applied directly to the case of the Markov approximations they want to use. Our Proposition 6 can be used in their case as well. Moreover, it is also an advantage of our Proposition 6 that we do not assume that the approximating skew tent maps are Markov.
Finally, (10) is assumption (4) of Theorem 10.2.1. Therefore this theorem is applicable to the sequence $T_{\alpha_n,\beta_n}$. This yields that conclusion (A) of our Proposition 6 holds true. The only thing which needs extra proof is that in conclusion (B) the function $f_0$, which is the $L^1$ limit of the $P_{\alpha_{n_k},\beta_{n_k}}$-invariant densities $f_{n_k}$, is invariant. For the invariance of $f_0$ we need to show that $P_{\alpha_0,\beta_0}f_0=f_0$ a.e. As on page 220 of [3], it is sufficient to show that $\|P_{\alpha_0,\beta_0}f_0-f_0\|_1=0$, which will be verified by the following estimates. Since $f_{n_k}$ is an invariant density of $T_{\alpha_{n_k},\beta_{n_k}}$ we have $A_{3,n_k}=0$ for any $k$. It is also clear that $A_{4,n_k}\to0$ as $k\to\infty$.
The only non-trivial part is the estimation of $A_{1,n_k}$. Suppose $\varepsilon>0$ is given. We suppose that $\beta_{n_k}\geq\beta_0$; the case $\beta_{n_k}<\beta_0$ is similar and is left to the reader. For ease of notation we denote $n_k$ by $k$ in the sequel. For $k\geq K_0$, using (12) and (13), we obtain that $\|P_{\alpha_0,\beta_0}f-P_{\alpha_k,\beta_k}f\|_1\to0$ as $k\to\infty$, and hence $A_{1,k}\to0$ as $k\to\infty$, which completes the proof of the proposition.
We will fix an $N\geq N_0$ later. Suppose $N$ is given and fixed. We can select a system of intervals $I_l=[d_l,e_l]$ such that $T^{N}_{\alpha_0,\beta_0}$ is linear and non-constant on $I_l$ but is non-linear on any larger interval containing $I_l$. The maximality of the intervals $I_l$ implies that
$$(37)\qquad T^{N}_{\alpha_0,\beta_0}(d_l),\ T^{N}_{\alpha_0,\beta_0}(e_l)\in\{c_i : i=1,\dots,k\}\quad\text{and}\quad T^{N}_{\alpha_0,\beta_0}(d_l)\neq T^{N}_{\alpha_0,\beta_0}(e_l).$$
From (36) it follows that … By using (37) we introduce the notation
$$(39)\qquad c^{-}_{N,l}=T^{N}_{\alpha_0,\beta_0}(d_l)\quad\text{and}\quad c^{+}_{N,l}=T^{N}_{\alpha_0,\beta_0}(e_l).$$
From (27) and (37) it follows that … An elementary calculation shows that … During the rest of the proof the reader might find it useful to look every so often at the left half of Figure 4. Also observe that the value of $\gamma_N(x)$ is constant on $(d_l,e_l)$. Denote this constant by $g_l$. Using (35) and (36) we obtain
$$(42)\qquad |g_l-\gamma|<\delta_1.$$

Differentiability of the isentropes (ergodic theory approach)
In this section we prove that isentropes are continuously differentiable curves. We have already seen that the results of [11] imply that they are (locally uniformly) Lipschitz. There are two possible ways to verify that they are differentiable. One way, which we call the analytic method, is to use the auxiliary function $\Theta_{\underline{M}}$, (4) and implicit differentiation. If one can verify that for $(\alpha,\beta)\in U$, $\underline{M}=K(\alpha,\beta)$ we have $\partial_2\Theta_{\underline{M}}(\alpha,\beta)\neq0$ then this argument works. Unfortunately, dealing with the partial derivatives of $\Theta_{\underline{M}}$ is a quite unpleasant and technical task. We have a manuscript in preparation, [5], which discusses this other approach. In this paper we use a much more elegant and less technical argument, which we call the ergodic theory approach.

Isentropes and Lyapunov exponents, the general case

Next we state the main result of our paper. Its special Markov case, assuming differentiability of the isentrope at the point considered, was discussed in Section 4.