Entropy on regular trees

We show that the limit in our definition of tree shift topological entropy is actually the infimum, as is the case for both the topological and measure-theoretic entropies in the classical situation when the time parameter is $\mathbb Z$. As a consequence, tree shift entropy becomes somewhat easier to work with. For example, the statement that the topological entropy of a tree shift defined by a one-dimensional subshift dominates the topological entropy of the latter can now be extended from shifts of finite type to arbitrary subshifts. Adapting to trees the strip method already used to approximate the hard square constant on $\mathbb Z^2$, we show that the entropy of the hard square tree shift on the regular $k$-tree increases with $k$, in contrast to the case of $\mathbb Z^k$. We prove that the strip entropy approximations increase strictly to the entropy of the golden mean tree shift for $k=2,\dots,8$ and propose that this holds for all $k \geq 2$. We study the dynamics of the map of the simplex that advances the vector of ratios of symbol counts as the width of the approximating strip is increased, providing a fairly complete description for the golden mean subshift on the $k$-tree for all $k$. This map provides an efficient numerical method for approximating the entropies of tree shifts defined by nearest neighbor restrictions. Finally, we show that counting configurations over certain other patterns besides the natural finite subtrees yields the same value of entropy for tree SFT's.


Introduction
Entropy is a single number attached to a topological or measure-theoretic dynamical system that in a limited but precise way describes the complexity or richness of the system. In recent years increasing attention has been paid to the calculation of entropy for systems in which the "time" is not $\mathbb Z$ or $\mathbb R$ but perhaps $\mathbb Z^d$ for some $d \geq 2$, an arbitrary amenable group, or even a free or arbitrary countable group. We will not attempt to review the extensive and rapidly developing literature here (nor the connections with information theory, statistical mechanics, and other areas), referring only to [6,8,9] for background and references.
Aubrun and Béal [1][2][3][4][5] proposed studying subshifts on trees, since for such systems the "time" has both higher-dimensional and directional aspects, placing them in some sense between one-dimensional and higher-dimensional subshifts. Steve Piantadosi [15] studied the topological entropy of the hard square model on free groups $F_k$. He obtained an explicit formula in terms of a rapidly converging infinite series and used it to show numerically but rigorously that the entropy increases with $k$ for a range of $k$. Here we investigate some of these same questions for trees, with different methods but with some closely related results.
In a previous paper [14] we gave a definition of entropy for tree shifts and showed that the limit in the definition exists. We proved that for a 2-tree shift defined by nearest neighbor constraints, the tree-shift entropy dominates the entropy of the corresponding one-dimensional shift of finite type. We also provided estimates for the entropies of various 2-tree shifts, especially the ones determined by the "golden mean" (or "hard square" or "hard core") condition that no two adjacent nodes have identical labels.
One of our main results here (Theorem 2.1) is that the limit in the definition is actually an infimum. As a corollary (Corollary 2.2) we show that the entropy comparison between a one-dimensional shift of finite type and the tree shift it defines holds for all subshifts. Then we adapt the "strip method" used for lattice shifts [11,13] to study the entropy $h^{(k)}$ of the golden mean subshift on the regular $k$-tree. Generalizing and improving the result in [14], we show in Theorem 3.7 that $h^{(k)}$ is strictly increasing in $k$. This contrasts with the apparent decrease of the entropy for the golden mean SFT's on $\mathbb Z^k$ for $k = 1, 2, 3, 4$ [7]. In Theorems 3.1 and 4.1 we show that for each fixed $k = 2, \ldots, 8$ the strip entropies $h^{(k)}_n$ increase strictly to $h^{(k)}$. We believe that the statement holds for all $k \geq 2$. As in [15], a related map of the interval (or, for the case of more general tree subshifts, of the simplex) appears as one considers ratios of symbol counts in the improving approximations. We give a thorough analysis of this map for the case of golden mean restrictions (see Theorem 5.1) and show in Section 7 how to use it to obtain rapidly converging approximations to the entropies of more general tree shifts. Finally, we count configurations over extensions of the patterns that in this setting correspond to intervals in the one-dimensional case, showing that for tree SFT's the resulting entropy is the same.
Apparently the definition of entropy considered here and in [14,15] is not the same as sofic entropy (see [6,9]), since the latter is a conjugacy invariant while the entropy considered here can increase under higher block codes.
1.1. Notation and setup. Let $k \geq 2$ and let $\Sigma_k = \{0, 1, \ldots, k-1\}$. The set $\Sigma_k^*$ of all finite words on the alphabet $\Sigma_k$ is the $k$-tree, which is naturally visualized as the Cayley graph of the free semigroup on $k$ generators. The empty word corresponds to the root of the tree and the neutral element of the semigroup. Let $A = \{0, 1, \ldots, d-1\}$ be an alphabet on $d$ symbols. A labeled tree is a function $\tau : \Sigma_k^* \to A$. For $w \in \Sigma_k^*$, we think of $\tau(w)$ as the label attached to the node determined by $w$. For each $n \geq 0$ let $\Delta_n = \bigcup_{0 \leq i \leq n} \Sigma_k^i$ denote the initial height-$n$ subtree of the $k$-tree. The cardinality of $\Delta_n$ is $|\Delta_n| = 1 + k + \cdots + k^n$. An $n$-block is a function $B : \Delta_n \to A$, which we think of as a labeling of $\Delta_n$ or a configuration on $\Delta_n$. We say that an $n$-block $B$ appears in a labeled tree $\tau$ if there is a node $x \in \Sigma_k^*$ such that $\tau(xw) = B(w)$ for all $w \in \Delta_n$. A tree shift $Z$ is the set of all labeled trees which omit every block from a certain (possibly infinite) set of forbidden blocks. The tree shifts are exactly the closed shift-invariant subsets of the full tree shift space $T(A) = A^{\Sigma_k^*}$. A tree shift $Z$ is called transitive if it contains a labeled tree $\tau$ such that for every $\xi \in Z$ every block that appears in $\xi$ also appears in $\tau$. Such a labeled tree $\tau$ is called a transitive point.
The complexity function $p_\tau$ of the labeled tree $\tau$ assigns to each $n \geq 0$ the number of $n$-blocks that appear in $\tau$. The complexity function $p_Z(n)$ of a tree shift $Z$ gives for each $n \geq 0$ the number of $n$-blocks among all labeled trees in $Z$. We are interested in studying the complexity functions of trees that are labeled according to certain restrictions, in particular nearest-neighbor constraints specified by $d$-dimensional 0,1 transition matrices. In [14] it was proved that for any labeled tree $\tau$ the limit
(1.1) $h(\tau) = \lim_{n\to\infty} \frac{\log p_\tau(n)}{1 + k + \cdots + k^n}$
exists. This limit is called the topological entropy of the labeled tree $\tau$. The topological entropy $h(Z)$ of a transitive tree shift $Z$ is defined to be the topological entropy of any of its transitive points.
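As a concrete illustration, the limit (1.1) can be estimated numerically for the golden mean (hard square) constraint by recursively counting $n$-blocks according to the root label. The following Python sketch is our own illustration (the function name and the root-label recursion are not notation from the paper): a root labeled 0 allows both labels on each of its $k$ children, while a root labeled 1 forces label 0 on each child.

```python
from math import log

def golden_mean_entropy_approx(k, n_max):
    """Approximations log p(n) / |Delta_n| for the hard-square k-tree shift.

    b0, b1 = number of legal labelings of Delta_n with root labeled 0 / 1.
    """
    b0, b1 = 1, 1              # Delta_0 is a single node
    size = 1                   # |Delta_0|
    approx = []
    for n in range(1, n_max + 1):
        b0, b1 = (b0 + b1) ** k, b0 ** k   # root 0: free children; root 1: children 0
        size = 1 + k * size                # |Delta_n| = 1 + k |Delta_{n-1}|
        approx.append(log(b0 + b1) / size)
    return approx
```

For $k = 2$ the first approximation is $\log 5 / 3$ (there are 5 legal 1-blocks), and the values decrease toward the entropy, consistent with Theorem 2.1 below asserting that the limit is the infimum.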
The limit in the definition of tree shift entropy is the infimum

Theorem 2.1. The limit in the definition of tree shift topological entropy is actually the infimum:
$$h(\tau) = \inf_{n \geq 0} \frac{\log p_\tau(n)}{1 + k + \cdots + k^n}.$$
Proof. In the proof of the existence of the limit for $h$ in [14], $\Delta_{jm}$ was decomposed into a union of shifts of $\Delta_m$. But these copies of $\Delta_m$ did not have independent entries; in fact they were not disjoint, since the last row of one formed the root vertices of the next ones. So here we improve estimate 2.2 of [14] by making them disjoint.
Fix $n \geq 1$ and consider $\Delta_n$. Its last row has $k^n$ entries. The next row has $k^{n+1}$ entries, and we use these as root vertices for new shifts of $\Delta_n$. These new copies end with a row of $k^{n+1} \cdot k^n = k^{2n+1}$ entries, so the next row has $k^{2n+2}$ entries, which we use as root vertices of new shifts of $\Delta_n$. The last row now has $k^{2n+2} \cdot k^n = k^{3n+2}$ entries, the next row has $k^{3n+3}$, and each of these becomes the root of a new shift of $\Delta_n$.
We have formed a $\Delta_{4(n+1)-1}$ out of $1 + k^{n+1} + k^{2(n+1)} + k^{3(n+1)}$ disjoint copies of $\Delta_n$. In general, for each $j \geq 1$ we form a $\Delta_{j(n+1)-1}$ out of $1 + k^{n+1} + \cdots + k^{(j-1)(n+1)}$ disjoint copies of $\Delta_n$, so that $p(j(n+1)-1) \leq p(n)^{1 + k^{n+1} + \cdots + k^{(j-1)(n+1)}}$. (In formula 2.2 of [14] we find the same estimate, except with $\leq p(n)$ replaced by $\leq p(n+1)$, so this estimate is better.) Then take logarithms, divide by $(k^{j(n+1)} - 1)/(k - 1)$, and take the limit as $j \to \infty$, to find that $h(\tau) \leq \log p_\tau(n)/(1 + k + \cdots + k^n)$.

A first consequence of this result is a generalization of Theorem 3.3 of [14] for 2-trees from shifts of finite type to arbitrary subshifts. Using the same argument and Theorem 3.7 below, the statement extends to $k$-trees. Let $M$ be a $d \times d$ matrix with entries from $\{0, 1\}$. The matrix $M$ defines a one-step shift of finite type (SFT) $X_M$ on the alphabet $A$ and a tree shift $Z_M$ consisting of all $k$-trees $\tau$ labeled by $A$ with the property that for every $w = w_0 w_1 \ldots w_j \in \Sigma_k^*$, $M_{\tau(w_0 \cdots w_i)\,\tau(w_0 \cdots w_{i+1})} = 1$ for all $0 \leq i < j$. In [14, Theorem 3.3] it was also proved that $h(Z_M) = \sup\{h(\tau) : \tau \in Z_M\}$ dominates the entropy $h(X_M)$ of the associated shift of finite type.
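The node-count bookkeeping behind this decomposition can be checked mechanically: the number of nodes of $\Delta_{4(n+1)-1}$ equals $1 + k^{n+1} + k^{2(n+1)} + k^{3(n+1)}$ times $|\Delta_n|$. A quick Python verification of this identity (the helper names are ours):

```python
def delta_size(k, n):
    """|Delta_n| = 1 + k + ... + k^n, the node count of the height-n subtree."""
    return sum(k ** i for i in range(n + 1))

def copies(k, n):
    """Number of disjoint shifted copies of Delta_n tiling Delta_{4(n+1)-1}."""
    return 1 + k ** (n + 1) + k ** (2 * (n + 1)) + k ** (3 * (n + 1))

# the tiling is exact: no nodes are shared or left over
for k in range(2, 7):
    for n in range(0, 6):
        assert copies(k, n) * delta_size(k, n) == delta_size(k, 4 * (n + 1) - 1)
```

The same geometric-series cancellation works for every $j \geq 1$, which is what makes the general estimate go through.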
More generally, given any subshift $X \subset A^{\mathbb Z}$, there is a naturally associated tree shift $Z(X)$ defined as follows. Denote by $L(X)$ the language of $X$, namely the set of all finite words on $A$ found as subwords of sequences in $X$. The $k$ shifts on $\Sigma_k^*$ are defined by $\sigma_s(w) = sw$, $w \in \Sigma_k^*$, $s \in \Sigma_k$. For $s_1, s_2, \ldots, s_m \in \Sigma_k$ and $s = s_1 s_2 \ldots s_m \in \Sigma_k^*$, define $\sigma_s = \sigma_{s_1} \cdots \sigma_{s_m}$. On a labeled tree $\tau$, define $(\sigma_s \tau)(w) = \tau(\sigma_s w) = \tau(sw)$, $s \in \Sigma_k^*$.
We define the one-dimensional language of a tree shift $Z$ to be the set $L^{(1)}(Z)$ of strings on the alphabet $A$ found along paths in the tree:
$$L^{(1)}(Z) = \{\tau(w)\tau(ws_1)\tau(ws_1s_2)\cdots\tau(ws_1\cdots s_m) : \tau \in Z,\ w \in \Sigma_k^*,\ s_1, \ldots, s_m \in \Sigma_k\}.$$
Given a subshift $X \subset A^{\mathbb Z}$, we define the tree shift associated to $X$ to be the unique tree shift $Z(X)$ such that $L^{(1)}(Z(X)) = L(X)$.

Corollary 2.2. Let $X \subset A^{\mathbb Z}$ be a subshift on a finite alphabet, let $k = 2$, and let $Z(X)$ be the tree shift on the binary tree associated with $X$. Then $h(Z(X)) \geq h(X)$.

Proof. Given any subshift $X$, for each $r \in \mathbb N$ let $X_r$ be the shift of finite type which has the same $r$-blocks as $X$. Then $X$ is the decreasing intersection of the $X_r$. Denote by $p_X$, $p_{X_r}$, and $p_Z$ the complexity functions of the subshifts $X$ and $X_r$ and the tree shift $Z$, respectively. Then for $r \geq n$ we have $p_{X_r}(n) = p_X(n)$, and similarly for $p_{Z(X_r)}$ and $p_{Z(X)}$; thus
(2.8) $p_{Z(X_r)}(n) = p_{Z(X)}(n)$ for $r \geq n$.
In [14, Theorem 3.3] it was proved that if $X$ is a one-step SFT, then $h(Z(X)) \geq h(X)$. Each $X_r$ is an $(r-1)$-step SFT and is topologically conjugate to a one-step SFT $Y_r$ on the alphabet $A^{(r)}(X)$ of $r$-blocks which appear in $X$. In a labeled tree in $Z(Y_r)$, we think of the last entry of the $r$-block labeling a node as being attached to that node. A labeling by elements of $A$ of the vertices of the $k$-tree is consistent with the restrictions from $X_r$ if and only if it is consistent with the restrictions from $Y_r$, so $p_{Z(X_r)} = p_{Z(Y_r)}$. Using (2.8), [14, Theorem 3.3] applied to $Y_r$ and $Z(Y_r)$, and Theorem 2.1, we then have $h(Z(X)) \geq h(X)$.

Strict increase with dimension
We use the strip method for the golden mean shift of finite type on the $k$-tree, $k \geq 2$, to show that the entropy $h^{(k)}$ is strictly increasing in $k$.
For each $n = 0, 1, 2, \ldots$ we define a one-dimensional SFT $\Sigma^{(k)}_n$ whose alphabet consists of the legal (no adjacent 1's) labelings of the subtree with $k^n$ nodes consisting of a vertex with $k-1$ copies of $\Delta_{n-1}$ attached. While $k$ is fixed, we will suppress the exponents $(k)$.
The labeling counts satisfy the following recursions. The labels at the nodes $i$ on the left edge respect the SFT restriction "no 11". Let $a_i(0)$ denote the number of ways to label the first $i$ levels of the strip of width $n \geq 0$ with 0 labeling node $i$, and define $a_i(1)$ similarly. For $n \geq 1$ each $\Delta_{n-1}$ attached to node $i+1$ can be labeled in $B_{n-1}$ ways if node $i+1$ has label 0, and in $B_{n-1}(0)$ ways if node $i+1$ has label 1. Therefore for $n \geq 0$ the number $a_i(0) + a_i(1)$ of $i$-blocks in $\Sigma^{(k)}_n$ has the same asymptotic growth rate as the entries of the $i$'th powers of the associated transfer matrix. The latter growth rate is given by powers of the Perron-Frobenius eigenvalue $\lambda_{n-1}$, which is the maximum root of the characteristic equation and satisfies $c_n^2 - c_n = r_n^{k-1}$.
We record next that the strip entropies converge; the proof uses only the fact that $c_n$ is bounded. We develop for golden mean tree shifts the analogue of Piantadosi's [15] infinite series formula for the entropy of the golden mean SFT on a free group. Then we use the formula to prove that the entropy $h^{(k)}$ of the golden mean SFT on the $k$-tree is strictly increasing in $k$, for all $k \geq 2$. (Piantadosi used his infinite series formula and rigorous numerical estimates to prove the strict increase for a range of $k$.) Our infinite series formula follows from two easy lemmas.
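To make the strip approximations concrete, here is a hedged numerical sketch in Python. It computes the golden mean block counts $B_{n-1}$, $B_{n-1}(0)$ by the root-label recursion and takes the strip transfer matrix, per the description above, to be $\begin{pmatrix} B^{k-1} & B^{k-1} \\ B_0^{k-1} & 0 \end{pmatrix}$ with $B = B_{n-1}$ and $B_0 = B_{n-1}(0)$; the per-site normalization divides $\log \lambda$ by the $k^n$ nodes per level. The function names and the explicit matrix are our reconstruction, not formulas quoted from the paper.

```python
from math import log

def block_counts(k, n):
    """(B_n(0), B_n(1)): legal golden-mean labelings of Delta_n by root label."""
    b0, b1 = 1, 1
    for _ in range(n):
        b0, b1 = (b0 + b1) ** k, b0 ** k
    return b0, b1

def strip_entropy(k, n):
    """Per-site entropy of the width-n strip SFT (sketch, assumptions as above).

    A path node labeled 0 may follow either label, and each of its k-1
    hanging copies of Delta_{n-1} contributes B choices; a node labeled 1
    must follow a 0, and each hanging copy contributes B_0 choices.
    """
    b0, b1 = block_counts(k, n - 1)
    B, B0 = b0 + b1, b0
    a, c = B ** (k - 1), B0 ** (k - 1)
    # largest root of x^2 - a x - a c = 0, the Perron eigenvalue of [[a, a], [c, 0]]
    lam = (a + (a * a + 4 * a * c) ** 0.5) / 2
    return log(lam) / k ** n
```

For $k = 2$ the values increase with $n$ toward roughly $0.509$, in line with the monotonicity results of Section 4.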
Proof. Assume the formula holds for $n - 1$; the inductive step then follows.

Theorem 3.4. For all $k \geq 2$ the entropy of the golden mean SFT on the $k$-tree is given by an infinite series formula. Examination of this formula provides useful upper and lower bounds for $h^{(k)}$.

Proof. Formula (3.17) is immediate by truncation of the series. To prove Formula (3.18), recall that $r_0 = 1/2$ and $r_1 = 1/(1 + 1/2^k)$, and note that $r_1 > r_i$ for all $i > 1$ (see the proof of Theorem 5.1).
These formulas give us bounds $L(k) \leq h^{(k)} \leq U(k)$. We also need the estimate $U(k-1) < L(k)$.
Lemma 3.6. For each $k \geq 2$, the estimates (3.23) and (3.24) hold.

Theorem 3.7. The entropy $h^{(k)}$ of the golden mean SFT on the $k$-tree is strictly increasing in $k$.
Proof. The inequality is verified in Corollary 4.5 by direct computation for 2 ≤ k ≤ 5.
Recall that $r_0 = 1/2$. It is enough to show that for all $k > 5$ we have $L(k) > U(k-1)$, i.e. (3.25). Letting $x_k = 1 + 1/2^k$ and exponentiating, (3.25) is equivalent to (3.26). We claim (3.27), from which (3.26) will follow. Note that the left side of (3.27) increases (to 4) in $k$, while $1 + x_{k-1}^{k-1}$ decreases (to 2); the latter follows, using the alternating series for $\log(1+t)$, from the facts that $k/2^k$ is decreasing in $k$ and $2 > 1 + 2/2^2$, valid for $k \geq 2$. Now (3.27) holds for $k = 6$, because then the left side is $2.6611\ldots$ while the right side is $2.16633\ldots$, and therefore it holds for all $k \geq 6$.

Monotonicity of the strip approximation entropies
Now we turn to the study, for fixed $k$, of the strip approximation entropies $h^{(k)}_n$ for the entropy of the golden mean $k$-tree shift (with labeling alphabet $\{0, 1\}$). Numerical calculations indicate that these approximations increase strictly with $n$. We can prove this rigorously with the help of computer algebra for $k = 2, \ldots, 8$, and we believe that it is so for all $k \geq 2$. The strict increase allows for another proof of Theorem 3.7, for rigorous lower bounds for $h^{(k)}$, and for efficient approximations to $h^{(k)}$, in the range of $k$ for which strict increase is known to hold. Rigorous upper bounds are easy because the limit in the definition of entropy is the infimum. The key inequality (Theorem 4.1) is $\lambda_n^k < \lambda_{n+1}$, and hence the strip entropies $h^{(k)}_n$ strictly increase.

Remark 4.2. The analogue of the final statement (4.3) was conjectured for the golden mean on free groups by Piantadosi [15] and proved already (in more generality, for $k$-tree shifts determined by arbitrary irreducible one-dimensional shifts of finite type) by the present authors in a previous paper [14].
For subshifts on the lattices Z d , this "asymptotic entropy" was known to be the increasing limit in d and was proved in [10,12] to equal the "independence entropy".
Proof. Fix $k = 2, 3, \ldots$. Note that for $n \geq 0$ we have $c_{n+1}(r) = c_n(Tr)$. Define also (4.6) $c = c(r) = (1 + \sqrt{1 + 4r^{k-1}})/2$, and note that $c^2 - c = r^{k-1}$. Abbreviate $r_n = r$ and $c_n(r) = c(r)$, and use $(Tr)^{k-1} = c(Tr)^2 - c(Tr)$. The statements (4.8)-(4.11) are equivalent to one another. Moreover, when $r = r_n$, $n \geq -1$, each is equivalent to (4.12) $\lambda_n^k < \lambda_{n+1}$, and so implies the strict increase of the strip entropies. We can prove by hand in the cases $k = 2, 3$ that $n_k(r) \leq 0$ for all $r \in [0, 1]$, with equality only at the one point where $p_k(r) = r^{k+1} + r - 1 = 0$. For $k = 4, 5, 6, 7, 8$ we need the help of Sage and Mathematica to prove this. We believe that, given time and sufficient computer memory, (4.9) can be verified similarly for each individual $k$. So far we have not been able to prove algebraically that (4.32) has at most one solution $r \in [0, 1]$ (which would suffice to prove that the statement holds for all $k$; see below).
The calculation for each fixed $k$ proceeds as follows. Expand $n_k(r)$ as a function of $r$, using $c(r) = (1 + \sqrt{1 + 4r^{k-1}})/2$. Group into $B_k(r)$ all the terms that contain a factor of $\sqrt{1 + 4r^{k-1}}$, to write $n_k(r) = A_k(r) + B_k(r)$. We verify that $A_k(r)$ and $B_k(r)$ have opposite signs on $[0, 1]$, take $B_k(r)$ to the right side of the formula, square both sides, and take the difference. Thus the left side of (4.9) is nonpositive if and only if a certain polynomial expression $q_k(r)$ is nonnegative, and we observe that $q_k(r) > 0$ on $[0, 1]$, concluding the argument. (In some of the cases a constant is factored out, making no difference in the sign.)

Remark 4.3. It appears that $q_k(r) \to 1 + 2r + 3r^2 + \cdots = 1/(1-r)^2$ as $k \to \infty$ for each $r \in [0, 1)$. Figure 2 is a graph of $n_6(r)$, and Figure 3 is a graph of $q_5(r)$.
Monotonicity in $n$ of the $h^{(k)}_n$, along with the knowledge that the limit in the definition is the infimum (Theorem 2.1), allows us to show explicitly, by rigorous numerical calculation, that $h^{(k)}$ increases with $k$ for the values of $k$ where the monotonicity is known to hold, providing another proof of Theorem 3.7 for these cases. For each such $k$, a computation to width $n = 4$ suffices to show rigorously that $h^{(k)} < h^{(k+1)}$.
We discuss now a plan to extend Theorem 4.1 to all $k \geq 2$. It suffices to show that for each fixed $k \geq 2$ equality in any of (4.8)-(4.11) can hold for at most one $r \in [0, 1]$, which is irrational (and in fact is the solution of $Tr = r$), and so in particular the inequality holds for all the $r_{n-1}$, $n = 1, 2, \ldots$, which are rational.
We want to show that there is at most one value of $r \in [0, 1]$ for which equality can hold in (4.8). Given this and the continuity of the functions being compared, inequality (4.8) holds on all of $[0, 1]$ except at the point where $Tr = r$, where it is an equality.
It is difficult to make an estimate to prove any of these inequalities. For example, the graph of the left side $n_k(r)$ of inequality (4.9) touches the $r$-axis with a high degree of tangency, not allowing one to squeeze a simpler expression in between. Figure 4 shows the two terms in (4.32) of Prop. 4.6 and their difference for $k = 5$.
If one can obtain a workable expression for $n_k(r)/p_k(r)^2$, however, one could seek to prove, for example, that it is bounded above by a strictly negative constant, thus proving (4.9).
We believe that the irrationality of the single solution will follow from Gauss' Lemma.
The following observations may be helpful in completing the proof of strict increase of the $h^{(k)}_n$. If $Tr = r$, then $cr = 1$, which implies that $c = 1/r$. To see that (4.28) implies (4.29), note that we already know that (4.27) is equivalent to (4.28), so that $cr = 1$ implies (4.29). Finally, (4.29) implies (4.28) by reversing the computation. Turning now to the second part of the Proposition, if $Tr = r$ then (4.31) says $c(r) = r^{k-1} c(r)^k$, which is true because $cr = 1$.
We finish the argument by showing that (4.30)-(4.34) are equivalent to one another. Note that $\lambda_n^k = \lambda_{n+1}$ if and only if equality holds in (4.32). This shows that (4.30) is equivalent to (4.32).

Map of the simplex
Consider a $k$-tree SFT $Z_M$ with $k \geq 2$, alphabet size $d = 2$, and transition matrix $M = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, so that $Z_M$ consists of all labelings of the $k$-tree with no adjacent nodes both labeled by 1. Consider first the case $k = 2$; in this case $Tx = 1/(1 + x^2)$. In the case of the $k$-tree, $k \geq 2$, we have $T_k x = 1/(1 + x^k)$. We may consider this function for all $k > 0$, not necessarily an integer. When $k$ is fixed, we will suppress the subscript on $T_k$. Again there is a fixed point $u_k = u \in [0, 1]$, which is unique because $T$ is decreasing. For small $k$ we have $|T'(u)| < 1$, so the fixed point is attracting. But there is a critical value $k_0 \approx 4.125$, the solution of $k^{k/(k+1)} + 1 = k$, at which $|T'(u)| = 1$. We claim that for $k > k_0$, besides the fixed point $u$ there is also an attracting periodic orbit $\{p_1, p_2\}$, and there are no other periodic points. Figure 5 shows the graphs of $T$, $T^2$, and the identity function for $k = 7$. Computer-assisted computation shows that if $k < k_0$ then $p_1 = u = p_2$, and all of $[0, 1]$ is attracted to the unique fixed point $u$.
Note further that if $k > k_0$, then at the fixed point $u$ of $Tx = 1/(1 + x^k)$ we have $|T'(u)| > 1$. To see this, consider the function $g(k) = k^{k/(k+1)} + 1$ for $k \geq 0$. We have $g(0) = 2$, $g(k_0) = k_0$, $g'(k) < 0$ for small $k$, $g'(k) = 0$ when $\log k = -(k + 1)$ (at $k = k_1 \approx 0.3$), and $g'(k) > 0$ for $k > k_1$. Thus $g(k) > k$ for small $k$, there is a unique solution $k_0 \approx 4.125$ of $g(k) = k$, and $g(5) \approx 4.82362 < 5$, so $g(k) < k$ for $k > k_0$. See Figure 6. This implies that for $k > k_0$ we have $|T_k'(u)| > 1$: at the fixed point, $|T'(u)| = k u^{k+1}$, and the condition $|T'(u)| = 1$ reduces exactly to $g(k) = k$. Since $|T'(u)| > 1$, there are $\delta, \epsilon > 0$ such that $|Tx - u| \geq (1+\delta)|x - u|$ for $|x - u| \leq \epsilon$. If we take a point $x > u$ close to $u$, we have $Tx < u$ and $T^2 x > u$. We must have $u < x < T^2 x$, because otherwise $u < T^2 x < x$, contradicting the expansion estimate near $u$. Thus $T^2$ moves points near $u$ away from $u$, towards either $p_1$ or $p_2$, depending on whether $x \in (p_1, u)$ or $x \in (u, p_2)$.
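The critical value $k_0$ can be located numerically by bisection on $g(k) - k$, which is positive for small $k$ and negative beyond the crossing. A small Python sketch (the bracketing interval and variable names are our choices):

```python
def g(k):
    # g(k) = k^(k/(k+1)) + 1; the critical value k0 solves g(k) = k
    return k ** (k / (k + 1)) + 1

# g(3) > 3 and g(5) < 5, so the root of g(k) = k is bracketed in [3, 5]
lo, hi = 3.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > mid:
        lo = mid        # g still above the diagonal: root lies to the right
    else:
        hi = mid
k0 = (lo + hi) / 2
```

Sixty bisection steps shrink the bracket far below floating-point resolution, so `k0` satisfies $g(k_0) = k_0$ to machine precision.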
We show now that the 2-cycle $\{p_1, p_2\}$ attracts all of $[0, 1] \setminus \{u\}$. This will follow if we can show that (for $k > k_0$) $T^2$ has a unique inflection point. From the above, for $p_1 < x < u$ with $x$ near $u$ we have $p_1 < T^2 x < x < u$. Since $T^2$ is increasing, we may iterate (even when $T^2 x$ is no longer close to $u$) to find that $T^{2n} x$ decreases to a fixed point $p \geq p_1$ of $T^2$.
But if $T^2$ has a unique inflection point in $[0, 1]$, we must have $p = p_1$: otherwise the graph of $T^2$ would hit the line $y = x$ at least at $p_1, p, u, Tp, p_2$, and so would have to change concavity at least twice. This shows that all of $(p_1, u)$ is attracted to $p_1$ under $T^2$, and therefore all of $[0, 1] \setminus \{u\}$ is attracted to $\{p_1, p_2\}$ under $T$.
We show now that $T^2$ has a unique inflection point in $[0, 1]$. With the help of Sage, we compute $(T^2)''$. Let $y = x^k$ and let $j(y, k)$ denote the resulting expression, a polynomial in $y$ of degree $k + 1$. The constant term and the coefficient of $y$ are positive, and for $i > 1$ the coefficient of $y^i$ is negative for $k > 2$. So the second derivative of $j(y, k)$ in $y$ is negative, its first derivative starts positive at $0$ and later becomes negative, and hence $j$ has a unique zero on the positive axis. Since $(T^2)''$ has the same sign as $j$, $T^2$ has a unique inflection point. Therefore, from the above, for $k > k_0$ the map $T$ has a unique fixed point and an attracting cycle of period 2 which attracts everything else, and there are no other periodic points.
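The claimed dynamics are easy to observe numerically. The Python sketch below iterates $T_k x = 1/(1 + x^k)$: for $k = 3 < k_0$ orbits settle on the unique fixed point, while for $k = 7 > k_0$ the even iterates settle on the attracting 2-cycle $\{p_1, p_2\}$ (the variable names are ours):

```python
def T(x, k):
    # the map T_k x = 1 / (1 + x^k) on [0, 1]
    return 1.0 / (1.0 + x ** k)

def iterate(x, k, n):
    for _ in range(n):
        x = T(x, k)
    return x

u = iterate(0.9, 3, 2000)    # k = 3 < k0: orbit converges to the fixed point u
p1 = iterate(0.9, 7, 2000)   # k = 7 > k0: even iterates converge to the 2-cycle
p2 = T(p1, 7)                # the other point of the cycle
```

For $k = 7$ the two cycle points are far apart (roughly $0.52$ and $0.99$), while $T$ applied to either one returns the other, consistent with the analysis above.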

Intermediate entropy
For a tree labeled by elements of a finite alphabet $A = \{0, 1, \ldots, d-1\}$, one can count the number of possible labelings not just of the fundamental height-$n$ subtrees but of arbitrary finite patterns. Focusing in this section on the binary regular tree, we are interested in particular patterns which consist of a translate of $\Delta_n$ together with an 'initial' segment of the next row (the final row of the corresponding translate of $\Delta_{n+1}$). Let us label the nodes $\eta$ left to right in each row, continuing downward from row to row: so $\emptyset \leftrightarrow \eta(1)$, $0 \leftrightarrow \eta(2)$, $1 \leftrightarrow \eta(3)$, $00 \leftrightarrow \eta(4)$, $01 \leftrightarrow \eta(5)$, $10 \leftrightarrow \eta(6)$, $11 \leftrightarrow \eta(7)$, $000 \leftrightarrow \eta(8)$, and so on. (Recall that nodes of the tree correspond to words in $\{0,1\}^*$; see Section 1.1.) Define $q(n)$ to be the number of labelings of $\eta(1)\eta(2)\ldots\eta(n)$ in the tree shift. (For a single labeled tree, we define $q(n)$ to be the number of labelings found among all translates of $\eta(1)\eta(2)\ldots\eta(n)$.) For $n = 0, 1, \ldots$ let $c_n = |\Delta_n| = 2^{n+1} - 1$, so that $q(c_n) = p(n)$. The result of this section is that counting over these intermediate patterns yields the same value of the entropy: $\lim_{N\to\infty} \log q(N)/N = h$.

Proof. Fix $n$ and $j = 0, \ldots, 2^{n+1}$, and consider the pattern $\Delta_n(j)$ that consists of $\Delta_n$ together with an initial segment of length $j$ of the next row in $\Delta_{n+1}$. We wish to estimate from above the number of possible labelings of $\Delta_n(j)$ that can be found in the tree shift. Consider first the case $j = 2^n$, i.e. we use half of the next row; see Figure 7. $\Delta_n(2^n)$ consists of the root, a translate of $\Delta_n$ (the blue nodes in Figure 7), and a translate of $\Delta_{n-1}$ (the green nodes in Figure 7). There are $d$ choices for the label of the root, $q(c_n)$ choices for labelings of $\Delta_n$, and $q(c_{n-1})$ choices for labelings of $\Delta_{n-1}$, so
$$q(c_n + 2^n) \leq d\, q(c_n)\, q(c_{n-1}).$$
Since $c_n + 2^n = 1 + c_n + c_{n-1}$, and
(6.5) $\frac{\log q(c_n + 2^n)}{c_n + 2^n} \leq \frac{\log d}{c_n + 2^n} + \frac{c_n}{c_n + 2^n} \cdot \frac{\log q(c_n)}{c_n} + \frac{c_{n-1}}{c_n + 2^n} \cdot \frac{\log q(c_{n-1})}{c_{n-1}},$
we have
(6.6) $\limsup_{n\to\infty} \frac{\log q(c_n + 2^n)}{c_n + 2^n} \leq h.$
Continuing with $r$ fixed, we consider now values of $j$ which are not multiples of $2^m$. For any such $j = 1, \ldots, 2^{n+1} - 1$, let $\bar\jmath = \min\{i 2^m : i 2^m > j\}$. Then the desired estimate holds for large enough $n$.

Proof. In a tree shift corresponding to a one-step one-dimensional shift of finite type, labelings of disjoint translates of the basic patterns $\Delta_n$ are independent given the configurations above them, so the key inequalities in the above proof are actually equalities. For systems with more (still bounded) memory, consider the appropriate higher block coding.
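For the golden mean tree shift on the binary tree, the counts over the full subtrees $\Delta_n$ can be cross-checked by exhaustive enumeration against the root-label recursion. A small Python verification (heap-style node indexing is our own device, not the paper's $\eta$ ordering):

```python
from itertools import product

def legal(labels, n):
    """Check the no-adjacent-1s condition on Delta_n of the binary tree.

    Nodes are indexed 1 .. 2^(n+1)-1 heap-style; children of i are 2i, 2i+1.
    """
    size = 2 ** (n + 1) - 1
    for i in range(1, size + 1):
        for child in (2 * i, 2 * i + 1):
            if child <= size and labels[i - 1] == 1 and labels[child - 1] == 1:
                return False
    return True

def brute_count(n):
    """Count legal labelings of Delta_n by exhaustive enumeration."""
    size = 2 ** (n + 1) - 1
    return sum(legal(w, n) for w in product((0, 1), repeat=size))

def recursive_count(n):
    """The same count via the root-label recursion."""
    b0, b1 = 1, 1
    for _ in range(n):
        b0, b1 = (b0 + b1) ** 2, b0 ** 2
    return b0 + b1
```

Both methods give $5$ legal labelings of $\Delta_1$ and $41$ of $\Delta_2$; the recursion then extends the counts far beyond the range where enumeration is feasible.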
For the general case of a tree SFT, labelings of sets of nodes in distinct branches of the tree are again independent, given a fixed allowed configuration above both of them: pairing two such allowed labelings produces an allowed labeling of their union.

For each $n = 0, 1, \ldots$ let $x(n) = (x_0(n), \ldots, x_{d-1}(n))$ denote the vector of symbol counts in which $x_i(n)$ is the number of labeled translates of $\Delta_n$ ($n$-blocks) with the symbol $i$ at the root. Let $|x(n)| = x_0(n) + \cdots + x_{d-1}(n)$. As before, denoting by $M_i$ the $i$'th row of $M$, we have the recursion
$$x_i(n+1) = (M_i \cdot x(n))^k, \quad i = 0, \ldots, d-1,$$
and, as before,
(7.4) $r(n) = \frac{x(n)}{|x(n)|}$, so that $r(n+1) = T_k r(n)$.
For the one-dimensional golden mean SFT, the map $T$ may be regarded as acting on pairs $(x, y)$ of symbol counts. The equations $T(x, y) = (x, y)$ define an algebraic curve, part of which is shown in Figure 9 along with the simplex $x + y = 1$.
Remark 7.1. When estimating the entropy $h^{(k)}$ of $Z_M$ numerically, iterating the mapping $T_k$ combined with a recursion on $\log |x(n)|$ can reduce the size of the numbers involved and give more accurate estimates more quickly. Since
(7.5) $x_i(n+1) = |x(n)|^k (M r(n))_i^k$, $i = 0, \ldots, d-1$, and $|x(n+1)| = |x(n)|^k g_k(r(n))$,
we may form the $T_k r(n)$ by iterating $T_k$ and use them in the recursion
(7.6) $\log |x(n+1)| = k \log |x(n)| + \log g_k(r(n))$, with $h = \lim_{n\to\infty} \frac{\log |x(n)|}{1 + k + \cdots + k^n}$.
The factors $g_k(r(n))$ are bounded, but they have a cumulative effect on the growth of $|x(n)|$ that may be sufficient to affect the ultimate value of $h$.
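This remark can be implemented directly. The Python sketch below assumes the count recursion $x_i(n+1) = (M_i \cdot x(n))^k$, which is consistent with (7.5); it iterates the normalized ratios $r(n)$, writing $g_k(r) = \sum_i (M_i \cdot r)^k$, and accumulates $\log |x(n)|$ as in (7.6). The function and variable names are ours.

```python
from math import log

def tree_entropy(M, k, n_iter=60):
    """Estimate h(Z_M) by iterating the ratio map and tracking log|x(n)|."""
    d = len(M)
    r = [1.0 / d] * d        # r(0): each symbol equally represented
    S = log(d)               # S = log|x(0)|; there is one 0-block per symbol
    size = 1                 # |Delta_0|
    for _ in range(n_iter):
        y = [sum(M[i][j] * r[j] for j in range(d)) ** k for i in range(d)]
        g = sum(y)           # g_k(r(n))
        S = k * S + log(g)   # log|x(n+1)| = k log|x(n)| + log g_k(r(n))
        size = 1 + k * size  # |Delta_{n+1}| = 1 + k |Delta_n|
        r = [yi / g for yi in y]   # r(n+1) = T_k r(n)
    return S / size

# golden mean constraint on the binary tree
h2 = tree_entropy([[1, 1], [1, 0]], 2)
```

One iteration reproduces the exact one-block count ($\log 5 / 3$ for the golden mean on the binary tree), and sixty iterations give a value near $0.51$, in agreement with the strip estimates of Section 3.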
Let $C(-1) = M$ and for each $n \geq 0$ define a $d \times d$ matrix $C(n)$ by (7.8). Denote by $\lambda_n$ the maximal eigenvalue of $C(n)$ and define $c_n$ by
(7.9) $\lambda_n = |x(n)|^{k-1} c_n$.
The entropy of the one-dimensional $n$'th strip approximation subshift to $Z_M$ is
(7.10) $h^{(k)}_n = \frac{\log \lambda_{n-1}}{k^n}$.

Theorem 7.2. With notation as above, the entropy of the $k$-tree shift corresponding to an irreducible $d \times d$ 0,1 matrix $M$ is given by an infinite series formula.

Proof. We compute that
(7.12) $h^{(k)}_{n+1} - h^{(k)}_n = \frac{k-1}{k^{n+1}} \log g_k(r(n-1)) + \frac{1}{k^{n+1}} \log c_n - \frac{1}{k^n} \log c_{n-1}$,
and then the series formula follows by induction.

We showed above that for the golden mean SFT on the $k$-tree, the site-specific entropies $h^{(k)}_n$ increase with the strip width to the tree shift entropy $h^{(k)}$ (at least for $k = 2, \ldots, 8$), and that $h^{(k)}$ is strictly increasing in $k$, with limit $\log 2$, in contrast with the situation for the golden mean SFT's on integer lattices. The same statements for the more general tree shifts considered in this section can be approached by the same techniques, although the formulas and computations will naturally be much more complex. Extensions to pressure and equilibrium states, including measures of maximal entropy, and subshifts on other trees and graphs, are also attractive topics for further research.