The Mathieu Differential Equation and Generalizations to Infinite Fractafolds

One of the more well-studied equations in the theory of ODEs is the Mathieu differential equation. Because of the difficulty in finding closed-form solutions to this equation, it is often necessary to seek solutions via Fourier series by converting the equation into an infinite system of linear equations for the Fourier coefficients. In this paper we present results pertaining to the stability of this equation and convergence of solutions. We also investigate ways to modify the linear-system form of the equation in order to study a wider class of equations. Further, we provide a method in which the Mathieu differential equation can be generalized to be defined on an infinite fractafold, with our main focus being the fractal blow-up of the Sierpinski gasket. We discuss methods for studying the stability of solutions to this fractal differential equation and describe further results concerning properties and behavior of solutions.

1. Introduction

The Mathieu differential equation, as will be defined in Section 2, takes the form u'' + (δ + ε cos t) u = 0, where δ, ε ∈ R are fixed and u : R → R. Named after the French mathematician Émile Léonard Mathieu (1835-1890), the Mathieu differential equation originates in real-world phenomena. For example, it describes the motion of a pendulum subject to a periodic driving force. See [21] for more details.
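Although closed-form solutions are unavailable, the equation is straightforward to integrate numerically. The following Python sketch (our own illustration, not taken from the references; all names are ours) advances the state (u, u') with a classical fourth-order Runge-Kutta step:

```python
import math

def mathieu_rk4(delta, eps, u0=1.0, v0=0.0, t_end=2*math.pi, n=4000):
    """Integrate u'' + (delta + eps*cos t)u = 0 with classical RK4;
    the state is (u, u') and the state at t_end is returned."""
    f = lambda t, y: (y[1], -(delta + eps*math.cos(t))*y[0])
    h = t_end/n
    t, y = 0.0, (u0, v0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = f(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = f(t + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
        y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        t += h
    return y

# Sanity check: with eps = 0 and delta = 1 the solution with u(0) = 1,
# u'(0) = 0 is cos t, so after one period u returns to 1.
u_end, v_end = mathieu_rk4(1.0, 0.0)
```

With ε = 0 the equation reduces to u'' + δu = 0, so the comparison with cos t gives a quick sanity test of the integrator.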
One topic pertaining to the Mathieu differential equation which has been much researched is the stability of solutions. See [2,10,16,17,28]. Readers may also read [21,18] for a brief introduction to this area. Most of the existing literature concerns the parameter space, i.e. the space of δ-ε pairs, whose choices can drastically alter the behavior of solutions. This paper presents research results and new phenomena both on the parameter space and on solutions.
On the other hand, another area of mathematics which has been actively researched in recent years is analysis on fractals, based on J. Kigami's construction of Laplacians on post-critically finite self-similar sets. See [13,15,14,22]. It is of interest to define and study the Mathieu differential equation on fractal domains, and one suitable domain is the infinite Sierpinski gasket developed by R.S. Strichartz [24]. Existing research on the infinite Sierpinski gasket and on other 'fractafolds' can be found in [12,20,23,25,26,27]. In particular, we will use the important result of A. Teplyaev [27] concerning the spectrum of the Laplacian on the infinite Sierpinski gasket. We will explore a way to generalize the Mathieu differential equation so that it is defined on the infinite Sierpinski gasket.
This paper is organized as follows. In Section 2, we provide relevant background on the Mathieu differential equation, give definitions which will be used throughout the remainder of the paper, and describe some of the methods used to study the relationship that the values of δ and ε have to the solutions of the equation; in doing so we will discuss 'transition curves' and the 'truncation method' used to study solutions to the Mathieu differential equation. In Section 3 we present a number of proofs for theorems pertinent to the Mathieu differential equation and provide justification for the methods presented in Section 2. Some of our methods and proofs are modified versions from [1,9]. In Section 4 we provide a discussion of computational results in studying the Mathieu differential equation, including results related to the asymptotic behavior of transition curves and the convergence of solutions. In Section 5 we give an overview of the Sierpinski gasket (SG) and provide definitions of various terms in fractal analysis, such as the fractal Laplacian and the infinite Sierpinski gasket (SG∞), to be used in the remaining sections. In Section 6 we extend the content of Sections 2, 3, and 4 to the fractal setting by explaining how the 'Mathieu differential equation defined on the real line' can be generalized to a 'Mathieu differential equation defined on the infinite Sierpinski gasket.' We describe how solutions to this generalization can be studied by considering solutions expanded as a linear combination of eigenfunctions of the fractal Laplacian. We also describe how the 'truncation method' on the line can be used to study the Mathieu differential equation on SG∞. In Section 7 we provide computational results and observations about the shape and asymptotic behavior of the transition curves for the Mathieu differential equation on SG∞ and also about the behavior of solutions.
In Section 8 we describe further research that can be done on the Mathieu differential equation and its fractal generalizations. Section 9, the Appendix, describes an alternate approach to the 'truncation method' which involves partitioning Fourier coefficients into various equivalence classes.
A website for this research has been created at http://pi.math.cornell.edu/~aac254/. We invite the reader to visit this website, as it contains plots, graphs, data, and other information gathered from the research.

2. Definitions and Methods
In this section we give formal definitions pertaining to the Mathieu differential equation on the real line and then describe the main methods used to study it.

2.1. Background & Definitions.
We begin by defining the Mathieu differential equation on the real line: it is the second-order ordinary differential equation u'' + (δ + ε cos t) u = 0, where δ, ε ∈ R are fixed and u : R → R is unknown.
Henceforth, the abbreviation 'MDE' will often be used to denote 'Mathieu differential equation.' Also, unless otherwise specified, the term 'Mathieu differential equation' (and its abbreviation), when used in Sections 2, 3, and 4, will always refer to the MDE defined on the real line, as opposed to the later sections which discuss the MDE as defined on a fractal domain.
The particular values of δ and ε chosen can drastically alter the corresponding solutions to the MDE (see [21], for example). With this in mind, we make the following definition. (a). An ordered pair (δ, ε) ∈ R² is a stable pair of values if every solution to the corresponding MDE is bounded. (b). An ordered pair (δ, ε) ∈ R² is an unstable pair of values if there exists a solution to the corresponding MDE which is unbounded.
In order to understand which values of δ and ε correspond to stable pairs and which correspond to unstable pairs, it is useful to examine the δ-ε plane given in Figure 2.1. In this figure, the horizontal axis is the δ-axis, and the vertical axis is the ε-axis. The gray shaded region of the plane corresponds to (δ, ε) pairs which are stable and is called the stable region; the white region of the plane corresponds to (δ, ε) pairs which are unstable and is called the unstable region. The solid curves and dashed curves which are either orange or black are called the transition curves and form the boundary between the stable region and the unstable region. The curves which share the same color (either orange or black) and the same format (either solid or dashed) share certain properties in common which will be discussed in Section 2.2.
There exists a systematic method for determining the stable and unstable regions shown in Figure 2.1, which we describe in Section 2.2. A useful result regarding this method, given in [21], is the following.

Theorem 2.3. A pair (δ, ε) lies on a transition curve if and only if the corresponding MDE has a nontrivial periodic solution of period 2π or 4π.

Remark 2.4. In this paper, we say f is periodic of period 2Nπ, or f is 2Nπ-periodic, if N is the smallest positive integer such that f admits the Fourier expansion f(t) = Σ_{j=0}^∞ ( a_j cos(jt/N) + b_j sin(jt/N) ).

2.2. Fourier Expansion and Matrix Form.

Let us suppose we have a periodic solution u with period 2π or 4π to the Mathieu differential equation. In this case, we can write the Fourier series expansion of u as u(t) = Σ_{n=0}^∞ ( a_n cos(nt/2) + b_n sin(nt/2) ). If we plug this Fourier series for u into the MDE, rearrange terms, and use some trigonometric identities, we find that the function u given above solves the MDE if and only if two infinite systems of linear homogeneous equations, one for the cosine coefficients {a_n} and one for the sine coefficients {b_n}, are satisfied. See [18] and [21] for more details. To solve these systems of equations, we put them into matrix form. Readers can check that the two systems of equations are collectively equivalent to four equations in matrix form, corresponding to cosine or sine coefficients with even or odd indexes. All the matrices are tridiagonal.
For the cosine coefficients with even indexes we obtain a matrix equation Ax = 0; for the sine coefficients with even indexes, Bx = 0; for the cosine coefficients with odd indexes, Cx = 0; and for the sine coefficients with odd indexes, Dx = 0. We can solve these four matrix equations separately. If either of the first two equations has a nontrivial solution in ℓ², then the MDE has a 2π-periodic solution; if either of the last two has a nontrivial solution in ℓ², then the corresponding solution of the MDE is 4π-periodic (compare with Remark 2.4). Interested readers can also read the appendix for equations in matrix form for periodic solutions with larger periods, say 2Nπ with N an arbitrary positive integer. Now we return our discussion to Figure 2.1, where we plot the stable and unstable regions. According to Theorem 2.3, the transition curves consist of (δ, ε) pairs with nontrivial 2π- or 4π-periodic solutions. Equivalently, by the above discussion, (δ, ε) lies on a transition curve if and only if at least one of the four matrix equations above has a nontrivial solution in ℓ². Since we develop four different equations in matrix form, we use different colors (orange and black) and line formats (solid and dashed) in Figure 2.1 to distinguish between the four matrix equations: • If (δ, ε) falls on a black solid line, then the equation Ax = 0 has a nontrivial solution.
• If (δ, ε) falls on a black dashed line, then the equation Bx = 0 has a nontrivial solution.
• If (δ, ε) falls on an orange solid line, then the equation Cx = 0 has a nontrivial solution.
• If (δ, ε) falls on an orange dashed line, then the equation Dx = 0 has a nontrivial solution.
Truncating each infinite matrix to its upper-left m × m block and taking the determinant yields an algebraic expression involving δ and ε. Setting each expression equal to zero, we can then plot each equation in the δ-ε plane and obtain a set of algebraic curves in the variables δ and ε. If we choose m sufficiently large, the δ-ε curves derived from the truncated matrices will be very close to the true transition curves corresponding to the infinite matrices. This statement is made precise by Ikebe et al. in [1], and we present their results in Section 3.2.
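As a concrete sketch of this procedure, the following Python snippet builds a truncation of the matrix for the even-index cosine coefficients and reads the transition values δ off as eigenvalues. The entries are our own reconstruction from the coefficient recurrence for u(t) = a_0 + a_2 cos t + a_4 cos 2t + ..., and should be treated as illustrative rather than as the authors' code:

```python
import numpy as np

def even_cosine_matrix(eps, m):
    """m x m truncation of the even-index cosine matrix, written as an
    eigenvalue problem M a = delta * a (a reconstruction: the rows encode
    delta*a_0 + (eps/2)a_2 = 0, (delta-1)a_2 + eps*a_0 + (eps/2)a_4 = 0,
    and (delta - k^2)a_{2k} + (eps/2)(a_{2k-2} + a_{2k+2}) = 0, k >= 2)."""
    M = np.diag([float(k*k) for k in range(m)])
    M += np.diag([-eps/2.0]*(m-1), 1) + np.diag([-eps/2.0]*(m-1), -1)
    M[1, 0] = -eps   # the single entry that makes the matrix nonsymmetric
    return M

# At eps = 0 the eigenvalues are exactly k^2, the transition values on the
# delta-axis; for eps > 0 they move along the transition curves.
d0 = np.sort(np.linalg.eigvals(even_cosine_matrix(0.0, 8)).real)
d1 = np.sort(np.linalg.eigvals(even_cosine_matrix(1.0, 30)).real)
```

Repeating this for a range of ε values and for all four matrices traces out the curves of Figure 2.1.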

2.3. Backward Recursion.
In addition to studying the parameter space, it is also of interest to investigate the set of periodic solutions to the MDE. Motivated by the discussions in Section 2.2, we will study the Fourier coefficients of periodic solutions. For the computational results presented in this paper, when numerically computing Fourier coefficients of solutions, instead of computing a_2, a_4, a_6, ... successively from an initial value a_0 (and similarly for the other three matrix equations) using the recursion relation from Section 2.2, we compute the coefficients by setting an initial value a_{n_0} for some large index n_0 and computing a_{n_0−2}, a_{n_0−4}, a_{n_0−6}, ... successively. The former method is called the 'forward recursion method', whereas the latter is called the 'backward recursion method.' We choose the backward recursion method over the more commonly used forward recursion method because of the instability of forward recursion, which will be discussed in Section 3.1. Also see [9] for more details on the backward recursion method.
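The backward recursion method (often called Miller's algorithm) is easiest to demonstrate on a classical example. The sketch below (our own illustration) applies it to the Bessel recurrence J_{n−1}(x) = (2n/x)J_n(x) − J_{n+1}(x), whose minimal solution is J_n(x), rather than to the Mathieu coefficient recurrence itself; the normalization uses the identity J_0(x) + 2 Σ_{k≥1} J_{2k}(x) = 1:

```python
def bessel_backward(x, n_max=30):
    """Backward ('Miller') recursion for the minimal solution of
    J_{n-1}(x) = (2n/x) J_n(x) - J_{n+1}(x): seed an arbitrary small tail,
    recur downward, then rescale using the normalization identity."""
    c = [0.0]*(n_max + 2)
    c[n_max] = 1e-30            # arbitrary seed for the tail; c[n_max+1] = 0
    for n in range(n_max, 0, -1):
        c[n-1] = (2*n/x)*c[n] - c[n+1]
    norm = c[0] + 2*sum(c[k] for k in range(2, n_max + 1, 2))
    return [v/norm for v in c[:n_max + 1]]

J = bessel_backward(1.0)   # J[n] approximates J_n(1)
```

The same pattern, run on the Mathieu coefficient recurrence with a suitable normalization, applies to the Fourier coefficients.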

3. Three-Term Recurrence Relations and Truncations of the Infinite Matrices
In this section, we provide the theoretical foundation for the methods we use. In doing so we discuss the asymptotic convergence of Fourier coefficients and provide an error estimate for the truncation method we use for approximating the transition curves. (We will extend these techniques to the fractal MDE in later sections.) In this section, we work in generality by considering infinite tridiagonal matrices

(3.1)  M, with diagonal entries γ, λ_1, λ_2, ..., superdiagonal entries β_1, β_2, ..., and subdiagonal entries α_1, α_2, ...,

where α_j, β_j, λ_j ∈ R for j ≥ 1, and γ ∈ R; further, we assume that {α_j}_{j=1}^∞ and {β_j}_{j=1}^∞ are bounded sequences and that lim_{j→∞} λ_j = ∞. Note that the matrices A, B, C, and D defined in Section 2.2 are special cases of (3.1). In addition, we always assume α_j ≠ 0 and β_j ≠ 0 for all j in this section. We will consider equations of the form

(3.2)  M c = 0,

where c = (c_1, c_2, ...)^T ∈ ℓ² := {(x_1, x_2, ...)^T : Σ_{j=1}^∞ |x_j|² < ∞} is a real sequence. Written row by row, (3.2) is the three-term recurrence

(3.3)  γ c_1 + β_1 c_2 = 0,  α_j c_j + λ_j c_{j+1} + β_{j+1} c_{j+2} = 0 (j ≥ 1).
3.1. Asymptotic Behavior of {c_j}_{j=1}^∞. Three-term recurrence relations (TTRRs) and difference equations have been well studied throughout the last century. Since (3.2) naturally gives a TTRR, we can use existing theorems to study the asymptotic behavior of the sequence {c_j}_{j=1}^∞ as j → ∞. Below we first present the well-known Poincaré Theorem (see [9]), also called the Poincaré-Perron Theorem, which describes the asymptotic behavior of solutions to general TTRRs, and then we will see how the theorem applies to the equations above.
Theorem 3.1 (Poincaré-Perron Theorem, [9]). For j ≥ 1, let c_{j+1} + b_j c_j + a_j c_{j−1} = 0 with lim_{j→∞} b_j = b and lim_{j→∞} a_j = a. Let t_1 and t_2 denote the zeros of the characteristic equation t² + bt + a = 0. Then, if |t_1| ≠ |t_2|, the difference equation has two linearly independent solutions {x_j} and {y_j} satisfying lim_{j→∞} x_{j+1}/x_j = t_1 and lim_{j→∞} y_{j+1}/y_j = t_2. If |t_1| = |t_2|, then lim sup_{j→∞} |c_j|^{1/j} = |t_1| for any nontrivial solution {c_j} of the recurrence relation.
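The conclusion of the theorem can be seen on a toy constant-coefficient recurrence (our own illustration, not from the paper):

```python
# For c_{j+1} - 5c_j + 6c_{j-1} = 0 the characteristic equation
# t^2 - 5t + 6 = 0 has roots t1 = 3 and t2 = 2; a generic solution has a
# nonzero component along the dominant root, so c_{j+1}/c_j converges to 3.
c = [1.0, 1.0]
for j in range(1, 60):
    c.append(5*c[j] - 6*c[j-1])
ratio = c[-1]/c[-2]
```

Here the explicit solution is c_j = 2·2^j − 3^j, so the ratio approaches 3 at the geometric rate (2/3)^j.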
With this theorem in mind, we consider the TTRR

(3.4)  f_j c_{j+1} + d_j c_j + g_j c_{j−1} = 0 (j ≥ 2),

where {f_j} and {g_j}, neither of which contains 0 as one of its terms, are uniformly bounded, and where |d_j| → ∞ as j → ∞. Notice that, with our assumptions, a solution of Equation (3.4) is uniquely determined by specifying the first two terms c_1 and c_2. We can derive the following proposition from the Poincaré-Perron Theorem, adapting its proof directly from the proof of the Poincaré-Perron Theorem in [9].
Proposition 3.2. Assume {f_j} and {g_j}, neither of which contains 0 as one of its terms, are uniformly bounded, and assume that |d_j| → ∞ as j → ∞. Then the TTRR (3.4) has two linearly independent solutions {x_j} and {y_j} satisfying lim_{j→∞} x_j/x_{j−1} = 0 and lim_{j→∞} |y_j/y_{j−1}| = ∞.

Proof. The proof is modified directly from the proof of the Poincaré-Perron Theorem in [9]. For convenience, we assume d_j ≠ 0 for every j, since otherwise we can pass to a tail of the sequence. Let x̂_j and ŷ_j denote suitably rescaled versions of x_j and y_j. For j sufficiently large, x̂_j does not vanish. Indeed, if x̂_{j_0} = 0 for some j_0 ∈ N, then the TTRR implies x̂_{j_0+2}/x̂_{j_0+1} = −1, which cannot happen if j_0 is sufficiently large (since x̂_j/x̂_{j−1} → 0). Similarly, for j sufficiently large, ŷ_j does not vanish: if ŷ_{j_0} = 0 for some j_0 ∈ N, then ŷ_{j_0}/ŷ_{j_0−1} = 0, which cannot happen for j_0 sufficiently large (since ŷ_j/ŷ_{j−1} → −1).
A Wronskian-type computation, multiplying each of the two recurrences by the appropriate factor and subtracting one equation from the other, completes the construction of {x_j} and {y_j}.

Let {z_j} be an arbitrary solution to the TTRR (3.4). We say {z_j} is a minimal solution of the TTRR if lim_{j→∞} z_j/w_j = 0 for every solution {w_j} that is not a scalar multiple of {z_j}; otherwise {z_j} is called a dominant solution. We can apply Proposition 3.2 to obtain the following corollary.

Corollary 3.3. (a). Any solution to Equation (3.2) is uniquely determined by c_1. (b). If c ∈ ℓ², then {c_j} is a minimal solution, and c_j decays with the asymptotic behavior c_j/c_{j−1} → 0 as j → ∞.

Proof. It is equivalent to study Equation (3.3).
(a). Given c_1 ∈ R, we can solve for c_2, c_3, ... inductively, which gives us a solution to Equation (3.3). In this way, any solution to Equation (3.3) is uniquely determined by c_1.
(b). Any solution c = {c_j}_{j=1}^∞ can be written as a linear combination of the solutions {x_j} and {y_j} given in Proposition 3.2: there exist p, q ∈ R such that c_j = p x_j + q y_j for all j. Note that if q ≠ 0 then {c_j} is a dominant solution, and if q = 0 then {c_j} is a minimal solution.

If {c_j} ∈ ℓ² then lim_{j→∞} c_j = 0, and thus {c_j} cannot be a dominant solution; hence we must have q = 0, so c_j = p x_j decays like the minimal solution.

Corollary 3.3 shows that there is a sharp contrast between the two possible types of behavior that a solution to Equation (3.2) can have. Since {α_j} and {β_j} are bounded sequences, when (δ, ε) is fixed and properly chosen, any solution {c_j} to (3.2) will converge to 0 very rapidly; otherwise, all nontrivial solutions will tend to infinity very fast.
If c 1 ∈ R is an initial value which corresponds to a minimal solution, Proposition 3.2 shows that numerically computing the solution with the forward recursion method is unstable, since a small error in computation will lead to a dominant solution, in which {c j } explodes. On the other hand, the backward recursion method can give a good approximation for a minimal solution. In [9], detailed discussions are given on this method, as well as a corresponding error estimate. We briefly state the result here.
We briefly state the result here: the relative error of the backward-recursion approximation, started at index N, can be bounded in terms of the ratios r_k = x_k/y_k for 1 ≤ k ≤ N, where {x_k} is the minimal and {y_k} the dominant solution; these ratios tend to 0 rapidly.

3.2. The Truncation Method. In this part, we introduce the truncation method for determining which (δ, ε) pairs yield nontrivial solutions of (3.2). We start from the observation that, for the matrices arising from the MDE, (3.2) can be rewritten as (T(ε) − δI)c = 0, where I is the identity matrix and the shorthand T(ε) denotes an infinite tridiagonal matrix depending only on ε. Clearly, for a fixed ε, equation (3.2) has nontrivial solutions if and only if δ is an eigenvalue of T(ε). The eigenvalue problem for infinite tridiagonal matrices has been widely studied. In [1], there is a result on the error between eigenvalues of the truncated matrices and those of the corresponding infinite matrix. We state the result in Theorem 3.5 below.
Here we consider an infinite symmetric tridiagonal matrix T with diagonal entries d_1, d_2, ... and off-diagonal entries f_1, f_2, ..., where d_n → ∞ as n → ∞ and {f_n} is bounded. Denote by T_n its n × n truncation, i.e., the upper-left n × n block of T.

Theorem 3.5 ([1]). Let T and T_n (n ≥ 1) be given as above.
(a). T has pure point spectrum.
(b). If δ is a given simple eigenvalue of T, then there exists, for each n ∈ N, an eigenvalue l_n of T_n such that the sequence {l_n}_{n=1}^∞ satisfies l_n → δ as n → ∞. For any such sequence, the error l_n − δ is given by an expression involving f_n and the tail entries c_n, c_{n+1} of an eigenvector c = (c_1, c_2, ...)^T ∈ ℓ² corresponding to δ; in particular, the error decays as fast as the tail of c.
We can extend the above result to nonsymmetric tridiagonal matrices.
Theorem 3.6. Let T be an infinite tridiagonal matrix with diagonal entries d_1, d_2, ..., superdiagonal entries f_1, f_2, ..., and subdiagonal entries g_1, g_2, ..., where d_n → ∞ as n → ∞ and {f_n}_{n=1}^∞ and {g_n}_{n=1}^∞ are bounded and positive. Let T_n be the n × n truncation of T. If δ is a given simple eigenvalue of T, then there exists, for each n ∈ N, an eigenvalue l_n of T_n such that the sequence {l_n}_{n=1}^∞ satisfies l_n → δ as n → ∞, and the error estimate takes the same form as in Theorem 3.5.

Proof. Since f_n, g_n > 0, we may choose κ_1 = 1 and κ_{n+1} = κ_n √(f_n/g_n), and set K = diag(κ_1, κ_2, ...). We observe that S := K T K^{−1} is a symmetric tridiagonal matrix with the same diagonal entries d_n and with off-diagonal entries √(f_n g_n). It is clear that S is a self-adjoint operator from ℓ² to ℓ² with pure point spectrum, by Theorem 3.5(a). In addition, from Corollary 3.3 we deduce that there is a one-to-one correspondence between eigenvectors of T and eigenvectors of S in ℓ². Indeed, if c ∈ ℓ² is an eigenvector of T, then Kc = diag(κ_1, κ_2, ...)c is an eigenvector of S, and Kc ∈ ℓ² by Corollary 3.3; for the same reason, if c̃ ∈ ℓ² is an eigenvector of S, then K^{−1}c̃ = diag(κ_1^{−1}, κ_2^{−1}, ...)c̃ is an eigenvector of T in ℓ². Now, using Theorem 3.5, we deduce that there is a sequence of eigenvalues of the truncations S_n that converges to the eigenvalue δ of S, and hence of T, since T and S have the same eigenvalues. In addition, noticing that, for each n ∈ N, S_n has the same eigenvalues as T_n, we see that there is a sequence of eigenvalues of T_n which converges to δ. Lastly, the error estimate comes from applying Theorem 3.5 to S with the eigenvector c̃ = Kc.
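The diagonal conjugation used in the proof is easy to verify numerically on finite truncations. The following Python sketch (our own illustration) symmetrizes a nonsymmetric tridiagonal matrix with positive off-diagonals and checks that the spectrum is unchanged:

```python
import numpy as np

def tridiag(d, f, g):
    """Finite tridiagonal matrix: diagonal d, superdiagonal f, subdiagonal g."""
    T = np.diag(np.asarray(d, dtype=float))
    for i in range(len(d) - 1):
        T[i, i+1] = f[i]
        T[i+1, i] = g[i]
    return T

def symmetrized(d, f, g):
    """K T K^{-1} with kappa_{n+1} = kappa_n*sqrt(f_n/g_n) is symmetric
    tridiagonal with off-diagonal entries sqrt(f_n * g_n)."""
    off = np.sqrt(np.asarray(f)*np.asarray(g))
    return tridiag(d, off, off)

d = [float(n*n) for n in range(1, 8)]
f = [0.5]*6                     # superdiagonal (positive)
g = [1.0] + [0.5]*5             # subdiagonal (positive)

ev_nonsym = np.sort(np.linalg.eigvals(tridiag(d, f, g)).real)
ev_sym = np.linalg.eigvalsh(symmetrized(d, f, g))   # same spectrum
```

Since the two matrices are similar, their eigenvalues coincide, which is exactly the reduction used in the proof.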

Observations and Analysis for the Mathieu Differential Equation on the Line
The Mathieu differential equation has been an important topic in differential equations due to its numerous real-world applications. However, most of the existing work on the MDE focuses on theoretical analysis of the stable and unstable regions of the δ-ε plane, such as the asymptotic behavior of the transition curves, and there has not been much computational work on the stability curves or on the solutions to the Mathieu differential equation. In this section, in addition to describing some of our theoretical results, we will explain our computational results on the intricate shape of the δ-ε curves and on the convergence behavior of the solutions.

4.1. The δ-ε Plot. In this part, we use the truncation method introduced in Sections 2 and 3 to study the δ-ε curves in more detail.
As discussed above, the transition curves are found via 2π-periodic and 4π-periodic Fourier expansions of solutions. However, one might wonder what the curves would be if the same process were undertaken for Fourier expansions of larger periods, say 8π, 16π, 32π, .... The answer is that curves corresponding to expansions of larger periods in fact "fill in" the stable regions whose boundary is obtained using the 2π- and 4π-periodic expansions. See Figure 4.1 for an illustration.
In fact, this "fill in" property remains valid for all larger periods, and a proof is given below.

Proposition 4.1. The set D of stable pairs (δ, ε) for which the MDE admits a nontrivial 2^k π-periodic solution for some positive integer k is dense in the stable region.

Proof. First, if (δ, ε) is a stable pair, then by the discussion given in Chapter VI of [21] we can find two linearly independent solutions u_1 and u_2 of the MDE of the form u_1(t) = e^{iθt} p_1(t) and u_2(t) = e^{−iθt} p_2(t), with p_1 and p_2 2π-periodic, for some θ ∈ [0, 1/2]. θ is uniquely determined by the pair (δ, ε), so we may view θ = θ(δ, ε) as a function from the stable region to [0, 1/2].
In fact, by the Floquet Theorem, there exists a 2 × 2 matrix A depending only on (δ, ε) such that, for any solution u of the MDE, the vector (u(t + 2π), u'(t + 2π)) is obtained from (u(t), u'(t)) by applying A. So (δ, ε) is a stable pair only if the two eigenvalues of A both have norm 1, in which case they can be written as e^{i2πθ} and e^{−i2πθ}. Readers can find details in [21]. Clearly, A depends continuously on the parameter (δ, ε), which shows that θ = θ(δ, ε) is a continuous function on the stable region. Next, we fix θ_0 ∈ [0, 1/2] and ε ∈ R, and show that only countably many δ's can be found such that θ_0 = θ(δ, ε). Recall that for each such δ we have a solution u_1 to the corresponding MDE so that e^{−iθ_0 t} u_1(t) is a 2π-periodic function. We have the Fourier series expansion e^{−iθ_0 t} u_1(t) = Σ_{j=−∞}^∞ c_j e^{ijt}, which implies u_1(t) = Σ_{j=−∞}^∞ c_j e^{i(j+θ_0)t}. Plugging this into the Mathieu differential equation, we get the following equation for the coefficients: (δ − (j + θ_0)²) c_j + (ε/2)(c_{j−1} + c_{j+1}) = 0. From this we can see that there are only countably many values of δ such that the corresponding equation has a nontrivial solution in ℓ². Now, we prove the proposition by contradiction. Suppose D is not dense in the stable region. Then there is a ball in the δ-ε plane where θ does not take any dyadic rational value, noticing that when θ is a dyadic rational the MDE has 2^k π-periodic solutions u_1, u_2 for some k. Since θ is continuous, its image of the ball is an interval; an interval containing no dyadic rational must degenerate to a point, so θ equals a constant θ_0 on that ball. However, for this fixed θ = θ_0, and a fixed ε in the ball, there are only countably many possible values for δ, which cannot fill in the ball. This gives a contradiction.
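The Floquet-matrix characterization of stability also gives a practical numerical test. The sketch below (our own illustration, not the code used for Figure 2.1) builds the monodromy matrix over one period and applies the trace criterion:

```python
import math

def is_stable(delta, eps, n=4000):
    """Floquet stability test: build the monodromy matrix A by integrating
    two independent solutions of u'' + (delta + eps*cos t)u = 0 over one
    period [0, 2*pi] with RK4. Since the equation has no damping, det A = 1,
    and (delta, eps) is a stable pair exactly when |trace A| < 2."""
    f = lambda t, y: (y[1], -(delta + eps*math.cos(t))*y[0])
    def rk4_step(t, y, h):
        k1 = f(t, y)
        k2 = f(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = f(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = f(t + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
        return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
                y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    h = 2*math.pi/n
    cols = []
    for y in [(1.0, 0.0), (0.0, 1.0)]:
        t = 0.0
        for _ in range(n):
            y = rk4_step(t, y, h)
            t += h
        cols.append(y)
    return abs(cols[0][0] + cols[1][1]) < 2.0   # |u1(2pi) + u2'(2pi)| < 2

# For eps = 0: delta = 0.5 gives bounded trigonometric solutions (stable),
# while delta = -0.5 gives exponentially growing solutions (unstable).
```

Scanning this test over a grid of (δ, ε) pairs reproduces the stable and unstable regions of Figure 2.1.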
Next we will discuss the asymptotic behavior of the stable and unstable regions. Existing works including [10], [16], and [28] cover two important observations: (i) First, as illustrated in Figure 4.2, the width of each stable band gets thinner and thinner as |ε| tends to ∞. In [28], it is shown that the width of the kth stable band decreases exponentially on the range |ε| ≥ k² as |ε| → ∞.
(ii) Secondly, for each fixed ε, the width of the kth unstable band becomes smaller as k increases. An estimate is given in [2] and [10] by different methods, both yielding the same asymptotic bound on d_k, where d_k is the width of the kth unstable band with ε fixed.
Since the study of the type of convergence in the second observation is comparatively complete, we focus on the first observation, which concerns the width of the stable bands as |ε| → ∞. In particular, we study the first ten stable bands by computing the width of each band from ε = 0 to ε = 50 in increments of 0.1. A graph for each stable band, plotting width versus ε, is shown below. We then use the curve-fitting toolbox in MATLAB to estimate fitting curves relating ε and width in each stable band; please see the corresponding figures for the results.

The last topic we discuss before moving on to solutions of the MDE is how to approximate and simplify the irregular boundary between the stable and unstable regions for practical use. We can see that most (δ, ε) pairs in the first and fourth quadrants which lie below the line ε = δ and above the line ε = −δ are stable, while most (δ, ε) values in those quadrants outside that region form unstable pairs. For any w > 0, let R_w := {(δ, ε) ∈ R² : 0 < ε < w and −δ < ε < δ}. We are interested in determining the probability, for various values of w, that a δ-ε pair is stable, given that it lies in the triangle R_w.
So, we numerically compute the probabilities for different choices of w. Notice that the line ε = δ (as well as the line ε = −δ) alternately passes through stable and unstable regions. So, we define w_i to be the δ-coordinate of the point of intersection between the line ε = δ and the right boundary of the ith unstable region; also, for each i, we define P_i to be the probability that a δ-ε pair is stable, given that it is in R_{w_i}. Readers can see Figure 4.5 for an illustration of how we divide the regions into triangles.
P_i can also be considered as the probability of getting a stable (δ, ε) pair within the triangular regions, which characterizes how well the irregular boundaries can be approximated by the lines ε = ±δ. We list P_i for 1 ≤ i ≤ 18 in Table 1.
In addition, we also obtain a surprisingly good fitting curve of the form P_i = (a i + b)/(i + c), with a = 0.9938, b = −0.1424, c = 0.3608. See Figure 4.6 for details.

4.2. Solutions of the Mathieu Differential Equation.
Now we discuss the periodic solutions of the Mathieu differential equation. In Corollary 3.3, we showed that the sequence {c_j} of Fourier coefficients of a periodic solution corresponding to a δ-ε pair on a transition curve converges to 0 rapidly. Hence, given a δ-ε pair on a transition curve, we can calculate the coefficients of the corresponding solution explicitly from a finite truncation of the Fourier series and use those coefficients to plot solutions to the MDE with very high accuracy.
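As a Python sketch of this procedure (our own illustration; the even-cosine matrix entries are our reconstruction of the matrix form from Section 2.2), one can take an eigenvector of a truncated matrix as the Fourier coefficients and check that the resulting truncated series nearly solves the MDE:

```python
import numpy as np

def even_cosine_matrix(eps, m):
    # Reconstructed even-cosine coefficient matrix: rows encode
    # delta*a_0 + (eps/2)a_2 = 0, (delta-1)a_2 + eps*a_0 + (eps/2)a_4 = 0,
    # (delta - k^2)a_{2k} + (eps/2)(a_{2k-2} + a_{2k+2}) = 0 for k >= 2.
    M = np.diag([float(k*k) for k in range(m)])
    M += np.diag([-eps/2.0]*(m-1), 1) + np.diag([-eps/2.0]*(m-1), -1)
    M[1, 0] = -eps
    return M

eps, m = 1.0, 30
w, V = np.linalg.eig(even_cosine_matrix(eps, m))
i = int(np.argmin(w.real))
delta, a = float(w.real[i]), V[:, i].real   # point on the lowest transition curve

def u(t):
    # Truncated Fourier series u(t) = sum_k a_{2k} cos(k t).
    return sum(a[k]*np.cos(k*t) for k in range(m))

def residual(t):
    # u'' + (delta + eps*cos t)u should vanish up to truncation/rounding error.
    upp = sum(-(k*k)*a[k]*np.cos(k*t) for k in range(m))
    return upp + (delta + eps*np.cos(t))*u(t)
```

Because the coefficients decay so quickly (Corollary 3.3), even a modest truncation gives a residual at the level of rounding error.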
One question one might investigate is as follows. Suppose one fixes a transition curve and considers periodic solutions corresponding to various (δ, ε) points along that transition curve. How do properties of solutions change as ε varies?
[Figure 4.5. The triangular regions corresponding to R_{w_1}, R_{w_2}, R_{w_3}, and R_{w_4}; the green area is the stable region, and the shaded area is the unstable region.]

To be more precise, we need some new notation. Recall from Section 2.2 that, when solving for 2π- or 4π-periodic solutions, we can rewrite the MDE as a system of linear equations for the Fourier coefficients, which collectively can be rewritten as four independent
homogeneous matrix equations, with corresponding matrices A, B, C, D. The transition curves consist of the δ-ε pairs at which one of these matrices degenerates. In this way, a transition curve can be labeled with a matrix and an integer k ≥ 1, so that for each δ-ε pair on the transition curve, δ is the kth smallest real number for which the matrix degenerates at the fixed ε. We want a more convenient way to label these transition curves, and to achieve this we proceed as follows. Fix a transition curve, let ε = 0 on the transition curve, and solve the corresponding equation in matrix form. According to the discussion in Section 2.2, we can then construct a periodic solution to the MDE, and the solution is clearly an eigenfunction of d²/dt². For example, if we consider the transition curve labeled by A and k = 2, the above process gives cos t as a solution. In this way, a transition curve can be labeled with an eigenfunction of d²/dt². In light of this, we define p(ϕ, ε) to be the point in the δ-ε plane with prescribed ε-coordinate which lies on the transition curve corresponding to the eigenfunction ϕ. As shown in Section 2, the transition curves can be organized into four different classes, made up of points {p(sin kt, ε)}, {p(cos kt, ε)}, {p(sin((2k+1)t/2), ε)}, and {p(cos((2k+1)t/2), ε)} in our new notation. It is natural to study them separately.
First we consider the class {p(sin kt, ε)}. Because of the symmetry of the transition curves across the δ-axis, it suffices to consider only positive values of ε. In fact, if u is a solution corresponding to the pair (δ, ε), then u(t + π) is a solution corresponding to the pair (δ, −ε), since −cos t = cos(t + π). The normalized solutions are plotted in Figures 4.7-4.9, where by 'normalized solution' we mean a solution u for which max_{t∈R} u(t) = 1. In plotting the graphs we have used the particular values k = 1, 2, 3 and ε = 5, 10, 20, 40, 80, 160; 15 × 15 truncated matrices are used to compute the solutions. The horizontal axis is the t-axis, and the vertical axis is the u-axis.

We now fit a curve to the t-coordinate of these relative maxima in [0, π] as a function of ε. We again pick the same three transition curves and take ε from 1 to 200, in increments of 1. The relationship between the t-coordinate of the maximal points and ε on the three curves is shown in Figure 4.10, and the fitting curve is of the form t = (aε + b)/(ε² + cε + d). For the cases of p(sin 2t, ε) and p(sin 3t, ε), does the same convergence behavior occur if one considers the sequence of the first minima for t > 0? We carry out the computation and show the results in Figure 4.11, where the same kind of fitting curve also works. In the case of p(sin 3t, ε), the same observation applies to the second maxima for t > 0. See Figure 4.12 for details.
One can also consider the behavior of the u-coordinate of these various sequences of local extrema. See Figure 4.13 for the behavior of the u-coordinate of the first minimal points on the curve p(sin 2t, ε), where we can fit the points well with a rational function of the form (aε² + bε + c)/(ε² + bε + c). For the curve p(sin 3t, ε), there are three local extrema on each solution, and we also carry out the computation for the u-coordinates of the first minimal and the second maximal values. The fitting curve is a little more complicated, of the form u = (aε³ + bε² + cε + d)/(ε³ + dε² + eε + g). See Figure 4.14.
Next, we turn to the solutions on the curves p(cos kt, ε), and we run experiments for k = 0, 1, 2. See Figures 4.15, 4.16, and 4.17 for the results.
We can see that the shapes of these curves differ from those in Figures 4.7, 4.8, and 4.9. The first curve p(cos 0t, ε) is of course very special, as it corresponds to the constant functions. It turns out the solution is always positive on this curve, as a consequence of Sturm's Theorem [17]. Also, the solutions attain their minimal values at π (see the corresponding figure).

The solutions on the second curve are a little more complicated. As shown in Figure 4.16, the minimal value is achieved at t = π in [0, 2π] for ε = 0 and ε = 1, but as ε becomes larger, the minimum point splits into two different minimum points. Another thing we highlight here is that the maximal absolute value does not occur at 0 or π, but at some other points. So, as usual, we normalize the solutions so that their minimum value is −1, and we compute the values of the solutions at the two other local peaks at 0 and π, which is shown in Figure 4.20. It is also interesting to see in Figure 4.21 that the local maximal value at 0 drops rapidly near ε = 0 and then increases quickly.
The solutions on the curve p(cos 3t, ε) are even more complicated. Still, we observe the splitting of one peak: when ε = 0, 1, 2, 3, 4, 5 we see only one maximum point, but as ε becomes larger, we see two maximum points. We also compute the t-coordinate of one of these split maximum points; see the corresponding figure for details. We also run experiments on the t-coordinate of the first minimum points and the u-coordinate of the peaks on the curve p(cos 2t, ε).

4.3. Explanation of the Behavior of the Solutions. Before the end of this section, we give a proof of the convergence of the t-coordinates of the peaks. The interesting point is that these behaviors are closely related to the asymptotic behavior of the transition curves, i.e. how δ behaves as ε increases to ∞. There are several works on this problem; here we refer to [17] for one version. Readers can find other discussions in [28].
Theorem 4.2 ([17]). On each fixed transition curve, δ = −ε + O(√ε) as ε → +∞.

Notice that in the above theorem we fix a transition curve and let ε → ∞, which is exactly the way we study the behavior of the solutions. Now, we apply the theorem to derive some interesting facts that we have observed in our experiments.

Lemma 4.3. Consider a fixed transition curve, and for large ε > 0 let u be a corresponding nontrivial periodic solution. Let c_ε be the zero of δ + ε cos t in (0, π). Then u has no local extremum in [c_ε, π).

Proof. Theorem 4.2 guarantees the existence of c_ε when ε is large. In addition, using the Taylor expansion of cos t locally at 0, we obtain lim_{ε→∞} c_ε = 0, and in fact c_ε = O(ε^{−1/4}). We will show u'(t) ≠ 0 on [c_ε, π), which immediately implies the lemma. First, we show there is no t ∈ [c_ε, π) such that u(t)u'(t) > 0, by contradiction. Without loss of generality, we assume u(t_0) > 0 and u'(t_0) > 0 for some t_0 ∈ [c_ε, π). Then we can show that u(t) > 0 and u'(t) > 0 for any t ∈ [t_0, π]. If this is not true, we can take t_1 = inf{t ∈ [t_0, π] : u(t) ≤ 0 or u'(t) ≤ 0}; then u(t) and u'(t) are both increasing on [t_0, t_1], since u'' = −(δ + ε cos t)u and −(δ + ε cos t) > −(δ + ε cos c_ε) = 0 on (c_ε, π). As a consequence, u(t_1) > 0 and u'(t_1) > 0, which contradicts the definition of t_1, noticing that both u and u' are continuous. As a result, we see that u(t) > 0 and u'(t) > 0 for any t ∈ [t_0, π]. On the other hand, u satisfies the boundary condition u(π) = 0 or u'(π) = 0, since u has a Fourier series expansion in one of the following forms (see [21]): Σ_{j=0}^∞ c_j cos jt, Σ_{j=1}^∞ c_j sin jt, Σ_{j=0}^∞ c_j cos((2j+1)t/2), or Σ_{j=0}^∞ c_j sin((2j+1)t/2). This gives a contradiction. Using the above observation, we can see u'(t) ≠ 0 for any t ∈ [c_ε, π). Otherwise, if u'(t) = 0 for some t ∈ [c_ε, π), we would have u(t) ≠ 0, so that u(t + h)u'(t + h) > 0 for some small h > 0, contradicting the previous paragraph.

Proposition 4.4. (a). Consider a fixed transition curve characterized by the solution u(t) = sin kt or u(t) = cos((k + 1/2)t). Fix a large ε > 0, and let u be a nontrivial periodic solution. Then all the local extrema of u lie in ∪_{n∈Z} (−c_ε + 2nπ, c_ε + 2nπ), where c_ε tends to 0 as ε → ∞ (c_ε depends on k).

(b). Consider a fixed transition curve characterized by the solution u(t) = cos kt or u(t) = sin((k + 1/2)t). Fix a large ε > 0, and let u be a nontrivial periodic solution. Then all the local extrema of u lie in πZ ∪ (∪_{n∈Z} (−c_ε + 2nπ, c_ε + 2nπ)), where c_ε tends to 0 as ε → ∞ (c_ε depends on k). In addition, there is a critical value α ≥ 0 given by the equation δ(α) = α, where we view δ as a function of ε on the fixed transition curve. Then π is a local maximum of |u| if 0 ≤ ε < α, and π is a local minimum of |u| if ε > α. When ε = α and the curve is not characterized by cos 0t = 1, π is a local maximum of |u|.
Proof. (a) Lemma 4.3 shows that there are no local extrema in [c_ε, π). By symmetry, we do not have local extrema in ∪_{n∈Z} ([2nπ + c_ε, (2n+1)π) ∪ ((2n+1)π, 2(n+1)π − c_ε]). It remains to show that (2k + 1)π, k ∈ Z, are not local extrema in these cases. It is enough to show that π is not a local extremum.
Note that in either case of part (a), a nontrivial 2π- or 4π-periodic solution u corresponding to a point (δ, ε) on a transition curve is written as a sum of sines or as a sum of cosines: u(t) = Σ_{k=0}^∞ a_k sin(kt) or u(t) = Σ_{k=0}^∞ a_k cos((k + 1/2)t). It is easy to see that u(π) = 0. We therefore conclude that u′(π) ≠ 0, since otherwise u would be the trivial solution. As a result, π is not a local extremum.
(b) We only need to understand the behavior of u at π. The idea is essentially the same as in the proof of Lemma 4.3, where we look at the sign of δ + ε cos t. It is also clear that u(π) ≠ 0 and u′(π) = 0 in this case.
In addition, δ − ε is a strictly decreasing function of ε on (0, ∞), so we can find a critical point α such that δ − ε > 0 on (0, α) and δ − ε < 0 on (α, ∞). To show that δ − ε is strictly decreasing, notice that δ is the kth smallest value such that the MDE has a 2π- or 4π-periodic solution, which is equivalent to saying that δ is the kth smallest eigenvalue of the self-adjoint operator −∆ − ε cos t on L²(R/4πZ). Let ε′ > ε > 0, and let δ, δ′ be the corresponding eigenvalues. Then, using the Rayleigh quotient, we can compare δ′ and δ, where M_{k,ε} denotes the space spanned by the smallest k eigenfunctions of −∆ − ε cos t and the infimum in the comparison is taken over all k-dimensional subspaces of dom ∆ on R/4πZ.
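The eigenvalue characterization above is easy to check numerically. The sketch below is our own illustration, not from the paper; the truncation size and the sample values of ε are arbitrary choices. It approximates −∆ − ε cos t on R/4πZ in a truncated Fourier basis and verifies that δ − ε decreases in ε along the lowest transition curve:

```python
import numpy as np

def transition_deltas(eps, n_modes=100, k=5):
    """Smallest k values of delta for which u'' + (delta + eps*cos t)u = 0
    has a 2pi- or 4pi-periodic solution, approximated as eigenvalues of
    -d^2/dt^2 - eps*cos(t) on R/4piZ in the Fourier basis exp(i*n*t/2)."""
    n = np.arange(-n_modes, n_modes + 1)
    H = np.diag((n / 2.0) ** 2)
    # cos t couples exp(i n t/2) with exp(i (n +/- 2) t/2), coefficient 1/2
    off = -0.5 * eps * np.ones(2 * n_modes - 1)
    H = H + np.diag(off, 2) + np.diag(off, -2)
    return np.sort(np.linalg.eigvalsh(H))[:k]

# delta(eps) - eps should be strictly decreasing in eps on the lowest curve,
# in line with the Rayleigh quotient argument above.
vals = [transition_deltas(e)[0] - e for e in (1.0, 5.0, 10.0, 20.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

The truncation is justified because cos t only couples neighboring Fourier modes, so the low eigenvalues converge quickly as the number of modes grows.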

The Sierpinski Gasket and the Fractal Laplacian
In this section we give preliminary definitions and results concerning analysis on fractals and provide a basic introduction to the 'infinite' Sierpinski gasket. This serves to set up the discussion of the generalization of the MDE to the fractal setting described in Section 6.

Sierpinski Gasket. Consider three contraction mappings F_0, F_1, F_2 : R² → R², each of contraction ratio 1/2 (for instance F_i(x) = (x + p_i)/2, where p_0, p_1, p_2 are the vertices of an equilateral triangle). Then {F_i}_{i=0,1,2} forms an 'iterated function system' (see page 133 of [7]). By Theorem 9.1 in [7], there exists a unique nonempty compact set K ⊂ R² such that K = ∪_{i=0}^{2} F_i(K) (see [11]). Then K is defined to be the Sierpinski gasket, often denoted SG. See Figure 5.1.

In studying SG it is useful to use its graph approximations, constructed as follows. Let q_0, q_1, and q_2 be the unique fixed points of F_0, F_1, and F_2, respectively. Define the vertex set V_0 := {q_0, q_1, q_2}; we refer to V_0 as the boundary of SG. We further define vertex sets V_n ⊂ SG for n ≥ 1 inductively by V_n := ∪_{i=0}^{2} F_i(V_{n−1}), and let V_* := ∪_{n=0}^∞ V_n be the set of all vertices. Note that V_* is dense in SG.

For an m-tuple w = (w_1, w_2, ⋯, w_m), where w_j ∈ {0, 1, 2} for each 1 ≤ j ≤ m, we define F_w := F_{w_1} ∘ F_{w_2} ∘ ⋯ ∘ F_{w_m} and say that w is a word of length |w| = m. With this, an edge relation ∼_m on V_m can be introduced as follows: for x, y ∈ V_m we say x ∼_m y if and only if there exist a word w of length m and unequal indices i, j ∈ {0, 1, 2} such that x = F_w(q_i) and y = F_w(q_j). This relation gives a sequence of graphs Γ_m approximating SG, with vertex set V_m and edge set E_m = {{x, y} : x ∼_m y}. See Figure 5.1 for Γ_0, Γ_1, Γ_2. The discrete Laplacian ∆_m on Γ_m is then given by ∆_m u(x) := Σ_{y ∼_m x} (u(y) − u(x)) for x ∈ V_m \ V_0.
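As an illustration (ours, not from the source; the placement of the triangle is an arbitrary choice), the vertex sets V_m can be generated directly from the iterated function system, and their cardinalities satisfy |V_m| = (3^{m+1} + 3)/2:

```python
import itertools
import numpy as np

# Corners of an (arbitrarily placed) equilateral triangle; q_i is the fixed
# point of F_i(x) = (x + q_i)/2.
q = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])]

def F(i, x):
    return (x + q[i]) / 2.0

def vertices(m):
    """V_m: the images F_w(q_i) over all words w of length m."""
    pts = set()
    for w in itertools.product(range(3), repeat=m):
        for corner in q:
            x = corner
            for i in reversed(w):  # F_w = F_{w_1} o ... o F_{w_m}
                x = F(i, x)
            pts.add(tuple(np.round(x, 12)))
    return pts

# |V_m| = (3^(m+1) + 3) / 2
for m in range(4):
    assert len(vertices(m)) == (3 ** (m + 1) + 3) // 2
```

Rounding the coordinates merges points that are reached through different words, which is what produces the identifications along edges of neighboring cells.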
Then we define the continuous Laplacian ∆ by ∆u(x) := (3/2) lim_{m→∞} 5^m ∆_m u(x). If the limit above converges uniformly on V_* \ V_0 to a continuous function, we say u ∈ dom ∆. In this case, we extend ∆u to all of SG, including points not in V_* \ V_0, by continuity (recall that V_* \ V_0 is dense in SG). The continuous Laplacian on SG is the analog of the usual 'second-order derivative' on the line. We mention now the following proposition, derived by O. Ben-Bassat, R.S. Strichartz, and A. Teplyaev in [3], which will be of interest in the next section. A function u satisfying −∆u = λu for some number λ is called an eigenfunction of ∆ with eigenvalue λ.
If a function u : SG → R satisfies u| V 0 = 0, then we say that u satisfies the Dirichlet boundary condition, and the eigenvalue problem −∆u = λu, u| V 0 = 0 is called the Dirichlet eigenvalue problem. A function u satisfying both of these equations is called a Dirichlet eigenfunction of ∆.
Similarly, we have a notion of a 'Neumann condition' as follows. Define the normal derivative ∂_n u(q_i) for i ∈ {0, 1, 2} by ∂_n u(q_i) := lim_{m→∞} (5/3)^m (2u(q_i) − u(F_i^m q_{i+1}) − u(F_i^m q_{i−1})), where we identify indices modulo 3. If a function u : SG → R satisfies ∂_n u(q_i) = 0 for i = 0, 1, and 2, then we say that u satisfies the Neumann boundary condition, and the eigenvalue problem −∆u = λu, ∂_n u(q_i) = 0 for i = 0, 1, 2, is called the Neumann eigenvalue problem. A function u satisfying both of these equations is called a Neumann eigenfunction of ∆.
Dirichlet and Neumann eigenfunctions on SG are the analog of sine and cosine functions on the line.

Spectral Decimation. A method for explicitly computing all possible eigenvalues and eigenfunctions of ∆ was introduced in [8] using a process called spectral decimation.
Below we briefly discuss some results from spectral decimation we will use. Readers can find detailed discussion on spectral decimation in [8] and [22].
Proposition 5.2. Suppose λ_m ∉ {2, 5, 6} and λ_{m−1} = λ_m(5 − λ_m). Then a λ_{m−1}-eigenfunction of ∆_{m−1} extends to a λ_m-eigenfunction of ∆_m, and conversely the restriction to V_{m−1} of a λ_m-eigenfunction of ∆_m is a λ_{m−1}-eigenfunction of ∆_{m−1}.

If we want to extend an eigenfunction of ∆_{m−1} with eigenvalue λ_{m−1} to an eigenfunction of ∆_m using the proposition above, we have two choices for λ_m (except when λ_{m−1} = 6, as we will see below), namely the two branches of the inverse of λ ↦ λ(5 − λ). For convenience, define ψ_+(x) := (5 + √(25 − 4x))/2 and ψ_−(x) := (5 − √(25 − 4x))/2.

The numbers 2, 5, 6 are called forbidden eigenvalues, and it turns out that each Dirichlet eigenfunction of ∆ comes from a 2-, 5-, or 6-eigenfunction of ∆_{m_0} for some m_0 ≥ 0, while all the Neumann eigenfunctions come from 5- or 6-eigenfunctions of ∆_{m_0} for some m_0 ≥ 0. If u is a Dirichlet or Neumann eigenfunction, we call m_0 the generation of birth and u|_{V_{m_0}} the initial function.
With a fixed generation of birth m_0 and initial function f, we can extend the function level by level. We first fix a sequence {ε_m}_{m=1}^∞ of signs ±, with only finitely many +, and then let λ_m = ψ_{ε_{m−m_0}}(λ_{m−1}) for m > m_0 inductively. The function then extends to an eigenfunction of ∆, with the corresponding eigenvalue λ = (3/2) lim_{m→∞} 5^m λ_m (the limit exists because all but finitely many of the ε_m equal −). In fact, all the eigenfunctions with a given generation of birth and initial function can be generated by the above recipe. Also, if the initial eigenvalue is 6, we can only choose ε_1 = +, as ψ_−(6) = 2 is a forbidden eigenvalue.

Now, suppose we fix a generation of birth m_0, fix an initial function on V_{m_0}, and let E be the set of finite sequences of ± which do not end in −, where ∅ denotes the empty sequence; each e ∈ E determines the full sequence {ε_m} whose remaining entries are all −. Define ψ_e = ψ_{e_1} ψ_{e_2} ⋯ ψ_{e_{|e|}}, where |e| is the length of e; in particular, ψ_∅(x) = x. Let Ψ(x) = (3/2) lim_{l→∞} 5^l ψ_−^l(x). Then we can describe the following possibilities:
1. If λ_{m_0} = 2 or 5, then all the possible eigenvalues of ∆ having generation of birth m_0 are given by the λ_e, e ∈ E.
2. If λ_{m_0} = 6, then all the possible eigenvalues of ∆ having generation of birth m_0 are given by the λ_e for those e ∈ E beginning with +.
Here we remark that when we use the notation λ_e, we always assume that we have a fixed generation of birth m_0 and initial eigenvalue λ_{m_0}. There is a method, given in [5], for arranging in increasing order the set of eigenvalues arising from a fixed generation of birth and initial eigenvalue. The idea is to translate each finite sequence in E into a binary number. The process is as follows.
Given e ∈ E of length |e| = n, let d(e) be the integer whose binary (base-2) expansion is determined by the entries of e; then λ_e is the d(e)-th smallest eigenvalue, so that, for instance, a sequence e with d(e) = 105 corresponds to the 105th smallest eigenvalue. Note that, in particular, λ_{e=∅} is the smallest eigenvalue, λ_{e=(+)} is the 2nd smallest eigenvalue, and λ_{e=(+,+)} is the 3rd smallest eigenvalue. We can, of course, reverse this process so that, given an integer d, we can find the e ∈ E corresponding to the d-th smallest eigenvalue. Another fact is that the sequence of eigenvalues {λ_{n,m_0}}_{n=1}^∞, with λ_{1,m_0} < λ_{2,m_0} < λ_{3,m_0} < ⋯, corresponding to a fixed generation of birth m_0 and a fixed initial eigenfunction grows according to the power law n^{log 5/log 2}. In addition, if we define the eigenvalue counting function ρ_{m_0} : R_+ → N by ρ_{m_0}(x) = #{e ∈ E : λ_{e,m_0} ≤ x}, then we have the following proposition concerning the asymptotic behavior of ρ_{m_0}.

Proof. To avoid the high multiplicity of eigenfunctions corresponding to the same eigenvalue, we fix an initial eigenfunction instead of just fixing an initial eigenvalue. In our setting, the eigenfunction is unique for each eigenvalue, so we only need to count the number of eigenvalues.
In fact, if we fix n and look at b_{l′} ≤ x ≤ b_l for some l, l′ ∈ {+, −}^n such that d(+l′) = d(+l) + 1, we obtain the corresponding estimate for any k ≥ n + 1. In addition, an analogous estimate holds for e^c ≤ x ≤ min_{l∈{+,−}^n} a_l, where l = (−, −, −, ⋯) ∈ {+, −}^n, and similarly for max_{l∈{+,−}^n} b_l ≤ x ≤ 5e^c, where l = (+, −, −, ⋯) ∈ {+, −}^n. The above discussion shows that ρ_{m_0}(5^n x)/(2^n x^{log 2/log 5}) converges to some function g(log x) uniformly on c ≤ log x ≤ c + log 5 as n → ∞. We can also see from the estimates that g is continuous with g(c) = g(c + log 5). In fact, for each x we can find a small neighborhood on which, for n > k and any y in the neighborhood, the variation of ρ(5^k y)/(2^k x^{log 2/log 5}) is bounded by 2^{−n} x^{−log 2/log 5}; the estimate obviously carries over to the limit function.
We can extend g to be periodic on R, and the theorem follows immediately.

Infinite Sierpinski Gasket.
In the last part of this section, we introduce the infinite Sierpinski gasket (SG ∞ ). It is a particular example of fractal blow-ups introduced in [24] by R. S. Strichartz.
Recall that the Sierpinski gasket is defined by the self-similar identity SG = ∪_{i=0}^{2} F_i(SG), where each F_i is a contraction mapping R² → R² of contraction ratio 1/2, as defined earlier in this section. Following [24], the infinite Sierpinski gasket is constructed as follows: fix an infinite word (w_1, w_2, w_3, ⋯) ∈ {0, 1, 2}^∞ and let SG_∞ := ∪_{m=0}^∞ F_{w_1}^{−1} F_{w_2}^{−1} ⋯ F_{w_m}^{−1}(SG). Since F_{w_{m+1}}(SG) ⊂ SG, this is an increasing union of copies of SG blown up by factors 2^m.
The Laplacian ∆_∞ on SG_∞ can be defined locally through graph approximations in the same way as on SG. In [27], Sierpinski lattices were introduced to describe the infinite graphs that approximate SG_∞: one defines the appropriate vertex sets and says that x ∼_{(m)} y if F_{K,M}(x) ∼_{m+M} F_{K,M}(y) for some M; the resulting infinite graph is called a Sierpinski lattice. We can still define the discrete Laplacian on the lattices by the same formula as before, and the continuous Laplacian ∆_∞ is then defined by the same renormalized limit. One of the most important results on SG_∞ is A. Teplyaev's theorem (see [27]) below, showing that the Laplacian ∆_∞ on SG_∞ has pure point spectrum, which means the eigenfunctions of the Laplacian form a complete set.
Here µ denotes the Hausdorff measure on SG_∞. The spectrum of ∆_∞ is pure point (i.e., the eigenfunctions of ∆_∞ form a basis of L²(SG_∞, µ)), and each eigenvalue has infinite multiplicity. The set of eigenfunctions with compact support is complete in L²(SG_∞, µ).
As a result of the theorem, spectral decimation still works on SG_∞. Each of the eigenfunctions of ∆_∞ is an extension of an eigenfunction of ∆^{(m_0)} with eigenvalue 5 or 6 by spectral decimation. The only difference here is that the generation of birth m_0 takes values in Z instead of N. All the results of the previous section concerning eigenvalues arising from the same generation of birth and initial function, including Proposition 5.3, still hold on SG_∞.

Extending the Mathieu Differential Equation to Infinite Fractafolds
Now we are ready to discuss how we will define the Mathieu differential equation on an infinite fractafold.
6.1. Defining the Fractal MDE. Recall that the MDE, defined on the real line, is given by d²u/dt² + (δ + ε cos t)u = 0, where u is a function from R to R.
The first questions we wish to address are "What should the fractal space be that replaces the line?" and "What should 'periodic function' mean?". We choose the infinite Sierpinski gasket SG_∞ as our domain. By "periodic function," we mean a function on SG_∞ which is identical on all the copies of SG of the same size. In particular, if we are given a function u on SG with the boundary conditions ∂_n u(q_l) = 0 for l = 0, 1, 2 and u(q_0) = u(q_1) = u(q_2), (6.1) we can get a periodic function on SG_∞ by translating the function to the other copies. But what about the differential equation? The first step in finding a fractal analog of the MDE defined on the line is to replace d²/dt² with the fractal Laplacian ∆, since ∆ is the analog of the second-derivative operator. Now, what to do with the ε(cos t)u term? Recall from Theorem 5.1 above that the multiplication of two nonconstant functions in dom ∆ may result in a function which is not in dom ∆. Thus, we cannot simply replace cos t by a function in dom ∆, and so we must find a suitable analog of multiplication by cosine.
Recall that, in the line case, we sought solutions u in the form of a Fourier expansion in terms of cosines and sines. Note, however, that cosines and sines on the line are Neumann and Dirichlet eigenfunctions, respectively, of d 2 dt 2 . Hence, we will adopt a form of Mathieu's equation which is compatible with functions u that can be written as a linear combination of Neumann eigenfunctions, motivated by Equation (6.1).
We choose to only consider functions which have Neumann eigenfunction expansions of the form u = Σ_j c_j ϕ_j, (6.2) where each ϕ_j is a Neumann eigenfunction defined as follows. Fix a generation of birth, a series (5-series or 6-series), and an initial eigenfunction ϕ such that ∆_{m_0} ϕ(x) = λ_{m_0} ϕ(x) for all x ∈ V_{m_0} and ϕ(q_0) = ϕ(q_1) = ϕ(q_2). With the spectral decimation algorithm introduced in Section 5.3, we obtain a set of Neumann eigenfunctions ϕ_i of ∆ extended from ϕ, and we write λ_i for the eigenvalue corresponding to ϕ_i. We keep the order λ_1 < λ_2 < λ_3 < ⋯ as in Section 5.3. Readers can find more details on Neumann eigenfunctions on SG in [22]. The reason we fix a common initial Neumann eigenfunction is that, if we do not fix such an initial eigenfunction and instead consider the set of all Neumann eigenfunctions of ∆, then we cannot order their eigenvalues in a discrete way as above.
To this end, suppose we have a function u on SG which can be written as a linear combination of Neumann eigenfunctions as in Equation 6.2, where the ϕ_j satisfy the definition in the previous paragraph. Then, for any ϕ_j with j ≥ 2, we define multiplication by cosine by cos · ϕ_j := (1/2)ϕ_{j−1} + (1/2)ϕ_{j+1}. The motivation for this definition comes from the fact that, for the line case, the Neumann eigenfunction cos(jt) obeys the following trigonometric identity when multiplied by cos t: cos t cos(jt) = (1/2)cos((j − 1)t) + (1/2)cos((j + 1)t).
As for j = 0 and j = 1, we consider two possibilities, each of which will be described in Section 6.2. In addition, we will consider two further variant versions in the following subsection.
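For the interior indices j ≥ 2, the resulting linear system for the coefficients c_j is tridiagonal, exactly as in the line case: (δ − λ_j)c_j + (ε/2)(c_{j−1} + c_{j+1}) = 0. As a rough illustration (our own; it ignores the version-dependent boundary rows for j = 0 and j = 1 and uses a plain truncation), the transition values of δ can be approximated as eigenvalues of a truncated tridiagonal matrix:

```python
import numpy as np

def transition_points(lams, eps):
    """delta values for which the truncated tridiagonal system
    (delta - lam_j) c_j + (eps/2)(c_{j-1} + c_{j+1}) = 0
    has a nontrivial solution, i.e. the eigenvalues of
    diag(lams) - (eps/2)(S + S^T)."""
    n = len(lams)
    M = np.diag(np.asarray(lams, dtype=float))
    off = -(eps / 2.0) * np.ones(n - 1)
    M = M + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(M))

# with lam_j = j^2 (the line eigenvalues) and eps = 0 we recover delta = j^2
lams_line = [j ** 2 for j in range(1, 40)]
assert np.allclose(transition_points(lams_line, 0.0)[:3], [1.0, 4.0, 9.0])
```

For the fractal versions one would feed in the spectral decimation eigenvalues λ_j in place of j²; the boundary rows then distinguish the versions.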
We now consider a variant of the Mathieu differential equation, ∆u + (δ + εA)u = 0, where A is the operator analogous to multiplication by cosine. For j ≥ 2, define A · ϕ_j := α_{j−1} ϕ_{j−1} + β_j ϕ_{j+1}, with α_j and β_j satisfying the relations (6.9) for j ≥ 1. One can solve system 6.9 to find α_j and β_j explicitly (6.10). Note that 0 < α_j < 1 and 0 < β_j < 1 for all j, and hence the sequences {α_j} and {β_j} are each uniformly bounded.
The motivation for the setup above is that, if we plug in λ_j = j², which corresponds to the eigenvalues on the line, then Equation 6.10 yields α_j = β_j = 1/2. For j = 0 and j = 1, we still consider two cases, which give us Version 3 and Version 4 as follows.
We let Version 3 be the system displayed below, where α_j and β_j are as given in Equation 6.10.
This matrix is reminiscent of the cosine matrices for the line case. Note that the first term in the second row has an extra factor of 2.
The recursion relation for the coefficients c_j follows accordingly. We let Version 4 be the system displayed below, where α_j and β_j are as given in Equation 6.10.
This matrix is reminiscent of the sine matrices for the line case. Again, note that the eigenvalues start from λ 1 instead of λ 0 .

Observations and Analysis for the Fractal Mathieu Differential Equation
In this section we parallel our results presented in Section 4 by giving a discussion of the asymptotic behavior of the transition curves (defined below) for the fractal MDE and of the convergence of solutions. We also describe a phenomenon wherein the SG_∞ transition curves form a prominent 'diamond' pattern. As can be seen from Figures 7.1-7.8, in each plot there is a 'diamond' pattern formed between adjacent transition curves. That is, the curves seem to 'bounce' off each other before parting in separate directions. This pattern does not appear in the transition curves for the Mathieu differential equation on the line; this is one way in which the transition curves for SG_∞ differ from those for the line.
We give a short proof here concerning the asymptotic behavior of the transition curves for Versions 1 and 2. The first matrix B is real symmetric with continuous spectrum [−1, 1]. To see this, we only need observe that B = U^{−1} · (cos t) · U, where U : ℓ² → L²([0, π]) is defined by Uc = Σ_{n=1}^∞ c_n sin nt. The second matrix is not symmetric, but we can study the matrix κ^{−1}(A − εB)κ = A − εκ^{−1}Bκ, where κ is a diagonal matrix, as we did in the corresponding proof in Section 4. After the transformation, we obtain a matrix which is also real symmetric with continuous spectrum [−1, 1]. Without loss of generality, we assume in what follows that B is symmetric.
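The claim that B has spectrum filling [−1, 1] can be seen concretely on truncations. The following sketch is our own illustration, not from the source; it uses the classical fact that the n × n tridiagonal Toeplitz matrix with 1/2 on the off-diagonals has eigenvalues cos(kπ/(n + 1)):

```python
import numpy as np

def B(n):
    """Truncation of the 'multiplication by cos t' matrix: zeros on the
    diagonal, 1/2 on the two off-diagonals."""
    off = 0.5 * np.ones(n - 1)
    return np.diag(off, 1) + np.diag(off, -1)

# eigenvalues of the n x n truncation are cos(k*pi/(n+1)), k = 1..n; they
# fill out the continuous spectrum [-1, 1] as n grows
n = 200
evs = np.sort(np.linalg.eigvalsh(B(n)))
expected = np.sort(np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
assert np.allclose(evs, expected)
assert -1 < evs[0] < -0.999 and 0.999 < evs[-1] < 1
```

This is exactly the finite-dimensional shadow of the unitary equivalence B = U^{−1}(cos t)U: the eigenvalues are samples of cos on a uniform grid.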
In the following proof, we only consider the case ε → +∞; the other half is then immediate by symmetry.
On the k-th curve, we know that δ is given by a Rayleigh-quotient characterization in which the infimum is taken over all k-dimensional subspaces M_k of dom A. From this characterization it is easy to obtain a lower bound for δ. Next, we give an upper bound. For any l < 1, we can find a k-dimensional subspace N_k ⊂ dom A such that −1 < B|_{N_k} < l. In fact, observing that B has continuous spectrum [−1, 1], we can always find a k-dimensional subspace Ñ_k ⊂ ℓ² such that −1 < B|_{Ñ_k} < l. But Ñ_k is not necessarily contained in dom A. So we pick a large L and define N_k = {c = (c_1, c_2, ⋯, c_L, 0, 0, ⋯) : there exists c̃ ∈ Ñ_k such that c_j = c̃_j for all 1 ≤ j ≤ L}.
Obviously, N_k ⊂ dom A, and −1 < B|_{N_k} < l as long as L is large enough.
Then we get the corresponding upper bound for δ. Combining the upper and lower bounds of δ, we obtain −1 ≤ lim inf δ/ε ≤ lim sup δ/ε ≤ −l for any l < 1. Thus lim_{ε→+∞} δ/ε = −1. By symmetry, we also get lim_{ε→−∞} δ/ε = 1.

7.2. Solutions of the Mathieu Differential Equation. Now we investigate solutions to the fractal MDE.
From Corollary 3.3 we deduce that, for ε ≠ 0, a nontrivial solution c = (c_1, c_2, ...) ∈ ℓ² (or c = (c_0, c_1, c_2, ...) ∈ ℓ²) to any of the four systems M_i x = 0 (i = 1, 2, 3, 4) in Section 6.2 has rapidly decaying coefficients. Hence solutions to any of the four versions of the fractal MDE may be well approximated by the sum of just the first few terms in the series, and such an approximation will yield only a small error relative to the actual infinite-series solution.
Recall that in Section 4 we investigated, for the MDE on the line, how properties of solutions change if one fixes a transition curve and considers periodic solutions corresponding to various (δ, ε) points along that transition curve. We will consider the same question for the SG ∞ transition curves for the fractal MDE as well.
We investigate this question by first studying the solutions plotted in Figures 7.10-7.29. In constructing the solutions we choose eigenfunctions of generation of birth 2 and of initial eigenvalue 5 or 6, arising from either of the initial functions shown in Figure 7. From the plots, it is evident that these normalized solutions seem to converge as ε tends to infinity along the first transition curve.
The second way in which we investigate the question is by observing the behavior of the locations of relative maxima of solutions as ε traverses a transition curve. Here we again choose to study (δ, ε) pairs and corresponding solutions on the first transition curve for each version. See Figures 7.30-7.37. This is analogous to our study of the relative maxima of solutions to the MDE on the line in Section 4.2. Each of these figures contains two images indicating the positions of relative maxima: the image on the left shows how the positions of the maxima move as ε becomes larger, and the image on the right is a top view of the left one, in which the locations of the peaks can be seen clearly. We make several observations: • The first thing we can see is that the movement of the peaks is not large, and we do not observe the peaks converging to the boundary.
• The second observation is that the 5-series and the 6-series behave quite differently. We can observe jumps of peaks in all the versions for the 5-series, but the number of peaks is usually 3. A special case is Figure 7.36, where many peaks occur when ε is quite large. For the 6-series, by contrast, we observe many more peaks when ε is very small, but most of them disappear when ε is large. So far, we do not have an explanation for these behaviors.

Further Research
Now we discuss further research questions that can be investigated in this area.
(i) Asymptotic behavior of SG_∞ transition curves. One could further investigate the asymptotic behavior of the SG_∞ transition curves. For the line case, we have Theorem 4.2, due to W.S. Loud, which gives the asymptotics of δ along the k-th transition curve as ε → +∞. One could investigate whether estimates of a similar form hold on SG_∞. This would include extending Proposition 7.1 to Versions 3 and 4 of the MDE on SG_∞.

(ii) Other modifications of MDE matrix
Further research could investigate different matrix versions of the fractal MDE, aside from Versions 1-4 presented in Section 6.

(iii) Other Fractal Domains
One could investigate how the Mathieu differential equation could be extended to other infinite fractafolds. Such infinite fractafolds may or may not be based on SG.
(iv) The Hill Equation. The Mathieu differential equation is a special case of the so-called Hill differential equation, given by d²u/dt² + (δ + ε f(t))u = 0, where f(t) is an arbitrary periodic function. Readers can see [21,16,17,28] for details. Further research could investigate ways to extend, for various choices of f(t), the Hill differential equation to fractal domains.
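As a pointer for such investigations on the line, stability of Hill-type equations for a given (δ, ε, f) can be tested numerically via Floquet theory: the equation is stable when the trace of the monodromy matrix over one period lies in (−2, 2). The following is a rough sketch of our own (the step count is an arbitrary choice):

```python
import math

def monodromy_trace(f, delta, eps, T=2 * math.pi, steps=4000):
    """Trace of the monodromy matrix of u'' + (delta + eps*f(t))u = 0 over
    one period T, computed with classical RK4.  By Floquet theory the
    equation is stable when |trace| < 2."""
    def rhs(t, y):
        u, v = y
        return (v, -(delta + eps * f(t)) * u)

    def integrate(y):
        h = T / steps
        t = 0.0
        y = list(y)
        for _ in range(steps):
            k1 = rhs(t, y)
            k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
            k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
            k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
            y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2)]
            t += h
        return y

    col1 = integrate((1.0, 0.0))   # solution with u(0)=1, u'(0)=0
    col2 = integrate((0.0, 1.0))   # solution with u(0)=0, u'(0)=1
    return col1[0] + col2[1]

# sanity check against the Mathieu case f = cos with eps = 0:
# u'' + delta*u = 0 gives trace = 2*cos(2*pi*sqrt(delta))
tr = monodromy_trace(math.cos, delta=0.25, eps=0.0)
assert abs(tr - 2 * math.cos(2 * math.pi * 0.5)) < 1e-6
```

Any periodic f can be substituted for math.cos, which is what makes this a convenient experimental tool for the Hill setting.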

Appendix
In this appendix we derive equations in matrix form for solutions to the MDE on the line which have period 2N π, N ∈ N. We will use a similar procedure as in Section 2 and will obtain tridiagonal matrices.
Let u be a solution with period 2Nπ. We can write the Fourier expansion of u as u(t) = Σ_{j=0}^∞ a_j cos(jt/N) + Σ_{j=1}^∞ b_j sin(jt/N). Plugging this Fourier series into the Mathieu differential equation, we obtain two infinite systems of linear homogeneous equations for the cosine and sine coefficients, respectively, as follows: for the cosine coefficients,

δa_0 + (ε/2)a_N = 0,
(δ − j²/N²)a_j + (ε/2)(a_{N−j} + a_{N+j}) = 0 if 1 ≤ j ≤ N − 1,
(δ − 1)a_N + (ε/2)(2a_0 + a_{2N}) = 0,
(δ − j²/N²)a_j + (ε/2)(a_{j−N} + a_{j+N}) = 0 if j ≥ N + 1,

and for the sine coefficients,

(δ − j²/N²)b_j + (ε/2)(b_{N+j} − b_{N−j}) = 0 if 1 ≤ j ≤ N − 1,
(δ − 1)b_N + (ε/2)b_{2N} = 0,
(δ − j²/N²)b_j + (ε/2)(b_{j−N} + b_{j+N}) = 0 if j ≥ N + 1.

We could immediately write these equations in matrix form, but the result is unwieldy. So we make a further classification of the coefficients, just as in Section 2.2, where the coefficients were separated into two classes: those with even indices and those with odd indices.
A natural criterion is that coefficients appearing in the same equation should lie in the same class. For example, a_0 and a_N should be in the same class. Based on this idea, we say that a_j and a_{j′} (or b_j and b_{j′}) are in the 'same class' if and only if there is a finite sequence a_j = a_{j_0}, a_{j_1}, a_{j_2}, ⋯, a_{j_L} = a_{j′} such that a_{j_l} and a_{j_{l+1}} appear in the same equation for all 0 ≤ l ≤ L − 1. For example, a_0 and a_{3N} are in the same class, since a_0 and a_N appear in the same equation, a_N and a_{2N} appear in the same equation, and a_{2N} and a_{3N} appear in the same equation. It is easy to check that being in the 'same class' is an equivalence relation on the set {a_j : j ≥ 0} and on the set {b_j : j ≥ 1}. This partitions {a_j : j ≥ 0} into [N/2] + 1 different equivalence classes, and likewise partitions {b_j : j ≥ 1} into [N/2] + 1 different equivalence classes. So we can write the equations for different equivalence classes separately. Below, we discuss all the possible cases, first for the cosine coefficients {a_j}. As we will see, the case of the sine coefficients {b_j} is similar.
There are three possible forms that the matrix corresponding to a particular equivalence class can take.
1) The first possible equivalence class is {a_0, a_N, a_{2N}, ⋯}, with a corresponding tridiagonal matrix equation. Note that this is just the equation Ax = 0 of Section 2.2. If there is a nontrivial solution to this equation, then the MDE has a 2π-periodic solution, since u(t) = Σ_{l=0}^∞ a_{lN} cos(lt) solves the MDE.
2) The second form that an equivalence class can take is {a_k, a_{N−k}, a_{N+k}, a_{2N−k}, a_{2N+k}, ⋯} for some 1 ≤ k < N/2. If there is a nontrivial solution to the corresponding equation, then u(t) = Σ_{l=−∞}^∞ a_{k+lN} cos((k + lN)t/N) is a (2Nπ/gcd(N, k))-periodic solution to the MDE, where gcd(N, k) denotes the greatest common divisor of N and k.
In particular, we get a 2N π-periodic solution to the MDE from the equation above if and only if k and N are coprime. Thus, the matrices of the second form with k and N coprime yield 2N π-periodic solutions.
3) The third form that an equivalence class can take is {a_{N/2}, a_{3N/2}, a_{5N/2}, ⋯}. This form can only occur if N is even. The corresponding equations can be written in matrix form, and the matrix is exactly the matrix C of Section 2.2, which yields 4π-periodic solutions.
The matrix forms for the sine coefficients are quite similar to those for the cosine coefficients. For solutions of period 2Nπ, N ≥ 3, the equations for the sine coefficients have an analogous form, where k can be any integer such that k and N are coprime.
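The equivalence-class structure described in this appendix can be verified programmatically. The following sketch is our own; it forms the classes with a small union-find over the index couplings j ∼ j + N and j ∼ N − j and checks that there are [N/2] + 1 of them:

```python
def coefficient_classes(N, J=60):
    """Number of equivalence classes among the cosine-coefficient indices
    {0, ..., J} under the couplings j ~ j+N (all j) and j ~ N-j
    (1 <= j <= N-1), computed with a union-find."""
    parent = list(range(J + 1))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for j in range(J + 1 - N):
        union(j, j + N)
    for j in range(1, N):
        union(j, N - j)
    return len({find(i) for i in range(J + 1)})

# the number of classes is [N/2] + 1, as claimed above
for N in (2, 3, 4, 5, 6, 7):
    assert coefficient_classes(N) == N // 2 + 1
```

The cutoff J only needs to be a few multiples of N, since every index j is coupled to j mod 2N after finitely many steps.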