Underlying one-step methods and nonautonomous stability of general linear methods

We generalize the theory of underlying one-step methods to strictly stable general linear methods (GLMs) solving nonautonomous ordinary differential equations (ODEs) that satisfy a global Lipschitz condition. We combine this theory with the Lyapunov and Sacker-Sell spectral stability theory for one-step methods developed in [34,35,36] to analyze the stability of a strictly stable GLM solving a nonautonomous linear ODE. These results are applied to develop a stability diagnostic for the solution of nonautonomous linear ODEs by strictly stable GLMs.

1. Introduction. 'What do multistep methods approximate?' This question has long beleaguered researchers studying multistep discretizations of ordinary differential equation (ODE) initial value problems (IVPs) because the local truncation error of a k-step multistep method depends on the previous k steps. For time-independent (autonomous) ODEs, two classic papers, [25] and [21], answer this question by applying invariant manifold theory for maps to relate the numerical solution produced by a multistep method to the flow of the differential equation it approximates. The focus of this paper is to use the spirit and technique of [25] and [21] together with invariant manifold theory for time-dependent (nonautonomous) difference equations to develop a stability theory for general linear methods (GLMs) solving nonautonomous linear ODEs.
Our contribution in this work is twofold. We first apply invariant manifold theory for nonautonomous difference equations to prove Theorem 3.1, which applies to strictly stable GLMs solving a nonautonomous ODE that satisfies a global Lipschitz condition in its state variables. This theorem states that for sufficiently small step-sizes there exist a time-independent linear change of coordinates and a unique continuous function whose graph defines a one-step method (called the underlying one-step method) with local truncation error of the same order as the GLM. The one-step method is a globally exponentially attractive, invariant manifold of the discrete-time system resulting from the time-independent change of variables. Theorem 3.1 generalizes the technique of characterizing the approximation properties of a GLM by its underlying one-step method to ODEs that are nonautonomous.
The second contribution of this paper is to use Theorem 3.1 and the Lyapunov and Sacker-Sell spectral stability theory for one-step methods solving nonautonomous ODE IVPs developed in [34,35,36] to prove Theorem 3.3. Theorem 3.3 states that for all sufficiently small step-sizes the numerical solution by a strictly stable GLM of a uniformly or non-uniformly exponentially stable nonautonomous linear ODE is exponentially stable. While our analysis is unable to show uniform exponential stability even if the ODE is uniformly exponentially stable, Theorem 3.3 still provides a way of analyzing the numerical stability of time-dependent linear ODEs that may fail to satisfy the hypotheses of AN- and B-stability theory (for an example of such an ODE see equation (3) below). Subsequently, we apply Theorem 3.3 to prove Proposition 1, showing that a strictly stable GLM approximating a uniformly exponentially stable trajectory of a nonlinear ODE will produce a (nonuniformly) exponentially stable numerical solution. The theoretical results are used to develop a Lyapunov-exponent-based stability diagnostic to determine when a strictly stable GLM fails to produce a decaying numerical solution to a linear ODE whose Lyapunov or Sacker-Sell spectrum is bounded above by zero.
The use of invariant manifold theory to characterize the approximation properties of a multistep method by an associated one-step method was pioneered in [25] and [21]. The results of [25] were extended to GLMs in [38] using the invariant manifold theory for maps developed in [37]. Understanding the properties of the underlying one-step method of a GLM is critical for understanding the evolution of the numerical solution (see for example the papers [22] and [13] as well as the book [23]). We extend existing theoretical techniques for invariant manifold reduction of GLMs for autonomous differential equations to nonautonomous equations by employing the invariant manifold theory for nonautonomous differential and difference equations (see [1,2,4,5]). We remark here that we can prove the existence of an underlying one-step method for strictly stable GLMs solving nonautonomous ODEs by applying the standard technique of converting a nonautonomous ODE to an autonomous ODE in an extended phase space of one higher dimension (see Section 4.2 of [34] for the special case of strictly stable linear multistep methods). However, we give an example below (Equation (1)) that shows that this type of reduction excludes nonautonomous ODEs satisfying the hypotheses of the theory developed in this paper.
The stability of the numerical solution of an ODE IVP by a multistep method is a challenging and important problem dating back at least to the investigations by Dahlquist in [10,11,12]. For time-dependent trajectories the well-established stability theories (e.g. AN-stability, B-stability, algebraic stability) give conditions on a GLM so that, with no step-size restriction, it preserves the asymptotic decay of a trajectory that is uniformly decaying. This restricts the analysis to implicit methods, and there is no obvious analog of linear stability domains for time-dependent problems. In this paper we combine the Lyapunov and Sacker-Sell spectral stability theory for one-step methods developed in [34,35,36] with the theoretical results on underlying one-step methods of GLMs solving nonautonomous ODEs developed herein to characterize the stability of a strictly stable GLM solving a nonautonomous linear ODE whose Lyapunov or Sacker-Sell spectrum is bounded above by zero.
We now give an example of a nonautonomous ODE which, when viewed as an autonomous ODE in one higher dimension, does not satisfy the hypotheses of the theory for underlying one-step methods of GLMs solving autonomous ODEs. Consider the scalar ODE

ẋ = ax + tanh(t^2), a ∈ R. (1)

The function f(x, t) := ax + tanh(t^2) is analytic, bounded in t for each fixed x, and satisfies the global in space Lipschitz condition |f(y, t) − f(x, t)| ≤ |a||y − x| for all x, y, t ∈ R. The ODE (1) therefore satisfies the hypotheses of Theorem 3.1. Using the standard substitution τ(t) = t and τ̇ = 1 we can view the nonautonomous scalar ODE (1) as the two-dimensional autonomous ODE

ẋ = ax + tanh(τ^2), τ̇ = 1. (2)

The right-hand side of the ODE (2) is not Lipschitz since tanh(τ^2) is not. Thus the ODE (2) fails to satisfy the hypotheses of the theories developed in [25] and [21].

We motivate the development of our nonautonomous stability theory for GLMs with the following example. Consider the two-dimensional linear ODE

ẋ = A(t)x, (3)

whose coefficient matrix A(t) is built from sin(ω(t)) and cos(ω(t)) with parameters b_2 < b_1 < 0, a_1, a_2 > 0, and ω ∈ C^2((0, ∞)), solved by the BDF2 method with constant step-size h > 0:

x_{n+2} − (4/3)x_{n+1} + (1/3)x_n = (2/3)h f(x_{n+2}, t_{n+2}). (4)

The BDF2 method is a 2-step and single-stage strictly stable GLM that is AN- and B-stable and has local truncation error of order 2. The ODE (3) does not satisfy the one-sided Lipschitz estimates of B-stability theory nor does it satisfy the hypotheses of AN-stability theory. However, there exists K > 0 so that every solution x(t) of (3) satisfies ‖x(t)‖ ≤ K‖x(s)‖ e^{b_1(t−s)} where t ≥ s and ‖·‖ is some norm on R^2. We show that given any step-size h > 0, there is a suitable choice of parameters such that BDF2 can produce an exponentially growing solution to (3). Let I_2 denote the 2 × 2 identity matrix, 0_2 denote the 2 × 2 matrix of zeros, {x_n}∞_{n=0} denote the numerical solution with 0 ≠ x_0 ∈ R^2, and let X_n = (x_n^T, x_{n+1}^T)^T. Let h > 0 be such that I_2 − hA(t) is invertible for all t ≥ 0 and suppose that α_i := a_i + b_i > 0 for i = 1, 2 and ω(t) is such that ω̇(nh) = 2π/h for n ≥ 0.
We then have that the resulting transfer matrix relating X_{n+1} to X_n has an eigenvalue with modulus greater than 1 for all sufficiently large β, and it follows that ‖X_n‖ → ∞ exponentially fast as n → ∞ for some initial value X_0. Our main stability result (Theorem 3.3) follows from the Lyapunov and Sacker-Sell stability theory for one-step methods developed in [34,35,36]. These results use the local truncation error of a method to characterize its stability. This confounds the concepts of accuracy and stability, which have traditionally been considered separate topics in the analysis of initial value problem solvers. The reason for blurring the separation between these concepts is based on classical ideas. Consider a nonautonomous complex scalar test equation

ẋ = λ(t)x. (5)

Suppose that we solve (5) with a method M that has linear stability region D_M. If the Sacker-Sell spectrum of (5) lies to the left of zero, then zero is a uniformly exponentially stable equilibrium of (5). Under this assumption it is possible that for some step-size h > 0 there exists a sequence {t_n}∞_{n=0} ⊂ [t_0, ∞), where t_n = t_0 + nh, such that hλ(t_n) ∉ D_M for all n ≥ 0. It is shown in [34,36] that the coefficients λ(t) of the test problems for (3) are approximately b_i + a_i cos(t), i = 1, 2. The reason BDF2 failed to produce a decaying solution to (3) is that hλ(t) ≈ h(a_i cos(t) + b_i) can cross the boundary of the stability region of BDF2 at the origin infinitely often, and the step-size must be restricted to ensure that the average sign of λ(t) is accurately approximated or, equivalently, that the stability spectra of the numerical method accurately approximate the stability spectra of the differential equation it is solving. We show in Theorem 3.4 in Section 3.2 that no Runge-Kutta or strictly stable linear multistep method can produce a decaying solution to every uniformly exponentially stable test equation (5) without a step-size restriction.
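The destabilization mechanism just described can be reproduced in a scalar sketch. The coefficient λ(t) = a cos(2πt/h) + b below is an illustrative analogue chosen by us (not the paper's two-dimensional example (3)): the exact solution decays like e^{bt} since the time-average of λ is b < 0, but BDF2 only samples λ at grid points, where λ = a + b > 0, and therefore grows exponentially.

```python
import math

# Scalar sketch of resonant destabilization: lam(t) = a cos(2 pi t / h) + b
# with a + b > 0 > b (an assumed illustrative choice, not the paper's (3)).
a, b, h, N = 2.0, -1.0, 0.1, 400

def lam(t):
    return a * math.cos(2 * math.pi * t / h) + b

def exact(t):
    # exp of the antiderivative of lam: b t + (a h / (2 pi)) sin(2 pi t / h)
    return math.exp(b * t + a * h / (2 * math.pi) * math.sin(2 * math.pi * t / h))

# BDF2: x_{n+2} - (4/3) x_{n+1} + (1/3) x_n = (2/3) h lam(t_{n+2}) x_{n+2}
x = [exact(0.0), exact(h)]                  # start-up values from the exact solution
for n in range(N - 1):
    t_next = (n + 2) * h
    x.append(((4/3) * x[n + 1] - (1/3) * x[n]) / (1 - (2/3) * h * lam(t_next)))

print(abs(x[-1]), exact(N * h))             # numerical blow-up vs. exact decay
```

Because λ(t_n) = a + b = 1 at every grid point, BDF2 behaves exactly as it would on ẋ = x and its solution grows, while the exact solution has decayed to roughly e^{-40} by t = 40.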
The remainder of this paper is organized as follows. In Section 2 we introduce some definitions and background material on the approximation of Lyapunov and Sacker-Sell spectral intervals based on smooth QR decompositions of fundamental matrix solutions and the associated nonautonomous stability theory for one-step methods. In Section 3.1 we prove an existence theorem (Theorem 3.1) for underlying one-step methods of strictly stable GLMs. In Section 3.2 we apply Theorem 3.1 to prove Theorem 3.3 which relates the stability of a strictly stable GLM solving a nonautonomous linear ODE to the Lyapunov and Sacker-Sell spectrum of its underlying one-step method. In Section 4 we present the results of two experiments showing how the theory developed in Section 3 can be used to develop a Lyapunov exponent based stability diagnostic for strictly stable GLMs solving linear ODEs. The paper is concluded with some final remarks in Section 5.

2. Preliminaries. For the remainder of this work we let ‖·‖ be a norm on R^d and use the same symbol for the induced matrix norm. We may sometimes drop the explicit t dependence of matrices and functions when their time dependence is clear from the context. Whenever we use the word stability we are referring to time-dependent Lyapunov stability. Consider the well-posed ODE IVP

ẋ = f(x, t), t > τ_0, x(t_0) = x_0, (6)

with f sufficiently smooth and t_0 > τ_0. We consider numerical solutions of (6) by a fixed step-size, k-step, r-stage general linear method

G_n = h(C ⊗ I_d)F_n + (U ⊗ I_d)X_n,
X_{n+1} = h(D ⊗ I_d)F_n + (V ⊗ I_d)X_n, (7)

where h > 0 is the size of the time step (the step-size), t_n = t_0 + nh, I_d is the d × d identity matrix, U ∈ R^{r×k}, V ∈ R^{k×k}, C ∈ R^{r×r}, D ∈ R^{k×r}, F_n = (f_{n,1}, ..., f_{n,r})^T ∈ R^{dr} where f_{n,i} = f(g_{n,i}, t_n + ξ_i h) for some real constants ξ_i and i = 1, ..., r, and G_n = (g_{n,1}^T, ..., g_{n,r}^T)^T ∈ R^{dr}. The symbol ⊗ denotes the Kronecker matrix product, defined for matrices A = (a_{i,j}) ∈ R^{m×n} and B ∈ R^{p×q} (m, n, p, q positive integers) by the rule A ⊗ B = (a_{i,j}B) ∈ R^{mp×nq}. The method (7) is said to be strictly stable if 1 is an eigenvalue of V and all the other eigenvalues of V have modulus strictly less than 1. We refer readers to [8] and [24] for excellent overviews of the theory of general linear methods.
To use a GLM (7) to approximate the solution of (6) we need a starting procedure S_h : R^d → R^{kd} and a finishing procedure F_h : R^{kd} → R^d. The starting procedure takes the initial condition x_0 into an initial value X_0 for the method (7) and a finishing procedure takes values X_n produced by the method and turns them into an approximation to the solution of (6) at t_n. Let F : R^{dk} × Z × (0, ∞) → R^{dk} denote the map defined by the output X_n of the GLM (7) using step-size h > 0 such that X_{n+1} = F(X_n, n, h). A general linear method is said to have local truncation error of order p relative to a starting procedure S_h if

‖S_h(v(t_{n+1})) − F(S_h(v(t_n)), n, h)‖ = O(h^{p+1})

for every sufficiently smooth solution v(t) of (6), n ≥ 0, and sufficiently small h. A general linear method is said to have local truncation error of order p if p is the maximal positive integer such that the method has local truncation error of order p relative to some starting procedure S_h.
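To make the format (7) concrete, the following sketch writes the 2-step Adams-Bashforth scheme as a k = 2, r = 1 GLM acting on X_n = (x_n, h f(x_{n−1}, t_{n−1})). The coefficient matrices U, C, V, D below are one possible representation chosen by us for illustration; they are not taken from the text. Note that V has eigenvalues 1 and 0, so this GLM is strictly stable.

```python
import numpy as np

# 2-step Adams-Bashforth as a k = 2, r = 1 GLM (an illustrative, assumed
# representation): state X_n = (x_n, h f(x_{n-1}, t_{n-1})).
U = np.array([[1.0, 0.0]])   # stage: g_{n,1} = x_n
C = np.array([[0.0]])        # explicit: stages do not couple to F_n
V = np.array([[1.0, -0.5],
              [0.0,  0.0]])  # eigenvalues 1 and 0, so strictly stable
D = np.array([[1.5],
              [1.0]])        # update uses 1.5 h f_n and stores h f_n

def glm_step(X, n, h, f, t0, d):
    Id = np.eye(d)
    G = np.kron(U, Id) @ X                   # internal stages (C = 0)
    F = f(G[:d], t0 + n * h)                 # single stage derivative
    return h * np.kron(D, Id) @ F + np.kron(V, Id) @ X

# Solve x' = -x on [0, 1]; AB2 needs one extra start-up value (x at t = -h).
f = lambda x, t: -x
h, d, t0 = 0.01, 1, 0.0
x0 = np.array([1.0])
x_minus1 = np.array([np.exp(h)])             # exact value at t = -h
X = np.concatenate([x0, h * f(x_minus1, -h)])
for n in range(100):
    X = glm_step(X, n, h, f, t0, d)
print(X[0])   # ≈ exp(-1) ≈ 0.3679
```

Expanding one step of `glm_step` recovers the familiar update x_{n+1} = x_n + h(1.5 f_n − 0.5 f_{n−1}), with h f_n stored as the second component of X_{n+1}.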
If f(x, t) is continuously differentiable on its domain, then associated to the solution x(t; x_0, t_0) of (6) is the linear variational equation

ẋ = (∂f/∂x)(x(t; x_0, t_0), t) x =: A(t)x. (8)

The stability of the zero solution of (8) in general does not depend on the time-dependent eigenvalues of A(t) (see the example at the bottom of page 3 of [9] or the third example on page 24 of [26]), which has motivated the development of several alternative stability spectra. The two spectra we consider in this work are the Lyapunov spectrum, based on the theory of Lyapunov exponents originating in [28], and the Sacker-Sell spectrum, which was developed in [32]. We say that the zero solution of a nonautonomous linear ODE of the form (8) is uniformly exponentially stable (or simply that the nonautonomous linear ODE is uniformly exponentially stable) if for any fundamental matrix solution X(t) there exist K > 0 and γ > 0 so that ‖X(t)X(s)^{-1}‖ ≤ K e^{−γ(t−s)} for all t ≥ s ≥ t_0. Analogously, we say that the zero solution of (8) is exponentially stable (or simply that (8) is exponentially stable) if for any fundamental matrix solution X(t) there exist K > 0 and γ > 0 so that ‖X(t)X(t_0)^{-1}‖ ≤ K e^{−γ(t−t_0)} for all t ≥ t_0. A sufficient condition for uniform exponential stability is that the Sacker-Sell spectrum is bounded above by zero, and a sufficient condition for exponential stability is that the Lyapunov spectrum is bounded above by zero.
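The following minimal script checks a classical example of this phenomenon (the standard textbook matrix; we assume it is representative of the kind of examples cited from [9] and [26]): both eigenvalues of A(t) are constant with real part −1/4, yet x(t) = e^{t/2}(cos t, −sin t)^T solves ẋ = A(t)x and grows like e^{t/2}.

```python
import math

# Classical example: constant 'stable' eigenvalues, exponentially growing
# solutions (an assumed representative example, not quoted from the text).
def A(t):
    c, s = math.cos(t), math.sin(t)
    return [[-1 + 1.5 * c * c,  1 - 1.5 * s * c],
            [-1 - 1.5 * s * c, -1 + 1.5 * s * s]]

def residual(t):
    """|x'(t) - A(t) x(t)| for x(t) = e^{t/2} (cos t, -sin t)."""
    e = math.exp(t / 2)
    x = [e * math.cos(t), -e * math.sin(t)]
    dx = [e * (0.5 * math.cos(t) - math.sin(t)),
          e * (-0.5 * math.sin(t) - math.cos(t))]
    M = A(t)
    Ax = [M[0][0] * x[0] + M[0][1] * x[1],
          M[1][0] * x[0] + M[1][1] * x[1]]
    return max(abs(dx[0] - Ax[0]), abs(dx[1] - Ax[1]))

assert all(residual(t) < 1e-9 for t in [0.0, 0.7, 2.3, 5.0])
growth = math.exp(10.0 / 2)    # ||x(10)|| / ||x(0)|| = e^{t/2} at t = 10
print(growth)                  # ≈ 148.4: growth despite 'stable' eigenvalues
```

The trace of A(t) is constantly −1/2 and the eigenvalues have real part −1/4, so no eigenvalue-based criterion detects the e^{t/2} growth of this solution.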
These linear stability concepts have natural extensions for solutions of nonlinear differential equations. For s ≥ t 0 , let x(t; u s , s) denote the solution of the differential equationẋ = f (x, t) from (6) with the initial condition x(s; u s , s) = u s .
Using (8) we can express the differential equation of (6) in linear inhomogeneous form; an argument using the nonlinear variation of parameters formula then shows that uniform exponential stability of (8) implies uniform exponential stability of x(t; x_0, t_0). A similar implication is not true if (8) is exponentially stable but not uniformly so (see [29] or Equation 14 in [27]).
The QR theory for the approximation of the Lyapunov and Sacker-Sell spectrum of (8), developed and analyzed extensively in [15,16,17,18], is based on the construction of a time-dependent orthogonal change of variables. Let Q(t) be an orthogonal solution of the differential equation Q̇ = Q S(Q, A), where S(Q, A) is skew-symmetric and A(t) is the coefficient matrix of (8). Under the change of variables x = Q(t)y, the system

ẏ = B(t)y, B(t) := Q(t)^T A(t)Q(t) − Q(t)^T Q̇(t), (10)

is such that B(t) is upper triangular for all t > t_0. We refer to the system (10) as a corresponding upper triangular system to (8) (with initial orthogonal factor Q(t)), and the Lyapunov and Sacker-Sell spectra of these two systems coincide.
The following definition and theorem summarize how to compute end-points of the Lyapunov and Sacker-Sell spectrum of (8) in terms of the diagonal entries of B(t).
Definition 2.2. Assume that B : (t_0, ∞) → R^{d×d} is bounded, continuous, and upper triangular and let B_{i,j}(t) denote the i, j entry of B(t). Suppose that for any i < j one of the two following conditions holds: (i) there exist a > 0 and b ∈ R such that ∫_s^t (B_{i,i}(τ) − B_{j,j}(τ)) dτ ≥ a(t − s) − b for all t ≥ s > t_0; or (ii) sup_{t≥s>t_0} |∫_s^t (B_{i,i}(τ) − B_{j,j}(τ)) dτ| < ∞. Then we say that the ODE ẏ = B(t)y and B(t) have an integral separation structure. If condition (i) is satisfied for all i < j, then we say that B(t) and ẏ = B(t)y are integrally separated. If the system (8) has a corresponding upper triangular system that has an integral separation structure, then we say that (8) has an integral separation structure, and if the corresponding upper triangular system is integrally separated, then we say that (8) is integrally separated.

Theorem 2.3. Assume that B : (t_0, ∞) → R^{d×d} has an integral separation structure. Then the end-points of the Lyapunov spectrum of ẏ = B(t)y are obtained from the intervals with end-points

liminf_{t→∞} (1/(t − t_0)) ∫_{t_0}^t B_{i,i}(τ) dτ and limsup_{t→∞} (1/(t − t_0)) ∫_{t_0}^t B_{i,i}(τ) dτ, i = 1, ..., d,

and the end-points of the Sacker-Sell spectrum are obtained analogously from the uniform averages (1/(t − s)) ∫_s^t B_{i,i}(τ) dτ as t − s → ∞.
A similar theorem can be proved for discrete-time linear systems (see Section 3.2 of [39] and Corollary 3.25 of [30]), which is used to prove the main linear stability results of Section 3.1 in [36] and Section 3.2 of [34]; these are summarized in Theorem 2.4 below. This result is fundamentally based on the observation that the numerical solution of (8) by a one-step method with local truncation error of order p ≥ 1 takes the form x_{n+1} = Φ_A(n; h)x_n. For such a discrete-time difference equation we have for each fixed initial orthogonal Q_0 ∈ R^{d×d} a discrete QR iteration Φ_A(n; h)Q_n = Q_{n+1}R_A(n; h) where Q_n ∈ R^{d×d} is orthogonal and R_A(n; h) is upper triangular with positive diagonal entries.

Theorem 2.4. Let x_{n+1} = Φ_A(n; h)x_n denote the numerical solution to (8) by a one-step method with local truncation error of order p ≥ 1 with step-size h > 0. Let Σ_L(h) and Σ_ED(h) denote the Lyapunov and Sacker-Sell spectrum, respectively, of the discrete nonautonomous difference equation x_{n+1} = Φ_A(n; h)x_n, and let Σ_L and Σ_ED denote the Lyapunov and Sacker-Sell spectrum of (8).
1. If the coefficient matrix A(t) of (8) is bounded and continuous, then for every ε > 0 there exists h* > 0 such that if h ∈ (0, h*), then Σ_ED(h) is contained in an ε-neighborhood of Σ_ED.
2. Suppose that (8) has an integral separation structure and let Q_0 ∈ R^{d×d} be orthogonal. Let R_A(n; h) be the corresponding upper triangular factor of the discrete QR iteration applied to x_{n+1} = Φ_A(n; h)x_n with initial orthogonal factor Q_0; then the end-points of Σ_L(h), computed from time-averages of the logarithms of the diagonal entries of R_A(n; h), approximate the end-points of Σ_L as h → 0.
We apply Theorem 2.4 in Section 3.2 to prove a stability result for strictly stable GLMs solving nonautonomous, linear ODEs with Sacker-Sell spectrum bounded above by zero. Similar to results derived in [20] and [6], an argument with the variation of parameters formula allows us to use Theorem 2.4 to prove the following theorem (see Theorems 4.1-2 in [36] or Theorem 9 in [34]).
In Section 3.2 we explain how to obtain an analogous nonlinear stability theorem for GLMs using Proposition 1.
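The discrete QR iteration described above can be sketched in a few lines. In this illustration (our own construction, not the paper's implementation) the one-step propagator comes from a single classical RK4 step, and the Lyapunov exponents are estimated as time-averages of the logarithms of the diagonal entries of the triangular factors.

```python
import numpy as np

# Discrete QR iteration: accumulate Q_{n+1} R_n = Phi_n Q_n and estimate
# Lyapunov exponents from averages of log(diag(R_n)).  The RK4 transition
# matrix is an illustrative choice of one-step propagator.
def rk4_transition(A, t, h):
    """One-step transition matrix of x' = A(t) x for classical RK4."""
    I = np.eye(A(t).shape[0])
    k1 = A(t) @ I
    k2 = A(t + h / 2) @ (I + h / 2 * k1)
    k3 = A(t + h / 2) @ (I + h / 2 * k2)
    k4 = A(t + h) @ (I + h * k3)
    return I + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def discrete_qr_exponents(A, h, N, t0=0.0):
    d = A(t0).shape[0]
    Q = np.eye(d)
    log_diag = np.zeros(d)
    for n in range(N):
        Z = rk4_transition(A, t0 + n * h, h) @ Q
        Q, R = np.linalg.qr(Z)
        sign = np.sign(np.diag(R))       # enforce positive diagonal of R
        Q, R = Q * sign, (R.T * sign).T
        log_diag += np.log(np.diag(R))
    return log_diag / (N * h)            # time-averaged exponents

# Constant upper-triangular test problem with known exponents {-1, -3}
# (an assumed example with an integrally separated diagonal).
A = lambda t: np.array([[-1.0, 2.0], [0.0, -3.0]])
print(discrete_qr_exponents(A, 0.01, 5000))   # ≈ [-1, -3]
```

The sign correction keeps the diagonal of each R_A(n; h) positive, as the iteration above requires, without changing the product Q_{n+1}R_A(n; h).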

3. Main Results.
3.1. Nonautonomous underlying one-step methods. In this section we prove that there exists a unique underlying one-step method for a strictly stable GLM approximating the solution of a nonlinear and nonautonomous ODE whose nonlinear part satisfies a global Lipschitz condition. Throughout we consider a strictly stable, k-step, and r-stage GLM (7) that we denote by M, which we assume has local truncation error of order p ≥ 1. We let P ∈ R^{k×k} be a matrix so that

P^{-1}VP = E = [ 1, 0 ; 0, E_{2,2} ],

where the eigenvalues of E_{2,2} ∈ R^{(k−1)×(k−1)} all have modulus strictly less than 1 (E may be taken to be, e.g., the real Jordan form of V).
The main result of this section is the following theorem.
Theorem 3.1. Consider the following ODE

ẋ = f(x, t) = A(t)x + N(x, t), t > τ_0, (11)

where f : R^d × (τ_0, ∞) → R^d, τ_0 ≥ −∞, and N(x, t) satisfies the global in space Lipschitz condition that there exists K > 0 so that for all x, y ∈ R^d and t > τ_0 we have

‖N(y, t) − N(x, t)‖ ≤ K‖y − x‖. (12)

Assume that A(t) is bounded, f(x, t) is C^{p+1} on its domain, and the partial derivatives of f are bounded. Let X_n = X(n; X_0, t_0, h) denote the output of M applied to solve (11) using step-size h > 0, initial value X_0 ∈ R^{dk}, and initial time t_0 > τ_0. Then there exist G > 0, γ ∈ (0, 1), and h* > 0 such that the following conclusions hold for any t_0 > τ_0, X_0 ∈ R^{dk}, and h ∈ (0, h*).

1. There is a function F : R^{dk} × Z × (0, h*) → R^{dk} such that X_{n+1} = F(X_n, n, h).

2. The difference equation Y_{n+1} = H(Y_n, n, h) defined from the change of variables X_n = (P ⊗ I_d)Y_n admits a unique (but see the remark immediately after the statement of this theorem) continuous function ϕ : R^d × Z × (0, h*) → R^{d(k−1)} whose graph is invariant under the flow of Y_{n+1} = H(Y_n, n, h) for n ≥ 0, and such that for any Y_0 ∈ R^{dk} there exists z^1_0 ∈ R^d so that the solution Y_n satisfies ‖Y_n − Z_n‖ ≤ Gγ^n for n ≥ 0, where the sequence {Z_n}∞_{n=0} is such that Z_n = ((z^1_n)^T, ϕ(z^1_n, 0, h)^T)^T and Z_{n+1} = H(Z_n, n, h) for all n ≥ 0.

3. If h ∈ (0, h*), then ϕ(y, m, h) is globally Lipschitz and C^{p+1} with respect to the state variable y.

4. The difference equation y_{n+1} = H_1(y_n, ϕ(y_n, 0, h), n, h), where H_1 denotes the first d components of H, defines a unique one-step method referred to as the underlying one-step method. Let F*_h be a finishing procedure defined by projecting a vector Y := (y^1, y^2) ∈ R^d × R^{d(k−1)} onto its first d components using the formula F*_h(Y) = y^1. If h ∈ (0, h*), then for each underlying one-step method there exists a starting procedure S*_h of the form S*_h(x) = (x^T, ϕ(x, 0, h)^T)^T such that the method M has order p relative to S*_h and so that for any x ∈ R^d and n ≥ 0 we have H(S*_h(x), n, h) = S*_h(H_1(x, ϕ(x, 0, h), n, h)).

We first remark that ϕ and the underlying one-step method are not uniquely defined unless we agree to extend the difference equations defined by applying M to solve (11) from N to Z in a unique way. This is because the uniqueness of the function ϕ whose graph defines the pseudo-unstable manifold of a nonautonomous difference equation satisfying a gap condition relies on the difference equation being defined on all of Z rather than merely N (see Theorem 4.1 of [1]).
We also remark that the assumption that N(x, t) satisfies the global Lipschitz condition (12), which is quite strong, is not essential for our results and is used for simplicity. In general (see Remark 2.7 (2) of [31]) all we need is that ‖N(y, t) − N(x, t)‖/‖y − x‖ → 0 as y → x uniformly for t ≥ t_0.
The remainder of this section is dedicated to the proof of Theorem 3.1. Let t_0 > τ_0 and X_0 ∈ R^{dk}. The method M applied to solve (11) with step-size h > 0 using the initial value X_0 at the initial time t_0 takes the form (14), where M_n = diag(A_{n,1}, ..., A_{n,r}) ∈ R^{dr×dr}, N_n = (N(g_{n,1}, t_n + ξ_1 h)^T, ..., N(g_{n,r}, t_n + ξ_r h)^T)^T, and A_{n,i} = A(t_n + ξ_i h) for i = 1, ..., r, where t_n := t_0 + nh. The equation (14) implies that the internal stages G_n satisfy an algebraic condition. The implicit function theorem and the fact that f(x, t) = A(t)x + N(x, t) is at least C^2 (since p ≥ 1) then imply that there exists h* > 0 so that if h ∈ (0, h*), then

X_{n+1} = (V ⊗ I_d)X_n + R(X_n, t_n, h), (15)

where (because of (12)) the term R(X, t, h) is Lipschitz in X with Lipschitz constant L_R = L_R(h) bounded as L_R(h) ≤ hJ for some constant J > 0. Therefore the first conclusion of Theorem 3.1 is proved. If we write Y_n = ((y^1_n)^T, (y^2_n)^T)^T where y^1_n ∈ R^d and y^2_n ∈ R^{d(k−1)}, then under the change of variables X_n = (P ⊗ I_d)Y_n the resulting system Y_{n+1} = H(Y_n, n, h) can be expressed in the form (16), where R_1 and R_2 have Lipschitz constants L_{R_1} = L_{R_1}(h) and L_{R_2} = L_{R_2}(h) bounded by hJ', with J' ≤ ‖P^{-1} ⊗ I_d‖ J ‖P ⊗ I_d‖. The following is an invariant manifold theorem for difference equations of the form (16) and is a restatement of the conclusions of Theorem 3.1, Theorem 3.2, and Theorem 5.1 in [3] (see also [4] and [1]). It is included for completeness.
Theorem 3.2. Consider a system of difference equations of the form

x_{n+1} = A_n x_n + F_1(n, x_n, y_n), y_{n+1} = B_n y_n + F_2(n, x_n, y_n), (17)

where A_n ∈ R^{d_1×d_1}, B_n ∈ R^{d_2×d_2}, and F_i : Z × R^{d_1} × R^{d_2} → R^{d_i} satisfy

‖F_1(n, x_n, y_n) − F_1(n, x̃_n, ỹ_n)‖ ≤ L‖x_n − x̃_n‖ + L‖y_n − ỹ_n‖,
‖F_2(n, x_n, y_n) − F_2(n, x̃_n, ỹ_n)‖ ≤ L‖x_n − x̃_n‖ + L‖y_n − ỹ_n‖,

for constants L > 0, K ≥ 1, and 0 < α < β satisfying the conditions (18) for some c > 0. Denote the solution of (17) with the initial condition z_m = (x_m, y_m) at initial time m as z(n; m, x_m, y_m) = (x(n; m, x_m, y_m), y(n; m, x_m, y_m)). Then there exists a unique continuous map ϕ : R^{d_1} × Z → R^{d_2} whose graph is the manifold D = {(m, x, ϕ(x, m)) : m ∈ Z, x ∈ R^{d_1}}, and D is invariant under the discrete flow of (17). Additionally, D is globally exponentially attracting in the sense that for any m ∈ Z and z_m = (x_m, y_m) ∈ R^{d_1} × R^{d_2} there exist (m, w_m, ϕ(w_m, m)) ∈ D, G > 0, and γ ∈ (0, 1) so that

‖z(n; m, x_m, y_m) − z(n; m, w_m, ϕ(w_m, m))‖ ≤ Gγ^{n−m}, n ≥ m.

We use Theorem 3.2 to complete the proof of Theorem 3.1. There exists h*_1 > 0 so that if h ∈ (0, h*_1), then X_n satisfies the difference equation (15). The sequence {Y_n}∞_{n=0} where Y_n = (P^{-1} ⊗ I_d)X_n satisfies the difference equation (16). If (11) is not defined on all of Z (i.e. τ_0 > −∞), then we uniquely extend the difference equation (16) satisfied by {Y_n}∞_{n=0}, which is defined on N, to a difference equation defined on all of Z by setting R_i(·, n, ·) ≡ 0 for i = 1, 2 whenever n < 0. Since the eigenvalues of E_{2,2} all have modulus strictly less than 1, this extended difference equation on Z is of the form (17) for α < β = 1 and L = hJ'. Thus, we can choose c > 0 and h* ∈ (0, h*_1] so that the inequalities (18) are satisfied whenever h ∈ (0, h*). So, there exists a continuous map ϕ : R^d × Z × (0, h*) → R^{d(k−1)} whose graph is invariant under the flow of H and such that if Y_n is the solution of (16), then there exist a sequence {z^1_n}∞_{n=0} with z^1_n ∈ R^d, G > 0, and γ ∈ (0, 1) such that

‖Y_n − ((z^1_n)^T, ϕ(z^1_n, 0, h)^T)^T‖ ≤ Gγ^n, n ≥ 0.
This completes the proof of the second conclusion of Theorem 3.1. Conclusion 3 follows from the results of [2]. The fourth conclusion is proved by repeating the proof of Theorem 2.3 in [38] using the function ϕ(y, 0, h), Conclusion 3, and the definition of local truncation error for GLMs.

3.2. Nonautonomous stability of general linear methods. In this section we combine the results of Theorems 2.4 and 3.1 to prove a stability result for the solution of a nonautonomous linear ODE by a strictly stable GLM. Consider the ODE

ẋ = A(t)x, t > τ_0, (19)

where A : (τ_0, ∞) → R^{d×d} and τ_0 ≥ −∞. The following theorem states that if the step-size of a GLM satisfying the hypotheses of Theorem 3.1 solving the linear ODE (19) is sufficiently small, then the exponential stability or instability of numerical solutions of (19) found with the GLM is determined by the stability spectra its underlying one-step method approximates. Notice, however, that we are unable to show uniform exponential stability or instability of the solution found with the GLM.

Theorem 3.3. Suppose that the coefficient matrix A(t) of the nonautonomous linear ODE (19) is bounded and C^{p+1}. Assume that the method (7), denoted by M, is strictly stable and has local truncation error of order p ≥ 1. Let X_n := X(n; X_0, t_0, h) denote the output of M applied to solve (19) using step-size h > 0, initial time t_0 > τ_0, and initial value X_0 ∈ R^{dk}. Denote the Sacker-Sell spectrum of (19) by Σ_ED.
1. If Σ_ED ∩ [0, ∞) = ∅, then for each initial value X_0 there exist h* > 0, G > 0, and γ ∈ (0, 1) so that if h ∈ (0, h*), then ‖X(n; X_0, t_0, h)‖ ≤ Gγ^n.
2. If Σ_ED ∩ [0, ∞) ≠ ∅, then there exist h* > 0, G > 0, and γ > 1 so that if h ∈ (0, h*), then ‖X(n; X_0, t_0, h)‖ ≥ Gγ^n for some initial value X_0.
An analogous result holds for the Lyapunov spectrum of (19) if we assume that the ODE has an integral separation structure.
Proof. We prove the first conclusion since the proof of the second is very similar. Let X_0 ∈ R^{dk} be some initial condition at the fixed initial time t_0. Since A(t) is bounded and C^{p+1} and M is strictly stable and has local truncation error of order p ≥ 1, we can choose h*_1 > 0 so small that the four conclusions of Theorem 3.1 hold for h ∈ (0, h*_1). The first conclusion of Theorem 3.1 implies that X_{n+1} = F(X_n, n, h) for some function F and the second conclusion of Theorem 3.1 implies that there exist G_1 > 0, γ_1 ∈ (0, 1), and ϕ : R^d × Z × (0, h*_1) → R^{d(k−1)} such that

‖(P^{-1} ⊗ I_d)X_n − Z_n‖ ≤ G_1 γ_1^n, n ≥ 0,

where P is as defined in Section 3.1 and Z_n = ((z^1_n)^T, (ϕ(z^1_n, 0, h))^T)^T is a solution of Z_{n+1} = H(Z_n, n, h) with H and ϕ defined as in Theorem 3.1. The fourth conclusion implies that z^1_{n+1} = H_1(z^1_n, ϕ(z^1_n, 0, h), n, h), where H_1 is the first d components of H, defines a one-step approximation with local truncation error of order p to ẋ = A(t)x with initial condition z^1_0. We therefore can write z^1_{n+1} = H_1(z^1_n, ϕ(z^1_n, 0, h), n, h) ≡ Φ_A(n; h)z^1_n, and by the work in the previous section the propagator Φ_A(n; h) is bounded and invertible. Theorem 2.4 then implies that there exists h*_2 ∈ (0, h*_1] so that if h ∈ (0, h*_2), then the Sacker-Sell spectrum of z_{n+1} = Φ_A(n; h)z_n is bounded above by zero and therefore ‖z^1_n‖ ≤ G_2 γ_2^n ‖z^1_0‖ for some G_2 > 0 and γ_2 ∈ (0, 1). The third conclusion of Theorem 3.1 implies that ϕ(·, 0, h) is globally Lipschitz, so there exist h*_3 ∈ (0, h*_2], G_3 > 0, and γ_3 ∈ (0, 1) so that ‖Z_n‖ ≤ G_3 γ_3^n ‖Z_0‖ whenever h ∈ (0, h*_3). The result follows by taking G = ‖P ⊗ I_d‖ max{G_1, G_3‖Z_0‖} and γ = max{γ_1, γ_3}.
Various types of scalar test equations are often used to characterize the stability properties of GLMs solving ODE IVPs. In [34] and [36] it is shown that the stability of the numerical solution by a one-step method with local truncation error of order p ≥ 1 of a nonautonomous linear ODE with a bounded and sufficiently smooth coefficient matrix can be approximately characterized by the one-step method applied to d scalar test equations of the form

ẋ = λ(t)x, (20)

where λ : (t_0, ∞) → R is a real-valued diagonal element of the matrix B(t) of a corresponding upper triangular system ẏ = B(t)y to (19). Theorem 3.3 justifies using such test equations to characterize the stability of strictly stable GLMs solving nonautonomous linear ODEs by passing to the approximation properties of the underlying one-step method. Theorem 3.3 is an asymptotic result showing that as h → 0 we can guarantee the exponential decay of the numerical solution of a nonautonomous linear ODE whose Lyapunov or Sacker-Sell spectrum lies to the left of zero. It is natural to look for a subset of A-stable methods that preserve the asymptotic decay of all such linear ODEs with no restriction on h. The following theorem partially answers this question: step-size restriction is essential for the preservation of asymptotic decay by strictly stable linear multistep and Runge-Kutta methods.
Theorem 3.4. Given any strictly stable and consistent linear multistep method or convergent Runge-Kutta method M and h > 0, there exists a uniformly exponentially stable scalar ODE ẋ = λ(t)x such that the numerical solution {x_n}∞_{n=0} by M with step-size h > 0 becomes unbounded as n → ∞ for any initial condition x_0 ≠ 0.

Proof. Let S denote the linear stability domain of M and let ∂S denote its boundary. Since M is a strictly stable linear multistep method or a convergent Runge-Kutta method, there exists δ > 0 such that the real interval (0, δ) is disjoint from S ∪ ∂S. Consider the ODE ẋ = (D cos(ωt) + L)x ≡ λ(t)x, where t > 0, L < 0, 0 < h(D + L) < δ, and ω = 2π/h, and let x(0) = x_0 ≠ 0. Notice that L < 0 implies that zero is uniformly exponentially stable for ẋ = λ(t)x. Since λ(nh) = D cos(ωnh) + L = D + L for all n ≥ 0, the numerical solution of ẋ = λ(t)x by M with step-size h > 0 is the same as the numerical solution of the ODE ẋ = (D + L)x. The quantity h(D + L) ∉ S ∪ ∂S since h(D + L) ∈ (0, δ). Therefore the numerical solution of ẋ = λ(t)x by M using the step-size h > 0 becomes unbounded as n → ∞.
The geometric idea behind the proof of Theorem 3.4 is that if the step-size is too large, then hλ(nh + t_0) may be outside the classical stability domain too often and destabilize the numerical solution. It seems impossible to devise general algebraic conditions on the method coefficients of (7) guaranteeing that the numerical solution of any uniformly exponentially stable test equation of the form (20) decays. However, if hλ(t) is in the linear stability region of (7) on average, then we can use the standard linear stability region techniques in an approximate sense, as we now show. For each n ≥ 0 we have an associated mean test equation

ẇ_n = ξ(n; h)w_n, ξ(n; h) := (1/h) ∫_{t_n}^{t_{n+1}} λ(s) ds. (21)

For n ≥ 0 the exact solutions of (20) and (21) at t = (n + 1)h using the same initial condition x(t_n) = x_n agree and are given by x_n exp(hξ(n; h)). Assume that h > 0 is so small that an underlying one-step method of (7) exists. The numerical solution of (20) by (7) with step-size h > 0 is given by x_{n+1} = Φ_λ(n; h)x_n. For n ≥ 0, applying the underlying one-step method to compute one forward step of the numerical solution of (21) with initial condition w_n(nh) = x_n gives Φ_{ξ(n;h)}(h)x_n. If hξ(n; h) lies in the linear stability region of (7) for all n ≥ 0 and all sufficiently small h > 0, then the sequence {Φ_{ξ(n;h)}(h)}∞_{n=0} is power bounded (see Definition 2.1 of [7]) and it follows that {Φ_λ(n; h)}∞_{n=0} is power bounded for all sufficiently small h > 0. Hence, we can repeat the analysis of Runge-Kutta methods in Theorem 3.6 of [36] and in an approximate sense determine the stability of (7) applied to solve (20) from the application of the underlying one-step method to solve mean test equations of the form (21).
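The agreement of (20) and (21) at the grid points is a one-line computation, exp(∫λ) = exp(hξ(n; h)), which the following sketch verifies for an illustrative coefficient λ(t) = cos(t) − 0.2 (an assumed example, not one from the text).

```python
import math

# Check that the exact solutions of (20) and (21) agree at grid points:
# xi(n; h) = (1/h) * int_{t_n}^{t_{n+1}} lambda(s) ds, so both one-step
# updates are multiplication by exp(h * xi(n; h)).  The coefficient
# lambda(t) = cos(t) - 0.2 (antiderivative sin(t) - 0.2 t) is an assumed
# illustrative choice.
lam_integral = lambda t: math.sin(t) - 0.2 * t

h, x = 0.3, 1.0
for n in range(10):
    I = lam_integral((n + 1) * h) - lam_integral(n * h)  # integral of lambda over the step
    xi = I / h                                           # mean test coefficient xi(n; h)
    assert abs(math.exp(I) - math.exp(h * xi)) < 1e-12   # (20) and (21) agree at t_{n+1}
    x *= math.exp(h * xi)                                # advance both exact solutions

print(x)   # equals exp(sin(3) - 0.6) up to rounding
```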
As a follow-up to our linear stability theory we prove the following proposition for nonlinear initial value problems. From this proposition it follows that the conclusion of Theorem 2.5 holds for underlying one-step methods, and thus for the approximation generated by a strictly stable GLM with suitably chosen starting and finishing procedures.

Proposition 1. Consider a GLM (7) that is strictly stable and has local truncation error of order $p \geq 1$. Then for any underlying one-step method $y_{n+1} = H_1(y_n, \varphi(y_n, 0, h), h)$ there exists a starting procedure $S_h$, a finishing procedure $F_h$, and an $h^* > 0$ so that if $h \in (0, h^*)$, then the output of the GLM is defined by the map $X_{n+1} = F(X_n, n, h)$ and for any $x \in \mathbb{R}^d$ and $n \geq 0$ we have (22).

Proof. Let $h^* > 0$ be such that if $h \in (0, h^*)$, then the conclusions of Theorem 3.1 hold. If $h \in (0, h^*)$, then the first conclusion implies that the output of the GLM satisfies $X_{n+1} = F(X_n, n, h)$.
where $P$ is as defined in Section 3.1. Then (22) follows by combining $H(Y, n, h) = (P^{-1} \otimes I_d)F((P \otimes I_d)Y, n, h)$ with the fourth conclusion of Theorem 3.1.
In general we will not have explicit formulas for the starting procedure in Proposition 1 since it is defined in terms of the map $\varphi$. However, as shown in the proof of Theorem 2.3 in [38], if we have a starting procedure $S_h$ relative to which a strictly stable GLM has local truncation error of order $p$, then we can show that $S^*_h = S_h + O(h^{p+1})$. This is sufficient for the output of a GLM to be (non-uniformly) exponentially attracted to a uniformly exponentially stable trajectory, since the one-step method is exponentially attractive.

4. Experiments.
In this section we develop a stability diagnostic for strictly stable GLMs solving time-dependent linear ODEs based upon the QR approximation theory for the Lyapunov spectrum and Theorems 3.1 and 3.3. As shown in Section 1 and Theorem 3.4, the AN-stability of BDF2 does not guarantee that there is no stability-induced step-size restriction when solving time-dependent problems.
We first show how to evaluate the underlying one-step method of a strictly stable GLM indirectly. Theorem 3.3 implies that the exponential stability of a strictly stable GLM solving (3) can be characterized by the Lyapunov or Sacker-Sell spectrum of an underlying one-step method

$$z_{n+1} = H_1(z_n, n, h). \tag{23}$$

Rather than attempting to directly evaluate the function $H_1$, we instead make use of (13) to evaluate $H_1$ approximately. Let $X_n := X(n; X_0, t_0, h)$ denote the output of $M$ applied to solve (11) using step-size $h > 0$, initial value $X_0 \in \mathbb{R}^{dk}$, and initial time $t_0 \geq 0$, and write $X(n; X_0, t_0, h) = ((x^1_n)^T, \ldots, (x^k_n)^T)^T$. For the sequence defined by $Y_n = (P^{-1} \otimes I_d)X_n$ with $Y_n = ((y^1_n)^T, \ldots, (y^k_n)^T)^T$ there exist $G > 0$, $\gamma \in (0, 1)$, and $Z_n$ of the form $Z_n = (z_n, \varphi(z_n, 0, h))^T$, where $\varphi$ is as defined in Theorem 3.1, so that if $n \geq 0$, then

$$\|Y_n - Z_n\| \leq G\gamma^n \tag{24}$$

and $z_{n+1} = H_1(z_n, n, h)$. If we let $P^{-1} = (p_{i,j})_{i,j=1}^k$, then it follows from (24) that the sequence defined component-wise as $w_n := \sum_{j=1}^{k} p_{1,j}\, x^j_n$ is approximately equal to an output of (23) for sufficiently large values of $n \geq 0$.
We use this technique to approximate the largest discrete Lyapunov exponent of (23) as follows. Given an initial condition $x(0) = x_0$, we use the RK4 Runge-Kutta method to compute $x_1$. For $n \geq 0$, we solve the BDF2 equation for $x_{n+2}$ and set $X_n = (x_n^T, x_{n+1}^T)^T$. Using $X_n$, we form $w_n = \sum_{j=1}^{3} p_{1,j}\, x_{n+j-1}$. Since $w_n$ approximately satisfies (23), we can view it as the first column in a fundamental matrix solution. Let $w_n = Q_nR_n$ be a QR factorization, where $Q_n \in \mathbb{R}^{d \times 1}$ is orthogonal and $R_n \in \mathbb{R}^{1 \times 1}$. Under the assumption that (23) has a discrete integral separation structure, the largest discrete Lyapunov exponent $\mu_{\max}$ of (23) is almost surely (see [14] and also [19] and [33]) given by

$$\mu_{\max} = \limsup_{N \to \infty} \frac{1}{N} \ln (R_N)_{1,1}, \tag{25}$$

where $(R_n)_{1,1}$ denotes the $(1,1)$ entry of $R_n$. We estimate (25) as

$$\mu_{\mathrm{appr}}(N_0, N) := \frac{1}{N}\left[\ln (R_{N_0+N})_{1,1} - \ln (R_{N_0})_{1,1}\right], \tag{26}$$

and use the sign of $\mu_{\mathrm{appr}}(N_0, N)$ for large values of $N_0$ and $N$ as a stability diagnostic for the numerical solution of (3) by (4). Note that conclusion 2 of Theorem 2.4 and the fact that $x(0) = x_0 = (1, 0)^T$ imply that almost surely $\mu_{\mathrm{appr}}(N_0, N) \to \mu_{\max}$ as $N \to \infty$.

Table 1. Results of an experiment for the solution of (3) using BDF2, $a_1 = a_2 = 1.2$, $b_1 = -0.14$, $b_2 = -0.15$, $\beta = 10.0$, $\omega = 1$, and a final time of $t_f = 40$ for various step-sizes $h$ and the initial condition $x(0) = (1, 0)^T$. LTEmean is the mean local truncation error, LTEmax is the maximum local truncation error, and $\mu_{\mathrm{appr}}(N_f/2, N_f/2)$ is the value of (26), where $N_f$ is the final step of the approximation.
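A minimal sketch of this diagnostic follows. The coefficient matrix $A(t)$ below is an illustrative uniformly exponentially stable choice, not the system (3) used in the experiments, and for simplicity we take $w_n = x_n$ itself (for a strictly stable method the parasitic component is exponentially damped, cf. (24)); since the QR factorization of a single vector is just normalization, $\ln (R_n)_{1,1}$ is played by $\ln \|w_n\|$.

```python
import math

# Illustrative diagnostic: BDF2 trajectory started with one RK4 step, then
# an approximate largest discrete Lyapunov exponent from log norm growth.
def A(t):
    s = 0.5 * math.sin(t)
    # symmetric part is -I/2, so the exact solution has ||x(t)|| = exp(-t/2)||x(0)||
    return [[-0.5, 1.0 + s], [-1.0 - s, -0.5]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def solve2(M, b):
    # solve the 2x2 linear system M z = b by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def rk4_step(t, x, h):
    # one RK4 step to generate the starting value x_1
    k1 = matvec(A(t), x)
    k2 = matvec(A(t + h / 2), [x[i] + h / 2 * k1[i] for i in range(2)])
    k3 = matvec(A(t + h / 2), [x[i] + h / 2 * k2[i] for i in range(2)])
    k4 = matvec(A(t + h), [x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

def mu_appr(h, N):
    x_prev = [1.0, 0.0]                 # x_0 = (1, 0)^T
    x_curr = rk4_step(0.0, x_prev, h)   # x_1 from one RK4 step
    logs = []                           # ln ||w_n||, the analogue of ln (R_n)_{1,1}
    for n in range(2, N + 1):
        An = A(n * h)
        # implicit BDF2 step: (I - (2h/3) A(t_n)) x_n = (4/3) x_{n-1} - (1/3) x_{n-2}
        M = [[1.0 - 2 * h / 3 * An[0][0], -2 * h / 3 * An[0][1]],
             [-2 * h / 3 * An[1][0], 1.0 - 2 * h / 3 * An[1][1]]]
        rhs = [4 / 3 * x_curr[i] - 1 / 3 * x_prev[i] for i in range(2)]
        x_prev, x_curr = x_curr, solve2(M, rhs)
        logs.append(math.log(math.hypot(x_curr[0], x_curr[1])))
    n0 = len(logs) // 2                 # discard the first half as transient
    return (logs[-1] - logs[n0]) / (len(logs) - 1 - n0)

mu = mu_appr(0.05, 4000)   # negative: the numerical solution is decaying
```

For this stable test matrix the per-step exponent is approximately $-h/2$, and its negative sign is the diagnostic, mirroring the sign test applied in Tables 1 and 2.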
We display the results of our first experiment in Table 1 and Figure 1. For step-sizes $h = 7.5 \times 10^{-1}$ and $7.5 \times 10^{-2}$ the method (4) produces numerical solutions to (3) that grow in norm, with approximate largest discrete Lyapunov exponents that are positive. When $h = 7.5 \times 10^{-2}$ the local truncation error, which is gradually increasing as shown in Figure 1, remains bounded by $10^{-2}$. When $h = 7.5 \times 10^{-3}$ and $7.5 \times 10^{-4}$ the method (4) produces a decaying solution to (3) and the approximate largest discrete Lyapunov exponent of (23) is negative. This experiment shows that monitoring the approximate largest discrete Lyapunov exponent of the one-step method (23) can be a more effective tool for controlling the global error and monitoring stability than the local truncation error.
In Table 2 and Figure 2 we display the results of our second experiment. The results of this experiment are meant to illustrate the difficulty in detecting stability using only point-wise values of the local truncation error. We see that there are no spikes in the local truncation error from one step to the next since $\tau_{\max}$ is approximately 1 for all values of $a = a_1 = a_2$. Additionally, as the parameter $a$ varies from 1.45 to 1.75, the numerical solution becomes unstable and the ratios between the mean and maximum 2-norm of the local truncation error are 2.44 and 1.14 respectively, which are comparable in value to the corresponding ratios when the parameter $a$ varies from 1.15 to 1.45, where there is no loss of stability. This experiment demonstrates that the point-wise local truncation error and its local variation can fail to detect a loss of time-dependent stability.

Table 2. Results of an experiment for the solution of (3) using BDF2, with $b_1 = -0.5$, $b_2 = -0.055$, $\beta = 1.0$, $\omega = 1$, and a final time of $t_f = 100$ for various values of $a = a_1 = a_2$ using the step-size $h = 0.05$ and the initial condition $x(0) = (1, 0)^T$. LTEmean is the mean local truncation error, LTEmax is the maximum local truncation error, $\mu_{\mathrm{appr}}(N_f/2, N_f/2)$ is the value of (26) where $N_f$ is the final step of the approximation, and $\tau_{\max}$ is the maximum value of $\tau_n$, which denotes the quotient of the local truncation errors at time-steps $n+1$ and $n$.

5. Conclusion. In this work we have used invariant manifold theory for nonautonomous difference equations to show that a strictly stable GLM solving a nonautonomous ODE that satisfies a global Lipschitz condition has an underlying one-step method whenever the step-size is sufficiently small. This result, combined with the Lyapunov and Sacker-Sell spectral stability theory for one-step methods developed in [34,35,36], is applied to analyze the stability of a strictly stable GLM solving a nonautonomous linear ODE. These theoretical results are then applied to show that the sign of the approximate largest discrete Lyapunov exponent of the underlying one-step method of a strictly stable GLM can be a more robust tool than the point-wise values of the local truncation error for monitoring the stability (and hence the global error) of the numerical solution of a nonautonomous linear ODE IVP. Most step-size selection strategies for the solution of ODE IVPs select the step-size based mainly on the local accuracy of the method, which, as we have shown in Section 4, can cause a solver to produce an exponentially growing approximation to an exponentially contracting nonautonomous linear ODE, even if the method is AN-stable. Our experimental results suggest that the nonautonomous stability theory for GLMs that we have developed can be a useful tool for step-size selection based on stability as well as accuracy (a practical step-size selection algorithm for explicit Runge-Kutta methods based on these ideas can be found in [35]). In future work it remains to show that our results can be extended to variable step-size and variable order GLMs. Interestingly, whereas we have used our nonautonomous results as a practical way of detecting (and hence correcting) an unstable numerical solution, the abstract of [25] states that "...this result is of theoretical interest; it does not seem to affect the significance of multi-step methods for practical computations".
The results of the present paper that build upon the fundamental ideas in [25] and [21] serve as yet another example of how mathematics that is considered theoretical and abstract can potentially find a practical application.