ASYMPTOTIC HOMOGENIZATION FOR DELAY-DIFFERENTIAL EQUATIONS AND A QUESTION OF ANALYTICITY

Abstract. We consider a class of nonautonomous delay-differential equations in which the time-varying coefficients have an oscillatory character, with zero mean value, and whose frequency approaches +∞ as t → ±∞. Typical simple examples are

x′(t) = sin(t^q)x(t − 1) and x′(t) = e^{it^q}x(t − 1), (∗)

where q ≥ 2 is an integer. Under various conditions, we show the existence of a unique solution with any prescribed finite limit lim_{t→−∞} x(t) = x_− at −∞. We also show, under appropriate conditions, that any solution of an initial value problem has a finite limit lim_{t→+∞} x(t) = x_+ at +∞, and thus we establish the existence of a class of heteroclinic solutions. We term this limiting phenomenon, and thus the existence of such solutions, "asymptotic homogenization." Note that in general, proving the existence of a bounded solution of a given delay-differential equation on a semi-infinite interval (−∞, −T] is often highly nontrivial. Our original interest in such solutions stems from questions concerning their smoothness. In particular, any solution x : R → C of one of the equations in (∗) with limits x_± at ±∞ is C^∞, but it is unknown if such solutions are analytic. Nevertheless, one does know that any such solution of the second equation in (∗) can be extended to the lower half plane {z ∈ C | Im z < 0} as an analytic function.


1. Introduction. We are interested in this paper in delay-differential equations such as

x′(t) = Σ_{j=1}^m A_j(p_j(t)) x(t − r_j). (1)

Here we assume that |p_j(t)| → +∞ as t → −∞, that A_j is an n × n-matrix-valued function which has an oscillatory character with mean value zero, and that r_j ≥ 0, for each j. Typical examples of such equations, in the scalar case with a single delay, are

x′(t) = sin(t^q)x(t − 1) and x′(t) = e^{it^q}x(t − 1), (2)

where q ≥ 2 is an integer. For equation (1) it is natural to seek solutions x = ξ(t), where ξ : R → C^n, for which

lim_{t→−∞} ξ(t) = x_− (3)

holds for some given x_− ∈ C^n. The existence of solutions satisfying (3) is a phenomenon which we term "asymptotic homogenization," and indeed, this will be the major focus of this paper. Our asymptotic homogenization results below give conditions under which equation (1) has a solution satisfying (3) for every x_−, and generally, we prove there is a unique such solution for every x_−. There is a sharp delineation between the cases q = 2 and q > 2 in equations (2). More generally, the so-called super-quadratic case, where each |p_j(t)| in equation (1) grows faster than quadratically as t → −∞ (thus corresponding to q > 2 in (2)), is considered in Section 2. In this case obtaining asymptotic homogenization is relatively straightforward, and results are obtained even if A_j is not periodic or in any way oscillatory. In fact, we shall see without much additional difficulty that nonlinear equations of a similar form can be considered in the super-quadratic case. By contrast, as we shall see in Section 3, proving the existence of solutions of (2) with q = 2 which satisfy (3) is a subtle question involving delicate issues of resonance.
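Although no numerics appear in what follows, the homogenization phenomenon is easy to observe computationally. The following sketch (our illustration, not taken from the paper) integrates the super-quadratic example x′(t) = sin(t³)x(t − 1), that is, q = 3 in (2), forward in time by Euler's method from a constant initial function; the solution settles toward a finite limit as the coefficient oscillates ever faster:

```python
import numpy as np

# Numerical illustration (not from the paper): forward Euler for the
# delay-differential equation x'(t) = sin(t^3) x(t - 1), a super-quadratic
# example (q = 3), with constant history x(t) = 1 on [-1, 0].
dt = 1e-4
delay_steps = round(1.0 / dt)          # the delay r = 1, in grid steps
t = np.arange(-1.0, 12.0 + dt, dt)     # grid from t = -1 to t = 12
x = np.ones_like(t)                    # history: x = 1 on [-1, 0]

start = delay_steps                    # index of t = 0
for i in range(start, len(t) - 1):
    x[i + 1] = x[i] + dt * np.sin(t[i] ** 3) * x[i - delay_steps]

# As t grows, sin(t^3) oscillates ever faster and its effect averages out,
# so x(t) settles toward a finite limit x_+ ("asymptotic homogenization").
print(x[-1])
```

The late-time increments of x shrink like 1/t², consistent with the integration-by-parts estimates used in Section 2.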
Let us note that the solution of equation (1) for an initial value problem, in which an initial function ϕ ∈ C([−R, 0], C^n) is prescribed with R = max_{1≤j≤m} r_j, can be extended forward in time to +∞. Thus, under appropriate assumptions on A_j and on p_j, we are also interested in the limiting behavior of solutions as t → +∞, and we show under general conditions the existence of such solutions for which the limit

lim_{t→+∞} ξ(t) = x_+ (4)

exists for some x_+ ∈ C^n. We therefore obtain heteroclinic solutions of equation (1); namely, for any given x_− ∈ C^n there exists some x_+ ∈ C^n and a solution ξ such that both (3) and (4) hold.
Our original interest in such equations came from general questions concerning analyticity properties of solutions of analytic delay-differential equations, and our work here can be considered a continuation of the general project begun and continuing in [3], [4], [5], and [6]. A central question, of course, is what is meant by an "analytic delay-differential equation," and this question is addressed in [5] and [6]. In particular, if ξ : (−∞, −T] → C^n is a bounded, C^1 function satisfying an analytic delay-differential equation which satisfies some natural hypotheses, then results in [5] and [6] imply that there exists δ > 0 such that ξ extends to an analytic function which is defined and bounded on a (uniform) δ-neighborhood of (−∞, −T] in C. (A special case of this result, for a linear equation with a single delay, is given below in Proposition 4.4.) In the case of equation (1), these results apply if there exists ε > 0 such that t → A_j(p_j(t)) extends to an analytic, bounded function defined on an ε-neighborhood of (−∞, −T] in C. However, one sees that this condition is not fulfilled even for the simple equations (2) when q ≥ 2. Certainly both t → sin(t^q) and t → e^{it^q} are analytic, in fact entire, functions of t ∈ C for any positive integer q; but if q ≥ 2 then neither of these functions is bounded on an ε-neighborhood of (−∞, −T] in C, for any ε or T. Any solution x = ξ(t) of these equations satisfying (3), as given by asymptotic homogenization, is certainly C^∞ on the real line; but it is an open question whether or not such ξ is analytic on the real line if x_− ≠ 0. In fact, it is unknown whether there exists any t_0 ∈ R such that ξ is analytic on a neighborhood of t_0 for such a solution.
2. Asymptotic homogenization: The super-quadratic case. In this section we consider the following assumptions on equation (1). (Here C^{n×n} denotes the space of n × n complex matrices.)

H1. A_j : R → C^{n×n}, for 1 ≤ j ≤ m, is continuous and A_j(u) is bounded as u → +∞.

H2. If we define B_j(u) = ∫_0^u A_j(v) dv, then B_j(u) is bounded as u → +∞, for 1 ≤ j ≤ m.

Note that A_j : R → C^{n×n} satisfies the conditions in H1 and H2 if it is continuous and periodic with mean value zero. Note also that in the important case that p_j(t) is a polynomial of degree q_j whose leading coefficient c_{j,q_j} has the indicated sign, then p_j satisfies the conditions in both H3 and H4 provided q_j ≥ 3. But if q_j = 2, so that p_j(t) = c_{j,2}t^2 + c_{j,1}t + c_{j,0} is quadratic with c_{j,2} > 0, then only the condition in H3 holds, and H4 fails as the first integral there is infinite.
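The statements of H3 and H4 do not survive in this excerpt; judging from the surrounding discussion, H3 requires p_j(t) → +∞ as t → −∞, while H4 imposes integrability conditions on quantities involving p_j′, the first of which appears to be of the following type. A sketch of the super-quadratic versus quadratic dichotomy under that reading, with |p_j′(t)| behaving like q_j|c_{j,q_j}||t|^{q_j−1} for a polynomial of degree q_j:

```latex
\[
\int_{-\infty}^{-T_-} \frac{dt}{|p_j'(t)|}
  \;\sim\; \int_{-\infty}^{-T_-} \frac{dt}{q_j\,|c_{j,q_j}|\,|t|^{\,q_j-1}}
  \;<\; \infty \quad (q_j \ge 3),
\qquad
\int_{-\infty}^{-T_-} \frac{dt}{2\,c_{j,2}\,|t|} \;=\; \infty \quad (q_j = 2).
\]
```

This is the sense in which "the first integral there is infinite" in the quadratic case.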
Observe that if H1-H3 are satisfied then (in fact for any T_−) there exists a constant K such that the bounds (5) hold. Here |·| denotes some fixed norm on C^n, and |A_j(u)| and |B_j(u)| denote the corresponding matrix norms. Also, it will be useful to define the quantity Φ_−(t) for t ≤ −T_−. Below we shall take T_− as in H4, and this quantity will appear repeatedly in this section.
Let us finally remark that if A_j(u) and B_j(u) in H1 and H2 are instead bounded as u → −∞, for some j, and if in H3 we have instead lim_{t→−∞} p_j(t) = −∞ (and so lim_{t→−∞} −p_j(t) = +∞), then by simply replacing A_j(u) with A_j(−u) and p_j(t) with −p_j(t), the conditions as given in H1-H3 will hold.
The following result concerns solutions of (1) which are bounded in backward time.
Theorem 2.1. Assume that H1-H4 all hold and that r_j ≥ 0 for 1 ≤ j ≤ m. Suppose that ξ : R → C^n is a C^1 function which is bounded as t → −∞ and that x = ξ(t) satisfies equation (1) for all t ∈ R. Then there exists x_− ∈ C^n such that lim_{t→−∞} ξ(t) = x_−.

Proof. Let D_T = {ξ(t) | t ≤ −T}. It suffices to prove that diam(D_T), the diameter of D_T, approaches zero as T → +∞. Let C be a constant such that |ξ(t)| ≤ C for t ≤ −T_−, with T_− as in H4. Then from the differential equation (1), using (5) and (7), we obtain the bound (8) on ξ′(t). Now integrate the differential equation (1) by parts, taking the solution x = ξ(t), to give equation (9). We assume that τ, t ≤ −T_−, where T_− is as in H4, to ensure that the denominators in (9) do not vanish. Let us also assume that τ < t. Upon taking norms in (9) and using (5), (6), (7), and (8), we obtain an estimate for |ξ(t) − ξ(τ)| in terms of Φ_−. Since we are assuming H3 and H4, one sees that Φ_−(t) < ∞, and also that Φ_−(t) → 0 as t → −∞; hence diam(D_T) → 0 as T → +∞, as desired.
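Equation (9) itself does not survive in this excerpt; schematically, with B_j′ = A_j as in H2, the integration by parts invoked above takes the following form (a sketch, for a single term of the sum):

```latex
\[
\int_\tau^t A_j(p_j(s))\,\xi(s-r_j)\,ds
  = \left[\frac{B_j(p_j(s))\,\xi(s-r_j)}{p_j'(s)}\right]_{s=\tau}^{s=t}
    - \int_\tau^t B_j(p_j(s))\,\frac{d}{ds}\!\left(\frac{\xi(s-r_j)}{p_j'(s)}\right)ds ,
\]
using $\tfrac{d}{ds}\,B_j(p_j(s)) = A_j(p_j(s))\,p_j'(s)$.
```

The boundedness of B_j (hypothesis H2) thus converts the oscillatory integral into boundary terms of size O(1/|p_j′|) plus an integral which H4 renders finite; this is also why the denominators p_j′(s) must not vanish.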
The next result obtains a unique solution limiting at a given point in backward time.
Theorem 2.2. Assume that H1-H4 all hold and that r_j ≥ 0 for 1 ≤ j ≤ m. Let x_− ∈ C^n be given. Then there exists a unique C^1 function ξ : R → C^n with x = ξ(t) satisfying the differential equation (1) for t ∈ R, such that lim_{t→−∞} ξ(t) = x_−.

Proof. We first remark that it suffices to prove existence and uniqueness of a C^1 function ξ : (−∞, −T] → C^n satisfying equation (1) on (−∞, −T] for some T ∈ R, and such that lim_{t→−∞} ξ(t) = x_−, for as remarked earlier, any such solution has a unique extension to a solution of (1) on R.
Observe that if ξ : R → C^n is a function as described in the statement of the theorem, with x_− ∈ C^n given, then equation (9) holds for any τ < t. Further, upon taking the limit τ → −∞, we obtain equation (10), by virtue of H2-H4 and by the fact that ξ′(t) is bounded as t → −∞, which follows from the differential equation (1); and further, the improper integrals in (10) are absolutely convergent, hence finite. Now let T ∈ R be fixed. (We shall be more precise about our choice of T shortly.) For any continuous function x : (−∞, −T] → C^n let ‖x‖_0 denote its supremum norm and Lip(x) its Lipschitz constant (possibly infinite), as in (11), and let X denote the space (12) of all such x with ‖x‖_0 and Lip(x) both finite. If κ is a positive number we endow X with the norm ‖x‖_1 = ‖x‖_0 + κ Lip(x), as in (13), thereby making it a Banach space. Of course all such norms for different choices of κ are equivalent; however, we shall require that κ be sufficiently small. To be precise, fix κ to satisfy the condition (14), where K is as in (5).
Assuming that T ≥ T_−, we next define a linear map L by equation (15); we shall prove that L maps X into itself for T sufficiently large. In particular, we note that if x ∈ X then x is bounded and absolutely continuous on (−∞, −T] with a bounded derivative satisfying |x′(t)| ≤ Lip(x) almost everywhere there. It follows easily from this, and from H2-H4, that Lx is a well-defined continuous function on (−∞, −T]; and further, that (16) holds. A direct examination of equation (15), using the fact that x ∈ X implies that x and x′ are bounded on (−∞, −T], shows that Lx is bounded and uniformly Lipschitz on (−∞, −T], hence absolutely continuous there, for any x ∈ X. Upon differentiating equation (15) we obtain equation (17), simply due to several cancellations, and where we recall that B_j′(u) = A_j(u). It remains to prove that Lx ∈ X for T sufficiently large. Now from (15), using (5) and (6), one sees that the bound (18) holds, with the quantities Ψ_0(T) and Ψ_1(T) as defined there; and from equation (17) one has the bound (19). At this point let us fix a value for T. As above we require that T ≥ T_−; in addition, we require that the condition (20) holds. This can be done by choosing T sufficiently large, in light of (14) and because lim_{T→+∞} Ψ_j(T) = 0 for j = 0, 1. With this we may estimate the norm of L as an operator on X. We have, using (18), (19), and (20), that ‖Lx‖_1 ≤ ρ‖x‖_1 for some ρ < 1 independent of x, and thus ‖L‖ < 1 for the operator norm. It follows that with x_− ∈ C^n given, and regarding x_− as an element of X (a constant function), the equation x = x_− + Lx, which we label (21), has a unique solution ξ ∈ X. Further, we note that this solution satisfies (1) and (3), by virtue of (16) and (17); and as noted earlier, any solution of (1), (3) satisfies (21). With this, the proof is complete.
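The fixed-point construction above can be mimicked numerically. The sketch below (our illustration; it iterates the naive integral form of the equation rather than the operator (15), and truncates the half-line to a finite window) applies Picard iteration to the scalar super-quadratic example x′(t) = sin(t³)x(t − 1), seeking the solution with limit x_− in backward time:

```python
import numpy as np

# Picard iteration for x'(t) = sin(t^3) x(t - 1) on the window [-30, -5],
# seeking the solution with x(t) -> x_minus as t -> -infinity.  We iterate
#   x_{k+1}(t) = x_minus + int_{-30}^t sin(s^3) x_k(s - 1) ds,
# freezing x = x_minus to the left of -30 (a small truncation error, since
# the boundary terms in the parts-integrated form are of size O(1/t^2)).
x_minus = 1.0
dt = 1e-4
t = np.arange(-30.0, -5.0, dt)
n_delay = round(1.0 / dt)              # the delay r = 1, in grid steps

x = np.full_like(t, x_minus)           # initial guess: the constant x_minus
for _ in range(30):
    # x(s - 1), with the frozen value x_minus filling in at the left end
    x_shift = np.concatenate([np.full(n_delay, x_minus), x[:-n_delay]])
    x_new = x_minus + dt * np.cumsum(np.sin(t ** 3) * x_shift)
    residual = float(np.max(np.abs(x_new - x)))
    x = x_new

# The delay shifts dependence strictly backward in time, so on a finite
# window the iteration terminates exactly after about (window length)/(delay)
# steps; the computed solution stays close to x_minus because the rapidly
# oscillating coefficient nearly averages out.
print(residual, float(np.max(np.abs(x - x_minus))))
```

Note the design choice: because each application of the integral map pushes dependence back by one delay, the iteration on a finite window is exactly nilpotent, so convergence here is guaranteed even without the weighted-norm contraction estimate used in the proof.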
We now consider the analog of Theorem 2.1, but for t → +∞. Here we consider solutions of equation (1) on [0, ∞), but with arbitrary initial conditions on [−R, 0], where R = max_{1≤j≤m} r_j. Hypotheses H1 and H2 are as before, but H3 and H4 are replaced by analogous hypotheses H3′ and H4′, in which the conditions are imposed as t → +∞ rather than as t → −∞.
With this we have the following result. Note that in contrast to Theorem 2.1, there is no assumption that ξ is bounded; indeed, the boundedness of ξ(t) as t → +∞ is a conclusion of the theorem.

Theorem 2.3. Assume that H1, H2, H3′, and H4′ all hold and that r_j ≥ 0 for 1 ≤ j ≤ m. Suppose for some T_0 ∈ R that x = ξ(t) satisfies equation (1) for t ≥ T_0. Then there exists x_+ ∈ C^n such that lim_{t→+∞} ξ(t) = x_+.

Proof. By translating time, we may assume without loss that T_0 = 0. We first prove that ξ(t) is bounded as t → +∞. Assuming to the contrary that ξ(t) is unbounded, one sees that there exists a sequence t_k → +∞ satisfying the conditions (22). Let K be such that |A_j(p_j(t))|, |B_j(p_j(t))| ≤ K for t ≥ 0, for 1 ≤ j ≤ m, and fix τ ≥ max{R, T_+} sufficiently large that the inequality (23) holds. Here σ_1 and σ_2 as before are as in (6), but now for large positive t, and the equality in (23) serves as the definition of the function Φ_+. Assuming without loss that t_k ≥ τ, consider equation (9) with t = t_k. From (22) one sees that the bounds (24) hold for the terms appearing in (9); and further, using the differential equation (1) one also sees that (25) holds. Thus upon moving the term ξ(τ) to the right-hand side and taking norms in (9), one obtains with the help of (24) and (25), and then (23), that |ξ(t_k)| ≤ |ξ(τ)| + (1/2)|ξ(t_k)|. It follows that |ξ(t_k)| ≤ 2|ξ(τ)|, which contradicts the second equation in (22). Thus ξ(t) is bounded as t → +∞.

It remains to prove that lim_{t→+∞} ξ(t) = x_+ exists. As in the proof of Theorem 2.1, it suffices to show that diam(D_T) → 0 as T → +∞, where now D_T = {ξ(t) | t ≥ T}. Let C be such that |ξ(t)| ≤ C for all t ≥ −R and let τ and t be any values satisfying max{R, T_+} ≤ τ < t. (In particular, τ need not be the value in the earlier part of this proof.) Then we have from (9) an estimate for |ξ(t) − ξ(τ)| in terms of Φ_+. Since we are assuming H3′ and H4′, one sees that Φ_+(t) < ∞, and also that Φ_+(t) → 0 as t → +∞, which gives the desired limit.

Remark 1. In general, with H1-H4 holding as in Theorem 2.1, one does not expect every solution ξ : R → C^n to be bounded as t → −∞; this is in marked contrast to the situation in Theorem 2.3 with t → +∞. To be precise, consider an equation as in (1), with m = 1, A_1(u) = A(u), p_1(t) = p(t), and r_1 = 1, and assume that H1-H4 are satisfied. In addition assume that A and p are C^∞ smooth and that det A(p(t)) ≠ 0 for every t ≤ 0.
Then, given any ϕ ∈ C([−1, 0], C^n), the condition det A(p(t)) ≠ 0 ensures that ϕ has at most one left-prolongation ξ, that is, at most one solution ξ of the equation on (−∞, 0] which agrees with ϕ on [−1, 0]; indeed, ξ(t − 1) may be determined inductively from ξ(t − 1) = A(p(t))^{−1}ξ′(t). Letting V_− denote the set of those ϕ which admit a left-prolongation, any ϕ ∈ C^∞([−1, 0], C^n) belongs to V_− provided appropriate compatibility conditions hold at the endpoints θ = −1, 0. One sees that V_0, and thus V_−, are infinite dimensional vector spaces. By contrast, let V_bdd ⊂ V_− denote the set of those ϕ whose left-prolongation is bounded as t → −∞. Then V_bdd is an n-dimensional vector space, by virtue of Theorems 2.1 and 2.2.
All of the results of this section can be generalized to nonlinear equations of the form

x′(t) = Σ_{j=1}^m A_j(p_j(t)) f_j(x(t − r_{j,1}), x(t − r_{j,2}), . . . , x(t − r_{j,q})). (27)

Here we seek solutions taking values in R^n; since C^n can be canonically identified with R^{2n}, this includes the case where one seeks C^n-valued solutions of (1). We maintain our earlier notation, and A_j and p_j for 1 ≤ j ≤ m will usually be assumed to satisfy H1-H4, with the modification that we assume A_j : R → R^{n×n}. Again, because C^{n×n} can be canonically identified with a subset of R^{2n×2n} and C^n canonically identified with R^{2n} in such a way as to respect matrix multiplication, nothing is lost. We shall always assume that the maps f_j for 1 ≤ j ≤ m satisfy the following hypothesis.

H5. f_j : R^{qn} → R^n is a locally Lipschitz map, for 1 ≤ j ≤ m.
We shall now state analogs of Theorems 2.1 and 2.2 for equation (27).
Theorem 2.4. Assume that H1-H5 all hold and that r_{j,k} ≥ 0 for 1 ≤ j ≤ m and 1 ≤ k ≤ q. Suppose that ξ is a C^1 function which is bounded as t → −∞ and that x = ξ(t) satisfies equation (27) for all t ≤ −T_0, for some T_0. Then there exists x_− ∈ R^n such that lim_{t→−∞} ξ(t) = x_−.

Outline of Proof. If D_T is as in the proof of Theorem 2.1, it suffices to prove that diam(D_T) approaches zero as T → +∞. Assume without loss that T_0 ≥ T_−, where T_− is as in H4, and let C be such that |ξ(t)| ≤ C for t ≤ −T_0. Define η_j(t) as in (28), and let C_0 be such that the bound (29) holds. It follows from equations (27) and (29) that ξ′ is bounded on (−∞, −T_0]. Because f_j is Lipschitz on bounded subsets of R^{qn}, it follows that η_j is Lipschitz on (−∞, −T_0] and so is absolutely continuous. With C_1 a Lipschitz constant for the η_j, we have that |η_j′(t)| ≤ C_1 almost everywhere. If τ < t ≤ −T_0, we can now integrate by parts to obtain equation (30). Because |η_j′(θ)| ≤ C_1 Lebesgue almost everywhere, the remainder of the proof proceeds as in the proof of Theorem 2.1.
Theorem 2.5. Assume that H1-H5 all hold and that r_{j,k} ≥ 0 for 1 ≤ j ≤ m and 1 ≤ k ≤ q. Let x_− ∈ R^n be given. Then there exists T ∈ R and a C^1 function ξ : (−∞, −T] → R^n with x = ξ(t) satisfying the differential equation (27) for t ≤ −T, such that lim_{t→−∞} ξ(t) = x_−. If r_{j,k} > 0 for every j and k then we can extend ξ so that it satisfies (27) for all t ∈ R.
Outline of Proof. For any numbers T ∈ R and M_0, M_1 > 0 define a Banach space W and a subset Q ⊂ W, where ‖·‖_0 and Lip are as in (11), and where ‖·‖_0 is taken as the norm in W. One easily sees that the set Q is closed, bounded, and convex. Now fix M_0 so that M_0 > |x_−|. The quantities T and M_1 will be fixed later, although henceforth we restrict T so that T ≥ T_−. Given any x ∈ Q, define η_j : (−∞, −T] → R^n for 1 ≤ j ≤ m as in (28). One can prove there exists a constant C_0, independent of x, T, M_1, and j, such that |η_j(t)| ≤ C_0 for t ≤ −T. Also there exists a constant C_1, independent of x, T, and j, but possibly depending on M_1, such that the bound (31) holds. Equation (31) implies that η_j is absolutely continuous on (−∞, −T], hence differentiable Lebesgue almost everywhere, with the bound (32). Now define a map F on Q by equation (33). By using H3, H4, equation (32), and integration by parts (compare equation (30)), one can prove that the improper integral in equation (33) exists. By increasing T and M_1, one can prove that F(Q) ⊂ Q and that F is a continuous compact map of Q into itself. It then follows from the Schauder fixed point theorem that F has a fixed point ξ ∈ Q, and equation (33) implies that ξ satisfies equation (27) for t ≤ −T, and also that lim_{t→−∞} ξ(t) = x_−.

Theorem 2.5 does not imply the uniqueness of the solution ξ described there. However, if the assumptions on the f_j are slightly strengthened, we obtain uniqueness.
Theorem 2.6. Assume that H1-H5 all hold and that r_{j,k} ≥ 0 for 1 ≤ j ≤ m and 1 ≤ k ≤ q. Let x_− ∈ R^n be given. Also assume there exists a neighborhood U of (x_−, x_−, . . . , x_−) ∈ R^{qn} such that for 1 ≤ j ≤ m the function f_j is C^1 on U and its Jacobian derivative f_j′ is Lipschitz on U. Then if ξ_k : (−∞, −T] → R^n for k = 1, 2 and some T ∈ R both satisfy equation (27) for t ≤ −T, and if also lim_{t→−∞} ξ_k(t) = x_− for k = 1, 2, then ξ_1(t) = ξ_2(t) for all sufficiently negative t.

The idea of the proof of Theorem 2.6 is to prove that the map F : Q → Q defined in the proof of Theorem 2.5 is a contraction mapping if T is chosen large enough. We omit further details.
We finally mention a nonlinear generalization of Theorem 2.3. We leave the proof, which roughly follows the proof of Theorem 2.3, to the reader.

Theorem 2.7. Assume that H1, H2, H3′, and H4′ all hold and that r_{j,k} ≥ 0 for 1 ≤ j ≤ m and 1 ≤ k ≤ q. Also assume that f_j : R^{qn} → R^n is a globally Lipschitz map for 1 ≤ j ≤ m. Then the conclusion of Theorem 2.3 holds for solutions of equation (27).

3. Asymptotic homogenization: The quadratic case. When some p_j has quadratic growth, say p_j(t) = c_2t^2 + c_1t + c_0 with c_2 ≠ 0, then H4 is no longer satisfied, as the first integral there is infinite, and thus the arguments in Section 2 are no longer valid. To remedy this problem it is necessary to integrate by parts several more times, somewhat in the spirit of the proofs of Section 2. However, some delicate issues related to resonance arise, and it is natural to restrict A_j in equation (1) to be a periodic (or at least oscillatory) function of mean value zero.
For simplicity we shall assume that m = 1, that is, we have only one delay, and we shall assume without loss that it is r_1 = 1. We shall denote A(u) = A_1(u) and B(u) = B_1(u), and also p(t) = p_1(t). We shall assume that p is a quadratic polynomial as above with c_2 ≠ 0, although we believe it should be possible to extend our results to a broader class of functions p.
Thus we consider an equation of the form

x′(t) = A(c_2t^2 + c_1t + c_0)x(t − 1), (34)

where c_2 ≠ 0. The matrix-valued function A will be periodic and of mean value zero; to be specific, we shall assume the following.

H6. A : R → C^{n×n} has the form

A(u) = Σ_{j∈Z} a_j e^{iωju}, with Σ_{j∈Z} |a_j| < ∞,

for some a_j ∈ C^{n×n} for j ∈ Z, with a_0 = 0, and with ω > 0.
Thus, with H6 holding, the function A is continuous and has period 2π/ω. Conversely, if A is a C^1 periodic function of period 2π/ω and mean value zero, and if also A′ is Hölder continuous with Hölder exponent α ∈ (0, 1), then standard estimates (see Chapter 2, Section 5 of [7]) imply that A is given by an infinite series as in H6 with a_0 = 0 and |a_j| ≤ C/|j|^{1+α} for some C > 0, and so the conditions of H6 are satisfied.
With this, we have the following results, which are analogs of the first three results in Section 2.
Theorem 3.1. Assume that H6 holds and that c_0, c_1, c_2 ∈ R with c_2 ≠ 0. Suppose that ξ : R → C^n is a C^1 function which is bounded as t → −∞ and that x = ξ(t) satisfies equation (34) for t ∈ R. Then there exists x_− ∈ C^n such that lim_{t→−∞} ξ(t) = x_−.

Theorem 3.2. Assume that H6 holds and that c_0, c_1, c_2 ∈ R with c_2 ≠ 0. Let x_− ∈ C^n be given. Then there exists a unique C^1 function ξ : R → C^n with x = ξ(t) satisfying the differential equation (34) for t ∈ R, such that lim_{t→−∞} ξ(t) = x_−.

Theorem 3.3. Assume that H6 holds and that c_0, c_1, c_2 ∈ R with c_2 ≠ 0. Suppose for some T_0 ∈ R that x = ξ(t) satisfies equation (34) for t ≥ T_0. Then there exists x_+ ∈ C^n such that lim_{t→+∞} ξ(t) = x_+.

We note that by means of a linear change of the time variable, we can assume without loss that p(t) = t^2 exactly. Indeed, beginning with equation (34), let t_0 = −c_1/(2c_2) and t̃ = t − t_0, and let y(t̃) = x(t). A calculation then shows that y satisfies an equation of the same form, with a new coefficient function Ã and with p(t̃) = t̃^2 exactly. One checks that if A satisfies the conditions of H6, then so does Ã, although with different coefficients a_j and a different frequency ω. Thus instead of equation (34) we shall, without loss of generality, consider the equation

x′(t) = A(t^2)x(t − 1), (35)

with A satisfying H6. As a_0 = 0 in H6, the primitive of A is also periodic of the same period. We choose the unique primitive which also has mean value zero, and denote it by B. As in the previous section, we shall integrate our differential equation by parts and set up appropriate integral operators. However, here it will require several integrations by parts before we obtain operators with the appropriate convergence properties. As a result of these integrations one obtains several auxiliary functions, which are listed in Table 1, and whose properties are described in Lemma 3.4 below. We mention that some rather subtle resonances between various terms in the Fourier series arise in obtaining these functions. In particular, note in the formulas for Γ_2, Ω_2, and Ω_3 in Table 1, how the terms with indices j and k satisfying j + k = 0 are treated differently from those with j + k ≠ 0.
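The calculation referred to above does not survive in this excerpt; it can be sketched as follows (our reconstruction, completing the square in p):

```latex
\[
p(t) = c_2 t^2 + c_1 t + c_0 = c_2\,\tilde t^{\,2} + d,
\qquad
\tilde t = t - t_0,\quad t_0 = -\frac{c_1}{2c_2},\quad d = c_0 - \frac{c_1^2}{4c_2},
\]
so that $y(\tilde t\,) = x(t)$ satisfies
\[
y'(\tilde t\,) = \tilde A(\tilde t^{\,2})\,y(\tilde t - 1),
\qquad
\tilde A(u) = A(c_2 u + d)
            = \sum_{j \neq 0} \bigl(a_j e^{i\omega j d}\bigr)\, e^{i(\omega c_2)\, j u} .
\]
```

Thus Ã is again of the form in H6, with coefficients ã_j = a_j e^{iωjd} and frequency |c_2|ω (reindexing j ↦ −j when c_2 < 0).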
Lemma 3.4. Assume that H6 holds and let I ⊂ R be a compact interval with 0 ∉ I. Then the functions Γ_j for 1 ≤ j ≤ 3 and Ω_j for 1 ≤ j ≤ 4 in Table 1 are well-defined and continuous on R \ {0}, and the infinite series that define them are absolutely convergent, uniformly on I. Further, the functions Γ_j are C^1 on R \ {0}, and their derivatives Γ_j′ can be obtained by term-by-term differentiation of the defining infinite series; and moreover, the infinite series of derivatives so obtained are absolutely convergent, uniformly on I. It is also the case that the equations (36), the first of which is −A(t^2) + Γ_1(t) + Ω_1(t) = 0, hold identically on R \ {0}. (In every summation in Table 1 we assume that j, k ∈ Z \ {0}.) Finally, there exists K such that the bounds (37) hold for j as above, on R \ {0}.
Proof. For the most part, the arguments in the proof involve basic calculus and routine (but tedious) calculations.
First observe that due to H6, the absolute and uniform convergence of the various series on the interval I follows directly, and the continuity of each Γ_j and Ω_j follows. We do note that in verifying the continuity of Ω_2, the estimate for the first series there is helpful. In the same way, the absolute and uniform convergence of the series obtained by term-by-term differentiation of the series for Γ_j is verified. The estimates for Γ_j and Ω_j in (37) also follow directly, using H6. Certainly Γ_1 is C^1 on R \ {0}, so we only need to prove that Γ_2 and Γ_3 are C^1. To verify this fact, and that the derivatives of Γ_2 and Γ_3 can be obtained by term-by-term differentiation of their defining infinite series, we use the following general argument. Given a compact interval I, let V be a finite dimensional vector space with norm |·|, and let Q denote a countable index set. Suppose for each q ∈ Q that f_q : I → V is a C^1 function, and that both the series Σ_{q∈Q} |f_q(t)| and Σ_{q∈Q} |f_q′(t)| converge uniformly on I. It then follows that F(t) = Σ_{q∈Q} f_q(t) and F′(t) = Σ_{q∈Q} f_q′(t), where the first series serves as the definition of F, where the limit of each of the series is independent of the order of the terms, where both series define continuous functions, and where F is C^1 with its derivative F′ as given. It is now straightforward to apply this result to the series in the definitions of Γ_2 and Γ_3; we omit the details. The formulas in (36) can now be verified by a routine calculation. And finally, the bounds (37) on Γ_j′ follow from the bounds there on Γ_j and Ω_j, and from (36) (and by possibly increasing K).
Remark 2. If we define Γ_0(t) = −I and Γ_4(t) = 0 identically, where I is the identity matrix, then the equations in (36) reduce to a single equation. Notice that for n > 1 it need not be true that the matrices Γ_j(t) and A((t − j)^2) commute. The reader should note that in subsequent arguments it naturally occurs that we only consider Γ_j(t)A((t − j)^2) and not A((t − j)^2)Γ_j(t), for 1 ≤ j ≤ 3.
For the remainder of this section, the functions Γ j and Ω j will be as in Table 1. While the formula (38) in the next result might seem rather mysterious, it is basically what one would obtain upon integrating the differential equation (35) by parts several times. Rather than go through the lengthy and tedious derivation of this formula, we provide here simply the end result.
Lemma 3.5. Let I ⊂ R be an interval of length len(I) > 4 and let I_4 ⊂ I_1 ⊂ I be given by I_j = {t ∈ I | t − j ∈ I} for j = 1, 4; namely, I_j is the interval I with a segment of length j removed from the left-hand end. Assume that 0 ∉ I_4. Suppose that ξ : I → C^n is continuous, and that x = ξ(t) satisfies the differential equation (35) for t ∈ I_1. Then for any t, τ ∈ I_4, equation (38) holds.
Proof. The lemma can be proved by first noting that equation (38) holds when t = τ, and then by differentiating both sides of this equation with respect to t and showing the results are equal for t ∈ I_4. Upon differentiating and using the differential equation (35), one obtains an identity among the functions Γ_j and Ω_j as the required condition to be satisfied; and one sees that this identity indeed holds upon substituting the formulas in (36).
We shall also need the following partial converse of Lemma 3.5.
Lemma 3.6. Suppose that H6 is satisfied and that for some T_0 > 0 and x_− ∈ C^n there is a bounded continuous function ξ : (−∞, −T_0] → C^n satisfying equation (39). Then ξ is C^1 on (−∞, −T_0], x = ξ(t) satisfies the differential equation (35) there, and lim_{t→−∞} ξ(t) = x_−. Here the improper integrals in (39) exist by virtue of the estimates (37) in Lemma 3.4.
We claim further that (35) holds everywhere on (−∞, −T_0]. To this end, define µ and T* as in (40). Then T* ≤ T_1 and we wish to prove that T* = T_0, so suppose to the contrary that T* > T_0. Then, directly from the definition of T*, there exists some t ∈ (−T*, −T_0] ∩ (−T*, −T* + 1] such that µ(t) ≠ 0. But µ(t − j) = 0 for j ≥ 1 as t − j ≤ −T*, and so for this t the first equation in (40) is false; this contradiction implies that T* = T_0, as desired. The fact that ξ is C^1 on (−∞, −T_0] follows from the fact that ξ(t) satisfies the differential equation (35) there. And the fact that ξ(t) limits to x_− as t → −∞ follows from the bounds (37) and from equation (39).
Proof of Theorem 3.1. As remarked earlier, it is enough to consider equation (35); and as in the proof of Theorem 2.1 we let D_T = {ξ(t) | t ≤ −T}. Note also that for t ≤ −1 the bounds (41) hold, with K as in (37), for 1 ≤ j ≤ 3 and 1 ≤ k ≤ 4. Also let C be such that |ξ(t)| ≤ C for t ≤ −1. Now fixing T ≥ 1 and taking any τ, t ≤ −T, we have by Lemma 3.5 that (38) holds, and hence by (41) we obtain an estimate for |ξ(t) − ξ(τ)| which tends to zero as T → +∞. It follows that diam(D_T) → 0 as T → +∞, and hence that lim_{t→−∞} ξ(t) = x_− exists, as desired.

Proof of Theorem 3.2. Again, it is enough to consider equation (35). We follow roughly the outline of the proof of Theorem 2.2, and in particular, we take the space X as in (12) with the norm (13) for some T and positive κ. To be specific, we fix T ≥ 1 sufficiently large and κ sufficiently small so that the condition (42) holds, where the equality in (42) serves as the definition of ρ and where K is as in (37). If ξ : R → C^n is a function as in the statement of the theorem, with x_− ∈ C^n given, then one sees that ξ ∈ X. By Lemma 3.5, equation (38) holds for any τ < t ≤ −T, and upon letting τ → −∞ one obtains equation (21), where here L : X → X is defined by equation (43). One checks that indeed L is a bounded linear operator on X; in particular, as T ≥ 1, then arguing much as in the proof of Theorem 3.1 and using (41), we obtain a bound for ‖Lx‖_0. Also, using (41) and the second inequality in (37), we obtain upon differentiating (43) a bound for Lip(Lx), again as T ≥ 1. It now follows that ‖Lx‖_1 ≤ ρ‖x‖_1 for every x ∈ X, and thus ‖L‖ ≤ ρ for the operator norm of L. We thus obtain a unique solution ξ ∈ X to the problem ξ = x_− + Lξ, which by Lemma 3.6 satisfies equation (35), as desired.
Proof of Theorem 3.3. Again, it is enough to consider equation (35). We follow the proof of Theorem 2.3 and first prove that ξ(t) is bounded as t → +∞. (Unlike in the proof of Theorem 2.3, here we are not free to translate time so that T_0 = 0, as this would change the form of the equation (35).) Assuming to the contrary that ξ is unbounded, one sees there exists a sequence t_k → +∞ satisfying (22), with the modification that −R is replaced by T_0 − 1. Taking t = t_k in (38), where we assume that t_k ≥ τ ≥ T_0 + 3, and also that τ ≥ 1, we obtain an estimate with K as in (37). Fixing τ so that τ ≥ 40K, we have that |ξ(t_k)| ≤ |ξ(τ)| + (1/2)|ξ(t_k)| and thus |ξ(t_k)| ≤ 2|ξ(τ)|, contradicting the second equation in (22). Now with ξ(t) bounded as t → +∞, say |ξ(t)| ≤ C for t ≥ T_0 − 1, let D_T = {ξ(t) | t ≥ T} for any T ≥ T_0 + 3. Then if t, τ ≥ T for such T, where we assume additionally that T ≥ 1, we have from (41) and (38) an estimate for |ξ(t) − ξ(τ)| which shows that diam(D_T) → 0 as T → +∞; hence the limit lim_{t→+∞} ξ(t) = x_+ exists, as desired.

4. Analytic extension to the lower half plane. In this section we find conditions under which the solution ξ in Theorem 3.2 can be extended as an analytic function to part or all of the lower half complex plane. To this end, we introduce the following condition.
H7. Hypothesis H6 is satisfied and in addition a j = 0 for all j ≤ 0.
Under H7 one easily sees that the function A extends to an analytic function on the upper half plane H = {z ∈ C | Im z > 0}; and further, this function is continuous on the closure of H. In addition, we have the bound |A(z)| ≤ Σ_{j≥1} |a_j| for Im z ≥ 0. Letting V = {z ∈ C | Re z < 0 and Im z < 0}, and observing that Im(z^2) = 2(Re z)(Im z) > 0 for z ∈ V, we see that the function z → A(z^2) is analytic on V, and continuous and bounded on the closure of V, with the bound (44). Here and below we shall usually write z = t + is, where t = Re z and s = Im z.
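The bound (44) itself does not survive in this excerpt; under H7 a bound of the expected kind follows directly. For z = t + is with t ≤ 0 and s ≤ 0 (the closure of the region V above), one has Im(z²) = 2ts ≥ 0, whence:

```latex
\[
\bigl|A(z^2)\bigr|
  = \Bigl|\sum_{j \ge 1} a_j\, e^{i\omega j z^2}\Bigr|
  \le \sum_{j \ge 1} |a_j|\, e^{-\omega j \operatorname{Im}(z^2)}
  \le \sum_{j \ge 1} |a_j| \;<\; \infty .
\]
```

Here the restriction to positive frequencies (a_j = 0 for j ≤ 0, as in H7) is exactly what makes each exponential factor at most 1.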
The following is the main result of this section, describing how the solution ξ extends to the region V as an analytic function. We shall show later that under slightly stronger conditions, this extension can be made to the entire lower half plane.
Theorem 4.1. Assume that H7 holds and let x_− ∈ C^n be given. Then for every s ≤ 0 the equation

x′(t) = A((t + is)^2)x(t − 1) (45)

has a unique solution x = ξ_s(t) for t ≤ 0 with lim_{t→−∞} ξ_s(t) = x_−, as in (46); and in particular, when s = 0, the solution ξ_0 is the same as that obtained in Theorem 3.2. Denoting ξ(t + is) = ξ_s(t), as in (47), for t + is in the closure of V, we have that the mapping t + is → ξ(t + is) is continuous on the closure of V and analytic on V. Further, there exists C > 0 such that the bound (48) holds for all t + is in the closure of V, and one has a Lipschitz estimate |ξ(t + is_1) − ξ(t + is_2)| ≤ C|s_1 − s_2| for any t ≤ 0 and any s_1, s_2 ≤ 0.
Referring to Table 1 in Section 3, we see that the functions Γ_j and Ω_j extend to analytic functions on V, and that they are continuous on V̄ \ {0} and bounded on V̄ \ B_ρ(0) for every ρ > 0, where V̄ denotes the closure of V. (Here and below, we let B_ε(z_0) = {z ∈ C | |z − z_0| < ε} for any z_0 ∈ C and ε > 0.) In verifying this claim, note that the sums in which a_j a_{−j} appears are absent due to H7, and in particular Γ_3 and Ω_4 vanish identically. Also note the bound |e^{iω(j(t+is)^2 + k(t−1+is)^2)}| = e^{−2ω(jts + k(t−1)s)} ≤ 1 in these formulas, as j, k ≥ 1 and t, s ≤ 0. Further, the formulas in (36) hold with t replaced with t + is ∈ V, and the inequalities (37) hold with t replaced with t + is in the left-hand sides of each inequality. Thus one has, for K independent of s, that bounds analogous to (41) hold for t ≤ −1 and s ≤ 0, for 1 ≤ j ≤ 2 and 1 ≤ k ≤ 3. We begin with the following result, which proves some of the claims of Theorem 4.1.
Proposition 4.2. Assume that H7 holds and let x_− ∈ C^n be given. Then for every s ≤ 0 equation (45) has a unique solution x = ξ_s(t) for t ≤ 0 satisfying (46); and there exists C > 0, independent of s, such that the bounds (49) hold for all t ≤ 0 and s ≤ 0.
Proof. The same argument used to prove Theorem 3.2 establishes the existence of the solution ξ_s(t) for t ≤ −T, for some sufficiently large T independent of s. In particular, take T ≥ 1 and κ > 0 so that (42) holds, and let X be as in (12) with norm (13). For each s ≤ 0 consider an operator L_s on X given by (43), but with Γ_j(t + is) and Ω_j(θ + is) in place of Γ_j(t) and Ω_j(θ). As before one has ‖L_s‖ ≤ ρ < 1, and so the equation x = x_− + L_s x has a unique solution x = ξ_s ∈ X, with norm satisfying ‖ξ_s‖_1 ≤ |x_−|/(1 − ρ), independent of s. Of course Lemmas 3.5 and 3.6 must be appropriately modified, with t, τ, and θ replaced by t + is, τ + is, and θ + is except as they appear in the limits of the integrals, and with the differential equation (45) in place of (35). One now extends ξ_s(t) from (−∞, −T] to (−∞, 0] by solving the differential equation (45) forward by steps. Noting the uniform bound (44) on A((t + is)^2) for t + is in the closure of V, one obtains the first bound in (49). Finally, again from the differential equation (45), one has the second bound in (49), after possibly increasing C.

Proposition 4.4 below gives conditions under which certain solutions of certain analytic delay-differential equations are analytic. In fact, we obtain much more general results in [5], although below we give only the results needed here. Proposition 4.3, which we use in the proof of Proposition 4.4, is a special case of results which can be found, for example, in [2]; we omit the proof. In what follows we shall denote the semi-infinite strip Σ_ε(s_0) = {z ∈ C | Re z < 0 and |Im z − s_0| < ε} for any s_0 ∈ R and ε > 0.
Proposition 4.4. Assume for some ρ > 0 that Q : Σ ρ (0) → C n×n is analytic and bounded. Also assume that ζ : (−∞, 0) → C n is a bounded C 1 function that satisfies ζ ′ (t) = Q(t)ζ(t − 1) (51) for all real t < 0. Then ζ(t) is analytic in t, and in fact extends to a bounded analytic function ζ : Σ σ (0) → C n for some positive σ ≤ ρ; and further, equation (51) holds throughout Σ σ (0).
Proof. One checks that E is differentiable on Σ ρ (0), in light of the boundedness of Q, and thus E is analytic on Σ ρ (0). Further, E is bounded on Σ ρ (0). One now concludes from Proposition 4.3 that η(t) is analytic for t < 0. In particular, there exists a positive σ ≤ ρ such that η extends as an analytic function to the domain {z ∈ C | − 3 < Re z < −1 and | Im z| < σ}. As each coordinate in (52) is analytic on this domain, it follows that ζ extends to a function which is analytic for −3 − j < Re z < −1 − j and | Im z| < σ for each j ≥ 0; that is, ζ is analytic on the semi-infinite strip Σ defined by Σ = {z ∈ C | Re z < −1 and | Im z| < σ}. The function z → ζ ′ (z) − Q(z)ζ(z − 1) is analytic on Σ and vanishes identically for real z ∈ (−∞, −1), and thus it vanishes identically on Σ; that is, equation (51) holds on Σ. Finally, by integrating the differential equation (51) forward one unit, from Re z < −1 to Re z < 0, one extends ζ analytically to all of Σ σ (0), as desired.
Proof of Theorem 4.1. With Proposition 4.2 established, it remains to prove the continuity and analyticity claims for the function ξ, as well as the estimate (48). Fix s 0 < 0 and 0 < ρ ≤ |s 0 |. Then Proposition 4.4 applies, with ζ(t) = ξ s0 (t) and Q(t) = A((t + is 0 ) 2 ). In particular, the map t + is → A((t + is) 2 ) is bounded and analytic on V and thus on Σ ρ (s 0 ) ⊂ V , and therefore Q(z) = A((z + is 0 ) 2 ) is bounded and analytic for z ∈ Σ ρ (0). Thus for some positive σ ≤ ρ, the function t → ξ s0 (t) for t < 0 extends to a bounded analytic function z → ξ̃ s0 (z) for z ∈ Σ σ (0), which moreover satisfies the differential equation (53). Let C > 0 be such that |ξ̃ s0 (z)| ≤ C for all such z. We claim that (54) holds, with ξ as in (47). Note that this claim immediately implies that z → ξ(z) is analytic on Σ σ (s 0 ). To prove this claim, take any s 1 , s 2 ∈ (s 0 − σ, s 0 + σ). Then for t < 0 one obtains the estimate (55), where the Cauchy–Riemann equations are used in the equality, and the differential equation (53) and then the bound (44) are used in the final inequalities. Now taking s 2 = s 0 , so that ξ̃ s0 (t) = ξ s0 (t), we obtain a bound which approaches zero as t → −∞, and thus (56) holds. We see now from (53) and (56) that the function t → ξ̃ s0 (t + i(s 1 − s 0 )) satisfies the same differential equation and limit as the function t → ξ s1 (t) = ξ(t + is 1 ), namely (45) and (46) but with s 1 in place of s. By the uniqueness claim in Theorem 4.1, one concludes that ξ̃ s0 (t + i(s 1 − s 0 )) = ξ(t + is 1 ) for all t < 0 and s 1 ∈ (s 0 − σ, s 0 + σ); however, this conclusion is exactly the same as (54), as desired.
We have now that for each s 0 < 0, there exists σ > 0 such that ξ is analytic on Σ σ (s 0 ), and so it follows that ξ is analytic throughout V , as desired. One now sees that (55) holds with ξ̃ s0 (t + i(s j − s 0 )) replaced by ξ(t + is j ) for j = 1, 2, and with ∂ ξ̃ s0 (t + i(s − s 0 ))/∂s and ∂ ξ̃ s0 (t + i(s − s 0 ))/∂t replaced by ∂ξ(t + is)/∂s and ∂ξ(t + is)/∂t, respectively, for any choice of t < 0 and s 1 , s 2 < 0. The constant C in (55) is now an upper bound for |ξ(t + is)| on V , as obtained in Proposition 4.2. This establishes the bound (48), but only on the open region V . In particular, (48) is not yet established if either s 1 = 0 or s 2 = 0.
To extend (48) to all of V , first note that ξ is uniformly Lipschitz on V , in light of the second inequality in (49) and of the bound (48). As ξ is bounded on V , it follows that it extends to a continuous function on the closure of V . In particular, from (48) one has the existence of the limit lim s→0− ξ(t + is) = ξ * (t), uniformly for t ∈ (−∞, 0], for some bounded continuous function ξ * : (−∞, 0] → C n . It remains to prove that ξ * (t) = ξ(t), that is, ξ * (t) = ξ 0 (t), for all such t. First note that the integrated form of the differential equation (45) holds for ξ * for all t ≤ 0, upon taking the limit s → 0−. Also note the limit lim t→−∞ ξ s (t) = x − , and hence lim t→−∞ ξ * (t) = x − . One now sees that the function ξ * satisfies (45) and (46) with s = 0. Thus ξ * (t) = ξ 0 (t) identically, as desired.
We would like to extend the solution ξ obtained in Theorem 4.1 from the set V to the lower half plane W , where W = {z ∈ C | Im z < 0}. In particular, we wish the extended function, which we shall again denote by ξ, to be analytic on W and continuous on the closure of W ; and in fact, with the hypothesis H8 below, this can be done so that the extended function is C ∞ on the closure of W . Although the function ξ is bounded on V , one does not expect ξ to be bounded on W , as the coefficient A(z 2 ) in the differential equation will in general be unbounded for z ∈ W .
We shall assume the following. H8. Hypothesis H7 is satisfied and in addition lim j→∞ |a j | 1/j = 0.
Note that the limit condition in H8 implies that the coefficient function A is entire. Indeed, the power series Σ ∞ j=1 a j u j has infinite radius of convergence, by the root test, and so defines an entire function of u ∈ C; therefore A is an entire function as well. We have the following result.
Theorem 4.5. Assume that H8 holds and let x − ∈ C n be given. Then the function ξ : V → C n obtained in Theorem 4.1 extends to a function ξ : W → C n which is analytic on W and C ∞ on the closure of W . Further, the function x = ξ(z) satisfies the differential equation x ′ (z) = A(z 2 )x(z − 1) (57) throughout W .
Proof. The coefficient A(z 2 ) is now defined for all z ∈ W , in fact for all z ∈ C. Thus for any s ≤ 0 the solution ξ s : (−∞, 0] → C n of the differential equation (45) with (46) obtained in Theorem 4.1 can be extended to a solution ξ s : R → C n of (45) on the whole real line. The function ξ : V → C n in Theorem 4.1 can now be extended to ξ : W → C n by setting ξ(t + is) = ξ s (t). As ξ is analytic on V and continuous on the closure of V , and because z → A(z 2 ) is analytic on W , one sees that the extended function ξ is analytic on W and continuous on the closure of W , and that it satisfies the differential equation (57) on W . It remains to prove that ξ is C ∞ on the closure of W . By repeatedly differentiating equation (57) one obtains a series of equations, each of which is satisfied by ξ on W . Here x (m) denotes the m th derivative of x, and the coefficients A m,j are analytic on W . It follows that for each m ≥ 1 the function ξ (m) extends continuously to the closure of W , and thus ξ is C ∞ on the closure of W .
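The first step of the repeated differentiation in the proof above can be written out explicitly: differentiating (57) once, and then using (57) again to replace $x'(z-1)$, gives

```latex
x''(z) = 2z\,A'(z^2)\,x(z-1) + A(z^2)\,x'(z-1)
       = 2z\,A'(z^2)\,x(z-1) + A(z^2)\,A\bigl((z-1)^2\bigr)\,x(z-2),
```

and inductively $x^{(m)}(z)$ is a combination of $x(z-1), \dots, x(z-m)$ whose coefficients are built from polynomials in $z$ together with $A$ and its derivatives, hence analytic on $W$.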
5. Some open questions. We mention here several open issues.
5.1. Analyticity of solutions. In the setting of Theorem 2.2 assume that the functions A j and p j are analytic and not identically zero; or in the setting of Theorem 3.2 assume that the function A is analytic and not identically zero. Take x − ≠ 0 and let ξ be the solution as in the statement of these theorems.
Question. Is the function t → ξ(t) analytic at any point t ∈ R?
Certainly the function ξ is C ∞ everywhere. We do not know the answer to the above question even for simple cases such as x ′ (t) = sin(t q )x(t − 1) and x ′ (t) = e it q x(t − 1), where q ≥ 2 is an integer. We conjecture, but with no serious evidence, that such solutions of these equations are nowhere real analytic.

5.2. Asymptotic behavior of solutions. Under the conditions of Theorem 3.2, we proved that for each x − ∈ C n there exists a unique solution x = ξ(t) of equation (34) with lim t→−∞ ξ(t) = x − . The proof shows clearly that |ξ(t) − x − | ≤ K 0 |t| −1 for t ≤ −1, for some K 0 .
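The rate of this convergence can be illustrated numerically in a model case: taking A(u) = e iu (a hypothetical choice, with ω = 1) and x − = 1, the first correction to ξ under the formal iteration discussed in this section is the Fresnel-type integral ∫ t −∞ e iτ 2 dτ , whose modulus a stationary-phase (integration-by-parts) estimate puts near 1/(2|t|). A sketch, with illustrative truncation point −L and step size:

```python
import numpy as np

# Estimate I(t) = integral_{-infty}^t exp(i*tau^2) d tau, truncated at
# tau = -L, by a cumulative composite trapezoid rule on a fine grid.
L, h = 60.0, 5e-5
tau = np.arange(-L, -5.0 + h, h)
f = np.exp(1j * tau**2)
# I_grid[k] approximates the integral from -L up to tau[k]
I_grid = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * (h / 2))))

def I(t):
    """Truncated Fresnel-type integral at grid point nearest t (t in [-L, -5])."""
    return I_grid[int(round((t + L) / h))]

# stationary-phase heuristic: |I(t)| is roughly 1/(2|t|) for large negative t
```

Truncation at −L contributes an error of order 1/(2L), so the 1/(2|t|) decay is visible only for |t| well below L.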
In the setting of Theorem 3.2, in the scalar case n = 1, one would like a sequence of functions ζ m : (−∞, 0] → C and positive constants K m and ν m , with lim m→∞ ν m = ∞, such that |ξ(t) − ζ m (t)| ≤ K m |t| −ν m (58) for t ≤ −1. Taking x − = 1, one might define β 0 (t) = 1 identically for t ≤ 0, and inductively define β j (t) = ∫ t −∞ A(τ 2 )β j−1 (τ − 1) dτ for j ≥ 1, assuming this integral exists as an improper Riemann integral. Operating purely formally, if one defines ξ(t) = Σ ∞ j=0 β j (t), a formal calculation shows that ξ satisfies equation (34). It is thus plausible that if one defines ζ m (t) = Σ m j=0 β j (t), then the inequality (58) is satisfied with ν m = m + 1 and some sequence K m . Preliminary work suggests that this is true under a resonance condition that a j a k = 0 whenever 2j + k = 0. We wonder if estimates such as (58) can be obtained without this restriction.

5.3. Scattering properties of equations. Theorem 3.3 implies that the limit lim t→+∞ ξ(t) = x + exists for the solution of any initial value problem satisfying the conditions of that theorem. Curiously, the proof does not show that x + is ever nonzero, and the same remarks apply to the theorems in Section 2. If ξ is as in Theorem 3.2 with x − given, then the corresponding x + in Theorem 3.3 for this solution can be regarded as a linear function of x − , and so the mapping x − → x + can be thought of as a sort of scattering operator.
Questions. Does it ever happen that x + ≠ 0 in Theorem 3.2 for every (or for any) choice of x − ≠ 0, for a particular function A? Can one prove that x + ≠ 0 for particular examples such as A(u) = e iωu and A(u) = sin(ωu)? Introducing parameters into the coefficient A, for example considering a scalar equation such as x ′ (t) = ae iωt 2 x(t − 1), where a ∈ C and ω ∈ R \ {0}, one may ask how the limit x + = x + (x − , a, ω) depends on its arguments. In particular, is this function continuous, or smooth, or analytic in its arguments? What are its asymptotic properties for limiting values of a and ω?
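The scattering map x − → x + can be explored numerically. The following sketch (explicit Euler, with hypothetical parameter values a = 0.5 and ω = 1) integrates the scalar equation x ′ (t) = a e iωt 2 x(t − 1) forward from a constant history, taking x(T ) as a finite-time proxy for x + ; since the scheme is linear in the history, the computed map is linear in x − , mirroring the linearity of the scattering operator noted above.

```python
import numpy as np

def scatter(x_minus, a=0.5, omega=1.0, T=40.0, h=1e-3):
    """Explicit-Euler integration of x'(t) = a*exp(i*omega*t^2)*x(t-1) on
    [0, T], with constant history x(t) = x_minus for t <= 0; the value x(T)
    serves as a finite-time proxy for the limit x_plus."""
    n = int(round(T / h))
    d = int(round(1.0 / h))            # grid points per unit delay
    xs = np.empty(n + 1, dtype=complex)
    xs[0] = x_minus
    for i in range(n):
        t = i * h
        # delayed value: history for t - 1 <= 0, else a computed grid value
        delayed = x_minus if i < d else xs[i - d]
        xs[i + 1] = xs[i] + h * a * np.exp(1j * omega * t * t) * delayed
    return xs[-1]

x_plus = scatter(1.0)   # proxy for x_plus when x_minus = 1
```

Convergence of x(T ) as T grows, and the dependence of the limit on a and ω, are exactly the open issues raised here; the sketch only produces finite-time approximations.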

5.4. Almost periodic coefficients. A final question involves generalizing the coefficient beyond periodic functions.
Question. Can one prove analogs of the results in Section 3 when the function A is almost periodic with mean value zero?