SELF-EXCITED VIBRATIONS FOR DAMPED AND DELAYED HIGHER DIMENSIONAL WAVE EQUATIONS

Abstract. In the article [12] it is shown that time delay induces self-excited vibrations in a one-dimensional damped wave equation. Here we generalize this result to higher spatial dimensions. We prove the existence of branches of nontrivial time periodic solutions for spatial dimensions d ≥ 2. For d > 2, the bifurcating periodic solutions have a fixed spatial frequency vector, which is a solution of a certain Diophantine equation. The case d = 2 must be treated separately from the others. In particular, it is shown that, provided d is big enough, there exist an arbitrary number of orbitally distinct time periodic solutions which break the symmetric group action. The direction of bifurcation is also obtained.

1. Introduction. Consider the following d-dimensional viscously damped wave equation with constant time delay τ > 0:

(1)  u_tt(t, x) = ∆u(t, x) − (1/d) u_t(t, x) − u(t − τ, x) + f(u(t − τ, x)),

where x ∈ [0, π]^d and ∆ = Σ_{j=1}^d ∂²_{x_j}. Thus we are considering the problem with Dirichlet boundary conditions on the cube, and we are looking for solutions which are real valued. The nonlinearity f : R → R is taken to be C∞, satisfying f(−u) = −f(u) and f(0) = f'(0) = 0. The oddness of the nonlinearity will enable us to find solutions which are also 2π-periodic in space. System (1) describes the motion of a vibrating d-cube with Dirichlet boundary conditions subject to external damping and an external delayed restoring force which is a perturbation of a linear one. In the article [12] it was shown that when d = 1 the above system undergoes a sequence of Hopf bifurcations, when the time delay is taken as the bifurcation parameter, giving rise to nonconstant, C∞ smooth, time periodic solutions. In this situation these are naturally called "self-excited vibrations", since the feedback depends autonomously on the state of the system. Although system (1) is not related to any specific bridge or robot, similar systems arise in various situations in mechanical engineering; we refer to the introductions of the articles [12, 11] for a brief survey of the literature and various references on applications, as well as to the survey article [8]. Much work has been done on the problem of bifurcation of periodic and quasi-periodic solutions for forced, parametrically forced [10], Hamiltonian, and time-reversible wave equations, none of which include the system (1). As in [12], it is not hard to see using an energy argument that time delay (τ > 0) is necessary for (1) to have a nonconstant time periodic solution, since otherwise the system is dissipative.
The aim of the present work is to generalize the results of [12] to spatial dimensions d ≥ 2. We will use a combination of the Lyapunov-Schmidt reduction with the classical implicit function theorem. However, in doing so we will see some surprising features which are not encountered in the one dimensional problem. Some of these features stem from the fact that system 1 is equivariant with respect to the action of the symmetric group on d letters (S d ), which is nontrivial for d > 1, while other features are inherent to this particular problem. In order to explain the features, we need to briefly discuss some properties of a certain Diophantine equation, as well as describe the hallmarks of Hopf bifurcation problems with symmetry.
An important object in the present work is the Diophantine equation

(2)  m_1² + m_2² + · · · + m_d² = d².

We will be interested in solutions of (2) which are d-tuples of positive integers. Notice that the symmetric group acts on solutions of this equation in the obvious way, by permuting coordinates. An important observation is that this group action can have more than one orbit consisting of solutions whose isotropy subgroup is S_{d−1}.
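The solution set of the Diophantine equation m_1² + · · · + m_d² = d² in positive integers can be explored directly. The following short script is illustrative only (a brute-force search, not taken from the paper):

```python
from itertools import product

def diophantine_solutions(d):
    """All d-tuples of positive integers m with m_1^2 + ... + m_d^2 = d^2."""
    return [m for m in product(range(1, d + 1), repeat=d)
            if sum(k * k for k in m) == d * d]

print(len(diophantine_solutions(2)))      # 0: no solutions when d = 2
print(sorted(diophantine_solutions(3)))   # the S_3 orbit of (1, 2, 2)
```

For d = 2 the search confirms that there are no solutions, while for d = 3 it returns exactly the permutations of (1, 2, 2), a single orbit with isotropy S_2.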
A central theme of the current work is the S_{d−1} symmetry of these solutions; we shall see that this symmetry is also manifested in the periodic solutions that we construct (see Corollary 1). For local equivariant Hopf bifurcation problems, one looks for branches of bifurcating periodic solutions with prescribed symmetry subgroup, using an equivariant Lyapunov-Schmidt reduction. Since symmetry forces the bifurcation equations to contain more equations than unknowns, one looks for maximal subgroups with two dimensional fixed point subspaces, provided one is seeking locally unique periodic solutions; see, for instance, Theorem 4.9 on page 91 of [5] or Sections 4 and 5 of [4]. (On the other hand, if one drops the requirement that solutions are locally unique, or even that they form a branch, one can still obtain bifurcating periodic solutions, as in, for example, Theorem 6.3 on page 519 of [7].) One then carries out the Lyapunov-Schmidt reduction on the fixed point subspace of the relevant symmetry group, reducing the bifurcation equations to two equations in three unknowns. Unfortunately, in our setting the fixed point subspaces of maximal subgroups of S_d may have dimension larger than two, which is a consequence of the observation above concerning equation (2). Instead of exploiting hidden symmetries in order to reduce the dimension of the kernel and obtain locally unique solutions possessing those symmetries, an alternative approach is to fix a solution of the Diophantine equation (2) and then look for solutions having this fixed spatial frequency vector. Doing so of course breaks the S_d symmetry, since these frequency vectors cannot be constant. The upshot of all of this is that we end up with solutions having S_{d−1} symmetry, which is inherent to this problem. In summary:
(i) In contrast with the one dimensional case, since S_d is nontrivial when d > 1, we find branches of symmetry breaking periodic solutions.
(ii) The periodic orbits obtained above are dynamically distinct, but equivalent when considered as orbits under the action of the symmetric group. The orbits break the S_d symmetry, but nonetheless possess S_{d−1} symmetry.
(iii) We cannot find the periodic orbits above using the implicit function theorem by simply restricting to the fixed point subspace of S_{d−1}, without in addition fixing the spatial frequency vector of the solutions sought.
No discussion of bifurcation theory for wave equations can omit the major works in this area: [3] for the one dimensional case and [1] for higher spatial dimensions. However, these works are only nominally related to ours, since the major interest there is a careful handling of small divisors which, thanks to the presence of time delay and damping, are not present in our case; we stress that we do not encounter any small divisor problem. The reason for this is that there are only finitely many 'bad sites', and as a result we can solve the range equation using the standard implicit function theorem. Our work is much closer in spirit and methodology to the works [7, 6], but differs in that we do not explicitly look for periodic solutions having a prescribed spatiotemporal symmetry group. We compensate for this by looking for solutions with a fixed spatial frequency vector, as well as by understanding the role of the external bifurcation parameter, which is the time delay. However, due to the specific nature of this particular problem, we end up with solutions possessing S_{d−1} symmetry.
There is one other remark we should make, concerning the specially chosen damping coefficient 1/d, where d is the spatial dimension of the problem. This choice is made purely for convenience, to simplify the exposition. That is, we could equally well consider systems of the form

u_tt(t, x) = ∆u(t, x) − α u_t(t, x) − u(t − τ, x) + f(u(t − τ, x))

for real parameters α, τ > 0. For the corresponding linearized system to have periodic solutions, we need the characteristic equation to have purely imaginary roots. The parameter space is partitioned into open subsets whose boundaries correspond to parameter values at which Hopf bifurcation typically occurs; see e.g. the calculations for the single degree of freedom analogue in [2]. Although we can produce similar phenomena in this generality, such considerations involve nasty transcendental numbers, and for simplicity we chose specific parameters for which these numbers are particularly simple. (See also the two dimensional case here.) Alternatively, for any α > 0 we could place the parameter αd in front of the linear restoring force term and repeat the calculations below with the same critical values of the frequency and time delay.
Theorem 1.1. Fix a solution m_0 of (2). Then for any j ≥ 0, (1) has a family of real valued, non-constant, C∞ smooth, small amplitude periodic solutions, having spatial frequency vector m_0 and temporal frequency ω ≈ d, when τ ≈ τ_j := (π/2 + 2πj)/d. More precisely, for any d-tuple m of positive integers, let ϕ_m(x) := sin(m_1 x_1) sin(m_2 x_2) · · · sin(m_d x_d). For any m_0 as above, there are C∞ functions ω(ρ) (frequency) and τ(ρ) (delay), with ω(0) = d and τ(0) = τ_j, defined near ρ = 0, such that for each ρ ≈ 0, (1) has a solution of the form u(ρ)(t, x) = ψ(ρ)(ω(ρ)t, x) when τ = τ(ρ), where ψ(ρ)(t, x) = ρ cos(t)ϕ_{m_0}(x) + higher order corrections. Furthermore, for any s > 0, the map ρ → ψ(ρ) ∈ H^s(T^{1+d}) is C∞ and the second term in the latter expansion is o(ρ) with respect to the H^s norm.
We call two solutions u(t, x), v(t, x) orbitally distinct if their orbits are distinct.
Corollary 1 (Multiplicity of Periodic Orbits). Let κ be the quantity which determines the direction of bifurcation from Theorem 1.2. For each j ≥ 0 and each d > 4, the system (1) has at least d orbitally distinct, non-constant, C∞ smooth, small amplitude periodic solutions for τ > τ_j (τ < τ_j) if κ > 0 (κ < 0). In particular, these solutions possess S_{d−1} symmetry.
Proof. By Proposition 2, there is always at least one solution of the Diophantine equation Σ_{k=1}^d m_k² = d², given by m = (d − 2, 2, · · · , 2) ∈ N^d. Since d − 2 > 2, there are d distinguishable permutations of the latter, and it follows by Theorem 1.1 that there are at least d branches of nonconstant smooth periodic solutions of (1). It is not hard to show that these branches of periodic solutions are pairwise orbitally distinct, since they have different spatial frequency vectors.

Theorem 1.3. For the case d = 2, there are two distinct temporal frequencies ω_± > 0 and corresponding values of the delay τ_j^± > 0, such that for any j ≥ 0 system (1) has a family of real valued, small amplitude, non-constant, C∞ smooth time periodic solutions, having temporal frequency ω ≈ ω_± when τ ≈ τ_j^±. More precisely, there are C∞ smooth functions ω(ρ) and τ(ρ) defined near ρ = 0 with ω(0) = ω_± and τ(0) = τ_j^±, such that for each ρ ≈ 0, the system (1) has a solution of the form u(ρ)(t, x) = ψ(ρ)(ω(ρ)t, x) when τ = τ(ρ), where ψ(ρ)(t, x) = ρ cos(t)ϕ_1(x) + Σ_{n∈Z, m∈N²} c_{nm}(ρ) e^{int} ϕ_m(x) ∈ C∞(T^{1+2}). Furthermore, for any s > 0, the map ρ → ψ(ρ) ∈ H^s(T^{1+2}) is C∞ and the second term in the latter expansion is o(ρ) with respect to the H^s norm.
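The counting step in the proof above can be checked mechanically. The sketch below (a brute-force verification, not part of the paper) confirms that (d − 2, 2, · · · , 2) solves the Diophantine equation and has exactly d distinguishable permutations whenever d > 4:

```python
from itertools import permutations

# For d > 4 the entry d - 2 differs from 2, so the d possible placements
# of the entry d - 2 give exactly d distinguishable permutations.
for d in range(5, 9):
    m = (d - 2,) + (2,) * (d - 1)
    assert sum(k * k for k in m) == d * d   # m solves the Diophantine equation
    orbit = set(permutations(m))
    assert len(orbit) == d                  # d distinct spatial frequency vectors
    print(d, len(orbit))
```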

1.2. Organization. In Section 2 we review the notation used throughout this work, including the key function spaces involved in our analysis. Section 3 presents the key calculations that motivate the introduction of our ansatz for d ≥ 3 in Section 4. The Lyapunov-Schmidt reduction for d ≥ 3 is set up in Section 5, and the solutions of the range equation and the bifurcation equation are presented in Sections 6 and 7, respectively. The direction of bifurcation for d ≥ 3 is treated in Section 8. Finally, the corresponding analysis for d = 2 is presented in Section 9. For a quick reading, we suggest reading Sections 3, 4, and 5, skipping Section 6 (which proves the existence and regularity of the solution of the range equation), taking its results for granted, and proceeding to Section 7. The list of notation and conventions below may be helpful for quick reading. Section 9, which treats the case d = 2, is almost identical to the previous ones. Hence, we suggest that Section 6 be read last.
2. Notation and conventions. The following notation will be used throughout the paper.
• N = The positive integers.
• T = The one dimensional torus R/2πZ.
• Given normed spaces V and W, L(V, W) denotes the space of bounded linear operators L : V → W, equipped with the uniform operator topology.
• Given two vectors y, z ∈ Z^d, we denote their pointwise product by (y ⊙ z)_j := y_j z_j for j = 1, · · · , d.
• Given a vector m = (m_1, · · · , m_d) ∈ Z^d, |m|² := Σ_{j=1}^d m_j² is its Euclidean norm squared.
• For real numbers a, b ≥ 0, we write a ≲ b to mean that a ≤ Cb for some universal constant C > 0.
• Whenever a product of Banach spaces is under consideration, we view it as a Banach space with the corresponding product norm.
We use T^n = R^n/2πZ^n to denote the n-torus, and C^k(T^n) denotes the set of k times continuously differentiable functions on the n-torus. The space L²(T^n) is the space of square integrable functions on T^n, where the norm ‖·‖_{L²} is given by

‖f‖²_{L²} = (2π)^{−n} ∫_{T^n} |f(x)|² dx.

The Fourier transform of a function f ∈ L²(T^n) is given by

f̂(m) = (2π)^{−n} ∫_{T^n} f(x) e^{−im·x} dx,  m ∈ Z^n.

Note that if f is a real-valued function, then the Fourier coefficient f̂(−m) is the complex conjugate of f̂(m). We will frequently write f_m = f̂(m). By Plancherel's theorem we have ‖f‖²_{L²} = Σ_{m∈Z^n} |f̂(m)|², and the norm ‖·‖_{H^s} is given by (in the form convenient for us)

‖f‖²_{H^s} = Σ_{m∈Z^n} (1 + |m|²)^s |f̂(m)|².

With this notation, we have the following Sobolev embedding theorem: if k ≥ 0 and s > k + n/2, then H^s(T^n) ⊂ C^k(T^n) and ‖u‖_{C^k} ≲ ‖u‖_{H^s}. Moreover, if s > n/2 then H^s(T^n) is a Banach algebra under pointwise multiplication. We will be interested in the case n = 1 + d, corresponding to one time dimension and d spatial dimensions.
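These Fourier-analytic identities can be illustrated numerically on the one-dimensional torus. The sketch below assumes the normalization f̂(m) = (2π)^{−1} ∫ f e^{−imx} dx, under which Plancherel's theorem takes the coefficient-free form used here; it is an illustration, not part of the paper's analysis:

```python
import numpy as np

# Sample a band-limited real function on T = R/2πZ
N = 64
x = 2 * np.pi * np.arange(N) / N
f = 3 * np.cos(2 * x) + np.sin(5 * x)

# Fourier coefficients f_hat(m) = (2π)^{-1} ∫ f(x) e^{-imx} dx, via the DFT
fhat = np.fft.fft(f) / N
m = np.fft.fftfreq(N, d=1.0 / N)    # integer frequencies

# Plancherel: (2π)^{-1} ∫ |f|^2 dx = Σ_m |f_hat(m)|^2
lhs = np.mean(np.abs(f) ** 2)
rhs = np.sum(np.abs(fhat) ** 2)

# The Sobolev norm Σ_m (1 + |m|^2)^s |f_hat(m)|^2
s = 2.0
hs_sq = np.sum((1 + m ** 2) ** s * np.abs(fhat) ** 2)
print(lhs, rhs, hs_sq)
```

Since f is band-limited and N exceeds twice its top frequency, the discrete sums reproduce the continuum identities exactly (up to floating point error).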
2.1. The space X^s. Let d be a positive integer and m ∈ N^d. Since we will be interested in Dirichlet boundary conditions, we define ϕ_m(x) := Π_{j=1}^d sin(m_j x_j), which are eigenfunctions of the Laplacian on the d-dimensional cube R = [0, π]^d. We will be working in the real Hilbert space

X^s := { u ∈ H^s(T^{1+d}) : u(t, x) = Σ_{n∈Z, m∈N^d} u_{nm} e^{int} ϕ_m(x), with u_{(−n)m} the complex conjugate of u_{nm} },

with norm ‖u‖²_{X^s} := Σ_{n,m} (1 + n² + |m|²)^s |u_{nm}|². X^s is a real subspace of H^s(T^{1+d}) and inherits the L² inner product and H^s norm. Note, however, that the latter is expressed in a different basis. It is not difficult to check that the X^s norm above and the H^s norm are equivalent. Moreover, the latter norm admits yet another equivalent expression, a fact we will use in Section 6.
3. Motivating calculations. We begin with the ansatz for the usual Lyapunov-Schmidt reduction on H^s, as in [12]: finding a (2π/ω)-periodic solution of (1) amounts to finding a 2π-periodic solution of the system below, and vice versa.
(4)  ω² u_tt(t, x) + (ω/d) u_t(t, x) − ∆u(t, x) + u(t − ωτ, x) = f(u(t − ωτ, x)).

We note that when (4) is linearized (f = 0), we can look for 2π-periodic solutions of the form e^{int} ϕ_m(x). If we let ω = d and τ = τ_j = (π/2 + 2πj)/d, where j ≥ 0, be our "critical points", then we observe that we must have |n| = 1 and |m|² = d².

Definition 3.1. By a solution of (2) we will always mean a d-tuple m of positive integers satisfying (2).
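These critical values can be checked numerically. The sketch below (illustrative, not from the paper) evaluates the characteristic denominators of the linearization, written here in the case m_0 = (1, · · · , 1) so that m ⊙ m_0 = m, matching the formula for |θ|² recorded in Section 6; they vanish at ω = d, τ = τ_j precisely for |n| = 1 and |m|² = d²:

```python
import math

def theta(n, m, d, omega, tau):
    """Characteristic denominator of the linearization (here with m0 = (1,...,1),
    so that m ⊙ m0 = m); cf. the formula for |θ|^2 in Section 6."""
    m2 = sum(k * k for k in m)
    return (m2 - omega ** 2 * n ** 2 + math.cos(omega * n * tau)
            + 1j * (omega * n / d - math.sin(omega * n * tau)))

d = 3
for j in range(3):
    tau_j = (math.pi / 2 + 2 * math.pi * j) / d
    # θ vanishes at ω = d, τ = τ_j exactly when |n| = 1 and |m|^2 = d^2
    assert abs(theta(1, (2, 2, 1), d, d, tau_j)) < 1e-12
    assert abs(theta(-1, (1, 2, 2), d, d, tau_j)) < 1e-12
    assert abs(theta(1, (1, 1, 1), d, d, tau_j)) > 1     # |m|^2 = 3 ≠ 9
    assert abs(theta(2, (2, 2, 1), d, d, tau_j)) > 1     # |n| ≠ 1
```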
Next we define L_{ω,τ} : X^{s+2} → X^s by

L_{ω,τ} u := ω² u_tt + (ω/d) u_t − ∆u + u(· − ωτ, ·).

This gives us the following proposition.

Proposition 1. If K is the number of solutions of the Diophantine equation (2), then the dimension of ker L_{d,τ_j} is 2K.
It is interesting to note that when d = 2, the Diophantine equation (2) has no solutions. However, the following proposition tells us that as long as d ≥ 3, there is always at least one solution of (2).

Proposition 2. If d ≥ 3, then m = (d − 2, 2, · · · , 2) ∈ N^d is a solution of (2).
It is clear that we will have to consider the case d = 2 separately and look for different critical values of the frequency and time delay at which a Hopf bifurcation occurs. See Section 9.
By Proposition 1 we see that the kernel of L_{d,τ_j} can in general be very large, due to the high geometric multiplicity of eigenvalues of the Laplacian. This high dimensional kernel can obstruct the solvability of the bifurcation equations in the Lyapunov-Schmidt reduction, since in general there will be more equations than unknowns. However, it is well known that this problem can be circumvented if a subspace of the "linearized" solutions which span the kernel has some symmetry. In this case, as in e.g. Theorem 6.3 on page 519 of [7] or Theorem 4.9 on page 91 of [5], one carries out the Lyapunov-Schmidt reduction on the fixed point subspace of the relevant symmetry group, reducing the bifurcation equations to two equations in three unknowns. Instead of exploiting hidden symmetries in order to reduce the dimension of the kernel and obtain solutions possessing those symmetries, an alternative approach is to fix a solution of the Diophantine equation (2) and then look for solutions having that solution as their fixed spatial frequency vector. This is the approach we take, and in light of this discussion we are naturally led to the ansatz in the next section.
4. The ansatz: d ≥ 3. By the discussion at the end of Section 3, we wish to find a solution of (4) having spatial frequency vector m_0. Hence we make the ansatz

u(t, x) = v(t, m_0 ⊙ x),  v(t, y) = Σ_{n∈Z, m∈N^d} v_{nm} e^{int} ϕ_m(y) ∈ X^s.

Substituting the above ansatz into equation (4) gives us

(8)  ω² v_tt + (ω/d) v_t − ∆_{m_0} v + v(t − ωτ, y) = f(v(t − ωτ, y)),  ∆_{m_0} := Σ_{j=1}^d (m_j^0)² ∂²_{y_j},

and if v solves (8), then u is a solution of (4) with spatial frequency vector m_0. Linearizing (8) around 0 as in Section 3 and substituting e^{int} ϕ_m(y), we obtain the characteristic values |m ⊙ m_0|² − ω²n² + cos(ωnτ) + i((ω/d)n − sin(ωnτ)). Hence, if ω = d and τ = τ_j, by splitting the above into real and imaginary parts, we have that |n| = 1 and |m ⊙ m_0|² = d². Since m_0 was fixed as a solution of (2) and each m_j ≥ 1, it follows that m_1 = m_2 = · · · = m_d = 1. Let the d-tuple having each entry equal to 1 be denoted by 1.
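The fact that m = 1 is forced can be confirmed by a direct search. The helper below is illustrative (hypothetical name, finite search window); the general case follows since |m ⊙ m_0|² ≥ |m_0|² = d², with equality only when every m_j = 1:

```python
from itertools import product

def frequency_hits(m0, bound=4):
    """All m in {1,...,bound}^d with |m ⊙ m0|^2 = d^2, for m0 a solution of
    the Diophantine equation. The only hit should be m = (1,...,1)."""
    d = len(m0)
    return [m for m in product(range(1, bound + 1), repeat=d)
            if sum((a * b) ** 2 for a, b in zip(m, m0)) == d * d]

assert frequency_hits((1, 2, 2)) == [(1, 1, 1)]         # d = 3
assert frequency_hits((2, 2, 2, 2)) == [(1, 1, 1, 1)]   # d = 4
```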
As in Section 3, we define L_{ω,τ,m_0} : X^{s+2} → X^s by

L_{ω,τ,m_0} v := ω² v_tt + (ω/d) v_t − ∆_{m_0} v + v(· − ωτ, ·).

This discussion gives us:

Proposition 3. For fixed m_0 satisfying (2) and for each j ∈ N,

ker L_{d,τ_j,m_0} = span_R{ cos(t) ϕ_1, sin(t) ϕ_1 }.

Clearly ker L_{d,τ_j,m_0} is two dimensional.
We are now ready to commence the Lyapunov-Schmidt reduction.
5. The Lyapunov-Schmidt reduction: d ≥ 3. Throughout we assume that d ≥ 3, so that we can fix m_0 = (m_1^0, · · · , m_d^0) ∈ N^d satisfying equation (2). Owing to the translation invariance of (8), we can look for a solution of (8) of the form

w(ρ) + v,  w(ρ)(t, y) := ρ(e^{it} + e^{−it}) ϕ_1(y),

where ρ ∈ R is the amplitude and v is in the L² orthogonal complement of the kernel of L_{d,τ_j,m_0}, computed in Proposition 3. Substituting the latter into (8) gives us

(9)   Π_{V^s} [ L_{ω,τ,m_0}(w(ρ) + v) − F(ω, τ, w(ρ) + v) ] = 0,
(10)  Π_{d,τ_j,m_0} [ L_{ω,τ,m_0}(w(ρ) + v) − F(ω, τ, w(ρ) + v) ] = 0,

where V^s is the orthogonal complement of ker L_{d,τ_j,m_0} with respect to the L² norm, and Π_{V^s} : X^s → V^s, Π_{d,τ_j,m_0} : X^s → ker L_{d,τ_j,m_0} denote the relevant projectors. The map F(ω, τ, u)(t, x) := f(u(t − ωτ, x)). (For properties of this Nemytskii operator see Lemma 6.1.) We refer to (9) as the range equation, while (10) is referred to as the bifurcation equation. Let

A_j^±(ρ, ω, τ, v) = N_d ∫_{T^{1+d}} F(ω, τ, w(ρ) + v) e^{±it} ϕ_1(x) dt dx

for an appropriate normalizing constant N_d > 0. In this event, (10) can be written as two equations in the four unknowns ρ, ω, τ, and v:

(11, 12)  ρ [ d² − ω² + cos(ωτ) ± i( ω/d − sin(ωτ) ) ] = A_j^±(ρ, ω, τ, v).

Splitting the above into real and imaginary parts yields

(13)  ρ [ d² − ω² + cos(ωτ) ] = C(ρ, ω, τ, v),
(14)  ρ [ ω/d − sin(ωτ) ] = S(ρ, ω, τ, v),

where

C(ρ, ω, τ, v) = N_d ∫_{T^{1+d}} F(ω, τ, w(ρ) + v) cos(t) ϕ_1(x) dt dx,

and similarly for S but with 'sin' in place of 'cos'. The equations (13), (14) are a system of two equations in four unknowns. We will solve the range equation (9) first.
In order to do this, we need to establish some properties of the operators L ω,τ,m 0 in the next section.
6. The range equation: d ≥ 3. Throughout we assume that d ≥ 3, so that we can fix m_0 = (m_1^0, · · · , m_d^0) ∈ N^d satisfying equation (2). First we will need some technical lemmas concerning the regularity of the nonlinear terms in (4) and (8).

Lemma 6.1. Suppose that s > (1 + d)/2 and f : R → R is C∞. In this event, the substitution operator

F : (0, ∞) × (0, ∞) × H^s(T^{1+d}) → H^s(T^{1+d}),  F(ω, τ, u)(t, x) := f(u(t − ωτ, x)),

is continuous and C¹ in the Fréchet sense in u, with derivative D_u F(ω, τ, u)h = f'(u(· − ωτ, ·)) h(· − ωτ, ·), which varies continuously. If the codomain of F is taken to be the space H^{s−1}(T^{1+d}), then the other partial Fréchet derivatives exist and are given by

D_ω F(ω, τ, u) = −τ f'(u(· − ωτ, ·)) u_t(· − ωτ, ·),  D_τ F(ω, τ, u) = −ω f'(u(· − ωτ, ·)) u_t(· − ωτ, ·),

and vary continuously in (ω, τ, u). If, in addition, f is odd and u ∈ X^s, then the image of F is contained in X^s, the partials in ω and τ lie in X^{s−1}, and X^s is invariant for the partial in u.
Proof. This is similar to the proof of the corresponding Lemma 4.1 in [12] which covers the case d = 1. The argument is standard and exploits approximation arguments, Sobolev embedding, and the Banach algebra property of H s .
The goal of this section is to develop a result analogous to Proposition 3.2 in the 1-dimensional problem [12]. To begin with we introduce some notation.
Let m = (m_1, · · · , m_d) denote an element of N^d and recall the notation |m|² = Σ_{j=1}^d m_j². Given two vectors y, z ∈ N^d, we denote their pointwise product by (y ⊙ z)_j := y_j z_j for j = 1, · · · , d. We define the set

S := { (n, m) ∈ Z × N^d : (n, m) ∉ {(1, 1), (−1, 1)} }.

By Proposition 3, it is clear that S corresponds to the complement of the kernel of L_{ω,τ,m_0}. We define the inverse of the operator L_{ω,τ,m_0} to be the operator L⁻¹_{ω,τ,m_0} given by

L⁻¹_{ω,τ,m_0} f = Σ_{(n,m)∈S} ( f_{nm} / θ(n, m, m_0, ω, τ) ) e^{int} ϕ_m,

with domain { f ∈ X^s : supp f̂ ⊂ S }. Here f̂ denotes the Fourier transform of f, f_{nm} is the Fourier coefficient, and supp f̂ denotes the support of the Fourier transform of f. It will be useful to denote the denominators appearing above by

θ(n, m, m_0, ω, τ) := |m ⊙ m_0|² − ω²n² + cos(ωnτ) + i( (ω/d)n − sin(ωnτ) ).

Then |θ(n, m, m_0, ω, τ)|² = ( |m ⊙ m_0|² − ω²n² + cos(ωnτ) )² + ( (ω/d)n − sin(ωnτ) )².
We would like to show that the operator L⁻¹_{ω,τ,m_0} has a regularizing effect on an appropriate space. In contrast to the one-dimensional case, it will be useful here to partition the indices into "bad sites" and "good sites". To this end, we note that the regularizing property is in particular related to the modulus of the denominators θ above being bounded away from zero, which occurs if either |(ω/d)n| or | |m_0 ⊙ m|² − ω²n² | is large enough. To achieve this, we partition the set S in the following way. Let

A := { (n, m) ∈ S : | |m_0 ⊙ m|² − ω²n² | ≥ 2 or |(ω/d)n| ≥ 2 for all ω ∈ (d − 1/2, d + 1/2) },

and let A^c := S − A. We think of A as the good sites and A^c as the bad sites. In fact, the set A^c is finite.

Lemma 6.2. The set A^c is finite.

Proof. If (n, m) ∈ A^c, then for some ω ∈ (d − 1/2, d + 1/2), we have | |m_0 ⊙ m|² − ω²n² | < 2 and |n| < 2d/(d − 1/2). Note that since m_0 consists only of positive integers, |m|² ≤ |m_0 ⊙ m|² < 2 + ω²n² ≤ 2 + ω² (2d/(d − 1/2))² < ∞. Hence, there are at most finitely many such m and n.
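The finiteness of the bad set is easy to see experimentally. The sketch below enumerates the pairs (n, m) at which both quantities are small for sampled ω ∈ (d − 1/2, d + 1/2); it follows the bounds in the proof rather than the paper's exact definition of A^c, so it should be read as an illustration only:

```python
import math
from itertools import product

def bad_sites(m0, samples=50):
    """Crude enumeration of the bad sites: pairs (n, m) with
    | |m ⊙ m0|^2 - ω^2 n^2 | < 2 and |ωn/d| < 2 for some sampled
    ω in (d - 1/2, d + 1/2). A sketch of the finiteness argument only."""
    d = len(m0)
    bad = set()
    for i in range(1, samples):
        omega = (d - 0.5) + i / samples            # sample ω in (d - 1/2, d + 1/2)
        n_max = int(2 * d / omega) + 1             # |ωn/d| < 2 bounds |n|
        for n in range(-n_max, n_max + 1):
            if abs(omega * n / d) >= 2:
                continue
            m_bound = int(math.sqrt(2 + omega ** 2 * n ** 2)) + 1
            for m in product(range(1, m_bound + 1), repeat=d):
                if abs(sum((a * b) ** 2 for a, b in zip(m, m0)) - omega ** 2 * n ** 2) < 2:
                    bad.add((n, m))
    return bad

B = bad_sites((1, 2, 2))
print(len(B))   # a small finite set, uniformly in ω
```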
With this notation in hand we see that we must show that α(n, m) and β(n, m) are bounded on A.
Therefore β(n, m) = |m|² and the proof is complete.

That having been said, by definition we have V^s = G^s ⊕ B^s, where G^s and B^s consist of the elements of V^s whose Fourier support lies in the good sites A and the bad sites A^c, respectively. Writing v = b + g with b ∈ B^s and g ∈ G^s, we can decompose the range equation (9) accordingly, obtaining on B^s the equation

(19)  L_{ω,τ,m_0} b − Π_{B^s} F(ω, τ, w(ρ) + b + g) = 0,

together with the corresponding equation on G^s. Since L_{ω,τ,m_0} takes two derivatives, while its inverse returns only one on G^s, we instead write the G^s equation in the fixed point form

(18)  g = L⁻¹_{ω,τ,m_0} Π_{G^s} F(ω, τ, w(ρ) + b + g).

In order to use the implicit function theorem to solve equations (18) and (19), we need to know how the operators above vary with respect to parameters.
Lemma 6.3. For (ω, τ) near (d, τ_j), the map (ω, τ) → L⁻¹_{ω,τ,m_0} ∈ L(G^s, G^s) is continuous with respect to the uniform operator topology.

Remark. We are intentionally viewing the codomain as G^s in the above lemma, even though L⁻¹_{ω,τ,m_0} gains a derivative on G^s.
Finally, substituting the local solution found in Proposition 5 into the other range equation (19), we can solve (19) in the three free parameters (ρ, ω, τ).
Proof. By Proposition 5 we have B(d, τ_j, 0, 0) = 0, and by Proposition 6.1, D_b B(d, τ_j, 0, 0) is the operator L_{d,τ_j,m_0} restricted to B^s. By Lemma 6.2, B^s is finite dimensional, and since B^s is a subspace of V^s by construction, L_{d,τ_j,m_0} is invertible on it, since its kernel there is trivial. The implicit function theorem applies and gives us a local solution b(ρ, ω, τ), which is a trigonometric polynomial since B^s is finite dimensional. The other properties follow from Lemma 6.1, the fact that g(0, 0, ω, τ) = 0, and differentiation.
We need a technical result which will ensure that the solution of the range equation (9) obtained in Proposition 7 above is C∞ smooth with respect to the parameters (ρ, ω, τ). We will not go into all of the details here, but we mention that a similar proof can be found in Proposition 5.4 of [11]. The idea is to exploit the fact that the solution of the range equation obtained above is a C∞ function of (t, x), so that one does not need to worry about derivative loss issues, which come up when working on G^s. To shed some light on this technicality, it can be shown that ∂_ω L⁻¹_{ω,τ,m_0} g ∈ G^s for g ∈ G^s (no derivative gain). Consequently, since by Lemma 6.1 the term F(ω, τ, w(ρ) + b + g) 'loses a derivative' when differentiated with respect to ω, it is expected, in particular, that the second order derivative ∂²_ω L⁻¹_{ω,τ,m_0} Π_{G^s} F(ω, τ, w(ρ) + b + g) lies in G^{s−1} (at best) for g ∈ G^s (derivative loss). Hence we cannot expect C∞ smoothness if the solution of the range equation merely lives in G^s for some fixed s > 0, since we would run out of derivatives after a finite number of steps. The crucial point is that the solution of the range equation is in G^s for every s > 0. We need the following lemmas.

Lemma 6.5. Let g ∈ G^s ∩ C∞(T^{1+d}). Then the map (ω, τ) → L⁻¹_{ω,τ,m_0} g ∈ G^s is C^l for every positive integer l.

Proof. The proof is almost identical to Proposition 5.4 of [11]. The idea is to show that the moduli of all of the partial derivatives of the Fourier multipliers 1/θ(n, m, m_0, ω, τ), which appear in the Fourier expansion of L⁻¹_{ω,τ,m_0}, are bounded by N(l)(n² + |m|²)^{N(l)} for a positive integer N(l) depending on the order of differentiation l ∈ N. This can be achieved by using an iterated chain rule formula. (This corresponds to Fact 1 and Fact 2 in Proposition 5.4 of [11].) It is then easily shown, by induction and the dominated convergence theorem for infinite series, that the map (ω, τ) → L⁻¹_{ω,τ,m_0} g ∈ G^s is C^l for all positive integers l, by exploiting the bound N(l)(n² + |m|²)^{N(l)} on the moduli of the partial derivatives of the Fourier multipliers, and the fact that g is in C∞ ∩ G^s, and hence in G^s for each s > 0.
We will first show that the above holds but with D^γ F(ω, τ, u) ∈ H^s (instead of X^s), assuming that f is merely C∞ but not necessarily odd, by induction on |γ| = γ_1 + γ_2. The case |γ| = 1 is granted by Lemma 6.1. Next we assume that for all C∞ functions f : R → R and for all u ∈ X^s ∩ C∞(T^{1+d}), the map (ω, τ) → F(ω, τ, u) ∈ H^s, F(ω, τ, u)(t, x) = f(u(t − ωτ, x)), is C^k for a positive integer k ≥ 1. Now let f : R → R be any C∞ odd function. Then we have

D_τ F(ω, τ, u)(t, x) = −ω f'(u(t − ωτ, x)) u_t(t − ωτ, x).

By the induction hypothesis, for any u ∈ X^s ∩ C∞(T^{1+d}) the map (ω, τ) → f'(u(· − ωτ, ·)) ∈ H^s is C^k. Moreover, if u ∈ X^s ∩ C∞(T^{1+d}), then u_t ∈ X^s ∩ C∞(T^{1+d}), and hence the map (ω, τ) → u_t(t − ωτ, x) ∈ X^s is C^k by the induction hypothesis. This shows that D_τ F(ω, τ, u) is C^k in (ω, τ). It still remains to show that if f is odd, then the latter actually lies in X^s, but this follows from Lemma 6.1. Similar statements hold for the other partial derivative D_ω F(ω, τ, u).

Finally, we have:
Proposition 8. The solution v(ρ, ω, τ) of the range equation (9) is C∞ with respect to the parameters, in the sense that for each s > 0 the map (ρ, ω, τ) → v(ρ, ω, τ) ∈ V^s is C∞.

Proof. Recall that v(ρ, ω, τ) = b(ρ, ω, τ) + g(ρ, b(ρ, ω, τ), ω, τ), where b, g are the solutions of the range equation on B^s (19) and on G^s (18), respectively. Next we recall that g satisfies the implicit equation G(ω, τ, ρ, b, g) = 0, where

G(ω, τ, ρ, b, g) := g − L⁻¹_{ω,τ,m_0} Π_{G^s} F(ω, τ, w(ρ) + b + g).

By Lemmas 6.5 and 6.6, the map G above is C∞ when its argument g is restricted to G^s ∩ C∞(T^{1+d}). Since g(ρ, b, ω, τ) ∈ G^s ∩ C∞(T^{1+d}), it follows by standard arguments that the map (ρ, b, ω, τ) → g(ρ, b, ω, τ) ∈ G^s, defined in a neighborhood of (0, 0, d, τ_j), is C∞. That is, g inherits its smoothness from G. By using the obtained C∞ smoothness of g and similar reasoning, we can show that the solution b(ρ, ω, τ) is also C∞ smooth in the parameters with codomain B^s. Consequently, the desired conclusion holds for each s > 5 + (1 + d)/2. The fact that it also holds for all s > 0 is then clear, since the embedding of H^{s₂} in H^{s₁} for s₂ > s₁ is continuous.
With these technicalities behind us, we can rigorously proceed to solve the bifurcation equations.
8. The direction of bifurcation: d ≥ 3. In this section we prove Theorem 1.2, whose proof is nearly identical to that of the analogous result in the one-dimensional case found in [12]. For the reader's convenience we recall the details below.
Our goal is to determine whether the values of τ occurring in the periodic solutions constructed above satisfy τ > τ_j (in which case the bifurcation is said to be forward) or τ < τ_j (the bifurcation is then said to be backward). Similarly, we wish to know whether ω < d or ω > d. For the remainder of this section we assume that (ρ, ω, τ) is sufficiently close to (0, d, τ_j) to justify our calculations.
A similar calculation yields (28).
Next, we proceed as in Section 5 and look for a solution of (4) in the space X^s of the form u(t, x) = ρ(e^{it} + e^{−it})ϕ_1(x) + v, where ρ ∈ R is the amplitude and v is in the L² orthogonal complement of the kernel of L_{ω_±,τ_j^±}, computed in Proposition 10. It is clear that the remainder of the Lyapunov-Schmidt reduction proceeds as in the case d ≥ 3, so we will be brief with the details in this section.
Recalling that cos(ω_± τ_j^±) = ω_±² − 2 and sin(ω_± τ_j^±) = ω_±/2, this becomes an expression which vanishes if and only if ω_±² = 15/8, which is not the case by virtue of (32). Therefore, by the implicit function theorem, we obtain a solution (ω(ρ), τ(ρ)) ≈ (ω_±, τ_j^±) for ρ ≈ 0, completing the proof of Theorem 1.3.

9.1. The direction of bifurcation: d = 2. Since the function w(ρ) has the same definition as in the case d ≥ 3, and using the same arguments as those presented in the proof of Lemma 8.1, we see immediately that (26), (27), and (28) hold. Once again it follows that the functions τ(ρ) and ω(ρ) are even, so in order to determine the direction of bifurcation we must determine the signs of τ''(0) and ω''(0).
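These trigonometric values can be checked numerically: squaring and adding cos(ωτ) = ω² − 2 and sin(ωτ) = ω/2 yields ω⁴ − (15/4)ω² + 3 = 0, whose roots ω_±² = (15 ± √33)/8 give the two critical frequencies, and neither root equals 15/8. (A sketch; the root formula is derived here from the stated cosine and sine values, not quoted from the paper.)

```python
import math

# Consistency of cos(ωτ) = ω² - 2 with sin(ωτ) = ω/2 requires
# (ω² - 2)² + ω²/4 = 1, i.e. ω⁴ - (15/4)ω² + 3 = 0, a quadratic in ω².
disc = math.sqrt(33)
omega_sq = [(15 + disc) / 8, (15 - disc) / 8]   # the two roots ω_±²

for w2 in omega_sq:
    assert abs(w2 ** 2 - (15 / 4) * w2 + 3) < 1e-12
    w = math.sqrt(w2)
    c, s = w2 - 2, w / 2
    assert abs(c ** 2 + s ** 2 - 1) < 1e-12     # a genuine (cos, sin) pair
    assert abs(w2 - 15 / 8) > 1e-3              # nondegeneracy: ω_±² ≠ 15/8
    tau = math.atan2(s, c) / w                  # one admissible value of the delay
    assert math.isclose(math.cos(w * tau), c) and math.isclose(math.sin(w * tau), s)
print([round(w2, 6) for w2 in omega_sq])
```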