The Maslov and Morse Indices for Sturm-Liouville Systems on the Half-Line

We show that for Sturm-Liouville systems on the half-line $[0,\infty)$, the Morse index can be expressed in terms of the Maslov index and an additional term associated with the boundary conditions at $x = 0$. Relations are given both for the case in which the target Lagrangian subspace is associated with the space of $L^2 ((0,\infty), \mathbb{C}^{n})$ solutions to the Sturm-Liouville system, and for the case in which the target Lagrangian subspace is associated with the space of solutions satisfying the boundary conditions at $x = 0$. In the former case, a formula of H\"ormander's is used to show that the target space can be replaced with the Dirichlet space, along with additional explicit terms. We illustrate our theory by applying it to an eigenvalue problem that arises when the nonlinear Schr\"odinger equation on a star graph is linearized about a half-soliton solution.

With this choice of domain and inner product, $L$ is densely defined, closed, and self-adjoint, so $\sigma(L) \subset \mathbb{R}$.
Our particular interest lies in counting the number of negative eigenvalues of $L$ (i.e., the Morse index). We proceed by relating the Morse index to the Maslov index, which is described in Section 3. We find that the Morse index can be computed in terms of the Maslov index and an additional term associated with the boundary condition at $x = 0$.
As a starting point, we define what we will mean by a Lagrangian subspace of $\mathbb{C}^{2n}$. For comments about working in $\mathbb{C}^{2n}$ rather than $\mathbb{R}^{2n}$, the reader is referred to Remark 1.1 of [10], and the references mentioned in that remark.

Definition 1.1. We say $\ell \subset \mathbb{C}^{2n}$ is a Lagrangian subspace of $\mathbb{C}^{2n}$ if $\ell$ has dimension $n$ and
\[
(Ju, v)_{\mathbb{C}^{2n}} = 0 \tag{1.3}
\]
for all $u, v \in \ell$. Here, $(\cdot, \cdot)_{\mathbb{C}^{2n}}$ denotes the standard inner product on $\mathbb{C}^{2n}$, and $J = \begin{pmatrix} 0 & -I_n \\ I_n & 0 \end{pmatrix}$ denotes the standard symplectic matrix. In addition, we denote by $\Lambda(n)$ the collection of all Lagrangian subspaces of $\mathbb{C}^{2n}$, and we will refer to this as the Lagrangian Grassmannian.
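To make the definition concrete, here is a minimal numerical check (Python with NumPy; the frame with upper block $I$ and Hermitian lower block is our own illustrative choice, not taken from the text):

```python
import numpy as np

n = 3
# Standard symplectic matrix J on C^{2n}
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

rng = np.random.default_rng(0)
# A 2n x n matrix with upper block I and lower block a Hermitian matrix H
# spans an n-dimensional subspace satisfying (J u, v) = 0 for all columns.
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2                    # make H Hermitian
frame = np.vstack([np.eye(n), H])           # 2n x n frame

gram = frame.conj().T @ J @ frame           # entries (J u_j, u_k)
assert np.allclose(gram, 0)                 # condition (1.3)
assert np.linalg.matrix_rank(frame) == n    # dimension n
```

Any frame of this graph form passes both checks, which previews the frame characterization used later in Section 4.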
Any Lagrangian subspace of $\mathbb{C}^{2n}$ can be spanned by a choice of $n$ linearly independent vectors in $\mathbb{C}^{2n}$. We will generally find it convenient to collect these $n$ vectors as the columns of a $2n \times n$ matrix $\mathbf{X}$, which we will refer to as a frame for $\ell$. Moreover, we will often coordinatize our frames as $\mathbf{X} = \binom{X}{Y}$, where $X$ and $Y$ are $n \times n$ matrices. Following [4] (p. 274), we specify a metric on $\Lambda(n)$ in terms of appropriate orthogonal projections. Precisely, let $P_i$ denote the orthogonal projection matrix onto $\ell_i \in \Lambda(n)$ for $i = 1, 2$; i.e., if $\mathbf{X}_i$ denotes a frame for $\ell_i$, then $P_i = \mathbf{X}_i (\mathbf{X}_i^* \mathbf{X}_i)^{-1} \mathbf{X}_i^*$. We take our metric $d$ on $\Lambda(n)$ to be defined by $d(\ell_1, \ell_2) := \|P_1 - P_2\|$, where $\|\cdot\|$ can denote any matrix norm. We will say that a path of Lagrangian subspaces $\ell : I \to \Lambda(n)$ is continuous provided it is continuous under the metric $d$.
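The projection metric is easy to realize numerically. The following sketch (Python/NumPy; the particular frames are hypothetical examples) verifies that $d$ depends only on the subspaces and not on the choice of frames:

```python
import numpy as np

def proj(frame):
    """Orthogonal projection onto the column span of a frame."""
    return frame @ np.linalg.inv(frame.conj().T @ frame) @ frame.conj().T

def d(frame1, frame2):
    """Metric on Lambda(n): any matrix norm of P1 - P2 (spectral norm here)."""
    return np.linalg.norm(proj(frame1) - proj(frame2), 2)

rng = np.random.default_rng(1)
n = 2
X = np.vstack([np.eye(n), np.diag([1.0, -2.0])])   # a frame
M = rng.standard_normal((n, n)) + 3 * np.eye(n)    # invertible change of basis

# The metric sees only the subspace, not the frame:
assert np.isclose(d(X, X @ M), 0.0)

# Distinct planes have positive distance, e.g. Dirichlet (0; I) vs Neumann (I; 0):
dirichlet = np.vstack([np.zeros((n, n)), np.eye(n)])
neumann = np.vstack([np.eye(n), np.zeros((n, n))])
assert np.isclose(d(dirichlet, neumann), 1.0)
```

The frame-independence check reflects the fact that $P_i$ is determined by $\ell_i$ alone.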
Suppose $\ell_1(\cdot), \ell_2(\cdot)$ denote continuous paths of Lagrangian subspaces $\ell_i : I \to \Lambda(n)$, for some parameter interval $I$. The Maslov index associated with these paths, which we will denote $\operatorname{Mas}(\ell_1, \ell_2; I)$, is a count of the number of times the paths $\ell_1(\cdot)$ and $\ell_2(\cdot)$ intersect, counted with both multiplicity and direction. (In this setting, if we let $t_*$ denote the point of intersection (often referred to as a conjugate point), then multiplicity corresponds with the dimension of the intersection $\ell_1(t_*) \cap \ell_2(t_*)$; a precise definition of what we mean in this context by direction will be given in Section 3.)

In order to place our analysis in the usual Hamiltonian framework, we express (1.1) as a first-order system for $y = \binom{y_1}{y_2}$, with $y_1 = \phi$ and $y_2 = P(x)\phi'$. We find
\[
y' = A(x; \lambda) y, \tag{1.4}
\]
where
\[
A(x; \lambda) = \begin{pmatrix} 0 & P(x)^{-1} \\ V(x) - \lambda Q(x) & 0 \end{pmatrix},
\]
which can be expressed in the standard linear Hamiltonian form $J y' = B(x; \lambda) y$, with
\[
B(x; \lambda) = J A(x; \lambda) = \begin{pmatrix} \lambda Q(x) - V(x) & 0 \\ 0 & P(x)^{-1} \end{pmatrix}.
\]
Let $X_1(x; \lambda) \in \mathbb{C}^{2n \times n}$ denote the matrix solution to
\[
J X_1' = B(x; \lambda) X_1, \quad X_1(0; \lambda) = J\alpha^*. \tag{1.5}
\]
We will verify in Section 4 that for each $(x, \lambda) \in [0, \infty) \times \mathbb{R}$, $X_1(x; \lambda)$ is the frame for a Lagrangian subspace of $\mathbb{C}^{2n}$, $\ell_1(x; \lambda)$. Likewise, let $X_2(x; \lambda) \in \mathbb{C}^{2n \times n}$ denote the matrix solution to
\[
J X_2' = B(x; \lambda) X_2, \quad X_2(\cdot; \lambda) \in L^2((0, \infty), \mathbb{C}^{2n}). \tag{1.6}
\]
We will verify in Section 2 that for
\[
\kappa := \inf_{r \in \mathbb{C}^n \setminus \{0\}} \frac{(V_+ r, r)}{(Q_+ r, r)}, \tag{1.7}
\]
we have $\sigma_{\mathrm{ess}}(L) \subset [\kappa, \infty)$, where $\sigma_{\mathrm{ess}}(\cdot)$ denotes essential spectrum, as defined in Section 2. Subsequently, we verify in Section 4 that for each $(x, \lambda) \in [0, \infty) \times (-\infty, \kappa)$, $X_2(x; \lambda)$ is the frame for a Lagrangian subspace of $\mathbb{C}^{2n}$, $\ell_2(x; \lambda)$, and moreover that for any $\lambda \in (-\infty, \kappa)$, the asymptotic space $\ell_2^+(\lambda) := \lim_{x \to \infty} \ell_2(x; \lambda)$ is well-defined and Lagrangian (with convergence in the metric $d$ described above). Finally, we will establish that the map $\ell_2^+ : (-\infty, \kappa) \to \Lambda(n)$ is continuous. There are two different ways in which we can formulate a relation between the Maslov index and the Morse index, depending upon whether we view $x = 0$ as our target or $x = +\infty$ as our target.
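As a sanity check on the first-order formulation, the sketch below (Python/NumPy; the coefficient matrices are arbitrary positive definite stand-ins for $P$, $Q$, $V$, not taken from the text) assembles $A(x; \lambda)$ from the change of variables $y_1 = \phi$, $y_2 = P(x)\phi'$ and confirms that $B = JA$ is Hermitian, as required for the Hamiltonian form $Jy' = By$:

```python
import numpy as np

n, lam = 2, -1.5
rng = np.random.default_rng(2)

def spd(rng, n):
    """Random symmetric positive definite matrix (illustrative stand-in)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

P, Q = spd(rng, n), spd(rng, n)
V = rng.standard_normal((n, n)); V = (V + V.T) / 2   # symmetric potential

J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])
# First-order coefficient: y1' = P^{-1} y2, y2' = (V - lam Q) y1
A = np.block([[np.zeros((n, n)), np.linalg.inv(P)],
              [V - lam * Q, np.zeros((n, n))]])
B = J @ A   # coefficient in the Hamiltonian form J y' = B y

assert np.allclose(B, B.conj().T)   # B is Hermitian, so the system is Hamiltonian
```

Hermitian $B$ is exactly what makes the flow preserve the Lagrangian property of frames.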
We state these results respectively as Theorems 1.1 and 1.2. Prior to these statements, we set some terminology with the following lemma.
In this case, we refer to λ ∞ as boundary inconjugate.

ODE Preliminaries
In this section, we develop preliminary ODE results that will serve as the foundation of our analysis. This development is standard, and follows [18], pp. 779-781 (see, e.g., [2] for similar analyses). We begin by clarifying our terminology.
Definition 2.1. We define the point spectrum of $L$, denoted $\sigma_{\mathrm{pt}}(L)$, as the set
\[
\sigma_{\mathrm{pt}}(L) := \{\lambda \in \mathbb{C} : L\phi = \lambda\phi \text{ for some } \phi \in \mathcal{D}(L) \setminus \{0\}\}.
\]
Elements of the point spectrum will be referred to as eigenvalues. We define the essential spectrum of $L$, denoted $\sigma_{\mathrm{ess}}(L)$, as the values in $\mathbb{C}$ (and so in $\mathbb{R}$, by self-adjointness) that are not in the resolvent set of $L$ and are not isolated eigenvalues of finite multiplicity.
We note that the total spectrum is $\sigma(L) = \sigma_{\mathrm{pt}}(L) \cup \sigma_{\mathrm{ess}}(L)$, and the discrete spectrum is defined as $\sigma_{\mathrm{discrete}}(L) = \sigma(L) \setminus \sigma_{\mathrm{ess}}(L)$. Since our analysis takes place entirely away from the essential spectrum, the eigenvalues we are counting are elements of the discrete spectrum.
If we consider (1.1) as $x \to \infty$, we obtain the asymptotic system
\[
-P_+ \phi'' + V_+ \phi = \lambda Q_+ \phi. \tag{2.1}
\]
For operators such as $L$ posed on $\mathbb{R}$, it's well known that the essential spectrum is entirely determined by the associated asymptotic problems at $\pm\infty$ (see, for example, [5,11]). As we will verify at the end of this section, it's straightforward to show that a similar result holds true in the current setting. In particular, if we look for solutions of (2.1) of the form $\phi(x) = e^{ikx} r$, for some scalar constant $k \in \mathbb{R}$ and (non-zero) constant vector $r \in \mathbb{C}^n$, then the essential spectrum will be confined to the allowable values of $\lambda$. For (2.1), we find $(k^2 P_+ + V_+) r = \lambda Q_+ r$, and upon taking an inner product with $r$ we see that $k^2 (P_+ r, r) + (V_+ r, r) = \lambda (Q_+ r, r)$.
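Numerically, $\kappa$ is the smallest eigenvalue of the matrix pencil $(V_+, Q_+)$, and the dispersion relation above forces every candidate essential-spectrum value to sit at or above it. A sketch (Python with SciPy; the matrices are illustrative stand-ins for $P_+$, $Q_+$, $V_+$):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 3

def spd(rng, n):
    """Random symmetric positive definite matrix (illustrative stand-in)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Pp, Qp = spd(rng, n), spd(rng, n)
Vp = rng.standard_normal((n, n)); Vp = (Vp + Vp.T) / 2

# kappa = inf_r (V_+ r, r)/(Q_+ r, r): smallest eigenvalue of V_+ r = kappa Q_+ r.
kappa = eigh(Vp, Qp, eigvals_only=True)[0]

# Every value lambda = (k^2 (P_+ r, r) + (V_+ r, r)) / (Q_+ r, r) from the
# dispersion relation satisfies lambda >= kappa, since P_+ is positive definite.
for _ in range(100):
    r = rng.standard_normal(n)
    k = rng.standard_normal()
    lam = (k**2 * (r @ Pp @ r) + r @ Vp @ r) / (r @ Qp @ r)
    assert lam >= kappa - 1e-10
```

This is the numerical content of the inclusion $\sigma_{\mathrm{ess}}(L) \subset [\kappa, \infty)$.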
Since $P_+$ and $Q_+$ are positive definite, we see that for all $k \in \mathbb{R}$,
\[
\lambda = \frac{k^2 (P_+ r, r) + (V_+ r, r)}{(Q_+ r, r)} \ge \kappa,
\]
and consequently $\sigma_{\mathrm{ess}}(L) \subset [\kappa, \infty)$. In order to describe the Lagrangian subspaces $\ell_2(x; \lambda)$, we need to characterize the solutions of (1.6) in $L^2((0, \infty), \mathbb{C}^{2n})$. As a starting point for this characterization, we fix any $\lambda < \kappa$ and look for solutions of (2.1) of the form $\phi(x; \lambda) = e^{\mu x} r$, where in this case $\mu$ is a scalar function of $\lambda$, and $r$ is a vector function of $\lambda$ (in $\mathbb{C}^n$). Computing directly, we obtain the relation $(-\mu^2 P_+ + V_+ - \lambda Q_+) r = 0$, which we can rearrange as $P_+^{-1}(V_+ - \lambda Q_+) r = \mu^2 r$. Since $P_+$ is positive definite, we can work with the inner product
\[
(r, s)_+ := (P_+ r, s)_{\mathbb{C}^n}, \tag{2.2}
\]
and it's clear that for $\lambda \in \mathbb{R}$, the operator $P_+^{-1}(V_+ - \lambda Q_+)$ is self-adjoint with respect to this inner product, and moreover positive definite for $\lambda < \kappa$. We conclude that for $\lambda < \kappa$, the eigenvalues $\mu^2$ will be positive real values, and that the associated eigenvectors can be chosen to be orthonormal with respect to the inner product (2.2). For each of the $n$ values of $\mu^2$ (counted with multiplicity), we can associate the two values $\pm\sqrt{\mu^2}$. By a choice of labeling, we can split these values into $n$ negative values $\{\mu_k(\lambda)\}_{k=1}^n$ and $n$ positive values $\{\mu_k(\lambda)\}_{k=n+1}^{2n}$, with the correspondence (again, by labeling convention) $\mu_{2n+1-k} = -\mu_k$ for $k = 1, 2, \ldots, n$. For $k = 1, 2, \ldots, n$, we denote by $r_k$ the eigenvector of $P_+^{-1}(V_+ - \lambda Q_+)$ with associated eigenvalue $\mu_k^2 = \mu_{2n+1-k}^2$; i.e.,
\[
P_+^{-1}(V_+ - \lambda Q_+) r_k = \mu_k^2 r_k.
\]
Recalling (1.4), we note that under our asymptotic assumptions on $P(x)$, $Q(x)$, and $V(x)$, the limit $A_+(\lambda) := \lim_{x \to \infty} A(x; \lambda)$ is well-defined. The values $\{\mu_k\}_{k=1}^{2n}$ described above comprise a labeling of the eigenvalues of $A_+(\lambda)$, and each of these eigenvalues is semi-simple, so we can associate them with a corresponding choice of eigenvectors. We see that for $k = 1, 2, \ldots, n$, the vector $\binom{r_k}{\mu_k P_+ r_k}$ is an eigenvector of $A_+(\lambda)$ with eigenvalue $\mu_k$, and $\binom{r_k}{-\mu_k P_+ r_k}$ is an eigenvector with eigenvalue $-\mu_k = \mu_{2n+1-k}$. Setting
\[
R(\lambda) := (r_1(\lambda) \;\; r_2(\lambda) \;\; \cdots \;\; r_n(\lambda)), \tag{2.3}
\]
\[
D(\lambda) := \operatorname{diag}(\mu_1(\lambda), \mu_2(\lambda), \ldots, \mu_n(\lambda)), \tag{2.4}
\]
we can express a frame for the eigenspace of $A_+(\lambda)$ associated with negative eigenvalues as
\[
X_2^+ = \begin{pmatrix} R \\ P_+ R D \end{pmatrix},
\]
and likewise we can express a frame for the eigenspace of $A_+(\lambda)$ associated with positive eigenvalues as
\[
\tilde{X}_2^+ = \begin{pmatrix} R \\ -P_+ R D \end{pmatrix}.
\]
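The asymptotic construction can be reproduced numerically. The sketch below (Python with SciPy; the matrices are illustrative stand-ins) computes the values $\mu_k$ and $P_+$-orthonormal eigenvectors $r_k$ for a value $\lambda < \kappa$, assembles the frame $X_2^+$, and checks both the eigenspace relation for $A_+(\lambda)$ and the Lagrangian condition $(X_2^+)^* J X_2^+ = 0$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n = 3

def spd(rng, n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Pp, Qp = spd(rng, n), spd(rng, n)
Vp = rng.standard_normal((n, n)); Vp = (Vp + Vp.T) / 2

kappa = eigh(Vp, Qp, eigvals_only=True)[0]
lam = kappa - 2.0                       # any lambda < kappa

# Solve (V_+ - lam Q_+) r = mu^2 P_+ r: mu^2 > 0, and SciPy returns
# eigenvectors normalized so that R* P_+ R = I.
mu2, R = eigh(Vp - lam * Qp, Pp)
assert np.all(mu2 > 0)
assert np.allclose(R.conj().T @ Pp @ R, np.eye(n))

D = np.diag(-np.sqrt(mu2))              # the n negative values mu_1..mu_n
X2p = np.vstack([R, Pp @ R @ D])        # frame for the decaying eigenspace

# Columns of X2p are eigenvectors of A_+(lam) with (negative) eigenvalues D:
Ap = np.block([[np.zeros((n, n)), np.linalg.inv(Pp)],
               [Vp - lam * Qp, np.zeros((n, n))]])
assert np.allclose(Ap @ X2p, X2p @ D)

J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])
assert np.allclose(X2p.conj().T @ J @ X2p, 0)   # X2p is Lagrangian
```

The final assertion is the computation used in Section 4 to show that $\ell_2^+(\lambda)$ is Lagrangian, via the normalization $R^* P_+ R = I$.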
Proof. For any $\lambda < \kappa$, we follow [18] and write (1.4) in the form (2.5). We can now fix a particular index $k \in \{1, 2, \ldots, n\}$ and look for solutions to (2.5) of the form $y(x; \lambda) = e^{\mu_k(\lambda) x} z(x; \lambda)$. Based on $\eta$, let $\eta_1, \eta_2 \in \mathbb{R}^+$ satisfy $0 < \eta_1 < \eta_2 < \eta$. Then there exists a neighborhood of $\lambda$ on which we can define a continuous projector $P_k(\lambda)$ onto the direct sum of all eigenspaces of $A_+(\lambda)$ with eigenvalues $\tilde\mu$ satisfying $\tilde\mu < \mu_k - \eta_1$, and likewise a projector $Q_k(\lambda) = I - P_k(\lambda)$ projecting onto the direct sum of the remaining eigenspaces of $A_+(\lambda)$. For some fixed value $M > 0$ taken sufficiently large, we will look for solutions to (2.6) of the form (2.7). We proceed by contraction mapping, defining an operator $T$ whose action $Tz$ is the right-hand side of (2.7). For this, we use the following fact, which is immediate from the definitions of $P_k$ and $Q_k$: there exist constants $C_1$ and $C_2$ so that (2.8) holds. We check that $T$ is a contraction on the space $L^\infty((M, \infty), \mathbb{C}^{2n})$. To see this, we note that given any $z, w \in L^\infty((M, \infty), \mathbb{C}^{2n})$, there exist constants $C_3$ and $C_4$ bounding the corresponding integral terms. Combining terms, we see that the resulting estimate holds for some constant $C_5$, from which it's clear that by taking $M$ sufficiently large, we can ensure that we have a contraction. Invariance of $T$ on $L^\infty((M, \infty), \mathbb{C}^{2n})$ follows similarly, and we conclude that there exists a unique $z \in L^\infty((M, \infty), \mathbb{C}^{2n})$ satisfying (2.7). Upon direct differentiation of (2.7), we see that $z$ solves (2.6). Solutions to (2.6) are absolutely continuous, so in fact $z \in AC_{\mathrm{loc}}([M, \infty), \mathbb{C}^{2n})$. But then we can continue $z$ from $M$ back to 0 by standard ODE continuation, so that we have $z \in AC_{\mathrm{loc}}([0, \infty), \mathbb{C}^{2n})$.
We can now substitute z back into (2.7) to obtain the asymptotic estimates we're after. Proceeding similarly as in our verification that T is a contraction, we find that Finally, differentiability in λ is obtained by differentiating (2.7) with respect to λ and proceeding with a similar argument for the resulting integral equation.
(The verification that $M(\lambda)$ is independent of $\xi$ proceeds almost precisely as the verification that $X_1(\xi; \lambda)$ and $X_2(\xi; \lambda)$ are frames for Lagrangian subspaces.) According to Lemma 2.2 in [10], for $\lambda < \kappa$, $M(\lambda)$ exists if and only if the Lagrangian subspaces $\ell_1(\xi; \lambda)$ and $\ell_2(\xi; \lambda)$ do not intersect, and these Lagrangian subspaces intersect if and only if $\lambda$ is an eigenvalue of $L$ (i.e., an element of the point spectrum). Moreover, for $\lambda < \kappa$, the frames $X_1(\xi; \lambda)$ and $X_2(\xi; \lambda)$ are analytic in $\lambda$ (see, e.g., Theorem 2.1 in [17]; this can also be seen with an approach essentially identical to our proof of Lemma 2.1). It follows that $M(\lambda)$ is meromorphic in $\lambda < \kappa$, and so there can be no accumulation of eigenvalues on this interval. This allows us to conclude that for $\lambda < \kappa$, $M(\lambda)$ can only fail to exist if $\lambda \in \sigma_{\mathrm{discrete}}(L)$.
In the case that $M(\lambda)$ exists, it can be shown (e.g., as in the proof of Proposition 7.1 in [18]) that there exist constants $C(\lambda) > 0$ and $c(\lambda) > 0$ for which the corresponding exponential decay estimates hold. We can conclude that for any $\lambda < \kappa$ that is not an eigenvalue of $L$, the resolvent map $(L - \lambda I)^{-1}$ is well-defined and bounded. Although it's not required for the current analysis, we can also readily verify that in fact $\sigma_{\mathrm{ess}}(L) = [\kappa, \infty)$. In order to see this, we first note that for any $\lambda \ge \kappa$, the matrix $P_+^{-1}(V_+ - \lambda Q_+)$ will have one or more non-positive eigenvalues. It follows that $A_+(\lambda)$ will have two or more eigenvalues with zero real part. The proof of Lemma 2.1 proceeds essentially unchanged in this case, and we see that for $\lambda \ge \kappa$ the space of $L^2((0, \infty), \mathbb{C}^n)$ solutions of $L\phi = \lambda\phi$ has dimension less than $n$. It follows immediately from Theorem 11.4.c of [17] that $\lambda \in \sigma_{\mathrm{ess}}(L)$ in these cases.

The Maslov Index
Our framework for computing the Maslov index is adapted from Section 2 of [10], and we briefly sketch the main ideas here. Given any pair of Lagrangian subspaces $\ell_1$ and $\ell_2$, with respective frames $\mathbf{X}_1 = \binom{X_1}{Y_1}$ and $\mathbf{X}_2 = \binom{X_2}{Y_2}$, we consider the matrix
\[
\tilde{W} := -(X_1 + iY_1)(X_1 - iY_1)^{-1}(X_2 - iY_2)(X_2 + iY_2)^{-1}. \tag{3.1}
\]
In [10], the authors establish: (1) the inverses appearing in (3.1) exist; (2) $\tilde{W}$ is independent of the specific frames $\mathbf{X}_1$ and $\mathbf{X}_2$ (as long as these are indeed frames for $\ell_1$ and $\ell_2$); (3) $\tilde{W}$ is unitary; and (4) the identity
\[
\dim(\ell_1 \cap \ell_2) = \dim(\ker(\tilde{W} + I)). \tag{3.2}
\]
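Properties (1)-(4) are easy to probe numerically. The sketch below (Python/NumPy) assumes (3.1) takes the form used in [10], namely $\tilde{W} = -(X_1 + iY_1)(X_1 - iY_1)^{-1}(X_2 - iY_2)(X_2 + iY_2)^{-1}$, and uses random Lagrangian frames of the form $(I; H)$ with $H$ Hermitian (our own illustrative choice):

```python
import numpy as np

def Wtilde(F1, F2):
    """Unitary matrix of (3.1) built from frames F1, F2 (assumed form)."""
    n = F1.shape[1]
    X1, Y1 = F1[:n], F1[n:]
    X2, Y2 = F2[:n], F2[n:]
    return -(X1 + 1j * Y1) @ np.linalg.inv(X1 - 1j * Y1) \
           @ (X2 - 1j * Y2) @ np.linalg.inv(X2 + 1j * Y2)

rng = np.random.default_rng(5)
n = 3

def lagrangian_frame(rng, n):
    H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.vstack([np.eye(n), (H + H.conj().T) / 2])

F1, F2 = lagrangian_frame(rng, n), lagrangian_frame(rng, n)
W = Wtilde(F1, F2)

assert np.allclose(W.conj().T @ W, np.eye(n))   # (3): W is unitary

# (2): frame independence under an invertible change of basis
M = rng.standard_normal((n, n)) + 3 * np.eye(n)
assert np.allclose(Wtilde(F1 @ M, F2), W)

# (4): identical subspaces give W = -I, so -1 has multiplicity n;
# generic distinct subspaces intersect trivially, so no eigenvalue near -1.
assert np.allclose(Wtilde(F1, F1), -np.eye(n))
assert np.min(np.abs(np.linalg.eigvals(W) + 1)) > 1e-8
```

The last pair of assertions is the numerical face of identity (3.2).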
Given two continuous paths of Lagrangian subspaces $\ell_i : [0, 1] \to \Lambda(n)$, $i = 1, 2$, with respective frames $\mathbf{X}_i : [0, 1] \to \mathbb{C}^{2n \times n}$, relation (3.2) allows us to compute the Maslov index $\operatorname{Mas}(\ell_1, \ell_2; [0, 1])$ as a spectral flow through $-1$ for the path of matrices $\tilde{W}(t)$ obtained by evaluating (3.1) along the frames. In [10], the authors provide a rigorous definition of the Maslov index based on the spectral flow developed in [16]. Here, rather, we give only an intuitive discussion. As a starting point, if $-1 \in \sigma(\tilde{W}(t_*))$ for some $t_* \in [0, 1]$, then we refer to $t_*$ as a conjugate point, and its multiplicity is taken to be $\dim(\ell_1(t_*) \cap \ell_2(t_*))$, which by virtue of (3.2) is equivalent to its multiplicity as an eigenvalue of $\tilde{W}(t_*)$. We compute the Maslov index $\operatorname{Mas}(\ell_1, \ell_2; [0, 1])$ by allowing $t$ to run from 0 to 1 and incrementing the index whenever an eigenvalue crosses $-1$ in the counterclockwise direction, while decrementing the index whenever an eigenvalue crosses $-1$ in the clockwise direction. These increments/decrements are counted with multiplicity, so, for example, if a pair of eigenvalues crosses $-1$ together in the counterclockwise direction, then a net amount of $+2$ is added to the index. Regarding behavior at the endpoints, if an eigenvalue of $\tilde{W}$ rotates away from $-1$ in the clockwise direction as $t$ increases from 0, then the Maslov index decrements (according to multiplicity), while if an eigenvalue of $\tilde{W}$ rotates away from $-1$ in the counterclockwise direction as $t$ increases from 0, then the Maslov index does not change. Likewise, if an eigenvalue of $\tilde{W}$ rotates into $-1$ in the counterclockwise direction as $t$ increases to 1, then the Maslov index increments (according to multiplicity), while if an eigenvalue of $\tilde{W}$ rotates into $-1$ in the clockwise direction as $t$ increases to 1, then the Maslov index does not change. Finally, it's possible that an eigenvalue of $\tilde{W}$ will arrive at $-1$ for $t = t_*$ and stay.
In these cases, the Maslov index only increments/decrements upon arrival or departure, and the increments/decrements are determined as for the endpoints (departures determined as with t = 0, arrivals determined as with t = 1).
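To illustrate the counting scheme, consider a toy example of our own (not from the text): for $n = 1$, a line rotating through the plane against the fixed Dirichlet target. The scalar version of the unitary reduces to $\tilde{W}(t) = e^{2it}$, which crosses $-1$ exactly once on $[0, \pi]$, counterclockwise, giving Maslov index $+1$:

```python
import numpy as np

# ell_1(t) spanned by (cos t, sin t); Dirichlet target spanned by (0, 1).
def w(t):
    x1, y1 = np.cos(t), np.sin(t)
    x2, y2 = 0.0, 1.0
    return -(x1 + 1j * y1) / (x1 - 1j * y1) * (x2 - 1j * y2) / (x2 + 1j * y2)

ts = np.linspace(0.0, np.pi, 2001)
phase = np.unwrap(np.angle(np.array([w(t) for t in ts])))

def signed_crossings(phase):
    """Signed count of passages of e^{i*phase} through -1 (phase = pi mod 2pi)."""
    f = lambda x: np.floor((x - np.pi) / (2 * np.pi))
    return int(f(phase[-1]) - f(phase[0]))

assert np.allclose(phase, 2 * ts)      # the scalar unitary is e^{2it}
assert signed_crossings(phase) == 1    # one counterclockwise crossing at t = pi/2
```

The crossing at $t = \pi/2$ is exactly where the rotating line coincides with the Dirichlet line, matching the conjugate-point description above.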
One of the most important features of the Maslov index is homotopy invariance, for which we need to consider continuously varying families of Lagrangian paths. To set some notation, we denote by $\mathcal{P}(I)$ the collection of all paths $\mathcal{L}(t) = (\ell_1(t), \ell_2(t))$, where $\ell_1, \ell_2 : I \to \Lambda(n)$ are continuous paths in the Lagrangian Grassmannian. We say that two paths $\mathcal{L}, \mathcal{M} \in \mathcal{P}(I)$ are homotopic provided there exists a family $H_s$ so that $H_0 = \mathcal{L}$, $H_1 = \mathcal{M}$, and $H_s(t)$ is continuous as a map from $(t, s) \in I \times [0, 1]$ into $\Lambda(n) \times \Lambda(n)$.
The Maslov index has the following properties. Straightforward proofs of these properties appear in [6] for Lagrangian subspaces of R 2n , and proofs in the current setting of Lagrangian subspaces of C 2n are essentially identical.
We define a bilinear form $Q(\ell_1, \ell_2; \ell_0)$.

Remark 3.1. Although we will only utilize the bilinear forms $Q$ in combination, it's worth noting how we should interpret the meaning of an individual form. Given three Lagrangian subspaces $\ell_0$, $\ell_1$, and $\ell_2$, $Q(\ell_1, \ell_2; \ell_0)$ provides information about the relative orientation of these three spaces. For the case $n = 1$, the nature of this information is particularly clear. In that setting, we can associate to any Lagrangian subspace $\ell_j$ (via a frame for $\ell_j$, with the Dirichlet plane as target in (3.1)) a point $\tilde{W}^D_j$ on $S^1$, with distinct subspaces corresponding with distinct points on $S^1$. Given any third Lagrangian plane $\ell_0$ distinct from both $\ell_1$ and $\ell_2$, $\tilde{W}^D_0$ will lie either on the arc going from $\tilde{W}^D_1$ to $\tilde{W}^D_2$ in the clockwise direction or on the arc going from $\tilde{W}^D_1$ to $\tilde{W}^D_2$ in the counterclockwise direction. In the former case, we will have $\operatorname{sgn} Q(\ell_1, \ell_2; \ell_0) = +1$, while in the latter case we will have $\operatorname{sgn} Q(\ell_1, \ell_2; \ell_0) = -1$. Using this observation, we can readily derive Hörmander's formula ((3.5), just below) for the case $n = 1$, and it can subsequently be established that (3.5) is valid for $n > 1$ as well.
In practice, we would often prefer the Dirichlet plane $\ell_D$ as our target (e.g., when the target is Dirichlet, all crossings will necessarily be in the same direction), and so let's check the calculation associated with exchanging a general Lagrangian target space $\ell_G$ for the Dirichlet plane. For notational convenience, we will think of this the other way around, taking $\ell_1 = \ell_D$ and $\ell_2 = \ell_G$ in our general formulation. Following our general development, we assume $\ell_D \cap \ell_G = \{0\}$, and also that $\ell(0)$ intersects neither $\ell_D$ nor $\ell_G$, and likewise that $\ell(1)$ intersects neither $\ell_D$ nor $\ell_G$. Since the analyses of $\ell(0)$ and $\ell(1)$ are the same, we will proceed with each replaced by the general notation $\ell_0$.
Our starting point is to characterize the maps $C$ for which the associated subspace will be a frame for $\ell_G$. We denote the set of all such maps $C$ by $\mathcal{C}$. Next, we must be able to find some $C^{(0)} \in \mathcal{C}$ so that given any $w \in \ell_0$ there will exist $u \in \ell_D$ so that $w = u + C^{(0)} u$.
I.e., we must have that every $w \in \ell_0$ admits such a representation. For $u \in \ell_D$, we can now compute the corresponding elements of $\ell_G$. Since $\mathbf{X}_G = \binom{X_G}{Y_G}$ is a frame for $\ell_G$, we must have that for any other frame there exists an invertible matrix $M \in \mathbb{C}^{n \times n}$ relating the two; likewise, if $\mathbf{X}_0 = \binom{X_0}{Y_0}$ is any frame for $\ell_0$, then any other frame for $\ell_0$ can be expressed analogously. In this way, we arrive at the relation (3.6). First, if $X_G$ is invertible (as will be the case in the current analysis), we can write $\tilde{M} = X_G^{-1} X_0 M$ and substitute accordingly.

Remark 3.2. We note that in the event that $X_0$ is also invertible, we obtain a correspondingly explicit expression.

On the other hand, suppose $X_G$ is not invertible. In this case, we observe that the system (3.6) can be combined into the form (3.9). The advantage of this formulation is simply that we can be sure that every matrix is invertible. We obtain $\tilde{M}$ from the first equation in (3.9), and upon substitution into the second equation in (3.9) and rearranging terms, we observe the following: by assumption, $\ell_0$ does not intersect $\ell_G$, and so the matrix multiplying $M$ on the left (i.e., the entire matrix, including the square brackets) must be invertible. In this way, we arrive at (3.10). We summarize these observations in the following lemma.

Remark 3.3. We will use these considerations in Section 5 to establish Corollary 5.1.

Proof of Theorem 1.1
In this section, we prove Theorem 1.1. Our starting point is to verify that $X_1(0; \lambda)$ and $X_2(x; \lambda)$ are indeed frames for Lagrangian subspaces. According to Proposition 2.1 of [10], a matrix $\mathbf{X} \in \mathbb{C}^{2n \times n}$ is the frame for a Lagrangian subspace if and only if the following two conditions both hold: (1) $\operatorname{rank}(\mathbf{X}) = n$; and (2) $\mathbf{X}^* J \mathbf{X} = 0$.
Finally, we recall from Section 2 that the Lagrangian subspaces $\ell_2(x; \lambda)$ with frames $X_2(x; \lambda)$ can be extended as $x$ tends to infinity to the Lagrangian subspaces $\ell_2^+(\lambda)$ with frames $X_2^+(\lambda) = \binom{R(\lambda)}{P_+ R(\lambda) D(\lambda)}$. Here, $R(\lambda)$ and $D(\lambda)$ are specified respectively in (2.3) and (2.4). In order to verify that $\ell_2^+(\lambda)$ is indeed Lagrangian, we compute
\[
X_2^+(\lambda)^* J X_2^+(\lambda) = -R(\lambda)^* P_+ R(\lambda) D(\lambda) + D(\lambda) R(\lambda)^* P_+ R(\lambda),
\]
where we have observed that $P_+$ and $D(\lambda)$ are self-adjoint. Recalling the normalization identity $R(\lambda)^* P_+ R(\lambda) = I$, we see that $X_2^+(\lambda)^* J X_2^+(\lambda) = 0$ for all $\lambda < \kappa$, from which we can conclude that $\ell_2^+(\lambda)$ is Lagrangian. We proceed now by considering the Maslov box, for which we fix $\lambda_0 < \kappa$ and work with a value $\lambda_\infty$ that will be chosen sufficiently large during the proof, and certainly large enough so that $-\lambda_\infty < \lambda_0$. The Maslov box in this case will refer to the following sequence of four lines, creating a rectangle in the $(\lambda, x)$-plane: we fix $x = 0$ and let $\lambda$ run from $-\lambda_\infty$ to $\lambda_0$ (the bottom shelf); we fix $\lambda = \lambda_0$ and let $x$ run from 0 to $+\infty$ (the right shelf); we fix $x = +\infty$ and let $\lambda$ run from $\lambda_0$ to $-\lambda_\infty$ (the top shelf); and we fix $\lambda = -\lambda_\infty$ and let $x$ run from $+\infty$ to 0 (the left shelf).
For Theorem 1.1, we view the bottom shelf at $x = 0$ as our target, and the Lagrangian subspace we associate with the target is $\ell_1(0; \lambda)$, with frame $X_1(0; \lambda) = J\alpha^*$. Clearly, $\ell_1(0; \lambda)$ does not depend on $\lambda$, and $\lambda$ only appears as an argument for notational consistency. In this case, the evolving Lagrangian subspace is $\ell_2(x; \lambda)$, which we recall corresponds with the space of solutions that decay as $x \to +\infty$. As our frame for $\ell_2(x; \lambda)$, we use the matrix $X_2(x; \lambda)$ constructed in (1.6), and we let $\tilde{W}(x; \lambda)$ denote the associated unitary matrix obtained from (3.1). The Maslov index computed with $\tilde{W}(x; \lambda)$ will detect intersections between $\ell_1(0; \lambda)$ and $\ell_2(x; \lambda)$. For expositional convenience, we consider the sides of the Maslov box in the following order: bottom, top, left, right.

Bottom shelf. Beginning with the bottom shelf, we observe that our Lagrangian subspaces have been constructed in such a way that conjugate points correspond with eigenvalues of $L$, with the multiplicity of $\lambda$ as an eigenvalue of $L$ matching the multiplicity of the intersection. This means that if each conjugate point along the bottom shelf has the same direction, then the Maslov index along the bottom shelf will be (up to a sign) a count of the total number of eigenvalues that $L$ has between $-\lambda_\infty$ and $\lambda_0$. We will show below that as $\lambda$ ranges from $-\lambda_\infty$ toward $\lambda_0$ on the bottom shelf, the conjugate points are all negatively directed, and so the corresponding Maslov index is the negative of this count. In addition, we will show during our discussion of the left shelf that we can choose $\lambda_\infty$ sufficiently large so that $L$ has no eigenvalues on the interval $(-\infty, -\lambda_\infty]$. We will be able to conclude, then, that the Maslov index along the bottom shelf is the negative of a count of the total number of eigenvalues, including multiplicity, that $L$ has below $\lambda_0$; i.e.,
\[
\operatorname{Mas}(\ell_1(0; \cdot), \ell_2(0; \cdot); [-\lambda_\infty, \lambda_0]) = -\operatorname{Mor}(L; \lambda_0). \tag{4.2}
\]
According to Lemma 3.1 of [10] (also Lemma 4.2 of [6]), rotation of the eigenvalues of $\tilde{W}(x; \lambda)$ as $\lambda$ varies, for any fixed $x \in [0, \infty)$, can be determined from the matrix $X_2(x; \lambda)^* J \partial_\lambda X_2(x; \lambda)$ in the following sense: if this matrix is positive definite at some point $(x_0, \lambda_0)$, then as $\lambda$ increases through $\lambda_0$ (with $x = x_0$ fixed), all $n$ eigenvalues of $\tilde{W}(x; \lambda)$ will monotonically rotate in the counterclockwise direction.
For this calculation, we temporarily set $\mathcal{B}(x; \lambda) := X_2(x; \lambda)^* J \partial_\lambda X_2(x; \lambda)$, for which we can compute (with prime denoting differentiation with respect to $x$)
\[
\mathcal{B}'(x; \lambda) = X_2(x; \lambda)^* (\partial_\lambda B(x; \lambda)) X_2(x; \lambda), \qquad \partial_\lambda B(x; \lambda) = \begin{pmatrix} Q(x) & 0 \\ 0 & 0 \end{pmatrix}.
\]
Integrating, we see that
\[
\mathcal{B}(x; \lambda) = -\int_x^\infty X_2(y; \lambda)^* (\partial_\lambda B(y; \lambda)) X_2(y; \lambda) \, dy,
\]
where convergence of the integral is assured by the exponential decay of the elements in our frame $X_2$. This matrix is clearly non-positive (since $Q$ is positive definite), and moreover it cannot have 0 as an eigenvalue, because the associated eigenvector $v \in \mathbb{C}^n$ would necessarily satisfy $X_2(y; \lambda) v = 0$ for all $y \in [x, \infty)$, and this would contradict linear independence of the columns of $X_2(y; \lambda)$ (as solutions of (1.1)). Since $\mathcal{B}(x; \lambda)$ is negative definite, we can conclude that as $\lambda$ increases, the eigenvalues of $\tilde{W}(x; \cdot)$ rotate monotonically clockwise. It follows immediately that (4.2) holds for the bottom shelf.
Top shelf. For the top shelf (obtained in the limit as $x \to +\infty$), we set $\tilde{W}_+(\lambda) := \lim_{x \to +\infty} \tilde{W}(x; \lambda)$, and note that $\tilde{W}_+(\lambda)$ detects intersections between $\ell_1(0; \lambda)$ and $\ell_2^+(\lambda)$. Our frames for these Lagrangian subspaces are explicit, $X_1(0; \lambda) = J\alpha^*$ and $X_2^+(\lambda) = \binom{R(\lambda)}{P_+ R(\lambda) D(\lambda)}$, and we can use these frames to explicitly compute $\operatorname{Mas}(\ell_1(0; \cdot), \ell_2^+(\cdot); [-\lambda_\infty, \lambda_0])$. We observe that the monotonicity that we found along horizontal shelves does not immediately carry over to the top shelf (since that calculation is only valid for $x \in [0, \infty)$). Nonetheless, we can conclude monotonicity along the top shelf in the following way: by continuity of our frames, we know that as $\lambda$ increases along the top shelf, the eigenvalues of $\tilde{W}_+(\lambda)$ cannot rotate in the counterclockwise direction. Moreover, eigenvalues of $\tilde{W}_+(\lambda)$ cannot remain at $-1$ for any interval of $\lambda$ values. In order to clarify this last statement, we observe that the Lagrangian subspaces $\ell_1(0; \lambda)$ and $\ell_2^+(\lambda)$ intersect if and only if $\lambda$ is an eigenvalue for the constant-coefficient equation
\[
-P_+ \phi'' + V_+ \phi = \lambda Q_+ \phi, \quad \alpha_1 \phi(0) + \alpha_2 P(0) \phi'(0) = 0, \quad \phi \in L^2((0, \infty), \mathbb{C}^n). \tag{4.3}
\]
(Due to the appearance of $P(0)$ in the boundary condition rather than $P_+$, this equation may not be self-adjoint, but that doesn't affect this argument.) If $\lambda$ is an eigenvalue of (4.3) that is not isolated from the rest of the spectrum, then it must be in the essential spectrum of (4.3), but by an argument essentially identical to the one given at the end of Section 2, we see that the essential spectrum for (4.3) is confined to the interval $[\kappa, \infty)$, so there can be no interval of eigenvalues below $\kappa$.

Left shelf. For the left shelf, intersections between $\ell_1(0; \lambda)$ and $\ell_2(x; \lambda)$ at some value $x = s$ will correspond with one or more non-trivial solutions to the truncated boundary value problem
\[
-(P\phi')' + V\phi = \lambda Q\phi, \quad x \in (s, \infty), \qquad \alpha_1 \phi(s) + \alpha_2 P(s) \phi'(s) = 0. \tag{4.4}
\]
For this calculation, it's useful to use the projector formulation of our boundary conditions, developed in [1,14] (see also [9] for an implementation of this formulation in circumstances quite similar to those of the current analysis).
Briefly, there exist three orthogonal (and mutually orthogonal) projection matrices $P_D$ (the Dirichlet projection), $P_N$ (the Neumann projection), and $P_R = I - P_D - P_N$ (the Robin projection), and an invertible self-adjoint operator $\Lambda$ acting on the space $P_R \mathbb{C}^n$, such that the boundary condition can be expressed as
\[
P_D \phi(s) = 0, \qquad P_N P(s)\phi'(s) = 0, \qquad P_R P(s)\phi'(s) = P_R \Lambda P_R \phi(s). \tag{4.5}
\]
Moreover, P D can be constructed as the projection onto the kernel of α 2 and P N can be constructed as the projection onto the kernel of α 1 .
We have, then, an estimate in which $C_b$ depends only on the boundary matrices $\alpha_1$ and $\alpha_2$. For $\phi(\cdot; \lambda) \in \mathcal{D}(L_s)$, we can write $\phi(s)$ as an integral, so that the Cauchy-Schwarz inequality leads to a corresponding bound for any $\epsilon > 0$.
Combining these observations, we see that (4.6) leads, for any $\lambda < 0$, to a lower bound on the associated quadratic form. We choose $\epsilon$ so that $\theta_P - C_b \epsilon \ge 0$, from which we conclude a lower bound on $\lambda$. We see that we can choose $\lambda_\infty$ sufficiently large so that $L_s$ has no eigenvalues $\lambda$ on the interval $(-\infty, -\lambda_\infty)$ for any $s \in [0, \infty)$. Consequently, there can be no conjugate points $s \in [0, \infty)$ along the left shelf at $\lambda = -\lambda_\infty$. We note that this analysis leaves open the possibility that the asymptotic point at $+\infty$ is conjugate. In the event that it is conjugate, $\lambda_\infty$ can be increased slightly to break the conjugacy. This is an immediate consequence of monotonicity along the top shelf, and serves to establish Lemma 1.1.
Combining these observations, and using catenation of paths along with homotopy invariance, we find that the sum of the Maslov indices over the four shelves satisfies
\[
\text{bottom shelf} + \text{right shelf} + \text{top shelf} + \text{left shelf} = 0.
\]

Proof of Theorem 1.2

We established in our proof of Theorem 1.1 that $\ell_2(x; \lambda)$ is Lagrangian for all $(x, \lambda) \in [0, \infty) \times (-\infty, \kappa)$, and we can proceed similarly to verify that the same is true for $\ell_1(x; \lambda)$. We omit the details. As with our proof of Theorem 1.1, we work with the Maslov box, but in this case we place the top shelf at $x = x_\infty$, for $x_\infty$ chosen sufficiently large during the analysis. We proceed in this way because the asymptotic Lagrangian subspace $\lim_{x \to +\infty} \ell_1(x; \lambda)$ (which is well-defined for each $\lambda < \kappa$) is not generally continuous as a function of $\lambda$. In particular, it is discontinuous at each eigenvalue of $L$ (see [7] for a discussion in the context of Schrödinger operators on $\mathbb{R}$). We will use the Maslov index to detect intersections between our evolving Lagrangian subspace $\ell_1(x; \lambda)$ and our target Lagrangian subspace $\ell_2(x_\infty; \lambda)$. Redefining $\tilde{W}$ for this section, we now take $\tilde{W}(x; \lambda)$ to be the unitary matrix obtained from (3.1) with the frames $X_1(x; \lambda)$ and $X_2(x_\infty; \lambda)$. For expositional convenience, we consider the sides of the Maslov box in the following order: left, top, bottom/right (together).

Left shelf. In this case, conjugate points $x = s$ along the left shelf correspond with values $s$ for which $\lambda = -\lambda_\infty$ is an eigenvalue for the ODE (5.2) below, where for notational brevity we are suppressing dependence of $\phi$ on $\lambda$. By taking $x_\infty$ sufficiently large, we can make $X_2(x_\infty; \lambda)$ as close as we like to the invertible matrix $R(\lambda)$, so that in this case $X_2(x_\infty; \lambda)$ is also invertible, and we can express the resulting boundary condition in terms of the matrix $P_+ R(\lambda) D(\lambda) R(\lambda)^* P_+$, where the error on this approximation is $O(e^{-\eta x_\infty})$ for some $\eta > 0$. The matrix $P_+ R(\lambda) D(\lambda) R(\lambda)^* P_+$ is self-adjoint, and since the entries of $D(\lambda)$ are the negative eigenvalues of $A_+(\lambda)$, it is negative definite. Also, the entries of $D(\lambda)$ approach $-\infty$ as $\lambda$ approaches $-\infty$, so the eigenvalues of $P_+ R(\lambda) D(\lambda) R(\lambda)^* P_+$ approach $-\infty$ as $\lambda$ approaches $-\infty$.
Let $\phi(x; \lambda)$ denote a solution to (5.2). Upon taking an $L^2((0, s), \mathbb{C}^n)$ inner product of $\phi$ with (5.2), we obtain an identity whose first integral we evaluate by integrating by parts.
Combining these observations, we see that the boundary terms can be expressed accordingly. For $s$ sufficiently small, $\phi(s) = \phi(0) + O(s)$, so that we approximately obtain an expression which is positive for $x_\infty$ and $\lambda_\infty$ both chosen sufficiently large (by the discussion following (5.4)). We conclude that there exists $s_0 > 0$ sufficiently small so that there are no conjugate points for $0 < s \le s_0$. Similarly as in the proof of Theorem 1.1, we have a boundary estimate; for $\lambda < 0$, this allows us to write (still for $0 < s \le s_0$) a lower bound on $\lambda$, from which we can immediately conclude that there are no crossings for all $0 < s \le s_0$. For $s > s_0$, we scale the independent variable by setting $\xi = x/s$ and $\varphi(\xi) = \phi(\xi s)$. Our system becomes
\[
-(P(\xi s) \varphi')' + s^2 V(\xi s) \varphi = s^2 \lambda Q(\xi s) \varphi, \quad \xi \in (0, 1). \tag{5.6}
\]
Suppose $\varphi$ solves (5.6) for $\lambda = -\lambda_\infty$. Taking an inner product of $\varphi$ with (5.6), we integrate by parts in the first integral, obtaining boundary terms at $\xi = 0$ and $\xi = 1$. For the boundary term at $\xi = 1$, the required inequality follows, for $x_\infty$ sufficiently large, from our prior discussion of $P_+ R(\lambda) D(\lambda) R(\lambda)^* P_+$. For the boundary term at $\xi = 0$, we have $(P(0)\varphi'(0), \varphi(0)) = s (P_R \Lambda P_R \varphi(0), \varphi(0))$.
According to Lemma 1.3.8 in [1], we can compute an upper bound on this last term. For $\lambda < 0$, this allows us to bound the associated form from below. For each $s \in [s_0, x_\infty]$, we choose $\epsilon = \epsilon_s = \theta_P / (s C_b)$. This ensures that the gradient terms are controlled, which leads immediately to a lower bound on $\lambda$. Combining these observations, we can conclude that for any value $\lambda_\infty$ chosen sufficiently large, we will have no crossings along the left shelf. Similarly as in the proof of Theorem 1.1, this leaves open the possibility of a conjugate point at $(0, -\lambda_\infty)$, corresponding with an intersection between $\ell_1(0; -\lambda_\infty)$ and $\ell_2(x_\infty; -\lambda_\infty)$. Precisely as in the proof of Theorem 1.1, we can increase $\lambda_\infty$ (if necessary) to ensure that $\ell_1(0; -\lambda_\infty) \cap \ell_2^+(-\lambda_\infty) = \{0\}$, and then we can choose $x_\infty$ sufficiently large to ensure that this implies $\ell_1(0; -\lambda_\infty) \cap \ell_2(x_\infty; -\lambda_\infty) = \{0\}$. For these choices of $x_\infty$ and $\lambda_\infty$, there are no conjugate points along the left shelf.

Top shelf. In the case of Theorem 1.2, $\tilde{W}(x; \lambda)$ has been constructed so that conjugate points along the top shelf correspond precisely with eigenvalues of $L$. In order to verify that the Maslov index along the top shelf corresponds with a count of eigenvalues, we need to check that the eigenvalues of $\tilde{W}(x; \lambda)$ rotate monotonically counterclockwise as $\lambda$ decreases. In this case, both $X_1$ and $X_2$ depend on $\lambda$, so according to Lemma 3.1 of [10] (also Lemma 4.2 of [6]), rotation of the eigenvalues of $\tilde{W}(x; \lambda)$, for any $x \in [0, \infty)$, can be determined from the matrices $-X_1(x; \lambda)^* J \partial_\lambda X_1(x; \lambda)$ and $X_2(x_\infty; \lambda)^* J \partial_\lambda X_2(x_\infty; \lambda)$ in the following sense: if both of these matrices are non-positive, and at least one is negative definite at some point $(x_0, \lambda_0)$, then as $\lambda$ increases through $\lambda_0$ (with $x = x_0$ fixed), all $n$ eigenvalues of $\tilde{W}(x; \lambda)$ will monotonically rotate in the clockwise direction.
We have already established during the proof of Theorem 1.1 that the matrix $X_2(x_\infty; \lambda)^* J \partial_\lambda X_2(x_\infty; \lambda)$ is negative definite, so we only need to check that $-X_1(x; \lambda)^* J \partial_\lambda X_1(x; \lambda)$ is non-positive. In fact, this latter matrix is negative definite as well, and since the proof is essentially identical to the proof for $X_2(x_\infty; \lambda)^* J \partial_\lambda X_2(x_\infty; \lambda)$, we omit the details. We can conclude, similarly as for the bottom shelf in the proof of Theorem 1.1, that the Maslov index along the top shelf counts the eigenvalues of $L$.

Bottom and right shelves. We will need to compute Maslov indices along the bottom and right shelves, so it's natural to address the two of them together. Our approach is based substantially on the proofs of Claims 4.11 and 4.12 in [7].
As a starting point, we introduce a new unitary matrix W̌(x; λ), which detects intersections between ℓ_1(x; λ) and the asymptotic Lagrangian subspace ℓ_2^+(λ). Likewise, we specify the asymptotic matrix W̌_+(λ), which is well-defined for each λ < κ, but not generally continuous as a function of λ. (See the appendix of [7] for a discussion of this discontinuity.) Since R(λ) and S(λ) can be written down explicitly, it is much more convenient to work with W̌(x; λ) than it is to work with W̃(x; λ). In light of this, we will show that our calculations can be carried out entirely in terms of the former matrix. In particular, we have the following claim.

Proof. First, it is clear that we have the asserted relation. Recalling from Lemma 2.1 that, for some η̃ > 0, the relevant frames converge at exponential rate in x, we see that by choosing x_∞ sufficiently large, we can ensure that the eigenvalues of W̌(x; λ) are as close as we like to the eigenvalues of W̃(x; λ) for all (x, λ) ∈ [0, ∞) × [−λ_∞, λ_0]. (Here, exponential decay in x allows us to compactify [0, ∞) with the usual one-point compactification.) In particular, we can ensure that no eigenvalue of W̃(x; λ_0) can complete a loop of S^1 unless a corresponding eigenvalue of W̌(x; λ_0) completes a loop of S^1, with the converse holding as well. Following our discussion of the left shelf, we have chosen λ_∞ so that ℓ_1(0; −λ_∞) ∩ ℓ_2^+(−λ_∞) = {0}, and x_∞ sufficiently large to ensure that this implies ℓ_1(0; −λ_∞) ∩ ℓ_2(x_∞; −λ_∞) = {0}. With these choices, we see that W̃(0; −λ_∞) does not have −1 as an eigenvalue, and also W̌(0; −λ_∞) does not have −1 as an eigenvalue.

Case 1. First, suppose λ_0 is not an eigenvalue of L. Then W̃(x_∞; λ_0) does not have −1 as an eigenvalue, and also W̌_+(λ_0) does not have −1 as an eigenvalue. By continuity, we can take x_∞ large enough so that W̌(x_∞; λ_0) does not have −1 as an eigenvalue, and additionally so that W̌(x; λ_0) does not have −1 as an eigenvalue for any x ≥ x_∞. Since the eigenvalues of W̃ and W̌ remain uniformly close, the total spectral flow associated with the bottom and right shelves for W̃(x; λ) must be the same as for W̌(x; λ).
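The intersection-detecting mechanism can be illustrated concretely. The sketch below uses one common normalization (which may differ from the conventions of [7] and [10] by signs or conjugation): for a Lagrangian frame (X; Y), the matrix (X + iY)(X − iY)^{−1} is unitary, and for two such frames the multiplicity of −1 as an eigenvalue of W = −U_1 U_2^{−1} equals dim(ℓ_1 ∩ ℓ_2).

```python
import numpy as np

def frame_unitary(X, Y):
    # For a Lagrangian frame (X; Y) (so that X^*Y is Hermitian), the
    # matrix (X + iY)(X - iY)^{-1} is unitary.
    return (X + 1j * Y) @ np.linalg.inv(X - 1j * Y)

def detection_matrix(frame1, frame2):
    # W = -U1 U2^{-1}: with this (assumed) normalization, the multiplicity
    # of -1 as an eigenvalue of W equals dim(l1 ∩ l2).
    U1 = frame_unitary(*frame1)
    U2 = frame_unitary(*frame2)
    return -U1 @ np.linalg.inv(U2)

n = 2
# Dirichlet plane in C^4: frame (0; I).
X1, Y1 = np.zeros((n, n)), np.eye(n)
# A second Lagrangian plane meeting the Dirichlet plane in exactly one
# dimension: span{(0,0,1,0)^t, (0,1,0,0)^t}.
X2 = np.array([[0.0, 0.0], [0.0, 1.0]])
Y2 = np.array([[1.0, 0.0], [0.0, 0.0]])

W = detection_matrix((X1, Y1), (X2, Y2))
evals = np.linalg.eigvals(W)
mult = int(np.sum(np.abs(evals + 1.0) < 1e-10))  # multiplicity of -1
```

Here the second plane meets the Dirichlet plane in exactly one dimension, and the computed multiplicity is accordingly 1.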
Specifically, we have the corresponding equality of Maslov indices, and the claim for Case 1 follows immediately from the specification that x_∞ is taken large enough so that ℓ_1(x; λ_0) and ℓ_2^+(λ_0) do not intersect for x ≥ x_∞.

Case 2. Next, suppose λ_0 is an eigenvalue of L. Then W̃(x_∞; λ_0) certainly has −1 as an eigenvalue, with multiplicity corresponding to the multiplicity of λ_0 as an eigenvalue of L. Likewise, W̌_+(λ_0) will have −1 as an eigenvalue, again with multiplicity corresponding to the multiplicity of λ_0 as an eigenvalue of L. As in the case in which λ_0 is not an eigenvalue, we can choose x_∞ large enough so that for x ≥ x_∞ the eigenvalues of W̌(x; λ_0) that do not approach −1 as x → +∞ remain bounded away from −1.
We now proceed precisely as in Case 1 for the eigenvalues of W̃(x_∞; λ_0) other than −1, and we note that an eigenvalue of W̃(x; λ_0) will approach −1 as x → x_∞ if and only if a corresponding eigenvalue of W̌(x; λ_0) approaches −1 as x → +∞. Moreover, despite possible transient crossings, the net number of crossings associated with these eigenvalues must coincide, because otherwise an eigenvalue of W̃(x; λ) would complete a full loop of S^1 without a corresponding eigenvalue of W̌(x; λ) also completing such a loop (or vice versa).

Changing the Target
In this section, we verify that under certain conditions the target space ℓ_2^+(λ_0) in the calculation Mas(ℓ_1(·; λ_0), ℓ_2^+(λ_0); [0, ∞)) can be replaced with the Dirichlet plane ℓ_D. As noted earlier, one advantage of this replacement is that for a Dirichlet target the rotation of the eigenvalues of W̃(x; λ) as x increases is monotonically clockwise. (This is straightforward to show, e.g., with the methods of [10].) The key observation we take advantage of here is that if λ_0 is not an eigenvalue of L, then we explicitly know both ℓ_1(0; λ_0) and the limit ℓ_1^+(λ_0), which can be identified with ℓ̃_2^+(λ_0), where ℓ̃_2^+(λ_0) denotes the Lagrangian subspace associated with solutions that grow as x tends to positive infinity. This allows us to compute both sgn Q(ℓ_D, ℓ_2^+(λ_0); ℓ_1(0; λ_0)) and sgn Q(ℓ_D, ℓ_2^+(λ_0); ℓ_1^+(λ_0)), and consequently we can compute the Hörmander index s(ℓ_D, ℓ_2^+(λ_0); ℓ_1(0; λ_0), ℓ_1^+(λ_0)). In order to apply our development from Section 3.1, we need five conditions to hold. We will check below that Items (iii), (iv), and (v) hold under our general assumptions, and we will take Items (i) and (ii) to be additional assumptions for this section.
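Schematically, the mechanism for changing the target can be recorded as follows; the sign and argument-ordering conventions for the Hörmander index vary across references, so the display below is a sketch under an assumed convention, with the precise normalization fixed in Section 3.1.

```latex
% Sketch (convention-dependent): for the path \ell_1(\cdot;\lambda_0) on
% [0,\infty), with \ell_1(x;\lambda_0) \to \ell_1^+(\lambda_0) as x \to +\infty,
\[
\operatorname{Mas}\bigl(\ell_1(\cdot;\lambda_0), \ell_D; [0,\infty)\bigr)
  - \operatorname{Mas}\bigl(\ell_1(\cdot;\lambda_0), \ell_2^+(\lambda_0); [0,\infty)\bigr)
  = s\bigl(\ell_D, \ell_2^+(\lambda_0);\, \ell_1(0;\lambda_0), \ell_1^+(\lambda_0)\bigr),
\]
% with the right-hand side computable from signatures of the Hormander form
% Q evaluated at the endpoint planes \ell_1(0;\lambda_0) and \ell_1^+(\lambda_0).
```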

Application to Quantum Graphs
In this section, we apply our framework to an operator on the half-line that arises through consideration of nonlinear Schrödinger equations on quantum graphs with n infinite edges extending from a single vertex (i.e., on star graphs). Our direct motivation for considering this example is the recent analysis of Kairzhan and Pelinovsky (see [12]), and we also note that Kostrykin and Schrader have shown how the symplectic framework fits well with such problems (see [13]) and that Latushkin and Sukhtaiev have recently developed this framework in the case of quantum graphs with edges of finite length (see [15]). Finally, we mention that our general approach to quantum graphs is adapted from the reference [1].

The Schrödinger Operator on Star Graphs
We consider a star graph with n edges, which can be visualized as a single point with n distinct half-lines emerging from it. We will associate with each edge of our graph the interval [0, ∞), and our basic Hilbert space associated with the full graph will be H = ⊕_{j=1}^{n} L^2((0, ∞), C).
We will view elements φ ∈ H as vector functions φ = (φ_1, φ_2, . . . , φ_n)^t, and we specify the linear operator L : H → H in terms of a scalar potential v ∈ C([0, ∞), C), for which we will assume the limit v_+ := lim_{x→+∞} v(x) exists, with v approaching v_+ at a suitable asymptotic rate. (This is slightly weaker than our Assumption (A2), but sufficient in the current setting; see [7].) We specify boundary conditions (6.1) at the vertex, with α_1 and α_2 satisfying the assumptions described in (A3). Under these assumptions, we take as our domain for L, D(L) = {φ ∈ H : φ, φ′ ∈ AC_loc([0, ∞), C^n), Lφ ∈ H}.
With this notation in place, we can consider the eigenvalue problem Lφ = λφ with boundary conditions (6.1). In order to place this system in the framework of our analysis, we set y(x; λ) = (y_1(x; λ), y_2(x; λ))^t, with y_1(x; λ) = φ(x; λ) and y_2(x; λ) = φ′(x; λ). In this way, we arrive at our standard Hamiltonian system (6.2), in which B(x; λ) denotes the appropriate diagonal matrix. Under our assumptions on the scalar potential v, it is well known that for each λ < v_+ the underlying scalar equation has one non-trivial solution that decays as x → +∞ and one non-trivial solution that grows as x → +∞. (See, e.g., [7].) If we denote by ζ(x; λ) the solution that decays as x → +∞, then we can express our frame X_2(x; λ) for the space of solutions of (6.2) decaying as x → +∞ in terms of ζ. We see that, in this case and in the context of Theorem 1.1, the relation (4.1) takes an explicit form. In particular, if we denote the eigenvalues of (−α_2^* + iα_1^*)(−α_2^* − iα_1^*)^{−1} by {a_j}_{j=1}^{n}, then the eigenvalues of W̃(x; λ) can be written down explicitly.

Remark 6.1. We distinguish the Neumann, or Neumann-Kirchhoff, boundary conditions as those specified by continuity of φ at the vertex together with the condition that the components of φ′ sum to zero there. (See p. 14 of [1] for a discussion of terminology.) These correspond with explicit choices of α_1 and α_2. In this case, the eigenvalues of (−α_2^* + iα_1^*)(−α_2^* − iα_1^*)^{−1} are −1 and +1, with +1 simple and −1 occurring with multiplicity n − 1. This fact is straightforward to verify directly, and it is also an immediate consequence of Corollary 2.3 of [13].
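The eigenvalue count of Remark 6.1 is easy to confirm numerically. The sketch below uses a standard choice of Neumann-Kirchhoff vertex matrices (our choice, since the display specifying them is not reproduced here): α_1 encodes continuity across the vertex, α_2 the zero-flux condition, via α_1 φ(0) + α_2 φ′(0) = 0.

```python
import numpy as np

n = 4  # number of edges on the star graph

# Hypothetical Neumann-Kirchhoff vertex matrices:
# rows 1..n-1 of alpha1 enforce continuity phi_j(0) = phi_{j+1}(0),
# the last row of alpha2 enforces the Kirchhoff condition sum_j phi_j'(0) = 0.
alpha1 = np.zeros((n, n))
for j in range(n - 1):
    alpha1[j, j], alpha1[j, j + 1] = 1.0, -1.0
alpha2 = np.zeros((n, n))
alpha2[n - 1, :] = 1.0

A1, A2 = alpha1.conj().T, alpha2.conj().T
U = (-A2 + 1j * A1) @ np.linalg.inv(-A2 - 1j * A1)

evals = np.linalg.eigvals(U)
mult_minus = int(np.sum(np.abs(evals + 1.0) < 1e-8))  # expect n - 1
mult_plus = int(np.sum(np.abs(evals - 1.0) < 1e-8))   # expect 1
```

With these matrices, the computed spectrum consists of −1 with multiplicity n − 1 and the simple eigenvalue +1, as asserted.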

NLS on Star Graphs
We now consider the nonlinear Schrödinger equation (6.6), where p > 0 and u ∈ H, with u_j taking the values of u on edge j of the graph. We interpret the notation ∆u and |u|^{2p}u in this setting as

∆u = (u″_1, u″_2, . . . , u″_n)^t,
|u|^{2p}u = (|u_1|^{2p} u_1, |u_2|^{2p} u_2, . . . , |u_n|^{2p} u_n)^t.
For this calculation, we will use Theorem 1.1 with λ_0 = 0. We observe that by construction the profile determined by s(x) = sech^{1/p}(px) solves (6.8) for λ = 0 (just differentiate (6.7) to see this; here, φ is not expected to satisfy the boundary condition at x = 0). This allows us to express our frame for solutions of (6.8) that decay as x → +∞ in terms of s. We take X_1(0; λ) to be the frame with blocks −α_2^* and α_1^*, so that W̃(x; 0) can be computed explicitly. According to Remark 6.1, the eigenvalues of W̃(x; 0) are q(x), given by (6.9), with multiplicity n − 1, and −q(x), with multiplicity 1. (Here, the notation q(x) has been introduced simply for expositional convenience.)
In [10], the authors have developed a straightforward approach to determining the direction of rotation for the eigenvalues of W̃(x; λ) as x varies, but in the current setting this rotation can be determined directly from the form of s(x). We observe that

s′(x) = −s(x) tanh(px),
s″(x) = s(x) tanh^2(px) − s(x) p sech^2(px).
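These derivative identities can be verified symbolically, along with the turning value discussed just below; the closed form x̄ = arcsinh(√p)/p is our own computation (from tanh²(px) = p sech²(px) ⟺ sinh²(px) = p) and does not appear in the text.

```python
import sympy as sp

x, p = sp.symbols('x p', positive=True)

# Half-soliton profile from the text: s(x) = sech^{1/p}(p x).
s = sp.sech(p * x) ** (1 / p)

# The two derivative identities quoted above; both residuals should vanish.
d1 = sp.simplify(sp.diff(s, x) + s * sp.tanh(p * x))
d2 = sp.simplify(sp.diff(s, x, 2) - s * (sp.tanh(p * x) ** 2 - p * sp.sech(p * x) ** 2))

# tanh^2(px) = p sech^2(px) iff sinh^2(px) = p, so the turning value is
# xbar = asinh(sqrt(p))/p (our closed form); spot-check numerically at p = 2.
xbar = sp.asinh(sp.sqrt(p)) / p
resid = (sp.tanh(p * x) ** 2 - p * sp.sech(p * x) ** 2).subs([(x, xbar), (p, 2)])
```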
This means that −1 is an eigenvalue of W̃(0; 0) with multiplicity n − 1, and +1 is an eigenvalue of W̃(0; 0) with multiplicity 1. As x increases from 0, we see from (6.9) that the imaginary part of q(x) is negative, so rotation is in the counterclockwise direction. Moreover, since tanh^2(px) and sech^2(px) are both monotonic in x (for x ≥ 0), we see that the imaginary part of q(x) remains negative until x arrives at the unique value x̄ for which tanh^2(px̄) − p sech^2(px̄) = 0.
At x = x̄, we see from (6.9) that Re q(x̄) > 0, so q(x̄) = +1. For x > x̄, the imaginary part of q(x) is positive, and by noting the asymptotic relations s′(x) ∼ −2^{1/p} e^{−x} and s″(x) ∼ 2^{1/p} e^{−x} as x → +∞, we see that q(x) approaches i as x → +∞. In summary, as x runs from 0 to +∞, q(x) rotates from −1 to i, leaving −1 in the counterclockwise direction and never again crossing −1. Indeed, with a bit more work, we can verify that the rotation is entirely counterclockwise, but we do not require that much information to draw our conclusions.
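Although the explicit formula (6.9) for q(x) is not reproduced here, the rotation just described can be checked numerically. The sketch below uses q(x) = −(s″(x) + i s′(x))/(s″(x) − i s′(x)), our own reconstruction consistent with every property stated above (q(0) = −1, Im q < 0 before x̄, q(x̄) = +1, q → i); it is an assumption, not necessarily the formula appearing as (6.9).

```python
import numpy as np

p = 2.0  # sample exponent

def sprime(x):
    # s'(x) = -s(x) tanh(px), with s(x) = sech^{1/p}(px)
    s = np.cosh(p * x) ** (-1.0 / p)
    return -s * np.tanh(p * x)

def sdprime(x):
    # s''(x) = s(x) (tanh^2(px) - p sech^2(px))
    s = np.cosh(p * x) ** (-1.0 / p)
    return s * (np.tanh(p * x) ** 2 - p / np.cosh(p * x) ** 2)

def q(x):
    # Hypothetical reconstruction of (6.9):
    # q(x) = -(s''(x) + i s'(x)) / (s''(x) - i s'(x)).
    return -(sdprime(x) + 1j * sprime(x)) / (sdprime(x) - 1j * sprime(x))

# Turning value: root of tanh^2(px) - p sech^2(px), i.e. sinh^2(px) = p.
xbar = np.arcsinh(np.sqrt(p)) / p

# Expected behavior: q(0) = -1; Im q(x) < 0 for 0 < x < xbar;
# q(xbar) = +1; q(x) -> i as x -> +infinity.
```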
Remark 6.2. For a more complete discussion of the instability of the half-soliton e^{iωt} ũ_ω(x) as a solution of (6.6), including a calculation of Mor(L_+) by other means, we refer the reader to [12].