THE PROJECTIVE CARTAN-KLEIN GEOMETRY OF THE HELMHOLTZ CONDITIONS

Abstract. We show that the Helmholtz conditions characterizing differential equations arising from variational problems can be expressed in terms of invariants of curves in a suitable Grassmann manifold.


1. Introduction. There has been considerable interest in the geometrization of the inverse problem in the calculus of variations, mainly to make sense of the classical constructions in a general non-linear setting of differential equations on manifolds (see, e.g., [3] for an algebraic approach, and the survey [13] for the differential-geometric language).
In this paper we describe a complementary kind of geometrization: interpreting the linear inverse problem of the Calculus of Variations through a Cartan-Klein geometry and invariant theory of curves in homogeneous spaces.
Consider the following two problems:

Linear inverse problem in the calculus of variations: under which conditions do the solutions of a linear second order differential equation arise as the extremals of a (time-dependent) Lagrangian?

Congruence problem of curves in the half-Grassmannian: let ℓ₁(t), ℓ₂(t) : [a, b] → Gr(n, 2n) be two curves in the half-Grassmannian manifold of n-dimensional subspaces of R^{2n}. When are these two curves congruent under the canonical action of the general linear group GL(2n)?
The classical solution of the first problem is given by the so-called Helmholtz conditions. A drawback of the Helmholtz conditions is that they are stated in terms of the existence of a solution of a matrix ordinary differential equation which must satisfy certain properties; they are therefore not completely satisfactory for effective computation. This shortcoming was addressed in the work of Sarlet, Engels and Bahar [12], who construct the generalized commutativity conditions, an infinite family of algebraic necessary (and, in the analytic case, sufficient) conditions for a second order linear ODE to be the Euler-Lagrange equation of a variational problem.
As for the second problem, given a (non-degenerate) curve ℓ(t) in the half-Grassmannian, a complete invariant called the Jacobi endomorphism K(t) : ℓ(t) → ℓ(t) is constructed in [2]. Choosing a frame of the curve ℓ(t), this intrinsic invariant is expressed as a time-dependent matrix called the matrix Schwarzian, which generalizes the classical Schwarzian of complex analysis and was already considered as an invariant in [8] in the one-dimensional case.
The main result of this note relates these two theories: we associate to a linear second order ordinary differential equation the complete projectivization of a fundamental set of its solutions; this is a curve ℓ(t) in the half-Grassmannian called the Jacobi curve, and the full linear invariants of this curve are contained in its Jacobi endomorphism (see section 3 for details). This kind of projective geometry of differential equations has a long history (witness the 1906 book of Wilczynski [14]), and many variational problems are more fundamentally studied in the projectivized setting, e.g. the Maslov index [11]. We prove:

Theorem 1.1. The generalized commutativity conditions of Sarlet, Engels and Bahar can be obtained as successive derivatives of the Jacobi endomorphism of the Jacobi curve of the equation.
Then the Helmholtz-Sarlet-Engels-Bahar necessary conditions for a second order differential equation on a manifold to be an Euler-Lagrange equation can be expressed exclusively in terms of the local geometry of the Jacobi curve, making this geometry a modular component of the inverse problem in the calculus of variations.
This relationship is but one instance of what we call the Jacobi curve method: many properties of differential equations depend only on the linearization, and particularly on the projectivization, of the set of solutions.

Definition 1.2. Let X be an n-dimensional manifold endowed with a k-dimensional distribution ∆ and a flow φ. The Jacobi curve at p ∈ X associated to this data is the curve ℓ(t) = dφ_t(∆_{φ_{−t}(p)}). This is a curve in the Grassmann manifold of k-dimensional planes in T_pX.
Thus the Jacobi curve is constructed by linearizing the flow through its derivative dφ_t for fixed t ∈ R and considering the action on a distinguished set of k-planes (hence the projectivization). If instead of planes we considered a frame along the curve t → φ_t(p), we would have an expression for the linearization of the flow. The most common use is when X = TM and φ is the flow of a second-order differential equation, which, in the case of Euler-Lagrange equations, gives a curve in the Lagrangian Grassmannian whose self-intersection properties express minimality and index conditions (see [11] for a general exposition of the Maslov theory and [5] for extensions to higher order variational problems). It should be emphasized that in this paper we assume that the flow is already linearized, in the same spirit as § 2.2 in [5], and we concentrate on the projectivization given by the Jacobi curve of a linear second order differential equation on R^n.
More recently, the local geometry of the Jacobi curve has been shown to be the fundamental building block for curvature invariants in several variational, geometric and control-theoretic situations. For an exposition of this "local" Maslov theory, the reader can refer to [7] for Finsler geometry applications (Finsler geometry being viewed as a variational problem in this case) and to [1] for other geometrical contexts.
Thus the main thrust of Theorem 1.1 is the factoring of the inverse problem of the calculus of variations through the Jacobi curve, therefore explicitly and quantitatively showing the projective nature of the Helmholtz conditions, while additionally giving a Cartan-Klein-geometric interpretation of the Sarlet-Engels-Bahar conditions. The derivatives mentioned in Theorem 1.1 are either ordinary derivatives (when expressing the Jacobi endomorphism in matrix form with respect to a normal frame) or suitably defined intrinsic covariant derivatives; these covariant derivatives are constructed exclusively in terms of Jacobi curve data.
The paper is organized as follows: in the preliminary sections 2.1 and 2.2 we give a brief description of the methods used to solve the two problems above, using mainly the references [12] and [2] respectively.
In section 3 we give the reason behind the relationship between the two problems: the invariance of the Helmholtz condition under a group acting on the coefficients of a differential equation. Section 4 is devoted to the study of the derivatives of the Jacobi endomorphism and their relation with the Sarlet-Engels-Bahar generalized commutativity conditions, thus proving Theorem 1.1. Section 5 is devoted to examples where the rôle of the Jacobi curve becomes apparent.

2. Preliminaries.
Here we briefly describe the papers [12] and [2], which give solutions to the problems at the beginning of the paper. We warn the reader that there is an unfortunate mismatch between the notations used in these two papers: one is the transpose of the other.
2.1. The Helmholtz-Sarlet-Engels-Bahar conditions. We first review the Helmholtz conditions and the Sarlet-Engels-Bahar extensions. Given a linear, variable-coefficient second order ODE

ẍ(t) + 2P(t)ẋ(t) + Q(t)x(t) = 0,   (1)

a Helmholtz multiplier is an n × n invertible matrix Z(t) satisfying the conditions

Z(t)^⊤ = Z(t),   Ż(t) = P(t)^⊤Z(t) + Z(t)P(t),   Z(t)Φ^(0)(t) = (Φ^(0)(t))^⊤Z(t),

where Z₀ = Z(t₀) for some t₀ ∈ [a, b] and Φ^(0)(t) = Q(t) − P(t)² − Ṗ(t). If such a multiplier is found, then a possible Lagrangian is given by a quadratic expression in x and ẋ whose Euler-Lagrange equation is the "multiplied" equation

Z(t)(ẍ(t) + 2P(t)ẋ(t) + Q(t)x(t)) = 0,

which obviously has the same solutions as (1). The first insight in [12] is to study the behavior of the Helmholtz conditions under the change of variables x(t) = U(t)y(t). Choosing U to be a solution of the differential equation U̇(t) = −P(t)U(t) with invertible initial condition, we transform (1) into the normal form

ÿ(t) + R(t)y(t) = 0,

where R(t) = U(t)⁻¹Φ^(0)(t)U(t) (note that the relation between R and Φ^(0) is that of a change of basis; this will be important later). The second Helmholtz condition for this equation then vanishes, thus giving a constant multiplier Z₀ satisfying

Z₀R(t) = R(t)^⊤Z₀.

At first glance, it seems that we have just exchanged the problem of solving the differential equation Ż = P^⊤Z + ZP in the second Helmholtz condition for the problem of solving U̇ + PU = 0. However, the second insight from [12] is to take successive derivatives at t₀ of this procedure, furnishing an infinite family of algebraic conditions. To be more specific, if we define inductively the curve of matrices

Φ^(j+1)(t) = Φ̇^(j)(t) + [P(t), Φ^(j)(t)]   for j ∈ N,

where [·, ·] is the commutator of matrices, then, by the definitions in the last paragraph (normalizing U(t₀) = Id), we have

(dʲR/dtʲ)(t) = U(t)⁻¹ Φ^(j)(t) U(t),

and we have the Sarlet-Engels-Bahar theorem of generalized commutativity:

Theorem 2.1 ([12], Proposition 7). Assuming that P and Q are analytic near t₀, equation (1) has a Helmholtz multiplier if, and only if, there exists an n × n multiplier Z₀ (a constant non-degenerate symmetric matrix) such that

Z₀Φ^(j)(t₀) = (Φ^(j)(t₀))^⊤Z₀   for all j = 0, 1, 2, . . .
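As an illustration, the generalized commutativity conditions can be checked numerically. The sketch below (the helper names are ours, and we take constant coefficients, so that Ṗ = 0 and the recursion reduces to iterated commutators with P) tests a 2 × 2 example whose only multipliers turn out to be indefinite.

```python
import numpy as np

def phi_sequence(P, Q, jmax):
    """Matrices Φ^(j) for the constant-coefficient equation ẍ + 2P ẋ + Q x = 0.
    With P, Q constant the recursion Φ^(j+1) = dΦ^(j)/dt + [P, Φ^(j)] loses
    its derivative term and becomes an iterated commutator with P."""
    phi = Q - P @ P                      # Φ^(0) = Q − P² − Ṗ  (Ṗ = 0 here)
    out = [phi]
    for _ in range(jmax):
        phi = P @ phi - phi @ P          # [P, Φ^(j)]
        out.append(phi)
    return out

def is_multiplier(Z0, phis, tol=1e-12):
    """Sarlet-Engels-Bahar conditions: Z0 Φ^(j) symmetric for every j."""
    return all(np.allclose(Z0 @ f, f.T @ Z0, atol=tol) for f in phis)

P = np.zeros((2, 2))
Q = np.array([[0., -1.], [1., 0.]])      # non-symmetric "curvature"
phis = phi_sequence(P, Q, jmax=5)

print(is_multiplier(np.diag([1., -1.]), phis))   # True: an indefinite multiplier works
print(is_multiplier(np.eye(2), phis))            # False: the identity does not
```

For this Q the zero order condition forces Z₀ of the form ((a, b), (b, −a)), which is always indefinite; this is the phenomenon quantified in section 5.2.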
To finish this section, we note that the change of variables above, x(t) = U(t)y(t), is the key to relating the Helmholtz conditions with the invariants of curves in the half-Grassmannian.

2.2. Fanning curves in the half-Grassmannian. Let us turn our attention now to the invariants of curves in the half-Grassmannian constructed in [2]. Given a curve ℓ(t) : [a, b] → Gr(n, 2n), an adapted frame of ℓ is a 2n × n matrix A(t) whose n columns span ℓ(t). Note that two frames A(t), B(t) span the same curve of n-planes if and only if there exists a curve of invertible n × n matrices X(t) such that A(t) = B(t)X(t). As usual in congruence problems, we pose a generic non-degeneracy condition: a curve ℓ(t) is said to be fanning if for some (and therefore for all) frame A(t), the 2n × 2n matrix (A(t) | Ȧ(t)) obtained by juxtaposing the frame and its derivative is invertible. From now on we only deal with fanning curves.
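The fanning condition is straightforward to test numerically; the following sketch (our own helper, using a central finite difference for Ȧ) checks it for a frame built from solutions of the scalar equation ẍ + x = 0.

```python
import numpy as np

def is_fanning(frame, t, h=1e-6):
    """A curve ℓ(t) in Gr(n, 2n) with adapted frame A(t) (a 2n×n matrix whose
    columns span ℓ(t)) is fanning iff (A(t) | Ȧ(t)) is invertible.  Here Ȧ is
    approximated by a central finite difference."""
    A = frame(t)
    Adot = (frame(t + h) - frame(t - h)) / (2 * h)
    M = np.hstack([A, Adot])             # the 2n × 2n juxtaposition (A | Ȧ)
    return abs(np.linalg.det(M)) > 1e-8

# Solutions of ẍ + x = 0 give the fanning frame A(t) = (cos t, sin t)ᵀ in Gr(1, 2)
frame = lambda t: np.array([[np.cos(t)], [np.sin(t)]])
print(is_fanning(frame, 0.3))            # True: det(A | Ȧ) = cos² + sin² = 1

# A constant frame is never fanning: Ȧ = 0
const = lambda t: np.array([[1.0], [0.0]])
print(is_fanning(const, 0.3))            # False
```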
We can then construct invariants from a given frame in two ways:
1. Construct certain endomorphisms of R^{2n} from the information of the frame and its derivatives, in a way that depends only on the curve ℓ(t) and not on the specific frame.
2. Find a normal form of the frame and study the coefficients giving relations between the normal frame and its derivatives.
The first way is realized by the fundamental endomorphism F(t), defined as follows: the matrix of F(t) in the basis of R^{2n} given by the columns of (A(t) | Ȧ(t)) is the nilpotent matrix

( 0  I )
( 0  0 )

or, in the standard basis of R^{2n},

F(t) = (A(t) | Ȧ(t)) ( 0 I ; 0 0 ) (A(t) | Ȧ(t))⁻¹.

It is easy to see that these matrix expressions define F(t) as a nilpotent endomorphism of R^{2n} that only depends on the curve ℓ(t) and not on the specific spanning frame A(t).
We have that the derivative Ḟ(t) is a reflection, whose −1 eigenspace coincides with the curve ℓ(t).
The +1 eigenspace of Ḟ(t) is called the horizontal space h(t) and has many interesting properties; in this paper we only use that the derivatives of normal frames (defined at the end of this section) are horizontal.
Define the Jacobi endomorphism by the equation K(t) = ¼ (F̈(t))². We have that K(t) preserves ℓ(t) (and also h(t)), thus defining an endomorphism of ℓ(t).
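These constructions can be verified numerically. The sketch below (finite-difference approximations; the helper names are ours) computes F(t) for the curve in Gr(1, 2) spanned by the solutions of ẍ + x = 0 and checks that Ḟ(t) is a reflection and that K(t) = ¼ F̈(t)² acts as the identity, matching the Schwarzian of this equation, which equals 1.

```python
import numpy as np

N = np.array([[0., 1.], [0., 0.]])       # matrix of F(t) in the basis (A | Ȧ)

def F(frame, t, h=1e-5):
    """Fundamental endomorphism of the fanning curve spanned by frame(t)."""
    A = frame(t)
    Ad = (frame(t + h) - frame(t - h)) / (2 * h)
    B = np.hstack([A, Ad])               # the basis (A | Ȧ) of R²
    return B @ N @ np.linalg.inv(B)

frame = lambda t: np.array([[np.cos(t)], [np.sin(t)]])   # solutions of ẍ + x = 0

t0, h = 0.4, 1e-3
Fdot  = (F(frame, t0 + h) - F(frame, t0 - h)) / (2 * h)
Fddot = (F(frame, t0 + h) - 2 * F(frame, t0) + F(frame, t0 - h)) / h**2
K = 0.25 * Fddot @ Fddot                 # Jacobi endomorphism

print(np.allclose(Fdot @ Fdot, np.eye(2), atol=1e-4))   # True: Ḟ is a reflection
print(np.allclose(K, np.eye(2), atol=1e-3))             # True: K acts as 1 on ℓ(t)
```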
Let us now describe the normal form method. Since by hypothesis the columns of (A(t) | Ȧ(t)) form a basis of R^{2n}, the columns of the second derivative Ä(t) can be uniquely expressed as linear combinations of the columns of A(t) and Ȧ(t), which translates in matrix language into A satisfying the second order differential equation

Ä(t) + 2Ȧ(t)P(t) + A(t)Q(t) = 0,   (5)

where P(t), Q(t) are smooth n × n matrices uniquely defined by A. This is the key relationship between curves in the half-Grassmannian and linear second order differential equations. Define the matrix Schwarzian

{A(t), t} = Q(t) − P(t)² − Ṗ(t).

We have

Proposition 2 ([2]). K(t)(A(t)) = A(t){A(t), t}.

Note that proposition 2 means that the Schwarzian is the matrix expression of the Jacobi endomorphism in the given frame.
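In the scalar case n = 1 the Jacobi endomorphism has a single entry, so the Schwarzian does not depend on the chosen frame at all. The following sketch (our own helpers; the coefficients are recovered by solving the 2 × 2 linear system given by the frame equation) computes it for two different frames of the same curve ℓ(t) in Gr(1, 2).

```python
import numpy as np

def pq(frame, t, h=1e-4):
    """Coefficients of the frame equation Ä + 2ȦP + AQ = 0 (here n = 1)."""
    A   = frame(t)
    Ad  = (frame(t + h) - frame(t - h)) / (2 * h)
    Add = (frame(t + h) - 2 * frame(t) + frame(t - h)) / h**2
    q, two_p = np.linalg.solve(np.hstack([A, Ad]), -Add).ravel()
    return two_p / 2, q

def schwarzian(frame, t, h=1e-3):
    """{A, t} = Q − P² − Ṗ, with Ṗ by a central difference."""
    p, q = pq(frame, t)
    pdot = (pq(frame, t + h)[0] - pq(frame, t - h)[0]) / (2 * h)
    return q - p**2 - pdot

A1 = lambda t: np.array([[np.cos(t)], [np.sin(t)]])   # normal frame: P = 0, Q = 1
A2 = lambda t: np.exp(t) * A1(t)                      # rescaled frame: P = −1, Q = 2

s1, s2 = schwarzian(A1, 0.7), schwarzian(A2, 0.7)
print(round(s1, 3), round(s2, 3))                     # 1.0 1.0: same invariant
```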
We will also use the behavior of the Schwarzian under reparametrizations (cf. corollary 1.6 in [2]): if s is a diffeomorphism of the real line, then the Schwarzian satisfies the transformation rule

{A(s(t)), t} = {A(s(t)), s} ṡ(t)² + {s(t), t} I.   (6)
Let us now look at the normal form point of view. A fanning frame B(t) is said to be normal if B̈(t) + B(t)R(t) = 0 for some curve of matrices R(t); that is, if there is no first-derivative term in the second order differential equation satisfied by B.
Remark 1. For normal frames, B̈(t) + B(t)R(t) = 0, the Schwarzian is just R(t).

Now given a fanning curve ℓ(t), choose a fanning frame A(t) spanning ℓ(t) and satisfying (5). If we choose X to satisfy the differential equation Ẋ(t) = −X(t)P(t) with invertible initial condition, then the frame B given by the relation A(t) = B(t)X(t) is a normal frame spanning ℓ(t). It is easy to see that two normal frames spanning the same curve are related by a constant change of frame (see proposition 4.2 in [2]). The main congruence result is then that two fanning curves are congruent under GL(2n) if, and only if, the Schwarzians of their normal frames are conjugate by a constant invertible matrix.

3. An equivalence. By now the reader must have perceived the similarities between the computations used in the Helmholtz conditions and in the congruence problem for curves in the half-Grassmannian: the Schwarzian is Φ^(0), the reduction to a normal form without first-derivative term, etc. In this section we formalize this relationship.
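As a sketch of the normalization step (scalar case, so the order of the matrix products is immaterial; the names are ours): starting from a frame with a first-derivative term, solving the equation for X and dividing it out recovers a normal frame.

```python
import numpy as np

# The frame Ã(t) = eᵗ (cos t, sin t)ᵀ spans the same curve ℓ(t) in Gr(1, 2) as
# B(t) = (cos t, sin t)ᵀ, but satisfies Ä + 2ȦP + AQ = 0 with P = −1, Q = 2.
# Solving Ẋ = −XP = X with X(0) = 1 gives X(t) = eᵗ, and B = Ã X⁻¹ is normal:
# B̈ + B R = 0 with R = Q − P² − Ṗ = 1, the Schwarzian.

X = lambda t: np.exp(t)                       # solves Ẋ = −XP for P = −1
Atilde = lambda t: np.exp(t) * np.array([np.cos(t), np.sin(t)])
B = lambda t: Atilde(t) / X(t)                # candidate normal frame

t0, h, R = 0.5, 1e-4, 1.0
Bdd = (B(t0 + h) - 2 * B(t0) + B(t0 - h)) / h**2
print(np.allclose(Bdd + B(t0) * R, 0, atol=1e-4))   # True: no first-derivative term left
```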
Let us represent the set of second order linear differential equations on R^n in the following naive way: as the set T of triples (M(t), P(t), Q(t)) of n × n-matrix-valued functions, with M(t) invertible, the triple representing the equation M(t)ẍ + 2P(t)ẋ + Q(t)x = 0. There is an obvious (left) action of the group G₀ of curves of n × n invertible matrices with pointwise multiplication, given by "multipliers":

Z · (M, P, Q) = (ZM, ZP, ZQ)   for Z(t) ∈ G₀.

Clearly, two triples related by this action produce a differential equation with the same solutions.
There is another, less obvious (left) action: consider the group G₂ of 2-jet prolongations of curves of invertible matrices; a typical element of this group is a triple of matrix-valued functions (X(t), Ẋ(t), Ẍ(t)), with multiplication defined by the Leibniz rule (we omit the variable t in order to clean up the notation). This group acts on T through the change of variables x̃ = Xx; two triples related by this action produce two differential equations whose coefficients are related by the formulas obtained from the Leibniz rule. Also, every solution x̃ of the transformed equation (8) is of the form x̃(t) = X(t)x(t) for some solution x of the original equation (7).
The two actions above commute, providing a joint action (·, ∗) of the group G₀ × G₂ on T. What is actually happening is that the G₀-action has a section (given by the subset of triples with M = Id), and the G₂-action preserves this section; the action of G₂ on all triples is obtained by extending the action restricted to this section.
Let us now consider the space L of fanning curves ℓ(t) in the half-Grassmannian Gr(n, 2n). Here the group GL(2n) of fixed invertible 2n × 2n matrices acts in the tautological way, T • ℓ(t) = Tℓ(t).
We have

Theorem 3.1. There is a natural bijection between the quotients (·, ∗)\T and •\L.
Proof. First we construct a map ϕ from (·, ∗)\T to •\L. Let [(M, P, Q)] be an equivalence class of the quotient (·, ∗)\T and let {x_i ; i = 1, . . . , 2n} be a system of fundamental solutions of (M, P, Q). Define the frame A associated to this system as the 2n × n matrix whose rows are the solutions x_i, and let ϕ[(M, P, Q)] be the class of the fanning curve spanned by the columns of A. By the discussion after equations (7) and (8), the frames associated to two equivalent triples span fanning curves in the same equivalence class, so ϕ is well defined.
We now construct an inverse ψ : •\L → (·, ∗)\T. Let [ℓ] be an equivalence class of •\L and choose a fanning frame A of ℓ. Since ℓ(t) is assumed to be fanning, there are unique curves of n × n matrices P(t) and Q(t) with

Ä(t) + 2Ȧ(t)P(t) + A(t)Q(t) = 0.   (11)

Taking the transpose in the equation above, the rows of A form a set of fundamental solutions of the differential equation (Id, P, Q); set ψ[ℓ] = [(Id, P, Q)], following the construction above. We have to check that ψ is well defined. Let ℓ̃ be a fanning curve with [ℓ̃] = [ℓ]. Then there is T ∈ GL(2n) such that ℓ̃(t) = Tℓ(t) for all t. If we take Ã a fanning frame of ℓ̃, we will have Ã(t) = TB(t), where B is a fanning frame of ℓ. Since A and B span the same curve ℓ, there exists a curve of invertible n × n matrices X(t) with

Ã(t) = TA(t)X(t).   (12)

By the fanning condition, there are unique curves P̃(t) and Q̃(t) of n × n matrices satisfying the analogue of (11) for Ã, and the rows of Ã form a set of fundamental solutions of (Id, P̃, Q̃). Taking the derivative twice in (12) and substituting in the equation for Ã, the uniqueness of the coefficients in (11) yields the relation between (P̃, Q̃) and (P, Q) coming from the action of G₀ × G₂; taking the transpose, we get [(Id, P̃, Q̃)] = [(Id, P, Q)], and the map ψ is well defined. Using the constructions above, we have ϕ ∘ ψ = Id on •\L and ψ ∘ ϕ = Id on (·, ∗)\T. Then ϕ is a bijection, and the proof is complete.
The next result relates the group action (·, ∗) to the Helmholtz condition. Let us first extend the condition (2) for the existence of a Helmholtz multiplier to a general differential equation (M, P, Q). We say that Z(t) is a Helmholtz multiplier for (M, P, Q) if Z(t) is invertible for all t and satisfies the conditions (2) with the coefficients M⁻¹P and M⁻¹Q; that is, Z(t) is a Helmholtz multiplier of (Id, M⁻¹P, M⁻¹Q) in the sense of (2).
The next result says that the existence of a Helmholtz multiplier for a differential equation (M, P, Q) depends only on the orbit of (M, P, Q) through the action of G₀ × G₂. In terms of theorem 3.1 above, this means that the existence of a Helmholtz multiplier depends only on the fanning curve associated to (M, P, Q):

Theorem 3.2. If (M, P, Q) has a Helmholtz multiplier, then every element of the orbit of (M, P, Q) with respect to the action of G₀ × G₂ has a Helmholtz multiplier.
Proof. Let (M̃, P̃, Q̃) be an element of the orbit of (M, P, Q), so that there exist Z ∈ G₀ and (X, Ẋ, Ẍ) ∈ G₂ carrying (M, P, Q) to (M̃, P̃, Q̃), with the coefficients related by the formulas (17), (18) and (19). Suppose that (M, P, Q) has a Helmholtz multiplier Z(t): Z(t) is invertible for all t and satisfies (14), (15) and (16). Define Z̃(t) from Z(t) and X(t) by the change-of-variables formula (20); we claim that Z̃(t) is a Helmholtz multiplier of (M̃, P̃, Q̃), that is, that Z̃(t) is invertible for all t and satisfies the conditions (21), (22) and (23), where Z̃₀ = Z̃(t₀) and Φ̃^(0) is built from the coefficients P̃ and Q̃ as in (24). The invertibility of Z̃(t) and condition (21) follow immediately from the definition (20) and from (14); in particular Z̃(t) is symmetric for all t. Taking the derivative in (20) and using (15), a computation shows that Z̃ satisfies (22). Finally, expressing Φ̃^(0) in terms of Φ^(0) through (17), (18) and (19), and using the definition (20), a direct (if somewhat long) computation shows that Z̃Φ̃^(0) is symmetric, which is condition (23). This completes the proof of the theorem.

4. Derivatives of the Jacobi endomorphism and generalized commutativity conditions. The theory developed by Sarlet, Engels and Bahar [12] provides an effective approach to the first question at the beginning of this paper (theorem 2.1). We now interpret this theory in the terms described in the previous section.
Let us consider a differential equation (M, P, Q).
Using the formulas (17), (18) and (19), we can take X₀ = M⁻¹ and X₂ satisfying a first order linear differential equation, (29), analogous to the normal-form equation U̇ = −PU of section 2.1. Now, writing p = M⁻¹P and q = M⁻¹Q, we define inductively the curve of matrices, starting from Φ^(0) = q − p² − ṗ,

Φ^(j+1)(t) = Φ̇^(j)(t) + [p(t), Φ^(j)(t)]   for j ∈ N,

where [·, ·] is the commutator of matrices. By the definitions in the last paragraph, we obtain the analogue of the relation between the derivatives of the normal-form coefficient and the matrices Φ^(j), and we have the following version of the Sarlet-Engels-Bahar theorem of generalized commutativity, which is agnostic with respect to the section of the G₀-action chosen; that is, we allow an arbitrary invertible coefficient in front of the second derivatives. The proof is the same as in [12].
Theorem 4.1. Let (M, P, Q) be a differential equation with M, P and Q analytic near t₀. Then (M, P, Q) has a Helmholtz multiplier if, and only if, there exists an n × n multiplier Z₀ (a constant non-degenerate symmetric matrix) such that

Z₀Φ^(j)(t₀) = (Φ^(j)(t₀))^⊤Z₀   for all j = 0, 1, 2, . . .
From this and remark 1, if we take a normal frame B(t) of a fanning curve ℓ(t), the R(t) appearing above is the transpose of the Schwarzian of the frame. Then, for normal frames, the ordinary derivatives of the matrix representing the Jacobi endomorphism give the generalized commutativity conditions appearing in Theorem 4.1, thus proving Theorem 1.1 in these frames.
The computations leading to theorem 4.1 took advantage of working in a normal frame; recall that in order to construct a normal frame we need to solve the matrix differential equation (29), which might be impossible to do explicitly in practice.
In the remainder of this section we introduce a covariant derivative which expresses the successive derivatives of the Jacobi endomorphism in invariant terms, and in a way that is computable in any frame of the curve, not necessarily normal. We remark that this covariant derivative is constructed using only the linear geometry of the Jacobi curve; we do not use any manifold-based computations.
Consider the tautological bundle ξ : E → Gr(n, 2n) with total space

E = { (ℓ, v) ∈ Gr(n, 2n) × R^{2n} ; v ∈ ℓ }.

Let now ℓ(t) be a curve in Gr(n, 2n), and v(t) a section of E along ℓ, i.e., for each t, v(t) ∈ ℓ(t). We define the covariant derivative of the vector field v along ℓ as follows. First, let P(t) be the projection onto ℓ(t) with kernel the horizontal space h(t), which is given by

P(t) = (1/2)(Id − Ḟ(t)),

where F(t) is the fundamental endomorphism associated to ℓ (see [2]). Then the covariant derivative of v is defined by

∇_t v(t) = P(t) v̇(t).

We emphasize again that this covariant derivative is intrinsically defined, purely in terms of the Jacobi curve data.
Suppose now that a₁, . . . , aₙ is a normal frame. Then each ȧᵢ is horizontal, so that

∇_t aᵢ(t) = P(t) ȧᵢ(t) = 0,

which is what we wanted ("normal frames are by definition parallel").
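The projection and the parallelism of normal frames can be checked numerically; in the sketch below (finite differences, our own helpers) the projection ½(Id − Ḟ(t)) annihilates the derivative of the normal frame attached to ẍ + x = 0.

```python
import numpy as np

N = np.array([[0., 1.], [0., 0.]])

def F(frame, t, h=1e-5):
    """Fundamental endomorphism of the curve spanned by frame(t)."""
    A = frame(t)
    Ad = (frame(t + h) - frame(t - h)) / (2 * h)
    B = np.hstack([A, Ad])
    return B @ N @ np.linalg.inv(B)

def proj(frame, t, h=1e-3):
    """P(t) = ½(Id − Ḟ(t)): projection onto ℓ(t) along the horizontal space."""
    Fdot = (F(frame, t + h) - F(frame, t - h)) / (2 * h)
    return 0.5 * (np.eye(2) - Fdot)

a = lambda t: np.array([[np.cos(t)], [np.sin(t)]])    # normal frame of ℓ(t)

t0, h = 0.9, 1e-3
adot = (a(t0 + h) - a(t0 - h)) / (2 * h)
nabla_a = proj(a, t0) @ adot                          # covariant derivative ∇ₜa
print(np.allclose(nabla_a, 0, atol=1e-4))             # True: normal frames are parallel
```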
Using this covariant derivative, we can now define derivatives of tensor fields along fanning curves in the usual way. In particular, the derivative of the Jacobi endomorphism is defined in the standard way by

(∇_t K)(W) = ∇_t (K(W)) − K(∇_t W)

for any vector field W along ℓ; the computation is tensorial in W.
It is then easy to show that, if we express the Jacobi endomorphism as a matrix in a normal frame, this covariant derivative is just the usual derivative of each entry of the matrix.
We have the following result.

Proposition 3. Let ℓ(t) ∈ Gr(n, 2n) be a fanning curve and K(t) its Jacobi endomorphism. The matrices Φ^(j)(t) can be obtained by taking successive covariant derivatives of K(t).

Proof. For a fanning frame A(t) with coefficients P(t), Q(t) as in (5), one has the relations

P(t)(Ȧ(t)) = −A(t)P(t),   K(t)(A(t)) = A(t){A(t), t},

and the second relation gives the case j = 0. Using the first relation and the definition of ∇_t (dropping the parameter t in the notation below), a direct computation gives the case j = 1. Now assume the result is true for j − 1; the same computation shows that the result holds for j, and proposition 3 follows by induction.

5.1. Helmholtz conditions and the Lagrangian Grassmannian.
It is a well known, and extremely useful, fact that the curve corresponding to a variational problem lies in the Lagrangian Grassmannian with respect to the canonical symplectic form; see, e.g., section 6 of [2] for the linear situation and [11] for the global theory.
In the context of the inverse problem in the calculus of variations, we do not know a priori the symplectic form making the associated fanning curve a curve of Lagrangian planes. This section is devoted to showing that the existence of such a symplectic form is equivalent to the existence of a Helmholtz multiplier (compare with a similar non-linear construction in [4], where the Lagrangian planes are implicitly given as the vertical sub-bundle); in the next section we give some uniqueness results.
Using the same notation as in the proofs of the theorems of section 3, consider the action of an element (X₀, X₂) ∈ G₀ × G₂. In terms of the identification of theorem 3.1, we have Ã(t) = TA(t)X(t), where Ã and A are fanning frames of the congruent fanning curves ℓ̃ and ℓ, constructed using fundamental solutions of (M̃, P̃, Q̃) and (M, P, Q), respectively, and T ∈ GL(2n). Now, in the proof of theorem 3.2 above, we arrived at two formulas, (20) and (26), that relate the change of the multiplier and of the (transpose of the) Schwarzian as we move through the orbit. Both formulas depend only on X₂; taking the inverse in equation (20) and the transpose in (26), and denoting W̃ = Z̃⁻¹ and W = Z⁻¹, they can be written as transformation rules, (31) and (32), for W and for the Schwarzian. As a consequence of these relations, the Schwarzian {A(t), t} defines a well defined linear transformation, because of the way it changes when we pass to a congruent frame (32); of course, this just reflects the fact that the Schwarzian is the matrix expression of the Jacobi endomorphism in a given frame. The terms W̃ and W, on the other hand, change like a bilinear form when we change to a congruent frame (31). Then, for each fanning curve that comes from a variational principle, we have a well defined bilinear form W. This bilinear form is essentially the Wronskian that also appears in [2]. We use this bilinear form to get the following result:

Theorem 5.1. Let ℓ be a fanning curve. Then ℓ is variational if, and only if, ℓ is a curve of Lagrangian subspaces for some symplectic form ω.
An immediate consequence of Theorem 5.1 is that the variational character is determined by the Jacobi curve as a set, i.e., the parametrization is not important. This fact could also be deduced from the transformation rule (6).

Proof of Theorem 5.1. Suppose first that ℓ is variational, and choose a normal frame B of ℓ, satisfying B̈(t) + B(t)R(t) = 0; such a frame is constructed in the same way as in the proof of theorem 3.1.
By theorem 3.2, (Id, 0, R) has a constant Helmholtz multiplier Z₀ satisfying

Z₀R(t) = R(t)^⊤Z₀.   (33)

If we denote W₀ = Z₀⁻¹, then, using (31) and Sylvester's law of inertia, there is an n × n non-degenerate matrix U₀ such that

U₀^⊤ W₀ U₀ = I_{n,k} = ( I_k 0 ; 0 −I_{n−k} ),

where I_j is the j × j identity matrix; thus without loss of generality we can suppose that W₀ = I_{n,k}. Note that in this case we have W₀⁻¹ = W₀ and W₀² = I_n. Consider the basis β of R^{2n} formed by the columns of (B(t₀) | Ḃ(t₀)W₀) for some initial condition t₀ ∈ [a, b]. Define a symplectic form ω such that the matrix of ω in the basis β is given by

( 0 −I_n ; I_n 0 ).
Using the relation (33) we have R(t)W₀ = W₀R(t)^⊤, so that the following identity holds: (R(t)W₀)^⊤ = R(t)W₀. It follows that the curve of matrices

C(t) = ( 0 −R(t)W₀ ; W₀ 0 ),

which satisfies (d/dt)(B(t) | Ḃ(t)W₀) = (B(t) | Ḃ(t)W₀) C(t), is in the Lie algebra of the symplectic group; using the initial condition at t₀, the frame (B(t) | Ḃ(t)W₀) remains a symplectic basis with respect to ω for all t. This implies that ℓ, being the space spanned by the columns of B, is a curve of Lagrangian subspaces with respect to the symplectic form ω.
For the converse, suppose that ℓ is a curve of Lagrangian subspaces with respect to a symplectic form ω. Choose a basis β of R^{2n} such that in this basis ω is written as

( 0 −I_n ; I_n 0 ).

In the same way as above, choose a normal frame B of ℓ, which satisfies B̈(t) + B(t)R(t) = 0. By the hypothesis and proposition 6.2 of [2], the Wronskian of B is constant, equal to some symmetric non-degenerate matrix W₀. Still following [2], we can suppose further that W₀ = I_{n,k}, because the Wronskian changes as a bilinear form when we perform a change of frames.
With some computations, we can show that R(t)W₀ = W₀R(t)^⊤, and then R(t)W₀ is symmetric. It follows that W₀R(t) = R(t)^⊤W₀, and W₀ is a Helmholtz multiplier of (Id, 0, R(t)).

5.2. Determination of possible Lagrangians through invariants of the Jacobi curve. In the previous section we determined that a curve in the half-Grassmannian is the Jacobi curve of a variational equation if and only if there exists a symplectic form that makes all points of the curve Lagrangian planes. We now try to see to what extent the symplectic form and the Lagrangian can be determined by the curve ℓ. Obviously, this determination is at best up to a constant multiplicative factor. We have the following partial result: the index of a possible Lagrangian can be bounded by information read from the Jacobi curve. In particular, the Jacobi curve recognizes, for example, the impossibility of a linear second order ODE being the Jacobi equation of a Riemannian (or Finsler) manifold, and the need for a Lorentzian character of a possible Lagrangian.
Theorem 5.2. Let ℓ be the Jacobi curve of a second order linear ODE, and let R = R(t₀) be its curvature at any given point. Then the index of any Lagrangian having ℓ as its Jacobi curve is at least one half of the number of non-real eigenvalues of R, counted with multiplicity.
Before proceeding with the proof, let us remark that counting or bounding the number of non-real eigenvalues of R is effectively computable, using Sturm's theorem or Descartes's rule of signs. Also, the transformation law (6) implies that the number of real eigenvalues of the Schwarzian is invariant under reparametrization.
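To see the obstruction in the smallest properly complex case, take n = 2 with a hypothetical rotational curvature, writing the zero order condition in the transposed form Z₀R = RᵀZ₀ used throughout:

```python
import numpy as np

# For R with a conjugate pair of non-real eigenvalues, any symmetric invertible
# Z0 with Z0 R = Rᵀ Z0 is forced to be indefinite.  Take R = [[0,-1],[1,0]]
# (eigenvalues ±i); since Rᵀ = −R the condition reads Z0 R + R Z0 = 0, and a
# direct computation shows Z0 must have the form [[a,b],[b,-a]], whose
# determinant is −a² − b² < 0.

R = np.array([[0., -1.], [1., 0.]])
nonreal = np.sum(np.abs(np.linalg.eigvals(R).imag) > 1e-12)
print(nonreal)                              # 2 non-real eigenvalues

a, b = 1.0, 0.5
Z0 = np.array([[a, b], [b, -a]])
print(np.allclose(Z0 @ R, R.T @ Z0))        # True: zero order Helmholtz condition
print(np.linalg.det(Z0) < 0)                # True: Z0 is indefinite (index 1)
```

One conjugate pair of non-real eigenvalues thus forces one negative direction on Z₀: no positive-definite (Riemannian-type) multiplier exists, only a Lorentzian one.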
Proof. The theorem is a consequence of just the zero order Helmholtz condition (4). Fix any basis of ℓ(t₀), so that we can work with matrices; then the zero order Helmholtz condition is the existence of a symmetric invertible matrix Z₀ such that Z₀R = R^⊤Z₀. We shall see that the eigenvalue condition on R implies that Z₀ has at least the required number of negative eigenvalues.
We shall need the following orthogonality lemma:

Lemma 5.3. Let η = a + ib ∈ C and ζ = c + id ∈ C be eigenvalues of the matrix R with η ≠ ζ and η ≠ ζ̄. Let E_η and E_ζ be the generalized eigenspaces associated to η and ζ, respectively. If Z₀ is a symmetric non-degenerate n × n matrix satisfying Z₀R = R^⊤Z₀, and if z ∈ E_η and w ∈ E_ζ, we have

(Re z)^⊤ Z₀ (Re w) = (Re z)^⊤ Z₀ (Im w) = 0,
(Im z)^⊤ Z₀ (Re w) = (Im z)^⊤ Z₀ (Im w) = 0;

that is, the real space spanned by the real and imaginary parts of z and the real space spanned by the real and imaginary parts of w are orthogonal with respect to the bilinear form induced by Z₀.
This lemma, which we shall prove after this discussion, means that we can concentrate on the index of Z₀ restricted to the realization of the generalized eigenspace E_λ ⊕ E_λ̄ for a properly complex eigenvalue λ = a + ib, b ≠ 0, and we can then assume that the characteristic polynomial of R is P(x) = (x² − 2ax + a² + b²)^k, for k the complex dimension of E_λ. Then we can change basis and write R = D + N, where N is nilpotent, D and N commute, and D is the realized (real) form of a complex diagonal matrix. By the linearity of the Helmholtz condition we can assume a = 0 (since the diagonal part satisfies the Helmholtz equation (4) automatically). Then, also by linearity, we can assume b = 1, so that e^{2πD} is the identity matrix. The following argument will then allow us to discard the nilpotent part of R: if R satisfies the Helmholtz condition for the given Z₀, then all of its powers, and therefore any analytic function of R, satisfy the Helmholtz condition with respect to the same Z₀.
In particular, S := e^{2πR} = e^{2π(D+N)} = e^{2πD} e^{2πN} = e^{2πN} satisfies the Helmholtz condition with respect to Z₀. But then 2πN is the matrix logarithm of S (which is well defined by the power series of log(1 + x) and nilpotency), and it also satisfies the Helmholtz condition. Thus Z₀N = N^⊤Z₀, and therefore also Z₀D = D^⊤Z₀. Now if v is an eigenvector of Z₀ with eigenvalue ρ, then the Helmholtz condition and D² = −Id imply that Dv is an eigenvector of Z₀ with eigenvalue −ρ; thus the eigenvalues of Z₀ come in pairs ρ, −ρ, and the index of Z₀ on this block is exactly k. It only remains to prove lemma 5.3. This lemma can be proven by using the work of Garcia [9] on symmetric operators with respect to conjugations in Hilbert spaces, which we describe briefly below in a way adapted to our situation.
Consider the complex space Cⁿ with an inner product ⟨·, ·⟩. An anti-linear operator C : Cⁿ → Cⁿ is said to be a conjugation if it is involutive (C² = Id) and isometric, which means ⟨z, w⟩ = ⟨Cw, Cz⟩ for all z, w ∈ Cⁿ. If we have a conjugation, we can define a bilinear form on Cⁿ by [z, w]_C = ⟨z, Cw⟩.
We say that two subspaces E₁, E₂ ⊂ Cⁿ are C-orthogonal if [z₁, z₂]_C = 0 for every z₁ ∈ E₁ and every z₂ ∈ E₂ (notation: E₁ ⊥_C E₂). Finally, we say that a linear operator T : Cⁿ → Cⁿ is C-symmetric if CT = T*C. We shall use the following result, adapted from [9]:

Theorem 5.4. Let T : Cⁿ → Cⁿ be a C-symmetric operator, and let η ≠ ζ be eigenvalues of T. Then ker(T − ηId)^{m₁} ⊥_C ker(T − ζId)^{m₂} for all m₁, m₂ ≥ 0. In particular, generalized eigenspaces of a C-symmetric operator corresponding to distinct eigenvalues are mutually C-orthogonal.

Now we can prove lemma 5.3.

Proof of Lemma 5.3. Since the condition Z₀R = R^⊤Z₀ is invariant under a change of basis, it is sufficient to prove the lemma in the case where the matrix of the bilinear form Z₀ in the canonical basis is given by Z₀ = diag(1, . . . , 1, −1, . . . , −1). Now for x + iy ∈ Cⁿ define C₀(x + iy) = Z₀x − iZ₀y. It is easy to check that C₀ is a conjugation with respect to the standard inner product on Cⁿ. By a direct computation, for every w₁, w₂ ∈ Cⁿ we have [w₁, w₂]_{C₀} = 0 if, and only if, both equations below are satisfied:

(Re w₁)^⊤ Z₀ (Re w₂) − (Im w₁)^⊤ Z₀ (Im w₂) = 0,
(Re w₁)^⊤ Z₀ (Im w₂) + (Im w₁)^⊤ Z₀ (Re w₂) = 0.   (34)

The extension R̃ of R to Cⁿ, that is, R̃(x + iy) = Rx + iRy, is a C₀-symmetric operator: this follows from the fact that R is a real matrix together with the commutativity condition Z₀R = R^⊤Z₀. Now if λ is an eigenvalue of R then λ̄ is also an eigenvalue of R, since R is a real matrix; and if u is a generalized eigenvector associated to λ, then ū is a generalized eigenvector associated to λ̄. By the definition of R̃, λ and λ̄ are also eigenvalues of R̃ : Cⁿ → Cⁿ. Denoting by Ẽ_λ and Ẽ_λ̄ the generalized eigenspaces of R̃ associated to λ and λ̄, respectively, we have, by hypothesis and by the discussion above, that z ∈ Ẽ_η, w ∈ Ẽ_ζ and w̄ ∈ Ẽ_ζ̄. Since η ≠ ζ and η ≠ ζ̄, by theorem 5.4 we have [z, w] = 0 and [z, w̄] = 0. The equations (34) applied to both pairs, together with Re(w̄) = Re(w) and Im(w̄) = −Im(w), yield the thesis:

(Re z)^⊤ Z₀ (Re w) = (Re z)^⊤ Z₀ (Im w) = 0,
(Im z)^⊤ Z₀ (Re w) = (Im z)^⊤ Z₀ (Im w) = 0.

5.3. Special curves in the Grassmannian.
In any Cartan-Klein geometry there are distinguished objects with special properties, such as flat objects, objects of constant curvature and, in the case of curves, orbits of one-parameter subgroups. For example, in the most elementary case of curves in 3-dimensional Euclidean space, these special curves are the straight lines (zero curvature and torsion) and the helices (which have constant curvature and torsion simultaneously and are also the orbits of one-parameter subgroups of the Euclidean group).
Viewing the Helmholtz conditions through the lens of Jacobi curves then opens another line of inquiry: determine, for each class of distinguished curves in the half-Grassmannian, whether or not they are Jacobi curves of linear variational problems.
Here we analyze this problem for each of the special curves mentioned above. In contrast with the case of curves in Euclidean space, in the half-Grassmannian the orbits of one-parameter subgroups of GL(2n) form a class that properly contains the curves of constant curvature, and the best way to describe the special curves is to consider them as different cases of parametrized orbits of one-parameter subgroups, as is done in section 5 of [2]: let ξ ∈ M_{2n}(R) = gl(2n) and let ℓ_0 ∈ Gr(n, 2n) be a fixed n-plane. Consider now the curve ℓ(t) = e^{tξ} · ℓ_0 given by the action of the one-parameter group e^{tξ} on Gr(n, 2n). In order for the curve ℓ(t) to be fanning, we need that R^{2n} = ℓ_0 ⊕ ξℓ_0; we assume this in the sequel.

5.3.1. Flat curves. This case is of course trivial, and we include it only for completeness: since the Schwarzian is zero, any multiplier Z_0 will satisfy the normal form (4) of the Helmholtz equations. Observe that the property of having zero Schwarzian can be computed directly from the matrix Schwarzian {A(t), t} = Q(t) − P(t)^2 − Ṗ(t).
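The fanning condition R^{2n} = ℓ_0 ⊕ ξℓ_0 is a rank condition that is easy to test in coordinates. A minimal numpy sketch for n = 2 (the matrix ξ below is an arbitrary illustrative choice, not the one used in the later example):

```python
import numpy as np

# l0 = span(e1, e2) in R^4, encoded by the 4x2 matrix of its basis vectors.
n = 2
A0 = np.vstack([np.eye(n), np.zeros((n, n))])

# Illustrative generator xi of a one-parameter subgroup of GL(4).
xi = np.array([[0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [-1.0, 0.0, 0.0, 0.0],
               [0.0, -2.0, 0.0, 0.0]])

# Fanning at t = 0: the 2n columns of [A0 | xi A0] must span R^{2n}.
F = np.hstack([A0, xi @ A0])
assert np.linalg.matrix_rank(F) == 2 * n
```

If the rank drops below 2n, the orbit ℓ(t) = e^{tξ}·ℓ_0 fails to be fanning at t = 0 and the constructions of this section do not apply.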

5.3.2. Curves with constant Schwarzian. Since in a normal frame the curvature R(t) ≡ R is constant, all the Sarlet-Engels-Bahar conditions reduce to the single condition of the existence of a symmetric multiplier Z_0 such that Z_0 R = R^T Z_0, and such a Z_0 always exists. These curves can be recognized in any frame by verifying that the covariant derivative (defined in section 4) of the Schwarzian vanishes, or, equivalently, that the first generalized commutativity condition holds for all t (see § 5 of [12]).
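The existence claim can be checked numerically: the solutions Z of Z R = R^T Z form a linear space closed under transposition, so one can extract a symmetric invertible solution from a null-space computation. A sketch with numpy, for an arbitrary illustrative R (not a matrix from the text):

```python
import numpy as np

R = np.array([[0.0, 1.0], [-2.0, 3.0]])   # illustrative constant curvature
n = R.shape[0]
I = np.eye(n)

# Column-major vec identities: vec(Z R) = (R^T kron I) vec(Z),
# vec(R^T Z) = (I kron R^T) vec(Z).  Solutions of Z R = R^T Z = null space of M.
M = np.kron(R.T, I) - np.kron(I, R.T)
_, s, Vt = np.linalg.svd(M)
null_rows = Vt[(s > 1e-10).sum():]        # basis of the solution space

rng = np.random.default_rng(1)
Z = None
for _ in range(100):
    v = rng.standard_normal(null_rows.shape[0]) @ null_rows
    cand = v.reshape(n, n, order='F')
    cand = (cand + cand.T) / 2            # the solution space is transpose-closed
    if abs(np.linalg.det(cand)) > 1e-8:   # generic combination is invertible
        Z = cand
        break

assert Z is not None
assert np.allclose(Z, Z.T)                # symmetric
assert np.allclose(Z @ R, R.T @ Z)        # multiplier condition
```

The symmetrization step is legitimate because transposing Z R = R^T Z shows that Z^T is again a solution, so (Z + Z^T)/2 stays in the solution space.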

5.3.3. Generic orbits of one-parameter groups. For generic parametrized orbits ℓ(t) = e^{tξ}·ℓ_0 which are fanning curves, the situation changes drastically: there are non-trivial algebraic obstructions to the existence of a multiplier. Theorem 5.5 shows that the inverse problem for generic orbits of one-parameter subgroups amounts to the study of the Helmholtz conditions for constant-coefficient equations; we present a Lie-algebraic approach in gl(2n), consistent with the geometric nature of this section.
First we need to summarize part of the proof of Theorem 5.5. Let A_0 be a basis of ℓ_0, which naturally induces a frame A(t) = e^{tξ}A_0 of ℓ(t); we are simply lifting the orbit from the half-Grassmannian to the space of frames. By the fanning condition, there exist n × n matrices P and Q such that Ä(0) + 2Ȧ(0)P + A(0)Q = 0, and by the properties of the exponential these constant matrices P and Q actually work for all t ∈ R:

Ä(t) + 2Ȧ(t)P + A(t)Q = 0,

which shows (2) in Theorem 5.5. The Schwarzian of this frame is the constant matrix R_0 = Q − P^2, and a normal frame associated to ℓ(t) can be given as B(t) = A(t)e^{−tP}; by the behavior of the Schwarzian under a change of frame, this gives (3) in Theorem 5.5, with the additional information that the matrix called Y there can be taken to be the matrix P above. In this normal frame, each Φ^{(k)}(t) is just the standard k-th derivative of R(t) = e^{tP}R_0 e^{−tP}, so that Φ^{(k)}(0) = [P, Φ^{(k−1)}(0)], and the Helmholtz conditions (4) are expressed in terms of these repeated commutators.
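The identity behind the repeated commutators is Ṙ(t) = [P, R(t)] for R(t) = e^{tP}R_0 e^{−tP}, which can be sanity-checked by finite differences. A small numpy sketch (P and R_0 below are arbitrary illustrative choices; expm is a truncated power series, adequate for these small matrices):

```python
import numpy as np

def expm(M, terms=30):
    # truncated power series for e^M; fine for the small matrices used here
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

P = np.array([[0.0, 1.0], [-1.0, 0.5]])   # illustrative
R0 = np.array([[1.0, 0.0], [0.0, 2.0]])   # illustrative constant Schwarzian

Rt = lambda t: expm(t * P) @ R0 @ expm(-t * P)

# Phi^(1)(0) = R'(0) = [P, R0], via a central difference
h = 1e-6
deriv = (Rt(h) - Rt(-h)) / (2 * h)
comm = P @ R0 - R0 @ P
assert np.allclose(deriv, comm, atol=1e-6)

# Phi^(2)(0) = R''(0) = [P, [P, R0]], via a second difference
h2 = 1e-3
second = (Rt(h2) - 2 * Rt(0.0) + Rt(-h2)) / h2**2
assert np.allclose(second, P @ comm - comm @ P, atol=1e-4)
```

This confirms numerically that in the normal frame the curvature derivatives Φ^{(k)}(0) are the iterated brackets ad_P^k(R_0).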
We will be manipulating P in order to find non-trivial conditions; thus we need to establish some kind of uniqueness of P so that our conclusions depend only on the curve. It is easy to see that if B(t) = A(t)X(t) is another frame representing the same curve ℓ(t), then B̈(t) + 2Ḃ(t)P̄ + B(t)Q̄ = 0 for constant matrices P̄, Q̄ if and only if X is constant, in which case P̄ = X^{−1}PX. Hence P is well defined once we choose a basis of ℓ(0).
Let us now introduce some notation. Given a symmetric invertible matrix Z which is a candidate for multiplier, define the Z-transpose of a matrix A by Â = Z^{−1}A^T Z. This, of course, represents in matrix form the transpose of a linear transformation with respect to the symmetric bilinear form Ẑ(u, v) = u^T Z v. The Z-transpose obeys all the rules that make manipulating the standard transpose feasible: it is a product-reversing involution, which in particular implies that it is an anti-homomorphism of the matrix Lie algebra, [A, B]^ = −[Â, B̂]; the Lie bracket of two Z-symmetric or two Z-antisymmetric matrices is Z-antisymmetric, and mixed Lie brackets are Z-symmetric.
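These algebraic rules of the Z-transpose are easy to confirm numerically. A quick sketch with randomly chosen matrices (all names below are local to the example):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
S = rng.standard_normal((n, n))
Z = S + S.T + 6 * np.eye(n)            # symmetric and safely invertible
Zinv = np.linalg.inv(Z)
hat = lambda A: Zinv @ A.T @ Z          # the Z-transpose

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
br = lambda X, Y: X @ Y - Y @ X         # Lie bracket

assert np.allclose(hat(hat(A)), A)                      # involution
assert np.allclose(hat(A @ B), hat(B) @ hat(A))         # product-reversing
assert np.allclose(hat(br(A, B)), -br(hat(A), hat(B)))  # anti-homomorphism

# parity rules: bracket of two Z-symmetric matrices is Z-antisymmetric,
# mixed brackets are Z-symmetric
S1, S2 = A + hat(A), B + hat(B)         # Z-symmetric
K = A - hat(A)                          # Z-antisymmetric
assert np.allclose(hat(br(S1, S2)), -br(S1, S2))
assert np.allclose(hat(br(K, S1)), br(K, S1))
```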
In this notation, the Sarlet-Engels-Bahar conditions at t_0 = 0 take the form Φ^{(j)}(0) = Φ^{(j)}(0)^, where Φ^{(0)}(0) = R and, as mentioned before, Φ^{(k)}(0) = [P, Φ^{(k−1)}(0)]. Thus, in order for a given Z to be a multiplier we must first have that Φ^{(0)}(0) = R is Z-symmetric. Decomposing P = P_s + P_a into its Z-symmetric and Z-antisymmetric parts, we must have, by the relationship between the Z-transpose and the Lie bracket described in the previous paragraph, that P_s commutes with R; then Φ^{(1)}(0) = [P, Φ^{(0)}(0)] = [P_a, R] is automatically Z-symmetric. By induction, we get the infinite set of conditions

[P_s, Φ^{(k)}(0)] = 0,   Φ^{(k)}(0) = ad_{P_a}^k(R),   k ≥ 0.

It is now easy to find examples that cannot have multipliers. Let ℓ_0 ⊂ R^4 be the plane defined by the vanishing of the last two coordinates, and let ξ ∈ gl(4) be taken in block companion form, so that the matrices P and Q are the constant coefficients appearing in (2) of Theorem 5.5; this is the advantage of taking ξ of this form. Now let ℓ(t) = e^{tξ}·ℓ_0. The curvature of ℓ(t) at t = 0 is given by the matrix Schwarzian R = Q − P^2 = diag(1, 2).
Since R is diagonal with distinct entries, any candidate multiplier Z must also be diagonal, say Z = diag(a, b) with ab ≠ 0. Taking the Z-symmetric and Z-antisymmetric parts of P and imposing the conditions above, it follows that Z must be a multiple of the identity, so that the Z-transpose is the standard transpose. Then

[P_s, [P_a, R]] = ( 0 1 ; −1 0 ) ≠ 0,

and therefore this curve in Gr(2, 4) is not variational. The example can clearly be generalized to give conditions for the existence of a multiplier in the case when the curvature at a point has real, distinct eigenvalues, in the spirit of [12], page 60.
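The first step of the argument, that a symmetric Z commuting with R = diag(1, 2) must itself be diagonal, can be illustrated directly (since Z R = R^T Z = R Z here, the condition kills the off-diagonal entries of Z):

```python
import numpy as np

R = np.diag([1.0, 2.0])

# a symmetric but non-diagonal candidate fails: (ZR - RZ)_{ij} = Z_{ij}(r_j - r_i)
Z = np.array([[0.3, 0.7], [0.7, -1.1]])
assert not np.allclose(Z @ R, R @ Z)

# any diagonal Z works for this first condition
Zd = np.diag([0.3, -1.1])
assert np.allclose(Zd @ R, R @ Zd)
```

The further obstruction [P_s, [P_a, R]] ≠ 0 then rules out the remaining diagonal candidates, as in the text.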
5.4. The geometry of the Sarlet-Engels-Bahar example. The paper [12], whose geometrization in terms of Jacobi curves motivates the present one, finishes with a nice example of application of the generalized commutativity conditions: the linear second-order system (35), whose coefficients ρ, α and β are arbitrary smooth functions of time. In this section we study the geometry of the Jacobi curve ℓ(t) in the half-Grassmannian Gr(2, 4) associated with the system (35) and show how the geometry of ℓ implies its variational character. Given a complex structure on R^4, we have a CP^1 ⊂ Gr(2, 4) consisting of all real 2-planes which are actually complex lines. The signs accompanying the coefficients ρ, α and β suggest some complex structure behind the scenes; indeed, for the standard complex structure J_0 on R^4 we have:

Proposition 4. The Jacobi curve ℓ(t) associated to the system (35) lies in CP^1 ⊂ Gr(2, 4).

Proof. A frame A(t) of the Jacobi curve ℓ(t) is given by a transposed set of fundamental solutions of (35), as in the proof of Theorem 3.1, normalized so as to satisfy A(0) = (1_{2×2}, 0_{2×2})^T and Ȧ(0) = (0_{2×2}, 1_{2×2})^T. Then ℓ(0) is a complex plane. Let 𝒜(t) = (A(t) | Ȧ(t)) be the standard juxtaposition. The curve ℓ(t) can be seen as the evolution of ℓ(0) by 𝒜(t), that is, ℓ(t) = 𝒜(t)ℓ(0); observe that 𝒜(0) is the identity. We claim that 𝒜^{−1}(t)𝒜̇(t) commutes with J_0, that is, it defines a complex-linear map with respect to the complex structure J_0.
Linear transformations commuting with J_0 are those composed of 2 × 2 blocks of the form ( a b ; −b a ). The frame A(t) satisfies the matrix second-order differential equation Ä(t) + 2Ȧ(t)P(t) + A(t)Q(t) = 0, with P(t) and Q(t) read off from the coefficients of (35), which shows that the juxtaposed frame satisfies

𝒜̇(t) = 𝒜(t) S(t),   S(t) = ( 0 −Q(t) ; 1_{2×2} −2P(t) ).   (36)

Since the coefficient matrix S(t) commutes with J_0 (its blocks, and hence the curvature R(t), commute with J_0), the initial condition 𝒜(0) = identity and equation (36) show our claim that 𝒜(t) defines a J_0-complex linear transformation. Since ℓ(0) is a complex line, so is ℓ(t) = 𝒜(t)ℓ(0). Proposition 4 is interesting because, for this standard complex structure and the canonical Euclidean inner product g_0 of R^4, we apparently have exactly the opposite of what we want: the Jacobi curve lies in the subset CP^1 ⊂ Gr(2, 4) of J_0-complex planes, whereas in view of Theorem 5.1 we want the curve to lie in the set of planes totally real with respect to J_0, i.e., the Lagrangian Grassmannian of ω_0 = g_0(J_0·, ·). However, the next proposition sets this straight:

Proposition 5. Let J be a complex structure on R^4, g a compatible Euclidean inner product (i.e., J is a g-isometry), and Ω = g(J·, ·) the associated symplectic form. Then there is an invertible linear transformation Σ : R^4 → R^4 inducing a diffeomorphism of Gr(2, 4) that takes the subset CP^1 of complex lines into the Ω-Lagrangian Grassmannian Λ_2 of totally real planes.
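The block description of J_0-complex-linear maps can be verified mechanically: assuming the convention J_0 = diag(J, J) with J the 2 × 2 rotation by π/2, any 4 × 4 matrix whose 2 × 2 blocks have the form ( a b ; −b a ) commutes with J_0:

```python
import numpy as np

# Assumed convention: J0 = diag(J, J), J the 2x2 rotation by pi/2.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
J0 = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])

def cblock(a, b):
    # 2x2 block representing the complex number a + ib
    return np.array([[a, b], [-b, a]])

# a 4x4 matrix built entirely from such blocks (entries are illustrative)
M = np.block([[cblock(1.0, 2.0), cblock(-0.5, 3.0)],
              [cblock(0.7, -1.2), cblock(2.0, 0.1)]])

assert np.allclose(M @ J0, J0 @ M)   # M is J0-complex linear
```

Blockwise, M J_0 and J_0 M agree because each 2 × 2 block of M commutes with J, which is exactly the identification R^4 ≅ C^2 underlying the proof.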
Then we have:

Corollary 1. There exists a symplectic form ω such that the Jacobi curve ℓ(t) associated to the system (35) lies in the ω-Lagrangian Grassmannian.
Proof. Apply the previous proposition together with Theorem 5.1.