USING LIE GROUP INTEGRATORS TO SOLVE TWO AND HIGHER DIMENSIONAL VARIATIONAL PROBLEMS WITH SYMMETRY

Abstract. The theory of moving frames has been used successfully to solve one dimensional (1D) variational problems invariant under a Lie group symmetry. In the one dimensional case, Noether's laws give first integrals of the Euler–Lagrange equations. In higher dimensional problems, the conservation laws do not enable the exact integration of the Euler–Lagrange system. In this paper we use the theory of moving frames to help solve, numerically, some higher dimensional variational problems which are invariant under a Lie group action. In order to find a solution to the variational problem, we need first to solve the Euler–Lagrange equations for the relevant differential invariants, and then solve a system of linear, first order, compatible, coupled partial differential equations for a moving frame, evolving on the Lie group. We demonstrate that Lie group integrators may be used in this context. We show first that the Magnus expansions on which one dimensional Lie group integrators are based may be taken sequentially in a well defined way, at least to order 5; that is, the exact result is independent of the order of integration. We then show that efficient implementations of these integrators give a numerical solution of the equations for the frame which is independent of the order of integration, to high order, in a range of examples. Our running example is a variational problem invariant under a linear action of $SU(2)$. We then consider variational problems for evolving curves which are invariant under the projective action of $SL(2)$, and finally the standard affine action of $SE(2)$.

1. Introduction. One dimensional (1D) variational problems with Lie group symmetries have been solved exactly by making use of moving frame theory (see for example the textbook [17] and references therein). The idea behind the method is to define a moving frame for the Lie group action, find a generating set of differential invariants, and then rewrite the Lagrangian in terms of the generating differential invariants and their derivatives. Using the results of [8, 10], one obtains directly the invariantised Euler–Lagrange equations, as well as a set of conservation laws given in terms of the frame. Once the Euler–Lagrange equations are solved for the invariants, the frame can be used to find the solution in terms of the original variables. For a 1D problem, Noether's laws yield algebraic equations for the frame, and these can be used to ease the integration problem for the minimising solution.
For higher dimensional problems, the laws do not in general lend themselves to finding exact solutions.
In this paper we reduce the problem of finding the minimiser to that of solving the Euler–Lagrange equations for the invariants, and then solving the compatible system of differential equations

(1)    $\dfrac{\partial \rho}{\partial x^i} = Q_i\,\rho, \qquad i = 1, \ldots, p,$

where $G$ is the Lie group, $\rho : M \to G$ is the moving frame, $\mathfrak{g}$ is the Lie algebra of $G$, and $Q_i : M \to \mathfrak{g}$ are the so-called curvature matrices. The curvature matrices depend on the invariants of the Lie group action, which are known as functions of the independent variables as soon as the Euler–Lagrange equations have been solved. We solve (1) by showing that the Magnus expansion solution for a single such equation may be applied sequentially to obtain a well-defined result, at least to order 5, provided the compatibility conditions

(2)    $\dfrac{\partial Q_i}{\partial x^j} - \dfrac{\partial Q_j}{\partial x^i} + [Q_i, Q_j] = 0, \qquad i, j = 1, \ldots, p,$

are satisfied.
In section 2 we present the basic concepts of the theory of moving frames which we will use in our application, specifically, the definitions of a moving frame, differential invariants, syzygies and curvature matrices, and describe how these are used to study a variational problem with a Lie group symmetry. Our running example is a linear action of $SU(2)$ on $\mathbb{C}^2$. A different approach to moving frame theory and its application to the Calculus of Variations can be found in [16].
Section 3 gives a summary of the main results concerning the Magnus expansion on which Lie group integrators are based, for a matrix ODE system evolving on a Lie group (see [1,4,13] for surveys on the topic, [7] for numerical software).
We then present the main result of this paper: that the Magnus expansion solution may be used to solve the compatible differential system (1) in the case $p = 2$, at least to order 5, in the neighbourhood of a point where the components of the curvature matrices are regular. We do this by showing that applying the expansion sequentially yields a result which is independent of the order in which the two differential equations are solved, to order 5. This then implies directly that for a set of $p$ pairwise compatible equations of the form (1), the Magnus expansion can be applied sequentially, with respect to each independent variable, yielding a well defined result, at least to order 5. We then demonstrate in a range of examples that an efficient implementation, [7], mirrors this result, to the relevant order of approximation.
Our running example is to find the minimiser of a 2D variational problem which is invariant under a linear action of $SU(2)$. We then consider some examples invariant under the projective action of $SL(2)$, and finally an example invariant under the standard affine action of $SE(2)$. Section 4 contains the numerical tests.
We conclude with a conjecture, that compatibility of the system (1) implies that the Magnus expansion may be used sequentially to obtain a well-defined result, to all orders, in the neighbourhood of a point where the components of the curvature matrices equal their Taylor series.
2. Moving frames and the Calculus of Variations. In this section we provide a brief introduction to Lie group actions and moving frames which suffices for our applications. Details for more general constructions can be found in the textbook [17] and references therein.
Let $G$ be a Lie group and $M$ a manifold. We say the smooth map $G \times M \to M$, $(g, z) \mapsto g \cdot z \in M$, is a left Lie group action if $g \cdot (h \cdot z) = (gh) \cdot z$ for all $g, h \in G$ and $z \in M$.
In our applications here, $M$ will be the jet bundle $J^n(X \times U)$, with coordinates $z = (x, u) = (x^1, \ldots, x^p, u^1, \ldots, u^q, \ldots, u^\alpha_K, \ldots)$, where $K = (k_1, \ldots, k_p) \in \mathbb{N}^p$, $k_1 + \cdots + k_p = |K| \le n$, and $u^\alpha_K$ denotes the derivative $\partial^{|K|} u^\alpha / \partial (x^1)^{k_1} \cdots \partial (x^p)^{k_p}$. In this case, we assume there is a Lie group action on the base space $X \times U$, and that the action on the remaining coordinates of $J^n(X \times U)$ is induced via the chain rule. Standing Assumption. We assume throughout that the independent variables are invariant under the Lie group action, $g \cdot x^i = x^i$. We have then that for all $K = (k_1, \ldots, k_p)$,

(3)    $g \cdot u^\alpha_K = \dfrac{\partial^{|K|}}{\partial (x^1)^{k_1} \cdots \partial (x^p)^{k_p}} \left( g \cdot u^\alpha \right).$

We discuss how to relax this assumption slightly at the end of this section.
Running Example. We take for our running example the linear action of the Lie group $G = SU(2)$ on $\mathbb{C}^2$. Throughout our running example, $\bar z$ denotes the complex conjugate of $z$. In this case, the general element of $G$ is given by

$g(\alpha, \beta) = \begin{pmatrix} \alpha & \beta \\ -\bar\beta & \bar\alpha \end{pmatrix}, \qquad |\alpha|^2 + |\beta|^2 = 1,$

and the linear action is given by matrix multiplication,

(4)    $\begin{pmatrix} \widetilde u \\ \widetilde v \end{pmatrix} = g \cdot \begin{pmatrix} u \\ v \end{pmatrix} = g(\alpha, \beta) \begin{pmatrix} u \\ v \end{pmatrix}.$

We take $(u, v)$ for our dependent variables. The induced action on the jet bundle coordinates is then

(5)    $\begin{pmatrix} g \cdot u_K \\ g \cdot v_K \end{pmatrix} = g(\alpha, \beta) \begin{pmatrix} u_K \\ v_K \end{pmatrix}.$

Given a left Lie group action $G \times M \to M$, a right moving frame is a map $\rho : M \to G$ such that $\rho(g \cdot z) = \rho(z)\, g^{-1}$ for all $g \in G$ and $z \in M$. Moving frames are often defined only locally, that is, on some domain contained in $M$, in terms of solutions of so-called normalisation equations of the form $\Phi(g \cdot z) = 0$, with as many independent equations in $\Phi = 0$ as the dimension of the Lie group. Conditions for the existence of the frame are then the same as those for the Implicit Function Theorem, needed to solve $\Phi(g \cdot z) = 0$ uniquely for $g = g(z)$. By an abuse of notation, we denote the neighbourhood where the frame is defined as $M$.
For the running example, a frame can be obtained from the normalisation equations $\widetilde u = 1$, $\widetilde v = 0$. Solving these for the group parameters, on the invariant set $u \bar u + v \bar v = 1$, yields

(6)    $\rho(u, v) = \begin{pmatrix} \bar u & \bar v \\ -v & u \end{pmatrix}.$
Consider now the so-called invariantisation map $z \mapsto I(z)$, given by $I(z) = \rho(z) \cdot z$. The right equivariance of the frame guarantees that $I(z)$ is an invariant. Indeed, we have for each $g \in G$ and each $z \in M$ that

$\rho(g \cdot z) \cdot (g \cdot z) = \rho(z) g^{-1} \cdot (g \cdot z) = \left( \rho(z) g^{-1} g \right) \cdot z = \rho(z) \cdot z = I(z).$
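For the running example, the equivariance identity above can be verified numerically. The sketch below is ours, not part of the paper's software: it assumes the action is matrix–vector multiplication by $g(\alpha, \beta)$, that $(u, v)$ lies on the invariant set $u\bar u + v\bar v = 1$, and uses the frame obtained from the normalisation equations $\widetilde u = 1$, $\widetilde v = 0$; all helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(rng):
    # Random g(alpha, beta) in SU(2): |alpha|^2 + |beta|^2 = 1.
    a = rng.normal(size=2) + 1j * rng.normal(size=2)
    alpha, beta = a / np.linalg.norm(a)
    return np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

def frame(z):
    # Right moving frame solving the normalisation equations
    # g.u = 1, g.v = 0 on the invariant set |u|^2 + |v|^2 = 1.
    u, v = z
    return np.array([[np.conj(u), np.conj(v)], [-v, u]])

z = rng.normal(size=2) + 1j * rng.normal(size=2)
z /= np.linalg.norm(z)                    # place (u, v) on the unit sphere
g = random_su2(rng)

I = frame(z) @ z                          # invariantisation: I(z) = rho(z).z
equiv = frame(g @ z) - frame(z) @ np.linalg.inv(g)   # rho(g.z) - rho(z) g^{-1}
print(I, np.linalg.norm(equiv))
```

Here $I(z) = (1, 0)$ and the equivariance residual vanishes to rounding error, exactly as the identity predicts.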
The invariantisation map is also denoted as $\iota(z)$ or $\bar\iota(z)$ in the literature. If $M$ has coordinates $z = (z_1, \ldots, z_m)$, then in coordinates we have $I(z) = (\rho(z) \cdot z_1, \ldots, \rho(z) \cdot z_m) = (I(z_1), \ldots, I(z_m)) = (I_1, \ldots, I_m)$, where this defines the $I_k$, $k = 1, \ldots, m$. The $I_k$ are known as the normalised invariants. For our application, where the action is on a jet bundle, the normalised invariants are denoted as $I^\alpha_K = \rho \cdot u^\alpha_K = g \cdot u^\alpha_K \big|_{g = \rho}$. Running Example (cont.). Since we know the induced action on each $u_K$, $v_K$, Equation (5), and we know the frame $\rho$ explicitly, given in Equation (6), it is simple to write down the normalised invariants. Specifically, they are

$I(u_K) = \bar u\, u_K + \bar v\, v_K, \qquad I(v_K) = -v\, u_K + u\, v_K.$

The normalised invariants play a strong role in the so-called calculus of invariants. The most important result is that, along with the invariant independent variables, they generate the algebra of invariants.
Theorem 2.1 (Replacement Rule). Let $G \times M \to M$ be a smooth left Lie group action and let $\rho : M \to G$ be a right moving frame for this action. Let coordinates on $M$ be given as $z = (z_1, \ldots, z_m)$ and let $I_k = \rho(z) \cdot z_k$, $k = 1, \ldots, m$, be the normalised invariants. Then for any invariant of the action, $F(z_1, z_2, \ldots, z_m)$, we have $F(z_1, z_2, \ldots, z_m) = F(I_1, I_2, \ldots, I_m)$.
Indeed, since $F(z) = F(g \cdot z)$ for all $g \in G$, we may set $g = \rho(z)$ to obtain $F(z_1, \ldots, z_m) = F(I_1, \ldots, I_m)$. For actions on jet bundles, the Replacement Rule proves that the algebra of differential invariants is generated by the $x^i$ and the $I^\alpha_K$, an infinite set. Since we assume that the independent variables $x^k$, $k = 1, \ldots, p$, are all invariant, the differential operators $\partial / \partial x^k$ are all invariant, and thus any derivative of an invariant is invariant. Taking this into account, we may obtain a finite number of generators of the algebra of invariants as follows.
Definition 2.2 (Curvature matrices). Let $G \times J^n(X \times U) \to J^n(X \times U)$ be a smooth left Lie group action, induced by an action on $X \times U$ which leaves each point of $X$ invariant. Let $\rho : M \subset J^n(X \times U) \to G$ be a right moving frame for this action. Suppose further that $G$ is a matrix Lie group. Then the matrices

(9)    $Q_i = \dfrac{\partial \rho}{\partial x^i}\, \rho^{-1}, \qquad i = 1, \ldots, p,$

where $\rho^{-1}$ is the group inverse of $\rho$, are known as the curvature matrices, and the non-constant components of these are known as the Maurer–Cartan invariants.
It is a standard result that at each point where the frame is defined, $Q_i \in \mathfrak{g}$, the Lie algebra of $G$, so that in effect $Q_i : M \subset J^n(X \times U) \to \mathfrak{g}$.
Running Example (cont.). We now suppose that $u$ and $v$ depend on some independent variables, which we denote here, for the purposes of our application, as $(x, t, \tau)$. We have that $Q_i = (\partial \rho / \partial x^i) \rho^{-1}$ for $x^i \in \{x, t, \tau\}$. Applying the Replacement Rule, noting that $I(u) = 1$ and $I(v) = 0$, we have that

$Q_x = \begin{pmatrix} \overline{I(u_x)} & \overline{I(v_x)} \\ -I(v_x) & I(u_x) \end{pmatrix}.$

From $u \bar u + v \bar v = 1$ we have also that $I(u_x)$ is pure imaginary, and that in fact, for $n \ge 1$, $Q_x : J^n(X \times U) \to \mathfrak{su}(2)$, the Lie algebra of $SU(2)$. Similar results hold for the other independent variables.
The curvature matrices yield important recurrence relations for the derivatives of the normalised invariants. The calculations in the running example are typical for linear actions; for generalisations to nonlinear actions or where the independent variables are not invariant, see the textbook, [17].
With this notation, if we differentiate both sides of the matrix equation $(I(u_K), I(v_K))^T = \rho\, (u_K, v_K)^T$ with respect to an independent variable, we obtain recurrence relations for the normalised invariants of the next order. We note that setting $K = (0, 0, 0)$ and knowing $I^u_{(0,0,0)} = I(u) = 1$ and $I^v_{(0,0,0)} = I(v) = 0$, this calculation essentially solves for $Q_i$ in terms of the symbolic, normalised invariants, using the fact that at each point of $J^1(X \times U)$, $Q_i \in \mathfrak{su}(2)$. This calculation shows further that $-I(u_x) = \overline{I(u_x)}$, $-I(u_t) = \overline{I(u_t)}$ and $-I(u_\tau) = \overline{I(u_\tau)}$, confirming that these are pure imaginary quantities. Finally, it is clear that the algebra of differential invariants is generated by the Maurer–Cartan invariants $I(u_x)$, $I(v_x)$, $I(u_t)$, $I(v_t)$, $I(u_\tau)$, $I(v_\tau)$ and their derivatives, together with the invariant independent variables.
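These facts about $Q_x$ can be illustrated numerically along a test curve (our sketch, under the same assumptions as before: linear action, frame $\rho = \big(\begin{smallmatrix} \bar u & \bar v \\ -v & u \end{smallmatrix}\big)$ on $u\bar u + v\bar v = 1$; the curve itself is hypothetical).

```python
import numpy as np

# Hypothetical test curve on |u|^2 + |v|^2 = 1, with analytic x-derivatives.
u  = lambda x: np.cos(x) * np.exp(1j * x)
v  = lambda x: np.sin(x) * np.exp(2j * x)
ux = lambda x: (-np.sin(x) + 1j * np.cos(x)) * np.exp(1j * x)
vx = lambda x: (np.cos(x) + 2j * np.sin(x)) * np.exp(2j * x)

def rho(x):
    # The frame of the running example: [[conj(u), conj(v)], [-v, u]].
    return np.array([[np.conj(u(x)), np.conj(v(x))], [-v(x), u(x)]])

x0, h = 0.4, 1e-6
# Q_x = rho_x rho^{-1}, computed by a central finite difference.
Qx = (rho(x0 + h) - rho(x0 - h)) / (2 * h) @ np.linalg.inv(rho(x0))

Iux = np.conj(u(x0)) * ux(x0) + np.conj(v(x0)) * vx(x0)   # I(u_x)
Ivx = -v(x0) * ux(x0) + u(x0) * vx(x0)                    # I(v_x)
claim = np.array([[np.conj(Iux), np.conj(Ivx)], [-Ivx, Iux]])
print(np.abs(Qx - claim).max(), Iux.real)
```

The entries of $Q_x$ match the normalised invariants to finite-difference accuracy, and $I(u_x)$ is pure imaginary, so $Q_x$ is traceless and anti-Hermitian, i.e. in $\mathfrak{su}(2)$.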
Our application to the invariant Calculus of Variations requires that we have already calculated the differential relations, or syzygies, satisfied by the Maurer–Cartan invariants. These arise from the identities

(10)    $\dfrac{\partial Q_i}{\partial x^j} - \dfrac{\partial Q_j}{\partial x^i} + [Q_i, Q_j] = 0,$

which follow directly from cross-differentiation of the equations defining the curvature matrices, Equation (9).
Running Example (cont.). For ease of exposition, and to align the notation with the general statements, we rename the Maurer–Cartan invariants in our running example as $\kappa_1, \ldots, \kappa_6$; the curvature matrices $Q_x$ and $Q_t$, written in terms of these, are given in (11). Equating components of Equation (10) then yields the syzygies (13) between the $\kappa_i$ and their derivatives. This is the form of the syzygies we will need in the sequel.
Finally, we show how these computations are used in our application, which is the invariant Calculus of Variations. We consider the problem of finding the extremising curve or surface where the Lagrangian is invariant under a Lie group action. We assume the independent variables are $x^i$, $i = 1, \ldots, p$, the dependent variables are $u^k$, $k = 1, \ldots, q$, and the generating differential invariants are given as $\kappa_i$, $i = 1, \ldots, r$, with derivatives denoted $\kappa_{i,K}$. We consider the Lagrangian

(16)    $\mathcal{L}[u] = \int L(\kappa_1, \ldots, \kappa_r, \ldots, \kappa_{i,K}, \ldots)\, \mathrm{d}x^1 \cdots \mathrm{d}x^p,$

where in any given example the number of arguments of the Lagrangian function $L$ is finite. In order to effect the variation, we introduce a new dummy independent variable $\tau$, which gives rise to a new set of normalised invariants, $I(u^\alpha_\tau) = \sigma^\alpha$, and syzygies

$\dfrac{\partial}{\partial \tau} \begin{pmatrix} \kappa_1 & \cdots & \kappa_r \end{pmatrix}^T = H \begin{pmatrix} \sigma^1 & \cdots & \sigma^q \end{pmatrix}^T,$

using the calculations described above. The matrix operator $H$ is always a linear matrix differential operator whose coefficients involve only the $\kappa_k$ and their derivatives. Then the result is that the Euler–Lagrange equations for the Lagrangian in Equation (16) are given by the components of [8, 9]

$0 = H^* \begin{pmatrix} E^{\kappa_1}(L) & \cdots & E^{\kappa_r}(L) \end{pmatrix}^T,$
where $H^*$ is the operator adjoint of $H$, and where

$E^{\kappa_j}(L) = \sum_K (-1)^{|K|} \dfrac{\partial^{|K|}}{\partial x^K} \dfrac{\partial L}{\partial \kappa_{j,K}}$

is the Euler operator with respect to the variable $\kappa_j$. It is a result that any syzygies between the $\kappa_j$ do not need to be included as constraints, as all terms in the corresponding Lagrange multipliers disappear. However, the syzygies do need to be included when solving the Euler–Lagrange system for the invariants, so as not to have an under-determined system.
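The operator adjoint is the formal adjoint obtained by integration by parts. As a small symbolic sketch (with hypothetical coefficient functions, not those of any example in this paper): for a scalar operator $H = a(x) D^2 + b(x) D + c(x)$, the adjoint is $H^* f = D^2(af) - D(bf) + cf$, which can be checked against the defining identity $\int f \cdot Hg\,\mathrm{d}x = \int H^*f \cdot g\,\mathrm{d}x$ for test functions whose boundary terms vanish.

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = x, 1 + x, x**2            # hypothetical coefficients of H

def H(g):
    # H = a(x) D^2 + b(x) D + c(x)
    return a * sp.diff(g, x, 2) + b * sp.diff(g, x) + c * g

def Hstar(f):
    # Formal adjoint from integration by parts: H* f = D^2(a f) - D(b f) + c f
    return sp.diff(a * f, x, 2) - sp.diff(b * f, x) + c * f

f = x**3 * (1 - x)**3               # vanish to third order at both endpoints,
g = x**4 * (1 - x)**3               # so every boundary term drops out
residual = sp.integrate(f * H(g) - Hstar(f) * g, (x, 0, 1))
print(sp.simplify(residual))        # 0
```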
Having solved the Euler–Lagrange equations for the invariants $\kappa_j$, there remains the question of what the extremising curves and surfaces are in the original dependent variables. If the $\kappa_j$ are known as functions of the independent variables, then the curvature matrices are also known. One can then solve the equations (9) in the form

(18)    $\dfrac{\partial \rho}{\partial x^i} = Q_i\, \rho, \qquad i = 1, \ldots, p,$

for $\rho = \rho(x)$, yielding its components as functions of the independent variables. Equations (18) are guaranteed to be compatible by (10), which is the necessary condition for a solution to exist. Finally, the extremising solution in terms of the original variables may be obtained as

(19)    $u^\alpha_K = \rho(x)^{-1} \cdot I^\alpha_K,$

where $\rho(x)^{-1}$ is the group inverse of $\rho(x)$ and the action is that appropriate for the jet bundle coordinate $u^\alpha_K$.
Running Example (cont.). We take a simple invariant Lagrangian, for which the Euler–Lagrange equations $0 = H^* \cdot (\cdots)^T$ must be solved together with the syzygies in Equation (13). This yields, provided $\kappa_1 \neq 0$, the equations (20) and (21), whose solutions involve arbitrary functions. The arbitrary functions can be fixed with some boundary conditions. Given the invariants $\kappa_i$, $i = 1, \ldots, 6$, the curvature matrices $Q_x$ and $Q_t$, (11), are known in terms of the original independent variables. The next step, which we will discuss in the next section, is to solve equations (18) in such a way that the solution is guaranteed to belong to the Lie group $G$. Once a solution for $\rho = \rho(x, t)$ has been found in some domain, the surfaces that minimise (16) are given, in that same domain, by

(22)    $\begin{pmatrix} u \\ v \end{pmatrix} = \rho(x, t)^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix},$

where $\rho(x, t)^{-1}$ is the group inverse of $\rho(x, t)$; that is, $u = \rho_{22}$, the $(2, 2)$ component of $\rho$, and $v = -\rho_{21}$, noting that $I(u) = 1$ and $I(v) = 0$. The initial data for $\rho$ is taken to be compatible with the given boundary data for $(u, v)$ and their derivatives, using (19) and (22) on the boundary.

Remark 1.
Our results extend readily to the case where the action on the independent variables is translation, $g \cdot x^i = x^i + a^i$, in which case (3) still holds, the operators $\partial / \partial x^i$ are still invariant, and the equations (10) still hold. The only real difference is that the Lagrangian cannot depend explicitly on the independent variables. The moving frame for the group parameter is taken to be $a^i = c^i - x^i$, where $c^i$ is constant, so that $I(x^i) = c^i$. We note that the choice of the $c^i$ can lead to more or less complicated expressions; they are often taken to be either zero or unity.

3. Lie group integrators.
In the previous section we saw that the moving frame $\rho$ is the solution of the compatible system (18), rewritten here for the two dimensional case as

(23)    $\dfrac{\partial \rho}{\partial x} = Q_x\, \rho,$

(24)    $\dfrac{\partial \rho}{\partial y} = Q_y\, \rho.$

Equations (23)–(24) are linear coupled PDEs which evolve on a Lie group. However, each equation contains only a derivative in a single direction. Hence, it is possible to solve each of them numerically using numerical schemes developed to solve ODEs on Lie groups: the so-called 'Lie group integrators'. In the following subsection we review the main facts concerning the theory of Lie group integrators. In-depth surveys can be found in [1, 4, 13].
3.1. Matrix ODEs. As our focus is on matrix Lie group actions, we will assume we are dealing with matrix Lie groups. Moreover, the matrices $Q_x$ and $Q_y$ depend only on the generating differential invariants of the action, and not on the moving frame itself. This means that equations (23)–(24) form a system of linear PDEs.
Suppose we have a matrix Lie group $G$ with Lie algebra $\mathfrak{g}$. We consider the initial value problem on $G$,

(25)    $Y'(t) = A(t)\, Y(t), \qquad Y(0) = Y_0,$

where $Y \in G$ and $A : \mathbb{R} \to \mathfrak{g}$. To solve the initial value problem (25) it is necessary to extend the exponential function to Lie algebras.

Definition 3.1 ([12]). If $\mathfrak{g}$ is the Lie algebra of a Lie group $G$, then the exponential map is defined as

$\exp(A) = \sum_{k=0}^{\infty} \dfrac{A^k}{k!}.$

It can be shown that the series $\exp(A)$ indeed maps into $G$. The exponential is one-to-one only in a neighbourhood of $0 \in \mathfrak{g}$; it is not globally one-to-one, nor surjective. When it does exist, the inverse function is known as the logarithm and denoted $\log$.
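A quick numerical check (ours, not part of the paper's software) that the exponential series maps the Lie algebra into the group: for $\mathfrak{su}(2)$, the exponential of a traceless anti-Hermitian matrix should be unitary with unit determinant.

```python
import numpy as np

def expm_series(A, nterms=40):
    # exp(A) = sum_k A^k / k!, truncated; ample accuracy for small ||A||.
    term = np.eye(A.shape[0], dtype=complex)
    out = term.copy()
    for k in range(1, nterms):
        term = term @ A / k
        out = out + term
    return out

# A traceless, anti-Hermitian matrix, i.e. an element of su(2).
A = np.array([[0.3j, 0.2 + 0.5j],
              [-0.2 + 0.5j, -0.3j]])

Y = expm_series(A)
print(np.allclose(Y.conj().T @ Y, np.eye(2)),   # Y is unitary
      np.isclose(np.linalg.det(Y), 1.0))        # det Y = exp(tr A) = 1
```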
The differential of $\exp(A(t))$, denoted $\operatorname{dexp}$, is given by

$\dfrac{\mathrm{d}}{\mathrm{d}t} \exp(A(t)) = \operatorname{dexp}_{A(t)}(A'(t))\, \exp(A(t)).$

Given $A \in \mathfrak{g}$, the adjoint map $\operatorname{ad}_A$ is defined as $\operatorname{ad}_A(C) = [A, C] = AC - CA$, with powers $\operatorname{ad}_A^0(C) = C$ and $\operatorname{ad}_A^{k+1}(C) = [A, \operatorname{ad}_A^k(C)]$. It can be proved [24] that $\operatorname{dexp}_A$ is an analytic function of $\operatorname{ad}_A$, namely

(27)    $\operatorname{dexp}_A = \sum_{k=0}^{\infty} \dfrac{\operatorname{ad}_A^k}{(k+1)!} = \dfrac{\exp(\operatorname{ad}_A) - \operatorname{id}}{\operatorname{ad}_A}.$

We follow [13] and read the ratio in the second equality of (27) in the sense of the power series

$\dfrac{\exp(z) - 1}{z} = \sum_{k=0}^{\infty} \dfrac{z^k}{(k+1)!},$

where $z$ is replaced by $\operatorname{ad}_A$. As $\operatorname{dexp}_A$ is an analytic function, we can invert it and write

(28)    $\operatorname{dexp}_A^{-1} = \dfrac{\operatorname{ad}_A}{\exp(\operatorname{ad}_A) - \operatorname{id}} = \sum_{k=0}^{\infty} \dfrac{B_k}{k!} \operatorname{ad}_A^k,$

where the $B_k$ are the Bernoulli numbers. This last equation should also be read as a power series, recalling that

$\dfrac{z}{\exp(z) - 1} = \sum_{k=0}^{\infty} \dfrac{B_k}{k!} z^k.$

We now state the fundamental result that lies behind the theory of Lie group integrators.

Lemma 3.3 ([18]). Consider the initial value problem on $G$ given in (25), and define $T_{\max}$ to be the supremum of the times $t$ for which $\int_0^{t} \|A(s)\|_2\, \mathrm{d}s < \pi$. Then, for every $t_0 \in (0, T_{\max})$, the solution of (25) on $[0, t_0]$ is given by $Y(t) = \exp(\Theta(t))\, Y_0$, where $\Theta(t) \in \mathfrak{g}$ is the solution of

(29)    $\Theta'(t) = \operatorname{dexp}_{\Theta(t)}^{-1}(A(t)), \qquad \Theta(0) = 0.$

3.2. The Magnus expansion. We are interested in using a class of numerical methods that goes under the name of 'Magnus expansion methods' [14]. This is a particular case of the Runge–Kutta–Munthe-Kaas methods developed in [19], [20], [21] and [22]. In order to solve (29), the method of Picard iteration is used, which relies on the concept of a uniformly Lipschitz continuous function [11]. We recall the following definitions.
Definition 3.4 ([11]). A function $f(t, y)$ is uniformly Lipschitz continuous in $y$ if there exists a constant $L > 0$ such that $\|f(t, y_1) - f(t, y_2)\| \le L \|y_1 - y_2\|$ for all $y_1$, $y_2$ and all $t$ under consideration.

Definition 3.5. Given the initial value problem

(30)    $y'(t) = f(t, y(t)), \qquad y(t_0) = y_0,$

the Picard iteration is defined as the sequence

$y^{[0]}(t) = y_0, \qquad y^{[m+1]}(t) = y_0 + \int_{t_0}^{t} f\left( s, y^{[m]}(s) \right) \mathrm{d}s, \qquad m = 0, 1, 2, \ldots$

The two definitions above play a central role in the Picard–Lindelöf theorem.

Theorem 3.6 (Picard–Lindelöf, [11]). Consider the initial value problem given by (30). If $f(t, y)$ is uniformly Lipschitz continuous in $y$ and continuous in $t$, then there exists $\varepsilon > 0$ such that there exists a unique solution to (30) on the interval $[t_0 - \varepsilon, t_0 + \varepsilon]$. Further, this solution is the limit of the Picard iterations.
As seen in (28), the inverse of $\operatorname{dexp}$ can be written as a series involving powers of the $\operatorname{ad}$ operator. Applying the Picard iteration to (29) yields, for $m = 0, 1, 2, \ldots$,

(31)    $\Theta^{[0]}(t) = 0, \qquad \Theta^{[m+1]}(t) = \int_0^t \operatorname{dexp}_{\Theta^{[m]}(s)}^{-1}(A(s))\, \mathrm{d}s.$

In our case, the matrix function $A(t)$ has no dependence on $Y$. Since it is assumed to be smooth, and hence continuous, in $t$, Picard's theorem can be applied to yield a unique local solution to (29), namely $\Theta(t) = \lim_{m \to \infty} \Theta^{[m]}(t)$. It can be seen [13] that it is possible to rearrange the terms in $\Theta$ as

(32)    $\Theta(t) = \sum_{i=0}^{\infty} H_i(t),$

where $H_i(t)$ comprises those terms involving precisely $i$ commutators and $i+1$ integrals. The expression defined in (32) is called the Magnus expansion.
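As a minimal sketch of a Lie group integrator built from a truncated Magnus expansion, consider the classical fourth-order, two-node Gauss–Legendre Magnus method for $Y' = A(t)Y$. This is a standard scheme from the literature, not the DiffMan algorithm used later; the coefficient $A(t)$ below is a hypothetical $\mathfrak{su}(2)$-valued function. Because each step is the exponential of a Lie algebra element, the numerical solution stays on $SU(2)$ to machine precision.

```python
import numpy as np

def expm(M, n=40):
    # Truncated exponential series; adequate for the small steps below.
    term = np.eye(len(M), dtype=complex); out = term.copy()
    for k in range(1, n):
        term = term @ M / k; out = out + term
    return out

def A(t):
    # Hypothetical su(2)-valued coefficient: traceless and anti-Hermitian.
    return np.array([[1j * np.cos(t), np.sin(t)],
                     [-np.sin(t), -1j * np.cos(t)]])

def magnus4_step(A, t, h, Y):
    # Fourth-order Magnus step on the two Gauss-Legendre nodes:
    # Theta = h (A1 + A2)/2 + h^2 sqrt(3)/12 [A2, A1].
    c = np.sqrt(3) / 6
    A1, A2 = A(t + (0.5 - c) * h), A(t + (0.5 + c) * h)
    Theta = h * (A1 + A2) / 2 + h**2 * np.sqrt(3) / 12 * (A2 @ A1 - A1 @ A2)
    return expm(Theta) @ Y          # Theta in su(2), so the step stays in SU(2)

Y, t, h = np.eye(2, dtype=complex), 0.0, 0.01
for _ in range(100):                # integrate Y' = A(t) Y from t = 0 to t = 1
    Y = magnus4_step(A, t, h, Y)
    t += h
print(np.allclose(Y.conj().T @ Y, np.eye(2)), np.isclose(np.linalg.det(Y), 1.0))
```

This structure preservation is the selling point of Lie group integrators: unitarity and unit determinant hold to rounding error, independently of the step size.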

3.3. Magnus expansion and coupled systems of PDEs. We now restrict to the two dimensional case, for simplicity. We are interested in applying the theory of Lie group integrators based on the Magnus expansion to solve 2D variational problems. Recall that we want to solve the system (23)–(24) in order to find the moving frame $\rho$. Equations (23)–(24) form a system of two linear matrix differential equations, to be solved in a suitable domain of $\mathbb{R}^2$, and we want the solution to belong to the Lie group $G$ at every point where it is defined. We also recall the compatibility condition (2) for (23)–(24) to have a solution. We denote this condition by $R$, that is,

(33)    $R = \dfrac{\partial Q_x}{\partial y} - \dfrac{\partial Q_y}{\partial x} + [Q_x, Q_y],$

which must be identically zero for the system to be compatible. We apply Lemma 3.3 to equations (23)–(24), obtaining the coupled system of differential equations

(34)    $\dfrac{\partial}{\partial x} \Theta^x = \operatorname{dexp}_{\Theta^x}^{-1}(Q_x), \qquad \Theta^x(x_0) = 0,$

(35)    $\dfrac{\partial}{\partial y} \Theta^y = \operatorname{dexp}_{\Theta^y}^{-1}(Q_y), \qquad \Theta^y(y_0) = 0.$

The method of Picard iteration is applied to each of the differential equations (34)–(35) to yield, for $n = 0, 1, 2, \ldots$, iterates as in (31), where the iterations of $\Theta^x$ and $\Theta^y$ solve equations (34) and (35) respectively. We use the superscripts $x$ and $y$ to denote the integrations in the $x$ and $y$ directions respectively. As in (32), we rearrange terms such that

$\Theta^x = \sum_{i=0}^{\infty} M^x_i, \qquad \Theta^y = \sum_{i=0}^{\infty} M^y_i,$

where $M^x_i$, $M^y_i$ comprise those terms containing exactly $i$ commutators and $i+1$ integrals.
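The compatibility condition (33) can be checked numerically for any candidate pair of curvature matrices. In the sketch below we manufacture a compatible pair from a known frame $\rho(x, y) = \exp(xB)\exp(yC)$ (a hypothetical example of ours, with $Q_x = \rho_x \rho^{-1}$ and $Q_y = \rho_y \rho^{-1}$), and verify that $R$ vanishes to finite-difference accuracy.

```python
import numpy as np

def expm(M, n=40):
    # Truncated exponential series, sufficient for these small matrices.
    term = np.eye(len(M), dtype=complex); out = term.copy()
    for k in range(1, n):
        term = term @ M / k; out = out + term
    return out

B = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0], [0.0, -1.0]])

def Qx(x, y):
    # rho(x, y) = exp(xB) exp(yC)  =>  Q_x = rho_x rho^{-1} = B.
    return B + 0j

def Qy(x, y):
    # Q_y = rho_y rho^{-1} = exp(xB) C exp(-xB).
    return expm(x * B) @ C @ expm(-x * B)

x0, y0, fd = 0.7, -0.2, 1e-5
dQx_dy = (Qx(x0, y0 + fd) - Qx(x0, y0 - fd)) / (2 * fd)
dQy_dx = (Qy(x0 + fd, y0) - Qy(x0 - fd, y0)) / (2 * fd)
bracket = Qx(x0, y0) @ Qy(x0, y0) - Qy(x0, y0) @ Qx(x0, y0)
R = dQx_dy - dQy_dx + bracket       # the compatibility residual (33)
print(np.linalg.norm(R))
```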

3.4. Magnus expansions commute up to order 5. We now show that the Magnus expansion, considered as an exact, albeit infinite, series solution, yields a well defined integration method for a system of the form (23)–(24), in the neighbourhood of a point $(x_0, y_0)$ at which the curvature matrices both have a Taylor series expansion. We consider the result obtained by sequential integration in the two different directions. We show, in fact, that the difference between the results obtained by changing the order of integration can be expressed in terms of a differential operator acting on the compatibility condition $R$, (33). The two different solutions are then the same, provided $R = 0$. While we show the result only to order 5, it is clear that the calculations may be continued to any order, albeit they become increasingly complex.

Definition 3.7. If $Q \in \mathfrak{g}$ is a matrix Lie algebra element, then $Q$ is of order $n$ in $h$ if each of its components is $O(h^n)$ as $h \to 0$.

In our calculations we will make strong use of the Baker–Campbell–Hausdorff (BCH) formula, which shows how two matrix exponentials may be multiplied to obtain a single matrix exponential. Although we will use a truncated BCH expansion up to order 5, a recursive formula determining every term has been proved by Dynkin [6]:

$\log\left( \mathrm{e}^X \mathrm{e}^Y \right) = \sum_{n=1}^{\infty} \dfrac{(-1)^{n-1}}{n} \sum_{\substack{r_i + s_i > 0 \\ 1 \le i \le n}} \dfrac{[X^{r_1} Y^{s_1} X^{r_2} Y^{s_2} \cdots X^{r_n} Y^{s_n}]}{\left( \sum_{i=1}^{n} (r_i + s_i) \right) \prod_{i=1}^{n} r_i!\, s_i!},$

where $[X^{r_1} Y^{s_1} \cdots X^{r_n} Y^{s_n}]$ denotes the right-nested commutator containing $r_1$ copies of $X$, followed by $s_1$ copies of $Y$, and so on.

Theorem 3.9. Let $(x_0, y_0)$ be a point in the domain of the moving frame at which the curvature matrices have a (local) Taylor series expansion. Then, in a neighbourhood of this point, the Magnus expansion may be used sequentially to yield a well-defined solution of the compatible system (23)–(24), to order at least 5.
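The truncated BCH expansion can be sanity-checked numerically: for small matrices, the truncation of $\log(\mathrm{e}^X \mathrm{e}^Y)$ after the order-4 terms should differ from the true product by terms of order 5, and should be far more accurate than the order-1 truncation $X + Y$. The matrices below are arbitrary small test data of ours.

```python
import numpy as np

def expm(M, n=40):
    term = np.eye(len(M), dtype=complex); out = term.copy()
    for k in range(1, n):
        term = term @ M / k; out = out + term
    return out

def br(P, Q):                       # the commutator [P, Q]
    return P @ Q - Q @ P

X = 0.01 * np.array([[0, 1], [-1, 0]], dtype=complex)
Y = 0.01 * np.array([[1, 2], [0, -1]], dtype=complex)

# BCH series for log(exp(X) exp(Y)), truncated after the order-4 terms.
bch4 = (X + Y + br(X, Y) / 2
        + br(X, br(X, Y)) / 12 - br(Y, br(X, Y)) / 12
        - br(Y, br(X, br(X, Y))) / 24)

err4 = np.linalg.norm(expm(X) @ expm(Y) - expm(bch4))
err1 = np.linalg.norm(expm(X) @ expm(Y) - expm(X + Y))
print(err4, err1)                   # err4 is smaller by several orders
```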
Proof. Consider a rectangular neighbourhood of $(x_0, y_0)$, given by $[x_0, x_0 + h] \times [y_0, y_0 + k]$, where $h, k \in \mathbb{R}$ are sufficiently small that $[x_0, x_0 + h] \times [y_0, y_0 + k]$ lies within the domain of validity of the Taylor series of the curvature matrices. In order to have a well-defined solution, we need to prove that if we start from the initial datum $\rho_0 = \rho(x_0, y_0)$, then we obtain a unique expression for $\rho(x_0 + h, y_0 + k)$, regardless of the order of integration, that is, regardless of whether we integrate first with respect to $x$ or with respect to $y$. Let us consider two paths, $\gamma_1$ and $\gamma_2$, both starting at $(x_0, y_0)$ and ending at $(x_0 + h, y_0 + k) = (x_1, y_1)$, where $\gamma_1$ first goes to $(x_0, y_1)$ and then to $(x_1, y_1)$, while $\gamma_2$ travels first to $(x_1, y_0)$ before going to $(x_1, y_1)$ (see Figure 1). We compute the solution $\rho(x_1, y_1)$ along the two paths and compare the results. We call $\rho_{\gamma_1}(x_1, y_1)$ and $\rho_{\gamma_2}(x_1, y_1)$ the solution $\rho(x_1, y_1)$ obtained along $\gamma_1$ and $\gamma_2$ respectively. To make the calculations tractable, we approximate the solutions $\rho_{\gamma_1}$ and $\rho_{\gamma_2}$ to order five.
The first step is to apply the BCH formula to expand $\log(\rho_{\gamma_1}(x_1, y_1)\, \rho_0^{-1})$ in terms of $\Theta^x$ and $\Theta^y$, up to higher order terms ('h.o.t.'). The expansion for $\log(\rho_{\gamma_2}(x_1, y_1)\, \rho_0^{-1})$ is analogous. The second step is to express $\Theta^x(y_1)$ and $\Theta^y(x_1)$ as Taylor polynomials around $y_0$ and $x_0$ respectively. The third step is to expand the integrand functions inside $\Theta^x(y_0)$ and $\Theta^y(x_0)$, that is, $Q_x(\xi, y_0)$ and $Q_y(x_0, \xi)$, around $x_0$ and $y_0$ respectively, as Taylor polynomials up to order 5. The coefficients of this Taylor expansion are functions of the curvature matrices $Q_x$ and $Q_y$ and their partial derivatives evaluated at $(x_0, y_0)$. After this step, it becomes trivial to compute the integrals, as they are polynomial in the dummy variables of integration $\xi$, $\xi_1$ and $\xi_2$, and to collect terms of each order.
In this way, the right hand side of (38), the difference of the two logarithms, may be written down in terms of $Q_x$ and $Q_y$ and their partial derivatives, all evaluated at $(x_0, y_0)$.
The final step is to write this resulting expression in terms of the compatibility expression $R$ defined in (33) and its partial derivatives, all evaluated at the arbitrary initial point $(x_0, y_0)$. We summarise the result in Table 1, noting that the coefficient of $h^n k^m$ can be obtained from that of $h^m k^n$ by interchanging $x$ and $y$. It can be seen that every coefficient is a differential expression in $R$ which is identically zero when $R$ is zero, and hence the right hand side of (38) is zero. This ends the proof.
It can be seen that the calculations become increasingly complex as the order increases. [Table 1 near here: the coefficient of each $h^n k^m$ as a differential expression in $R$; representative entries include $-\tfrac{1}{24}\operatorname{ad}_{Q_y}\!\big(\operatorname{ad}_{\partial_y Q_x}(R)\big)$, $-\tfrac{1}{24}\operatorname{ad}_{\partial_x Q_y}\!\big(\operatorname{ad}_{Q_x}(R)\big)$ and $\tfrac{1}{8}\operatorname{ad}_{[\partial_x Q_y,\, Q_x]}(R) + \tfrac{1}{8}\operatorname{ad}_{[Q_y,\, \partial_x Q_x]}(R)$.] While obtaining a recursive expression for these coefficients seems out of reach, it nevertheless seems reasonable to conjecture that the result holds to every order. Of interest is the emergence of an operator acting on $R$ at every order, which combines differential and $\operatorname{ad}$ operators, both of which are derivations acting on the free Lie algebra generated not only by the curvature matrices but also their derivatives. Understanding the structure of the sequence of operators acting on $R$, as exhibited in Table 1, is an open problem.

4. Numerical examples.
We showed in the previous section that the Magnus expansions commute at least up to order 5. This suggests that the Lie group integrators based on the Magnus expansion may also commute to some related order, and we investigate some simple examples. We consider four variational problems and, in order to solve the system of coupled matrix PDEs for the frame, we use a sixth-order Magnus series method which is included in the Matlab package DiffMan ([7], Algorithm A.2.5). This numerical scheme is cost efficient [2, 3, 15], which means that not all the terms in the Magnus expansion are used in the calculations. Moreover, the algorithm numerically approximates integrals using a Gauss–Legendre scheme. Further research needs to be done in order to understand fully how the compatibility condition can be used to prove, to some order, a result like Theorem 3.9 for the solvers implemented in DiffMan. However, as we will see in the numerical examples in this section, neither the omission of some terms in the name of efficiency, nor the replacement of exact integration by quadrature, appears to affect unduly the numerical compatibility.
In the following, we first find a simple exact solution of the Euler–Lagrange equations, which may readily be used to give the components of the curvature matrices $Q_i$ in the software. We then solve for the frame using two different methods:
1. integrating first with respect to $y$ along the line $x = x_0$, and then, for $j = 0, \ldots, n$, using the point $\rho(x_0, y_j)$ as the initial condition for the integration with respect to $x$ along the line $y = y_j$;
2. integrating first with respect to $x$ along the line $y = y_0$, and then, for $j = 0, \ldots, n$, using the point $\rho(x_j, y_0)$ as the initial condition for the integration with respect to $y$ along the line $x = x_j$;
and we compare the solutions obtained. Finally, we use (19) to plot the minimiser, given the frame, for completeness.
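The two integration orders can be illustrated with the manufactured compatible pair from Section 3.3 ($Q_x = B$ constant, $Q_y(x) = \mathrm{e}^{xB} C \mathrm{e}^{-xB}$; a hypothetical example of ours, not one of the paper's four test problems). Because each leg's curvature matrix is constant along that leg, the exponential one-step flow is exact, and the two sweep orders agree to machine precision; with a truncated Magnus step they would agree only to the order of the expansion.

```python
import numpy as np

def expm(M, n=40):
    term = np.eye(len(M), dtype=complex); out = term.copy()
    for k in range(1, n):
        term = term @ M / k; out = out + term
    return out

B = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0], [0.0, -1.0]])
Qy = lambda x: expm(x * B) @ C @ expm(-x * B)   # compatible with Q_x = B

n = 20
h = 1.0 / n
rho0 = np.eye(2, dtype=complex)

# Method 1: y-sweep along x = 0, then x-sweep along y = 1.
rho_a = rho0
for _ in range(n):
    rho_a = expm(h * Qy(0.0)) @ rho_a
for _ in range(n):
    rho_a = expm(h * B) @ rho_a

# Method 2: x-sweep along y = 0, then y-sweep along x = 1.
rho_b = rho0
for _ in range(n):
    rho_b = expm(h * B) @ rho_b
for _ in range(n):
    rho_b = expm(h * Qy(1.0)) @ rho_b

print(np.linalg.norm(rho_a - rho_b))   # the paths agree: the system is compatible
```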
We first conclude our running example.
Running Example (cont.). Recall that in Section 2 we considered the linear action of $SU(2)$ on a pair of complex surfaces $u(x, t)$ and $v(x, t)$; we consider the extremal surfaces to describe an evolving curve $x \mapsto (u(x, t), v(x, t))$. The Lagrangian considered was as above, with $D$ the square $[0, 1] \times [0, 1]$. A simple exact solution to the system (20), (21), with $f(t) = t^3$ and with the stated boundary condition, is readily written down. Since we do not impose initial data for $(u, v)$, we may take a randomly chosen initial condition, (43), for the moving frame.
We use DiffMan to solve the system for the moving frame in two different ways: first by solving the equation $\rho_x = Q_x \rho$ along $t = 0$ for $\rho(x, 0)$, and then solving the equation $\rho_t = Q_t \rho$ with $\rho(x, 0)$ as the initial data; and second, by reversing the order of the integrations. In order to keep the number of plots low (there are 4 surfaces corresponding to the real and imaginary parts of $u$ and $v$, and 4 plots related to the difference between each surface computed along the two paths), we show in Figure 2 the 2-norm of the difference of the two computed moving frames. The step sizes in the $x$ and $y$ directions were chosen to be $h = k = 0.01$. It can be seen that the two possible solutions of the equations for the frame coincide at least up to order 5.
Once the frame has been computed on some domain, we may use (22) to obtain the extremal solution on the same domain. In Figure 3 we plot the imaginary component of $u$; the three other possible plots are similar.

4.1. Examples using the projective action of $SL(2)$. The action is given by

$g \cdot x = x, \qquad g \cdot y = y, \qquad g \cdot u = \dfrac{au + b}{cu + d}, \qquad g = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad ad - bc = 1.$

This action and its use in the Calculus of Variations is studied in complete detail in [10, 17]. For convenience, we record here the information needed to complete the calculations.
Given the frame $\rho$ defined by the normalisation equations $g \cdot u = 0$, $g \cdot u_x = 1$ and $g \cdot u_{xx} = 0$, the generating differential invariants are denoted $\kappa$ and $\sigma$, and the two curvature matrices $Q_x$ and $Q_y$ are known in terms of these. Introducing a dummy variable $\tau$ to effect the variation yields the new invariant $\omega = u_\tau / u_x$ and the syzygies (45). The invariantised Euler–Lagrange equation is given in [9]. Finally, the equations for the moving frame $\rho$ are

(48)    $\partial_x \rho = Q_x \rho, \qquad \partial_y \rho = Q_y \rho, \qquad \rho(x_0, y_0) = \rho_0, \qquad (x, y) \in [x_0, x_1] \times [y_0, y_1].$

We now consider two different Lagrangians. Our aim here is to investigate the numerical compatibility of the Lie group integrator in some simple examples. Therefore, the region $D$ for this example and the ones that follow has been chosen such that it is possible to compute the solution in a reasonable time, and such that the solution itself is well defined over the whole domain. Further, the boundary and initial conditions in the following examples have been chosen so that the existence of a solution is guaranteed and the computations are tractable. For the first Lagrangian, $D$ is the square $[3, 4] \times [3, 4]$ and we choose step sizes equal in both directions, $h = k = 0.01$. Solving the Euler–Lagrange equation, imposing that $\sigma(1, y) = y$, we obtain the solution (51)–(52). Inserting (51) and (52) into (48), adding an initial condition, and integrating using the two methods described above, we obtain two surfaces, identical to the naked eye, shown in Figure 4. A plot of the absolute difference between the two surfaces is shown in Figure 5. We can see in this case that the point-wise difference of the two surfaces plotted in Figure 4 is of order at least 7 in $h$, $k$. For the second Lagrangian, $D$ is the square $[1, 2] \times [1, 2]$, again with step sizes $h = k = 0.01$. The Euler–Lagrange equation becomes

$\sigma_{xxxxx} + 2 \sigma \sigma_{xxx} + \sigma_x \sigma_{xx} = 0,$

and we notice that all summands in the differential equation above contain one factor with at least a second order derivative in $x$.
So a simple exact solution is

σ(x, y) = x − y.  (54)

Now we can substitute the expression for σ into the syzygy equation (45), obtaining an equation for κ,

κ_xxx + (2x − 2y) κ_x + κ + 1 = 0,  (55)

and if we impose the conditions (56), we obtain a solution in terms of the Airy functions of the first and second kind (and their first derivatives). Inserting (54) and the solution to (55)-(56) into (48), adding an initial condition and integrating as we described in 1-2 above, we obtain the two surfaces shown in Figure 6. A plot of the absolute difference between the two surfaces is given in Figure 7; in this example, too, the difference between the two surfaces is small.
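The claims above are easy to verify symbolically. The following sympy sketch (our check, not the paper's code) confirms that σ = x − y solves the Euler–Lagrange equation, that κ = −1 is a particular solution of (55), and that squares of Airy functions of the rescaled argument z = c(x − y), with c³ = −1/2, solve the homogeneous part of (55); the constant c is our reconstruction, consistent with the Airy-function solution quoted in the text.

```python
import sympy as sp

x, y = sp.symbols('x y')

# (a) sigma = x - y solves sigma_xxxxx + 2 sigma sigma_xxx + sigma_x sigma_xx = 0.
sigma = x - y
EL = sigma.diff(x, 5) + 2 * sigma * sigma.diff(x, 3) + sigma.diff(x) * sigma.diff(x, 2)
assert sp.simplify(EL) == 0

# (b) kappa = -1 is a particular solution of (55).
ode = lambda k: sp.diff(k, x, 3) + (2 * x - 2 * y) * sp.diff(k, x) + k + 1
assert sp.simplify(ode(sp.Integer(-1))) == 0

# (c) Products of Airy functions of z = c (x - y), c**3 = -1/2, solve the
#     homogeneous part of (55); here we check Ai(z)**2.
c = -sp.Rational(1, 2) ** sp.Rational(1, 3)
z = c * (x - y)
k_h = sp.airyai(z) ** 2
assert sp.expand(ode(k_h) - 1) == 0   # homogeneous part: drop the "+1"
```

Since the solution space of the homogeneous third-order equation is spanned by products of Airy functions, the same check passes for Ai(z)Bi(z) and Bi(z)².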

4.2. An example using the standard action of SE(2). We end this section with a numerical example involving an action of SE(2) = SO(2) ⋉ R² on parametrised surfaces (s, t) → (x(s, t), u(s, t)). In many applications, we consider (x(s, t), u(s, t)) as an evolving curve, (x(s), u(s)), in the (x, u) plane. In this case, it is common to take s to be arc length. Here, we achieve this, noting that the compatibility condition will not take the form (33).
The action is given by

g·x = x cos θ − u sin θ + a,  g·u = x sin θ + u cos θ + b,

where (θ, a, b) ∈ R³.
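In computations it is convenient to realise SE(2) by 3×3 matrices acting on homogeneous coordinates (x, u, 1)^T, which matches the reconstruction formula used below. A minimal sketch (the rotation-then-translation convention here is our assumption):

```python
import numpy as np

def se2(theta, a, b):
    """An element of SE(2) as a 3x3 matrix acting on (x, u, 1)^T:
    rotate by theta, then translate by (a, b)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, a],
                     [s,  c, b],
                     [0.0, 0.0, 1.0]])

g = se2(0.3, 1.0, -2.0)
p = np.array([0.5, 0.25, 1.0])   # the point (x, u) in homogeneous form
q = g @ p                        # the transformed point (g.x, g.u, 1)

# The last row keeps the homogeneous coordinate equal to 1, and the point
# is recovered by applying the inverse group element, exactly as in the
# reconstruction (x, u, 1)^T = rho^{-1} (0, 0, 1)^T used later.
assert np.isclose(q[2], 1.0)
assert np.allclose(np.linalg.inv(g) @ q, p)
```

Composition of group elements is then ordinary matrix multiplication.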
Moving frames for this and related actions, and their use in the Calculus of Variations, are well studied; see [8,17]. For convenience, we record here the information we need. Given the normalisation equations, the frame is obtained. The normalisation equations give ρ·x = I(x) = 0 and, similarly, I(u) = 0 and I(u_s) = 0, while ρ·x_s = I(x_s) = (x_s² + u_s²)^{1/2}. Calculating the curvature matrices and applying the Replacement Rule yields matrices in terms of the invariants

ρ·u_ss = κ_1,  ρ·x_s = κ_2,  ρ·u_t = κ_3,  ρ·x_t = κ_4.
Calculating the syzygies from the compatibility condition yields I(u_st) = κ_{3,s} + κ_1 κ_4 / κ_2, and therefore the generating invariants are κ_i, i = 1, …, 4, together with the invariant independent variables. The famous invariant of this action, the Euclidean curvature, can be expressed as κ_1 κ_2^{−3}. It is usual to set κ_2 = (x_s² + u_s²)^{1/2} = 1 to fix the parametrisation and ease the calculations.
Setting κ_2 = 1, the syzygies for our invariants are κ_{4,s} = κ_1 κ_3, together with the remaining syzygy. In order to effect the variation, we introduce a dummy invariant independent variable, τ. We obtain two new invariants, σ_1 = ρ·u_τ and σ_2 = ρ·x_τ, and the syzygy operator H needed to calculate the Euler–Lagrange equations follows, where we have set κ_2 = 1 in H.

Consider the Lagrangian (63),
where D = [1, 2] × [1, 2] and λ is a Lagrange multiplier for the constraint κ_2 = 1. Given (63), the system to be solved consists of the two Euler–Lagrange equations for the invariants and their syzygies, which in this case is

κ_1 λ − κ_{1,sstt} = 0
λ_s − κ_1 κ_{1,stt} = 0
κ_{4,s} − κ_1 κ_3 = 0
κ_{3,ss} + κ_{1,s} κ_4 + κ_1² κ_3 − κ_{1,t} = 0.  (64)

A simple exact solution to (64) is

κ_1 = −4(s + t)^{−1},  λ = 24(s + t)^{−4},  κ_3 = s + t + sin(4 ln(s + t)) + cos(4 ln(s + t))  (65)

and

κ_4 = cos(4 ln(s + t)) − sin(4 ln(s + t)) + 1 − 4(s + t).  (66)

Substituting (65)-(66) into (59)-(60), we solve the system (48) using the procedure described above, with a constant step size in both directions equal to h = k = 0.01. A plot of the 2-norm of the difference of the two moving frames obtained in this way can be found in Figure 8. From the plot it can be seen that, in this case too, our theoretical result is mirrored in the numerical result.

Once the frame has been computed, recall that the minimisers are given by

(x(s, t), u(s, t), 1)^T = ρ(s, t)^{−1} (I(x), I(u), 1)^T = ρ(s, t)^{−1} (0, 0, 1)^T,

where the right-hand side is determined by the first two of the normalisation equations, I(x) = ρ·x = 0 and I(u) = ρ·u = 0, which define the frame. A plot of the minimiser, as an evolving curve (t, x(s, t), u(s, t)), is provided in Figure 9.

5. Conclusion. In this paper, we have shown that Magnus expansions may be used to solve the system of equations for a moving frame, (1), which evolves on a Lie group, in the case where the base space has two dimensions. Our result extends immediately to the system of equations for a moving frame on a p-dimensional base space, p ≥ 2, as these equations are pairwise compatible. Our method can, in principle, be applied to any variational problem with a Lie group symmetry:
1. where the Lie group action leaves the independent variables invariant or acts by translation on them, so that the invariant differential operators are the standard, commuting operators;
2.
which can be described and analysed using a Lie group based moving frame; and
3. for which the solutions of the Euler–Lagrange equations lead to smooth curvature matrices.
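Returning to the SE(2) example, the exact solution (65)-(66) of the system (64) can be verified symbolically. The following sympy sketch (our check, not the paper's code) confirms that all four equations vanish identically:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
L = sp.log(s + t)

# The exact solution (65)-(66).
k1  = -4 / (s + t)
lam = 24 / (s + t) ** 4
k3  = s + t + sp.sin(4 * L) + sp.cos(4 * L)
k4  = sp.cos(4 * L) - sp.sin(4 * L) + 1 - 4 * (s + t)

# The four equations of the system (64).
eqs = [
    k1 * lam - k1.diff(s, 2, t, 2),                        # kappa_1 lambda - kappa_{1,sstt}
    lam.diff(s) - k1 * k1.diff(s, 1, t, 2),                # lambda_s - kappa_1 kappa_{1,stt}
    k4.diff(s) - k1 * k3,                                  # the syzygy kappa_{4,s} = kappa_1 kappa_3
    k3.diff(s, 2) + k1.diff(s) * k4 + k1**2 * k3 - k1.diff(t),
]
assert all(sp.simplify(e) == 0 for e in eqs)
```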
We have applied our result to find, numerically, simple extremal solutions for variational problems which are invariant under a linear action of SU(2), the projective action of SL(2), or the affine action of SE(2). Cost-efficient Lie group integrators [2,3,15] reduce the number of commutators involved in the numerical computation, and the implementation we have used, [7], takes advantage of these ideas. The precise interplay between compatibility and efficiency is a topic for further study. Further, the use of Lie group integrators to compute the frame from numerical solutions of the Euler–Lagrange equations will depend on whether or not they may take as input numerical coefficients in the curvature matrices Q_i.
While we have shown that the Magnus expansions are compatible to order 5, it is clear that our proof of the compatibility (38) could be continued to higher orders. However, the calculations become less and less tractable, and there is no clear, discernible, recursive pattern. The infinite set of operators acting on the compatibility condition R, involving not only the curvature matrices Q_i but also their derivatives, appearing in Table 1, seems to be new. We conclude by stating the general result as a conjecture.

Conjecture 1. The Magnus expansions for compatible systems will commute to all orders; that is, the right-hand side of (38) is identically zero to all orders of h, k.
We may state the conjecture more precisely: the right-hand side of (38) is a differential operator acting on the compatibility condition R (given in (33)), and therefore must be identically zero for the Magnus expansions to commute.
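A small numerical experiment consistent with the conjecture (a sketch, not a proof, and not the paper's implementation; the frame ρ(x, y) = exp(p(x, y)A) exp(yB) and the choices of p, A, B are ours): we build a compatible but non-commuting pair of curvature matrices from a known frame, integrate the frame equations sequentially with the midpoint (order-2 Magnus) exponential in both possible orders, and compare the results at the far corner.

```python
import numpy as np
from scipy.linalg import expm

# A compatible but non-commuting pair of curvature matrices, built from an
# exact frame rho(x, y) = expm(p(x, y) A) @ expm(y B), so that
#   rho_x = Qx rho with Qx = p_x A, and
#   rho_y = Qy rho with Qy = p_y A + expm(p A) B expm(-p A).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
p = lambda x, y: x * (1.0 + y)

def Qx(x, y):
    return (1.0 + y) * A

def Qy(x, y):
    E = expm(p(x, y) * A)
    return x * A + E @ B @ expm(-p(x, y) * A)

def sweep(rho, Q, fixed, a, b, n, along):
    """Integrate rho' = Q rho from a to b with the midpoint exponential,
    holding the other independent variable fixed."""
    h = (b - a) / n
    for i in range(n):
        m = a + (i + 0.5) * h          # midpoint rule = order-2 Magnus
        Qm = Q(m, fixed) if along == 'x' else Q(fixed, m)
        rho = expm(h * Qm) @ rho
    return rho

rho0, n = np.eye(2), 400
# Order 1: integrate in x along y = 0, then in y along x = 1.
r1 = sweep(sweep(rho0, Qx, 0.0, 0.0, 1.0, n, 'x'), Qy, 1.0, 0.0, 1.0, n, 'y')
# Order 2: integrate in y along x = 0, then in x along y = 1.
r2 = sweep(sweep(rho0, Qy, 0.0, 0.0, 1.0, n, 'y'), Qx, 1.0, 0.0, 1.0, n, 'x')
exact = expm(p(1.0, 1.0) * A) @ expm(B)

# The two integration orders agree, and both match the exact frame,
# to the integrator's order of accuracy.
assert np.abs(r1 - r2).max() < 1e-3
assert np.abs(r1 - exact).max() < 1e-3
```

Refining the grid shrinks both differences at the integrator's order, mirroring the behaviour reported for the examples above.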