Computing distances and geodesics between manifold-valued curves in the SRV framework

This paper focuses on the study of open curves in a Riemannian manifold M, and proposes a reparametrization invariant metric on the space of such paths. We use the square root velocity function (SRVF) introduced by Srivastava et al. to define a Riemannian metric on the space of immersions M'=Imm([0,1],M) by pullback of a natural metric on the tangent bundle TM'. This induces a first-order Sobolev metric on M' and leads to a distance which takes into account the distance between the origins in M and the L2-distance between the SRV representations of the curves. The geodesic equations for this metric are given and exploited to define an exponential map on M'. The optimal deformation of one curve into another can then be constructed using geodesic shooting, which requires characterizing the Jacobi fields of M'. The particular case of curves lying in the hyperbolic half-plane is considered as an example, in the setting of radar signal processing.


1.
Introduction. Computing distances between shapes of open or closed curves is of interest in many applications, from medical imaging to radar signal processing, as soon as one wants to compare, classify or statistically analyze trajectories or contours of objects. While the shape of an organ or the trajectory of an object over a short distance can be modeled by a curve in the plane R^2 or in the ambient space R^3, some applications provide curves in an intrinsically non-flat space. Simple examples of positive curvature include trajectories on the sphere, where points represent positions on the earth; a negatively curved space of interest in signal processing is the hyperbolic half-plane, which, as we will explain later, coincides with the statistical manifold of Gaussian densities. We are motivated by the study of curves in the latter for signal processing purposes; however, our framework is more general.
Here we consider open oriented curves in a Riemannian manifold M, more precisely the space M = Imm([0,1], M) of smooth immersions c : [0,1] → M. To compare or average elements of this space, one way to proceed is to equip M with a Riemannian structure, that is, to locally define a scalar product G on its tangent space TM. A property that is usually required of this metric is reparameterization invariance, meaning that the metric is the same at all points of M representing curves that are identical modulo reparameterization. Two curves are identical modulo reparameterization when they pass through the same points of M but at different speeds. Reparameterizations are represented by increasing diffeomorphisms φ : [0,1] → [0,1] (so that they preserve the end points of the curves), and their set is denoted by Diff+([0,1]). Elements h, k ∈ T_c M of the tangent space at c ∈ M are infinitesimal deformations of c and can be seen as vector fields along the curve c in M (this results from the so-called "exponential law" for smooth functions, see e.g. [15], Theorem 5.6). The Riemannian metric G is reparameterization invariant if the action of Diff+([0,1]) is isometric for G, that is,

G_{c∘φ}(h∘φ, k∘φ) = G_c(h, k)   (1)

for any c ∈ M, h, k ∈ T_c M and φ ∈ Diff+([0,1]). This is often called the equivariance property, and it guarantees that the induced distance between two curves c_0 and c_1 does not change if we reparameterize them by the same diffeomorphism φ. What is more, a reparameterization invariant metric on the space of curves induces a Riemannian structure on the "shape space", where the space of reparameterizations is quotiented out.
A shape can be seen as the equivalence class of all the curves that are identical modulo a change of parameterization, and the shape space as the associated quotient space S = M / Diff+([0,1]). While the space of immersions is an open submanifold of the Fréchet manifold C^∞([0,1], M) (see [21], Theorem 10.4), the shape space is not a manifold, and therefore the fiber bundle structure we discuss next is to be understood formally. We get a principal bundle structure π : M → S, which induces a decomposition of the tangent bundle TM = VM ⊕ HM into a vertical subspace VM = ker(Tπ) consisting of all vectors tangent to the fibers of M over S, and a horizontal subspace HM = (VM)^⊥_G defined as the orthogonal complement of VM according to the metric G that we put on M. If G verifies the equivariance property, then it induces a Riemannian metric Ĝ on the shape space, for which the geodesics are the projections of the horizontal geodesics of M for G. The geodesic distance d̂ on S between the shapes [c_0] and [c_1] of two given curves c_0 and c_1 is then defined by

d̂([c_0], [c_1]) = inf { d(c_0, c_1 ∘ φ) : φ ∈ Diff+([0,1]) },

and d̂ verifies the stronger property, for any reparameterizations φ, ψ ∈ Diff+([0,1]),

d̂([c_0 ∘ φ], [c_1 ∘ ψ]) = d̂([c_0], [c_1]).

This motivates the choice of a reparameterization invariant metric on M.
Riemannian metrics on the space of curves lying in the Euclidean space R^n, and especially closed curves c : S^1 → R^n (S^1 being the circle), have been widely studied ([33], [23], [24], [8]). The most natural candidate for a reparameterization invariant metric is the L^2-metric with integration over arc length dℓ = ‖c′(t)‖ dt, but Michor and Mumford have shown in [22] that the induced geodesic distance on the shape space always vanishes. This has motivated the study of Sobolev-type metrics ([24], [20], [34], [7]), where higher order derivatives are introduced. Local existence and uniqueness of geodesics for first and second-order Sobolev metrics on the space of closed plane curves were shown in [24], and completion results were given in [20]. One first-order Sobolev metric, where different weights are given to the tangential and normal parts of the derivative, has proved particularly interesting for the applications ([16], [30]):

G_c(h, k) = ∫ ⟨D_ℓ h_N, D_ℓ k_N⟩ + (1/4) ⟨D_ℓ h_T, D_ℓ k_T⟩ dℓ.   (2)

In that case c is a curve in R^n, ⟨·,·⟩ denotes the Euclidean metric on R^n, D_ℓ h = h′/‖c′‖ is the derivative of h with respect to arc length, and D_ℓ h_T = ⟨D_ℓ h, v⟩ v and D_ℓ h_N = D_ℓ h − D_ℓ h_T are its tangential and normal components, with v = c′/‖c′‖. This metric belongs to the class of so-called elastic metrics, defined by

G^{a,b}_c(h, k) = ∫ a² ⟨D_ℓ h_N, D_ℓ k_N⟩ + b² ⟨D_ℓ h_T, D_ℓ k_T⟩ dℓ

for any weights a, b ∈ R_+. The parameters a and b respectively control the degree of bending and stretching of the curve. Srivastava et al. introduced in [29] a convenient framework to study the case where a = 1 and b = 1/2 by showing that the metric (2) could be obtained as the pullback of the L^2-metric via a simple transformation R called the Square Root Velocity Function (SRVF), which associates to each curve its velocity renormalized by the square root of its norm. A similar idea had previously been introduced in [33] and then used in [34], where a Sobolev metric is also mapped to an L^2-metric. The general elastic metric G^{a,b} with weights a and b satisfying 4b² ≥ a² can also be studied using a generalization of the SRVF [8].
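As a concrete illustration of the Euclidean case a = 1, b = 1/2, the SRVF and the L²-distance between SRV representations can be sketched numerically as follows. This is a minimal discrete sketch; the uniform sampling grid, the finite-difference velocity and the function names are our own choices, not the paper's.

```python
import numpy as np

def srvf(c):
    """Square Root Velocity Function of a discretized curve.
    c : (N, d) array of samples of a curve on a uniform grid over [0, 1].
    Returns q with q(t) = c'(t) / sqrt(|c'(t)|), assuming nonvanishing speed."""
    v = np.gradient(c, 1.0 / (len(c) - 1), axis=0)      # finite-difference velocity
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(speed)

def srv_l2_dist(q0, q1):
    """L2 distance between two SRV representations over [0, 1]."""
    d2 = np.sum((q0 - q1) ** 2, axis=1)
    return np.sqrt(np.trapz(d2, dx=1.0 / (len(q0) - 1)))
```

Since the SRVF only involves the velocity, it is invariant under translations of the curve; this is one reason why the metric proposed in this paper adds a term involving the origins.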
The SRV framework can be extended to curves in a Lie group using translations [11], and to curves in a general manifold using parallel transport. For manifold-valued curves, this can be done in a way that moves the computations to the tangent space at the origin of one of the two curves under comparison [30], [17], [35]. In [17] the authors consider the general elastic metric G^{a,b}, but no Riemannian framework is given. In [35], a Riemannian framework is given for the case a = 1, b = 1/2, and the geodesic equations are derived. In this paper we also restrict to this particular choice of coefficients a and b for simplicity, but we propose another generalization of the SRV framework to manifold-valued curves. Instead of encoding the information of each curve in a tangent space at a single point as in [17] and [35] using parallel transport, we compute the distance in the manifold itself, which makes it depend more directly on its geometry. Intuitively, the data of each curve is no longer concentrated at any one point, and so the energy of the deformation between two curves takes into account the curvature of the manifold along the entire "deformation surface", not just along the path traversed by the starting point of the curve.
In the following section, we introduce our metric as the pullback of a quite natural metric on the tangent bundle T M, and show that it induces a fiber bundle structure over the manifold M seen as the set of starting points of the curves. In Section 3, we give the induced geodesic distance and highlight the difference with respect to the distance introduced in [35]. In Section 4, we give the geodesic equations associated to our metric and exploit them to build the exponential map. Geodesics of the space of curves can then be computed using geodesic shooting. To this end, we describe the Jacobi fields on M. We test these algorithms on curves lying in the hyperbolic half-plane H, a choice that we motivate in Section 5. Finally, in the setting of radar spectral analysis, we model locally stationary radar signals by curves in H and compute their mean.

2.
Extension of the SRV framework to manifold-valued curves.
2.1. Our metric on the space of curves. Let c : [0, 1] → M be a curve in M and h, k ∈ T_c M two infinitesimal deformations. We consider the following first-order Sobolev metric on M:

G_c(h, k) = ⟨h(0), k(0)⟩ + ∫ ⟨∇_ℓ h_N, ∇_ℓ k_N⟩ + (1/4) ⟨∇_ℓ h_T, ∇_ℓ k_T⟩ dℓ,

where we integrate with respect to arc length dℓ = ‖c′(t)‖ dt, ⟨·,·⟩ and ∇ respectively denote the Riemannian metric and the associated Levi-Civita connection of the manifold M, ∇_ℓ h = (1/‖c′‖) ∇_{c′} h is the covariant derivative of h with respect to arc length, and ∇_ℓ h_T = ⟨∇_ℓ h, v⟩ v and ∇_ℓ h_N = ∇_ℓ h − ∇_ℓ h_T are its tangential and normal components respectively, with the notation v = c′/‖c′‖. If M is a flat Euclidean space, we obtain the metric (2) studied in [29], with an added term involving the origins. Without this extra term, the bilinear form G is not definite, since it vanishes if h or k is covariantly constant along c. Here we show that G can be obtained as the pullback of a very natural metric G̃ on the tangent bundle TM. We consider the square root velocity function (SRVF, introduced in [29]) on the space of curves in M,

R : c ↦ c′ / √‖c′‖,

where ‖·‖ is the norm associated to the Riemannian metric on M. In order to define G̃, we introduce the following projections from TTM to TM. Let ξ ∈ T_(p,u)TM and let t ↦ (x(t), U(t)) be a curve in TM that passes through (p, u) at time 0 with speed ξ. Then we define the vertical and horizontal projections

vp_(p,u) : T_(p,u)TM → T_pM,  ξ ↦ ∇_{x′(0)} U,
hp_(p,u) : T_(p,u)TM → T_pM,  ξ ↦ x′(0).

The horizontal and vertical projections live in the tangent bundle TM and are not to be confused with the horizontal and vertical parts, which live in the double tangent bundle TTM and will be denoted by ξ^H, ξ^V. Furthermore, let us point out that the horizontal projection is simply the differential of the natural projection TM → M, and that with these definitions, a very natural metric on the tangent bundle TM, the Sasaki metric ([27], [28]), can be written

g^S_(p,u)(ξ, η) = ⟨hp(ξ), hp(η)⟩ + ⟨vp(ξ), vp(η)⟩,

where ⟨·,·⟩ denotes the Riemannian metric on M. Now we can define the metric that we put on the space TM of paths in TM. Let us consider a path h ∈ TM and two deformations ξ, η of h, i.e. ξ(t), η(t) ∈ T_{h(t)}TM for all t.
We define

G̃_h(ξ, η) = ⟨ξ(0)^H, η(0)^H⟩ + ∫_0^1 ⟨ξ(t)^V, η(t)^V⟩ dt,

where ξ(t)^H = hp(ξ(t)) ∈ TM and ξ(t)^V = vp(ξ(t)) ∈ TM are the horizontal and vertical projections of ξ(t) ∈ TTM for all t. Then we have the following result.

Proposition 1. The metric G on the space of curves M can be obtained as the pullback of the metric G̃ by the square root velocity function R, that is,

G_c(h, k) = G̃_{R(c)}( T_cR(h), T_cR(k) ),

for any curve c ∈ M and vector fields h, k ∈ T_cM along c.

Notation.
Here and in all the paper we will denote by s the parameter of paths in the space of curves M and by t the parameter of a curve in M . For any path of curves s → c(s, ·) the corresponding derivatives will be denoted by c s = ∂c/∂s and c t = ∂c/∂t , and we will also use the notations ∇ s = ∇ ∂c/∂s and ∇ t = ∇ ∂c/∂t .
To prove this proposition, we just need to compute the right-hand side of the above equality. Let s ↦ c(s, ·) be a path in M such that c(0, ·) = c and c_s(0, ·) = h. A direct computation of G̃_{R(c)}(T_cR(h), T_cR(k)) then yields G_c(h, k), where we twice use the symmetry ∇_s c_t = ∇_t c_s.

2.2.
Fiber bundle structures. This choice of metric induces two fiber bundle structures. While the second one is an actual fiber bundle structure between two manifolds, the first structure is over the shape space which, as discussed in the introduction, is not a manifold, and so it should be understood formally. Note that we could obtain a manifold structure by restricting ourselves to so-called "free" immersions, that is elements of M on which the diffeomorphism group acts freely, see [23].
Principal bundle over the shape space. Just as in the planar case, the fact that the square root velocity function R satisfies

R(c ∘ φ) = √φ′ · ( R(c) ∘ φ )

for all c ∈ M and φ ∈ Diff+([0,1]) guarantees that the integral part of G is reparameterization invariant. Remembering that the reparameterizations φ ∈ Diff+([0,1]) preserve the origins of the curves, we notice that G is constant along the fibers and verifies the equivariance property (1). We then have a formal principal bundle structure over the shape space, which induces a decomposition TM = VM ⊕ HM. There exists a Riemannian metric Ĝ on the shape space S such that π is (formally) a Riemannian submersion from (M, G) to (S, Ĝ):

Ĝ_{π(c)}( T_cπ(h), T_cπ(k) ) = G_c(h^H, k^H),

where h^H and k^H are the horizontal parts of h and k, as well as the horizontal lifts of T_cπ(h) and T_cπ(k), respectively. This expression does in fact define Ĝ, in the sense that it does not depend on the choice of the representatives c, h and k. For more details, see the theory of Riemannian submersions and G-manifolds in [25].
Fiber bundle over the starting points. The special role played by the starting point in the metric G induces another fiber bundle structure, where the base space is the manifold M, seen as the set of starting points of the curves, and the fibers are composed of the curves sharing the same origin. The projection is then

π^(*) : M → M,  c ↦ c(0).

It induces another decomposition of the tangent bundle into vertical and horizontal bundles, TM = V^(*)M ⊕ H^(*)M, with V^(*)M = ker(Tπ^(*)) = { h ∈ TM : h(0) = 0 } and H^(*)M = (V^(*)M)^⊥_G.

Proposition 2. The map π^(*) is a Riemannian submersion from (M, G) to M equipped with its Riemannian metric.

Proof. Let h be a tangent vector. Consider h_0, the parallel vector field along c with initial value h_0(0) = h(0). It is a horizontal vector, since its vanishing covariant derivative along c ensures that for any vertical vector l we have G_c(h_0, l) = 0. The difference h̃ = h − h_0 between these two vectors has initial value 0, so it is a vertical vector, which gives a decomposition of h into a horizontal vector and a vertical vector. The definition of H^(*)M as the orthogonal complement of V^(*)M guarantees that their sum is direct. Now if k is another tangent vector, then the scalar product between their horizontal parts is G_c(h^H, k^H) = ⟨h(0), k(0)⟩, which proves that π^(*) is a Riemannian submersion and completes the proof.
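In the flat case M = R^d, where the parallel vector fields along a curve are exactly the constant ones, the horizontal-vertical splitting used in this proof can be sketched in a few lines. This is our own discrete illustration, not part of the paper:

```python
import numpy as np

def split_horizontal_vertical(h):
    """Flat-case (M = R^d) decomposition of a tangent vector h, sampled as an
    (N, d) array along the curve: the horizontal part is the parallel (here
    constant) field with the same initial value, and the vertical part is the
    remainder, which vanishes at the origin of the curve."""
    h0 = np.tile(h[0], (len(h), 1))     # parallel field with h0(0) = h(0)
    return h0, h - h0
```

The vertical part vanishing at t = 0 and the horizontal part having zero derivative are exactly the two properties used in the orthogonality argument above.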
3. Induced distance on the space of curves. Here we give an expression for the geodesic distance induced by the metric G. We show that it can be written similarly to the product distance given in [17] and [35], with an added curvature term. Let us consider two curves c_0, c_1 ∈ M, and a path of curves s ↦ c(s, ·) linking them in M. We denote by q(s, ·) = R(c(s, ·)) the image of this path of curves under the SRVF R. Note that q is a vector field along the surface c in M. Let now q̃ be the "raising" of q in the tangent space T_{c(0,0)}M,

q̃(s, t) = P^{s,0}_{c(·,0)} ∘ P^{t,0}_{c(s,·)} ( q(s, t) ),   (3)

where we denote by P^{t_1,t_2}_γ : T_{γ(t_1)}M → T_{γ(t_2)}M the parallel transport along a curve γ from γ(t_1) to γ(t_2). Notice that q̃ is a surface in a vector space, as illustrated in Figure 1. Lastly, we introduce a vector field (a, τ) ↦ ω_{s,t}(a, τ) in M, which parallel translates q(s, t) along c(s, ·) to its origin, then along c(·, 0) and back down again, as shown in Figure 1. More precisely,

ω_{s,t}(a, τ) = P^{0,τ}_{c(a,·)} ∘ P^{s,a}_{c(·,0)} ∘ P^{t,0}_{c(s,·)} ( q(s, t) )   (4)

for all a and τ. That way, the quantity ∇_a ω_{s,t}(a, t)|_{a=s} measures the holonomy along the rectangle of infinitesimal width shown in Figure 1.
Proposition 3 (Geodesic distance). The geodesic distance induced by the metric G between two curves c_0, c_1 ∈ M is

dist(c_0, c_1) = inf ∫_0^1 ( ‖c_s(s, 0)‖² + ∫_0^1 ‖∇_s q(s, t)‖² dt )^{1/2} ds,   (5)

where the infimum is taken over the paths s ↦ c(s, ·) linking c_0 to c_1, q = R(c) is the Square Root Velocity representation of the curve c, and the norm is the one associated to the Riemannian metric on M. It can also be written as a function of the "raising" q̃ of q in the tangent space T_{c_0(0)}M defined by (3),

dist(c_0, c_1) = inf ∫_0^1 ( ‖c_s(s, 0)‖² + ∫_0^1 ‖q̃_s(s, t) + Ω(s, t)‖² dt )^{1/2} ds,   (6)

where Ω is a curvature term measuring the holonomy along a rectangle of infinitesimal width, expressed in terms of the curvature tensor R of the manifold M and of the field ω_{s,t} defined by (4).

Remark 1. The second expression (6) highlights the difference with respect to the distance given in [17] and [35]. In the first term under the square root we can see the velocity vector of the curve c(·, 0) linking the two origins, and in the second the velocity vector of the curve q̃ linking the TSRVF-images of the curves (Transported Square Root Velocity Function, as introduced by Su et al. in [30]). If instead we equip the tangent bundle TM with a metric analogous to G̃, in which the vertical projections are first parallel transported to the origin before being compared, then the curvature term Ω vanishes and the geodesic distance on M becomes

dist(c_0, c_1) = inf ∫_0^1 ( ‖c_s(s, 0)‖² + ∫_0^1 ‖q̃_s(s, t)‖² dt )^{1/2} ds,   (7)

which corresponds exactly to the geodesic distance introduced by Zhang et al. in [35]. The difference between the two distances (5) and (7) resides in the curvature term Ω, which measures the holonomy along the rectangle of infinitesimal width shown in Figure 1, and arises from the fact that in the first one, we compute the distance in the manifold, whereas in the second, it is computed in the tangent space to one of the origins of the curves. Therefore, the first one takes more directly into account the geometry of the manifold between the two curves under comparison, since it reads the information directly in the manifold itself.
Remark 2. Let us briefly consider the flat case: if the manifold M is flat, e.g. M = R^n, then the two distances (5) and (7) coincide. If two curves c_0 and c_1 in R^n have the same starting point p, the first summand under the square root vanishes and the distance becomes the L^2-distance between the two SRV representations q_0 = R(c_0) and q_1 = R(c_1). If two R^n-valued curves differ only by a translation, then the distance is simply the distance between their origins.
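The flat-case distance of this remark can be checked numerically. The sketch below (our own discretization, not from the paper) computes the square root of the squared distance between origins plus the squared L²-distance between SRV representations, and verifies that two translated curves are at distance exactly the norm of the translation:

```python
import numpy as np

def srvf(c):
    """SRVF q = c' / sqrt(|c'|) of a uniformly sampled curve of shape (N, d)."""
    v = np.gradient(c, 1.0 / (len(c) - 1), axis=0)
    return v / np.sqrt(np.linalg.norm(v, axis=1, keepdims=True))

def dist_flat(c0, c1):
    """Geodesic distance for the flat case M = R^d: the squared distance splits
    into the squared gap between origins plus the squared L2 gap between the
    SRV representations."""
    origin_gap = np.sum((c0[0] - c1[0]) ** 2)
    srv_gap = np.trapz(np.sum((srvf(c0) - srvf(c1)) ** 2, axis=1),
                       dx=1.0 / (len(c0) - 1))
    return np.sqrt(origin_gap + srv_gap)
```

For curves differing by a translation the SRV term vanishes identically, so only the origin term remains, as stated in the remark.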
Remark 3. Note that this distance is only local in general, that is, it is only defined for curves that are "close enough". Indeed, if we consider two curves c_1, c_2 in M = R^2 with the same origin, the distance between them is the length of the L^2-geodesic between their SRV representations q_1 and q_2 in C^∞([0,1], R^2 \ {0}). If the minimizing geodesic between those two (in C^∞([0,1], R^2)) passes through 0, then there is no minimizing geodesic between q_1 and q_2 in C^∞([0,1], R^2 \ {0}).
Proof of Proposition 3. Since G is defined as the pullback of G̃ by the SRVF R, the length of a path s ↦ c(s, ·) in M equals the length of its image q = R(c) in TM, which gives Equation (5). To obtain the second expression of this distance, we need to express ∇_s q as a function of the derivative q̃_s. Let us fix t ∈ [0, 1], and consider the vector field ν along the surface (s, τ) ↦ c(s, τ) that is parallel along all the curves c(s, ·) and takes the value ν(s, t) = q(s, t) at τ = t for all s ∈ [0, 1], that is,

ν(s, τ) = P^{t,τ}_{c(s,·)}( q(s, t) ) for all s, τ ∈ [0, 1].

With this definition we have ∇_s ν(s, t) = ∇_s q(s, t).
Noticing that we additionally have ∇_τ ν(s, τ) = 0 for all s, τ ∈ [0, 1], and using the commutation relation ∇_s ∇_τ ν − ∇_τ ∇_s ν = R(c_s, c_τ)ν, we can express ∇_s ν(s, t) as a curvature integral. Now let us fix s ∈ [0, 1] as well. Notice that, unlike for ν, we do not have ∇_a ω_{s,t}(a, t)|_{a=s} = ∇_s q(s, t), because ω_{s,t}(a, t) = q(a, t) is only true for a = s. Using Equations (9) and (10), we get the same integral as the one in (8). Finally, since parallel transport preserves the norm, ‖∇_s q(s, t)‖ = ‖ P^{s,0}_{c(·,0)} ∘ P^{t,0}_{c(s,·)} ( ∇_s q(s, t) ) ‖, and we obtain Equation (6), which completes the proof.
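The parallel transports P used throughout this proof can be approximated numerically. The sketch below (our own Euler discretization, specialized to the hyperbolic half-plane of Section 5, whose covariant derivative formula is recalled there) integrates the equation of vanishing covariant derivative along a curve and checks that the hyperbolic norm is preserved:

```python
import numpy as np

def transport_H(curve, v0, n_steps=20000):
    """Parallel transport of v0 = (v1, v2) along t -> curve(t) = (x, y) in the
    hyperbolic half-plane, by Euler integration of the vanishing covariant
    derivative:  v1' = (x' v2 + y' v1)/y,  v2' = (y' v2 - x' v1)/y."""
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    dt = ts[1] - ts[0]
    v = np.array(v0, dtype=float)
    for t in ts[:-1]:
        x, y = curve(t)
        xn, yn = curve(t + dt)
        dx, dy = (xn - x) / dt, (yn - y) / dt
        v = v + dt * np.array([(dx * v[1] + dy * v[0]) / y,
                               (dy * v[1] - dx * v[0]) / y])
    return v
```

Along the horizontal line y = 1, the transport equation reduces to a rotation of the components, and the hyperbolic norm |v|/y stays constant up to the discretization error.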

4.
Geodesic equations on M.
4.1. Geodesic equations. To be able to compute the distance given by (5) between two curves, we first need to compute the optimal deformation s ↦ c(s, ·) from one to the other; that is, we need to characterize the geodesics of M for our metric. In order to do so, taking inspiration from [35], we use the variational principle. In what follows, we use the lighter notation u(t_1)^{t_1,t_2} = P^{t_1,t_2}_c( u(t_1) ) to denote the parallel transport of a vector u(t_1) ∈ T_{c(t_1)}M along a curve c from c(t_1) to c(t_2), when there is no ambiguity on the choice of c. We also denote by w^T = ⟨w, v⟩ v the tangential component of any vector field w along a curve c, that is, its projection on the unit-speed vector field v = c′/‖c′‖.
Proposition 4 (Geodesic equations). A path s ↦ c(s, ·) in M is a geodesic for the metric G if and only if

∇_s c_s(s, 0) + r(s, 0) = 0 for all s,   (11a)
∇_s ∇_s q(s, t) + ‖q(s, t)‖ ( r(s, t) + r(s, t)^T ) = 0 for all s, t,   (11b)

where q = c_t/√‖c_t‖ is the SRV representation of c and the vector field r is given by

r(s, t) = ∫_t^1 ( R(q, ∇_s q) c_s (s, τ) )^{τ,t} dτ.

Proof. The path c is a geodesic if and only if it is a critical point of the energy functional

E(c) = (1/2) ∫_0^1 ( ‖c_s(s, 0)‖² + ∫_0^1 ‖∇_s q(s, t)‖² dt ) ds.

Let a ↦ ĉ(a, ·, ·), a ∈ (−ε, ε), be a proper variation of the path s ↦ c(s, ·), meaning that it coincides with c at a = 0 and preserves its end points: ĉ(0, ·, ·) = c, ĉ(a, 0, ·) = c(0, ·) and ĉ(a, 1, ·) = c(1, ·) for all a. Then c is a geodesic of M if and only if (d/da)|_{a=0} E(ĉ(a, ·, ·)) = 0 for any proper variation ĉ. If we denote by E(a) = E(ĉ(a, ·, ·)), for a ∈ (−ε, ε), the energy of a proper variation ĉ, where q̂ = ĉ_t/√‖ĉ_t‖ is the SRV representation of ĉ, then its derivative is given by

E′(a) = ∫ ⟨∇_a ĉ_s(a, s, 0), ĉ_s(a, s, 0)⟩ ds + ∫∫ ⟨∇_a ∇_s q̂(a, s, t), ∇_s q̂(a, s, t)⟩ dt ds.
We cannot conclude at this point, because ĉ_a(0, s, t) and ∇_a q̂(0, s, t) cannot be chosen independently, since q̂ is not just any vector field along ĉ but its image under the Square Root Velocity Function. Computing the covariant derivative of q̂ = ĉ_t / ‖ĉ_t‖^{1/2} with respect to a gives

∇_a q̂ = ‖ĉ_t‖^{-1/2} ( ∇_a ĉ_t − (1/2) (∇_a ĉ_t)^T ),

and projecting both sides on v = ĉ_t/‖ĉ_t‖ results in (∇_a q̂)^T = (1/2) ‖q̂‖^{-1} (∇_a ĉ_t)^T. We deduce ∇_a ĉ_t = ‖q̂‖ ( ∇_a q̂ + (∇_a q̂)^T ), and since ∇_t ĉ_a = ∇_a ĉ_t, we can express the variation ĉ_a by integrating this relation along t. Inserting this expression into the derivative of the energy, we obtain, omitting for readability that the variations ĉ and q̂ are always taken at a = 0,

E′(0) = − ∫ ⟨ ĉ_a(s, 0), ∇_s c_s(s, 0) + r(s, 0) ⟩ ds − ∫∫ ⟨ ∇_a q̂(s, t), ∇_s ∇_s q(s, t) + ‖q(s, t)‖ ( r(s, t) + r(s, t)^T ) ⟩ dt ds,

with the previously given definition of r. Since the variations ĉ_a(0, s, 0) and ∇_a q̂(0, s, t) can be chosen independently and take arbitrary values for all s and all t, we obtain the desired equations.

4.2.
Exponential map. Now that we have the geodesic equations, we are able to describe an algorithm which allows us to compute the geodesic s ↦ c(s, ·) starting from a point c ∈ M at speed u ∈ T_cM. This amounts to finding the optimal deformation of the curve c in the direction of the vector field u according to our metric. We initialize this path by setting c(0, ·) = c and c_s(0, ·) = u, and we propagate it using iterations of fixed step ε > 0. The aim is, given c(s, ·) and c_s(s, ·), to deduce c(s + ε, ·) and c_s(s + ε, ·). The first is obtained by following the exponential map on the manifold M,

c(s + ε, t) = exp^M_{c(s,t)}( ε c_s(s, t) ) for all t ∈ [0, 1],

and the second by parallel transporting a first-order estimate,

c_s(s + ε, t) = ( c_s(s, t) + ε ∇_s c_s(s, t) )^{s,s+ε},

where once again we use the notation w(s)^{s,s+ε} = P^{s,s+ε}_c( w(s) ) for the parallel transport of a vector field s ↦ w(s) along a curve s ↦ c(s) in M. If we assume that at time s we have c(s, ·) and c_s(s, ·) at our disposal, then we can estimate c_t(s, ·) and ∇_t c_s(s, ·), and deduce q(s, ·) = c_t(s, ·)/√‖c_t(s, ·)‖ as well as

∇_s q(s, ·) = ‖c_t‖^{-1/2} ( ∇_t c_s − (1/2) (∇_t c_s)^T ),   (12)

using the fact that ∇_s c_t = ∇_t c_s. The variation ∇_s c_s(s, ·) can then be computed for all t ∈ [0, 1] by integrating ∇_t ∇_s c_s along t, where ∇_s c_s(s, 0) is given by equation (11a), the second-order variation ∇_s ∇_s c_t(s, ·) is given by equation (14), obtained by twice differentiating the relation c_t = ‖q‖ q, and ∇_s ∇_s q can be computed via equation (11b). To summarize:

Algorithm 1 (Exponential map). Input: a curve c(0, ·) = c_0 ∈ M and an initial speed c_s(0, ·) = u ∈ T_{c_0}M. For each step s: 1. Estimate c_t(s, ·) and ∇_t c_s(s, ·), using the map log^M where needed, where log^M denotes the inverse of the exponential map on M, and compute q(s, t) = c_t(s, t)/√‖c_t(s, t)‖ and ∇_s q(s, t) using equation (12).
2. Compute r(s, t) = ∫_t^1 ( R(q, ∇_s q) c_s (s, τ) )^{τ,t} dτ and ∇_s ∇_s q(s, t) = −‖q(s, t)‖ ( r(s, t) + r(s, t)^T ), deduce ∇_s ∇_s c_t(s, t) using equation (14) for all t ∈ [0, 1], and propagate c and c_s to s + ε as described above. Output: c = exp_{c_0}(u).

The last step needed to compute the optimal deformation between two curves c_0 and c_1 is to find the appropriate initial speed u, that is, the one that connects c_0 to c_1. Since we do not have an explicit expression for this appropriate initial speed, we compute it iteratively using geodesic shooting.

Geodesic shooting and Jacobi fields. The aim of geodesic shooting is to compute the geodesic linking two points p_0 and p_1 of a manifold N, knowing the exponential map exp^N. More precisely, the goal is to iteratively find the initial speed u_{p_0,p_1} such that exp^N_{p_0}(u_{p_0,p_1}) = p_1. An initial speed vector u ∈ T_{p_0}N is chosen, and is iteratively updated after evaluating the gap between the point p = exp^N_{p_0}(u), obtained by taking the exponential map at p_0 in the direction u (that is, by "shooting" from p_0 in the direction u), and the target point p_1. Assuming that the current point p is "not too far" from the target point p_1, and that there exists a geodesic linking p_0 to p_1, we can consider the gap between p and p_1 to be the extremity of a Jacobi field J along the shot geodesic, in the sense that it measures the variation between the geodesics s ↦ exp^N_{p_0}(su) and s ↦ exp^N_{p_0}(su_{p_0,p_1}). Since both geodesics start at p_0, this Jacobi field has value J(0) = 0 at 0. Then, the current speed vector can be corrected by u ← u + J̇(0), as shown in Figure 2. Let us briefly explain why. If c(a, s), a ∈ (−ε, ε), s ∈ [0, 1], is a family of geodesics starting from the same point p_0 at different speeds u(a) ∈ T_{p_0}N, i.e. c(a, s) = exp^N_{p_0}(s u(a)), and J(s) = c_a(0, s), s ∈ [0, 1], measures the way these geodesics spread out, then we have J̇(0) = u′(0), so that correcting u by J̇(0) shifts the initial speed toward the one generating the nearby geodesic.
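As a sanity check on the structure of Algorithm 1, note that in the flat case M = R^d the curvature term r vanishes, so the geodesic equations say that the origin and the SRVF both move at constant speed. The sketch below (our own discretization and helper names; the ε-stepping mirrors the algorithm, even though it is trivial here) propagates a curve in the direction of a vector field u:

```python
import numpy as np

def srvf(c, dt):
    v = np.gradient(c, dt, axis=0)
    return v / np.sqrt(np.linalg.norm(v, axis=1, keepdims=True))

def curve_from_srvf(p, q, dt):
    """Invert the SRVF in the flat case: c_t = |q| q, c(0) = p (trapezoid rule)."""
    ct = q * np.linalg.norm(q, axis=1, keepdims=True)
    incr = np.cumsum((ct[:-1] + ct[1:]) / 2.0 * dt, axis=0)
    return p + np.vstack([np.zeros(len(p)), incr])

def exp_flat(c0, u, n_steps=100):
    """Flat-case exponential map on the space of curves: r = 0, so the origin
    moves at speed u(0) and the SRVF at the constant speed given by the
    differential of R in the direction u (equation analogous to (12))."""
    dt = 1.0 / (len(c0) - 1)
    ct = np.gradient(c0, dt, axis=0)
    speed = np.linalg.norm(ct, axis=1, keepdims=True)
    v = ct / speed
    ut = np.gradient(u, dt, axis=0)
    ut_tan = np.sum(ut * v, axis=1, keepdims=True) * v
    q_dot = (ut - 0.5 * ut_tan) / np.sqrt(speed)
    p, q = c0[0].astype(float), srvf(c0, dt)
    eps = 1.0 / n_steps
    for _ in range(n_steps):        # fixed-step propagation, as in Algorithm 1
        p, q = p + eps * u[0], q + eps * q_dot
    return curve_from_srvf(p, q, dt)
```

Shooting a straight line with a constant translation field simply translates it, while a shear field deforms it into the expected flat geodesic in (origin, SRVF) coordinates.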

In the context of geodesic shooting between two curves c_0 and c_1 in M, the speed vector u can be initialized using the L²-logarithm map, the inverse of the exponential map for the L²-metric (these maps are simply obtained by post-composition with the finite-dimensional maps exp^M and log^M). That is, we set u = log^{L²}_{c_0}(c_1). The L²-logarithm map also allows us to approximate the gap between the current point and the target point. This amounts to minimizing the functional F(u) = dist_{L²}( exp_{c_0}(u), c_1 ). We summarize as follows.

Algorithm 2 (Geodesic shooting). Input: c_0, c_1 ∈ M and a threshold δ > 0.
1. Initialize u = log^{L²}_{c_0}(c_1).
2. Compute c = exp_{c_0}(u) using Algorithm 1 and the gap j = log^{L²}_{c(1,·)}(c_1).
3. If ‖j‖_{L²} > δ, set J(1) = j and u ← u + J̇(0), where J̇(0) = φ^{-1}(J(1)) is computed using Algorithm 3, and go back to step 2. Else, stop.
Output: c, an approximation of the geodesic linking c_0 and c_1.
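The shooting loop is easiest to visualize in a finite-dimensional toy example. The sketch below is our own illustration on the unit sphere S² rather than on the space of curves, and it replaces the Jacobi-field correction φ^{-1} by simple parallel transport of the end-point gap back to the starting point; it only shows the structure of Algorithm 2, not the paper's method:

```python
import numpy as np

def exp_sphere(p, v):
    """Exponential map on the unit sphere (closed form)."""
    a = np.linalg.norm(v)
    return p.copy() if a < 1e-15 else np.cos(a) * p + np.sin(a) * v / a

def log_sphere(p, q):
    """Inverse of the exponential map (p and q not antipodal)."""
    w = q - np.dot(p, q) * p
    nw = np.linalg.norm(w)
    if nw < 1e-15:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / nw

def transport_sphere(p, v, w):
    """Parallel transport of tangent vector w along t -> exp_p(t v), t in [0, 1]."""
    a = np.linalg.norm(v)
    if a < 1e-15:
        return w.copy()
    e = v / a
    we = np.dot(w, e)
    return (w - we * e) + we * (np.cos(a) * e - np.sin(a) * p)

def shoot(p0, p1, tol=1e-12, max_iter=50):
    """Geodesic shooting: find u with exp_{p0}(u) = p1, correcting u at each
    step by the end-point gap transported back to p0 along the shot geodesic."""
    u = np.zeros_like(p0)
    for _ in range(max_iter):
        p = exp_sphere(p0, u)
        gap = log_sphere(p, p1)                  # plays the role of J(1)
        if np.linalg.norm(gap) < tol:
            break
        u = u + transport_sphere(p, log_sphere(p, p0), gap)
    return u
```

On the sphere the closed-form logarithm makes the iteration converge very quickly; on the space of curves, where no such logarithm is available, the correction must go through the Jacobi fields characterized next.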
The function φ associates to the initial speed J̇(0) the last value J(1) of the Jacobi field with initial value J(0) = 0, and can be deduced from Algorithm 3, which describes the function associating J(1) to the initial conditions J(0) and J̇(0). To find the inverse of this function, we consider the image of a basis of the tangent space in which J̇(0) lives. Now, let us characterize the Jacobi fields of M to obtain the function φ. A Jacobi field is a vector field that describes the way geodesics spread out on a manifold. Consider a ↦ c(a, ·, ·), a ∈ (−ε, ε), a family of geodesics in M, that is, for each a ∈ (−ε, ε), [0, 1] ∋ s ↦ c(a, s, ·) is a geodesic of M. Then for all a, c(a, ·, ·) verifies the geodesic equations

∇_s c_s(a, s, 0) + r(a, s, 0) = 0 for all s,   (15a)
∇_s ∇_s q(a, s, t) + ‖q(a, s, t)‖ ( r(a, s, t) + r(a, s, t)^T ) = 0 for all s, t,   (15b)

where q = c_t/√‖c_t‖ is the SRV representation of c and r is given as in Proposition 4. Recall that we use the notation w^T = ⟨w, v(a, s, t)⟩ v(a, s, t) with v = c_t/‖c_t‖ for the tangential component of a tangent vector w ∈ T_{c(a,s,t)}M. To characterize the way these geodesics spread out, we consider the Jacobi field J(s) = c_a(0, s, ·) along the geodesic s ↦ c(0, s, ·).
By decomposing ∇_s ∇_s J(s, 0) and ∇_t ∇_s ∇_s J(s, τ), we can write the second-order variation of J in terms of the quantities below. The term ∇_s ∇_s ∇_t J can be expressed as a function of ∇_s ∇_s ∇_a q by twice differentiating the relation between q and c_t. Since ∇_a c_t = ∇_t c_a = ∇_a J at a = 0, we know the term we are looking for. The terms ∇_a ∇_s c_s(0, s, 0) and ∇_a ∇_s ∇_s q(0, s, τ) for all τ ∈ [0, 1] can be obtained by differentiating the geodesic equations (15a) and (15b):

∇_a ∇_s c_s(0, s, 0) + ∇_a r(0, s, 0) = 0 for all s,
∇_a ∇_s ∇_s q(0, s, t) + ( ∂_a ‖q(0, s, t)‖ ) ( r(0, s, t) + r(0, s, t)^T ) + ‖q(0, s, t)‖ ( ∇_a r(0, s, t) + ∇_a ( r(0, s, t)^T ) ) = 0 for all s, t.
The first equation gives ∇_a ∇_s c_s(s, 0) = −∇_a r(s, 0), and the second yields ∇_a ∇_s ∇_s q(s, t) for all s and t. The only term left to compute is the variation ∇_a r, which is obtained by differentiating the definition of r. Since the covariant derivative of V_t with respect to τ vanishes, we can write the corresponding relation for any t ≤ τ ≤ 1; integrating it with respect to τ from t to 1 yields ∇_a r, where, since V_t(0, s, t) = R(q, ∇_s q)c_s(0, s, t), the initial condition is known. We can notice that, however complicated, the numbered equations (16) to (23b), when put together, define a partial differential equation verified by the Jacobi field J. They allow us to iteratively compute J(s + ε, ·) and ∇_s J(s + ε, ·), for a fixed step ε > 0, knowing J(s, ·) and ∇_s J(s, ·). Indeed, we can estimate ∇_t J(s, ·) since J(s, t) is known for all t, as well as ∇_t ∇_s J(s, ·) since ∇_s J(s, t) is known for all t, and finally ∇_s ∇_t J = ∇_t ∇_s J + R(c_s, c_t)J. Assuming that we are able to compute the covariant derivative ∇_J R of the curvature tensor (for example, it vanishes in a symmetric space), we obtain an algorithm to compute the Jacobi fields in the space of curves. To summarize: compute ∇_a q(0, s, t), ∇_s ∇_a q(0, s, t) and ∇_a V_t(0, s, t) for all t using (23a), (23b) and (22), and deduce ∇_a r(0, s, t) using Equation (21).
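For the finite-dimensional analogue on the base manifold M = H itself, the normal component of a Jacobi field along a unit-speed geodesic satisfies the scalar equation j″ = −K j = j, so with j(0) = 0 one gets j(s) = sinh(s) j′(0), and the map φ : j′(0) ↦ j(1) is inverted by dividing by sinh(1). The sketch below (our own integration scheme; it only illustrates the base-manifold case, not the full Algorithm 3 on the space of curves) checks this numerically:

```python
import numpy as np

def jacobi_endpoint(jdot0, K=-1.0, n_steps=100000):
    """Integrate the scalar Jacobi equation j'' = -K j on [0, 1] with
    j(0) = 0 and j'(0) = jdot0, by symplectic Euler; this plays the
    role of the map phi for the normal component."""
    dt = 1.0 / n_steps
    j, jd = 0.0, jdot0
    for _ in range(n_steps):
        j = j + dt * jd              # position update first
        jd = jd + dt * (-K) * j      # then velocity, with the new position
    return j

def phi_inverse(j1, K=-1.0):
    """Closed-form inverse for K = -1: j(1) = sinh(1) j'(0)."""
    return j1 / np.sinh(1.0)
```

In negative curvature sinh(1) > 1, so geodesics spread out and the inverse map contracts the end-point gap, which is what makes the shooting correction well behaved.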
Using a discretization of Algorithms 1, 2 and 3, we are able to compute an approximation of the optimal deformation between two curves, as shown in a toy example in Figure 3. In this simple case we perform geodesic shooting between two geodesics c_0 and c_1 (in red) of the hyperbolic half-plane H. The reasons for our interest in this particular space, as well as the tools needed to work in it, are given in the next section. Figure 3 shows the different steps of the first iteration of the geodesic shooting algorithm. In this simple case, we converge in only three iterations. Further toy examples are given in Figure 4, where we show the optimal deformations between pairs of geodesics of H (in blue), compared to the L²-geodesics (in green). We can see in the first image that our metric has a tendency to "shrink" the curves in the middle of the deformation compared to the L²-metric. We also show the influence of the orientation of the curves, on which the deformations depend. We do not give here the details of the discretization used for these examples or for those in the following section; a detailed description of this discrete model is given in a forthcoming paper [19].

5.
Example: curves in the hyperbolic half-plane H. In this section we consider the case where the base manifold M is a symmetric manifold of negative curvature, the two-dimensional hyperbolic space H. We first explain why this space is interesting for applications, namely that it coincides with the statistical manifold of Gaussian densities equipped with the Fisher information metric. Then we give some basic tools (exponential map, logarithm map, curvature tensor) needed to implement the previous algorithms in H. Finally, we consider a specific application of curve analysis in that space, the statistical study of locally stationary radar signals. We explain how this framework gives us curves lying in the hyperbolic plane, and we present some simulation results.

5.1.
The hyperbolic half-plane as a statistical manifold. It is possible to adopt a geometrical point of view to solve problems in various fields such as statistical inference, information theory or signal processing [14], [9], [1]. This framework is given by information geometry. Each element of a parametric family of probability densities {f(·, θ), θ ∈ Θ} can be seen as a point in the manifold of parameters Θ. Intuitively, it is easy to see that the Euclidean metric is not always appropriate to compare two probability distributions in that space. For example, each univariate Gaussian distribution can be identified with its mean and standard deviation (m, σ) in the upper half-plane R × R*_+. As explained in [12], two univariate Gaussian densities N(m_1, σ_1) and N(m_2, σ_1) with different means but the same standard deviation get "closer" to each other as their common standard deviation increases, meaning that intuitively the distance between the points of coordinates (m_1, σ_1) and (m_2, σ_1) in the upper half-plane should be greater than the distance between the points (m_1, σ_2) and (m_2, σ_2) for σ_2 > σ_1. A more pertinent Riemannian structure on the space of parameters Θ is the one induced by the Fisher information metric, defined in its matrix form by the Fisher information: if the parameter θ ∈ R^d is d-dimensional and E denotes the expected value,

I(θ)_{ij} = E[ (∂ ln f(X, θ)/∂θ_i) (∂ ln f(X, θ)/∂θ_j) ]

for any 1 ≤ i, j ≤ d and θ = (θ_1, . . . , θ_d). This metric is chosen, among other reasons, because it has statistical meaning: in parameter estimation, the Fisher information measures the "amount of information" on the parameter θ contained in data sampled from the density f(·, θ); it also gives a fundamental limit to the precision with which one can estimate θ, in the form of the Cramér-Rao bound.
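The Fisher information matrix can be checked numerically for the Gaussian family: the sketch below (grid quadrature and finite differences in θ are our own choices) recovers the classical I(m, σ) = diag(1/σ², 2/σ²):

```python
import numpy as np

def fisher_information(logf, theta, x, h=1e-5):
    """I(theta)_ij = E[ d_i log f * d_j log f ], with the theta-derivatives
    computed by central differences and the expectation by trapezoidal
    quadrature over the grid x (assumed to cover the bulk of the density)."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        grads.append((logf(x, tp) - logf(x, tm)) / (2.0 * h))
    f = np.exp(logf(x, theta))
    return np.array([[np.trapz(gi * gj * f, x) for gj in grads] for gi in grads])

def log_gaussian(x, theta):
    m, s = theta
    return -0.5 * np.log(2.0 * np.pi * s**2) - (x - m)**2 / (2.0 * s**2)
```

The off-diagonal terms vanish: the mean and standard deviation directions are orthogonal for the Fisher metric, which is why the metric is diagonal in the (m, σ) coordinates used below.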
In the case of univariate Gaussian densities N(m, σ²), Fisher geometry amounts to hyperbolic geometry. More precisely, the space of parameters (m, σ) equipped with the Fisher information metric is in bijection with the hyperbolic half-plane via the change of variables (m, σ) → (m/√2, σ). Indeed, with this rescaling of the mean m̃ = m/√2, the Fisher information matrix becomes

I(m̃, σ) = (2/σ²) I_2,

which, up to the constant factor 2, defines the Riemannian metric ds² = (dm̃² + dσ²)/σ² of the well-known hyperbolic half-plane. This is coherent with the example given above, since in the hyperbolic half-plane the distance between the points of coordinates (m_1, σ) and (m_2, σ) decreases as σ increases for fixed values of m_1, m_2. The differential geometry of Gaussians has proved useful in applications, e.g. in image processing, where each pixel of the image model is represented by a univariate Gaussian distribution [2], and in radar signal processing [3], [6], [26], [18], as we will see in Section 5.3.
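This behaviour can be illustrated numerically. The sketch below assumes the closed-form half-plane distance d(z_0, z_1) = arccosh(1 + |z_1 − z_0|²/(2 y_0 y_1)), the rescaling m → m/√2, and an overall factor √2 on distances reflecting the constant 2 above; the function names are ours.

```python
import math

def hyperbolic_dist(z0, z1):
    # closed-form geodesic distance in the hyperbolic half-plane
    num = (z1.real - z0.real)**2 + (z1.imag - z0.imag)**2
    return math.acosh(1.0 + num / (2.0*z0.imag*z1.imag))

def fisher_dist_gauss(m1, s1, m2, s2):
    # Fisher-Rao distance between N(m1, s1^2) and N(m2, s2^2): rescale
    # the means by 1/sqrt(2), then scale the hyperbolic distance by sqrt(2)
    r = math.sqrt(2.0)
    return r * hyperbolic_dist(complex(m1/r, s1), complex(m2/r, s2))

# two Gaussians with fixed means drift closer as the common sigma grows
d_small = fisher_dist_gauss(0.0, 0.5, 1.0, 0.5)
d_large = fisher_dist_gauss(0.0, 2.0, 1.0, 2.0)
```

Here d_small > d_large, matching the intuition of the example above.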

5.2.
Geometry of the hyperbolic half-plane. First, let us give a few tools which are necessary to work in the hyperbolic half-plane representation. Along with the Poincaré disk, the Klein model and others, the hyperbolic half-plane H = {z = x + iy ∈ C, y > 0} is one of the models of two-dimensional hyperbolic geometry. Its Riemannian metric is given by

ds² = (dx² + dy²)/y².

This means that the scalar product between two tangent vectors u = u_1 + iu_2 and v = v_1 + iv_2 at a point z = x + iy is

⟨u, v⟩_z = (u_1 v_1 + u_2 v_2)/y².

Using the usual formula (see e.g. [13]) to compute the Christoffel symbols, we can easily compute the covariant derivative of a vector field v(t) = v_1(t) + iv_2(t) along a curve c(t) = x(t) + iy(t) in H. It is given by

Dv/dt = v̇_1 − (ẋ v_2 + ẏ v_1)/y + i ( v̇_2 + (ẋ v_1 − ẏ v_2)/y ).

Let us now recall a well-known expression (see e.g. [13]) for the Riemann curvature tensor in a manifold of constant sectional curvature. Recall that H has constant sectional curvature K = −1.
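The covariant derivative formula lends itself to a quick numerical check (a sketch with finite differences; the function names are ours): along a vertical geodesic the velocity field is parallel, while a constant horizontal field is not.

```python
import numpy as np

def covariant_derivative(c, v, t, h=1e-6):
    """D v / dt at time t, for a curve c and a vector field v along c in H,
    both given as complex-valued callables; time derivatives are taken by
    central finite differences of step h."""
    cdot = (c(t + h) - c(t - h)) / (2.0*h)
    vdot = (v(t + h) - v(t - h)) / (2.0*h)
    y = c(t).imag
    v1, v2 = v(t).real, v(t).imag
    return (vdot.real - (cdot.real*v2 + cdot.imag*v1)/y
            + 1j*(vdot.imag + (cdot.real*v1 - cdot.imag*v2)/y))

# the velocity field of the vertical geodesic c(t) = i e^t is parallel
# (note that here c' = c, so we can pass the curve as its own velocity) ...
geod = lambda t: 1j*np.exp(t)
acc = covariant_derivative(geod, geod, 0.3)

# ... while a constant horizontal field along the same curve is not
horiz = lambda t: 1.0 + 0.0j
acc_h = covariant_derivative(geod, horiz, 0.3)
```

The first covariant derivative vanishes (up to finite-difference error), the second does not.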
Proposition 5 (Curvature tensor). Let X, Y, Z be three vector fields on a manifold of constant sectional curvature K. The Riemann curvature tensor can be written

R(X, Y)Z = K ( ⟨Y, Z⟩ X − ⟨X, Z⟩ Y ).

For the algorithms described above, we need to be able to compute the geodesic starting from a point p ∈ H at speed u_0 ∈ T_p H, in other words the exponential map u → exp^H_p(u), as well as the geodesic linking two points p and q with the associated initial speed, that is the inverse q → log^H_p(q) of the exponential map. The geodesics of the hyperbolic half-plane are vertical segments and half-circles whose centers lie on the x-axis, as shown in Figure 5, and they can be obtained as images of the vertical geodesic following the y-axis under a Möbius transformation z → (az + b)/(cz + d) with ad − bc = 1. To be complete, we give the proofs of the following three propositions in the appendix.
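In H, Proposition 5 can be checked numerically by recovering K = −1 as the sectional curvature computed from the tensor; a sketch with names of our choosing.

```python
def hyp_inner(u, v, z):
    # metric of H at z = x + iy: <u, v> = (u1 v1 + u2 v2) / y^2
    return (u.real*v.real + u.imag*v.imag) / z.imag**2

def curvature(x_f, y_f, z_f, z, k=-1.0):
    # R(X, Y)Z = K (<Y, Z> X - <X, Z> Y) in constant curvature K
    return k * (hyp_inner(y_f, z_f, z)*x_f - hyp_inner(x_f, z_f, z)*y_f)

# recover the sectional curvature K = -1 from the tensor at some point
z = 0.7 + 2.0j
x_f, y_f = 1.0 + 0.5j, -0.3 + 1.2j
num = hyp_inner(curvature(x_f, y_f, y_f, z), x_f, z)
den = hyp_inner(x_f, x_f, z)*hyp_inner(y_f, y_f, z) - hyp_inner(x_f, y_f, z)**2
```

The ratio num/den is the sectional curvature spanned by the two tangent vectors, here equal to −1.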
Proposition 6 (Geodesics of H and logarithm map). Let z_0 = x_0 + iy_0 and z_1 = x_1 + iy_1 be two elements of H.
• If x_0 = x_1, then the geodesic going from z_0 to z_1 is the vertical segment γ(t) = x_0 + iy(t) with y(t) = y_0 e^{t ln(y_1/y_0)}, and the logarithm map is given by log^H_{z_0}(z_1) = iy_0 ln(y_1/y_0).
• If x_0 ≠ x_1, the geodesic is given by γ(t) = x(t) + iy(t), image of a vertical geodesic by a Möbius transformation whose coefficients can be deduced from the center x_Ω and the radius R of the semi-circle going through z_0 and z_1:

x_Ω = (x_1² + y_1² − x_0² − y_0²) / (2(x_1 − x_0)),   R = √((x_0 − x_Ω)² + y_0²),

a = (x_Ω + R)/√(2R),   b = (x_Ω − R)/√(2R),   c = d = 1/√(2R),

and for all t ∈ [0, 1],

γ(t) = (a iȳ(t) + b) / (c iȳ(t) + d),   where ȳ(t) = ȳ_0 e^{t ln(ȳ_1/ȳ_0)}

and iȳ_k, with ȳ_k = 2R y_k / ((x_Ω + R − x_k)² + y_k²), is the pre-image of z_k for k = 0, 1. The logarithm map is in turn given by

log^H_{z_0}(z_1) = iȳ_0 ln(ȳ_1/ȳ_0) / (c iȳ_0 + d)².

We now give the exponential map in H.
Proposition 7 (Exponential map in H). Let z_0 = x_0 + iy_0 be an element of H and u_0 = ẋ_0 + iẏ_0 a tangent vector. Then the exponential map is given by exp^H_{z_0}(u_0) = γ(1), where
• if ẋ_0 = 0, γ(t) = x_0 + iy_0 e^{t ẏ_0/y_0};
• if ẋ_0 ≠ 0, the coefficients a, b, c, d of the Möbius transformation can be computed as previously from the center x_Ω = x_0 + y_0 ẏ_0/ẋ_0 and the radius R = √((x_0 − x_Ω)² + y_0²) of the semi-circle supporting the geodesic, and for all t ∈ [0, 1],

γ(t) = (a iȳ(t) + b) / (c iȳ(t) + d),   where ȳ(t) = ȳ_0 e^{t ū_0/ȳ_0},

iȳ_0 is the pre-image of z_0 as above, and iū_0 = (c iȳ_0 + d)² u_0 is the pull-back of u_0 to the vertical geodesic.
Finally, we give the expression of parallel transport along a geodesic in the hyperbolic plane.
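Propositions 6 and 7 can be turned into a short numerical sketch; the function names are ours, and the normalization of the Möbius coefficients (ad − bc = 1 with c = d) is one possible choice. The round trip exp ∘ log and the closed-form distance arccosh(1 + |z_1 − z_0|²/(2 y_0 y_1)) provide consistency checks.

```python
import numpy as np

def moebius_coeffs(x_omega, radius):
    # normalized (ad - bc = 1) Moebius map sending the y-axis onto the
    # semi-circle of center x_omega and radius `radius`
    s = np.sqrt(2.0 * radius)
    return (x_omega + radius)/s, (x_omega - radius)/s, 1.0/s, 1.0/s

def preimage_height(x_omega, radius, x, y):
    # the pre-image of x + iy on the vertical geodesic is i*ybar
    return 2.0*radius*y / ((x_omega + radius - x)**2 + y**2)

def hyp_log(z0, z1, eps=1e-12):
    """Initial speed of the geodesic from z0 to z1 (Proposition 6)."""
    x0, y0, x1, y1 = z0.real, z0.imag, z1.real, z1.imag
    if abs(x1 - x0) < eps:                          # vertical geodesic
        return 1j * y0 * np.log(y1 / y0)
    x_omega = (x1**2 + y1**2 - x0**2 - y0**2) / (2.0*(x1 - x0))
    radius = np.hypot(x0 - x_omega, y0)
    a, b, c, d = moebius_coeffs(x_omega, radius)
    yb0 = preimage_height(x_omega, radius, x0, y0)
    yb1 = preimage_height(x_omega, radius, x1, y1)
    return 1j * yb0 * np.log(yb1 / yb0) / (c*1j*yb0 + d)**2

def hyp_exp(z0, u0, eps=1e-12):
    """Endpoint gamma(1) of the geodesic from z0 at speed u0 (Prop. 7)."""
    x0, y0 = z0.real, z0.imag
    if abs(u0.real) < eps:                          # vertical geodesic
        return x0 + 1j * y0 * np.exp(u0.imag / y0)
    x_omega = x0 + y0 * u0.imag / u0.real
    radius = np.hypot(x0 - x_omega, y0)
    a, b, c, d = moebius_coeffs(x_omega, radius)
    yb0 = preimage_height(x_omega, radius, x0, y0)
    ub0 = ((c*1j*yb0 + d)**2 * u0).imag             # pulled-back speed
    w = 1j * yb0 * np.exp(ub0 / yb0)
    return (a*w + b) / (c*w + d)

# round trip: the log from z0 to z1, shot back with the exponential map
z0, z1 = 1.0 + 1.0j, 3.0 + 2.0j
v = hyp_log(z0, z1)
```

The hyperbolic norm |v|/y_0 of the log equals the geodesic distance between z_0 and z_1.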
Proposition 8 (Parallel transport in H). Let t → γ(t) be a curve in H with coordinates x(t), y(t), and u_0 ∈ T_{γ(t_0)}H a tangent vector. The parallel transport of u_0 along γ from t_0 to t is given by

P_{t_0,t}(u_0) = (y(t)/y(t_0)) e^{−iθ(t_0,t)} u_0,   θ(t_0,t) = ∫_{t_0}^{t} ẋ(τ)/y(τ) dτ.

If γ is a vertical segment then θ(t_0, t) = 0, and if it is a portion of a circle, we get

θ(t_0, t) = 2 ( arctan(c ȳ(t)/d) − arctan(c ȳ(t_0)/d) ),

where the coefficients c and d of the Möbius transformation can be computed as explained previously, and γ̄ = iȳ is the pre-image of γ by that transformation. Now that we have these explicit formulas at our disposal, we are able to test the algorithms described above in the simple case where the base manifold M has constant sectional curvature K = −1. Note that computations are further simplified by the existence of a global chart.

5.3.
Spectral estimation of locally stationary radar signals. In radar signal processing, given an observation of a signal, it is useful to estimate the spectrum of the underlying process, as it is indicative of its structure. If we are interested in the temporal modulations of that signal, we can estimate several spectra for the same signal and study their evolution in time. The study of these time-frequency spectra, or spectrograms, is at the heart of micro-Doppler analysis. Here we explain how a time series of spectra can be represented as a curve in the Poincaré polydisk.
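Before turning to the radar data, the parallel transport formula of Proposition 8 can be sketched numerically; the function name is ours, and θ is discretized with a midpoint rule on a sampled curve. Along a geodesic, the transported unit tangent must remain the unit tangent.

```python
import numpy as np

def parallel_transport(xs, ys, u0):
    """Transport u0 from the start to the end of the sampled curve
    gamma = (xs, ys) in H, using P(u0) = (y_end/y_start) e^{-i theta} u0
    with theta = integral of x'(t)/y(t) dt (midpoint discretization)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    theta = np.sum(np.diff(xs) / (0.5*(ys[1:] + ys[:-1])))
    return (ys[-1] / ys[0]) * np.exp(-1j*theta) * u0

# sanity check on a geodesic: transporting the initial hyperbolic unit
# tangent along the unit semi-circle, from angle 3*pi/4 down to pi/4,
# must return the final unit tangent
phi = np.linspace(3*np.pi/4, np.pi/4, 4001)
xs, ys = np.cos(phi), np.sin(phi)
u_start = ys[0] * np.exp(1j*(phi[0] - np.pi/2))   # unit tangent at the start
u_end = parallel_transport(xs, ys, u_start)
```

By construction the formula preserves the hyperbolic norm |u|/y exactly.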
The data we use for this example are synthetic data generated by a simulator of helicopter signatures. Using this simulator, we obtain a series z = (z_1, ..., z_N) ∈ C^N of N complex numbers that simulates the reflected signal received by a fixed radar antenna after sending a burst of N pulses in the direction of a fixed helicopter. Given this vector of N observations, the goal is to study the temporal evolution of the underlying process. To do so, we consider that this process is locally stationary and Gaussian, and we estimate a spectrum for each stationary portion. More precisely, using a sliding window of size n < N, to be adjusted, we estimate a high resolution spectrum for each position of the window of size n on the vector of size N. This gives us a time series of N − n + 1 spectra, which we index by 1 ≤ i ≤ N − n + 1.
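The windowing step can be sketched in a few lines; the sizes N and n and the synthetic signal below are placeholders, not the simulator's output.

```python
import numpy as np

# hypothetical sizes: a burst of N pulses, analysis windows of n samples
N, n = 12, 4
rng = np.random.default_rng(0)
z = rng.standard_normal(N) + 1j*rng.standard_normal(N)  # stand-in for radar data

# one stationary portion z(i) = (z_i, ..., z_{i+n-1}) per window position
windows = [z[i:i+n] for i in range(N - n + 1)]
```

Each of the N − n + 1 overlapping windows is then summarized by one spectrum.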
Let us now explain how we estimate and represent these spectra, each of which corresponds to the observation z^(i) = (z_i, ..., z_{i+n−1}) of a centered stationary Gaussian time series Z^(i). To overcome the low resolution issues of the classical FFT-based spectral estimation methods, Burg suggested in the 1970s an alternative method based on autoregressive processes [10]. Given partial knowledge of the autocorrelation function of a stationary Gaussian process Z, Burg showed that the process which maximizes the entropy, that is, the one which adds the fewest assumptions on the data, is an autoregressive process of the appropriate order. Following this maximum entropy approach, we estimate an autoregressive spectrum for each stationary portion z^(i) using the so-called Burg algorithm, see e.g. [3]. Burg also showed that the second-order statistics of such a process Z can be equivalently represented by its covariance matrix Σ_n = E(ZZ*), a Toeplitz (because of the stationarity) Hermitian Positive Definite (THPD) matrix of size n, or by the so-called reflection coefficients (P_0, µ_1, ..., µ_{n−1}) of the autoregressive model, as there exists a bijection [31], [32]

Ψ : T_n → R_+^* × D^{n−1},   Σ_n → (P_0, µ_1, ..., µ_{n−1}),

between the space T_n of THPD matrices of size n and the product space R_+^* × D^{n−1} where these reflection coefficients live. Here D = {z ∈ C : |z| < 1} is the unit disk of the complex plane. This means that each stationary portion z^(i) of size n (the size of the sliding window) can be equivalently parameterized by its covariance matrix in T_n or by an element of the product space R_+^* × D^{n−1}. We choose to work with the latter representation, because the Fisher metric on centered Gaussian processes induces a convenient metric on the submanifold of stationary centered Gaussian processes in that representation space. Indeed, the Legendre dual of the Fisher information metric [6], defined as the Hessian of minus
Figure 6.
Computation of the mean curve (in black) for 4 sets of 11 curves in the hyperbolic half-plane, constructed from simulated helicopter radar data.
the entropy, i.e. of the potential Φ(Σ_n) = −ln(det Σ_n) − n ln(πe), has a nice expression when Σ_n is Toeplitz in terms of the reflection coefficients (P_0, µ_1, ..., µ_{n−1}) = Ψ(Σ_n) ∈ R_+^* × D^{n−1}, namely

ds² = n (dP_0/P_0)² + Σ_{k=1}^{n−1} (n − k) |dµ_k|² / (1 − |µ_k|²)²,

where we can see the Riemannian metric of the Poincaré disk appear [4]. In other words, equipped with the Legendre dual of the Fisher information metric, or rather its projection on the submanifold of stationary centered Gaussian processes, the space of reflection coefficients becomes the product manifold R_+^* × D^{n−1}, where D is the Poincaré disk. This means that each stationary portion Z^(i) of our radar signal can be parameterized in the product manifold R_+^* × D^{n−1} by a set of coefficients (P_0(i), µ_1(i), ..., µ_{n−1}(i)), and so the entire locally stationary radar signal is represented by a time series in that space, which corresponds to a set of observations of the "real" evolution (P_0(t), µ_1(t), ..., µ_{n−1}(t)) of the locally stationary signal.
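Burg's recursion can be sketched in a few lines; this is a real-valued sketch with names of our choosing (the radar samples are complex, which would require conjugating one factor in the inner products). It returns the power P_0 and the reflection coefficients, each of modulus less than 1 by the Cauchy-Schwarz inequality.

```python
import numpy as np

def burg_reflection(x, order):
    """Estimate (P0, mu_1, ..., mu_order) by Burg's recursion
    (real-valued sketch of the maximum entropy method)."""
    x = np.asarray(x, dtype=float)
    P0 = np.mean(x**2)                  # power of the process
    f, b = x[1:].copy(), x[:-1].copy()  # forward / backward prediction errors
    mus = []
    for _ in range(order):
        # reflection coefficient minimizing the summed error energies
        k = -2.0*np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        mus.append(k)
        f, b = f + k*b, b + k*f
        f, b = f[1:], b[:-1]            # shrink the error supports
    return P0, mus

# AR(1) check: x_t = 0.9 x_{t-1} + e_t has first reflection coefficient -0.9
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.9*x[t-1] + e[t]
P0, mus = burg_reflection(x, 3)
```

Applied to each sliding window of the radar signal, this recursion produces the time series (P_0(i), µ_1(i), ...) described above.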
With this choice of representation, comparing two vectors of radar observations can be carried out by computing the distance between the two corresponding curves in the product manifold R_+^* × D^{n−1}, which, thanks to the product metric, amounts to comparing their components separately, that is, pairwise comparing the evolutions of each reflection coefficient µ_k(t) in the Poincaré disk. More generally, the representation of a vector of radar observations in a Riemannian manifold makes it possible to perform basic statistics on these objects, such as defining the mean, median and variance of a set, or performing classification. This can naturally be useful in target detection as well as target recognition. Here we use the algorithms presented in the previous sections to compute the Fréchet mean μ̄(t) of p curves µ^(1)(t), ..., µ^(p)(t). The Fréchet mean, also called intrinsic mean, is defined by

μ̄ = argmin_{µ ∈ M} Σ_{i=1}^{p} d(µ, µ^(i))²,

if M is the space of curves in D and d the distance on M. Since it is defined as a minimizer of a functional, this intrinsic mean can be found by a gradient descent type procedure, summarized as follows.
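The gradient descent procedure can be sketched on points of H (for curves, the same loop runs with the exponential and logarithm maps of the space of curves). The iteration is z ← exp_z(mean of log_z(p_i)); the functions below use an arclength parametrization of the geodesic semi-circles, an equivalent alternative to the Möbius formulas of Propositions 6 and 7, and all names are ours.

```python
import numpy as np

def hlog(z0, z1, eps=1e-12):
    """Log map of H (arclength parametrization of the geodesics)."""
    x0, y0, x1, y1 = z0.real, z0.imag, z1.real, z1.imag
    if abs(x1 - x0) < eps:                      # vertical geodesic
        return 1j * y0 * np.log(y1 / y0)
    xo = (x1**2 + y1**2 - x0**2 - y0**2) / (2.0*(x1 - x0))
    r = np.hypot(x0 - xo, y0)
    u0, u1 = np.arctanh((x0 - xo)/r), np.arctanh((x1 - xo)/r)
    return (u1 - u0) * (y0/r) * (y0 - 1j*(x0 - xo))

def hexp(z0, v, eps=1e-12):
    """Exponential map of H, same parametrization."""
    x0, y0 = z0.real, z0.imag
    if abs(v.real) < eps:                       # vertical geodesic
        return x0 + 1j * y0 * np.exp(v.imag / y0)
    xo = x0 + y0 * v.imag / v.real
    r = np.hypot(x0 - xo, y0)
    u = np.arctanh((x0 - xo)/r) + v.real * r / y0**2
    return xo + r*np.tanh(u) + 1j*r/np.cosh(u)

def frechet_mean(points, steps=50):
    """Gradient descent z <- exp_z(mean of log_z(p_i))."""
    z = points[0]
    for _ in range(steps):
        grad = sum(hlog(z, p) for p in points) / len(points)
        z = hexp(z, grad)
    return z

# by symmetry, the mean of -1+i and 1+i lies on the y-axis, at height sqrt(2)
mean = frechet_mean([-1.0 + 1.0j, 1.0 + 1.0j])
```

The fixed points of this iteration are exactly the critical points of the Fréchet functional.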
Using our radar data, we compute the Fréchet mean for 4 sets of 11 curves tracing the evolution of one reflection coefficient of 11 signals; we choose to represent only one of the coefficients µ_k, 1 ≤ k ≤ n − 1. These signals are generated using the helicopter signature simulator, and correspond to the observation at 4 different times of 11 helicopters which differ only in their rotor rotation speeds. We consider small variations (less than 1%) around the mean value of 390 RPM (rotations per minute), and show the obtained curves in Figure 6. These curves are shown in the hyperbolic half-plane representation, which is equivalent to the Poincaré disk in terms of geometry. In each case, the red extremity of the colormap corresponds to the helicopter with the highest rotation speed, the blue extremity to the lowest rotation speed, and the mean curve is shown in black. This can be used to construct a "reference signature" for a given type of helicopter, for target recognition purposes.

6.
Conclusion. We have studied a first-order Sobolev metric G on the space of manifold-valued curves and its induced geometry. The metric G can be obtained as the pullback of a natural metric on the tangent bundle TM' by the square root velocity function, and as such it is reparameterization invariant. The special role that G gives to the starting points of the curves induces a fiber bundle structure over the manifold M, seen as the set of starting points of the curves, for which the projection is a Riemannian submersion. The geodesic distance induced by G takes into account the distance between the origins of the curves in M and the L²-distance between the SRV representations, without parallel transporting the computations to a unique tangent plane as in [17] and [35]. This should allow us to take into account a greater amount of information on the geometry of the manifold M.
Using the pullback form of G, we obtain the geodesic equations in the SRV coordinates, as well as the Jacobi field equations, which allow us to construct the optimal deformation between two curves by geodesic shooting. Once we can compute geodesics in the space of curves, we can also compute the mean of a set of curves, and conceivably more. We considered the case where the base manifold M is the hyperbolic half-plane, whose geometry coincides with the Fisher geometry of Gaussian densities, and tested the algorithms on simulated radar data for the spectral analysis of locally stationary Gaussian radar signals. Future work will include applications on the sphere for the statistical analysis of large trajectories.