# American Institute of Mathematical Sciences

doi: 10.3934/fods.2021020
Online First


## Learning landmark geodesics using the ensemble Kalman filter

Andreas Bock and Colin J. Cotter

Department of Mathematics, Imperial College London, South Kensington Campus, London SW7 2AZ, UK

* Corresponding author: Andreas Bock

Received: March 2021. Revised: July 2021. Early access: August 2021.

We study the problem of diffeomorphometric geodesic landmark matching, where the objective is to find a diffeomorphism that, via its group action, maps between two sets of landmarks. It is well known that the motion of the landmarks, and thereby the diffeomorphism, can be encoded by an initial momentum, so that landmark matching can be posed as an optimisation problem over such momenta. The novelty of our work lies in the application of a derivative-free Bayesian inverse method for learning the optimal momentum encoding the diffeomorphic mapping between the template and the target. The method we apply is the ensemble Kalman filter, an extension of the Kalman filter to nonlinear operators. We describe an efficient implementation of the algorithm and show several numerical results for various target shapes.
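The core of such a derivative-free scheme can be sketched as an ensemble Kalman inversion update: each ensemble member is shifted toward the data using empirical cross-covariances in place of derivatives of the forward operator. The sketch below is illustrative only, not the paper's implementation; the function names, the isotropic noise level `gamma`, and the toy linear forward map standing in for the landmark flow are all assumptions.

```python
import numpy as np

def eki_update(U, G, y, gamma=1e-2):
    """One derivative-free ensemble Kalman inversion step.

    U     : (d, J) ensemble of candidate parameters (e.g. initial momenta)
    G     : forward operator mapping a parameter vector to predicted data
    y     : (m,) observed data (e.g. target landmark positions)
    gamma : assumed isotropic observation-noise variance
    """
    d, J = U.shape
    # Evaluate the forward map on every ensemble member: no derivatives of G needed.
    GU = np.column_stack([G(U[:, j]) for j in range(J)])   # (m, J)
    dU = U - U.mean(axis=1, keepdims=True)                 # parameter anomalies
    dG = GU - GU.mean(axis=1, keepdims=True)               # prediction anomalies
    C_ug = dU @ dG.T / J                                   # cross-covariance
    C_gg = dG @ dG.T / J                                   # prediction covariance
    m = len(y)
    # Kalman-type correction: move each member toward the observed data.
    return U + C_ug @ np.linalg.solve(C_gg + gamma * np.eye(m), y[:, None] - GU)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))          # toy linear forward map (illustrative)
    u_true = np.array([1.0, -2.0, 0.5])
    y = A @ u_true
    U = rng.standard_normal((3, 50))         # initial ensemble, J = 50
    for _ in range(20):
        U = eki_update(U, lambda u: A @ u, y)
    misfit = np.linalg.norm(A @ U.mean(axis=1) - y)
    print(misfit)  # data misfit of the ensemble mean shrinks over the iterations
```

In the paper's setting the forward operator would instead integrate the landmark geodesic equations from a candidate initial momentum and return the final landmark positions; the update formula itself is unchanged, which is what makes the method derivative-free.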

Citation: Andreas Bock, Colin J. Cotter. Learning landmark geodesics using the ensemble Kalman filter. Foundations of Data Science, doi: 10.3934/fods.2021020
Figure 1. A matching between landmarks where the geodesics are shown
Figure 2. Template-target configurations for different values of $M$. Left to right: 10, 50, 150. Linear interpolation has been used between the landmarks to improve the visualisation
Figure 3. Log data misfits for $M = N_E = 50$ for different values of $\xi$ using three different targets
Figure 4. Progression of Algorithm 1 for various targets using $M = 10$ and $N_E = 10$. Computation times for 50 iterations: 6s for each configuration
Figure 5. Progression of Algorithm 1 for various targets using $M = 50$ and $N_E = 50$. Computation times for 50 iterations (top to bottom): 2m8s, 2m9s, 1m29s
Figure 6. Progression of Algorithm 1 for various targets using $M = 150$ and $N_E = 100$. Computation times for 50 iterations (top to bottom): 5m22s, 5m23s, 5m23s
Figure 7. Convergence of $E^k$ where $M = 10$
Figure 8. Convergence of $E^k$ where $M = 50$
Figure 9. Convergence of $E^k$ where $M = 150$
Figure 10. Evolution of the relative error $\mathcal{R}^k$ corresponding to the misfits in Figure 7 where $M = 10$
Figure 11. Evolution of the relative error $\mathcal{R}^k$ corresponding to the misfits in Figure 8 where $M = 150$
Figure 12. Evolution of the relative error $\mathcal{R}^k$ corresponding to the misfits in Figure 9 where $M = 50$
Global parameters used for Algorithm 1

| Variable | Value | Description |
| --- | --- | --- |
| $n$ | 50 | Kalman iterations |
| $T$ | 15 | time steps |
| $\tau$ | 1 | landmark size (cf. (2)) |
| $\epsilon$ | 1e-05 | absolute error tolerance |
Relative error at the last iteration of Algorithm 1 for different values of $N_E$ for fixed $M = 10$. The rows correspond to the configurations in Figure 4
Relative error at the last iteration of Algorithm 1 for different values of $N_E$ for fixed $M = 50$. The rows correspond to the configurations in Figure 5
Relative error at the last iteration of Algorithm 1 for different values of $N_E$ for fixed $M = 150$. The rows correspond to the configurations in Figure 6
