We introduce a new multilevel ensemble Kalman filter method (MLEnKF) that consists of a hierarchy of independent samples of ensemble Kalman filters (EnKF). This new MLEnKF method is fundamentally different from the preexisting method introduced by Hoel, Law and Tempone in 2016, and it is suitable for extensions towards multi-index Monte Carlo based filtering methods. Robust theoretical analysis and supporting numerical examples show that, under appropriate regularity assumptions, the MLEnKF method has better complexity than plain-vanilla EnKF in the large-ensemble and fine-resolution limits, for weak approximations of quantities of interest. The method is developed for discrete-time filtering problems with finite-dimensional state space and linear observations polluted by additive Gaussian noise.
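For orientation, the mechanism behind such hierarchies is the standard multilevel Monte Carlo telescoping decomposition (cf. [14, 20]); the MLEnKF estimator itself is defined in equation (9) of the paper. Writing $ u_\ell $ for the state approximation at resolution level $ \ell $ and $ g $ for a quantity of interest, one uses $ \mathbb{E}[g(u_L)] = \mathbb{E}[g(u_0)] + \sum_{\ell=1}^{L} \mathbb{E}\big[g(u_\ell) - g(u_{\ell-1})\big] $ and approximates the $ \ell $-th correction term by an independent average over $ M_\ell $ iid coupled pairs $ (u_\ell^{m}, u_{\ell-1}^{m}) $ that share their randomness, so that most samples are drawn at the cheap coarse levels and only a few at the expensive fine levels.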
Figure 2. One prediction-update iteration of the MLEnKF estimator described in Section 2.4.1. Green and pink ovals represent fine- and coarse-level prediction-state particles, respectively, sharing the same initial condition and driving noise $ \omega^{\ell} $, and the respective squares represent fine- and coarse-level updated-state particles sharing the same perturbed observations. The MLEnKF estimator is obtained from iid copies of such pairwise-coupled samples, cf. (9)
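As an illustration of the coupling depicted in Figure 2, the following is a minimal sketch (not the paper's implementation) of one pairwise-coupled prediction-update step with perturbed observations, for a linear observation operator $ H $ and Gaussian observation noise with covariance $ \Gamma $. The forward maps `psi_fine` and `psi_coarse` are hypothetical stand-ins for the fine- and coarse-level dynamics; the full MLEnKF estimator averages quantities of interest over iid copies of such coupled pairs as in (9).

```python
import numpy as np

def enkf_update(V_hat, y, H, Gamma, eta):
    """Perturbed-observation EnKF update of a prediction ensemble V_hat
    (columns are particles); eta holds the shared observation perturbations."""
    M = V_hat.shape[1]
    A = V_hat - V_hat.mean(axis=1, keepdims=True)     # ensemble anomalies
    C = A @ A.T / (M - 1)                             # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + Gamma)  # Kalman gain
    Y = y[:, None] + eta                              # perturbed observations
    return V_hat + K @ (Y - H @ V_hat)

def coupled_prediction_update(V_f, V_c, psi_fine, psi_coarse, y, H, Gamma, rng):
    """One prediction-update iteration of a pairwise-coupled fine/coarse pair:
    both levels share the driving noise omega and the perturbations eta."""
    d, M = V_f.shape
    omega = rng.standard_normal((d, M))               # shared driving noise
    eta = np.linalg.cholesky(Gamma) @ rng.standard_normal((y.size, M))
    V_hat_f = psi_fine(V_f, omega)                    # fine-level prediction
    V_hat_c = psi_coarse(V_c, omega)                  # coarse-level prediction
    return (enkf_update(V_hat_f, y, H, Gamma, eta),
            enkf_update(V_hat_c, y, H, Gamma, eta))
```

Each level computes its own sample covariance and Kalman gain, while the shared $ \omega $ and $ \eta $ keep the two levels strongly correlated, which is what makes the level differences small.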
Figure 3. Top row: comparison of runtime versus RMSE for the QoIs mean (left) and variance (right) over $ \mathcal{N} = 10 $ observation times for the problem in Section 3.1. The solid-crossed line represents MLEnKF, the solid-asterisk line represents the original MLEnKF (with the lower reference triangle indicating the slope $ \frac{1}{2} $), and the solid-bulleted line represents EnKF (with the upper reference triangle indicating the slope $ \frac{1}{3} $). Bottom row: the corresponding plots over $ \mathcal{N} = 20 $ observation times
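As a reading aid for these plots (assuming runtime is proportional to computational cost): a reference slope of $ \frac{1}{2} $ on the log-log axes corresponds to $ \mathrm{RMSE} \propto \mathrm{runtime}^{-1/2} $, i.e. a cost of order $ \varepsilon^{-2} $ to reach accuracy $ \varepsilon $, whereas the slope $ \frac{1}{3} $ corresponds to $ \mathrm{RMSE} \propto \mathrm{runtime}^{-1/3} $, i.e. a cost of order $ \varepsilon^{-3} $.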
Figure 5. Left column: Well transition of the EnKF ensemble when the measurements are located in the opposite well. Right column: Animation of the particle paths of the corresponding EnKF ensemble during the well-transition, and the resulting kernel density estimations of the EnKF prediction and update densities (Section 3.2). For practical purposes, EnKF with only 7 particles is not very robust. We use so few particles in this computation for the sole purpose of obtaining a visually clear illustration of the ensemble transition
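To make the double-well setting concrete, here is a minimal sketch of simulating a small ensemble of double-well SDE paths with Euler-Maruyama and forming a kernel density estimate of the final ensemble, in the spirit of the densities shown in the right column of Figure 5. The drift $ x - x^3 $ (from the potential $ V(x) = x^4/4 - x^2/2 $), the noise level, and the time horizon are assumptions made for illustration, not the parameters of Section 3.2; only the ensemble size of 7 is taken from the caption above.

```python
import numpy as np
from scipy.stats import gaussian_kde

def drift(x):
    # Assumed double-well drift -V'(x) for V(x) = x**4/4 - x**2/2.
    return x - x**3

def euler_maruyama(x0, T, n_steps, sigma, rng):
    """Propagate an ensemble (1D array x0) of double-well SDE paths."""
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    for _ in range(n_steps):
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        path.append(x.copy())
    return np.array(path)              # shape (n_steps + 1, ensemble size)

rng = np.random.default_rng(0)
paths = euler_maruyama(x0=-np.ones(7), T=2.0, n_steps=400, sigma=0.5, rng=rng)
kde = gaussian_kde(paths[-1])          # kernel density estimate of the final ensemble
grid = np.linspace(-2.0, 2.0, 201)
density = kde(grid)                    # evaluate the estimated density on a plotting grid
```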
Figure 6. Top row: comparison of runtime versus RMSE for the QoIs mean (left) and variance (right) over $ \mathcal{N} = 10 $ observation times for the problem in Section 3.2. The solid-crossed line represents MLEnKF, the solid-asterisk line represents the original MLEnKF (with the lower reference triangle indicating the slope $ \frac{1}{2} $), and the solid-bulleted line represents EnKF (with the upper reference triangle indicating the slope $ \frac{1}{3} $). Bottom row: the corresponding plots over $ \mathcal{N} = 20 $ observation times
Figure 7. (a) The inequality $ \min(\beta s,1)<s $ (green line). (b) The equality $ \min(\beta s,1) = s $ (blue line). (c) The inequality $ \min(\beta s,1)>s $ (red line). The dashed lines correspond to the function $ y(s) = \min(\beta s, 1) $ and the dotted lines to the function $ y(s) = \beta s $, for the different cases of the value of $ \beta $
[1] S. I. Aanonsen, G. Nævdal, D. S. Oliver, A. C. Reynolds and B. Vallès, The ensemble Kalman filter in reservoir engineering - a review, SPE Journal, 14 (2009), 393-412. doi: 10.2118/117274-PA.
[2] A. Beskos, A. Jasra, K. J. H. Law, Y. Marzouk and Y. Zhou, Multilevel sequential Monte Carlo with dimension-independent likelihood-informed proposals, SIAM/ASA Journal on Uncertainty Quantification, 6 (2018), 762-786. doi: 10.1137/17M1120993.
[3] A. Beskos, A. Jasra, K. Law, R. Tempone and Y. Zhou, Multilevel sequential Monte Carlo samplers, Stochastic Processes and their Applications, 127 (2017), 1417-1440. doi: 10.1016/j.spa.2016.08.004.
[4] J. Bezanson, A. Edelman, S. Karpinski and V. B. Shah, Julia: A fresh approach to numerical computing, SIAM Review, 59 (2017), 65-98. doi: 10.1137/141000671.
[5] D. Blömker, C. Schillings, P. Wacker and S. Weissmann, Well posedness and convergence analysis of the ensemble Kalman inversion, Inverse Problems, 35 (2019). doi: 10.1088/1361-6420/ab149c.
[6] G. Burgers, P. J. van Leeuwen and G. Evensen, Analysis scheme in the ensemble Kalman filter, Monthly Weather Review, 126 (1998), 1719-1724. doi: 10.1175/1520-0493(1998)126<1719:ASITEK>2.0.CO;2.
[7] A. Chernov, H. Hoel, K. J. H. Law, F. Nobile and R. Tempone, Multilevel ensemble Kalman filtering for spatio-temporal processes, preprint, arXiv: 1710.07282.
[8] N. D. Conrad, L. Helfmann, J. Zonker, S. Winkelmann and C. Schütte, Human mobility and innovation spreading in ancient times: A stochastic agent-based simulation approach, EPJ Data Science, 7 (2018), 24.
[9] J. de Wiljes, S. Reich and W. Stannat, Long-time stability and accuracy of the ensemble Kalman–Bucy filter for fully observed processes and small measurement noise, SIAM Journal on Applied Dynamical Systems, 17 (2018), 1152-1181. doi: 10.1137/17M1119056.
[10] T. J. Dodwell, C. Ketelsen, R. Scheichl and A. L. Teckentrup, A hierarchical multilevel Markov chain Monte Carlo algorithm with applications to uncertainty quantification in subsurface flow, SIAM/ASA Journal on Uncertainty Quantification, 3 (2015), 1075-1108. doi: 10.1137/130915005.
[11] O. G. Ernst, B. Sprungk and H. Starkloff, Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems, SIAM/ASA Journal on Uncertainty Quantification, 3 (2015), 823-851. doi: 10.1137/140981319.
[12] G. Evensen, Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics, Journal of Geophysical Research: Oceans, 99(C5) (1994), 10143-10162. doi: 10.1029/94JC00572.
[13] K. Fossum, T. Mannseth and A. S. Stordal, Assessment of multilevel ensemble-based data assimilation for reservoir history matching, Computational Geosciences, 17 (2019), 1-23. doi: 10.1007/s10596-019-09911-x.
[14] M. B. Giles, Multilevel Monte Carlo path simulation, Operations Research, 56 (2008), 607-617. doi: 10.1287/opre.1070.0496.
[15] C. Graham and D. Talay, Stochastic Simulation and Monte Carlo Methods: Mathematical Foundations of Stochastic Simulation, Stochastic Modelling and Applied Probability, 68, Springer, 2013. doi: 10.1007/978-3-642-39363-1.
[16] A. Gregory and C. J. Cotter, A seamless multilevel ensemble transform particle filter, SIAM Journal on Scientific Computing, 39 (2017), A2684-A2701. doi: 10.1137/16M1102021.
[17] A. Gregory, C. J. Cotter and S. Reich, Multilevel ensemble transform particle filtering, SIAM Journal on Scientific Computing, 38 (2016), A1317-A1338. doi: 10.1137/15M1038232.
[18] A. Haji-Ali, F. Nobile and R. Tempone, Multi-index Monte Carlo: When sparsity meets sampling, Numerische Mathematik, 132 (2016), 767-806. doi: 10.1007/s00211-015-0734-5.
[19] A. Haji-Ali and R. Tempone, Multilevel and multi-index Monte Carlo methods for the McKean–Vlasov equation, Statistics and Computing, 28 (2018), 923-935. doi: 10.1007/s11222-017-9771-5.
[20] S. Heinrich, Multilevel Monte Carlo methods, in Large-Scale Scientific Computing, Springer, 2001, 58-67. doi: 10.1007/3-540-45346-6_5.
[21] H. Hoel, J. Häppölä and R. Tempone, Construction of a mean square error adaptive Euler–Maruyama method with applications in multilevel Monte Carlo, in Monte Carlo and Quasi-Monte Carlo Methods, Springer, 2016, 29-86. doi: 10.1007/978-3-319-33507-0_2.
[22] H. Hoel, K. J. H. Law and R. Tempone, Multilevel ensemble Kalman filtering, SIAM Journal on Numerical Analysis, 54 (2016), 1813-1839. doi: 10.1137/15M100955X.
[23] H. Hoel, E. von Schwerin, A. Szepessy and R. Tempone, Adaptive multilevel Monte Carlo simulation, in Numerical Analysis of Multiscale Computations, Springer, 2012, 217-234. doi: 10.1007/978-3-642-21943-6_10.
[24] H. Hoel, E. von Schwerin, A. Szepessy and R. Tempone, Implementation and analysis of an adaptive multilevel Monte Carlo algorithm, Monte Carlo Methods and Applications, 20 (2014), 1-41. doi: 10.1515/mcma-2013-0014.
[25] P. L. Houtekamer and H. L. Mitchell, Data assimilation using an ensemble Kalman filter technique, Monthly Weather Review, 126 (1998), 796-811. doi: 10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.
[26] P. L. Houtekamer, H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek and B. Hansen, Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations, Monthly Weather Review, 133 (2005), 604-620. doi: 10.1175/MWR-2864.1.
[27] A. Jasra, K. Kamatani, K. J. H. Law and Y. Zhou, Multilevel particle filters, SIAM Journal on Numerical Analysis, 55 (2017), 3068-3096. doi: 10.1137/17M1111553.
[28] R. E. Kalman, A new approach to linear filtering and prediction problems, Journal of Basic Engineering, 82 (1960), 35-45. doi: 10.1115/1.3662552.
[29] E. Kalnay, Atmospheric Modeling, Data Assimilation and Predictability, Cambridge University Press, 2003. doi: 10.1017/CBO9780511802270.
[30] D. T. B. Kelly, K. J. H. Law and A. M. Stuart, Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time, Nonlinearity, 27 (2014), 2579. doi: 10.1088/0951-7715/27/10/2579.
[31] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Applications of Mathematics (New York), 23, Springer-Verlag, Berlin, 1992. doi: 10.1007/978-3-662-12616-5.
[32] E. Kwiatkowski and J. Mandel, Convergence of the square root ensemble Kalman filter in the large ensemble limit, SIAM/ASA Journal on Uncertainty Quantification, 3 (2015), 1-17. doi: 10.1137/140965363.
[33] T. Lange and W. Stannat, On the continuous time limit of the ensemble Kalman filter, preprint, arXiv: 1901.05204. doi: 10.1090/mcom/3588.
[34] J. Latz, I. Papaioannou and E. Ullmann, Multilevel sequential Monte Carlo for Bayesian inverse problems, Journal of Computational Physics, 368 (2018), 154-178. doi: 10.1016/j.jcp.2018.04.014.
[35] K. J. H. Law, H. Tembine and R. Tempone, Deterministic mean-field ensemble Kalman filtering, SIAM Journal on Scientific Computing, 38 (2016), A1251-A1279. doi: 10.1137/140984415.
[36] F. Le Gland, V. Monbet and V. Tran, Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering (eds. D. Crisan and B. Rozovskii), Oxford University Press, 2011, 598-631.
[37] J. Mandel, L. Cobb and J. D. Beezley, On the convergence of the ensemble Kalman filter, Applications of Mathematics, 56 (2011), 533-541. doi: 10.1007/s10492-011-0031-2.
[38] P. Del Moral, A. Jasra, K. J. H. Law and Y. Zhou, Multilevel sequential Monte Carlo samplers for normalizing constants, ACM Transactions on Modeling and Computer Simulation, 27 (2017), 20. doi: 10.1145/3092841.
[39] B. Peherstorfer, K. Willcox and M. Gunzburger, Optimal model management for multifidelity Monte Carlo estimation, SIAM Journal on Scientific Computing, 38 (2016), A3163-A3194. doi: 10.1137/15M1046472.
[40] A. Popov, C. Mou, T. Iliescu and A. Sandu, A multifidelity ensemble Kalman filter with reduced order control variates, preprint, arXiv: 2007.00793.
[41] B. V. Rosić, A. Kučerová, J. Sýkora, O. Pajonk, A. Litvinenko and H. G. Matthies, Parameter identification in a probabilistic setting, Engineering Structures, 50 (2013), 179-196.
[42] C. Schillings and A. M. Stuart, Analysis of the ensemble Kalman filter for inverse problems, SIAM Journal on Numerical Analysis, 55 (2017), 1264-1290. doi: 10.1137/16M105959X.
[43] C. Schillings and A. M. Stuart, Convergence analysis of ensemble Kalman inversion: The linear, noisy case, Applicable Analysis, 97 (2018), 107-123. doi: 10.1080/00036811.2017.1386784.
[44] C. Schütte and M. Sarich, Metastability and Markov State Models in Molecular Dynamics, Courant Lecture Notes in Mathematics, 24, American Mathematical Society, 2013. doi: 10.1090/cln/024.
[45] A. Szepessy, R. Tempone and G. E. Zouraris, Adaptive weak approximation of stochastic differential equations, Communications on Pure and Applied Mathematics, 54 (2001), 1169-1214. doi: 10.1002/cpa.10000.
[46] X. T. Tong, A. J. Majda and D. Kelly, Nonlinear stability and ergodicity of ensemble based Kalman filters, Nonlinearity, 29 (2016), 657. doi: 10.1088/0951-7715/29/2/657.
Figure 1. Illustration, based on the nonlinear dynamics (5), of the contracting property which can produce almost identical prediction densities (middle panels) for the Bayes filter and MFEnKF even when the preceding updated densities differ notably
Figure 4. Realization of the double-well SDE from Section 3.2 over time