# American Institute of Mathematical Sciences

doi: 10.3934/fods.2020017

## Multilevel Ensemble Kalman Filtering based on a sample average of independent EnKF estimators

1. Chair of Mathematics for Uncertainty Quantification, RWTH Aachen University, Aachen, Germany
2. Applied Mathematics and Computational Sciences, KAUST, Thuwal, Saudi Arabia

* Corresponding author: Gaukhar Shaimerdenova

Received: September 2020. Published: November 2020.

We introduce a new multilevel ensemble Kalman filter method (MLEnKF) which consists of a hierarchy of independent samples of ensemble Kalman filters (EnKF). This new MLEnKF method is fundamentally different from the pre-existing method introduced by Hoel, Law and Tempone in 2016, and it is suitable for extensions towards multi-index Monte Carlo based filtering methods. Robust theoretical analysis and supporting numerical examples show that, under appropriate regularity assumptions, the MLEnKF method has better complexity than plain-vanilla EnKF in the large-ensemble and fine-resolution limits, for weak approximations of quantities of interest. The method is developed for discrete-time filtering problems with finite-dimensional state space and linear observations polluted by additive Gaussian noise.
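The setting named in the abstract (linear observations polluted by additive Gaussian noise) admits a compact EnKF analysis step with perturbed observations. The following is an illustrative sketch of that standard update, not the paper's implementation; the function name and array layout are our own conventions.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """One EnKF analysis step with perturbed observations.

    ensemble : (N, d) array of prediction-state particles
    H        : (m, d) linear observation operator
    y        : (m,) observation
    R        : (m, m) observation-noise covariance
    """
    N, d = ensemble.shape
    # Sample covariance of the prediction ensemble
    C = np.cov(ensemble, rowvar=False).reshape(d, d)
    # Kalman gain K = C H^T (H C H^T + R)^{-1}
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
    # Each particle assimilates its own perturbed copy of the observation
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

With a scalar state and a tight observation, the updated ensemble mean moves most of the way from the prior mean toward the observation, as the Kalman gain dictates.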

Citation: Håkon Hoel, Gaukhar Shaimerdenova, Raúl Tempone. Multilevel Ensemble Kalman Filtering based on a sample average of independent EnKF estimators. Foundations of Data Science, doi: 10.3934/fods.2020017
##### Figures:
Illustration, based on the nonlinear dynamics (5), of the contracting property, which can produce almost identical prediction densities (middle panels) for the Bayes filter and the mean-field EnKF (MFEnKF) even when the preceding updated densities differ notably
One prediction-update iteration of the MLEnKF estimator described in Section 2.4.1. Green and pink ovals represent fine- and coarse-level prediction-state particles, respectively, sharing the same initial condition and driving noise $\omega^{\ell}$, and the respective squares represent fine- and coarse-level updated-state particles sharing the same perturbed observations. The MLEnKF estimator is obtained from iid copies of pairwise-coupled samples, cf. (9)
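The coupled-pair construction in this caption can be summarized schematically: on each level, iid copies of pairwise-coupled fine/coarse EnKF runs are averaged, and the level-wise mean differences are summed telescopically. A minimal sketch of that sample-average structure, where `qoi_pair` is a hypothetical user-supplied helper standing in for one coupled EnKF evaluation (not the paper's code):

```python
import numpy as np

def mlenkf_estimate(qoi_pair, L, M, rng):
    """Telescoping sample average over levels 0..L.

    qoi_pair(level, rng) -> (fine, coarse): QoI values from one
    pairwise-coupled fine/coarse EnKF sample sharing driving noise
    and perturbed observations; coarse is taken to be 0 on level 0.
    M[l] is the number of iid coupled samples on level l.
    """
    estimate = 0.0
    for l in range(L + 1):
        # Average the fine-minus-coarse differences of iid coupled samples
        samples = [qoi_pair(l, rng) for _ in range(M[l])]
        estimate += np.mean([fine - coarse for fine, coarse in samples])
    return estimate
```

Because each level contributes only the difference between its fine and coarse estimators, the sum telescopes to (a noisy version of) the finest-level quantity of interest.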
Top row: comparison of runtime versus RMSE for the QoIs mean (left) and variance (right) over $\mathcal{N} = 10$ observation times for the problem in Section 3.1. The solid-crossed line represents the new MLEnKF; the solid-asterisk line represents the original MLEnKF, with the bottom reference triangle indicating slope $\frac{1}{2}$; the solid-bulleted line represents EnKF, with the upper reference triangle indicating slope $\frac{1}{3}$. Bottom row: similar plots over $\mathcal{N} = 20$ observation times
Realization of the double-well SDE from Section 3.2 over $\mathcal{N} = 20$ observation times (solid line), with observations (dots)
Left column: well transition of the EnKF ensemble when the measurements are located in the opposite well. Right column: particle paths of the corresponding EnKF ensemble during the well transition, and the resulting kernel density estimates of the EnKF prediction and update densities (Section 3.2). For practical purposes, EnKF with only 7 particles is not very robust; we use so few particles here solely to obtain a visually clear illustration of the ensemble transition
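A qualitatively similar double-well path can be reproduced with an Euler–Maruyama discretization of a gradient SDE. The potential used below, $V(x) = x^4/4 - x^2/2$, is a standard double-well choice assumed purely for illustration; Section 3.2 of the paper defines its own model and parameters.

```python
import numpy as np

def simulate_double_well(x0, T, dt, sigma, rng):
    """Euler-Maruyama path of dX = -V'(X) dt + sigma dW with the
    assumed double-well potential V(x) = x^4/4 - x^2/2."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = x[k] - x[k] ** 3  # -V'(x)
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

The drift pulls the state toward the wells at $\pm 1$, while sufficiently strong noise occasionally pushes it over the barrier at the origin, producing the well transitions illustrated in the figure.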
Top row: comparison of runtime versus RMSE for the QoIs mean (left) and variance (right) over $\mathcal{N} = 10$ observation times for the problem in Section 3.2. The solid-crossed line represents the new MLEnKF; the solid-asterisk line represents the original MLEnKF, with the bottom reference triangle indicating slope $\frac{1}{2}$; the solid-bulleted line represents EnKF, with the upper reference triangle indicating slope $\frac{1}{3}$. Bottom row: similar plots over $\mathcal{N} = 20$ observation times
(a) The inequality $\min(\beta s,1)<s$ (green line). (b) The equality $\min(\beta s,1) = s$ (blue line). (c) The inequality $\min(\beta s,1)>s$ (red line). The dashed lines correspond to the function $y(s) = \min(\beta s, 1)$ and the dotted lines to $y(s) = \beta s$, for the different cases of the value of $\beta$
