doi: 10.3934/fods.2020018

Ensemble Kalman Inversion for nonlinear problems: Weights, consistency, and variance bounds

Zhiyan Ding 1, Qin Li 1 and Jianfeng Lu 2

1. Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706, USA
2. Department of Mathematics, Duke University, Durham, NC 27708, USA

* Corresponding author: Zhiyan Ding

Zhiyan Ding and Qin Li are supported in part by NSF CAREER DMS-1750488, NSF TRIPODS 1740707 and the Wisconsin Data Science Initiative. The work of Jianfeng Lu is supported in part by the National Science Foundation via grants DMS-1454939 and DMS-2012286. All three authors thank the two anonymous referees for their very helpful suggestions.

Received: May 2020. Revised: July 2020. Published: November 2020.

Ensemble Kalman Inversion (EnKI) [23] and Ensemble Square Root Filter (EnSRF) [36] are popular sampling methods for obtaining a target posterior distribution. They can be seen as one step (the analysis step) of the data assimilation method Ensemble Kalman Filter [17,3]. Despite their popularity, however, they are not unbiased when the forward map is nonlinear [12,16,25]. Importance Sampling (IS), on the other hand, is unbiased, at the expense of a large variance of the weights, which leads to slow convergence of higher moments.
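For readers less familiar with the analysis step referred to above, the following is a minimal NumPy sketch of the perturbed-observation EnKI update of [23]; the function name and interface are ours for illustration and are not taken from the paper.

```python
import numpy as np

def enki_analysis_step(ensemble, forward_map, y, Gamma, rng=None):
    """One perturbed-observation EnKI analysis step (illustrative sketch).

    ensemble    : (J, d) array of particles u_j
    forward_map : callable mapping a (d,) parameter to a (k,) prediction G(u)
    y           : (k,) observed data
    Gamma       : (k, k) observation-noise covariance
    """
    rng = np.random.default_rng() if rng is None else rng
    J = ensemble.shape[0]
    G = np.array([forward_map(u) for u in ensemble])          # (J, k) predictions

    du = ensemble - ensemble.mean(axis=0)                     # centered particles
    dG = G - G.mean(axis=0)                                   # centered predictions
    Cug = du.T @ dG / J                                       # (d, k) cross-covariance
    Cgg = dG.T @ dG / J                                       # (k, k) prediction covariance

    # Kalman-type gain and perturbed observations y + xi_j, xi_j ~ N(0, Gamma)
    K = Cug @ np.linalg.inv(Cgg + Gamma)                      # (d, k)
    xi = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return ensemble + (y + xi - G) @ K.T                      # updated (J, d) ensemble
```

For a nonlinear forward map, iterating this update does not reproduce the Bayesian posterior exactly, which is precisely the bias that the weighted variants proposed below are designed to remove.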

In this paper we propose WEnKI and WEnSRF, weighted versions of EnKI and EnSRF. They follow the same gradient flow as EnKI/EnSRF, but with weight corrections. Compared with the classical methods, the new methods are unbiased; compared with IS, they have bounded weight variance. Both properties are proved rigorously in this paper. We further discuss the stability of the underlying Fokker-Planck equation, which partially explains why EnKI, despite being inconsistent, occasionally performs well in nonlinear settings. Numerical evidence is presented at the end.
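The weighted variants attach a scalar weight to each particle in addition to moving it along the EnKI/EnSRF flow. The specific weight dynamics are derived in the paper and are not reproduced here; the sketch below only shows the generic bookkeeping used to evaluate a weighted ensemble: normalizing the weights, forming weighted moment estimates, and computing the weight-variance diagnostic $\mathrm{Var}(Nw(t))$ reported in Figures 4 and 7.

```python
import numpy as np

def weighted_ensemble_diagnostics(ensemble, weights, pmax=5):
    """Generic weighted-ensemble bookkeeping (not the WEnKI/WEnSRF weight dynamics).

    Returns the weighted moment estimates E|u|^p for p = 1..pmax and Var(N w),
    which vanishes for the equally weighted EnKI/EnSRF ensembles.
    """
    ensemble = np.asarray(ensemble, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                        # normalize weights to sum to one
    r = np.linalg.norm(ensemble.reshape(ensemble.shape[0], -1), axis=1)  # |u_j| per particle
    moments = {p: float(np.sum(w * r**p)) for p in range(1, pmax + 1)}
    var_Nw = float(np.var(len(w) * w))                     # Var(N w) diagnostic
    return moments, var_Nw
```

An equally weighted ensemble gives $\mathrm{Var}(Nw) = 0$; for IS the same quantity can grow rapidly, which is the weight degeneracy the bounded-variance result addresses.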

Citation: Zhiyan Ding, Qin Li, Jianfeng Lu. Ensemble Kalman Inversion for nonlinear problems: Weights, consistency, and variance bounds. Foundations of Data Science, doi: 10.3934/fods.2020018
References:
[1] C. Andrieu, N. Freitas, A. Doucet and M. Jordan, An introduction to MCMC for machine learning, Machine Learning, 50 (2003), 5-43.
[2] A. Bain and D. Crisan, Fundamentals of Stochastic Filtering, Stochastic Modelling and Applied Probability, Springer, New York, 2008. doi: 10.1007/978-0-387-76896-0.
[3] K. Bergemann and S. Reich, A localization technique for ensemble Kalman filters, Quarterly Journal of the Royal Meteorological Society, 136 (2010), 701-707. doi: 10.1002/qj.591.
[4] K. Bergemann and S. Reich, A mollified ensemble Kalman filter, Quarterly Journal of the Royal Meteorological Society, 136 (2010), 1636-1643. doi: 10.1002/qj.672.
[5] D. Blömker, C. Schillings, P. Wacker and S. Weissmann, Well posedness and convergence analysis of the ensemble Kalman inversion, Inverse Problems, 35 (2019), 085007. doi: 10.1088/1361-6420/ab149c.
[6] D. Blömker, C. Schillings and P. Wacker, A strongly convergent numerical scheme from ensemble Kalman inversion, SIAM Journal on Numerical Analysis, 56 (2018), 2537-2562. doi: 10.1137/17M1132367.
[7] J. A. Cañizo, J. A. Carrillo and J. Rosado, A well-posedness theory in measures for some kinetic models of collective motion, Mathematical Models and Methods in Applied Sciences, 21 (2011), 515-539. doi: 10.1142/S0218202511005131.
[8] N. Chada, A. Stuart and X. Tong, Tikhonov regularization within ensemble Kalman inversion, SIAM Journal on Numerical Analysis, 58 (2019), 1263-1294. doi: 10.1137/19M1242331.
[9] N. Chada and X. T. Tong, Convergence acceleration of ensemble Kalman inversion in nonlinear settings, preprint, arXiv:1911.02424.
[10] N. K. Chada, M. A. Iglesias, L. Roininen and A. M. Stuart, Parameterizations for ensemble Kalman inversion, Inverse Problems, 34 (2018), 055009. doi: 10.1088/1361-6420/aab6d9.
[11] J. de Wiljes, S. Reich and W. Stannat, Long-time stability and accuracy of the ensemble Kalman-Bucy filter for fully observed processes and small measurement noise, SIAM Journal on Applied Dynamical Systems, 17 (2018), 1152-1181. doi: 10.1137/17M1119056.
[12] Z. Ding and Q. Li, Ensemble Kalman inversion: Mean-field limit and convergence analysis, preprint, arXiv:1908.05575.
[13] Z. Ding and Q. Li, Ensemble Kalman sampling: Mean-field limit and convergence analysis, preprint, arXiv:1910.12923.
[14] A. Doucet, N. De Freitas and N. Gordon, An Introduction to Sequential Monte Carlo Methods, Springer, New York, 2001. doi: 10.1007/978-1-4757-3437-9_1.
[15] A. Doucet, N. De Freitas and N. Gordon, Sequential Monte Carlo Methods in Practice, Springer, New York, 2001. doi: 10.1007/978-1-4757-3437-9.
[16] O. G. Ernst, B. Sprungk and H.-J. Starkloff, Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems, SIAM/ASA Journal on Uncertainty Quantification, 3 (2015), 823-851. doi: 10.1137/140981319.
[17] G. Evensen, Data Assimilation: The Ensemble Kalman Filter, Springer-Verlag, Berlin, Heidelberg, 2006. doi: 10.1007/978-3-642-03711-5.
[18] N. Fournier and A. Guillin, On the rate of convergence in Wasserstein distance of the empirical measure, Probability Theory and Related Fields, 162 (2015), 707-738. doi: 10.1007/s00440-014-0583-7.
[19] A. Garbuno-Inigo, F. Hoffmann, W. Li and A. M. Stuart, Interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler, SIAM Journal on Applied Dynamical Systems, 19 (2020), 412-441. doi: 10.1137/19M1251655.
[20] A. Garbuno-Inigo, N. Nüsken and S. Reich, Affine invariant interacting Langevin dynamics for Bayesian inference, preprint, arXiv:1912.02859.
[21] J. Geweke, Bayesian inference in econometric models using Monte Carlo integration, Econometrica, 57 (1989), 1317-1339. doi: 10.2307/1913710.
[22] M. Herty and G. Visconti, Continuous limits for constrained ensemble Kalman filter, Inverse Problems, 36 (2020), 075006. doi: 10.1088/1361-6420/ab8bc5.
[23] M. A. Iglesias, K. J. H. Law and A. M. Stuart, Ensemble Kalman methods for inverse problems, Inverse Problems, 29 (2013), 045001. doi: 10.1088/0266-5611/29/4/045001.
[24] T. Lange and W. Stannat, On the continuous time limit of the ensemble Kalman filter, preprint, arXiv:1901.05204. doi: 10.1090/mcom/3588.
[25] K. J. H. Law, H. Tembine and R. Tempone, Deterministic mean-field ensemble Kalman filtering, SIAM Journal on Scientific Computing, 38 (2016), A1251-A1279. doi: 10.1137/140984415.
[26] D. M. Livings, S. L. Dance and N. K. Nichols, Unbiased ensemble square root filters, Physica D: Nonlinear Phenomena, 237 (2008), 1021-1028. doi: 10.1016/j.physd.2008.01.005.
[27] Y. Lu, J. Lu and J. Nolen, Accelerating Langevin sampling with birth-death, preprint, arXiv:1905.09863.
[28] J. Martin, L. Wilcox, C. Burstedde and O. Ghattas, A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion, SIAM Journal on Scientific Computing, 34 (2012), A1460-A1487. doi: 10.1137/110845598.
[29] Y. M. Marzouk, H. N. Najm and L. A. Rahn, Stochastic spectral methods for efficient Bayesian solution of inverse problems, Journal of Computational Physics, 224 (2007), 560-586. doi: 10.1016/j.jcp.2006.10.010.
[30] A. Muntean, J. Rademacher and A. Zagaris, Macroscopic and Large Scale Phenomena: Coarse Graining, Mean Field Limits and Ergodicity, LAMM, 3, Springer, Cham, 2016. doi: 10.1007/978-3-319-26883-5.
[31] N. Papadakis, E. Mémin, A. Cuzol and N. Gengembre, Data assimilation with the weighted ensemble Kalman filter, Tellus A, 62 (2010), 673-697.
[32] S. Reich, A dynamical systems framework for intermittent data assimilation, BIT Numerical Mathematics, 51 (2011), 235-249. doi: 10.1007/s10543-010-0302-4.
[33] S. Reich and C. Cotter, Probabilistic Forecasting and Bayesian Data Assimilation, Cambridge University Press, 2015. doi: 10.1017/CBO9781107706804.
[34] S. Reich and S. Weissmann, Fokker-Planck particle systems for Bayesian inference: Computational approaches, preprint, arXiv:1911.10832.
[35] C. Schillings and A. M. Stuart, Analysis of the ensemble Kalman filter for inverse problems, SIAM Journal on Numerical Analysis, 55 (2017), 1264-1290. doi: 10.1137/16M105959X.
[36] M. Tippett, J. Anderson, C. Bishop, T. Hamill and J. Whitaker, Ensemble square root filters, Monthly Weather Review, 131 (2003), 1485-1490. doi: 10.1175/1520-0493(2003)131<1485:ESRF>2.0.CO;2.


Figure 1.  Example 1, from top left to bottom right: WEnKI; WEnSRF; WEnKF (as shown in Remark 1 and equation (44)); IS; EnKI; EnSRF. (All evolution equations use $\Delta t = 10^{-3}$.)
Figure 2.  Example 2, from top left to bottom right: WEnKI; WEnSRF; WEnKF; IS; EnKI; EnSRF.
Figure 3.  Example 3, from top left to bottom right: WEnKI; WEnSRF; WEnKF; IS; EnKI; EnSRF.
Figure 4.  Example 3: $\log(\mathrm{Var}(Nw(t))+1)$ for WEnKI, WEnSRF and IS.
Figure 5.  Example 4, from top left to bottom right: WEnKI; WEnSRF; WEnKF; IS; EnKI; EnSRF.
Figure 6.  Example 5, from top left to bottom right: WEnKI; WEnSRF; WEnKF; IS; EnKI; EnSRF.
Figure 7.  Example 5: $\log(\mathrm{Var}(Nw(t))+1)$ for WEnKI, WEnSRF and IS.
Table 1.  Error of moments estimation in Example 3

                           WEnKI                  WEnSRF
Moment                     Est.      Rel. Err.    Est.      Rel. Err.
$\mathbb{E}|u|^1=3.84$     3.82      0.0056       3.88      0.0098
$\mathbb{E}|u|^2=14.90$    14.73     0.0114       15.19     0.0192
$\mathbb{E}|u|^3=58.22$    57.19     0.0177       59.86     0.0281
$\mathbb{E}|u|^4=229.36$   223.79    0.0243       237.75    0.0366
$\mathbb{E}|u|^5=911.22$   882.83    0.0312       951.95    0.0447

                           EnKI                   EnSRF
Moment                     Est.      Rel. Err.    Est.      Rel. Err.
$\mathbb{E}|u|^1=3.84$     3.69      0.0413       3.70      0.0391
$\mathbb{E}|u|^2=14.90$    13.66     0.0833       13.73     0.0785
$\mathbb{E}|u|^3=58.22$    50.90     0.1258       51.35     0.1181
$\mathbb{E}|u|^4=229.36$   190.68    0.1687       193.24    0.1575
$\mathbb{E}|u|^5=911.22$   718.31    0.2117       732.17    0.1965

                           WEnKF                  IS
Moment                     Est.      Rel. Err.    Est.      Rel. Err.
$\mathbb{E}|u|^1=3.84$     3.40      0.1156       3.52      0.0858
$\mathbb{E}|u|^2=14.90$    11.65     0.2181       12.37     0.1699
$\mathbb{E}|u|^3=58.22$    40.22     0.3093       43.57     0.2517
$\mathbb{E}|u|^4=229.36$   139.72    0.3908       153.56    0.3305
$\mathbb{E}|u|^5=911.22$   488.51    0.4639       541.71    0.4055
Table 2.  Simulation time in Examples 1-3

Case        WEnKI      WEnSRF     EnKI       EnSRF
Example 1   0.362 s    0.197 s    0.138 s    0.178 s
Example 2   50.041 s   41.739 s   26.564 s   18.518 s
Example 3   0.198 s    0.115 s    0.120 s    0.072 s
Table 3.  Error of moments estimation in Example 5

                           WEnKI                  WEnSRF
Moment                     Est.      Rel. Err.    Est.      Rel. Err.
$\mathbb{E}|u|^1=3.32$     3.30      0.0055       3.32      0.0017
$\mathbb{E}|u|^2=11.16$    10.99     0.0147       11.19     0.0023
$\mathbb{E}|u|^3=38.05$    36.99     0.0279       38.12     0.0019
$\mathbb{E}|u|^4=131.45$   125.53    0.0451       131.47    0.0001
$\mathbb{E}|u|^5=460.56$   429.99    0.0664       459.16    0.0030

                           EnKI                   EnSRF
Moment                     Est.      Rel. Err.    Est.      Rel. Err.
$\mathbb{E}|u|^1=3.32$     2.96      0.1084       3.28      0.0112
$\mathbb{E}|u|^2=11.16$    9.07      0.1872       11.04     0.0111
$\mathbb{E}|u|^3=38.05$    29.17     0.2332       38.25     0.0053
$\mathbb{E}|u|^4=131.45$   100.32    0.2369       137.43    0.0455
$\mathbb{E}|u|^5=460.56$   379.73    0.1755       516.22    0.1208

                           WEnKF                  IS
Moment                     Est.      Rel. Err.    Est.      Rel. Err.
$\mathbb{E}|u|^1=3.32$     3.40      0.1658       3.24      0.0245
$\mathbb{E}|u|^2=11.16$    7.72      0.3077       10.50     0.0592
$\mathbb{E}|u|^3=38.05$    21.74     0.4287       34.10     0.1037
$\mathbb{E}|u|^4=131.45$   61.62     0.5313       110.81    0.1571
$\mathbb{E}|u|^5=460.56$   175.99    0.6179       360.27    0.2178
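Tables 1 and 3 report, for each method, estimated moments ("Est.") next to reference values and the corresponding relative errors ("Rel. Err."). A minimal sketch of how such rows would typically be assembled from a (possibly weighted) ensemble is given below; the reference moments passed in are placeholders for whatever ground truth the experiments use, not values taken from the paper.

```python
import numpy as np

def moment_table_rows(ensemble, weights, reference_moments):
    """Build (p, estimate, relative error) rows in the style of Tables 1 and 3.

    reference_moments : dict {p: reference value of E|u|^p}, e.g. obtained from
    a large direct sample of the target posterior (placeholder assumption).
    """
    ensemble = np.asarray(ensemble, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                        # normalized weights
    r = np.linalg.norm(ensemble.reshape(ensemble.shape[0], -1), axis=1)  # |u_j|
    rows = []
    for p, ref in sorted(reference_moments.items()):
        est = float(np.sum(w * r**p))                      # weighted estimate of E|u|^p
        rows.append((p, est, abs(est - ref) / ref))        # "Est." and "Rel. Err." columns
    return rows
```

For the unweighted EnKI/EnSRF columns the same routine applies with uniform weights $w_j = 1/N$.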