doi: 10.3934/fods.2021034
Online First

Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them; however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible.


Constrained Ensemble Langevin Monte Carlo

Zhiyan Ding and Qin Li*

Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53705, USA

* Corresponding author: Qin Li

Received: September 2021. Revised: October 2021. Early access: December 2021.

Fund Project: Q.L. acknowledges support from the Vilas Early Career Award. The research of Z.D. and Q.L. is supported in part by NSF grants DMS-1750488 and DMS-2023239, and by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation.

The classical Langevin Monte Carlo (LMC) method draws samples from a target distribution by performing noisy gradient descent on the negative log-density of the target. The method enjoys a fast convergence rate, but its numerical cost can be high because each iteration requires the computation of a gradient. One approach to eliminating the gradient computation is to employ the concept of an "ensemble": a large number of particles are evolved together so that neighboring particles provide gradient information to each other. In this article, we discuss two algorithms that integrate this ensemble feature into LMC, and their associated properties.
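For reference, the update described above is the standard unadjusted Langevin step. Below is a minimal sketch in Python; it is not code from the article, and the names (`grad_log_target`, `step`, `n_iter`) are illustrative:

```python
import numpy as np

def lmc_sample(grad_log_target, x0, step, n_iter, rng=None):
    """Classical (unadjusted) Langevin Monte Carlo.

    One iteration moves the sample along the gradient of the log
    target density and injects Gaussian noise:
        x_{k+1} = x_k + step * grad_log_target(x_k) + sqrt(2 * step) * N(0, I).
    Note that each iteration requires one gradient evaluation, which is
    exactly the cost the ensemble variants aim to remove.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_target(x) + np.sqrt(2.0 * step) * noise
    return x

# Usage: sample from a standard 2-D Gaussian, where grad log p(x) = -x.
samples = np.array([
    lmc_sample(lambda x: -x, x0=np.zeros(2), step=0.05, n_iter=500)
    for _ in range(1000)
])
print(samples.mean(axis=0), samples.std(axis=0))  # roughly 0 and 1
```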

In particular, we find that if one directly replaces the gradient with the ensemble approximation, the resulting algorithm, termed Ensemble Langevin Monte Carlo (EnLMC), is unstable due to a high-variance term. If the gradients are replaced by the ensemble approximations only in a constrained manner, so as to guard against the unstable points, the resulting algorithm, termed Constrained Ensemble Langevin Monte Carlo (CEnLMC), resembles the classical LMC up to an ensemble error while removing most of the gradient computation.
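To make the constrained replacement concrete, here is a sketch under stated assumptions: the gradient surrogate is a generic least-squares fit over the ensemble, and the stability check (`spread > threshold`) is a hypothetical stand-in for the constraint defined in the article. The estimator, the criterion, and all names are illustrative, not the authors' exact construction:

```python
import numpy as np

def ensemble_grad(ensemble, f_vals, x, f_x, reg=1e-8):
    """Least-squares ensemble estimate of grad f at x, where the target
    density is proportional to exp(-f).

    Fits f(x_j) - f(x) ~ g . (x_j - x) over the ensemble members x_j,
    so no analytic gradient of f is needed.
    """
    dX = ensemble - x                      # (N, d) displacements
    dF = f_vals - f_x                      # (N,) increments of f
    C = dX.T @ dX + reg * np.eye(x.size)   # regularized normal equations
    return np.linalg.solve(C, dX.T @ dF)

def constrained_step(x, ensemble, f, grad_f, step, threshold, rng):
    """One constrained-ensemble Langevin update: use the ensemble
    surrogate only when the stability criterion holds, and otherwise
    fall back to the exact gradient; the fallback is what distinguishes
    this from the unconstrained, high-variance version."""
    spread = np.linalg.norm(ensemble - x, axis=1).min()
    if spread > threshold:
        f_vals = np.array([f(xj) for xj in ensemble])
        g = ensemble_grad(ensemble, f_vals, x, f(x))
    else:
        g = grad_f(x)  # constrained fallback: exact gradient at flagged points
    noise = rng.standard_normal(x.shape)
    return x - step * g + np.sqrt(2.0 * step) * noise
```

When the criterion holds, the update uses only function values of $f$ (which ensemble members can share), avoiding the gradient evaluation; the fallback protects the unstable points mentioned above.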

Citation: Zhiyan Ding, Qin Li. Constrained Ensemble Langevin Monte Carlo. Foundations of Data Science, doi: 10.3934/fods.2021034


Figure 1. Example 1: Evolution of samples using CEnLMC, $ N = 10^4 $.
Figure 2. Example 1: Evolution of samples using LMC and MALA, $ N = 10^4 $.
Figure 3. Example 1: Evolution of $ \mathcal{R}_m $ for $ N = 2\times10^3, 6\times10^3 $, and $ 10^4 $.
Figure 4. Example 2: Evolution of samples using CEnLMC, $ N = 10^4 $.
Figure 5. Example 2: Evolution of samples using LMC and MALA, $ N = 10^4 $.
Figure 6. Example 2: Evolution of $ \mathcal{R}_m $ with $ m $ for $ N = 2\times10^3, 6\times10^3 $, and $ 10^4 $.