doi: 10.3934/dcdss.2021103
Online First

Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them; however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible.


ISALT: Inference-based schemes adaptive to large time-stepping for locally Lipschitz ergodic systems

Xingjie Helen Li 1, Fei Lu 2,*, Felix X.-F. Ye 3

1. Department of Mathematics and Statistics, University of North Carolina at Charlotte, 9201 Univ City Blvd., Charlotte, NC 28023, USA

2. Department of Mathematics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA

3. Department of Mathematics and Statistics, State University of New York at Albany, Earth Science 110, 1400 Washington Avenue, Albany, NY 12222, USA

* Corresponding author: Fei Lu

Received: February 2021. Revised: June 2021. Early access: September 2021.

Fund Project: XL is supported by NSF DMS CAREER-1847770. FL is supported by NSF DMS-1913243 and NSF DMS-1821211. FY is supported by an AMS-Simons travel grant.

Efficient simulation of SDEs is essential in many applications, particularly for ergodic systems that demand accurate simulation of both short-time dynamics and large-time statistics. However, locally Lipschitz SDEs often require special treatment, such as implicit schemes with small time-steps, to accurately simulate their ergodic measures. We introduce a framework for constructing inference-based schemes adaptive to large time-steps (ISALT) from data, achieving a reduction in simulation time by several orders of magnitude. The key is the statistical learning of an approximation to the infinite-dimensional discrete-time flow map. We explore the use of numerical schemes (such as Euler-Maruyama, a hybrid RK4, and an implicit scheme) to derive informed basis functions, leading to a parameter inference problem. We introduce a scalable algorithm that estimates the parameters by least squares, and we prove the convergence of the estimators as the data size increases.

We test ISALT on three non-globally Lipschitz SDEs: a 1D double-well potential, a 2D multiscale gradient system, and the 3D stochastic Lorenz equation with degenerate noise. Numerical results show that ISALT can tolerate time-steps an order of magnitude larger than those of the plain numerical schemes, and it reaches optimal accuracy in reproducing the invariant measure when the time-step is medium-large.
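In the notation introduced in Table 1 below, the learning problem can be sketched as follows; this is only an outline of the idea, and the precise parametric families and estimators are those of Eqs. (2.5)–(2.7) cited in Algorithm 1:
$$
\frac{{\bf X}_{t_{n+1}}-{\bf X}_{t_n}}{\delta} = \mathcal{F}\big({\bf X}_{t_n},\,{\bf B}_{[t_n,\,t_{n+1})}\big) \approx \widetilde F^{\delta}\big(c^{\delta},{\bf X}_{t_n},\Delta{\bf B}_{t_n}\big), \qquad
\widehat{c^{\,\delta,N,M}} = \arg\min_{c}\sum_{m=1}^{M}\sum_{n=0}^{N-1}\Big|\tfrac{1}{\delta}\big({\bf X}^{(m)}_{t_{n+1}}-{\bf X}^{(m)}_{t_n}\big)-\widetilde F^{\delta}\big(c,{\bf X}^{(m)}_{t_n},\Delta{\bf B}^{(m)}_{t_n}\big)\Big|^{2}.
$$
The figure captions below write $ \widehat{c_1^{{\delta}, N,M}} $ for the first component of this least-squares estimator.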

Citation: Xingjie Helen Li, Fei Lu, Felix X.-F. Ye. ISALT: Inference-based schemes adaptive to large time-stepping for locally Lipschitz ergodic systems. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021103
Figure 1.  Schematic plot of inferring an explicit scheme with a large time-step
Figure 2.  Large-time statistics for the 1D double-well potential. (a) TVD between the empirical invariant densities (PDFs) of the inferred schemes and the reference PDF from data. (b) and (c): comparison of PDFs and ACFs between IS-RK4 (with $ c_0 $ excluded) and the reference data
Figure 3.  1D double-well potential: Convergence of estimators in IS-RK4 with $ c_0 $ excluded. (a) The relative error of the estimator $ \widehat{c_1^{{\delta}, N,M}} $ with $ {\delta} = 80\times \Delta t $ converges at an order about $ (MN)^{-1/2} $, matching Theorem 3.5. (b) Left column: The coefficients depend on the time-step $ {\delta} = {\mathrm{Gap}}\times \Delta t $, with $ c_1 $ being almost 1 and $ c_2 $ being close to linear in $ {\delta} $ until $ {\delta}>0.08 $. The error bars, which are too narrow to be seen, are the standard deviations of the single-trajectory estimators from the $ M $-trajectory estimator. Right column: The residual decays at an order $ O({\delta}^{1/2}) $, matching Theorem 3.6
Figure 4.  Large-time statistics for the 2D gradient system. (a) TVD between the $ x_1 $ marginal invariant densities (PDFs) of the inferred schemes and the reference PDF from data. (b) and (c): comparison of PDFs and ACFs between IS-SSBE (with $ c_0 $ excluded) and the reference data
Figure 5.  2D gradient system: Convergence of estimators in IS-SSBE with $ c_0 $ excluded. (a) The relative error of the estimator $ \widehat{c_1^{{\delta}, N,M}} $ with $ {\delta} = 120 \Delta t $ converges at an order of about $ (MN)^{-1/2} $, matching Theorem 3.5. (b) Left column: The estimators of $ c_1, c_2 $ are almost linear in $ {\delta} $. Right column: The residual changes little as $ {\delta} $ decreases, because IS-SSBE is not a parametrization of an explicit scheme (thus, Theorem 3.6 does not apply)
Figure 6.  2D gradient system: Convergence of estimators in IS-RK4 with $ c_0 $ excluded. (a) The relative error of the estimator $ \widehat{c_1^{{\delta}, N,M}} $ with $ {\delta} = 120 \Delta t $ converges at an order about $ (MN)^{-1/2} $, matching Theorem 3.5. (b) Left column: The estimators of $ c_1, c_2 $ are constant for all $ {\delta} $. Right column: The residual decays at an order $ O({\delta}^{1/2}) $, matching Theorem 3.6
Figure 7.  Large-time statistics of $ x_1 $ for the stochastic Lorenz system. (a) TVD between the $ x_1 $ marginal invariant densities (PDFs) of the inferred schemes and the reference PDF from data. (b) and (c): comparison of PDFs and ACFs between IS-RK4 (with $ c_0 $ included) and the reference data
Figure 8.  ACF and PDF of $ x_3 $ in the stochastic Lorenz system. As in the other examples, IS-RK4 (with $ c_0 $ included) reproduces the PDF and the ACF best when the time-step is medium-large, while plain RK4 and IS-EM blow up even when $ {\mathrm{Gap}} = 20 $
Figure 9.  The 3D stochastic Lorenz system: Convergence of estimators in IS-RK4 with $ c_0 $ included. (a) The relative error of the estimator $ \widehat{c_1^{{\delta}, N,M}} $ with $ {\delta} = 240 \Delta t = 0.12 $ converges at an order of about $ (MN)^{-1/2} $, matching Theorem 3.5. (b) Left column: The estimators of $ c_0, c_1, c_2 $ vary little until $ {\delta}>0.12 $. The vertical dashed line marks the optimal time gap. Right column: The residuals decay at orders slightly higher than $ O({\delta}^{1/2}) $
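The TVD reported in Figures 2, 4, and 7 is the total variation distance between empirical invariant densities estimated from long trajectories. A minimal sketch of that diagnostic in Python, assuming histogram density estimates on a shared grid (the function name and binning choices are illustrative, not taken from the paper's code):

import numpy as np

def empirical_tvd(samples_a, samples_b, bins=100):
    # TVD between two empirical densities, estimated by histograms on a common grid.
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(samples_a, bins=edges, density=True)
    q, _ = np.histogram(samples_b, bins=edges, density=True)
    dx = edges[1] - edges[0]
    return 0.5 * np.sum(np.abs(p - q)) * dx

# Example usage: tvd = empirical_tvd(x_reference, x_inferred_scheme),
# where x_reference and x_inferred_scheme are long scalar trajectories.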
Table 1.  Notations
Notation Description
$ {{\bf X}}_t $ and $ {{\bf B}}_t $ true state process and original stochastic force
$ f({{\bf X}}_t) $, $ \sigma\in \mathbb{R}^{d\times m} $ locally Lipschitz drift and diffusion matrix
$ dt $ time-step used to generate data
$ {\delta}= {\mathrm{Gap}} \times dt $ time-step for inferred scheme, $ {\mathrm{Gap}}\in \{ 1, 2, 4, 10, 20, 40,\ldots\} $
$ t_i = i{\delta} $ discrete time instants of data
$ \{{{\bf X}}_{t_0:t_N}^{(m)}, {{\bf B}}_{t_0:t_N}^{(m)}\}_{m=1}^M $ Data: $ M $ independent paths of $ {{\bf X}} $ and $ {{\bf B}} $ at discrete-times
$ \mathcal{F}\left({{\bf X}}_{t_i},\, {{\bf B}}_{[t_{i}, \, t_{i+1})}\right) $ true flow map representing $ ({{\bf X}}_{t_{i+1}}-{{\bf X}}_{t_i})/{\delta} $
$ {F}^{\delta}({{\bf X}}_{t_n},\Delta {{\bf B}}_{t_n}) $ approximate flow map using only $ {{\bf X}}_{t_n} $, $ \Delta {{\bf B}}_{t_n} = {{\bf B}}_{t_{n+1}}-{{\bf B}}_{t_{n}} $
$ \widetilde F^{\delta}\left(c^{\delta}, {{\bf X}}_{t_n},\Delta {{\bf B}}_{t_n} \right) $ parametric approximate flow map
$ c^{\delta}=(c_0^{\delta},\dots,c_p^{\delta}) $ parameters to be estimated for the inferred scheme
$ \eta_n $ and $ \sigma_{\eta}^{\delta} $ iid $ N(0, I_d) $ and covariance, representing regression residual
EM and IS-EM Euler-Maruyama and inferred scheme (IS) parametrizing it
HRK4 and IS-RK4 hybrid RK4 and inferred scheme parametrizing RK4
SSBE and IS-SSBE split-step stochastic backward Euler and IS parametrizing it
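Read together, the table entries describe a one-step inferred scheme of the form below; this is a plausible reading of the notation, with the exact basis functions and residual scaling given by (2.5)–(2.6):
$$
{\bf X}_{t_{n+1}} = {\bf X}_{t_n} + \delta\,\widetilde F^{\delta}\big(c^{\delta},{\bf X}_{t_n},\Delta{\bf B}_{t_n}\big) + \sigma_{\eta}^{\delta}\,\eta_n, \qquad \eta_n\sim N(0,I_d)\ \text{iid},
$$
where $ \widetilde F^{\delta} $ is a linear-in-parameters combination of basis functions derived from the EM, hybrid RK4, or SSBE increments, yielding IS-EM, IS-RK4, and IS-SSBE, respectively.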
Algorithm 1.  Inference-based schemes adaptive to large time-stepping (ISALT): detailed algorithm
Input: Full model; a high-fidelity solver preserving the invariant measure.
Output: Estimated parametric scheme
1: Generate data: solve the system with the high-fidelity solver, which uses a small time-step $ dt $; downsample to get time series with $ {\delta}= \mathrm{Gap}\times dt $. Denote the data, consisting of $ M $ independent trajectories on $ [0,N{\delta}] $, by $ \{{{\bf X}}_{t_0:t_N}^{(m)}, {{\bf B}}_{t_0:t_N}^{(m)}\}_{m=1}^M $ with $ t_i= i{\delta} $.
2: Pick a parametric form approximating the flow map (2.1) as in (2.5)–(2.6).
3: Estimate parameters $ c_{0:p}^{\delta} $ and $ \sigma_\eta $ as in (2.7).
4: Model selection: run the inferred scheme for cross-validation, and test the consistency of the estimators.
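A minimal, self-contained sketch of these four steps for a hypothetical 1D double-well SDE $ dX_t = f(X_t)\,dt + \sigma\,dB_t $ with $ f(x) = -x(x^2-1) $. The IS-EM-type parametric form $ \widetilde F^{\delta} = c_0 + c_1 f(X) $, the residual scaling, and all numerical values below are illustrative assumptions, not the paper's choices:

import numpy as np

rng = np.random.default_rng(0)
sigma, dt, Gap = 0.5, 1e-3, 40
M, N = 10, 1000                          # trajectories, coarse steps per trajectory
delta = Gap * dt                         # large time-step of the inferred scheme

def f(x):                                # drift of the (assumed) double-well potential
    return -x * (x**2 - 1.0)

# Step 1: generate data with a fine Euler-Maruyama solver, then downsample by Gap.
X = np.zeros((M, N + 1))
dB = np.zeros((M, N))                    # coarse Brownian increments over each delta
for m in range(M):
    x = rng.standard_normal()
    X[m, 0] = x
    for n in range(N):
        db = 0.0
        for _ in range(Gap):
            dw = np.sqrt(dt) * rng.standard_normal()
            x += f(x) * dt + sigma * dw
            db += dw
        X[m, n + 1] = x
        dB[m, n] = db

# Steps 2-3: least-squares fit of the coarse increments on the basis {1, f(X_n)},
# after removing the known noise increment sigma*dB_n (an IS-EM-type parametrization).
y = ((X[:, 1:] - X[:, :-1] - sigma * dB) / delta).ravel()
Phi = np.column_stack([np.ones(y.size), f(X[:, :-1]).ravel()])
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
sigma_eta = delta * (y - Phi @ c).std()  # scaling chosen to match the update below
print("estimated c0, c1:", c, "residual scale:", sigma_eta)

# Step 4 (use / model selection): the inferred scheme advances with the large step delta:
#   X_{n+1} = X_n + delta*(c[0] + c[1]*f(X_n)) + sigma*dB_n + sigma_eta*eta_n,  eta_n ~ N(0,1).

For small $ {\delta} $ the fitted $ c_1 $ should be close to 1, consistent with the behavior reported in Figure 3.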
Table 2.  Time gap of blow-up for each scheme: plain versus inferred
1D double-well 2D gradient system 3D Lorenz system
Plain RK4 $ {\mathrm{Gap}}=20 $ $ {\mathrm{Gap}}=20 $ $ {\mathrm{Gap}}=10 $
IS-RK4 $ {\mathrm{Gap}}>200 $ $ {\mathrm{Gap}}>200 $ $ {\mathrm{Gap}}>400 $
Plain SSBE $ {\mathrm{Gap}}=40 $ $ {\mathrm{Gap}}=40 $ $ {\mathrm{Gap}}=20 $
IS-SSBE $ {\mathrm{Gap}}>200 $ $ {\mathrm{Gap}}>200 $ $ {\mathrm{Gap}}>400 $
