Foundations of Data Science, March 2020, 2(1): 55-80. doi: 10.3934/fods.2020004

Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization

1. CEREA, joint laboratory École des Ponts ParisTech and EDF R&D, Université Paris-Est, Champs-sur-Marne, France
2. Nansen Environmental and Remote Sensing Center, Bergen, Norway, and Sorbonne University, CNRS-IRD-MNHN, LOCEAN, Paris, France
3. Department of Meteorology, University of Reading and NCEO, United Kingdom, and Mathematical Institute, Utrecht University, The Netherlands
4. Nansen Environmental and Remote Sensing Center, Bergen, Norway

* Corresponding author: Marc Bocquet

Published March 2020

The reconstruction from observations of high-dimensional chaotic dynamics such as geophysical flows is hampered by (i) the partial and noisy observations that can realistically be obtained, (ii) the need to learn from long time series of data, and (iii) the unstable nature of the dynamics. To achieve such inference over long time series, it has been suggested to combine data assimilation and machine learning in several ways. We show how to unify these approaches from a Bayesian perspective using expectation-maximization and coordinate descent. In doing so, the model, the state trajectory, and the model error statistics are estimated jointly. Implementations and approximations of these methods are discussed. Finally, we successfully test the approach numerically on two relevant low-order chaotic models with distinct identifiability properties.
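The coordinate-descent view of expectation-maximization described in the abstract can be illustrated in the simplest setting where both steps are exact: a one-dimensional linear-Gaussian state-space model. The E-step is then a Kalman filter plus Rauch-Tung-Striebel smoother (the data-assimilation step), and the M-step updates the dynamics coefficient and model-error variance in closed form. This is a minimal sketch under these toy assumptions, not the paper's ensemble/neural-network implementation; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model standing in for the chaotic case:
#   x_k = a x_{k-1} + w_k,  w_k ~ N(0, q)   (a and q are unknown)
#   y_k = x_k + v_k,        v_k ~ N(0, r)   (r is assumed known)
a_true, q_true, r, K = 0.9, 0.1, 0.5, 2000
x = np.zeros(K)
for k in range(1, K):
    x[k] = a_true * x[k - 1] + rng.normal(0.0, np.sqrt(q_true))
y = x + rng.normal(0.0, np.sqrt(r), K)

def e_step(a, q):
    """E-step (data assimilation): Kalman filter + RTS smoother."""
    m, P = np.zeros(K), np.zeros(K)          # filtering moments
    m_pred, P_pred = np.zeros(K), np.zeros(K)
    mf, Pf = 0.0, 1.0                        # prior at k = 0
    for k in range(K):
        if k > 0:                            # forecast step
            mf, Pf = a * mf, a * a * Pf + q
        m_pred[k], P_pred[k] = mf, Pf
        g = Pf / (Pf + r)                    # Kalman gain (H = 1)
        mf, Pf = mf + g * (y[k] - mf), (1.0 - g) * Pf
        m[k], P[k] = mf, Pf
    ms, Ps = m.copy(), P.copy()              # smoothing moments
    C = np.zeros(K)                          # lag-one covariances
    for k in range(K - 2, -1, -1):
        J = P[k] * a / P_pred[k + 1]
        ms[k] = m[k] + J * (ms[k + 1] - m_pred[k + 1])
        Ps[k] = P[k] + J * J * (Ps[k + 1] - P_pred[k + 1])
        C[k + 1] = J * Ps[k + 1]             # cov(x_{k+1}, x_k | y_{1:K})
    return ms, Ps, C

a_est, q_est = 0.5, 1.0                      # deliberately poor first guess
for _ in range(50):                          # EM as coordinate ascent
    ms, Ps, C = e_step(a_est, q_est)
    Exx = Ps[1:] + ms[1:] ** 2               # E[x_k^2], k >= 1
    Exm = Ps[:-1] + ms[:-1] ** 2             # E[x_{k-1}^2]
    Exc = C[1:] + ms[1:] * ms[:-1]           # E[x_k x_{k-1}]
    a_est = Exc.sum() / Exm.sum()            # M-step, closed form
    q_est = (Exx - 2.0 * a_est * Exc + a_est ** 2 * Exm).mean()
```

In the paper's setting the E-step becomes an (iterative) ensemble Kalman smoother and the M-step becomes the training of the surrogate model together with the model-error statistics, but the alternation is the same.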

Citation: Marc Bocquet, Julien Brajard, Alberto Carrassi, Laurent Bertino. Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization. Foundations of Data Science, 2020, 2 (1) : 55-80. doi: 10.3934/fods.2020004


Figure 1.  From top to bottom: representation of the flow rate $\boldsymbol\phi_\mathbf{A}$ with a NN, integration of the flow rate into $\mathbf{f}_\mathbf{A}$ using an explicit integration scheme (here a second-order Runge-Kutta scheme), and $N_\mathrm{c}$-fold composition up to the full resolvent $\mathbf{F}_\mathbf{A}$. $\delta t$ is the integration time step corresponding to the resolvent $\mathbf{f}_\mathbf{A}$
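The construction in Figure 1 can be sketched in a few lines. Here a linear map stands in for the neural-network flow rate $\boldsymbol\phi_\mathbf{A}$ (an assumption for illustration only); one explicit second-order Runge-Kutta (Heun) step of size $\delta t$ gives $\mathbf{f}_\mathbf{A}$, and its $N_\mathrm{c}$-fold composition gives the full resolvent $\mathbf{F}_\mathbf{A}$:

```python
import numpy as np

Nx, dt, Nc = 4, 0.05, 10
A = -0.5 * np.eye(Nx)                 # hypothetical parameters of phi_A

def phi_A(x):
    """Flow rate dx/dt = phi_A(x); a linear map here, a NN in the paper."""
    return A @ x

def f_A(x):
    """One explicit second-order Runge-Kutta (Heun) step of length dt."""
    k1 = phi_A(x)
    k2 = phi_A(x + dt * k1)
    return x + 0.5 * dt * (k1 + k2)

def F_A(x):
    """Full resolvent: Nc-fold composition of f_A, i.e. lead time Nc * dt."""
    for _ in range(Nc):
        x = f_A(x)
    return x

x1 = F_A(np.ones(Nx))   # for this linear A, close to exp(-0.25) per component
```

For the linear stand-in the result can be checked against the exact propagator $e^{N_\mathrm{c}\,\delta t\,\mathbf{A}}$, which is what makes this toy useful as a sanity check.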
Figure 2.  On the left-hand side: properties of the surrogate model obtained from full but noisy observations of the L96 model in the nominal configuration ($L = 4$, $K = 5000$, $\sigma_y = 1$, $N_\mathrm{y} = N_\mathrm{x} = 40$). On the right-hand side: the same for the L05III model ($L = 4$, $K = 5000$, $\sigma_y = 1$, $N_\mathrm{y} = N_\mathrm{x} = 36$). From top to bottom are plotted the FS (forecast skill: NRMSE as a function of lead time, in Lyapunov time), the LS (Lyapunov spectrum: all exponents), and the PSD (power spectral density, in log-log scale). A total of $10$ experiments have been performed for each configuration. The curve of each member is drawn with a thin blue line, while the mean of each indicator over the ensemble is drawn with a thick dashed orange line
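The FS panels amount to computing the NRMSE between a reference and a surrogate trajectory started from the same state, with lead time rescaled by the leading Lyapunov exponent. A sketch on synthetic stand-in trajectories (the function and all names are ours, not the authors'):

```python
import numpy as np

def forecast_skill(ref, sur, dt, lambda1):
    """NRMSE vs lead time (in Lyapunov time) for (n_steps, Nx) trajectories."""
    rmse = np.sqrt(np.mean((ref - sur) ** 2, axis=1))
    nrmse = rmse / np.sqrt(np.mean(ref ** 2))   # normalize by ref magnitude
    lead = np.arange(len(ref)) * dt * lambda1   # rescale time by lambda_1
    return lead, nrmse

# two slightly detuned oscillations as stand-in trajectories (3 components)
t = np.arange(200)[:, None] * 0.05
ref = np.sin(t + np.arange(3))
sur = np.sin(1.02 * t + np.arange(3))           # surrogate drifts in phase
lead, nrmse = forecast_skill(ref, sur, dt=0.05, lambda1=1.66)
```

By construction the NRMSE starts at zero (identical initial state) and grows with lead time, which is the qualitative shape of the FS curves.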
Figure 3.  Same as Figure 2 but for several values of the training window length $ K $. Each curve is the mean over $ 10 $ experiments with different sets of observations. The LS and PSD of the reference models are also plotted for comparison
Figure 4.  On the left-hand side: properties of the surrogate model obtained from full but noisy observations of the L96 model in the nominal configuration ($L = 4$, $K = 5000$, $N_\mathrm{y} = N_\mathrm{x} = 40$) for several values of $\sigma_y$. On the right-hand side: the same for the L05III model ($L = 4$, $K = 5000$, $N_\mathrm{y} = N_\mathrm{x} = 36$) for several values of $\sigma_y$. From top to bottom are plotted the FS (NRMSE as a function of lead time, in Lyapunov time) and the PSD (in log-log scale), averaged over an ensemble of $10$ samples
Figure 5.  On the left-hand side: properties of the surrogate model obtained from partial and noisy observations of the L96 model in the nominal configuration ($L = 4$, $K = 5000$, $\sigma_y = 1$, $N_\mathrm{x} = 40$) where $N_\mathrm{y}$ is varied. On the right-hand side: the same for the L05III model ($L = 4$, $K = 5000$, $\sigma_y = 1$, $N_\mathrm{x} = 36$) where $N_\mathrm{y}$ is varied. From top to bottom are plotted the mean FS (NRMSE as a function of lead time, in Lyapunov time), the mean LS (all exponents), and the mean PSD (in log-log scale). A total of $10$ experiments have been performed for each configuration
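The PSD panels rely on Welch's averaged-periodogram method. A minimal NumPy version (Hann window, half-overlapping segments; the one-sided scaling factor of 2 is omitted, which does not affect peak locations), demonstrated on a synthetic signal rather than the model output:

```python
import numpy as np

def welch_psd(x, fs, nperseg=1024):
    """Welch estimate: average the modified periodograms of Hann-windowed,
    half-overlapping segments (one-sided factor of 2 omitted)."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * np.sum(win ** 2)               # density normalization
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 for s in segs],
                  axis=0) / scale
    return np.fft.rfftfreq(nperseg, 1.0 / fs), psd

fs = 20.0                                       # sampling rate, 1 / dt
t = np.arange(0.0, 500.0, 1.0 / fs)
sig = np.sin(2.0 * np.pi * 1.5 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
freqs, psd = welch_psd(sig, fs)
peak = freqs[np.argmax(psd)]                    # dominant frequency
```

Averaging over segments trades frequency resolution for variance reduction, which is why the PSD curves of the surrogate and reference models can be compared cleanly in log-log scale.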
Table 1.  Scalar indicators for nominal experiments based on L96 and L05III. Key hyperparameters are recalled. The statistics of the indicators are obtained over $10$ samples

| Model  | $N_\mathrm{y}$ | $\sigma_y$ | $K$    | $L$ | $\pi_{1/2}$     | $\sigma_q$                    | $\lambda_1$     |
|--------|----------------|------------|--------|-----|-----------------|-------------------------------|-----------------|
| L96    | $40$           | $1$        | $5000$ | $4$ | $4.56 \pm 0.06$ | $0.08790 \pm 2\times 10^{-5}$ | $1.66 \pm 0.02$ |
| L05III | $36$           | $1$        | $5000$ | $4$ | $4.06 \pm 0.21$ | $0.07720 \pm 2\times 10^{-5}$ | $1.03 \pm 0.05$ |
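The leading Lyapunov exponent $\lambda_1$ reported in the tables can be estimated with the standard Benettin two-trajectory method: propagate a small perturbation alongside the reference orbit and renormalize it at every step. Here it is applied to the true Lorenz-96 model with $F = 8$ (a sketch; the paper applies the diagnostic to the learned surrogate), for which $\lambda_1 \approx 1.67$:

```python
import numpy as np

def l96(x, F=8.0):
    """Tendency of the Lorenz-96 model."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x, dt, f):
    """One fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(0)
dt, n_spinup, n_steps, eps = 0.05, 1000, 10000, 1e-8

x = rng.normal(size=40)
for _ in range(n_spinup):                    # relax onto the attractor
    x = rk4(x, dt, l96)

v = rng.normal(size=40)
xp = x + eps * v / np.linalg.norm(v)         # perturbed twin at distance eps
log_growth = 0.0
for _ in range(n_steps):
    x, xp = rk4(x, dt, l96), rk4(xp, dt, l96)
    d = np.linalg.norm(xp - x)
    log_growth += np.log(d / eps)            # accumulate growth rate
    xp = x + (eps / d) * (xp - x)            # renormalize to distance eps
lambda1 = log_growth / (n_steps * dt)
```

The renormalization keeps the perturbation in the linear regime while its direction aligns with the leading Lyapunov vector, so the time-averaged log-growth converges to $\lambda_1$.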
Table 2.  Scalar indicators for L96 and L05III in their nominal configuration, using either the full or the approximate schemes. The statistics of the indicators are obtained over $10$ samples

| Model  | Scheme      | $\pi_{1/2}$     | $\sigma_q$                    | $\lambda_1$     |
|--------|-------------|-----------------|-------------------------------|-----------------|
| L96    | Approximate | $4.56 \pm 0.06$ | $0.08790 \pm 2\times 10^{-5}$ | $1.66 \pm 0.02$ |
| L96    | Full        | $4.24 \pm 0.07$ | $0.09152$                     | $1.66 \pm 0.02$ |
| L05III | Approximate | $4.06 \pm 0.21$ | $0.07720 \pm 2\times 10^{-5}$ | $1.03 \pm 0.05$ |
| L05III | Full        | $3.97 \pm 0.17$ | $0.08024$                     | $1.03 \pm 0.04$ |
