
Neural dynamic mode decomposition for end-to-end modeling of nonlinear dynamics

*Corresponding author: Tomoharu Iwata
Abstract
Koopman spectral analysis has attracted attention for understanding nonlinear dynamical systems: by lifting observations with a nonlinear function, nonlinear dynamics can be analyzed in a linear regime. For this analysis, an appropriate lifting function must be found. Although several methods have been proposed for estimating the lifting function with neural networks, the existing methods train the networks without spectral analysis. In this paper, we propose neural dynamic mode decomposition, in which neural networks are trained such that the forecast error is minimized when the dynamics are modeled by spectral decomposition in the lifted space. With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition, enabling end-to-end learning of Koopman spectral analysis. When information on the frequencies or growth rates of the dynamics is available, the proposed method can exploit it as a regularizer for training. We also propose an extension of our approach for the case where observations are influenced by exogenous control time-series. Our experiments demonstrate the effectiveness of the proposed method in terms of eigenvalue estimation and forecast performance.

Mathematics Subject Classification: Primary: 37M10; Secondary: 68T07.
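
A minimal sketch of this end-to-end idea, assuming a PyTorch implementation: an encoder lifts the observations, a linear operator fitted by least squares in the lifted space stands in for the DMD step, and a decoder maps the one-step forecast back to the observation space, so the forecast error can be backpropagated through all three stages. The network sizes, the lifted dimension, and the data tensor X are illustrative assumptions rather than the authors' settings, and a pseudoinverse (computed internally via the SVD) replaces the explicit spectral decomposition for brevity.

```python
import torch
import torch.nn as nn

class NDMDSketch(nn.Module):
    """Minimal sketch (not the authors' code): encoder -> linear dynamics
    in the lifted space -> decoder, trained end-to-end on the forecast error."""

    def __init__(self, obs_dim, lift_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, lift_dim))
        self.decoder = nn.Sequential(nn.Linear(lift_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, obs_dim))

    def forward(self, X):                  # X: (T, obs_dim) observation time series
        Psi = self.encoder(X)              # lifted states psi_1, ..., psi_T
        Psi0, Psi1 = Psi[:-1], Psi[1:]     # snapshot pairs (t, t+1)
        # Least-squares fit of the linear operator in the lifted space;
        # torch.linalg.pinv uses an SVD and keeps this step differentiable.
        A = torch.linalg.pinv(Psi0) @ Psi1
        return self.decoder(Psi0 @ A)      # decoded one-step forecasts of x_{t+1}

# Hypothetical usage with a (T, obs_dim) tensor X of observations:
#   model = NDMDSketch(obs_dim=X.shape[1], lift_dim=16)
#   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
#   loss = torch.mean((model(X) - X[1:]) ** 2)   # forecast error
#   opt.zero_grad(); loss.backward(); opt.step()
```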

Figure 1.  Our model: First, observation vectors $ \{{\bf{x}}_{t}\} $ are encoded into $ \{\mathit{\boldsymbol{\psi}}_{t}\} $ in the lifted space by encoder $ f $. Second, the encoded vectors are forecast by DMD using the SVD and an eigendecomposition. Third, the forecast encoded vectors are decoded back into the observation space by decoder $ g $.
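
The DMD step in Figure 1 is, in the standard formulation, an SVD of the lifted snapshot matrix followed by an eigendecomposition of the reduced operator. The function below is an illustrative NumPy version of that standard computation, not the paper's implementation; the snapshot matrix `Psi` and the truncation rank `r` are assumed inputs.

```python
import numpy as np

def dmd_forecast(Psi, r, steps):
    """SVD-based DMD on lifted snapshots Psi (lift_dim x T), followed by an
    eigendecomposition-based forecast over `steps` steps."""
    Psi0, Psi1 = Psi[:, :-1], Psi[:, 1:]                 # snapshot pairs
    U, S, Vh = np.linalg.svd(Psi0, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]                # rank-r truncation
    A_tilde = U.conj().T @ Psi1 @ Vh.conj().T / S        # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)                  # DMD eigenvalues
    Phi = Psi1 @ Vh.conj().T / S @ W                     # DMD modes
    b = np.linalg.lstsq(Phi, Psi0[:, 0], rcond=None)[0]  # mode amplitudes
    # Forecast in the lifted space: psi_t ~ Phi diag(eigvals)^t b
    return np.stack([Phi @ (eigvals ** t * b) for t in range(steps)], axis=1)
```

In the proposed model these quantities are computed inside the training loop, so the forecast error can flow back through the decomposition to the encoder and decoder parameters.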

Figure 2.  Observation time-series data and the true and estimated eigenvalues for the eigenvalue estimation experiments with (a) observation time-series data, (b) observation time-series data and auxiliary information, and (c) observation and exogenous time-series data
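
The auxiliary information in panel (b) corresponds to the known frequencies or growth rates mentioned in the abstract. One standard way to connect such information to the estimated dynamics (not necessarily the exact regularizer used in the paper) is through the logarithm of the discrete-time eigenvalues: a DMD eigenvalue $ \lambda_{j} $ estimated from data sampled with step $ \Delta t $ corresponds to the continuous-time exponent $ \mu_{j} = \log\lambda_{j}/\Delta t $, whose real part is the growth rate and whose imaginary part is the angular frequency; a regularizer can then penalize, for example, $ \sum_{j}|\mathrm{Im}(\mu_{j})-\omega_{j}^{*}|^{2} $ for known angular frequencies $ \omega_{j}^{*} $.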

Figure 3.  Cylinder wake data for fluid flow forecast

Table 1.  Average mean squared error and its standard error for fluid flow forecast

Method        0.01%                  0.03%                  0.1%
NDMD          0.000211 ± 0.000097    0.000169 ± 0.000129    0.000029 ± 0.000004
DMD           0.332469 ± 0.037414    0.011987 ± 0.004589    0.000292 ± 0.000052
EDMD          0.080878 ± 0.032596    0.001899 ± 0.001318    0.000055 ± 0.000016
KDMD(RBF)     0.027164 ± 0.011372    0.000435 ± 0.000237    0.000173 ± 0.000063
KDMD(Poly)    0.311673 ± 0.054852    0.039735 ± 0.015964    0.002097 ± 0.000647
AEAR          0.027269 ± 0.016137    0.001555 ± 0.001245    0.000100 ± 0.000028
LKIS          0.023177 ± 0.013134    0.001525 ± 0.001257    0.000101 ± 0.000027
AR            59204.80 ± 49533.98    20.59142 ± 17.15388    0.109702 ± 0.087929
NN            0.011549 ± 0.006359    0.005865 ± 0.005476    0.000036 ± 0.000010
LSTM          0.157520 ± 0.045537    0.012063 ± 0.004472    0.000320 ± 0.000042