doi: 10.3934/jcd.2021006

Computing Lyapunov functions using deep neural networks

Lars Grüne

Mathematical Institute, University of Bayreuth, 95440 Bayreuth, Germany

Received: September 2020. Revised: November 2020. Published: December 2020.

We propose a deep neural network architecture and associated loss functions for a training algorithm that computes approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed to approximate a Lyapunov function to a fixed accuracy grows only polynomially in the state dimension; that is, the proposed approach overcomes the curse of dimensionality. We further show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
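The training scheme itself is not spelled out on this page. As a minimal sketch of the type of procedure the abstract describes, the code below trains a small network $ W(x;\theta) $ so that $ W $ is positive away from the origin and its orbital derivative $ DW(x;\theta)f(x) $ is negative on a sampling domain. The example system, network sizes, loss weights, and the use of PyTorch are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical two-dimensional example system (not from the paper).
def f(x):
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([-x1 + x2, -x1**3 - x2], dim=1)

# Candidate Lyapunov function W(x; theta) as a small feedforward network.
W = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                  nn.Linear(32, 32), nn.Tanh(),
                  nn.Linear(32, 1))
opt = torch.optim.Adam(W.parameters(), lr=1e-3)

for step in range(5000):
    x = 2.0 * torch.rand(256, 2) - 1.0      # sample training points in [-1, 1]^2
    x.requires_grad_(True)
    w = W(x)
    # orbital derivative DW(x) f(x) via automatic differentiation
    grad_w = torch.autograd.grad(w.sum(), x, create_graph=True)[0]
    orbital = (grad_w * f(x)).sum(dim=1, keepdim=True)
    r2 = x.pow(2).sum(dim=1, keepdim=True)
    # penalize positive orbital derivative; enforce W(x) >= c|x|^2 and W(0) = 0
    # (the weights 0.1 are illustrative, not values from the paper)
    loss = (torch.relu(orbital + 0.1 * r2).mean()
            + torch.relu(0.1 * r2 - w).mean()
            + W(torch.zeros(1, 2)).pow(2).sum())
    opt.zero_grad()
    loss.backward()
    opt.step()
```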

Citation: Lars Grüne. Computing Lyapunov functions using deep neural networks. Journal of Computational Dynamics, doi: 10.3934/jcd.2021006


Figure 1.  Neural network with $ 1 $ and $ 2 $ hidden layers
Figure 2.  Neural network for Lyapunov functions, $ f\in F_1^{d_{\max}} $
Figure 3.  Neural network for Lyapunov functions, $ f\in F_2^{d_{\max}} $
Figure 4.  Approximate Lyapunov function $ W(\cdot;\theta^*) $ (solid) and its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (12) computed with loss function (11)
Figure 5.  Attempt to compute a Lyapunov function $ W(\cdot;\theta^*) $ (solid) with its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (12) with loss function (9)
Figure 6.  Approximate Lyapunov function $ W(\cdot;\theta^*) $ (solid) and its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (13) on $ (x_2,x_8) $-plane
Figure 7.  Approximate Lyapunov function $ W(\cdot;\theta^*) $ (solid) and its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (13) on $ (x_9,x_{10}) $-plane
Figure 8.  Value of approximate Lyapunov function $ W(x(t);\theta^*) $ along trajectories for initial values $ x_0 = (1,1,1,1,1,1,1,1,1,1)^T $, $ (0,1,0,1,0,1,0,1,0,1)^T $, $ (1,0,0,0,0,0,0,0,0,0)^T $ (left to right)
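Figures 2 and 3 refer to a compositional network structure in which the candidate Lyapunov function is assembled from low-dimensional subnetworks. The following is a hedged sketch of that idea only: the coordinate grouping, layer sizes, and the class name CompositionalLyapunovNet are assumptions for illustration, not the architecture shown in the figures.

```python
import torch
import torch.nn as nn

class CompositionalLyapunovNet(nn.Module):
    """Sum of small subnetworks, each acting on at most d_max state components."""
    def __init__(self, dim, d_max=2, hidden=16):
        super().__init__()
        # partition the state indices into blocks of at most d_max components
        self.blocks = [list(range(i, min(i + d_max, dim))) for i in range(0, dim, d_max)]
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(len(b), hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for b in self.blocks
        )

    def forward(self, x):
        # W(x) = sum_i W_i(z_i), where z_i is the i-th group of coordinates
        return sum(net(x[:, b]) for net, b in zip(self.subnets, self.blocks))

W = CompositionalLyapunovNet(dim=10)   # e.g. a ten-dimensional state, as in the examples
print(W(torch.randn(4, 10)).shape)     # -> torch.Size([4, 1])
```

The point of such a structure is that each subnetwork only ever sees at most $ d_{\max} $ inputs, which is what keeps the total neuron count from growing exponentially with the state dimension.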