Journal of Computational Dynamics, April 2021, 8(2): 131-152. doi: 10.3934/jcd.2021006

Computing Lyapunov functions using deep neural networks

Lars Grüne, Mathematical Institute, University of Bayreuth, 95440 Bayreuth, Germany

Received: September 2020. Revised: November 2020. Early access: December 2020. Published: April 2021.

We propose a deep neural network architecture and associated loss functions for a training algorithm that computes approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed to approximate a Lyapunov function to a fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach overcomes the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
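As a rough illustration of this idea, the following sketch (not the paper's specific architecture, loss functions, or examples) trains a small feedforward network $ W(x;\theta) $ on randomly sampled states so that $ W $ is positive definite and its orbital derivative $ DW(x;\theta)f(x) $ is negative definite on the training region; the network layout, the penalty margin, and the vector field `f` are illustrative assumptions.

```python
# Hedged sketch: neural Lyapunov function training on sampled states.
# The architecture, loss weights, and vector field below are illustrative
# assumptions, not the construction used in the paper.
import tensorflow as tf

d = 2  # state dimension (assumption for this sketch)

def f(x):
    # hypothetical asymptotically stable vector field, x has shape (N, d)
    return -x + 0.1 * tf.sin(x)

# generic fully connected network W(x; theta); the paper's compositional
# (small-gain based) structure is not reproduced here
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="softplus"),
    tf.keras.layers.Dense(32, activation="softplus"),
    tf.keras.layers.Dense(1),
])
model.build(input_shape=(None, d))
optimizer = tf.keras.optimizers.Adam(1e-3)
alpha = 0.1  # margin in the positivity and decrease conditions

def train_step(x):
    with tf.GradientTape() as theta_tape:
        with tf.GradientTape() as x_tape:
            x_tape.watch(x)
            W = model(x)                              # W(x; theta), shape (N, 1)
        DW = x_tape.gradient(W, x)                    # gradient w.r.t. the state
        orbital = tf.reduce_sum(DW * f(x), axis=1, keepdims=True)  # DW(x) f(x)
        r2 = tf.reduce_sum(x * x, axis=1, keepdims=True)
        # penalize violations of W(x) >= alpha|x|^2 and DW(x) f(x) <= -alpha|x|^2
        loss = tf.reduce_mean(tf.nn.relu(alpha * r2 - W)
                              + tf.nn.relu(orbital + alpha * r2))
    grads = theta_tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(2000):
    batch = tf.random.uniform((256, d), minval=-1.0, maxval=1.0)
    train_step(batch)
```

After training, $ W(\cdot;\theta) $ can be checked on a fine grid or along simulated trajectories; a training loss that stays bounded away from zero indicates that no Lyapunov function of this form was found on the sampled region.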

Citation: Lars Grüne. Computing Lyapunov functions using deep neural networks. Journal of Computational Dynamics, 2021, 8 (2) : 131-152. doi: 10.3934/jcd.2021006

Figure 1.  Neural network with $ 1 $ and $ 2 $ hidden layers
Figure 2.  Neural network for Lyapunov functions, $ f\in F_1^{d_{\max}} $
Figure 3.  Neural network for Lyapunov functions, $ f\in F_2^{d_{\max}} $
Figure 4.  Approximate Lyapunov function $ W(\cdot;\theta^*) $ (solid) and its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (12), computed with loss function (11)
Figure 5.  Attempt to compute a Lyapunov function $ W(\cdot;\theta^*) $ (solid) with its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (12) with loss function (9)
Figure 6.  Approximate Lyapunov function $ W(\cdot;\theta^*) $ (solid) and its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (13) on $ (x_2,x_8) $-plane
Figure 7.  Approximate Lyapunov function $ W(\cdot;\theta^*) $ (solid) and its orbital derivative $ DW(\cdot;\theta^*)f $ (mesh) for Example (13) on $ (x_9,x_{10}) $-plane
Figure 8.  Value of approximate Lyapunov function $ W(x(t);\theta^*) $ along trajectories for initial values $ x_0 = (1,1,1,1,1,1,1,1,1,1)^T $, $ (0,1,0,1,0,1,0,1,0,1)^T $, $ (1,0,0,0,0,0,0,0,0,0)^T $ (left to right)
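In the spirit of Figure 8, the decrease of the learned function along solutions can be checked by integrating the ODE numerically and evaluating $ W(x(t);\theta^*) $ on the computed trajectory. The sketch below reuses the illustrative `model`, dimension `d`, and vector field from the training sketch above; the initial value and time horizon are arbitrary choices.

```python
# Hedged sketch: evaluate the learned W along a simulated trajectory; for a
# valid Lyapunov function the plotted values should decrease monotonically.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def f_np(t, x):
    return -x + 0.1 * np.sin(x)   # same illustrative vector field as above

x0 = np.ones(d)                                   # arbitrary initial value
sol = solve_ivp(f_np, (0.0, 10.0), x0, dense_output=True)
ts = np.linspace(0.0, 10.0, 200)
states = sol.sol(ts).T.astype(np.float32)         # shape (200, d)
W_vals = model(states).numpy().ravel()            # W(x(t); theta) along the orbit

plt.plot(ts, W_vals)
plt.xlabel("t")
plt.ylabel("W(x(t))")
plt.show()
```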