doi: 10.3934/jimo.2021016

Branching improved Deep Q Networks for solving pursuit-evasion strategy solution of spacecraft

1. Academy of Military Science of the People's Liberation Army, Beijing, China
2. Space Engineering University, Beijing, China

* Corresponding author: Bingyan Liu, Academy of Military Science of the People's Liberation Army, Beijing 100091, PR China

Received: June 2020. Revised: October 2020. Published: December 2020.

With the continuous development of space rendezvous technology, the spacecraft orbital pursuit-evasion differential game has attracted increasing attention. We therefore propose a pursuit-evasion game algorithm based on branching improved Deep Q Networks to obtain a rendezvous strategy against a non-cooperative target. First, we transform the optimal control of space rendezvous between a spacecraft and a non-cooperative target into a survival differential game problem. Next, to solve this game, we construct the Nash equilibrium strategy and verify its existence and uniqueness. Then, to avoid the curse of dimensionality that Deep Q Networks face in a continuous action space, we construct a TSK fuzzy inference model to represent the continuous space. Finally, to address the complex and time-consuming self-learning over discrete action sets, we improve the Deep Q Networks algorithm, proposing a branching architecture with multiple groups of parallel neural networks and a shared decision module. Simulation results show that the algorithm combines optimal control with game theory and further improves the learning of discrete behaviors. The algorithm offers a comparative advantage for behavior decisions in continuous spaces, can effectively handle the continuous-space pursuit-evasion game, and provides a new approach for solving spacecraft orbital pursuit-evasion strategies.
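To make the fuzzy representation of the continuous action space concrete: a minimal NumPy sketch of zero-order TSK inference over one continuous control angle is shown below, assuming Gaussian membership functions and constant rule consequents. All parameter values (eight fuzzy sets, the width sigma, the consequents) are illustrative assumptions, not the paper's actual rule base.

```python
import numpy as np

# A minimal zero-order TSK inference step for one continuous control angle:
# Gaussian membership functions cover the angle range, and the crisp output
# is the firing-strength-weighted average of the constant rule consequents.
def tsk_defuzzify(x, centers, sigma, consequents):
    """x: crisp input; centers/sigma: Gaussian membership parameters;
    consequents: constant rule outputs (zero-order TSK)."""
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)    # rule firing strengths
    return float(np.sum(w * consequents) / np.sum(w))  # weighted average

# Eight fuzzy sets evenly covering [0, 2*pi); letting the consequents echo
# the set centers reduces inference to smooth interpolation over the angle.
centers = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
angle = tsk_defuzzify(1.1, centers, sigma=0.4, consequents=centers)
```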

Citation: Bingyan Liu, Xiongbing Ye, Xianzhou Dong, Lei Ni. Branching improved Deep Q Networks for solving pursuit-evasion strategy solution of spacecraft. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2021016

Figure 1.  Coordinate diagram of spacecraft and non-cooperative target
Figure 2.  Schematic diagram of direction angle of behavior control quantity
Figure 3.  TSK fuzzy inference model of pursuit-evasion behavior
Figure 4.  Branching Deep Q Networks architecture
Figure 5.  Sharing behavior decision diagram based on improved Deep Q Networks
Figure 6.  The interactive flow of pursuit-evasion game
Figure 7.  The error function value comparison of the two algorithms
Figure 8.  The reward value comparison of the two algorithms
Figure 9.  The pursuit-evasion trajectory after learning 0 times
Figure 10.  The pursuit-evasion trajectory after learning 400 times
Figure 11.  Probability distribution of pursuit-evasion behavior
Figure 12.  The pursuit-evasion trajectory after learning 800 times
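Figures 4 and 5 depict the branching architecture: several parallel Q-value heads share a common decision module, so the number of network outputs grows additively with the action dimensions rather than combinatorially. Below is a minimal, hedged Keras sketch of such a network (modern tf.keras rather than the TensorFlow 0.12.0 listed in Table 2); the layer widths, state dimension, and the two branches (e.g. two thrust direction angles) are assumptions for illustration, not the authors' exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Shared state encoder feeding parallel per-dimension Q-heads: each head
# scores the discretized bins of one action dimension independently.
def build_branching_dqn(state_dim, bins_per_branch, n_branches):
    state = layers.Input(shape=(state_dim,))
    h = layers.Dense(128, activation="relu")(state)   # shared decision module
    h = layers.Dense(128, activation="relu")(h)
    # One independent Q-value head per action dimension.
    heads = [layers.Dense(bins_per_branch, name=f"branch_{i}")(h)
             for i in range(n_branches)]
    return Model(inputs=state, outputs=heads)

net = build_branching_dqn(state_dim=12, bins_per_branch=8, n_branches=2)
q_alpha, q_beta = net(tf.zeros((1, 12)))  # act greedily within each branch
```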
Table 1.  The initial state of the spacecraft and the non-cooperative target

      x (km)   y (km)   z (km)   $ \dot{x} $ (km/s)   $ \dot{y} $ (km/s)   $ \dot{z} $ (km/s)
P     0        0        0        -0.0563              0.0418               0
E     70       70       0        -0.0425              0.0314               0
Table 2.  Experimental environment parameters

Computing platform        Environment configuration
CPU                       Intel Core i5-7300HQ CPU @ 2.50GHz
RAM                       8 GB
System                    Windows 10
Programming language      Python 3.6
Compiling environment     PyCharm 2018
Deep learning framework   TensorFlow 0.12.0