doi: 10.3934/jimo.2020030

Finite-horizon optimal control of discrete-time linear systems with completely unknown dynamics using Q-learning

School of Automation, Beijing Institute of Technology, Beijing, 100081, China

* Corresponding author: Jingang Zhao

Received March 2019; Revised September 2019; Published February 2020.

Fund Project: The first author is supported by the International Graduate Exchange Program of Beijing Institute of Technology; this work is supported by the National Natural Science Foundation of China under grant 61673065.

This paper investigates the finite-horizon optimal control problem for discrete-time linear systems whose dynamics are completely unknown, i.e., the system matrices are not available. Compared with the infinite-horizon case, the Riccati equation (RE) of finite-horizon optimal control is time-varying and must satisfy a terminal boundary constraint, which poses greater challenges; the completely unknown system dynamics add further difficulty. The main contribution of this paper is a cyclic fixed-finite-horizon-based Q-learning algorithm that approximates the optimal control input without requiring the system dynamics. The algorithm consists of two phases: a data-collection phase over a fixed finite horizon and a parameter-update phase. A least-squares method links the two phases, and the optimal parameters are obtained by cycling between them. Finally, simulation results verify the effectiveness of the proposed cyclic fixed-finite-horizon-based Q-learning algorithm.
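To make the two-phase structure concrete, the sketch below illustrates the general flavor of model-free finite-horizon Q-learning for a discrete-time linear quadratic problem: quadratic Q-functions are fitted backward in time by least squares from collected state-input data, and the feedback gain is read off from each learned Q-matrix. This is a minimal illustration, not the authors' exact cyclic algorithm; the plant matrices A and B, the cost weights, the horizon, and the sampling ranges are all assumptions, and A, B are used only to generate data, standing in for the unknown system.

```python
# Hedged sketch: model-free finite-horizon Q-learning for discrete-time LQR.
# The dynamics (A, B), cost weights, horizon, and sampling scheme are
# illustrative assumptions; A and B only generate data and are never
# used by the learner itself.
import numpy as np

n, m, N = 2, 1, 10                      # state dim, input dim, horizon
A = np.array([[1.0, 0.1], [0.0, 0.9]])  # hypothetical plant, unknown to the learner
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(n), np.eye(m)           # stage cost x'Qx + u'Ru
PN = np.eye(n)                          # terminal cost weight

def phi(x, u):
    """Quadratic feature vector of z = [x; u] (vectorized outer product)."""
    z = np.concatenate([x, u])
    return np.outer(z, z).ravel()

def greedy_gain(H):
    """From Q(x,u) = z' H z with H = [[Hxx, Hxu], [Hux, Huu]], the
    minimizing input is u = -inv(Huu) Hux x; return that gain."""
    Hux, Huu = H[n:, :n], H[n:, n:]
    return np.linalg.solve(Huu, Hux)

rng = np.random.default_rng(0)
H = [None] * N                          # Q-matrices H_k, k = 0..N-1
for k in reversed(range(N)):            # backward in time (time-varying RE)
    Phi, y = [], []
    for _ in range(200):                # phase 1: data collection at step k
        x = rng.uniform(-1, 1, n)       # states sampled from a compact set
        u = rng.uniform(-1, 1, m)       # exploratory input
        xn = A @ x + B @ u              # measured next state
        cost = x @ Qc @ x + u @ Rc @ u
        if k == N - 1:                  # terminal boundary condition
            target = cost + xn @ PN @ xn
        else:                           # one-step lookahead with greedy input
            un = -greedy_gain(H[k + 1]) @ xn
            zn = np.concatenate([xn, un])
            target = cost + zn @ H[k + 1] @ zn
        Phi.append(phi(x, u))
        y.append(target)
    # phase 2: least-squares fit of the step-k Q-matrix parameters
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    Hk = theta.reshape(n + m, n + m)
    H[k] = 0.5 * (Hk + Hk.T)            # enforce symmetry

print("learned first-step feedback gain:\n", greedy_gain(H[0]))
```

Because each Q-function here is exactly quadratic in the state-input pair, least squares recovers it from sufficiently rich data; the paper's cyclic scheme differs in that it reuses data collected over one fixed finite horizon across repeated parameter updates, but the two-phase pattern above is the same.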

Citation: Jingang Zhao, Chi Zhang. Finite-horizon optimal control of discrete-time linear systems with completely unknown dynamics using Q-learning. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020030
References:
[1]

A. Al-Tamimi, F. L. Lewis and M. Abu-Khalaf, Model-free Q-learning design for discrete-time zero-sum games with application to H-infinity control, Automatica, 43 (2007), 473-481. doi: 10.1016/j.automatica.2006.09.019.

[2]

D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, 1st edition, Athena Scientific, Belmont, 1996.

[3]

D. P. Bertsekas, Value and policy iterations in optimal control and adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 28 (2017), 500-509. doi: 10.1109/TNNLS.2015.2503980.

[4]

X. S. Chen, X. Li and F. H. Yi, Optimal stopping investment with non-smooth utility over an infinite time horizon, Journal of Industrial & Management Optimization, 15 (2019), 81-96.

[5]

Y. F. Chen and Y. G. Zhu, Indefinite LQ optimal control with process state inequality constraints for discrete-time uncertain systems, Journal of Industrial & Management Optimization, 14 (2018), 913-930.

[6]

T. Cheng, F. L. Lewis and M. Abu-Khalaf, A neural network solution for fixed-final time optimal control of nonlinear systems, Automatica, 43 (2007), 482-490. doi: 10.1016/j.automatica.2006.09.021.

[7]

M. G. Gan, J. G. Zhao and C. Zhang, Extended adaptive optimal control of linear systems with unknown dynamics using adaptive dynamic programming, Asian Journal of Control, (2019), 1–10. doi: 10.1002/asjc.2243.

[8]

W. N. Gao, Y. Jiang, Z. P. Jiang and Y. T. Chai, Output-feedback adaptive optimal control of interconnected systems based on robust adaptive dynamic programming, Automatica, 72 (2016), 37-45. doi: 10.1016/j.automatica.2016.05.008.

[9]

A. Heydari and S. N. Balakrishnan, Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013), 145-157.

[10]

Y. Jiang and Z. P. Jiang, Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics, Automatica, 48 (2012), 2699-2704. doi: 10.1016/j.automatica.2012.06.096.

[11]

Y. Jiang and Z. P. Jiang, Global adaptive dynamic programming for continuous-time nonlinear systems, IEEE Transactions on Automatic Control, 60 (2015), 2917-2929. doi: 10.1109/TAC.2015.2414811.

[12]

J. W. Kim, B. J. Park, H. Yoo, J. H. Lee and J. M. Lee, Deep reinforcement learning based finite-horizon optimal tracking control for nonlinear systems, in IFAC-PapersOnLine, (2018), 257–262.

[13]

R. Kamalapurkar, P. Walters and W. E. Dixon, Model-based reinforcement learning for approximate optimal control, in Reinforcement Learning for Optimal Feedback Control, Communications and Control Engineering, Springer, Cham, (2018), 99–148.

[14]

J. Y. Lee, J. B. Park and Y. H. Choi, Integral Q-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems, Automatica, 48 (2012), 2850-2859. doi: 10.1016/j.automatica.2012.06.008.

[15]

F. L. Lewis and D. R. Liu, Reinforcement learning and approximate dynamic programming for feedback control, IEEE Circuits and Systems Magazine, 9 (2015), 32-50.

[16]

F. L. Lewis, D. L. Vrabie and V. L. Syrmos, Optimal Control, 3rd edition, John Wiley and Sons, Hoboken, 2012.

[17]

J. N. Li, T. Y. Chai, F. L. Lewis, Z. T. Ding and Y. Jiang, Off-policy interleaved Q-learning: Optimal control for affine nonlinear discrete-time systems, IEEE Transactions on Neural Networks and Learning Systems, 30 (2019), 1308-1320. doi: 10.1109/TNNLS.2018.2861945.

[18]

J. N. Li, T. Y. Chai, F. L. Lewis, J. L. Fan, Z. T. Ding and J. L. Ding, Off-policy Q-learning: Set-point design for optimizing dual-rate rougher flotation operational processes, IEEE Transactions on Industrial Electronics, 65 (2018), 4092-4102. doi: 10.1109/TIE.2017.2760245.

[19]

X. X. Li, Z. H. Peng, L. Liang and W. Z. Zha, Policy iteration based Q-learning for linear nonzero-sum quadratic differential games, Science China Information Sciences, 62 (2019), 195-213.

[20]

Q. Lin, Q. L. Wei and D. R. Liu, A novel optimal tracking control scheme for a class of discrete-time nonlinear systems using generalised policy iteration adaptive dynamic programming algorithm, International Journal of Systems Science, 48 (2017), 525-534. doi: 10.1080/00207721.2016.1188177.

[21]

H. L. Liu and Q. X. Zhu, New forms of Riccati equations and the further results of the optimal control for linear discrete-time systems, International Journal of Control, Automation, and Systems, 12 (2014), 1160-1166. doi: 10.1007/s12555-013-0202-x.

[22]

B. Luo, D. R. Liu, T. W. Huang and D. Wang, Data-based approximate policy iteration for nonlinear continuous-time optimal control design, Automatica, 50 (2014), 3281-3290. doi: 10.1016/j.automatica.2014.10.056.

[23]

B. Luo, D. R. Liu, T. W. Huang and D. Wang, Model-free optimal tracking control via critic-only Q-learning, IEEE Transactions on Neural Networks and Learning Systems, 27 (2016), 2134-2144. doi: 10.1109/TNNLS.2016.2585520.

[24]

B. Luo, D. R. Liu and H. N. Wu, Adaptive constrained optimal control design for data-based nonlinear discrete-time systems with critic-only structure, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 2099-2111. doi: 10.1109/TNNLS.2017.2751018.

[25]

B. Luo, D. R. Liu, H. N. Wu, D. Wang and F. L. Lewis, Policy gradient adaptive dynamic programming for data-based optimal control, IEEE Transactions on Cybernetics, 47 (2017), 3341-3354. doi: 10.1109/TCYB.2016.2623859.

[26]

B. Luo, H. N. Wu and T. W. Huang, Optimal output regulation for model-free Quanser helicopter with multistep Q-learning, IEEE Transactions on Industrial Electronics, 65 (2018), 4953-4961. doi: 10.1109/TIE.2017.2772162.

[27]

B. Luo, Y. Yang and D. R. Liu, Adaptive Q-learning for data-based optimal output regulation with experience replay, IEEE Transactions on Cybernetics, 48 (2018), 3337-3348.

[28]

Y. F. Lv, J. Na, Q. M. Yang, X. Wu and Y. Guo, Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics, International Journal of Control, 89 (2016), 99-112. doi: 10.1080/00207179.2015.1060362.

[29]

W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd edition, Wiley, Princeton, New Jersey, 2011.

[30]

A. Sahoo, H. Xu and S. Jagannathan, Approximate optimal control of affine nonlinear continuous-time systems using event-sampled neurodynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 28 (2017), 639-652.

[31]

K. G. Vamvoudakis, Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach, Systems and Control Letters, 100 (2017), 14-20.

[32]

K. G. Vamvoudakis and F. L. Lewis, Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem, Automatica, 46 (2010), 878-888.

[33]

D. Wang, D. R. Liu and Q. L. Wei, Finite-horizon neuro-optimal tracking control for a class of discrete-time nonlinear systems using adaptive dynamic programming approach, Neurocomputing, 78 (2012), 14-22.

[34]

F. Y. Wang, N. Jin, D. R. Liu and Q. L. Wei, Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with $\varepsilon$-error bound, IEEE Transactions on Neural Networks, 22 (2011), 24-36.

[35]

Q. L. Wei and D. R. Liu, A novel policy iteration based deterministic Q-learning for discrete-time nonlinear systems, Science China Information Sciences, 58 (2015), 1-15.

[36]

Q. L. Wei, D. R. Liu and H. Q. Lin, Value iteration adaptive dynamic programming for optimal control of discrete-time nonlinear systems, IEEE Transactions on Cybernetics, 46 (2016), 840-853.

[37]

Y. Wu, Z. H. Yuan and Y. P. Wu, Optimal tracking control for networked control systems with random time delays and packet dropouts, Journal of Industrial & Management Optimization, 11 (2015), 1343-1354.

[38]

H. G. Zhang, J. He, Y. H. Luo and G. Y. Xiao, Data-driven optimal consensus control for discrete-time multi-agent systems with unknown dynamics using reinforcement learning method, IEEE Transactions on Industrial Electronics, 64 (2017), 4091-4100.

[39]

H. G. Zhang, Q. L. Wei and Y. H. Luo, A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm, IEEE Transactions on Systems, Man, and Cybernetics, Part B, 38 (2008), 937-942.

[40]

Q. C. Zhang, D. B. Zhao and D. Wang, Event-based robust control for uncertain nonlinear systems using adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 37-50.

[41]

J. G. Zhao, M. G. Gan and C. Zhang, Event-triggered $H_{\infty}$ optimal control for continuous-time nonlinear systems using neurodynamic programming, Neurocomputing, 360 (2019), 14-24.

[42]

Q. M. Zhao, Finite-horizon Optimal Control of Linear and a Class of Nonlinear Systems, Ph.D. thesis, Missouri University of Science and Technology, 2013.

[43]

Q. M. Zhao, X. Hao and J. Sarangapani, Finite-horizon near optimal adaptive control of uncertain linear discrete-time systems, Optimal Control Applications and Methods, 36 (2016), 853-872.

[44]

X. N. Zhong, H. B. He, D. Wang and Z. Ni, Model-free adaptive optimal control for unknown nonlinear zero-sum differential game, IEEE Transactions on Cybernetics, 48 (2018), 1633-1646.

[45]

Q. X. Zhu and G. M. Xie, Finite-horizon optimal control of discrete-time switched linear systems, Mathematical Problems in Engineering, 2012 (2012), 1-12.


Figure 1.  The flow chart of Algorithm 1
Figure 2.  Initial system states $x_0$ are randomly selected from the compact set $\Omega := \{-1 \le x_1, x_2 \le 1\}$
Figure 3.  The convergence process of $\hat W$
Figure 4.  The trajectories of system states
Figure 5.  The optimal control input $u$
Figure 6.  The convergence process of $\hat W$
Figure 7.  The trajectories of system states
Figure 8.  The optimal control input $u$