May 2021, 17(3): 1471-1483. doi: 10.3934/jimo.2020030

Finite-horizon optimal control of discrete-time linear systems with completely unknown dynamics using Q-learning

School of Automation, Beijing Institute of Technology, Beijing, 100081, China

* Corresponding author: Jingang Zhao

Received March 2019, Revised September 2019, Published February 2020

Fund Project: The first author is supported by the International Graduate Exchange Program of Beijing Institute of Technology; this work is supported by the National Natural Science Foundation of China under grant 61673065.

This paper investigates the finite-horizon optimal control problem for discrete-time linear systems with completely unknown dynamics, i.e., systems whose state and input matrices are unavailable. Compared with the infinite-horizon case, the Riccati equation (RE) of finite-horizon optimal control is time-dependent and must satisfy a terminal boundary condition, which poses a greater challenge; the completely unknown dynamics add a further difficulty. The main contribution of this paper is a cyclic fixed-finite-horizon-based Q-learning algorithm that approximates the optimal control input without requiring the system dynamics. The algorithm consists of two phases: a data-collection phase over a fixed finite horizon and a parameter-update phase. A least-squares method connects the two phases, and the optimal parameters are obtained by cycling between them. Finally, simulation results verify the effectiveness of the proposed cyclic fixed-finite-horizon-based Q-learning algorithm.
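For context, the finite-horizon linear-quadratic setting described above is standard; the following restates it under assumed notation that is not taken verbatim from the paper: dynamics $x_{k+1} = A x_k + B u_k$ and cost $\sum_{k=0}^{N-1}\left(x_k^{\top} Q x_k + u_k^{\top} R u_k\right) + x_N^{\top} Q_f x_N$. The Riccati difference equation runs backward from the terminal weight, so the value kernel $P_k$ and the optimal gain $K_k$ are time-varying:

$P_N = Q_f$, and for $k = N-1, \dots, 0$:

$P_k = Q + A^{\top} P_{k+1} A - A^{\top} P_{k+1} B \left(R + B^{\top} P_{k+1} B\right)^{-1} B^{\top} P_{k+1} A$,

$u_k^{*} = -K_k x_k, \qquad K_k = \left(R + B^{\top} P_{k+1} B\right)^{-1} B^{\top} P_{k+1} A$.

Evaluating this recursion requires $A$ and $B$; Q-learning instead estimates the kernel $H_k$ of the quadratic Q-function $Q_k(x,u) = [x; u]^{\top} H_k [x; u]$ from measured data, from which $K_k$ can be read off without ever identifying the model.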

Citation: Jingang Zhao, Chi Zhang. Finite-horizon optimal control of discrete-time linear systems with completely unknown dynamics using Q-learning. Journal of Industrial & Management Optimization, 2021, 17 (3) : 1471-1483. doi: 10.3934/jimo.2020030
References:

[1] A. Al-Tamimi, F. L. Lewis and M. Abu-Khalaf, Model-free Q-learning design for discrete-time zero-sum games with application to H-infinity control, Automatica, 43 (2007), 473-481. doi: 10.1016/j.automatica.2006.09.019.
[2] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, 1st edition, Athena Scientific, Belmont, 1996.
[3] D. P. Bertsekas, Value and policy iterations in optimal control and adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 28 (2017), 500-509. doi: 10.1109/TNNLS.2015.2503980.
[4] X. S. Chen, X. Li and F. H. Yi, Optimal stopping investment with non-smooth utility over an infinite time horizon, Journal of Industrial & Management Optimization, 15 (2019), 81-96.
[5] Y. F. Chen and Y. G. Zhu, Indefinite LQ optimal control with process state inequality constraints for discrete-time uncertain systems, Journal of Industrial & Management Optimization, 14 (2018), 913-930.
[6] T. Cheng, F. L. Lewis and M. Abu-Khalaf, A neural network solution for fixed-final time optimal control of nonlinear systems, Automatica, 43 (2007), 482-490. doi: 10.1016/j.automatica.2006.09.021.
[7] M. G. Gan, J. G. Zhao and C. Zhang, Extended adaptive optimal control of linear systems with unknown dynamics using adaptive dynamic programming, Asian Journal of Control, (2019), 1-10. doi: 10.1002/asjc.2243.
[8] W. N. Gao, Y. Jiang, Z. P. Jiang and T. Y. Chai, Output-feedback adaptive optimal control of interconnected systems based on robust adaptive dynamic programming, Automatica, 72 (2016), 37-45. doi: 10.1016/j.automatica.2016.05.008.
[9] A. Heydari and S. N. Balakrishnan, Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013), 145-157.
[10] Y. Jiang and Z. P. Jiang, Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics, Automatica, 48 (2012), 2699-2704. doi: 10.1016/j.automatica.2012.06.096.
[11] Y. Jiang and Z. P. Jiang, Global adaptive dynamic programming for continuous-time nonlinear systems, IEEE Transactions on Automatic Control, 60 (2015), 2917-2929. doi: 10.1109/TAC.2015.2414811.
[12] J. W. Kim, B. J. Park, H. Yoo, J. H. Lee and J. M. Lee, Deep reinforcement learning based finite-horizon optimal tracking control for nonlinear systems, in Proceedings of the International Federation of Automatic Control (IFAC), (2018), 257-262.
[13] R. Kamalapurkar, P. Walters and W. E. Dixon, Model-based reinforcement learning for approximate optimal control, in Reinforcement Learning for Optimal Feedback Control, Communications and Control Engineering, Springer, Cham, (2018), 99-148.
[14] J. Y. Lee, J. B. Park and Y. H. Choi, Integral Q-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems, Automatica, 48 (2012), 2850-2859. doi: 10.1016/j.automatica.2012.06.008.
[15] F. L. Lewis and D. R. Liu, Reinforcement learning and approximate dynamic programming for feedback control, IEEE Circuits and Systems Magazine, 9 (2015), 32-50.
[16] F. L. Lewis, D. L. Vrabie and V. L. Syrmos, Optimal Control, 3rd edition, John Wiley and Sons, Hoboken, 2012.
[17] J. N. Li, T. Y. Chai, F. L. Lewis, Z. T. Ding and Y. Jiang, Off-policy interleaved Q-learning: Optimal control for affine nonlinear discrete-time systems, IEEE Transactions on Neural Networks and Learning Systems, 30 (2019), 1308-1320. doi: 10.1109/TNNLS.2018.2861945.
[18] J. N. Li, T. Y. Chai, F. L. Lewis, J. L. Fan, Z. T. Ding and J. L. Ding, Off-policy Q-learning: Set-point design for optimizing dual-rate rougher flotation operational processes, IEEE Transactions on Industrial Electronics, 65 (2018), 4092-4102. doi: 10.1109/TIE.2017.2760245.
[19] X. X. Li, Z. H. Peng, L. Liang and W. Z. Zha, Policy iteration based Q-learning for linear nonzero-sum quadratic differential games, Science China Information Sciences, 62 (2019), 195-213.
[20] Q. Lin, Q. L. Wei and D. R. Liu, A novel optimal tracking control scheme for a class of discrete-time nonlinear systems using generalised policy iteration adaptive dynamic programming algorithm, International Journal of Systems Science, 48 (2017), 525-534. doi: 10.1080/00207721.2016.1188177.
[21] H. L. Liu and Q. X. Zhu, New forms of Riccati equations and the further results of the optimal control for linear discrete-time systems, International Journal of Control, Automation, and Systems, 12 (2014), 1160-1166. doi: 10.1007/s12555-013-0202-x.
[22] B. Luo, D. R. Liu, T. W. Huang and D. Wang, Data-based approximate policy iteration for nonlinear continuous-time optimal control design, Automatica, 50 (2014), 3281-3290. doi: 10.1016/j.automatica.2014.10.056.
[23] B. Luo, D. R. Liu, T. W. Huang and D. Wang, Model-free optimal tracking control via critic-only Q-learning, IEEE Transactions on Neural Networks and Learning Systems, 27 (2016), 2134-2144. doi: 10.1109/TNNLS.2016.2585520.
[24] B. Luo, D. R. Liu and H. N. Wu, Adaptive constrained optimal control design for data-based nonlinear discrete-time systems with critic-only structure, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 2099-2111. doi: 10.1109/TNNLS.2017.2751018.
[25] B. Luo, D. R. Liu, H. N. Wu, D. Wang and F. L. Lewis, Policy gradient adaptive dynamic programming for data-based optimal control, IEEE Transactions on Cybernetics, 47 (2017), 3341-3354. doi: 10.1109/TCYB.2016.2623859.
[26] B. Luo, H. N. Wu and T. W. Huang, Optimal output regulation for model-free Quanser helicopter with multistep Q-learning, IEEE Transactions on Industrial Electronics, 65 (2018), 4953-4961. doi: 10.1109/TIE.2017.2772162.
[27] B. Luo, Y. Yang and D. R. Liu, Adaptive Q-learning for data-based optimal output regulation with experience replay, IEEE Transactions on Cybernetics, 48 (2018), 3337-3348.
[28] Y. F. Lv, J. Na, Q. M. Yang, X. Wu and Y. Guo, Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics, International Journal of Control, 89 (2016), 99-112. doi: 10.1080/00207179.2015.1060362.
[29] W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd edition, Wiley, Princeton, New Jersey, 2011.
[30] A. Sahoo, H. Xu and S. Jagannathan, Approximate optimal control of affine nonlinear continuous-time systems using event-sampled neurodynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 28 (2017), 639-652.
[31] K. G. Vamvoudakis, Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach, Systems & Control Letters, 100 (2017), 14-20.
[32] K. G. Vamvoudakis and F. L. Lewis, Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem, Automatica, 46 (2010), 878-888.
[33] D. Wang, D. R. Liu and Q. L. Wei, Finite-horizon neuro-optimal tracking control for a class of discrete-time nonlinear systems using adaptive dynamic programming approach, Neurocomputing, 78 (2012), 14-22.
[34] F. Y. Wang, N. Jin, D. R. Liu and Q. L. Wei, Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with $\varepsilon$-error bound, IEEE Transactions on Neural Networks, 22 (2011), 24-36.
[35] Q. L. Wei and D. R. Liu, A novel policy iteration based deterministic Q-learning for discrete-time nonlinear systems, Science China Information Sciences, 58 (2015), 1-15.
[36] Q. L. Wei, D. R. Liu and H. Q. Lin, Value iteration adaptive dynamic programming for optimal control of discrete-time nonlinear systems, IEEE Transactions on Cybernetics, 46 (2016), 840-853.
[37] Y. Wu, Z. H. Yuan and Y. P. Wu, Optimal tracking control for networked control systems with random time delays and packet dropouts, Journal of Industrial & Management Optimization, 11 (2015), 1343-1354.
[38] H. G. Zhang, J. He, Y. H. Luo and G. Y. Xiao, Data-driven optimal consensus control for discrete-time multi-agent systems with unknown dynamics using reinforcement learning method, IEEE Transactions on Industrial Electronics, 64 (2017), 4091-4100.
[39] H. G. Zhang, Q. L. Wei and Y. H. Luo, A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm, IEEE Transactions on Systems, Man, and Cybernetics, Part B, 38 (2008), 937-942.
[40] Q. C. Zhang, D. B. Zhao and D. Wang, Event-based robust control for uncertain nonlinear systems using adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 37-50.
[41] J. G. Zhao, M. G. Gan and C. Zhang, Event-triggered $H_{\infty}$ optimal control for continuous-time nonlinear systems using neurodynamic programming, Neurocomputing, 360 (2019), 14-24.
[42] Q. M. Zhao, Finite-horizon Optimal Control of Linear and a Class of Nonlinear Systems, Ph.D. thesis, Missouri University of Science and Technology, 2013.
[43] Q. M. Zhao, H. Xu and J. Sarangapani, Finite-horizon near optimal adaptive control of uncertain linear discrete-time systems, Optimal Control Applications and Methods, 36 (2016), 853-872.
[44] X. N. Zhong, H. B. He, D. Wang and Z. Ni, Model-free adaptive optimal control for unknown nonlinear zero-sum differential game, IEEE Transactions on Cybernetics, 48 (2018), 1633-1646.
[45] Q. X. Zhu and G. M. Xie, Finite-horizon optimal control of discrete-time switched linear systems, Mathematical Problems in Engineering, 2012 (2012), 1-12.


Figure 1.  The flow chart of Algorithm 1
Figure 2.  Initial system states $x_0$ randomly selected from the compact set $\Omega := \{-1 \le x_1, x_2 \le 1\}$
Figure 3.  The convergence process of $\hat W$
Figure 4.  The trajectories of system states
Figure 5.  The optimal control input $u$
Figure 6.  The convergence process of $\hat W$
Figure 7.  The trajectories of system states
Figure 8.  The optimal control input $u$
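
Figures 3 and 6 show the learned parameter vector $\hat W$ converging over the cycles, and Figures 4-5 and 7-8 the resulting closed-loop behavior. As a rough illustration of how such results could be produced, below is a minimal Python sketch of one data-collection/least-squares cycle of finite-horizon Q-learning for a linear-quadratic problem. It is written from the abstract's two-phase description only, not from the paper's actual Algorithm 1; the plant matrices A and B (used solely to simulate data, never by the learner), the weights Q, R, Qf, the horizon N, the rollout count, and the helper names phi and unvec are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of one data-collection / least-squares cycle of
# finite-horizon Q-learning for discrete-time LQ control. The dynamics
# x_{k+1} = A x_k + B u_k are used ONLY to generate data; the learner
# itself never touches A or B. All symbols here are assumptions, not
# the paper's exact setup.
rng = np.random.default_rng(0)
n, m, N = 2, 1, 10                        # state/input dimensions, horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # hypothetical plant (unknown to learner)
B = np.array([[0.0], [0.1]])
Q, R, Qf = np.eye(n), np.eye(m), np.eye(n)

def phi(z):
    """Features so that z' H z = theta' phi(z) for symmetric H
    (off-diagonal terms doubled)."""
    d = len(z)
    return np.array([(1 if i == j else 2) * z[i] * z[j]
                     for i in range(d) for j in range(i, d)])

def unvec(theta, d):
    """Rebuild the symmetric kernel H from its upper-triangular parameters."""
    H = np.zeros((d, d))
    idx = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

# Phase 1: data collection over the fixed finite horizon, with
# exploratory inputs and initial states drawn from a compact set
# (compare Figure 2).
n_rollouts = 60
data = []                                  # per-step tuples (k, x, u, r, x_next)
for _ in range(n_rollouts):
    x = rng.uniform(-1, 1, size=n)
    for k in range(N):
        u = rng.uniform(-1, 1, size=m)     # exploration input
        r = x @ Q @ x + u @ R @ u          # stage cost
        x_next = A @ x + B @ u
        data.append((k, x, u, r, x_next))
        x = x_next

# Phase 2: backward least-squares update of the time-varying Q-kernels H_k,
# starting from the terminal boundary condition P_N = Qf.
d = n + m
P = Qf
K = [None] * N
for k in reversed(range(N)):
    rows = [s for s in data if s[0] == k]
    Phi = np.stack([phi(np.concatenate([x, u])) for (_, x, u, _, _) in rows])
    y = np.array([r + xn @ P @ xn for (_, _, _, r, xn) in rows])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    H = unvec(theta, d)
    Hxx, Hxu, Huu = H[:n, :n], H[:n, n:], H[n:, n:]
    K[k] = np.linalg.solve(Huu, Hxu.T)     # time-varying feedback gain
    P = Hxx - Hxu @ K[k]                   # value kernel one step earlier

print("K_0 =", K[0])                       # apply as u_k = -K[k] @ x_k
```

In the paper's cyclic scheme, such collection/update cycles would presumably be repeated until the parameters converge, which is what the $\hat W$ trajectories in Figures 3 and 6 appear to depict; the single backward pass above is only one such cycle.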