doi: 10.3934/jimo.2020030

Finite-horizon optimal control of discrete-time linear systems with completely unknown dynamics using Q-learning

Jingang Zhao and Chi Zhang

School of Automation, Beijing Institute of Technology, Beijing 100081, China

* Corresponding author: Jingang Zhao

Received: March 2019. Revised: September 2019. Published: February 2020.

Fund Project: The first author is supported by the International Graduate Exchange Program of Beijing Institute of Technology. This work is supported by the National Natural Science Foundation of China under Grant 61673065.

This paper investigates the finite-horizon optimal control problem for completely unknown discrete-time linear systems, where "completely unknown" means that the system dynamics are unavailable. Compared with the infinite-horizon case, the Riccati equation (RE) of finite-horizon optimal control is time-dependent and must satisfy a terminal boundary condition, which poses greater challenges; the completely unknown system dynamics add further difficulty. The main contribution of this paper is a cyclic fixed-finite-horizon-based Q-learning algorithm that approximates the optimal control input without requiring the system dynamics. The algorithm consists of two phases: a data-collection phase over a fixed finite horizon and a parameter-update phase. A least-squares method connects the two phases, and the optimal parameters are obtained by cycling through them. Finally, simulation results verify the effectiveness of the proposed cyclic fixed-finite-horizon-based Q-learning algorithm.
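For context, the time-dependent Riccati equation referred to above has the standard backward-recursion form for the discrete-time LQ problem (generic textbook notation for dynamics $x_{k+1} = Ax_k + Bu_k$ and stage cost $x_k^T Q x_k + u_k^T R u_k$, which may differ from the paper's own symbols): with terminal boundary condition $P_N = Q_f$, one solves, for $k = N-1, \dots, 0$,

$P_k = Q + A^T P_{k+1} A - A^T P_{k+1} B \left( R + B^T P_{k+1} B \right)^{-1} B^T P_{k+1} A,$

and the optimal control is time-varying state feedback $u_k^* = -K_k x_k$ with gain $K_k = (R + B^T P_{k+1} B)^{-1} B^T P_{k+1} A$. Every step of this recursion uses $A$ and $B$ explicitly, which is precisely why a model-free method such as Q-learning is needed when the dynamics are unknown.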

Citation: Jingang Zhao, Chi Zhang. Finite-horizon optimal control of discrete-time linear systems with completely unknown dynamics using Q-learning. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020030
References:
[1] A. Al-Tamimi, F. L. Lewis and M. Abu-Khalaf, Model-free Q-learning design for discrete-time zero-sum games with application to H-infinity control, Automatica, 43 (2007), 473-481. doi: 10.1016/j.automatica.2006.09.019.
[2] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, 1st edition, Athena Scientific, Belmont, 1996.
[3] D. P. Bertsekas, Value and policy iterations in optimal control and adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 28 (2017), 500-509. doi: 10.1109/TNNLS.2015.2503980.
[4] X. S. Chen, X. Li and F. H. Yi, Optimal stopping investment with non-smooth utility over an infinite time horizon, Journal of Industrial & Management Optimization, 15 (2019), 81-96.
[5] Y. F. Chen and Y. G. Zhu, Indefinite LQ optimal control with process state inequality constraints for discrete-time uncertain systems, Journal of Industrial & Management Optimization, 14 (2018), 913-930.
[6] T. Cheng, F. L. Lewis and M. Abu-Khalaf, A neural network solution for fixed-final time optimal control of nonlinear systems, Automatica, 43 (2007), 482-490. doi: 10.1016/j.automatica.2006.09.021.
[7] M. G. Gan, J. G. Zhao and C. Zhang, Extended adaptive optimal control of linear systems with unknown dynamics using adaptive dynamic programming, Asian Journal of Control, (2019), 1-10. doi: 10.1002/asjc.2243.
[8] W. N. Gao, Y. Jiang, Z. P. Jiang and Y. T. Chai, Output-feedback adaptive optimal control of interconnected systems based on robust adaptive dynamic programming, Automatica, 72 (2016), 37-45. doi: 10.1016/j.automatica.2016.05.008.
[9] A. Heydari and S. N. Balakrishnan, Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013), 145-157.
[10] Y. Jiang and Z. P. Jiang, Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics, Automatica, 48 (2012), 2699-2704. doi: 10.1016/j.automatica.2012.06.096.
[11] Y. Jiang and Z. P. Jiang, Global adaptive dynamic programming for continuous-time nonlinear systems, IEEE Transactions on Automatic Control, 60 (2015), 2917-2929. doi: 10.1109/TAC.2015.2414811.
[12] W. K. Jong, J. P. Byung, Y. Haeun, H. L. Jay and M. L. Jong, Deep reinforcement learning based finite-horizon optimal tracking control for nonlinear systems, in Proceedings of the International Federation of Automatic Control, (2018), 257-262.
[13] R. Kamalapurkar, P. Walters and W. E. Dixon, Model-based reinforcement learning for approximate optimal control, in Reinforcement Learning for Optimal Feedback Control, Communications and Control Engineering, Springer, Cham, (2018), 99-148.
[14] J. Y. Lee, J. B. Park and Y. H. Choi, Integral Q-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems, Automatica, 48 (2012), 2850-2859. doi: 10.1016/j.automatica.2012.06.008.
[15] F. L. Lewis and D. R. Liu, Reinforcement learning and approximate dynamic programming for feedback control, IEEE Circuits and Systems Magazine, 9 (2015), 32-50.
[16] F. L. Lewis, D. L. Vrabie and V. L. Syrmos, Optimal Control, 3rd edition, John Wiley and Sons, Hoboken, 2015.
[17] J. N. Li, T. Y. Chai, F. L. Lewis, Z. T. Ding and Y. Jiang, Off-policy interleaved Q-learning: Optimal control for affine nonlinear discrete-time systems, IEEE Transactions on Neural Networks and Learning Systems, 30 (2019), 1308-1320. doi: 10.1109/TNNLS.2018.2861945.
[18] J. N. Li, T. Y. Chai, F. L. Lewis, J. L. Fan, Z. T. Ding and J. L. Ding, Off-policy Q-learning: Set-point design for optimizing dual-rate rougher flotation operational processes, IEEE Transactions on Industrial Electronics, 65 (2018), 4092-4102. doi: 10.1109/TIE.2017.2760245.
[19] X. X. Li, Z. H. Peng, L. Liang and W. Z. Zha, Policy iteration based Q-learning for linear nonzero-sum quadratic differential games, Science China Information Sciences, 62 (2019), 195-213.
[20] Q. Lin, Q. L. Wei and D. R. Liu, A novel optimal tracking control scheme for a class of discrete-time nonlinear systems using generalised policy iteration adaptive dynamic programming algorithm, International Journal of Systems Science, 48 (2017), 525-534. doi: 10.1080/00207721.2016.1188177.
[21] H. L. Liu and Q. X. Zhu, New forms of Riccati equations and the further results of the optimal control for linear discrete-time systems, International Journal of Control, Automation, and Systems, 12 (2014), 1160-1166. doi: 10.1007/s12555-013-0202-x.
[22] B. Luo, D. R. Liu, T. W. Huang and D. Wang, Data-based approximate policy iteration for nonlinear continuous-time optimal control design, Automatica, 50 (2014), 3281-3290. doi: 10.1016/j.automatica.2014.10.056.
[23] B. Luo, D. R. Liu, T. W. Huang and D. Wang, Model-free optimal tracking control via critic-only Q-learning, IEEE Transactions on Neural Networks and Learning Systems, 27 (2016), 2134-2144. doi: 10.1109/TNNLS.2016.2585520.
[24] B. Luo, D. R. Liu and H. N. Wu, Adaptive constrained optimal control design for data-based nonlinear discrete-time systems with critic-only structure, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 2099-2111. doi: 10.1109/TNNLS.2017.2751018.
[25] B. Luo, D. R. Liu, H. N. Wu, D. Wang and F. L. Lewis, Policy gradient adaptive dynamic programming for data-based optimal control, IEEE Transactions on Cybernetics, 47 (2017), 3341-3354. doi: 10.1109/TCYB.2016.2623859.
[26] B. Luo, H. N. Wu and T. W. Huang, Optimal output regulation for model-free Quanser helicopter with multistep Q-learning, IEEE Transactions on Industrial Electronics, 65 (2018), 4953-4961. doi: 10.1109/TIE.2017.2772162.
[27] B. Luo, Y. Yang and D. R. Liu, Adaptive Q-learning for data-based optimal output regulation with experience replay, IEEE Transactions on Cybernetics, 48 (2018), 3337-3348.
[28] Y. F. Lv, N. Jing, Q. M. Yang, X. Wu and Y. Guo, Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics, International Journal of Control, 89 (2016), 99-112. doi: 10.1080/00207179.2015.1060362.
[29] W. B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd edition, Wiley, Princeton, New Jersey, 2011.
[30] A. Sahoo, H. Xu and S. Jagannathan, Approximate optimal control of affine nonlinear continuous-time systems using event-sampled neurodynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 28 (2017), 639-652.
[31] K. G. Vamvoudakis, Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach, Systems and Control Letters, 100 (2017), 14-20.
[32] K. G. Vamvoudakis and F. L. Lewis, Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem, Automatica, 46 (2010), 878-888.
[33] D. Wang, D. R. Liu and Q. L. Wei, Finite-horizon neuro-optimal tracking control for a class of discrete-time nonlinear systems using adaptive dynamic programming approach, Neurocomputing, 78 (2012), 14-22.
[34] F. Y. Wang, N. Jin, D. R. Liu and Q. L. Wei, Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with $\varepsilon$-error bound, IEEE Transactions on Neural Networks, 22 (2011), 24-36.
[35] Q. L. Wei and D. R. Liu, A novel policy iteration based deterministic Q-learning for discrete-time nonlinear systems, Science China Information Sciences, 58 (2015), 1-15.
[36] Q. L. Wei, D. R. Liu and H. Q. Lin, Value iteration adaptive dynamic programming for optimal control of discrete-time nonlinear systems, IEEE Transactions on Cybernetics, 46 (2016), 840-853.
[37] Y. Wu, Z. H. Yuan and Y. P. Wu, Optimal tracking control for networked control systems with random time delays and packet dropouts, Journal of Industrial & Management Optimization, 11 (2015), 1343-1354.
[38] H. G. Zhang, J. He, Y. H. Luo and G. Y. Xiao, Data-driven optimal consensus control for discrete-time multi-agent systems with unknown dynamics using reinforcement learning method, IEEE Transactions on Industrial Electronics, 64 (2017), 4091-4100.
[39] H. G. Zhang, Q. L. Wei and Y. H. Luo, A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm, IEEE Transactions on Systems, Man, and Cybernetics, Part B, 38 (2008), 937-942.
[40] Q. C. Zhang, D. B. Zhao and D. Wang, Event-based robust control for uncertain nonlinear systems using adaptive dynamic programming, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 37-50.
[41] J. G. Zhao, M. G. Gan and C. Zhang, Event-triggered ${{H}_{\infty }}$ optimal control for continuous-time nonlinear systems using neurodynamic programming, Neurocomputing, 360 (2019), 14-24.
[42] Q. M. Zhao, Finite-horizon Optimal Control of Linear and a Class of Nonlinear Systems, Ph.D. thesis, Missouri University of Science and Technology, 2013.
[43] Q. M. Zhao, H. Xu and J. Sarangapani, Finite-horizon near optimal adaptive control of uncertain linear discrete-time systems, Optimal Control Applications and Methods, 36 (2016), 853-872.
[44] X. N. Zhong, H. B. He, D. Wang and Z. Ni, Model-free adaptive optimal control for unknown nonlinear zero-sum differential game, IEEE Transactions on Cybernetics, 48 (2018), 1633-1646.
[45] Q. X. Zhu and G. M. Xie, Finite-horizon optimal control of discrete-time switched linear systems, Mathematical Problems in Engineering, 2012 (2012), 1-12.


Figure 1.  The flow chart of Algorithm 1
Figure 2.  Initial system states $x_0$, randomly selected from the compact set $\Omega := \{-1\le {{x}_{1}},{{x}_{2}}\le 1\}$
Figure 3.  The convergence process of $\hat W$
Figure 4.  The trajectories of system states
Figure 5.  The optimal control input $u$
Figure 6.  The convergence process of $\hat W$
Figure 7.  The trajectories of system states
Figure 8.  The optimal control input $u$
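Since Algorithm 1 is shown above only as a flow chart (Figure 1), the following is a minimal, hypothetical sketch of how the two-phase pattern described in the abstract (data collection over a fixed finite horizon, then a least-squares parameter update applied backward in time) might be implemented for a finite-horizon LQ problem. This is a generic reconstruction, not the paper's Algorithm 1: the function names (finite_horizon_q_learning, quad_features, simulate_step), the exploration scheme, and the trajectory count are all illustrative assumptions. The system is accessed only through a black-box step function, never through $(A, B)$.

import numpy as np

def quad_features(z):
    # Quadratic basis so that quad_features(z) @ theta == z' H z for a
    # symmetric H stored as its upper triangle (off-diagonals doubled).
    n = z.size
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(n) for j in range(i, n)])

def unpack_symmetric(theta, n):
    # Inverse of the storage convention used in quad_features.
    H = np.zeros((n, n))
    idx = 0
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = theta[idx]
            idx += 1
    return H

def finite_horizon_q_learning(simulate_step, n, m, Q, R, Qf, N,
                              num_traj=100, seed=0):
    # Phase 1 (data collection): run exploratory trajectories of length N
    # over the fixed finite horizon, storing (x_k, u_k, x_{k+1}) only.
    rng = np.random.default_rng(seed)
    X = np.zeros((num_traj, N + 1, n))
    U = np.zeros((num_traj, N, m))
    for t in range(num_traj):
        X[t, 0] = rng.uniform(-1.0, 1.0, n)      # random initial state
        for k in range(N):
            U[t, k] = rng.normal(0.0, 1.0, m)    # exploratory input
            X[t, k + 1] = simulate_step(X[t, k], U[t, k])
    # Phase 2 (parameter update): fit the Q-function kernel H_k backward in
    # time by least squares, starting from the terminal condition V_N = x' Qf x.
    P_next = Qf
    gains = [None] * N
    for k in reversed(range(N)):
        Phi, y = [], []
        for t in range(num_traj):
            x, u, x1 = X[t, k], U[t, k], X[t, k + 1]
            Phi.append(quad_features(np.concatenate([x, u])))
            # Bellman target: stage cost plus cost-to-go of the successor state.
            y.append(x @ Q @ x + u @ R @ u + x1 @ P_next @ x1)
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
        H = unpack_symmetric(theta, n + m)
        Hxx, Hxu, Huu = H[:n, :n], H[:n, n:], H[n:, n:]
        K = np.linalg.solve(Huu, Hxu.T)          # minimizing Q_k gives u = -K_k x
        gains[k] = K
        P_next = Hxx - Hxu @ K                   # kernel of V_k for the next pass
    return gains

For a quick sanity check against a known model, one might hide $(A, B)$ behind the simulator, e.g. simulate_step = lambda x, u: A @ x + B @ u, and compare the returned gains with those of the exact backward Riccati recursion given after the abstract; cycling the two phases, as the paper's cyclic scheme suggests, would amount to re-collecting data under the improved gains and repeating the least-squares fit.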
