Mathematical Foundations of Computing, November 2019, 2(4): 315-331. doi: 10.3934/mfc.2019020

A Sim2real method based on DDQN for training a self-driving scale car

1. School of Information Science and Technology, North China University of Technology, Beijing 100144, China
2. State Key Laboratory of Turbulence and Complex Systems, College of Engineering, Peking University, Beijing 100871, China

* Corresponding author: Tao Du

Published December 2019

Self-driving based on deep reinforcement learning, as an important application of artificial intelligence, has become a popular research topic. Most current self-driving methods focus on learning an end-to-end control strategy directly from raw sensory data. Essentially, such a control strategy is a mapping between images and driving behavior, and it usually suffers from low generalization ability. To improve the generalization of the driving behavior, a reinforcement learning method requires extrinsic rewards from the real environment, which may damage the car. To obtain good generalization ability safely, a virtual simulation environment in which different driving scenes can be constructed is designed in Unity. A theoretical model is established and analyzed in this virtual environment and trained with a double deep Q-network (DDQN). The trained model is then transferred to a scale car in the real world, a process known as a sim2real method. This sim2real training efficiently addresses both problems. Simulations and experiments are carried out to evaluate the performance and effectiveness of the proposed algorithm, and it is demonstrated that the scale car acquires the capability for autonomous driving in the real world.
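The core of the approach is the double DQN update of [20], which decouples action selection from action evaluation to reduce the overestimation bias of standard Q-learning. The sketch below is a minimal illustration of that target computation; the small fully connected Q-network, action count, and discount factor are illustrative assumptions, not the convolutional architecture or hyperparameters used in the paper.

```python
# Minimal DDQN target sketch (PyTorch). The network shape, action count,
# and discount factor below are assumptions for illustration only.
import torch
import torch.nn as nn

GAMMA = 0.99  # assumed discount factor


def make_qnet(n_inputs: int = 64, n_actions: int = 5) -> nn.Module:
    # Hypothetical Q-network: feature vector in, one Q-value per action out.
    return nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(),
                         nn.Linear(128, n_actions))


online_net = make_qnet()
target_net = make_qnet()
target_net.load_state_dict(online_net.state_dict())  # synchronize the copies


def ddqn_targets(rewards, next_states, dones):
    """r + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal s'."""
    with torch.no_grad():
        # Online network selects the greedy action ...
        best = online_net(next_states).argmax(dim=1, keepdim=True)
        # ... target network evaluates it (the double-Q decoupling).
        next_q = target_net(next_states).gather(1, best).squeeze(1)
    return rewards + GAMMA * next_q * (1.0 - dones)


# Example batch of four transitions with placeholder data.
print(ddqn_targets(torch.rand(4), torch.rand(4, 64), torch.zeros(4)))
```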

Citation: Qi Zhang, Tao Du, Changzheng Tian. A Sim2real method based on DDQN for training a self-driving scale car. Mathematical Foundations of Computing, 2019, 2 (4) : 315-331. doi: 10.3934/mfc.2019020
References:
[1] H. Abraham, C. Lee, S. Brady, C. Fitzgerald, B. Mehler, B. Reimer and J. F. Coughlin, Autonomous vehicles, trust, and driving alternatives: A survey of consumer preferences, Massachusetts Inst. Technol., AgeLab, Cambridge, (2016), 1–16.

[2] K. J. Aditya, Working model of self-driving car using Convolutional Neural Network, Raspberry Pi and Arduino, in 2018 Second International Conference on Electronics, Communication and Aerospace Technology, IEEE, 2018, 1630–1635.

[3] P. Andhare and S. Rawat, Pick and place industrial robot controller with computer vision, in 2016 International Conference on Computing Communication Control and Automation, 2016, 1–4. doi: 10.1109/ICCUBEA.2016.7860048.

[4] C. Chen, A. Seff, A. Kornhauser and J. Xiao, DeepDriving: Learning affordance for direct perception in autonomous driving, in IEEE International Conference on Computer Vision, 2015, 2722–2730. doi: 10.1109/ICCV.2015.312.

[5] Z. Chen and X. Huang, End-to-end learning for lane keeping of self-driving cars, in IEEE Intelligent Vehicles Symposium, IEEE, 2017, 1856–1860. doi: 10.1109/IVS.2017.7995975.

[6] F. Codevilla, M. Müller, A. López, V. Koltun and A. Dosovitskiy, End-to-end driving via conditional imitation learning, in IEEE International Conference on Robotics and Automation, IEEE, 2018, 4693–4700. doi: 10.1109/ICRA.2018.8460487.

[7] D. Dorr, D. Grabengiesser and F. Gauterin, Online driving style recognition using fuzzy logic, in 17th International IEEE Conference on Intelligent Transportation Systems, IEEE, 2014, 1021–1026. doi: 10.1109/ITSC.2014.6957822.

[8] X. Liang, T. Wang, L. Yang and E. Xing, CIRL: Controllable imitative reinforcement learning for vision-based self-driving, in Proceedings of the European Conference on Computer Vision, 2018, 604–620. doi: 10.1007/978-3-030-01234-2_36.

[9] L. J. Lin, Reinforcement Learning for Robots Using Neural Networks, Ph.D. thesis, Carnegie Mellon University, Pittsburgh, 1993.

[10] R. R. Meganathan, A. A. Kasi and S. Jagannath, Computer vision based novel steering angle calculation for autonomous vehicles, in 2018 Second IEEE International Conference on Robotic Computing, 2018, 143–146.

[11] Car behavioral cloning, GitHub repository. Available from: https://github.com/naokishibuya/car-behavioral-cloning.

[12] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra and M. Riedmiller, Playing Atari with deep reinforcement learning, preprint, arXiv:1312.5602.

[13] V. Mnih et al., Human-level control through deep reinforcement learning, Nature, 518 (2015), 529–533. doi: 10.1038/nature14236.

[14] C. J. Pretorius, M. C. du Plessis and J. W. Gonsalves, The transferability of evolved hexapod locomotion controllers from simulation to real hardware, in 2017 IEEE International Conference on Real-time Computing and Robotics, 2017, 567–574. doi: 10.1109/RCAR.2017.8311923.

[15] Understanding the Fatal Tesla Accident on Autopilot and the NHTSA Probe, Electrek, 2016. Available from: https://electrek.co/2016/07/01/understanding-fatal-tesla-accident-autopilot-nhtsa-probe/.

[16] M. Sadeghzadeh, D. Calvert and H. A. Abdullah, Self-learning visual servoing of robot manipulator using explanation-based fuzzy neural networks and Q-learning, Journal of Intelligent and Robotic Systems, 78 (2015), 83–104. doi: 10.1007/s10846-014-0151-5.

[17] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd edition, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, 2018.

[18] Donkey Car documentation. Available from: https://docs.donkeycar.com/.

[19] Donkey Car, GitHub repository. Available from: https://github.com/autorope/donkeycar.

[20] H. van Hasselt, A. Guez and D. Silver, Deep reinforcement learning with double Q-learning, in Thirtieth AAAI Conference on Artificial Intelligence, 2016, 2094–2100.

[21] D. Wang, J. Wen, Y. Wang, X. Huang and F. Pei, End-to-end self-driving using deep neural networks with multi-auxiliary tasks, Automotive Innovation, 2 (2019), 127–136. doi: 10.1007/s42154-019-00057-1.

[22] C. J. Watkins and P. Dayan, Q-learning, Machine Learning, 8 (1992), 279–292. doi: 10.1007/BF00992698.

[23] T. Yamawaki and M. Yashima, Application of Adam to iterative learning for an in-hand manipulation task, ROMANSY 22 Robot Design, Dynamics and Control, 584 (2019), 272–279. doi: 10.1007/978-3-319-78963-7_35.


Figure 1.  The reinforcement learning scale car based on DDQN
Figure 2.  A 1:16 scale car built on Donkey Car, an open-source DIY self-driving platform for small scale cars [18]
Figure 3.  The process of reinforcement learning
Figure 4.  The architecture of the network
Figure 5.  Examples of raw images converted into segmented images
Figure 6.  The learning curve of average reward versus training episodes
Figure 7.  The scale car in the Unity simulation
Figure 8.  The road for the self-driving scale car, which contains two sharp curves and two gentle curves
Figure 9.  The trained self-driving scale car
Figure 10.  An obstacle is added to the road; the car's angle of view is shown in the lower left of the figure
Figure 11.  Five obstacles in the left figure and three obstacles in the right figure
Table 1.  Performance of CNN and DDQN on the same road. Each entry is the number of times the car ran outside the road.

Laps (lighting)    CNN    DDQN
5 (night)            3       0
10 (daylight)        4       1
10 (night)           8       2
15 (daylight)        9       1
15 (night)          12       3
Table 2.  Performance of CNN and DDQN on the same road with obstacles, over five laps. Each entry is the number of times the car hit the obstacles.

Obstacles    CNN    DDQN
3              3       0
5              4       0