Crash course learning: an automated approach to simulation-driven LiDAR-based training of neural networks for obstacle avoidance in mobile robotics

This paper proposes and implements a self-supervised, simulation-driven approach to data collection for training shallow perception-based neural networks for mobile robot obstacle avoidance. A 2D LiDAR sensor serves as the information source for training the networks. The paper analyzes network performance as a function of the number of layers and neurons, as well as the amount of data needed for reliable robot operation. The best-performing architecture, trained exclusively on data obtained in simulation, is then deployed and tested on a real Turtlebot 2 robot in several simulated and real-world scenarios. The results show that this fast and simple approach performs well in a variety of challenging environments containing both static and dynamic obstacles.
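To make the pipeline concrete, the following is a minimal numpy sketch of the two ingredients the abstract describes: a shallow network mapping a 2D LiDAR scan to a steering command, and a self-supervised labeling rule applied to simulated scans. All layer sizes, the sector split, and the labeling heuristic are illustrative assumptions, not the architecture or rule selected in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's chosen architecture).
N_BEAMS = 18        # downsampled 2D LiDAR scan
N_HIDDEN = 10       # single hidden layer -> a "shallow" network
N_OUT = 3           # commands: 0 = turn right, 1 = forward, 2 = turn left

# Randomly initialized weights of the shallow MLP.
W1 = rng.normal(0.0, 0.1, (N_BEAMS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_OUT))
b2 = np.zeros(N_OUT)

def forward(scan):
    """Forward pass: LiDAR ranges (metres) -> command probabilities."""
    h = np.tanh(scan @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())   # softmax, numerically stable
    return e / e.sum()

def self_label(scan):
    """Self-supervised label from simulation: steer away from the
    nearest obstacle sector (a simple heuristic stand-in for the
    paper's automated data-collection rule)."""
    right, front, left = np.split(scan, 3)   # assumed sector layout
    if front.min() > 1.0:                    # front clear -> go forward
        return 1
    return 0 if right.min() > left.min() else 2  # turn to the freer side

# Example: obstacle ahead and to the left, right side clear.
scan = np.full(N_BEAMS, 3.0)
scan[6:] = 0.4
p = forward(scan)
assert p.shape == (N_OUT,) and abs(p.sum() - 1.0) < 1e-9
print(self_label(scan))  # -> 0 (heuristic labels "turn right")
```

In the paper's approach, pairs like `(scan, self_label(scan))` would be collected automatically in simulation and used to fit the network weights by standard supervised training; the trained network then replaces the heuristic at deployment time.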
