Comparing Convolutional Neural Network (CNN) Architectures for Lane Detection

Advanced driver assistance functions help prevent accidents caused by human error and reduce the resulting damage and costs. One of the most important of these functions is lane keeping assist, which keeps the car safely in its lane by preventing careless lane changes. Consequently, much research has focused on lane detection with an onboard camera as a cost-effective sensor solution, initially using conventional computer vision techniques. Although these techniques produced successful lane detection results, they were time-consuming and required hand-crafted features and scenario-based parameter tuning. Over the last decade, deep learning-based techniques have been applied to lane detection, achieving better results with less parameter tuning and less hand-crafted engineering. The most popular deep learning method for lane detection is the convolutional neural network (CNN). In this study, several well-known CNN architectures were used as the basis for a deep neural network whose outputs are the coefficients of a second-order polynomial fitted to each lane line. In the experiments, the developed network was evaluated by comparing the performance of the underlying CNN architectures. The results show that deeper architectures trained with larger batch sizes outperform shallower ones.
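To illustrate the lane representation described above, the sketch below fits a second-order polynomial to a set of lane-line pixel coordinates. The coordinates are hypothetical, and the fit uses ordinary least squares for clarity; in the study itself the network regresses the polynomial coefficients directly rather than fitting them from detected points.

```python
import numpy as np

# Hypothetical lane-line pixel coordinates from a camera image:
# y is the image row (vertical position), x is the lane-line column.
y = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
x = np.array([320.0, 318.0, 312.0, 302.0, 288.0])

# Fit a second-order polynomial x = a*y^2 + b*y + c, the lane-line
# model used in the study. np.polyfit returns coefficients from the
# highest degree down, so the unpacking order is (a, b, c).
a, b, c = np.polyfit(y, x, deg=2)

def lane_x(row):
    """Predicted lane-line column at a given image row."""
    return a * row**2 + b * row + c
```

Fitting x as a function of y (rather than the reverse) is the usual convention for lane lines, since a lane is close to vertical in the image and may have several x values for one column but only one x value per row.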
