Development and optimization of a DSP-based real-time lane detection algorithm on a mobile platform

In this study, image processing-based real-time lane detection, which is one of the significant problems in autonomous vehicle control, is explored. A mobile robot platform is developed for this purpose. The mobile robot is driven by four independently controlled direct current (DC) motors. The image processing code is developed in the VisualDSP++ 5.0 environment and runs on the Blackfin BF561 processor of the Analog Devices ADSP-BF561 EZ-KIT Lite evaluation board. In the image processing algorithm, the Hough lines obtained from the Hough transform of the captured images are treated as candidate lane marks. Several elimination methods are then applied to these candidates to identify the actual lane marks. Once the actual lane marks are determined, their real-world coordinates are computed using inverse perspective mapping, and the heading angle of the mobile robot is determined from the positions of the lane marks relative to the center of the robot. The mobile robot platform and the lane detection algorithm are tested under various conditions, including dashed lane marks, varying lighting conditions, and the presence of only a single lane mark. The algorithm performs successfully and detects the lane marks even in the presence of various disturbances under these conditions. After optimization of the image processing code, the frame rate increases from 3 to 30 frames per second, which should be satisfactory for real-time applications.
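The abstract outlines the detection pipeline only at a high level. The following sketch, written for a desktop OpenCV environment rather than the BF561 target, illustrates one way the described stages (Hough transform on the captured frame, elimination of candidate lines, inverse perspective mapping, heading angle computation) could fit together. The Canny thresholds, the Hough parameters, the slope-based elimination rule, and the names `ipmHomography` and `detectHeading` are illustrative assumptions, not the authors' DSP implementation.

```cpp
// Illustrative lane-detection pipeline (desktop OpenCV sketch, not the BF561 code).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Homography from the image plane to the ground plane (inverse perspective
// mapping). In practice it comes from an offline camera calibration; the
// identity matrix here is only a placeholder.
static cv::Mat ipmHomography()
{
    return cv::Mat::eye(3, 3, CV_64F);
}

// Returns a signed heading angle (radians) computed from the detected lane
// marks and the robot center, following the stages described in the abstract.
double detectHeading(const cv::Mat& frame)
{
    // 1. Edge map of the captured image (the choice of edge detector and its
    //    thresholds are assumptions of this sketch).
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 80, 160);

    // 2. Hough transform: every detected segment is a candidate lane mark.
    std::vector<cv::Vec4i> candidates;
    cv::HoughLinesP(edges, candidates, 1, CV_PI / 180.0, 40, 30, 10);

    // 3. Elimination: keep only segments that are steep enough in the image,
    //    one possible criterion standing in for the paper's elimination methods.
    std::vector<cv::Point2f> laneFeet;  // lower end points of accepted marks
    for (const cv::Vec4i& l : candidates) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        if (std::atan2(std::fabs(dy), std::fabs(dx)) > CV_PI / 6.0) {
            laneFeet.push_back(l[1] > l[3] ? cv::Point2f(l[0], l[1])
                                           : cv::Point2f(l[2], l[3]));
        }
    }
    if (laneFeet.empty())
        return 0.0;  // no lane mark detected: keep the current heading

    // 4. Inverse perspective mapping: image points -> ground-plane coordinates.
    std::vector<cv::Point2f> ground;
    cv::perspectiveTransform(laneFeet, ground, ipmHomography());

    // 5. Heading angle from the robot center (ground origin, facing +y) towards
    //    the lateral center of the detected lane marks. With a single mark this
    //    collapses to that mark's lateral offset.
    float xMin = ground[0].x, xMax = ground[0].x, yMean = 0.0f;
    for (const cv::Point2f& p : ground) {
        xMin = std::min(xMin, p.x);
        xMax = std::max(xMax, p.x);
        yMean += p.y;
    }
    yMean /= static_cast<float>(ground.size());
    double laneCenterX = 0.5 * (xMin + xMax);
    return std::atan2(laneCenterX, static_cast<double>(yMean));
}
```

On the actual target the same stages would be hand-written C routines operating on the camera frame buffer; the OpenCV calls above merely stand in for those routines.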
