CLUSTERING-BASED FEATURE EXTRACTION METHODS FOR SEMAPHORE FLAG RECOGNITION

Semaphore flag signaling is a visual communication method widely used between naval vessels when radio emissions are under strict control during electronic warfare. This paper presents an autonomous semaphore flag recognition system that translates RGB camera images into letters with high performance at low cost. To this end, a dataset is created for the semaphore flag signals of the English alphabet, and the relative angles of the flags are acquired automatically via morphological operations. The summarization of the flag locations is conducted with three methods: shrinking via binary erosion, k-means clustering, and hierarchical agglomerative clustering. The resulting low-dimensional features are then classified with a support vector machine with a polynomial kernel. Cross-validation experiments show that the proposed methodology yields 99.76% accuracy without the Kinect sensor or the computationally expensive, GPU-bound neural network training required by similar works in the literature.
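
As a rough illustration of the pipeline described above, the following minimal sketch (in Python, assuming scikit-learn and SciPy are available) summarizes the flag-pixel locations of an already-binarized frame with either k-means or hierarchical agglomerative clustering, offers an erosion-based alternative, and classifies the resulting low-dimensional features with a polynomial-kernel SVM under cross-validation. The binarization step, cluster count, erosion depth, and all function names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the described feature extraction and classification steps.
# Assumes a binary mask of the flags/signaler has already been obtained from an
# RGB frame; cluster counts, erosion depth, and kernel degree are illustrative.
import numpy as np
from scipy.ndimage import binary_erosion
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cluster_features(mask, n_clusters=3, method="kmeans"):
    """Summarize flag-pixel locations into a few ordered cluster centers."""
    coords = np.column_stack(np.nonzero(mask)).astype(float)  # (row, col) of flag pixels
    if method == "kmeans":
        centers = KMeans(n_clusters=n_clusters, n_init=10).fit(coords).cluster_centers_
    else:  # hierarchical agglomerative clustering
        labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(coords)
        centers = np.array([coords[labels == k].mean(axis=0) for k in range(n_clusters)])
    # Order centers by angle around their centroid so the feature vector is consistent
    offsets = centers - centers.mean(axis=0)
    order = np.argsort(np.arctan2(offsets[:, 0], offsets[:, 1]))
    return centers[order].ravel()  # low-dimensional feature vector

def erosion_features(mask, iterations=5):
    """Alternative summary: shrink the mask with repeated binary erosion."""
    shrunk = binary_erosion(mask, iterations=iterations)
    coords = np.column_stack(np.nonzero(shrunk))
    return coords.mean(axis=0) if len(coords) else np.zeros(2)

# X: stacked feature vectors from one of the methods above, y: letter labels (A-Z)
# clf = SVC(kernel="poly", degree=3)
# print(cross_val_score(clf, X, y, cv=10).mean())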
