Motion Control of an Artificial-Intelligence-Based Robot Arm Manufactured with a Three-Dimensional Printer and Hardness Detection of Different Objects

In this study, a robotic arm was produced using a Fused Deposition Modeling (FDM) printer, one of the 3D printing technologies, and its tactile sensing and motion planning were investigated using image processing techniques and machine learning algorithms. The aim of the study is to investigate and apply innovative approaches, based on image processing and deep learning, that prevent the robotic arm from applying uncontrolled force and that solve tactile grasping problems. The parts were designed in a CAD program, printed on an FDM-type three-dimensional printer, and prepared for assembly. The control system of the assembled robotic hand consists of a Raspberry Pi control board, servo motors, pressure sensors, and a camera. Tactile sensing was performed by measuring the hardness of the grasped object with pressure sensors placed on each fingertip of the robotic arm. The Raspberry Pi board processes the data received from the sensors and sends the appropriate motion and grip-pressure commands to the servo motors. A reference data set of possible human hand movements, captured with the camera, was prepared for the robotic arm, and the images in the data set were preprocessed with Gaussian filtering. The angular positions of the robotic arm's movements were then optimized with machine learning algorithms, and the motion planning of the robot arm was classified with over 90% accuracy using HitNet, CNN, Capsule Network (CapsNet), and Naive Bayes models. When the models were compared according to the performance evaluation criteria, the accuracy for motion planning was 97.23% with HitNet, 97.48% with CNN, 98.58% with CapsNet, and 98.61% with Naive Bayes. Overall, the Naive Bayes model outperformed the other models, with 98.61% accuracy, 98.63% specificity, 98.65% sensitivity, a 1.39% error rate, and a 68.64% F-measure.
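
The fingertip grip loop described above can be illustrated with a minimal Python sketch. It assumes force-sensitive resistors read through an MCP3008 ADC and hobby servos driven with gpiozero; the GPIO pins, the ADC choice, and the pressure threshold are placeholders, since the abstract does not specify the hardware interface.

```python
# Minimal sketch of the fingertip grip loop: close each finger until its
# pressure sensor reports a set limit, so the hand cannot apply uncontrolled
# force. All pins, channels, and thresholds are illustrative assumptions.
from time import sleep
from gpiozero import MCP3008, Servo

N_FINGERS = 5
pressure = [MCP3008(channel=ch) for ch in range(N_FINGERS)]  # readings in 0.0-1.0
servos = [Servo(pin) for pin in (17, 18, 27, 22, 23)]        # hypothetical GPIO pins

GRIP_LIMIT = 0.60  # fraction of full-scale pressure at which a finger stops closing

def close_hand(step=0.02):
    """Close all fingers gradually, holding each one once its limit is reached."""
    for servo in servos:
        servo.value = -1.0                    # start fully open
    closing = [True] * N_FINGERS
    while any(closing):
        for i, (sensor, servo) in enumerate(zip(pressure, servos)):
            if not closing[i]:
                continue
            if sensor.value >= GRIP_LIMIT or servo.value >= 1.0:
                closing[i] = False            # pressure limit or full close reached
            else:
                servo.value = min(1.0, servo.value + step)
        sleep(0.02)

if __name__ == "__main__":
    close_hand()
```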
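
The Gaussian filtering step of the preprocessing can be sketched with OpenCV as follows; the kernel size and sigma are assumed values, as the filter parameters are not reported.

```python
# Sketch of the preprocessing step: Gaussian filtering of a data-set image.
# The 5x5 kernel and sigma of 1.0 are assumptions, not taken from the paper.
import cv2

img = cv2.imread("hand_gesture.jpg")                  # hypothetical sample image
denoised = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)  # suppress sensor/lighting noise
cv2.imwrite("hand_gesture_filtered.jpg", denoised)
```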
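
Of the four compared classifiers, the CNN is the most conventional; a minimal Keras sketch is shown below. The input size, layer widths, and number of gesture classes are assumptions, since the architecture is not given in the abstract.

```python
# Minimal CNN gesture classifier in Keras, standing in for the CNN model of
# the comparison. Architecture details are assumed, not taken from the paper.
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # hypothetical number of hand poses in the reference data set

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),            # grayscale, Gaussian-filtered frames
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)
```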
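
The performance criteria quoted at the end of the abstract have standard definitions, sketched below for the binary case (the multi-class gesture scores would be averaged per class). Note that the reported 1.39% error rate is simply 100% minus the 98.61% accuracy.

```python
# Standard evaluation criteria computed from a binary confusion matrix.
def evaluation_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    error_rate  = 1.0 - accuracy               # e.g. 100% - 98.61% = 1.39%
    sensitivity = tp / (tp + fn)               # recall, true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, error_rate, sensitivity, specificity, f_measure
```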
