Comparative Analysis of CNN Models and Bayesian Optimization-Based Machine Learning Algorithms in Leaf Type Classification

In this study, leaves are classified using various Machine Learning (ML) methods and Deep Learning (DL)-based Convolutional Neural Network (CNN) architectures. In the proposed method, image pre-processing is first applied to improve the accuracy of the subsequent steps; its output is a noise-free grayscale image. These pre-processed images are then used for classification with both ML and DL. For ML-based learning, Speeded Up Robust Features (SURF) are extracted from the grayscale image. The features are restructured as visual words using the Bag of Visual Words (BoVW) method, and a histogram is generated for each image according to the frequency of its visual words. These histograms constitute the new feature data. The histogram features are classified with four ML methods: Decision Tree (DT), k-Nearest Neighbor (KNN), Naive Bayes (NB), and Support Vector Machine (SVM). Before classification, Bayesian Optimization (BO), one of the Hyperparameter Optimization (HO) algorithms, is applied to determine the hyperparameters of each ML method. Among the four ML algorithms, the best accuracy, 98.09%, is achieved with KNN. For DL-based learning, the state-of-the-art CNN architectures ResNet18, ResNet50, MobileNet, GoogLeNet, and DenseNet are used. The CNN models achieve higher accuracy than the ML algorithms.
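The ML branch of this pipeline can be illustrated with a short sketch. The code below is a hypothetical, minimal illustration rather than the authors' implementation: it assumes OpenCV's contrib SURF module (opencv-contrib-python with the non-free modules enabled) and scikit-learn are available, the images are already pre-processed to noise-free grayscale arrays, and names such as surf_descriptors, bovw_histograms, and the vocabulary size are illustrative choices; in the paper the ML hyperparameters (e.g., k for KNN) are tuned with Bayesian Optimization rather than fixed as they are here.

```python
# Minimal sketch of the SURF + BoVW + KNN branch of the pipeline.
# Assumptions: OpenCV contrib SURF is available and inputs are
# pre-processed grayscale images; all names are illustrative.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def surf_descriptors(gray_image, hessian_threshold=400):
    """Extract SURF descriptors from one grayscale image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    _, descriptors = surf.detectAndCompute(gray_image, None)
    # Default SURF descriptors are 64-dimensional.
    return descriptors if descriptors is not None else np.empty((0, 64))

def bovw_histograms(descriptor_list, vocabulary_size=500):
    """Cluster all descriptors into visual words (k-means) and build one
    normalized word-frequency histogram per image."""
    kmeans = KMeans(n_clusters=vocabulary_size, random_state=0)
    kmeans.fit(np.vstack(descriptor_list))
    histograms = []
    for desc in descriptor_list:
        words = kmeans.predict(desc)
        hist, _ = np.histogram(words, bins=np.arange(vocabulary_size + 1))
        histograms.append(hist / max(hist.sum(), 1))
    return np.array(histograms), kmeans

# Example usage with pre-processed grayscale images and class labels:
# descriptors = [surf_descriptors(img) for img in gray_images]
# X, vocabulary = bovw_histograms(descriptors)
# clf = KNeighborsClassifier(n_neighbors=5)  # k would be set by BO in the paper
# clf.fit(X, labels)
```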
