Comparison of the classification performance of image augmentation techniques with convolutional neural networks on skin cancer types

Classification studies on image datasets with the convolutional neural network algorithm, one of the deep learning approaches, are widely and successfully performed in many fields such as medicine and agriculture. However, when the numbers of samples in the classes of an image dataset are imbalanced, the classification performance of this algorithm is negatively affected. In general, unlike the majority class, the minority class(es) are not learned well by the convolutional neural network algorithm. In such cases, applying oversampling methods yields successful results. With oversampling methods, the number of minority class samples is increased until it is close or equal to the number of samples in the majority class. In this study, the oversampling methods frequently used in the literature, namely translation, rotation, random erasing, noise injection, mixing of images, kernel filters, generative adversarial networks, flipping, feature space transformation, cropping, and color space transformation, were applied to the HAM10000 dataset. Based on the results obtained, the oversampling methods were compared in terms of classification performance. According to the classification results obtained with three different convolutional neural network models, ResNet50, DenseNet201, and MobileNet, the noise injection method achieved an accuracy of 0.967 with ResNet50, the color space transformation method achieved 0.965 with DenseNet201, and the mixing-of-images method achieved 0.974 with MobileNet, each outperforming the other oversampling methods for that model.
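
As a rough illustration of the balancing step described above, the following Python sketch oversamples minority classes with simple augmentations (flipping, rotation, noise injection) until every class matches the majority class size. The array shapes, transform choices, and helper names (`augment`, `oversample`) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of oversampling minority classes with
# simple image augmentations, assuming images are already loaded as NumPy arrays.
import numpy as np

def augment(image, rng):
    """Apply one randomly chosen augmentation: horizontal flip, 90-degree
    rotation, or Gaussian noise injection (illustrative subset of the methods
    compared in the study)."""
    choice = rng.integers(3)
    if choice == 0:
        return np.fliplr(image)                          # flipping
    if choice == 1:
        return np.rot90(image)                           # rotation (square images assumed)
    noisy = image + rng.normal(0.0, 10.0, image.shape)   # noise injection
    return np.clip(noisy, 0, 255).astype(image.dtype)

def oversample(images, labels, rng=None):
    """Grow every minority class with augmented copies until all classes
    match the majority class size."""
    rng = rng or np.random.default_rng(42)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    new_images, new_labels = [images], [labels]
    for cls, count in zip(classes, counts):
        idx = np.where(labels == cls)[0]
        for _ in range(target - count):
            src = images[rng.choice(idx)]
            new_images.append(augment(src, rng)[None])
            new_labels.append(np.array([cls]))
    return np.concatenate(new_images), np.concatenate(new_labels)

# Demo with dummy data shaped like small RGB lesion images (64x64x3 here,
# instead of the full HAM10000 resolution, to keep the example small).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 256, size=(30, 64, 64, 3), dtype=np.uint8)
    y = np.array([0] * 25 + [1] * 5)                     # imbalanced: 25 vs 5
    X_bal, y_bal = oversample(X, y)
    print(np.unique(y_bal, return_counts=True))          # both classes now 25
```

The balanced arrays can then be fed to any of the compared models (ResNet50, DenseNet201, MobileNet) as the training set; only the oversampling step is sketched here.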
