Improving Skin Lesion Classification Accuracy Using Transfer Learning and Generative Adversarial Network Approaches

This study focuses on the classification of skin cancer images, one of the most common types of cancer. A survey of the literature showed that the most comprehensive labeled skin cancer dataset available is HAM10000, which contains more than 10,000 labeled images of 7 different lesion types. The aim is to improve the accuracy with which classical Convolutional Neural Networks (CNNs) classify this dataset. This paper investigates the effect of two existing techniques, transfer learning and image generation, on lesion classification accuracy. The first technique is the partial and full transfer of parameters from the AlexNet network, trained on the ImageNet dataset, to a new CNN designed to classify the skin lesion dataset. The second technique expands the dataset by generating images from real lesion images; in this expansion step, the performance of classical augmentation and Generative Adversarial Network (GAN) techniques was evaluated. The experimental studies show that combining partial parameter transfer with dataset expansion based on Deep Convolutional Generative Adversarial Network (DCGAN) image generation yielded the highest lesion classification accuracy (93%). The methods were compared with the current method in the literature, demonstrating superior overall accuracy.
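The partial versus full parameter transfer described above can be sketched in a few lines. The sketch below is illustrative only: the layer names, the toy list-of-floats weight representation, and the cutoff of five convolutional layers are assumptions for demonstration, not the paper's actual AlexNet implementation.

```python
import random

# Hypothetical layer names for an AlexNet-like CNN (an assumption for
# illustration; the paper's exact architecture is not reproduced here).
ALEXNET_LAYERS = ["conv1", "conv2", "conv3", "conv4", "conv5", "fc6", "fc7", "fc8"]

def init_random(layers):
    """Randomly initialize a parameter set (stand-in for real weight tensors)."""
    return {name: [random.gauss(0.0, 0.01) for _ in range(4)] for name in layers}

def transfer_parameters(pretrained, target, n_transferred):
    """Copy the first n_transferred layers from the pretrained network into the
    target; the remaining layers keep their random initialization (partial
    transfer). n_transferred == len(target) corresponds to full transfer."""
    for name in list(target)[:n_transferred]:
        target[name] = list(pretrained[name])
    return target

pretrained = init_random(ALEXNET_LAYERS)   # stands in for ImageNet-trained weights
new_cnn = init_random(ALEXNET_LAYERS)      # new CNN for the 7-class lesion task

# Partial transfer: reuse only the convolutional feature extractor,
# leave the fully connected classifier layers randomly initialized.
partial = transfer_parameters(pretrained, dict(new_cnn), 5)

# Full transfer: every layer starts from the pretrained parameters.
full = transfer_parameters(pretrained, dict(new_cnn), len(ALEXNET_LAYERS))
```

In a real framework the same idea amounts to loading the pretrained checkpoint, freezing or copying the early convolutional layers, and reinitializing the classifier head for the 7 lesion classes.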

Fırat Üniversitesi Mühendislik Bilimleri Dergisi
  • ISSN: 1308-9072
  • Publication frequency: 2 issues per year
  • Founded: 1987
  • Publisher: Fırat Üniversitesi