Investigation of the image generation performance of generative adversarial network models

One of the most important developments in the field of deep learning is undoubtedly the generative adversarial network (GAN) model. These models, referred to as GANs, are state-of-the-art approaches used in image data augmentation, picture/cartoon painting, super-resolution image generation, and transferring the texture/pattern of one image to another. In this study, the performance of GAN models widely used in the literature (cGAN, DCGAN, InfoGAN, SGAN, ACGAN, WGAN-GP, LSGAN) in producing synthetic images that closely resemble real images was investigated. The originality of the study lies in the development of a hybrid GAN model (cDCGAN) that combines the advantages of cGAN and DCGAN, and in the comparative evaluation of the performance of the GAN methods against deep learning based convolutional neural networks (CNNs). With the implemented models, synthetic images similar to those in the data sets were generated. The Fréchet inception distance (FID) metric and a CNN were used to measure the similarity of the generated synthetic images to the existing images and thus to evaluate model performance. In the experimental studies, the time-dependent image generation performance of all models was evaluated. As a result, it was observed that the images generated by the LSGAN model achieved a high classification accuracy, while DCGAN and WGAN-GP produced clearer, less noisy images.
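
The hybrid cDCGAN combines the label-conditioning input of cGAN with the transposed-convolution generator of DCGAN. The following is a minimal sketch of what such a generator can look like, assuming Keras/TensorFlow and 28x28 grayscale images; the latent size, class count, and layer widths are illustrative assumptions, not the configuration used in the study.

    # Sketch of a conditional DCGAN (cDCGAN) generator; all sizes are assumptions.
    import numpy as np
    from tensorflow.keras import layers, models

    LATENT_DIM = 100   # length of the noise vector z (assumption)
    NUM_CLASSES = 10   # number of class labels to condition on (assumption)

    def build_generator():
        # cGAN part: embed the class label into a 7x7 map so it can be
        # concatenated with the noise-derived feature map.
        label = layers.Input(shape=(1,), dtype="int32")
        label_map = layers.Reshape((7, 7, 1))(
            layers.Embedding(NUM_CLASSES, 7 * 7)(label))

        # DCGAN part: project the noise to a small feature map, then
        # upsample it with strided transposed convolutions.
        noise = layers.Input(shape=(LATENT_DIM,))
        x = layers.Reshape((7, 7, 128))(layers.Dense(7 * 7 * 128)(noise))
        x = layers.Concatenate()([x, label_map])
        x = layers.Conv2DTranspose(128, 4, strides=2, padding="same")(x)  # 14x14
        x = layers.ReLU()(layers.BatchNormalization()(x))
        x = layers.Conv2DTranspose(64, 4, strides=2, padding="same")(x)   # 28x28
        x = layers.ReLU()(layers.BatchNormalization()(x))
        img = layers.Conv2D(1, 7, padding="same", activation="tanh")(x)   # [-1, 1]
        return models.Model([noise, label], img)

    generator = build_generator()
    fake = generator.predict([np.random.normal(size=(1, LATENT_DIM)),
                              np.array([[3]])])   # one image of class 3
    print(fake.shape)  # (1, 28, 28, 1)

The discriminator would mirror this with strided convolutions and receive the label through a similar embedding, as in cGAN.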

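For reference, the FID metric used to rank the models (Heusel et al., cited in the references below) fits a multivariate Gaussian to the Inception-v3 activations of the real and generated image sets and measures the distance between the two fits; lower values indicate more realistic images:

    \mathrm{FID}(r, g) = \lVert \mu_r - \mu_g \rVert_2^2
        + \operatorname{Tr}\left( \Sigma_r + \Sigma_g - 2 (\Sigma_r \Sigma_g)^{1/2} \right)

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the mean and covariance of the activations of the real and generated images, respectively.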

___

  • Wu, X., Xu, K. and Hall, P., A survey of image synthesis and editing with generative adversarial networks, Tsinghua Sci. Technol., 22, 6, 660–674, (2017).
  • Wason, R., Deep learning: Evolution and expansion, Cognitive Systems Research, 52, 701–708, (2018).
  • Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y., Generative Adversarial Networks, in Proc. 27th Int. Conf. Neural Information Processing Systems, 2672–2680, Montreal, (2014).
  • Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z. and Shi, W., Photo-realistic single image super-resolution using a generative adversarial network, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 105–114, Honolulu, (2017).
  • Goodfellow, I., NIPS 2016 Tutorial: Generative Adversarial Networks, arXiv:1701.00160, (2016).
  • Silva, T., A Beginner’s Guide to Generative Adversarial Networks (GANs), https://skymind.ai/wiki/generative-adversarial-network-gan, (28.12.2018).
  • Langr, J. and Bok, V., GANs in Action, MEAP edition, Manning Publications, 350, New York, USA, (2018).
  • Creswell, A. and Bharath, A. A., Adversarial Training for Sketch Retrieval, in Hua, G. and Jégou, H. (Eds.), Computer Vision – ECCV 2016 Workshops, Springer International Publishing, 798–809, Switzerland, (2016).
  • Mirza, M. and Osindero, S., Conditional Generative Adversarial Nets, arXiv:1411.1784, 1–7, (2014).
  • Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J. and Greenspan, H., GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification, Neurocomputing, 321, 321–331, (2018).
  • Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I. and Abbeel, P., InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2172–2180, Barcelona, (2016).
  • Radford, A., Metz, L. and Chintala, S., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, arXiv:1511.06434, (2016).
  • Odena, A., Olah, C. and Shlens, J., Conditional Image Synthesis With Auxiliary Classifier GANs, In Proceedings of the International Conference on Machine Learning, 2642–2651, Sydney, (2017).
  • Odena, A., Semi-Supervised Learning with Generative Adversarial Networks, arXiv:1606.01583, (2016).
  • Silva, T., Semi-supervised learning with Generative Adversarial Networks (GANs), https://towardsdatascience.com/semi-supervised-learning-with-gans-9f3cb128c5e, (25.12.2018).
  • Arjovsky, M., Chintala, S. and Bottou, L., Wasserstein GAN, arXiv:1701.07875, (2017).
  • Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. and Courville, A. C., Improved Training of Wasserstein GANs, In Proceedings of the Advances in Neural Information Processing Systems, 5769–5779, Long Beach, (2017).
  • Mao, X., Li, Q., Xie, H., Lau, R. Y. K., Wang, Z. and Smolley, S. P., Least Squares Generative Adversarial Networks, 2017 IEEE International Conference on Computer Vision, 2813–2821, Venice, (2017).
  • Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B. and Hochreiter, S., GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 31st Conference on Neural Information Processing Systems (NIPS 2017), 6626–6637, Long Beach, (2017).
  • Krizhevsky, A., Sutskever, I. and Hinton, G. E., ImageNet Classification with Deep Convolutional Neural Networks, Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS'12), 1097–1105, Lake Tahoe, (2012).
  • Zeiler, M. D., ADADELTA: An Adaptive Learning Rate Method, arXiv:1212.5701, (2012).