A New Super-Resolution Approach for Face Images and the Effects of Datasets of Different Image Quality on Success Performance

The concept of resolution is of great importance for various computer vision applications. In recent years, thanks to advances in hardware, super-resolution applications that increase image resolution have become a focus of research. In this paper, a new deep learning-based super-resolution model (SISRGAN) is proposed. In addition, three datasets of different quality levels were created from the CelebA dataset for super-resolution applications. The experimental results were compared with state-of-the-art models in the literature using image quality metrics (peak signal-to-noise ratio and structural similarity index). The proposed deep network model achieved superior performance in terms of both visual quality and metric values. Furthermore, the quality of the low-resolution image from which the super-resolution image is generated was observed to directly affect performance.
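The two metrics named above are standard in super-resolution evaluation. As an illustration only (not the paper's evaluation code), a minimal NumPy sketch of PSNR and a simplified single-window SSIM might look as follows; note that the standard SSIM averages this statistic over local windows rather than computing it globally:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM over the whole image (one global window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Example: compare a synthetic image against a noisy copy of itself.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
print(f"SSIM: {ssim_global(img, noisy):.4f}")
```

Higher PSNR (in dB) and SSIM closer to 1 both indicate that the super-resolved image is closer to the ground-truth high-resolution image.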

Mühendislik Bilimleri ve Araştırmaları Dergisi
  • ISSN: 2687-4415
  • Publication Frequency: 2 issues per year
  • First Published: 2019
  • Publisher: Bandırma Onyedi Eylül Üniversitesi