Generative Artificial Intelligence

Can a machine create unique content that is indistinguishable from human-produced artefacts? Is it possible to do this with the help of generative adversarial networks (GANs), by learning the structure of complex real-world data examples and generating similar synthetic examples that are bound by the same structure? Judging by recent advances in the development of generative models, the answer seems to be yes, at least to an extent. The goal of a generative model is to produce synthetic samples close to an existing dataset; in other words, to learn the distribution from which the samples in a given dataset are derived and to generate new samples from that distribution. Since datasets generally contain a limited number of samples, it is not always possible to learn the data distribution exactly; the aim is therefore to model a distribution as similar as possible to the true data distribution. A comprehensive and inclusive metric is also needed to measure the quality of the generated data. This study presents a new method that can help mitigate the effects of small changes and subtle differences in real datasets, which can significantly affect the evaluation scores of GANs, and demonstrates it in several applications. Ethical and legal problems related to GANs are also discussed.
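To make the setting concrete: a GAN trains two networks against each other. The generator G maps random noise z to synthetic samples G(z), while the discriminator D learns to separate real samples from generated ones; G is trained to fool D, which pushes the distribution of G(z) toward the data distribution. Below is a minimal, illustrative sketch in PyTorch; the toy 2-D Gaussian "dataset", the network sizes, and the hyperparameters are assumptions made for brevity, not details taken from this study.

    # Minimal GAN sketch (PyTorch): learn a 2-D toy distribution and sample from it.
    # All architecture and hyperparameter choices here are illustrative assumptions.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # "Real" data: samples from an assumed 2-D Gaussian standing in for a dataset.
    def real_batch(n):
        return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator: noise -> sample
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # Discriminator step: real samples labeled 1, generated samples labeled 0.
        x_real, z = real_batch(64), torch.randn(64, 8)
        x_fake = G(z).detach()  # detach so this step does not update the generator
        loss_d = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: make the discriminator label fresh fakes as real.
        z = torch.randn(64, 8)
        loss_g = bce(D(G(z)), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # After training, the generator maps noise to samples approximating the data distribution.
    print(G(torch.randn(5, 8)))

The same loop structure carries over to the full-scale setting, with convolutional networks and image data replacing the toy components.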
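On the evaluation side, a widely used score of the kind the abstract calls for is the Fréchet Inception Distance (FID): features are extracted from real and generated images with a pretrained InceptionV3 network, each feature set is summarized by a Gaussian (mean and covariance), and the Fréchet distance between the two Gaussians is reported. Scores of this kind are known to be sensitive to preprocessing details such as how images are resized, which is exactly the sort of subtlety in real datasets the abstract refers to. The sketch below computes the Fréchet distance itself; the random arrays standing in for Inception features are an assumption made so the example is self-contained.

    # Frechet distance between two feature sets: the core computation of FID,
    # FID = ||mu_r - mu_g||^2 + Tr(cov_r + cov_g - 2 * (cov_r @ cov_g)^(1/2)).
    # In practice the features come from InceptionV3; random arrays stand in here.
    import numpy as np
    from scipy import linalg

    def frechet_distance(feats_a, feats_b):
        mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
        cov_a = np.cov(feats_a, rowvar=False)
        cov_b = np.cov(feats_b, rowvar=False)
        covmean = linalg.sqrtm(cov_a @ cov_b).real  # matrix square root; drop numerical imaginary parts
        diff = mu_a - mu_b
        return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)

    rng = np.random.default_rng(0)
    real_feats = rng.normal(0.0, 1.0, size=(500, 64))  # stand-in for features of real images
    fake_feats = rng.normal(0.1, 1.1, size=(500, 64))  # stand-in for features of generated images
    print(frechet_distance(real_feats, fake_feats))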
