PROBLEMS AND OPPORTUNITIES OF ARTIFICIAL INTELLIGENCE

This article reviews the challenges and opportunities of Artificial Intelligence (AI) and discusses where AI might be headed. The first part of the article outlines the differences between the Symbolic AI and Deep Learning approaches, and then turns to AI's history of long promises and short deliveries. Viewed in general terms, one problem is that the media raises expectations about AI while downplaying the problems and restrictions it creates. Today, AI is entangled in issues such as deepfake applications and carbon footprints, which raise moral and climatological concerns, while also struggling with the enormous amounts of data that deep learning models require. A further problem is that deep learning models are black boxes: because it is not known where mistakes are made, the models are not open to targeted improvement. Among the new paths ahead of AI are Hierarchical Temporal Memory (HTM) models and hybrid models, which generally try to bridge the gap between Symbolic AI and Connectionist AI. Considering that the most important leaps in AI have been made by imitating features of the brain, HTM models may likewise present a new opportunity for AI.
Keywords:

Deep Learning, GPT-3, Black-box
