Yarı Markov Karar Süreci Problemlerinin Çözümünde Çok Katmanlı Yapay Sinir Ağlarıyla Fonksiyon Yaklaşımlı Ödüllü Öğrenme Algoritması

A Reinforcement Learning Algorithm Using Multi-Layer Artificial Neural Networks for Semi-Markov Decision Problems

Real-life problems are generally large-scale and difficult to model, so they often cannot be solved by classical optimization methods. This paper presents a reinforcement learning algorithm that uses a multi-layer artificial neural network to find approximate solutions to large-scale semi-Markov decision problems. The performance of the developed algorithm is measured and compared with that of the classical reinforcement learning algorithm on a small-scale numerical example. According to the numerical results, the number of hidden layers is the key success factor, and the average cost of the solution generated by the developed algorithm is approximately equal to that of the classical reinforcement learning algorithm.
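The combination described above — average-cost reinforcement learning for a semi-Markov decision process (SMDP) with a neural network standing in for the Q-table — can be illustrated with a minimal sketch. This is not the paper's exact method: the toy two-state SMDP, the SMART-style average-cost update, the one-hidden-layer network, and all parameters below are hypothetical choices for illustration only.

```python
# Hedged sketch: average-cost SMDP reinforcement learning with a small MLP
# approximating Q(state, action). Toy problem and parameters are assumptions.
import math
import random

random.seed(0)

N_STATES, N_ACTIONS, HIDDEN = 2, 2, 8

def step(state, action):
    """Toy SMDP: cost rate depends on (state, action); sojourn time and
    next state are random. Returns (accrued cost, sojourn time, next state)."""
    rate = [[2.0, 5.0], [6.0, 1.0]][state][action]
    tau = random.uniform(0.5, 1.5)
    nxt = random.randrange(N_STATES)
    return rate * tau, tau, nxt

# One-hidden-layer MLP: input = one-hot(state) ++ one-hot(action).
W1 = [[random.gauss(0, 0.1) for _ in range(N_STATES + N_ACTIONS)]
      for _ in range(HIDDEN)]
W2 = [random.gauss(0, 0.1) for _ in range(HIDDEN)]

def forward(s, a):
    x = [0.0] * (N_STATES + N_ACTIONS)
    x[s], x[N_STATES + a] = 1.0, 1.0
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w2 * hi for w2, hi in zip(W2, h)), x, h

def q(s, a):
    return forward(s, a)[0]

def update(s, a, target, lr=0.01):
    # Semi-gradient step: nudge Q(s, a) toward the SMDP target.
    out, x, h = forward(s, a)
    err = target - out
    for j in range(HIDDEN):
        grad_h = err * W2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
        W2[j] += lr * err * h[j]
        for i in range(len(x)):
            W1[j][i] += lr * grad_h * x[i]

rho, total_cost, total_time = 0.0, 0.0, 0.0   # average cost per unit time
s = 0
for _ in range(20000):
    # epsilon-greedy action selection (minimizing cost)
    a = (random.randrange(N_ACTIONS) if random.random() < 0.1
         else min(range(N_ACTIONS), key=lambda b: q(s, b)))
    cost, tau, s2 = step(s, a)
    # SMART-style target: accrued cost minus average cost over the sojourn,
    # plus the best estimated continuation value from the next state.
    target = cost - rho * tau + min(q(s2, b) for b in range(N_ACTIONS))
    update(s, a, target)
    total_cost += cost
    total_time += tau
    rho = total_cost / total_time
    s = s2

print(round(rho, 2))   # estimated long-run average cost of the learned policy
```

Replacing the tabular Q-function with the MLP is what makes the approach scale: memory grows with the network size rather than with |states| × |actions|, at the price of approximation error in the learned average cost.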

___

Sakarya University Journal of Science
  • Publication frequency: 6 issues per year
  • First issue: 1997
  • Publisher: Sakarya University