TRAINING THE ELMAN NETWORK FOR SYSTEM IDENTIFICATION USING THE SIMULATED ANNEALING ALGORITHM

The Elman network is a special type of recurrent neural network. Its feedforward connections can be trained essentially as in a feedforward network by means of the simple backpropagation algorithm, while its feedback connections have to be kept constant. Selecting suitable values for these feedback connections is important if training is to converge; however, finding such values can be a lengthy trial-and-error process. This paper describes the use of the simulated annealing (SA) algorithm to train the Elman network for the identification of dynamic systems. The SA algorithm is an efficient random search procedure that can simultaneously obtain the optimal weight values of all connections.
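To make the idea concrete, the sketch below shows how an Elman network could be trained by simulated annealing for a simple identification task. It is a minimal illustration rather than the implementation used in the paper: the layer sizes, the first-order example plant, the perturbation step and the geometric cooling schedule are all assumptions made for the example, and bias terms are omitted for brevity.

```python
# Illustrative sketch (not the paper's implementation): train a small Elman
# network with simulated annealing (SA) to identify a simple dynamic system.
# Layer sizes, the example plant and the cooling schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 1, 6, 1                          # assumed layer sizes
N_W = N_HID * N_IN + N_HID * N_HID + N_OUT * N_HID    # total number of weights

def unpack(w):
    """Split the flat weight vector into input, context (feedback) and output matrices."""
    i = 0
    W_in = w[i:i + N_HID * N_IN].reshape(N_HID, N_IN)
    i += N_HID * N_IN
    W_ctx = w[i:i + N_HID * N_HID].reshape(N_HID, N_HID)   # feedback connections
    i += N_HID * N_HID
    W_out = w[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID)
    return W_in, W_ctx, W_out

def sse(w, u, y):
    """Sum of squared output errors of the Elman network over the whole sequence."""
    W_in, W_ctx, W_out = unpack(w)
    ctx = np.zeros(N_HID)                        # context units start at zero
    total = 0.0
    for k in range(len(u)):
        hid = np.tanh(W_in @ np.array([u[k]]) + W_ctx @ ctx)
        out = W_out @ hid
        total += float((out[0] - y[k]) ** 2)
        ctx = hid                                # copy hidden activations to context units
    return total

# Example plant to be identified (an assumed first-order discrete-time system).
u = rng.uniform(-1.0, 1.0, 150)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.6 * y[k - 1] + 0.3 * u[k - 1]

# Simulated annealing over the complete weight vector: feedforward and
# feedback connections are perturbed and accepted together.
w = rng.uniform(-0.5, 0.5, N_W)
cost = sse(w, u, y)
T, T_MIN, COOLING, MOVES_PER_T = 1.0, 1e-3, 0.95, 40

while T > T_MIN:
    for _ in range(MOVES_PER_T):
        w_new = w + rng.normal(0.0, 0.1, N_W)    # random neighbouring solution
        cost_new = sse(w_new, u, y)
        delta = cost_new - cost
        # Metropolis criterion: always accept improvements, occasionally
        # accept worse solutions so the search can escape local minima.
        if delta < 0 or rng.random() < np.exp(-delta / T):
            w, cost = w_new, cost_new
    T *= COOLING

print(f"final sum-of-squared errors: {cost:.4f}")
```

Because SA needs only the value of the error function, the feedback (context) weights can be perturbed and accepted in exactly the same way as the feedforward weights, which is what allows the optimal values of all connections to be sought simultaneously.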
