Enhanced hybrid method of divide-and-conquer and RBF neural networks for function approximation of complex problems

This paper presents an enhanced method that reduces the computational complexity of function approximation problems by dividing the input data vectors into small groups, thereby avoiding the curse of dimensionality. The computational complexity and memory requirements of an approximation problem grow as the dimensionality of the input data increases. A divide-and-conquer algorithm distributes the input data of the complex problem among divided radial basis function neural networks (Div-RBFNNs). Under this algorithm, the input variables are distributed to different RBFNNs according to whether the number of input dimensions is odd or even. Each Div-RBFNN is executed independently, and the outputs of all Div-RBFNNs are combined through a linear combination function. The parameters of each Div-RBFNN (centers, radii, and weights) are optimized using an efficient learning algorithm based on an enhanced clustering algorithm for function approximation, which clusters the centers of the RBFs. Compared with traditional RBFNNs, the proposed methodology reduces the number of system parameters, and it outperforms traditional RBFNNs not only in execution time but also in approximation error.
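The divide-and-conquer idea described above can be illustrated with a minimal sketch: the input variables are split into small groups, each group feeds a small RBF sub-network, and the sub-network outputs are combined linearly, with all output weights fitted jointly by least squares. This is an illustrative toy, not the paper's actual algorithm; the group assignment, center choice, and fixed radii here are simplifying assumptions, and the function names are hypothetical.

```python
# Sketch of a divided RBF network (Div-RBFNN) idea: split the input
# dimensions into groups, build a Gaussian RBF basis per group, and fit
# one joint linear combination over all sub-network bases.
# Assumptions: random sample-based centers, fixed radii, a hand-chosen
# grouping of variables -- none of these come from the paper itself.
import numpy as np

def rbf_basis(Xg, centers, radii):
    """Gaussian RBF activations for one input group: (n_samples, n_centers)."""
    d2 = ((Xg[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * radii ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 4))          # 4-dimensional input
# A target that is additive across the two variable groups below
y = np.sin(X[:, 0] + X[:, 2]) + 0.5 * (X[:, 1] - X[:, 3])

groups = [[0, 2], [1, 3]]                          # illustrative 2-D groups
n_centers = 10
phis = []
for idx in groups:
    Xg = X[:, idx]
    # Simple center choice for the sketch: a random subset of the samples
    centers = Xg[rng.choice(len(Xg), size=n_centers, replace=False)]
    radii = np.full(n_centers, 0.5)
    phis.append(rbf_basis(Xg, centers, radii))

# Linear combination of all sub-network bases, fitted jointly
Phi = np.hstack(phis)                              # (200, 20)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
preds = Phi @ w

rmse = np.sqrt(np.mean((preds - y) ** 2))
print(f"approximation RMSE: {rmse:.3f}")
```

Because each sub-network sees only a low-dimensional slice of the input, the number of basis functions (and hence parameters) needed to cover the input space grows far more slowly than for a single full-dimensional RBFNN, which is the source of the complexity reduction claimed above.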
