Bayesian estimation of discrete-time cellular neural network coefficients

A new method is proposed for estimating the network coefficients of a discrete-time cellular neural network (DTCNN). The method takes a probabilistic approach, using Bayesian learning to estimate the network coefficients. A posterior probability density function (PDF) is formed from the likelihood and prior PDFs, which are derived from the system model and the prior information, respectively. Samples are drawn from this posterior PDF using the Metropolis algorithm, a special case of the Metropolis--Hastings algorithm in which the proposal distribution is symmetric, and the resulting samples are averaged to obtain the minimum mean square error (MMSE) estimate of the network coefficients. Several image processing applications are carried out with the estimated coefficients, and the results are compared with those of well-known methods.
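The sampling-and-averaging procedure described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the DTCNN likelihood and prior are replaced by a hypothetical standard-Gaussian log-posterior, and the function names, step size, and sample counts are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-posterior: a standard Gaussian over the coefficient
# vector stands in for the likelihood x prior described in the text.
def log_posterior(theta):
    return -0.5 * np.dot(theta, theta)

def metropolis_mmse(log_post, dim, n_samples=20000, burn_in=5000, step=0.5):
    """Draw samples with a symmetric Gaussian random-walk proposal and
    average the post-burn-in samples to form the MMSE estimate."""
    theta = np.zeros(dim)
    lp = log_post(theta)
    samples = []
    for i in range(n_samples):
        # Symmetric proposal: theta' = theta + Gaussian perturbation.
        proposal = theta + step * rng.standard_normal(dim)
        lp_prop = log_post(proposal)
        # With a symmetric proposal, the Metropolis--Hastings acceptance
        # ratio reduces to the ratio of posterior densities.
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        if i >= burn_in:
            samples.append(theta)
    # The sample mean approximates the posterior mean, i.e. the MMSE estimate.
    return np.mean(samples, axis=0)

estimate = metropolis_mmse(log_posterior, dim=3)
```

For this toy posterior the true MMSE estimate is the zero vector, so `estimate` should land close to it; in the actual application, `log_posterior` would instead score candidate DTCNN template coefficients against the training data.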
