Spatiotemporal realization of an artificial retina model and performance evaluation through ISI- and spike count-based image reconstruction methods

Development of an artificial retina model that can mimic the biological retina is a highly challenging task and an important step toward a visual prosthesis. The receptive field structure of the retina is usually modeled as a 2D difference-of-Gaussians (DOG) filter profile. In the present study, as a different approach, a retina model is developed that includes a 3D two-stage DOG filter (3D-ADOG) whose bandwidth adapts to the local image statistics. With this model, the adaptive image processing performed by the retina can be reproduced. The contribution of the developed model to image quality is evaluated through simulation studies on test images. Initial simulation results, covering only spike count-based reconstruction of a test video sequence, were published previously. In this study, in addition to spike count-based reconstruction, the interspike interval (ISI) measure is also used in the simulations. The reconstruction results are compared using three statistical parameters that characterize the similarity between images: the mean squared error (MSE), the universal quality index (UQI), and the histogram similarity ratio (HSR). To evaluate the performance of the model over time, the time-dependent changes in the MSE, HSR, and UQI are obtained and compared with those of the standard model. These results show that the 3D-ADOG filter-based retina model preserves the spatial details of the image and reproduces a larger number of distinct gray levels, both of which are important for visual perception, compared with the well-known classical DOG filter-based retina model. Retinal implant systems based on this model can therefore provide better visual perception for implant recipients.
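
For illustration, a minimal NumPy sketch of the two building blocks discussed above is given below. The first snippet implements a classical 2D DOG receptive-field kernel and a spatially adaptive variant whose surround bandwidth is modulated by a local image statistic (here, the local standard deviation); the specific adaptation rule, window size, and gain are illustrative assumptions and are not taken from the 3D-ADOG formulation itself.

```python
import numpy as np


def dog_kernel(size, sigma_c, sigma_s, k_c=1.0, k_s=1.0):
    """Classical 2D difference-of-Gaussians (center minus surround) kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = k_c / (2 * np.pi * sigma_c ** 2) * np.exp(-r2 / (2 * sigma_c ** 2))
    surround = k_s / (2 * np.pi * sigma_s ** 2) * np.exp(-r2 / (2 * sigma_s ** 2))
    return center - surround


def adaptive_dog_response(img, size=9, sigma_c=1.0, base_sigma_s=3.0, gain=2.0):
    """DOG filtering with a per-pixel surround bandwidth.

    The surround sigma is widened where local contrast (the standard
    deviation inside the kernel window, for an 8-bit image) is low and
    narrowed where it is high. The adaptation rule, window size, and gain
    used here are illustrative assumptions only.
    """
    h, w = img.shape
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros((h, w))
    for y in range(h):            # per-pixel loop: clarity over speed
        for x in range(w):
            patch = padded[y:y + size, x:x + size]
            local_std = patch.std() / 255.0
            sigma_s = base_sigma_s * (1.0 + gain * (1.0 - local_std))
            out[y, x] = np.sum(patch * dog_kernel(size, sigma_c, sigma_s))
    return out
```

The second snippet sketches the three image-likeness measures used to compare the reconstructions. The MSE and the global form of the UQI follow their standard definitions; the HSR is approximated here by the Bhattacharyya coefficient between normalized gray-level histograms, which may differ from the exact definition used in the study.

```python
import numpy as np


def mse(a, b):
    """Mean squared error between two gray-level images."""
    a, b = a.astype(float), b.astype(float)
    return np.mean((a - b) ** 2)


def uqi(a, b):
    """Universal quality index (Wang & Bovik, 2002), global form.

    The study may compute it over sliding windows and average; the global
    version shown here is the simplest variant (degenerate for constant images).
    """
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return 4.0 * cov * mu_a * mu_b / ((var_a + var_b) * (mu_a ** 2 + mu_b ** 2))


def hsr(a, b, bins=256):
    """Histogram similarity ratio, sketched as the Bhattacharyya coefficient
    between normalized gray-level histograms (an assumption; the study's
    exact definition may differ)."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.sum(np.sqrt(ha * hb)))
```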

___

  • M. Matthaei, O. Zeitz, M. Keserü, L. Wagenfeld, R. Hornig, N. Post, G. Richard, "Progress in the development of vision prostheses", Ophthalmologica, Vol. 225, pp. 187–192, 2011.
  • J.D. Weiland, M.S. Humayun, "Visual prosthesis", Proceedings of the IEEE, Vol. 96, pp. 1076–1084, 2008.
  • E. Margalit, M. Maia, J.D. Weiland, R.J. Greenberg, M.D. Gildo, Y. Fujii, G. Torres, D.V. Piyathaisere, T.M. O’Hearn, W. Liu, G. Lazzi, G. Dagnelie, D.A. Scribner, E. de Juan Jr, M.S. Humayun, “Retinal prosthesis for the blind”, Survey of Ophthalmology, Vol. 47, pp. 335–356, 2002.
  • K.A. Zaghloul, K. Boahen, “Circuit designs that model the properties of the outer and inner retina”, Ophthalmology Research: Visual Prosthesis and Ophthalmic Devices, pp. 135–159, 2007.
  • C.A. Morillas, S.F. Romero, A. Martinez, F.J. Pelayo, E. Rosa, E. Fernandez, “A design framework to model retinas”, BioSystems, Vol. 87, pp. 156–163, 2007.
  • R. Eckmiller, D. Neumann, O. Baruth, “Tunable retina encoders for retina implants: why and how”, Journal of Neural Engineering, Vol. 2, pp. 91–104, 2005.
  • H. Wei, X. Guan, “The simulation of early vision in biological retina and analysis on its performance”, Congress on Image and Signal Processing, Vol. 4, pp. 413–418, 2008.
  • J. Liu, X. Gou, “Information processing model of artificial vision prosthesis”, 2nd International Conference on Concrete Engineering and Technology, Vol. 2, pp. 551–555, 2010.
  • A. Wohrer, P. Kornprobst, “Virtual retina: a biological retina model and simulator with contrast gain control”, Journal of Computational Neuroscience, Vol. 26, pp. 219–249, 2009.
  • M.S. Humayun, E. de Juan Jr, J.D. Weiland, G. Dagnelie, S. Katona, R. Greenberg, S. Suzuki, “Pattern electrical stimulation of the human retina”, Vision Research, Vol. 39, pp. 2569–2576, 1999.
  • D. Balya, I. Petras, T. Roska, “Implementing the multilayer retinal model on the complex-cell CNN-UM chip prototype”, International Journal of Bifurcation and Chaos, Vol. 14, pp. 427–451, 2004.
  • C.F. Cai, P.J. Liang, P.M. Zhang, “A simulation study on the encoding mechanism of retinal ganglion cell”, Lecture Notes in Computer Science, Vol. 4689, pp. 470–479, 2007.
  • L.J. Croner, E. Kaplan, "Receptive fields of P and M ganglion cells across the primate retina", Vision Research, Vol. 35, pp. 7–24, 1995.
  • R. VanRullen, S.J. Thorpe, "Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex", Neural Computation, Vol. 13, pp. 1255–1283, 2001.
  • J.W. Pillow, J. Shlens, L. Paninski, A. Sher, A.M. Litke, E.J. Chichilnisky, E.P. Simoncelli, “Spatio-temporal correlations and visual signaling in a complete neuronal population”, Nature, Vol. 454, pp. 995–999, 2008.
  • J.L. Gauthier, G.D. Field, A. Sher, M. Greschner, J. Shlens, A.M. Litke, “Receptive fields in primate retina are coordinated to sample visual space more uniformly”, PLoS Biology, Vol. 7, pp. 1–9, 2009.
  • I. Karagoz, M. Ozden, “Adaptive artificial retina model to improve perception quality of retina implant recipients”, 4th International Conference on BioMedical Engineering and Informatics, pp. 91–95, 2011.
  • I. Karagoz, M. Ozden, G. Sobaci, "Multi stage local adaptive DOG filter based retina model developed for visual prosthesis system and simulation results", Association for Research in Vision and Ophthalmology Annual Meeting.
  • L.F. Abbott, "Lapicque's introduction of the integrate-and-fire model neuron (1907)", Brain Research Bulletin, Vol. 50, pp. 303–304, 1999.
  • A. Thiel, M. Greschner, J. Ammermuller, “The temporal structure of transient ON/OFF ganglion cell responses and its relation to intra-retinal processing”, Journal of Computational Neuroscience, Vol. 21, pp. 131–151, 2006.
  • M.N. Geffen, S.E.J. de Vries, M. Meister, "Retinal ganglion cells can rapidly change polarity from OFF to ON", PLoS Biology, Vol. 5, pp. 640–651, 2007.
  • Berkeley Computer Vision Group, "Contour detection and image segmentation resources: Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500)", available at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz, last accessed 20 July 2012.
  • T. Gollisch, M. Meister, “Rapid neural coding in the retina with relative spike latencies”, Science, Vol. 319, pp. 1108–1112, 2008.
  • A. Bhattacharyya, “On a measure of divergence between two statistical populations defined by their probability distributions”, Bulletin of the Calcutta Mathematical Society, Vol. 35, pp. 99–109, 1943.
  • Z. Wang, A.C. Bovik, “A universal image quality index”, IEEE Signal Processing Letters, Vol. 9, pp. 81–84, 2002.