Hybrid Biometric System Using Iris and Speaker Recognition

In this study, a hybrid security system is proposed. The proposed system is composed of two subsystems, namely an iris recognition system (IRS) and a speaker recognition system (SRS). Pre-processing, feature extraction and feature matching are the main steps of both systems. In the IRS subsystem, a Gaussian filter, Canny edge detector, Hough transform, and histogram equalization are applied in sequence for pre-processing. A 4-level Discrete Wavelet Transform (DWT) is then applied to the segmented (pure) iris image, decomposing it into four sub-bands (LL4, LH4, HL4 and HH4). To extract the feature vector from the iris pattern, the LH4, HL4 and HH4 sub-bands (matrices) are merged into one matrix, which is finally flattened into a vector to obtain the iris feature vector. In the SRS subsystem, the pre-processing step includes spectral arrangement, silence removal and band limitation. After pre-processing, frame blocking and windowing are applied to the long-term speech signal, a Fast Fourier Transform (FFT) is computed for each short-term speech segment (frame), and the Mel Frequency Cepstral Coefficients (MFCC) technique is then applied to obtain the speech feature vector. The feature matching step of both the IRS and the SRS is implemented with Dynamic Time Warping (DTW), an efficient algorithm for measuring the distance between two feature vectors. According to the DTW results, the false acceptance rate (FAR) is zero and the false rejection rate (FRR) is about 4% for the proposed hybrid system.
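As a concrete illustration of the iris feature-extraction step, the following is a minimal sketch of the 4-level DWT decomposition using PyWavelets. The library choice and the Haar wavelet family are assumptions; the text specifies only a 4-level DWT. The level-4 detail sub-bands (LH4, HL4, HH4) are merged into one matrix and flattened into the feature vector, as described above.

```python
import numpy as np
import pywt  # PyWavelets; library choice is an assumption


def iris_feature_vector(iris_image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Feature vector from a pre-processed (segmented) iris image.

    The wavelet family is an assumption; any discrete wavelet supported
    by PyWavelets could be substituted.
    """
    # 4-level 2-D DWT: coeffs[0] is the LL4 approximation,
    # coeffs[1] holds the level-4 detail sub-bands (LH4, HL4, HH4).
    coeffs = pywt.wavedec2(iris_image, wavelet, level=4)
    lh4, hl4, hh4 = coeffs[1]

    # Merge the three detail matrices into one matrix, then flatten it
    # into a single feature vector, as described in the text.
    merged = np.hstack([lh4, hl4, hh4])
    return merged.ravel()
```

For the speech side, the sketch below outlines MFCC extraction and DTW matching, assuming librosa for silence trimming, framing/windowing, FFT and MFCC computation; the sample rate, frame length, hop size, number of coefficients and the decision threshold are assumptions, since these parameters are not reported here. The same DTW distance, compared against a threshold, yields the accept/reject decision from which FAR and FRR are measured; in the described system, DTW is used as the matcher for both the iris and the speech feature vectors.

```python
import numpy as np
import librosa  # library choice is an assumption; FFT + MFCC are described generically


def speech_feature_matrix(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """MFCC feature matrix (n_mfcc x frames) for one utterance.

    Sample rate, frame length, hop size and coefficient count are assumptions.
    """
    y, sr = librosa.load(wav_path, sr=sr)
    y, _ = librosa.effects.trim(y, top_db=30)          # remove leading/trailing silence
    return librosa.feature.mfcc(
        y=y, sr=sr, n_mfcc=n_mfcc,
        n_fft=512, hop_length=256, window="hamming",   # frame blocking + windowing + FFT
    )


def dtw_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """DTW alignment cost between two feature sequences (lower = more similar)."""
    cost, _ = librosa.sequence.dtw(X=feats_a, Y=feats_b, metric="euclidean")
    return float(cost[-1, -1])


def verify(probe: np.ndarray, template: np.ndarray, threshold: float) -> bool:
    """Hypothetical verification decision: accept if the DTW distance to the
    enrolled template falls below a tuned threshold; FAR/FRR follow from
    evaluating genuine and impostor trials against this threshold."""
    return dtw_distance(probe, template) < threshold
```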
