Biometric identification using panoramic dental radiographic images with few-shot learning

Determining identity is a crucial task, especially in forensics and in mass disasters such as tsunamis, earthquakes, fires, and epidemics. Although there are various studies in the literature on biometric identification from radiographic dental images, more research is still required. In this study, a panoramic dental radiographic (PDR) image-based human identification system was developed using a customized deep convolutional neural network model in a few-shot learning scheme. The proposed model (PDR-net) was trained on 600 PDR images obtained from a total of 300 patients. As the patients' PDR images varied considerably in pose and intensity, they were first cropped by domain experts according to the region of interest and adjusted to a standard view with histogram equalization. A customized data augmentation approach was applied during training so that the model would generalize better. The proposed model achieved prediction accuracies of 84.72% at Rank-1 and 97.91% at Rank-10 when tested on 144 PDR images of 72 patients that had not been used in training. It was concluded that well-known similarity metrics such as Euclidean, Manhattan, Cosine, Pearson, Spearman, Kendall's Tau, and the sum of absolute differences can be utilized in few-shot learning; moreover, Cosine and Pearson similarity achieved the highest Rank-1 score of 84.72%. It was observed that as the rank increased, the Spearman and Kendall's Tau metrics matched the success of Cosine and Pearson. Based on the superimposed heatmap image analysis, it was determined that the maxillary, mandibular, nasal fossa, sinus, and other bone structures in the mouth contributed to biometric identification. It was also found that the customized data augmentation parameters contributed positively to biometric identification.
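The evaluation described above can be illustrated with a minimal sketch: a query embedding produced by the network is compared against a gallery of enrolled embeddings with a similarity metric (here Cosine), and a query counts as a Rank-k hit if its true identity appears among the k most similar gallery entries. The function names and the two-dimensional toy embeddings below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_k_accuracy(query_embs, query_ids, gallery_embs, gallery_ids, k):
    """Fraction of queries whose true identity appears among the
    k most similar gallery embeddings (the Rank-k score)."""
    hits = 0
    for emb, qid in zip(query_embs, query_ids):
        sims = [cosine_similarity(emb, g) for g in gallery_embs]
        top_k = np.argsort(sims)[::-1][:k]  # indices of the k highest similarities
        if any(gallery_ids[i] == qid for i in top_k):
            hits += 1
    return hits / len(query_embs)

# Toy example: two enrolled identities, two queries close to their own identity.
gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
gallery_ids = [0, 1]
queries = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
query_ids = [0, 1]
print(rank_k_accuracy(queries, query_ids, gallery, gallery_ids, k=1))  # → 1.0
```

Any of the metrics listed in the abstract (Manhattan, Pearson, Spearman, etc.) can be swapped in for `cosine_similarity`; for distance metrics the ranking order is simply reversed (smaller is better).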

Turkish Journal of Electrical Engineering and Computer Sciences
  • ISSN: 1300-0632
  • Publication frequency: 6 issues per year
  • Publisher: TÜBİTAK