Tooth Localization with Coarse-to-Fine Auto-Encoders

Localization of teeth is a prerequisite for most computerized methods on dental images, such as medical diagnosis and human identification. Classical deep learning architectures such as convolutional neural networks and auto-encoders work well for tooth detection; however, scanning the entire search space is impractical because dental images are very large. In this study, a coarse-to-fine stacked auto-encoder architecture is presented for the detection of teeth in dental panoramic images. The proposed architecture consists of cascaded stacked auto-encoders in which the size of the input patches increases at each successive stage. Only the detected candidate tooth patches are fed into the next stage, so irrelevant patches are eliminated early. The proposed architecture reduces the cost of the detection process while providing precise localization. The method is tested and validated on a dataset of 206 dental panoramic images, and the results are promising.
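To make the coarse-to-fine idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration of a cascade of stacked auto-encoders: the image is scanned with small patches first, and only the candidate locations that survive a coarse stage are re-scored by the finer stages with larger patches. The patch sizes, hidden-layer widths, scoring head, stride, and threshold are illustrative assumptions rather than the configuration used in the paper, and the networks here are left untrained.

```python
# Minimal sketch of a coarse-to-fine cascade of stacked auto-encoders.
# All sizes, widths, and the score threshold are illustrative assumptions.
import torch
import torch.nn as nn


class StackedAutoEncoder(nn.Module):
    """Two-layer stacked auto-encoder that scores a flattened image patch."""

    def __init__(self, in_dim, hidden_dims=(256, 64)):
        super().__init__()
        h1, h2 = hidden_dims
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, h1), nn.ReLU(),
            nn.Linear(h1, h2), nn.ReLU(),
        )
        # Decoder reconstructs the patch (used for pre-training the stack).
        self.decoder = nn.Sequential(
            nn.Linear(h2, h1), nn.ReLU(),
            nn.Linear(h1, in_dim), nn.Sigmoid(),
        )
        # Small head turning the code into a tooth / non-tooth score.
        self.classifier = nn.Linear(h2, 1)

    def forward(self, x):
        code = self.encoder(x)
        recon = self.decoder(code)
        score = torch.sigmoid(self.classifier(code))
        return recon, score


def coarse_to_fine_detect(image, patch_sizes=(16, 32, 64), stride=8, threshold=0.5):
    """Scan with the smallest patches first; only locations accepted by the
    coarse stage are re-evaluated by the finer (larger-patch) stages."""
    stages = {s: StackedAutoEncoder(in_dim=s * s) for s in patch_sizes}
    H, W = image.shape
    # Every grid location starts as a candidate for the coarsest stage.
    candidates = [(y, x)
                  for y in range(0, H - max(patch_sizes), stride)
                  for x in range(0, W - max(patch_sizes), stride)]
    with torch.no_grad():
        for s in patch_sizes:
            kept = []
            for (y, x) in candidates:
                patch = image[y:y + s, x:x + s].reshape(1, -1)
                _, score = stages[s](patch)
                if score.item() > threshold:   # keep only likely tooth patches
                    kept.append((y, x))
            candidates = kept                  # irrelevant patches are dropped
    return candidates
```

For example, `coarse_to_fine_detect(torch.rand(512, 1024))` returns the (row, column) positions that survive all stages; in practice each stage would be trained (e.g., layer-wise auto-encoder pre-training followed by supervised fine-tuning) before being used this way.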
