Medical image fusion with convolutional neural network in multiscale transform domain

Multimodal medical image fusion approaches are commonly used in disease diagnosis; they merge multiple images of different modalities to achieve superior image quality and to reduce uncertainty and redundancy, thereby increasing clinical applicability. In this paper, we propose a new medical image fusion algorithm based on a convolutional neural network (CNN) that produces a weight map in multiscale transform (curvelet/non-subsampled shearlet transform) domains to enhance textural and edge properties. The aim of the method is to achieve the best visualization and the highest level of detail in a single fused image without losing spectral or anatomical information. In the proposed method, firstly, the non-subsampled shearlet transform (NSST) and the curvelet transform (CvT) were used to decompose the source images into low-frequency and high-frequency coefficients. Secondly, the low-frequency and high-frequency coefficients were fused using a weight map generated by a Siamese convolutional neural network (SCNN), where the weight map is obtained from a series of feature maps and integrates the pixel activity information from the different sources. Finally, the fused image was reconstructed by the inverse multiscale transform (MST). To test the proposed method, standard grayscale magnetic resonance (MR) images and color positron emission tomography (PET) images taken from the Brain Atlas datasets were used. The proposed method effectively preserves detailed structural information and performs well in terms of both visual quality and objective assessment. The experimental fusion results were evaluated with quantitative and qualitative criteria based on quality metrics.
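
The pipeline described above (decompose each source with a multiscale transform, fuse the coefficients with a weight map, then invert the transform) can be sketched as follows. This is only a structural illustration, not the paper's implementation: a Gaussian low/high-pass split stands in for the NSST/curvelet decomposition, a local-energy map stands in for the Siamese-CNN weight map, and the helper names decompose, activity_weight, and fuse are hypothetical.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def decompose(img, sigma=2.0):
    # Split an image into low- and high-frequency parts (stand-in for NSST/CvT).
    low = gaussian_filter(img, sigma)
    high = img - low
    return low, high

def activity_weight(high_a, high_b, win=7):
    # Pixel-activity weight in [0, 1] (stand-in for the SCNN weight map).
    ea = uniform_filter(high_a ** 2, win)
    eb = uniform_filter(high_b ** 2, win)
    return ea / (ea + eb + 1e-12)

def fuse(img_a, img_b):
    # Fuse two registered source images of the same shape.
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    w = activity_weight(high_a, high_b)
    low_f = 0.5 * (low_a + low_b)              # simple averaging of the base layer
    high_f = w * high_a + (1.0 - w) * high_b   # weight-map-guided detail fusion
    return low_f + high_f                      # inverse of the additive split

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mr = rng.random((256, 256))    # placeholder for a grayscale MR slice
    pet = rng.random((256, 256))   # placeholder for the PET luminance channel
    fused = fuse(mr, pet)
    print(fused.shape)

In practice the additive split would be replaced by the actual NSST or curvelet forward/inverse transforms and the weight map by the trained SCNN output, but the control flow stays the same.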

___

Turkish Journal of Electrical Engineering and Computer Sciences
  • ISSN: 1300-0632
  • Publication frequency: 6 issues per year
  • Publisher: TÜBİTAK
Other Articles in This Issue

Malignant skin melanoma detection using image augmentation by oversampling in nonlinear lower-dimensional embedding manifold

Olusola Oluwakemi ABAYOMI-ALLI, Robertas DAMAŠEVIČIUS, Sanjay MISRA, Rytis MASKELIŪNAS, Adebayo ABAYOMI-ALLI

Leukocyte classification based on feature selection using extra trees classifier: a transfer learning approach

Diana BABY, Jude HEMANTH, Anishin Raj MM, Sujitha Juliet DEVARAJ

Evolution of histopathological breast cancer images classification using stochastic dilated residual ghost model

Ramgopal Kashyap

MRI based genomic analysis of glioma using three pathway deep convolutional neural network for IDH classification

Sonal GORE, Jayant JAGTAP

Attention augmented residual network for tomato disease detection and classification

KUMIE Gedamu, Getinet YILMA, Seid BELAY, Maregu ASSEFA, Melese AYALEW, Ariyo OLUWASANMI, Zhiguang QIN

A hybrid convolutional neural network approach for feature selection and disease classification

Prajna Paramita DEBETA, Puspanjali MOHAPATRA

New normal: cooperative paradigm for COVID-19 timely detection and containment using Internet of things and deep learning

FAROOQUE HASSAN KUMBHAR, SYED ALI HASSAN, SOO YOUNG SHIN

Diagnosis of paroxysmal atrial fibrillation from thirty-minute heart rate variability data using convolutional neural networks

Resul KARA, Murat SURUCU, Yalcin ISLER

Detection of amyotrophic lateral sclerosis disease by variational mode decomposition and convolution neural network methods from event-related potential signals

Ramis İLERİ, Fırat ORHANBULUCU, Fatma LATİFOĞLU

Employing deep learning architectures for image-based automatic cataract diagnosis

Ömer TÜRK, Erdoğan ALDEMİR, Ömer Faruk ERTUĞRUL, Emrullah ACAR