Spatial-aware global contrast representation for saliency detection

Deep learning networks have been demonstrated to be helpful in salient object detection and have achieved performance superior to methods based on low-level hand-crafted features. In this paper, we propose a novel spatial-aware contrast cube-based convolutional neural network (CNN) that further improves detection performance. The contrast of each superpixel is extracted from this cube data structure, while the spatial information is preserved during the transformation. The proposed method has two advantages over existing deep learning-based saliency methods. First, instead of feeding the network raw image patches or pixels, we use the spatial-aware contrast cubes of superpixels as CNN training samples. This is advantageous because the saliency of a region depends more on its contrast with other regions than on its own appearance. Second, to adapt to the diversity of real scenes, both color and textural cues are considered: two CNNs, a color CNN and a textural CNN, are constructed to extract the corresponding features. The saliency maps generated from the two cues are combined dynamically to achieve optimal results. The proposed method achieves maximum precision of 0.9856, 0.9250, and 0.8949 on the MSRA1000, ECSSD, and PASCAL-S benchmark datasets, respectively, an improvement over state-of-the-art saliency detection methods.
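The spatially weighted global-contrast intuition the abstract builds on can be illustrated with a minimal sketch. This is not the paper's method (no contrast cubes, no CNN): it uses a fixed grid of regions as a crude stand-in for superpixels and scores each region by its color contrast to all other regions, down-weighted by spatial distance. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def grid_superpixel_saliency(image, grid=4, sigma=0.5):
    """Toy spatially weighted global-contrast saliency.

    `image` is an (H, W, 3) float array. It is split into a grid x grid
    lattice of regions (a crude stand-in for real superpixels). Each
    region's saliency is the sum of its mean-color distances to every
    other region, weighted so that nearby regions contribute more --
    the "spatial-aware" part of the contrast idea. Illustrative only;
    this is not the paper's contrast-cube construction.
    """
    h, w, _ = image.shape
    hs, ws = h // grid, w // grid
    means, centers = [], []
    for i in range(grid):
        for j in range(grid):
            patch = image[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            means.append(patch.reshape(-1, 3).mean(axis=0))
            centers.append(((i + 0.5) / grid, (j + 0.5) / grid))
    means = np.array(means)       # (grid*grid, 3) mean region colors
    centers = np.array(centers)   # (grid*grid, 2) normalized centers

    # Pairwise color contrast and spatial proximity between regions.
    color_d = np.linalg.norm(means[:, None] - means[None, :], axis=2)
    spatial_d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    weights = np.exp(-spatial_d ** 2 / (2 * sigma ** 2))

    sal = (weights * color_d).sum(axis=1)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)  # to [0, 1]
    return sal.reshape(grid, grid)
```

For example, on a dark image with one bright block, that block's region receives contrast contributions from every other region, so it scores highest; a real pipeline would replace the grid with SLIC-style superpixels and learn the contrast-to-saliency mapping instead of summing distances directly.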
