Can additional spectral bands be estimated from aerial color images?

Inspired by the surprising performance of deep generative models, in this paper we present the preliminary results of an overly ambitious task: computationally estimating the additional spectral bands of a color aerial image. We have harnessed the expressive power of deep generative models to estimate the distribution of mostly infrared bands of aerial scenes, using only the color RGB channels as input. Our approach has been tested from multiple aspects, including the reconstruction error of the additional bands and the effect of the estimated bands on scene classification performance, as well as the transfer potential of the trained network to a distinct dataset. To our surprise, the initial experiments have shown that deep generative models can indeed learn to estimate additional bands up to a certain degree and can thus computationally reinforce datasets stemming from color-only sensors.
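To make the task concrete, the sketch below illustrates the RGB-to-band estimation setting with a small convolutional encoder-decoder in PyTorch that maps RGB patches to a single estimated near-infrared band and reports the reconstruction error. The architecture, layer sizes, patch size, and L1 criterion are illustrative assumptions, not the paper's actual generative model or training setup.

```python
# Illustrative sketch: estimate a near-infrared (NIR) band from RGB channels
# with a simple convolutional encoder-decoder (assumed architecture).
import torch
import torch.nn as nn

class RGBToNIR(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsample the 3 RGB channels into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Upsample back to input resolution with a single estimated NIR channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

model = RGBToNIR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.L1Loss()  # reconstruction error between estimated and true NIR band

# One illustrative training step on random 64x64 patches (placeholders for real data).
rgb_batch = torch.rand(8, 3, 64, 64)   # RGB input, values scaled to [0, 1]
nir_batch = torch.rand(8, 1, 64, 64)   # corresponding ground-truth NIR band
loss = criterion(model(rgb_batch), nir_batch)
loss.backward()
optimizer.step()
print(f"reconstruction L1 error: {loss.item():.4f}")
```

In practice, the estimated band can then be appended to the RGB channels and fed to a scene classifier to measure how much the synthesized band helps, which mirrors the evaluation protocol described in the abstract.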
