Learning and transferring deep joint spectral–spatial features for hyperspectral classification / Jingxiang Yang in IEEE Transactions on geoscience and remote sensing, vol 55 n° 8 (August 2017)
[article]
Title: Learning and transferring deep joint spectral–spatial features for hyperspectral classification
Document type: Article/Communication
Authors: Jingxiang Yang, Author; Yong-Qiang Zhao, Author; Jonathan Cheung-Wai Chan, Author
Year of publication: 2017
Pages: pp 4729 - 4742
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] supervised classification
[IGN descriptor terms] classification by neural network
[IGN descriptor terms] feature extraction
[IGN descriptor terms] digital image filtering
[IGN descriptor terms] AVIRIS image
[IGN descriptor terms] hyperspectral image
[IGN descriptor terms] ROSIS image
[IGN descriptor terms] convolutional neural network
Abstract: (Author) Feature extraction is of significance for hyperspectral image (HSI) classification. Compared with conventional hand-crafted feature extraction, deep learning can automatically learn features with discriminative information. However, two issues exist in applying deep learning to HSIs. One issue is how to jointly extract spectral features and spatial features, and the other is how to train the deep model when training samples are scarce. In this paper, a deep convolutional neural network with a two-branch architecture is proposed to extract joint spectral-spatial features from HSIs. The two branches of the proposed network are devoted to features from the spectral domain and the spatial domain, respectively. The learned spectral and spatial features are then concatenated and fed to fully connected layers to extract the joint spectral-spatial features for classification. When the training samples are limited, we investigate transfer learning to improve the performance: low and mid layers of the network are pretrained and transferred from other data sources, and only the top layers are trained with the limited training samples extracted from the target scene. Experiments on Airborne Visible/Infrared Imaging Spectrometer and Reflective Optics System Imaging Spectrometer data demonstrate that the learned deep joint spectral-spatial features are discriminative, and competitive classification results can be achieved when compared with state-of-the-art methods. The experiments also reveal that the transferred features boost the classification performance.
Record number: A2017-503
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2698503
Online: http://dx.doi.org/10.1109/TGRS.2017.2698503
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=86448
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 8 (August 2017) . - pp 4729 - 4742
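
As an illustration of the kind of two-branch architecture this abstract describes, the following PyTorch sketch pairs a 1-D spectral branch with a 2-D spatial branch and classifies their concatenated features through fully connected layers. It is a minimal sketch, not the authors' network: the band count (103), patch size (9 x 9), channel widths and class count are placeholder assumptions, and the frozen-parameter snippet only hints at the transfer-learning setup discussed in the paper.

```python
# Minimal sketch of a two-branch spectral-spatial CNN; all sizes are assumptions.
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, n_bands=103, n_classes=9):
        super().__init__()
        # Spectral branch: 1-D convolutions along the band dimension of the centre pixel.
        self.spectral = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())          # -> 32 features
        # Spatial branch: 2-D convolutions on a band-reduced 9x9 patch (here 4 input
        # channels, e.g. precomputed principal components, are assumed).
        self.spatial = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> 64 features
        # Fully connected layers on the concatenated spectral + spatial features.
        self.classifier = nn.Sequential(
            nn.Linear(32 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, spectrum, patch):
        # spectrum: (B, 1, n_bands); patch: (B, 4, 9, 9)
        joint = torch.cat([self.spectral(spectrum), self.spatial(patch)], dim=1)
        return self.classifier(joint)

# Transfer-learning flavour of the paper: pretrain on another scene, then keep the
# low/mid layers fixed and fine-tune only the top layers on the few target samples.
model = TwoBranchNet()
for p in list(model.spectral.parameters()) + list(model.spatial.parameters()):
    p.requires_grad = False  # freeze transferred low/mid-layer weights
```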

Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks / Shaohui Mei in IEEE Transactions on geoscience and remote sensing, vol 55 n° 8 (August 2017)
[article]
Title: Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks
Document type: Article/Communication
Authors: Shaohui Mei, Author; Jingyu Ji, Author; Junhui Hou, Author; et al.
Year of publication: 2017
Pages: pp 4520 - 4533
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] supervised learning
[IGN descriptor terms] deep learning
[IGN descriptor terms] layer extraction
[IGN descriptor terms] digital image filtering
[IGN descriptor terms] AVIRIS image
[IGN descriptor terms] hyperspectral image
[IGN descriptor terms] ROSIS image
[IGN descriptor terms] convolutional neural network
Abstract: (Author) The convolutional neural network (CNN) is well known for its capability of feature learning and has made revolutionary achievements in many applications, such as scene recognition and target detection. In this paper, its capability of feature learning in hyperspectral images is explored by constructing a five-layer CNN for classification (C-CNN). The proposed C-CNN incorporates recent advances in the deep learning area, such as batch normalization, dropout, and the parametric rectified linear unit (PReLU) activation function. In addition, both spatial context and spectral information are elegantly integrated into the C-CNN so that spatial-spectral features are learned for hyperspectral images. A companion feature-learning CNN (FL-CNN) is constructed by extracting the fully connected feature layers of this C-CNN. Both supervised and unsupervised modes are designed for the proposed FL-CNN to learn sensor-specific spatial-spectral features. Extensive experimental results on four benchmark data sets from two well-known hyperspectral sensors, namely the airborne visible/infrared imaging spectrometer (AVIRIS) and reflective optics system imaging spectrometer (ROSIS) sensors, demonstrate that the proposed C-CNN outperforms state-of-the-art CNN-based classification methods, and that its corresponding FL-CNN is very effective at extracting sensor-specific spatial-spectral features for hyperspectral applications.
Record number: A2017-499
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2693346
Online: http://dx.doi.org/10.1109/TGRS.2017.2693346
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=86441
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 8 (August 2017) . - pp 4520 - 4533
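
The building blocks named in this abstract (batch normalization, dropout, PReLU, a fully connected feature layer) can be assembled as in the hedged PyTorch sketch below. Layer sizes and depth are assumptions rather than the published C-CNN configuration; the extract_features method mimics the FL-CNN idea of reading out the fully connected layer as a sensor-specific feature vector.

```python
# Rough sketch of a small CNN using batch normalization, dropout and PReLU,
# with a feature read-out in the spirit of the FL-CNN; sizes are assumptions.
import torch
import torch.nn as nn

class SmallSpectralSpatialCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.PReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.5),
            nn.Linear(64, 128), nn.PReLU())   # fully connected feature layer
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        # x: (B, in_channels, H, W) spatial patches around labelled pixels
        return self.classifier(self.features(x))

    def extract_features(self, x):
        # FL-CNN-style use: return the fully connected layer activations
        # as a spatial-spectral feature vector for downstream classifiers.
        with torch.no_grad():
            return self.features(x)
```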

Generalized composite kernel framework for hyperspectral image classification / J. Li in IEEE Transactions on geoscience and remote sensing, vol 51 n° 9 (September 2013)
[article]
Title: Generalized composite kernel framework for hyperspectral image classification
Document type: Article/Communication
Authors: J. Li, Author; Prashanth Reddy Marpu, Author; Antonio Plaza, Author; José M. Bioucas-Dias, Author; et al.
Year of publication: 2013
Pages: pp 4816 - 4829
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] supervised classification
[IGN descriptor terms] georeferenced data
[IGN descriptor terms] AVIRIS image
[IGN descriptor terms] hyperspectral image
[IGN descriptor terms] ROSIS image
[IGN descriptor terms] kernel-based method
[IGN descriptor terms] logistic regression
[IGN descriptor terms] support vector machine
Abstract: (Author) This paper presents a new framework for the development of generalized composite kernel machines for hyperspectral image classification. We construct a new family of generalized composite kernels which exhibit great flexibility when combining the spectral and the spatial information contained in the hyperspectral data, without any weight parameters. The classifier adopted in this work is multinomial logistic regression, and the spatial information is modeled from extended multiattribute profiles. In order to illustrate the good performance of the proposed framework, support vector machines are also used for evaluation purposes. Our experimental results with real hyperspectral images collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory's Airborne Visible/Infrared Imaging Spectrometer and the Reflective Optics Spectrographic Imaging System indicate that the proposed framework leads to state-of-the-art classification performance in complex analysis scenarios.
Record number: A2013-536
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2012.2230268
Online: https://doi.org/10.1109/TGRS.2012.2230268
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32673
in IEEE Transactions on geoscience and remote sensing > vol 51 n° 9 (September 2013) . - pp 4816 - 4829
Copies (1):
Barcode 065-2013091 | Call number: RAB | Type: Journal | Location: Documentation centre | Section: En réserve 3L | Availability: Available
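
One simple way to picture an unweighted composite kernel feeding a multinomial logistic regression is sketched below with scikit-learn. Stacking per-source RBF kernel blocks is only a stand-in for the generalized composite kernel family of the paper; the feature dimensions, gamma value and random data are purely illustrative assumptions.

```python
# Illustrative composite-kernel classification: one RBF kernel per information
# source (spectral vectors, spatial attribute-profile vectors), stacked without
# mixing weights and fed to a multinomial logistic regression.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import LogisticRegression

def composite_kernel(Xspec, Xspat, Xspec_train, Xspat_train, gamma=0.1):
    # Concatenating the per-source kernel blocks acts as an unweighted composite:
    # no mixing parameter needs to be tuned.
    K_spec = rbf_kernel(Xspec, Xspec_train, gamma=gamma)
    K_spat = rbf_kernel(Xspat, Xspat_train, gamma=gamma)
    return np.hstack([K_spec, K_spat])

# Assumed inputs: spectral and spatial-profile vectors for labelled pixels.
rng = np.random.default_rng(0)
Xspec_tr, Xspat_tr = rng.normal(size=(200, 103)), rng.normal(size=(200, 36))
y_tr = rng.integers(0, 9, size=200)

K_tr = composite_kernel(Xspec_tr, Xspat_tr, Xspec_tr, Xspat_tr)
clf = LogisticRegression(max_iter=1000).fit(K_tr, y_tr)  # multinomial softmax by default

# Classify new pixels by computing their kernels against the training pixels.
Xspec_te, Xspat_te = rng.normal(size=(50, 103)), rng.normal(size=(50, 36))
pred = clf.predict(composite_kernel(Xspec_te, Xspat_te, Xspec_tr, Xspat_tr))
```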

Semisupervised self-learning for hyperspectral image classification / Immaculada Dopido in IEEE Transactions on geoscience and remote sensing, vol 51 n° 7 Tome 1 (July 2013)
[article]
Title: Semisupervised self-learning for hyperspectral image classification
Document type: Article/Communication
Authors: Immaculada Dopido, Author; Jun Li, Author; Prashanth Reddy Marpu, Author; et al.
Year of publication: 2013
Pages: pp 4032 - 4044
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] semi-supervised learning
[IGN descriptor terms] multinomial logistic regression classification
[IGN descriptor terms] support vector machine classification
[IGN descriptor terms] semi-supervised classification
[IGN descriptor terms] AVIRIS image
[IGN descriptor terms] hyperspectral image
[IGN descriptor terms] ROSIS image
[IGN descriptor terms] logistic regression
Abstract: (Author) Remotely sensed hyperspectral imaging allows for the detailed analysis of the surface of the Earth using advanced imaging instruments which can produce high-dimensional images with hundreds of spectral bands. Supervised hyperspectral image classification is a difficult task due to the imbalance between the high dimensionality of the data and the limited availability of labeled training samples in real analysis scenarios. While the collection of labeled samples is generally difficult, expensive, and time-consuming, unlabeled samples can be generated in a much easier way. This observation has fostered the idea of adopting semisupervised learning techniques in hyperspectral image classification. The main assumption of such techniques is that new (unlabeled) training samples can be obtained from a (limited) set of available labeled samples without significant effort/cost. In this paper, we develop a new approach for semisupervised learning which adapts available active learning methods (in which a trained expert actively selects unlabeled samples) to a self-learning framework in which the machine learning algorithm itself selects the most useful and informative unlabeled samples for classification purposes. In this way, the labels of the selected pixels are estimated by the classifier itself, with the advantage that no extra cost is required for labeling the selected pixels using this machine-machine framework when compared with traditional machine-human active learning. The proposed approach is illustrated with two different classifiers: multinomial logistic regression and a probabilistic pixelwise support vector machine. Our experimental results with real hyperspectral images collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory's Airborne Visible-Infrared Imaging Spectrometer and the Reflective Optics Spectrographic Imaging System indicate that the use of self-learning represents an effective and promising strategy in the context of hyperspectral image classification.
Record number: A2013-374
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2012.2228275
Online: https://doi.org/10.1109/TGRS.2012.2228275
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32512
in IEEE Transactions on geoscience and remote sensing > vol 51 n° 7 Tome 1 (July 2013) . - pp 4032 - 4044
Copies (1):
Barcode 065-2013071A | Call number: RAB | Type: Journal | Location: Documentation centre | Section: En réserve 3L | Availability: Available
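
The machine-machine self-learning loop described in this abstract can be caricatured as follows: a probabilistic classifier labels the unlabeled pixels it is most confident about and is retrained on them, with no human annotator in the loop. The confidence-based selection is a simplification of the paper's active-learning-inspired criteria, and the classifier, round count and batch size below are arbitrary assumptions.

```python
# Schematic self-learning loop: the classifier itself provides pseudo-labels
# for its most confident predictions on the unlabeled pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_learning(X_lab, y_lab, X_unlab, rounds=5, per_round=50):
    """Iteratively augment the labeled set with self-labeled high-confidence samples."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf.fit(X, y)
        proba = clf.predict_proba(pool)
        confidence = proba.max(axis=1)
        # Select the unlabeled samples the classifier is most confident about ...
        idx = np.argsort(confidence)[-per_round:]
        # ... and let the classifier itself estimate their labels (machine-machine).
        pseudo_labels = clf.classes_[proba[idx].argmax(axis=1)]
        X = np.vstack([X, pool[idx]])
        y = np.concatenate([y, pseudo_labels])
        pool = np.delete(pool, idx, axis=0)
    return clf.fit(X, y)
```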

Mapping salt-marsh vegetation by multispectral and hyperspectral remote sensing / E. Belluco in Remote sensing of environment, vol 105 n° 1 (15/11/2006)
[article]
Title: Mapping salt-marsh vegetation by multispectral and hyperspectral remote sensing
Document type: Article/Communication
Authors: E. Belluco, Author; M. Camuffo, Author; et al.
Year of publication: 2006
Pages: pp 54 - 67
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN descriptor terms] vegetation map
[IGN descriptor terms] supervised classification
[IGN descriptor terms] maximum likelihood classification
[IGN descriptor terms] halophytic flora
[IGN descriptor terms] local flora
[IGN descriptor terms] CASI image
[IGN descriptor terms] hyperspectral image
[IGN descriptor terms] Ikonos image
[IGN descriptor terms] MIVIS image
[IGN descriptor terms] multispectral image
[IGN descriptor terms] Quickbird image
[IGN descriptor terms] ROSIS image
[IGN descriptor terms] salt marsh
[IGN descriptor terms] Venice
Abstract: (Author) Tidal marshes are characterized by complex patterns in both their geomorphic and ecological features. Such patterns arise through the elaboration of a network structure driven by the tidal forcing and through the interaction between hydrodynamical, geophysical and ecological components (chiefly vegetation). Intertidal morphological and ecological structures possess characteristic extents (of the order of kilometers) and small-scale features (down to tens of centimeters) which are not simultaneously accessible through field observations, thus making remote sensing a necessary observation tool. This paper describes a set of remote sensing observations from several satellite and airborne platforms, the collection of concurrent ground reference data and the vegetation distributions that may be inferred from them, with specific application to the Lagoon of Venice (Italy). The data set comprises ROSIS, CASI, MIVIS, IKONOS and QuickBird acquisitions, which cover a wide range of spatial and spectral resolutions. We show that spatially detailed and quantitatively reliable vegetation maps may be derived from remote sensing in tidal environments through unsupervised (K-means) and supervised algorithms (Maximum Likelihood and Spectral Angle Mapper). We find that, for the objective of intertidal vegetation classification, hyperspectral data contain largely redundant information. This in particular implies that a reduction of the spectral features is required for the application of the Maximum Likelihood classifier. A large number of experiments with different feature extraction/selection algorithms show that the use of four bands derived from Maximum Noise Fraction transforms and four RGBI broad bands obtained by spectral averaging yield very similar classification performances. The classifications from hyperspectral data are somewhat superior to those from multispectral data, but the close performance and the results of the feature-reduction experiments show that spatial resolution affects classification accuracy much more than spectral resolution. Monitoring schemes for tidal-environment vegetation may thus be based on high-resolution satellite acquisitions accompanied by systematic ancillary field observations at a relatively limited number of reference sites, with practical consequences of some relevance. Copyright Elsevier
Record number: A2006-502
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=28226
in Remote sensing of environment > vol 105 n° 1 (15/11/2006) . - pp 54 - 67
Copies (1):
Barcode 110-06191 | Call number: RAB | Type: Journal | Location: Documentation centre | Section: En réserve 3L | Availability: Available
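
Of the supervised classifiers used in this study, the Spectral Angle Mapper is simple enough to sketch in a few lines of NumPy: each pixel is assigned to the class whose reference spectrum subtends the smallest angle with the pixel spectrum. Building the reference spectra as per-class means of ground-reference pixels is an assumption of this sketch, not a detail taken from the paper.

```python
# Minimal Spectral Angle Mapper (SAM) classifier.
import numpy as np

def sam_classify(pixels, references):
    """pixels: (N, B) pixel spectra; references: (C, B) class reference spectra."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cosines = np.clip(p @ r.T, -1.0, 1.0)   # (N, C) cosines of the spectral angles
    angles = np.arccos(cosines)
    return angles.argmin(axis=1)            # index of the closest class

# Usage sketch (assumed names): references as per-class means of labelled spectra.
# refs = np.stack([train[train_labels == c].mean(axis=0) for c in range(n_classes)])
# labels = sam_classify(image.reshape(-1, n_bands), refs)
```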