Author details
Author: Shaohui Mei
Documents available by this author (3)
Rotation-invariant feature learning in VHR optical remote sensing images via nested Siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 no. 4 (April 2021)
[article]
Title: Rotation-invariant feature learning in VHR optical remote sensing images via nested Siamese structure with double center loss
Document type: Article/Communication
Authors: Ruoqiao Jiang, Author; Shaohui Mei, Author; Mingyang Ma, Author; et al.
Publication year: 2021
Pages: pp 3326 - 3337
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] sample
[IGN terms] feature extraction
[IGN terms] very high resolution image
[IGN terms] invariant
[IGN terms] Siamese neural network
[IGN terms] rotation
Abstract: (author) Rotation-invariant features are of great importance for object detection and image classification in very-high-resolution (VHR) optical remote sensing images. Although the multibranch convolutional neural network (mCNN) has been shown to be very effective for rotation-invariant feature learning, how to train such a network effectively is still an open problem. In this article, a nested Siamese structure (NSS) is proposed for training the mCNN to learn effective rotation-invariant features; it consists of an inner Siamese structure that enhances intraclass cohesion and an outer Siamese structure that enlarges the interclass margin. Moreover, a double center loss (DCL) function, in which training samples from the same class are mapped close to each other while those from different classes are mapped far away from each other, is proposed to train the NSS even with a small number of training samples. Experimental results on three benchmark data sets demonstrate that the NSS trained with DCL handles rotation variations effectively when learning features for image classification and outperforms several state-of-the-art rotation-invariant feature learning algorithms even when only a small number of training samples are available.
Record number: A2021-286
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3021283
Online publication date: 18/07/2020
Online: https://doi.org/10.1109/TGRS.2020.3021283
Electronic resource format: url article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97395
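To illustrate the double center loss idea described in the abstract above, here is a minimal sketch in PyTorch, assuming one learnable center per class, a hinge margin on distances to other classes' centers, and illustrative dimensions; the class name, margin value, and shapes are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a "double center loss": pulls features toward their
# own class center (intraclass cohesion) and pushes them away from other
# class centers by a margin (interclass separation). Names and the margin
# value are illustrative assumptions.
import torch
import torch.nn as nn


class DoubleCenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        # One learnable center per class.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared distances from each feature to every class center: (B, C).
        dists = torch.cdist(features, self.centers) ** 2
        # Distance to the sample's own class center (pull term).
        own = dists.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Distances to all other centers (push term with hinge margin).
        mask = torch.ones_like(dists, dtype=torch.bool)
        mask.scatter_(1, labels.unsqueeze(1), False)
        others = dists[mask].view(features.size(0), -1)
        push = torch.clamp(self.margin - others, min=0).mean(dim=1)
        return (own + push).mean()


# Usage: features from a CNN backbone, integer class labels.
loss_fn = DoubleCenterLoss(num_classes=10, feat_dim=128)
feats, labels = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = loss_fn(feats, labels)
```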
in IEEE Transactions on geoscience and remote sensing > vol 59 no. 4 (April 2021) . - pp 3326 - 3337 [article]
Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks / Shaohui Mei in IEEE Transactions on geoscience and remote sensing, vol 55 no. 8 (August 2017)
[article]
Title: Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks
Document type: Article/Communication
Authors: Shaohui Mei, Author; Jingyu Ji, Author; Junhui Hou, Author; et al.
Publication year: 2017
Pages: pp 4520 - 4533
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] supervised learning
[IGN terms] deep learning
[IGN terms] layer extraction
[IGN terms] digital image filtering
[IGN terms] AVIRIS image
[IGN terms] hyperspectral image
[IGN terms] ROSIS image
[IGN terms] convolutional neural network
Abstract: (author) The convolutional neural network (CNN) is well known for its feature-learning capability and has achieved revolutionary results in many applications, such as scene recognition and target detection. In this paper, its feature-learning capability for hyperspectral images is explored by constructing a five-layer CNN for classification (C-CNN). The proposed C-CNN incorporates recent advances in deep learning, such as batch normalization, dropout, and the parametric rectified linear unit (PReLU) activation function. In addition, both spatial context and spectral information are integrated into the C-CNN so that spatial-spectral features are learned for hyperspectral images. A companion feature-learning CNN (FL-CNN) is constructed by extracting the fully connected feature layers of this C-CNN. Both supervised and unsupervised modes are designed for the proposed FL-CNN to learn sensor-specific spatial-spectral features. Extensive experimental results on four benchmark data sets from two well-known hyperspectral sensors, namely the airborne visible/infrared imaging spectrometer (AVIRIS) and the reflective optics system imaging spectrometer (ROSIS), demonstrate that the proposed C-CNN outperforms state-of-the-art CNN-based classification methods and that the corresponding FL-CNN is very effective at extracting sensor-specific spatial-spectral features for hyperspectral applications.
Record number: A2017-499
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2693346
Online: http://dx.doi.org/10.1109/TGRS.2017.2693346
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=86441
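As a rough illustration of the spatial-spectral CNN described in the abstract above, the following PyTorch sketch classifies hyperspectral pixels from small patches (a spatial window with all bands as channels) using batch normalization, PReLU, and dropout; the layer widths, patch size, and function name are illustrative assumptions, not the paper's exact C-CNN.

```python
# Minimal sketch of a small spatial-spectral CNN for hyperspectral patch
# classification, using the ingredients named in the abstract (batch norm,
# PReLU, dropout). All sizes are illustrative assumptions.
import torch
import torch.nn as nn


def spatial_spectral_cnn(num_bands: int, num_classes: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(num_bands, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),
        nn.PReLU(),
        nn.Conv2d(64, 128, kernel_size=3, padding=1),
        nn.BatchNorm2d(128),
        nn.PReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Dropout(0.5),
        nn.Linear(128, 128),   # fully connected feature layer (FL-CNN-style features)
        nn.PReLU(),
        nn.Linear(128, num_classes),
    )


# Usage: a batch of 5x5 patches from a 200-band image (AVIRIS-like data).
model = spatial_spectral_cnn(num_bands=200, num_classes=16)
patches = torch.randn(8, 200, 5, 5)
logits = model(patches)        # (8, 16) class scores
```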
in IEEE Transactions on geoscience and remote sensing > vol 55 no. 8 (August 2017) . - pp 4520 - 4533 [article]
Hyperspectral image resolution enhancement using high-resolution multispectral image based on spectral unmixing / Mohamed Amine Bendoumi in IEEE Transactions on geoscience and remote sensing, vol 52 no. 10, part 2 (October 2014)
[article]
Title: Hyperspectral image resolution enhancement using high-resolution multispectral image based on spectral unmixing
Document type: Article/Communication
Authors: Mohamed Amine Bendoumi, Author; Mingyi He, Author; Shaohui Mei, Author
Publication year: 2014
Pages: pp 6574 - 6583
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] spectral mixture analysis
[IGN terms] image fusion
[IGN terms] hyperspectral image
[IGN terms] multiband image
Abstract: (author) In this paper, a hyperspectral (HS) image resolution enhancement algorithm based on spectral unmixing is proposed for fusing a high-spatial-resolution multispectral (MS) image with a low-spatial-resolution HS image (HSI). A high-spatial-resolution HSI is reconstructed from the fine spectral features of the HSI, represented by endmembers, and the fine spatial features of the MS image, represented by abundances. Since the number of endmembers extracted from the MS image cannot exceed the number of its bands in a least-squares-based spectral unmixing algorithm, large reconstruction errors would otherwise occur for the HSI, degrading the fusion performance of the enhanced HSI. Therefore, a novel fusion framework is also proposed that divides the whole image into several subimages, which further improves the performance of the proposed spectral-unmixing-based fusion algorithm. Finally, experiments on Hyperspectral Digital Imagery Collection Experiment and Airborne Visible/Infrared Imaging Spectrometer data demonstrate that the proposed fusion algorithms outperform other well-known fusion techniques in both the spatial and spectral domains.
Record number: A2014-485
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2014.2298056
Online: https://doi.org/10.1109/TGRS.2014.2298056
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=74684
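The abstract above describes reconstructing a high-resolution HS image as endmembers (spectral detail from the HS image) times abundances (spatial detail estimated from the MS image). Below is a rough NumPy sketch of that idea under simplifying assumptions; the spectral response matrix R, the unconstrained least-squares abundance solver, and the array shapes are assumptions for illustration, not the paper's exact method.

```python
# Rough sketch of unmixing-based HS-MS fusion: project HS endmembers into the
# MS spectral space, estimate abundances per MS pixel by least squares, then
# reconstruct high-resolution HS pixels as endmembers @ abundances.
import numpy as np


def fuse(endmembers_hs: np.ndarray, ms_pixels: np.ndarray, R: np.ndarray) -> np.ndarray:
    """
    endmembers_hs : (bands_hs, p)        endmembers extracted from the low-res HS image
    ms_pixels     : (bands_ms, N)        high-spatial-resolution MS pixels (flattened)
    R             : (bands_ms, bands_hs) spectral response mapping HS bands to MS bands
    returns       : (bands_hs, N)        reconstructed high-resolution HS pixels
    """
    # Project HS endmembers into the MS spectral space.
    endmembers_ms = R @ endmembers_hs                                        # (bands_ms, p)
    # Least-squares abundances of each MS pixel w.r.t. the projected endmembers.
    abundances, *_ = np.linalg.lstsq(endmembers_ms, ms_pixels, rcond=None)   # (p, N)
    # High-resolution reconstruction: spectral detail x spatial detail.
    return endmembers_hs @ abundances


# Toy usage with random data: 100 HS bands, 4 MS bands, 3 endmembers, 50 pixels.
rng = np.random.default_rng(0)
E, R = rng.random((100, 3)), rng.random((4, 100))
ms = R @ E @ rng.random((3, 50))
hr_hs = fuse(E, ms, R)        # (100, 50)
```

Note that the toy example keeps the number of endmembers (3) below the number of MS bands (4), matching the constraint the abstract points out for least-squares unmixing.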
in IEEE Transactions on geoscience and remote sensing > vol 52 no. 10, part 2 (October 2014) . - pp 6574 - 6583 [article]
Copies (1)
Barcode: 065-2014101B | Call number: RAB | Medium: Journal | Location: Centre de documentation | Section: In reserve (L003) | Availability: Available