Author details
Author: Wei Han
Documents available written by this author (3)
Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network / Fengpeng Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 8 (August 2021)
[article]
Title: Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network
Document type: Article/Communication
Authors: Fengpeng Li; Jiabao Li; Wei Han; et al.
Publication year: 2021
Pages: pp 577 - 591
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] neural network classification
[IGN terms] large scale
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] medium scale
[IGN terms] small scale
[IGN terms] linear regression
[IGN terms] convolutional neural network
Abstract: (Author) Inspired by the outstanding achievements of deep learning, supervised deep-learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised deep-learning representation methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. To address this issue, an unsupervised deep-learning representation method for high-resolution remote sensing image scene classification is proposed in this work. The proposed method, called contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses the features extracted by the convolutional neural network (CNN)-based feature extractor, together with the labels of the training data, to define the space of each category, and then makes predictions in the testing procedure using linear regression. Compared with existing unsupervised deep-learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
Record number: A2021-670
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.8.577
Online publication date: 01/08/2021
Online: https://doi.org/10.14358/PERS.87.8.577
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98806
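As an illustration of the contrastive objective described in the abstract above, here is a minimal sketch of a generic NT-Xent-style loss in which two color channels of the same image act as a positive pair and channels from different images act as negatives. The embedding size, batch size, and temperature are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a generic contrastive (NT-Xent-style) loss over two "views" of a
# batch, imagined here as embeddings of two color channels of the same images.
# The temperature and dimensions are assumptions for illustration only.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, D)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # ignore self-similarity
    n = z1.size(0)
    # the i-th sample of view 1 is positive with the i-th sample of view 2, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# toy usage: random "channel view" embeddings stand in for CNN features
z_red, z_green = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_loss(z_red, z_green).item())
```

After such pretraining, the abstract describes fitting a linear regression classifier on the extracted CNN features of the labeled training samples; any standard linear probe would play that role in this sketch.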
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 8 (August 2021) . - pp 577 - 591 [article]

Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2021081 | SL | Journal | Centre de documentation | Reading-room journals | Available

High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network / Fengpeng Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
[article]
Title: High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network
Document type: Article/Communication
Authors: Fengpeng Li; Ruyi Feng; Wei Han; et al.
Publication year: 2020
Pages: pp 8077 - 8092
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] digital image filtering
[IGN terms] high-resolution image
[IGN terms] dataset
[IGN terms] semantic segmentation
[IGN terms] statistical test
Abstract: (author) High-resolution remote sensing (HRRS) image scene classification has attracted an enormous amount of attention due to its wide application in a range of tasks. Owing to the rapid development of deep learning (DL) and its excellent representation capacity, models based on convolutional neural networks (CNNs) have achieved competitive results on HRRS image scene classification. The scene labels of HRRS images depend strongly on the combination of global information and information from key regions or locations. However, most existing CNN-based models tend either to represent only the global features of images or to overstate local information captured from key regions or locations, which may confuse different categories. To address this issue, a method for capturing key regions or locations, called key filter bank (KFB), is proposed in this article; KFB retains global information at the same time. This method can be combined with different CNN models to improve the performance of HRRS image scene classification. Moreover, for the convenience of practical tasks, an end-to-end model called KFBNet, in which KFB is combined with DenseNet-121, is proposed to compare performance with existing models. This model is evaluated on public benchmark data sets, and it performs better on these benchmarks than state-of-the-art methods.
Record number: A2020-683
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2987060
Online publication date: 23/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2987060
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96208
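The abstract above describes combining global information with features from key regions or locations. The sketch below shows one generic way such a fusion head could sit on top of a DenseNet-like feature map; it is an interpretation for illustration only, and the saliency scoring, top-k selection, and layer sizes are assumptions rather than the actual KFB/KFBNet design.

```python
# Hedged sketch: one generic way to fuse a global descriptor with responses from
# salient ("key") spatial locations of a CNN feature map. This illustrates the
# global-plus-key-region idea from the abstract, not the published KFB/KFBNet model;
# channel count, class count, and top_k are assumptions.
import torch
import torch.nn as nn

class GlobalPlusKeyRegionHead(nn.Module):
    def __init__(self, in_channels=1024, num_classes=45, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)   # saliency per location
        self.fc = nn.Linear(2 * in_channels, num_classes)

    def forward(self, feat):                      # feat: (N, C, H, W) backbone features
        c = feat.size(1)
        global_desc = feat.mean(dim=(2, 3))       # global average pooling, (N, C)
        scores = self.score(feat).flatten(1)      # (N, H*W) location saliency
        top_idx = scores.topk(self.top_k, dim=1).indices          # key locations
        flat = feat.flatten(2)                    # (N, C, H*W)
        idx = top_idx.unsqueeze(1).expand(-1, c, -1)              # (N, C, top_k)
        key_desc = flat.gather(2, idx).mean(dim=2)                # (N, C)
        return self.fc(torch.cat([global_desc, key_desc], dim=1))

# toy usage on a random DenseNet-like feature map
head = GlobalPlusKeyRegionHead()
logits = head(torch.randn(2, 1024, 7, 7))
print(logits.shape)   # torch.Size([2, 45])
```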
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 11 (November 2020) . - pp 8077 - 8092 [article]

A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification / Wei Han in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification
Document type: Article/Communication
Authors: Wei Han; Ruyi Feng; Lizhe Wang; Yafan Cheng
Publication year: 2018
Pages: pp 23 - 43
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] sensitivity analysis
[IGN terms] deep learning
[IGN terms] semi-supervised classification
[IGN terms] convolutional neural network
[IGN terms] scene
Abstract: (Author) High-resolution remote sensing (HRRS) image scene classification plays a crucial role in a wide range of applications and has been receiving significant attention. Recently, remarkable efforts have been made to develop a variety of approaches for HRRS scene classification, among which deep-learning-based methods have achieved considerable performance compared with state-of-the-art methods. However, deep-learning-based methods face a severe limitation: a great number of manually annotated HRRS samples are needed to obtain a reliable model, and sufficiently annotated datasets are still lacking in the remote sensing field. In addition, building a large-scale HRRS image dataset is challenging because of the abundant diversity and variation in HRRS images. To address this problem, we propose a semi-supervised generative framework (SSGF) that combines deep learning features, a self-label technique, and a discriminative evaluation method to accomplish scene classification and dataset annotation. On this basis, we further develop an extended algorithm (SSGA-E) and evaluate it in dedicated experiments. The experimental results show that SSGA-E outperforms most fully supervised and semi-supervised methods: it achieves the third-best accuracy on the UCM dataset and the second-best accuracy on the WHU-RS, NWPU-RESISC45, and AID datasets. These impressive results demonstrate that the proposed SSGF and its extension are effective in addressing the lack of annotated HRRS datasets: they can learn valuable information from unlabeled samples to improve classification ability and to obtain a reliable annotated dataset for supervised learning.
Record number: A2018-489
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2017.11.004
Online publication date: 14/11/2017
Online: https://doi.org/10.1016/j.isprsjprs.2017.11.004
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91225
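The abstract above mentions a self-label technique that exploits unlabeled samples. The following is a minimal sketch of a generic confidence-thresholded self-labeling loop over precomputed deep features; the logistic-regression classifier, the 0.95 threshold, and the loop structure are assumptions for illustration and do not reproduce the SSGF/SSGA-E algorithm.

```python
# Hedged sketch: a generic self-labeling loop over precomputed deep features.
# Classifier choice, confidence threshold, and number of rounds are illustrative
# assumptions, not parameters from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_label(feats_l, y_l, feats_u, rounds=3, threshold=0.95):
    """Iteratively move confidently classified unlabeled samples into the labeled set."""
    feats_l, y_l, feats_u = feats_l.copy(), y_l.copy(), feats_u.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(feats_l, y_l)
        if len(feats_u) == 0:
            break
        proba = clf.predict_proba(feats_u)
        conf = proba.max(axis=1)
        pred = clf.classes_[proba.argmax(axis=1)]
        keep = conf >= threshold                 # only trust confident pseudo-labels
        if not keep.any():
            break
        feats_l = np.vstack([feats_l, feats_u[keep]])
        y_l = np.concatenate([y_l, pred[keep]])
        feats_u = feats_u[~keep]
    return clf

# toy usage with random "deep features" standing in for CNN descriptors
rng = np.random.default_rng(0)
clf = self_label(rng.normal(size=(20, 16)), rng.integers(0, 3, 20),
                 rng.normal(size=(50, 16)))
```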
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) . - pp 23 - 43 [article]

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018111 | RAB | Journal | Centre de documentation | In storage L003 | Available
081-2018113 | DEP-EXM | Journal | LASTIG | Deposit in unit | Not for loan
081-2018112 | DEP-EAF | Journal | Nancy | Deposit in unit | Not for loan