Author details
Author: Yansheng Li
Available documents by this author (3)



Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation / Yansheng Li in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021)
[article]
Title: Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation
Document type: Article/Communication
Authors: Yansheng Li, Author; Te Shi, Author; Yongjun Zhang, Author; et al., Author
Year of publication: 2021
Pages: pp 20 - 33
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] semi-supervised classification
[IGN terms] training data (machine learning)
[IGN terms] constraint programming
[IGN terms] semantic segmentation
Abstract: (author) Due to its wide applications, remote sensing (RS) image semantic segmentation has attracted increasing research interest in recent years. Benefiting from its hierarchical abstraction ability, the deep semantic segmentation network (DSSN) has achieved tremendous success in RS image semantic segmentation and has gradually become the mainstream technology. However, the superior performance of DSSN depends heavily on two conditions: (I) massive quantities of labeled training data exist; (II) the testing data closely resemble the training data. In actual RS applications, it is difficult to fully meet these conditions due to RS sensor variation and the distinct landscape variation across geographic locations. To make DSSN fit the actual RS scenario, this paper addresses the cross-domain RS image semantic segmentation task, in which DSSN is trained on one labeled dataset (i.e., the source domain) but tested on another, varied dataset (i.e., the target domain). In this setting, the performance of DSSN is inevitably limited by the data shift between the source and target domains. To reduce the disadvantageous influence of data shift, this paper proposes a novel objective function with multiple weakly-supervised constraints to learn DSSN for cross-domain RS image semantic segmentation. Through careful examination of the characteristics of cross-domain RS image semantic segmentation, the multiple weakly-supervised constraints include the weakly-supervised transfer invariant constraint (WTIC), the weakly-supervised pseudo-label constraint (WPLC) and the weakly-supervised rotation consistency constraint (WRCC). Specifically, DualGAN is recommended to conduct unsupervised style transfer between the source and target domains to carry out WTIC.
To make full use of the merits of the multiple constraints, this paper presents a dynamic optimization strategy that dynamically adjusts the constraint weights of the objective function during training. With full consideration of the characteristics of the cross-domain RS image semantic segmentation task, this paper gives two cross-domain settings: (I) variation in geographic location and (II) variation in both geographic location and imaging mode. Extensive experiments demonstrate that the proposed method remarkably outperforms state-of-the-art methods under both settings. The collected datasets and evaluation benchmarks have been made publicly available online (https://github.com/te-shi/MUCSS).
Record number: A2021-261
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.02.009
Online publication date: 06/03/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.02.009
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97302
in ISPRS Journal of photogrammetry and remote sensing > vol 175 (May 2021) . - pp 20 - 33 [article]
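The abstract describes an objective that combines the three weakly-supervised losses (WTIC, WPLC, WRCC) with weights adjusted dynamically during training. The abstract does not give the weighting rule, so the softmax-over-current-losses scheme below, along with the function names and example loss values, is a hypothetical sketch rather than the paper's implementation:

```python
import math

def dynamic_weights(losses, temperature=1.0):
    """Hypothetical dynamic weighting: softmax over the current loss values,
    so a constraint whose loss is currently larger receives more weight."""
    exps = [math.exp(l / temperature) for l in losses]
    total = sum(exps)
    return [e / total for e in exps]

def combined_objective(losses, weights):
    """Weighted sum of the constraint losses (e.g. WTIC, WPLC, WRCC)."""
    return sum(w * l for w, l in zip(weights, losses))

# At each training step, re-derive the weights from the current losses.
losses = [0.9, 0.4, 0.2]           # illustrative WTIC, WPLC, WRCC values
weights = dynamic_weights(losses)  # weights sum to 1
total_loss = combined_objective(losses, weights)
```

Any rule that maps current losses to normalized weights could play the same role; the point is only that the weights are recomputed during training rather than fixed in advance.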
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021051 | SL | Journal | Centre de documentation | Journals reading room | Available
081-2021052 | DEP-RECF | Journal | Nancy | Unit repository | Not for loan
081-2021053 | DEP-RECP | Journal | Saint-Mandé | Unit repository | Not for loan

Scene context-driven vehicle detection in high-resolution aerial images / Chao Tao in IEEE Transactions on geoscience and remote sensing, Vol 57 n° 10 (October 2019)
[article]
Title: Scene context-driven vehicle detection in high-resolution aerial images
Document type: Article/Communication
Authors: Chao Tao, Author; Li Mi, Author; Yansheng Li, Author; et al., Author
Year of publication: 2019
Pages: pp 7339 - 7351
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object-oriented classification
[IGN terms] object detection
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] moving object
[IGN terms] motor vehicle
Abstract: (author) As the spatial resolution of remote sensing images gradually improves, "scene-object" collaborative image interpretation becomes feasible. Unfortunately, this idea is not fully exploited in vehicle detection from high-resolution aerial images, and most existing methods could be improved by considering the variability of vehicle spatial distribution across image scenes and treating vehicle detection as a scene-specific task. With this motivation, a scene context-driven vehicle detection method is proposed in this paper. First, we perform scene classification using a deep learning method and then detect vehicles in roads and parking lots separately with different vehicle detectors. Afterward, we further refine the detection results using different postprocessing rules according to scene type. Experimental results show that the proposed approach outperforms state-of-the-art algorithms in terms of a higher detection accuracy rate and a lower false alarm rate.
Record number: A2019-535
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2912985
Online publication date: 03/06/2019
Online: http://doi.org/10.1109/TGRS.2019.2912985
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94131
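The pipeline the abstract outlines (scene classification, then a scene-specific detector, then scene-specific postprocessing) can be sketched as below. The scene labels, the dictionary-of-detectors structure, and all function names are illustrative assumptions; the article text reproduced here does not specify an API, and the toy lambdas stand in for the learned components:

```python
def detect_vehicles(image, classify_scene, detectors, postprocessors):
    """Hypothetical scene context-driven pipeline: classify the scene,
    then apply the detector and postprocessing rules for that scene type."""
    scene = classify_scene(image)             # e.g. "road" or "parking_lot"
    candidates = detectors[scene](image)      # scene-specific detector
    return postprocessors[scene](candidates)  # scene-specific cleanup rules

# Toy stand-ins: boxes are (x1, y1, x2, y2); the "road" rule keeps wide boxes.
result = detect_vehicles(
    image="aerial_tile",
    classify_scene=lambda img: "road",
    detectors={"road": lambda img: [(10, 20, 40, 60), (5, 5, 8, 9)]},
    postprocessors={"road": lambda boxes: [b for b in boxes if b[2] - b[0] > 10]},
)
```

The design point is the dispatch on scene type: both the detector and the postprocessing rule are looked up per scene rather than shared across the whole image.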
in IEEE Transactions on geoscience and remote sensing > Vol 57 n° 10 (October 2019) . - pp 7339 - 7351 [article]

Large-scale remote sensing image retrieval by deep hashing neural networks / Yansheng Li in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
[article]
Title: Large-scale remote sensing image retrieval by deep hashing neural networks
Document type: Article/Communication
Authors: Yansheng Li, Author; Yongjun Zhang, Author; Xin Huang, Author; Hu Zhu, Author; Jiayi Ma, Author
Year of publication: 2018
Pages: pp 950 - 965
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] classification by neural network
[IGN terms] training data (machine learning)
Abstract: (author) As one of the most challenging tasks of remote sensing big data mining, large-scale remote sensing image retrieval has attracted increasing attention from researchers. Existing large-scale remote sensing image retrieval approaches are generally implemented with hashing learning methods, which take handcrafted features as inputs and map the high-dimensional feature vector to a low-dimensional binary feature vector to reduce feature-search complexity. To apply the merits of deep learning, this paper proposes a novel large-scale remote sensing image retrieval approach based on deep hashing neural networks (DHNNs). More specifically, DHNNs are composed of deep feature learning neural networks and hashing learning neural networks and can be optimized in an end-to-end manner. Rather than dedicating expertise and effort to the design of feature descriptors, we can automatically learn good feature extraction operations and feature hashing mappings under the supervision of labeled samples. To broaden the application field, DHNNs are evaluated under two representative remote sensing cases: scarce and sufficient labeled samples. To make up for a lack of labeled samples, DHNNs can be trained via transfer learning in the former case. In the latter case, DHNNs can be trained via supervised learning from scratch with the aid of a vast number of labeled samples. Extensive experiments on one public remote sensing image dataset with a limited number of labeled samples and on another public dataset with plenty of labeled samples show that the proposed remote sensing image retrieval approach based on DHNNs can remarkably outperform state-of-the-art methods under both examined conditions.
Record number: A2018-192
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2756911
Online publication date: 13/10/2017
Online: https://doi.org/10.1109/TGRS.2017.2756911
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89857
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 2 (February 2018) . - pp 950 - 965 [article]
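The retrieval scheme in the abstract maps each image to a low-dimensional binary code and searches in that code space. A minimal sketch of the search step follows, assuming sign-thresholded network outputs and a list-of-(id, code) database; both are assumptions, and the DHNN architecture itself is not reproduced here:

```python
def binarize(features):
    """Threshold real-valued network outputs into a binary hash code."""
    return [1 if f >= 0 else 0 for f in features]

def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database, k=2):
    """Return the k database entries whose codes are nearest in Hamming space."""
    return sorted(database, key=lambda entry: hamming(query_code, entry[1]))[:k]

query = binarize([0.3, -1.2, 0.8, -0.1])      # -> [1, 0, 1, 0]
database = [("img_a", [1, 0, 1, 0]),
            ("img_b", [0, 1, 0, 1]),
            ("img_c", [1, 0, 1, 1])]
top = retrieve(query, database)               # nearest codes first
```

This is the complexity argument the abstract makes: comparing short binary codes by Hamming distance is far cheaper than comparing high-dimensional real-valued feature vectors.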