Descripteur
Termes IGN > mathématiques > statistique mathématique > analyse de données > segmentation > segmentation sémantique
segmentation sémantique
Synonyme(s) : étiquetage sémantique ; étiquetage de données
Documents disponibles dans cette catégorie (204)
Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images / Guillemette Fonteix in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2021 (July 2021)
[article]
Titre : Marrying deep learning and data fusion for accurate semantic labeling of Sentinel-2 images
Type de document : Article/Communication
Auteurs : Guillemette Fonteix, Auteur ; M. Swaine, Auteur ; M. Leras, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : pp 101 - 107
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] carte de confiance
[Termes IGN] chaîne de traitement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] fusion d'images
[Termes IGN] image optique
[Termes IGN] image Sentinel-MSI
[Termes IGN] segmentation sémantique
[Termes IGN] série temporelle
Résumé : (auteur) The understanding of the Earth through global land monitoring from satellite images paves the way towards many applications including flight simulations, urban management and telecommunications. The twin satellites from the Sentinel-2 mission developed by the European Space Agency (ESA) provide 13 spectral bands with a high observation frequency worldwide. In this paper, we present a novel multi-temporal approach for land-cover classification of Sentinel-2 images whereby a time-series of images is classified using fully convolutional network U-Net models and then coupled by a developed probabilistic algorithm. The proposed pipeline further includes an automatic quality control and correction step whereby an external source can be introduced in order to validate and correct the deep learning classification. The final step consists of adjusting the combined predictions to the cloud-free mosaic built from Sentinel-2 L2A images in order for the classification to more closely match the reference mosaic image.
Numéro de notice : A2021-492
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.5194/isprs-annals-V-3-2021-101-2021
Date de publication en ligne : 17/06/2021
En ligne : http://dx.doi.org/10.5194/isprs-annals-V-3-2021-101-2021
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97957
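The coupling of per-date U-Net predictions by a probabilistic algorithm can be illustrated with a minimal sketch, assuming per-date class-probability maps and a naive log-probability (product-of-experts) fusion; `combine_timeseries` and its array shapes are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def combine_timeseries(prob_maps):
    """Combine per-date class-probability maps of shape (T, H, W, C)
    into one label map: sum log-probabilities across dates (naive
    Bayes-style fusion), then take the per-pixel argmax."""
    log_probs = np.log(np.clip(prob_maps, 1e-9, 1.0))  # guard against log(0)
    fused = log_probs.sum(axis=0)                      # (H, W, C)
    return fused.argmax(axis=-1)                       # (H, W)

# toy example: 3 acquisition dates, 2x2 pixels, 3 classes
T, H, W, C = 3, 2, 2, 3
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(C), size=(T, H, W))      # valid per-pixel distributions
labels = combine_timeseries(probs)
print(labels.shape)  # (2, 2)
```

The clip before the log is what makes the fusion robust to a single date assigning zero probability to the true class (e.g. under cloud cover).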
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2021 (July 2021) . - pp 101 - 107 [article]

Domain adaptive transfer attack-based segmentation networks for building extraction from aerial images / Younghwan Na in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
[article]
Titre : Domain adaptive transfer attack-based segmentation networks for building extraction from aerial images
Type de document : Article/Communication
Auteurs : Younghwan Na, Auteur ; Jun Hee Kim, Auteur ; Kyungsu Lee, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : pp 5171 - 5182
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection du bâti
[Termes IGN] entropie
[Termes IGN] image aérienne
[Termes IGN] segmentation sémantique
Résumé : (auteur) Semantic segmentation models based on convolutional neural networks (CNNs) have gained much attention in relation to remote sensing and have achieved remarkable performance for the extraction of buildings from high-resolution aerial images. However, the issue of limited generalization for unseen images remains. When there is a domain gap between the training and test data sets, the CNN-based segmentation models trained by a training data set fail to segment buildings for the test data set. In this article, we propose segmentation networks based on a domain adaptive transfer attack (DATA) scheme for building extraction from aerial images. The proposed system combines the domain transfer and the adversarial attack concepts. Based on the DATA scheme, the distribution of the input images can be shifted to that of the target images while turning images into adversarial examples against a target network. Defending adversarial examples adapted to the target domain can overcome the performance degradation due to the domain gap and increase the robustness of the segmentation model. Cross-data set experiments and ablation study are conducted for three different data sets: the Inria aerial image labeling data set, the Massachusetts building data set, and the WHU East Asia data set. Compared with the performance of the segmentation network without the DATA scheme, the proposed method shows improvements in the overall intersection over union (IoU). Moreover, it is verified that the proposed method outperforms even when compared with feature adaptation (FA) and output space adaptation (OSA).
Numéro de notice : A2021-427
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2020.3010055
Date de publication en ligne : 30/07/2020
En ligne : https://doi.org/10.1109/TGRS.2020.3010055
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97783
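The intersection-over-union metric reported in this abstract has a standard definition; `iou_per_class` below is a generic sketch of that definition, not code from the paper:

```python
import numpy as np

def iou_per_class(pred, gt, num_classes):
    """Per-class intersection over union between a predicted and a
    ground-truth label map (NaN when a class is absent from both)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(iou_per_class(pred, gt, 2))  # class 0: 0.5, class 1: 2/3
```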
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 6 (June 2021) . - pp 5171 - 5182 [article]

Multiscale cloud detection in remote sensing images using a dual convolutional neural network / Markku Luotamo in IEEE Transactions on geoscience and remote sensing, vol 59 n° 6 (June 2021)
[article]
Titre : Multiscale cloud detection in remote sensing images using a dual convolutional neural network
Type de document : Article/Communication
Auteurs : Markku Luotamo, Auteur ; Sari Metsämäki, Auteur ; Arto Klami, Auteur
Année de publication : 2021
Article en page(s) : pp
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification pixellaire
[Termes IGN] détection des nuages
[Termes IGN] granularité d'image
[Termes IGN] image Sentinel-MSI
[Termes IGN] segmentation sémantique
Résumé : (auteur) Semantic segmentation by convolutional neural networks (CNN) has advanced the state of the art in pixel-level classification of remote sensing images. However, processing large images typically requires analyzing the image in small patches, and hence, features that have a large spatial extent still cause challenges in tasks, such as cloud masking. To support a wider scale of spatial features while simultaneously reducing computational requirements for large satellite images, we propose an architecture of two cascaded CNN model components successively processing undersampled and full-resolution images. The first component distinguishes between patches in the inner cloud area from patches at the cloud’s boundary region. For the cloud-ambiguous edge patches requiring further segmentation, the framework then delegates computation to a fine-grained model component. We apply the architecture to a cloud detection data set of complete Sentinel-2 multispectral images, approximately annotated for minimal false negatives in a land-use application. On this specific task and data, we achieve a 16% relative improvement in pixel accuracy over a CNN baseline based on patching. Numéro de notice : A2021-425
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2020.3015272
Date de publication en ligne : 21/08/2020
En ligne : https://doi.org/10.1109/TGRS.2020.3015272
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97781
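The coarse-to-fine delegation described in this abstract can be sketched as follows, assuming a scalar "cloud probability" from the coarse model and a per-pixel mask from the fine one; `cascade_segment`, the patch size, and the ambiguity band are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def cascade_segment(image, coarse_fn, fine_fn, patch=64, band=(0.2, 0.8)):
    """A cheap coarse model scores each undersampled patch; only patches
    whose cloud probability falls in the ambiguous band (likely cloud
    boundaries) are delegated to the full-resolution fine model."""
    H, W = image.shape
    out = np.zeros((H, W), dtype=np.uint8)
    lo, hi = band
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            tile = image[y:y + patch, x:x + patch]
            p = coarse_fn(tile[::4, ::4])          # cheap, undersampled pass
            if lo < p < hi:                        # ambiguous: refine per pixel
                out[y:y + patch, x:x + patch] = fine_fn(tile)
            else:                                  # confident: fill uniformly
                out[y:y + patch, x:x + patch] = int(p >= hi)
    return out

# stand-ins for the two CNNs: mean brightness as "cloud probability",
# per-pixel thresholding as the fine model
coarse = lambda t: float(t.mean())
fine = lambda t: (t > 0.5).astype(np.uint8)
img = np.zeros((128, 128)); img[:, 64:] = 1.0
mask = cascade_segment(img, coarse, fine)
```

The computational saving comes from the `else` branch: confidently clear or confidently cloudy patches never reach the expensive full-resolution model.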
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 6 (June 2021) . - pp [article]

Semantic signatures for large-scale visual localization / Li Weng in Multimedia tools and applications, vol 80 n° 15 (June 2021)
[article]
Titre : Semantic signatures for large-scale visual localization
Type de document : Article/Communication
Auteurs : Li Weng, Auteur ; Valérie Gouet-Brunet, Auteur ; Bahman Soheilian, Auteur
Année de publication : 2021
Projets : THINGS2D0 / Gouet-Brunet, Valérie
Article en page(s) : pp 22347 - 22372
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement sémantique
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image numérique
[Termes IGN] information sémantique
[Termes IGN] recherche d'image basée sur le contenu
[Termes IGN] segmentation sémantique
[Termes IGN] zone urbaine
Résumé : (auteur) Visual localization is a useful alternative to standard localization techniques. It works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. These approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called “semantic signature” is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper in CBMI’18. A more efficient retrieval protocol is presented with additional experiment results.
Numéro de notice : A2021-787
Affiliation des auteurs : UGE-LASTIG+Ext (2020- )
Autre URL associée : vers ArXiv
Thématique : IMAGERIE/INFORMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1007/s11042-020-08992-6
Date de publication en ligne : 07/05/2020
En ligne : https://doi.org/10.1007/s11042-020-08992-6
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=95407
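The "type and angle information of visible objects" idea can be made concrete with a toy encoding; `semantic_signature`, the angular binning, and the histogram-intersection similarity below are illustrative assumptions, not the paper's exact descriptors or retrieval protocols:

```python
import math

def semantic_signature(objects, n_bins=8):
    """Toy signature: counts of (object type, angular sector) pairs for
    the objects visible from one location. `objects` is a list of
    (type, angle_in_radians) pairs."""
    sig = {}
    for obj_type, angle in objects:
        sector = int((angle % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        key = (obj_type, sector)
        sig[key] = sig.get(key, 0) + 1
    return sig

def signature_similarity(a, b):
    """Normalised histogram intersection between two signatures."""
    inter = sum(min(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b))
    return inter / max(sum(a.values()), sum(b.values()), 1)

query = [("tree", 0.1), ("lamp", 1.6), ("lamp", 3.2)]
print(signature_similarity(semantic_signature(query), semantic_signature(query)))  # 1.0
```

Because a signature is a small discrete histogram rather than a dense visual feature, comparing one query against millions of database locations stays cheap, which is the scalability argument the abstract makes.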
in Multimedia tools and applications > vol 80 n° 15 (June 2021) . - pp 22347 - 22372 [article]

Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation / Yansheng Li in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021)
[article]
Titre : Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation
Type de document : Article/Communication
Auteurs : Yansheng Li, Auteur ; Te Shi, Auteur ; Yongjun Zhang, Auteur ; et al., Auteur
Année de publication : 2021
Article en page(s) : pp 20 - 33
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification semi-dirigée
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] programmation par contraintes
[Termes IGN] segmentation sémantique
Résumé : (auteur) Due to its wide applications, remote sensing (RS) image semantic segmentation has attracted increasing research interest in recent years. Benefiting from its hierarchical abstract ability, the deep semantic segmentation network (DSSN) has achieved tremendous success on RS image semantic segmentation and has gradually become the mainstream technology. However, the superior performance of DSSN highly depends on two conditions: (I) massive quantities of labeled training data exist; (II) the testing data seriously resemble the training data. In actual RS applications, it is difficult to fully meet these conditions due to the RS sensor variation and the distinct landscape variation in different geographic locations. To make DSSN fit the actual RS scenario, this paper exploits the cross-domain RS image semantic segmentation task, which means that DSSN is trained on one labeled dataset (i.e., the source domain) but is tested on another varied dataset (i.e., the target domain). In this setting, the performance of DSSN is inevitably very limited due to the data shift between the source and target domains. To reduce the disadvantageous influence of data shift, this paper proposes a novel objective function with multiple weakly-supervised constraints to learn DSSN for cross-domain RS image semantic segmentation. Through carefully examining the characteristics of cross-domain RS image semantic segmentation, multiple weakly-supervised constraints include the weakly-supervised transfer invariant constraint (WTIC), weakly-supervised pseudo-label constraint (WPLC) and weakly-supervised rotation consistency constraint (WRCC). Specifically, DualGAN is recommended to conduct unsupervised style transfer between the source and target domains to carry out WTIC.
To make full use of the merits of multiple constraints, this paper presents a dynamic optimization strategy that dynamically adjusts the constraint weights of the objective function during the training process. With full consideration of the characteristics of the cross-domain RS image semantic segmentation task, this paper gives two cross-domain RS image semantic segmentation settings: (I) variation in geographic location and (II) variation in both geographic location and imaging mode. Extensive experiments demonstrate that our proposed method remarkably outperforms the state-of-the-art methods under both of these settings. The collected datasets and evaluation benchmarks have been made publicly available online (https://github.com/te-shi/MUCSS).
Numéro de notice : A2021-261
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2021.02.009
Date de publication en ligne : 06/03/2021
En ligne : https://doi.org/10.1016/j.isprsjprs.2021.02.009
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97302
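The dynamic adjustment of constraint weights mentioned in the abstract might be sketched as follows; the softmax-over-losses schedule is an assumption chosen for illustration, not the paper's actual strategy, and the WTIC/WPLC/WRCC names are used only as labels:

```python
import math

def dynamic_weights(losses, temperature=1.0):
    """One plausible dynamic-weighting heuristic: softmax over current
    loss magnitudes, so the constraint that is currently violated the
    most receives the largest weight."""
    exps = [math.exp(l / temperature) for l in losses]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_objective(losses, weights):
    """Overall objective: weighted sum of the constraint losses
    (e.g. WTIC, WPLC, WRCC in the paper's terminology)."""
    return sum(w * l for w, l in zip(weights, losses))

losses = [0.9, 0.3, 0.1]          # current values of the three constraints
weights = dynamic_weights(losses) # recomputed each training step
total = weighted_objective(losses, weights)
```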
in ISPRS Journal of photogrammetry and remote sensing > vol 175 (May 2021) . - pp 20 - 33 [article]

Exemplaires (3)
Code-barres   Cote       Support   Localisation              Section           Disponibilité
081-2021051   SL         Revue     Centre de documentation   Revues en salle   Disponible
081-2021052   DEP-RECF   Revue     Nancy                     Dépôt en unité    Exclu du prêt
081-2021053   DEP-RECP   Revue     Saint-Mandé               Dépôt en unité    Exclu du prêt

Learning from multimodal and multitemporal earth observation data for building damage mapping / Bruno Adriano in ISPRS Journal of photogrammetry and remote sensing, vol 175 (May 2021)
Semantic hierarchy emerges in deep generative representations for scene synthesis / Ceyuan Yang in International journal of computer vision, vol 129 n° 5 (May 2021)
A stacked dense denoising–segmentation network for undersampled tomograms and knowledge transfer using synthetic tomograms / Dimitrios Bellos in Machine Vision and Applications, vol 32 n° 3 (May 2021)
A graph-based semi-supervised approach to classification learning in digital geographies / Pengyuan Liu in Computers, Environment and Urban Systems, vol 86 (March 2021)
Ontology-based semantic conceptualisation of historical built heritage to generate parametric structured models from point clouds / Elisabetta Colucci in Applied sciences, vol 11 n° 6 (March 2021)
Recognition of varying size scene images using semantic analysis of deep activation maps / Shikha Gupta in Machine Vision and Applications, vol 32 n° 2 (March 2021)
Land cover harmonization using Latent Dirichlet Allocation / Zhan Li in International journal of geographical information science IJGIS, vol 35 n° 2 (February 2021)
3D urban scene understanding by analysis of LiDAR, color and hyperspectral data / David Duque-Arias (2021)
Apprentissage profond et IA pour l’amélioration de la robustesse des techniques de localisation par vision artificielle / Achref Elouni (2021)
Building extraction from Lidar data using statistical methods / Haval Abdul-Jabbar Sadeq in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 1 (January 2021)
Clustering et apprentissage profond sous contraintes pour l’analyse de séries temporelles : Application à l’analyse temporelle incrémentale en télédétection / Baptiste Lafabregue (2021)
Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles / Abdelhak Loukkal (2021)
Détection d’ouvertures par segmentation sémantique de nuages de points 3D : apport de l’apprentissage profond / Camille Lhenry (2021)
Détection/reconnaissance d'objets urbains à partir de données 3D multicapteurs prises au niveau du sol, en continu / Younes Zegaoui (2021)