Author details
Author: Kun Yang
Available documents written by this author (1)
Semantic segmentation of high-resolution remote sensing images based on a class feature attention mechanism fused with Deeplabv3+ / Zhimin Wang in Computers & geosciences, vol 158 (January 2022)
[article]
Title: Semantic segmentation of high-resolution remote sensing images based on a class feature attention mechanism fused with Deeplabv3+
Document type: Article/Communication
Authors: Zhimin Wang; Jiasheng Wang; Kun Yang; et al.
Year of publication: 2022
Article pages: no. 104969
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] semantic class
[IGN terms] classification by convolutional neural network
[IGN terms] high-resolution image
[IGN terms] Gaofen image
[IGN terms] semantic reasoning
[IGN terms] semantic segmentation
Abstract: (author) To address the problems of inaccurate segmentation of edge targets, inconsistent segmentation across different types of targets, and slow prediction that classical semantic segmentation networks exhibit on high-resolution remote sensing images, this study proposes CFAMNet, a class feature attention mechanism fused with an improved Deeplabv3+ network, for semantic segmentation of common features in remote sensing images. First, the correlation between classes is enhanced by a class feature attention module, so that the semantic information of the different categories is better extracted and processed. Second, a multi-parallel atrous spatial pyramid pooling structure is used to enhance spatial correlation and better extract contextual information at different scales of an image. Finally, an encoder-decoder structure is used to refine the segmentation results. The segmentation performance of the proposed network is verified by experiments on the public Gaofen Image Dataset (GID). The experimental results show that CFAMNet achieves a mean intersection over union (MIoU) of 77.22% and an overall accuracy (OA) of 85.01% on the GID, surpassing current mainstream semantic segmentation networks.
Record number: A2022-030
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.cageo.2021.104969
Online publication date: 26/10/2021
Online: https://doi.org/10.1016/j.cageo.2021.104969
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99269
in Computers & geosciences > vol 158 (January 2022). - no. 104969
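
The abstract above names two architectural components of CFAMNet: a class feature attention module and a multi-parallel atrous spatial pyramid pooling structure inside a Deeplabv3+-style encoder-decoder. The PyTorch sketch below is a hypothetical illustration of such components, not the authors' implementation: the module names, the max-over-classes attention map, the residual fusion, and the dilation rates (1, 6, 12, 18) are all assumptions; for the actual network, refer to the article via the DOI above.

import torch
import torch.nn as nn

class ClassFeatureAttention(nn.Module):
    # Assumed design: coarse per-class maps re-weight the backbone features.
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.class_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.fuse = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, x):
        class_maps = torch.softmax(self.class_conv(x), dim=1)   # B x num_classes x H x W
        attention = class_maps.max(dim=1, keepdim=True).values  # B x 1 x H x W
        return self.fuse(x * attention) + x                     # residual fusion

class MultiParallelASPP(nn.Module):
    # Parallel atrous 3x3 convolutions at several rates, concatenated and projected.
    def __init__(self, in_channels, out_channels, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

# Usage example (hypothetical shapes): a 2048-channel backbone feature map, 6 land-cover classes.
features = torch.randn(1, 2048, 32, 32)
features = ClassFeatureAttention(2048, num_classes=6)(features)
context = MultiParallelASPP(2048, 256)(features)                 # B x 256 x 32 x 32

In a Deeplabv3+-style decoder, the resulting context features would then be upsampled and fused with low-level encoder features before the final classification layer; how CFAMNet performs that fusion is described in the article itself.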