Author detail
Author: Qian Liu
Available documents written by this author (1)
A unified attention paradigm for hyperspectral image classification / Qian Liu in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023)
[article]
Title: A unified attention paradigm for hyperspectral image classification
Document type: Article/Communication
Authors: Qian Liu, Author; Zebin Wu, Author; Yang Xu, Author; et al., Author
Year of publication: 2023
Article on page(s): n° 5506316
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] hyperspectral imagery
[IGN terms] classification accuracy
[IGN terms] support vector machine
Abstract: (author) Attention mechanisms improve classification accuracy by enhancing the salient information in hyperspectral images (HSIs). However, existing HSI attention models are driven by advances in computer vision and are unable to fully exploit the spectral–spatial structure prior of HSIs or to effectively refine features from a global perspective. In this article, we propose a unified attention paradigm (UAP) that defines the attention mechanism as a general three-stage process: optimizing feature representations, strengthening information interaction, and emphasizing meaningful information. Under this paradigm, we design a novel efficient spectral–spatial attention module (ESSAM) that adaptively adjusts feature responses along the spectral and spatial dimensions at an extremely low parameter cost. Specifically, we construct a parameter-free spectral attention block that employs multiscale structured encodings and similarity calculations to perform global cross-channel interactions, and a memory-enhanced spatial attention block that captures key image semantics stored in a learnable memory unit and models global spatial relationships by constructing semantic-to-pixel dependencies. ESSAM takes full account of the spatial distribution and low-dimensional characteristics of HSIs, offering better interpretability and lower complexity. We develop an efficient spectral–spatial attention network (ESSAN), a dense convolutional network built on ESSAM, and experiment on three real hyperspectral datasets. The experimental results demonstrate that the proposed ESSAM yields higher accuracy improvements than advanced attention models.
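As a rough illustration of the parameter-free spectral attention idea described in the abstract (this is a minimal NumPy sketch under our own assumptions, not the authors' ESSAM implementation), the code below reweights the channels of an HSI feature cube using similarity between per-channel descriptors, so no learnable parameters are involved:

```python
import numpy as np

def spectral_attention(x):
    """Parameter-free channel attention sketch for an HSI feature cube.

    x: (H, W, C) feature array. Each channel is reweighted by how strongly
    its global descriptor correlates with the mean descriptor, a rough
    stand-in for the paper's similarity-based cross-channel interaction.
    """
    H, W, C = x.shape
    # Two "scales" of structured encoding: global mean and global max per channel.
    mean_desc = x.mean(axis=(0, 1))              # (C,)
    max_desc = x.max(axis=(0, 1))                # (C,)
    desc = np.stack([mean_desc, max_desc], 1)    # (C, 2) channel descriptors
    # Cosine similarity of each channel descriptor to the average descriptor.
    ref = desc.mean(axis=0)                      # (2,)
    sim = desc @ ref / (np.linalg.norm(desc, axis=1) * np.linalg.norm(ref) + 1e-8)
    # Softmax over channels -> attention weights, then rescale the cube.
    w = np.exp(sim - sim.max())
    w = w / w.sum()
    return x * (C * w)[None, None, :]            # scale by C to keep overall magnitude
```

The multiscale encodings here (mean and max pooling) and the cosine-similarity weighting are illustrative choices only; the paper's actual block uses its own structured encodings and similarity calculations.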
Record number: A2023-185
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2023.3257321
Online publication date: 15/12/2023
Online: https://doi.org/10.1109/TGRS.2023.3257321
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102957