Descriptor
IGN terms > 1-Candidates > attention (machine learning)
Documents available in this category (42)
Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images / Ziyao Xing in Sustainable Cities and Society, vol 92 (May 2023)
[article]
Title: Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images
Document type: Article/Communication
Authors: Ziyao Xing; Shuai Yang; Xuli Zan; et al.
Year of publication: 2023
Article on page(s): n° 104467
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] attention (machine learning)
[IGN terms] building
[IGN terms] China
[IGN terms] risk management
[IGN terms] Street View image
[IGN terms] flood
[IGN terms] urban environment
[IGN terms] urban planning
[IGN terms] Quickbird
[IGN terms] semantic segmentation
[IGN terms] vulnerability
Abstract: (author) Urban flood risk management requires an extensive investigation of the vulnerability characteristics of buildings. Large-scale field surveys are costly and time-consuming, whereas satellite remote sensing and street view images provide information on the tops and facades of buildings, respectively. This paper therefore develops a building vulnerability assessment framework using remote sensing and street view features. Specifically, a UNet-based semantic segmentation model, FSA-UNet (Fusion-Self-Attention-UNet), is proposed to integrate remote sensing and street view features so that the vulnerability information contained in the images is fully exploited, and a building vulnerability index is generated to describe the spatial distribution of urban building vulnerability. Experiments show that the mIoU of the proposed model reaches 82% for building vulnerability classification in Hefei, China, which is more accurate than traditional semantic segmentation models. The results indicate that integrating street view and remote sensing image features improves building vulnerability assessment: through its self-attention mechanism, the proposed model better captures the correlated features of multi-angle images, and it combines hierarchical features with edge information to improve classification. This study can support disaster management and urban planning.
Record number: A2023-152
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.scs.2023.104467
Online publication date: 23/02/2023
Online: https://doi.org/10.1016/j.scs.2023.104467
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102826
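The fusion step in this record's abstract rests on standard scaled dot-product self-attention. The following is a minimal NumPy sketch of that mechanism, not the authors' FSA-UNet code: the function name, random projection weights, and token layout are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k, seed=0):
    # tokens: (n, d) feature vectors, e.g. remote sensing and street
    # view patch embeddings stacked into one sequence
    n, d = tokens.shape
    rng = np.random.default_rng(seed)
    w_q = rng.standard_normal((d, d_k)) / np.sqrt(d)  # query projection
    w_k = rng.standard_normal((d, d_k)) / np.sqrt(d)  # key projection
    w_v = rng.standard_normal((d, d_k)) / np.sqrt(d)  # value projection
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (n, n): every token attends to every other
    return attn @ v                         # (n, d_k) attention-fused features
```

Each output row mixes information from all tokens, which is what allows features from one image source to reweight features from the other when both are placed in the same sequence.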
in Sustainable Cities and Society > vol 92 (May 2023) . - n° 104467 [article]

Global-aware siamese network for change detection on remote sensing images / Ruiqian Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 199 (May 2023)
[article]
Title: Global-aware siamese network for change detection on remote sensing images
Document type: Article/Communication
Authors: Ruiqian Zhang; Hanchao Zhang; Xiaogang Ning; et al.
Year of publication: 2023
Article on page(s): pp 61 - 72
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] sensitivity analysis
[IGN terms] attention (machine learning)
[IGN terms] change detection
[IGN terms] feature extraction
[IGN terms] high-resolution image
[IGN terms] optimization (mathematics)
[IGN terms] siamese neural network
Abstract: (author) Change detection (CD) in remote sensing images is one of the most important techniques for identifying changes in observations efficiently. CD has a wide range of applications, such as land use investigation, urban planning, environmental monitoring and disaster mapping. However, the frequently occurring class imbalance problem poses major challenges for change detection applications. To address this issue, we develop a novel global-aware siamese network (GAS-Net) that generates global-aware features for efficient change detection by incorporating the relationships between scenes and foregrounds. The proposed GAS-Net consists of a global-attention module (GAM) and a foreground-awareness module (FAM), which both learn contextual relationships and enhance symbiotic relation learning between scene and foreground. The experimental results demonstrate the effectiveness and robustness of the proposed GAS-Net, which achieves F1 scores of up to 91.21% and 95.84% on two widely used public datasets, the Levir-CD and Lebedev-CD datasets. The source code is available at https://github.com/xiaoxiangAQ/GAS-Net.
Record number: 2023-204
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2023.04.001
Online publication date: 05/04/2023
Online: https://doi.org/10.1016/j.isprsjprs.2023.04.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103106
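The siamese principle behind this record, the same encoder weights applied to both acquisition dates followed by a feature comparison, can be sketched in a few lines. This is a toy illustration under stated assumptions, not the GAS-Net architecture: the single-kernel "encoder" and the fixed threshold are placeholders for deep branches and learned attention modules.

```python
import numpy as np

def shared_encoder(img, kernel):
    # toy shared-weight "encoder": one valid-mode 2-D convolution
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def change_map(img_t1, img_t2, kernel, thresh=0.5):
    # siamese setup: identical weights on both dates, then compare
    f1 = shared_encoder(img_t1, kernel)
    f2 = shared_encoder(img_t2, kernel)
    return (np.abs(f1 - f2) > thresh).astype(np.uint8)
```

Because the weights are shared, unchanged areas map to near-identical features and cancel in the comparison; only genuine change survives the threshold.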
in ISPRS Journal of photogrammetry and remote sensing > vol 199 (May 2023) . - pp 61 - 72 [article]

A unified attention paradigm for hyperspectral image classification / Qian Liu in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023)
[article]
Title: A unified attention paradigm for hyperspectral image classification
Document type: Article/Communication
Authors: Qian Liu; Zebin Wu; Yang Xu; et al.
Year of publication: 2023
Article on page(s): n° 5506316
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] hyperspectral image
[IGN terms] classification accuracy
[IGN terms] support vector machine
Abstract: (author) Attention mechanisms improve classification accuracy by enhancing the salient information of hyperspectral images (HSIs). However, existing HSI attention models are driven by advances in computer vision and cannot fully exploit the spectral–spatial structure prior of HSIs or effectively refine features from a global perspective. In this article, we propose a unified attention paradigm (UAP) that defines the attention mechanism as a general three-stage process: optimizing feature representations, strengthening information interaction, and emphasizing meaningful information. Under this paradigm, we design a novel efficient spectral–spatial attention module (ESSAM), which adaptively adjusts feature responses along the spectral and spatial dimensions at an extremely low parameter cost. Specifically, we construct a parameter-free spectral attention block that employs multiscale structured encodings and similarity calculations to perform global cross-channel interactions, and a memory-enhanced spatial attention block that captures key image semantics stored in a learnable memory unit and models global spatial relationships by constructing semantic-to-pixel dependencies. ESSAM takes full account of the spatial distribution and low-dimensional characteristics of HSIs, with better interpretability and lower complexity. We develop a dense convolutional network based on the efficient spectral–spatial attention network (ESSAN) and experiment on three real hyperspectral datasets. The experimental results demonstrate that the proposed ESSAM brings higher accuracy improvements than advanced attention models.
Record number: A2023-185
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2023.3257321
Online publication date: 15/12/2023
Online: https://doi.org/10.1109/TGRS.2023.3257321
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102957
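The "parameter-free spectral attention" idea from this abstract, weighting bands through global descriptors and cross-channel similarity rather than learned parameters, can be sketched as follows. This is a hypothetical simplification for illustration; the descriptor and similarity aggregation here are assumptions, not the ESSAM block.

```python
import numpy as np

def spectral_attention(cube):
    # cube: (bands, h, w) hyperspectral feature cube
    b = cube.shape[0]
    desc = cube.reshape(b, -1).mean(axis=1)    # per-band global descriptor (no parameters)
    scores = np.outer(desc, desc).sum(axis=1)  # aggregated band-to-band similarity
    w = np.exp(scores - scores.max())
    w /= w.sum()                               # softmax attention over bands
    return cube * w[:, None, None], w          # reweighted cube + band weights
```

The point of the construction is that every quantity is computed from the data itself, so the attention adds no parameters regardless of the number of bands.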
in IEEE Transactions on geoscience and remote sensing > vol 61 n° 3 (March 2023) . - n° 5506316 [article]

Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Title: Cross-supervised learning for cloud detection
Document type: Article/Communication
Authors: Kang Wu; Zunxiao Xu; Xinrong Lyu; et al.
Year of publication: 2023
Article on page(s): n° 2147298
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] supervised learning
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] object detection
[IGN terms] labeled training data
[IGN terms] cloud
Abstract: (author) We present a new learning paradigm, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm combines supervised training with mutually supervised training, performed by two base networks. In addition to individual supervised training on labeled data, the two base networks supervise each other on unlabeled data using the prediction results each provides to the other. Specifically, we develop In-extensive Nets for implementing the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (which require expensive manual effort) needed for training. Experimental results verify that In-extensive Nets perform well and have a clear advantage in situations where only a few labeled cloudy images are available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets.
Record number: A2023-190
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/15481603.2022.2147298
Online publication date: 03/01/2023
Online: https://doi.org/10.1080/15481603.2022.2147298
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
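The mutually supervised half of this paradigm, two models exchanging pseudo-labels on unlabeled data, can be sketched with a stand-in classifier. Everything here is hypothetical: `CentroidClassifier` replaces the Intensive Nets, and one exchange round replaces iterative training.

```python
import numpy as np

class CentroidClassifier:
    # minimal base-network stand-in: nearest-centroid on 1-D features
    def fit(self, x, y):
        self.c0, self.c1 = x[y == 0].mean(), x[y == 1].mean()
        return self

    def predict(self, x):
        return (np.abs(x - self.c1) < np.abs(x - self.c0)).astype(int)

def cross_supervised_round(x_lab, y_lab, x_unlab, model_a, model_b):
    # each model supplies pseudo-labels that supervise the other
    pseudo_a = model_a.predict(x_unlab)
    pseudo_b = model_b.predict(x_unlab)
    model_a.fit(np.concatenate([x_lab, x_unlab]),
                np.concatenate([y_lab, pseudo_b]))
    model_b.fit(np.concatenate([x_lab, x_unlab]),
                np.concatenate([y_lab, pseudo_a]))
    return model_a, model_b
```

The unlabeled pool thus contributes training signal to both models even though no human ever labels it, which is the mechanism that reduces the labeling cost described in the abstract.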
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]

A geometry-aware attention network for semantic segmentation of MLS point clouds / Jie Wan in International journal of geographical information science IJGIS, vol 37 n° 1 (January 2023)
[article]
Title: A geometry-aware attention network for semantic segmentation of MLS point clouds
Document type: Article/Communication
Authors: Jie Wan; Yongyang Xu; Qinjun Qiu; et al.
Year of publication: 2023
Article on page(s): pp 138 - 161
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Lasergrammetry
[IGN terms] attention (machine learning)
[IGN terms] correlation
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] geometric figure
[IGN terms] loss function
[IGN terms] graph
[IGN terms] multilayer perceptron
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) Semantic segmentation of mobile laser scanning (MLS) point clouds can provide meaningful 3D semantic information about urban facilities for various applications. However, extracting accurate 3D semantic information from MLS point cloud data remains a challenge because of its irregular 3D geometric structure in large-scale outdoor scenes. To this end, this study develops a geometry-aware attention point network (GAANet) that takes the geometric properties of the point cloud as a reference. The proposed method first builds a graph-like region around each input point to establish the geometric correlation with its neighbors, robustly describing local geometry-aware features. It then introduces a novel multi-head attention mechanism to efficiently learn local discriminative features on the constructed graphs, together with a feature combination operation that captures both local and global geometric dependencies inside the fused point features, significantly facilitating point-level segmentation of small or incomplete 3D objects. Finally, an adaptive loss function handles class imbalance to improve overall performance. Validation experiments on two challenging benchmarks demonstrate the effectiveness and strong generalization ability of the proposed method, which achieves state-of-the-art performance with mean IoU of 65.09% and 95.20% on the Toronto-3D and Oakland 3-D MLS datasets, respectively.
Record number: A2023-038
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/13658816.2022.2111572
Online publication date: 24/08/2022
Online: https://doi.org/10.1080/13658816.2022.2111572
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102309
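The neighborhood attention this record describes, building a graph around each point and aggregating neighbor features with learned weights, can be sketched as follows. This is a single-head, dot-product simplification under assumed shapes, not the GAANet multi-head module.

```python
import numpy as np

def knn_graph_attention(xyz, feats, k=4):
    # xyz: (n, 3) point coordinates; feats: (n, d) per-point features
    n = xyz.shape[0]
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]                # k nearest neighbors (skip self)
    out = np.zeros_like(feats)
    for i in range(n):
        logits = feats[nbrs[i]] @ feats[i]               # similarity scores to each neighbor
        w = np.exp(logits - logits.max())
        w /= w.sum()                                     # softmax over the k neighbors
        out[i] = w @ feats[nbrs[i]]                      # attention-weighted aggregation
    return out
```

A multi-head version would run several independent projections of `feats` through this aggregation and concatenate the results, letting different heads specialize in different geometric cues.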
in International journal of geographical information science IJGIS > vol 37 n° 1 (January 2023) . - pp 138 - 161 [article]

HGAT-VCA: Integrating high-order graph attention network with vector cellular automata for urban growth simulation / Xuefeng Guan in Computers, Environment and Urban Systems, vol 99 (January 2023)
MTMGNN: Multi-time multi-graph neural network for metro passenger flow prediction / Du Yin in Geoinformatica, vol 27 n° 1 (January 2023)
3D target detection using dual domain attention and SIFT operator in indoor scenes / Hanshuo Zhao in The Visual Computer, vol 38 n° 11 (November 2022)
Cross-guided pyramid attention-based residual hyperdense network for hyperspectral image pansharpening / Jiahui Qu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 11 (November 2022)
Foreground-aware refinement network for building extraction from remote sensing images / Zhang Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 11 (November 2022)
GA-Net: A geometry prior assisted neural network for road extraction / Xin Chen in International journal of applied Earth observation and geoinformation, vol 114 (November 2022)
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
Estimating urban functional distributions with semantics preserved POI embedding / Weiming Huang in International journal of geographical information science IJGIS, vol 36 n° 10 (October 2022)
A relation-augmented embedded graph attention network for remote sensing object detection / Shu Tian in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
Single-image super-resolution for remote sensing images using a deep generative adversarial network with local and global attention mechanisms / Yadong Li in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
Attention mechanisms in computer vision: A survey / Meng-Hao Guo in Computational Visual Media, vol 8 n° 3 (September 2022)
Human perception evaluation system for urban streetscapes based on computer vision algorithms with attention mechanisms / Yunhao Li in Transactions in GIS, vol 26 n° 6 (September 2022)
Hyperspectral unmixing using transformer network / Preetam Ghosh in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
Spatial–spectral attention network guided with change magnitude image for land cover change detection using remote sensing images / Zhiyong Lv in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
Using attributes explicitly reflecting user preference in a self-attention network for next POI recommendation / Ruijing Li in ISPRS International journal of geo-information, vol 11 n° 8 (August 2022)
A lightweight network with attention decoder for real-time semantic segmentation / Kang Wang in The Visual Computer, vol 38 n° 7 (July 2022)
Modeling human–human interaction with attention-based high-order GCN for trajectory prediction / Yanyan Fang in The Visual Computer, vol 38 n° 7 (July 2022)
A second-order attention network for glacial lake segmentation from remotely sensed imagery / Shidong Wang in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Spatial-temporal attentive LSTM for vehicle-trajectory prediction / Rui Jiang in ISPRS International journal of geo-information, vol 11 n° 7 (July 2022)
Context-aware network for semantic segmentation toward large-scale point clouds in urban environments / Chun Liu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)
Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
Efficient convolutional neural architecture search for LiDAR DSM classification / Aili Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
Multi-modal temporal attention models for crop mapping from satellite time series / Vivien Sainte Fare Garnot in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
A graph attention network for road marking classification from mobile LiDAR point clouds / Lina Fang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Graph neural network based model for multi-behavior session-based recommendation / Bo Yu in Geoinformatica, vol 26 n° 2 (April 2022)
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
Analysis of pedestrian movements and gestures using an on-board camera to predict their intentions / Joseph Gesnouin (2022)
Learning spatio-temporal representations of satellite time series for large-scale crop mapping / Vivien Sainte Fare Garnot (2022)
Self-attention and generative adversarial networks for algae monitoring / Nhut Hai Huynh in European journal of remote sensing, vol 55 n° 1 (2022)
Semantic segmentation of high-resolution remote sensing images based on a class feature attention mechanism fused with Deeplabv3+ / Zhimin Wang in Computers & geosciences, vol 158 (January 2022)
Towards expressive graph neural networks: Theory, algorithms, and applications / Georgios Dasoulas (2022)
Unsupervised pansharpening based on self-attention mechanism / Ying Qu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)