Author details
Author: Congcong Wang
Available documents written by this author (1)
Adaptive feature weighted fusion nested U-Net with discrete wavelet transform for change detection of high-resolution remote sensing images / Congcong Wang in Remote sensing, vol 13 n° 24 (December-2 2021)
[article]
Title: Adaptive feature weighted fusion nested U-Net with discrete wavelet transform for change detection of high-resolution remote sensing images
Document type: Article/Communication
Authors: Congcong Wang, Author; Wenbin Sun, Author; Deqin Fan, Author; et al., Author
Year of publication: 2021
Article on page(s): n°
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] change detection
[IGN terms] data fusion
[IGN terms] high-resolution image
[IGN terms] weighting
[IGN terms] Siamese neural network
[IGN terms] wavelet transform
Abstract: (author) The wide variety of object scales and the complex texture features of high-resolution remote sensing images have made deep learning-based methods the mainstream approach to change detection. However, existing deep learning methods suffer from spatial information loss and insufficient feature representation, which leads to unsatisfactory small-object detection and boundary positioning in change detection of high-resolution remote sensing images. To address these problems, a network architecture based on the 2-dimensional discrete wavelet transform and adaptive feature weighted fusion is proposed. The proposed network takes a Siamese network and Nested U-Net as the backbone; the 2-dimensional discrete wavelet transform replaces the pooling layers, and the inverse transform replaces upsampling to reconstruct the image, reducing the loss of spatial information and fully retaining the original image information. In this way, the proposed network can accurately detect changed objects of different scales and reconstruct change maps with clear boundaries. Furthermore, different feature fusion methods are proposed for different stages to fully integrate multi-scale and multi-level features and improve the comprehensive representation ability of features, achieving a more refined change detection effect while reducing pseudo-changes. To verify the effectiveness and advancement of the proposed method, it is compared with seven state-of-the-art methods on the Lebedev and SenseTime datasets in terms of quantitative analysis, qualitative analysis, and efficiency analysis, and the effectiveness of the proposed modules is validated by an ablation study.
The results of the quantitative and efficiency analyses show that, while maintaining operational efficiency, our method improves recall without sacrificing detection precision, thereby improving overall detection performance. Specifically, it shows average improvements of 37.9% and 12.35% in recall, and 34.76% and 11.88% in F1, on the Lebedev and SenseTime datasets, respectively, compared to the other methods. The qualitative analysis shows that our method outperforms the other methods in small-object detection and boundary positioning, and yields a more refined change map.
Record number: A2021-920
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs13244971
Online publication date: 07/12/2021
Online: https://doi.org/10.3390/rs13244971
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99244
in Remote sensing > vol 13 n° 24 (December-2 2021). - n° [article]
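The core idea described in the abstract, replacing pooling with a 2-D discrete wavelet transform and replacing upsampling with its inverse so that no spatial information is discarded, can be illustrated with a minimal NumPy sketch of one Haar DWT level. This is an illustration only: the Haar wavelet and the helper names are assumptions, not details taken from the paper.

```python
import numpy as np

def haar_dwt2(x):
    """One level of 2-D Haar DWT: splits x (H, W) into four half-resolution
    subbands (approximation cA and details cH, cV, cD) with no information loss."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    cA = (a + b + c + d) / 2.0   # low-low: blurred approximation (what pooling keeps)
    cH = (a + b - c - d) / 2.0   # detail subbands retain the high-frequency
    cV = (a - b + c - d) / 2.0   # content that plain pooling would discard
    cD = (a - b - c + d) / 2.0
    return cA, cH, cV, cD

def haar_idwt2(cA, cH, cV, cD):
    """Inverse 2-D Haar DWT: exact reconstruction, used in place of upsampling."""
    h, w = cA.shape
    x = np.empty((2 * h, 2 * w), dtype=cA.dtype)
    x[0::2, 0::2] = (cA + cH + cV + cD) / 2.0
    x[0::2, 1::2] = (cA + cH - cV - cD) / 2.0
    x[1::2, 0::2] = (cA - cH + cV - cD) / 2.0
    x[1::2, 1::2] = (cA - cH - cV + cD) / 2.0
    return x

img = np.arange(16.0).reshape(4, 4)
bands = haar_dwt2(img)            # four 2x2 subbands, half the resolution
restored = haar_idwt2(*bands)     # lossless reconstruction of the 4x4 input
```

Because the four subbands together are an invertible representation, a network that keeps all of them (rather than only the low-pass band, as pooling effectively does) can reconstruct feature maps at the original resolution with clear boundaries, which is the property the abstract attributes to this design.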