Author details
Author: Wei Wei
Available documents written by this author (2)
DSNUNet: An improved forest change detection network by combining Sentinel-1 and Sentinel-2 images / Jiawei Jiang in Remote sensing, vol 14 n° 19 (October-1 2022)
[article]
Title: DSNUNet: An improved forest change detection network by combining Sentinel-1 and Sentinel-2 images
Document type: Article/Communication
Authors: Jiawei Jiang, Author; Yuanjun Xing, Author; Wei Wei, Author; et al., Author
Publication year: 2022
Article pages: n° 5046
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] China
[IGN terms] change detection
[IGN terms] forest management
[IGN terms] speckled radar image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] Siamese neural network
[IGN terms] forest resources
Abstract: (author) The use of remote sensing images to detect forest changes is of great significance for forest resource management. With the development and implementation of deep learning algorithms in change detection, a large number of models have been designed to detect changes in multi-phase remote sensing images. Although synthetic aperture radar (SAR) data have strong potential for application in forest change detection tasks, most existing deep learning-based models have been designed for optical imagery. Therefore, to effectively combine optical and SAR data in forest change detection, this paper proposes a double Siamese branch-based change detection network called DSNUNet. DSNUNet uses two sets of feature branches to extract features from dual-phase optical and SAR images and employs shared weights to combine features into groups. In the proposed DSNUNet, different feature extraction branch widths were used to compensate for the difference in the amount of information between optical and SAR images. The proposed DSNUNet was validated by experiments on a manually annotated forest change detection dataset. According to the obtained results, the proposed method outperformed other change detection methods, achieving an F1-score of 76.40%. In addition, different combinations of widths between the feature extraction branches were analyzed in this study. The results revealed that the model performed best with initial channel numbers of 32 for the optical image branch and 8 for the SAR image branch. The prediction results demonstrated the effectiveness of the proposed method in accurately predicting forest changes and suppressing cloud interference to some extent.
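The dual-Siamese grouping described in the abstract can be illustrated with a toy example. This is not the authors' implementation: plain NumPy linear projections stand in for the CNN branches, and every name here (`W_opt`, `W_sar`, `encode`, `dual_siamese_features`) is invented for illustration. It only demonstrates the two ideas the abstract states: weights shared within each modality's Siamese pair of dates, and a wider optical branch (32 channels) than SAR branch (8 channels).

```python
import numpy as np

rng = np.random.default_rng(0)

# One weight matrix per modality, shared across the two acquisition dates
# (the Siamese weight sharing); the optical branch is wider than the SAR one.
W_opt = rng.standard_normal((4, 32))   # optical "encoder": 4 bands -> 32 features
W_sar = rng.standard_normal((2, 8))    # SAR "encoder": 2 polarisations -> 8 features

def encode(x, W):
    """Toy per-pixel encoder: a linear projection standing in for a CNN branch."""
    return x @ W

def dual_siamese_features(opt_t1, opt_t2, sar_t1, sar_t2):
    # Same weights applied to both dates of each modality.
    f_opt1, f_opt2 = encode(opt_t1, W_opt), encode(opt_t2, W_opt)
    f_sar1, f_sar2 = encode(sar_t1, W_sar), encode(sar_t2, W_sar)
    # Group optical and SAR features per date before comparing dates.
    g1 = np.concatenate([f_opt1, f_sar1], axis=-1)  # date 1: 32 + 8 = 40 channels
    g2 = np.concatenate([f_opt2, f_sar2], axis=-1)  # date 2: 40 channels
    return np.abs(g1 - g2)                          # simple change feature

n_pixels = 5
d = dual_siamese_features(rng.standard_normal((n_pixels, 4)),
                          rng.standard_normal((n_pixels, 4)),
                          rng.standard_normal((n_pixels, 2)),
                          rng.standard_normal((n_pixels, 2)))
print(d.shape)  # (5, 40)
```

The grouped feature is 40 channels wide per date (32 optical + 8 SAR), matching the branch-width ratio the abstract reports as optimal.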
Record number: A2022-772
Authors' affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.3390/rs14195046
Online publication date: 10/10/2022
Online: https://doi.org/10.3390/rs14195046
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101800
in Remote sensing > vol 14 n° 19 (October-1 2022) . - n° 5046 [article]

Dictionary learning for promoting structured sparsity in hyperspectral compressive sensing / Lei Zhang in IEEE Transactions on geoscience and remote sensing, vol 54 n° 12 (December 2016)
[article]
Title: Dictionary learning for promoting structured sparsity in hyperspectral compressive sensing
Document type: Article/Communication
Authors: Lei Zhang, Author; Wei Wei, Author; Yanning Zhang, Author; et al., Author
Publication year: 2016
Article pages: pp 7223-7235
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] white noise
[IGN terms] image compression
[IGN terms] hyperspectral image
[IGN terms] image reconstruction
Abstract: (author) The ability to accurately represent a hyperspectral image (HSI) as a combination of a small number of elements from an appropriate dictionary underpins much of the recent progress in hyperspectral compressive sensing (HCS). Preserving structure in the sparse representation is critical to achieving an accurate reconstruction but has thus far only been partially exploited, because existing methods assume a predefined dictionary. To address this problem, a structured sparsity-based hyperspectral blind compressive sensing method is presented in this study. For the reconstructed HSI, a data-adaptive dictionary is learned directly from its noisy measurements, which promotes the underlying structured sparsity and markedly improves reconstruction accuracy. Specifically, a fully structured dictionary prior is first proposed to jointly depict the structure in each dictionary atom as well as the correlation between atoms, where the magnitude of each atom is also regularized. Then, a reweighted Laplace prior is employed to model the structured sparsity in the representation of the HSI. Based on these two priors, a unified optimization framework is proposed to learn both the dictionary and the sparse representation from the measurements by alternately optimizing two separate latent variable Bayes models. With the learned dictionary, the structured sparsity of HSIs can be well described by the reweighted Laplace prior. In addition, both the learned dictionary and the sparse representation are robust to noise corruption in the measurements. Extensive experiments on three hyperspectral datasets demonstrate that the proposed method outperforms several state-of-the-art HCS methods in terms of reconstruction accuracy.
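The reweighted Laplace prior in this abstract is a fully Bayesian construction; as a rough intuition only, its effect on the coefficients resembles an iteratively reweighted l1 penalty. The sketch below is that simpler stand-in, not the paper's method: a hypothetical `reweighted_sparse_code` routine that runs ISTA with iteratively updated weights on a fixed random dictionary, whereas the paper also learns the dictionary from the measurements. All names and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_sparse_code(y, Phi, D, n_outer=5, n_inner=50, lam=0.1, eps=1e-3):
    """Toy reweighted-l1 sparse coding for y ~ Phi @ D @ s (a crude stand-in
    for the paper's Bayesian reweighted Laplace prior)."""
    A = Phi @ D
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    w = np.ones_like(s)                    # per-coefficient l1 weights
    for _ in range(n_outer):
        for _ in range(n_inner):           # ISTA steps under the current weights
            grad = A.T @ (A @ s - y)
            s = soft_threshold(s - grad / L, lam * w / L)
        w = 1.0 / (np.abs(s) + eps)        # reweight: small coeffs penalised more
    return s

# Synthetic check: a 5-sparse signal in a random dictionary, compressed measurements.
n, k, m = 64, 128, 48
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
s_true = np.zeros(k)
s_true[rng.choice(k, size=5, replace=False)] = rng.standard_normal(5)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # compressive sensing matrix
y = Phi @ D @ s_true

s_hat = reweighted_sparse_code(y, Phi, D)
x_err = np.linalg.norm(D @ s_hat - D @ s_true) / np.linalg.norm(D @ s_true)
print(round(x_err, 3))
```

The reweighting step is the key design idea: coefficients that stayed small after the inner solve receive large weights and are pushed harder toward zero, sharpening the sparsity pattern across outer iterations.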
Record number: A2016-929
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2016.2598577
Online: http://dx.doi.org/10.1109/TGRS.2016.2598577
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=83343
in IEEE Transactions on geoscience and remote sensing > vol 54 n° 12 (December 2016) . - pp 7223-7235 [article]