Author details
Author: Xiaolu Jiang
Documents available written by this author (1)
Unmixing-based spatiotemporal image fusion accounting for complex land cover changes / Xiaolu Jiang in IEEE Transactions on geoscience and remote sensing, vol 60 no. 5 (May 2022)
[article]
Title: Unmixing-based spatiotemporal image fusion accounting for complex land cover changes
Document type: Article/Communication
Authors: Xiaolu Jiang, Author; Bo Huang, Author
Publication year: 2022
Article pages: no. 5623010
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] spectral mixture analysis
[IGN terms] land cover change
[IGN terms] spatiotemporal data
[IGN terms] image fusion
[IGN terms] Landsat image
[IGN terms] Terra-MODIS image
[IGN terms] spectral reflectance
[IGN terms] geographically weighted regression
Abstract: (author) Spatiotemporal reflectance fusion has received considerable attention in recent decades. However, various challenges remain despite varying levels of success, especially regarding the recovery of spatial details under complex land cover changes. Taking the blending of Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) images as an example, this article presents a locally weighted unmixing-based spatiotemporal image fusion model (LWU-STFM) that focuses on recovering complex land cover changes. The core idea is to redefine the land use class of each pixel featuring land cover change at the prediction date. The spatial unmixing process is enhanced using a proposed geographically spectrum-weighted regression (GSWR), and then similar neighboring pixels are optimized for the final weight-based prediction. Experiments are conducted using semisimulated and actual time-series Landsat–MODIS datasets to demonstrate the performance of the proposed LWU-STFM compared with the classic spatial and temporal adaptive reflectance fusion model (STARFM), flexible spatiotemporal data fusion (FSDAF), two enhanced FSDAF models (SFSDAF and FSDAF 2.0), and a virtual image pair-based spatiotemporal fusion model for spatial weighting (VIPSTF-SW). The results reveal that the proposed LWU-STFM outperforms the other five models with the best quantitative accuracy. In terms of the relative dimensionless global error (ERGAS) index, the errors of Landsat-like images generated using LWU-STFM are 2.8%–63.4% lower than those of the other models. From visual comparisons, LWU-STFM predictions show encouraging improvements in recovering spatial details of pixels with complex land cover changes in heterogeneous landscapes, thus advancing applications of spatiotemporal image fusion for continuous and fine-scale land surface monitoring.
Record number: A2022-409
Author affiliation: not IGN
Theme: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3173172
Online publication date: 05/05/2022
Online: https://doi.org/10.1109/TGRS.2022.3173172
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100744
in IEEE Transactions on geoscience and remote sensing > vol 60 no. 5 (May 2022). - no. 5623010 [article]
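The abstract's central step, spatially unmixing the coarse (MODIS-like) reflectance change into per-class changes that are then assigned to fine (Landsat-like) pixels by land cover class, can be illustrated with a minimal, generic sketch. This is not the article's LWU-STFM: it uses ordinary least squares where LWU-STFM applies its geographically spectrum-weighted regression (GSWR), it omits the redefinition of classes for changed pixels and the blending with optimized similar neighboring pixels, and all array shapes and function names are assumptions made for illustration.

```python
import numpy as np

def unmix_class_changes(coarse_change, class_fractions):
    """Generic spatial-unmixing step for one spectral band.

    coarse_change   : (n_coarse,) reflectance change of each coarse pixel
                      in a local window between base and prediction dates
    class_fractions : (n_coarse, n_classes) fraction of each land cover
                      class inside each coarse pixel
    Returns the per-class reflectance change estimated by ordinary least
    squares; LWU-STFM instead solves a locally weighted system (GSWR).
    """
    delta, *_ = np.linalg.lstsq(class_fractions, coarse_change, rcond=None)
    return delta

def predict_fine_band(fine_base, fine_classes, class_delta):
    """Add each class's unmixed change to the fine-resolution base band."""
    return fine_base + class_delta[fine_classes]

# Tiny synthetic example: 4 coarse pixels mixing 2 land cover classes.
fractions = np.array([[0.8, 0.2],
                      [0.5, 0.5],
                      [0.3, 0.7],
                      [0.1, 0.9]])
true_delta = np.array([0.05, -0.02])        # per-class reflectance change
coarse_change = fractions @ true_delta      # observed coarse-pixel change
print(unmix_class_changes(coarse_change, fractions))  # ~[0.05, -0.02]
```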
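The quantitative comparison in the abstract is reported with the relative dimensionless global error (ERGAS), for which lower values indicate a better fused image. Below is a minimal sketch of the commonly used ERGAS formulation, 100 · (h/l) · sqrt(mean over bands of (RMSE_k / mean_k)^2); the band-first array layout and the Landsat/MODIS pixel-size ratio in the example are assumptions for illustration, not values taken from the article.

```python
import numpy as np

def ergas(predicted, reference, resolution_ratio):
    """Relative dimensionless global error (ERGAS).

    predicted, reference : (bands, rows, cols) reflectance arrays
    resolution_ratio     : fine-to-coarse pixel size ratio (h / l),
                           e.g. 30 / 500 when fusing 30 m Landsat with
                           ~500 m MODIS bands (assumed for illustration)
    """
    n_bands = reference.shape[0]
    total = 0.0
    for k in range(n_bands):
        rmse_k = np.sqrt(np.mean((predicted[k] - reference[k]) ** 2))
        mean_k = np.mean(reference[k])
        total += (rmse_k / mean_k) ** 2
    return 100.0 * resolution_ratio * np.sqrt(total / n_bands)

# Example with synthetic data: a 6-band reference and a noisy prediction.
rng = np.random.default_rng(0)
ref = rng.uniform(0.05, 0.4, size=(6, 64, 64))
pred = ref + rng.normal(0.0, 0.01, size=ref.shape)
print(round(ergas(pred, ref, 30 / 500), 3))
```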