Descriptor
pouvoir de résolution géométrique (geometric resolution)
Synonym(s): résolution spatiale (spatial resolution); résolution géométrique (geometric resolution)
Documents available in this category (46)
Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times / Peng Wang in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023)
[article]
Title: Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times
Document type: Article/Communication
Authors: Peng Wang; Hongyu Yao; Bo Huang; et al.
Year of publication: 2023
Article pages: n° 5401217
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] multiresolution analysis
[Termes IGN] multitemporal data
[Termes IGN] multiband image
[Termes IGN] panchromatic image
[Termes IGN] pansharpening (image fusion)
[Termes IGN] geometric resolution
Abstract (author): Most pansharpening methods refer to the fusion of the original low-resolution multispectral (MS) and high-resolution panchromatic (PAN) images acquired simultaneously over the same area. Due to its good robustness, multiresolution analysis (MRA) has become one of the important categories of pansharpening methods. However, when only MS and PAN images acquired at different times can be provided, the fusion results from current MRA methods are often not ideal due to the failure to effectively analyze multitemporal misalignments between MS and PAN images from different times. To solve this issue, MRA pansharpening based on variation factor for MS and PAN images from different times is proposed. The MRA pansharpening based on dual-scale regression model is first established, and the variation factor is then introduced to effectively analyze the multitemporal misalignments by using the alternating direction method of multipliers (ADMM), yielding the final fusion results. Experiments with synthetic and real datasets show that the proposed method exhibits significant performance improvement compared to the traditional pansharpening methods, as well as the state-of-the-art MRA methods. Visual comparisons demonstrate that the variation factor introduces encouraging improvements in the compensation of multitemporal misalignments in ground objects and advances pansharpening applications for MS and PAN images acquired at different times.
Record number: A2023-184
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2023.3252001
Online: https://doi.org/10.1109/TGRS.2023.3252001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102956
In: IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023), n° 5401217
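For context, the sketch below shows the generic MRA detail-injection scheme that this family of pansharpening methods builds on: upsample the MS bands, extract spatial detail from the PAN image with a low-pass filter, and inject the detail band by band. It is only an illustration of the baseline idea, not the authors' variation-factor/ADMM algorithm; the function name `mra_pansharpen`, the Gaussian low-pass, the bicubic upsampling and the covariance-based gain are assumptions made for the example.

```python
# Minimal sketch of generic MRA (detail-injection) pansharpening.
# NOT the variation-factor/ADMM method of the record above.
import numpy as np
from scipy import ndimage


def mra_pansharpen(ms, pan, ratio=4, sigma=None):
    """Fuse a low-resolution MS cube (bands, h, w) with a PAN image (H, W)."""
    sigma = sigma if sigma is not None else ratio / 2.0
    # 1) Upsample each MS band to the PAN grid (bicubic interpolation).
    ms_up = np.stack([ndimage.zoom(b, ratio, order=3) for b in ms])
    # 2) Low-pass the PAN image to approximate its appearance at MS resolution.
    pan_low = ndimage.gaussian_filter(pan, sigma)
    detail = pan - pan_low
    # 3) Inject the PAN detail into each band with a covariance-based gain.
    fused = np.empty_like(ms_up)
    for k, band in enumerate(ms_up):
        g = np.cov(band.ravel(), pan_low.ravel())[0, 1] / (pan_low.var() + 1e-12)
        fused[k] = band + g * detail
    return fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((4, 64, 64))      # 4-band MS image
    pan = rng.random((256, 256))      # PAN image at 4x the MS resolution
    print(mra_pansharpen(ms, pan).shape)  # (4, 256, 256)
```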
Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping / Luc Baudoux in International journal of geographical information science IJGIS, vol 37 n° 2 (February 2023)
[article]
Title: Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping
Document type: Article/Communication
Authors: Luc Baudoux; Jordi Inglada; Clément Mallet
Year of publication: 2023
Projects: AI4GEO
Article pages: pp 403 - 437
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Thematic mapping
[Termes IGN] deep learning
[Termes IGN] land-cover map
[Termes IGN] land-use map
[Termes IGN] thematic map
[Termes IGN] convolutional neural network classification
[Termes IGN] training data (machine learning)
[Termes IGN] data harmonisation
[Termes IGN] nomenclature
[Termes IGN] geometric resolution
Abstract (author): Land-use/land-cover (LULC) maps describe the Earth’s surface with discrete classes at a specific spatial resolution. The chosen classes and resolution highly depend on peculiar uses, making it mandatory to develop methods to adapt these characteristics for a large range of applications. Recently, a convolutional neural network (CNN)-based method was introduced to take into account both spatial and geographical context to translate a LULC map into another one. However, this model only works for two maps: one source and one target. Inspired by natural language translation using multiple-language models, this article explores how to translate one LULC map into several targets with distinct nomenclatures and spatial resolutions. We first propose a new data set based on six open access LULC maps to train our CNN-based encoder-decoder framework. We then apply such a framework to convert each of these six maps into each of the others using our Multi-Landcover Translation network (MLCT-Net). Extensive experiments are conducted at a country scale (namely France). The results reveal that our MLCT-Net outperforms its semantic counterparts and gives on par results with mono-LULC models when evaluated on areas similar to those used for training. Furthermore, it outperforms the mono-LULC models when applied to totally new landscapes.
Record number: A2023-075
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2022.2120996
Online publication date: 10/10/2022
Online: https://doi.org/10.1080/13658816.2022.2120996
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101797
In: International journal of geographical information science IJGIS, vol 37 n° 2 (February 2023), pp 403 - 437
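As an illustration of the map-translation idea summarised in the abstract above, the PyTorch sketch below pairs one shared convolutional encoder with one decoder head per target nomenclature. The class `LandCoverTranslator`, the layer sizes, the class counts and the nomenclature names ("clc", "oso") are placeholders chosen for the example, not the published MLCT-Net architecture.

```python
# Sketch of a shared-encoder / multi-head land-cover map translator (illustrative only).
import torch
import torch.nn as nn


class LandCoverTranslator(nn.Module):
    def __init__(self, in_classes: int, target_classes: dict[str, int]):
        super().__init__()
        # The source map enters as one-hot channels; a small CNN encodes spatial context.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One decoder head per target nomenclature, each predicting its own classes.
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(64, n_cls, 1) for name, n_cls in target_classes.items()
        })

    def forward(self, x: torch.Tensor, target: str) -> torch.Tensor:
        return self.heads[target](self.encoder(x))


# Usage: translate a 10-class source patch into two hypothetical target nomenclatures.
model = LandCoverTranslator(in_classes=10, target_classes={"clc": 44, "oso": 23})
patch = torch.randn(1, 10, 128, 128)      # one-hot-like source map patch
logits = model(patch, target="clc")       # (1, 44, 128, 128) class scores
print(logits.shape)
```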
Single-image super-resolution for remote sensing images using a deep generative adversarial network with local and global attention mechanisms / Yadong Li in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022)
[article]
Title: Single-image super-resolution for remote sensing images using a deep generative adversarial network with local and global attention mechanisms
Document type: Article/Communication
Authors: Yadong Li; Sébastien Mavromatis; Feng Zhang; et al.
Year of publication: 2022
Article pages: n° 3000224
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] deep learning
[Termes IGN] attention (machine learning)
[Termes IGN] convolutional neural network classification
[Termes IGN] single image
[Termes IGN] geometric resolution
[Termes IGN] spectral resolution
[Termes IGN] image reconstruction
[Termes IGN] generative adversarial network
Abstract (author): Super-resolution (SR) technology is an important way to improve spatial resolution under the condition of sensor hardware limitations. With the development of deep learning (DL), some DL-based SR models have achieved state-of-the-art performance, especially the convolutional neural network (CNN). However, considering that remote sensing images usually contain a variety of ground scenes and objects with different scales, orientations, and spectral characteristics, previous works usually treat important and unnecessary features equally or only apply different weights in the local receptive field, which ignores long-range dependencies; it is still a challenging task to exploit features on different levels and reconstruct images with realistic details. To address these problems, an attention-based generative adversarial network (SRAGAN) is proposed in this article, which applies both local and global attention mechanisms. Specifically, we apply local attention in the SR model to focus on structural components of the earth’s surface that require more attention, and global attention is used to capture long-range interdependencies in the channel and spatial dimensions to further refine details. To optimize the adversarial learning process, we also use local and global attentions in the discriminator model to enhance the discriminative ability and apply the gradient penalty in the form of hinge loss and loss function that combines L1 pixel loss, L1 perceptual loss, and relativistic adversarial loss to promote rich details. The experiments show that SRAGAN can achieve performance improvements and reconstruct better details compared with current state-of-the-art SR methods. A series of ablation investigations and model analyses validate the efficiency and effectiveness of our method.
Record number: A2022-767
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2021.3093043
Online publication date: 12/07/2021
Online: https://doi.org/10.1109/TGRS.2021.3093043
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101789
In: IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022), n° 3000224
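The abstract above names several loss terms; the PyTorch sketch below shows one plausible way to assemble an L1 pixel loss, an L1 perceptual loss and a relativistic adversarial term for the generator, plus a hinge loss for the discriminator. The weights `w_pix`, `w_per`, `w_adv`, the softplus formulation and the generic `feat_fn` feature extractor are assumptions for illustration, not the exact losses or values used in SRAGAN.

```python
# Illustrative composite GAN losses of the kind described in the abstract (not SRAGAN's exact losses).
import torch
import torch.nn.functional as F


def generator_loss(sr, hr, d_real, d_fake, feat_fn, w_pix=1.0, w_per=1.0, w_adv=0.005):
    """sr/hr: super-resolved and reference images; d_real/d_fake: discriminator
    scores for real and generated batches; feat_fn: any fixed feature extractor."""
    pixel = F.l1_loss(sr, hr)                                # L1 pixel loss
    perceptual = F.l1_loss(feat_fn(sr), feat_fn(hr))         # L1 perceptual loss
    # Relativistic (average) adversarial term: generated samples should score
    # higher than the mean score of real samples.
    adversarial = F.softplus(-(d_fake - d_real.mean())).mean()
    return w_pix * pixel + w_per * perceptual + w_adv * adversarial


def discriminator_hinge_loss(d_real, d_fake):
    """Standard hinge loss for the discriminator."""
    return (F.relu(1.0 - d_real) + F.relu(1.0 + d_fake)).mean()


# Toy usage with random tensors and an identity "feature extractor".
sr, hr = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
d_real, d_fake = torch.randn(2, 1), torch.randn(2, 1)
print(generator_loss(sr, hr, d_real, d_fake, feat_fn=lambda x: x).item())
print(discriminator_hinge_loss(d_real, d_fake).item())
```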
PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data
Document type: Article/Communication
Authors: Qi Zhang; Linlin Ge; Scott Hensley; et al.
Year of publication: 2022
Article pages: pp 123 - 139
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Mixed image processing
[Termes IGN] discriminant analysis
[Termes IGN] unsupervised learning
[Termes IGN] deep learning
[Termes IGN] L band
[Termes IGN] lidar data
[Termes IGN] boreal forest
[Termes IGN] tropical forest
[Termes IGN] Global Ecosystem Dynamics Investigation lidar
[Termes IGN] vegetation height
[Termes IGN] tree height
[Termes IGN] drone-acquired image
[Termes IGN] synthetic aperture radar interferometry
[Termes IGN] pansharpening (image fusion)
[Termes IGN] radar polarimetry
[Termes IGN] geometric resolution
[Termes IGN] generative adversarial network
[Termes IGN] point cloud
Abstract (author): This paper describes a deep-learning-based unsupervised forest height estimation method based on the synergy of the high-resolution L-band repeat-pass Polarimetric Synthetic Aperture Radar Interferometry (PolInSAR) and low-resolution large-footprint full-waveform Light Detection and Ranging (LiDAR) data. Unlike traditional PolInSAR-based methods, the proposed method reformulates the forest height inversion as a pan-sharpening process between the low-resolution LiDAR height and the high-resolution PolSAR and PolInSAR features. A tailored Generative Adversarial Network (GAN) called PolGAN with one generator and dual (coherence and spatial) discriminators is proposed to this end, where a progressive pan-sharpening strategy underpins the generator to overcome the significant difference between spatial resolutions of LiDAR and SAR-related inputs. Forest height estimates with high spatial resolution and vertical accuracy are generated through a continuous generative and adversarial process. UAVSAR PolInSAR and LVIS LiDAR data collected over tropical and boreal forest sites are used for experiments. Ablation study is conducted over the boreal site evidencing the superiority of the progressive generator with dual discriminators employed in PolGAN (RMSE: 1.21 m) in comparison with the standard generator with dual discriminators (RMSE: 2.43 m) and the progressive generator with a single coherence (RMSE: 2.74 m) or spatial discriminator (RMSE: 5.87 m). Besides that, by reducing the dependency on theoretical models and utilizing the shape, texture, and spatial information embedded in the high-spatial-resolution features, the PolGAN method achieves an RMSE of 2.37 m over the tropical forest site, which is much more accurate than the traditional PolInSAR-based Kapok method (RMSE: 8.02 m).
Record number: A2022-195
Authors' affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.02.008
Online publication date: 17/02/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.02.008
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99962
In: ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022), pp 123 - 139
Copies (3)
Barcode       Call number  Type     Location                 Section                    Availability
081-2022041   SL           Journal  Documentation centre     Journals in reading room   Available
081-2022043   DEP-RECP     Journal  LASTIG                   Unit deposit               Not for loan
081-2022042   DEP-RECF     Journal  Nancy                    Unit deposit               Not for loan
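The PolGAN record above describes a one-generator, two-discriminator adversarial setup; the PyTorch sketch below illustrates only that objective, with tiny stand-in networks so it runs end to end. The architectures, the four-channel input stack and the loss weights `w_coh`/`w_spa` are placeholders, not the published PolGAN design.

```python
# Illustrative dual-discriminator adversarial objective (not the PolGAN architecture).
import torch
import torch.nn as nn

# Tiny stand-in networks so the sketch is runnable.
gen = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
d_coherence = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
d_spatial = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))


def generator_adversarial_loss(inputs, w_coh=1.0, w_spa=1.0):
    """The generated high-resolution height map must fool both discriminators."""
    height_hr = gen(inputs)                      # predicted canopy-height map
    adv_coh = -d_coherence(height_hr).mean()     # coherence discriminator term
    adv_spa = -d_spatial(height_hr).mean()       # spatial discriminator term
    return w_coh * adv_coh + w_spa * adv_spa


inputs = torch.randn(1, 4, 64, 64)  # toy stack of PolInSAR features + coarse LiDAR height
print(generator_adversarial_loss(inputs).item())
```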
Spatiotemporal fusion modelling using STARFM: Examples of Landsat 8 and Sentinel-2 NDVI in Bavaria / Maninder Singh Dhillon in Remote sensing, vol 14 n° 3 (February-1 2022)
[article]
Title: Spatiotemporal fusion modelling using STARFM: Examples of Landsat 8 and Sentinel-2 NDVI in Bavaria
Document type: Article/Communication
Authors: Maninder Singh Dhillon; Thorsten Dahms; Carina Kübert-Flock; et al.
Year of publication: 2022
Article pages: n° 677
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] Bavaria (Germany)
[Termes IGN] land-cover map
[Termes IGN] data fusion
[Termes IGN] Landsat-8 image
[Termes IGN] Sentinel-MSI image
[Termes IGN] Terra-MODIS image
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] geometric resolution
[Termes IGN] reflectance
[Termes IGN] vegetation monitoring
[Termes IGN] land use
Abstract (author): The increasing availability and variety of global satellite products provide a new level of data with different spatial, temporal, and spectral resolutions; however, identifying the most suited resolution for a specific application consumes increasingly more time and computation effort. The region’s cloud coverage additionally influences the choice of the best trade-off between spatial and temporal resolution, and different pixel sizes of remote sensing (RS) data may hinder the accurate monitoring of different land cover (LC) classes such as agriculture, forest, grassland, water, urban, and natural-seminatural. To investigate the importance of RS data for these LC classes, the present study fuses NDVIs of two high spatial resolution data (high pair) (Landsat (30 m, 16 days; L) and Sentinel-2 (10 m, 5–6 days; S), with four low spatial resolution data (low pair) (MOD13Q1 (250 m, 16 days), MCD43A4 (500 m, one day), MOD09GQ (250 m, one-day), and MOD09Q1 (250 m, eight day)) using the spatial and temporal adaptive reflectance fusion model (STARFM), which fills regions’ cloud or shadow gaps without losing spatial information. These eight synthetic NDVI STARFM products (2: high pair multiply 4: low pair) offer a spatial resolution of 10 or 30 m and temporal resolution of 1, 8, or 16 days for the entire state of Bavaria (Germany) in 2019. Due to their higher revisit frequency and more cloud and shadow-free scenes (S = 13, L = 9), Sentinel-2 (overall R2 = 0.71, and RMSE = 0.11) synthetic NDVI products provide more accurate results than Landsat (overall R2 = 0.61, and RMSE = 0.13). Likewise, for the agriculture class, synthetic products obtained using Sentinel-2 resulted in higher accuracy than Landsat except for L-MOD13Q1 (R2 = 0.62, RMSE = 0.11), resulting in similar accuracy preciseness as S-MOD13Q1 (R2 = 0.68, RMSE = 0.13). Similarly, comparing L-MOD13Q1 (R2 = 0.60, RMSE = 0.05) and S-MOD13Q1 (R2 = 0.52, RMSE = 0.09) for the forest class, the former resulted in higher accuracy and precision than the latter. Conclusively, both L-MOD13Q1 and S-MOD13Q1 are suitable for agricultural and forest monitoring; however, the spatial resolution of 30 m and low storage capacity makes L-MOD13Q1 more prominent and faster than that of S-MOD13Q1 with the 10-m spatial resolution.
Record number: A2022-124
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs14030677
Online publication date: 31/01/2022
Online: https://doi.org/10.3390/rs14030677
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99687
In: Remote sensing, vol 14 n° 3 (February-1 2022), n° 677
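The record above relies on STARFM-style spatiotemporal fusion of NDVI. The sketch below shows the NDVI formula and a deliberately naive, per-pixel version of the STARFM prediction (fine NDVI at the target date = fine NDVI at the base date + the coarse temporal change). Real STARFM additionally weights spectrally, temporally and spatially similar neighbours in a moving window; that part is omitted here, and the function `starfm_naive` is purely illustrative.

```python
# Toy NDVI computation and a naive, per-pixel STARFM-like prediction (illustration only).
import numpy as np


def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)


def starfm_naive(fine_t0, coarse_t0, coarse_tp):
    """Fine NDVI at prediction date = fine NDVI at base date + coarse temporal change.

    All inputs must already be co-registered and resampled to the fine grid.
    """
    return fine_t0 + (coarse_tp - coarse_t0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nir = rng.uniform(0.3, 0.6, (100, 100))
    red = rng.uniform(0.05, 0.2, (100, 100))
    fine_t0 = ndvi(nir, red)                                   # e.g. Sentinel-2 NDVI, base date
    coarse_t0 = fine_t0 + rng.normal(0.0, 0.02, (100, 100))    # e.g. resampled MODIS NDVI, base date
    coarse_tp = coarse_t0 + 0.05                               # MODIS NDVI, prediction date
    print(starfm_naive(fine_t0, coarse_t0, coarse_tp).mean())
```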
Further documents in this category (titles only):
Fusion de données hyperspectrales et panchromatiques dans le domaine réflectif / Yohann Constans (2022)
A novel unmixing-based hypersharpening method via convolutional neural network / Xiaochen Lu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
Towards urban flood susceptibility mapping using data-driven models in Berlin, Germany / Omar Seleem in Geomatics, Natural Hazards and Risk, vol 13 (2022)
Field scale wheat LAI retrieval from multispectral Sentinel 2A-MSI and LandSat 8-OLI imagery: effect of atmospheric correction, image resolutions and inversion techniques / Rajkumar Dhakar in Geocarto international, vol 36 n° 18 ([01/10/2021])
Integrating spatio-temporal-spectral information for downscaling Sentinel-3 OLCI images / Yijie Tang in ISPRS Journal of photogrammetry and remote sensing, vol 180 (October 2021)
Unsupervised pansharpening based on self-attention mechanism / Ying Qu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
Model based signal processing techniques for nonconventional optical imaging systems / Daniele Picone (2021)
fusionImage: An R package for pan-sharpening images in open source software / Fulgencio Cánovas-García in Transactions in GIS, Vol 24 n° 5 (October 2020)