Descriptor
Documents available in this category (241)
Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
[article]
Title: Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery
Document type: Article/Communication
Authors: Qian Shen; Jiru Huang; Min Wang; et al.
Publication year: 2022
Pagination: pp 78 - 94
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] change detection
[IGN terms] building detection
[IGN terms] qualitative data
[IGN terms] quantitative estimation
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] multiband image
[IGN terms] dataset
[IGN terms] Siamese neural network
Abstract: (author) In the field of remote sensing applications, semantic change detection (SCD) simultaneously identifies changed areas and their change types by jointly conducting bitemporal image classification and change detection. It facilitates change reasoning and provides more application value than binary change detection (BCD), which offers only a binary map of the changed/unchanged areas. In this study, we propose a multitask Siamese network, named the semantic feature-constrained change detection (SFCCD) network, for building change detection in bitemporal high-spatial-resolution (HSR) images. SFCCD conducts feature extraction, semantic segmentation and change detection simultaneously, where change detection and semantic segmentation are the main and auxiliary tasks, respectively. For the segmentation task, ResNet50 is used to conduct image feature extraction, and the extracted semantic features are provided to execute the change detection task via a series of jump connections. For the change detection task, a global channel attention (GCA) module and a multiscale feature fusion (MSFF) module are designed, where high-level features offer training guidance to the low-level feature maps, and multiscale features are fused with multiple convolutions that possess different receptive fields. In bitemporal HSR images with different view angles, high-rise buildings have different directional height displacements, which generally cause serious false alarms for common change detection methods. However, known public building change detection datasets often lack buildings with height displacement. We thus create the Nanjing Dataset (NJDS) and design the aforementioned network structures and modules to target this issue. Experiments for method validation and comparison are conducted on the NJDS and two additional public datasets, i.e., the WHU Building Dataset (WBDS) and Google Dataset (GDS).
Ablation experiments on the NJDS show that the joint utilization of the GCA and MSFF modules performs better than several classic modules, including atrous spatial pyramid pooling (ASPP), efficient spatial pyramid (ESP), channel attention block (CAB) and global attention upsampling (GAU) modules, in dealing with building height displacement. Furthermore, SFCCD achieves higher accuracy in terms of the OA, recall, F1-score and mIoU measures than several state-of-the-art change detection methods, including the deeply supervised image fusion network (DSIFN), the dual-task constrained deep Siamese convolutional network (DTCDSCN), and multitask U-Net (MTU-Net).
Record number: A2022-412
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.05.001
Online publication date: 12/05/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.05.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100762
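The abstract describes a global channel attention (GCA) module in which globally pooled statistics reweight feature channels. The paper's exact formulation is not given in this record; the following NumPy sketch shows a generic squeeze-and-excitation-style channel attention, with hypothetical bottleneck weights `w1` and `w2`:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_channel_attention(feats, w1, w2):
    """Reweight feature channels using globally pooled statistics.

    feats: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    hypothetical bottleneck weights (the paper's exact GCA differs).
    Returns the reweighted map and the per-channel weights in (0, 1).
    """
    squeeze = feats.mean(axis=(1, 2))                     # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # ReLU bottleneck + sigmoid
    return feats * excite[:, None, None], excite

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 6, 6))
w1 = rng.standard_normal((4, 8)) * 0.1
w2 = rng.standard_normal((8, 4)) * 0.1
out, weights = global_channel_attention(feats, w1, w2)
```

Channels whose pooled response aligns with the learned guidance receive weights near 1 and pass through; the rest are attenuated.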
in ISPRS Journal of photogrammetry and remote sensing > vol 189 (July 2022) . - pp 78 - 94 [article]
Copies (1):
Barcode 081-2022071 | Call number SL | Type: Journal | Location: Centre de documentation | Section: Reading-room journals | Availability: Available

A dual-generator translation network fusing texture and structure features for SAR and optical image matching / Han Nie in Remote sensing, Vol 14 n° 12 (June-2 2022)
[article]
Title: A dual-generator translation network fusing texture and structure features for SAR and optical image matching
Document type: Article/Communication
Authors: Han Nie; Zhitao Fu; Bo-Hui Tang; et al.
Publication year: 2022
Pagination: n° 2946
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] detail aggregation
[IGN terms] image matching
[IGN terms] image fusion
[IGN terms] speckled radar image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] signal-to-noise ratio
[IGN terms] rift
[IGN terms] image texture
Abstract: (author) The matching problem for heterologous remote sensing images can be simplified to the matching problem for pseudo homologous remote sensing images via image translation to improve the matching performance. Among such applications, the translation of synthetic aperture radar (SAR) and optical images is the current focus of research. However, the existing methods for SAR-to-optical translation have two main drawbacks. First, single generators usually sacrifice either structure or texture features to balance the model performance and complexity, which often results in textural or structural distortion; second, due to large nonlinear radiation distortions (NRDs) in SAR images, there are still visual differences between the pseudo-optical images generated by current generative adversarial networks (GANs) and real optical images. Therefore, we propose a dual-generator translation network for fusing structure and texture features. On the one hand, the proposed network has dual generators, a texture generator and a structure generator, with good cross-coupling to obtain high-accuracy structure and texture features; on the other hand, frequency-domain and spatial-domain loss functions are introduced to reduce the differences between pseudo-optical images and real optical images. Extensive quantitative and qualitative experiments show that our method achieves state-of-the-art performance on publicly available optical and SAR datasets. Our method improves the peak signal-to-noise ratio (PSNR) by 21.0%, the chromatic feature similarity (FSIMc) by 6.9%, and the structural similarity (SSIM) by 161.7% in terms of the average metric values on all test images compared with the next best results. In addition, we present a before-and-after translation comparison experiment to show that our method improves the average keypoint repeatability by approximately 111.7% and the matching accuracy by approximately 5.25%.
Record number: A2022-562
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs14122946
Online publication date: 20/06/2022
Online: https://doi.org/10.3390/rs14122946
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101237
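The abstract mentions combining frequency-domain and spatial-domain loss functions to pull pseudo-optical images toward real optical images. The exact losses are not reproduced in this record; a generic stand-in that mixes a spatial L1 term with an L1 on FFT magnitudes might look like this (the balance weight `alpha` is an assumption):

```python
import numpy as np

def freq_spatial_loss(pred, target, alpha=0.5):
    """Weighted sum of a spatial L1 term and an L1 on FFT magnitudes.

    A generic stand-in for the paper's frequency-domain + spatial-domain
    losses; the 50/50 weighting `alpha` is an assumption, not the paper's.
    """
    spatial = np.abs(pred - target).mean()
    freq = np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))).mean()
    return alpha * spatial + (1.0 - alpha) * freq

pseudo_optical = np.zeros((8, 8))
real_optical = np.ones((8, 8))
loss_same = freq_spatial_loss(real_optical, real_optical)  # identical images
loss_diff = freq_spatial_loss(pseudo_optical, real_optical)
```

Using FFT magnitudes makes the frequency term insensitive to small spatial shifts while still penalizing missing texture content.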
in Remote sensing > Vol 14 n° 12 (June-2 2022) . - n° 2946 [article]

HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion / Kun Li in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
[article]
Title: HyperNet: A deep network for hyperspectral, multispectral, and panchromatic image fusion
Document type: Article/Communication
Authors: Kun Li; Wei Zhang; Dian Yu; Xin Tian
Publication year: 2022
Pagination: pp 30 - 44
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] blurred image
[IGN terms] hyperspectral image
[IGN terms] multiband image
[IGN terms] panchromatic image
[IGN terms] pansharpening (image fusion)
[IGN terms] deep neural network
Abstract: (author) Traditional approaches mainly fuse a hyperspectral image (HSI) with a high-resolution multispectral image (MSI) to improve the spatial resolution of the HSI. However, such improvement in the spatial resolution of HSIs is still limited because the spatial resolution of MSIs remains low. To further improve the spatial resolution of HSIs, we propose HyperNet, a deep network for the fusion of HSI, MSI, and panchromatic image (PAN), which effectively injects the spatial details of an MSI and a PAN into an HSI while preserving the spectral information of the HSI. Thus, we design HyperNet on the basis of a uniform fusion strategy to solve the problem of complex fusion of three types of sources (i.e., HSI, MSI, and PAN). In particular, the spatial details of the MSI and the PAN are extracted by multiple specially designed multiscale-attention-enhance blocks in which multi-scale convolution is used to adaptively extract features from different receptive fields, and two attention mechanisms are adopted to enhance the representation capability of features along the spectral and spatial dimensions, respectively. Through the capability of feature reuse and interaction in a specially designed dense-detail-insertion block, the previously extracted features are subsequently injected into the HSI according to the unidirectional feature propagation among the layers of dense connection. Finally, we construct an efficient loss function by integrating the multi-scale structural similarity index with the norm, which drives HyperNet to generate high-quality results with a good balance between spatial and spectral qualities. Extensive experiments on simulated and real data sets qualitatively and quantitatively demonstrate the superiority of HyperNet over other state-of-the-art methods.
Record number: A2022-272
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.04.001
Online publication date: 07/04/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.04.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100461
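HyperNet learns to inject MSI/PAN spatial detail into the HSI; the network itself cannot be reconstructed from the abstract, but the classic detail-injection idea it builds on can be sketched. The 3x3 box low-pass and the scalar `gain` below are assumptions standing in for HyperNet's learned, attention-enhanced injection:

```python
import numpy as np

def inject_details(hsi_up, pan, gain=1.0):
    """Toy detail injection: add the PAN high-pass to every HSI band.

    hsi_up: (B, H, W) HSI upsampled to the PAN grid; pan: (H, W).
    The box-filter high-pass and scalar gain are placeholder choices.
    """
    h, w = pan.shape
    padded = np.pad(pan, 1, mode="edge")
    # crude 3x3 box low-pass, then high-pass = pan - low-pass
    low = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return hsi_up + gain * (pan - low)[None, :, :]

hsi = np.ones((3, 4, 4))
flat_pan = np.full((4, 4), 5.0)     # a flat PAN has no detail to inject
fused = inject_details(hsi, flat_pan)
```

Adding only the high-pass component is what preserves the HSI's spectral information: the PAN's overall brightness never enters the fusion, only its spatial detail does.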
in ISPRS Journal of photogrammetry and remote sensing > vol 188 (June 2022) . - pp 30 - 44 [article]
Copies (3):
Barcode 081-2022061 | Call number SL | Type: Journal | Location: Centre de documentation | Section: Reading-room journals | Availability: Available
Barcode 081-2022063 | Call number DEP-RECP | Type: Journal | Location: LASTIG | Section: Unit deposit | Availability: Not for loan
Barcode 081-2022062 | Call number DEP-RECF | Type: Journal | Location: Nancy | Section: Unit deposit | Availability: Not for loan

Classification of vegetation classes by using time series of Sentinel-2 images for large scale mapping in Cameroon / Hermann Tagne in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-3-2022 (2022 edition)
[article]
Title: Classification of vegetation classes by using time series of Sentinel-2 images for large scale mapping in Cameroon
Document type: Article/Communication
Authors: Hermann Tagne; Arnaud Le Bris; David Monkam; Clément Mallet
Publication year: 2022
Projects: TOSCA Parcelle / Le Bris, Arnaud
Pagination: pp 673 - 680
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] Cameroon
[IGN terms] vegetation map
[IGN terms] random forest classification
[IGN terms] image fusion
[IGN terms] Sentinel-MSI image
[IGN terms] land cover
[IGN terms] time series
Abstract: (author) Sentinel-2 satellites provide dense image time series with high spectral, spatial and temporal resolutions. These images are of particular interest for Land-Cover (LC) mapping at large scales. LC maps can now be computed on a yearly basis at the scale of a country with efficient supervised classifiers, provided suitable training data are available. However, the efficient exploitation of large amounts of Sentinel-2 imagery still remains challenging in unexplored areas where state-of-the-art classifiers are prone to fail. This paper focuses on Land-Cover mapping over Cameroon for the purpose of updating the Very High Resolution national topographic geodatabase. The ι2 framework is adopted and tested for the specificities of the country. Here, experiments focus on five generic vegetation classes, which enables providing robust focusing masks for higher-resolution classifications. Two strategies are compared: (i) a LC map is computed from a year-long time series, and (ii) monthly LC maps are generated and merged into a single yearly map. Satisfactory accuracy scores are obtained (>94% Overall Accuracy), providing a first step towards finer-grained map retrieval.
Record number: A2022-426
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-V-3-2022-673-2022
Online publication date: 18/05/2022
Online: https://doi.org/10.5194/isprs-annals-V-3-2022-673-2022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100731
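Strategy (ii) in the abstract merges monthly land-cover maps into one yearly map. The merging rule is not detailed in this record, so the sketch below assumes a simple per-pixel majority vote:

```python
import numpy as np

def merge_monthly_maps(monthly, n_classes):
    """Merge monthly label maps (M, H, W) into one yearly map by
    per-pixel majority vote (an assumed merging rule, not the paper's)."""
    counts = np.zeros((n_classes,) + monthly.shape[1:], dtype=int)
    for month in monthly:
        for c in range(n_classes):
            counts[c] += (month == c)
    return counts.argmax(axis=0)   # ties resolve to the lowest class id

# three months of 2x2 toy label maps, 4 classes
monthly = np.array([[[0, 1], [2, 2]],
                    [[0, 1], [1, 2]],
                    [[3, 1], [1, 2]]])
yearly = merge_monthly_maps(monthly, n_classes=4)  # -> [[0, 1], [1, 2]]
```

A vote like this makes the yearly map robust to single-month classification errors (e.g. a cloudy acquisition), which is one motivation for comparing the two strategies.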
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-3-2022 (2022 edition) . - pp 673 - 680 [article]

Fusion of optical, radar and waveform LiDAR observations for land cover classification / Huiran Jin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Fusion of optical, radar and waveform LiDAR observations for land cover classification
Document type: Article/Communication
Authors: Huiran Jin; Giorgos Mountrakis
Publication year: 2022
Pagination: pp 171 - 190
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] comparative analysis
[IGN terms] vegetation map
[IGN terms] random forest classification
[IGN terms] lidar data
[IGN terms] georeferenced 3D data
[IGN terms] feature extraction
[IGN terms] image fusion
[IGN terms] ALOS-PALSAR image
[IGN terms] Landsat-TM image
[IGN terms] multitemporal image
[IGN terms] land cover
Abstract: (author) Land cover is an integral component for characterizing anthropogenic activity and promoting sustainable land use. Mapping distribution and coverage of land cover at broad spatiotemporal scales largely relies on classification of remotely sensed data. Although recently multi-source data fusion has been playing an increasingly active role in land cover classification, our intensive review of current studies shows that the integration of optical, synthetic aperture radar (SAR) and light detection and ranging (LiDAR) observations has not been thoroughly evaluated. In this research, we bridged this gap by i) summarizing related fusion studies and assessing their reported accuracy improvements, and ii) conducting our own case study where for the first time fusion of optical, radar and waveform LiDAR observations and the associated improvements in classification accuracy are assessed using data collected by spaceborne or appropriately simulated platforms in the LiDAR case. Multitemporal Landsat-5/Thematic Mapper (TM) and Advanced Land Observing Satellite-1/Phased Array type L-band SAR (ALOS-1/PALSAR) imagery acquired in the Central New York (CNY) region close to the collection of airborne waveform LVIS (Land, Vegetation, and Ice Sensor) data were examined. Classification was conducted using a random forest algorithm and different feature sets in terms of sensor and seasonality as input variables. Results indicate that the combined spectral, scattering and vertical structural information provided the maximum discriminative capability among different land cover types, giving rise to the highest overall accuracy of 83% (2–19% and 9–35% superior to the two-sensor and single-sensor scenarios with overall accuracies of 64–81% and 48–74%, respectively).
Greater improvement was achieved when combining multitemporal Landsat images with LVIS-derived canopy height metrics as opposed to PALSAR features, suggesting that LVIS contributed more useful thematic information complementary to spectral data and beneficial to the classification task, especially for vegetation classes. With the Global Ecosystem Dynamics Investigation (GEDI), a recently launched LiDAR instrument of similar properties to the LVIS sensor now operating onboard the International Space Station (ISS), it is our hope that this research will act as a literature summary and offer guidelines for further applications of multi-date and multi-type remotely sensed data fusion for improved land cover classification.
Record number: A2022-228
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.010
Online publication date: 17/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100214
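The study feeds a random forest with "different feature sets in terms of sensor and seasonality". At feature level, this kind of fusion amounts to stacking per-pixel features from each sensor into one design matrix. A minimal sketch, where the per-sensor feature counts in the example are illustrative assumptions:

```python
import numpy as np

def stack_sensor_features(optical, sar, lidar):
    """Concatenate per-pixel feature sets from three sensors into a single
    design matrix, ready for a classifier such as a random forest.

    optical: (N, F1), sar: (N, F2), lidar: (N, F3) -> (N, F1 + F2 + F3).
    """
    assert optical.shape[0] == sar.shape[0] == lidar.shape[0]
    return np.concatenate([optical, sar, lidar], axis=1)

n = 5  # pixels
features = stack_sensor_features(np.zeros((n, 6)),   # e.g. Landsat-5/TM reflective bands
                                 np.zeros((n, 2)),   # e.g. PALSAR HH/HV backscatter
                                 np.zeros((n, 3)))   # e.g. LVIS canopy height metrics
```

Dropping one of the three inputs reproduces the paper's two-sensor and single-sensor comparison scenarios with the same classifier.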
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 171 - 190 [article]
Copies (3):
Barcode 081-2022051 | Call number SL | Type: Journal | Location: Centre de documentation | Section: Reading-room journals | Availability: Available
Barcode 081-2022053 | Call number DEP-RECP | Type: Journal | Location: LASTIG | Section: Unit deposit | Availability: Not for loan
Barcode 081-2022052 | Call number DEP-RECF | Type: Journal | Location: Nancy | Section: Unit deposit | Availability: Not for loan

More results in this category:
Multi-modal temporal attention models for crop mapping from satellite time series / Vivien Sainte Fare Garnot in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
Unmixing-based spatiotemporal image fusion accounting for complex land cover changes / Xiaolu Jiang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Combined use of Sentinel-1 and Sentinel-2 data for improving above-ground biomass estimation / Narissara Nuthammachot in Geocarto international, vol 37 n° 2 ([15/01/2022])
Fusion de données hyperspectrales et panchromatiques dans le domaine réflectif / Yohann Constans (2022)
A novel unmixing-based hypersharpening method via convolutional neural network / Xiaochen Lu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
Integrating spatio-temporal-spectral information for downscaling Sentinel-3 OLCI images / Yijie Tang in ISPRS Journal of photogrammetry and remote sensing, vol 180 (October 2021)
A methodology for producing realistic hill-shading map based on shaded relief map, digital orthophotographic map fusion and IHS transformation / Hongyun Zeng in Annals of GIS, vol 27 n° 4 (October 2021)
A novel method based on deep learning, GIS and geomatics software for building a 3D city model from VHR satellite stereo imagery / Massimiliano Pepe in ISPRS International journal of geo-information, vol 10 n° 10 (October 2021)
Recurrent-based regression of Sentinel time series for continuous vegetation monitoring / Anatol Garioud in Remote sensing of environment, vol 263 (15 September 2021)