Descriptor
IGN terms > computer science > artificial intelligence > machine learning > unsupervised learning
Documents available in this category (18)
Graph learning based on signal smoothness representation for homogeneous and heterogeneous change detection / David Alejandro Jimenez-Sierra in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
[article]
Title: Graph learning based on signal smoothness representation for homogeneous and heterogeneous change detection
Document type: Article/Communication
Authors: David Alejandro Jimenez-Sierra, Author; David Alfredo Quintero-Olaya, Author; Juan Carlos Alvear-Muñoz, Author; et al., Author
Year of publication: 2022
Article (pages): n° 4410416
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] change detection
[IGN terms] graph
[IGN terms] multiband image
[IGN terms] speckled radar image
[IGN terms] Cohen's kappa
[IGN terms] data smoothing
[IGN terms] Gaussian process
[IGN terms] semantic network
[IGN terms] image segmentation
[IGN terms] thresholding
[IGN terms] superpixel
Abstract: (author) Graph-based methods are promising approaches for traditional and modern techniques in change detection (CD) applications. Nonetheless, some graph-based approaches omit useful priors that account for the structure of a scene and for the inter- and intra-relationships between pixels. To address this issue, in this article we propose a framework for CD based on graph fusion and driven by graph signal smoothness representation. In addition to modifying the graph learning stage, in the proposed model we apply a Gaussian mixture model for superpixel segmentation (GMMSP) as a downsampling module to reduce the computational cost required to learn the graph over the entire images. We carry out tests on 14 real cases of natural disasters, farming, and construction. The dataset contains homogeneous cases with multispectral (MS) and synthetic aperture radar (SAR) images, along with heterogeneous cases that include MS/SAR images. We compare our approach against probabilistic thresholding, unsupervised learning, deep learning, and graph-based methods. In terms of Cohen's kappa coefficient, our proposed model based on graph signal smoothness representation outperformed state-of-the-art approaches on ten out of 14 datasets.
Record number: A2022-379
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3168126
Online publication date: 18/04/2022
Online: https://doi.org/10.1109/TGRS.2022.3168126
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100643
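The comparison above is reported in terms of Cohen's kappa coefficient. As a minimal pure-Python illustration of that metric for binary change maps (a sketch, not the authors' code; the toy labels below are invented):

```python
def cohens_kappa(pred, ref):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(pred) == len(ref)
    n = len(pred)
    # observed agreement: fraction of positions where the maps agree
    po = sum(p == r for p, r in zip(pred, ref)) / n
    # expected chance agreement, from the marginal label frequencies
    labels = set(pred) | set(ref)
    pe = sum((pred.count(c) / n) * (ref.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# toy example: change (1) / no-change (0) over 8 pixels
pred = [1, 1, 0, 0, 1, 0, 0, 0]
ref  = [1, 0, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(pred, ref), 3))  # → 0.467
```

Unlike raw accuracy, kappa discounts the agreement expected by chance, which matters in CD where the no-change class usually dominates.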
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 4 (April 2022) . - n° 4410416 [article]

PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data
Document type: Article/Communication
Authors: Qi Zhang, Author; Linlin Ge, Author; Scott Hensley, Author; et al., Author
Year of publication: 2022
Article (pages): pp 123 - 139
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] discriminant analysis
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] L band
[IGN terms] lidar data
[IGN terms] boreal forest
[IGN terms] tropical forest
[IGN terms] Global Ecosystem Dynamics Investigation lidar
[IGN terms] vegetation height
[IGN terms] tree height
[IGN terms] UAV-acquired image
[IGN terms] synthetic aperture radar interferometry
[IGN terms] pansharpening (image fusion)
[IGN terms] geometric resolution
[IGN terms] generative adversarial network
[IGN terms] point cloud
Abstract: (author) This paper describes a deep-learning-based unsupervised forest height estimation method based on the synergy of high-resolution L-band repeat-pass Polarimetric Synthetic Aperture Radar Interferometry (PolInSAR) and low-resolution large-footprint full-waveform Light Detection and Ranging (LiDAR) data. Unlike traditional PolInSAR-based methods, the proposed method reformulates forest height inversion as a pan-sharpening process between the low-resolution LiDAR height and the high-resolution PolSAR and PolInSAR features. A tailored Generative Adversarial Network (GAN) called PolGAN, with one generator and dual (coherence and spatial) discriminators, is proposed to this end, where a progressive pan-sharpening strategy underpins the generator to overcome the significant difference between the spatial resolutions of the LiDAR and SAR-related inputs. Forest height estimates with high spatial resolution and vertical accuracy are generated through a continuous generative and adversarial process. UAVSAR PolInSAR and LVIS LiDAR data collected over tropical and boreal forest sites are used for the experiments. An ablation study conducted over the boreal site evidences the superiority of the progressive generator with dual discriminators employed in PolGAN (RMSE: 1.21 m) in comparison with the standard generator with dual discriminators (RMSE: 2.43 m) and the progressive generator with a single coherence (RMSE: 2.74 m) or spatial discriminator (RMSE: 5.87 m). In addition, by reducing the dependency on theoretical models and utilizing the shape, texture, and spatial information embedded in the high-spatial-resolution features, the PolGAN method achieves an RMSE of 2.37 m over the tropical forest site, which is much more accurate than the traditional PolInSAR-based Kapok method (RMSE: 8.02 m).
Record number: A2022-195
Authors' affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.02.008
Online publication date: 17/02/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.02.008
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99962
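The figures quoted in the ablation study above (e.g. 1.21 m vs 2.43 m) are root-mean-square errors between estimated and reference forest heights. A minimal pure-Python sketch of the metric (the height values below are invented, not the paper's data):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between two equal-length height lists (metres)."""
    assert len(estimated) == len(reference)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(estimated))

# toy forest-height example in metres, illustrative only
est = [20.1, 18.4, 25.0, 30.2]
ref = [19.0, 18.0, 26.5, 29.8]
print(round(rmse(est, ref), 2))  # → 0.97
```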
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022) . - pp 123 - 139 [article]

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2022041 | SL | Journal | Documentation centre | Journals room | Available
081-2022043 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2022042 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Aboveground biomass of salt-marsh vegetation in coastal wetlands: Sample expansion of in situ hyperspectral and Sentinel-2 data using a generative adversarial network / Chen Chen in Remote sensing of environment, vol 270 (March 2022)
[article]
Title: Aboveground biomass of salt-marsh vegetation in coastal wetlands: Sample expansion of in situ hyperspectral and Sentinel-2 data using a generative adversarial network
Document type: Article/Communication
Authors: Chen Chen, Author; Yi Ma, Author; Guangbo Ren, Author; et al., Author
Year of publication: 2022
Article (pages): n° 112885
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] aboveground biomass
[IGN terms] land-cover map
[IGN terms] thematic map
[IGN terms] hyperspectral image
[IGN terms] Sentinel-MSI image
[IGN terms] coastline
[IGN terms] salt marsh
[IGN terms] generative adversarial network
Abstract: (author) Coastal wetlands are key components of the "blue carbon" ecosystems in coastal zones. Salt-marsh biomass is especially important for climate-change mitigation. Generating high-precision biomass maps for evaluating the ecological functions of coastal wetlands is essential; however, conducting accurate biomass inversions with limited in situ observations from coastal wetlands is challenging. We propose a generative adversarial network with a constrained factor model (GAN-CF) for expanding limited in situ salt-marsh biomass observations. We used Sentinel-2 images and a deep belief network based on the conjugate gradient method (CG-DBN) to obtain land-cover maps and the salt-marsh distribution (species: Phragmites australis, Suaeda glauca, Spartina alterniflora, and mixed species dominated by Tamarix chinensis) in the study area. This study bridges in situ hyperspectral and Sentinel-2 multispectral data through a satellite-band equivalent conversion model. The biomass and multispectral data derived from Sentinel-2 were used as input for the proposed GAN-CF model, which produced and constrained the generated samples based on the features (i.e., spectra, vegetation index, and biomass) of the in situ observations. Aboveground biomass (AGB) maps at 10-m spatial resolution were produced by constructing multiple linear regression models (MLRMs) based on the generated samples of each salt-marsh type using Sentinel-2 images. The quantity and richness of the generated samples improved the AGB estimations in the study area. The inversion accuracy for S. alterniflora was significantly improved (RMSE = 3.71 Mg/ha); the estimated AGB was strongly related to the in situ observations (R = 0.923) and was validated against them. The total salt-marsh AGB in the study area in 2019 was estimated at 2.36 × 10⁵ Mg, with an average of 7.95 Mg/ha. The salt-marsh biomass in decreasing order was: P. australis (12.7 Mg/ha) > S. alterniflora (11.5 Mg/ha) > mixed species (8.97 Mg/ha) > S. glauca (2.18 Mg/ha). The salt-marsh area in decreasing order was: S. glauca (10,410 ha) > P. australis (7320 ha) > mixed species (6740 ha) > S. alterniflora (5240 ha). As a feasibility analysis, we estimated biomass using the generated samples from Sentinel-2 data covering the Yellow River delta wetland in May, July, and September 2019 and the Jiaozhou Bay wetland in September 2019. The generated samples based on the 2013-2019 in situ observations constitute a salt-marsh biomass database, which can be useful for quantifying regional carbon storage and for ecological restoration monitoring.
Record number: A2022-128
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.rse.2021.112885
Online publication date: 07/01/2022
Online: https://doi.org/10.1016/j.rse.2021.112885
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99710
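The AGB inversion described above relies on multiple linear regression models (MLRMs) fitted per salt-marsh type. A minimal single-predictor sketch of the regression step (one spectral predictor for brevity; all values below are invented for illustration, not the paper's models):

```python
def fit_linear(x, y):
    """Least-squares fit of y ≈ a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # closed-form slope: covariance over variance of the predictor
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# toy samples: a vegetation index vs biomass (Mg/ha), illustrative only
ndvi = [0.2, 0.4, 0.6, 0.8]
agb  = [2.0, 5.0, 8.0, 11.0]
a, b = fit_linear(ndvi, agb)
print(round(a, 2), round(b, 2))  # → 15.0 -1.0
```

The full model in the paper uses several predictors per species; the same least-squares principle applies with a design matrix instead of a single column.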
in Remote sensing of environment > vol 270 (March 2022) . - n° 112885 [article]

Neural map style transfer exploration with GANs / Sidonie Christophe in International journal of cartography, vol 8 n° 1 (March 2022)
[article]
Title: Neural map style transfer exploration with GANs
Document type: Article/Communication
Authors: Sidonie Christophe, Author; Samuel Mermet, Author; Morgan Laurent, Author; Guillaume Touya, Author
Year of publication: 2022
Projects: 1 - No project
Article (pages): pp 18 - 36
General note: bibliography
Languages: English (eng)
Descriptor: [IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] convolutional neural network classification
[IGN terms] training data (machine learning)
[IGN terms] sampling grid
[IGN terms] orthoimage
[IGN terms] cartographic representation
[IGN terms] generative adversarial network
[IGN terms] cartographic style
[IGN terms] cartographic visualization
[IGN subject headings] Geovisualization
Abstract: (author) Neural style transfer is a computer vision topic that aims to transfer the visual appearance, or style, of images to other images. Developments in deep learning can generate stylized images from texture-based examples or transfer the style of a photograph to another one. In map design, style is a multi-dimensional, complex problem related to recognizable visual salient features and topological arrangements, supporting the description of geographic spaces at a specific scale. Map style transfer remains an open problem for generating a diversity of possible new styles to render geographical features. Generative adversarial network (GAN) techniques, which support image-to-image translation tasks well, offer new perspectives for map style transfer. We propose to use accessible GAN architectures in order to experiment with and assess neural map style transfer to ortho-images, using different map designs of various geographic spaces, from simple-styled (Plan maps) to complex-styled (old Cassini, Etat-Major, or Scan50 B&W). This transfer task and our global protocol are presented, including the sampling grid, the training and testing of Pix2Pix and CycleGAN models, and the perceptual assessment of the generated outputs. Promising results are discussed, opening research issues for neural map style transfer exploration with GANs.
Record number: A2022-172
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/23729333.2022.2031554
Online publication date: 13/02/2022
Online: https://doi.org/10.1080/23729333.2022.2031554
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99807
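The protocol above includes a sampling grid that cuts map and ortho-image rasters into fixed-size tiles to build paired training samples for Pix2Pix-style training. A minimal sketch of such a grid, assuming an illustrative tile size and stride (not values from the paper):

```python
def tile_origins(width, height, tile=256, stride=256):
    """Top-left corners of all full tiles covering a width x height raster."""
    return [(x, y)
            for y in range(0, height - tile + 1, stride)
            for x in range(0, width - tile + 1, stride)]

# a 1024 x 512 ortho-image yields a 4 x 2 grid of 256-px tiles
origins = tile_origins(1024, 512)
print(len(origins))  # → 8
```

With stride < tile the tiles overlap, a common way to augment the training set when paired map/ortho-image data is scarce.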
in International journal of cartography > vol 8 n° 1 (March 2022) . - pp 18 - 36 [article]

Building footprint extraction in Yangon city from monocular optical satellite image using deep learning / Hein Thura Aung in Geocarto international, vol 37 n° 3 ([01/02/2022])
[article]
Title: Building footprint extraction in Yangon city from monocular optical satellite image using deep learning
Document type: Article/Communication
Authors: Hein Thura Aung, Author; Sao Hone Pha, Author; Wataru Takeuchi, Author
Year of publication: 2022
Article (pages): pp 792 - 812
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] Myanmar
[IGN terms] building detection
[IGN terms] footprint
[IGN terms] GeoEye image
[IGN terms] single image
[IGN terms] generative adversarial network
[IGN terms] monocular vision
Abstract: (author) In this research, building footprints in Yangon City, Myanmar are extracted from a monocular optical satellite image alone using a conditional generative adversarial network (CGAN). Both the training dataset and the validation dataset are created from a GeoEye image of Dagon Township in Yangon City. Eight training models are created by varying three training parameters: the learning rate, the β1 term of Adam, and the number of filters in the first convolution layer of the generator and the discriminator. The images of the validation dataset are divided into four groups: trees, buildings, mixed trees and buildings, and pagodas. The output images of the eight trained models are converted to vector images and then evaluated against manually digitized polygons using completeness, correctness and the F1 measure. According to the results, with CGAN, building footprints can be extracted from a monocular optical satellite image alone with up to 71% completeness, 81% correctness and a 69% F1 score.
Record number: A2022-345
Authors' affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1740949
Online publication date: 20/03/2020
Online: https://doi.org/10.1080/10106049.2020.1740949
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100526
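The evaluation above scores extracted polygons with completeness (recall), correctness (precision) and the F1 measure. A minimal sketch of those metrics from true-positive, false-positive and false-negative counts (the toy counts below are invented, not the paper's results):

```python
def completeness(tp, fn):
    """Fraction of reference buildings that were found (recall)."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Fraction of extracted buildings that are real (precision)."""
    return tp / (tp + fp)

def f1(tp, fp, fn):
    """Harmonic mean of completeness and correctness."""
    comp, corr = completeness(tp, fn), correctness(tp, fp)
    return 2 * comp * corr / (comp + corr)

# toy counts of matched / spurious / missed building polygons
tp, fp, fn = 71, 17, 29
print(round(completeness(tp, fn), 2),
      round(correctness(tp, fp), 2),
      round(f1(tp, fp, fn), 2))  # → 0.71 0.81 0.76
```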
in Geocarto international > vol 37 n° 3 [01/02/2022] . - pp 792 - 812 [article]

Other documents in this category:
- Siamese Adversarial Network for image classification of heavy mineral grains / Huizhen Hao in Computers & geosciences, vol 159 (February 2022)
- Apprentissage de représentations et modèles génératifs profonds dans les systèmes dynamiques [Representation learning and deep generative models in dynamical systems] / Jean-Yves Franceschi (2022)
- Contribution to object extraction in cartography: A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)
- Deep image translation with an affinity-based change prior for unsupervised multimodal change detection / Luigi Tommaso Luppino in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
- Self-attention and generative adversarial networks for algae monitoring / Nhut Hai Huynh in European journal of remote sensing, vol 55 n° 1 (2022)
- Unsupervised generative models for data analysis and explainable artificial intelligence / Mohanad Abukmeil (2022)
- Building detection with convolutional networks trained with transfer learning / Simon Šanca in Geodetski vestnik, vol 65 n° 4 (December 2021 - February 2022)
- Utilisation de l’apprentissage profond dans la modélisation 3D urbaine : partie 2, post-traitement et évaluation [Using deep learning in urban 3D modelling: part 2, post-processing and evaluation] / Hamza Ben Addou in Géomatique expert, n° 136 (November - December 2021)