Descripteur
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > données d'entrainement (apprentissage automatique)
données d'entrainement (apprentissage automatique)
Synonym(s): base d'apprentissage
Documents available in this category (93)
Deriving map images of generalised mountain roads with generative adversarial networks / Azelle Courtial in International journal of geographical information science IJGIS, vol 37 n° 3 (March 2023)
[article]
Title: Deriving map images of generalised mountain roads with generative adversarial networks
Document type: Article/Communication
Authors: Azelle Courtial, Author; Guillaume Touya, Author; Xiang Zhang, Author
Publication year: 2023
Pages: pp 499-528
General note: bibliography
Language: English (eng)
Descriptors:
[Termes IGN] analyse comparative
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage non-dirigé
[Termes IGN] carte routière
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] généralisation cartographique automatisée
[Termes IGN] montagne
[Termes IGN] réseau antagoniste génératif
[Vedettes matières IGN] Généralisation
Abstract: (author) Map generalisation is a process that transforms geographic information for cartographic representation at a specific scale. The goal is to produce legible and informative maps, even at small scales, from a detailed dataset. The potential of deep learning to help in this task is still unknown. This article examines the use case of mountain road generalisation to explore the potential of a specific deep learning approach: generative adversarial networks (GAN). Our goal is to generate images that depict road maps generalised at the 1:250k scale from images that depict road maps of the same area using un-generalised 1:25k data. This paper not only shows the potential of deep learning to generate generalised mountain roads, but also analyses how the deep learning generalisation process works, compares supervised and unsupervised learning, and explores possible improvements. With this experiment we have exhibited an unsupervised model that is able to generate generalised maps evaluated as being as good as the reference, and we review some possible improvements for deep learning-based generalisation, including training set management and the definition of a new road connectivity loss. All our results are evaluated visually using a four-question process and validated by a user test conducted on 113 individuals.
Record number: A2023-073
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2022.2123488
Online publication date: 20/10/2022
Online: https://doi.org/10.1080/13658816.2022.2123488
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101901
in International journal of geographical information science IJGIS > vol 37 n° 3 (March 2023) . - pp 499-528 [article]

Comparative analysis of different CNN models for building segmentation from satellite and UAV images / Batuhan Sariturk in Photogrammetric Engineering & Remote Sensing, PERS, vol 89 n° 2 (February 2023)
[article]
Title: Comparative analysis of different CNN models for building segmentation from satellite and UAV images
Document type: Article/Communication
Authors: Batuhan Sariturk, Author; Damla Kumbasar, Author; Dursun Zafer Seker, Author
Publication year: 2023
Pages: pp 97-105
General note: bibliography
Language: English (eng)
Descriptors:
[Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] bati
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] image captée par drone
[Termes IGN] image satellite
[Termes IGN] segmentation sémantique
Abstract: (author) Building segmentation has numerous application areas, such as urban planning and disaster management. In this study, 12 CNN models (U-Net, FPN, and LinkNet using an EfficientNet-B5 backbone; U-Net, SegNet, and FCN; and six Residual U-Net models) were generated and used for building segmentation. The Inria Aerial Image Labeling Data Set was used to train the models, and three data sets (Inria Aerial Image Labeling Data Set, Massachusetts Buildings Data Set, and Syedra Archaeological Site Data Set) were used to evaluate the trained models. On the Inria test set, Residual-2 U-Net has the highest F1 and Intersection over Union (IoU) scores, with 0.824 and 0.722, respectively. On the Syedra test set, LinkNet-EfficientNet-B5 has F1 and IoU scores of 0.336 and 0.246. On the Massachusetts test set, Residual-4 U-Net has F1 and IoU scores of 0.394 and 0.259. For all sets, at least two of the top three models used residual connections; in this study, residual connections were therefore more successful than conventional convolutional layers.
Record number: A2023-143
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.22-00084R2
Online publication date: 01/02/2023
Online: https://doi.org/10.14358/PERS.22-00084R2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102718
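The F1 and Intersection over Union (IoU) scores quoted in this abstract follow directly from per-pixel confusion counts on binary building masks. A minimal pure-Python sketch (the function names and the flat 0/1-list mask representation are illustrative, not the paper's code):

```python
def confusion(pred, truth):
    """Per-pixel confusion counts for two binary masks given as flat lists of 0/1."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)  # building, correctly found
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)  # false alarm
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)  # missed building pixel
    return tp, fp, fn

def f1_score(pred, truth):
    """F1 = 2*TP / (2*TP + FP + FN), i.e. the harmonic mean of precision and recall."""
    tp, fp, fn = confusion(pred, truth)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def iou(pred, truth):
    """IoU = TP / (TP + FP + FN): intersection over union of the two building masks."""
    tp, fp, fn = confusion(pred, truth)
    return tp / (tp + fp + fn) if tp else 0.0
```

Because IoU's denominator also counts every error once but its numerator lacks the factor of 2, IoU is always at most F1, which matches the paired scores reported above (e.g. 0.824 vs 0.722).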
in Photogrammetric Engineering & Remote Sensing, PERS > vol 89 n° 2 (February 2023) . - pp 97-105 [article]

Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping / Luc Baudoux in International journal of geographical information science IJGIS, vol 37 n° 2 (February 2023)
[article]
Title: Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping
Document type: Article/Communication
Authors: Luc Baudoux, Author; Jordi Inglada, Author; Clément Mallet, Author
Publication year: 2023
Projects: AI4GEO
Pages: pp 403-437
General note: bibliography
Language: English (eng)
Descriptors:
[Vedettes matières IGN] Cartographie thématique
[Termes IGN] apprentissage profond
[Termes IGN] carte d'occupation du sol
[Termes IGN] carte d'utilisation du sol
[Termes IGN] carte thématique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] harmonisation des données
[Termes IGN] nomenclature
[Termes IGN] pouvoir de résolution géométrique
Abstract: (author) Land-use/land-cover (LULC) maps describe the Earth's surface with discrete classes at a specific spatial resolution. The chosen classes and resolution depend strongly on the intended use, making it necessary to develop methods that adapt these characteristics to a large range of applications. Recently, a convolutional neural network (CNN)-based method was introduced that takes into account both spatial and geographical context to translate one LULC map into another. However, this model only works for two maps: one source and one target. Inspired by natural language translation using multiple-language models, this article explores how to translate one LULC map into several targets with distinct nomenclatures and spatial resolutions. We first propose a new data set based on six open-access LULC maps to train our CNN-based encoder-decoder framework. We then apply this framework to convert each of these six maps into each of the others using our Multi-Landcover Translation network (MLCT-Net). Extensive experiments are conducted at a country scale (namely France). The results reveal that our MLCT-Net outperforms its semantic counterparts and gives on-par results with mono-LULC models when evaluated on areas similar to those used for training. Furthermore, it outperforms the mono-LULC models when applied to totally new landscapes.
Record number: A2023-075
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2022.2120996
Online publication date: 10/10/2022
Online: https://doi.org/10.1080/13658816.2022.2120996
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101797
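The translation task this abstract describes can be illustrated without a network: relabelling classes through a nomenclature crosswalk and coarsening spatial resolution by majority vote are the two operations an LULC translator must learn jointly (MLCT-Net learns them from context rather than applying fixed rules). A toy sketch, assuming a hypothetical two-nomenclature crosswalk; the class names and grid encoding are invented for illustration and are not taken from the paper:

```python
from collections import Counter

# Hypothetical crosswalk from a fine source nomenclature to a coarser target one.
CROSSWALK = {
    "deciduous_forest": "forest",
    "coniferous_forest": "forest",
    "wheat": "cropland",
    "vineyard": "cropland",
    "water": "water",
}

def translate(grid, crosswalk):
    """Relabel every cell of a source land-cover grid into the target nomenclature."""
    return [[crosswalk[c] for c in row] for row in grid]

def downsample(grid, factor):
    """Coarsen spatial resolution: majority vote over factor x factor cell blocks."""
    out = []
    for i in range(0, len(grid), factor):
        row = []
        for j in range(0, len(grid[0]), factor):
            block = [grid[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(Counter(block).most_common(1)[0][0])  # most frequent class wins
        out.append(row)
    return out
```

A fixed crosswalk like this is exactly what the learned model improves upon: where the mapping is ambiguous (one source class splitting into several target classes), MLCT-Net can use the spatial and geographical context that a per-cell lookup table cannot.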
in International journal of geographical information science IJGIS > vol 37 n° 2 (February 2023) . - pp 403-437 [article]

Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Title: Cross-supervised learning for cloud detection
Document type: Article/Communication
Authors: Kang Wu, Author; Zunxiao Xu, Author; Xinrong Lyu, Author; et al.
Publication year: 2023
Pages: n° 2147298
General note: bibliography
Language: English (eng)
Descriptors:
[Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] détection d'objet
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] nuage
Abstract: (author) We present a new learning paradigm, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm combines supervised training with mutually supervised training and is performed by two base networks. In addition to individual supervised training on labeled data, the two base networks perform mutually supervised training on unlabeled data, using the prediction results provided by each other. Specifically, we develop In-extensive Nets to implement the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (which tend to cost expensive manual effort) required for training. Experimental results verify that In-extensive Nets perform well and have an obvious advantage in situations where only a few labeled cloudy images are available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets.
Record number: A2023-190
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1080/15481603.2022.2147298
Online publication date: 03/01/2023
Online: https://doi.org/10.1080/15481603.2022.2147298
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
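The training loop this abstract describes alternates supervised steps on labeled data with mutual pseudo-labelling on unlabeled data: each base model is trained on the other's predictions. A toy sketch of that alternation, with one-parameter threshold classifiers standing in for the two base networks; everything here is illustrative (the paper's In-extensive Nets are deep networks, see the linked repository):

```python
class ThresholdModel:
    """Toy stand-in for one base network: classifies a scalar as cloud (1) / clear (0)."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def predict(self, x):
        return 1 if x > self.threshold else 0

    def fit_step(self, x, y, lr=0.1):
        # On a wrong prediction, nudge the threshold toward the label:
        # raise it when the label is "clear" (0), lower it when it is "cloud" (1).
        if self.predict(x) != y:
            self.threshold += lr if y == 0 else -lr

def cross_supervised_train(model_a, model_b, labeled, unlabeled, epochs=20):
    """Cross-supervised loop: supervised steps on labeled pairs, then mutual
    supervision where each model learns from the other's pseudo-labels."""
    for _ in range(epochs):
        for x, y in labeled:            # 1) individual supervised training
            model_a.fit_step(x, y)
            model_b.fit_step(x, y)
        for x in unlabeled:             # 2) mutual supervision on unlabeled data
            model_a.fit_step(x, model_b.predict(x))
            model_b.fit_step(x, model_a.predict(x))
    return model_a, model_b
```

The point of the second phase is that the two models start from different parameters, so their disagreements on unlabeled data provide extra training signal without any manual labels.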
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]

Decision tree-based machine learning models for above-ground biomass estimation using multi-source remote sensing data and object-based image analysis / Haifa Tamiminia in Geocarto international, vol 38 n° inconnu ([01/01/2023])
[article]
Title: Decision tree-based machine learning models for above-ground biomass estimation using multi-source remote sensing data and object-based image analysis
Document type: Article/Communication
Authors: Haifa Tamiminia, Author; Bahram Salehi, Author; Masoud Mahdianpari, Author; et al.
Publication year: 2023
General note: bibliography
Language: English (eng)
Descriptors:
[Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] analyse d'image orientée objet
[Termes IGN] biomasse aérienne
[Termes IGN] boosting adapté
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification pixellaire
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Extreme Gradient Machine
[Termes IGN] image ALOS-PALSAR
[Termes IGN] image Landsat
[Termes IGN] image Sentinel-MSI
[Termes IGN] image Sentinel-SAR
[Termes IGN] New York (Etats-Unis ; état)
[Termes IGN] réserve naturelle
Abstract: (author) Forest above-ground biomass (AGB) estimation provides valuable information about the carbon cycle. The overall goal of this paper is therefore to present an approach that enhances the accuracy of AGB estimation. The main objectives are to: 1) investigate the performance of remote sensing data sources, including airborne light detection and ranging (LiDAR), optical, and SAR data and their combination, to improve AGB predictions; 2) examine the capability of tree-based machine learning models; and 3) compare the performance of pixel-based and object-based image analysis (OBIA). To investigate the performance of machine learning models, multiple tree-based algorithms were fitted to predictors derived from airborne LiDAR data and Landsat, Sentinel-2, Sentinel-1, and PALSAR-2/PALSAR SAR data collected within New York's Adirondack Park. Combining remote sensing data from multiple sources improved the model accuracy (RMSE: 52.14 Mg ha−1 and R2: 0.49). There was no significant difference among the gradient boosting machine (GBM), random forest (RF), and extreme gradient boosting (XGBoost) models. In addition, pixel-based and object-based models were compared using the airborne LiDAR-derived AGB raster as a training/testing sample. The OBIA provided the best results, with an RMSE of 33.77 Mg ha−1 and an R2 of 0.81 for the combination of optical and SAR data in the GBM model.
Record number: A2022-331
Author affiliation: non-IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1080/10106049.2022.2071475
Online publication date: 27/04/2022
Online: https://doi.org/10.1080/10106049.2022.2071475
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100607
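The RMSE and R² figures this abstract reports are standard regression metrics comparing predicted against observed biomass values. A minimal pure-Python sketch of how such scores are computed (function and variable names are illustrative):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r2(pred, obs):
    """Coefficient of determination: 1 minus residual over total sum of squares."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot
```

Read together, a drop in RMSE from 52.14 to 33.77 Mg ha−1 alongside a rise in R² from 0.49 to 0.81 means the OBIA/GBM combination both shrinks the typical prediction error and explains far more of the variance in the reference biomass.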
in Geocarto international > vol 38 n° inconnu [01/01/2023] [article]

Semi-supervised label propagation for multi-source remote sensing image change detection / Fan Hao in Computers & geosciences, vol 170 (January 2023)
A survey and benchmark of automatic surface reconstruction from point clouds / Raphaël Sulzer (2023)
A deep learning framework based on generative adversarial networks and vision transformer for complex wetland classification using limited training samples / Ali Jamali in International journal of applied Earth observation and geoinformation, vol 115 (December 2022)
A machine learning approach for detecting rescue requests from social media / Zheye Wang in ISPRS International journal of geo-information, vol 11 n° 11 (November 2022)
Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope / V.S. Martins in Remote sensing of environment, vol 280 (October 2022)
An improved multi-task pointwise network for segmentation of building roofs in airborne laser scanning point clouds / Chaoquan Zhang in Photogrammetric record, vol 37 n° 179 (September 2022)
Analytical method for high-precision seabed surface modelling combining B-spline functions and Fourier series / Tyler Susa in Marine geodesy, vol 45 n° 5 (September 2022)
Crowdsourcing-based application to solve the problem of insufficient training data in deep learning-based classification of satellite images / Ekrem Saralioglu in Geocarto international, vol 37 n° 18 ([01/09/2022])
Learning indoor point cloud semantic segmentation from image-level labels / Youcheng Song in The Visual Computer, vol 38 n° 9 (September 2022)
Improving remote sensing classification: A deep-learning-assisted model / Tsimur Davydzenka in Computers & geosciences, vol 164 (July 2022)