Descriptor
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > données d'entrainement (apprentissage automatique)
données d'entrainement (apprentissage automatique). Synonym(s): base d'apprentissage
Documents available in this category (91)
MTLM: a multi-task learning model for travel time estimation / Saijun Xu in Geoinformatica, vol 26 n° 2 (April 2022)
[article]
Title: MTLM: a multi-task learning model for travel time estimation
Document type: Article/Communication
Authors: Saijun Xu; Ruoqian Zhang; Wanjun Cheng; Jiajie Xu
Year of publication: 2022
Pages: pp 379 - 395
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique
[Termes IGN] analyse coût-avantage
[Termes IGN] apprentissage automatique
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] durée de trajet
[Termes IGN] modèle de simulation
[Termes IGN] transport collectif
[Termes IGN] transport intermodal
Abstract: (author) Travel time estimation (TTE) is an important research topic in many geographic applications for smart city research. However, existing approaches either ignore the impact of transportation modes or assume the mode information is known for each training trajectory and each query input. In this paper, we propose a multi-task learning model for travel time estimation called MTLM, which recommends the appropriate transportation mode for users and then estimates the related travel time of the path. It integrates the transportation-mode recommendation task and the travel time estimation task to capture the mutual influence between them and obtain more accurate TTE results. Furthermore, it captures spatio-temporal dependencies and the effect of transportation mode by learning effective representations for TTE. It combines the transportation-mode recommendation loss and the TTE loss for training. Extensive experiments on real datasets demonstrate the effectiveness of our proposed methods.
Record number: A2022-325
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: https://doi.org/10.1007/s10707-020-00422-x
Online publication date: 15/08/2020
Online: https://doi.org/10.1007/s10707-020-00422-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100488
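The abstract combines a transportation-mode recommendation loss with a TTE loss for training. A minimal sketch of such a combined objective, assuming a cross-entropy mode loss, an absolute-percentage time loss, and a weighting factor `alpha` (all names and loss choices here are illustrative assumptions, not the article's formulation):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def multitask_loss(mode_logits, true_mode, tte_pred, tte_true, alpha=0.5):
    """Weighted sum of a cross-entropy mode-recommendation loss and an
    absolute-percentage travel-time loss (hypothetical weighting scheme)."""
    ce = -np.log(softmax(mode_logits)[true_mode])   # mode recommendation loss
    mape = abs(tte_pred - tte_true) / tte_true      # relative travel-time error
    return alpha * ce + (1 - alpha) * mape
```

Setting `alpha` to 0 or 1 recovers the single-task losses, which is how such a sketch would let the two tasks influence one another during joint training.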
in Geoinformatica > vol 26 n° 2 (April 2022) . - pp 379 - 395 [article]

Spatially oriented convolutional neural network for spatial relation extraction from natural language texts / Qinjun Qiu in Transactions in GIS, vol 26 n° 2 (April 2022)
[article]
Title: Spatially oriented convolutional neural network for spatial relation extraction from natural language texts
Document type: Article/Communication
Authors: Qinjun Qiu; Zhong Xie; Kai Ma; et al.
Year of publication: 2022
Pages: pp 839 - 866
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique web
[Termes IGN] appariement sémantique
[Termes IGN] apprentissage dirigé
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] exploration de données
[Termes IGN] langage naturel (informatique)
[Termes IGN] proximité sémantique
[Termes IGN] relation spatiale
[Termes IGN] relation topologique
[Termes IGN] site wiki
[Termes IGN] spatial metrics
[Termes IGN] système à base de connaissances
Abstract: (author) Spatial relation extraction (e.g., topological relations, directional relations, and distance relations) from natural language descriptions is a fundamental but challenging task in several practical applications. Current state-of-the-art methods rely on rule-based metrics, either those specifically developed for extracting spatial relations or those integrated in methods that combine multiple metrics. However, these methods all rely on hand-crafted rules and do not effectively capture the characteristics of natural-language spatial relations, because the descriptions may be heterogeneous, vague, and context-sparse. In this article, we present a spatially oriented piecewise convolutional neural network (SP-CNN) that is specifically designed with these linguistic issues in mind. Our method extends a general piecewise convolutional neural network with a set of improvements designed to tackle the task of spatial relation extraction. We also propose an automated workflow for generating training datasets by integrating new sentences with those in a knowledge base, based on string similarity and semantic similarity, and then transforming the sentences into training data. We exploit a spatially oriented channel that uses prior human knowledge to automatically match words and understand the linguistic clues to spatial relations, finally leading to an extraction decision. We present both the qualitative and quantitative performance of the proposed methodology using a large dataset collected from Wikipedia. The experimental results demonstrate that the SP-CNN, with its supervised machine learning, can significantly outperform current state-of-the-art methods on the constructed datasets.
Record number: A2022-365
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1111/tgis.12887
Online publication date: 27/12/2021
Online: https://doi.org/10.1111/tgis.12887
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100584
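The abstract describes generating training data by matching new sentences against a knowledge base via string similarity and semantic similarity. A hedged sketch of such a matching step, using `difflib` for string similarity and token overlap as a crude stand-in for semantic similarity (thresholds, field names, and the labelling rule are assumptions, not the article's workflow):

```python
from difflib import SequenceMatcher

def string_sim(a, b):
    """Character-level string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def jaccard(a, b):
    """Token-overlap (Jaccard) similarity, a crude stand-in for the
    semantic similarity used in the article."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def label_by_knowledge_base(sentence, kb, s_thr=0.6, j_thr=0.5):
    """Copy the spatial-relation label of the best-matching knowledge-base
    sentence when both similarities clear their (assumed) thresholds."""
    best = max(kb, key=lambda it: string_sim(sentence, it["text"])
                                  + jaccard(sentence, it["text"]))
    if (string_sim(sentence, best["text"]) >= s_thr
            and jaccard(sentence, best["text"]) >= j_thr):
        return best["label"]
    return None  # no confident match: leave the sentence unlabelled
```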
in Transactions in GIS > vol 26 n° 2 (April 2022) . - pp 839 - 866 [article]

Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification / Yuntao Liu in The Visual Computer, vol 38 n° 3 (March 2022)
[article]
Title: Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification
Document type: Article/Communication
Authors: Yuntao Liu; Yong Dou; Ruochun Jin; et al.
Year of publication: 2022
Pages: pp 897 - 917
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage automatique
[Termes IGN] classification bayesienne
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] réseau neuronal convolutif
[Termes IGN] segmentation sémantique
Abstract: (author) In this paper, a hierarchical learning algorithm based on the Bayesian Neural Network classifier with backtracking is proposed to support large-scale image classification, where a Visual Confusion Label Tree is established to construct a hierarchical structure over the large numbers of categories in image datasets and to determine the hierarchical learning tasks automatically. Specifically, the Visual Confusion Label Tree is built from the outputs of convolutional neural network models. A parent node of the Visual Confusion Label Tree contains a set of sibling coarse-grained categories, and its child nodes hold sets of fine-grained categories that partition the categories of the parent node. The proposed Hierarchical Bayesian Neural Network with backtracking benefits from the hierarchical structure of the Visual Confusion Label Tree: focusing on confusion subsets instead of the entire set of categories makes the tree classifier more discriminative. The backtracking algorithm uses the uncertainty information captured by the Bayesian Neural Network to perform a second classification that re-corrects samples classified incorrectly in the previous pass. Experiments on four large-scale datasets show that our tree classifier obtains a significant improvement over the state-of-the-art tree classifier, which demonstrates the discriminative hierarchical structure of our Visual Confusion Label Tree and the effectiveness of our Hierarchical Bayesian Neural Network with backtracking.
Record number: A2022-149
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1007/s00371-021-02058-w
Online publication date: 04/02/2021
Online: http://dx.doi.org/10.1007/s00371-021-02058-w
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100070
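The abstract describes a backtracking step driven by uncertainty estimates from the Bayesian Neural Network. A simplified two-level sketch of entropy-driven backtracking on a label tree (the entropy threshold `tau` and all function names are assumptions standing in for the article's Bayesian uncertainty mechanism):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a (normalised) probability vector."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def classify_with_backtracking(coarse_probs, fine_probs_by_node, tau=1.0):
    """Try coarse nodes in order of confidence and accept the first
    fine-grained prediction whose entropy is at most tau; otherwise
    back-track to the next coarse node (two-level illustration only)."""
    order = np.argsort(np.asarray(coarse_probs))[::-1]   # best coarse node first
    for node in order:
        fine = fine_probs_by_node[int(node)]
        if entropy(fine) <= tau:                         # confident enough: accept
            return int(node), int(np.argmax(fine))
    node = int(order[0])                                 # all uncertain: fall back
    return node, int(np.argmax(fine_probs_by_node[node]))
```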
in The Visual Computer > vol 38 n° 3 (March 2022) . - pp 897 - 917 [article]

Neural map style transfer exploration with GANs / Sidonie Christophe in International journal of cartography, vol 8 n° 1 (March 2022)
[article]
Title: Neural map style transfer exploration with GANs
Document type: Article/Communication
Authors: Sidonie Christophe; Samuel Mermet; Morgan Laurent; Guillaume Touya
Year of publication: 2022
Projects: 1-No project
Pages: pp 18 - 36
General note: bibliography
Language: English (eng)
Descriptors: [Termes IGN] apprentissage profond
[Termes IGN] classification non dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] grille d'échantillonnage
[Termes IGN] orthoimage
[Termes IGN] représentation cartographique
[Termes IGN] réseau antagoniste génératif
[Termes IGN] style cartographique
[Termes IGN] visualisation cartographique
[Vedettes matières IGN] Géovisualisation
Abstract: (author) Neural style transfer is a computer vision topic intended to transfer the visual appearance, or style, of images to other images. Developments in deep learning can generate stylized images from texture-based examples or transfer the style of one photograph to another. In map design, style is a multi-dimensional, complex problem related to recognizable visually salient features and topological arrangements, supporting the description of geographic spaces at a specific scale. Map style transfer remains an open problem for generating a diversity of possible new styles to render geographical features. Generative Adversarial Network (GAN) techniques, which support image-to-image translation tasks well, offer new perspectives for map style transfer. We propose to use accessible GAN architectures to experiment with and assess neural map style transfer onto ortho-images, using different map designs of various geographic spaces, from simple-styled (Plan maps) to complex-styled (old Cassini, Etat-Major, or Scan50 B&W). This transfer task and our global protocol are presented, including the sampling grid, the training and testing of Pix2Pix and CycleGAN models, and the perceptual assessment of the generated outputs. Promising results are discussed, opening research issues for neural map style transfer exploration with GANs.
Record number: A2022-172
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATIQUE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/23729333.2022.2031554
Online publication date: 13/02/2022
Online: https://doi.org/10.1080/23729333.2022.2031554
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99807
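The protocol mentions a sampling grid for preparing Pix2Pix/CycleGAN training tiles from map and ortho-image pairs. A minimal sketch of such a regular grid, with assumed tile size and stride (the article's actual grid parameters are not given here):

```python
def tile_coords(width, height, tile=256, stride=256):
    """Top-left corners of a regular sampling grid over an ortho-image;
    partial tiles at the right/bottom borders are dropped. Tile size and
    stride are placeholders, not values from the article."""
    return [(x, y)
            for y in range(0, height - tile + 1, stride)
            for x in range(0, width - tile + 1, stride)]
```

An overlapping grid (stride smaller than tile) would yield more, partially redundant training pairs.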
in International journal of cartography > vol 8 n° 1 (March 2022) . - pp 18 - 36 [article]

Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach / Linyuan Li in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
[article]
Title: Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
Document type: Article/Communication
Authors: Linyuan Li; Xihan Mu; Francesco Chianucci; et al.
Year of publication: 2022
Pages: n° 102686
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] algorithme SLIC
[Termes IGN] apprentissage profond
[Termes IGN] canopée
[Termes IGN] carte forestière
[Termes IGN] Chine
[Termes IGN] classification par maximum de vraisemblance
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] couvert forestier
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] faisceau laser
[Termes IGN] forêt boréale
[Termes IGN] image captée par drone
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] modèle numérique de terrain
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] sous-étage
[Termes IGN] structure-from-motion
Abstract: (author) Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and by the illumination environment. In this study, we investigated the integration of deep learning with combined imagery and photogrammetric point-cloud data for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree-crown (overstorey) and background (understorey) data by combining UAV images with their associated photogrammetric point clouds, and expands the applicability of deep learning models through self-supervision. Based on UAV images with different overlap levels for 12 conifer forest plots, categorized into "I", "II" and "III" complexity levels according to the illumination environment, we compared the self-supervised deep-learning-predicted canopy maps from original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for "complexity I" and "complexity II" plots and larger than 0.75 for "complexity III" plots. The proposed method was then compared with three classical image segmentation methods (maximum likelihood, K-means, and Otsu) for plot-level crown cover estimation, outperforming the other methods in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates using UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18).
We subsequently compared the new method and the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and over a rugged terrain region; the results showed that the crown cover predicted by the new method was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased with decreasing overlap. In addition, canopy mapping over rugged terrain verified the merits of the new method, with no need for a detailed digital terrain model (DTM). The new method is recommended for use with various image overlaps, illuminations, and terrains due to its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation).
Record number: A2022-192
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.jag.2022.102686
Online publication date: 05/02/2022
Online: https://doi.org/10.1016/j.jag.2022.102686
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99951
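The abstract describes deriving overstorey/understorey training labels from photogrammetric point clouds and evaluating predictions with IoU. A hedged sketch of a height-threshold labelling step on a canopy height model, with an assumed 2 m crown threshold (the article's actual label-creation procedure is more involved):

```python
import numpy as np

def crown_mask(chm, height_thr=2.0):
    """Binary overstorey mask from a canopy height model: pixels above the
    (assumed) height threshold are labelled tree crown."""
    return (np.asarray(chm, dtype=float) > height_thr).astype(np.uint8)

def crown_cover(mask):
    """Fraction of pixels labelled tree crown."""
    return float(np.asarray(mask).mean())

def iou(a, b):
    """Intersection over union between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0
```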
in International journal of applied Earth observation and geoinformation > vol 107 (March 2022) . - n° 102686 [article]

Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation / Ramazan Unlu in The Visual Computer, vol 38 n° 2 (February 2022)
GisGCN: a visual graph-based framework to match geographical areas through time / Margarita Khokhlova in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)
A benchmark of named entity recognition approaches in historical documents : application to 19th century French directories / Nathalie Abadie (2022)
Detection of windthrown tree stems on UAV-orthomosaics using U-Net convolutional networks / Stefan Reder in Remote sensing, vol 14 n° 1 (January-1 2022)
Flood susceptibility mapping using meta-heuristic algorithms / Alireza Arabameri in Geomatics, Natural Hazards and Risk, vol 13 (2022)
Génération d’un jeu de données d’entraînement et mise en oeuvre d’une architecture de détection par deep learning des numéros de parcelles sur les plans du cadastre Napoléonien / Tiecoumba Ibrahim Tamela (2022)
Global and climate challenges, graph-based data analysis for multisource information extraction / Morgane Batelier (2022)