Descriptor
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > apprentissage dirigé
apprentissage dirigé — Synonym(s): apprentissage supervisé
Documents available in this category (197)
Improvement in crop mapping from satellite image time series by effectively supervising deep neural networks / Sina Mohammadi in ISPRS Journal of photogrammetry and remote sensing, vol 198 (April 2023)
[article]
Title: Improvement in crop mapping from satellite image time series by effectively supervising deep neural networks
Document type: Article/Communication
Authors: Sina Mohammadi; Mariana Belgiu; Alfred Stein
Year of publication: 2023
Pagination: pp 272 - 283
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] carte de la végétation
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification par réseau neuronal récurrent
[Termes IGN] cultures
[Termes IGN] image Landsat-ETM+
[Termes IGN] image Landsat-OLI
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] série temporelle
Abstract: (author) Deep learning methods have achieved promising results in crop mapping using satellite image time series. A challenge still remains: how to learn feature representations discriminative enough to detect crop types when the model is applied to unseen data. To address this challenge, and to show the importance of properly supervising deep neural networks, we propose to supervise intermediate layers of a designed 3D Fully Convolutional Neural Network (FCN) with two middle-supervision methods: Cross-entropy loss Middle Supervision (CE-MidS) and a novel middle-supervision method, Supervised Contrastive loss Middle Supervision (SupCon-MidS). The latter pulls together features belonging to the same class in embedding space, while pushing apart features from different classes. We demonstrate that SupCon-MidS enhances feature discrimination and clustering throughout the network, thereby improving network performance. In addition, we employ two output-supervision methods, namely F1 loss and Intersection Over Union (IOU) loss. Our experiments on identifying corn, soybean, and the class Other from Landsat image time series in the U.S. corn belt show that the best set-up of our method, IOU+SupCon-MidS, outperforms the state-of-the-art methods by F1 scores of 3.5% and 0.5% on average when testing accuracy across a different year (local test) and across different regions (spatial test), respectively. Further, adding SupCon-MidS to the output-supervision methods improves F1 scores by 1.2% and 7.6% on average in the local and spatial tests, respectively. We conclude that proper supervision of deep neural networks plays a significant role in improving crop mapping performance. The code and data are available at: https://github.com/Sina-Mohammadi/CropSupervision.
Record number: A2023-203
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1016/j.isprsjprs.2023.03.007
Online publication date: 29/03/2023
Online: https://doi.org/10.1016/j.isprsjprs.2023.03.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103105
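The SupCon-MidS objective summarised above is the standard supervised contrastive loss applied to intermediate features. A minimal NumPy sketch with toy 2-D embeddings (illustrative only; the paper applies it to feature maps of its 3D FCN):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: pull same-class embeddings together,
    push different-class embeddings apart.

    features: (N, D) array of L2-normalised embeddings
    labels:   (N,) array of integer class ids
    """
    n = features.shape[0]
    sim = features @ features.T / temperature      # pairwise similarities
    not_self = ~np.eye(n, dtype=bool)              # exclude self-similarity
    log_denom = np.log((np.exp(sim) * not_self).sum(axis=1))
    total = 0.0
    for i in range(n):
        positives = not_self[i] & (labels == labels[i])
        if positives.any():
            # negative mean log-likelihood of anchor i's positives
            total += -np.mean(sim[i, positives] - log_denom[i])
    return total / n
```

Scrambling the labels of a well-clustered embedding raises the loss, which is the signal the middle-supervision branch back-propagates into intermediate layers.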
in ISPRS Journal of photogrammetry and remote sensing > vol 198 (April 2023) . - pp 272 - 283 [article]
Deriving map images of generalised mountain roads with generative adversarial networks / Azelle Courtial in International journal of geographical information science IJGIS, vol 37 n° 3 (March 2023)
[article]
Title: Deriving map images of generalised mountain roads with generative adversarial networks
Document type: Article/Communication
Authors: Azelle Courtial; Guillaume Touya; Xiang Zhang
Year of publication: 2023
Pagination: pp 499 - 528
General note: bibliography
Languages: English (eng)
Descriptors: [Termes IGN] analyse comparative
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage non-dirigé
[Termes IGN] carte routière
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] généralisation cartographique automatisée
[Termes IGN] montagne
[Termes IGN] réseau antagoniste génératif
[Vedettes matières IGN] Généralisation
Abstract: (author) Map generalisation is a process that transforms geographic information for cartographic representation at a specific scale. The goal is to produce legible and informative maps, even at small scales, from a detailed dataset. The potential of deep learning to help in this task is still unknown. This article examines the use case of mountain road generalisation to explore the potential of a specific deep learning approach: generative adversarial networks (GAN). Our goal is to generate images that depict road maps generalised at the 1:250k scale, from images that depict road maps of the same area using un-generalised 1:25k data. This paper not only shows the potential of deep learning to generate generalised mountain roads, but also analyses how the deep learning generalisation process works, compares supervised and unsupervised learning, and explores possible improvements. With this experiment we have exhibited an unsupervised model able to generate generalised maps evaluated as being as good as the reference, and we review possible improvements for deep-learning-based generalisation, including training-set management and the definition of a new road-connectivity loss. All our results are evaluated visually using a four-question process and validated by a user test conducted on 113 individuals.
Record number: A2023-073
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATIQUE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2022.2123488
Online publication date: 20/10/2022
Online: https://doi.org/10.1080/13658816.2022.2123488
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101901
in International journal of geographical information science IJGIS > vol 37 n° 3 (March 2023) . - pp 499 - 528 [article]
Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Title: Cross-supervised learning for cloud detection
Document type: Article/Communication
Authors: Kang Wu; Zunxiao Xu; Xinrong Lyu; et al.
Year of publication: 2023
Pagination: n° 2147298
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] détection d'objet
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] nuage
Abstract: (author) We present a new learning paradigm, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm is characterized by both supervised training and mutually supervised training, performed by two base networks. In addition to individual supervised training on labeled data, the two base networks perform mutually supervised training on unlabeled data, using the prediction results provided by each other. Specifically, we develop In-extensive Nets to implement the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (which tend to require expensive manual effort) needed for training. Experimental results verify that In-extensive Nets perform well and have a clear advantage in situations where only a few labeled cloudy images are available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets.
Record number: A2023-190
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1080/15481603.2022.2147298
Online publication date: 03/01/2023
Online: https://doi.org/10.1080/15481603.2022.2147298
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
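The cross-supervised training loop described in the abstract can be sketched with two toy logistic-regression models standing in for the two base networks; the data, learning rates, and stand-in models below are all illustrative, not the paper's In-extensive Nets:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """Toy stand-in for a base network: logistic-regression scores."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def grad_step(w, x, y, lr=0.5):
    """One gradient-descent step on the binary cross-entropy loss."""
    return w - lr * x.T @ (predict(w, x) - y) / len(y)

# Toy data standing in for labeled / unlabeled cloudy images.
x_lab = rng.normal(size=(32, 3))
y_lab = (x_lab[:, 0] > 0).astype(float)   # ground truth: sign of feature 0
x_unl = rng.normal(size=(256, 3))

w_a = rng.normal(size=3) * 0.1
w_b = rng.normal(size=3) * 0.1

for _ in range(100):
    # Supervised training: both base networks learn from labeled data.
    w_a = grad_step(w_a, x_lab, y_lab)
    w_b = grad_step(w_b, x_lab, y_lab)
    # Mutually supervised training: each network learns from the
    # other's hard predictions on unlabeled data.
    pseudo_from_b = (predict(w_b, x_unl) > 0.5).astype(float)
    pseudo_from_a = (predict(w_a, x_unl) > 0.5).astype(float)
    w_a = grad_step(w_a, x_unl, pseudo_from_b, lr=0.1)
    w_b = grad_step(w_b, x_unl, pseudo_from_a, lr=0.1)

acc = np.mean((predict(w_a, x_lab) > 0.5) == (y_lab > 0.5))
```

Each round interleaves a supervised step on labeled data with a mutually supervised step on unlabeled data, the same division of labor the abstract describes for the two Intensive Nets.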
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]
Learning indoor point cloud semantic segmentation from image-level labels / Youcheng Song in The Visual Computer, vol 38 n° 9 (September 2022)
[article]
Title: Learning indoor point cloud semantic segmentation from image-level labels
Document type: Article/Communication
Authors: Youcheng Song; Zhengxing Sun; Qian Li; et al.
Year of publication: 2022
Pagination: pp 3253 - 3265
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] apprentissage dirigé
[Termes IGN] données d'entrainement sans étiquette
[Termes IGN] image RVB
[Termes IGN] scène intérieure
[Termes IGN] segmentation d'image
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (author) The data-hungry nature of deep learning and the high cost of annotating point-level labels make it difficult to apply semantic segmentation methods to indoor point cloud scenes. Exploring how to make point cloud segmentation methods less reliant on point-level labels is therefore a promising research topic. In this paper, we introduce a weakly supervised framework for semantic segmentation of indoor point clouds. To reduce the labor cost of data annotation, we use image-level weak labels that only indicate the classes appearing in rendered images of the point clouds. The experiments validate the effectiveness and scalability of our framework. Our segmentation results on both the ScanNet and S3DIS datasets outperform the state-of-the-art method using a similar level of weak supervision.
Record number: A2022-793
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1007/s00371-022-02569-0
Online publication date: 02/07/2022
Online: https://doi.org/10.1007/s00371-022-02569-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101917
in The Visual Computer > vol 38 n° 9 (September 2022) . - pp 3253 - 3265 [article]
Spatially oriented convolutional neural network for spatial relation extraction from natural language texts / Qinjun Qiu in Transactions in GIS, vol 26 n° 2 (April 2022)
[article]
Title: Spatially oriented convolutional neural network for spatial relation extraction from natural language texts
Document type: Article/Communication
Authors: Qinjun Qiu; Zhong Xie; Kai Ma; et al.
Year of publication: 2022
Pagination: pp 839 - 866
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique web
[Termes IGN] appariement sémantique
[Termes IGN] apprentissage dirigé
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] exploration de données
[Termes IGN] langage naturel (informatique)
[Termes IGN] proximité sémantique
[Termes IGN] relation spatiale
[Termes IGN] relation topologique
[Termes IGN] site wiki
[Termes IGN] spatial metrics
[Termes IGN] système à base de connaissances
Abstract: (author) Spatial relation extraction (e.g., topological, directional, and distance relations) from natural language descriptions is a fundamental but challenging task in several practical applications. Current state-of-the-art methods rely on rule-based metrics, either developed specifically for extracting spatial relations or integrated into methods that combine multiple metrics. However, these methods all depend on hand-crafted rules and do not effectively capture the characteristics of natural-language spatial relations, whose descriptions may be heterogeneous, vague, and context-sparse. In this article, we present a spatially oriented piecewise convolutional neural network (SP-CNN) designed specifically with these linguistic issues in mind. Our method extends a general piecewise convolutional neural network with a set of improvements designed to tackle the task of spatial relation extraction. We also propose an automated workflow for generating training datasets: new sentences are integrated with those in a knowledge base, based on string similarity and semantic similarity, and then transformed into training data. We exploit a spatially oriented channel that uses prior human knowledge to automatically match words and understand the linguistic clues to spatial relations, finally leading to an extraction decision. We present both the qualitative and quantitative performance of the proposed methodology using a large dataset collected from Wikipedia. The experimental results demonstrate that the SP-CNN, with its supervised machine learning, can significantly outperform current state-of-the-art methods on the constructed datasets.
Record number: A2022-365
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1111/tgis.12887
Online publication date: 27/12/2021
Online: https://doi.org/10.1111/tgis.12887
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100584
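The string-similarity half of that training-data workflow can be sketched with Python's standard difflib; the knowledge-base sentences, labels, and threshold below are hypothetical, and the semantic-similarity stage is omitted:

```python
from difflib import SequenceMatcher

# Hypothetical knowledge-base sentences, each labeled with a relation type.
knowledge_base = {
    "x is north of y": "directional",
    "x is adjacent to y": "topological",
    "x is 5 km away from y": "distance",
}

def label_by_string_similarity(sentence, threshold=0.6):
    """Attach the relation label of the most similar knowledge-base
    sentence, or None when no sentence is similar enough."""
    best_label, best_score = None, 0.0
    for template, relation in knowledge_base.items():
        score = SequenceMatcher(None, sentence.lower(), template).ratio()
        if score > best_score:
            best_label, best_score = relation, score
    return best_label if best_score >= threshold else None
```

A new sentence close enough to a knowledge-base sentence inherits its label and becomes a training example; sentences below the threshold are left unlabeled rather than guessed.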
in Transactions in GIS > vol 26 n° 2 (April 2022) . - pp 839 - 866 [article]
Évaluation des apports de l'apprentissage profond au sein d'un service dédié à la numérisation du patrimoine / Maxime Mérizette in XYZ, n° 170 (mars 2022)
MLMT-CNN for object detection and segmentation in multi-layer and multi-spectral images / Majedaldein Almahasneh in Machine Vision and Applications, vol 33 n° 1 (January 2022)
National scale mapping of larch plantations for Wales using the Sentinel-2 data archive / Suvarna M. Punalekar in Forest ecology and management, vol 501 (December-1 2021)
Bagging and boosting ensemble classifiers for classification of multispectral, hyperspectral and PolSAR data: A comparative evaluation / Hamid Jafarzadeh in Remote sensing, vol 13 n° 21 (November-1 2021)
A comparison of a gradient boosting decision tree, random forests, and artificial neural networks to model urban land use changes: the case of the Seoul metropolitan area / Myung-Jin Jun in International journal of geographical information science IJGIS, vol 35 n° 11 (November 2021)
Diffuse attenuation coefficient (Kd) from ICESat-2 ATLAS spaceborne Lidar using random-forest regression / Forrest Corcoran in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 11 (November 2021)
Two hidden layer neural network-based rotation forest ensemble for hyperspectral image classification / Laxmi Narayana Eeti in Geocarto international, vol 36 n° 16 ([01/09/2021])
An adaptive filtering algorithm of multilevel resolution point cloud / Youyuan Li in Survey review, Vol 53 n° 379 (July 2021)