Descriptor: données étiquetées d'entrainement
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > données d'entrainement (apprentissage automatique) > données étiquetées d'entrainement
Documents available in this category (11)
Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Title: Cross-supervised learning for cloud detection
Document type: Article/Communication
Authors: Kang Wu; Zunxiao Xu; Xinrong Lyu; et al.
Publication year: 2023
Pagination: n° 2147298
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] détection d'objet
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] nuageRésumé : (auteur) We present a new learning paradigm, that is, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm is characterized by both supervised training and mutually supervised training, and is performed by two base networks. In addition to the individual supervised training for labeled data, the two base networks perform the mutually supervised training using prediction results provided by each other for unlabeled data. Specifically, we develop In-extensive Nets for implementing the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (that tend to cost expensive manual effort) required for training. Experimental results verify that In-extensive Nets perform well and have an obvious advantage in the situations where there are only a few labeled cloudy images available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets. Numéro de notice : A2023-190 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1080/15481603.2022.2147298 Date de publication en ligne : 03/01/2023 En ligne : https://doi.org/10.1080/15481603.2022.2147298 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]

An informal road detection neural network for societal impact in developing countries / Inger Fabris-Rotelli in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-4-2022 (2022 edition)
[article]
Title: An informal road detection neural network for societal impact in developing countries
Document type: Article/Communication
Authors: Inger Fabris-Rotelli; Abraham Wannenburg; Gao Maribe; et al.
Publication year: 2022
Pagination: pp 267 - 274
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] Afrique du sud (état)
[Termes IGN] apprentissage profond
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] extraction du réseau routier
[Termes IGN] image satellite
[Termes IGN] impact social
[Termes IGN] pays en développement
[Termes IGN] réseau neuronal artificielRésumé : (auteur) Roads found in informal settlements arise out of convenience, and are often not recorded or maintained by authorities. This complicates service delivery, sustainable development and crisis mitigation, including management and tracking of COVID-19. We, therefore, aim to extract informal roads in remote sensing images. Existing techniques aiming at the extraction of formal roads are not suitable for the problem due to the complex physical and spectral properties of informal roads. The only existing approaches for informal roads, namely (Nobrega et al., 2006, Thiede et al., 2020), do not consider neural networks as a solution. Neural networks show promise in overcoming these complexities. However, they require a large amount of data to learn, which is currently not available due to the expensive and time-consuming nature of collecting such data. This paper implements a neural network to extract informal roads from a data set digitised by this research group. Data quality is assessed by calculating validity completeness, homogeneity and the V-measure, a measure of consistency, in order to evaluate the overall usability of the dataset for neural network informal road detection. We implement the GANs-UNet model that obtained the highest F1-score in a 2020 review paper (Abdollahi et al., 2020) on the state-of-the-art deep learning models used to extract formal roads. The results indicate that the model is able to extract informal roads successfully in the presence of appropriate training data. Numéro de notice : A2022-424 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article DOI : 10.5194/isprs-annals-V-4-2022-267-2022 Date de publication en ligne : 18/05/2022 En ligne : https://doi.org/10.5194/isprs-annals-V-4-2022-267-2022 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100729
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-4-2022 (2022 edition) . - pp 267 - 274 [article]

Learning from the past: crowd-driven active transfer learning for semantic segmentation of multi-temporal 3D point clouds / Michael Kölle in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
[article]
Title: Learning from the past: crowd-driven active transfer learning for semantic segmentation of multi-temporal 3D point clouds
Document type: Article/Communication
Authors: Michael Kölle; Volker Walter; Uwe Soergel
Publication year: 2022
Pagination: pp 259 - 266
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage automatique
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] données multitemporelles
[Termes IGN] orthoimage couleur
[Termes IGN] production participative
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] traitement de données localiséesRésumé : (auteur) The main bottleneck of machine learning systems, such as convolutional neural networks, is the availability of labeled training data. Hence, much effort (and thus cost) is caused by setting up proper training data sets. However, models trained on specific data sets often perform unsatisfactorily when used to derive predictions for another (yet related) data set. We aim to overcome this problem by employing active learning to iteratively adapt an existing classifier to another domain. Precisely, we are concerned with semantic segmentation of 3D point clouds of multiple epochs. We first establish a Random Forest classifier for the first epoch of our data set and adapt it for successful prediction to two more temporally disjoint point clouds of the same but extended area. The point clouds, which are part of the newly introduced Hessigheim 3D benchmark data set, incorporate different characteristics with respect to the acquisition date and sensor configuration. We demonstrate that our workflow for domain adaptation is designed in such a way that it i) offers the possibility to greatly reduce labeling effort compared to a passive learning baseline or to an active learning baseline trained from scratch, if the domain gap is small enough and ii) at least does not cause more expenses (compared to a newly initialized active learning loop), if the domain gap is severe. The latter is especially beneficial in scenarios where the similarity of two different domains is hard to assess. Numéro de notice : A2022-435 Affiliation des auteurs : non IGN Thématique : GEOMATIQUE/IMAGERIE Nature : Article DOI : 10.5194/isprs-annals-V-2-2022-259-2022 Date de publication en ligne : 17/05/2022 En ligne : https://doi.org/10.5194/isprs-annals-V-2-2022-259-2022 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100743
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2022 (2022 edition) . - pp 259 - 266 [article]

Weakly supervised semantic segmentation of airborne laser scanning point clouds / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Weakly supervised semantic segmentation of airborne laser scanning point clouds
Document type: Article/Communication
Authors: Yaping Lin; M. George Vosselman; Michael Ying Yang
Publication year: 2022
Pagination: pp 79 - 100
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] chevauchement
[Termes IGN] classification dirigée
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] données laser
[Termes IGN] données localisées 3D
[Termes IGN] hétérogénéité sémantique
[Termes IGN] segmentation sémantique
[Termes IGN] semis de pointsRésumé : (Auteur) While modern deep learning algorithms for semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Pointwise annotation of 3D point clouds, especially for large scale ALS datasets, is extremely time-consuming work. Weak supervision that only needs a few annotation efforts but can make networks achieve comparable performance is an alternative solution. Assigning a weak label to a subcloud, a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. Then the pseudo labels are taken as the input of a segmentation network which gives the final predictions on the testing data. As the quality of pseudo labels determines the performance of the segmentation network on testing data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss that considers the nearby subcloud semantic information is introduced to enhance the awareness of the semantic heterogeneity within a subcloud. The elevation attention helps the classification network to encode more representative features for ALS point clouds. For the segmentation network, in order to effectively learn representative features from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model to the baseline method (Wei et al., 2020). 
With the same amount of labelling efforts, for the ISPRS benchmark dataset, the Rotterdam dataset and the DFC2019 dataset, our method rises the overall accuracy by 0.062, 0.112 and 0.031, and the average F1 score by 0.09, 0.178 and 0.043 respectively. Our code is publicly available at ‘https://github.com/yaping222/Weak_ALS.git’. Numéro de notice : A2022-227 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.03.001 Date de publication en ligne : 11/03/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.03.001 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100197
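The subcloud-to-pointwise pseudo-label step described in the abstract can be illustrated with a toy majority vote: each point inherits the most frequent class among all (possibly overlapping) subclouds that contain it. All names here are illustrative; the authors' networks and losses are at the GitHub link in the record.

```python
# Majority vote over overlapping subclouds: a point belonging to several
# subclouds gets the class most often predicted for those subclouds.
from collections import Counter, defaultdict

def pointwise_pseudo_labels(subclouds, subcloud_preds):
    """subclouds: {subcloud_id: list of point ids};
    subcloud_preds: {subcloud_id: predicted class for that subcloud}.
    Returns {point_id: pseudo class}."""
    votes = defaultdict(Counter)
    for sid, points in subclouds.items():
        for p in points:
            votes[p][subcloud_preds[sid]] += 1
    return {p: counts.most_common(1)[0][0] for p, counts in votes.items()}
```

The overlap region loss in the paper targets exactly the points that fall into several subclouds with conflicting semantics, where such a vote is least reliable.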
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 79 - 100 [article]

Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2022051 | SL | Revue | Centre de documentation | Revues en salle | Available
081-2022053 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Not for loan
081-2022052 | DEP-RECF | Revue | Nancy | Dépôt en unité | Not for loan

Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation / Seyyed Ali Ahmadi in Geocarto international, vol 37 n° 7 ([15/04/2022])
[article]
Title: Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation
Document type: Article/Communication
Authors: Seyyed Ali Ahmadi; Nasser Mehrshad; Seyyed Mohammadali Arghavan
Publication year: 2022
Pagination: pp 2031 - 2054
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de sensibilité
[Termes IGN] apprentissage profond
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] échantillonnage de données
[Termes IGN] filtre de Gabor
[Termes IGN] image hyperspectraleRésumé : (auteur) Recently, deep learning (DL)-based methods have attracted increasing attention for hyperspectral images (HSIs) classification. However, the complex structure and limited number of labelled training samples of HSIs negatively affect the performance of DL models. In this paper, a spectral-spatial classification method is proposed based on the combination of local and global spatial information, including extended multi-attribute profiles and multiscale Gabor features, with sparse stacked autoencoder (GEAE). GEAE stacks the spatial and spectral information to form the fused features. Also, GEAE generates virtual samples using weighted average of available samples for expanding the training set so that many parameters of DL network can be learned optimally in limited labelled samples situations. Therefore, the similarity between samples is determined with distance metric learning to overcome the problems of Euclidean distance-based similarity metrics. The experimental results on three HSIs datasets demonstrate the effectiveness of the GEAE in comparison to some existing classification methods. Numéro de notice : A2022-498 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2020.1797188 Date de publication en ligne : 10/08/2020 En ligne : https://doi.org/10.1080/10106049.2020.1797188 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100990
in Geocarto international > vol 37 n° 7 [15/04/2022] . - pp 2031 - 2054 [article]

Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation / Ramazan Unlu in The Visual Computer, vol 38 n° 2 (February 2022)
GisGCN: a visual graph-based framework to match geographical areas through time / Margarita Khokhlova in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)
Crop rotation modeling for deep learning-based parcel classification from satellite time series / Félix Quinton in Remote sensing, vol 13 n° 22 (November-2 2021)
Single annotated pixel based weakly supervised semantic segmentation under driving scenes / Xi Li in Pattern recognition, vol 116 (August 2021)
Extracting event-related information from a corpus regarding soil industrial pollution / Chuanming Dong (2021)
Hyperspectral classification with noisy label detection via superpixel-to-pixel weighting distance / Bing Tu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 6 (June 2020)