Descriptor
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > données d'entrainement (apprentissage automatique)
données d'entrainement (apprentissage automatique). Synonym(s): base d'apprentissage
Documents available in this category (86)
CSVM architectures for pixel-wise object detection in high-resolution remote sensing images / Youyou Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
[article]
Title: CSVM architectures for pixel-wise object detection in high-resolution remote sensing images
Document type: Article/Communication
Authors: Youyou Li; Farid Melgani; Binbin He
Publication year: 2020
Pages: pp 6059 - 6070
General note: bibliography
Language(s): English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] détection d'objet
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] image captée par drone
[Termes IGN] processeur graphique
Abstract (author): Object detection is an increasingly important task in very high resolution (VHR) remote sensing imagery analysis. With the growth of GPU computing capability, a growing number of deep convolutional neural networks (CNNs) have been designed to address the object detection challenge. However, GPUs are much more costly than CPUs, so GPU-based methods are less attractive in practical applications. In this article, we propose a CPU-based method built on convolutional support vector machines (CSVMs) to address object detection in VHR images. Experiments are conducted on three VHR and two unmanned aerial vehicle (UAV) data sets with very limited training data. Results show that the proposed CSVM achieves competitive performance compared to U-Net, an efficient CNN-based model designed for small training sets.
Record number: A2020-527
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2972289
Online publication date: 02/03/2020
Online: https://doi.org/10.1109/TGRS.2020.2972289
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95705
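The record above describes a CPU-friendly, SVM-based alternative to CNNs for pixel-wise detection. As a rough illustrative sketch only, not the authors' CSVM architecture: per-pixel classification from small local patches, with scikit-learn's `LinearSVC` standing in for the convolutional SVM, and a synthetic image with one bright "object" as assumed input.

```python
# Hypothetical sketch of patch-based per-pixel classification with a linear SVM.
# Patch size, toy image, and all helper names are assumptions for illustration.
import numpy as np
from sklearn.svm import LinearSVC

def extract_patches(img, half=2):
    """Return one flattened (2*half+1)^2 patch per interior pixel, plus coords."""
    h, w = img.shape
    feats, coords = [], []
    for i in range(half, h - half):
        for j in range(half, w - half):
            feats.append(img[i - half:i + half + 1, j - half:j + half + 1].ravel())
            coords.append((i, j))
    return np.array(feats), coords

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (20, 20))
img[5:15, 5:15] += 1.0                      # bright square = "object"
truth = np.zeros((20, 20), dtype=int)
truth[5:15, 5:15] = 1

X, coords = extract_patches(img)            # 16x16 interior pixels, 25-dim patches
y = np.array([truth[i, j] for i, j in coords])
clf = LinearSVC(C=1.0).fit(X, y)            # train a linear SVM on all labeled pixels
accuracy = (clf.predict(X) == y).mean()
```

On this toy image the linear model can key on the bright centre value of each patch, which is why even a plain linear SVM separates the two pixel classes well.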
Vehicle detection of multi-source remote sensing data using active fine-tuning network / Xin Wu in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: Vehicle detection of multi-source remote sensing data using active fine-tuning network
Document type: Article/Communication
Authors: Xin Wu; Wei Li; Danfeng Hong; et al.
Publication year: 2020
Pages: pp 39 - 53
General note: bibliography
Language(s): English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] Allemagne
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données multisources
[Termes IGN] image aérienne
[Termes IGN] modèle numérique de surface
[Termes IGN] modèle stéréoscopique
[Termes IGN] segmentation
[Termes IGN] segmentation sémantique
[Termes IGN] véhicule
Abstract (author): Vehicle detection in remote sensing images has attracted increasing interest in recent years. However, detection ability is limited by the lack of well-annotated samples, especially in densely crowded scenes. Furthermore, since many remotely sensed data sources are available, efficiently exploiting useful information from multi-source data for better vehicle detection is challenging. To solve these issues, a multi-source active fine-tuning vehicle detection (Ms-AFt) framework is proposed, which integrates transfer learning, segmentation, and active classification into a unified framework for auto-labeling and detection. The proposed Ms-AFt first employs a fine-tuning network to generate a vehicle training set from an unlabeled dataset. To cope with the diversity of vehicle categories, a multi-source segmentation branch is then designed to construct additional candidate object sets. High-quality vehicles are separated by a designed attentive classification network. Finally, all three branches are combined to achieve vehicle detection. Extensive experiments on two open ISPRS benchmark datasets, the Vaihingen village and Potsdam city datasets, demonstrate the superiority and effectiveness of the proposed Ms-AFt for vehicle detection. In addition, the generalization ability of Ms-AFt in dense remote sensing scenes is further verified on stereo aerial imagery of a large camping site.
Record number: A2020-546
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.06.016
Online publication date: 13/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.06.016
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95772
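The Ms-AFt pipeline summarized above auto-labels an unlabeled pool and retrains on the confident part. A minimal self-training sketch in that spirit, with an assumed nearest-centroid classifier, a hypothetical distance-gap confidence rule, and toy Gaussian data; none of this is the Ms-AFt implementation.

```python
# Illustrative self-training / auto-labeling loop: train on a small labeled seed,
# label the unlabeled pool, keep only confident predictions, retrain.
import numpy as np

rng = np.random.default_rng(7)

def fit_centroids(X, y):
    """One mean vector per class (toy stand-in for the fine-tuning network)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_margin(model, X):
    """Nearest-centroid prediction plus a confidence margin (distance gap)."""
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    pred = np.array(classes)[d.argmin(axis=0)]
    dsort = np.sort(d, axis=0)
    return pred, dsort[1] - dsort[0]

# small labeled seed + larger unlabeled pool, two well-separated classes
Xl = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(5, 1, (5, 4))])
yl = np.array([0] * 5 + [1] * 5)
Xu = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(5, 1, (40, 4))])
yu_true = np.array([0] * 40 + [1] * 40)

model = fit_centroids(Xl, yl)
pred, margin = predict_with_margin(model, Xu)
keep = margin > 2.0                        # auto-label only confident samples
model2 = fit_centroids(np.vstack([Xl, Xu[keep]]),
                       np.concatenate([yl, pred[keep]]))
final_pred, _ = predict_with_margin(model2, Xu)
acc_final = (final_pred == yu_true).mean()
```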
Copies (3):
Barcode | Call number | Type | Location | Section | Availability
081-2020091 | RAB | Journal | Centre de documentation | En réserve L003 | Available
081-2020093 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2020092 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

SemCity Toulouse: a benchmark for building instance segmentation in satellite images / Ribana Roscher in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-5-2020 (August 2020)
[article]
Title: SemCity Toulouse: a benchmark for building instance segmentation in satellite images
Document type: Article/Communication
Authors: Ribana Roscher; Michele Volpi; Clément Mallet; Lukas Drees; Jan Dirk Wegner
Publication year: 2020
Projects: 1-No project
Conference: ISPRS 2020, Commission 5, virtual Congress, Imaging today foreseeing tomorrow, 31/08/2020-02/09/2020, Nice (online), France, Annals Commission 5
Pages: pp 109 - 116
General note: bibliography
Language(s): English (eng)
Descriptors: [Vedettes matières IGN] Intelligence artificielle
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage automatique
[Termes IGN] bati
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] instance
[Termes IGN] Toulouse
[Termes IGN] zone urbaine dense
Abstract (author): To reach the goal of reliably solving Earth monitoring tasks, automated and efficient machine learning methods are necessary for large-scale scene analysis and interpretation. A typical bottleneck of supervised learning approaches is the availability of accurate (manually) labeled training data, which is particularly important for training state-of-the-art (deep) learning methods. We present SemCity Toulouse, a publicly available, very high resolution, multispectral benchmark data set for training and evaluating sophisticated machine learning models. The benchmark acts as a test bed for single building instance segmentation, which has rarely been considered before in densely built urban areas. Additional information is provided in the form of a multi-class semantic segmentation annotation covering the same area plus an adjacent area three times larger. The data set addresses interested researchers from various communities, such as photogrammetry and remote sensing, but also computer vision and machine learning.
Record number: A2020-503
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-V-5-2020-109-2020
Online publication date: 03/08/2020
Online: https://doi.org/10.5194/isprs-annals-V-5-2020-109-2020
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95639
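A benchmark like the one above is typically consumed by matching predicted building instance masks to ground truth by intersection-over-union (IoU). A minimal illustrative sketch of such matching, with toy masks and an assumed 0.5 IoU threshold; this is not code from the SemCity Toulouse benchmark.

```python
# Hypothetical instance-matching sketch: greedy one-to-one IoU matching of
# predicted binary masks against ground-truth masks.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def match_instances(preds, gts, thr=0.5):
    """Greedily match each prediction to an unused ground truth; count true positives."""
    used, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for k, g in enumerate(gts):
            if k not in used and iou(p, g) >= best_iou:
                best, best_iou = k, iou(p, g)
        if best is not None:
            used.add(best)
            tp += 1
    return tp

canvas = np.zeros((16, 16), dtype=bool)
gt1, gt2 = canvas.copy(), canvas.copy()
gt1[2:6, 2:6] = True                            # two ground-truth "buildings"
gt2[9:14, 9:14] = True
pred1 = canvas.copy(); pred1[2:6, 3:7] = True   # shifted but overlapping gt1
pred2 = canvas.copy(); pred2[0:2, 12:16] = True # spurious detection

tp = match_instances([pred1, pred2], [gt1, gt2])
```

Here `pred1` matches `gt1` (IoU 0.6) while `pred2` overlaps nothing, so one true positive, one false positive, and one missed ground truth.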
Ensemble learning for hyperspectral image classification using tangent collaborative representation / Hongjun Su in IEEE Transactions on geoscience and remote sensing, vol 58 n° 6 (June 2020)
[article]
Title: Ensemble learning for hyperspectral image classification using tangent collaborative representation
Document type: Article/Communication
Authors: Hongjun Su; Yao Yu; Qian Du; et al.
Publication year: 2020
Pages: pp 3778 - 3790
General note: bibliography
Language(s): English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image numérique
[Termes IGN] boosting adapté
[Termes IGN] Bootstrap (statistique)
[Termes IGN] classification
[Termes IGN] classification dirigée
[Termes IGN] classification orientée objet
[Termes IGN] conception collaborative
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] échantillon
[Termes IGN] image hyperspectrale
[Termes IGN] neurone artificiel
[Termes IGN] performance
[Termes IGN] régression
Abstract (author): Recently, collaborative representation classification (CRC) has attracted much attention for hyperspectral image analysis. In particular, tangent space CRC (TCRC) has achieved excellent performance for hyperspectral image classification in a simplified tangent space. In this article, novel Bagging-based TCRC (TCRC-bagging) and Boosting-based TCRC (TCRC-boosting) methods are proposed. The main idea of TCRC-bagging is to generate diverse TCRC classification results using the bootstrap sampling method, which enhances the accuracy and diversity of a single classifier simultaneously. TCRC-boosting provides the most informative training samples by changing their distributions dynamically for each base TCRC learner. The effectiveness of the proposed methods is validated using three real hyperspectral data sets. The experimental results show that both TCRC-bagging and TCRC-boosting outperform their single-classifier counterpart, with TCRC-boosting providing superior performance compared to TCRC-bagging.
Record number: A2020-280
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2957135
Online publication date: 01/01/2020
Online: https://doi.org/10.1109/TGRS.2019.2957135
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95100
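The bagging idea described above (diverse base classifiers trained on bootstrap samples, combined by majority vote) can be sketched generically. The nearest-centroid base learner and the toy two-class "spectral" data below are assumptions standing in for the paper's TCRC model.

```python
# Illustrative bootstrap aggregation (bagging): each base learner is fit on a
# bootstrap resample of the training set; predictions are combined by vote.
import numpy as np

rng = np.random.default_rng(42)

def fit_centroids(X, y):
    """Toy base learner: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def bagging_predict(X_train, y_train, X_test, n_learners=15):
    n = len(y_train)
    votes = []
    for _ in range(n_learners):
        idx = rng.integers(0, n, n)            # bootstrap sample with replacement
        model = fit_centroids(X_train[idx], y_train[idx])
        votes.append(predict_centroids(model, X_test))
    votes = np.stack(votes)
    # majority vote across the ensemble, per test sample
    return np.array([np.bincount(col).argmax() for col in votes.T])

# two Gaussian "spectral" classes in 5 bands
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
pred = bagging_predict(X, y, X, n_learners=15)
acc = (pred == y).mean()
```

The resampling step is what creates the classifier diversity the abstract refers to; boosting would instead reweight samples between rounds rather than resample uniformly.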
Hyperspectral classification with noisy label detection via superpixel-to-pixel weighting distance / Bing Tu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 6 (June 2020)
[article]
Title: Hyperspectral classification with noisy label detection via superpixel-to-pixel weighting distance
Document type: Article/Communication
Authors: Bing Tu; Chengle Zhou; Danbing He; et al.
Publication year: 2020
Pages: pp 4116 - 4131
General note: bibliography
Language(s): English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de groupement
[Termes IGN] classification barycentrique
[Termes IGN] classification dirigée
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] erreur d'échantillon
[Termes IGN] image hyperspectrale
[Termes IGN] pondération
[Termes IGN] précision de la classification
[Termes IGN] superpixel
Abstract (author): Classification is an important technique for exploiting remotely sensed hyperspectral images (HSIs). Often, the presence of wrong (noisy) labels hinders accurate supervised classification. In this article, we introduce a new framework for noisy label detection that combines a superpixel-to-pixel weighting distance (SPWD) with density peak clustering. The proposed method accurately detects and removes noisy labels from the training set before HSI classification. It relies on two weak assumptions when exploiting the spectral-spatial information in the HSI: 1) all pixels in a superpixel belong to the same class, and 2) close pixels in spectral space share the same label. The method consists of the following steps. First, a superpixel segmentation step obtains self-adaptive spatial information for each training sample. Then, a metric measures the spectral distance between each superpixel and pixel. To relax the first assumption, K nearest neighbors are used to obtain the closest neighborhoods of pixels around each superpixel, and a Gaussian weight mitigates the second assumption by adapting the original distance information. Next, noisy labels in the original training set are removed by a density threshold-based decision function. Finally, a support vector machine (SVM) classifier is employed to evaluate the effectiveness of the proposed SPWD detection method. Experiments performed on several real HSI data sets demonstrate that the method effectively improves the classification accuracy of classifiers trained with noisy training sets.
Record number: A2020-283
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2961141
Online publication date: 13/01/2020
Online: https://doi.org/10.1109/TGRS.2019.2961141
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95105
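The SPWD framework above removes suspect training labels before classification. A heavily simplified stand-in for that idea: flag training samples whose label disagrees with the majority of their K nearest neighbours in feature space. K and the toy data are assumptions; the paper's superpixel weighting and density-peak clustering are not reproduced here.

```python
# Hypothetical noisy-label screening via K-nearest-neighbour label agreement.
import numpy as np

def flag_noisy(X, y, k=5):
    """Return a boolean mask: True where a training label looks noisy."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=2)        # full pairwise distance matrix
    np.fill_diagonal(dist, np.inf)             # exclude each sample from its own vote
    nn = np.argsort(dist, axis=1)[:, :k]
    noisy = np.empty(len(y), dtype=bool)
    for i, idx in enumerate(nn):
        votes = np.bincount(y[idx])            # label histogram of the k neighbours
        noisy[i] = votes.argmax() != y[i]
    return noisy

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (30, 3)), rng.normal(4, 0.5, (30, 3))])
y = np.array([0] * 30 + [1] * 30)
y_noisy = y.copy()
y_noisy[[0, 1, 35]] = 1 - y_noisy[[0, 1, 35]]  # inject three wrong labels

mask = flag_noisy(X, y_noisy, k=5)
```

With well-separated clusters, exactly the three flipped labels disagree with their neighbourhoods and get flagged; a classifier would then be trained on the remaining samples.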
Other documents in this category:
- Deep learning for enrichment of vector spatial databases: Application to highway interchange / Guillaume Touya in ACM Transactions on spatial algorithms and systems, TOSAS, vol 6 n° 3 (May 2020)
- Deep learning for remote sensing images with open source software / Rémi Cresson (2020)
- Very high resolution land cover mapping of urban areas at global scale with convolutional neural network / Thomas Tilak (2020)
- Deep learning for conifer/deciduous classification of airborne LiDAR 3D point clouds representing individual trees / Hamid Hamraz in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
- Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia / Neil Flood in International journal of applied Earth observation and geoinformation, vol 82 (October 2019)
- RoofN3D: a database for 3D building reconstruction with deep learning / Andreas Wichmann in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
- Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery / Lichao Mou in IEEE Transactions on geoscience and remote sensing, vol 57 n° 2 (February 2019)
- Large-scale remote sensing image retrieval by deep hashing neural networks / Yansheng Li in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
- Forêts aléatoires pour la détection des feux tricolores à partir de profils de vitesse GPS / Yann Méneroux (2016)
- Conférence d'apprentissage 99, actes de CAP'99, Ecole Polytechnique, Palaiseau, 15-18 juin 1999 / Michèle Sebag (1999)