Descriptor
IGN terms > computer science > artificial intelligence > machine learning > semi-supervised learning
semi-supervised learning
Documents available in this category (25)
Predicting vegetation stratum occupancy from airborne LiDAR data with deep learning / Ekaterina Kalinicheva in International journal of applied Earth observation and geoinformation, vol 112 (August 2022)
[article]
Title: Predicting vegetation stratum occupancy from airborne LiDAR data with deep learning
Document type: Article/Paper
Authors: Ekaterina Kalinicheva, Author; Loïc Landrieu, Author; Clément Mallet, Author; Nesrine Chehata, Author
Publication year: 2022
Projects: TOSCA-FRISBEE
Article pages: no. 102863
General note: bibliography. This study has been co-funded by CNES (TOSCA FRISBEE Project, convention no. 200769/00) and the CONFETTI Project (Nouvelle-Aquitaine Region project, France).
Languages: English (eng)
Descriptor: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] agricultural parcel
[IGN terms] regression
[IGN terms] point cloud
[IGN terms] vegetation stratum
Abstract: (author) We propose a new deep learning-based method for estimating the occupancy of vegetation strata from airborne 3D LiDAR point clouds. Our model predicts rasterized occupancy maps for three vegetation strata corresponding to lower, medium, and higher cover. Our weakly-supervised training scheme allows our network to be supervised only with vegetation occupancy values aggregated over cylindrical plots containing thousands of points. Such ground truth is easier to produce than pixel-wise or point-wise annotations. Our method outperforms handcrafted and deep learning baselines in terms of precision by up to 30%, while simultaneously providing visual and interpretable predictions. We provide an open-source implementation along with a dataset of 199 agricultural plots to train and evaluate weakly supervised occupancy regression algorithms.
Record number: A2022-578
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY/COMPUTING
Nature: Article; HAL type: ArtAvecCL-RevueIntern
DOI: 10.1016/j.jag.2022.102863
Online publication date: 19/07/2022
Online: https://doi.org/10.1016/j.jag.2022.102863
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99425
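The weak supervision described in this abstract — per-pixel occupancy predictions trained only against values aggregated over field plots — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the toy raster, and the plot footprint below are invented for illustration.

```python
import numpy as np

def plot_supervised_loss(occupancy_map, plot_mask, plot_occupancy):
    """Squared error between the predicted occupancy aggregated over one
    cylindrical plot and the plot-level ground-truth occupancy value."""
    predicted = occupancy_map[plot_mask].mean()  # aggregate pixels inside the plot
    return (predicted - plot_occupancy) ** 2

# Toy example: a 4x4 predicted occupancy raster for one stratum,
# supervised only by a single plot-level occupancy value of 0.5.
pred = np.full((4, 4), 0.25)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # 2x2 footprint of the field plot
loss = plot_supervised_loss(pred, mask, 0.5)
```

The point of the scheme is that the gradient of such a loss still reaches every pixel inside the plot, so a network can learn spatially resolved maps from plot-level labels alone.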
in International journal of applied Earth observation and geoinformation > vol 112 (August 2022) . - no. 102863 [article]
Digital documents (can be downloaded): Predicting vegetation stratum ... - author PDF (Adobe Acrobat PDF)
Deep learning based 2D and 3D object detection and tracking on monocular video in the context of autonomous vehicles / Zhujun Xu (2022)
Title: Deep learning based 2D and 3D object detection and tracking on monocular video in the context of autonomous vehicles
Document type: Thesis/HDR
Authors: Zhujun Xu, Author; Eric Chaumette, Thesis supervisor; Damien Vivet, Thesis supervisor
Publisher: Toulouse: Université de Toulouse
Publication year: 2022
Extent: 136 p.
Format: 21 x 30 cm
General note: bibliography. Thesis submitted for the Doctorate of the Université de Toulouse, specialty Computer Science and Telecommunications.
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] network architecture
[IGN terms] object detection
[IGN terms] data sampling
[IGN terms] 3D object
[IGN terms] image segmentation
[IGN terms] motor vehicle
[IGN terms] video
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) The objective of this thesis is to develop deep learning based 2D and 3D object detection and tracking methods on monocular video and apply them in the context of autonomous vehicles. When still-image detectors are used directly to process a video stream, accuracy suffers from the quality problems of sampled frames. Moreover, generating 3D annotations is time-consuming and expensive due to the required data fusion and the large number of frames. We therefore take advantage of temporal information in videos, such as object consistency, to improve performance. The methods should not introduce too much extra computational burden, since autonomous vehicles demand real-time performance. Multiple methods can be applied at different steps, for example data preparation, network architecture, and post-processing. First, we propose a post-processing method called heatmap propagation based on the one-stage detector CenterNet for video object detection. Our method propagates previous reliable long-term detections in the form of a heatmap to the upcoming frame. Then, to distinguish different objects of the same class, we propose a frame-to-frame network architecture for video instance segmentation using instance sequence queries. The tracking of instances is achieved without extra post-processing for data association. Finally, we propose a semi-supervised learning method to generate 3D annotations for a 2D video object tracking dataset. This helps to enrich the training process for 3D object detection. Each of the three methods can be applied individually to extend image detectors to video applications. We also propose two complete network structures to solve 2D and 3D object detection and tracking on monocular video.
Contents note:
1- Introduction
2- Video object detection with heatmap propagation
3- Video instance segmentation with instance sequence queries
4- Semi-supervised learning of monocular 3D object detection with 2D video tracking annotations
5- Conclusions and perspectives
Record number: 24072
Author affiliation: non-IGN
Theme: IMAGERY
Nature: French thesis
Thesis note: Doctoral thesis: Computer Science and Telecommunications: Toulouse: 2022
DOI: none
Online: https://www.theses.fr/2022ESAE0019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102136
Resolution enhancement for large-scale land cover mapping via weakly supervised deep learning / Qiutong Yu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 6 (June 2021)
[article]
Title: Resolution enhancement for large-scale land cover mapping via weakly supervised deep learning
Document type: Article/Paper
Authors: Qiutong Yu, Author; Wei Liu, Author; Wesley Nunes Gonçalves, Author; et al., Author
Publication year: 2021
Article pages: pp 405 - 412
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] land cover map
[IGN terms] land cover change
[IGN terms] convolutional neural network classification
[IGN terms] training data (machine learning)
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] multispectral image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] Terra-MODIS image
[IGN terms] time series
Abstract: (author) Multispectral satellite imagery is the primary data source for monitoring land cover change and characterizing land cover globally. However, the consistency of land cover monitoring is limited by the spatial and temporal resolutions of the acquired satellite images, and publicly available daily high-resolution images are still scarce. This paper aims to fill this gap by proposing a novel spatiotemporal fusion method to enhance daily low spatial resolution land cover mapping using a weakly supervised deep convolutional neural network. We merge Sentinel images and moderate resolution imaging spectroradiometer (MODIS)-derived thematic land cover maps against the background of massive remote sensing data and the large spatial resolution gap between MODIS data and Sentinel images. The neural network training was conducted on the public dataset SEN12MS, while the validation and testing used ground truth data from the 2020 IEEE Geoscience and Remote Sensing Society data fusion contest. The proposed data fusion method shows that the synthesized land cover map has significantly higher spatial resolution than the corresponding MODIS-derived land cover map. The ensemble approach can be implemented for generating high-resolution time series of satellite images by fusing fine images from Sentinel-1 and -2 and daily coarse images from MODIS.
Record number: A2021-373
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article; HAL type: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.87.6.405
Online publication date: 01/06/2021
Online: https://doi.org/10.14358/PERS.87.6.405
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97825
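The weak-supervision idea in this abstract — fine-resolution predictions checked only against a coarse thematic map — can be sketched in NumPy. This is a hypothetical illustration, not the paper's method: the function, class codes, and the 2x downscaling factor are invented, and a real pipeline would use this agreement signal (or a differentiable relaxation of it) inside a CNN training loop.

```python
import numpy as np

def coarse_agreement(fine_labels, coarse_labels, factor):
    """Fraction of coarse cells whose majority fine-resolution class
    matches the coarse thematic label (a weak-supervision signal)."""
    H, W = coarse_labels.shape
    hits = 0
    for i in range(H):
        for j in range(W):
            block = fine_labels[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
            vals, counts = np.unique(block, return_counts=True)
            majority = vals[np.argmax(counts)]  # dominant class in the block
            hits += int(majority == coarse_labels[i, j])
    return hits / (H * W)

# Toy 4x4 fine-resolution class map vs. its 2x2 coarse thematic map.
fine = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 0],
                 [0, 0, 3, 3],
                 [0, 1, 3, 3]])
coarse = np.array([[1, 2],
                   [0, 3]])
score = coarse_agreement(fine, coarse, factor=2)
```

Here every coarse cell's majority fine class matches its coarse label, so the agreement score is 1.0.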
in Photogrammetric Engineering & Remote Sensing, PERS > vol 87 n° 6 (June 2021) . - pp 405 - 412 [article]
Copies (1): barcode 105-2021061, call number SL, journal, Documentation centre, journals room, available
Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)
[article]
Title: Semi-supervised joint learning for hand gesture recognition from a single color image
Document type: Article/Paper
Authors: Chi Xu, Author; Yunkai Jiang, Author; Jun Zhou, Author; et al., Author
Publication year: 2021
Article pages: no. 1007
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] object detection
[IGN terms] pose estimation
[IGN terms] color image
[IGN terms] dataset
[IGN terms] gesture recognition
Abstract: (author) Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. In the training process, a semi-supervised training scheme is designed to solve the problem of lacking proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
Record number: A2021-160
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article; HAL type: ArtAvecCL-RevueIntern
DOI: 10.3390/s21031007
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21031007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97076
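The semi-supervised joint training described in this abstract amounts to a multi-task loss in which the pose term is computed only on samples that actually carry pose annotations. The NumPy sketch below is a hypothetical illustration of that masking pattern, not the paper's code; the function name, weighting, and toy loss values are invented.

```python
import numpy as np

def joint_loss(gesture_loss, pose_loss, pose_labeled, weight=1.0):
    """Batch loss: gesture term on every sample, pose term only on
    samples that carry pose annotations (semi-supervised masking)."""
    pose_labeled = np.asarray(pose_labeled, dtype=float)
    # Average the pose loss over labeled samples only (guard against 0).
    pose_term = (pose_loss * pose_labeled).sum() / max(pose_labeled.sum(), 1.0)
    return gesture_loss.mean() + weight * pose_term

g = np.array([0.2, 0.4, 0.6, 0.8])  # per-sample gesture losses
p = np.array([1.0, 9.9, 0.5, 9.9])  # pose losses (9.9 entries are meaningless)
labeled = [1, 0, 1, 0]              # only samples 0 and 2 have pose labels
total = joint_loss(g, p, labeled)
```

Because the mask zeroes the pose term on unlabeled samples, gradients from bogus pose targets never reach the shared feature, while every sample still contributes to the gesture head.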
in Sensors > vol 21 n° 3 (February 2021) . - no. 1007 [article]
X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data / Danfeng Hong in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
[article]
Title: X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data
Document type: Article/Paper
Authors: Danfeng Hong, Author; Naoto Yokoya, Author; Gui-Song Sia, Author; et al., Author
Publication year: 2020
Article pages: pp 12 - 23
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] white noise
[IGN terms] convolutional neural network classification
[IGN terms] image understanding
[IGN terms] hyperspectral image
[IGN terms] multispectral image
[IGN terms] speckled radar image
[IGN terms] Sentinel-MSI image
[IGN terms] urban scene
[IGN terms] data transmission
Abstract: (author) This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to noisy collection environments, poor discriminative information, and the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, learning to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task using large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods. Record number: A2020-544
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article; HAL type: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.06.014
Online publication date: 11/07/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.06.014
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95770
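The label propagation module mentioned in this abstract follows a classic semi-supervised pattern: labels from a few seed nodes diffuse over a similarity graph until fixed point. The NumPy sketch below shows the standard iteration F <- alpha*S*F + (1-alpha)*Y on a tiny chain graph; it is a generic illustration of graph label propagation, not X-ModalNet's specific updatable-graph construction.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=100):
    """Semi-supervised label propagation on a graph:
    F <- alpha * S @ F + (1 - alpha) * Y, where S is the symmetrically
    normalized affinity matrix and Y holds one-hot seed labels."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt   # normalized affinities
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)

# Chain graph 0-1-2-3; node 0 is labeled class 0, node 3 class 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.zeros((4, 2))
Y[0, 0] = 1.0
Y[3, 1] = 1.0
labels = propagate_labels(W, Y)   # unlabeled nodes inherit the nearest seed
```

With alpha < 1 the iteration is a contraction, so it converges regardless of initialization; here the two interior nodes take the class of their closer seed.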
in ISPRS Journal of photogrammetry and remote sensing > vol 167 (September 2020) . - pp 12 - 23 [article]
Copies (3):
Barcode 081-2020091, call number RAB, journal, Documentation centre, reserve L003, available
Barcode 081-2020093, call number DEP-RECP, journal, LASTIG, unit deposit, not for loan
Barcode 081-2020092, call number DEP-RECF, journal, Nancy, unit deposit, not for loan
Subpixel-pixel-superpixel-based multiview active learning for hyperspectral images classification / Yu Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
Predictive mapping with small field sample data using semi‐supervised machine learning / Fei Du in Transactions in GIS, vol 24 n° 2 (April 2020)
Heuristic sample learning for complex urban scenes: Application to urban functional-zone mapping with VHR images and POI data / Xiuyuan Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
A double-strategy-check active learning algorithm for hyperspectral image classification / Ying Cui in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 11 (November 2019)
A novel semisupervised active-learning algorithm for hyperspectral image classification / Zengmao Wang in IEEE Transactions on geoscience and remote sensing, vol 55 n° 6 (June 2017)
Random-walker-based collaborative learning for hyperspectral image classification / Bin Sun in IEEE Transactions on geoscience and remote sensing, vol 55 n° 1 (January 2017)
Semisupervised classification for hyperspectral image based on multi-decision labeling and deep feature learning / Xiaorui Ma in ISPRS Journal of photogrammetry and remote sensing, vol 120 (October 2016)