Descriptor
IGN terms > computer science > artificial intelligence > machine learning > deep learning
deep learning
Documents available in this category (315)
Deep traffic light detection by overlaying synthetic context on arbitrary natural images / Jean Pablo Vieira de Mello in Computers and graphics, vol 94 n° 1 (February 2021)
[article]
Title: Deep traffic light detection by overlaying synthetic context on arbitrary natural images
Document type: Article/Communication
Authors: Jean Pablo Vieira de Mello, Author; Lucas Tabelini, Author; Rodrigo F. Berriel, Author
Year of publication: 2021
Pages: pp 76 - 86
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] image sampling
[IGN terms] traffic light
[IGN terms] high-resolution image
[IGN terms] autonomous navigation
[IGN terms] road signage
[IGN terms] road traffic
Abstract: (author) Deep neural networks are an effective solution to many problems associated with autonomous driving. By providing real image samples with traffic context to the network, the model learns to detect and classify elements of interest, such as pedestrians, traffic signs, and traffic lights. However, acquiring and annotating real data can be extremely costly in terms of time and effort. In this context, we propose a method to generate artificial traffic-related training data for deep traffic light detectors. This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds that are not related to the traffic domain. Thus, a large amount of training data can be generated without annotation effort. Furthermore, the method also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low number of samples of the yellow state. Experiments show that it is possible to achieve results comparable to those obtained with real training data from the problem domain, yielding an average mAP and an average F1-score that are each nearly 4 p.p. higher than the respective metrics obtained with a real-world reference model.
Record number: A2021-151
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.cag.2020.09.012
Online publication date: 09/10/2020
Online: https://doi.org/10.1016/j.cag.2020.09.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97027
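The data-generation idea in the abstract above (paste a synthetic traffic-light patch onto an arbitrary background, so the class and bounding box come for free) can be sketched in a few lines. The function name, the pixel-list representation, and the annotation format are our own illustrative assumptions, not the paper's code:

```python
import random

def overlay_synthetic_light(background, patch, state):
    """Paste `patch` (2-D list of pixel values) at a random position on
    `background` (2-D list) and return the composite plus its annotation.
    Because the paste position is chosen by us, the label requires no
    manual annotation effort."""
    h, w = len(background), len(background[0])
    ph, pw = len(patch), len(patch[0])
    y = random.randrange(h - ph + 1)   # random top-left corner
    x = random.randrange(w - pw + 1)
    out = [row[:] for row in background]   # copy, leave background intact
    for dy in range(ph):
        for dx in range(pw):
            out[y + dy][x + dx] = patch[dy][dx]
    return out, {"state": state, "box": (x, y, pw, ph)}
```

Oversampling the rare yellow state simply means calling this with `state="yellow"` more often, which is how a synthetic pipeline can address the class imbalance the abstract mentions.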
in Computers and graphics > vol 94 n° 1 (February 2021) . - pp 76 - 86 [article]

GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening / Hao Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 172 (February 2021)
[article]
Title: GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening
Document type: Article/Communication
Authors: Hao Zhang, Author; Jiayi Ma, Author
Year of publication: 2021
Pages: pp 223 - 239
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] supervised classification
[IGN terms] image fusion
[IGN terms] gradient
[IGN terms] multiband image
[IGN terms] panchromatic image
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] pansharpening (image fusion)
[IGN terms] regression
Abstract: (author) Pansharpening aims to fuse a low-resolution multi-spectral image and a high-resolution panchromatic (PAN) image to produce a high-resolution multi-spectral (HRMS) image. In this paper, a new residual learning network based on a gradient transformation prior, termed GTP-PNet, is proposed to generate a high-quality HRMS image with accurate spectral distribution as well as reasonable spatial structure. Different from previous deep models that rely only on supervision by the HRMS reference image, we introduce the gradient transformation prior into the deep model to improve the solution accuracy. Our model consists of two networks, namely a gradient transformation network (TNet) and a pansharpening network (PNet). TNet seeks the nonlinear mapping between the gradients of the PAN and HRMS images, which is essentially a spatial-relationship regression of imaging bands in different ranges. PNet is the residual learning network used to generate the HRMS image, which is not only supervised by the HRMS reference image but also constrained by the trained TNet. As a result, the HRMS image generated by PNet not only approximates the HRMS reference image in spectral distribution, but also conforms to the gradient transformation prior in spatial structure. Experimental results demonstrate the significant superiority of our method over the current state of the art in terms of both subjective visual effect and quantitative metrics. We also apply our method to produce the HR normalized difference vegetation index in remote sensing, where it achieves the best performance. Moreover, our method is competitive with state-of-the-art alternatives in running efficiency.
Record number: A2021-089
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.12.014
Online publication date: 11/01/2021
Online: https://doi.org/10.1016/j.isprsjprs.2020.12.014
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96859
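The two-term objective the abstract describes (supervision by the HRMS reference plus a constraint from the trained TNet on gradients) might look roughly like the following. The function names, the L1 penalty, and the weighting `lam` are assumptions for illustration, not the paper's implementation:

```python
def grad(img):
    """Forward-difference spatial gradients (dy, dx) of a 2-D list."""
    h, w = len(img), len(img[0])
    dy = [[img[i + 1][j] - img[i][j] for j in range(w)] for i in range(h - 1)]
    dx = [[img[i][j + 1] - img[i][j] for j in range(w - 1)] for i in range(h)]
    return dy, dx

def l1(a, b):
    """Sum of absolute differences between two 2-D lists."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def pansharpen_loss(pred_hrms, ref_hrms, tnet_grad, lam=0.1):
    """Spectral term (vs. the HRMS reference) plus a structural term that
    pulls the prediction's gradients toward those predicted by TNet."""
    spectral = l1(pred_hrms, ref_hrms)
    pdy, pdx = grad(pred_hrms)
    tdy, tdx = tnet_grad  # gradients predicted by the trained TNet
    structural = l1(pdy, tdy) + l1(pdx, tdx)
    return spectral + lam * structural
```

A perfect prediction whose gradients match TNet's output yields zero loss; any spectral or structural deviation raises it.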
in ISPRS Journal of photogrammetry and remote sensing > vol 172 (February 2021) . - pp 223 - 239 [article]

Copies (2)
Barcode | Call number | Type | Location | Section | Availability
081-2021021 | SL | Journal | Centre de documentation | Reading-room journals | Available
081-2021022 | DEP-RECF | Journal | Nancy | Bibliothèque Nancy IFN | Not for loan

Multiscale CNN with autoencoder regularization joint contextual attention network for SAR image classification / Zitong Wu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 2 (February 2021)
[article]
Title: Multiscale CNN with autoencoder regularization joint contextual attention network for SAR image classification
Document type: Article/Communication
Authors: Zitong Wu, Author; Biao Hou, Author; Licheng Jiao, Author
Year of publication: 2021
Pages: pp 1200 - 1213
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] contextual classification
[IGN terms] convolutional neural network classification
[IGN terms] speckled radar image
Abstract: (author) Synthetic aperture radar (SAR) image classification is a fundamental research direction in image interpretation. With the development of various intelligent technologies, deep learning techniques are gradually being applied to SAR image classification. In this study, a new SAR classification algorithm, the multiscale convolutional neural network with autoencoder regularization joint contextual attention network (MCAR-CAN), is proposed. MCAR-CAN has two branches: the autoencoder regularization branch and the contextual attention branch. First, autoencoder regularization reconstructs the input to regularize the classification in the autoencoder regularization branch; the multiscale input and the asymmetric structure of this branch make the network focus more on classification than on reconstruction. Second, in the attention branch, an attention mechanism produces an attention map in which each attention weight corresponds to a context correlation, yielding robust features. Finally, the features obtained by the two branches are concatenated for classification. In addition, a new training strategy and a postprocessing method are designed to further improve the classification accuracy. Experiments performed on data from three SAR images demonstrated the effectiveness and robustness of the proposed algorithm.
Record number: A2021-113
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3004911
Online publication date: 07/07/2020
Online: https://doi.org/10.1109/TGRS.2020.3004911
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96918
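As a rough illustration of the contextual-attention step described above (each attention weight reflects a correlation with the surrounding context), here is a minimal softmax-weighted aggregation over neighboring features. The dot-product similarity and all names are our assumptions, not MCAR-CAN's actual formulation:

```python
import math

def attention_aggregate(center, neighbors):
    """Weight each neighbor feature by a softmax over its similarity to the
    center feature, then return the weighted sum and the weights."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    scores = [dot(center, n) for n in neighbors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    agg = [sum(w * n[i] for w, n in zip(weights, neighbors))
           for i in range(len(center))]
    return agg, weights
```

Neighbors that correlate strongly with the center pixel dominate the aggregated feature, which is the intuition behind using context to obtain robust features in speckled SAR data.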
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 2 (February 2021) . - pp 1200 - 1213 [article]

Semi-supervised joint learning for hand gesture recognition from a single color image / Chi Xu in Sensors, vol 21 n° 3 (February 2021)
[article]
Title: Semi-supervised joint learning for hand gesture recognition from a single color image
Document type: Article/Communication
Authors: Chi Xu, Author; Yunkai Jiang, Author; Jun Zhou, Author; et al.
Year of publication: 2021
Pages: n° 1007
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] semi-supervised learning
[IGN terms] object detection
[IGN terms] pose estimation
[IGN terms] color image
[IGN terms] dataset
[IGN terms] gesture recognition
Abstract: (author) Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning-based approach which jointly learns an intermediate-level shared feature for these two tasks, so that the hand gesture recognition task can benefit from the hand pose estimation task. During training, a semi-supervised scheme is designed to address the lack of proper annotation. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
Record number: A2021-160
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/s21031007
Online publication date: 02/02/2021
Online: https://doi.org/10.3390/s21031007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97076
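The semi-supervised joint objective the abstract describes, where each head is trained only on samples that carry its annotation, can be sketched as follows. The per-sample dictionaries, the loss callables, and the weight `alpha` are hypothetical, introduced here only for illustration:

```python
def joint_loss(batch, gesture_loss, pose_loss, alpha=1.0):
    """Sum task losses over a batch; samples missing a label for one task
    simply contribute nothing to that task's term (the semi-supervised part).
    Each sample is a dict with predictions and optional labels."""
    total = 0.0
    for s in batch:
        if s.get("gesture") is not None:          # gesture label available
            total += gesture_loss(s["gesture_pred"], s["gesture"])
        if s.get("pose") is not None:             # pose label available
            total += alpha * pose_loss(s["pose_pred"], s["pose"])
    return total
```

Because both heads share one intermediate feature upstream, gradients from the pose term still shape the representation the gesture head uses, which is how the gesture task benefits from pose supervision.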
in Sensors > vol 21 n° 3 (February 2021) . - n° 1007 [article]

Unsupervised deep representation learning for real-time tracking / Ning Wang in International journal of computer vision, vol 129 n° 2 (February 2021)
[article]
Title: Unsupervised deep representation learning for real-time tracking
Document type: Article/Communication
Authors: Ning Wang, Author; Wengang Zhou, Author; Yibing Song, Author; et al.
Year of publication: 2021
Pages: pp 400 - 418
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] unsupervised classification
[IGN terms] convolutional neural network classification
[IGN terms] target detection
[IGN terms] filter
[IGN terms] moving object
[IGN terms] eye tracking
[IGN terms] object recognition
[IGN terms] Siamese neural network
[IGN terms] real time
[IGN terms] tracking
[IGN terms] trajectory (non-spatial vehicle)
[IGN terms] computer vision
Abstract: (author) Advances in visual tracking have continually been driven by deep learning models. Typically, supervised learning is employed to train these models with expensive labeled data. To reduce the workload of manual annotation and learn to track arbitrary objects, we propose an unsupervised learning method for visual tracking. The motivation of our unsupervised learning is that a robust tracker should be effective in bidirectional tracking. Specifically, the tracker should be able to forward-localize a target object in successive frames and backtrace to its initial position in the first frame. Based on this motivation, during training we measure the consistency between forward and backward trajectories to learn a robust tracker from scratch using only unlabeled videos. We build our framework on a Siamese correlation filter network, and propose a multi-frame validation scheme and a cost-sensitive loss to facilitate unsupervised learning. Without bells and whistles, the proposed unsupervised tracker achieves the baseline accuracy of classic fully supervised trackers while running at real-time speed. Furthermore, our unsupervised framework shows potential for leveraging more unlabeled or weakly labeled data to further improve tracking accuracy.
Record number: A2021-353
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s11263-020-01357-4
Online publication date: 21/09/2020
Online: https://doi.org/10.1007/s11263-020-01357-4
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97604
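The forward-backward consistency signal described in the abstract can be sketched as a round-trip check: track a box forward through the frames, then backward, and penalize the distance between the starting box and the back-traced box. Here `tracker` is any hypothetical step function `(frame_a, frame_b, box) -> box`, not the paper's Siamese correlation filter:

```python
def cycle_consistency(tracker, frames, init_box):
    """Forward-then-backward round trip; returns the L1 distance between
    the starting box and the round-trip box (0 for a perfectly
    consistent tracker), usable as an unsupervised training signal."""
    box = init_box
    for prev, cur in zip(frames, frames[1:]):   # forward pass
        box = tracker(prev, cur, box)
    rev = list(reversed(frames))
    for prev, cur in zip(rev, rev[1:]):         # backward pass
        box = tracker(prev, cur, box)
    return sum(abs(u - v) for u, v in zip(init_box, box))
```

An ideal tracker returns exactly to its starting box, so minimizing this quantity over unlabeled videos trains the tracker without any manual annotation, which is the core of the unsupervised scheme.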
in International journal of computer vision > vol 129 n° 2 (February 2021) . - pp 400 - 418 [article]

Other records indexed under this descriptor (titles as rendered; entries that appear in the source only as bare "Permalink" links are omitted):
- 3D urban scene understanding by analysis of LiDAR, color and hyperspectral data / David Duque-Arias (2021)
- Aleatoric uncertainty estimation for dense stereo matching via CNN-based cost volume analysis / Max Mehltretter in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021)
- Apport de la photogrammétrie et de l'intelligence artificielle à la détection des zones amiantées sur les fronts rocheux / Philippe Caudal (2021)
- Apports des méthodes d'apprentissage profond pour la reconnaissance automatique des modes d'occupation des sols et d'objets par télédétection en milieu tropical / Guillaume Rousset (2021)
- Apprentissage profond et IA pour l'amélioration de la robustesse des techniques de localisation par vision artificielle / Achref Elouni (2021)
- Clustering et apprentissage profond sous contraintes pour l'analyse de séries temporelles : Application à l'analyse temporelle incrémentale en télédétection / Baptiste Lafabregue (2021)
- Combining deep learning and mathematical morphology for historical map segmentation / Yizi Chen (2021)
- Connecting images through time and sources: Introducing low-data, heterogeneous instance retrieval / Dimitri Gominski (2021)
- Contributions to graph-based hierarchical analysis for images and 3D point clouds / Leonardo Gigli (2021)
- Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles / Abdelhak Loukkal (2021)
- Deep learning for wildfire progression monitoring using SAR and optical satellite image time series / Puzhao Zhang (2021)
- Description et recherche d'image généralisables pour l'interconnexion et l'analyse multi-source / Dimitri Gominski (2021)
- Détection d'ouvertures par segmentation sémantique de nuages de points 3D : apport de l'apprentissage profond / Camille Lhenry (2021)
- Détection/reconnaissance d'objets urbains à partir de données 3D multicapteurs prises au niveau du sol, en continu / Younes Zegaoui (2021)
- Détection et reconstruction 3D d'arbres urbains par segmentation de nuages de points : apport de l'apprentissage profond / Victor Alteirac (2021)
- Evaluation of a neural network with uncertainty for detection of ice and water in SAR imagery / Nazanin Asadi in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
- Exploration of reinforcement learning algorithms for autonomous vehicle visual perception and control / Florence Carton (2021)
- Extracting event-related information from a corpus regarding soil industrial pollution / Chuanming Dong (2021)
- From point clouds to high-fidelity models - advanced methods for image-based 3D reconstruction / Audrey Richard (2021)
- FuNet: A novel road extraction network with fusion of location data and remote sensing imagery / Kai Zhou in ISPRS International journal of geo-information, vol 10 n° 1 (January 2021)
- Generative adversarial networks to generalise urban areas in topographic maps / Azelle Courtial (2021)
- Image matching from handcrafted to deep features: A survey / Jiayi Ma in International journal of computer vision, vol 29 n° 1 (January 2021)
- Improving traffic sign recognition results in urban areas by overcoming the impact of scale and rotation / Roholah Yazdan in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021)
- Initialization methods of convolutional neural networks for detection of image manipulations / Ivan Castillo Camacho (2021)
- LANet: Local attention embedding to improve the semantic segmentation of remote sensing images / Lei Ding in IEEE Transactions on geoscience and remote sensing, vol 59 n° 1 (January 2021)
- Learning-based representations and methods for 3D shape analysis, manipulation and reconstruction / Marie-Julie Rakotosaona (2021)
- Learning embeddings for cross-time geographic areas represented as graphs / Margarita Khokhlova (2021)