Descriptor
Termes IGN > informatique > intelligence artificielle > apprentissage automatique > apprentissage profond
apprentissage profond
Documents available in this category (647)
Fine-grained object recognition and zero-shot learning in remote sensing imagery / Gencer Sumbul in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
[article]
Title: Fine-grained object recognition and zero-shot learning in remote sensing imagery
Document type: Article/Communication
Authors: Gencer Sumbul; Ramazan Gokberk Cinbis; Selim Aksoy
Publication year: 2018
Article pages: pp 770 - 779
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] arbre urbain
[Termes IGN] image numérique
[Termes IGN] inférence
[Termes IGN] reconnaissance de formes
[Termes IGN] réseau neuronal convolutif
Abstract: (authors) Fine-grained object recognition that aims to identify the type of an object among a large number of subcategories is an emerging application with the increasing resolution that exposes new details in image data. Traditional fully supervised algorithms fail to handle this problem where there is low between-class variance and high within-class variance for the classes of interest with small sample sizes. We study an even more extreme scenario named zero-shot learning (ZSL) in which no training example exists for some of the classes. ZSL aims to build a recognition model for new unseen categories by relating them to seen classes that were previously learned. We establish this relation by learning a compatibility function between image features extracted via a convolutional neural network and auxiliary information that describes the semantics of the classes of interest, using training samples from the seen classes. Then, we show how knowledge transfer can be performed for the unseen classes by maximizing this function during inference. We introduce a new data set that contains 40 different types of street trees in 1-ft spatial resolution aerial data, and evaluate the performance of this model with manually annotated attributes, a natural language model, and a scientific taxonomy as auxiliary information. The experiments show that the proposed model achieves 14.3% recognition accuracy for the classes with no training examples, which is significantly better than the 6.3% random-guess accuracy for 16 test classes, and than three other ZSL algorithms.
Record number: A2018-190
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2754648
Online publication date: 18/10/2017
Online: https://doi.org/10.1109/TGRS.2017.2754648
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89855
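The zero-shot idea summarized in this abstract — learn a mapping between image features and class semantics on seen classes, then recognize unseen classes by matching in attribute space — can be sketched with toy data. Everything below is an illustrative assumption (dimensions, the synthetic linear feature model, and a least-squares learner in place of the paper's CNN features and bilinear compatibility function):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions, not the paper's): feature dim, attribute dim,
# and seen/unseen class counts.
d_img, d_attr = 16, 6
n_seen, n_unseen, per_class = 8, 4, 30

# Per-class attribute vectors: the "auxiliary information" describing
# class semantics (random stand-ins here).
attrs = rng.normal(size=(n_seen + n_unseen, d_attr))
seen = np.arange(n_seen)
unseen = np.arange(n_seen, n_seen + n_unseen)

# Synthetic "image features": a fixed linear rendering of the class
# attributes plus noise, standing in for CNN outputs.
M = rng.normal(size=(d_img, d_attr))

def sample(c, n):
    return attrs[c] @ M.T + 0.05 * rng.normal(size=(n, d_img))

X = np.vstack([sample(c, per_class) for c in seen])
Y = np.repeat(seen, per_class)

# Learn a linear map A (features -> attribute space) by least squares,
# using samples from the *seen* classes only.
A, *_ = np.linalg.lstsq(X, attrs[Y], rcond=None)

def predict(x, candidates):
    """Zero-shot inference: score each candidate class by the cosine
    similarity between the projected feature and its attributes."""
    a = x @ A
    a /= np.linalg.norm(a)
    C = attrs[candidates]
    C = C / np.linalg.norm(C, axis=1, keepdims=True)
    return candidates[np.argmax(C @ a)]

# Evaluate on classes that contributed no training samples at all.
correct = sum(predict(sample(c, 1)[0], unseen) == c for c in unseen)
print(f"unseen-class accuracy: {correct}/{n_unseen}")
```

The map generalizes here only because the toy feature model is globally linear and the seen attributes span the attribute space; the paper replaces both the features and the compatibility function with learned components.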
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 2 (February 2018) . - pp 770 - 779 [article]

Large-scale remote sensing image retrieval by deep hashing neural networks / Yansheng Li in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
[article]
Title: Large-scale remote sensing image retrieval by deep hashing neural networks
Document type: Article/Communication
Authors: Yansheng Li; Yongjun Zhang; Xin Huang; Hu Zhu; Jiayi Ma
Publication year: 2018
Article pages: pp 950 - 965
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal
[Termes IGN] données d'entrainement (apprentissage automatique)
Abstract: (authors) As one of the most challenging tasks of remote sensing big data mining, large-scale remote sensing image retrieval has attracted increasing attention from researchers. Existing large-scale remote sensing image retrieval approaches are generally implemented by using hashing learning methods, which take handcrafted features as inputs and map the high-dimensional feature vector to a low-dimensional binary feature vector to reduce feature-searching complexity. As a means of applying the merits of deep learning, this paper proposes a novel large-scale remote sensing image retrieval approach based on deep hashing neural networks (DHNNs). More specifically, DHNNs are composed of deep feature learning neural networks and hashing learning neural networks and can be optimized in an end-to-end manner. Rather than requiring dedicated expertise and effort in the design of feature descriptors, we can automatically learn good feature extraction operations and feature hashing mappings under the supervision of labeled samples. To broaden the application field, DHNNs are evaluated under two representative remote sensing cases: scarce and sufficient labeled samples. To make up for a lack of labeled samples, DHNNs can be trained via transfer learning in the former case. For the latter case, DHNNs can be trained via supervised learning from scratch with the aid of a vast number of labeled samples. Extensive experiments on one public remote sensing image data set with a limited number of labeled samples and on another public data set with plenty of labeled samples show that the proposed remote sensing image retrieval approach based on DHNNs can remarkably outperform state-of-the-art methods under both of the examined conditions.
Record number: A2018-192
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2756911
Online publication date: 13/10/2017
Online: https://doi.org/10.1109/TGRS.2017.2756911
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89857
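The retrieval pipeline this abstract describes — map high-dimensional features to short binary codes, then search by Hamming distance — can be sketched as follows. A fixed random-projection hash stands in for the learned DHNN (an assumption made for brevity; the paper learns both the features and the hash end-to-end):

```python
import numpy as np

rng = np.random.default_rng(1)

d_feat, n_bits, n_db = 128, 32, 1000  # illustrative sizes

# Fixed random projection as a stand-in for the learned hashing
# network: one bit per projection sign.
P = rng.normal(size=(d_feat, n_bits))

def hash_code(x):
    return (x @ P > 0).astype(np.uint8)

# Pre-compute binary codes for a synthetic image database.
db_feats = rng.normal(size=(n_db, d_feat))
db_codes = hash_code(db_feats)

def retrieve(query_feat, k=5):
    """Return indices of the k database items whose binary codes are
    closest to the query's code in Hamming distance."""
    q = hash_code(query_feat)
    dist = np.count_nonzero(db_codes != q, axis=1)
    return np.argsort(dist, kind="stable")[:k]

# A query identical to a database item has Hamming distance 0 to it,
# so that item should rank at the top.
top = retrieve(db_feats[42])
```

Short binary codes make the search a cheap popcount over a few machine words; learning the projection instead of drawing it at random is what lets DHNNs preserve semantic neighborhoods in Hamming space.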
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 2 (February 2018) . - pp 950 - 965 [article]

Multisource remote sensing data classification based on convolutional neural network / Xiaodong Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
[article]
Title: Multisource remote sensing data classification based on convolutional neural network
Document type: Article/Communication
Authors: Xiaodong Xu; Wei Li; Qiong Ran; Qian Du; Lianru Gao; Bing Zhang
Publication year: 2018
Article pages: pp 937 - 949
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] classification par réseau neuronal
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction automatique
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image hyperspectrale
[Termes IGN] réseau neuronal convolutif
Abstract: (authors) With a growing list of remotely sensed data sources available, how to efficiently exploit useful information from multisource data for better Earth observation is an interesting but challenging problem. In this paper, the classification fusion of hyperspectral imagery (HSI) and data from other sensors, such as light detection and ranging (LiDAR) data, is investigated with a state-of-the-art deep learning model, the two-branch convolutional neural network (CNN). More specifically, a dual-tunnel CNN framework is first developed to extract spectral-spatial features from HSI; in addition, a CNN with cascade blocks is designed for feature extraction from LiDAR or high-resolution visual images. In the feature fusion stage, the spatial and spectral features of HSI are first integrated in the dual-tunnel branch and then combined with the features extracted from the cascade network. Experimental results on several multisource data sets demonstrate that the proposed two-branch CNN achieves better classification performance than several existing methods.
Record number: A2018-191
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2017.2756851
Online publication date: 16/10/2017
Online: https://doi.org/10.1109/TGRS.2017.2756851
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89856
in IEEE Transactions on geoscience and remote sensing > vol 56 n° 2 (February 2018) . - pp 937 - 949 [article]
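The two-branch design in the abstract above — a separate feature extractor per sensor, then feature-level fusion before classification — can be illustrated with a minimal sketch. Random linear layers with ReLU stand in for the paper's dual-tunnel HSI branch and cascade LiDAR branch; all dimensions and the softmax head are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

d_hsi, d_lidar, d_feat, n_classes = 30, 10, 16, 4  # illustrative sizes

# One random linear layer + ReLU per "branch" stands in for each CNN.
W_hsi = rng.normal(size=(d_hsi, d_feat))
W_lidar = rng.normal(size=(d_lidar, d_feat))
W_cls = rng.normal(size=(2 * d_feat, n_classes))

def fuse(x_hsi, x_lidar):
    """Extract per-sensor features, then fuse by concatenation."""
    f_hsi = np.maximum(x_hsi @ W_hsi, 0.0)
    f_lidar = np.maximum(x_lidar @ W_lidar, 0.0)
    return np.concatenate([f_hsi, f_lidar])

def classify(x_hsi, x_lidar):
    """Softmax over class scores computed from the fused features."""
    logits = fuse(x_hsi, x_lidar) @ W_cls
    p = np.exp(logits - logits.max())
    return p / p.sum()

fused = fuse(rng.normal(size=d_hsi), rng.normal(size=d_lidar))
probs = classify(rng.normal(size=d_hsi), rng.normal(size=d_lidar))
```

Concatenation keeps each sensor's features intact and lets the classifier weight them jointly, which is why feature-level (rather than decision-level) fusion is a common default for heterogeneous sensors.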
Title: Advances in airborne Lidar systems and data processing
Document type: Monograph
Authors: Jie Shan, scientific editor; Juha Hyyppä, scientific editor
Publisher: Basel [Switzerland]: Multidisciplinary Digital Publishing Institute MDPI
Publication year: 2018
Extent: 493 p.
Format: 17 x 25 cm
ISBN/ISSN/EAN: 978-3-03842-673-8
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] détection d'arbres
[Termes IGN] données lidar
[Termes IGN] enregistrement de données
[Termes IGN] hauteur des arbres
[Termes IGN] image multibande
[Termes IGN] modèle numérique de terrain
[Termes IGN] photon
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] semis de points
[Termes IGN] télédétection par lidar
Abstract: (publisher) This book collects the papers in the special issue "Airborne Laser Scanning" in Remote Sensing (Nov. 2016) and several other selected papers published in the same journal in the past few years. Our intention is to reflect recent technological developments and innovative techniques in this field. The book consists of 23 papers in six subject areas: 1) single-photon and Geiger-mode lidar, 2) multispectral lidar, 3) waveform lidar, 4) registration of point clouds, 5) trees and terrain, and 6) building extraction. The book is a valuable resource for scientists, engineers, developers, instructors, and graduate students interested in lidar systems and data processing.
Contents:
1- Single photon and Geiger-mode Lidar
2- Multispectral Lidar
3- Waveform Lidar
4- Registration of Point Clouds
5- Trees and Terrain
6- Building Extraction
Record number: 25932
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Monograph
Online: https://doi.org/10.3390/books978-3-03842-674-5
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96235
Title: Apprentissage de modalités auxiliaires pour la localisation basée vision
Document type: Article/Communication
Authors: Nathan Piasco; Désiré Sidibé; Valérie Gouet-Brunet; Cédric Demonceaux
Publisher: Saint-Mandé: Institut national de l'information géographique et forestière - IGN (2012-)
Publication year: 2018
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: RFIAP 2018, Reconnaissance des Formes, Image, Apprentissage et Perception, 01/06/2018, Champs-sur-Marne, France, open-access proceedings
Extent: 8 p.
Format: 21 x 30 cm
General note: bibliography
Language: French (fre)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] localisation basée vision
[Termes IGN] réseau neuronal convolutif
Abstract: (authors, translated from the French) In this paper, we present a new method for learning from auxiliary modalities to improve a vision-based localization system. To benefit from auxiliary modality information available during training, we train a convolutional network to recreate the appearance of these side modalities. We validate our approach by applying it to an image description problem for localization. The results show that our system can improve an image descriptor by correctly learning the appearance of a side modality. Compared with the state of the art, the proposed network achieves comparable localization results while being more compact and simpler to train.
Record number: C2018-006
Authors' affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésNat
DOI: none
Online publication date: 28/06/2018
Online: https://rfiap2018.ign.fr/programmeCFPT
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90335
Digital documents:
open access: Apprentissage de modalités auxiliaires ... - publisher's PDF (Adobe Acrobat)

Classification à très haute résolution (THR) spatiale et fusion d'occupation des sols (OCS) / Tristan Postadjian (2018)
Classification à très large échelle d'images satellite à très haute résolution spatiale par réseaux de neurones convolutifs / Tristan Postadjian (2018)
Convolutional neural network for traffic signal inference based on GPS traces / Yann Méneroux (2018)
Decision fusion of SPOT6 and multitemporal Sentinel2 images for urban area detection / Cyril Wendl (2018)
Deep learning based vehicular mobility models for intelligent transportation systems / Jian Zhang (2018)
Domain adaptation for large scale classification of very high resolution satellite images with deep convolutional neural networks / Tristan Postadjian (2018)
From Google Maps to a fine-grained catalog of street trees / Steve Branson in ISPRS Journal of photogrammetry and remote sensing, vol 135 (January 2018)
Fusion tardive d'images SPOT-6/7 et de données multitemporelles Sentinel-2 pour la détection de la tache urbaine / Cyril Wendl (2018)
Learning multiscale deep features for high-resolution satellite image scene classification / Qingshan Liu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)
Localisation d'objets urbains à partir de sources multiples dont des images aériennes / Lionel Pibre (2018)
Localisation par l'image en milieu urbain : application à la réalité augmentée / Antoine Fond (2018)
On the production of semantic and textured 3D meshes of large scale urban environments from mobile mapping images and LIDAR scans / Mohamed Boussaha (2018)
Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne / Jean Ogier du Terrail (2018)
Superpixel partitioning of very high resolution satellite images for large-scale classification perspectives with deep convolutional neural networks / Tristan Postadjian (2018)
SuperPoint Graph : segmentation sémantique de nuages de points LiDAR à grande échelle / Loïc Landrieu (2018)
Toponym matching through deep neural networks / Rui Santos in International journal of geographical information science IJGIS, vol 32 n° 1-2 (January - February 2018)
Complex-valued convolutional neural network and its application in polarimetric SAR image classification / Zhimian Zhang in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
Discriminative feature learning for unsupervised change detection in heterogeneous images based on a coupled neural network / Wei Zhao in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
High-resolution aerial image labeling with convolutional neural networks / Emmanuel Maggiori in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
Multilayer projective dictionary pair learning and sparse autoencoder for PolSAR image classification / Yanqiao Chen in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
Unsupervised-restricted deconvolutional neural network for very high resolution remote-sensing image classification / Yiting Tao in IEEE Transactions on geoscience and remote sensing, vol 55 n° 12 (December 2017)
Hybrid image noise reduction algorithm based on genetic ant colony and PCNN / Chong Shen in The Visual Computer, vol 33 n° 11 (November 2017)
Atmospheric correction over coastal waters using multilayer neural networks / Yongzhen Fan in Remote sensing of environment, vol 199 (15 September 2017)
Forest change detection in incomplete satellite images with deep neural networks / Salman H. Khan in IEEE Transactions on geoscience and remote sensing, vol 55 n° 9 (September 2017)
Recurrent neural networks to correct satellite image classification maps / Emmanuel Maggiori in IEEE Transactions on geoscience and remote sensing, vol 55 n° 9 (September 2017)
Remote sensing scene classification by unsupervised representation learning / Xiaoqiang Lu in IEEE Transactions on geoscience and remote sensing, vol 55 n° 9 (September 2017)
SDE: A novel selective, discriminative and equalizing feature representation for visual recognition / Guo-Sen Xie in International journal of computer vision, vol 124 n° 2 (1 September 2017)
SIG et intelligence artificielle : quels développements et quel futur ? / Christian Carolin in Géomatique expert, n° 118 (septembre - octobre 2017)
Learning and transferring deep joint spectral–spatial features for hyperspectral classification / Jingxiang Yang in IEEE Transactions on geoscience and remote sensing, vol 55 n° 8 (August 2017)
Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks / Shaohui Mei in IEEE Transactions on geoscience and remote sensing, vol 55 n° 8 (August 2017)
A relative evaluation of random forests for land cover mapping in an urban area / Di Shi in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 8 (August 2017)
Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks / Rasha Alshehhi in ISPRS Journal of photogrammetry and remote sensing, vol 130 (August 2017)
Learning to diversify deep belief networks for hyperspectral image classification / Ping Zhong in IEEE Transactions on geoscience and remote sensing, vol 55 n° 6 (June 2017)
Investigating the potential of deep neural networks for large-scale classification of very high resolution satellite images / Tristan Postadjian in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-1/W1 (May 2017)
Deep supervised and contractive neural network for SAR image classification / Jie Geng in IEEE Transactions on geoscience and remote sensing, vol 55 n° 4 (April 2017)
Amélioration de la vitesse et de la qualité d'image du rendu basé image / Rodrigo Ortiz Cayón (2017)