Descriptor
Documents available in this category (275)
Journées de la recherche 2019 / Anonyme in Géomatique expert, n° 127 (April - May 2019)
[article]
Title: Journées de la recherche 2019
Document type: Article/Communication
Authors: Anonyme
Publication year: 2019
Pagination: pp 23 - 34
Languages: French (fre)
Descriptor: [IGN subject headings] Geographic information
[IGN terms] deep learning
[IGN terms] knowledge base
[IGN terms] Cassini map
[IGN terms] geolocated data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] georeferencing
[IGN terms] agricultural parcel
[IGN terms] parish
[IGN terms] film photography
[IGN terms] data quality
[IGN terms] convolutional neural network
[IGN terms] segmentation
[IGN terms] image segmentation
[IGN terms] point cloud
[IGN terms] time series
Abstract: (Author) This year, the IGN research days gave pride of place to neural networks, a decidedly fashionable topic, as well as to various initiatives for archiving and consulting historical geographic data.
Record number: A2019-308
Author affiliation: non IGN
Theme: GEOMATIQUE/IMAGERIE/INFORMATIQUE/POSITIONNEMENT
Nature: Article
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93284
in Géomatique expert > n° 127 (April - May 2019) . - pp 23 - 34 [article]
Copies (1)
Barcode: IFN-001-P002141 | Call number: PER | Medium: Journal | Location: Nogent-sur-Vernisson | Section: Periodicals room | Availability: Not for loan

Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments
Document type: Article/Communication
Authors: Zhipeng Luo; Jonathan Li; Zhenlong Xiao; et al.
Publication year: 2019
Pagination: pp 44 - 58
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] spatial join
[IGN terms] 3D object
[IGN terms] object recognition
[IGN terms] multiple representation
[IGN terms] convolutional neural network
[IGN terms] point cloud
Abstract: (Author) Most existing 3D object recognition methods still suffer from low descriptiveness and weak robustness, although remarkable progress has been made in 3D computer vision. The major challenge lies in effectively mining high-level 3D shape features. This paper presents a high-level feature learning framework for 3D object recognition that fuses multiple 2D representations of point clouds. The framework has two key components: (1) three discriminative low-level 3D shape descriptors for obtaining multi-view 2D representations of 3D point clouds; these descriptors preserve both local and global spatial relationships of points from different perspectives and build a bridge between 3D point clouds and 2D convolutional neural networks (CNNs); (2) a two-stage fusion network, consisting of a deep feature learning module and two fusion modules, for extracting and fusing high-level features. The proposed method was tested on three datasets: the Sydney Urban Objects dataset and two datasets acquired by a mobile laser scanning (MLS) system along urban roads. Comprehensive experiments demonstrate that our method is superior to state-of-the-art methods in descriptiveness, robustness and efficiency, achieving recognition rates of 94.6%, 93.1% and 74.9% on the three datasets, respectively.
Record number: A2019-137
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.01.024
Online publication date: 16/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.01.024
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92468
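The core move in the abstract above, turning an unordered 3D point cloud into 2D images that a standard CNN can consume, can be illustrated with a minimal bird's-eye-view occupancy projection. This is a hypothetical sketch only: the function name, grid parameters, and the occupancy-count descriptor are illustrative assumptions, not the paper's three actual descriptors.

```python
def birds_eye_grid(points, cell=0.5, size=8):
    """Project 3D points (x, y, z) onto a 2D XY occupancy grid.

    Returns a size x size grid (list of lists) counting the points that
    fall into each cell, with the grid centred on the origin. A real
    multi-view pipeline would render several such 2D views and feed
    each one to a 2D CNN.
    """
    grid = [[0] * size for _ in range(size)]
    half = size * cell / 2.0
    for x, y, _z in points:
        col = int((x + half) / cell)
        row = int((y + half) / cell)
        if 0 <= row < size and 0 <= col < size:  # drop points outside the grid
            grid[row][col] += 1
    return grid

# Three toy points: two near the origin share a cell, one sits further out.
pts = [(0.1, 0.1, 1.0), (0.2, 0.3, 0.5), (-1.6, 0.1, 2.0)]
bev = birds_eye_grid(pts)
```

Complementary views (for example a front projection over XZ, or per-cell maximum height instead of counts) would give the CNN the "different perspectives" the abstract refers to.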
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019) . - pp 44 - 58 [article]
Copies (3)
Barcode: 081-2019041 | Call number: RAB | Medium: Journal | Location: Centre de documentation | Section: In storage L003 | Availability: Available
Barcode: 081-2019043 | Call number: DEP-RECP | Medium: Journal | Location: LASTIG | Section: Unit deposit | Availability: Not for loan
Barcode: 081-2019042 | Call number: DEP-RECF | Medium: Journal | Location: Nancy | Section: Unit deposit | Availability: Not for loan

Vehicle detection in aerial images / Michael Ying Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 4 (April 2019)
[article]
Title: Vehicle detection in aerial images
Document type: Article/Communication
Authors: Michael Ying Yang; Wentong Liao; Xinbo Li; et al.
Publication year: 2019
Pagination: pp 297 - 304
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] image understanding
[IGN terms] object detection
[IGN terms] entropy
[IGN terms] aerial image
[IGN terms] orthoimage
[IGN terms] classification accuracy
[IGN terms] image quality
[IGN terms] convolutional neural network
[IGN terms] motor vehicle
Abstract: (Author) The detection of vehicles in aerial images is widely used in many applications. Compared with object detection in ground-view images, vehicle detection in aerial images remains challenging because of the small size of vehicles and the complexity of the background. In this paper, we propose a novel double focal loss convolutional neural network (DFL-CNN) framework. In the proposed framework, skip connections are used in the CNN structure to enhance feature learning, and the focal loss function is substituted for the conventional cross-entropy loss in both the region proposal network (RPN) and the final classifier. We further introduce ITCVD, the first large-scale vehicle detection dataset with ground-truth annotations for all the vehicles in the scene. We demonstrate the performance of our model on the existing benchmark German Aerospace Center (DLR) 3K dataset as well as on ITCVD. The experimental results show that our DFL-CNN outperforms the baselines on vehicle detection.
Record number: A2019-163
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.4.297
Online publication date: 01/04/2019
Online: https://doi.org/10.14358/PERS.85.4.297
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92568
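The focal loss that DFL-CNN substitutes for cross-entropy is compact enough to write out. The sketch below follows the standard binary formulation (as popularized by RetinaNet), with gamma and alpha set to common default values; these defaults are an assumption here, not necessarily the paper's settings.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for one predicted probability p in (0, 1) and label y in {0, 1}.

    The (1 - p_t) ** gamma factor down-weights easy, well-classified
    examples, so training concentrates on hard ones, such as small
    vehicles against cluttered aerial backgrounds.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy positive (p = 0.9) is penalised far less than a hard one (p = 0.1).
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
```

With gamma = 0 and alpha = 0.5 this reduces to half the usual cross-entropy, which is the loss it replaces in both the RPN and the final classifier.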
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 4 (April 2019) . - pp 297 - 304 [article]
Copies (1)
Barcode: 105-2019041 | Call number: SL | Medium: Journal | Location: Centre de documentation | Section: Journals on shelf | Availability: Available

DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn / Roberto Interdonato in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019)
[article]
Title: DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn
Document type: Article/Communication
Authors: Roberto Interdonato; Dino Ienco; Raffaele Gaetano; Kenji Ose
Publication year: 2019
Pagination: pp 91 - 104
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] supervised classification
[IGN terms] high-resolution image
[IGN terms] Sentinel-MSI image
[IGN terms] land cover
[IGN terms] convolutional neural network
[IGN terms] time series
Abstract: (Author) Modern Earth observation systems continuously generate huge amounts of data. A notable example is the Sentinel-2 mission, which provides images at high spatial resolution (up to 10 m) with a short revisit period (every 5 days), and whose acquisitions can be organized into Satellite Image Time Series (SITS). While the use of SITS has proved beneficial for Land Use/Land Cover (LULC) map generation, most machine learning approaches commonly used in the remote sensing field fail to take advantage of the spatio-temporal dependencies present in such data. Recently, new-generation deep learning methods have significantly advanced research in this field. These approaches have generally focused on a single type of neural network, i.e., Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), which model different but complementary information: spatial autocorrelation (CNNs) and temporal dependencies (RNNs). In this work, we propose the first deep learning architecture for the analysis of SITS data, namely DuPLO (DUal view Point deep Learning architecture for time series classificatiOn), which combines convolutional and recurrent neural networks to exploit their complementarity. Our hypothesis is that, since CNNs and RNNs capture different aspects of the data, a combination of both models produces a more diverse and complete representation of the information for the underlying land cover classification task. Experiments carried out on two study sites with different land cover characteristics (the Gard site in mainland France and Reunion Island, an overseas department of France in the Indian Ocean) demonstrate the significance of our proposal.
Record number: A2019-115
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.01.011
Online publication date: 24/01/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.01.011
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92441
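To make the dual-view idea concrete, the toy sketch below gives each "view" of a pixel's image time series its own branch and concatenates the two resulting features, mirroring how DuPLO fuses its convolutional and recurrent representations. The branches are deliberately trivial stand-ins (a spatial mean and an exponential moving average), assumed here purely for illustration; they are not the paper's CNN and RNN.

```python
def spatial_branch(patch):
    """Stand-in for the CNN view: mean value over a 2D spatial patch."""
    flat = [v for row in patch for v in row]
    return [sum(flat) / len(flat)]

def temporal_branch(series, decay=0.5):
    """Stand-in for the RNN view: exponential moving average over time."""
    h = 0.0
    for x in series:
        h = decay * h + (1.0 - decay) * x
    return [h]

def fuse(patch, series):
    """Concatenate the two views into one feature vector for a classifier."""
    return spatial_branch(patch) + temporal_branch(series)

# A 2 x 2 reflectance patch (spatial context) and a 3-date series (temporal).
features = fuse([[0.2, 0.4], [0.6, 0.8]], [0.1, 0.3, 0.5])
```

The point of the concatenation is exactly the paper's hypothesis: because the two branches summarize different aspects of the data, the fused vector is a richer input for the land cover classifier than either view alone.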
in ISPRS Journal of photogrammetry and remote sensing > vol 149 (March 2019) . - pp 91 - 104 [article]
Copies (3)
Barcode: 081-2019031 | Call number: RAB | Medium: Journal | Location: Centre de documentation | Section: In storage L003 | Availability: Available
Barcode: 081-2019033 | Call number: DEP-RECP | Medium: Journal | Location: LASTIG | Section: Unit deposit | Availability: Not for loan
Barcode: 081-2019032 | Call number: DEP-RECF | Medium: Journal | Location: Nancy | Section: Unit deposit | Availability: Not for loan

Learning to segment moving objects / Pavel Tokmakov in International journal of computer vision, vol 127 n° 3 (March 2019)
[article]
Title: Learning to segment moving objects
Document type: Article/Communication
Authors: Pavel Tokmakov; Cordelia Schmid; Karteek Alahari
Publication year: 2019
Pagination: pp 282 - 301
General note: Bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] temporal consistency
[IGN terms] video frame
[IGN terms] moving object
[IGN terms] object recognition
[IGN terms] convolutional neural network
[IGN terms] image sequence
Abstract: (Author) We study the problem of segmenting moving objects in unconstrained videos. Given a video, the task is to segment all the objects that exhibit independent motion in at least one frame. We formulate this as a learning problem and design our framework with three cues: (1) independent object motion between a pair of frames, which complements object recognition, (2) object appearance, which helps to correct errors in motion estimation, and (3) temporal consistency, which imposes additional constraints on the segmentation. The framework is a two-stream neural network with an explicit memory module. The two streams encode appearance and motion cues in a video sequence respectively, while the memory module captures the evolution of objects over time, exploiting temporal consistency. The motion stream is a convolutional neural network trained on synthetic videos to segment independently moving objects in the optical flow field. The module that builds a "visual memory" of the video, i.e., a joint representation of all the video frames, is realized with a convolutional recurrent unit learned from a small number of training video sequences. For every pixel in a frame of a test video, our approach assigns an object or background label based on the learned spatio-temporal features as well as the "visual memory" specific to the video. We evaluate our method extensively on three benchmarks: DAVIS, the Freiburg-Berkeley motion segmentation dataset and SegTrack. In addition, we provide an extensive ablation study investigating both the choice of training data and the influence of each component of the proposed framework.
Record number: A2018-601
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1122-2
Online publication date: 22/09/2018
Online: https://doi.org/10.1007/s11263-018-1122-2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92528
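The "visual memory" described above can be caricatured with a scalar gated recurrence: at each frame, appearance and motion evidence for one pixel are fused and blended into a persistent state. The scalar GRU-style update and the hand-picked weights below are assumptions for exposition; the paper's module is a convolutional recurrent unit operating over whole frames.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_memory(h, appearance, motion, w_gate=1.0, w_in=1.0):
    """Blend previous memory h with fused two-stream evidence for one pixel."""
    evidence = 0.5 * (appearance + motion)  # fuse appearance and motion streams
    z = sigmoid(w_gate * evidence)          # gate: how much to rewrite memory
    return (1.0 - z) * h + z * math.tanh(w_in * evidence)

# Three frames of consistently object-like evidence push the memory
# toward a confident "object" state, enforcing temporal consistency.
h = 0.0
for appearance, motion in [(0.9, 0.8), (0.95, 0.7), (0.9, 0.85)]:
    h = update_memory(h, appearance, motion)
```

Because the state is carried across frames, a pixel's label can stay stable even when a single frame's motion estimate is noisy, which is the role temporal consistency plays in the framework.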
in International journal of computer vision > vol 127 n° 3 (March 2019) . - pp 282 - 301 [article]

Further results in this category:
Semantic understanding of scenes through the ADE20K dataset / Bolei Zhou in International journal of computer vision, vol 127 n° 3 (March 2019)
Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery / Lichao Mou in IEEE Transactions on geoscience and remote sensing, vol 57 n° 2 (February 2019)
Analyse d’images par méthode de Deep Learning appliquée au contexte routier en conditions météorologiques dégradées / Khouloud Dahmane (2019)
Challenges in grassland mowing event detection with multimodal Sentinel images / Anatol Garioud (2019)
Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
DataPink, l'IA au service de l'information géographique / Anonyme in Géomatique expert, n° 126 (January - February 2019)
Evaluating SAR-optical sensor fusion for aboveground biomass estimation in a Brazilian tropical forest / Aline Bernarda Debastiani in Annals of forest research, vol 62 n° 1 (January - June 2019)