Descriptor
Documents available in this category (63)
Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM) / Wenzhi Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM)
Document type: Article/Paper
Authors: Wenzhi Zhao, Author; Yanchen Bo, Author; Jiage Chen, Author; et al.
Year of publication: 2019
Pages: pp 237 - 250
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] semantic class
[IGN terms] image understanding
[IGN terms] data fusion
[IGN terms] high-resolution image
[IGN terms] object recognition
[IGN terms] urban scene
Abstract: (Author) Urban scenes refer to city blocks, the basic units of megacities; they play an important role in citizens' welfare and city management. Remote sensing imagery, with large-scale coverage and accurate target descriptions, has been regarded as an ideal solution for monitoring the urban environment. However, due to the heterogeneity of remote sensing images, it is difficult to access their geographical content at the object level, let alone understand urban scenes at the block level. Recently, deep learning-based strategies have been applied to interpret urban scenes with remarkable accuracy. However, deep neural networks require a substantial number of training samples, which are hard to obtain, especially for high-resolution images. Meanwhile, crowd-sourced OpenStreetMap (OSM) data provides rich annotation information about urban targets but may suffer from insufficient sampling (limited by the places people can go). As a result, the combination of OSM and remote sensing images for efficient urban scene recognition is urgently needed. In this paper, we present a novel strategy to transfer existing OSM data to high-resolution images for semantic element determination and urban scene understanding. Specifically, an object-based convolutional neural network (OCNN) is utilized for geographical object detection, fed with rich semantic elements derived from OSM data. Then, geographical objects are further assigned functional labels by integrating points of interest (POIs), which carry rich semantic terms such as commercial or educational labels. Lastly, the categories of urban scenes are acquired from the semantic objects inside them. Experimental results indicate that the proposed method is able to classify complex urban scenes. The classification accuracies on the Beijing dataset are as high as 91% at the object level and 88% at the scene level.
Additionally, we are probably the first to investigate object-level semantic mapping by incorporating high-resolution images and OSM data of urban areas. Consequently, the presented method is effective in delineating urban scenes and could further boost urban environment monitoring and planning with high-resolution images.
Record number: A2019-209
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.03.019
Online publication date: 29/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.03.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92675
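The last step of the pipeline described in the abstract — deriving a block's scene category from the functional labels of the objects it contains — can be illustrated with a toy aggregation. This is a minimal sketch under assumptions: the function name and the majority-vote rule are illustrative, not the paper's exact procedure.

```python
from collections import Counter

def scene_category(object_labels):
    """Toy scene-level labelling: a city block is categorised by the most
    frequent functional label among its detected objects (hypothetical
    simplification of the paper's OCNN + POI pipeline)."""
    if not object_labels:
        return "unknown"
    # Majority vote over object-level functional labels.
    return Counter(object_labels).most_common(1)[0][0]

# Objects detected inside one block, labelled via POI semantics:
block = ["commercial", "commercial", "residential", "educational"]
print(scene_category(block))  # → commercial
```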
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 237 - 250 [article]
Copies (3)
Barcode       Call number  Medium   Location                 Section             Availability
081-2019051   RAB          Journal  Centre de documentation  In reserve L003     Available
081-2019053   DEP-RECP     Journal  LASTIG                   On deposit in unit  Not for loan
081-2019052   DEP-RECF     Journal  Nancy                    On deposit in unit  Not for loan

Pairwise coarse registration of point clouds in urban scenes using voxel-based 4-planes congruent sets / Yusheng Xu in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Pairwise coarse registration of point clouds in urban scenes using voxel-based 4-planes congruent sets
Document type: Article/Paper
Authors: Yusheng Xu, Author; Richard Boerner, Author; Wei Yao, Author; et al.
Year of publication: 2019
Pages: pp 106 - 123
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] image matching
[IGN terms] congruence
[IGN terms] 4D data
[IGN terms] lidar data
[IGN terms] spatiotemporal data
[IGN terms] stereoscopic model
[IGN terms] octree
[IGN terms] RANSAC (algorithm)
[IGN terms] urban scene
[IGN terms] point cloud
[IGN terms] planar surface
[IGN terms] voxel
Abstract: (Author) To ensure complete coverage when measuring a large-scale urban area, pairwise registration between point clouds acquired via terrestrial laser scanning or stereo image matching is usually necessary when there is insufficient georeferencing information from additional GNSS and INS sensors. In this paper, we propose a semi-automatic, target-less method for coarse registration of point clouds using geometric constraints of voxel-based 4-plane congruent sets (V4PCS). Planar patches are first extracted from the voxelized point clouds. Then, transformation-invariant 4-plane congruent sets are constructed from the extracted planar surfaces in each point cloud. Initial transformation parameters between point clouds are estimated from the corresponding congruent sets with the highest registration scores in a RANSAC process. Finally, a closed-form solution yields optimized transformation parameters by finding all corresponding planar patches using the initial transformation parameters. Experimental results reveal that the proposed method is effective for registering point clouds acquired from various scenes. A success rate of better than 80% was achieved, with average rotation errors of about 0.5 degrees and average translation errors of less than approximately 0.6 m. In addition, the proposed method is more efficient than other baseline methods under the same hardware and software configuration.
Record number: A2019-207
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.015
Online publication date: 18/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.015
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92673
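The closed-form refinement step mentioned in the abstract — recovering a rigid transform from matched planes written as n·x = d — can be sketched with NumPy. This is an illustrative reconstruction under assumptions, not the authors' code: it presumes plane correspondences have already been found by the congruent-set RANSAC stage, and the function name is hypothetical.

```python
import numpy as np

def transform_from_planes(n_src, d_src, n_dst, d_dst):
    """Closed-form rigid transform from >= 3 matched planes n.x = d.
    Rotation: Kabsch alignment of the matched unit normals.
    Translation: under x' = R x + t a plane (n, d) maps to (R n, d + (R n).t),
    so n_dst @ t = d_dst - d_src is solved in the least-squares sense."""
    P, Q = np.asarray(n_src, float), np.asarray(n_dst, float)
    U, _, Vt = np.linalg.svd(P.T @ Q)        # H = sum of n_i m_i^T
    s = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T
    t, *_ = np.linalg.lstsq(Q, np.asarray(d_dst, float) - np.asarray(d_src, float),
                            rcond=None)
    return R, t
```

With at least three non-coplanar matched planes, both the rotation and the translation are uniquely determined.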
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 106 - 123 [article]
Copies (3)
Barcode       Call number  Medium   Location                 Section             Availability
081-2019051   RAB          Journal  Centre de documentation  In reserve L003     Available
081-2019053   DEP-RECP     Journal  LASTIG                   On deposit in unit  Not for loan
081-2019052   DEP-RECF     Journal  Nancy                    On deposit in unit  Not for loan

Improving LiDAR classification accuracy by contextual label smoothing in post-processing / Nan Li in ISPRS Journal of photogrammetry and remote sensing, vol 148 (February 2019)
[article]
Title: Improving LiDAR classification accuracy by contextual label smoothing in post-processing
Document type: Article/Paper
Authors: Nan Li, Author; Chun Liu, Author; Norbert Pfeifer, Author
Year of publication: 2019
Pages: pp 13 - 31
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] random forest classification
[IGN terms] lidar data
[IGN terms] geolocated 3D data
[IGN terms] graph
[IGN terms] value smoothing
[IGN terms] post-processing
[IGN terms] classification accuracy
[IGN terms] regularization
[IGN terms] urban scene
[IGN terms] point cloud
Abstract: (Author) We propose a contextual label-smoothing method to improve LiDAR classification accuracy in a post-processing step. Within the framework of global graph-structured regularization, we enhance the effectiveness of label smoothing in two ways. First, each point can collect sufficient label-relevant neighborhood information to verify its label, based on an optimal graph. Second, the input label probability set is improved by probabilistic label relaxation so as to be more consistent with the spatial context. With this optimal graph and a reliable label probability set, the final labels are computed by graph-structured regularization. We demonstrate the contextual label-smoothing approach on two separate urban airborne LiDAR datasets with complex urban scenes. Significant improvements in classification accuracy are achieved without losing small objects (such as façades and cars). The overall accuracy is increased by 7.01% on the Vienna dataset and 6.88% on the Vaihingen dataset. Moreover, most large wrongly-labeled regions are corrected by long-range interactions derived from the optimal graph, and misclassified regions that lack neighborhood communication of correct labels are also corrected by the probabilistic label relaxation.
Record number: A2019-069
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.11.022
Online publication date: 13/12/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.11.022
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92156
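The probabilistic label relaxation idea from the abstract — pulling each point's class probabilities toward agreement with its graph neighbours before the final regularization — can be sketched as follows. This is a toy version under assumed notation; the paper additionally builds the neighbourhood graph optimally and applies graph-structured regularization, neither of which is reproduced here.

```python
import numpy as np

def relax_labels(probs, neighbors, n_iter=5):
    """Iterative probabilistic label relaxation on a point graph.
    probs:     (n_points, n_classes) per-point class probabilities.
    neighbors: list of neighbour-index lists (the graph edges)."""
    p = np.asarray(probs, dtype=float)
    for _ in range(n_iter):
        # Contextual support: average probability over each point's neighbours.
        ctx = np.stack([p[idx].mean(axis=0) for idx in neighbors])
        p = p * ctx                        # reinforce locally consistent labels
        p /= p.sum(axis=1, keepdims=True)  # renormalise to a distribution
    return p
```

In this sketch, a point whose initial label disagrees with a confident neighbourhood gets flipped after a few iterations, mimicking how isolated misclassifications are corrected by spatial context.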
in ISPRS Journal of photogrammetry and remote sensing > vol 148 (February 2019) . - pp 13 - 31 [article]
Copies (3)
Barcode       Call number  Medium   Location                 Section             Availability
081-2019021   RAB          Journal  Centre de documentation  In reserve L003     Available
081-2019023   DEP-RECP     Journal  LASTIG                   On deposit in unit  Not for loan
081-2019022   DEP-RECF     Journal  Nancy                    On deposit in unit  Not for loan
Title: Learning scene geometry for visual localization in challenging conditions
Document type: Article/Paper
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Valérie Gouet-Brunet, Author; Cédric Demonceaux, Author
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Year of publication: 2019
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ICRA 2019, International Conference on Robotics and Automation, 20/05/2019 - 24/05/2019, Montréal, Québec, Canada (Proceedings IEEE)
Pages: pp 9094 - 9100
Format: 21 x 30 cm
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] visual analysis
[IGN terms] image matching
[IGN terms] depth map
[IGN terms] descriptor
[IGN terms] image geometry
[IGN terms] RGB image
[IGN terms] vision-based localization
[IGN terms] localization accuracy
[IGN terms] night-time imagery
[IGN terms] robotics
[IGN terms] urban scene
[IGN terms] diurnal variation
[IGN terms] seasonal variation
[IGN terms] computer vision
Abstract: (author) We propose a new approach for outdoor, large-scale, image-based localization that can deal with challenging scenarios such as cross-season, cross-weather, day/night and long-term localization. The key component of our method is a new learned global image descriptor that can effectively benefit from scene geometry information during training. At test time, our system is capable of inferring the depth map related to the query image and uses it to increase localization accuracy. We are able to increase recall@1 performance by 2.15% on a cross-weather and long-term localization scenario and by 4.24 points on a challenging winter/summer localization sequence versus state-of-the-art methods. Our method can also use weakly annotated data to localize night images against a reference dataset of daytime images.
Record number: C2019-002
Authors' affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICRA.2019.8794221
Online publication date: 12/08/2019
Online: http://doi.org/10.1109/ICRA.2019.8794221
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93774
Digital documents
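The retrieval side of descriptor-based localization — comparing a query image's global descriptor against a database of geo-referenced reference descriptors — can be sketched as a cosine-similarity nearest-neighbour search. This is a generic sketch only: the learned, geometry-aware descriptor itself is the paper's contribution and is not reproduced here, and the function name is an assumption.

```python
import numpy as np

def retrieve(query_desc, ref_descs):
    """Nearest-neighbour image retrieval by cosine similarity of global
    descriptors; the index of the best-matching reference image is
    returned (its known pose would then serve as the localization)."""
    q = np.asarray(query_desc, float)
    R = np.asarray(ref_descs, float)
    q = q / np.linalg.norm(q)
    R = R / np.linalg.norm(R, axis=1, keepdims=True)
    return int(np.argmax(R @ q))  # best-matching reference index
```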
open access
Learning scene geometry... - author pdf (Adobe Acrobat PDF)

Semantic aware quality evaluation of 3D building models : Modeling and simulation / Oussama Ennafii (2019)
Title: Semantic aware quality evaluation of 3D building models : Modeling and simulation
Original title: Evaluation de la qualité des modèles 3D de bâtiments
Document type: Thesis/HDR
Authors: Oussama Ennafii, Author; Clément Mallet, Thesis supervisor; Florent Lafarge, Thesis supervisor
Publisher: Champs/Marne: Université Paris-Est
Year of publication: 2019
Extent: 238 p.
Format: 21 x 30 cm
General note: Bibliography
Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy delivered by Université Paris-Est, Speciality Geographical Information Sciences and Technologies
The thesis received the 2020 EuroSDR PhD Award.
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] random forest classification
[IGN terms] support vector machine classification
[IGN terms] error detection
[IGN terms] generalization
[IGN terms] very high resolution image
[IGN terms] semantic information
[IGN terms] 3D urban model
[IGN terms] digital surface model
[IGN terms] 3D modeling
[IGN terms] building modeling
[IGN terms] urban scene
[IGN terms] taxonomy
Decimal index: THESE Theses and HDRs
Abstract: (author) The automatic generation of 3D building models from geospatial data is now a standard procedure. An abundant literature covers the last two decades and several software packages are now available. However, urban areas are very complex environments. Inevitably, practitioners still have to visually assess, at city scale, the correctness of these models and detect frequent reconstruction errors. Such a process relies on experts and is highly time-consuming, at approximately two hours/km² per expert. This work proposes an approach for automatically evaluating the quality of 3D building models. Potential errors are compiled in a novel hierarchical and modular taxonomy. This allows, for the first time, fidelity and modeling errors to be disentangled, whatever the level of detail of the modeled buildings. The quality of models is predicted using the geometric properties of buildings and, when available, Very High Resolution images and Digital Surface Models. A baseline of handcrafted, yet generic, features is fed into Random Forest or Support Vector Machine classifiers. Richer features, relying on graph kernels as well as Scattering Networks, are proposed to better take structure into account. Both the multi-class and the multi-label cases are studied: due to the interdependence between classes of errors, it is possible to retrieve all errors at the same time while simply predicting correct and erroneous buildings. The proposed framework was tested on three distinct urban areas in France with more than 3,000 buildings. F-score values of 80%-99% are attained for the most frequent errors. For scalability purposes, the impact of the urban-area composition on error prediction was also studied, in terms of transferability, generalization, and representativeness of the classifiers.
It shows the necessity of multi-modal remote sensing data and of mixing training samples from various cities to ensure stable detection ratios, even with very limited training set sizes.
Contents: 1- Introduction
2- State of the art
3- Semantic evaluation of 3D models
4- A learning approach for quality evaluation
5- Assessing the learned approach
6- Computing a better representation
7- Assessing the advanced features
8- Conclusion
Record number: 25860
Authors' affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: French thesis
Thesis note: Doctoral thesis: Speciality: Geographical Information Sciences and Technologies: Paris-Est, 2019
Host organization: Lastig (IGN)
nature-HAL: Thèse
DOI: none
Online: https://hal.science/tel-02879809
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95395
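The multi-label set-up described in the abstract — one binary decision per error class of the taxonomy, predicted from per-building features — can be illustrated with a deliberately simple stand-in classifier. All names are hypothetical, and a nearest-centroid rule replaces the Random Forest / SVM classifiers actually used in the thesis; the point is only the one-model-per-label structure.

```python
import numpy as np

def fit_multilabel(X, Y):
    """One nearest-centroid model per error label (one column of Y per
    error class of the taxonomy); X holds per-building feature vectors."""
    models = []
    for j in range(Y.shape[1]):
        pos = X[Y[:, j] == 1].mean(axis=0)  # centroid of buildings with error j
        neg = X[Y[:, j] == 0].mean(axis=0)  # centroid of buildings without it
        models.append((pos, neg))
    return models

def predict_errors(models, x):
    """A building is flagged with error j when its feature vector lies
    closer to that label's positive centroid than to its negative one."""
    x = np.asarray(x, float)
    return [int(np.linalg.norm(x - pos) < np.linalg.norm(x - neg))
            for pos, neg in models]
```

Because every label is decided independently, a building can be flagged with several interdependent errors at once, which is the practical appeal of the multi-label formulation.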
Copies (3)
Barcode    Call number  Medium  Location                 Section  Availability
25860-02   THESE        Book    Centre de documentation  Theses   Available
25860-01   THESE        Book    Centre de documentation  Theses   Available
25860-03   THESE        Book    Centre de documentation  Theses   Available

Other documents in this category:
- Towards visual urban scene understanding for autonomous vehicle path tracking using GPS positioning data / Citlalli Gamez Serna (2019)
- Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors / Shibiao Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
- Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
- A deep neural network with spatial pooling (DNNSP) for 3-D point cloud classification / Zhen Wang in IEEE Transactions on geoscience and remote sensing, vol 56 n° 8 (August 2018)
- Using UAVs for map creation and updating: A case study in Rwanda / Mila Koeva in Survey review, vol 50 n° 361 (July 2018)
- A voxel- and graph-based strategy for segmenting man-made infrastructures using perceptual grouping laws: comparison and evaluation / Yusheng Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 6 (June 2018)
- Large-scale supervised learning for 3D Point cloud labeling : Semantic3d.Net / Timo Hackel in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 5 (May 2018)
- Revue des descripteurs tridimensionnels (3D) pour la catégorisation des nuages de points acquis avec un système LiDAR de télémétrie mobile / Sylvie Daniel in Geomatica, vol 72 n° 1 (March 2018)
- Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
- A stixel approach for enhancing semantic image segmentation using prior map information / Sylvain Jonchery (2018)
- Single image dehazing via an improved atmospheric scattering model / Mingye Ju in The Visual Computer, vol 33 n° 12 (December 2017)
- An effective spherical panoramic LoD model for a mobile street view service / Xianxiong Liu in Transactions in GIS, vol 21 n° 5 (October 2017)
- Disocclusion of 3D LiDAR point clouds using range images / Pierre Biasutti in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-1/W1 (May 2017)
- Efficient edge-aware surface mesh reconstruction for urban scenes / András Bódis-Szomorú in Computer Vision and image understanding, vol 157 (April 2017)
- Pré-segmentation pour la classification faiblement supervisée de scènes urbaines à partir de nuages de points 3D LIDAR / Stéphane Guinard (2017)
- SVM et réseaux neuronaux convolutifs pour la classification de scènes urbaines / Amaury Zarzelli (2017)
- Télédétection pour l'observation des surfaces continentales, Ch. 2. Analyse de scènes urbaines avec un véhicule de cartographie mobile / Bruno Vallet (2017)
- Télédétection pour l'observation des surfaces continentales, Volume 5. Observation des surfaces continentales par télédétection 3 / Nicolas Baghdadi (2017)