Descriptor
Documents available in this category (1068)
Improving trajectory estimation using 3D city models and kinematic point clouds / Lucas Lucks in Transactions in GIS, Vol 25 n° 1 (February 2021)
[article]
Title: Improving trajectory estimation using 3D city models and kinematic point clouds
Document type: Article/Communication
Authors: Lucas Lucks, Author; Lasse Klingbeil, Author; Lutz Plümer, Author; Youness Dehbi, Author
Year of publication: 2021
Article on page(s): pp 238 - 260
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Navigation and positioning
[IGN terms] ICP algorithm
[IGN terms] noise (signal theory)
[IGN terms] inertial measurement unit
[IGN terms] random forest classification
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] interpolation
[IGN terms] urban environment
[IGN terms] 3D urban space model
[IGN terms] semantic data model
[IGN terms] autonomous navigation
[IGN terms] point cloud
Abstract: (Author) Accurate and robust positioning of vehicles in urban environments is of high importance for autonomous driving or mobile mapping. In mobile mapping systems, a simultaneous mapping of the environment using laser scanning and an accurate positioning using global navigation satellite systems are targeted. This requirement is often not guaranteed in shadowed cities, where global navigation satellite system signals are usually disturbed, weak or even unavailable. We propose a novel approach which incorporates prior knowledge (i.e., a 3D city model of the environment) and improves the trajectory. The recorded point cloud is matched with the semantic city model using a point-to-plane iterative closest point method. A pre-classification step enables an informed sampling of appropriate matching points. A random forest is used as classifier to discriminate between facade and remaining points. Local inconsistencies are tackled by a segment-wise partitioning of the point cloud, where an interpolation guarantees a seamless transition between the segments. The general applicability of the implemented method is demonstrated on an inner-city data set recorded with a mobile mapping system.
Record number: A2021-188
Authors' affiliation: non IGN
Theme: IMAGERIE/POSITIONNEMENT
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1111/tgis.12719
Online publication date: 02/01/2021
Online: https://doi.org/10.1111/tgis.12719
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97157
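The point-to-plane iterative closest point matching described in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes correspondences (point, matched plane point, plane normal) are already known and uses the usual small-angle linearization, solving one Gauss-Newton step for a 6-DoF pose update.

```python
import numpy as np

def point_to_plane_step(src, tgt, normals):
    """One linearized point-to-plane ICP update (small-angle assumption).

    Minimizes sum_i ((R @ src_i + t - tgt_i) . n_i)^2 with R ~ I + [r]_x,
    which reduces to the linear least-squares system A @ [r; t] = b.
    Returns the rotation vector r and the translation t.
    """
    A = np.hstack([np.cross(src, normals), normals])  # (N, 6) Jacobian rows [p x n, n]
    b = np.einsum('ij,ij->i', tgt - src, normals)     # signed point-to-plane distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```

With exact correspondences and a pure translation offset, a single step recovers the offset exactly; in practice the step is iterated, re-matching points to planes after each update.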
in Transactions in GIS > Vol 25 n° 1 (February 2021) . - pp 238 - 260 [article]

A density-based algorithm for the detection of individual trees from LiDAR data / Melissa Latella in Remote sensing, Vol 13 n° 2 (January-2 2021)
[article]
Title: A density-based algorithm for the detection of individual trees from LiDAR data
Document type: Article/Communication
Authors: Melissa Latella, Author; Fabio Sola, Author; Carlo Camporeale, Author
Year of publication: 2021
Article on page(s): n° 322
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] tree (flora)
[IGN terms] counting
[IGN terms] vegetation density
[IGN terms] tree detection
[IGN terms] spatial distribution
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] deciduous forest
[IGN terms] tree height
[IGN terms] forest inventory (techniques and methods)
[IGN terms] point cloud
[IGN terms] understory
Abstract: (author) Nowadays, LiDAR is widely used for individual tree detection, usually providing higher accuracy in coniferous stands than in deciduous ones, where the rounded crowns, the presence of understory vegetation, and the random spatial tree distribution may affect the identification algorithms. In this work, we propose a novel algorithm that aims to overcome these difficulties and yields the coordinates and heights of individual trees on the basis of the point density features of the input point cloud. The algorithm was tested on twelve deciduous areas, assessing its performance on both regular-patterned plantations and stands with randomly distributed trees. In all cases, the algorithm provides a high-accuracy tree count (F-score > 0.7) and satisfactory stem locations (position error around 1.0 m). In comparison to other common tools, the algorithm is weakly sensitive to the parameter setup and can be applied with little knowledge of the study site, thus reducing the effort and cost of field campaigns. Furthermore, it requires just 2 points·m−2 as minimum point density, allowing for the analysis of low-density point clouds. Despite its simplicity, it may set the basis for more complex tools, such as those for crown segmentation or biomass computation, with potential applications in forest modeling and management.
Record number: A2021-196
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.3390/rs13020322
Online publication date: 19/01/2021
Online: https://doi.org/10.3390/rs13020322
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97146
in Remote sensing > Vol 13 n° 2 (January-2 2021) . - n° 322 [article]
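The record above describes detection driven by the point-density features of the cloud. As a toy sketch of that idea (not Latella et al.'s published algorithm; the grid size, the minimum-count threshold and the strict local-maximum rule are illustrative assumptions), one can bin points on an x-y grid and report density peaks as candidate stems:

```python
import numpy as np

def detect_trees(points, cell=1.0, min_pts=5):
    """Toy density-based tree detector: bin points on an x-y grid, then keep
    cells whose point count is a strict local maximum over the 8-neighbourhood
    and at least min_pts. Returns one (x, y, top height) tuple per detection."""
    idx = np.floor(points[:, :2] / cell).astype(int)
    mn = idx.min(axis=0)
    idx -= mn                                   # shift indices to start at 0
    nx, ny = idx.max(axis=0) + 1
    density = np.zeros((nx, ny))
    zmax = np.full((nx, ny), -np.inf)
    for (i, j), z in zip(idx, points[:, 2]):
        density[i, j] += 1
        zmax[i, j] = max(zmax[i, j], z)
    padded = np.pad(density, 1, constant_values=-1.0)   # border sentinel
    trees = []
    for i in range(nx):
        for j in range(ny):
            nbh = padded[i:i + 3, j:j + 3].copy()
            nbh[1, 1] = -1.0                    # exclude the cell itself
            if density[i, j] >= min_pts and density[i, j] > nbh.max():
                x = (i + mn[0] + 0.5) * cell    # cell-center coordinates
                y = (j + mn[1] + 0.5) * cell
                trees.append((x, y, zmax[i, j]))
    return trees
```

A real detector would additionally normalize heights against a terrain model and handle ties between neighbouring cells; the sketch only conveys the density-peak principle.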
Title: 3D object detection using lidar point clouds and 2D image object detection
Document type: Master's thesis
Authors: Topi Miekkala, Author
Publisher: Tampere [Finland]: Tampere University
Year of publication: 2021
Extent: 67 p.
Format: 21 x 30 cm
General note: Bibliography
Master of Science Thesis, Automation Engineering
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] data fusion
[IGN terms] 2D image
[IGN terms] autonomous navigation
[IGN terms] 3D object
[IGN terms] pedestrian
[IGN terms] point of interest
[IGN terms] segmentation
[IGN terms] point cloud
[IGN terms] real time
[IGN terms] computer vision
Abstract: (author) This master's thesis is about the environmental sensing of an automated vehicle and its ability to recognize objects of interest, such as other road users including pedestrians and other vehicles. Automated driving is a popular and growing field of research, and the continuous increase in demand for self-driving vehicles requires manufacturers to constantly improve the safety and environmental sensing capabilities of their vehicles. Deep learning neural networks and sensor data fusion are significant tools in the development of detection algorithms for automated vehicles. This thesis presents a method combining neural networks and sensor data fusion to implement 3D object detection in a self-driving car. The method uses an onboard camera sensor and a state-of-the-art 2D image object detector, YOLO v4, combining its detections with the data of a lidar sensor, which produces dense point clouds of its environment. These point clouds can be used to estimate distances and locations of surrounding targets. Using inter-sensor calibration between the camera and the lidar, the 3D points output by the lidar can be projected onto a 2D image, therefore allowing the 3D location estimation of 2D objects detected in an image. The thesis first presents the research questions and the theoretical methods used to implement the algorithm. Some background on automated driving is also presented, followed by the specific research environment and vehicle used in this thesis. The thesis also presents the software implementations and vehicle system integration steps needed to implement everything in a self-driving car and achieve a real-time 3D object detection system. The results of this thesis show that, using sensor data fusion, such a system can be integrated fully into a self-driving vehicle, and the processing times of the algorithm can be kept at a real-time rate. Contents note: 1- Introduction
2- Methods for sensor data and object detection
3- Autonomous driving and environmental sensing
4- Experiments
5- Evaluation
6- Conclusion
Record number: 28594
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Master's thesis (miscellaneous)
Online: https://trepo.tuni.fi/handle/10024/132285
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99323

3D urban scene understanding by analysis of LiDAR, color and hyperspectral data / David Duque-Arias (2021)
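The camera-lidar fusion step at the core of Miekkala's thesis above, projecting lidar points into the image so that a 2D detection box can be given a 3D range, can be sketched as follows. The intrinsic matrix, box format and median-depth aggregation are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def box_depth(points_lidar, K, R, t, box):
    """Project lidar points into the image plane (pinhole model) and estimate
    the depth of a 2D detection box as the median camera-frame depth of the
    points that fall inside it. box = (x_min, y_min, x_max, y_max) in pixels."""
    pc = points_lidar @ R.T + t                 # lidar frame -> camera frame
    pc = pc[pc[:, 2] > 0]                       # keep points in front of the camera
    proj = pc @ K.T                             # homogeneous pixel coordinates
    uv = proj[:, :2] / proj[:, 2:3]             # perspective division
    x0, y0, x1, y1 = box
    inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1)
              & (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
    if not inside.any():
        return None                             # no lidar support for this box
    return float(np.median(pc[inside, 2]))
```

The median makes the estimate robust to stray background points that fall inside the box; a full system would also cluster the in-box points before aggregating.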
Title: 3D urban scene understanding by analysis of LiDAR, color and hyperspectral data
Document type: Thesis/HDR
Authors: David Duque-Arias, Author; Beatriz Marcotegui, Thesis supervisor; Jean-Emmanuel Deschaud, Thesis supervisor
Publisher: Paris: Université Paris Sciences et Lettres
Year of publication: 2021
Extent: 191 p.
Format: 21 x 30 cm
General note: Bibliography
Doctoral thesis, Université PSL, specialty: Mathematical Morphology
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] 3D scene analysis
[IGN terms] deep learning
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] graph
[IGN terms] hyperspectral image
[IGN terms] optical image
[IGN terms] geometric modeling of image acquisition
[IGN terms] virtual world
[IGN terms] mathematical morphology
[IGN terms] autonomous navigation
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] image processing
Decimal index: THESE Theses and HDR
Abstract: (author) Point clouds have attracted the interest of the research community over the last years. Initially, they were mostly used for remote sensing applications. More recently, thanks to the development of low-cost sensors and the publication of some open-source libraries, they have become very popular and have been applied to a wider range of applications. One of them is the autonomous vehicle, for which many efforts have been made over the last century to make it real. A very important bottleneck nowadays for the autonomous vehicle is the evaluation of the proposed algorithms. Due to the huge number of possible scenarios, it is not feasible to perform it in real life. An alternative is to simulate virtual environments where all possible configurations can be set up beforehand. However, these are not as realistic as the real world. In this thesis, we studied the pertinence of including hyperspectral images in the creation of new virtual environments. Furthermore, we proposed new methods to improve 3D scene understanding for autonomous vehicles. During this research, we addressed the following topics. Firstly, we analyzed the spectrum in color and hyperspectral images, because it provides a description of the electromagnetic radiation at different frequencies. Some applications rely only on visible colors. In other cases, such as the characterization of materials, the study of the invisible range is required. For this purpose, we proposed a simplified spectrum representation that preserves its diversity, the Graph-based color lines (GCL) model. Secondly, we studied the integration of hyperspectral images, color images and point clouds in urban scenes. The analysis was carried out using the data acquired during this thesis in the context of the REPLICA project FUI 24. We inspected spectral signatures of different objects and reflectance histograms of the images. The obtained results demonstrate that urban scenes are challenging scenarios for current hyperspectral camera technology, due to the presence of uncontrolled light conditions and moving actors. Thirdly, we worked with 3D point clouds from urban scenes, which have proved to be a reliable type of data, much less sensitive to illumination variations than cameras. They are more accurate than color images and permit obtaining precise 3D models of urban environments. Deep learning techniques are very popular in this domain. A key element of these techniques is the loss function that drives the optimization process. We proposed two new loss functions to perform semantic segmentation tasks: the power Jaccard loss and the hierarchical loss. They obtained higher performance in the evaluated scenarios than classical losses, not only on 3D point clouds but also on color and gray-scale images. Moreover, we proposed a new dataset (Paris Carla 3D Dataset) composed of synthetic and real point clouds from urban scenes. It is expected to be used by the research community for different automatic tasks such as semantic segmentation, instance segmentation and scene completion. Finally, we conducted a detailed analysis of the influence of RGB features on the semantic segmentation of urban point clouds. We compared several training scenarios and identified that color systematically improves performance in certain classes. This demonstrates that including a more detailed description of the spectrum, as hyperspectral camera technology increases its sensitivity, can be useful to improve the description of urban scenes. Contents note: 1- Introduction
2- Data used in this thesis
3- Graph based color lines (GCL)
4- Study of REPLICA data
5- Power Jaccard losses for semantic segmentation
6- Segmentation of point clouds
7- Conclusions and perspectives
Record number: 28464
Authors' affiliation: non IGN
Theme: IMAGERIE/MATHEMATIQUE/URBANISME
Nature: French thesis
Thesis note: Doctoral thesis: Mathematical Morphology: Paris sciences et lettres: 2021
Host institution: Centre de Morphologie Mathématique
DOI: none
Online: https://pastel.hal.science/tel-03434199/
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99076
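The power Jaccard loss proposed in the thesis above (Chapter 5) can be sketched as follows. The exponent placement is an assumption here, following the common soft-Jaccard generalization in which the self-overlap terms are raised to a power p; p = 1 recovers the classical soft Jaccard loss. Consult the thesis for the authoritative definition.

```python
import numpy as np

def power_jaccard_loss(pred, target, p=1.5, eps=1e-7):
    """Hedged sketch of a power-Jaccard-style loss for segmentation.

    pred, target: arrays of per-pixel (or per-point) probabilities in [0, 1].
    Soft intersection over a p-powered soft union; loss is 0 for a perfect
    prediction and approaches 1 as the overlap vanishes.
    """
    inter = np.sum(pred * target)
    denom = np.sum(pred ** p) + np.sum(target ** p) - inter
    return 1.0 - inter / (denom + eps)
```

In a training pipeline the same expression would be written with autograd tensors so its gradient can drive the optimizer; plain NumPy is used here only to show the arithmetic.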
Title: AI4GEO: a data intelligence platform for 3D geospatial mapping
Document type: Article/Communication
Authors: Pierre-Marie Brunet, Author; Pierre Lassalle, Author; Simon Baillarin, Author; Bruno Vallet, Author; Arnaud Le Bris, Author; Gaëlle Romeyer, Author; Guy Le Besnerais, Author; Flora Weissgerber, Author; Gilles Foulon, Author; Vincent Gaudissart, Author; Christophe Triquet, Author; Michael Darques, Author; Gwénaël Souillé, Author; Laurent Gabet, Author; Cedrik Ferrero, Author; Thanh-Long Huynh, Author; Emeric Lavergne, Author
Publisher: International Society for Photogrammetry and Remote Sensing ISPRS
Year of publication: 2021
Collection: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750, num. 43-B2-2021
Projects: AI4GEO
Conference: ISPRS 2021, Commission 2, XXIV ISPRS Congress, Imaging today foreseeing tomorrow, 05/07/2021-09/07/2021, Nice (virtual), France, OA Archives Commission 2
Extent: pp 817 - 823
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Digital photogrammetry
[IGN terms] processing pipeline
[IGN terms] 3D localized data
[IGN terms] big data
[IGN terms] geospatial dataset
[IGN terms] software platform
[IGN terms] semantic segmentation
[IGN terms] geospatial data processing
Abstract: (author) The availability of 3D geospatial information is a key issue for many expanding sectors such as autonomous vehicles, business intelligence and urban planning. Its production is now possible thanks to the abundance of available data (Earth observation satellite constellations, in-situ data, …), but manual interventions are still needed to guarantee a high level of quality, which prevents mass production. New artificial intelligence and big data technologies adapted to 3D imagery can help to remove these obstacles. The AI4GEO project aims at developing an automatic solution for producing 3D geospatial information and new added-value services. This paper first introduces the AI4GEO initiative, its context and overall objectives. It then presents the current status of the project; in particular, it focuses on the innovative platform put in place to handle big 3D datasets for analytics needs, and presents the first results of 3D semantic segmentation and associated perspectives.
Record number: C2021-015
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to HAL
Theme: IMAGERIE/INFORMATIQUE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.5194/isprs-archives-XLIII-B2-2021-817-2021
Online publication date: 28/06/2021
Online: https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-817-2021
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98067

Other documents in this category:
- An efficient representation of 3D buildings: application to the evaluation of city models / Oussama Ennafii (2021)
- Apport des méthodes : imagerie drone, LiDAR et imagerie hyperspectrale pour l'étude du littoral vendéen / Mathis Baudis (2021)
- Apport de la photogrammétrie et de l'intelligence artificielle à la détection des zones amiantées sur les fronts rocheux / Philippe Caudal (2021)
- Automatic object extraction from airborne laser scanning point clouds for digital base map production / Elyta Widyaningrum (2021)
- Building extraction from Lidar data using statistical methods / Haval Abdul-Jabbar Sadeq in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 1 (January 2021)
- Calcul de la largeur à pleins bords de grands cours d'eau à partir de MNT LiDAR / Nicolas Fermen (2021)
- Contributions to graph-based hierarchical analysis for images and 3D point clouds / Leonardo Gigli (2021)
- Convex hull: another perspective about model predictions and map derivatives from remote sensing data / Jean-Pierre Renaud (2021)
- Correction radiométrique et recalage de nuages de points pour la reconstruction tridimensionnelle d'oeuvres du patrimoine culturel / Nathan Sanchiz (2021)
- Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles / Abdelhak Loukkal (2021)