Descripteur
Termes IGN > géomatique > données localisées
données localisées
Synonyme(s) : spatial data ; données géospatiales ; données géographiques ; données à référence spatiale
Documents disponibles dans cette catégorie (3735)
Tropical forest canopy height estimation from combined polarimetric SAR and LiDAR using machine-learning / Maryam Pourshamsi in ISPRS Journal of photogrammetry and remote sensing, vol 172 (February 2021)
[article]
Titre : Tropical forest canopy height estimation from combined polarimetric SAR and LiDAR using machine-learning Type de document : Article/Communication Auteurs : Maryam Pourshamsi, Auteur ; Junshi Xia, Auteur ; Naoto Yokoya, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 79 - 94 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage automatique
[Termes IGN] bande L
[Termes IGN] canopée
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] données lidar
[Termes IGN] données polarimétriques
[Termes IGN] forêt tropicale
[Termes IGN] Gabon
[Termes IGN] hauteur des arbres
[Termes IGN] image captée par drone
[Termes IGN] image radar moirée
[Termes IGN] Rotation Forest classification
[Termes IGN] semis de points
Résumé : (auteur) Forest height is an important forest biophysical parameter used to derive key information about forest ecosystems, such as forest above-ground biomass. In this paper, the potential of combining Polarimetric Synthetic Aperture Radar (PolSAR) variables with LiDAR measurements for forest height estimation is investigated. This is conducted using different machine learning algorithms, including Random Forests (RFs), Rotation Forests (RoFs), Canonical Correlation Forests (CCFs) and Support Vector Machines (SVMs). Various PolSAR parameters are required as input variables to ensure a successful height retrieval across different forest height ranges. The algorithms are trained with 5000 LiDAR samples (less than 1% of the full scene) and different polarimetric variables. To examine the dependency of the algorithms on the input training samples, three different subsets are identified, each with different features: subset 1 is quite diverse and includes non-vegetated regions, short/sparse vegetation (0–20 m), vegetation with mid-range height (20–40 m) and tall/dense vegetation (40–60 m); subset 2 covers mostly densely vegetated areas with heights of 40–60 m; and subset 3 mostly covers non-vegetated areas to short/sparse vegetation (0–20 m). The trained algorithms were used to estimate the height for the areas outside the identified subset. The results were validated with independent samples of LiDAR-derived height, showing high accuracy (average R2 = 0.70 and RMSE = 10 m across all algorithms and training subsets). The results confirm that it is possible to estimate forest canopy height using PolSAR parameters together with a small coverage of LiDAR height as training data.
Numéro de notice : A2021-086 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2020.11.008 Date de publication en ligne : 19/12/2020 En ligne : https://doi.org/10.1016/j.isprsjprs.2020.11.008 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=96846
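As a hedged illustration of the retrieval scheme described in the abstract above (not the authors' code), the sketch below trains a Random Forest regressor to map polarimetric features to LiDAR-derived canopy height, using a small LiDAR-sampled subset as training data, as the paper does. All inputs are synthetic stand-ins; real inputs would be coregistered PolSAR and LiDAR rasters.

```python
# Sketch: Random Forest regression of canopy height from PolSAR-like
# features, trained on ~5000 "LiDAR" samples (under 1% of the scene).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for PolSAR variables and LiDAR canopy height (0-60 m).
n_pixels = 20_000
polsar_features = rng.normal(size=(n_pixels, 6))
canopy_height = np.clip(
    30 + 10 * polsar_features[:, 0] - 5 * polsar_features[:, 1]
    + rng.normal(scale=3, size=n_pixels), 0, 60)

# Train on a small LiDAR-sampled subset, predict everywhere else.
train_idx = rng.choice(n_pixels, size=5000, replace=False)
mask = np.zeros(n_pixels, dtype=bool)
mask[train_idx] = True

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(polsar_features[mask], canopy_height[mask])
predicted = rf.predict(polsar_features[~mask])
```

Validation in the paper compares such predictions against held-out LiDAR heights; the same pattern (fit on the sampled subset, score on the rest) applies here.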
in ISPRS Journal of photogrammetry and remote sensing > vol 172 (February 2021) . - pp 79 - 94 [article]
Exemplaires (2)
Code-barres : 081-2021021 ; Cote : SL ; Support : Revue ; Localisation : Centre de documentation ; Section : Revues en salle ; Disponibilité : Disponible
Code-barres : 081-2021022 ; Cote : DEP-RECF ; Support : Revue ; Localisation : Nancy ; Section : Bibliothèque Nancy IFN ; Disponibilité : Exclu du prêt
Using automated vegetation cover estimation from close-range photogrammetric point clouds to compare vegetation location properties in mountain terrain / R. Niederheiser in GIScience and remote sensing, vol 58 n° 1 (February 2021)
[article]
Titre : Using automated vegetation cover estimation from close-range photogrammetric point clouds to compare vegetation location properties in mountain terrain Type de document : Article/Communication Auteurs : R. Niederheiser, Auteur ; M. Winkler, Auteur ; V. Di Cecco, Auteur ; et al., Auteur Année de publication : 2021 Article en page(s) : pp 120 - 137 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Photogrammétrie terrestre
[Termes IGN] Alpes
[Termes IGN] caméra numérique
[Termes IGN] carte de la végétation
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification semi-dirigée
[Termes IGN] couvert végétal
[Termes IGN] distribution de Poisson
[Termes IGN] données topographiques
[Termes IGN] indice de végétation
[Termes IGN] module linéaire
[Termes IGN] montagne
[Termes IGN] occupation du sol
[Termes IGN] photogrammétrie métrologique
[Termes IGN] semis de points
Résumé : (auteur) In this paper we present a low-cost approach to mapping vegetation cover by means of high-resolution close-range terrestrial photogrammetry. A total of 249 clusters of nine 1 m2 plots each, arranged in a 3 × 3 grid, were set up on 18 summits in Mediterranean mountain regions and in the Alps to capture images for photogrammetric processing and in-situ vegetation cover estimates. This was done with a hand-held pole-mounted digital single-lens reflex (DSLR) camera. Low-growing vegetation was automatically segmented using high-resolution point clouds. For classifying vegetation we used a two-step semi-supervised Random Forest approach. First, we applied an expert-based rule set using the Excess Green index (ExG) to predefine non-vegetation and vegetation points. Second, we applied a Random Forest classifier to further enhance the classification of vegetation points using selected topographic parameters (elevation, slope, aspect, roughness, potential solar irradiation) and additional vegetation indices (Excess Green Minus Excess Red (ExGR) and the vegetation index VEG). For ground cover estimation the photogrammetric point clouds were meshed using Screened Poisson Reconstruction. The relative influence of the topographic parameters on the vegetation cover was determined with linear mixed-effects models (LMMs). Analysis of the LMMs revealed a high impact of elevation, aspect, solar irradiation, and standard deviation of slope. The presented approach goes beyond vegetation cover values based on conventional orthoimages and in-situ vegetation cover estimates from field surveys in that it is able to differentiate complete 3D surface areas, including overhangs, and can distinguish between vegetation-covered and other surfaces in an automated manner.
The results of the Random Forest classification confirmed its suitability for vegetation classification, but the relative feature importance values indicate that the classifier did not leverage the potential of the included topographic parameters. In contrast, our application of LMMs utilized the topographic parameters and revealed dependencies in the two biomes, such as elevation and aspect, which together explained between 87% and 92.5% of the variance. Numéro de notice : A2021-258 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1080/15481603.2020.1859264 Date de publication en ligne : 13/01/2021 En ligne : https://doi.org/10.1080/15481603.2020.1859264 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97295
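The two-step semi-supervised scheme described in the abstract above can be sketched roughly as follows (a hedged illustration, not the authors' pipeline): an Excess Green rule (ExG = 2g − r − b) seeds confident vegetation / non-vegetation labels, and a Random Forest trained on those seeds classifies the ambiguous points. The RGB values and topographic covariates below are synthetic stand-ins.

```python
# Sketch: ExG rule seeds labels, Random Forest refines the rest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_points = 10_000

# Synthetic colored point cloud; "vegetation" points skew green.
is_veg = rng.random(n_points) < 0.5
rgb = rng.random((n_points, 3))
rgb[is_veg, 1] += 0.4  # boost green channel for vegetation points
rgb = np.clip(rgb, 0, 1)

exg = 2 * rgb[:, 1] - rgb[:, 0] - rgb[:, 2]  # Excess Green index

# Step 1: expert rule gives confident seeds, leaves mid-range unlabeled.
seed_veg = exg > 0.5
seed_nonveg = exg < 0.0
labeled = seed_veg | seed_nonveg

# Step 2: RF trained on the seeds, using ExG plus stand-in topographic
# covariates (slope, roughness, irradiation), classifies the remainder.
topo = rng.normal(size=(n_points, 3))
features = np.column_stack([exg, topo])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features[labeled], seed_veg[labeled])
veg_pred = rf.predict(features[~labeled])
```

The thresholds 0.5 and 0.0 are invented for the example; the paper's actual rule set and additional indices (ExGR, VEG) are described in the abstract.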
in GIScience and remote sensing > vol 58 n° 1 (February 2021) . - pp 120 - 137 [article]
A density-based algorithm for the detection of individual trees from LiDAR data / Melissa Latella in Remote sensing, Vol 13 n° 2 (January-2 2021)
[article]
Titre : A density-based algorithm for the detection of individual trees from LiDAR data Type de document : Article/Communication Auteurs : Melissa Latella, Auteur ; Fabio Sola, Auteur ; Carlo Camporeal, Auteur Année de publication : 2021 Article en page(s) : n° 322 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] arbre (flore)
[Termes IGN] comptage
[Termes IGN] densité de la végétation
[Termes IGN] détection d'arbres
[Termes IGN] distribution spatiale
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] forêt de feuillus
[Termes IGN] hauteur des arbres
[Termes IGN] inventaire forestier (techniques et méthodes)
[Termes IGN] semis de points
[Termes IGN] sous-étage
Résumé : (auteur) Nowadays, LiDAR is widely used for individual tree detection, usually providing higher accuracy in coniferous stands than in deciduous ones, where rounded crowns, the presence of understory vegetation, and the random spatial distribution of trees may affect the identification algorithms. In this work, we propose a novel algorithm that aims to overcome these difficulties and yields the coordinates and heights of individual trees on the basis of the point density features of the input point cloud. The algorithm was tested on twelve deciduous areas, assessing its performance on both regular-patterned plantations and stands with randomly distributed trees. In all cases, the algorithm provides highly accurate tree counts (F-score > 0.7) and satisfactory stem locations (position error around 1.0 m). Compared to other common tools, the algorithm is weakly sensitive to the parameter setup and can be applied with little knowledge of the study site, thus reducing the effort and cost of field campaigns. Furthermore, it requires a minimum point density of just 2 points·m−2, allowing for the analysis of low-density point clouds. Despite its simplicity, it may set the basis for more complex tools, such as those for crown segmentation or biomass computation, with potential applications in forest modeling and management. Numéro de notice : A2021-196 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.3390/rs13020322 Date de publication en ligne : 19/01/2021 En ligne : https://doi.org/10.3390/rs13020322 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97146
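The notice above concerns individual tree detection from LiDAR point clouds. As a loose, hedged sketch of the general task (a generic local-maxima detector on a rasterized canopy height model, not the authors' density-based algorithm), using an entirely synthetic point cloud:

```python
# Sketch: detect candidate tree tops as local maxima of a canopy height
# model (CHM) rasterized from a low-density LiDAR point cloud.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(2)

# Synthetic cloud: two trees as Gaussian height bumps over flat ground.
def tree(cx, cy, h, n=400):
    x = rng.normal(cx, 2.0, n)
    y = rng.normal(cy, 2.0, n)
    z = h * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)
    return np.column_stack([x, y, z])

pts = np.vstack([tree(10, 10, 20), tree(30, 25, 15),
                 np.column_stack([rng.uniform(0, 40, 800),
                                  rng.uniform(0, 40, 800),
                                  np.zeros(800)])])

# Rasterize: keep the highest return per 1 m cell.
ix = (pts[:, 0]).astype(int).clip(0, 40)
iy = (pts[:, 1]).astype(int).clip(0, 40)
chm = np.zeros((41, 41))
np.maximum.at(chm, (ix, iy), pts[:, 2])

# Tree tops: cells that are the maximum of their 5x5 neighborhood
# and taller than a 2 m understory threshold.
local_max = (chm == maximum_filter(chm, size=5)) & (chm > 2.0)
tops = np.argwhere(local_max)
```

The 5×5 window and 2 m threshold are arbitrary example parameters; the paper's contribution is precisely that its density-based approach is weakly sensitive to such tuning.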
in Remote sensing > Vol 13 n° 2 (January-2 2021) . - n° 322 [article]
Titre : 3D object detection using lidar point clouds and 2D image object detection Type de document : Mémoire Auteurs : Topi Miekkala, Auteur Editeur : Tampere [Finlande] : Tampere University Année de publication : 2021 Importance : 67 p. Format : 21 x 30 cm Note générale : bibliographie
Master of Science Thesis, Automation Engineering
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] fusion de données
[Termes IGN] image 2D
[Termes IGN] navigation autonome
[Termes IGN] objet 3D
[Termes IGN] piéton
[Termes IGN] point d'intérêt
[Termes IGN] segmentation
[Termes IGN] semis de points
[Termes IGN] temps réel
[Termes IGN] vision par ordinateur
Résumé : (auteur) This master's thesis is about the environmental sensing of an automated vehicle and its ability to recognize objects of interest, such as other road users including pedestrians and other vehicles. Automated driving is a popular and growing field of research, and the continuous increase in the demand for self-driving vehicles requires manufacturers to constantly improve the safety and environmental sensing capabilities of their vehicles. Deep learning neural networks and sensor data fusion are significant tools in the development of detection algorithms for automated vehicles. This thesis presents a method combining neural networks and sensor data fusion to implement 3D object detection in a self-driving car. The method uses an onboard camera sensor and a state-of-the-art 2D image object detector, YOLO v4, combining its detections with the data of a lidar sensor, which produces dense point clouds of its environment. These point clouds can be used to estimate distances and locations of surrounding targets. Using inter-sensor calibration between the camera and the lidar, the 3D points output by the lidar can be projected onto a 2D image, therefore allowing the 3D location estimation of 2D objects detected in an image. The thesis first presents the research questions and the theoretical methods used to implement the algorithm. Some background on automated driving is also presented, followed by the specific research environment and vehicle used in this thesis. The thesis also presents the software implementations and vehicle system integration steps needed to implement everything in a self-driving car to achieve a real-time 3D object detection system. The results of this thesis show that, using sensor data fusion, such a system can be fully integrated into a self-driving vehicle, and the processing times of the algorithm can be kept at a real-time rate. Note de contenu : 1- Introduction
2- Methods for sensor data and object detection
3- Autonomous driving and environmental sensing
4- Experiments
5- Evaluation
6- Conclusion
Numéro de notice : 28594 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Mémoire masters divers En ligne : https://trepo.tuni.fi/handle/10024/132285 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99323
3D urban scene understanding by analysis of LiDAR, color and hyperspectral data / David Duque-Arias (2021)
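Returning to the Miekkala thesis catalogued above: the camera-lidar fusion it describes hinges on projecting 3D lidar points into the image plane and pooling the points that fall inside a 2D detection box. A minimal sketch under an assumed pinhole camera and identity lidar-to-camera extrinsic (all intrinsics, points, and the box are invented for illustration; a real system uses calibrated values and YOLO detections):

```python
# Sketch: project lidar points into the image, then estimate the 3D
# position of a 2D detection from the points inside its bounding box.
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def project(points_cam):
    """Project Nx3 camera-frame points (z forward) to Nx2 pixel coords."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# A small lidar cluster about 10 m ahead of the camera.
cluster = np.array([[0.4, 0.0, 10.0], [0.6, 0.1, 10.2], [0.5, -0.1, 9.8]])
pix = project(cluster)

# 2D detector box in pixels: (u_min, v_min, u_max, v_max).
box = (660, 340, 700, 380)
inside = ((pix[:, 0] >= box[0]) & (pix[:, 0] <= box[2]) &
          (pix[:, 1] >= box[1]) & (pix[:, 1] <= box[3]))

# 3D location estimate for the detected object.
location_3d = cluster[inside].mean(axis=0)
```

Averaging the in-box points is the simplest pooling choice; clustering or taking the nearest return would be more robust to background points leaking into the box.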
Titre : 3D urban scene understanding by analysis of LiDAR, color and hyperspectral data Type de document : Thèse/HDR Auteurs : David Duque-Arias, Auteur ; Beatriz Marcotegui, Directeur de thèse ; Jean-Emmanuel Deschaud, Directeur de thèse Editeur : Paris : Université Paris Sciences et Lettres Année de publication : 2021 Importance : 191 p. Format : 21 x 30 cm Note générale : bibliographie
Thèse de Doctorat de l'Université PSL, Spécialité : Morphologie Mathématique
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] analyse de scène 3D
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] graphe
[Termes IGN] image hyperspectrale
[Termes IGN] image optique
[Termes IGN] modélisation géométrique de prise de vue
[Termes IGN] monde virtuel
[Termes IGN] morphologie mathématique
[Termes IGN] navigation autonome
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] traitement d'image
Index. décimale : THESE Thèses et HDR
Résumé : (auteur) Point clouds have attracted the interest of the research community over the last years. Initially, they were mostly used for remote sensing applications. More recently, thanks to the development of low-cost sensors and the publication of some open source libraries, they have become very popular and have been applied to a wider range of applications. One of them is the autonomous vehicle, towards which many efforts have been made over the last century. A very important bottleneck nowadays for the autonomous vehicle is the evaluation of the proposed algorithms. Due to the huge number of possible scenarios, it is not feasible to perform this evaluation in real life. An alternative is to simulate virtual environments where all possible configurations can be set up beforehand; however, these are not as realistic as the real world. In this thesis, we studied the relevance of including hyperspectral images in the creation of new virtual environments. Furthermore, we proposed new methods to improve 3D scene understanding for autonomous vehicles. During this research, we addressed the following topics. Firstly, we analyzed the spectrum in color and hyperspectral images, because it provides a description of the electromagnetic radiation at different frequencies. Some applications rely only on visible colors; in other cases, such as the characterization of materials, the study of the invisible range is required. For this purpose, we proposed a simplified spectrum representation that preserves its diversity: the Graph-based color lines (GCL) model. Secondly, we studied the integration of hyperspectral images, color images and point clouds in urban scenes. The analysis was carried out using the data acquired during this thesis in the context of the REPLICA project FUI 24. We inspected spectral signatures of different objects and reflectance histograms of the images.
The obtained results demonstrate that urban scenes are challenging scenarios for current hyperspectral camera technology, due to the presence of uncontrolled light conditions and moving actors. Thirdly, we worked with 3D point clouds from urban scenes, which have proved to be a reliable type of data, much less sensitive to illumination variations than cameras. They are more accurate than color images and make it possible to obtain precise 3D models of urban environments. Deep learning techniques are very popular in this domain; a key element of these techniques is the loss function that drives the optimization process. We proposed two new loss functions for semantic segmentation tasks: the power Jaccard loss and the hierarchical loss. They outperformed classical losses in the evaluated scenarios, not only on 3D point clouds but also on color and gray-scale images. Moreover, we proposed a new dataset (Paris Carla 3D Dataset) composed of synthetic and real point clouds of urban scenes. It is expected to be used by the research community for different automatic tasks, such as semantic segmentation, instance segmentation and scene completion. Finally, we conducted a detailed analysis of the influence of RGB features on the semantic segmentation of urban point clouds. We compared several training scenarios and found that color systematically improves the performance for certain classes. This suggests that, as hyperspectral camera technology becomes more sensitive, a more detailed description of the spectrum could further improve the description of urban scenes. Note de contenu : 1- Introduction
2- Data used in this thesis
3- Graph based color lines (GCL)
4- Study of REPLICA data
5- Power Jaccard losses for semantic segmentation
6- Segmentation of point clouds
7- Conclusions and perspectives
Numéro de notice : 28464 Affiliation des auteurs : non IGN Thématique : IMAGERIE/MATHEMATIQUE/URBANISME Nature : Thèse française Note de thèse : Thèse de Doctorat : Morphologie Mathématique : Paris sciences et lettres : 2021 Organisme de stage : Centre de Morphologie Mathématique DOI : sans En ligne : https://pastel.hal.science/tel-03434199/ Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99076
Acquisition lasergrammétrique d'ouvrages d'art pour l'interopérabilité BIM-SIG, cas pratique du syndicat mixte "Routes de Guadeloupe" / Sonia Sermanson (2021)
An efficient representation of 3D buildings: application to the evaluation of city models / Oussama Ennafii (2021)
Applications of remote sensing data in mapping of forest growing stock and biomass / Jose Aranha (2021)
Apport des méthodes : imagerie drone, LiDAR et imagerie hyperspectrale pour l'étude du littoral vendéen / Mathis Baudis (2021)
Apport de la photogrammétrie et de l'intelligence artificielle à la détection des zones amiantées sur les fronts rocheux / Philippe Caudal (2021)
Automatic object extraction from airborne laser scanning point clouds for digital base map production / Elyta Widyaningrum (2021)
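The power Jaccard loss proposed in the Duque-Arias thesis catalogued above can be sketched as follows. This is one plausible reading of the idea (the union terms in the denominator raised to a power p, with p = 1 recovering the standard soft Jaccard loss), not a verbatim transcription of the thesis definition:

```python
# Sketch of a power Jaccard loss for soft segmentation masks.
import numpy as np

def power_jaccard_loss(y_true, y_pred, p=2.0, eps=1e-7):
    """y_true in {0, 1}, y_pred in [0, 1], same shape; returns a scalar."""
    inter = np.sum(y_true * y_pred)
    denom = np.sum(y_true ** p + y_pred ** p) - inter
    return 1.0 - inter / (denom + eps)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = power_jaccard_loss(y_true, y_true, p=2.0)  # near 0: perfect mask
worse = power_jaccard_loss(y_true, np.array([0.6, 0.6, 0.4, 0.4]), p=2.0)
```

In a training pipeline this scalar would be computed per batch on the softmax probabilities and minimized by gradient descent; the numpy version above only illustrates the arithmetic.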