Titre : 3D object detection using lidar point clouds and 2D image object detection
Type de document : Mémoire
Auteurs : Topi Miekkala, Auteur
Editeur : Tampere [Finlande] : Tampere University
Année de publication : 2021
Importance : 67 p.
Format : 21 x 30 cm
Note générale : bibliographie
Master of Science Thesis, Automation Engineering
Langues : Français (fre)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] détection d'objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] fusion de données
[Termes IGN] image 2D
[Termes IGN] navigation autonome
[Termes IGN] objet 3D
[Termes IGN] piéton
[Termes IGN] point d'intérêt
[Termes IGN] segmentation
[Termes IGN] semis de points
[Termes IGN] temps réel
[Termes IGN] vision par ordinateur
Résumé : (auteur) This master's thesis addresses the environmental sensing of an automated vehicle and its ability to recognize objects of interest such as other road users, including pedestrians and other vehicles. Automated driving is a popular and growing field of research, and the continuous increase in demand for self-driving vehicles requires manufacturers to constantly improve the safety and environmental sensing capabilities of their vehicles. Deep learning neural networks and sensor data fusion are significant tools in the development of detection algorithms for automated vehicles. This thesis presents a method combining neural networks and sensor data fusion to implement 3D object detection in a self-driving car. The method uses an onboard camera sensor and a state-of-the-art 2D image object detector, YOLO v4, combining its detections with the data of a lidar sensor, which produces dense point clouds of its environment. These point clouds can be used to estimate the distances and locations of surrounding targets. Using inter-sensor calibration between the camera and the lidar, the 3D points output by the lidar can be projected onto a 2D image, therefore allowing the 3D location estimation of 2D objects detected in an image. The thesis first presents the research questions and the theoretical methods used to implement the algorithm. Some background on automated driving is also presented, followed by the specific research environment and vehicle used in this thesis. The thesis also presents the software implementations and vehicle system integration steps needed to achieve a real-time 3D object detection system in a self-driving car. The results of this thesis show that, using sensor data fusion, such a system can be fully integrated into a self-driving vehicle, and the processing times of the algorithm can be kept at a real-time rate.
Note de contenu :
1- Introduction
2- Methods for sensor data and object detection
3- Autonomous driving and environmental sensing
4- Experiments
5- Evaluation
6- Conclusion
Numéro de notice : 28594
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Mémoire masters divers
En ligne : https://trepo.tuni.fi/handle/10024/132285
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99323
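The core geometric step summarized in the abstract above — lidar 3D points projected into the camera image via inter-sensor calibration, then associated with 2D detection boxes to estimate 3D locations — can be sketched as follows. This is a generic illustration, not the thesis's actual implementation: the function names, the pinhole camera model, and the median-depth heuristic are assumptions for the sketch.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Project Nx3 lidar points into the image plane.

    K: 3x3 camera intrinsics; R, t: lidar-to-camera extrinsics.
    Returns pixel coordinates and depths of points in front of the camera.
    """
    cam = points_xyz @ R.T + t          # lidar frame -> camera frame
    in_front = cam[:, 2] > 0            # keep points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T                      # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]         # perspective divide
    return uv, cam[:, 2]

def estimate_box_distance(uv, depths, box):
    """Median depth of projected lidar points inside a 2D detection box.

    box = (u_min, v_min, u_max, v_max), e.g. from a YOLO detection.
    """
    u, v = uv[:, 0], uv[:, 1]
    inside = (u >= box[0]) & (v >= box[1]) & (u <= box[2]) & (v <= box[3])
    if not inside.any():
        return None
    return float(np.median(depths[inside]))
```

The median is used rather than the mean so that a few background points falling inside the box do not dominate the distance estimate.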
Titre : 3D point cloud compression
Type de document : Thèse/HDR
Auteurs : Chao Cao, Auteur ; Titus Zaharia, Directeur de thèse ; Marius Preda, Directeur de thèse
Editeur : Paris : Institut Polytechnique de Paris
Année de publication : 2021
Importance : 165 p.
Format : 21 x 30 cm
Note générale : Bibliographie
Thèse de doctorat de l’Institut polytechnique de Paris, Spécialité Informatique
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] compression d'image
[Termes IGN] corrélation automatique de points homologues
[Termes IGN] couleur (variable spectrale)
[Termes IGN] état de l'art
[Termes IGN] objet 3D
[Termes IGN] précision géométrique (imagerie)
[Termes IGN] scène 3D
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
Index. décimale : THESE Thèses et HDR
Résumé : (Auteur) With the rapid growth of multimedia content, 3D objects are becoming more and more popular. Most of the time, they are modeled as complex polygonal meshes or dense point clouds, providing immersive experiences in different industrial and consumer multimedia applications. The point cloud, which is easier to acquire than a mesh and is widely applicable, has raised much interest in both the academic and commercial worlds. A point cloud is a set of points with different properties such as their geometrical locations and the associated attributes (e.g., color, material properties, etc.). The number of points within a point cloud can range from a thousand, to constitute simple 3D objects, up to billions, to realistically represent complex 3D scenes. Such huge amounts of data bring great technological challenges in terms of transmission, processing, and storage of point clouds. In recent years, numerous research works focused their efforts on the compression of meshes, while point clouds received less attention. We have identified two main approaches in the literature: a purely geometric one based on octree decomposition, and a hybrid one based on both geometry and video coding. The first approach can provide accurate 3D geometry information but offers weak temporal consistency. The second can efficiently remove temporal redundancy, yet a decrease in geometric precision can be observed after the projection. Thus, the tradeoff between compression efficiency and accurate prediction needs to be optimized. We focused on exploring the temporal correlations between dynamic dense point clouds. We proposed different approaches to improve the compression performance of the MPEG (Moving Picture Experts Group) V-PCC (Video-based Point Cloud Compression) test model, which provides state-of-the-art compression on dynamic dense point clouds. First, an octree-based adaptive segmentation is proposed to cluster the points with different motion amplitudes into 3D cubes. Then, motion estimation is applied to these cubes using affine transformations. Gains in rate-distortion (RD) performance have been observed on sequences with relatively low motion amplitudes. However, building an octree for a dense point cloud remains expensive, and the resulting octree structures show poor temporal consistency on sequences with higher motion amplitudes. An anatomical structure is then proposed to model the motion of point clouds representing humanoids more inherently. With the help of 2D pose estimation tools, the motion is estimated from 14 anatomical segments using affine transformations. Moreover, we propose a novel solution for color prediction and discuss the coding of the prediction residuals. It is shown that instead of encoding redundant texture information, it is more valuable to code the residuals, which leads to better RD performance. Although our contributions have improved the performance of the V-PCC test models, the temporal compression of dynamic point clouds remains a highly challenging task. Due to the limitations of current acquisition technology, acquired point clouds can be noisy in both the geometry and attribute domains, which makes accurate motion estimation challenging. In future studies, the technologies used for 3D meshes may be exploited and adapted to provide temporally consistent connectivity information between dynamic 3D point clouds.
Note de contenu :
Chapter 1 - Introduction
1.1. Background and motivation
1.2. Outline of the thesis and contributions
Chapter 2 - 3D Point Cloud Compression: State of the art
2.1. The 3D PCC “Universe Map” for methods
2.2. 1D methods: geometry traversal
2.3. 2D methods: Projection and mapping onto 2D planar domains
2.4. 3D methods: Direct exploitation of 3D correlations
2.5. DL-based methods
2.6. 3D PCC: What is missing?
2.7. MPEG 3D PCC standards
Chapter 3 - Extended Study of MPEG V-PCC and G-PCC Approaches
3.1. V-PCC methodology
3.2. Experimental evaluation of V-PCC
3.3. G-PCC methodology
3.4. Experimental evaluation of G-PCC
3.5. Experiments on the V-PCC inter-coding mode
3.6. Conclusion
Chapter 4 - Octree-based RDO segmentation
4.1. Pipeline
4.2. RDO-based octree segmentation
4.3. Prediction modes
4.4. Experimental results
4.5. Conclusion
Chapter 5 - Skeleton-based motion estimation and compensation
5.1. Introduction
5.2. 3D Skeleton Generation
5.3. Motion estimation and compression
5.4. Experimental results
5.5. Conclusion
Chapter 6 - Temporal prediction using anatomical segmentation
6.1. Introduction
6.2. A novel dynamic 3D point cloud dataset
6.3. Prediction structure
6.4. Improved anatomy segmentation
6.5. Experimental results
6.6. Conclusion
Chapter 7 - A novel color compression for point clouds using affine transformation
7.1. Introduction
7.2. The residuals from both geometry and color
7.3. The prediction structure
7.4. Compression of the color residuals
7.5. Experimental results
7.6. Conclusion
Chapter 8 - Conclusion and future work
8.1. Conclusion
8.2. Future work
Numéro de notice : 26821
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Thèse française
Note de thèse : Thèse de Doctorat : informatique : Paris : 2021
Organisme de stage : Telecom SudParis
nature-HAL : Thèse
DOI : sans
Date de publication en ligne : 13/04/2022
En ligne : https://tel.hal.science/tel-03524521
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100476

3D urban scene understanding by analysis of LiDAR, color and hyperspectral data / David Duque-Arias (2021)
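The motion-estimation step described in the abstract above — an affine transformation fitted to each cluster of points, then used to predict the next frame so that only residuals need coding — can be sketched with an ordinary least-squares fit. This is a generic illustration of per-cluster affine motion compensation, not the V-PCC test-model code; the function names are invented for the sketch.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (Nx3) onto dst (Nx3).

    Returns A (3x3) and translation b such that src @ A.T + b ~= dst.
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    # Solve src_h @ M ~= dst for M (4x3) in the least-squares sense.
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return M[:3].T, M[3]

def motion_compensate(src, dst):
    """Predict dst from src via the fitted affine motion; return the residual
    that an encoder would have to transmit instead of the full geometry."""
    A, b = fit_affine(src, dst)
    pred = src @ A.T + b
    return pred, dst - pred
```

When the motion of a cluster really is affine (rigid motion plus scaling/shear), the residual is near zero and cheap to code; large residuals signal that the cluster should be split further or coded without prediction.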
Titre : 3D urban scene understanding by analysis of LiDAR, color and hyperspectral data
Type de document : Thèse/HDR
Auteurs : David Duque-Arias, Auteur ; Beatriz Marcotegui, Directeur de thèse ; Jean-Emmanuel Deschaud, Directeur de thèse
Editeur : Paris : Université Paris Sciences et Lettres
Année de publication : 2021
Importance : 191 p.
Format : 21 x 30 cm
Note générale : bibliographie
Thèse de Doctorat de l'Université PSL, Spécialité : Morphologie Mathématique
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] analyse de scène 3D
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] graphe
[Termes IGN] image hyperspectrale
[Termes IGN] image optique
[Termes IGN] modélisation géométrique de prise de vue
[Termes IGN] monde virtuel
[Termes IGN] morphologie mathématique
[Termes IGN] navigation autonome
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] traitement d'image
Index. décimale : THESE Thèses et HDR
Résumé : (auteur) Point clouds have attracted the interest of the research community over the last years. Initially, they were mostly used for remote sensing applications. More recently, thanks to the development of low-cost sensors and the publication of some open-source libraries, they have become very popular and have been applied to a wider range of applications. One of them is the autonomous vehicle, toward which many efforts have been made over the last century. A very important bottleneck nowadays for the autonomous vehicle is the evaluation of the proposed algorithms. Due to the huge number of possible scenarios, it is not feasible to perform this evaluation in real life. An alternative is to simulate virtual environments where all possible configurations can be set up beforehand. However, these are not as realistic as the real world. In this thesis, we studied the pertinence of including hyperspectral images in the creation of new virtual environments. Furthermore, we proposed new methods to improve 3D scene understanding for autonomous vehicles. During this research, we addressed the following topics. Firstly, we analyzed the spectrum in color and hyperspectral images, because it provides a description of the electromagnetic radiation at different frequencies. Some applications rely only on visible colors; in other cases, such as the characterization of materials, the study of the invisible range is required. For this purpose, we proposed a simplified spectrum representation that preserves its diversity, the Graph-based Color Lines (GCL) model. Secondly, we studied the integration of hyperspectral images, color images and point clouds in urban scenes. The analysis was carried out using the data acquired during this thesis in the context of the REPLICA project FUI 24. We inspected spectral signatures of different objects and reflectance histograms of the images.
The obtained results demonstrate that urban scenes are challenging scenarios for the current technology of hyperspectral cameras, due to the presence of uncontrolled light conditions and moving actors. Thirdly, we worked with 3D point clouds from urban scenes, which have proved to be a reliable type of data, much less sensitive to illumination variations than cameras. They are more accurate than color images and allow obtaining precise 3D models of urban environments. Deep learning techniques are very popular in this domain. A key element of these techniques is the loss function that drives the optimization process. We proposed two new loss functions for semantic segmentation tasks: the power Jaccard loss and the hierarchical loss. They obtained higher performance than classical losses in the evaluated scenarios, not only on 3D point clouds but also on color and gray-scale images. Moreover, we proposed a new dataset (Paris Carla 3D Dataset) composed of synthetic and real point clouds of urban scenes. It is expected to be used by the research community for different automatic tasks such as semantic segmentation, instance segmentation and scene completion. Finally, we conducted a detailed analysis of the influence of RGB features on the semantic segmentation of urban point clouds. We compared several training scenarios and found that color systematically improves the performance on certain classes. This demonstrates that including a more detailed description of the spectrum, once hyperspectral camera technology becomes more sensitive, can be useful to improve the description of urban scenes.
Note de contenu :
1- Introduction
2- Data used in this thesis
3- Graph based color lines (GCL)
4- Study of REPLICA data
5- Power Jaccard losses for semantic segmentation
6- Segmentation of point clouds
7- Conclusions and perspectives
Numéro de notice : 28464
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/MATHEMATIQUE/URBANISME
Nature : Thèse française
Note de thèse : Thèse de Doctorat : Morphologie Mathématique : Paris sciences et lettres : 2021
Organisme de stage : Centre de Morphologie Mathématique
DOI : sans
En ligne : https://pastel.hal.science/tel-03434199/
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99076

Aleatoric uncertainty estimation for dense stereo matching via CNN-based cost volume analysis / Max Mehltretter in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021)
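The abstract above names a "power Jaccard loss" for semantic segmentation without giving its formula. One plausible reading of the idea is a soft Jaccard (IoU) loss whose union term is raised to a power p; the sketch below follows that assumption and is not necessarily the thesis's exact formulation. With p = 1 it reduces to the classical soft Jaccard loss.

```python
import numpy as np

def power_jaccard_loss(y_true, y_pred, p=2.0, eps=1e-7):
    """Soft Jaccard loss with exponent p on the union term (illustrative).

    y_true: binary ground-truth mask; y_pred: predicted probabilities.
    Loss is 0 for a perfect prediction and approaches 1 for a fully
    wrong one; eps guards against division by zero on empty masks.
    """
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true ** p + y_pred ** p) - inter
    return 1.0 - (inter + eps) / (union + eps)
```

Raising the terms to a power p > 1 shrinks small (uncertain) predicted probabilities in the union, which changes the gradient the optimizer sees compared to the plain soft Jaccard loss.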
[article]
Titre : Aleatoric uncertainty estimation for dense stereo matching via CNN-based cost volume analysis
Type de document : Article/Communication
Auteurs : Max Mehltretter, Auteur ; Christian Heipke, Auteur
Année de publication : 2021
Article en page(s) : pp 63 - 75
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement d'images
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] corrélation épipolaire dense
[Termes IGN] couple stéréoscopique
[Termes IGN] courbe épipolaire
[Termes IGN] disparité
[Termes IGN] effet de profondeur cinétique
[Termes IGN] image RVB
[Termes IGN] modèle d'incertitude
[Termes IGN] modèle stochastique
[Termes IGN] voxel
Résumé : (auteur) Motivated by the need to identify erroneous disparity estimates, various methods for the estimation of aleatoric uncertainty in the context of dense stereo matching have been presented in recent years. In particular, the introduction of deep learning based methods and the accompanying significant improvement in accuracy have greatly increased the popularity of this field. Despite this remarkable development, most of these methods rely on features learned from disparity maps only, neglecting the corresponding 3-dimensional cost volumes. However, conventional hand-crafted methods have already demonstrated that the additional information contained in such cost volumes is beneficial for the task of uncertainty estimation. In this paper, we combine the advantages of deep learning and cost volume based features and present a new Convolutional Neural Network (CNN) architecture to directly learn features for the task of aleatoric uncertainty estimation from volumetric 3D data. Furthermore, we discuss and apply three different uncertainty models to train our CNN without the need to provide ground truth for uncertainty. In an extensive evaluation on three datasets using three common dense stereo matching methods, we investigate the effects of these uncertainty models and demonstrate the generality and state-of-the-art accuracy of the proposed method.
Numéro de notice : A2021-012
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2020.11.003
Date de publication en ligne : 18/11/2020
En ligne : https://doi.org/10.1016/j.isprsjprs.2020.11.003
Format de la ressource électronique : url article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=96415
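The abstract mentions training the CNN "without the need to provide ground truth for uncertainty" via parametric uncertainty models, but does not detail them. A common construction of this kind minimizes the negative log-likelihood of an assumed error distribution over the disparity residual, e.g. a Laplacian; the sketch below is a generic illustration under that assumption, not the paper's specific models.

```python
import numpy as np

def laplacian_nll(residual, log_b):
    """Negative log-likelihood of a Laplacian error model (per pixel).

    residual: disparity error d_est - d_gt; log_b: predicted log-scale
    (predicting the log keeps the scale positive and training stable).
    Minimizing this pushes the network to output a larger scale b where
    residuals are large, so no uncertainty ground truth is needed: the
    predicted b itself becomes the aleatoric uncertainty estimate.
    """
    b = np.exp(log_b)
    return np.abs(residual) / b + log_b + np.log(2.0)
```

The loss trades off the data term |residual|/b against the penalty log b, so the optimum scale for a pixel tracks the magnitude of its typical error.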
in ISPRS Journal of photogrammetry and remote sensing > vol 171 (January 2021) . - pp 63 - 75
[article]
Exemplaires (3) :
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2021011 | SL | Revue | Centre de documentation | Revues en salle | Disponible
081-2021013 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2021012 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt

Amélioration et adaptation du protocole de mesure d’empreintes d’abrasion par photogrammétrie / Hiba Sayeh (2021)
Titre : Amélioration et adaptation du protocole de mesure d’empreintes d’abrasion par photogrammétrie
Type de document : Mémoire
Auteurs : Hiba Sayeh, Auteur
Editeur : Strasbourg : Institut National des Sciences Appliquées INSA Strasbourg
Année de publication : 2021
Importance : 109 p.
Format : 21 x 30 cm
Note générale : bibliographie
Mémoire de fin d'études d'Ingénieur INSA
Langues : Français (fre)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] 3DReshaper
[Termes IGN] empreinte
[Termes IGN] étalonnage d'instrument
[Termes IGN] géoréférencement
[Termes IGN] métrologie dimensionnelle
[Termes IGN] MicMac
[Termes IGN] semis de points
[Termes IGN] texture d'image
Résumé : (auteur) This final-year project was proposed as part of the optimization of the photogrammetric process developed by CNR for computing the volume of abrasion imprints. The current protocol consists in generating a point cloud of an abrasion sample with MicMac and computing the volume of the cavity in 3DReshaper. This workflow has practical constraints, and the objective is to propose an alternative method built on an intuitive commercial software package. The challenges are the size of the imprints to be modeled, which requires solid expertise in the metrology of small objects; the setup of an automated protocol; the reflective surface of the glass plates; and finally the targeted accuracy, set at a tolerance of 2% with respect to the MicMac volumes. Calibration, georeferencing and artificial-texture tests will be carried out to reach the expected accuracy.
Note de contenu :
Introduction
1- Etat de l'art
2- Protocole actuellement déployé
3- Problématique de géoréférencement et de calibration
4- Thématique de texture
5- Validation de la méthode Metashape
6- Analyse des précisions et des résultats atteints par lasergrammétrie
Conclusion
Numéro de notice : 28683
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Mémoire ingénieur INSAS
Organisme de stage : Centre d'Analyse Comportementale des Ouvrages Hydrauliques
En ligne : http://eprints2.insa-strasbourg.fr/4496/
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99978

Amélioration des résolutions spatiale et spectrale d’images satellitaires par réseaux antagonistes / Anaïs Gastineau (2021)
An improved approach based on terrain-dependent mathematical models for georeferencing pushbroom satellite images / Behrooz Moradi in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 1 (January 2021)
Analyse de la dynamique d’embroussaillement des pelouses calcaires par traitement d’images / Théo Mesure (2021)
Analyse spatio-temporaire des dégradations et évolution des forêts par télédétection : cas du Parc National de Theniet El Had (Algérie) / Faouzi Berrichi in Bulletin des sciences géographiques, n° 32 (2019 - 2021)
Apport des données Sentinel-1 pour le suivi continu de la forêt tropicale : Cas de la Guyane / Marie Ballère (2021)
Apport des méthodes : imagerie drone, LiDAR et imagerie hyperspectrale pour l’étude du littoral vendéen / Mathis Baudis (2021)
Apport de la photogrammétrie dans la documentation et le suivi d’une tranchée archéologique / Iris Lucas (2021)
Apport de la photogrammétrie et de l’intelligence artificielle à la détection des zones amiantées sur les fronts rocheux / Philippe Caudal (2021)
Apport de la photogrammétrie satellite pour la modélisation du manteau neigeux / César Deschamps-Berger (2021)
Apports des méthodes d'apprentissage profond pour la reconnaissance automatique des modes d'occupation des sols et d'objets par télédétection en milieu tropical / Guillaume Rousset (2021)
Apprentissage profond et IA pour l’amélioration de la robustesse des techniques de localisation par vision artificielle / Achref Elouni (2021)
Assessing the interest of a multi-modal gap-filling strategy for monitoring changes in grassland parcels / Anatol Garioud (2021)
Assessment of combining convolutional neural networks and object based image analysis to land cover classification using Sentinel 2 satellite imagery (Tenes region, Algeria) / N. Zaabar (2021)
Automated detection of individual Juniper tree location and forest cover changes using Google Earth Engine / Sudeera Wickramarathna in Annals of forest research, vol 64 n° 1 (2021)
Automated detection of lineaments express geological linear features of a tropical region using topographic fabric grain algorithm and the SRTM DEM / Samy Ismail Elmahdy in Geocarto international, vol 36 n° 1 ([01/01/2021])
Automatic object extraction from airborne laser scanning point clouds for digital base map production / Elyta Widyaningrum (2021)
Beach morphology and its dynamism from remote sensing for coastal management support / Carlos Cabezas Rabadán (2021)
Building extraction from Lidar data using statistical methods / Haval Abdul-Jabbar Sadeq in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 1 (January 2021)
Calcul de la largeur à pleins bords de grands cours d’eau à partir de MNT LiDAR / Nicolas Fermen (2021)
Change detection of land use and land cover, using landsat-8 and sentinel-2A images / Mohammed Abdulmohsen Alhedyan (2021)
Combining deep learning and mathematical morphology for historical map segmentation / Yizi Chen (2021)
Connecting images through time and sources: Introducing low-data, heterogeneous instance retrieval / Dimitri Gominski (2021)
Consolidation of crowd-sourced geo-ragged data for parameterized travel recommendations / Ago Luberg (2021)
Contribution des SIG et de la modélisation volumique à la caractérisation géomorphologique et géologique de la région des Doukkala « Meseta côtière, Maroc » / Youness Ahmed Laaziz (2021)
Contributions to graph-based hierarchical analysis for images and 3D point clouds / Leonardo Gigli (2021)
Correction radiométrique et recalage de nuages de points pour la reconstruction tridimensionnelle d'oeuvres du patrimoine culturel / Nathan Sanchiz (2021)
Deep convolutional neural networks for scene understanding and motion planning for self-driving vehicles / Abdelhak Loukkal (2021)
Deep learning for wildfire progression monitoring using SAR and optical satellite image time series / Puzhao Zhang (2021)
Description et recherche d’image généralisables pour l’interconnexion et l’analyse multi-source / Dimitri Gominski (2021)
Détection de changement d’occupation du sol à l’aide de données Sentinel en contexte tropical / Lucas Martelet (2021)
Détection/reconnaissance d'objets urbains à partir de données 3D multicapteurs prises au niveau du sol, en continu / Younes Zegaoui (2021)
Détection et reconstruction 3D d’arbres urbains par segmentation de nuages de points : apport de l’apprentissage profond / Victor Alteirac (2021)
Développement d’outils d’exploitation des archives photographiques aériennes de l’IGN pour caractériser l’évolution pluridécennale du littoral sur l’île de la Réunion / Adinane Oladjidé Ayichemi (2021)
Dynamics of inundation events in the rivers-estuaries-ocean continuum in Bengal delta : synergy between hydrodynamic modelling and spaceborne remote sensing / Md Jamal Uddin Kahn (2021)