Descripteur
Termes IGN > informatique > intelligence artificielle > vision par ordinateur > cartographie et localisation simultanées
cartographie et localisation simultanées. Synonyme(s) : SLAM
Documents disponibles dans cette catégorie (28)
Titre : De la navigation visuelle à l’analyse sémantique pour véhicules autonomes Type de document : Thèse/HDR Auteurs : Emir Hrustic, Auteur ; Eric Chaumette, Directeur de thèse ; Damien Vivet, Auteur Editeur : Toulouse : Université de Toulouse Année de publication : 2021 Importance : 193 p. Format : 21 x 30 cm Note générale : bibliographie
Thesis prepared for the Doctorat de l'Université de Toulouse, awarded by the Institut Supérieur de l'Aéronautique et de l'Espace, speciality Informatique et Télécommunications
Langues : Français (fre) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] apprentissage profond
[Termes IGN] capteur optique
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] détection d'objet
[Termes IGN] filtre de Kalman
[Termes IGN] information sémantique
[Termes IGN] navigation autonome
[Termes IGN] segmentation sémantique
[Termes IGN] signalisation routière
[Termes IGN] vision par ordinateur
Index. décimale : THESE Thèses et HDR Résumé : (auteur) Current work in autonomous navigation focuses mainly on localization algorithms based on multi-sensor fusion or on simultaneous localization and mapping (SLAM) approaches. Well-known and fairly reliable methods exist today, such as ORB-SLAM, SVO and PTAM. All of these can be considered "low-level" approaches, in the sense that scene interpretation remains very limited: the scene is represented by 3D point clouds or, at best, geometric landmarks. Note that with machine learning, and more recently the surge of interest in deep learning, image-analysis techniques have emerged for extracting static or mobile objects (detection of pedestrians, road signs, road markings). These approaches, however, remain decoupled from the navigation step itself. The ambition of this project is to integrate scene-analysis layers into autonomous navigation, i.e. to feed semantic information into the position-estimation step. We therefore aim to build a so-called semantic map of objects, whether road-related (signs, traffic lights, particular road markings...), urban (shop signs...) or, possibly, events (accidents, roadworks, detours...). Such a map will enable navigation by high-level visual landmarks that are far more robust over time and also easier to detect under lighting changes (day/night). The project thus sits at the intersection of several topics: machine learning, image analysis and object detection; vision-based localization (visual odometry, sensor fusion); and geolocated semantic mapping (SLAM+GNSS). Note de contenu : 1- Introduction
2- Autonomous vehicle navigation using optical sensors
3- Extraction of semantic landmarks
4- Integrating semantic landmarks into a SLAM-type framework
5- Integrating constraints to compensate for system modelling errors
Conclusion
Numéro de notice : 28597 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse française Note de thèse : Thèse de Doctorat : Informatique et Télécommunications : Toulouse : 2021 Organisme de stage : ISAE-ONERA SCANR DOI : sans En ligne : http://www.theses.fr/2021ESAE0008 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99356

Urban Wi-Fi fingerprinting along a public transport route / Guenther Retscher in Journal of applied geodesy, vol 14 n° 4 (October 2020)
[article]
Titre : Urban Wi-Fi fingerprinting along a public transport route Type de document : Article/Communication Auteurs : Guenther Retscher, Auteur ; Aizhan Bekenova, Auteur Année de publication : 2020 Article en page(s) : pp 379 – 392 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Navigation et positionnement
[Termes IGN] accès sans fils à l'internet
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] centrale inertielle
[Termes IGN] empreinte
[Termes IGN] itinéraire
[Termes IGN] migration pendulaire
[Termes IGN] positionnement par WiFi
[Termes IGN] programmation par contraintes
[Termes IGN] qualité du signal
[Termes IGN] service fondé sur la position
[Termes IGN] téléphone intelligent
[Termes IGN] transport collectif
[Termes IGN] zone urbaine
Résumé : (auteur) The reach of Wi-Fi localization is extended in this study to urban-wide applications, which offer high potential for localization and guidance in urban environments. The application presented in this paper is the localization and routing of public transport smartphone users. For the conducted investigations, Received Signal Strength Indicator (RSSI) values are collected for users travelling from home in a residential neighbourhood to work in the city centre and back along the same route. Tramway trains are selected that provide two on-board Wi-Fi Access Points (APs). Firstly, the availability, visibility and RSSI stability of these APs and of the APs in the surrounding environment along the routes are analyzed. Then the trajectories are estimated based on location fingerprinting. A first analysis reveals that significant differences exist between the six employed smartphones as well as between times of the day, e.g. morning peak hours versus off-peak hours. The long-term observations show that the two on-board APs exhibit high stability of the RSSI signals at the same times of the day and along the whole route. It is therefore currently being investigated how they can confirm and validate user localization along the route, and whether they can help constrain the overall positioning solution in combination with the inertial smartphone sensors. Moreover, the railway track can serve as a further constraint. As an outlook on future work, the development of a Simultaneous Localization and Mapping (SLAM) solution fused with the smartphone inertial sensors is proposed.
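The location-fingerprinting step summarized in this abstract can be sketched as a nearest-neighbour match in RSSI space. The radio map, AP names and RSSI values below are invented for illustration only (a real survey, as in the paper, collects many samples per reference point and per smartphone); the matching rule is a minimal sketch, not the authors' algorithm:

```python
import math

# Hypothetical offline radio map: location label -> mean RSSI (dBm) per AP.
RADIO_MAP = {
    "stop_A": {"ap_onboard_1": -45, "ap_onboard_2": -50, "ap_street_7": -80},
    "stop_B": {"ap_onboard_1": -47, "ap_onboard_2": -52, "ap_street_7": -60},
    "stop_C": {"ap_onboard_1": -46, "ap_onboard_2": -51, "ap_street_7": -92},
}

MISSING_RSSI = -100  # substitute for APs not heard at a location

def distance(obs, ref):
    """Euclidean distance in signal space over the union of observed APs."""
    aps = set(obs) | set(ref)
    return math.sqrt(sum((obs.get(a, MISSING_RSSI) - ref.get(a, MISSING_RSSI)) ** 2
                         for a in aps))

def locate(obs, radio_map, k=1):
    """Return the k best-matching fingerprint locations, nearest first."""
    ranked = sorted(radio_map, key=lambda loc: distance(obs, radio_map[loc]))
    return ranked[:k]

# An online scan taken somewhere along the route (invented values).
scan = {"ap_onboard_1": -46, "ap_onboard_2": -51, "ap_street_7": -62}
print(locate(scan, RADIO_MAP, k=2))  # stop_B is the closest match
```

The stable on-board APs mentioned in the abstract would appear here as map entries whose RSSI varies little across survey epochs, which is what makes them useful as route-long validation anchors.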
Numéro de notice : A2020-676 Affiliation des auteurs : non IGN Thématique : POSITIONNEMENT/URBANISME Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1515/jag-2020-0015 Date de publication en ligne : 16/07/2020 En ligne : https://doi.org/10.1515/jag-2020-0015 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=96168
in Journal of applied geodesy > vol 14 n° 4 (October 2020) . - pp 379 – 392 [article]

Under-canopy UAV laser scanning for accurate forest field measurements / Eric Hyyppä in ISPRS Journal of photogrammetry and remote sensing, vol 164 (June 2020)
[article]
Titre : Under-canopy UAV laser scanning for accurate forest field measurements Type de document : Article/Communication Auteurs : Eric Hyyppä, Auteur ; Juha Hyyppä, Auteur ; Teemu Hakala, Auteur ; et al., Auteur Année de publication : 2020 Article en page(s) : pp 41 - 60 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] balayage laser
[Termes IGN] canopée
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] densité du bois
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] données lidar
[Termes IGN] erreur moyenne quadratique
[Termes IGN] Finlande
[Termes IGN] forêt boréale
[Termes IGN] hauteur à la base du houppier
[Termes IGN] hauteur des arbres
[Termes IGN] image captée par drone
[Termes IGN] inventaire forestier local
[Termes IGN] modèle de croissance végétale
[Termes IGN] semis de points
[Termes IGN] télédétection aérienne
[Termes IGN] télémètre laser terrestre
[Termes IGN] télémétrie laser aéroporté
[Termes IGN] tronc
Résumé : (auteur) Surveying and robotic technologies are converging, offering great potential for robot-assisted data collection and support for labour-intensive surveying activities. From a forest monitoring perspective, there are several technological and operational aspects to address concerning under-canopy flying unmanned aerial vehicles (UAVs). To demonstrate this emerging technology, we investigated tree detection and stem curve estimation using laser scanning data obtained with an under-canopy flying UAV. To this end, we mounted a Kaarta Stencil-1 laser scanner with an integrated simultaneous localization and mapping (SLAM) system on board a UAV that was manually piloted with the help of video goggles receiving a live video feed from the onboard camera of the UAV. Using the under-canopy flying UAV, we collected SLAM-corrected point cloud data in a boreal forest on two 32 m × 32 m test sites that were characterized as sparse (42 trees) and obstructed (43 trees), respectively. Novel data processing algorithms were applied to the point clouds in order to detect the stems of individual trees and to extract their stem curves and diameters at breast height (DBH). The estimated tree attributes were compared against highly accurate field reference data that was acquired semi-manually with a multi-scan terrestrial laser scanner (TLS). The proposed method succeeded in detecting 93% of the stems in the sparse plot and 84% of the stems in the obstructed plot. In the sparse plot, the DBH and stem curve estimates had a root-mean-square error (RMSE) of 0.60 cm (2.2%) and 1.2 cm (5.0%), respectively, whereas the corresponding values for the obstructed plot were 0.92 cm (3.1%) and 1.4 cm (5.2%). By combining the stem curves extracted from the under-canopy UAV laser scanning data with tree heights derived from above-canopy UAV laser scanning data, we computed stem volumes for the detected trees with a relative RMSE of 10.1% in both plots.
Thus, the combination of under-canopy and above-canopy UAV laser scanning allowed us to extract the stem volumes with an accuracy comparable to the best past studies based on TLS in boreal forest conditions. Since the stems of several spruces located on the test sites suffered from severe occlusion and could not be detected with the stem-based method, we developed a separate workflow capable of detecting trees with occluded stems. The proposed workflow enabled us to detect 98% of the trees in the sparse plot and 93% of the trees in the obstructed plot, with a 100% correctness level in both plots. A key benefit provided by the under-canopy UAV laser scanner is the short time required for data collection, currently demonstrated to be much shorter than the time required for field measurements and TLS. The quality of the measurements acquired with the under-canopy flying UAV, combined with the demonstrated efficiency, indicates operational potential for supporting fast and accurate forest resource inventories. Numéro de notice : A2020-240 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2020.03.021 Date de publication en ligne : 11/04/2020 En ligne : https://doi.org/10.1016/j.isprsjprs.2020.03.021 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=94994
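As a minimal sketch of how a DBH estimate can be read off such point clouds: take a horizontal slice of stem points near breast height, fit a circle, and report twice the radius. The slice below is synthetic (true diameter 24 cm, a partial arc to mimic occlusion), and the Kåsa algebraic circle fit is one common choice, not necessarily the algorithm used in the paper:

```python
import numpy as np

# Synthetic horizontal slice of stem points at breast height (metres):
# a circle of radius 0.12 m centred at (2.0, 3.0), with only a partial
# arc visible (as happens under canopy) and 2 mm ranging noise.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 4.0, 60)          # radians, ~230 degree arc
r_true, cx, cy = 0.12, 2.0, 3.0
x = cx + r_true * np.cos(theta) + rng.normal(0, 0.002, theta.size)
y = cy + r_true * np.sin(theta) + rng.normal(0, 0.002, theta.size)

def fit_circle(x, y):
    """Kasa algebraic circle fit: solve a*x + b*y + c = x^2 + y^2 in least squares."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    a0, b0, c0 = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = a0 / 2.0, b0 / 2.0
    r = np.sqrt(c0 + cx**2 + cy**2)
    return cx, cy, r

cx_est, cy_est, r_est = fit_circle(x, y)
dbh_cm = 2.0 * r_est * 100.0
print(f"estimated DBH: {dbh_cm:.1f} cm")   # close to the true 24 cm
```

With real data the slice would be cut from the SLAM-corrected cloud at 1.3 m above the detected ground, and robust variants (e.g. RANSAC) would guard against branches and noise; repeating the fit at several heights yields the stem curve discussed in the abstract.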
in ISPRS Journal of photogrammetry and remote sensing > vol 164 (June 2020) . - pp 41 - 60 [article]
Exemplaires (3) :
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2020061 | RAB | Revue | Centre de documentation | En réserve L003 | Disponible
081-2020063 | DEP-RECP | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2020062 | DEP-RECF | Revue | Nancy | Dépôt en unité | Exclu du prêt

A review of techniques for 3D reconstruction of indoor environments / Zhizhong Kang in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020)
[article]
Titre : A review of techniques for 3D reconstruction of indoor environments Type de document : Article/Communication Auteurs : Zhizhong Kang, Auteur ; Juntao Yang, Auteur ; Zhou Yang, Auteur ; Sai Cheng, Auteur Année de publication : 2020 Article en page(s) : 31 p. Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] espace intérieur
[Termes IGN] image RVB
[Termes IGN] indoorGML
[Termes IGN] jeu de données localisées
[Termes IGN] modèle géométrique
[Termes IGN] modèle sémantique de données
[Termes IGN] modèle topologique de données
[Termes IGN] reconstruction 3D
Résumé : (auteur) Indoor environment model reconstruction has emerged as a significant and challenging task in terms of providing a semantically rich and geometrically accurate indoor model. Recently, there has been an increasing amount of research related to indoor environment reconstruction. Therefore, this paper reviews the state-of-the-art techniques for the three-dimensional (3D) reconstruction of indoor environments. First, some of the available benchmark datasets for 3D reconstruction of indoor environments are described and discussed. Then, data collection of 3D indoor spaces is briefly summarized. Furthermore, an overview of the geometric, semantic, and topological reconstruction of the indoor environment is presented, where the existing methodologies, advantages, and disadvantages of these three reconstruction types are analyzed and summarized. Finally, future research directions, including technique challenges and trends, are discussed for the purpose of promoting future research interest. It can be concluded that most of the existing indoor environment reconstruction methods are based on the strong Manhattan assumption, which may not hold in a real indoor environment, limiting the effectiveness and robustness of existing indoor environment reconstruction methods. Moreover, based on the hierarchical pyramid structures and the learnable parameters of deep-learning architectures, multi-task collaborative schemes that share parameters and jointly optimize each other using redundant and complementary information from different perspectives show their potential for the 3D reconstruction of indoor environments. Furthermore, seamless indoor–outdoor space integration to achieve a full representation of both building interiors and exteriors is also heavily in demand.
Numéro de notice : A2020-299 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.3390/ijgi9050330 Date de publication en ligne : 19/05/2020 En ligne : https://doi.org/10.3390/ijgi9050330 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=95139
in ISPRS International journal of geo-information > vol 9 n° 5 (May 2020) . - 31 p. [article]
Titre : Collaborative visual-inertial state and scene estimation Type de document : Thèse/HDR Auteurs : Marco Karrer, Auteur ; Margarita Chli, Directeur de thèse Editeur : Zurich : Eidgenossische Technische Hochschule ETH - Ecole Polytechnique Fédérale de Zurich EPFZ Année de publication : 2020 Importance : 151 p. Format : 21 x 30 cm Note générale : bibliographie
A thesis submitted to attain the degree of Doctor of Sciences of ETH Zurich in Mechanical Engineering
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Acquisition d'image(s) et de donnée(s)
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] centrale inertielle
[Termes IGN] compensation par faisceaux
[Termes IGN] estimation de pose
[Termes IGN] image captée par drone
[Termes IGN] reconstruction d'objet
[Termes IGN] robotique
[Termes IGN] système multi-agents
[Termes IGN] vision par ordinateur
Index. décimale : THESE Thèses et HDR Résumé : (auteur) The capability of a robot to create a map of its workspace on the fly, while constantly updating it and continuously estimating its motion in it, constitutes one of the central research problems in mobile robotics and is referred to as Simultaneous Localization And Mapping (SLAM) in the literature. Relying solely on the sensor suite onboard the robot, SLAM is a core building block in enabling the navigational autonomy necessary to facilitate the general use of mobile robots, and has been the subject of booming research interest spanning over three decades. With the largest body of related literature addressing the challenge of single-agent SLAM, it is only very recently, with the relative maturity of this field, that approaches tackling collaborative SLAM with multiple agents have started appearing. The potential of collaborative multi-agent SLAM is great: it promises not only to boost the efficiency of robotic missions by splitting the task at hand among more agents, but also to improve overall robustness and accuracy by increasing the amount of data that each agent's estimation process has access to. While SLAM can be performed using a variety of different sensors, this thesis focuses on the fusion of visual and inertial cues, one of the most common combinations of sensing modalities in robotics today. The information richness captured by cameras, along with the high-frequency and metric information provided by Inertial Measurement Units (IMUs), in combination with the low weight and power consumption offered by a visual-inertial sensor suite, render this setup ideal for a wide variety of applications and robotic platforms, in particular resource-constrained platforms such as Unmanned Aerial Vehicles (UAVs). The majority of state-of-the-art visual-inertial estimators are designed as odometry algorithms, providing only estimates consistent within a limited time horizon.
This lack of global consistency of estimates, however, poses a major hurdle to an effective fusion of data from multiple agents and to the practical definition of a common reference frame, which is imperative before collaborative effort can be coordinated. In this spirit, this thesis first investigates the potential of global optimization based on a central access point (server), demonstrating global consistency using only monocular-inertial data. By fusing data from multiple agents, not only can consistency be maintained, but accuracy is also shown to improve at times, revealing the great potential of collaborative SLAM. Aiming at improving computational efficiency, a second approach employs a more efficient system architecture, allowing a more suitable distribution of the computational load between the agents and the server. Furthermore, the architecture implements two-way communication, enabling tighter collaboration between the agents as they become capable of re-using information captured by other agents through communication with the server, improving their onboard pose tracking online, during the mission. In addition to general collaborative SLAM without specific assumptions on the agents' relative pose configuration, we investigate the potential of a configuration with two agents, each carrying one camera with overlapping fields of view, essentially forming a virtual stereo camera. With the ability of each robotic agent to move independently, the potential to control the stereo baseline according to the scene depth is very promising, for example at high altitudes where all scene points are far away and therefore provide only weak constraints on the metric scale in a standard single-agent system.
To this end, an approach is proposed to estimate the time-varying stereo transformation formed between two agents, by fusing the egomotion estimates of the individual agents with the image measurements extracted from the view-overlap in a tightly coupled fashion. Taking this virtual stereo camera idea a step further, a novel collaboration framework is presented, utilizing the view-overlap along with relative distance measurements between the two agents (e.g. obtained via Ultra-Wide Band (UWB) modules) in order to successfully perform state estimation at high altitudes where state-of-the-art single-agent methods fail. In the interest of low-latency pose estimation, each agent holds its own estimate of the map, while consistency between the agents is achieved using a novel consensus-based sliding-window bundle adjustment. Although the experiments in this work are shown in a two-agent setup, the proposed distributed bundle adjustment scheme holds great potential for scaling up to larger problems with multiple agents, due to the asynchronicity of the proposed estimation process and the high level of parallelism it permits. The majority of the approaches developed in this thesis rely on sparse feature maps in order to allow for efficient and timely pose estimation; however, this translates to reduced awareness of the spatial structure of a robot's workspace, which can be insufficient for tasks requiring careful scene interaction and manipulation of objects. Equipping a typical visual-inertial sensor suite with an RGB-D camera, an add-on framework is presented that enables the efficient fusion of naturally noisy depth information into an accurate, local, dense map of the scene, providing sufficient information for an agent to plan contact with a surface.
With the focus on collaborative SLAM using visual-inertial data, the approaches and systems presented in this thesis contribute towards achieving collaborative Visual-Inertial SLAM (VI-SLAM) deployable in challenging real-world scenarios, where the participating agents' experiences are fused and processed at a central access point. On the other hand, it is shown that taking advantage of specific configurations can push the collaboration amongst the agents towards greater general robustness and accuracy of scene and egomotion estimates in scenarios where state-of-the-art single-agent systems are otherwise unsuccessful, paving the way towards intelligent robot collaboration. Note de contenu : Introduction
1- Real-time dense surface reconstruction for aerial manipulation
2- Towards globally consistent visual-inertial collaborative SLAM
3- CVI-SLAM – collaborative visual-inertial SLAM
4- Collaborative 6DoF relative pose estimation for two UAVs with overlapping fields of view
5- Distributed variable-baseline stereo SLAM from two UAVs
Numéro de notice : 28318 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Thèse étrangère Note de thèse : PhD Thesis : Mechanical Engineering : ETH Zurich : 2020 DOI : sans En ligne : https://www.research-collection.ethz.ch/handle/20.500.11850/465334 Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98251

Smoothing algorithms for navigation, localisation and mapping based on high-grade inertial sensors / Paul Chauchat (2020)
Enhanced 3D mapping with an RGB-D sensor via integration of depth measurements and image sequences / Bo Wu in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 9 (September 2019)
Point clouds by SLAM-based mobile mapping systems: accuracy and geometric content validation in multisensor survey and stand-alone acquisition / Giulia Sammartano in Applied geomatics, vol 10 n° 4 (December 2018)
n° 225 - septembre 2015 - Temps, art et cartographie (Bulletin de Cartes & Géomatique) / Jasmine Desclaux-Salachas
Monroe county takes to the road with GIS technology vehicle / K. Corbey in GEO: Geoconnexion international, vol 11 n° 1 (january 2012)
Semi-automatic process for hybrid DTM Generalization based on structural elements multi-analysis / M. Martin in Cartographic journal (the), vol 46 n° 2 (May 2009)
Fighting wildfires with GPS in Portugal / E. Van Rees in Geoinformatics, vol 11 n° 7 (01/11/2008)
Development of a robotic mobile mapping system by vision-aided inertial navigation / Fadi Atef Bayoud (2006)