Descripteur
Termes IGN > informatique > intelligence artificielle > vision par ordinateur
Documents disponibles dans cette catégorie (92)
Towards visual urban scene understanding for autonomous vehicle path tracking using GPS positioning data / Citlalli Gamez Serna (2019)
Titre : Towards visual urban scene understanding for autonomous vehicle path tracking using GPS positioning data Type de document : Thèse/HDR Auteurs : Citlalli Gamez Serna, Auteur ; Yassine Ruichek, Directeur de thèse Editeur : Dijon : Université Bourgogne Franche-Comté UBFC Année de publication : 2019 Importance : 178 p. Format : 21 x 30 cm Note générale : bibliographie
Thèse de Doctorat de l'Université Bourgogne Franche-Comté préparée à l'Université de Technologie de Belfort-Montbéliard, Informatique. Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] compréhension de l'image
[Termes IGN] instance
[Termes IGN] milieu urbain
[Termes IGN] navigation autonome
[Termes IGN] récepteur GPS
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
[Termes IGN] signalisation routière
[Termes IGN] système de transport intelligent
[Termes IGN] trajectoire (véhicule non spatial)
[Termes IGN] véhicule sans pilote
[Termes IGN] vision par ordinateur
[Termes IGN] vision stéréoscopique
[Termes IGN] vitesse
Mots-clés libres : suivi d'itinéraire Index. décimale : THESE Thèses et HDR Résumé : (auteur) This PhD thesis focuses on developing a path tracking approach based on visual perception and localization in urban environments. The proposed approach comprises two systems. The first concerns environment perception: deep learning techniques automatically extract 2D visual features and learn to distinguish the different objects in driving scenarios. Three deep learning techniques are adopted: semantic segmentation to assign each image pixel to a class, instance segmentation to identify separate instances of the same class, and image classification to recognize the specific labels of the instances. The system segments 15 object classes and performs traffic sign recognition. The second system handles path tracking. To follow a path, the equipped vehicle first travels and records the route with a stereo vision system and a GPS receiver (learning step). The proposed system analyses the GPS path off-line and identifies the exact locations of dangerous (sharp) curves and speed limits. Later, once the vehicle is able to localize itself, the vehicle control module, together with our speed negotiation algorithm, takes the extracted information into account and computes the ideal speed to execute. Experimental results show that the first system detects and recognizes objects of interest precisely in urban scenarios, while the path tracking system significantly reduces the lateral errors between the learned and traveled paths. We argue that fusing both systems will improve the tracking approach for preventing accidents or implementing autonomous driving. Note de contenu : I- Context and problems
1- Introduction
II- Contribution
2- Proposed datasets
3- Traffic sign classification
4- Visual perception system for urban environments
5- Dynamic speed adaptation system for path tracking based on curvature
information and speed limits
III- Conclusions and future works
6- Conclusions and future works
Numéro de notice : 25967 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Thèse française Note de thèse : Thèse de Doctorat : Informatique : UBFC : 2019 Organisme de stage : CIAD Dijon nature-HAL : Thèse DOI : sans En ligne : https://tel.archives-ouvertes.fr/tel-02160966/document Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=96587
Point clouds by SLAM-based mobile mapping systems: accuracy and geometric content validation in multisensor survey and stand-alone acquisition / Giulia Sammartano in Applied geomatics, vol 10 n° 4 (December 2018)
[article]
Titre : Point clouds by SLAM-based mobile mapping systems: accuracy and geometric content validation in multisensor survey and stand-alone acquisition Type de document : Article/Communication Auteurs : Giulia Sammartano, Auteur ; Antonia Spanò, Auteur Année de publication : 2018 Article en page(s) : pp 317 - 339 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] analyse comparative
[Termes IGN] carte d'intérieur
[Termes IGN] cartographie 3D
[Termes IGN] cartographie et localisation simultanées
[Termes IGN] données localisées 3D
[Termes IGN] intégration de données
[Termes IGN] lever souterrain
[Termes IGN] modèle 3D du site
[Termes IGN] patrimoine culturel
[Termes IGN] patrimoine immobilier
[Termes IGN] semis de points
Résumé : (Auteur) The paper offers practical answers for evaluating the effectiveness and the critical issues of the simultaneous localisation and mapping (SLAM)-based mobile mapping system (MMS) called ZEB by GeoSLAM™ (https://geoslam.com/technology/). In recent years, this type of handheld 3D mapping technology has developed rapidly within the framework of portable close-range mapping solutions, which have mainly been devoted to mapping the indoor spaces of enclosed or underground environments, such as tunnels or mines, as well as forestry applications. The research introduces a set of test datasets related to the documentation of landscape contexts or the 3D modelling of architectural complexes. These datasets are used to validate the accuracy and the richness of the informative content of ZEB point clouds, both as a stand-alone solution and in combined applications of this technology with multisensor survey approaches. In detail, the proposed validation method relies on root mean square error (RMSE) evaluation and deviation analysis of point clouds, comparing SLAM-based data with 3D point cloud surfaces computed by more precise measurement methods. Furthermore, this study specifies the suitable scale for handling these peculiar point clouds and uses the profile extraction method in addition to feature analyses such as corner and plane deviation analysis of architectural elements. Finally, in light of the experiences reported in the literature and performed in this work, a possible reversal is suggested: whereas in the 2000s most studies focused on intelligently reducing light detection and ranging (LiDAR) point clouds wherever they presented redundant and not useful information, here the use of MMS methods is proposed as the first step, with information added only where needed using more accurate, higher-scale methods. Numéro de notice : A2018-590 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1007/s12518-018-0221-7 Date de publication en ligne : 01/06/2018 En ligne : https://doi.org/10.1007/s12518-018-0221-7 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=92514
in Applied geomatics > vol 10 n° 4 (December 2018) . - pp 317 - 339 [article]
Augmented reality meets computer vision : efficient data generation for urban driving scenes / Hassan Abu Alhaija in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Titre : Augmented reality meets computer vision : efficient data generation for urban driving scenes Type de document : Article/Communication Auteurs : Hassan Abu Alhaija, Auteur ; Siva Karthik Mustikovela, Auteur ; Lars Mescheder, Auteur ; Andreas Geiger, Auteur ; Carsten Rother, Auteur Année de publication : 2018 Article en page(s) : pp 961 - 972 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage automatique
[Termes IGN] détection d'objet
[Termes IGN] réalité augmentée
[Termes IGN] scène urbaine
[Termes IGN] vision par ordinateur
Résumé : (Auteur) The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models.
Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data. Numéro de notice : A2018-417 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1007/s11263-018-1070-x Date de publication en ligne : 07/03/2018 En ligne : https://doi.org/10.1007/s11263-018-1070-x Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=90900
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 961 - 972 [article]
Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
[article]
Titre : Application of deep learning for object detection Type de document : Article/Communication Auteurs : Ajeet Ram Pathak, Auteur ; Manjusha Pandey, Auteur ; Siddharth Rautaray, Auteur Année de publication : 2018 Article en page(s) : pp 1706 - 1717 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] état de l'art
[Termes IGN] réseau neuronal convolutif
[Termes IGN] vision par ordinateur
Résumé : (auteur) Ubiquitous and wide-ranging applications such as scene understanding, video surveillance, robotics, and self-driving systems have triggered vast research in the domain of computer vision over the most recent decade. At the core of all these applications, visual recognition systems, which encompass image classification, localization and detection, have achieved great research momentum. Due to significant developments in neural networks, especially deep learning, these visual recognition systems have attained remarkable performance. Object detection is one of the domains witnessing great success in computer vision. This paper demystifies the role of deep learning techniques based on convolutional neural networks for object detection. Deep learning frameworks and services available for object detection are also presented. Deep learning techniques for state-of-the-art object detection systems are assessed in this paper. Numéro de notice : A2018-585 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.procs.2018.05.144 Date de publication en ligne : 08/06/2018 En ligne : https://www.sciencedirect.com/science/article/pii/S1877050918308767 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=92435
in Procedia Computer Science > vol 132 (2018) . - pp 1706 - 1717 [article]
Foreword to the theme issue on geospatial computer vision / Jan Dirk Wegner in ISPRS Journal of photogrammetry and remote sensing, vol 140 (June 2018)
[article]
Titre : Foreword to the theme issue on geospatial computer vision Type de document : Article/Communication Auteurs : Jan Dirk Wegner, Auteur ; Devis Tuia, Auteur ; Michael Ying Yang, Auteur ; Clément Mallet , Auteur Année de publication : 2018 Projets : 1-Pas de projet / Article en page(s) : pp 1 - 2 Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] données localisées
[Termes IGN] vision par ordinateur
Numéro de notice : A2018-387 Affiliation des auteurs : LASTIG MATIS+Ext (2012-2019) Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2017.12.011 Date de publication en ligne : 09/01/2018 En ligne : https://doi.org/10.1016/j.isprsjprs.2017.12.011 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=90801
in ISPRS Journal of photogrammetry and remote sensing > vol 140 (June 2018) . - pp 1 - 2 [article]
Exemplaires (3)
Code-barres   Cote     Support  Localisation              Section          Disponibilité
081-2018061   RAB      Revue    Centre de documentation   En réserve L003  Disponible
081-2018063   DEP-EXM  Revue    LASTIG                    Dépôt en unité   Exclu du prêt
081-2018062   DEP-EAF  Revue    Nancy                     Dépôt en unité   Exclu du prêt
Pré-estimation et analyse de la précision pour la cartographie par drone / Laurent Valentin Jospin in XYZ, n° 155 (juin - août 2018) Permalink
Permalink
Localisation par l'image en milieu urbain : application à la réalité augmentée / Antoine Fond (2018) Permalink
Machine learning and pose estimation for autonomous robot grasping with collaborative robots / Victor Talbot (2018) Permalink
Réseaux de neurones convolutionnels profonds pour la détection de petits véhicules en imagerie aérienne / Jean Ogier du Terrail (2018) Permalink
Permalink
A geometric correspondence feature based-mismatch removal in vision based-mapping and navigation / Zeyu Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 10 (October 2017) Permalink
Complétion d'image exploitant des données multispectrales / Frédéric Bousefsaf in Revue Française de Photogrammétrie et de Télédétection, n° 215 (mai - août 2017) Permalink
Centimetric absolute localization using Unmanned Aerial Vehicles with airborne photogrammetry and on-board GPS / Mehdi Daakir (2017) Permalink
Permalink