Author details
Documents available written by this author (9)
Improving image description with auxiliary modality for visual localization in challenging conditions / Nathan Piasco in International journal of computer vision, vol 129 n° 1 (January 2021)
Title: Improving image description with auxiliary modality for visual localization in challenging conditions
Document type: Article/Communication
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Valérie Gouet-Brunet, Author; Cédric Demonceaux, Author
Publication year: 2021
Projects: PLaTINUM / Gouet-Brunet, Valérie
Article pages: pp 185 - 202
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] descriptor
[IGN terms] image-based localization
[IGN terms] vision-based localization
Abstract: (author) Image indexing for lifelong localization is a key component of a large panel of applications, including robot navigation, autonomous driving and cultural heritage valorization. The principal difficulty in long-term localization arises from the dynamic changes that affect outdoor environments. In this work, we propose a new approach for outdoor large-scale image-based localization that can deal with challenging scenarios like cross-season, cross-weather and day/night localization. The key component of our method is a new learned global image descriptor that can effectively benefit from scene geometry information during training. At test time, our system is capable of inferring the depth map related to the query image and of using it to increase localization accuracy. We show through extensive evaluation that our method can improve localization performance, especially in challenging scenarios where the visual appearance of the scene has changed. Our method is able to leverage both visual and geometric clues from monocular images to create discriminative descriptors for cross-season localization and effective matching of images acquired at different time periods. Our method can also use weakly annotated data to localize night images against a reference dataset of daytime images. Finally, we extend our method to the reflectance modality and compare multi-modal descriptors respectively based on geometry, material reflectance and a combination of both.
Record number: A2021-132
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: HAL
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-020-01363-6
Online publication date: 28/08/2020
Online: https://doi.org/10.1007/s11263-020-01363-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96971
in International journal of computer vision > vol 129 n° 1 (January 2021). - pp 185 - 202
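The descriptor described in the abstract above is learned with depth as an auxiliary modality: a shared encoder feeds both a global-descriptor head and a dense depth decoder, so geometric cues shape the descriptor even though only RGB is needed at test time. The following PyTorch sketch is only an illustration of that two-head idea, not the authors' architecture; the layer sizes, the average-pooling descriptor head and the L1 depth loss are assumptions.

```python
import torch
import torch.nn as nn

class DescriptorWithDepthHead(nn.Module):
    """Shared encoder with two heads: a global image descriptor (main task)
    and a dense depth map (auxiliary modality used only during training).
    Hypothetical layer sizes; not the exact architecture of the paper."""

    def __init__(self, desc_dim=256):
        super().__init__()
        # Small convolutional encoder shared by both heads.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Descriptor head: global average pooling + projection + L2 normalization.
        self.desc_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, desc_dim),
        )
        # Depth head: lightweight decoder that upsamples back to input resolution.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        feat = self.encoder(rgb)
        desc = nn.functional.normalize(self.desc_head(feat), dim=1)
        depth = self.depth_head(feat)
        return desc, depth


# Joint training would combine a retrieval loss on the descriptor (e.g. triplet)
# with a reconstruction loss on the auxiliary depth map, for example:
model = DescriptorWithDepthHead()
rgb = torch.randn(2, 3, 128, 128)        # toy batch
desc, depth = model(rgb)
depth_gt = torch.rand(2, 1, 128, 128)    # would come from LiDAR/stereo in practice
aux_loss = nn.functional.l1_loss(depth, depth_gt)
```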
Title: Geometric camera pose refinement with learned depth maps
Document type: Article/Communication
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Cédric Demonceaux, Author; Valérie Gouet-Brunet, Author
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2019
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ICIP 2019, 26th IEEE International Conference on Image Processing, 22/09/2019 - 25/09/2019, Taipei, Taiwan, IEEE Proceedings
Extent: 5 p.
Format: 21 x 30 cm
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] ICP algorithm
[IGN terms] depth map
[IGN terms] pose estimation
[IGN terms] convolutional neural network
[IGN terms] indoor scene
[IGN terms] point cloud
Abstract: (author) We present a new method for image-only camera relocalisation composed of a fast image indexing retrieval step followed by pose refinement based on ICP (Iterative Closest Point). The first step aims to find an initial pose for the query by evaluating image similarity with low-dimensional global deep descriptors. Subsequently, we predict a dense depth map from the query image with a fully convolutional deep encoder-decoder neural network. We use this depth map to create a local point cloud and refine the initial query pose using an ICP algorithm. We demonstrate the effectiveness of our new approach on various indoor scenes. Compared to learned pose regression methods, our proposal can be used on multiple scenes without the need for a scene-specific weight setup, while showing equivalent results.
Record number: C2019-015
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICIP.2019.8803014
Online publication date: 26/08/2019
Online: https://doi.org/10.1109/ICIP.2019.8803014
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93279
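The refinement stage summarised above back-projects the predicted depth map into a local point cloud and registers it with ICP against the reference geometry. Purely to make that step concrete, here is a minimal NumPy/SciPy sketch of pinhole back-projection and point-to-point ICP; the helper names and the simple nearest-neighbour data association are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Point-to-point ICP: returns a 4x4 transform mapping source onto target."""
    T = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)              # nearest-neighbour correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                      # apply the incremental update
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:            # stop when the residual stabilises
            break
        prev_err = err
    return T

def depth_to_points(depth, K):
    """Back-project a dense depth map (H x W) into camera-frame 3D points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]                            # drop invalid depth predictions
```

In the setting described above, the source cloud would come from the depth map predicted for the query and the target from the geometry attached to the retrieved reference; the helpers here are generic.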
Title: Learning scene geometry for visual localization in challenging conditions
Document type: Article/Communication
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Valérie Gouet-Brunet, Author; Cédric Demonceaux, Author
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2019
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ICRA 2019, International Conference on Robotics and Automation, 20/05/2019 - 24/05/2019, Montréal, Québec, Canada, IEEE Proceedings
Extent: pp 9094 - 9100
Format: 21 x 30 cm
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] visual analysis
[IGN terms] image matching
[IGN terms] depth map
[IGN terms] descriptor
[IGN terms] image geometry
[IGN terms] RGB image
[IGN terms] vision-based localization
[IGN terms] localization accuracy
[IGN terms] night-time image acquisition
[IGN terms] robotics
[IGN terms] urban scene
[IGN terms] diurnal variation
[IGN terms] seasonal variation
[IGN terms] computer vision
Abstract: (author) We propose a new approach for outdoor large-scale image-based localization that can deal with challenging scenarios like cross-season, cross-weather, day/night and long-term localization. The key component of our method is a new learned global image descriptor that can effectively benefit from scene geometry information during training. At test time, our system is capable of inferring the depth map related to the query image and of using it to increase localization accuracy. We are able to increase recall@1 performance by 2.15% on a cross-weather and long-term localization scenario and by 4.24 percentage points on a challenging winter/summer localization sequence versus state-of-the-art methods. Our method can also use weakly annotated data to localize night images against a reference dataset of daytime images.
Record number: C2019-002
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICRA.2019.8794221
Online publication date: 12/08/2019
Online: http://doi.org/10.1109/ICRA.2019.8794221
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93774
Digital documents
Learning scene geometry... - author PDF (open access, Adobe Acrobat PDF)
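The retrieval step and the recall@1 figures quoted in the abstract above rest on nearest-neighbour search over L2-normalized global descriptors, with a query counted as localized when a top-ranked reference lies within a distance threshold of its true position. A hedged NumPy sketch of that retrieval and of the recall@k metric (the function names and the 25 m threshold are assumptions, not tied to the paper's evaluation code):

```python
import numpy as np

def retrieve(query_desc, db_descs, k=1):
    """Return indices of the k most similar database images.
    Descriptors are assumed L2-normalized, so dot product = cosine similarity."""
    scores = db_descs @ query_desc
    return np.argsort(-scores)[:k]

def recall_at_k(query_descs, db_descs, query_pos, db_pos, k=1, radius=25.0):
    """Fraction of queries whose top-k retrieved images lie within `radius`
    meters of the query's ground-truth position (a common VPR criterion)."""
    hits = 0
    for q_desc, q_pos in zip(query_descs, query_pos):
        idx = retrieve(q_desc, db_descs, k)
        dists = np.linalg.norm(db_pos[idx] - q_pos, axis=1)
        hits += bool((dists <= radius).any())
    return hits / len(query_descs)
```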
Title: Perspective-n-learned-point: pose estimation from relative depth
Document type: Article/Communication
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Cédric Demonceaux, Author; Valérie Gouet-Brunet, Author
Publisher: Saint-Mandé: Institut national de l'information géographique et forestière - IGN (2012-)
Publication year: 2019
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: BMVC 2019, British Machine Vision Conference, 09/09/2019 - 12/09/2019, Cardiff, United Kingdom, OA Proceedings
Extent: 15 p.
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] depth map
[IGN terms] neural network classification
[IGN terms] pose estimation
[IGN terms] image geometry
[IGN terms] content-based image retrieval
Abstract: (author) In this paper we present an online camera pose estimation method that combines Content-Based Image Retrieval (CBIR) and pose refinement based on a learned representation of the scene geometry extracted from monocular images. Our pose estimation method is two-step: we first retrieve an initial 6 Degrees of Freedom (DoF) location for an unknown-pose query by retrieving the most similar candidate in a pool of geo-referenced images. In a second step, we refine the query pose with a Perspective-n-Point (PnP) algorithm where the 3D points are obtained from a depth map generated for the retrieved image candidate. We make our method fast and lightweight by using a common neural network architecture to generate both the image descriptor for image indexing and the depth map used to create the 3D points required in the PnP pose refinement step. We demonstrate the effectiveness of our proposal through extensive experimentation on both indoor and outdoor scenes, as well as the generalisation capability of our method to unknown environments. Finally, we show how to deploy our system even when geometric information is missing to train our monocular-image-to-depth neural networks.
Record number: C2019-025
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Other associated URL: HAL
Theme: IMAGERIE/INFORMATIQUE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: none
Online publication date: 12/11/2019
Online: https://bmvc2019.org/wp-content/uploads/papers/0981-paper.pdf
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94320
Digital documents
Perspective-n-learned-point ... - author PDF (open access, Adobe Acrobat PDF)
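The Perspective-n-learned-Point refinement described above builds 2D-3D correspondences by lifting keypoints of the retrieved reference image into 3D with its generated depth map, then solves a standard PnP problem for the query pose. The OpenCV sketch below illustrates that step under stated assumptions: ORB features, brute-force matching and RANSAC-based PnP stand in for whatever the authors actually used.

```python
import cv2
import numpy as np

def refine_pose_pnp(query_gray, ref_gray, ref_depth, K):
    """Estimate the query camera pose w.r.t. the reference camera frame:
    match 2D features, lift reference keypoints to 3D with the (learned)
    depth map, then solve PnP with RANSAC."""
    orb = cv2.ORB_create(2000)
    kp_q, des_q = orb.detectAndCompute(query_gray, None)
    kp_r, des_r = orb.detectAndCompute(ref_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_r)

    obj_pts, img_pts = [], []
    for m in matches:
        u, v = kp_r[m.trainIdx].pt            # keypoint in the reference image
        z = ref_depth[int(v), int(u)]
        if z <= 0:                            # skip invalid depth predictions
            continue
        # Back-project the reference keypoint to a 3D point (pinhole model).
        obj_pts.append([(u - K[0, 2]) * z / K[0, 0],
                        (v - K[1, 2]) * z / K[1, 1],
                        z])
        img_pts.append(kp_q[m.queryIdx].pt)   # its 2D observation in the query

    if len(obj_pts) < 4:                      # PnP needs at least 4 correspondences
        return None
    obj_pts = np.asarray(obj_pts, dtype=np.float32)
    img_pts = np.asarray(img_pts, dtype=np.float32)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                # rotation vector -> matrix
    return R, tvec                            # pose of reference-frame points in the query camera frame
```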
Vision-based localization with discriminative features from heterogeneous visual data / Nathan Piasco (2019)
Title: Vision-based localization with discriminative features from heterogeneous visual data
Document type: Thesis/HDR
Authors: Nathan Piasco, Author; Valérie Gouet-Brunet, Thesis supervisor; Cédric Demonceaux, Thesis supervisor
Publisher: Dijon: Université Bourgogne Franche-Comté UBFC
Publication year: 2019
Extent: 174 p.
Format: 21 x 30 cm
General note: Bibliography
Thesis presented to doctoral school no. 37 of the Université de Dijon for the degree of Doctorate in instrumentation and image informatics
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] ICP algorithm
[IGN terms] depth map
[IGN terms] convolutional neural network classification
[IGN terms] heterogeneous data
[IGN terms] pose estimation
[IGN terms] modulation transfer function
[IGN terms] image-based localization
[IGN terms] vision-based localization
[IGN terms] content-based image retrieval
[IGN terms] monocular vision
Decimal index: THESE Theses and HDR
Abstract: (author) Visual-based Localization (VBL) consists in retrieving the location of a visual image within a known space. VBL is involved in several present-day practical applications, such as indoor and outdoor navigation, 3D reconstruction, etc. The main challenge in VBL comes from the fact that the visual input to localize could have been taken at a different time than the reference database. Visual changes may occur in the observed environment during this period of time, especially for outdoor localization. Recent approaches use complementary information, such as geometric or semantic information, in order to address these visually challenging localization scenarios. However, geometric or semantic information is not always available or can be costly to obtain. In order to be free of any extra modality used to solve challenging localization scenarios, we propose to use a modality transfer model capable of reproducing the underlying scene geometry from a monocular image. At first, we cast the localization problem as a Content-Based Image Retrieval (CBIR) problem and we train a CNN image descriptor with radiometry-to-dense-geometry transfer as a side training objective. Once trained, our system can be used on monocular images only to construct an expressive descriptor for localization in challenging conditions. Secondly, we introduce a new relocalization pipeline to improve the localization given by our initial localization step. In the same manner as our global image descriptor, the relocalization is aided by the geometric information learned during an offline stage. The extra geometric information is used to constrain the final pose estimation of the query. Through comprehensive experiments, we demonstrate the effectiveness of our proposals for both indoor and outdoor localization.
Contents note:
1. Introduction
1.1 Long-term mapping
1.2 PLaTINUM project
1.3 Visual-based Localization with heterogeneous data
2. Review of Visual-Based Localization methods
2.1 Data Representation
2.2 VBL methods
2.3 Data with Dissimilar Appearances
2.4 Data heterogeneity
2.5 Discussion
2.6 Conclusion
3. Side modality learning for localization
3.1 Related work
3.2 Model architectures and training
3.3 Implementation details
3.4 Long-term localization
3.5 Night to day localization scenarios
3.6 Laser reflectance as side information
3.7 Conclusion
4. Pose refinement with learned depth map
4.1 Method
4.2 Relative pose estimation
4.3 Preliminary results
4.4 Indoor localization
4.5 Unsupervised training and outdoor localization
4.6 Discussion
4.7 Conclusion
5. Conclusion
5.1 Summary of the thesis
5.2 Scientific contributions
5.3 Future Research
A Network architectures
A.1 Global image descriptor network
A.2 Multitask pose refinement network
Record number: 26415
Author affiliation: LASTIG MATIS (2012-2019)
Theme: IMAGERIE
Nature: French thesis
Thesis note: Doctoral thesis: Instrumentation and image informatics: Dijon: 2019
Host organisation: LaSTIG (IGN)
nature-HAL: Thèse
DOI: none
Online publication date: 13/11/2020
Online: https://hal.science/tel-03003651/document
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96302
A survey on visual-based localization: on the benefit of heterogeneous data / Nathan Piasco in Pattern recognition, vol 74 (February 2018)
Panorama de la recherche à l'IGN sur la localisation d'une caméra à partir d'images / Nathan Piasco (2018)
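Across the records above, and in the thesis (record 26415) in particular, the same two-stage scheme recurs: global-descriptor retrieval gives a coarse 6-DoF prior, and learned monocular depth then drives a geometric refinement (ICP- or PnP-based). A minimal sketch of how the stages compose, where describe, predict_depth and refine_pose are hypothetical stand-ins for trained models and for refinement routines like those sketched earlier, not released code:

```python
import numpy as np

def localize(query_img, db_descs, db_poses, describe, predict_depth, refine_pose):
    """Two-stage visual localization: global retrieval, then geometric refinement.
    `db_descs` are L2-normalized descriptors of the reference images,
    `db_poses` their known 6-DoF poses."""
    q_desc = describe(query_img)                  # global descriptor of the query
    best = int(np.argmax(db_descs @ q_desc))      # most similar reference image
    prior_pose = db_poses[best]                   # coarse pose inherited from retrieval
    depth = predict_depth(query_img)              # learned monocular depth for the query
    return refine_pose(prior_pose, depth, query_img, best)  # ICP- or PnP-based refinement
```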