Descriptor



Structure from motion for ordered and unordered image sets based on random k-d forests and global pose estimation / Xin Wang in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
[article]
Title: Structure from motion for ordered and unordered image sets based on random k-d forests and global pose estimation
Document type: Article/Communication
Authors: Xin Wang, Author; Franz Rottensteiner, Author; Christian Heipke, Author
Publication year: 2019
Pages: pp 19 - 41
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] image matching
[IGN descriptor terms] processing pipeline
[IGN descriptor terms] barycentric classification
[IGN descriptor terms] random forest classification
[IGN descriptor terms] bundle adjustment
[IGN descriptor terms] pose estimation
[IGN descriptor terms] UAV imagery
[IGN descriptor terms] rotation matrix
[IGN descriptor terms] relative orientation
[IGN descriptor terms] RANSAC (algorithm)
[IGN descriptor terms] image overlap
[IGN descriptor terms] SIFT (algorithm)
[IGN descriptor terms] structure-from-motion
[IGN descriptor terms] computer vision
Abstract: (author) In this paper, we present a new fast and robust method for structure from motion (SfM) for data sets potentially comprising thousands of ordered or unordered images. Our work focuses on the two most time-consuming procedures: (a) image matching and (b) pose estimation. For image matching, a new method employing a random k-d forest is proposed to quickly obtain pairs of overlapping images from an unordered set. After that, image matching and the estimation of relative orientation parameters are performed only for pairs found to be very likely to overlap. For pose estimation, we use a two-stage global approach, separating the determination of rotation matrices and translation parameters; the latter are computed simultaneously using a new method. In order to cope with outliers in the relative orientations, to which global approaches are particularly sensitive, we present a new constraint based on triplet loop closure errors of rotation and translation. Finally, a robust bundle adjustment is carried out to refine the image orientation parameters. We demonstrate the potential and limitations of our pipeline using various real-world datasets, including ordered image data acquired from UAVs (unmanned aerial vehicles) and other platforms as well as unordered data from the internet. The experiments show that our work performs better than comparable state-of-the-art SfM systems in terms of run time, while achieving similar accuracy and robustness.
Record number: A2019-033
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.11.009
Online publication date: 15/11/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.11.009
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91970
in ISPRS Journal of photogrammetry and remote sensing > vol 147 (January 2019) . - pp 19 - 41 [article]
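The pair-selection step described in the abstract above (querying a fast nearest-neighbour index over global image descriptors to shortlist probably-overlapping image pairs before any expensive matching) can be sketched as follows. This is a minimal illustration, not the authors' code: the toy descriptors and function name are invented, and a brute-force scan stands in for the random k-d forest the paper uses to accelerate exactly this query.

```python
import math

def knn_candidate_pairs(descriptors, k=2):
    """Shortlist probably-overlapping image pairs: for each image,
    take its k nearest neighbours in descriptor space as candidates.
    (A brute-force scan stands in for the paper's random k-d forest.)"""
    pairs = set()
    for i, di in enumerate(descriptors):
        others = sorted(
            (math.dist(di, dj), j)
            for j, dj in enumerate(descriptors) if j != i
        )
        for _, j in others[:k]:
            pairs.add((min(i, j), max(i, j)))  # store each pair once
    return sorted(pairs)

# Toy global descriptors for four images; images 0/1 and 2/3 look alike.
print(knn_candidate_pairs([(0.0, 0.0), (0.1, 0.0),
                           (5.0, 5.0), (5.1, 5.0)], k=1))
# -> [(0, 1), (2, 3)]
```

Only the shortlisted pairs would then go through SIFT matching and relative orientation, which is where the run-time saving comes from.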
Copies (3):
Barcode | Call number | Support | Location | Section | Availability
081-2019011 | RAB | Journal | Centre de documentation | En réserve 3L | Available
081-2019013 | DEP-EXM | Journal | MATIS | Dépôt en unité | Not for loan
081-2019012 | DEP-EAF | Journal | Nancy | Dépôt en unité | Not for loan

Vision-based localization with discriminative features from heterogeneous visual data / Nathan Piasco (2019)
Title: Vision-based localization with discriminative features from heterogeneous visual data
Document type: Thesis/HDR
Authors: Nathan Piasco, Author; Valérie Gouet-Brunet, Thesis supervisor; Cédric Demonceaux, Thesis supervisor
Publisher: Dijon: Université Bourgogne Franche-Comté UBFC
Publication year: 2019
Extent: 174 p.
Format: 21 x 30 cm
General note: Bibliography. Thesis presented to doctoral school no. 37 of the Université de Dijon for the doctorate in instrumentation and image informatics
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] ICP algorithm
[IGN descriptor terms] depth map
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] heterogeneous data
[IGN descriptor terms] pose estimation
[IGN descriptor terms] modulation transfer function
[IGN descriptor terms] image-based localization
[IGN descriptor terms] vision-based localization
[IGN descriptor terms] content-based image retrieval
[IGN descriptor terms] monocular vision
Abstract: (author) Visual-based Localization (VBL) consists in retrieving the location of a visual image within a known space. VBL is involved in several present-day practical applications, such as indoor and outdoor navigation, 3D reconstruction, etc. The main challenge in VBL comes from the fact that the visual input to localize may have been taken at a different time than the reference database. Visual changes may occur in the observed environment during this period of time, especially for outdoor localization. Recent approaches use complementary information, such as geometric or semantic information, in order to address these visually challenging localization scenarios. However, geometric or semantic information is not always available and can be costly to obtain. In order to do without any extra modality, we propose to use a modality transfer model capable of reproducing the underlying scene geometry from a monocular image. At first, we cast the localization problem as a Content-based Image Retrieval (CBIR) problem and we train a CNN image descriptor with radiometry-to-dense-geometry transfer as a side training objective. Once trained, our system can be used on monocular images only to construct an expressive descriptor for localization in challenging conditions. Secondly, we introduce a new relocalization pipeline to improve the localization given by our initial localization step. In the same manner as our global image descriptor, the relocalization is aided by the geometric information learned during an offline stage. The extra geometric information is used to constrain the final pose estimation of the query. Through comprehensive experiments, we demonstrate the effectiveness of our proposals for both indoor and outdoor localization.
Contents:
1. Introduction
1.1 Long-term mapping
1.2 pLaTINUM project
1.3 Visual-based Localization with heterogeneous data
2. Review of Visual-Based Localization methods
2.1 Data Representation
2.2 VBL methods
2.3 Data with Dissimilar Appearances
2.4 Data heterogeneity
2.5 Discussion
2.6 Conclusion
3. Side modality learning for localization
3.1 Related work
3.2 Model architectures and training
3.3 Implementation details
3.4 Long-term localization
3.5 Night to day localization scenarios
3.6 Laser reflectance as side information
3.7 Conclusion
4. Pose refinement with learned depth map
4.1 Method
4.2 Relative pose estimation
4.3 Preliminary results
4.4 Indoor localization
4.5 Unsupervised training and outdoor localization
4.6 Discussion
4.7 Conclusion
5. Conclusion
5.1 Summary of the thesis
5.2 Scientific contributions
5.3 Future Research
A Network architectures
A.1 Global image descriptor network
A.2 Multitask pose refinement network
Record number: 26415
Author affiliation: LaSTIG MATIS (2012-2019)
Theme: IMAGERY
Nature: French thesis
Thesis note: Doctoral thesis: Instrumentation and image informatics: Dijon: 2019
nature-HAL: Thèse
DOI: none
Online publication date: 13/11/2020
Online: https://hal.archives-ouvertes.fr/tel-03003651/document
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96302

Depth-based hand pose estimation: Methods, data, and challenges / James Steven Supančič in International journal of computer vision, vol 126 n° 11 (November 2018)
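The first stage of the thesis abstract casts localization as content-based image retrieval: describe the query image with a global descriptor, find the closest reference image, and inherit its pose. A minimal sketch of that retrieval step, under invented inputs (in the thesis the descriptor is produced by a CNN trained with a geometry side task; here it is just a toy vector):

```python
import math

def retrieve_pose(query_desc, reference_db):
    """CBIR-style localization: return the pose attached to the
    reference image whose global descriptor has the highest cosine
    similarity with the query descriptor."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    _, best_pose = max(reference_db,
                       key=lambda entry: cosine(query_desc, entry[0]))
    return best_pose

# Toy database of (descriptor, pose) pairs.
db = [((1.0, 0.0, 0.0), "pose_of_street_A"),
      ((0.0, 1.0, 0.0), "pose_of_street_B")]
print(retrieve_pose((0.9, 0.1, 0.0), db))  # -> pose_of_street_A
```

The thesis then refines this coarse retrieved pose with learned depth maps (chapter 4); only the retrieval stage is sketched here.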
[article]
Title: Depth-based hand pose estimation: Methods, data, and challenges
Document type: Article/Communication
Authors: James Steven Supančič, Author; Grégory Rogez, Author; Yi Yang, Author; Jamie Shotton, Author; Deva Ramanan, Author
Publication year: 2018
Pages: pp 1180 - 1198
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN descriptor terms] comparative analysis
[IGN descriptor terms] pose estimation
[IGN descriptor terms] state of the art
[IGN descriptor terms] RGB image
[IGN descriptor terms] nearest neighbour (algorithm)
Abstract: (author) Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and have released software and evaluation code. We summarize important conclusions here: (1) Coarse pose estimation appears viable for scenes with isolated hands. However, high-precision pose estimation, required for immersive virtual reality, and pose estimation in cluttered scenes (where hands may be interacting with nearby objects and surfaces) remain a challenge. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define consistent evaluation criteria, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress.
Record number: A2018-596
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1081-7
Online publication date: 12/04/2018
Online: https://doi.org/10.1007/s11263-018-1081-7
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92523
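The nearest-neighbor baseline the abstract credits with outperforming most systems amounts to retrieving the pose annotation of the training example closest to the query. A toy sketch with invented feature vectors and pose labels (the real baseline operates on depth images):

```python
import math

def nn_baseline_pose(query_feat, train_feats, train_poses):
    """1-NN baseline: return the pose annotation of the training
    example whose (depth-derived) feature vector is closest to the
    query in Euclidean distance."""
    best = min(range(len(train_feats)),
               key=lambda i: math.dist(query_feat, train_feats[i]))
    return train_poses[best]

# Two toy training examples with made-up 2-D features.
train_feats = [(0.0, 0.0), (1.0, 1.0)]
train_poses = ["open_hand", "fist"]
print(nn_baseline_pose((0.9, 1.1), train_feats, train_poses))  # -> fist
```

That such a simple retrieval scheme beats trained systems is precisely the paper's point about limited generalization beyond the training set.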
in International journal of computer vision > vol 126 n° 11 (November 2018) . - pp 1180 - 1198 [article]

3D urban geovisualization: in situ augmented and mixed reality experiments / Alexandre Devaux in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-4 ([19/09/2018])
[article]
Title: 3D urban geovisualization: in situ augmented and mixed reality experiments
Document type: Article/Communication
Authors: Alexandre Devaux, Author; Charlotte Hoarau, Author; Mathieu Brédif, Author; Sidonie Christophe, Author
Publication year: 2018
Projects: no project
Conference: ISPRS 2018, TC IV Mid-term Symposium 3D Spatial Information Science - The Engine of Change, 01/10/2018 - 05/10/2018, Delft, Netherlands, open access Annals
Pages: pp 41 - 48
General note: bibliography
Language: English (eng)
Descriptors: [IGN descriptor terms] image matching
[IGN descriptor terms] building
[IGN descriptor terms] processing pipeline
[IGN descriptor terms] pose estimation
[IGN descriptor terms] scaling
[IGN descriptor terms] 3D urban space model
[IGN descriptor terms] augmented reality
[IGN descriptor terms] mixed reality
[IGN descriptor terms] realistic rendering
[IGN subject headings] Geovisualization
Abstract: (author) In this paper, we assume that augmented reality (AR) and mixed reality (MR) are relevant contexts for 3D urban geovisualization, especially in order to support the design of urban spaces. We propose to design an in situ MR application that could be helpful for urban designers, providing tools to interactively remove or replace buildings in situ. This use case requires advances over existing geovisualization methods. We highlight the need to adapt and extend existing 3D geovisualization pipelines in order to meet the specific requirements of AR/MR applications, in particular for data rendering and interaction. To reach this goal, we focus on and implement four elementary in situ and ex situ AR/MR experiments: each experiment helps to isolate and specify a subproblem, i.e. scale modification, pose estimation, matching between scene and urban-project realism, and the mixing of real and virtual elements through portals, while proposing occlusion handling, rendering and interaction techniques to solve them.
Record number: A2018-531
Author affiliation: LaSTIG COGIT (2012-2019)
Theme: GEOMATICS/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-IV-4-41-2018
Online publication date: 19/09/2018
Online: http://dx.doi.org/10.5194/isprs-annals-IV-4-41-2018
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91482
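Of the subproblems listed in the abstract, occlusion handling has the most self-contained core: a virtual fragment is composited only where it lies in front of the real surface seen by the camera. A per-pixel sketch under assumed inputs (depths in metres, None meaning no virtual fragment at that pixel); this illustrates the general depth-test idea, not the authors' implementation:

```python
def composite_mixed_reality(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion handling for mixed reality: keep the real
    pixel unless a virtual fragment exists there and is closer to the
    camera than the real surface (smaller depth = closer)."""
    out = []
    for rc, rd, vc, vd in zip(real_rgb, real_depth, virt_rgb, virt_depth):
        out.append(vc if vd is not None and vd < rd else rc)
    return out

# Virtual tree in front of the facade at pixel 0, occluded at pixel 1.
print(composite_mixed_reality(["wall", "wall"], [2.0, 1.0],
                              ["tree", "tree"], [1.5, 1.8]))
# -> ['tree', 'wall']
```

Real and virtual depths here would come from the estimated camera pose and the 3D urban model, which is why pose estimation and occlusion handling are coupled in the pipeline.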
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > IV-4 [19/09/2018] . - pp 41 - 48 [article]
Digital documents (open access): 3D urban geovisualization - publisher PDF (Adobe Acrobat PDF)

Image-based synthesis for deep 3D human pose estimation / Grégory Rogez in International journal of computer vision, vol 126 n° 9 (September 2018)
[article]
Title: Image-based synthesis for deep 3D human pose estimation
Document type: Article/Communication
Authors: Grégory Rogez, Author; Cordelia Schmid, Author
Publication year: 2018
Pages: pp 993 - 1008
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN descriptor terms] machine learning
[IGN descriptor terms] 3D localized data
[IGN descriptor terms] pose estimation
[IGN descriptor terms] convolutional neural network
[IGN descriptor terms] image synthesis
Abstract: (author) This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D motion capture data. Given a candidate 3D pose, our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms most of the published works in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for real-world images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images. Compared to data generated from more classical rendering engines, our synthetic images do not require any domain adaptation or fine-tuning stage.
Record number: A2018-418
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1071-9
Online publication date: 19/03/2018
Online: https://doi.org/10.1007/s11263-018-1071-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90901
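The abstract's final step, treating 3D pose estimation as K-way classification over clustered training poses, reduces at inference time to picking the highest-scoring class and returning its representative pose. A toy sketch (the class scores and centroid poses are invented; in the paper the scores come from the trained CNN):

```python
def classify_pose(class_scores, class_centroid_poses):
    """K-way classification view of 3D pose estimation: the network
    emits one score per pose class; the estimate is the centroid
    (representative) pose of the highest-scoring class."""
    k = max(range(len(class_scores)), key=lambda i: class_scores[i])
    return class_centroid_poses[k]

# Three toy pose classes with made-up network scores.
centroids = ["standing", "sitting", "walking"]
print(classify_pose([0.2, 0.1, 0.7], centroids))  # -> walking
```

Discretizing the pose space this way is only reasonable with a very large number of classes, which is why the synthetic-data engine that supplies the large training set is the paper's main contribution.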
in International journal of computer vision > vol 126 n° 9 (September 2018) . - pp 993 - 1008 [article]

Landmark based localization in urban environment / Xiaozhi Qu in ISPRS Journal of photogrammetry and remote sensing, vol 140 (June 2018)
SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data / Miroslava Slavcheva in International journal of computer vision, vol 126 n° 6 (June 2018)
Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras / David Joseph Tan in International journal of computer vision, vol 126 n° 2-4 (April 2018)
A survey on visual-based localization: on the benefit of heterogeneous data / Nathan Piasco in Pattern recognition, vol 74 (February 2018)
Adéquation algorithme architecture pour la localisation basée image sur système embarqué / David Vandergucht (2018)
Géo-référencement précis d'acquisition photogrammétrique de « longues » scènes d'intérieur / Truong Giang Nguyen (2018)
Localisation par l'image en milieu urbain : application à la réalité augmentée / Antoine Fond (2018)
Machine learning and pose estimation for autonomous robot grasping with collaborative robots / Victor Talbot (2018)
Automatic registration of images to untextured geometry using average shading gradients / Tobias Plötz in International journal of computer vision, vol 125 n° 1-3 (December 2017)