Authority detail

Conference name: 3DV 2021, International Conference on 3D Vision
Conference start: 01/12/2021
Conference end: 03/12/2021
Conference city: London (online)
Conference country: United Kingdom
Conference proceedings site: Proceedings IEEE
Available documents (2)
Title: Efficiently distributed watertight surface reconstruction
Document type: Article/Conference paper
Authors: Laurent Caraffa; Yanis Marchand; Mathieu Brédif; Bruno Vallet
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2021
Projects: 1-No project
Conference: 3DV 2021, International Conference on 3D Vision, 01/12/2021 to 03/12/2021, London (online), United Kingdom, Proceedings IEEE
Extent: pp. 1432-1441
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] laser scanning
[IGN terms] graph-cut algorithm
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] object reconstruction
[IGN terms] scene
[IGN terms] point cloud
[IGN terms] Spark
[IGN terms] Delaunay triangulation
Abstract: (author) We present an out-of-core, distributed surface reconstruction algorithm that scales efficiently to arbitrarily large point clouds (with optical centres) and produces a watertight 3D triangle mesh representing the surface of the underlying scene. Surface reconstruction from a point cloud is a difficult problem, and existing state-of-the-art approaches usually rely on complex pipelines built on global algorithms (namely Delaunay triangulation and graph-cut optimisation). For one such approach, we investigate distributing every step (in particular the Delaunay triangulation and the graph-cut optimisation) in order to propose a fully scalable method. We show that the problem can be tiled and distributed across a cloud or a cluster of PCs by paying careful attention to the interactions between tiles and by using the Spark computing framework. We confirm the efficiency of this approach with an in-depth quantitative evaluation and the successful reconstruction of a surface from a very large data set combining more than 350 million aerial and terrestrial LiDAR points.
Record number: C2021-037
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to HAL
Topic: IMAGERY/COMPUTER SCIENCE
Nature: Conference paper
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/3DV53792.2021.00150
Online: https://doi.org/10.1109/3DV53792.2021.00150
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99167
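The core idea the abstract describes, partitioning the point cloud into tiles whose boundary regions overlap so that neighbouring tiles agree where they meet, can be illustrated with a minimal sketch. Everything here is hypothetical: `tile_points`, `tile_size` and `margin` are invented names and parameters, points near a tile border are simply duplicated into the neighbouring tile, and a plain dictionary stands in for the Spark-distributed partitions; this is not the authors' implementation.

```python
def tile_points(points, tile_size, margin):
    """Assign each (x, y, z) point to every grid tile whose margin-expanded
    footprint contains it, so adjacent tiles share their boundary points."""
    tiles = {}
    for p in points:
        x, y = p[0], p[1]
        # Tile indices covering [x - margin, x + margin] x [y - margin, y + margin].
        ix_lo = int((x - margin) // tile_size)
        ix_hi = int((x + margin) // tile_size)
        iy_lo = int((y - margin) // tile_size)
        iy_hi = int((y + margin) // tile_size)
        for ix in range(ix_lo, ix_hi + 1):
            for iy in range(iy_lo, iy_hi + 1):
                tiles.setdefault((ix, iy), []).append(p)
    return tiles

pts = [(0.5, 0.5, 0.0), (0.95, 0.5, 0.0), (1.5, 1.5, 0.0)]
tiles = tile_points(pts, tile_size=1.0, margin=0.1)
# The point at x = 0.95 lies within the margin of the border between
# tiles (0, 0) and (1, 0), so both tiles receive a copy of it.
```

In a distributed setting each tile would then be triangulated and optimised independently, with the duplicated margin points carrying the inter-tile interactions the abstract mentions.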
Title: Representing shape collections with alignment-aware linear models
Document type: Article/Conference paper
Authors: Romain Loiseau; Tom Monnier; Loïc Landrieu; Mathieu Aubry
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2021
Other publisher: Ithaca [New York - United States]: ArXiv - Cornell University
Projects: READY3D / Landrieu, Loïc
Conference: 3DV 2021, International Conference on 3D Vision, 01/12/2021 to 03/12/2021, London (online), United Kingdom, Proceedings IEEE
Extent: pp. 1044-1053
Format: 21 x 30 cm
General note: bibliography
This work was supported in part by ANR project READY3D ANR-19-CE23-0007 and HPC resources from GENCI-IDRIS (Grant 2020-AD011012096).
Languages: English (eng)
Descriptors: [IGN terms] data analysis
[IGN terms] deep learning
[IGN terms] 3D geolocated data
[IGN terms] linear model
[IGN terms] deep neural network
[IGN terms] segmentation
[IGN terms] point cloud
[IGN terms] affine transformation
Abstract: (author) In this paper, we revisit the classical representation of 3D point clouds as linear shape models. Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations of low-dimensional linear shape models. Each linear model is characterized by a shape prototype, a low-dimensional shape basis, and two neural networks. The networks take a point cloud as input and predict the coordinates of a shape in the linear basis and the affine transformation that best approximate the input. Both the linear models and the neural networks are learned end-to-end using a single reconstruction loss. The main advantage of our approach is that, in contrast to many recent deep approaches that learn complex feature-based shape representations, our model is explicit and every operation occurs in 3D space. As a result, our linear shape models can easily be visualized and annotated, and failure cases can be understood visually. While our main goal is to introduce a compact and interpretable representation of shape collections, we show that it leads to state-of-the-art results for few-shot segmentation. Code and data are available at: https://romainloiseau.github.io/deep-linear-shapes
Record number: C2021-036
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to ArXiv
Topic: COMPUTER SCIENCE
Nature: Conference paper
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/3DV53792.2021.00112
Online publication date: 03/12/2021
Online: https://doi.org/10.1109/3DV53792.2021.00112
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98385
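The representation the abstract describes, a shape reconstructed as an affine transformation of a prototype plus a low-dimensional basis combination, can be sketched numerically. All names and dimensions below are illustrative, the values are random stand-ins, and the two neural networks that would predict `coeffs`, `A` and `t` from an input point cloud are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 128, 4  # points per shape and basis size (illustrative choices)

prototype = rng.normal(size=(N, 3))   # shape prototype (mean shape)
basis = rng.normal(size=(K, N, 3))    # K low-dimensional basis deformations
coeffs = rng.normal(size=K)           # code one network would predict
A = np.eye(3) + 0.05 * rng.normal(size=(3, 3))  # linear part of the affine map
t = rng.normal(size=3)                          # translation part

# Shape expressed in the linear basis, then mapped by the affine transform:
deformed = prototype + np.tensordot(coeffs, basis, axes=1)  # (N, 3)
reconstruction = deformed @ A.T + t                         # (N, 3)
```

Because every quantity lives directly in 3D space, `prototype`, `basis` and any reconstruction can be plotted and inspected, which is the interpretability advantage the abstract emphasises over feature-based representations.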