Author details
Author: Tom Monnier
Documents available written by this author (1)
Title: Representing shape collections with alignment-aware linear models
Document type: Article/Communication
Authors: Romain Loiseau, Author; Tom Monnier, Author; Loïc Landrieu, Author; Mathieu Aubry, Author
Publisher: New York: Institute of Electrical and Electronics Engineers (IEEE)
Publication year: 2021
Other publisher: Ithaca [New York, United States]: ArXiv - Cornell University
Projects: READY3D / Landrieu, Loïc
Conference: 3DV 2021, International Conference on 3D Vision, 01/12/2021-03/12/2021, London (online), United Kingdom, IEEE Proceedings
Extent: pp. 1044-1053
Format: 21 x 30 cm
General note: bibliography
This work was supported in part by ANR project READY3D ANR-19-CE23-0007 and HPC resources from GENCI-IDRIS (Grant 2020-AD011012096).
Languages: English (eng)
Descriptors: [Termes IGN] data analysis
[Termes IGN] deep learning
[Termes IGN] 3D geolocated data
[Termes IGN] linear model
[Termes IGN] deep neural network
[Termes IGN] segmentation
[Termes IGN] point cloud
[Termes IGN] affine transformation
Abstract: (author) In this paper, we revisit the classical representation of 3D point clouds as linear shape models. Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations of low-dimensional linear shape models. Each linear model is characterized by a shape prototype, a low-dimensional shape basis and two neural networks. The networks take a point cloud as input and predict the coordinates of the shape in the linear basis and the affine transformation which best approximate the input. Both the linear models and the neural networks are learned end-to-end using a single reconstruction loss. The main advantage of our approach is that, in contrast to many recent deep approaches which learn feature-based complex shape representations, our model is explicit and every operation occurs in 3D space. As a result, our linear shape models can be easily visualized and annotated, and failure cases can be visually understood. While our main goal is to introduce a compact and interpretable representation of shape collections, we show it leads to state-of-the-art results for few-shot segmentation. Code and data are available at: https://romainloiseau.github.io/deep-linear-shapes
Record number: C2021-036
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to ArXiv
Theme: COMPUTER SCIENCE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/3DV53792.2021.00112
Online publication date: 03/12/2021
Online: https://doi.org/10.1109/3DV53792.2021.00112
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98385
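The abstract above names the ingredients of each linear model explicitly: a learnable shape prototype, a low-dimensional shape basis, and two networks that predict the basis coordinates and an affine transformation, all trained with a single reconstruction loss. The PyTorch sketch below illustrates those ingredients for a single linear model under that reading; it is not the authors' released code (see the project URL above), and all class names, network sizes, and the Chamfer-style loss are illustrative assumptions.

```python
# Minimal sketch of an alignment-aware linear shape model (illustrative only).
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """Tiny PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim), nn.ReLU())

    def forward(self, x):                        # x: (B, N, 3)
        return self.mlp(x).max(dim=1).values     # (B, feat_dim)


class DeepLinearShapeModel(nn.Module):
    """One linear shape model: prototype + basis + two prediction heads."""
    def __init__(self, n_points=1024, basis_dim=8, feat_dim=128):
        super().__init__()
        # Learnable prototype (K x 3) and low-dimensional shape basis (D x K x 3).
        self.prototype = nn.Parameter(torch.randn(n_points, 3) * 0.1)
        self.basis = nn.Parameter(torch.randn(basis_dim, n_points, 3) * 0.01)
        self.encoder = PointEncoder(feat_dim)
        # Head 1: coordinates of the shape in the linear basis.
        self.coord_head = nn.Linear(feat_dim, basis_dim)
        # Head 2: affine transformation (3x3 matrix + translation).
        self.affine_head = nn.Linear(feat_dim, 12)

    def forward(self, x):                        # x: (B, N, 3)
        feat = self.encoder(x)
        alpha = self.coord_head(feat)            # (B, D) basis coordinates
        affine = self.affine_head(feat)
        A = affine[:, :9].view(-1, 3, 3) + torch.eye(3, device=x.device)
        t = affine[:, 9:].unsqueeze(1)           # (B, 1, 3) translation
        # Shape in the linear model: prototype + weighted sum of basis vectors.
        shape = self.prototype + torch.einsum("bd,dkc->bkc", alpha, self.basis)
        # Every operation stays in 3D space: apply the predicted affine transform.
        return shape @ A.transpose(1, 2) + t     # (B, K, 3)


def chamfer_loss(pred, target):
    """Symmetric Chamfer distance between two batches of point clouds."""
    d = torch.cdist(pred, target)                # (B, K, N) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


if __name__ == "__main__":
    model = DeepLinearShapeModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    clouds = torch.randn(4, 2048, 3)             # stand-in for a shape collection
    for _ in range(10):                          # single reconstruction loss, end to end
        opt.zero_grad()
        loss = chamfer_loss(model(clouds), clouds)
        loss.backward()
        opt.step()
```

In this reading, interpretability comes from the fact that the prototype, the basis directions, and the reconstructed shapes are all plain 3D point sets that can be visualized directly; the paper additionally handles a collection of such models, which this single-model sketch omits.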