Author details
Author: Nadia Robertini
Available documents written by this author (1)
Multi-view performance capture of surface details / Nadia Robertini in International journal of computer vision, vol. 124, no. 1 (August 2017)
[article]
Title: Multi-view performance capture of surface details
Document type: Article/Communication
Authors: Nadia Robertini, Author; Dan Casas, Author; Edilson De Aguiar, Author; Christian Theobalt, Author
Publication year: 2017
Article pages: pp. 96–113
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] Gaussian curve
[IGN terms] intensity scale
[IGN terms] digital image
[IGN terms] video image
[IGN terms] triangular mesh
[IGN terms] image deformation model
[IGN terms] level of detail
[IGN terms] node
[IGN terms] optimization (mathematics)
[IGN terms] object reconstruction
Abstract: (author) This paper presents a novel approach to recover true fine surface detail of deforming meshes reconstructed from multi-view video. Template-based methods for performance capture usually produce a coarse-to-medium scale detail 4D surface reconstruction which does not contain the real high-frequency geometric detail present in the original video footage. Fine scale deformation is often incorporated in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose an alternative solution to this second stage by formulating dense dynamic surface reconstruction as a global optimization problem of the densely deforming surface. Our main contribution is an implicit representation of a deformable mesh that uses a set of Gaussian functions on the surface to represent the initial coarse mesh, and a set of Gaussians for the images to represent the original captured multi-view images. We effectively find the fine scale deformations for all mesh vertices, which maximize photo-temporal-consistency, by densely optimizing our model-to-image consistency energy on all vertex positions. Our formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Furthermore, it does not require error-prone correspondence finding or discrete sampling of surface displacement values. We demonstrate our approach on a variety of datasets of human subjects wearing loose clothing and performing different motions. We qualitatively and quantitatively demonstrate that our technique successfully reproduces finer detail than the input baseline geometry.
Record number: A2017-401
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-016-0979-1
Online: https://doi.org/10.1007/s11263-016-0979-1
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=85943
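The abstract above hinges on one idea: representing both the mesh and the images as sums of Gaussians makes the model-to-image consistency energy a smooth closed-form function with analytic derivatives, so vertex positions can be refined by plain gradient ascent. The following is a minimal 2D toy sketch of that idea only, not the authors' implementation; all function names, the common width `sigma`, the step size, and the use of isotropic Gaussians are assumptions made for illustration:

```python
import math

def overlap_energy(model, target, sigma):
    # Closed-form consistency energy: the integral of the product of two
    # isotropic Gaussians of width sigma is itself a Gaussian in the
    # distance between their centres, so the total overlap is a smooth
    # sum over all centre pairs (no correspondence finding needed).
    e = 0.0
    for mx, my in model:
        for tx, ty in target:
            d2 = (mx - tx) ** 2 + (my - ty) ** 2
            e += math.exp(-d2 / (4.0 * sigma ** 2))
    return e

def energy_gradient(model, target, sigma):
    # Analytic derivative of overlap_energy w.r.t. each model centre:
    # d/dx exp(-d2 / (4 sigma^2)) pulls the centre toward every target,
    # weighted by the current overlap.
    grads = []
    for mx, my in model:
        gx = gy = 0.0
        for tx, ty in target:
            d2 = (mx - tx) ** 2 + (my - ty) ** 2
            w = math.exp(-d2 / (4.0 * sigma ** 2)) / (2.0 * sigma ** 2)
            gx += w * (tx - mx)
            gy += w * (ty - my)
        grads.append((gx, gy))
    return grads

def refine(model, target, sigma=0.5, lr=0.1, steps=50):
    # Gradient ascent on the Gaussian centres (the toy analogue of the
    # dense optimization over all vertex positions).
    model = list(model)
    for _ in range(steps):
        g = energy_gradient(model, target, sigma)
        model = [(x + lr * gx, y + lr * gy)
                 for (x, y), (gx, gy) in zip(model, g)]
    return model
```

Running `refine` on a coarse set of centres strictly increases the overlap energy, which is the toy counterpart of the paper's claim that densely optimizing the smooth energy improves photo-consistency.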