Author details
Author: Davide Scaramuzza
Documents available written by this author (1)
EMVS: Event-based Multi-View Stereo: 3D reconstruction with an event camera in real-time / Henri Rebecq in International journal of computer vision, vol 126 no. 12 (December 2018)
[article]
Title: EMVS: Event-based Multi-View Stereo: 3D reconstruction with an event camera in real-time
Document type: Article/Communication
Authors: Henri Rebecq, Author; Guillermo Gallego, Author; Elias Mueggler, Author; Davide Scaramuzza, Author
Year of publication: 2018
Article pages: pp 1394 - 1414
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] event camera
[IGN terms] depth map
[IGN terms] luminance
[IGN terms] 3D reconstruction
[IGN terms] real time
Abstract: (Author) Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.
Record number: A2018-597
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-017-1050-6
Online publication date: 07/11/2017
Online: https://doi.org/10.1007/s11263-017-1050-6
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92524
in International journal of computer vision > vol 126 no. 12 (December 2018). - pp 1394 - 1414 [article]
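
Illustration. The abstract states that the method back-projects the events of a moving event camera with known trajectory and recovers semi-dense depth where many event rays agree, and that the core algorithm fits in a few lines of code. The Python sketch below illustrates that ray-voting idea under simplifying assumptions (pinhole intrinsics K, camera-to-world poses indexed by event timestamp, the first event's pose as reference viewpoint, a plane-sweep depth discretization, and a vote threshold); the function name, parameters, and threshold are hypothetical, and this is not the authors' implementation.

# Minimal sketch: accumulate event viewing rays in a depth volume anchored at a
# reference viewpoint, then keep depths where enough rays intersect (semi-dense).
# Assumptions (not from the paper): poses maps event timestamp -> (R, t) with
# X_world = R @ X_cam + t; events is a list of (t, x, y); min_votes is arbitrary.
import numpy as np

def emvs_sketch(events, poses, K, depth_planes, img_size=(180, 240), min_votes=5):
    H, W = img_size
    score = np.zeros((len(depth_planes), H, W), dtype=np.float32)  # ray-density volume
    K_inv = np.linalg.inv(K)
    R_ref, t_ref = poses[events[0][0]]            # reference viewpoint = first event's pose
    for t, x, y in events:
        R, tvec = poses[t]
        d_world = R @ (K_inv @ np.array([x, y, 1.0]))   # event viewing ray in world frame
        o_ref = R_ref.T @ (tvec - t_ref)                # ray origin in reference frame
        d_ref = R_ref.T @ d_world                       # ray direction in reference frame
        if abs(d_ref[2]) < 1e-9:
            continue
        for d_idx, z in enumerate(depth_planes):
            s = (z - o_ref[2]) / d_ref[2]               # intersect ray with plane Z_ref = z
            if s <= 0:
                continue
            p = o_ref + s * d_ref
            u, v, _ = K @ (p / p[2])                    # project intersection into reference image
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < H and 0 <= ui < W:
                score[d_idx, vi, ui] += 1.0             # vote: this event ray crosses this cell
    best = score.argmax(axis=0)                         # depth plane with most ray crossings per pixel
    support = score.max(axis=0)
    depths = np.asarray(depth_planes)[best]
    return np.where(support >= min_votes, depths, np.nan)  # semi-dense depth map (NaN = unsupported)

Because only pixels crossed by many event rays (i.e. scene edges seen from several viewpoints) accumulate votes, the output is naturally semi-dense, which matches the property highlighted in the abstract.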