Author detail
Author: Yu-Wing Tai
Documents available written by this author (1)
Refining geometry from depth sensors using IR shading images / Gyeongmin Choe in International journal of computer vision, vol 122 no. 1 (March 2017)
[article]
Title: Refining geometry from depth sensors using IR shading images
Document type: Article/Communication
Authors: Gyeongmin Choe, Author; Jaesik Park, Author; Yu-Wing Tai, Author; In So Kweon, Author
Publication year: 2017
Article pages: pp 1-16
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] albedo
[IGN terms] infrared band
[IGN terms] image decomposition
[IGN terms] affine geometry
[IGN terms] optical image
[IGN terms] Kinect
[IGN terms] triangular mesh
[IGN terms] shadow
Abstract: (author) We propose a method to refine the geometry of 3D meshes from a consumer-level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit of using an IR camera instead of an RGB camera is that the IR images captured are narrow-band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses of the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in the Kinect I is based on a structured-light technique and that of the Kinect II is based on time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements.
Notice number: A2017-174
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-016-0937-y
Online: https://doi.org/10.1007/s11263-016-0937-y
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=85921
in International journal of computer vision > vol 122 no. 1 (March 2017) . - pp 1-16 [article]
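Note: the abstract describes the captured IR intensity as a function of surface normals, albedo, lighting direction, and the distance between the light source and surface points. As an illustration only, using notation not taken from the paper (whose exact formulation is not reproduced in this notice), a near-light-source Lambertian shading model of that general form can be sketched as
$$I(\mathbf{p}) \;\approx\; \rho(\mathbf{p})\,\frac{\mathbf{n}(\mathbf{p})\cdot\mathbf{l}(\mathbf{p})}{d(\mathbf{p})^{2}}, \qquad \mathbf{l}(\mathbf{p})=\frac{\mathbf{s}-\mathbf{p}}{\lVert\mathbf{s}-\mathbf{p}\rVert}, \quad d(\mathbf{p})=\lVert\mathbf{s}-\mathbf{p}\rVert,$$
where $\rho(\mathbf{p})$ is the albedo, $\mathbf{n}(\mathbf{p})$ the surface normal at point $\mathbf{p}$, $\mathbf{s}$ an assumed IR projector position, $\mathbf{l}(\mathbf{p})$ the lighting direction, and $d(\mathbf{p})$ the light-to-surface distance. A model of this kind makes the ambiguity mentioned in the abstract visible: both the normals and the distances affect the observed intensity, which is why the authors rely on the initial Kinect fusion mesh and multi-view information to disambiguate them.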