Author details
Author: Chenfanfu Jiang
Documents available by this author (1)
Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang, in International Journal of Computer Vision, vol. 126, no. 9 (September 2018)
Title: Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars
Document type: Article/Communication
Authors: Chenfanfu Jiang; Shuyao Qi; Yixin Zhu; et al.
Year of publication: 2018
Pagination: pp. 920-941
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] optical image processing
[IGN terms] machine learning
[IGN terms] pipeline architecture (processor)
[IGN terms] image understanding
[IGN terms] RGB image
[IGN terms] realistic rendering
[IGN terms] indoor scene
[IGN terms] semantic segmentation
[IGN terms] image synthesis
Abstract: (Author) We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms. In particular, we devise a learning-based pipeline of algorithms capable of automatically generating and rendering a potentially infinite variety of indoor scenes by using a stochastic grammar, represented as an attributed Spatial And-Or Graph, in conjunction with state-of-the-art physics-based rendering. Our pipeline is capable of synthesizing scene layouts with high diversity, and it is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. It renders photorealistic RGB images of the generated scenes while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity, and material information (detailed to object parts), as well as environments (e.g., illumination and camera viewpoints). We demonstrate the value of our synthesized dataset by improving performance in certain machine-learning-based scene understanding tasks (depth and surface normal prediction, semantic segmentation, reconstruction, etc.) and by providing benchmarks for and diagnostics of trained models by modifying object attributes and scene properties in a controllable manner.
Record number: A2018-416
Author affiliation: non-IGN
Subject area: Imagery/Computer science
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-018-1103-5
Online publication date: 30/06/2018
Online: https://doi.org/10.1007/s11263-018-1103-5
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90899
In: International Journal of Computer Vision, vol. 126, no. 9 (September 2018), pp. 920-941.
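The abstract describes sampling indoor scene layouts from a stochastic grammar represented as an attributed Spatial And-Or Graph (S-AOG), where And-nodes decompose a scene into its constituent parts and Or-nodes select one among alternative configurations. The minimal Python sketch below illustrates that sampling idea only; all node names, branching probabilities, and attribute ranges are invented placeholders rather than values from the paper, whose actual S-AOG also encodes learned spatial relations and is coupled with physics-based rendering.

```python
import random

# Minimal illustrative sketch of sampling a scene layout from a
# stochastic grammar expressed as a Spatial And-Or Graph (S-AOG).
# All node names, branching probabilities, and attribute ranges are
# hypothetical placeholders, not values from Jiang et al.

AND, OR, TERM = "AND", "OR", "TERM"

GRAMMAR = {
    # And-node: a scene decomposes into all of its children.
    "scene":     (AND, ["room", "furniture"]),
    # Or-nodes: pick exactly one child, weighted by branching probability.
    "room":      (OR, [("bedroom", 0.6), ("office", 0.4)]),
    "furniture": (OR, [("bed", 0.7), ("desk", 0.3)]),
    # Terminal nodes are object instances with sampled spatial attributes.
    "bedroom":   (TERM, None),
    "office":    (TERM, None),
    "bed":       (TERM, None),
    "desk":      (TERM, None),
}

def sample_attributes(rng):
    # Placeholder spatial attributes (position and orientation in a room).
    return {"x": rng.uniform(0.0, 5.0),
            "y": rng.uniform(0.0, 5.0),
            "theta_deg": rng.uniform(0.0, 360.0)}

def sample(node, rng):
    # Recursively expand a node into a concrete parse-tree fragment.
    kind, children = GRAMMAR[node]
    if kind == TERM:
        return {"label": node, "attrs": sample_attributes(rng)}
    if kind == AND:
        return {"label": node, "parts": [sample(c, rng) for c in children]}
    names, weights = zip(*children)  # OR: weighted choice among alternatives
    choice = rng.choices(names, weights=weights, k=1)[0]
    return {"label": node, "parts": [sample(choice, rng)]}

if __name__ == "__main__":
    rng = random.Random(0)
    print(sample("scene", rng))  # one sampled scene parse tree
```

Repeated calls with different seeds yield distinct parse trees, which is the mechanism by which such a grammar can generate a potentially infinite variety of scene layouts before rendering and ground-truth synthesis.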