Author detail
Author: Hao Fang
Available documents written by this author (2)
Structure-aware indoor scene reconstruction via two levels of abstraction / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 178 (August 2021)
[article]
Title: Structure-aware indoor scene reconstruction via two levels of abstraction
Document type: Article/Communication
Authors: Hao Fang, Author; Cihui Pan, Author; Hui Huang, Author
Publication year: 2021
Pages: pp 155-170
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] Markov random field
[IGN terms] classification by convolutional neural network
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] optical image
[IGN terms] mesh
[IGN terms] triangular facet
[IGN terms] level of abstraction
[IGN terms] polygon
[IGN terms] 3D reconstruction
[IGN terms] object reconstruction
[IGN terms] indoor scene
Abstract: (author) In this paper, we propose a novel approach that reconstructs an indoor scene in a structure-aware manner and produces two meshes with different levels of abstraction. To be precise, we start from the raw triangular mesh of the indoor scene and decompose it into two parts: structure and non-structure objects. On the one hand, structure objects are defined as significant permanent parts of the indoor environment, such as floors, ceilings and walls. In the proposed algorithm, structure objects are abstracted by planar primitives and assembled into a polygonal structure mesh. This step produces a compact, structure-aware, watertight model that decreases the complexity of the original mesh by three orders of magnitude. On the other hand, non-structure objects are movable objects in the indoor environment, such as furniture and interior decoration. Meshes of these objects are repaired and simplified according to their relationship to the structure primitives. Finally, the union of all the non-structure meshes and the structure mesh comprises the scene mesh. Note that the structure mesh and the scene mesh preserve different levels of abstraction and can be used for different applications according to user preference. Our experiments on both lidar and RGBD data, scanned from simple to large-scale indoor scenes, indicate that the proposed framework generates structure-aware results while being robust and scalable. It is also compared qualitatively and quantitatively against popular mesh approximation, floorplan generation and piecewise-planar surface reconstruction methods to demonstrate its performance.
Record number: A2021-561
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.06.007
Online publication date: 23/06/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.06.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98119
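The planar-primitive abstraction of structure objects described in the abstract can be illustrated with a small, self-contained sketch. The snippet below is not the authors' implementation; it only shows a generic RANSAC plane fit of the kind commonly used to abstract walls, floors and ceilings as planar primitives. The function name fit_plane_ransac, the tolerance values and the synthetic data are illustrative assumptions.

# Minimal sketch: RANSAC plane fitting as a stand-in for the "planar primitive"
# abstraction step applied to structure objects (floors, ceilings, walls).
# Not the paper's algorithm; thresholds and data are assumptions.
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_tol=0.02, rng=None):
    """Return (normal, offset, inlier_mask) for the dominant plane n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:           # degenerate (near-collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

if __name__ == "__main__":
    # Synthetic "wall": noisy points on the plane x = 1, plus random clutter.
    rng = np.random.default_rng(0)
    wall = np.column_stack([np.full(500, 1.0),
                            rng.uniform(0, 4, 500),
                            rng.uniform(0, 2.5, 500)])
    wall[:, 0] += rng.normal(0, 0.005, 500)
    clutter = rng.uniform(0, 4, (100, 3))
    pts = np.vstack([wall, clutter])
    n, d, inl = fit_plane_ransac(pts)
    print("plane normal:", np.round(n, 3), "inliers:", int(inl.sum()))

In a pipeline like the one the abstract outlines, such a fit would be run repeatedly on the segments labelled as structure, and the resulting planes intersected to assemble the watertight polygonal structure mesh.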
in ISPRS Journal of photogrammetry and remote sensing > vol 178 (August 2021) . - pp 155-170 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021081 | SL | Journal | Centre de documentation | Journals reading room | Available
081-2021083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information / Hao Fang in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
[article]
Title: Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information
Document type: Article/Communication
Authors: Hao Fang, Author; Florent Lafarge, Author
Publication year: 2019
Pages: pp 246-258
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] deep learning
[IGN terms] image understanding
[IGN terms] 3D geolocated data
[IGN terms] context awareness
[IGN terms] multiple representation
[IGN terms] scene
[IGN terms] indoor scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) Analyzing and extracting geometric features from 3D data is a fundamental step in 3D scene understanding. Recent works have demonstrated that deep learning architectures can operate directly on raw point clouds, i.e. without the use of intermediate grid-like structures. These architectures, however, are not designed to efficiently encode contextual information between objects. Inspired by a global feature aggregation algorithm designed for images (Zhao et al., 2017), we propose a 3D pyramid module to enrich pointwise features with multi-scale contextual information. Our module can be easily coupled with 3D semantic segmentation methods operating on 3D point clouds. We evaluated our method on three large-scale datasets with four baseline models. Experimental results show that the use of enriched features brings significant improvements to the semantic segmentation of indoor and outdoor scenes.
Record number: A2019-271
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.06.010
Online publication date: 01/07/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.06.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93089
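As a rough illustration of the multi-scale context aggregation the abstract describes, the sketch below pools per-point features over voxel grids of several sizes and concatenates the pooled context back onto each point. It is a hedged approximation, not the paper's 3D pyramid module: the function pyramid_context, the grid-based average pooling and the scale values are assumptions made for illustration only.

# Minimal sketch: enrich pointwise features with multi-scale pooled context.
# Grid pooling and scales are illustrative assumptions, not the paper's module.
import numpy as np

def pyramid_context(points, feats, scales=(0.5, 1.0, 2.0)):
    """points: (N, 3), feats: (N, C). Returns (N, C * (1 + len(scales)))."""
    enriched = [feats]
    for s in scales:
        # Assign each point to a voxel of edge length s.
        keys = np.floor(points / s).astype(np.int64)
        cells, inverse = np.unique(keys, axis=0, return_inverse=True)
        inverse = inverse.reshape(-1)
        n_cells = len(cells)
        # Average-pool features per voxel, then scatter the context back to points.
        sums = np.zeros((n_cells, feats.shape[1]))
        np.add.at(sums, inverse, feats)
        counts = np.bincount(inverse, minlength=n_cells)[:, None]
        enriched.append((sums / counts)[inverse])
    return np.concatenate(enriched, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 4, (1000, 3))
    f = rng.normal(size=(1000, 8))
    out = pyramid_context(pts, f)
    print(out.shape)  # (1000, 32) with 3 scales and C = 8

The enriched features could then be fed to any per-point classifier, which is the sense in which such a module "can be easily coupled" with existing point cloud segmentation networks.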
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 246-258 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019081 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2019083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan