Author details
Author: Linxi Huan
Available documents by this author (2)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes
Document type: Article/Communication
Authors: Linxi Huan, Author; Xianwei Zheng, Author; Jianya Gong, Author
Year of publication: 2022
Pages: pp 301-314
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] 3D geolocated data
[IGN terms] geometry
[IGN terms] RGB image
[IGN terms] mesh
[IGN terms] semantic modelling
[IGN terms] 3D object
[IGN terms] 3D reconstruction
[IGN terms] object reconstruction
[IGN terms] indoor scene
Abstract: (author) Semantic indoor 3D modeling with multi-task deep neural networks is an efficient and low-cost way of reconstructing an indoor scene with a geometrically complete room structure and semantic 3D individuals. Challenged by the complexity and clutter of indoor scenarios, the semantic reconstruction quality of current methods is still limited by insufficient exploration and learning of 3D geometry information. To this end, this paper proposes an end-to-end multi-task neural network for geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes (termed GeoRec). In the proposed GeoRec, we build a geometry extractor that can effectively learn geometry-enhanced feature representations from depth data, to improve the estimation accuracy of layout, camera pose and 3D object bounding boxes. We also introduce a novel object mesh generator that strengthens the reconstruction robustness of GeoRec to indoor occlusion with geometry-enhanced implicit shape embedding. With the parsed scene semantics and geometries, the proposed GeoRec reconstructs an indoor scene by placing the reconstructed object mesh models, using the 3D object detection results, in the estimated layout cuboid. Extensive experiments conducted on two benchmark datasets show that the proposed GeoRec yields outstanding performance in mean chamfer distance error for object reconstruction on the challenging Pix3D dataset, 70.45% mAP for 3D object detection and 77.1% 3D mIoU for layout estimation on the commonly used SUN RGB-D dataset. Notably, the mesh reconstruction sub-network of GeoRec trained on Pix3D can be directly transferred to SUN RGB-D without any fine-tuning, demonstrating high generalization ability.
Record number: A2022-235
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2022.02.014
Online publication date: 03/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.02.014
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100139
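For illustration only, a minimal sketch of the symmetric Chamfer distance that the abstract cites as the object-reconstruction metric on Pix3D, assuming it is computed between point sets sampled from the predicted and ground-truth meshes; this is not taken from the GeoRec code base, and the function name is hypothetical.

import numpy as np

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Mean symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    diff = points_a[:, None, :] - points_b[None, :, :]
    dist_sq = np.sum(diff ** 2, axis=-1)
    # For each point, squared distance to its nearest neighbour in the other set.
    a_to_b = dist_sq.min(axis=1)
    b_to_a = dist_sq.min(axis=0)
    return float(a_to_b.mean() + b_to_a.mean())

# Example: compare points sampled from a reconstructed mesh with ground truth.
pred = np.random.rand(1024, 3)
gt = np.random.rand(1024, 3)
print(chamfer_distance(pred, gt))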
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022) . - pp 301-314 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2022041 | SL | Journal | Documentation centre | Journals in reading room | Available
081-2022043 | DEP-RECP | Journal | LASTIG | Deposit in unit | Not for loan
081-2022042 | DEP-RECF | Journal | Nancy | Deposit in unit | Not for loan

Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss / Xianwei Zheng in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
[article]
Title: Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss
Document type: Article/Communication
Authors: Xianwei Zheng, Author; Linxi Huan, Author; Gui-Song Xia, Author; Jianya Gong, Author
Year of publication: 2020
Pages: pp 15-28
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] region-based classification
[IGN terms] convolutional neural network classification
[IGN terms] contour
[IGN terms] very high resolution image
[IGN terms] kernel-based method
[IGN terms] urban scene
[IGN terms] semantic segmentation
Abstract: (author) Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task in urban scene understanding. However, due to the huge quantity of detail contained in an image and the large variations of objects in scale and appearance, existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects, and thus fail to depict these objects consistently. To address these issues uniformly, we propose a standalone end-to-end edge-aware neural network (EaNet) for urban scene semantic segmentation. To preserve semantic consistency inside objects, the EaNet model incorporates a large kernel pyramid pooling (LKPP) module that captures rich multi-scale context with strong continuous feature relations. To effectively separate confusing objects with sharp contours, a Dice-based edge-aware loss function (EA loss) is devised to guide EaNet to refine both the pixel- and image-level edge information directly from the semantic segmentation prediction. In the proposed EaNet model, the LKPP module and the EA loss are coupled to enable comprehensive feature learning across an entire semantic object. Extensive experiments on three challenging datasets demonstrate that our method can be readily generalized to multi-scale ground/aerial urban scene images, achieving 81.7% mIoU on the Cityscapes test set and a 90.8% mean F1-score on the ISPRS Vaihingen 2D test set. Code is available at: https://github.com/geovsion/EaNet.
Record number: A2020-703
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.09.019
Online publication date: 14/10/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.019
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96228
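As a rough illustration of the Dice-based edge-aware loss described in the abstract, the sketch below derives soft edge maps from the predicted and reference masks with a Laplacian filter and applies a Dice loss to them; the authors' actual formulation is given in the paper and the linked repository, and every name here is hypothetical.

import torch
import torch.nn.functional as F

def edge_map(mask: torch.Tensor) -> torch.Tensor:
    """Soft edge map of a (B, 1, H, W) probability mask via a Laplacian filter."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], device=mask.device).view(1, 1, 3, 3)
    return torch.abs(F.conv2d(mask, kernel, padding=1))

def dice_edge_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice loss between the edge maps of a prediction and its ground truth."""
    pred_edges, target_edges = edge_map(pred), edge_map(target)
    inter = (pred_edges * target_edges).sum(dim=(1, 2, 3))
    union = pred_edges.sum(dim=(1, 2, 3)) + target_edges.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

# Example: foreground-probability prediction against a binary ground-truth mask.
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
dice_edge_loss(pred, target).backward()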
in ISPRS Journal of photogrammetry and remote sensing > vol 170 (December 2020) . - pp 15-28 [article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
081-2020121 | RAB | Journal | Documentation centre | Reserve L003 | Available