Documents available in this category (231)
PSSNet: Planarity-sensible Semantic Segmentation of large-scale urban meshes / Weixiao Gao in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
[article]
Title: PSSNet: Planarity-sensible Semantic Segmentation of large-scale urban meshes
Document type: Article/Communication
Authors: Weixiao Gao, Author; Liangliang Nan, Author; Bas Boom, Author; Hugo Ledoux, Author
Year of publication: 2023
Article pages: pp 32 - 44
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] 3D scene analysis
[IGN terms] Markov random field
[IGN terms] supervised classification
[IGN terms] contour
[IGN terms] mesh
[IGN terms] multilayer perceptron
[IGN terms] graph neural network
[IGN terms] urban scene
[IGN terms] semantic segmentation
Abstract: (Author) We introduce a novel deep learning-based framework to interpret 3D urban scenes represented as textured meshes. Based on the observation that object boundaries typically align with the boundaries of planar regions, our framework achieves semantic segmentation in two steps: planarity-sensible over-segmentation followed by semantic classification. The over-segmentation step generates an initial set of mesh segments that capture the planar and non-planar regions of urban scenes. In the subsequent classification step, we construct a graph that encodes the geometric and photometric features of the segments in its nodes and the multi-scale contextual features in its edges. The final semantic segmentation is obtained by classifying the segments using a graph convolutional network. Experiments and comparisons on two semantic urban mesh benchmarks demonstrate that our approach outperforms the state-of-the-art methods in terms of boundary quality, mean IoU (intersection over union), and generalization ability. We also introduce several new metrics for evaluating mesh over-segmentation methods dedicated to semantic segmentation, and our proposed over-segmentation approach outperforms state-of-the-art methods on all metrics. Our source code is available at https://github.com/WeixiaoGao/PSSNet.
Record number: A2023-064
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.12.020
Online publication date: 02/01/2023
Online: https://doi.org/10.1016/j.isprsjprs.2022.12.020
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102399
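A minimal, hypothetical sketch of the first step described in the abstract above: grouping mesh faces into near-planar segments before segment-level classification. The region-growing routine, its inputs (face normals, a face-adjacency map) and the angle threshold are generic stand-ins for illustration, not the authors' implementation:

```python
# Hypothetical sketch (not the PSSNet code): greedy planarity-driven region growing.
import numpy as np

def planar_oversegment(face_normals, face_adjacency, angle_thresh_deg=10.0):
    """A face joins a segment while its normal stays within angle_thresh_deg
    of the segment's seed normal; remaining faces start new segments."""
    n_faces = len(face_normals)
    labels = np.full(n_faces, -1, dtype=int)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    segment = 0
    for seed in range(n_faces):
        if labels[seed] != -1:
            continue
        labels[seed] = segment
        stack = [seed]
        while stack:
            face = stack.pop()
            for nbr in face_adjacency[face]:
                if labels[nbr] == -1 and np.dot(face_normals[seed], face_normals[nbr]) >= cos_thresh:
                    labels[nbr] = segment
                    stack.append(nbr)
        segment += 1
    return labels

# Toy example: three coplanar faces form one segment, the fourth its own.
normals = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(planar_oversegment(normals, adjacency))  # -> [0 0 0 1]
```

In the full pipeline described by the abstract, each resulting segment would then become a node of a graph whose edges carry contextual features, and a graph convolutional network would classify the segments.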
in ISPRS Journal of photogrammetry and remote sensing > vol 196 (February 2023) . - pp 32 - 44 [article]

A geometry-aware attention network for semantic segmentation of MLS point clouds / Jie Wan in International journal of geographical information science IJGIS, vol 37 n° 1 (January 2023)
[article]
Title: A geometry-aware attention network for semantic segmentation of MLS point clouds
Document type: Article/Communication
Authors: Jie Wan, Author; Yongyang Xu, Author; Qinjun Qiu, Author; et al., Author
Year of publication: 2023
Article pages: pp 138 - 161
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] attention (machine learning)
[IGN terms] correlation
[IGN terms] lidar data
[IGN terms] 3D geospatial data
[IGN terms] geometric figure
[IGN terms] loss function
[IGN terms] graph
[IGN terms] multilayer perceptron
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) Semantic segmentation of mobile laser scanning (MLS) point clouds can provide meaningful 3D semantic information of urban facilities for various applications. However, it still remains a challenge to extract accurate 3D semantic information from MLS point cloud data due to its irregular 3D geometric structure in a large-scale outdoor scene. To this end, this study develops a geometry-aware attention point network (GAANet) with geometric properties of the point cloud as a reference. Specifically, the proposed method first builds a graph-like region for each input point to establish the geometric correlation toward its neighbors for robustly describing local geometry-aware features. Thereafter, the method introduces a novel multi-head attention mechanism to efficiently learn local discriminative features on the constructed graphs and a feature combination operation to capture both local and global geometric dependencies inside fused point features for significantly facilitating the segmentation of small or incomplete 3D objects at point level. Finally, an adaptive loss function is appended to handle class imbalance for the overall performance improvement. The validation experiments on two challenging benchmarks demonstrate the effectiveness and powerful generalization ability of the proposed method, which achieves state-of-the-art performance with mean IoU of 65.09% and 95.20% on the Toronto-3D and Oakland 3-D MLS datasets, respectively.
Record number: A2023-038
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/13658816.2022.2111572
Online publication date: 24/08/2022
Online: https://doi.org/10.1080/13658816.2022.2111572
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102309
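A minimal numpy sketch of the neighborhood-attention idea described in the abstract above: each point attends over its k nearest neighbors and aggregates their features with softmax weights. The function name, the single-head dot-product form, and the toy inputs are assumptions for illustration, not GAANet itself:

```python
# Hypothetical sketch: single-head dot-product attention over kNN neighborhoods.
import numpy as np

def knn_attention(xyz, feats, k=4):
    """xyz: (N, 3) point coordinates, feats: (N, C) per-point features."""
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]                  # k nearest neighbors, self excluded
    out = np.empty_like(feats)
    for i, nbrs in enumerate(knn):
        scores = feats[nbrs] @ feats[i] / np.sqrt(feats.shape[1])  # scaled dot-product scores
        w = np.exp(scores - scores.max())
        w /= w.sum()                                               # softmax over the neighborhood
        out[i] = w @ feats[nbrs]                                   # attention-weighted aggregation
    return out

pts = np.random.rand(16, 3)
f = np.random.rand(16, 8)
print(knn_attention(pts, f).shape)  # (16, 8)
```

A learned multi-head variant, as in the abstract, would project the features into several subspaces, run this aggregation per head, and concatenate the results.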
in International journal of geographical information science IJGIS > vol 37 n° 1 (January 2023) . - pp 138 - 161 [article]

Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans / Romain Loiseau (2023)
Title: Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans
Document type: Article/Communication
Authors: Romain Loiseau, Author; Elliot Vincent, Author; Mathieu Aubry, Author; Loïc Landrieu, Author
Publisher: Ithaca [New York - United States]: ArXiv - Cornell University
Year of publication: 2023
Extent: 18 p.
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] lidar data
[IGN terms] 3D geospatial data
[IGN terms] complex information
[IGN terms] 3D scene
[IGN terms] point cloud
[IGN terms] urban area
Abstract: (author) We propose an unsupervised method for parsing large 3D scans of real-world scenes into interpretable parts. Our goal is to provide a practical tool for analyzing 3D scenes with unique characteristics in the context of aerial surveying and mapping, without relying on application-specific user annotations. Our approach is based on a probabilistic reconstruction model that decomposes an input 3D point cloud into a small set of learned prototypical shapes. Our model provides an interpretable reconstruction of complex scenes and leads to relevant instance and semantic segmentations. To demonstrate the usefulness of our results, we introduce a novel dataset of seven diverse aerial LiDAR scans. We show that our method outperforms state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. Our method offers significant advantage over existing approaches, as it does not require any manual annotations, making it a practical and efficient tool for 3D scene analysis. Our code and dataset are available at https://imagine.enpc.fr/~loiseaur/learnable-earth-parser
Record number: P2023-005
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Preprint
nature-HAL: Preprint
DOI: none
Online: https://hal.science/hal-04135416
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103347

PSMNet-FusionX3: LiDAR-guided deep learning stereo dense matching on aerial images / Teng Wu (2023)
Title: PSMNet-FusionX3: LiDAR-guided deep learning stereo dense matching on aerial images
Document type: Article/Communication
Authors: Teng Wu, Author; Bruno Vallet, Author; Marc Pierrot-Deseilligny, Author
Publisher: Computer Vision Foundation CVF
Year of publication: 2023
Conference: CVPR 2023, IEEE Conference on Computer Vision and Pattern Recognition workshops, 18/06/2023 - 22/06/2023, Vancouver, British Columbia - Canada, OA Proceedings
Extent: pp 6526 - 6535
General note: Bibliography; see also https://openaccess.thecvf.com/content/CVPR2023W/PCV/supplemental/Wu_PSMNet-FusionX3_LiDAR-Guided_Deep_CVPRW_2023_supplemental.pdf
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] dense matching
[IGN terms] deep learning
[IGN terms] processing pipeline
[IGN terms] lidar data
[IGN terms] 3D geospatial data
[IGN terms] vertical aerial image
[IGN terms] 3D scene
[IGN terms] Triangulated Irregular Network
Abstract: (author) Dense image matching (DIM) and LiDAR are two complementary techniques for recovering the 3D geometry of real scenes. While DIM provides dense surfaces, they are often noisy and contaminated with outliers. Conversely, LiDAR is more accurate and robust, but less dense and more expensive compared to DIM. In this work, we investigate learning-based methods to refine surfaces produced by photogrammetry with sparse LiDAR point clouds. Unlike the current state-of-the-art approaches in the computer vision community, our focus is on aerial acquisitions typical in photogrammetry. We propose a densification pipeline that adopts a PSMNet backbone with triangulated irregular network interpolation based expansion, feature enhancement in cost volume, and conditional cost volume normalization, i.e. PSMNet-FusionX3. Our method works better on low density and is less sensitive to distribution, demonstrating its effectiveness across a range of LiDAR point cloud densities and distributions, including analyses of dataset shifts. Furthermore, we have made both our aerial (image and disparity) dataset and code available for public use. Further information can be found at https://github.com/whuwuteng/PSMNet-FusionX3.
Record number: C2023-006
Authors' affiliation: UGE-LASTIG (2020- )
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Communication
DOI: none
Online: https://openaccess.thecvf.com/content/CVPR2023W/PCV/papers/Wu_PSMNet-FusionX3_Li [...]
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103277

Automatic registration method of multi-source point clouds based on building facades matching in urban scenes / Yumin Tan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 12 (December 2022)
[article]
Title: Automatic registration method of multi-source point clouds based on building facades matching in urban scenes
Document type: Article/Communication
Authors: Yumin Tan, Author; Yanzhe Shi, Author; Yunxin Li, Author; et al., Author
Year of publication: 2022
Article pages: pp 767 - 782
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetry
[IGN terms] ICP algorithm
[IGN terms] shape matching
[IGN terms] point matching
[IGN terms] lidar data
[IGN terms] feature extraction
[IGN terms] facade
[IGN terms] multi-source data fusion
[IGN terms] 3D modelling
[IGN terms] aerial photogrammetry
[IGN terms] point registration
[IGN terms] RANSAC (algorithm)
[IGN terms] registration of geolocated data
[IGN terms] urban scene
[IGN terms] data overlay
Abstract: (author) Both UAV photogrammetry and lidar have become common in deriving three-dimensional models of urban scenes, and each has its own advantages and disadvantages. However, the fusion of these multi-source data is still challenging, and registration is one of the most important stages. In this paper, we propose a coarse point cloud registration method that consists of two steps. The first step extracts urban building facades in both an oblique photogrammetric point cloud and a lidar point cloud. The second step aligns the two point clouds using the extracted building facades. The Object Vicinity Distribution Feature (Dijkman and Van Den Heuvel 2002) is introduced to describe the distribution of building facades and register the two heterologous point clouds. This method provides a good initial state for the later refined registration process and is translation, rotation, and scale invariant. Experiment results show that the accuracy of the proposed automatic registration method is equivalent to that of manual registration with control points.
Record number: A2022-882
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.22-00069R3
Online publication date: 01/12/2022
Online: https://doi.org/10.14358/PERS.22-00069R3
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102206
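A minimal sketch of the coarse-alignment step described in the abstract above: once facades have been matched between the two point clouds, a similarity transform (scale, rotation, translation) can be estimated in closed form from corresponding facade centroids. This is a generic Umeyama/Procrustes fit under assumed matched correspondences, not the paper's Object Vicinity Distribution Feature matching:

```python
# Hypothetical sketch: closed-form similarity transform from matched 3D centroids.
import numpy as np

def fit_similarity(src, dst):
    """Return s, R, t such that dst ≈ s * R @ src + t (src, dst: (N, 3) matched points)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)         # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (src_c ** 2).sum()   # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known scale/rotation/translation from 5 matched centroids.
rng = np.random.default_rng(0)
src = rng.random((5, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = 2.0 * src @ R_true.T + np.array([10.0, -3.0, 1.0])
s, R, t = fit_similarity(src, dst)
print(round(s, 3))  # ~2.0
```

Such a coarse transform gives the "good initial state" mentioned in the abstract, after which an ICP-style refinement can take over.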
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 12 (December 2022) . - pp 767 - 782 [article]

Automatic registration of point cloud and panoramic images in urban scenes based on pole matching / Yuan Wang in International journal of applied Earth observation and geoinformation, vol 115 (December 2022)
Mapping impervious surfaces with a hierarchical spectral mixture analysis incorporating endmember spatial distribution / Zhenfeng Shao in Geo-spatial Information Science, vol 25 n° 4 (December 2022)
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning / Yunqin Li in Sustainable Cities and Society, vol 86 (November 2022)
Application of a graph convolutional network with visual and semantic features to classify urban scenes / Yongyang Xu in International journal of geographical information science IJGIS, vol 36 n° 10 (October 2022)
Attention mechanisms in computer vision: A survey / Meng-Hao Guo in Computational Visual Media, vol 8 n° 3 (September 2022)
DART-Lux: An unbiased and rapid Monte Carlo radiative transfer method for simulating remote sensing images / Yingjie Wang in Remote sensing of environment, vol 274 (June 2022)
Application oriented quality evaluation of Gaofen-7 optical stereo satellite imagery / Jiaojiao Tian in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-1-2022 (2022 edition)
Cooperative image orientation considering dynamic objects / P. Trusheim in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-1-2022 (2022 edition)
Railway lidar semantic segmentation with axially symmetrical convolutional learning / Antoine Manier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)