Descriptor
Documents available in this category (74)
Generation of concise 3D building model from dense meshes by extracting and completing planar primitives / Xinyi Liu in Photogrammetric record, vol 38 n° 181 (March 2023)
[article]
Title: Generation of concise 3D building model from dense meshes by extracting and completing planar primitives
Document type: Article/Communication
Authors: Xinyi Liu; Xianzhang Zhu; Yongjun Zhang; et al.
Year of publication: 2023
Pagination: pp 22 - 46
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetry
[IGN terms] adjacency
[IGN terms] buildings
[IGN terms] mesh
[IGN terms] 3D building modelling (BIM)
[IGN terms] building modelling
[IGN terms] geometric primitive
[IGN terms] 3D reconstruction
[IGN terms] plane segmentation
[IGN terms] point cloud
Abstract: (author) The generation of a concise building model has been and continues to be a challenge in photogrammetry and computer graphics. The current methods typically focus on the simplicity and fidelity of the model, but those methods either fail to preserve the structural information or suffer from low computational efficiency. In this paper, we propose a novel method to generate concise building models from dense meshes by extracting and completing the planar primitives of the building. From the perspective of probability, we first extract planar primitives from the input mesh and obtain the adjacency relationships between the primitives. Since primitive loss and structural defects are inevitable in practice, we employ a novel structural completion approach to eliminate linkage errors. Finally, the concise polygonal mesh is reconstructed by connectivity-based primitive assembling. Our method is efficient and robust to various challenging data. Experiments on various building models revealed the efficacy and applicability of our method.
Record number: A2023-162
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12438
Online publication date: 04/01/2023
Online: https://doi.org/10.1111/phor.12438
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102865
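The first step described in the abstract, extracting planar primitives from a dense mesh, can be illustrated with a minimal region-growing sketch over face normals. This is a generic illustration only, not the authors' probabilistic method; the (vertices, faces) array layout, the 10° angle threshold, and the helper names are assumptions.

```python
# Minimal sketch: group mesh faces into near-planar regions by region growing
# on face normals. Illustrative only; thresholds and data layout are assumed.
import numpy as np
from collections import defaultdict, deque

def face_normals(vertices, faces):
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def face_adjacency(faces):
    # Two faces are adjacent when they share an (undirected) edge.
    edge_to_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_to_faces[tuple(sorted((int(a), int(b))))].append(fi)
    adj = defaultdict(set)
    for fs in edge_to_faces.values():
        for i in fs:
            for j in fs:
                if i != j:
                    adj[i].add(j)
    return adj

def extract_planar_primitives(vertices, faces, angle_deg=10.0):
    """Label each face with a planar-primitive id (breadth-first growing,
    comparing every candidate face to the seed normal for simplicity)."""
    normals = face_normals(vertices, faces)
    adj = face_adjacency(faces)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = np.full(len(faces), -1, dtype=int)
    current = 0
    for seed in range(len(faces)):
        if labels[seed] != -1:
            continue
        ref = normals[seed]
        labels[seed] = current
        queue = deque([seed])
        while queue:
            fi = queue.popleft()
            for nb in adj[fi]:
                if labels[nb] == -1 and np.dot(normals[nb], ref) >= cos_thresh:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    return labels

# Tiny usage example: two triangles forming a square lie in one plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(extract_planar_primitives(verts, tris))  # -> [0 0]
```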
in Photogrammetric record > vol 38 n° 181 (March 2023) . - pp 22 - 46
[article]
PSSNet: Planarity-sensible Semantic Segmentation of large-scale urban meshes / Weixiao Gao in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
[article]
Title: PSSNet: Planarity-sensible Semantic Segmentation of large-scale urban meshes
Document type: Article/Communication
Authors: Weixiao Gao; Liangliang Nan; Bas Boom; Hugo Ledoux
Year of publication: 2023
Pagination: pp 32 - 44
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Image processing
[IGN terms] 3D scene analysis
[IGN terms] Markov random field
[IGN terms] supervised classification
[IGN terms] contour
[IGN terms] mesh
[IGN terms] multilayer perceptron
[IGN terms] graph neural network
[IGN terms] urban scene
[IGN terms] semantic segmentation
Abstract: (author) We introduce a novel deep learning-based framework to interpret 3D urban scenes represented as textured meshes. Based on the observation that object boundaries typically align with the boundaries of planar regions, our framework achieves semantic segmentation in two steps: planarity-sensible over-segmentation followed by semantic classification. The over-segmentation step generates an initial set of mesh segments that capture the planar and non-planar regions of urban scenes. In the subsequent classification step, we construct a graph that encodes the geometric and photometric features of the segments in its nodes and the multi-scale contextual features in its edges. The final semantic segmentation is obtained by classifying the segments using a graph convolutional network. Experiments and comparisons on two semantic urban mesh benchmarks demonstrate that our approach outperforms the state-of-the-art methods in terms of boundary quality, mean IoU (intersection over union), and generalization ability. We also introduce several new metrics for evaluating mesh over-segmentation methods dedicated to semantic segmentation, and our proposed over-segmentation approach outperforms state-of-the-art methods on all metrics. Our source code is available at https://github.com/WeixiaoGao/PSSNet.
Record number: A2023-064
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.12.020
Online publication date: 02/01/2023
Online: https://doi.org/10.1016/j.isprsjprs.2022.12.020
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102399
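The classification step described in the abstract, where over-segmented regions become graph nodes and context flows along edges, can be sketched with a single generic graph-convolution layer. The sketch below is a plain Kipf-and-Welling-style GCN step in NumPy, not the PSSNet architecture; the segment features, the tiny adjacency matrix, and the random weights are assumptions for illustration.

```python
# Toy sketch of segment-graph classification: one graph-convolution step
# that mixes each segment's features with those of its adjacent segments.
import numpy as np

def gcn_layer(features, adjacency, weights):
    """Symmetric-normalized adjacency (with self-loops), then linear map + ReLU."""
    a_hat = adjacency + np.eye(adjacency.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ features @ weights, 0.0)

# Three mesh segments: 0 and 1 adjacent (e.g. wall/roof), 2 isolated (ground).
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]], dtype=float)
feats = np.array([[0.9, 0.1],   # per-segment features, e.g. planarity, height
                  [0.8, 0.3],
                  [0.1, 0.9]])
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 4))     # learnable weights (random here)
print(gcn_layer(feats, adj, w).shape)  # (3, 4): context-aware segment embeddings
```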
in ISPRS Journal of photogrammetry and remote sensing > vol 196 (February 2023) . - pp 32 - 44
[article]
A unified framework for automated registration of point clouds, mesh surfaces and 3D models by using planar surfaces / Yuan Zhao in Photogrammetric record, vol 37 n° 180 (December 2022)
[article]
Title: A unified framework for automated registration of point clouds, mesh surfaces and 3D models by using planar surfaces
Document type: Article/Communication
Authors: Yuan Zhao; Hang Zhao; Marko Radanovic; et al.
Year of publication: 2022
Pagination: pp 366 - 384
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] ICP algorithm
[IGN terms] overlap
[IGN terms] lidar data
[IGN terms] 3D geospatial data
[IGN terms] mesh
[IGN terms] 3D building modelling (BIM)
[IGN terms] geospatial data registration
[IGN terms] point cloud
[IGN terms] data overlay
[IGN terms] planar surface
Abstract: (author)
Record number: A2022-939
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12428
Online publication date: 18/10/2022
Online: https://doi.org/10.1111/phor.12428
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102685
in Photogrammetric record > vol 37 n° 180 (December 2022) . - pp 366 - 384
[article]
3D building reconstruction from single street view images using deep learning / Hui En Pang in International journal of applied Earth observation and geoinformation, vol 112 (August 2022)
[article]
Title: 3D building reconstruction from single street view images using deep learning
Document type: Article/Communication
Authors: Hui En Pang; Filip Biljecki
Year of publication: 2022
Pagination: n° 102859
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] footprint
[IGN terms] Helsinki
[IGN terms] street view imagery
[IGN terms] mesh
[IGN terms] urban morphology
[IGN terms] geometric accuracy (imagery)
[IGN terms] 3D building reconstruction
[IGN terms] image segmentation
[IGN terms] point cloud
Abstract: (author) 3D building models are an established instance of geospatial information in the built environment, but their acquisition remains complex and topical. Approaches to reconstruct 3D building models often require existing building information (e.g. their footprints) and data such as point clouds, which are scarce and laborious to acquire, limiting their expansion. In parallel, street view imagery (SVI) has been gaining currency, driven by the rapid expansion in coverage and advances in computer vision (CV), but it has not been used much for generating 3D city models. Traditional approaches that can use SVI for reconstruction require multiple images, while in practice, often only few street-level images provide an unobstructed view of a building. We develop the reconstruction of 3D building models from a single street view image using image-to-mesh reconstruction techniques modified from the CV domain. We regard three scenarios: (1) standalone single-view reconstruction; (2) reconstruction aided by a top view delineating the footprint; and (3) refinement of existing 3D models, i.e. we examine the use of SVI to enhance the level of detail of block (LoD1) models, which are common. The results suggest that trained models supporting (2) and (3) are able to reconstruct the overall geometry of a building, while the first scenario may derive the approximate mass of the building, useful to infer the urban form of cities. We evaluate the results by demonstrating their usefulness for volume estimation, with mean errors of less than 10% for the last two scenarios. As SVI is now available in most countries worldwide, including many regions that do not have existing footprint and/or 3D building data, our method can derive rapidly and cost-effectively the 3D urban form from SVI without requiring any existing building information. Obtaining 3D building models in regions that hitherto did not have any, may enable a number of 3D geospatial analyses locally for the first time.
Record number: A2022-544
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.jag.2022.102859
Online publication date: 17/06/2022
Online: https://doi.org/10.1016/j.jag.2022.102859
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101160
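The abstract evaluates the reconstructed models through building-volume estimation. A standard way to obtain the volume of a closed, consistently oriented triangle mesh is the signed-tetrahedron sum; the sketch below is a generic implementation under that assumption, not the authors' pipeline, and the unit-cube test mesh is purely illustrative.

```python
# Volume of a closed, outward-oriented triangle mesh via the divergence theorem:
# sum over faces of det(v0, v1, v2) / 6. Generic sketch, not the paper's code.
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a watertight, consistently oriented triangle mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

# Unit cube (8 vertices, 12 outward-oriented triangles) -> volume 1.0
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
tris = np.array([
    [0, 1, 3], [0, 3, 2],   # x = 0 face
    [4, 6, 7], [4, 7, 5],   # x = 1 face
    [0, 4, 5], [0, 5, 1],   # y = 0 face
    [2, 3, 7], [2, 7, 6],   # y = 1 face
    [0, 2, 6], [0, 6, 4],   # z = 0 face
    [1, 5, 7], [1, 7, 3],   # z = 1 face
])
print(mesh_volume(verts, tris))  # -> 1.0
```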
in International journal of applied Earth observation and geoinformation > vol 112 (August 2022) . - n° 102859
[article]
DiffusionNet: discretization agnostic learning on surfaces / Nicholas Sharp in ACM Transactions on Graphics, TOG, Vol 41 n° 3 (June 2022)
[article]
Title: DiffusionNet: discretization agnostic learning on surfaces
Document type: Article/Communication
Authors: Nicholas Sharp; Souhaib Attaiki; K. Crane; et al.
Year of publication: 2022
Pagination: pp 1 - 16
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Artificial intelligence
[IGN terms] deep learning
[IGN terms] discretization
[IGN terms] mesh
[IGN terms] multilayer perceptron
[IGN terms] point cloud
[IGN terms] Triangular Regular Network
[IGN terms] neighbourhood (topological relation)
Abstract: (author) We introduce a new general-purpose approach to deep learning on three-dimensional surfaces based on the insight that a simple diffusion layer is highly effective for spatial communication. The resulting networks are automatically robust to changes in resolution and sampling of a surface—a basic property that is crucial for practical applications. Our networks can be discretized on various geometric representations, such as triangle meshes or point clouds, and can even be trained on one representation and then applied to another. We optimize the spatial support of diffusion as a continuous network parameter ranging from purely local to totally global, removing the burden of manually choosing neighborhood sizes. The only other ingredients in the method are a multi-layer perceptron applied independently at each point and spatial gradient features to support directional filters. The resulting networks are simple, robust, and efficient. Here, we focus primarily on triangle mesh surfaces and demonstrate state-of-the-art results for a variety of tasks, including surface classification, segmentation, and non-rigid correspondence.
Record number: A2022-321
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1145/3507905
Online publication date: 07/03/2022
Online: https://doi.org/10.1145/3507905
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100369
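The central ingredient named in the abstract, a diffusion layer whose spatial support ranges from purely local to global, can be mimicked with one implicit heat-diffusion step (I + tL)u = u0 on a Laplacian. The sketch below uses a simple combinatorial graph Laplacian rather than the paper's surface operators; the path graph, the feature spike, and the diffusion times are assumptions for illustration.

```python
# Toy diffusion layer: one implicit heat-diffusion step of per-vertex features.
# The diffusion time t plays the role of a (learnable) spatial-support parameter.
import numpy as np

def diffuse(features, adjacency, t):
    """Solve (I + t * L) u = features, with L the combinatorial graph Laplacian."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return np.linalg.solve(np.eye(len(adjacency)) + t * laplacian, features)

# Path graph of 5 vertices; a spike of signal at vertex 0.
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
signal = np.array([[1.0], [0.0], [0.0], [0.0], [0.0]])
print(diffuse(signal, adj, t=0.1).ravel())    # stays concentrated at vertex 0 (local support)
print(diffuse(signal, adj, t=100.0).ravel())  # spreads almost evenly (global support)
```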
in ACM Transactions on Graphics, TOG > Vol 41 n° 3 (June 2022) . - pp 1 - 16
[article]
Summarizing large scale 3D mesh for urban navigation / Imeen Ben Salah in Robotics and autonomous systems, vol 152 (June 2022) Permalink
Semantic segmentation of urban textured meshes through point sampling / Grégoire Grzeczkowicz in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition) Permalink
3D building model simplification method considering both model mesh and building structure / Jiangfeng She in Transactions in GIS, vol 26 n° 3 (May 2022) Permalink
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022) Permalink
A cost-effective method for reconstructing city-building 3D models from sparse Lidar point clouds / Marek Kulawiak in Remote sensing, vol 14 n° 5 (March-1 2022) Permalink
Using vertices of a triangular irregular network to calculate slope and aspect / Guanghui Hu in International journal of geographical information science IJGIS, vol 36 n° 2 (February 2022) Permalink
Permalink
Développement d’outils et de méthodes permettant l’acquisition, le traitement et la diffusion de données issues de levés par drone / Guillaume Feuillatre (2022) Permalink
Explorer les processus de mobilité passée : raisonnement ontologique fondé sur la connaissance des pratiques socioculturelles et des vestiges archéologiques / Laure Nuninger in Revue internationale de géomatique, vol 31 n° 1-2 (janvier - juin 2022) Permalink
Levé et numérisation du château de Lichtenberg en vue d’une proposition de visite virtuelle du site à des périodes remarquables / Maxime Rocha (2022) Permalink