Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > reconstruction 3D
reconstruction 3D
Synonym(s): reconstruction volumique; reconstruction volumique tridimensionnelle
Documents available in this category (588)


Generation of concise 3D building model from dense meshes by extracting and completing planar primitives / Xinyi Liu in Photogrammetric record, vol 38 n° 181 (March 2023)
[article]
Title: Generation of concise 3D building model from dense meshes by extracting and completing planar primitives
Document type: Article/Communication
Authors: Xinyi Liu; Xianzhang Zhu; Yongjun Zhang; et al.
Publication year: 2023
Pages: pp 22 - 46
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Photogrammétrie
[Termes IGN] adjacence
[Termes IGN] Bâti-3D
[Termes IGN] maillage
[Termes IGN] modélisation du bâti
[Termes IGN] primitive géométrique
[Termes IGN] reconstruction 3D
[Termes IGN] segmentation en plan
[Termes IGN] semis de points
Abstract: (author) The generation of a concise building model has been and continues to be a challenge in photogrammetry and computer graphics. Current methods typically focus on the simplicity and fidelity of the model, but they either fail to preserve the structural information or suffer from low computational efficiency. In this paper, we propose a novel method to generate concise building models from dense meshes by extracting and completing the planar primitives of the building. From the perspective of probability, we first extract planar primitives from the input mesh and obtain the adjacency relationships between the primitives. Since primitive loss and structural defects are inevitable in practice, we employ a novel structural completion approach to eliminate linkage errors. Finally, the concise polygonal mesh is reconstructed by connectivity-based primitive assembly. Our method is efficient and robust to various challenging data. Experiments on various building models revealed the efficacy and applicability of our method.
Record number: A2023-162
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1111/phor.12438
Online publication date: 04/01/2023
Online: https://doi.org/10.1111/phor.12438
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102865
in Photogrammetric record > vol 38 n° 181 (March 2023) . - pp 22 - 46 [article]
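The pipeline described in the abstract above starts by extracting planar primitives and their adjacency from the input mesh. As a rough illustration of that first stage only, the sketch below fits planes to a sampled point set with a plain RANSAC loop in NumPy; it is an assumed, minimal example (function names, thresholds and the fitting strategy are illustrative), not the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' implementation): RANSAC-style
# extraction of planar primitives from a sampled point set.
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns unit normal n and offset d with n.x + d = 0."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n.dot(centroid)

def extract_planes(points, n_iters=500, inlier_tol=0.02, min_support=100, seed=None):
    """Greedily peel off planar primitives; returns a list of (normal, offset, inlier_points)."""
    rng = np.random.default_rng(seed)
    remaining = points.copy()
    planes = []
    while len(remaining) >= min_support:
        best = None
        for _ in range(n_iters):
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            n, d = fit_plane(sample)
            inliers = np.abs(remaining @ n + d) < inlier_tol
            if best is None or inliers.sum() > best.sum():
                best = inliers
        if best.sum() < min_support:        # no sufficiently supported plane left
            break
        n, d = fit_plane(remaining[best])   # refit on all inliers of the best candidate
        planes.append((n, d, remaining[best]))
        remaining = remaining[~best]        # remove assigned points and continue
    return planes
```

Adjacency between the extracted primitives could then be derived from shared mesh edges or proximity of their inlier sets, which is the information the completion and assembly stages of the paper operate on.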
A CNN based approach for the point-light photometric stereo problem / Fotios Logothetis in International journal of computer vision, vol 131 n° 1 (January 2023)
[article]
Title: A CNN based approach for the point-light photometric stereo problem
Document type: Article/Communication
Authors: Fotios Logothetis; Roberto Mecca; Ignas Budvytis; et al.
Publication year: 2023
Pages: pp 101 - 120
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] distribution du coefficient de réflexion bidirectionnelle BRDF
[Termes IGN] éclairement lumineux
[Termes IGN] effet de profondeur cinétique
[Termes IGN] intensité lumineuse
[Termes IGN] itération
[Termes IGN] reconstruction 3D
[Termes IGN] réflectivité
[Termes IGN] stéréoscopie
[Termes IGN] vue perspective
Abstract: (author) Reconstructing the 3D shape of an object using several images under different light sources is a very challenging task, especially when realistic assumptions such as light propagation and attenuation, perspective viewing geometry and specular light reflection are considered. Many works tackling Photometric Stereo (PS) problems relax most of the aforementioned assumptions; in particular, they ignore specular reflection and global illumination effects. In this work, we propose a CNN-based approach capable of handling these realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo and adapting them to the point-light setup. We achieve this by employing an iterative procedure of point-light PS for shape estimation which has two main steps. First, we train a per-pixel CNN to predict surface normals from reflectance samples. Second, we compute the depth by integrating the normal field in order to iteratively estimate light directions and attenuation, which are used to compensate the input images and compute reflectance samples for the next iteration. Our approach significantly outperforms the state of the art on the DiLiGenT real-world dataset. Furthermore, in order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo' of 14 objects of different materials, where the effects of point light sources and perspective viewing are far more significant. Our approach also outperforms the competition on this dataset. Data and test code are available at the project page.
Record number: A2023-048
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s11263-022-01689-3
Online publication date: 07/10/2022
Online: https://doi.org/10.1007/s11263-022-01689-3
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102364
in International journal of computer vision > vol 131 n° 1 (January 2023) . - pp 101 - 120 [article]
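The abstract above describes an iterative point-light photometric-stereo loop: predict per-pixel normals, integrate them into a depth map, then recompute per-pixel light directions and inverse-square attenuation to compensate the images for the next pass. The sketch below illustrates only the point-light geometry, with a classical Lambertian least-squares solver standing in for the per-pixel CNN; everything here (names, the Lambertian stand-in, the omitted depth integrator) is an assumption for illustration, not the paper's code.

```python
# Minimal sketch (assumptions, not the paper's code) of point-light photometric stereo geometry.
import numpy as np

def point_light_field(depth, K_inv, light_pos):
    """Per-pixel unit light direction and 1/r^2 attenuation for a point source at light_pos."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.stack([u, v, np.ones_like(u)], axis=-1) @ K_inv.T   # pixel rays in camera frame
    X = rays * depth[..., None]                                   # back-projected 3D points
    to_light = light_pos[None, None, :] - X
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    return to_light / dist, 1.0 / (dist[..., 0] ** 2)

def lambertian_normals(images, light_dirs):
    """Least-squares normals from images (n, h, w) and per-pixel light_dirs (n, h, w, 3)."""
    n, h, w = images.shape
    L = light_dirs.reshape(n, -1, 3).transpose(1, 0, 2)           # (hw, n, 3)
    I = images.reshape(n, -1).T[..., None]                        # (hw, n, 1)
    g = (np.linalg.pinv(L) @ I)[..., 0]                           # albedo-scaled normals
    return (g / (np.linalg.norm(g, axis=-1, keepdims=True) + 1e-8)).reshape(h, w, 3)

# Iteration skeleton (depth integration, e.g. a Poisson solver, is omitted):
#   dirs, att = point_light_field(depth, K_inv, light_pos)
#   normals   = lambertian_normals(images / att[None], dirs)
#   depth     = integrate_normals(normals)   # hypothetical helper, then repeat
```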
Reconstructing compact building models from point clouds using deep implicit fields / Zhaiyu Chen in ISPRS Journal of photogrammetry and remote sensing, vol 194 (December 2022)
[article]
Title: Reconstructing compact building models from point clouds using deep implicit fields
Document type: Article/Communication
Authors: Zhaiyu Chen; Hugo Ledoux; Seyran Khademi; et al.
Publication year: 2022
Pages: pp 58 - 73
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] Bâti-3D
[Termes IGN] champ aléatoire de Markov
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de modèle
[Termes IGN] image à haute résolution
[Termes IGN] maillage par triangles
[Termes IGN] optimisation (mathématiques)
[Termes IGN] polygone
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] semis de points
Abstract: (author) While three-dimensional (3D) building models play an increasingly pivotal role in many real-world applications, obtaining a compact representation of buildings remains an open problem. In this paper, we present a novel framework for reconstructing compact, watertight, polygonal building models from point clouds. Our framework comprises three components: (a) a cell complex is generated via adaptive space partitioning that provides a polyhedral embedding as the candidate set; (b) an implicit field is learned by a deep neural network that facilitates building occupancy estimation; (c) a Markov random field is formulated to extract the outer surface of a building via combinatorial optimization. We evaluate and compare our method with state-of-the-art methods in generic reconstruction, model-based reconstruction, geometry simplification, and primitive assembly. Experiments on both synthetic and real-world point clouds have demonstrated that, with our neural-guided strategy, high-quality building models can be obtained with significant advantages in fidelity, compactness, and computational efficiency. Our method also shows robustness to noise and insufficient measurements, and it can directly generalize from synthetic scans to real-world measurements. The source code of this work is freely available at https://github.com/chenzhaiyu/points2poly.
Record number: A2022-824
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.09.017
Online publication date: 17/10/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.09.017
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102001
in ISPRS Journal of photogrammetry and remote sensing > vol 194 (December 2022) . - pp 58 - 73 [article]
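The third component described in the abstract above formulates surface extraction as a Markov random field over the cells of the space partition, with the learned implicit field supplying per-cell occupancy. The toy sketch below shows that formulation on four cells, using stand-in occupancies and brute-force minimization instead of the graph-based combinatorial optimization; it is an illustrative assumption, not the Points2Poly code linked in the record.

```python
# Toy sketch (illustrative assumption, not the Points2Poly code): inside/outside labelling
# of candidate cells as a tiny Markov random field.
import itertools
import numpy as np

def mrf_energy(labels, occupancy, adjacency, smoothness=0.5):
    # Unary term: disagreement between a cell's label (0/1) and its predicted occupancy.
    unary = sum(abs(labels[i] - occupancy[i]) for i in range(len(labels)))
    # Pairwise term: penalize differing labels on adjacent cells (facet-area weights omitted).
    pairwise = sum(smoothness for i, j in adjacency if labels[i] != labels[j])
    return unary + pairwise

occupancy = np.array([0.9, 0.8, 0.2, 0.1])   # per-cell "inside" probabilities (stand-ins for the implicit field)
adjacency = [(0, 1), (1, 2), (2, 3)]         # which cells share a facet
best = min(itertools.product([0, 1], repeat=len(occupancy)),
           key=lambda lab: mrf_energy(lab, occupancy, adjacency))
print("inside/outside labels:", best)        # facets between differently labelled cells form the surface
```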
Point2Roof: End-to-end 3D building roof modeling from airborne LiDAR point clouds / Li Li in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Title: Point2Roof: End-to-end 3D building roof modeling from airborne LiDAR point clouds
Document type: Article/Communication
Authors: Li Li; Nan Song; Fei Sun; et al.
Publication year: 2022
Pages: pp 17 - 28
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] modélisation 3D
[Termes IGN] Perceptron multicouche
[Termes IGN] primitive géométrique
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] semis de points
[Termes IGN] toit
Abstract: (author) Three-dimensional (3D) building roof reconstruction from airborne LiDAR point clouds is an important task in photogrammetry and computer vision. To automatically reconstruct 3D building models at Level of Detail 2 (LoD-2) from airborne LiDAR point clouds, data-driven approaches usually proceed in two steps: geometric primitive extraction and roof structure inference. Such traditional approaches are not end-to-end: errors accumulated across the stages cannot be avoided, and the final 3D roof models may not be optimal. In addition, the quality of the 3D roof models largely depends on the accuracy of the geometric primitives (planes, lines, etc.). To solve these problems, we present a deep learning-based approach, named Point2Roof, to directly reconstruct building roofs from airborne LiDAR point clouds. In our method, we start by extracting deep features for each input point using PointNet++. Then, we identify a set of candidate corner points from the input point clouds using the extracted deep features, and we also regress an offset for each candidate corner point to refine its location. After that, these candidates are clustered into a set of initial vertices, and we further refine their locations to obtain the final accurate vertices. Finally, we propose a Paired Point Attention (PPA) module to predict the true model edges from an exhaustive set of candidate edges between the vertices. Unlike traditional roof modeling approaches, the proposed Point2Roof is end-to-end. Due to the lack of a building reconstruction dataset, we construct a large-scale synthetic dataset to verify the effectiveness and robustness of the proposed Point2Roof. The experimental results on this synthetic benchmark demonstrate that Point2Roof significantly outperforms traditional roof modeling approaches. The experiments also show that the network trained on the synthetic dataset can be applied to real point clouds after fine-tuning the trained model on a small real dataset. The large-scale synthetic dataset, the small real dataset and the source code of our approach are publicly available at https://github.com/Li-Li-Whu/Point2Roof.
Record number: A2022-745
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.027
Online publication date: 10/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.027
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101728
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022) . - pp 17 - 28 [article]
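One intermediate step described in the abstract above clusters the refined candidate corner points into roof vertices before edges are predicted. The sketch below shows a simple distance-threshold clustering that could play that role; it is an assumed illustration in NumPy (threshold, toy data and function name are made up), not the Point2Roof network.

```python
# Minimal sketch (assumed illustration, not the Point2Roof network): grouping refined
# candidate corner points into roof vertices with a simple distance-threshold clustering.
import numpy as np

def cluster_corners(candidates, radius=0.5):
    """Greedy clustering: each vertex is the mean of all candidates within `radius` of a seed."""
    remaining = candidates.copy()
    vertices = []
    while len(remaining):
        seed = remaining[0]
        mask = np.linalg.norm(remaining - seed, axis=1) < radius
        vertices.append(remaining[mask].mean(axis=0))
        remaining = remaining[~mask]
    return np.asarray(vertices)

# Toy usage: noisy candidate predictions around two true roof corners.
rng = np.random.default_rng(0)
cands = np.concatenate([rng.normal([0.0, 0.0, 3.0], 0.05, (20, 3)),
                        rng.normal([4.0, 0.0, 3.0], 0.05, (20, 3))])
print(cluster_corners(cands))   # two vertices, near (0, 0, 3) and (4, 0, 3)
```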
3D building reconstruction from single street view images using deep learning / Hui En Pang in International journal of applied Earth observation and geoinformation, vol 112 (August 2022)
[article]
Title: 3D building reconstruction from single street view images using deep learning
Document type: Article/Communication
Authors: Hui En Pang; Filip Biljecki
Publication year: 2022
Pages: n° 102859
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] empreinte
[Termes IGN] Helsinki
[Termes IGN] image Streetview
[Termes IGN] maillage
[Termes IGN] morphologie urbaine
[Termes IGN] précision géométrique (imagerie)
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
Abstract: (author) 3D building models are an established instance of geospatial information in the built environment, but their acquisition remains complex and topical. Approaches to reconstruct 3D building models often require existing building information (e.g. their footprints) and data such as point clouds, which are scarce and laborious to acquire, limiting their expansion. In parallel, street view imagery (SVI) has been gaining currency, driven by the rapid expansion in coverage and advances in computer vision (CV), but it has not been used much for generating 3D city models. Traditional approaches that can use SVI for reconstruction require multiple images, while in practice often only a few street-level images provide an unobstructed view of a building. We develop the reconstruction of 3D building models from a single street view image using image-to-mesh reconstruction techniques adapted from the CV domain. We consider three scenarios: (1) standalone single-view reconstruction; (2) reconstruction aided by a top view delineating the footprint; and (3) refinement of existing 3D models, i.e. we examine the use of SVI to enhance the level of detail of block (LoD1) models, which are common. The results suggest that trained models supporting (2) and (3) are able to reconstruct the overall geometry of a building, while the first scenario may derive the approximate mass of the building, useful to infer the urban form of cities. We evaluate the results by demonstrating their usefulness for volume estimation, with mean errors of less than 10% for the last two scenarios. As SVI is now available in most countries worldwide, including many regions that do not have existing footprint and/or 3D building data, our method can derive the 3D urban form from SVI rapidly and cost-effectively without requiring any existing building information. Obtaining 3D building models in regions that hitherto did not have any may enable a number of 3D geospatial analyses locally for the first time.
Record number: A2022-544
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1016/j.jag.2022.102859
Online publication date: 17/06/2022
Online: https://doi.org/10.1016/j.jag.2022.102859
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101160
in International journal of applied Earth observation and geoinformation > vol 112 (August 2022) . - n° 102859 [article]
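The evaluation described in the abstract above reports mean volume-estimation errors below 10%. A minimal sketch of such a check, assuming a block (LoD1-style) model reduced to footprint area times height, is given below; the footprint, height and reference volume are made-up toy values, not data from the paper.

```python
# Minimal sketch (assumption, not the paper's pipeline): a volume check for a block
# (LoD1-style) model, comparing footprint area times height against a reference volume.
import numpy as np

def polygon_area(xy):
    """Shoelace formula for a simple polygon given as an (n, 2) vertex array."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def block_volume(footprint_xy, height):
    return polygon_area(footprint_xy) * height

footprint = np.array([[0, 0], [10, 0], [10, 8], [0, 8]], dtype=float)  # 10 m x 8 m footprint
estimated = block_volume(footprint, 6.2)                               # reconstructed height 6.2 m
reference = 10 * 8 * 6.0                                               # reference volume
print(f"relative volume error: {abs(estimated - reference) / reference:.1%}")  # ~3.3%
```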
Évaluation de la qualité de modèles 3D issus de nuages de points / Tania Landes in XYZ, n° 171 (juin 2022)
Permalink
Application oriented quality evaluation of Gaofen-7 optical stereo satellite imagery / Jiaojiao Tian in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-1-2022 (2022 edition)
Permalink
City3D: Large-scale building reconstruction from airborne LiDAR point clouds / Jin Huang in Remote sensing, vol 14 n° 9 (May-1 2022)
Permalink
Direct photogrammetry with multispectral imagery for UAV-based snow depth estimation / Kathrin Maier in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Permalink
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Permalink
Automated 3D reconstruction of LoD2 and LoD1 models for All 10 million buildings of the Netherlands / Ravi Peters in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 3 (March 2022)
Permalink
A cost-effective method for reconstructing city-building 3D models from sparse Lidar point clouds / Marek Kulawiak in Remote sensing, vol 14 n° 5 (March-1 2022)
Permalink
Évaluation des apports de l’apprentissage profond au sein d’un service dédié à la numérisation du patrimoine / Maxime Mérizette in XYZ, n° 170 (mars 2022)
Permalink
Exploiting light directionality for image-based 3D reconstruction of non-collaborative surfaces / Ali Karami in Photogrammetric record, vol 37 n° 177 (March 2022)
Permalink
Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
Permalink