Date: April 2023
Published: 01/04/2023
[Issue]
Contents
Towards global scale segmentation with OpenStreetMap and remote sensing / Munazza Usmani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 8 (April 2023)
[article]
Title: Towards global scale segmentation with OpenStreetMap and remote sensing
Document type: Article/Communication
Authors: Munazza Usmani, Author; Maurizio Napolitano, Author; Francesca Bovolo, Author
Publication year: 2023
Article page(s): n° 100031
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] building
[IGN terms] convolutional neural network classification
[IGN terms] volunteered geographic information
[IGN terms] high-resolution imagery
[IGN terms] semantic information
[IGN terms] land cover
[IGN terms] OpenStreetMap
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] land use
Abstract: (author) Land Use Land Cover (LULC) segmentation is a well-known application of remote sensing in urban environments. Up-to-date and complete data are of major importance in this field. Although achieved with some success, pixel-based segmentation remains challenging because of class variability. With the increasing popularity of crowd-sourcing projects such as OpenStreetMap, the availability of user-generated content has also increased, providing a new prospect for LULC segmentation. We propose a deep-learning approach to segment objects in high-resolution imagery by using semantic crowdsource information. Given the complexity of satellite imagery and crowdsource databases, deep learning frameworks play a significant role; this integration reduces computation and labor costs. Our methods are based on a fully convolutional neural network (CNN) adapted for multi-source data processing. We discuss the use of data augmentation techniques and improvements to the training pipeline. We applied semantic (U-Net) and instance (Mask R-CNN) segmentation methods, and Mask R-CNN showed significantly higher segmentation accuracy from both qualitative and quantitative viewpoints. The methods reach 91% and 96% overall accuracy in building segmentation and 90% in road segmentation, demonstrating the complementarity of OSM and remote sensing and their potential for city sensing applications.
Record number: A2023-148
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.ophoto.2023.100031
Online publication date: 16/02/2023
Online: https://doi.org/10.1016/j.ophoto.2023.100031
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102807
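The abstract describes training segmentation networks on high-resolution imagery using OSM semantic information as labels. As an illustration only (not the authors' code), a minimal stdlib sketch of one ingredient of such a pipeline: rasterizing a hypothetical OSM building footprint, already projected to pixel coordinates, into the kind of binary label mask a U-Net or Mask R-CNN would be trained against.

```python
# Sketch: turn an OSM-style building footprint (polygon, here already in
# pixel coordinates) into a binary label mask via point-in-polygon tests.
# Polygon vertices and grid size are made-up illustration values.

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test: is (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(poly, width, height):
    """Return a height x width binary mask (1 = building pixel center)."""
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0
             for c in range(width)] for r in range(height)]

building = [(1, 1), (6, 1), (6, 5), (1, 5)]   # hypothetical footprint
mask = rasterize(building, 8, 8)
```

Real pipelines would of course pull footprints from the OSM database and reproject them into the image's coordinate system before rasterizing, but the label-mask idea is the same.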
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 8 (April 2023) . - n° 100031 [article]

Point cloud registration for LiDAR and photogrammetric data: A critical synthesis and performance analysis on classic and deep learning algorithms / Ningli Xu in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 8 (April 2023)
[article]
Title: Point cloud registration for LiDAR and photogrammetric data: A critical synthesis and performance analysis on classic and deep learning algorithms
Document type: Article/Communication
Authors: Ningli Xu, Author; Rongjun Qin, Author; Shuang Song, Author
Publication year: 2023
Article page(s): n° 100032
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] ICP algorithm
[IGN terms] overlap
[IGN terms] lidar data
[IGN terms] Gaussian process
[IGN terms] spatial data registration
[IGN terms] point cloud
[IGN terms] data overlay
Abstract: (author) Three-dimensional (3D) point cloud registration is a fundamental step for many 3D modeling and mapping applications. Existing approaches are highly disparate in data source, scene complexity, and application, so current practices in various point cloud registration tasks remain ad-hoc processes. Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformations between unregistered point clouds of complex objects and scenes. However, their performance is mostly evaluated on a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of state-of-the-art (SOTA) point cloud registration methods, analyzing and evaluating them on a diverse set of point cloud data from indoor to satellite sources. The quantitative analysis allows for exploring the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analysis works that treat point cloud registration as a holistic process, our experimental analysis follows its inherent two-step process, to better comprehend these approaches: feature/keypoint-based initial coarse registration, followed by dense fine registration through cloud-to-cloud (C2C) optimization. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods, were tested. We observed that the success rate of most of the algorithms is below 40% on the datasets we tested, and that there is still a large margin of improvement upon existing algorithms concerning 3D sparse correspondence search and the ability to register point clouds with complex geometry and occlusions.
Based on the statistics evaluated on three datasets, we identify the best-performing methods for each step, provide recommendations, and outline future efforts.
Record number: A2023-149
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.ophoto.2023.100032
Online publication date: 16/02/2023
Online: https://doi.org/10.1016/j.ophoto.2023.100032
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102808
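The abstract's two-step view ends with dense fine registration (C2C optimization), whose classic instance is the ICP algorithm listed among the record's index terms. As an illustration only (not the paper's evaluation code), a minimal NumPy sketch: the closed-form Kabsch/SVD rigid alignment for known correspondences, wrapped in a brute-force nearest-neighbour ICP loop on toy data.

```python
# Sketch of point-to-point ICP fine registration on toy data.
# best_rigid_transform solves for R, t minimizing ||R @ s_i + t - d_i||^2
# in closed form (Kabsch/SVD); icp alternates nearest-neighbour
# correspondence search with that closed-form solve.
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form rigid alignment of paired points (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        # for each current source point, pick its nearest destination point
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1),
                        axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# toy example: recover a known small rotation + translation
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
target = pts @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(pts, target)
```

This is the "dense fine registration" half only; it assumes the coarse, feature-based step has already brought the clouds close enough for nearest-neighbour correspondences to be mostly correct, which is exactly the two-step decomposition the abstract analyzes.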
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 8 (April 2023) . - n° 100032 [article]