Descriptor
Documents available in this category (107)
Enhanced 3D mapping with an RGB-D sensor via integration of depth measurements and image sequences / Bo Wu in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 9 (September 2019)
[article]
Title: Enhanced 3D mapping with an RGB-D sensor via integration of depth measurements and image sequences
Document type: Article/Communication
Authors: Bo Wu, Author; Xuming Ge, Author; Linfu Xie, Author; Wu Chen, Author
Publication year: 2019
Pages: pp 633 - 642
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] indoor map
[IGN terms] depth map
[IGN terms] 3D mapping
[IGN terms] simultaneous localization and mapping
[IGN terms] 3D spatial data
[IGN terms] state of the art
[IGN terms] RGB image
[IGN terms] data integration
[IGN terms] 3D modelling
[IGN terms] point cloud
[IGN terms] image sequence
[IGN terms] structure-from-motion
Abstract (author's): State-of-the-art visual simultaneous localization and mapping (SLAM) techniques greatly facilitate three-dimensional (3D) mapping and modeling with low-cost red-green-blue-depth (RGB-D) sensors. However, the effective range of such sensors is limited by the working range of the infrared (IR) camera that provides the depth information, which restricts their practicality for 3D mapping and modeling. To address this limitation, we present a novel solution for enhanced 3D mapping using a low-cost RGB-D sensor. We carry out state-of-the-art visual SLAM to obtain 3D point clouds within the mapping range of the RGB-D sensor, and implement an improved structure-from-motion (SfM) pipeline on the collected RGB image sequences, with additional constraints from the depth information, to produce image-based 3D point clouds. We then develop a feature-based scale-adaptive registration to merge the resulting point clouds and generate enhanced, extended 3D mapping results. We examine the proposed method on two challenging test sites. At both sites, the coverage of the generated 3D models increases by more than 50% with the proposed solution. Moreover, the solution achieves a geometric accuracy of about 1% over a measurement range of about 20 m. These experimental results demonstrate the feasibility, practicality, and potential of the proposed solution.
Record number: A2019-415
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.9.633
Online publication date: 01/09/2019
Online: https://doi.org/10.14358/PERS.85.9.633
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93542
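The scale-adaptive registration step described in the abstract above must reconcile a metric SLAM point cloud with an SfM cloud whose scale is only known up to a factor. The paper's own algorithm is feature-based and not reproduced here; as a hedged illustration only, a closed-form similarity estimate (scale + rotation + translation) from matched 3D correspondences, in the style of Umeyama's method, could look like this (the function name is ours, not the authors'):

```python
import numpy as np

def similarity_from_correspondences(src, dst):
    """Estimate scale s, rotation R, translation t so that dst ~ s * R @ src + t.

    Closed-form least-squares (Umeyama, 1991) from matched 3D point pairs.
    src, dst: (N, 3) arrays of corresponding points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                          # guard against reflections
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)     # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_s      # scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Once (s, R, t) is known, the SfM cloud can be transformed into the metric frame of the SLAM cloud and the two merged.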
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 9 (September 2019) . - pp 633 - 642 [article]
Copies (1):
Barcode 105-2019091 | Call number SL | Medium Journal | Location Documentation centre | Section Journals reading room | Availability Available

Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours / David Griffiths in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
[article]
Title: Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours
Document type: Article/Communication
Authors: David Griffiths, Author; Jan Böhm, Author
Publication year: 2019
Pages: pp 70 - 83
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] deep learning
[IGN terms] buildings
[IGN terms] convolutional neural network classification
[IGN terms] edge detection
[IGN terms] lidar data
[IGN terms] 3D spatial data
[IGN terms] public data
[IGN terms] data fusion
[IGN terms] RGB image
[IGN terms] United Kingdom
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] rural area
Abstract (author's): Robust and reliable automatic building detection and segmentation from aerial images and point clouds has been a prominent field of research in remote sensing, computer vision and point cloud processing for decades. One of the largest issues associated with deep learning methods is the high quantity of data required for training. To help address this, we present a method to improve public GIS building footprint labels using Morphological Geodesic Active Contours (MorphGACs). We demonstrate that by improving the quality of building footprint labels for detection and semantic segmentation, more robust and reliable models can be obtained. We evaluate these methods on a large UK-based dataset of 24,556 images containing 169,835 building instances, training several Mask/Faster R-CNN and RetinaNet deep convolutional neural networks. The networks are supplied with both RGB and fused RGB-lidar data, and we offer quantitative analysis of the benefits of including depth data for building segmentation. Employing both methods, we achieve a detection accuracy of 0.92 (mAP@0.5) and a segmentation F1 score of 0.94 over 4,911 test images ranging from urban to rural scenes.
Record number: A2019-265
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.05.013
Online publication date: 06/06/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.05.013
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93079
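The figures reported in the abstract above (mAP@0.5 and F1) rest on two standard metrics: box-level intersection-over-union, which decides whether a detection counts as a true positive at the 0.5 threshold, and pixel-level F1 for segmentation. A minimal sketch of both, not taken from the paper:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def f1_score(tp, fp, fn):
    """Segmentation F1 from pixel-level true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

At mAP@0.5, a predicted box matched to a ground-truth footprint counts as correct only when `box_iou` between them is at least 0.5.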
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 70 - 83 [article]
Copies (3):
Barcode 081-2019081 | Call number RAB | Medium Journal | Location Documentation centre | Section In storage L003 | Availability Available
Barcode 081-2019083 | Call number DEP-RECP | Medium Journal | Location LASTIG | Section Deposited in unit | Availability Not for loan
Barcode 081-2019082 | Call number DEP-RECF | Medium Journal | Location Nancy | Section Deposited in unit | Availability Not for loan

Semantic façade segmentation from airborne oblique images / Yaping Lin in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
[article]
Title: Semantic façade segmentation from airborne oblique images
Document type: Article/Communication
Authors: Yaping Lin, Author; Francesco Nex, Author; Michael Ying Yang, Author
Publication year: 2019
Pages: pp 425 - 433
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] conditional random field
[IGN terms] random forest classification
[IGN terms] convolutional neural network classification
[IGN terms] facade
[IGN terms] oblique aerial image
[IGN terms] RGB image
[IGN terms] image segmentation
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract (author's): In this paper, very-high-resolution oblique airborne images are used to address facade segmentation from aerial views in urban areas. A traditional classification method (random forests) is compared with state-of-the-art fully convolutional networks (FCNs). The random forests use hand-crafted image features, including red-green-blue (RGB), scale-invariant feature transform (SIFT), and Texton features, together with point cloud features (normal vectors and planarity) extracted at different scales. In contrast, the inputs to the FCNs are the RGB bands and the third components of the normal vectors. In both cases, three-dimensional (3D) features are projected back into image space to support facade interpretation. A fully connected conditional random field (CRF) is finally applied as post-processing of the FCN output to refine the segmentation results. Several tests show that the models embedding the 3D component outperform the image-only solutions, and that FCNs significantly outperform random forests, especially for balcony delineation.
Record number: A2019-247
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.6.425
Online publication date: 01/06/2019
Online: https://doi.org/10.14358/PERS.85.6.425
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93003
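Projecting 3D point features (normal components, planarity) back into image space, as the abstract above describes, amounts to a pinhole projection with a depth test. An illustrative sketch under our own assumptions (a simple nearest-point z-buffer; the `project_point_features` name and per-pixel splatting are ours, not the authors'):

```python
import numpy as np

def project_point_features(points, feats, K, H, W):
    """Splat per-point feature values (e.g. normal z-components) into an image grid.

    points: (N, 3) in camera coordinates (z > 0); feats: (N,) feature values;
    K: 3x3 camera intrinsics. The nearest point wins per pixel (z-buffer).
    """
    img = np.zeros((H, W))
    depth = np.full((H, W), np.inf)
    uvw = (K @ points.T).T                    # homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    for ui, vi, zi, fi in zip(u, v, points[:, 2], feats):
        if 0 <= ui < W and 0 <= vi < H and zi < depth[vi, ui]:
            depth[vi, ui] = zi                # keep the nearest point only
            img[vi, ui] = fi
    return img
```

The resulting feature image can then be stacked with the RGB bands as extra input channels for a classifier.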
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 6 (June 2019) . - pp 425 - 433 [article]
Copies (1):
Barcode 105-2019061 | Call number SL | Medium Journal | Location Documentation centre | Section Journals reading room | Availability Available

Fusion of thermal imagery with point clouds for building façade thermal attribute mapping / Dong Lin in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Fusion of thermal imagery with point clouds for building façade thermal attribute mapping
Document type: Article/Communication
Authors: Dong Lin, Author; Malgorzata Jarząbek-Rychard, Author; Xiaochong Tong, Author; Hans-Gerd Maas, Author
Publication year: 2019
Pages: pp 162 - 175
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] facade
[IGN terms] RGB image
[IGN terms] thermal image
[IGN terms] RANSAC (algorithm)
[IGN terms] point cloud
[IGN terms] SIFT (algorithm)
[IGN terms] texturing
Abstract (author's): Thermal image data are widely used to assess the insulation quality of buildings and to detect thermal leakages. In our approach, we merge terrestrial thermal image data and 3D point clouds to perform thermal texture mapping for building facades. Since geo-referencing data for a hand-held thermal camera are usually not available in such applications, registration between the thermal images and a 3D point cloud (for instance, generated from RGB image data by structure-from-motion techniques) is essential. Thermal image registration is conducted in four steps. First, another point cloud is generated from the thermal image data. Next, a coarse registration between the thermal point cloud and the RGB point cloud is performed using the fast global registration (FGR) algorithm, and the best corresponding thermal-RGB image pairs are obtained by selecting the lowest Euclidean distance between the exterior orientation parameters of the thermal images and the transformed exterior orientation parameters of the RGB images. Subsequently, radiation-invariant feature transform (RIFT), normalized barycentric coordinate system (NBCS) and random sample consensus (RANSAC) are employed to extract reliable matching features on the thermal-RGB image pairs. Afterwards, a fine registration is performed by mono-plotting of the RGB image, followed by image resection of the thermal image. Finally, to remove the blurring caused by small misalignments between candidate images, a global image pose refinement is proposed that minimizes the temperature disagreements reported by different images for the same object points. In addition, camera calibrations are performed to ensure high geometric and radiometric accuracy. Experiments showed that the proposed method not only achieves high geometric registration accuracy, but also provides good radiometric accuracy, with an RMSE below 1.5 °C.
Record number: A2019-208
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.03.010
Online publication date: 21/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.03.010
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92674
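One step the abstract above spells out is pairing each thermal image with the RGB image whose exterior orientation parameters are nearest in Euclidean distance. A minimal sketch of that pairing step (treating each pose as a flat 6-vector X, Y, Z, omega, phi, kappa is our simplification, not the authors' exact formulation):

```python
import math

def best_image_pairs(thermal_eops, rgb_eops):
    """For each thermal image, pick the RGB image whose exterior orientation
    parameters (a 6-vector: X, Y, Z, omega, phi, kappa) are nearest in
    Euclidean distance. Returns a list of (thermal_index, rgb_index) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [
        (i, min(range(len(rgb_eops)), key=lambda j: dist(t, rgb_eops[j])))
        for i, t in enumerate(thermal_eops)
    ]
```

In practice one would weight positional and angular components differently, since metres and radians are not directly comparable; the sketch ignores that for brevity.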
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 162 - 175 [article]
Copies (3):
Barcode 081-2019051 | Call number RAB | Medium Journal | Location Documentation centre | Section In storage L003 | Availability Available
Barcode 081-2019053 | Call number DEP-RECP | Medium Journal | Location LASTIG | Section Deposited in unit | Availability Not for loan
Barcode 081-2019052 | Call number DEP-RECF | Medium Journal | Location Nancy | Section Deposited in unit | Availability Not for loan

Albedo estimation for real-time 3D reconstruction using RGB-D and IR data / Patrick Stotko in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Albedo estimation for real-time 3D reconstruction using RGB-D and IR data
Document type: Article/Communication
Authors: Patrick Stotko, Author; Michael Weinmann, Author; Reinhard Klein, Author
Publication year: 2019
Pages: pp 213 - 225
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] albedo
[IGN terms] infrared image
[IGN terms] RGB image
[IGN terms] wavelength
[IGN terms] energy minimization method
[IGN terms] 3D reconstruction
[IGN terms] reflectance
[IGN terms] image segmentation
[IGN terms] real time
[IGN terms] image texture
Abstract (author's): Reconstructing scenes in real time using low-cost sensors has gained increasing attention in recent research and has enabled numerous applications in graphics, vision, and robotics. While current techniques offer substantial improvements in the quality of the reconstructed geometry, the realism of the overall appearance still lags, as reconstructing accurate surface appearance is highly challenging due to the complex interplay of surface geometry, reflectance properties and surrounding illumination. We present a novel approach that reconstructs both the geometry and the spatially varying surface albedo of a scene from RGB-D and IR data obtained with commodity sensors. Compared to previous approaches, ours offers improved robustness and a significant speed-up, enough to meet real-time requirements. To this end, we exploit scene segmentation to improve albedo estimation: the resulting segment-wise coupling of IR and RGB data takes into account the wavelength characteristics of the different materials in the scene. The estimated albedo is integrated directly into the dense volumetric reconstruction framework using a novel weighting scheme to generate high-quality results. In our evaluation, we demonstrate that the approach captures albedo in complicated scenarios, including complex, high-frequency and strongly varying lighting as well as shadows.
Record number: A2019-141
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.01.018
Online publication date: 04/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.01.018
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92479
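Integrating per-frame albedo estimates into a dense volumetric model with a weighting scheme, as the abstract above mentions, can be sketched as a running weighted average per voxel, in the spirit of TSDF-style fusion. The paper's actual weighting scheme differs and is not reproduced here; this is an illustration only, with all names ours:

```python
def integrate_albedo(voxel, albedo_obs, weight):
    """Fold one per-frame albedo observation into a voxel's running weighted
    average. voxel is a dict with keys 'albedo' and 'weight'; the incoming
    weight could encode, e.g., viewing angle or segment confidence."""
    w_old = voxel['weight']
    w_new = w_old + weight
    voxel['albedo'] = (voxel['albedo'] * w_old + albedo_obs * weight) / w_new
    voxel['weight'] = w_new
    return voxel
```

Repeated calls converge toward the weighted mean of all observations, so noisy or glancing-angle frames (given small weights) perturb the stored albedo only slightly.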
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019) . - pp 213 - 225 [article]
Copies (3):
Barcode 081-2019041 | Call number RAB | Medium Journal | Location Documentation centre | Section In storage L003 | Availability Available
Barcode 081-2019043 | Call number DEP-RECP | Medium Journal | Location LASTIG | Section Deposited in unit | Availability Not for loan
Barcode 081-2019042 | Call number DEP-RECF | Medium Journal | Location Nancy | Section Deposited in unit | Availability Not for loan

Further documents in this category (titles only):
- Complete 3D scene parsing from an RGBD image / Chuhang Zou in International journal of computer vision, vol 127 n° 2 (February 2019)
- Estimation de profondeur à partir d'images monoculaires par apprentissage profond / Michel Moukari (2019)
- Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors / Shibiao Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
- Depth-based hand pose estimation: Methods, data, and challenges / James Steven Supančič in International journal of computer vision, vol 126 n° 11 (November 2018)
- Configurable 3D scene synthesis and 2D image rendering with per-pixel ground truth using stochastic grammars / Chenfanfu Jiang in International journal of computer vision, vol 126 n° 9 (September 2018)
- Detecting newly grown tree leaves from unmanned-aerial-vehicle images using hyperspectral target detection techniques / Chinsu Lin in ISPRS Journal of photogrammetry and remote sensing, vol 142 (August 2018)