[Issue or bulletin]
vol 186 (April 2022) is an issue of ISPRS Journal of photogrammetry and remote sensing / International society for photogrammetry and remote sensing (1980 -) (1990 -)
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability |
---|---|---|---|---|---|
081-2022041 | SL | Journal | Documentation centre | Journals in reading room | Available |
081-2022043 | DEP-RECP | Journal | LaSTIG | Unit repository | Not for loan |
081-2022042 | DEP-RECF | Journal | Nancy | Unit repository | Not for loan |
Contents


Direct photogrammetry with multispectral imagery for UAV-based snow depth estimation / Kathrin Maier in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: Direct photogrammetry with multispectral imagery for UAV-based snow depth estimation
Document type: Article/Communication
Authors: Kathrin Maier; Andrea Nascetti; Ward van Pelt; Gunhild Rosqvist
Publication year: 2022
Pages: pp. 1-18
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] principal component analysis
[IGN terms] infrared band
[IGN terms] thickness
[IGN terms] root mean square error
[IGN terms] direct georeferencing
[IGN terms] UAV-acquired imagery
[IGN terms] multiband imagery
[IGN terms] snowpack
[IGN terms] digital surface model
[IGN terms] aerial photogrammetry
[IGN terms] real-time kinematic positioning
[IGN terms] model quality
[IGN terms] 3D reconstruction
[IGN terms] structure-from-motion
[IGN terms] Sweden
Abstract: (Author) More accurate snow quality predictions are needed to economically and socially support communities in a changing Arctic environment. This contrasts with the current availability of affordable and efficient snow monitoring methods. In this study, a novel approach is presented to determine spatial snow depth distribution in challenging alpine terrain that was tested during a field campaign performed in the Tarfala valley, Kebnekaise mountains, northern Sweden, in April 2019. The combination of a multispectral camera and an Unmanned Aerial Vehicle (UAV) was used to derive three-dimensional (3D) snow surface models via Structure from Motion (SfM) with direct georeferencing. The main advantage over conventional photogrammetric surveys is the utilization of accurate Real-Time Kinematic (RTK) positioning, which enables direct georeferencing of the images and therefore eliminates the need for ground control points. The proposed method is capable of producing high-resolution 3D snow-covered surface models (7 cm/pixel) of alpine areas up to eight hectares in a fast, reliable and affordable way. The test sites’ average snow depth was 160 cm with an average standard deviation of 78 cm. The overall Root-Mean-Square Errors (RMSE) of the snow depth range from 11.52 cm for data acquired in ideal surveying conditions to 41.03 cm in aggravated light and wind conditions. Results of this study suggest that the red components in the electromagnetic spectrum, i.e., the red, red edge, and near-infrared (NIR) band, contain the majority of information used in photogrammetric processing. The experiments highlighted a significant influence of the multispectral imagery on the quality of the final snow depth estimation, as well as a strong potential to reduce processing times and computational resources by limiting the dimensionality of the imagery through the application of a Principal Component Analysis (PCA) before the photogrammetric 3D reconstruction. The proposed method helps close the scale gap between discrete point measurements and regional-scale remote sensing and complements large-scale remote sensing data and snow model output with an adequate validation source.
Record number: A2022-066
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.01.020
Online publication date: 09/02/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.01.020
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99783
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022). - pp. 1-18 [article]
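The abstract above reports applying a Principal Component Analysis to the multispectral bands before the photogrammetric 3D reconstruction and evaluating snow depth with an RMSE. As a minimal, hypothetical sketch of those two ideas in NumPy: the snow-free surface model, the probe-style reference depths and all array shapes below are illustrative assumptions, not the authors' pipeline (the abstract does not state how snow depth or the reference values were obtained).

```python
import numpy as np

def pca_reduce(bands: np.ndarray, n_components: int = 1) -> np.ndarray:
    """Project a (rows, cols, n_bands) multispectral image onto its leading
    principal components to shrink the photogrammetric input (illustrative only)."""
    rows, cols, n_bands = bands.shape
    flat = bands.reshape(-1, n_bands).astype(np.float64)
    flat -= flat.mean(axis=0)                      # centre each band
    cov = np.cov(flat, rowvar=False)               # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]       # leading components
    return (flat @ top).reshape(rows, cols, n_components)

def snow_depth_rmse(snow_dsm, snow_free_dsm, probe_rows, probe_cols, probe_depths):
    """Snow depth as the difference of a snow-covered and a snow-free surface model,
    with RMSE against reference depths at given pixels (all hypothetical inputs)."""
    depth = snow_dsm - snow_free_dsm
    residuals = depth[probe_rows, probe_cols] - probe_depths
    return depth, float(np.sqrt(np.mean(residuals ** 2)))

# Toy usage with random data, just to show the shapes involved.
img = np.random.rand(100, 120, 5)                  # e.g. 5 spectral bands
pc1 = pca_reduce(img, n_components=1)
dsm_snow = np.random.rand(100, 120) * 3.0
dsm_bare = np.random.rand(100, 120) * 0.5
rows, cols = np.array([10, 50, 90]), np.array([20, 60, 100])
probes = np.array([1.6, 1.4, 1.8])                 # metres
_, rmse = snow_depth_rmse(dsm_snow, dsm_bare, rows, cols, probes)
print(f"snow depth RMSE: {rmse:.2f} m")
```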
VD-LAB: A view-decoupled network with local-global aggregation bridge for airborne laser scanning point cloud classification / Jihao Li in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: VD-LAB: A view-decoupled network with local-global aggregation bridge for airborne laser scanning point cloud classification
Document type: Article/Communication
Authors: Jihao Li; Martin Weinmann; Xian Sun; et al.
Publication year: 2022
Pages: pp. 19-33
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] aggregation of details
[IGN terms] deep learning
[IGN terms] classification accuracy
[IGN terms] model quality
[IGN terms] point cloud
[IGN terms] airborne laser ranging
Abstract: (Author) Airborne Laser Scanning (ALS) point cloud classification is a valuable and practical task in the fields of photogrammetry and remote sensing. It plays an extremely important role in many applications in surveying, monitoring, planning, production and daily life. Recently, driven by the wave of deep learning, the classification of ALS point clouds has also been gradually shifting from traditional feature design to careful deep network architecture construction. Although significant progress has been achieved by leveraging deep learning technology, there are still some matters demanding prompt solutions: (1) the coupling phenomenon of high-level semantic features from multiple fields of view; (2) information propagation without aggregated local–global features at different levels of the symmetrical structure; (3) quite serious class-imbalanced distribution problems in large-scale ALS point clouds. In this paper, to tackle these matters, we propose a novel View-Decoupled Network with Local–global Aggregation Bridge (VD-LAB) model. More concretely, a View-Decoupled (VD) grouping method is set at the deepest layer of the network. Then, we establish a Local–global Aggregation Bridge (LAB) between the down-sampling and up-sampling paths of the same level. After that, a Self-Amelioration (SA) loss is taken as the optimization objective to train the whole model in an end-to-end manner. Extensive experiments on four challenging ALS point cloud datasets (LASDU, US3D, ISPRS 3D and GML) demonstrate that our VD-LAB is able to achieve state-of-the-art performance in terms of Overall Accuracy (OA) and mean F1-score (e.g., reaching 88.01% and 78.42% for the LASDU dataset, respectively) with very few model parameters, and that it possesses a strong generalization capability. In addition, the visualization of the achieved results also reveals more satisfactory classification for some categories, such as Water in the US3D dataset and Powerline in the ISPRS 3D dataset. Finally, the effect of each module in VD-LAB is assessed in detailed ablation analyses as well.
Record number: A2022-067
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.01.012
Online publication date: 10/02/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.01.012
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99789
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022). - pp. 19-33 [article]
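The abstract reports results as Overall Accuracy (OA) and mean F1-score over the predicted point classes. For reference, here is a small NumPy sketch of those two standard metrics computed from integer class labels via a confusion matrix; the label arrays and class count are made-up inputs, not data from the paper.

```python
import numpy as np

def overall_accuracy_and_mean_f1(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    """Compute Overall Accuracy (OA) and class-averaged F1-score from integer labels."""
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)

    oa = np.trace(cm) / cm.sum()

    tp = np.diag(cm).astype(np.float64)
    fp = cm.sum(axis=0) - tp          # predicted as class c but wrong
    fn = cm.sum(axis=1) - tp          # class-c points that were missed
    precision = np.divide(tp, tp + fp, out=np.zeros_like(tp), where=(tp + fp) > 0)
    recall = np.divide(tp, tp + fn, out=np.zeros_like(tp), where=(tp + fn) > 0)
    denom = precision + recall
    f1 = np.divide(2 * precision * recall, denom, out=np.zeros_like(tp), where=denom > 0)
    return oa, f1.mean()

# Toy example with 3 classes and partly correct random predictions.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=1000)
pred = np.where(rng.random(1000) < 0.8, truth, rng.integers(0, 3, size=1000))
oa, mean_f1 = overall_accuracy_and_mean_f1(truth, pred, n_classes=3)
print(f"OA = {oa:.3f}, mean F1 = {mean_f1:.3f}")
```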
PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data
Document type: Article/Communication
Authors: Qi Zhang; Linlin Ge; Scott Hensley; et al.
Publication year: 2022
Pages: pp. 123-139
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] discriminant analysis
[IGN terms] unsupervised learning
[IGN terms] deep learning
[IGN terms] L band
[IGN terms] lidar data
[IGN terms] boreal forest
[IGN terms] tropical forest
[IGN terms] Global Ecosystem Dynamics Investigation lidar
[IGN terms] vegetation height
[IGN terms] tree height
[IGN terms] UAV-acquired imagery
[IGN terms] synthetic aperture radar interferometry
[IGN terms] pansharpening (image fusion)
[IGN terms] radar polarimetry
[IGN terms] geometric resolution
[IGN terms] generative adversarial network
[IGN terms] point cloud
Abstract: (Author) This paper describes a deep-learning-based unsupervised forest height estimation method based on the synergy of the high-resolution L-band repeat-pass Polarimetric Synthetic Aperture Radar Interferometry (PolInSAR) and low-resolution large-footprint full-waveform Light Detection and Ranging (LiDAR) data. Unlike traditional PolInSAR-based methods, the proposed method reformulates the forest height inversion as a pan-sharpening process between the low-resolution LiDAR height and the high-resolution PolSAR and PolInSAR features. A tailored Generative Adversarial Network (GAN) called PolGAN with one generator and dual (coherence and spatial) discriminators is proposed to this end, where a progressive pan-sharpening strategy underpins the generator to overcome the significant difference between the spatial resolutions of LiDAR and SAR-related inputs. Forest height estimates with high spatial resolution and vertical accuracy are generated through a continuous generative and adversarial process. UAVSAR PolInSAR and LVIS LiDAR data collected over tropical and boreal forest sites are used for experiments. An ablation study is conducted over the boreal site, evidencing the superiority of the progressive generator with dual discriminators employed in PolGAN (RMSE: 1.21 m) in comparison with the standard generator with dual discriminators (RMSE: 2.43 m) and the progressive generator with a single coherence (RMSE: 2.74 m) or spatial discriminator (RMSE: 5.87 m). Besides that, by reducing the dependency on theoretical models and utilizing the shape, texture, and spatial information embedded in the high-spatial-resolution features, the PolGAN method achieves an RMSE of 2.37 m over the tropical forest site, which is much more accurate than the traditional PolInSAR-based Kapok method (RMSE: 8.02 m).
Record number: A2022-195
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.02.008
Online publication date: 17/02/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.02.008
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99962
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022). - pp. 123-139 [article]
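The abstract describes PolGAN as one generator trained against two discriminators (a coherence one and a spatial one), with the generator pan-sharpening low-resolution LiDAR heights using high-resolution PolInSAR features. Below is a heavily simplified, hypothetical PyTorch sketch of such a one-generator / dual-discriminator training step: the layer sizes, feature count, x4 upsampling, loss weighting and the toy "real" height samples are all invented for illustration and do not reproduce the published (unsupervised, progressive) architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Fuse a low-resolution LiDAR height map (1 band) with high-resolution
    PolInSAR/PolSAR features (6 bands assumed here) into a high-resolution height map."""
    def __init__(self, feat_channels: int = 6, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(1 + feat_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, lr_height, hr_features):
        up = F.interpolate(lr_height, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return self.net(torch.cat([up, hr_features], dim=1))

def make_discriminator(in_channels: int) -> nn.Module:
    """Small PatchGAN-style critic reused for the 'coherence' and 'spatial' roles."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=1, padding=1),
    )

gen = Generator()
d_coh = make_discriminator(in_channels=1 + 6)  # judges height jointly with PolInSAR features
d_spa = make_discriminator(in_channels=1)      # judges spatial plausibility of the height map
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(list(d_coh.parameters()) + list(d_spa.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Toy tensors standing in for real data: 16x16 LiDAR patches, 64x64 feature patches,
# and a "real" height sample for the critics (the paper works without such supervision).
lr_height = torch.rand(8, 1, 16, 16)
hr_features = torch.rand(8, 6, 64, 64)
ref_height = torch.rand(8, 1, 64, 64)

# Discriminator step: both critics learn to separate real from generated heights.
fake = gen(lr_height, hr_features).detach()
loss_d = torch.zeros(())
for critic, real_in, fake_in in [
        (d_coh, torch.cat([ref_height, hr_features], 1), torch.cat([fake, hr_features], 1)),
        (d_spa, ref_height, fake)]:
    real_out, fake_out = critic(real_in), critic(fake_in)
    loss_d = loss_d + bce(real_out, torch.ones_like(real_out)) \
                    + bce(fake_out, torch.zeros_like(fake_out))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool both critics at once.
fake = gen(lr_height, hr_features)
out_coh = d_coh(torch.cat([fake, hr_features], 1))
out_spa = d_spa(fake)
loss_g = bce(out_coh, torch.ones_like(out_coh)) + bce(out_spa, torch.ones_like(out_spa))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(f"D loss {loss_d.item():.3f} | G loss {loss_g.item():.3f}")
```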
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
[article]
Title: GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes
Document type: Article/Communication
Authors: Linxi Huan; Xianwei Zheng; Jianya Gong
Publication year: 2022
Pages: pp. 301-314
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] 3D geolocated data
[IGN terms] geometry
[IGN terms] RGB image
[IGN terms] mesh
[IGN terms] semantic modelling
[IGN terms] 3D object
[IGN terms] 3D reconstruction
[IGN terms] object reconstruction
[IGN terms] indoor scene
Abstract: (Author) Semantic indoor 3D modeling with multi-task deep neural networks is an efficient and low-cost way of reconstructing an indoor scene with a geometrically complete room structure and semantic 3D individuals. Challenged by the complexity and clutter of indoor scenarios, the semantic reconstruction quality of current methods is still limited by the insufficient exploration and learning of 3D geometry information. To this end, this paper proposes an end-to-end multi-task neural network for geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes (termed GeoRec). In the proposed GeoRec, we build a geometry extractor that can effectively learn geometry-enhanced feature representations from depth data, to improve the estimation accuracy of layout, camera pose and 3D object bounding boxes. We also introduce a novel object mesh generator that strengthens the reconstruction robustness of GeoRec to indoor occlusion with geometry-enhanced implicit shape embedding. With the parsed scene semantics and geometries, the proposed GeoRec reconstructs an indoor scene by placing reconstructed object mesh models with 3D object detection results in the estimated layout cuboid. Extensive experiments conducted on two benchmark datasets show that the proposed GeoRec yields outstanding performance in mean chamfer distance error for object reconstruction on the challenging Pix3D dataset, 70.45% mAP for 3D object detection and 77.1% 3D mIoU for layout estimation on the commonly used SUN RGB-D dataset. Notably, the mesh reconstruction sub-network of GeoRec trained on Pix3D can be directly transferred to SUN RGB-D without any fine-tuning, manifesting a high generalization ability.
Record number: A2022-235
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2022.02.014
Online publication date: 03/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.02.014
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100139
in ISPRS Journal of photogrammetry and remote sensing > vol 186 (April 2022). - pp. 301-314 [article]
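The abstract evaluates object reconstruction with the mean chamfer distance error and detection/layout with mAP and 3D mIoU. As a point of reference, here is a small NumPy sketch of the symmetric chamfer distance between two sampled point clouds; how the paper samples meshes and normalises the metric is not reproduced here, and the point sets below are toy data.

```python
import numpy as np

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric chamfer distance between an (N, 3) and an (M, 3) point set:
    mean squared distance from each point to its nearest neighbour in the
    other set, summed over both directions."""
    # Pairwise squared distances, shape (N, M).
    diff = points_a[:, None, :] - points_b[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    a_to_b = d2.min(axis=1).mean()   # each point of A to its closest point of B
    b_to_a = d2.min(axis=0).mean()   # each point of B to its closest point of A
    return float(a_to_b + b_to_a)

# Toy usage: compare a sampled unit sphere with a slightly perturbed copy.
rng = np.random.default_rng(1)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
noisy = sphere + rng.normal(scale=0.01, size=sphere.shape)
print(f"chamfer distance: {chamfer_distance(sphere, noisy):.6f}")
```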