ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980 -), vol 187. Published on: 01/05/2022
[issue or bulletin]
is an issue of ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980 -)
Copies (3)
Barcode | Call number | Format | Location | Section | Availability |
---|---|---|---|---|---|
081-2022051 | SL | Journal | Centre de documentation | Journals in reading room | Available |
081-2022053 | DEP-RECP | Journal | LASTIG | Deposit in unit | Not for loan |
081-2022052 | DEP-RECF | Journal | Nancy | Deposit in unit | Not for loan |
Contents
Weakly supervised semantic segmentation of airborne laser scanning point clouds / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Weakly supervised semantic segmentation of airborne laser scanning point clouds
Document type: Article/Communication
Authors: Yaping Lin, Author; M. George Vosselman, Author; Michael Ying Yang, Author
Year of publication: 2022
Pages: pp 79 - 100
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] overlap
[IGN terms] supervised classification
[IGN terms] labelled training data
[IGN terms] laser data
[IGN terms] 3D geolocated data
[IGN terms] semantic heterogeneity
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (Author) While modern deep learning algorithms for semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Pointwise annotation of 3D point clouds, especially for large scale ALS datasets, is extremely time-consuming. Weak supervision, which needs only a small annotation effort while allowing networks to achieve comparable performance, is an alternative solution. Assigning a weak label to a subcloud, a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. Then the pseudo labels are taken as the input of a segmentation network which gives the final predictions on the testing data. As the quality of pseudo labels determines the performance of the segmentation network on testing data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss, which considers the nearby subcloud semantic information, is introduced to enhance the awareness of the semantic heterogeneity within a subcloud. The elevation attention helps the classification network to encode more representative features for ALS point clouds. For the segmentation network, in order to effectively learn representative features from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model compared to the baseline method (Wei et al., 2020). With the same amount of labelling effort, for the ISPRS benchmark dataset, the Rotterdam dataset and the DFC2019 dataset, our method raises the overall accuracy by 0.062, 0.112 and 0.031, and the average F1 score by 0.09, 0.178 and 0.043, respectively. Our code is publicly available at https://github.com/yaping222/Weak_ALS.git.
Record number: A2022-227
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.001
Online publication date: 11/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.001
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100197
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 79 - 100 [article]
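As a concrete illustration of the supervised contrastive loss the abstract describes for learning from inaccurate pseudo labels, here is a minimal sketch in PyTorch. The tensor shapes and temperature value are illustrative assumptions, and this is not the authors' released implementation (see their GitHub repository linked above).

```python
# Sketch of a supervised contrastive loss over per-point embeddings, in the
# spirit of the abstract above. NOT the authors' released code
# (https://github.com/yaping222/Weak_ALS.git); shapes and the temperature
# are illustrative assumptions.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, pseudo_labels, temperature=0.1):
    """features: (N, D) per-point embeddings; pseudo_labels: (N,) integer pseudo labels."""
    feats = F.normalize(features, dim=1)              # compare points in cosine space
    sim = feats @ feats.t() / temperature             # (N, N) similarity logits
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(self_mask, float('-inf'))   # exclude self-pairs
    # Positives: points sharing the same pseudo label (self excluded above).
    pos_mask = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0                             # anchors with at least one positive
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_count[valid]).mean()


# Toy usage with random embeddings and pseudo labels for 6 classes.
loss = supervised_contrastive_loss(torch.randn(512, 64), torch.randint(0, 6, (512,)))
print(loss.item())
```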
Fusion of optical, radar and waveform LiDAR observations for land cover classification / Huiran Jin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Fusion of optical, radar and waveform LiDAR observations for land cover classification
Document type: Article/Communication
Authors: Huiran Jin, Author; Giorgos Mountrakis, Author
Year of publication: 2022
Pages: pp 171 - 190
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] comparative analysis
[IGN terms] vegetation map
[IGN terms] random forest classification
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] image fusion
[IGN terms] ALOS-PALSAR image
[IGN terms] Landsat-TM image
[IGN terms] multitemporal image
[IGN terms] land cover
Abstract: (Author) Land cover is an integral component for characterizing anthropogenic activity and promoting sustainable land use. Mapping the distribution and coverage of land cover at broad spatiotemporal scales largely relies on classification of remotely sensed data. Although multi-source data fusion has recently been playing an increasingly active role in land cover classification, our intensive review of current studies shows that the integration of optical, synthetic aperture radar (SAR) and light detection and ranging (LiDAR) observations has not been thoroughly evaluated. In this research, we bridged this gap by i) summarizing related fusion studies and assessing their reported accuracy improvements, and ii) conducting our own case study where, for the first time, fusion of optical, radar and waveform LiDAR observations and the associated improvements in classification accuracy are assessed using data collected by spaceborne platforms or, in the LiDAR case, an appropriately simulated platform. Multitemporal Landsat-5/Thematic Mapper (TM) and Advanced Land Observing Satellite-1/Phased Array type L-band SAR (ALOS-1/PALSAR) imagery acquired in the Central New York (CNY) region close to the collection of airborne waveform LVIS (Land, Vegetation, and Ice Sensor) data were examined. Classification was conducted using a random forest algorithm with different feature sets, in terms of sensor and seasonality, as input variables. Results indicate that the combined spectral, scattering and vertical structural information provided the maximum discriminative capability among different land cover types, giving rise to the highest overall accuracy of 83% (2–19% and 9–35% superior to the two-sensor and single-sensor scenarios with overall accuracies of 64–81% and 48–74%, respectively). Greater improvement was achieved when combining multitemporal Landsat images with LVIS-derived canopy height metrics as opposed to PALSAR features, suggesting that LVIS contributed more useful thematic information complementary to spectral data and beneficial to the classification task, especially for vegetation classes. With the Global Ecosystem Dynamics Investigation (GEDI), a recently launched LiDAR instrument of similar properties to the LVIS sensor, now operating onboard the International Space Station (ISS), it is our hope that this research will act as a literature summary and offer guidelines for further applications of multi-date and multi-type remotely sensed data fusion for improved land cover classification.
Record number: A2022-228
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.010
Online publication date: 17/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.010
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100214
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 171 - 190 [article]
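The workflow the abstract outlines, stacking per-pixel spectral, scattering and canopy-structure features and classifying them with a random forest, can be sketched as below. The feature blocks, their dimensions and the hyper-parameters are placeholders, and scikit-learn's RandomForestClassifier stands in for whichever implementation the study actually used.

```python
# Illustrative sketch of feature-level fusion + random forest classification,
# following the abstract above. Array names, shapes and hyper-parameters are
# assumptions, not the authors' configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples = 1000
# Hypothetical per-pixel feature blocks: multitemporal Landsat-TM spectral bands,
# ALOS-PALSAR backscatter, and LVIS waveform canopy-height metrics.
optical = rng.normal(size=(n_samples, 12))   # e.g. 2 dates x 6 TM bands
radar = rng.normal(size=(n_samples, 4))      # e.g. HH/HV backscatter for 2 dates
lidar = rng.normal(size=(n_samples, 5))      # e.g. relative-height metrics
labels = rng.integers(0, 8, size=n_samples)  # placeholder land cover classes

# Feature-level fusion: concatenate the sensor blocks per pixel.
fused = np.hstack([optical, radar, lidar])
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```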
Individual tree detection and estimation of stem attributes with mobile laser scanning along boreal forest roads / Raul de Paula Pires in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Individual tree detection and estimation of stem attributes with mobile laser scanning along boreal forest roads
Document type: Article/Communication
Authors: Raul de Paula Pires, Author; Kenneth Olofsson, Author; Henrik J. Persson, Author; et al., Author
Year of publication: 2022
Pages: pp 211 - 224
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] data collection
[IGN terms] tree detection
[IGN terms] diameter at breast height
[IGN terms] 3D geolocated data
[IGN terms] boreal forest
[IGN terms] forest inventory (techniques and methods)
[IGN terms] mobile lidar
[IGN terms] road
[IGN terms] point cloud
[IGN terms] Sweden
[IGN terms] trunk
[IGN terms] wood volume
Abstract: (Author) The collection of field-reference data is a key task in remote sensing-based forest inventories. However, traditional collection methods demand extensive personnel resources, so field-reference data collection would benefit from more automated methods. In this study, we propose a method for individual tree detection (ITD) and stem attribute estimation based on a car-mounted mobile laser scanner (MLS) operating along forest roads. We assessed its performance in six ranges with increasing mean distance from the roadside. We used a Riegl VUX-1LR sensor operating at a high repetition rate, thus providing detailed cross sections of the stems. The algorithm we propose was designed for this sensor configuration, identifying the cross sections (or arcs) in the point cloud and aggregating them into single trees. Furthermore, we estimated diameter at breast height (DBH), stem profiles, and stem volume for each detected tree. The accuracy of ITD, DBH, and stem volume estimates varied with the trees' distance from the road. In general, the proximity of branches to the sensor in the 0–10 m zone caused commission errors in ITD and overestimation of stem attributes in this zone. At 50–60 m from the roadside, stems were often occluded by branches, causing omissions and underestimation of stem attributes in this area. ITD precision and sensitivity varied from 82.8% to 100% and 62.7% to 96.7%, respectively. The RMSE of DBH estimates ranged from 1.81 cm (6.38%) to 4.84 cm (16.9%). Stem volume estimates had RMSEs ranging from 0.0800 m3 (10.1%) to 0.190 m3 (25.7%), depending on the distance to the sensor. The average proportion of detected reference volume was highly affected by the performance of ITD in the different zones. This proportion was highest from 0 to 10 m (113%), a zone that concentrated most ITD commission errors, and lowest from 50 to 60 m (66.6%), mostly due to the omission errors in this area. In the other zones, this proportion ranged from 87.5% to 98.5%. These accuracies are in line with those obtained by other state-of-the-art MLS and terrestrial laser scanner (TLS) methods. The car-mounted MLS system used has the potential to collect data efficiently in large-scale inventories, being able to scan approximately 80 ha of forest per day depending on the survey setup. This data collection method could be used to increase the amount of field-reference data available in remote sensing-based forest inventories, improve models for area-based estimations, and support precision forestry development.
Record number: A2022-229
Author affiliation: non IGN
Theme: FORESTRY/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.004
Online publication date: 18/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.004
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100215
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 211 - 224 [article]
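To make the stem-attribute step concrete: once an arc (a partial stem cross-section) has been isolated from the point cloud, a circle can be fitted to it and its diameter read off as a DBH estimate. The sketch below applies a generic least-squares (Kåsa) circle fit to a synthetic noisy arc; it illustrates the idea and is not the exact estimator used in the paper.

```python
# Sketch: estimate a stem diameter from one laser cross-section ("arc") by
# least-squares circle fitting. The Kåsa algebraic fit is a common generic
# choice, not necessarily the estimator used in the paper above.
import numpy as np


def fit_circle(xy):
    """Algebraic (Kåsa) circle fit. xy: (N, 2) points of one stem cross-section."""
    x, y = xy[:, 0], xy[:, 1]
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in a least-squares sense.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, radius


# Synthetic partial arc: the sensor only sees the road-facing side of the stem.
theta = np.linspace(-0.6 * np.pi, 0.6 * np.pi, 60)
true_r = 0.15  # a 30 cm DBH stem
arc = np.column_stack([true_r * np.cos(theta), true_r * np.sin(theta)])
arc += np.random.default_rng(1).normal(scale=0.005, size=arc.shape)  # ranging noise

cx, cy, r = fit_circle(arc)
print(f"estimated DBH: {2 * r * 100:.1f} cm")
```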
Multi-modal temporal attention models for crop mapping from satellite time series / Vivien Sainte Fare Garnot in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Multi-modal temporal attention models for crop mapping from satellite time series
Document type: Article/Communication
Authors: Vivien Sainte Fare Garnot, Author; Loïc Landrieu, Author; Nesrine Chehata, Author
Year of publication: 2022
Projects: 3-projet - see note
Pages: pp 294 - 305
General note: Bibliography. This work was partly supported by ASP, the French Payment Agency.
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] attention (machine learning)
[IGN terms] C-band
[IGN terms] agricultural map
[IGN terms] image fusion
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] agricultural parcel
[IGN terms] Pastis
[IGN terms] image segmentation
[IGN terms] time series
[IGN terms] cultivated area
Abstract: (author) Optical and radar satellite time series are synergistic: optical images contain rich spectral information, while C-band radar captures useful geometrical information and is immune to cloud cover. Motivated by the recent success of temporal attention-based methods across multiple crop mapping tasks, we propose to investigate how these models can be adapted to operate on several modalities. We implement and evaluate multiple fusion schemes, including a novel approach and simple adjustments to the training procedure, significantly improving performance and efficiency with little added complexity. We show that most fusion schemes have advantages and drawbacks, making them relevant for specific settings. We then evaluate the benefit of multimodality across several tasks: parcel classification, pixel-based segmentation, and panoptic parcel segmentation. We show that by leveraging both optical and radar time series, multimodal temporal attention-based models can outmatch single-modality models in terms of performance and resilience to cloud cover. To conduct these experiments, we augment the PASTIS dataset (Garnot and Landrieu, 2021a) with spatially aligned radar image time series. The resulting dataset, PASTIS-R, constitutes the first large-scale, multimodal, and open-access satellite time series dataset with semantic and instance annotations. (Dataset available at: https://zenodo.org/record/5735646)
Record number: A2022-157
Author affiliation: UGE-LASTIG+Ext (2020- )
Other associated URL: to ArXiv
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.012
Online publication date: 24/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.012
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100365
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 294 - 305 [article]
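As an illustration of one simple fusion scheme among those the abstract compares, the sketch below pools each modality's time series with a learned temporal attention and concatenates the pooled features before a linear classifier. The module names, dimensions and the fusion choice are assumptions made for illustration; the authors' actual architectures are described in the paper and the PASTIS-R release linked above.

```python
# Hedged sketch of late fusion with temporal attention pooling for multimodal
# satellite time series. Not the authors' model; dimensions are placeholders.
import torch
import torch.nn as nn


class TemporalAttentionPool(nn.Module):
    """Collapse a (B, T, D) time series to (B, D) with learned attention weights."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                                # x: (B, T, D)
        weights = torch.softmax(self.score(x), dim=1)    # (B, T, 1) over dates
        return (weights * x).sum(dim=1)                  # (B, D)


class LateFusionClassifier(nn.Module):
    """Pool each modality separately, then concatenate and classify."""

    def __init__(self, d_opt, d_sar, n_classes):
        super().__init__()
        self.pool_opt = TemporalAttentionPool(d_opt)
        self.pool_sar = TemporalAttentionPool(d_sar)
        self.head = nn.Linear(d_opt + d_sar, n_classes)

    def forward(self, optical_ts, sar_ts):               # (B, T_opt, d_opt), (B, T_sar, d_sar)
        fused = torch.cat([self.pool_opt(optical_ts), self.pool_sar(sar_ts)], dim=1)
        return self.head(fused)


# Toy usage: 4 parcels, 30 optical dates x 10 bands, 60 radar dates x 3 features.
model = LateFusionClassifier(d_opt=10, d_sar=3, n_classes=20)
logits = model(torch.randn(4, 30, 10), torch.randn(4, 60, 3))
print(logits.shape)  # torch.Size([4, 20])
```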