ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980-). Vol 158, published on 01/12/2019
[Issue or bulletin]
is an issue of ISPRS Journal of photogrammetry and remote sensing / International Society for Photogrammetry and Remote Sensing (1980-)
Copies (3)
Barcode | Call number | Format | Location | Section | Availability |
---|---|---|---|---|---|
081-2019121 | RAB | Journal | Documentation centre | In reserve L003 | Available |
081-2019123 | DEP-RECP | Journal | LASTIG | Deposited in unit | Not for loan |
081-2019122 | DEP-RECF | Journal | Nancy | Deposited in unit | Not for loan |
Contents (indexed articles)
Combining Sentinel-1 and Sentinel-2 Satellite image time series for land cover mapping via a multi-source deep learning architecture / Dino Ienco in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Combining Sentinel-1 and Sentinel-2 Satellite image time series for land cover mapping via a multi-source deep learning architecture
Document type: Article/Paper
Authors: Dino Ienco, Author; Roberto Interdonato, Author; Raffaele Gaetano, Author; Ho Tong Minh Dinh, Author
Publication year: 2019
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] Burkina Faso
[IGN terms] vegetation map
[IGN terms] random forest classification
[IGN terms] convolutional neural network classification
[IGN terms] image fusion
[IGN terms] high-resolution imagery
[IGN terms] multiband imagery
[IGN terms] speckled radar imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] Sentinel-SAR imagery
[IGN terms] land cover
[IGN terms] Réunion Island
[IGN terms] time series
[IGN terms] land use
Abstract: (author) The huge amount of data currently produced by modern Earth Observation (EO) missions has allowed for the design of advanced machine learning techniques able to support complex Land Use/Land Cover (LULC) mapping tasks. The Copernicus programme developed by the European Space Agency provides, with missions such as Sentinel-1 (S1) and Sentinel-2 (S2), radar and optical (multi-spectral) imagery, respectively, at 10 m spatial resolution with a revisit time of around 5 days. Such high temporal resolution allows the collection of Satellite Image Time Series (SITS) that support a plethora of Earth surface monitoring tasks. How to effectively combine the complementary information provided by such sensors remains an open problem in the remote sensing field. In this work, we propose a deep learning architecture to combine information coming from S1 and S2 time series, namely TWINNS (TWIn Neural Networks for Sentinel data), able to discover spatial and temporal dependencies in both types of SITS. The proposed architecture is devised to boost the land cover classification task by leveraging two levels of complementarity, i.e., the interplay between radar and optical SITS as well as the synergy between spatial and temporal dependencies. Experiments carried out on two study sites characterized by different land cover characteristics (i.e., the Koumbia site in Burkina Faso and Reunion Island, an overseas department of France in the Indian Ocean) demonstrate the significance of our proposal.
Record number: A2019-544
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.016
Online publication date: 27/09/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94186
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019) [article]
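As a reading aid only, the sketch below illustrates the general idea of a two-branch ("twin") network that encodes a Sentinel-1 and a Sentinel-2 time series separately and fuses the two summaries for land cover classification. It is not the authors' TWINNS implementation; the GRU encoders, layer sizes, band counts and class count are assumptions.

```python
# Minimal two-branch sketch (NOT the TWINNS architecture itself): each sensor's
# time series gets its own temporal encoder, and the two summaries are fused by
# concatenation before a small classification head. All sizes are illustrative.
import torch
import torch.nn as nn

class TwinSITSClassifier(nn.Module):
    def __init__(self, s1_bands=2, s2_bands=10, hidden=64, n_classes=8):
        super().__init__()
        self.enc_s1 = nn.GRU(s1_bands, hidden, batch_first=True)  # radar branch
        self.enc_s2 = nn.GRU(s2_bands, hidden, batch_first=True)  # optical branch
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )

    def forward(self, s1_series, s2_series):
        # s1_series: (batch, T1, s1_bands); s2_series: (batch, T2, s2_bands)
        _, h1 = self.enc_s1(s1_series)
        _, h2 = self.enc_s2(s2_series)
        fused = torch.cat([h1[-1], h2[-1]], dim=1)  # concatenate per-sensor summaries
        return self.head(fused)

# Toy usage: 4 samples, 20 S1 dates (VV/VH) and 15 S2 dates (10 bands).
model = TwinSITSClassifier()
print(model(torch.randn(4, 20, 2), torch.randn(4, 15, 10)).shape)  # torch.Size([4, 8])
```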
An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data / Puzhao Zhang in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: An implicit radar convolutional burn index for burnt area mapping with Sentinel-1 C-band SAR data
Document type: Article/Paper
Authors: Puzhao Zhang, Author; Andrea Nascetti, Author; Yifang Ban, Author; Maoguo Gong, Author
Publication year: 2019
Pages: pp 50 - 62
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] California (United States)
[IGN terms] vegetation map
[IGN terms] convolutional neural network classification
[IGN terms] change detection
[IGN terms] high-resolution imagery
[IGN terms] multiband imagery
[IGN terms] multitemporal imagery
[IGN terms] speckled radar imagery
[IGN terms] fire
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] Short Waves InfraRed
Abstract: (author) Compared with optical sensors, the all-weather, day-and-night imaging capability of Synthetic Aperture Radar (SAR) makes it competitive for burnt area mapping. This study investigates the potential of Sentinel-1 C-band SAR sensors in burnt area mapping with an implicit Radar Convolutional Burn Index (RCBI). Based on multitemporal Sentinel-1 SAR data, a convolutional network-based classification framework is proposed to learn the RCBI for highlighting the burnt areas. We explore the mapping accuracy level that can be achieved using SAR intensity and phase information for both VV and VH polarizations. Moreover, we investigate the decorrelation of Interferometric SAR (InSAR) coherence in response to wildfire events using different temporal baselines. The experimental results on two recent fire events, the Thomas Fire (Dec. 2017) and the Carr Fire (July 2018) in California, demonstrate that the learnt RCBI has better potential than the classical log-ratio operator in highlighting burnt areas. By exploiting both VV and VH information, the developed RCBI achieved an overall mapping accuracy of 94.68% and 94.17% on the Thomas Fire and Carr Fire, respectively.
Record number: A2019-545
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.013
Online publication date: 04/10/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.013
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94189
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019). - pp 50 - 62 [article]
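For orientation, the snippet below shows the classical log-ratio change operator that the learnt RCBI is benchmarked against in the abstract; the simulated intensities and the threshold value are illustrative assumptions, not the paper's data or settings.

```python
# Classical log-ratio operator on pre-/post-fire SAR intensity (the baseline the
# learnt RCBI is compared with); data and threshold below are synthetic examples.
import numpy as np

def log_ratio(pre_intensity, post_intensity, eps=1e-6):
    """Log of the post/pre backscatter intensity ratio, per pixel."""
    return np.log((post_intensity + eps) / (pre_intensity + eps))

rng = np.random.default_rng(0)
pre = rng.gamma(shape=4.0, scale=0.02, size=(256, 256))   # simulated VV intensity
post = pre.copy()
post[64:192, 64:192] *= 0.3                               # hypothetical burnt patch
change_map = log_ratio(pre, post)
burn_mask = change_map < -0.5                             # illustrative threshold
print(f"flagged fraction: {burn_mask.mean():.2f}")
```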
On the value of corner reflectors and surface models in InSAR precise point positioning / Mengshi Yang in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: On the value of corner reflectors and surface models in InSAR precise point positioning
Document type: Article/Paper
Authors: Mengshi Yang, Author; Paco Lopez-Dekker, Author; Prabu Dheenathayalan, Author; Mingsheng Liao, Author; Ramon F. Hanssen, Author
Publication year: 2019
Pages: pp 113 - 122
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] corner reflector
[IGN terms] image correction
[IGN terms] geolocation
[IGN terms] speckled radar imagery
[IGN terms] Sentinel-SAR imagery
[IGN terms] TerraSAR-X imagery
[IGN terms] synthetic aperture radar interferometry
[IGN terms] lidar DSM
[IGN terms] Netherlands
[IGN terms] ground control point
[IGN terms] precise point positioning
[IGN terms] point cloud
Abstract: (author) To correctly interpret the estimated displacements in InSAR point clouds, especially in the built environment, these need to be linked to real-world structures. This requires the accurate and precise 3D positioning of each point. Artificial ground control points (GCPs), such as corner reflectors, serve this purpose, but since they require effort and resources, there is a need for criteria to assess their usefulness. Here we evaluate the value and necessity of using GCPs for different scenarios, with respect to the required effort, and compare this to alternatives such as digital surface models (DSM) and advanced (geo)physical corrections. We consider single-epoch as well as multi-epoch GCP deployment, reflect on the number of GCPs required in relation to the number of SAR data acquisitions, and compare this with digital surface models of different quality levels. Analyzing the geolocation performance using TerraSAR-X and Sentinel-1 data, we evaluate the pros and cons of various deployment options and show that the multi-epoch deployment of a GCP yields optimal geolocalization results in terms of precision, accuracy, and reliability.
Record number: A2019-546
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.10.006
Online publication date: 25/10/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.10.006
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94191
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019). - pp 113 - 122 [article]
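The short sketch below only illustrates the precision/accuracy vocabulary used in this abstract: given repeated SAR-derived position estimates of a surveyed corner reflector, the systematic offset measures accuracy and the spread measures precision. The local ENU frame and all numbers are invented for illustration and bear no relation to the paper's results.

```python
# Toy accuracy/precision bookkeeping against a surveyed corner-reflector position
# (local east-north-up coordinates in metres); all values are made up.
import numpy as np

gcp_enu = np.array([0.0, 0.0, 0.0])                          # surveyed reference position
rng = np.random.default_rng(1)
estimates = gcp_enu + rng.normal(loc=[0.12, -0.05, 0.30],    # hypothetical per-epoch
                                 scale=[0.08, 0.08, 0.25],   # SAR-derived estimates
                                 size=(40, 3))

bias = estimates.mean(axis=0) - gcp_enu                      # accuracy: systematic offset
spread = estimates.std(axis=0, ddof=1)                       # precision: repeatability
print("bias [m]:", np.round(bias, 3), "std [m]:", np.round(spread, 3))
```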
Deep learning for conifer/deciduous classification of airborne LiDAR 3D point clouds representing individual trees / Hamid Hamraz in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Deep learning for conifer/deciduous classification of airborne LiDAR 3D point clouds representing individual trees
Document type: Article/Paper
Authors: Hamid Hamraz, Author; Nathan B. Jacobs, Author; Marco A. Contreras, Author; Chase H. Clark, Author
Publication year: 2019
Pages: pp 219 - 230
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] deciduous tree
[IGN terms] convolutional neural network classification
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] 3D geospatial data
[IGN terms] tree crown
[IGN terms] digital surface model
[IGN terms] Pinophyta
[IGN terms] point cloud
Abstract: (author) The purpose of this study was to investigate the use of deep learning for coniferous/deciduous classification of individual trees segmented from airborne LiDAR data. To enable processing by a deep convolutional neural network (CNN), we designed two discrete representations using leaf-off and leaf-on LiDAR data: a digital surface model with four channels (DSM × 4) and a set of four 2D views (4 × 2D). A training dataset of tree crowns was generated via segmentation of tree crowns, followed by co-registration with field data. Potential mislabels due to GPS error or tree leaning were corrected using a statistical ensemble filtering procedure. Because the training data were heavily unbalanced (~8% conifers), we trained an ensemble of CNNs on random balanced sub-samples. Benchmarked against multiple traditional shallow learning methods using manually designed features, the CNNs improved accuracies by up to 14%. The 4 × 2D representation yielded classification accuracies similar to the DSM × 4 representation (~82% coniferous and ~90% deciduous) while converging faster. Further experimentation showed that early/late fusion of the channels in the representations did not affect the accuracies in a significant way. The data augmentation used for the CNN training improved the classification accuracies, but more real training instances (especially coniferous) would likely result in much stronger improvements. Leaf-off LiDAR data were the primary source of useful information, which is likely due to the perennial nature of coniferous foliage. LiDAR intensity values also proved to be useful, but normalization yielded no significant improvement. As we observed, large training data may compensate for the lack of a subset of important domain data. Lastly, the classification accuracies of overstory trees (~90%) were more balanced than those of understory trees (~90% deciduous and ~65% coniferous), which is likely due to the incomplete capture of understory tree crowns via airborne LiDAR. In domains like remote sensing and biomedical imaging, where the data contain a large amount of information and are not friendly to the human visual system, human-designed features may become suboptimal. As exemplified by this study, automatic, objective derivation of optimal features via deep learning can improve prediction tasks in such domains.
Record number: A2019-547
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.10.011
Online publication date: 03/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.10.011
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94192
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019). - pp 219 - 230 [article]
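As a reading aid, here is a minimal CNN over a 4-channel crown raster, in the spirit of the DSM × 4 representation described above; the layer stack, the 32x32 raster size and the channel semantics are assumptions, not the authors' network. A class-balanced ensemble, as described in the abstract for the ~8% conifer share, would train several such networks on random balanced subsets and combine their predictions.

```python
# Tiny CNN over 4-channel crown rasters (in the spirit of the DSM x 4 idea);
# architecture and raster size are illustrative assumptions only.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                 # two classes: coniferous vs. deciduous
)

crowns = torch.randn(8, 4, 32, 32)    # 8 segmented crowns, 4 channels each
print(cnn(crowns).shape)              # torch.Size([8, 2])
```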
Matching of TerraSAR-X derived ground control points to optical image patches using deep learning / Tatjana Bürgmann in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Matching of TerraSAR-X derived ground control points to optical image patches using deep learning
Document type: Article/Paper
Authors: Tatjana Bürgmann, Author; Wolfgang Koppe, Author; Michael Schmitt, Author
Publication year: 2019
Pages: pp 241 - 248
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] geolocation
[IGN terms] multisensor imagery
[IGN terms] optical imagery
[IGN terms] Pléiades imagery
[IGN terms] speckled radar imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] Sentinel-SAR imagery
[IGN terms] TerraSAR-X imagery
[IGN terms] ground control point
Abstract: (author) High resolution synthetic aperture radar (SAR) satellites like TerraSAR-X are capable of acquiring images exhibiting an absolute geolocation accuracy within a few centimeters, mainly because of the availability of precise orbit information and the compensation of range delay errors due to atmospheric conditions. In contrast, satellite images from optical missions generally exhibit comparably low geolocation accuracies because of the propagation of errors in angular measurements over large distances. However, a variety of remote sensing applications, such as change detection, surface movement monitoring or ice flow measurements, require precisely geo-referenced and co-registered satellite images. By using Ground Control Points (GCPs) derived from TerraSAR-X, the absolute geolocation accuracy of optical satellite images can be improved. For this purpose, the corresponding matching points in the optical images need to be localized. In this paper, a deep learning based approach is investigated for the automated matching of SAR-derived GCPs to optical image elements. To this end, a convolutional neural network is pretrained with medium resolution Sentinel-1 and Sentinel-2 imagery and fine-tuned on precisely co-registered TerraSAR-X and Pléiades training image pairs to learn a common descriptor representation. Using these descriptors, the similarity of SAR and optical image patches can be calculated. This similarity metric is then used in a sliding window approach to identify the matching points in the optical reference image. Subsequently, the derived points can be utilized for co-registration of the underlying images. The network is evaluated over nine study areas showing airports and their rural surroundings in several different countries around the world. The results show that, based on TerraSAR-X-derived GCPs, corresponding points in the optical image can automatically and reliably be identified with pixel-level localization accuracy.
Record number: A2019-548
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.010
Online publication date: 05/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94194
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019). - pp 241 - 248 [article]
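To make the sliding-window idea concrete, the sketch below embeds a SAR patch and every candidate patch of an optical search window with a shared (here untrained, purely illustrative) encoder and picks the most cosine-similar candidate; the patch size, stride and encoder are assumptions, not the paper's pretrained/fine-tuned network.

```python
# Illustrative descriptor matching: a shared toy encoder embeds the SAR patch and
# all optical candidate patches; the most cosine-similar candidate is the match.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                      # stand-in for the learned descriptor net
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32),
)

sar_patch = torch.randn(1, 1, 64, 64)         # patch centred on a SAR-derived GCP
optical_window = torch.randn(1, 1, 128, 128)  # search window in the optical image

# Slide a 64x64 window with stride 8 over the optical search area.
cand = optical_window.unfold(2, 64, 8).unfold(3, 64, 8)   # (1, 1, ny, nx, 64, 64)
ny, nx = cand.shape[2], cand.shape[3]
cand = cand.reshape(1, ny * nx, 64, 64).transpose(0, 1)   # (ny*nx, 1, 64, 64)

sim = F.cosine_similarity(encoder(sar_patch), encoder(cand), dim=1)
best = sim.argmax().item()
print("best candidate at (row, col):", divmod(best, nx))
```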