Documents available in this category (9019)
Application of a graph convolutional network with visual and semantic features to classify urban scenes / Yongyang Xu in International journal of geographical information science IJGIS, vol 36 n° 10 (October 2022)
[article]
Title: Application of a graph convolutional network with visual and semantic features to classify urban scenes
Document type: Article/Communication
Authors: Yongyang Xu; Shuai Jin; Zhanlong Chen; et al.
Publication year: 2022
Pagination: pp 2009-2034
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Geomatics
[IGN terms] random forest classification
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] feature extraction
[IGN terms] co-occurrence matrix
[IGN terms] OpenStreetMap
[IGN terms] Beijing (China)
[IGN terms] point of interest
[IGN terms] spatial relationship
[IGN terms] graph neural network
[IGN terms] road network
[IGN terms] urban scene
Abstract: (author) Urban scenes combine visual and semantic features and exhibit spatial relationships among land-use types (e.g. industrial areas lie far from residential zones). This study applied a graph convolutional network with neighborhood information (henceforth, the neighbour-supporting graph convolutional neural network) to learn spatial relationships for urban scene classification. Furthermore, a co-occurrence analysis of visual and semantic features was performed to improve the accuracy of urban scene classification. We tested the proposed method within the fifth ring road of Beijing, obtaining an overall classification accuracy of 0.827 and a Kappa coefficient of 0.769. In comparison with other methods, such as support vector machine, random forest, and a general graph convolutional network, the case study showed that the proposed method improved urban scene classification accuracy by about 10%.
Record number: A2022-740
Authors' affiliation: non-IGN
Topic: GEOMATIQUE/URBANISME
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2022.2048834
Online publication date: 10/03/2022
Online: https://doi.org/10.1080/13658816.2022.2048834
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101717
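The record above reports an overall accuracy of 0.827 and a Kappa coefficient of 0.769. As an illustrative aside (not taken from the paper), both metrics can be computed from a classification confusion matrix; the matrix values below are hypothetical:

```python
import numpy as np

def overall_accuracy_and_kappa(cm: np.ndarray) -> tuple[float, float]:
    """Compute overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of samples with true class i predicted as class j.
    """
    n = cm.sum()
    observed = np.trace(cm) / n  # overall accuracy p_o
    # Chance agreement p_e: sum over classes of (row marginal * column marginal) / n^2
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (observed - expected) / (1.0 - expected)
    return float(observed), float(kappa)

# Toy 3-class confusion matrix (illustrative values only)
cm = np.array([[50, 2, 3],
               [5, 40, 5],
               [2, 3, 45]])
oa, kappa = overall_accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is lower than the overall accuracy for the same map.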
in International journal of geographical information science IJGIS > vol 36 n° 10 (October 2022) . - pp 2009-2034 [article]

Challenges and limitations of earthquake-induced building damage mapping techniques using remote sensing images: A systematic review / Sahar S. Matin in Geocarto international, Vol 37 n° 21 ([01/10/2022])
[article]
Title: Challenges and limitations of earthquake-induced building damage mapping techniques using remote sensing images: A systematic review
Document type: Article/Communication
Authors: Sahar S. Matin; Biswajeet Pradhan
Publication year: 2022
Pagination: pp 6186-6212
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] thematic mapping
[IGN terms] convolutional neural network classification
[IGN terms] building deformation
[IGN terms] change detection
[IGN terms] material damage
[IGN terms] lidar data
[IGN terms] optical image
[IGN terms] moiré radar image
[IGN terms] emergency relief
[IGN terms] earthquake
Abstract: (author) Assessing the extent and level of building damage is crucial to support post-earthquake rescue and relief activities. There is a large body of literature proposing novel frameworks for automating earthquake-induced building damage mapping using high-resolution remote sensing images. Yet deployment in real-world scenarios remains largely limited to manual interpretation of images. Although manual interpretation is costly and labor-intensive, it is preferred over automatic and semi-automatic building damage mapping frameworks such as machine learning and deep learning because of its reliability. This review paper therefore explores various automatic and semi-automatic building damage mapping techniques in order to understand the pros and cons of different methodologies and narrow the gap between research and practice. Further, research gaps and opportunities are identified for the future development of earthquake-induced building damage mapping in real-world scenarios. This review can serve as a guideline for researchers, decision-makers, and practitioners in the emergency management service domain.
Record number: A2022-719
Authors' affiliation: non-IGN
Topic: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2021.1933213
Online publication date: 07/06/2021
Online: https://doi.org/10.1080/10106049.2021.1933213
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101651
in Geocarto international > Vol 37 n° 21 [01/10/2022] . - pp 6186-6212 [article]

Copies (1): Barcode 059-2022211 | Call number RAB | Medium: Journal | Location: Centre de documentation | Section: Reserve stacks L003 | Availability: Available

Comparison of layer-stacking and Dempster-Shafer theory-based methods using Sentinel-1 and Sentinel-2 data fusion in urban land cover mapping / Dang Hung Bui in Geo-spatial Information Science, vol 25 n° 3 (October 2022)
[article]
Title: Comparison of layer-stacking and Dempster-Shafer theory-based methods using Sentinel-1 and Sentinel-2 data fusion in urban land cover mapping
Document type: Article/Communication
Authors: Dang Hung Bui; László Mucsi
Publication year: 2022
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] comparative analysis
[IGN terms] land cover map
[IGN terms] random forest classification
[IGN terms] pixel-based classification
[IGN terms] image fusion
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] Dempster-Shafer theory
[IGN terms] urban area
Abstract: (author) Data fusion has shown potential to improve the accuracy of land cover mapping, and selecting the optimal fusion technique remains a challenge. This study investigated the performance of fusing Sentinel-1 (S-1) and Sentinel-2 (S-2) data, using a layer-stacking method at the pixel level and a Dempster-Shafer (D-S) theory-based approach at the decision level, for mapping six land cover classes in Thu Dau Mot City, Vietnam. At the pixel level, S-1 and S-2 bands and their extracted textures and indices were stacked into different single-sensor and multi-sensor (i.e. fused) datasets. The datasets were categorized into two groups: one containing only spectral and backscattering bands, the other containing these bands plus their extracted features. The random forest (RF) classifier was then applied to the datasets within each group. At the decision level, the RF classification outputs of the single-sensor datasets within each group were fused based on D-S theory. Finally, the accuracy of the mapping results at both levels within each group was compared. The results showed that fusion at the decision level provided the best mapping accuracy within each group. The highest overall accuracy (OA) and Kappa coefficient of the map using D-S theory were 92.67% and 0.91, respectively. Decision-level fusion increased the OA of the map by 0.75% to 2.07% over the corresponding S-2 products in the groups, whereas pixel-level fusion yielded an OA 4.88% to 6.58% lower than that of the corresponding S-2 products.
Record number: A2022-448
Authors' affiliation: non-IGN
Topic: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1080/10095020.2022.2035656
Online publication date: 03/03/2022
Online: https://doi.org/10.1080/10095020.2022.2035656
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100398
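Decision-level fusion with Dempster-Shafer theory, as in the study above, combines per-pixel evidence from the two sensors' classifiers. A minimal sketch of Dempster's rule of combination; the class names and mass values are hypothetical, not taken from the paper:

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions with Dempster's rule.

    Keys are frozensets of class labels (focal elements); values are masses.
    Conflicting evidence (empty intersections) is discarded and the
    remaining masses are renormalized.
    """
    combined: dict = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Per-pixel masses from two classifiers (hypothetical values); the full
# frame of discernment 'theta' carries each classifier's residual ignorance.
theta = frozenset({"urban", "water", "vegetation"})
m_s1 = {frozenset({"urban"}): 0.6, frozenset({"water"}): 0.1, theta: 0.3}
m_s2 = {frozenset({"urban"}): 0.5, frozenset({"vegetation"}): 0.2, theta: 0.3}
fused = dempster_combine(m_s1, m_s2)
label = max(fused, key=fused.get)  # class with highest fused mass
```

Because both sources put most of their mass on "urban", the fused mass on that class exceeds either input mass, which is the behavior that makes decision-level fusion attractive.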
in Geo-spatial Information Science > vol 25 n° 3 (October 2022) [article]

Correcting laser scanning intensity recorded in a cave environment for high-resolution lithological mapping: A case study of the Gouffre Georges, France / Michaela Nováková in Remote sensing of environment, vol 280 (October 2022)
[article]
Title: Correcting laser scanning intensity recorded in a cave environment for high-resolution lithological mapping: A case study of the Gouffre Georges, France
Document type: Article/Communication
Authors: Michaela Nováková; Michal Gallay; Jozef Šupinský; et al.
Publication year: 2022
Pagination: n° 113210
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] contrast enhancement
[IGN terms] Ariège (09)
[IGN terms] geological mapping
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] noise filtering
[IGN terms] cave
[IGN terms] light intensity
[IGN terms] lithology
[IGN terms] rock
[IGN terms] point cloud
[IGN terms] terrestrial laser ranging
Abstract: (author) Active remote sensing by laser scanning (LiDAR) has markedly improved the mapping of cave environments with an unprecedented level of accuracy and spatial detail. However, the laser intensity simultaneously recorded during the scanning of caves remains unexplored, despite its promising potential for lithological mapping demonstrated by many applications in open-sky conditions. Appropriate use of laser intensity requires calibration and correction for influencing factors, which differ in caves from above-ground environments. Our study presents an efficient, comprehensive workflow to correct the recorded intensity, taking into consideration the acquisition geometry, the micromorphology of the cave surface, and the specific atmospheric influence previously neglected in terrestrial laser scanning. The applicability of the approach is demonstrated on terrestrial LiDAR data acquired in the Gouffre Georges, a cave located in the northern Pyrenees in France. The cave is unique for its geology and lithology, allowing observation, with spectacular continuity and without any vegetal cover, of the contact between marble and lherzolite rocks and of the tectonic structures that characterize this contact. The overall accuracy of rock surface classification based on the corrected laser intensity was over 84%. The presence of water or a wet surface biased the intensity values towards lower values, complicating material discrimination. Such conditions have to be considered in applications of recorded laser intensity for mapping underground spaces. The presented method allows geological observations to be placed in an absolute spatial reference frame, which is often very difficult in a cave environment.
Thus, laser scanning of the cave geometry combined with the corrected laser intensity is an invaluable tool to unravel the complexity of such a lithological environment.
Record number: A2022-775
Authors' affiliation: non-IGN
Topic: IMAGERIE
Nature: Article
DOI: 10.1016/j.rse.2022.113210
Online publication date: 10/08/2022
Online: https://doi.org/10.1016/j.rse.2022.113210
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101807
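The paper's correction workflow accounts for acquisition geometry, surface micromorphology, and atmospheric effects; those specific models are not reproduced here. As an illustrative sketch only, a common first-order geometric correction in terrestrial laser scanning assumes the recorded intensity falls off with the squared range and the cosine of the incidence angle:

```python
import numpy as np

def correct_intensity(i_raw, r, alpha, r_ref=10.0):
    """First-order TLS intensity correction for range and incidence angle.

    Assumes recorded intensity follows I ∝ cos(alpha) / r**2, so the
    corrected value re-normalizes to a reference range r_ref (metres)
    and to normal incidence (alpha = 0 radians). This is a generic
    textbook model, not the paper's full workflow.
    """
    i_raw = np.asarray(i_raw, dtype=float)
    r = np.asarray(r, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return i_raw * (r / r_ref) ** 2 / np.cos(alpha)

# A point at 20 m range and 60° incidence: the raw return is attenuated
# both by range ((10/20)^2 = 0.25) and by incidence (cos 60° = 0.5),
# so the correction scales it up by a factor of 8.
i_corr = correct_intensity(100.0, r=20.0, alpha=np.deg2rad(60.0))
```

After such a correction, residual intensity differences are more attributable to surface reflectance, which is what makes lithological classification from intensity possible.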
in Remote sensing of environment > vol 280 (October 2022) . - n° 113210 [article]

Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope / V.S. Martins in Remote sensing of environment, vol 280 (October 2022)
[article]
Title: Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope
Document type: Article/Communication
Authors: V.S. Martins; D.P. Roy; H. Huang; et al.
Publication year: 2022
Pagination: n° 113203
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] Africa (political geography)
[IGN terms] deep learning
[IGN terms] thematic map
[IGN terms] automated mapping
[IGN terms] radiometric correction
[IGN terms] training data (machine learning)
[IGN terms] tropical forest
[IGN terms] Landsat-OLI image
[IGN terms] PlanetScope image
[IGN terms] fire
[IGN terms] classification accuracy
[IGN terms] regression
[IGN terms] savanna
Abstract: (author) High spatial resolution commercial satellite data provide new opportunities for terrestrial monitoring. The recent availability of near-daily 3 m observations provided by the PlanetScope constellation enables mapping of small and spatially fragmented burns that are not detected at coarser spatial resolution. This study demonstrates, for the first time, the potential for automated PlanetScope 3 m burned area mapping. The PlanetScope sensors have no onboard calibration or short-wave infrared bands, and have variable overpass times, making them challenging to use for large-area, automated burned area mapping. To help overcome these issues, a U-Net deep learning algorithm was developed to classify burned areas from two-date PlanetScope 3 m image pairs acquired at the same location. The deep learning approach, unlike conventional burned area mapping algorithms, is applied to image spatial subsets rather than single pixels and so incorporates spatial as well as spectral information. Deep learning requires large amounts of training data. Consequently, transfer learning was undertaken using pre-existing Landsat-8-derived burned area reference data to train the U-Net, which was then refined with a smaller set of PlanetScope training data. Results across Africa considering 659 PlanetScope radiometrically normalized image pairs sensed one day apart in 2019 are presented. The U-Net was first trained with different numbers of randomly selected 256 × 256 30 m pixel patches extracted from 92 pre-existing Landsat-8 burned area reference data sets defined for 2014 and 2015. The U-Net trained with 300,000 Landsat patches provided about 13% 30 m burn omission and commission errors with respect to 65,000 independent 30 m evaluation patches. The U-Net was then refined by training on 5,000 256 × 256 3 m patches extracted from independently interpreted PlanetScope burned area reference data.
Qualitatively, the refined U-Net was able to delineate 3 m burn boundaries more precisely, including the interiors of unburned areas, and to better classify "faint" burned areas indicative of low combustion completeness and/or sparse burns. The refined U-Net 3 m classification accuracy was assessed with respect to 20 independently interpreted PlanetScope burned area reference data sets, composed of 339.4 million 3 m pixels, with low 12.29% commission and 12.09% omission errors. The dependency of the U-Net classification accuracy on the burned area proportion within 3 m pixel 256 × 256 patches was also examined, and patches
Record number: A2022-774
Authors' affiliation: non-IGN
Topic: IMAGERIE
Nature: Article
DOI: 10.1016/j.rse.2022.113203
Online publication date: 08/08/2022
Online: https://doi.org/10.1016/j.rse.2022.113203
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101802
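The omission and commission errors quoted above (12.09% and 12.29%) are standard accuracy measures for the burned class. A minimal sketch with hypothetical pixel counts, not the paper's data:

```python
def omission_commission(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Omission and commission error rates for the 'burned' class.

    omission   = FN / (TP + FN)  -> reference burned pixels missed by the map
    commission = FP / (TP + FP)  -> mapped-burned pixels that are unburned
    """
    omission = fn / (tp + fn)
    commission = fp / (tp + fp)
    return omission, commission

# Hypothetical counts from comparing a burned-area map to reference data
om, com = omission_commission(tp=880, fp=120, fn=110)
```

Omission is one minus the producer's accuracy and commission is one minus the user's accuracy, so the two errors describe the map from the reference and map perspectives respectively.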
in Remote sensing of environment > vol 280 (October 2022) . - n° 113203 [article]

Determination of local geometric geoid model for Kuwait / Ahmed Zaki in Journal of applied geodesy, vol 16 n° 4 (October 2022) Permalink
DSNUNet: An improved forest change detection network by combining Sentinel-1 and Sentinel-2 images / Jiawei Jiang in Remote sensing, vol 14 n° 19 (October-1 2022) Permalink
Evaluation of Landsat 8 image pansharpening in estimating soil organic matter using multiple linear regression and artificial neural networks / Abdelkrim Bouasria in Geo-spatial Information Science, vol 25 n° 3 (October 2022) Permalink
Incremental road network update method with trajectory data and UAV remote sensing imagery / Jianxin Qin in ISPRS International journal of geo-information, vol 11 n° 10 (October 2022) Permalink
Investigation of recognition and classification of forest fires based on fusion color and textural features of images / Cong Li in Forests, vol 13 n° 10 (October 2022) Permalink
Monitoring spatiotemporal soil moisture changes in the subsurface of forest sites using electrical resistivity tomography (ERT) / Julian Fäth in Journal of Forestry Research, vol 33 n° 5 (October 2022) Permalink
Multi-constellation GNSS interferometric reflectometry for the correction of long-term snow height retrieval on sloping topography / Wei Zhou in GPS solutions, vol 26 n° 4 (October 2022) Permalink
Novel algorithm based on geometric characteristics for tree branch skeleton extraction from LiDAR point cloud / Jie Yang in Forests, vol 13 n° 10 (October 2022) Permalink
Potential and limitation of PlanetScope images for 2-D and 3-D Earth surface monitoring with example of applications to glaciers and earthquakes / Saif Aati in IEEE Transactions on geoscience and remote sensing, vol 60 n° 10 (October 2022) Permalink
Pyeo: A Python package for near-real-time forest cover change detection from Earth observation using machine learning / J.F. Roberts in Computers & geosciences, vol 167 (October 2022) Permalink