Descriptor
IGN terms > mathematics > mathematical statistics > data analysis > classification > neural network classification > convolutional neural network classification
Documents available in this category (371)



Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping / Luc Baudoux in International journal of geographical information science IJGIS, vol 37 n° 2 (February 2023)
[article]
Title: Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping
Document type: Article/Communication
Authors: Luc Baudoux; Jordi Inglada; Clément Mallet
Publication year: 2023
Projects: AI4GEO
Pagination: pp ?
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Thematic cartography
[IGN terms] deep learning
[IGN terms] land cover map
[IGN terms] land use map
[IGN terms] thematic map
[IGN terms] convolutional neural network classification
[IGN terms] training data (machine learning)
[IGN terms] data harmonization
[IGN terms] nomenclature
[IGN terms] geometric resolving power
Abstract: (author) Land-use/land-cover (LULC) maps describe the Earth's surface with discrete classes at a specific spatial resolution. The chosen classes and resolution depend strongly on the intended use, making it necessary to develop methods to adapt these characteristics for a large range of applications. Recently, a convolutional neural network (CNN)-based method was introduced to take into account both spatial and geographical context to translate a LULC map into another one. However, this model only works for two maps: one source and one target. Inspired by natural language translation using multiple-language models, this article explores how to translate one LULC map into several targets with distinct nomenclatures and spatial resolutions. We first propose a new data set based on six open access LULC maps to train our CNN-based encoder-decoder framework. We then apply such a framework to convert each of these six maps into each of the others using our Multi-Landcover Translation network (MLCT-Net). Extensive experiments are conducted at a country scale (namely France). The results reveal that our MLCT-Net outperforms its semantic counterparts and gives on-par results with mono-LULC models when evaluated on areas similar to those used for training. Furthermore, it outperforms the mono-LULC models when applied to totally new landscapes.
Record number: A2023-075
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2022.2120996
Online publication date: 10/10/2022
Online: https://doi.org/10.1080/13658816.2022.2120996
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101797
in International journal of geographical information science IJGIS > vol 37 n° 2 (February 2023) . - pp ?
[article]
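
For orientation only, the sketch below illustrates the general idea behind the record above: translating one land-cover map into another nomenclature with a convolutional encoder-decoder. It uses PyTorch with hypothetical class counts and layer sizes; it is not the authors' MLCT-Net and reproduces none of its details.

```python
# Hypothetical sketch of a LULC "translation" encoder-decoder (not the authors' MLCT-Net).
import torch
import torch.nn as nn

class TinyLulcTranslator(nn.Module):
    def __init__(self, src_classes: int = 12, tgt_classes: int = 8):
        super().__init__()
        # Encoder: contextual features from the one-hot source map.
        self.encoder = nn.Sequential(
            nn.Conv2d(src_classes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back and predict logits in the target nomenclature.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, tgt_classes, kernel_size=1),
        )

    def forward(self, src_onehot: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(src_onehot))

# Usage: a 12-class one-hot source patch of 64x64 pixels -> 8-class target logits.
model = TinyLulcTranslator()
src = torch.zeros(1, 12, 64, 64)
src[:, 0] = 1.0  # pretend every pixel belongs to class 0
print(model(src).shape)  # torch.Size([1, 8, 64, 64])
```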

Generating Sentinel-2 all-band 10-m data by sharpening 20/60-m bands: A hierarchical fusion network / Jingan Wu in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
[article]
Title: Generating Sentinel-2 all-band 10-m data by sharpening 20/60-m bands: A hierarchical fusion network
Document type: Article/Communication
Authors: Jingan Wu; Liupeng Lin; Chi Zhang; et al.
Publication year: 2023
Pagination: pp 16 - 31
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image sharpening
[IGN terms] hierarchical approach
[IGN terms] spectral band
[IGN terms] convolutional neural network classification
[IGN terms] high-pass filter
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] Sentinel-MSI image
Abstract: (author) Earth observations from the Sentinel-2 mission have been extensively accepted in a variety of land services. The thirteen spectral bands of Sentinel-2, however, are collected at three spatial resolutions of 10/20/60 m, and such a difference makes it difficult to analyze multispectral imagery at a uniform resolution. To address this problem, we developed a hierarchical fusion network (HFN) to sharpen 20/60-m bands and generate Sentinel-2 all-band 10-m data. The deep learning architecture is used to learn the complex mapping between multi-resolution input and output data. Given the deficiency of previous studies in which the spatial information is inferred only from the fine-resolution bands, the proposed hierarchical fusion framework simultaneously leverages the self-similarity information from coarse-resolution bands and the spatial structure information from fine-resolution bands, to enhance the sharpening performance. Technically, the coarse-resolution bands are super-resolved by exploiting the information from themselves and then sharpened by fusing with the fine-resolution bands. Both 20-m and 60-m bands can be sharpened via the developed approach. Experimental results regarding visual comparison and quantitative assessment demonstrate that HFN outperforms the other benchmarking models, including pan-sharpening-based, model-based, geostatistical-based, and other deep-learning-based approaches, showing remarkable performance in reproducing explicit spatial details and maintaining original spectral features. Moreover, the developed model works more effectively than the other models over the heterogeneous landscape, which is usually considered a challenging application scenario. To sum up, the fusion model can sharpen Sentinel-2 20/60-m bands, and the created all-band 10-m data allows image analysis and geoscience applications to be authentically carried out at the 10-m resolution.
Record number: A2023-063
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.12.017
Online publication date: 01/01/2023
Online: https://doi.org/10.1016/j.isprsjprs.2022.12.017
Electronic resource format: URL Article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102392
in ISPRS Journal of photogrammetry and remote sensing > vol 196 (February 2023) . - pp 16 - 31
[article]
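
As a point of comparison with the deep-learning fusion described in the record above, the following is a minimal sketch of a classical high-pass-filter sharpening baseline (one of the benchmark families the abstract mentions), written with NumPy/SciPy and illustrative parameters; it is not the authors' HFN.

```python
# Hypothetical high-pass-filter sharpening baseline (not the authors' HFN):
# upsample a 20 m band to the 10 m grid and inject high-frequency detail from a 10 m band.
import numpy as np
from scipy import ndimage

def hpf_sharpen(band_20m: np.ndarray, band_10m: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """band_20m: (H, W); band_10m: (2H, 2W). Returns a sharpened (2H, 2W) band."""
    up = ndimage.zoom(band_20m, 2, order=3)                        # cubic upsampling to the 10 m grid
    detail = band_10m - ndimage.uniform_filter(band_10m, size=5)   # high-pass of the fine band
    return up + gain * detail

# Usage with synthetic reflectance data.
coarse = np.random.rand(50, 50)
fine = np.random.rand(100, 100)
print(hpf_sharpen(coarse, fine).shape)  # (100, 100)
```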

Large-scale burn severity mapping in multispectral imagery using deep semantic segmentation models / Xikun Hu in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
[article]
Title: Large-scale burn severity mapping in multispectral imagery using deep semantic segmentation models
Document type: Article/Communication
Authors: Xikun Hu; Puzhao Zhang; Yifang Ban
Publication year: 2023
Pagination: pp 228 - 240
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] thematic map
[IGN terms] convolutional neural network classification
[IGN terms] damage
[IGN terms] Landsat-ETM+ image
[IGN terms] Landsat-OLI image
[IGN terms] Landsat-TM image
[IGN terms] multiband image
[IGN terms] Sentinel-MSI image
[IGN terms] forest fire
[IGN terms] spatial data set
[IGN terms] semantic segmentation
[IGN terms] forest monitoring
[IGN terms] disaster area
Abstract: (author) Nowadays Earth observation satellites provide forest fire authorities and resource managers with spatial and comprehensive information for fire stabilization and recovery. Burn severity mapping is typically performed by classifying bi-temporal indices (e.g., dNBR and RdNBR) using thresholds derived from parametric models incorporating field-based measurements. Analysts are currently expending considerable manual effort using prior knowledge and visual inspection to determine burn severity thresholds. In this study, we aim to employ highly automated approaches to provide spatially explicit damage level estimates. We first reorganize a large-scale Landsat-based bi-temporal burn severity assessment dataset (Landsat-BSA) by visual data cleaning based on annotated MTBS data (approximately 1000 major fire events in the United States). Then we apply state-of-the-art deep learning (DL) based methods to map burn severity based on the Landsat-BSA dataset. Experimental results emphasize that multi-class semantic segmentation algorithms can approximate the threshold-based techniques used extensively for burn severity classification. UNet-like models outperform other region-based CNN and Transformer-based models and achieve accurate pixel-wise classification results. Combined with the online hard example mining algorithm to reduce the class imbalance issue, Attention UNet achieves the highest mIoU (0.78) and the highest Kappa coefficient, close to 0.90. The bi-temporal inputs with ancillary spectral indices work much better than the uni-temporal multispectral inputs. The restructured dataset will be publicly available and create opportunities for further advances in remote sensing and wildfire communities.
Record number: A2023-122
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.12.026
Online publication date: 11/01/2023
Online: https://doi.org/10.1016/j.isprsjprs.2022.12.026
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102498
in ISPRS Journal of photogrammetry and remote sensing > vol 196 (February 2023) . - pp 228 - 240
[article]
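
The abstract above refers to bi-temporal indices such as dNBR. As a purely illustrative sketch, the snippet below computes NBR and dNBR from pre- and post-fire NIR/SWIR bands and bins dNBR into coarse severity classes; the thresholds are placeholders, not the paper's or MTBS's calibrated values.

```python
# Illustrative dNBR computation and threshold-based severity classing
# (placeholder thresholds, not the values used in the article).
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-6)

def burn_severity(pre_nir, pre_swir, post_nir, post_swir):
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    # Map dNBR to coarse severity classes with illustrative thresholds.
    classes = np.digitize(dnbr, bins=[0.1, 0.27, 0.44, 0.66])  # 0 = unburned/low ... 4 = high
    return dnbr, classes

# Usage with synthetic reflectance values.
shape = (4, 4)
dnbr, cls = burn_severity(*(np.random.rand(*shape) for _ in range(4)))
print(cls)
```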

A CNN based approach for the point-light photometric stereo problem / Fotios Logothetis in International journal of computer vision, vol 131 n° 1 (January 2023)
[article]
Title: A CNN based approach for the point-light photometric stereo problem
Document type: Article/Communication
Authors: Fotios Logothetis; Roberto Mecca; Ignas Budvytis; et al.
Publication year: 2023
Pagination: pp 101 - 120
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] bidirectional reflectance distribution function (BRDF)
[IGN terms] illuminance
[IGN terms] kinetic depth effect
[IGN terms] luminous intensity
[IGN terms] iteration
[IGN terms] 3D reconstruction
[IGN terms] reflectivity
[IGN terms] stereoscopy
[IGN terms] perspective view
Abstract: (author) Reconstructing the 3D shape of an object using several images under different light sources is a very challenging task, especially when realistic assumptions such as light propagation and attenuation, perspective viewing geometry and specular light reflection are considered. Many works tackling Photometric Stereo (PS) problems relax most of the aforementioned assumptions; in particular, they ignore specular reflection and global illumination effects. In this work, we propose a CNN-based approach capable of handling these realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo and adapting them to the point light setup. We achieve this by employing an iterative procedure of point-light PS for shape estimation which has two main steps. Firstly, we train a per-pixel CNN to predict surface normals from reflectance samples. Secondly, we compute the depth by integrating the normal field in order to iteratively estimate light directions and attenuation, which is used to compensate the input images to compute reflectance samples for the next iteration. Our approach significantly outperforms the state-of-the-art on the DiLiGenT real world dataset. Furthermore, in order to measure the performance of our approach for near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo' of 14 objects of different materials, where the effects of point light sources and perspective viewing are a lot more significant. Our approach outperforms the competition on this dataset as well. Data and test code are available at the project page.
Record number: A2023-048
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1007/s11263-022-01689-3
Online publication date: 07/10/2022
Online: https://doi.org/10.1007/s11263-022-01689-3
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102364
in International journal of computer vision > vol 131 n° 1 (January 2023) . - pp 101 - 120
[article]
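
To make the point-light assumptions in the record above concrete, here is an illustrative sketch of Lambertian image formation under a nearby point source with inverse-square attenuation, i.e., the kind of model the iterative procedure compensates for. Function names and values are hypothetical; this is not the paper's CNN or pipeline.

```python
# Illustrative point-light image formation (Lambertian shading, inverse-square attenuation).
import numpy as np

def point_light_intensity(points, normals, albedo, light_pos, phi=1.0):
    """points, normals: (N, 3); albedo: (N,); light_pos: (3,). Returns (N,) intensities."""
    to_light = light_pos[None, :] - points                    # per-point vector towards the light
    dist = np.linalg.norm(to_light, axis=1)
    l_dir = to_light / dist[:, None]                          # unit light directions
    attenuation = phi / (dist ** 2)                           # inverse-square falloff of a point source
    shading = np.clip(np.sum(normals * l_dir, axis=1), 0.0, None)  # Lambertian n . l
    return albedo * attenuation * shading

# Usage on a toy flat patch facing +z, lit from above.
pts = np.stack(np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3)), -1).reshape(-1, 2)
pts = np.concatenate([pts, np.zeros((9, 1))], axis=1)
nrm = np.tile([0.0, 0.0, 1.0], (9, 1))
print(point_light_intensity(pts, nrm, np.ones(9), np.array([0.0, 0.0, 2.0])))
```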

Forest road extraction from orthophoto images by convolutional neural networks / Erhan Çalişkan in Geocarto international, vol 38 n° unknown ([01/01/2023])
[article]
Title: Forest road extraction from orthophoto images by convolutional neural networks
Document type: Article/Communication
Authors: Erhan Çalişkan; Yusuf Sevim
Publication year: 2023
Pagination: pp
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] forest road
[IGN terms] convolutional neural network classification
[IGN terms] automatic extraction
[IGN terms] orthoimage
[IGN terms] semantic segmentation
Abstract: (author) Continuous monitoring of the forest road infrastructure and keeping track of the changes that occur are important for forestry practices, map updating, forest fire and forest transport decision support systems. In this context, the most up-to-date data can be obtained by automatic forest road extraction from satellite images via machine learning (ML). Acquiring sufficient data is one of the most important factors which affect the success of ML and deep learning (DL). DL architectures yield more consistent results for complex data sets compared with ML algorithms. In the present study, three different deep learning architectures (Resnet-18, MobileNet-V2 and Xception) with a semantic segmentation architecture were compared for extracting the forest road network from high-resolution orthophoto images and the results were analyzed. The architectures were evaluated through a multiclass statistical analysis based on precision, recall, F1 score, intersection over union and overall accuracy (OA). The results present significant values obtained by the Resnet-18 architecture, with 99.72% OA and 98.87% precision, and by the MobileNet-V2 architecture, with 97.76% OA and 98.28% precision. The results also show that the Resnet-18 and MobileNet-V2 semantic segmentation architectures can be used efficiently for forest road extraction.
Record number: A2022-159
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1080/10106049.2022.2060319
Online publication date: 06/04/2022
Online: https://doi.org/10.1080/10106049.2022.2060319
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100380
in Geocarto international > vol 38 n° unknown [01/01/2023] . - pp
[article]
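
The comparison in the record above relies on standard segmentation scores. As a generic illustration (not the authors' evaluation code), the sketch below derives per-class precision, recall, F1, IoU and overall accuracy from a confusion matrix.

```python
# Generic per-class precision/recall/F1/IoU and overall accuracy from a confusion matrix.
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-9)
    iou = tp / np.maximum(tp + fp + fn, 1)
    oa = tp.sum() / conf.sum()
    return precision, recall, f1, iou, oa

# Usage: a toy 2-class (road / background) confusion matrix.
conf = np.array([[950, 50],
                 [30, 970]])
print(segmentation_metrics(conf))
```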

Large-scale individual building extraction from open-source satellite imagery via super-resolution-based instance segmentation approach / Shenglong Chen in ISPRS Journal of photogrammetry and remote sensing, vol 195 (January 2023)
MTMGNN: Multi-time multi-graph neural network for metro passenger flow prediction / Du Yin in Geoinformatica, vol 27 n° 1 (January 2023)
Solid waste mapping based on very high resolution remote sensing imagery and a novel deep learning approach / Bowen Niu in Geocarto international, vol 38 n° 1 ([01/01/2023])
Deep learning detects invasive plant species across complex landscapes using Worldview-2 and Planetscope satellite imagery / Thomas A. Lake in Remote sensing in ecology and conservation, vol 8 n° 6 (December 2022)
Establishing a GIS-based evaluation method considering spatial heterogeneity for debris flow susceptibility mapping at the regional scale / Shengwu Qin in Natural Hazards, vol 114 n° 3 (December 2022)
Instance segmentation of standing dead trees in dense forest from aerial imagery using deep learning / Aboubakar Sani-Mohammed in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 6 (December 2022)
A whale optimization algorithm–based cellular automata model for urban expansion simulation / Yuan Ding in International journal of applied Earth observation and geoinformation, vol 115 (December 2022)
Change alignment-based image transformation for unsupervised heterogeneous change detection / Kuowei Xiao in Remote sensing, vol 14 n° 21 (November-1 2022)
Cross-guided pyramid attention-based residual hyperdense network for hyperspectral image pansharpening / Jiahui Qu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 11 (November 2022)
Foreground-aware refinement network for building extraction from remote sensing images / Zhang Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 11 (November 2022)