Descriptor
Termes IGN > mathematics > mathematical statistics > data analysis > classification > neural network classification > convolutional neural network classification
Documents available in this category (380)
Improvement in crop mapping from satellite image time series by effectively supervising deep neural networks / Sina Mohammadi in ISPRS Journal of photogrammetry and remote sensing, vol 198 (April 2023)
[article]
Title: Improvement in crop mapping from satellite image time series by effectively supervising deep neural networks
Document type: Article/Communication
Authors: Sina Mohammadi; Mariana Belgiu; Alfred Stein
Year of publication: 2023
Pagination: pp 272 - 283
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Remote sensing applications
[Termes IGN] supervised learning
[Termes IGN] deep learning
[Termes IGN] vegetation map
[Termes IGN] convolutional neural network classification
[Termes IGN] recurrent neural network classification
[Termes IGN] crops
[Termes IGN] Landsat-ETM+ image
[Termes IGN] Landsat-OLI image
[Termes IGN] Normalized Difference Vegetation Index
[Termes IGN] time series
Abstract: (author) Deep learning methods have achieved promising results in crop mapping using satellite image time series. A challenge remains in how to learn more discriminative feature representations for detecting crop types when the model is applied to unseen data. To address this challenge and reveal the importance of proper supervision of deep neural networks in improving performance, we propose to supervise intermediate layers of a designed 3D Fully Convolutional Neural Network (FCN) by employing two middle supervision methods: Cross-entropy loss Middle Supervision (CE-MidS) and a novel middle supervision method, namely Supervised Contrastive loss Middle Supervision (SupCon-MidS). This method pulls together features belonging to the same class in embedding space, while pushing apart features from different classes. We demonstrate that SupCon-MidS enhances feature discrimination and clustering throughout the network, thereby improving the network performance. In addition, we employ two output supervision methods, namely F1 loss and Intersection Over Union (IOU) loss. Our experiments on identifying corn, soybean, and the class Other from Landsat image time series in the U.S. corn belt show that the best setup of our method, namely IOU+SupCon-MidS, is able to outperform the state-of-the-art methods by scores of 3.5% and 0.5% on average when testing its accuracy across a different year (local test) and different regions (spatial test), respectively. Further, adding SupCon-MidS to the output supervision methods improves scores by 1.2% and 7.6% on average in local and spatial tests, respectively. We conclude that proper supervision of deep neural networks plays a significant role in improving crop mapping performance. The code and data are available at: https://github.com/Sina-Mohammadi/CropSupervision
Record number: A2023-203
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.isprsjprs.2023.03.007
Online publication date: 29/03/2023
Online: https://doi.org/10.1016/j.isprsjprs.2023.03.007
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=103105
in ISPRS Journal of photogrammetry and remote sensing > vol 198 (April 2023) . - pp 272 - 283 [article]
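As an illustration of the supervised contrastive middle supervision described in this abstract (features of the same class pulled together, features of different classes pushed apart in embedding space), here is a minimal PyTorch sketch of a supervised contrastive loss. It follows the generic SupCon formulation rather than the authors' CropSupervision code; the function name, tensor shapes, and temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Pull same-class embeddings together, push different-class embeddings apart.

    features: (N, D) intermediate-layer embeddings, one per sample or pixel.
    labels:   (N,) integer class labels.
    """
    features = F.normalize(features, dim=1)              # compare in cosine-similarity space
    sim = features @ features.t() / temperature          # (N, N) similarity logits
    n = features.size(0)
    logits_mask = ~torch.eye(n, dtype=torch.bool, device=features.device)  # drop self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask  # same-class pairs
    sim = sim.masked_fill(~logits_mask, float("-inf"))   # exclude self from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    masked_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                               # anchors that have at least one positive
    mean_log_prob_pos = masked_log_prob.sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

# Toy usage: 8 embeddings of dimension 16 with 3 classes.
feats = torch.randn(8, 16)
labs = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supervised_contrastive_loss(feats, labs))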
Towards global scale segmentation with OpenStreetMap and remote sensing / Munazza Usmani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 8 (April 2023)
[article]
Title: Towards global scale segmentation with OpenStreetMap and remote sensing
Document type: Article/Communication
Authors: Munazza Usmani; Maurizio Napolitano; Francesca Bovolo
Year of publication: 2023
Pagination: n° 100031
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] building
[Termes IGN] convolutional neural network classification
[Termes IGN] volunteered geographic information
[Termes IGN] high-resolution image
[Termes IGN] semantic information
[Termes IGN] land cover
[Termes IGN] OpenStreetMap
[Termes IGN] image segmentation
[Termes IGN] semantic segmentation
[Termes IGN] land use
Abstract: (author) Land Use Land Cover (LULC) segmentation is a well-known application of remote sensing in urban environments. Up-to-date and complete data are of major importance in this field. Despite some success, pixel-based segmentation remains challenging because of class variability. Due to the increasing popularity of crowd-sourcing projects like OpenStreetMap, the need for user-generated content has also increased, providing a new prospect for LULC segmentation. We propose a deep-learning approach to segment objects in high-resolution imagery by using semantic crowdsource information. Due to satellite imagery and crowdsource database complexity, deep learning frameworks play a significant role, and this integration reduces computation and labor costs. Our methods are based on a fully convolutional neural network (CNN) adapted for multi-source data processing. We discuss the use of data augmentation techniques and improvements to the training pipeline. We applied semantic (U-Net) and instance segmentation (Mask R-CNN) methods, and Mask R-CNN showed significantly higher segmentation accuracy from both qualitative and quantitative viewpoints. The methods reach 91% and 96% overall accuracy in building segmentation and 90% in road segmentation, demonstrating the complementarity of OSM and remote sensing and their potential for city sensing applications.
Record number: A2023-148
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.ophoto.2023.100031
Online publication date: 16/02/2023
Online: https://doi.org/10.1016/j.ophoto.2023.100031
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102807
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 8 (April 2023) . - n° 100031 [article]
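The integration this abstract relies on, using OpenStreetMap semantics as training labels for a segmentation CNN, hinges on rasterizing crowdsourced polygons onto the image grid. Below is a minimal sketch of that step with rasterio; the bounding box, tile size, and example building polygon are illustrative assumptions rather than the authors' data or pipeline, and in practice the footprints would be read from an OSM extract and reprojected to the tile CRS.

from rasterio import features, transform

# Assumed tile footprint (lon/lat) and pixel grid.
west, south, east, north = 11.118, 46.065, 11.128, 46.075
height, width = 512, 512
tile_transform = transform.from_bounds(west, south, east, north, width, height)

# One hypothetical OSM building polygon in GeoJSON form.
building = {
    "type": "Polygon",
    "coordinates": [[(11.120, 46.068), (11.121, 46.068),
                     (11.121, 46.069), (11.120, 46.069),
                     (11.120, 46.068)]],
}

# Burn the polygons into a binary mask: 1 = building, 0 = background.
# The mask pairs with the co-registered image tile as an (image, label) training sample.
mask = features.rasterize(
    [(building, 1)],
    out_shape=(height, width),
    transform=tile_transform,
    fill=0,
    dtype="uint8",
)
print(mask.sum(), "building pixels")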
A unified attention paradigm for hyperspectral image classification / Qian Liu in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023)
[article]
Title: A unified attention paradigm for hyperspectral image classification
Document type: Article/Communication
Authors: Qian Liu; Zebin Wu; Yang Xu; et al.
Year of publication: 2023
Pagination: n° 5506316
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] attention (machine learning)
[Termes IGN] convolutional neural network classification
[Termes IGN] feature extraction
[Termes IGN] hyperspectral image
[Termes IGN] classification accuracy
[Termes IGN] support vector machine
Abstract: (author) Attention mechanisms improve classification accuracy by enhancing salient information in hyperspectral images (HSIs). However, existing HSI attention models are driven by advanced achievements of computer vision and are not able to fully exploit the spectral-spatial structure prior of HSIs or to effectively refine features from a global perspective. In this article, we propose a unified attention paradigm (UAP) that defines the attention mechanism as a general three-stage process: optimizing feature representations, strengthening information interaction, and emphasizing meaningful information. Under this paradigm, we design a novel efficient spectral-spatial attention module (ESSAM), which adaptively adjusts feature responses along the spectral and spatial dimensions at an extremely low parameter cost. Specifically, we construct a parameter-free spectral attention block that employs multiscale structured encodings and similarity calculations to perform global cross-channel interactions, and a memory-enhanced spatial attention block that captures key semantics of images stored in a learnable memory unit and models global spatial relationships by constructing semantic-to-pixel dependencies. ESSAM takes full account of the spatial distribution and low-dimensional characteristics of HSIs, with better interpretability and lower complexity. We develop a dense convolutional network based on the efficient spectral-spatial attention network (ESSAN) and experiment on three real hyperspectral datasets. The experimental results demonstrate that the proposed ESSAM brings higher accuracy improvements than advanced attention models.
Record number: A2023-185
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2023.3257321
Online publication date: 15/12/2023
Online: https://doi.org/10.1109/TGRS.2023.3257321
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102957
in IEEE Transactions on geoscience and remote sensing > vol 61 n° 3 (March 2023) . - n° 5506316 [article]
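A parameter-free spectral attention block of the kind this abstract describes (global cross-channel interaction driven by similarity calculations) can be sketched as below: each band is reweighted by its average cosine similarity to the other bands. This simplified PyTorch version is an illustration under stated assumptions, not the paper's ESSAM design, which additionally uses multiscale structured encodings and a memory-enhanced spatial branch.

import torch
import torch.nn.functional as F

def spectral_similarity_attention(x: torch.Tensor) -> torch.Tensor:
    """Parameter-free channel (spectral) reweighting for a hyperspectral feature cube.

    x: (B, C, H, W) feature maps; returns the same shape with per-band rescaling.
    """
    b, c, h, w = x.shape
    desc = F.normalize(x.flatten(2), dim=2)          # (B, C, H*W) per-band spatial signature
    sim = torch.bmm(desc, desc.transpose(1, 2))      # (B, C, C) band-to-band cosine similarity
    score = sim.mean(dim=2)                          # (B, C) agreement of each band with the rest
    weights = torch.sigmoid(score).view(b, c, 1, 1)  # gate in (0, 1), no learnable parameters
    return x * weights

# Toy usage on a fake 103-band hyperspectral patch.
cube = torch.randn(2, 103, 9, 9)
print(spectral_similarity_attention(cube).shape)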
Comparative analysis of different CNN models for building segmentation from satellite and UAV images / Batuhan Sariturk in Photogrammetric Engineering & Remote Sensing, PERS, vol 89 n° 2 (February 2023)
[article]
Title: Comparative analysis of different CNN models for building segmentation from satellite and UAV images
Document type: Article/Communication
Authors: Batuhan Sariturk; Damla Kumbasar; Dursun Zafer Seker
Year of publication: 2023
Pagination: pp 97 - 105
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] comparative analysis
[Termes IGN] built-up area
[Termes IGN] convolutional neural network classification
[Termes IGN] training data (machine learning)
[Termes IGN] UAV image
[Termes IGN] satellite image
[Termes IGN] semantic segmentation
Abstract: (author) Building segmentation has numerous application areas such as urban planning and disaster management. In this study, 12 CNN models (U-Net, FPN, and LinkNet using an EfficientNet-B5 backbone, U-Net, SegNet, FCN, and six Residual U-Net models) were generated and used for building segmentation. The Inria Aerial Image Labeling Data Set was used to train the models, and three data sets (Inria Aerial Image Labeling Data Set, Massachusetts Buildings Data Set, and Syedra Archaeological Site Data Set) were used to evaluate the trained models. On the Inria test set, Residual-2 U-Net has the highest F1 and Intersection over Union (IoU) scores with 0.824 and 0.722, respectively. On the Syedra test set, LinkNet-EfficientNet-B5 has F1 and IoU scores of 0.336 and 0.246. On the Massachusetts test set, Residual-4 U-Net has F1 and IoU scores of 0.394 and 0.259. For all test sets, at least two of the top three models used residual connections; in this study, residual connections therefore proved more successful than conventional convolutional layers.
Record number: A2023-143
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.22-00084R2
Online publication date: 01/02/2023
Online: https://doi.org/10.14358/PERS.22-00084R2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102718
in Photogrammetric Engineering & Remote Sensing, PERS > vol 89 n° 2 (February 2023) . - pp 97 - 105 [article]
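The residual connections that the comparison credits for the best results can be illustrated with a minimal residual convolutional block of the kind used in Residual U-Net encoders. Channel counts and layer choices below are illustrative assumptions, not the exact blocks evaluated in the study.

import torch
from torch import nn

class ResidualConvBlock(nn.Module):
    """Two 3x3 convolutions with an identity (or 1x1-projected) skip connection."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Project the skip path with a 1x1 convolution when the channel count changes.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

# Toy usage: an RGB tile through one encoder block.
block = ResidualConvBlock(3, 64)
print(block(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 64, 256, 256])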
Generating Sentinel-2 all-band 10-m data by sharpening 20/60-m bands: A hierarchical fusion network / Jingan Wu in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
[article]
Title: Generating Sentinel-2 all-band 10-m data by sharpening 20/60-m bands: A hierarchical fusion network
Document type: Article/Communication
Authors: Jingan Wu; Liupeng Lin; Chi Zhang; et al.
Year of publication: 2023
Pagination: pp 16 - 31
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Optical image processing
[Termes IGN] image sharpening
[Termes IGN] hierarchical approach
[Termes IGN] spectral band
[Termes IGN] convolutional neural network classification
[Termes IGN] high-pass filter
[Termes IGN] image fusion
[Termes IGN] high-resolution image
[Termes IGN] Sentinel-MSI image
Abstract: (author) Earth observations from the Sentinel-2 mission have been extensively accepted in a variety of land services. The thirteen spectral bands of Sentinel-2, however, are collected at three spatial resolutions of 10/20/60 m, and this difference makes it difficult to analyze the multispectral imagery at a uniform resolution. To address this problem, we developed a hierarchical fusion network (HFN) to sharpen the 20/60-m bands and generate Sentinel-2 all-band 10-m data. The deep learning architecture is used to learn the complex mapping between multi-resolution input and output data. Given the deficiency of previous studies, in which the spatial information is inferred only from the fine-resolution bands, the proposed hierarchical fusion framework simultaneously leverages the self-similarity information from coarse-resolution bands and the spatial structure information from fine-resolution bands to enhance the sharpening performance. Technically, the coarse-resolution bands are super-resolved by exploiting their own information and then sharpened by fusion with the fine-resolution bands. Both 20-m and 60-m bands can be sharpened via the developed approach. Experimental results from visual comparison and quantitative assessment demonstrate that HFN outperforms the benchmark models, including pan-sharpening-based, model-based, geostatistical-based, and other deep-learning-based approaches, showing remarkable performance in reproducing explicit spatial details and maintaining original spectral features. Moreover, the developed model works more effectively than the other models over heterogeneous landscapes, which are usually considered a challenging application scenario. In summary, the fusion model can sharpen the Sentinel-2 20/60-m bands, and the resulting all-band 10-m data allow image analysis and geoscience applications to be carried out genuinely at 10-m resolution.
Record number: A2023-063
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.12.017
Online publication date: 01/01/2023
Online: https://doi.org/10.1016/j.isprsjprs.2022.12.017
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102392
in ISPRS Journal of photogrammetry and remote sensing > vol 196 (February 2023) . - pp 16 - 31 [article]
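The core fusion idea in this abstract, bringing the coarse bands onto the 10-m grid and letting a CNN correct them with guidance from the fine bands, can be sketched with a toy residual fusion network. Band counts, layer widths, and the class name are illustrative assumptions; this is a simplified stand-in, not the authors' hierarchical fusion network (HFN).

import torch
from torch import nn
import torch.nn.functional as F

class TinyFusionNet(nn.Module):
    """Sharpen upsampled coarse bands by fusing them with the fine (10-m) bands."""

    def __init__(self, n_fine: int = 4, n_coarse: int = 6, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_fine + n_coarse, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, n_coarse, 3, padding=1),
        )

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Interpolate the coarse bands to the 10-m grid, then predict a residual
        # correction from the concatenation of fine and upsampled coarse bands.
        up = F.interpolate(coarse, size=fine.shape[-2:], mode="bicubic", align_corners=False)
        return up + self.net(torch.cat([fine, up], dim=1))

# Toy usage: four 10-m bands at 128x128 and six 20-m bands at 64x64.
model = TinyFusionNet()
print(model(torch.randn(1, 4, 128, 128), torch.randn(1, 6, 64, 64)).shape)  # (1, 6, 128, 128)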
Further documents in this category:
Large-scale burn severity mapping in multispectral imagery using deep semantic segmentation models / Xikun Hu in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
Multi-nomenclature, multi-resolution joint translation: an application to land-cover mapping / Luc Baudoux in International journal of geographical information science IJGIS, vol 37 n° 2 (February 2023)
A CNN based approach for the point-light photometric stereo problem / Fotios Logothetis in International journal of computer vision, vol 131 n° 1 (January 2023)
Forest road extraction from orthophoto images by convolutional neural networks / Erhan Çalişkan in Geocarto international, vol 38, issue unknown ([01/01/2023])
Large-scale individual building extraction from open-source satellite imagery via super-resolution-based instance segmentation approach / Shenglong Chen in ISPRS Journal of photogrammetry and remote sensing, vol 195 (January 2023)
Modern vectorization and alignment of historical maps: An application to Paris Atlas (1789-1950) / Yizi Chen (2023)
MTMGNN: Multi-time multi-graph neural network for metro passenger flow prediction / Du Yin in Geoinformatica, vol 27 n° 1 (January 2023)
Solid waste mapping based on very high resolution remote sensing imagery and a novel deep learning approach / Bowen Niu in Geocarto international, vol 38 n° 1 ([01/01/2023])
Wavelet-like denoising of GNSS data through machine learning. Application to the time series of the Campi Flegrei volcanic area (Southern Italy) / Rolando Carbonari in Geomatics, Natural Hazards and Risk, vol 14 n° 1 (2023)