Descripteur
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal > classification par réseau neuronal convolutif
Documents disponibles dans cette catégorie (336)
Mapping land-use intensity of grasslands in Germany with machine learning and Sentinel-2 time series / Maximilian Lange in Remote sensing of environment, vol 277 (August 2022)
[article]
Titre : Mapping land-use intensity of grasslands in Germany with machine learning and Sentinel-2 time series Type de document : Article/Communication Auteurs : Maximilian Lange, Auteur ; Hannes Feilhauer, Auteur ; Ingolf Kühn, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 112888 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] Allemagne
[Termes IGN] apprentissage automatique
[Termes IGN] bande spectrale
[Termes IGN] carte d'utilisation du sol
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] échantillonnage de données
[Termes IGN] image Sentinel-MSI
[Termes IGN] indice de végétation
[Termes IGN] prairie
[Termes IGN] série temporelle
Résumé : (auteur) Information on grassland land-use intensity (LUI) is crucial for understanding trends and dynamics in biodiversity, ecosystem functioning, earth system science and environmental monitoring. LUI is a major driver of numerous environmental processes and indicators, such as primary production, nitrogen deposition and resilience to climate extremes. However, large-extent, high-resolution data on grassland LUI are rare. New satellite generations, such as Copernicus Sentinel-2, enable spatially comprehensive detection of the mainly subtle changes induced by land-use intensification thanks to their fine spatial and temporal resolution. We developed a methodology quantifying key parameters of grassland LUI, such as grazing intensity, mowing frequency and fertiliser application, across Germany using Convolutional Neural Networks (CNN) on Sentinel-2 satellite data with 20 m × 20 m spatial resolution. Subsequently, these land-use components were used to calculate a continuous LUI index. Predictions of LUI and its components were validated using comprehensive in situ grassland management data. A feature contribution analysis using Shapley values substantiates the applicability of the methodology by revealing a high relevance of springtime satellite observations and of spectral bands related to vegetation health and structure. We achieved an overall classification accuracy of up to 66% for grazing intensity, 68% for mowing, 85% for fertilisation, and an r2 of 0.82 for the subsequently derived LUI index. We evaluated the methodology's robustness with a spatial 3-fold cross-validation by training and predicting on geographically distinctly separated regions. Spatial transferability was assessed by delineating the models' area of applicability. The presented methodology enables high-resolution, large-extent mapping of the land-use intensity of grasslands.
Numéro de notice : A2022-468 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1016/j.rse.2022.112888 Date de publication en ligne : 13/05/2022 En ligne : https://doi.org/10.1016/j.rse.2022.112888 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100805
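The spatial 3-fold cross-validation described in the abstract above, where models are trained and evaluated on geographically separated regions, can be sketched as follows. This is a minimal, hypothetical illustration in pure Python; the region names and sample layout are invented, not taken from the paper.

```python
# Sketch of spatial k-fold cross-validation: whole regions are held out per
# fold, so samples from the same region never appear in both train and test.
# Region labels below are illustrative only.

def spatial_kfold(regions, k=3):
    """Yield (train, test) index lists, holding out whole regions per fold."""
    unique_regions = sorted(set(regions))
    folds = [unique_regions[i::k] for i in range(k)]  # round-robin region split
    for held_out in folds:
        test = [i for i, r in enumerate(regions) if r in held_out]
        train = [i for i, r in enumerate(regions) if r not in held_out]
        yield train, test

# toy example: 6 samples from 3 regions
regions = ["centre", "centre", "north", "north", "south", "south"]
for train, test in spatial_kfold(regions, k=3):
    # no region leaks between train and test
    assert not {regions[i] for i in train} & {regions[i] for i in test}
```

Holding out by region rather than by random sample is what makes the reported accuracies a test of spatial transferability rather than of interpolation.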
in Remote sensing of environment > vol 277 (August 2022) . - n° 112888[article]
A pipeline for automated processing of Corona KH-4 (1962-1972) stereo imagery / Sajid Ghuffar in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
[article]
Titre : A pipeline for automated processing of Corona KH-4 (1962-1972) stereo imagery Type de document : Article/Communication Auteurs : Sajid Ghuffar, Auteur ; Tobias Bolch, Auteur ; Ewelina Rupnik, Auteur ; Atanu Bhattacharya, Auteur Année de publication : 2022 Article en page(s) : pp Note générale : bibliographie
voir aussi https://research-repository.st-andrews.ac.uk/bitstream/10023/26124/1/Ghuffar_2022_IEEE_TGRS_Pipeline_automated_processing_AAM.pdf
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement d'images
[Termes IGN] apprentissage profond
[Termes IGN] chaîne de traitement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] compensation par faisceaux
[Termes IGN] géométrie de l'image
[Termes IGN] géométrie épipolaire
[Termes IGN] glacier
[Termes IGN] Himalaya
[Termes IGN] image Corona
[Termes IGN] image panoramique
[Termes IGN] MNS SRTM
[Termes IGN] modèle numérique de surface
[Termes IGN] modèle stéréoscopique
[Termes IGN] point d'appui
Résumé : (auteur) The Corona KH-4 reconnaissance satellite missions from 1962-1972 acquired panoramic stereo imagery with a high spatial resolution of 1.8-7.5 m. The potential of the 800,000+ declassified Corona images has not been leveraged due to the complexities arising from the handling of panoramic imaging geometry, film distortions and the limited availability of the metadata required for georeferencing of the Corona imagery. This paper presents the Corona Stereo Pipeline (CoSP): a pipeline for processing Corona KH-4 stereo panoramic imagery. CoSP utilizes the deep-learning-based feature matcher SuperGlue to automatically match feature points between Corona KH-4 images and recent satellite imagery to generate Ground Control Points (GCPs). To model the imaging geometry and the scanning motion of the panoramic KH-4 cameras, a rigorous camera model consisting of modified collinearity equations with time-dependent exterior orientation parameters is employed. The results show that, using the entire frame of the Corona image, bundle adjustment with well-distributed GCPs results in an average standard deviation (SD) of less than 2 pixels. We evaluate fiducial marks on the Corona films and show that pre-processing the Corona images to compensate for film bending improves the accuracy. We further assess a polynomial epipolar resampling method for rectification of Corona stereo images. The distortion pattern of the image residuals of GCPs and of the y-parallax in epipolar-resampled images suggests film distortions due to long-term storage as the likely cause of systematic deviations. Compared to the SRTM DEM, the Corona DEM computed using CoSP achieved a Normalized Median Absolute Deviation (NMAD) of elevation differences of ≈ 4 m over an area of approx. 4000 km².
We show that the proposed pipeline can be applied to sequences of complex scenes involving high relief and glacierized terrain and that the resulting DEMs can be used to compute long-term glacier elevation changes over large areas. Numéro de notice : A2022-952 Affiliation des auteurs : UGE-LASTIG+Ext (2020- ) Autre URL associée : vers ArXiv Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2022.3200151 Date de publication en ligne : 19/08/2022 En ligne : https://doi.org/10.1109/TGRS.2022.3200151 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103286
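The NMAD quoted in the abstract above is a standard robust spread measure for DEM elevation differences: NMAD = 1.4826 * median(|dh - median(dh)|). A minimal sketch in pure Python, assuming only a list of per-pixel elevation differences; the sample values are illustrative, not from the paper.

```python
# NMAD (Normalized Median Absolute Deviation) of DEM elevation differences.
# The 1.4826 factor makes NMAD comparable to a standard deviation for
# normally distributed errors, while staying robust to outliers.

def _median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def nmad(dh):
    """Robust spread of elevation differences dh (same units as dh)."""
    m = _median(dh)
    return 1.4826 * _median([abs(x - m) for x in dh])

# illustrative values: symmetric differences around zero
print(nmad([-2.0, -1.0, 0.0, 1.0, 2.0]))  # prints 1.4826
```

Unlike the plain standard deviation, a handful of gross blunders (e.g. matching failures over glaciers) barely moves the NMAD, which is why it is preferred for DEM accuracy reporting.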
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 8 (August 2022) . - pp[article]
Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
[article]
Titre : Transfer learning from citizen science photographs enables plant species identification in UAV imagery Type de document : Article/Communication Auteurs : Salim Soltani, Auteur ; Hannes Feilhauer, Auteur ; Robbert Duker, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 100016 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] base de données naturalistes
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] distribution spatiale
[Termes IGN] données localisées des bénévoles
[Termes IGN] espèce végétale
[Termes IGN] filtrage de la végétation
[Termes IGN] identification de plantes
[Termes IGN] image captée par drone
[Termes IGN] orthoimage couleur
[Termes IGN] science citoyenne
[Termes IGN] segmentation sémantique
Résumé : (auteur) Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could reveal great potential for application to bird's-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We therefore investigate, first, whether we can use crowd-sourced plant photographs for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery, and second, whether the predictive performance can be increased by a priori selecting photographs that share a perspective more similar to the remote sensing data. We used two case studies to test our proposed approach with multiple RGB orthoimages acquired from UAV, with the target plant species Fallopia japonica and Portulacaria afra respectively.
Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively alleviate a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications. Numéro de notice : A2022-488 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.1016/j.ophoto.2022.100016 Date de publication en ligne : 23/05/2022 En ligne : https://doi.org/10.1016/j.ophoto.2022.100016 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100956
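The a priori photo selection described in the abstract above can be sketched as a simple metadata filter: keep only citizen-science photos whose viewing geometry resembles the nadir perspective of UAV orthoimages, then train on the filtered set. The field names ("view_angle_deg", "label") and the threshold are hypothetical, not taken from iNaturalist or the paper.

```python
# Hypothetical pre-filtering of crowd-sourced photos by acquisition
# perspective before CNN training: near-nadir views better match the
# bird's-eye geometry of UAV orthoimagery.

def select_nadir_like(photos, max_off_nadir=30.0):
    """Keep photos taken looking down, within max_off_nadir degrees."""
    return [p for p in photos if p["view_angle_deg"] <= max_off_nadir]

photos = [
    {"label": "Fallopia japonica", "view_angle_deg": 10.0},   # near-nadir
    {"label": "Fallopia japonica", "view_angle_deg": 85.0},   # side view
    {"label": "Portulacaria afra", "view_angle_deg": 25.0},
]
train_set = select_nadir_like(photos)
assert len(train_set) == 2  # the side-view photo is dropped
```

The filtered subset is smaller, trading training volume for a closer match between training and deployment perspectives, which is the trade-off the study evaluates.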
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 5 (August 2022) . - n° 100016[article]
Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks / Gensheng Hu in Geocarto international, vol 37 n° 12 ([01/07/2022])
[article]
Titre : Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks Type de document : Article/Communication Auteurs : Gensheng Hu, Auteur ; Yanqiu Zhu, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 3520 - 3539 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] Chine
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image captée par drone
[Termes IGN] Pinus (genre)
[Termes IGN] santé des forêts
Résumé : (auteur) This study presents a method that uses high-resolution remote sensing images collected by an unmanned aerial vehicle (UAV) and combines MobileNet and Faster R-CNN for detecting diseased pine trees. MobileNet is used to remove backgrounds in order to reduce the interference of background information. Faster R-CNN is adopted to distinguish between diseased and healthy pine trees. The number of training samples is expanded because the number of available UAV images is insufficient. Experimental results show that the proposed method outperforms traditional machine learning approaches, such as support vector machines and AdaBoost, as well as deep convolutional neural network (DCNN) methods such as AlexNet, Inception and Faster R-CNN. Through sample expansion and background removal, the proposed method achieves effective detection of diseased pine trees in UAV images using deep learning technology. Numéro de notice : A2022-588 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2020.1864025 Date de publication en ligne : 06/01/2021 En ligne : https://doi.org/10.1080/10106049.2020.1864025 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101362
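Detector outputs such as those from the Faster R-CNN stage above are conventionally scored with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal, generic helper, not taken from the paper; the (x1, y1, x2, y2) corner format is an assumption.

```python
# Standard IoU (intersection-over-union) between two axis-aligned boxes,
# the usual overlap criterion for matching detections (e.g. diseased-tree
# boxes) to ground truth. Boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.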
in Geocarto international > vol 37 n° 12 [01/07/2022] . - pp 3520 - 3539[article]
Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition / Tiantian Yan in Pattern recognition, vol 127 (July 2022)
[article]
Titre : Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition Type de document : Article/Communication Auteurs : Tiantian Yan, Auteur ; Jian Shi, Auteur ; Haojie Li, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 108629 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse discriminante
[Termes IGN] arbre aléatoire minimum
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de données
[Termes IGN] granularité d'image
[Termes IGN] image à basse résolution
[Termes IGN] image à haute résolution
[Termes IGN] relation sémantique
[Termes IGN] texture d'image
Résumé : (auteur) Existing methods for fine-grained image recognition mainly devote themselves to learning subtle yet discriminative features from high-resolution input. However, their performance deteriorates significantly when they are used on low-quality images, because many discriminative details are missing. We propose a discriminative information restoration and extraction network, termed DRE-Net, to address the problem of low-resolution fine-grained image recognition, which has widespread application potential in areas such as shelf auditing and surveillance. DRE-Net is the first framework for weakly supervised low-resolution fine-grained image recognition and consists of two sub-networks: (1) a fine-grained discriminative information restoration sub-network (FDR) and (2) a recognition sub-network with a semantic relation distillation loss (SRD-loss). The first module exploits the structural characteristics of the minimum spanning tree (MST) to establish context information for each pixel from the spatial structures between that pixel and the others, which helps FDR focus on and restore critical texture details. The second module employs the SRD-loss to calibrate the recognition sub-network by transferring the correct relationships between every two pixels on the feature map. Meanwhile, the SRD-loss further prompts the FDR to recover reliable and accurate fine-grained details and guides the recognition sub-network to perceive discriminative features from the correct relationships.m Extensive experiments on three benchmark datasets and one retail product dataset demonstrate the effectiveness of our proposed framework.
Numéro de notice : A2022-555 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1016/j.patcog.2022.108629 Date de publication en ligne : 06/03/2022 En ligne : https://doi.org/10.1016/j.patcog.2022.108629 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101168
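The MST-based context construction sketched in the abstract above can be illustrated with Prim's algorithm on a 4-connected pixel grid, using absolute intensity differences as edge weights; each pixel's path in the tree then links it preferentially to similar neighbours. A rough, purely illustrative sketch; DRE-Net's actual construction is more involved.

```python
# Prim's algorithm on a 4-connected pixel grid: a toy version of the
# minimum-spanning-tree structure used to propagate per-pixel context.
# Edge weight = absolute intensity difference between neighbouring pixels.

import heapq

def grid_mst(img):
    """Return MST edges ((r, c), (r, c)) for a 2D list of intensities."""
    h, w = len(img), len(img[0])
    seen = {(0, 0)}
    heap = []

    def push_neighbours(r, c):
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen:
                weight = abs(img[r][c] - img[nr][nc])
                heapq.heappush(heap, (weight, (r, c), (nr, nc)))

    push_neighbours(0, 0)
    edges = []
    while heap:
        _, u, v = heapq.heappop(heap)
        if v in seen:
            continue  # stale entry; v was reached via a cheaper edge
        seen.add(v)
        edges.append((u, v))
        push_neighbours(*v)
    return edges

edges = grid_mst([[0, 1], [5, 6]])
assert len(edges) == 3  # a spanning tree of 4 pixels has 3 edges
```

Because low-weight edges are taken first, the tree tends to follow smooth intensity regions, which is what lets tree paths encode which pixels share texture context.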
in Pattern recognition > vol 127 (July 2022) . - n° 108629[article]
Estimating generalized measures of local neighbourhood context from multispectral satellite images using a convolutional neural network / Alex David Singleton in Computers, Environment and Urban Systems, vol 95 (July 2022)
Global forecasting of ionospheric vertical total electron contents via ConvLSTM with spectrum analysis / Jinpei Chen in GPS solutions, vol 26 n° 3 (July 2022)
Improving remote sensing classification: A deep-learning-assisted model / Tsimur Davydzenka in Computers & geosciences, vol 164 (July 2022)
Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Encoder-decoder structure with multiscale receptive field block for unsupervised depth estimation from monocular video / Songnan Chen in Remote sensing, Vol 14 n° 12 (June-2 2022)
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
Beyond single receptive field: A receptive field fusion-and-stratification network for airborne laser scanning point cloud classification / Yongqiang Mao in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
Detecting interchanges in road networks using a graph convolutional network approach / Min Yang in International journal of geographical information science IJGIS, vol 36 n° 6 (June 2022)
Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)
Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
Invariant structure representation for remote sensing object detection based on graph modeling / Zicong Zhu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
Line-based deep learning method for tree branch detection from digital images / Rodrigo L. S. Silva in International journal of applied Earth observation and geoinformation, vol 110 (June 2022)
Precise crop classification of hyperspectral images using multi-branch feature fusion and dilation-based MLP / Haibin Wu in Remote sensing, vol 14 n° 11 (June-1 2022)
Deep learning for the detection of early signs for forest damage based on satellite imagery / Dennis Wittich in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
Railway lidar semantic segmentation with axially symmetrical convolutional learning / Antoine Manier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
Research on automatic identification method of terraces on the Loess plateau based on deep transfer learning / Mingge Yu in Remote sensing, vol 14 n° 10 (May-2 2022)
3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation / Heyang Thomas Li in The Visual Computer, vol 38 n° 5 (May 2022)
A context feature enhancement network for building extraction from high-resolution remote sensing imagery / Jinzhi Chen in Remote sensing, vol 14 n° 9 (May-1 2022)
Efficient convolutional neural architecture search for LiDAR DSM classification / Aili Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
Revising cadastral data on land boundaries using deep learning in image-based mapping / Bujar Fetai in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)
Unsupervised multi-view CNN for salient view selection and 3D interest point detection / Ran Song in International journal of computer vision, vol 130 n° 5 (May 2022)
Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data / Michele Dalponte in Remote sensing, vol 14 n° 8 (April-2 2022)
Assessing surface drainage conditions at the street and neighborhood scale: A computer vision and flow direction method applied to lidar data / Cheng-Chun Lee in Computers, Environment and Urban Systems, vol 93 (April 2022)
Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data / Andras Balazs in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
A convolution neural network for forest leaf chlorophyll and carotenoid estimation using hyperspectral reflectance / Shuo Shi in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation / Yingjie Hu in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
Exploring scientific literature by textual and image content using DRIFT / Ximena Pocco in Computers and graphics, vol 103 (April 2022)
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Research on machine intelligent perception of urban geographic location based on high resolution remote sensing images / Jun Chen in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 4 (April 2022)
Spatially oriented convolutional neural network for spatial relation extraction from natural language texts / Qinjun Qiu in Transactions in GIS, vol 26 n° 2 (April 2022)
Neural map style transfer exploration with GANs / Sidonie Christophe in International journal of cartography, vol 8 n° 1 (March 2022)
Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach / Linyuan Li in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
Using street view images to identify road noise barriers with ensemble classification model and geospatial analysis / Kai Zhang in Sustainable Cities and Society, vol 78 (March 2022)
Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests / Chong Zhang in Remote sensing, vol 14 n° 4 (February-2 2022)
A combination of convolutional and graph neural networks for regularized road surface extraction / Jingjing Yan in IEEE Transactions on geoscience and remote sensing, vol 60 n° 2 (February 2022)
Decision fusion of deep learning and shallow learning for marine oil spill detection / Junfang Yang in Remote sensing, vol 14 n° 3 (February-1 2022)
GazPNE: annotation-free deep learning for place name extraction from microblogs leveraging gazetteer and synthetic data by rules / Xuke Hu in International journal of geographical information science IJGIS, vol 36 n° 2 (February 2022)
GisGCN: a visual graph-based framework to match geographical areas through time / Margarita Khokhlova in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)
Monthly mapping of forest harvesting using dense time series Sentinel-1 SAR imagery and deep learning / Feng Zhao in Remote sensing of environment, vol 269 (February 2022)
PCEDNet: a lightweight neural network for fast and interactive edge detection in 3D point clouds / Chems-Eddine Himeur in ACM Transactions on Graphics, TOG, Vol 41 n° 1 (February 2022)
Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network / Ekrem Saralioglu in Geocarto international, vol 37 n° 2 ([15/01/2022])