Descripteur
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal
classification par réseau neuronal
Documents disponibles dans cette catégorie (338)
A pipeline for automated processing of Corona KH-4 (1962-1972) stereo imagery / Sajid Ghuffar in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
[article]
Titre : A pipeline for automated processing of Corona KH-4 (1962-1972) stereo imagery Type de document : Article/Communication Auteurs : Sajid Ghuffar, Auteur ; Tobias Bolch, Auteur ; Ewelina Rupnik, Auteur ; Atanu Bhattacharya, Auteur Année de publication : 2022 Article en page(s) : pp Note générale : bibliographie
voir aussi https://research-repository.st-andrews.ac.uk/bitstream/10023/26124/1/Ghuffar_2022_IEEE_TGRS_Pipeline_automated_processing_AAM.pdf
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement d'images
[Termes IGN] apprentissage profond
[Termes IGN] chaîne de traitement
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] compensation par faisceaux
[Termes IGN] géométrie de l'image
[Termes IGN] géométrie épipolaire
[Termes IGN] glacier
[Termes IGN] Himalaya
[Termes IGN] image Corona
[Termes IGN] image panoramique
[Termes IGN] MNS SRTM
[Termes IGN] modèle numérique de surface
[Termes IGN] modèle stéréoscopique
[Termes IGN] point d'appui
Résumé : (auteur) The Corona KH-4 reconnaissance satellite missions from 1962-1972 acquired panoramic stereo imagery with a high spatial resolution of 1.8-7.5 m. The potential of the 800,000+ declassified Corona images has not been leveraged due to the complexities arising from the handling of panoramic imaging geometry, film distortions and the limited availability of the metadata required for georeferencing of the Corona imagery. This paper presents Corona Stereo Pipeline (CoSP): a pipeline for processing of Corona KH-4 stereo panoramic imagery. CoSP utilizes a deep-learning-based feature matcher, SuperGlue, to automatically match feature points between Corona KH-4 images and recent satellite imagery to generate Ground Control Points (GCPs). To model the imaging geometry and the scanning motion of the panoramic KH-4 cameras, a rigorous camera model consisting of modified collinearity equations with time-dependent exterior orientation parameters is employed. The results show that, using the entire frame of the Corona image, bundle adjustment with well-distributed GCPs results in an average standard deviation (SD) of less than 2 pixels. We evaluate fiducial marks on the Corona films and show that pre-processing the Corona images to compensate for film bending improves the accuracy. We further assess a polynomial epipolar resampling method for rectification of Corona stereo images. The distortion pattern of image residuals of GCPs and of y-parallax in epipolar resampled images suggests film distortions due to long-term storage as the likely cause of systematic deviations. Compared to the SRTM DEM, the Corona DEM computed using CoSP achieved a Normalized Median Absolute Deviation (NMAD) of elevation differences of ∼4 m over an area of approx. 4000 km².
We show that the proposed pipeline can be applied to sequences of complex scenes involving high-relief and glacierized terrain and that the resulting DEMs can be used to compute long-term glacier elevation changes over large areas. Numéro de notice : A2022-952 Affiliation des auteurs : UGE-LASTIG+Ext (2020- ) Autre URL associée : vers ArXiv Thématique : IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2022.3200151 Date de publication en ligne : 19/08/2022 En ligne : https://doi.org/10.1109/TGRS.2022.3200151 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103286
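The rigorous camera model mentioned in the abstract can be sketched in generic form (the paper's exact parameterization is not reproduced here). The collinearity equations relate a ground point (X, Y, Z) to its image coordinates through the projection centre and rotation matrix; for a panoramic scan, these exterior orientation parameters become functions of the scan time t:

```latex
\begin{aligned}
x(t) &= -f\,\frac{r_{11}(t)\,\Delta X + r_{21}(t)\,\Delta Y + r_{31}(t)\,\Delta Z}
                 {r_{13}(t)\,\Delta X + r_{23}(t)\,\Delta Y + r_{33}(t)\,\Delta Z},\\[4pt]
y(t) &= -f\,\frac{r_{12}(t)\,\Delta X + r_{22}(t)\,\Delta Y + r_{32}(t)\,\Delta Z}
                 {r_{13}(t)\,\Delta X + r_{23}(t)\,\Delta Y + r_{33}(t)\,\Delta Z},\\[4pt]
\Delta X &= X - X_S(t),\quad \Delta Y = Y - Y_S(t),\quad \Delta Z = Z - Z_S(t),
\end{aligned}
```

with, for example, low-order motion models such as $X_S(t) = X_S^0 + \dot{X}_S\,t$ for each exterior orientation parameter. Bundle adjustment then estimates these time-dependent parameters by minimizing reprojection errors at the GCPs, which is where the sub-2-pixel standard deviations quoted above are measured.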
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 8 (August 2022) . - pp [article]
Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
[article]
Titre : Transfer learning from citizen science photographs enables plant species identification in UAV imagery Type de document : Article/Communication Auteurs : Salim Soltani, Auteur ; Hannes Feilhauer, Auteur ; Robbert Duker, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 100016 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] base de données naturalistes
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] distribution spatiale
[Termes IGN] données localisées des bénévoles
[Termes IGN] espèce végétale
[Termes IGN] filtrage de la végétation
[Termes IGN] identification de plantes
[Termes IGN] image captée par drone
[Termes IGN] orthoimage couleur
[Termes IGN] science citoyenne
[Termes IGN] segmentation sémantique
Résumé : (auteur) Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular with data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could reveal great potential for application to bird's-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We first investigate whether we can use crowd-sourced plant photographs for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Second, we test whether the predictive performance can be increased by a priori selecting photographs that share a more similar perspective to the remote sensing data. We used two case studies to test our proposed approach with multiple RGB orthoimages acquired from UAV with the target plant species Fallopia japonica and Portulacaria afra, respectively.
Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively anticipate a common bottleneck for vegetation assessments and provides an example of how we can effectively harness the ever-increasing availability of crowd-sourced and big data for remote sensing applications. Numéro de notice : A2022-488 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.1016/j.ophoto.2022.100016 Date de publication en ligne : 23/05/2022 En ligne : https://doi.org/10.1016/j.ophoto.2022.100016 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100956
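As a minimal illustration of how a classifier trained on ground-level photographs could be applied to a UAV orthoimage, the mosaic is typically cut into overlapping patches that are classified one by one. The helper below is a hypothetical sketch (the function name and parameters are ours; the authors' actual workflow and CNN are not reproduced):

```python
import numpy as np

def tile_orthoimage(image, patch=128, stride=64):
    """Cut an RGB orthoimage array of shape (H, W, 3) into overlapping patches.

    Hypothetical helper: the paper maps species by classifying image
    windows with a CNN trained on iNaturalist photographs; the CNN
    itself is omitted here.
    """
    h, w = image.shape[:2]
    patches, origins = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            origins.append((y, x))  # remember where each patch came from
    return np.stack(patches), origins
```

Each patch would then be passed to the trained classifier, and the per-patch predictions reassembled into a species map using the stored origins.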
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 5 (August 2022) . - n° 100016 [article]
A model development on GIS-driven data to predict temporal daily collision through integrating Discrete Wavelet Transform (DWT) and Artificial Neural Network (ANN) algorithms; case study: Tehran-Qazvin freeway / Reza Sanayeia in Geocarto international, vol 37 n° 14 ([20/07/2022])
[article]
Titre : A model development on GIS-driven data to predict temporal daily collision through integrating Discrete Wavelet Transform (DWT) and Artificial Neural Network (ANN) algorithms; case study: Tehran-Qazvin freeway Type de document : Article/Communication Auteurs : Reza Sanayeia, Auteur ; Alireza Vafaeinejad, Auteur ; Jalal Karami, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 4141 - 4157 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications SIG
[Termes IGN] accident de la route
[Termes IGN] autocorrélation
[Termes IGN] autoroute
[Termes IGN] classification par Perceptron multicouche
[Termes IGN] modèle de simulation
[Termes IGN] réseau neuronal artificiel
[Termes IGN] système d'information géographique
[Termes IGN] Téhéran
[Termes IGN] transformation en ondelettes
Résumé : (auteur) The aim of this study is to develop a model to predict temporal daily collisions by integrating Discrete Wavelet Transform (DWT) and Artificial Neural Network (ANN) algorithms. As a case study, the integrated model was tested on 1097 daily traffic collision records from the Karaj-Qazvin freeway from 2009 to 2013, and the results were compared with a conventional ANN prediction model. In this method, initially, the raw collision data were analyzed, normalized, and classified via a Geographical Information System (GIS). The Partial Autocorrelation Function (PACF) was also utilized to evaluate the temporal autocorrelation of the consecutive daily data. The results of this study showed that the proposed integrated DWT-ANN method provided higher predictive accuracy for daily traffic collisions than the ANN model, increasing the coefficient of determination (R²) from 0.66 to 0.82. Numéro de notice : A2022-650 Affiliation des auteurs : non IGN Thématique : GEOMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2021.1871669 Date de publication en ligne : 19/01/2021 En ligne : https://doi.org/10.1080/10106049.2021.1871669 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101472
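To illustrate the DWT component of such a hybrid model, a single level of the simplest wavelet (Haar) splits a daily count series into a smooth approximation and a detail signal; sub-series of this kind are what the ANN would then be trained on. This is an illustrative sketch only, since the authors' wavelet choice, decomposition depth, and implementation are not given in the record:

```python
import math

def haar_dwt(series):
    """One-level Haar discrete wavelet transform of an even-length series.

    Returns (approximation, detail): scaled pairwise sums capture the
    low-frequency trend, scaled pairwise differences the fluctuations.
    """
    s = 1 / math.sqrt(2)
    approx = [s * (series[i] + series[i + 1]) for i in range(0, len(series), 2)]
    detail = [s * (series[i] - series[i + 1]) for i in range(0, len(series), 2)]
    return approx, detail
```

A locally constant series yields zero detail coefficients, while day-to-day spikes in the counts show up in the detail channel, which is what makes the decomposition useful as an ANN pre-processing step.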
in Geocarto international > vol 37 n° 14 [20/07/2022] . - pp 4141 - 4157 [article]
Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks / Gensheng Hu in Geocarto international, vol 37 n° 12 ([01/07/2022])
[article]
Titre : Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks Type de document : Article/Communication Auteurs : Gensheng Hu, Auteur ; Yanqiu Zhu, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 3520 - 3539 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] Chine
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image captée par drone
[Termes IGN] Pinus (genre)
[Termes IGN] santé des forêts
Résumé : (auteur) This study presents a method that uses high-resolution remote sensing images collected by an unmanned aerial vehicle (UAV) and combines MobileNet and Faster R-CNN for detecting diseased pine trees. MobileNet is used to remove backgrounds to reduce the interference of background information. Faster R-CNN is adopted to distinguish between diseased and healthy pine trees. The number of training samples is expanded because of the insufficient number of available UAV images. Experimental results show that the proposed method is better than traditional machine learning approaches, such as support vector machine and AdaBoost, and DCNN methods such as AlexNet, Inception and Faster R-CNN. Through sample expansion and background removal, the proposed method achieves effective detection of diseased pine trees in UAV images by using deep learning technology. Numéro de notice : A2022-588 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE/INFORMATIQUE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2020.1864025 Date de publication en ligne : 06/01/2021 En ligne : https://doi.org/10.1080/10106049.2020.1864025 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101362
in Geocarto international > vol 37 n° 12 [01/07/2022] . - pp 3520 - 3539 [article]
Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition / Tiantian Yan in Pattern recognition, vol 127 (July 2022)
[article]
Titre : Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition Type de document : Article/Communication Auteurs : Tiantian Yan, Auteur ; Jian Shi, Auteur ; Haojie Li, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 108629 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse discriminante
[Termes IGN] arbre aléatoire minimum
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de données
[Termes IGN] granularité d'image
[Termes IGN] image à basse résolution
[Termes IGN] image à haute résolution
[Termes IGN] relation sémantique
[Termes IGN] texture d'image
Résumé : (auteur) Existing methods of fine-grained image recognition mainly focus on learning subtle yet discriminative features from high-resolution input. However, their performance deteriorates significantly when they are applied to low-quality images, because many discriminative details are missing. We propose a discriminative information restoration and extraction network, termed DRE-Net, to address the problem of low-resolution fine-grained image recognition, which has widespread application potential, such as shelf auditing and surveillance scenarios. DRE-Net is the first framework for weakly supervised low-resolution fine-grained image recognition and consists of two sub-networks: (1) a fine-grained discriminative information restoration sub-network (FDR) and (2) a recognition sub-network with the semantic relation distillation loss (SRD-loss). The first module utilizes the structural characteristic of the minimum spanning tree (MST) to establish context information for each pixel from the spatial structures between that pixel and the other pixels, which helps the FDR focus on and restore critical texture details. The second module employs the SRD-loss to calibrate the recognition sub-network by transferring the correct relationships between every two pixels on the feature map. Meanwhile, the SRD-loss further prompts the FDR to recover reliable and accurate fine-grained details and guides the recognition sub-network to perceive discriminative features from the correct relationships. Extensive experiments on three benchmark datasets and one retail product dataset demonstrate the effectiveness of our proposed framework.
Numéro de notice : A2022-555 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1016/j.patcog.2022.108629 Date de publication en ligne : 06/03/2022 En ligne : https://doi.org/10.1016/j.patcog.2022.108629 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101168
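The MST machinery that the FDR sub-network relies on can be sketched in generic form. The snippet below runs Prim's algorithm on an arbitrary weighted graph; in the paper the vertices would be pixels and the weights pixel dissimilarities, details that are not reproduced here:

```python
import heapq

def prim_mst(n, edges):
    """Total weight of a minimum spanning tree via Prim's algorithm.

    Generic sketch of the MST step used to build per-pixel context.
    n: number of vertices (graph assumed connected).
    edges: iterable of (u, v, w) with 0 <= u, v < n.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen = [False] * n
    heap = [(0, 0)]  # (weight, vertex), grow the tree from vertex 0
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if seen[u]:
            continue
        seen[u] = True
        total += w
        for wv, v in adj[u]:
            if not seen[v]:
                heapq.heappush(heap, (wv, v))
    return total
```

Because an MST keeps only the cheapest connections, paths along the tree link each pixel to the others through low-dissimilarity neighbours, which is what lets the restoration module aggregate context along perceptually coherent structures.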
in Pattern recognition > vol 127 (July 2022) . - n° 108629 [article]
Estimating generalized measures of local neighbourhood context from multispectral satellite images using a convolutional neural network / Alex David Singleton in Computers, Environment and Urban Systems, vol 95 (July 2022)
Global forecasting of ionospheric vertical total electron contents via ConvLSTM with spectrum analysis / Jinpei Chen in GPS solutions, vol 26 n° 3 (July 2022)
Improving remote sensing classification: A deep-learning-assisted model / Tsimur Davydzenka in Computers & geosciences, vol 164 (July 2022)
Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Encoder-decoder structure with multiscale receptive field block for unsupervised depth estimation from monocular video / Songnan Chen in Remote sensing, vol 14 n° 12 (June-2 2022)
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
Artificial intelligence techniques in extracting building and tree footprints using aerial imagery and LiDAR data / Saeideh Sahebi Vayghan in Geocarto international, vol 37 n° 10 ([01/06/2022])
Beyond single receptive field: A receptive field fusion-and-stratification network for airborne laser scanning point cloud classification / Yongqiang Mao in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
Detecting interchanges in road networks using a graph convolutional network approach / Min Yang in International journal of geographical information science IJGIS, vol 36 n° 6 (June 2022)
Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)