Descriptor
IGN Terms > mathematics > mathematical statistics > data analysis > classification > classification par réseau neuronal (neural network classification)
Documents available in this category (236)
Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data / Andras Balazs in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
[article]
Title: Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data
Document type: Article/Communication
Authors: Andras Balazs, Author; Eero Liski, Author; Sakari Tuominen, Author
Year of publication: 2022
Article pagination: n° 100012
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] algorithme génétique
[Termes IGN] bois sur pied
[Termes IGN] classification barycentrique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] covariance
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Finlande
[Termes IGN] hauteur des arbres
[Termes IGN] inventaire forestier étranger (données)
[Termes IGN] peuplement forestier
[Termes IGN] réseau neuronal artificiel
[Termes IGN] semis de points
[Termes IGN] volume en bois
Abstract: (author) In the remote sensing of forests, point cloud data from airborne laser scanning contains high-value information for predicting the volume of growing stock and the size of trees. At the same time, laser scanning allows a very high number of potential features to be extracted from the point cloud data for predicting forest variables. In some methods, the features are first extracted by user-defined algorithms and the best features are selected based on supervised learning, whereas both tasks can be carried out automatically by deep learning methods, typically based on deep neural networks. In this study we tested the k-nearest neighbor method combined with a genetic algorithm (k-NN), an artificial neural network (ANN), a 2-dimensional convolutional neural network (2D-CNN) and a 3-dimensional CNN (3D-CNN) for estimating the following forest variables: volume of growing stock, stand mean height and mean diameter. The results indicate no major differences in the accuracy of the tested methods, but the ANN and 3D-CNN generally yielded the lowest RMSE values for the predicted forest variables and the highest R² values between the predicted and observed forest variables. The lowest RMSE scores were 20.3% (3D-CNN), 6.4% (3D-CNN) and 11.2% (ANN), and the highest R² values 0.90 (3D-CNN), 0.95 (3D-CNN) and 0.85 (ANN), for volume of growing stock, stand mean height and mean diameter, respectively. Covariances of all response-variable combinations, for all prediction methods, were lower than the corresponding covariances of the field observations. ANN predictions had the highest covariances for the mean height vs. mean diameter and total growing stock vs. mean diameter combinations, and 3D-CNN for mean height vs. total growing stock. CNNs have a distinct theoretical advantage over the other methods in complex recognition or classification tasks, but exploiting their full potential may require higher-density point clouds than those applied here. Thus, the relatively low density of the point cloud data may have contributed to the somewhat inconclusive ranking of the methods in this study. The input data and computer code are available at: https://github.com/balazsan/ALS_NNs.
Record number: A2022-265
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.ophoto.2022.100012
Online publication date: 12/03/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100263
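The simplest of the compared estimators, k-NN regression on features derived from the point cloud, together with the relative RMSE and R² metrics quoted in the abstract, can be sketched in plain NumPy. The feature and target arrays below are synthetic stand-ins, not the authors' data, and the genetic-algorithm feature selection is omitted.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """Plain k-nearest-neighbours regression: mean target of the k closest samples."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

rng = np.random.default_rng(0)
# Hypothetical stand-level ALS features (e.g. height percentiles, density metrics)
X = rng.normal(size=(200, 10))
y = 150.0 + 40.0 * X[:, 0] + rng.normal(scale=5.0, size=200)  # growing stock, m3/ha

pred = knn_predict(X[:150], y[:150], X[150:])

rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
rel_rmse = 100.0 * rmse / y[150:].mean()  # relative RMSE (%), as reported in the study
r2 = 1.0 - np.sum((y[150:] - pred) ** 2) / np.sum((y[150:] - y[150:].mean()) ** 2)
```

The ANN and CNN variants in the paper replace `knn_predict` with learned models but are scored with the same two metrics.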
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 4 (April 2022) . - n° 100012 [article]

A convolution neural network for forest leaf chlorophyll and carotenoid estimation using hyperspectral reflectance / Shuo Shi in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
[article]
Title: A convolution neural network for forest leaf chlorophyll and carotenoid estimation using hyperspectral reflectance
Document type: Article/Communication
Authors: Shuo Shi, Author; Lu Xu, Author; Wei Gong, Author; et al.
Year of publication: 2022
Article pagination: n° 102719
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] chlorophylle
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] écosystème forestier
[Termes IGN] feuille (végétation)
[Termes IGN] modèle de transfert radiatif
[Termes IGN] processus gaussien
[Termes IGN] réflectance spectrale
[Termes IGN] régression
Abstract: (author) Forest leaf chlorophyll (Cab) and carotenoid (Cxc) are key functional indicators of the state of the forest ecosystem. Current machine learning models based on hyperspectral reflectance are widely applied to estimate leaf Cab and Cxc contents at leaf scale. However, these models achieve reasonable accuracy on non-independent datasets but generalize poorly to independent datasets when estimating leaf Cab and Cxc contents. This prevents hyperspectral remote sensing from fully replacing destructive measurements of leaf Cab and Cxc contents. Thus, an estimation model with high accuracy and satisfactory generalization is needed. Convolutional neural networks (CNNs) have shown good accuracy and generalization in many domains and have the potential to solve the above-mentioned problem. Therefore, this study developed a CNN using one-dimensional hyperspectral reflectance, aiming to improve accuracy and generalization in leaf Cab and Cxc content estimation at leaf scale. The proposed CNN was developed in three steps. First, considering the correlation between leaf Cab and Cxc contents in natural leaves, 2500 physically based samples of leaf reflectance and corresponding Cab and Cxc contents were generated by a leaf radiative transfer model and a multivariate Gaussian distribution. Then, the proposed CNN was built using five strategies based on the AlexNet architecture. Finally, five-fold cross-validation was performed with 70% of the physically based samples to determine the best strategy for developing the proposed CNN. These steps were carried out to give the proposed CNN maximum accuracy and generalization. In addition, the accuracy and generalization of the proposed CNN were tested using a non-independent dataset and an independent dataset, respectively. The proposed CNN was also compared with a back-propagation neural network (BPNN), support vector regression (SVR) and Gaussian process regression (GPR). Results showed that the best CNN could be built with one input, five convolutional, three max-pooling and three fully connected layers. Comprehensively considering accuracy and generalization, the proposed CNN was the best model for leaf Cab and Cxc content estimation compared with BPNN, SVR and GPR. This study provides a development strategy for CNN estimation models using one-dimensional hyperspectral reflectance at leaf scale. The proposed CNN could further promote the practical application of hyperspectral remote sensing in leaf Cab and Cxc content estimation.
Record number: A2022-231
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.jag.2022.102719
Online publication date: 16/02/2022
Online: https://doi.org/10.1016/j.jag.2022.102719
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100119
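The building blocks of such a 1-D CNN, a valid convolution sliding along the reflectance spectrum followed by max-pooling, can be sketched in plain NumPy. The spectrum length, kernel size and band range below are invented for illustration and do not come from the paper.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution of spectrum x with kernel w, plus bias and ReLU."""
    n = len(x) - len(w) + 1
    out = np.array([np.dot(x[i:i + len(w)], w) for i in range(n)]) + b
    return np.maximum(out, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling; trailing remainder samples are dropped."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

rng = np.random.default_rng(1)
spectrum = rng.random(451)      # hypothetical reflectance, e.g. 450-900 nm at 1 nm
w, b = rng.normal(size=7), 0.1  # one kernel; in a trained CNN these are learned
features = max_pool(conv1d(spectrum, w, b))
```

A full network stacks several such convolution/pooling stages and ends in fully connected layers that regress the Cab and Cxc contents.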
in International journal of applied Earth observation and geoinformation > vol 108 (April 2022) . - n° 102719 [article]

Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
[article]
Title: Deep generative model for spatial–spectral unmixing with multiple endmember priors
Document type: Article/Communication
Authors: Shuaikai Shi, Author; Lijun Zhang, Author; Yoann Altmann, Author; et al.
Year of publication: 2022
Article pagination: n° 5527214
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de mélange spectral d’extrémités multiples
[Termes IGN] analyse linéaire des mélanges spectraux
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image hyperspectrale
[Termes IGN] réseau neuronal de graphes
Abstract: (author) Spectral unmixing is an effective tool for mining information at the subpixel level from complex hyperspectral images. To account for the spatially correlated distribution of materials in the scene, many algorithms unmix the data in a spatial–spectral fashion; however, existing models are usually unable to model spectral variability at the same time. In this article, we present a variational autoencoder-based deep generative model for spatial–spectral unmixing (DGMSSU) with endmember variability, which links the generated endmembers to the probability distributions of endmember bundles extracted from the hyperspectral imagery via discriminators. Beyond the convolutional autoencoder-like architecture, which can only model the spatial information within regular patch inputs, DGMSSU can alternatively use graph convolutional networks or self-attention modules to handle irregular but more flexible superpixel inputs. Experimental results on a simulated dataset, as well as two well-known real hyperspectral images, show the superiority of our proposed approach over other state-of-the-art spatial–spectral unmixing methods. Compared to conventional unmixing methods that consider endmember variability, our model generates more accurate endmembers on each subimage through the adversarial training process. The code for this work will be available at https://github.com/shuaikaishi/DGMSSU for the sake of reproducibility.
Record number: A2022-380
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3168712
Online publication date: 18/04/2022
Online: https://doi.org/10.1109/TGRS.2022.3168712
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100645
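The linear mixing model that spectral unmixing methods build on, a pixel spectrum as an abundance-weighted sum of endmember spectra, can be illustrated with a least-squares inversion. The endmember spectra and abundances below are synthetic, and DGMSSU itself replaces this inversion with a variational autoencoder that also models endmember variability.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical endmember spectra: 3 materials observed in 50 bands
E = rng.random((50, 3))
a_true = np.array([0.6, 0.3, 0.1])                # abundances, summing to one
y = E @ a_true + rng.normal(scale=0.01, size=50)  # observed mixed pixel + noise

# Unconstrained least-squares inversion of the linear mixing model y = E a
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
```

Practical unmixing additionally enforces non-negativity and sum-to-one constraints on the abundances, which the plain `lstsq` call does not.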
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 4 (April 2022) . - n° 5527214 [article]

Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
[article]
Title: Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights
Document type: Article/Communication
Authors: Marco Fiorucci, Author; Wouter Baernd Verschoof-van der Vaart, Author; Paolo Soleni, Author; et al.
Year of publication: 2022
Article pagination: n° 1694
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] classification barycentrique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification pixellaire
[Termes IGN] détection d'objet
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] site archéologique
Abstract: (author) Machine-learning-based workflows are increasingly used for the automatic detection of archaeological objects (understood as below-surface sites) in remote sensing data. Despite promising results in the detection phase, there is still no standard set of measures for evaluating the performance of object detection methods, since buried archaeological sites often have distinctive shapes that set them apart from the object types included in mainstream remote sensing datasets (e.g., the Dataset of Object deTection in Aerial images, DOTA). Additionally, archaeological research relies heavily on geospatial information when validating the output of an object detection procedure, a type of information that is not normally considered in regular machine learning validation pipelines. This paper tackles these shortcomings by introducing two novel automatic evaluation measures, namely ‘centroid-based’ and ‘pixel-based’, designed to encode the salient aspects of the archaeologists’ thinking process. To test their usability, an experiment with different object detection deep neural networks was conducted on a LiDAR dataset. The experimental results show that these two automatic measures closely resemble the semi-automatic one currently used by archaeologists and can therefore be adopted as fully automatic evaluation measures in archaeological remote sensing detection. Adoption will facilitate cross-study comparisons and close collaboration between machine learning and archaeological researchers, which in turn will encourage the development of novel human-centred archaeological object detection tools.
Record number: A2022-282
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/rs14071694
Online: https://doi.org/10.3390/rs14071694
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100298
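One plausible reading of a ‘centroid-based’ measure, matching each predicted centroid to the nearest unmatched ground-truth centroid within a distance threshold and scoring precision and recall, can be sketched as follows. The threshold, the greedy matching rule and the example coordinates are assumptions for illustration, not the authors' published definition.

```python
import numpy as np

def centroid_match(pred, truth, max_dist=10.0):
    """Greedy centroid matching for detection evaluation.

    A prediction counts as a true positive if its centroid lies within
    max_dist (map units) of a still-unmatched ground-truth centroid.
    Returns (precision, recall).
    """
    truth = [np.asarray(t, dtype=float) for t in truth]
    tp = 0
    for p in pred:
        p = np.asarray(p, dtype=float)
        dists = [np.linalg.norm(p - t) for t in truth]
        if dists and min(dists) <= max_dist:
            truth.pop(int(np.argmin(dists)))  # each ground truth matched once
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / (tp + len(truth)) if (tp + len(truth)) else 0.0
    return precision, recall

# Two of three detections fall near a true site centroid
p, r = centroid_match([(0, 0), (50, 50), (99, 99)], [(1, 1), (48, 52)])
```

A pixel-based measure would instead compare the rasterized predicted and ground-truth footprints pixel by pixel.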
in Remote sensing > vol 14 n° 7 (April-1 2022) . - n° 1694 [article]

Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation / Yingjie Hu in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
[article]
Title: Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation
Document type: Article/Communication
Authors: Yingjie Hu, Author; Zhipeng Gui, Author; Jimin Wang, Author; et al.
Year of publication: 2022
Article pagination: pp 799 - 821
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Géomatique web
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] descripteur
[Termes IGN] données d'entrainement sans étiquette
[Termes IGN] image cartographique
[Termes IGN] métadonnées
[Termes IGN] projection
[Termes IGN] système d'information géographique
[Termes IGN] Web Map Service
[Termes IGN] web mapping
Abstract: (author) Maps in the form of digital images are widely available in geoportals, Web pages, and other data sources. The metadata of map images, such as spatial extents and place names, are critical for indexing and searching them. However, many map images have either mismatched metadata or no metadata at all. Recent developments in deep learning offer new possibilities for enriching the metadata of map images via image-based information extraction. One major challenge of using deep learning models is that they often require large amounts of training data that have to be manually labeled. To address this challenge, this paper presents a deep learning approach with GIS-based data augmentation that can automatically generate labeled training map images from shapefiles using GIS operations. We use this approach to enrich the metadata of map images by adding spatial extents and place names extracted from the images. We evaluate the GIS-based data augmentation approach by using it to train multiple deep learning models and testing them on two different datasets: a Web Map Service image dataset at the continental scale and an online map image dataset at the state scale. We then discuss the advantages and limitations of the proposed approach.
Record number: A2022-258
Authors' affiliation: non IGN
Theme: GEOMATIQUE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1968407
Online: https://doi.org/10.1080/13658816.2021.1968407
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100231
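The core augmentation idea, rendering windows of a vector layer so that each generated image is automatically labelled with its spatial extent, can be sketched without any GIS library. The point layer, window and raster size below are invented for illustration; the paper works with shapefiles and full GIS rendering operations.

```python
import numpy as np

def render_extent(points, extent, size=64):
    """Rasterize point features falling inside `extent` into a binary image.

    Stand-in for a GIS render step: each (extent, image) pair becomes a
    training example automatically labelled with its spatial extent.
    """
    xmin, ymin, xmax, ymax = extent
    img = np.zeros((size, size), dtype=np.uint8)
    for x, y in points:
        if xmin <= x < xmax and ymin <= y < ymax:
            col = int((x - xmin) / (xmax - xmin) * size)
            row = int((y - ymin) / (ymax - ymin) * size)
            img[size - 1 - row, col] = 1  # image rows grow downward
    return img

rng = np.random.default_rng(3)
features = rng.uniform(0, 100, size=(500, 2))  # hypothetical vector point layer
extent = (20.0, 20.0, 60.0, 60.0)              # one randomly chosen window
img = render_extent(features, extent)          # labelled training image
```

Sampling many such windows yields an arbitrarily large labelled training set with no manual annotation, which is the advantage the paper exploits.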
in International journal of geographical information science IJGIS > vol 36 n° 4 (April 2022) . - pp 799 - 821 [article]

Other documents in this category:
Exploring scientific literature by textual and image content using DRIFT / Ximena Pocco in Computers and graphics, vol 103 (April 2022)
A graph attention network for road marking classification from mobile LiDAR point clouds / Lina Fang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Research on machine intelligent perception of urban geographic location based on high resolution remote sensing images / Jun Chen in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 4 (April 2022)
Spatially oriented convolutional neural network for spatial relation extraction from natural language texts / Qinjun Qiu in Transactions in GIS, vol 26 n° 2 (April 2022)
Neural map style transfer exploration with GANs / Sidonie Christophe in International journal of cartography, vol 8 n° 1 (March 2022)
Simultaneous retrieval of selected optical water quality indicators from Landsat-8, Sentinel-2, and Sentinel-3 / Nima Pahlevan in Remote sensing of environment, vol 270 (March 2022)
Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach / Linyuan Li in International journal of applied Earth observation and geoinformation, vol 107 (March 2022)
Understanding the movement predictability of international travelers using a nationwide mobile phone dataset collected in South Korea / Yang Xu in Computers, Environment and Urban Systems, vol 92 (March 2022)