Descriptor
IGN terms > computer science > artificial intelligence > machine learning > deep learning
deep learning
Documents available in this category (315)
Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data / Michele Dalponte in Remote sensing, vol 14 n° 8 (April-2 2022)
[article]
Title: Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data Document type: Article/Communication Authors: Michele Dalponte, Author; Alvar J. I. Kallio, Author; Hans Ole Ørka, Author; et al. Publication year: 2022 Pagination: n° 1892 General note: bibliography Languages: English (eng) Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] standing timber
[IGN terms] convolutional neural network classification
[IGN terms] dieback
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] hyperspectral imagery
[IGN terms] infrared imagery
[IGN terms] Norway
[IGN terms] multilayer perceptron
[IGN terms] Picea abies
[IGN terms] linear regression
[IGN terms] logistic regression
[IGN terms] forest health
[IGN terms] point cloud
Abstract: (author) Wood decay caused by pathogenic fungi in Norway spruce forests causes severe economic losses in the forestry sector, and currently no efficient methods exist to detect infected trees. The detection of wood decay could potentially lead to improvements in forest management and could help in reducing economic losses. In this study, airborne hyperspectral data were used to detect the presence of wood decay in the trees of two forest areas located in the Etnedal (dataset I) and Gran (dataset II) municipalities in southern Norway. The hyperspectral data consisted of images acquired by two sensors operating in the VNIR and SWIR parts of the spectrum. Corresponding ground reference data were collected in Etnedal using a cut-to-length harvester, while in Gran, field measurements were collected manually. Airborne laser scanning (ALS) data were used to detect the individual tree crowns (ITCs) in both sites. Different approaches to dealing with the pixels inside each ITC were considered: pixels were either aggregated to a single value per ITC (i.e., mean, weighted mean, median, or centermost pixel) or analyzed in an unaggregated way. Multiple classification methods were explored to predict rot presence: logistic regression, feed-forward neural networks, and convolutional neural networks. The results showed that wood decay could be detected, albeit with accuracy varying between the two datasets. The best result on the Etnedal dataset was obtained using a convolutional neural network with the first five components of a principal component analysis as input (OA = 65.5%), while on the Gran dataset, the best result was obtained using LASSO with logistic regression and data aggregated using the weighted mean (OA = 61.4%). In general, the differences between aggregated and unaggregated data were small.
Record number: A2022-352 Author affiliation: non IGN Subject area: FORET/IMAGERIE/INFORMATIQUE Nature: Article DOI: 10.3390/rs14081892 Online publication date: 14/04/2022 Online: https://doi.org/10.3390/rs14081892 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100541
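The best-performing pipelines in the abstract above (a CNN on the first five principal components, or logistic regression on aggregated crown reflectance) can be sketched in miniature. The sketch below is illustrative only: it uses synthetic data in place of the hyperspectral bands and plain logistic regression on five PCA scores; none of the variable names come from the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

rng = np.random.default_rng(0)

# Synthetic stand-in for per-crown reflectance: 200 tree crowns x 60 bands,
# driven by 5 latent factors so the leading principal components carry the
# (simulated) decay signal. Nothing here is the paper's real data.
latent = rng.normal(size=(200, 5)) * np.array([5.0, 4.0, 3.0, 2.0, 1.5])
X = latent @ rng.normal(size=(5, 60)) + rng.normal(scale=0.3, size=(200, 60))
y = (latent[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(float)

# PCA via SVD, keeping the first five standardized component scores.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T
Z = Z / Z.std(axis=0)

# Plain logistic regression fitted by gradient descent.
A = np.hstack([Z, np.ones((len(Z), 1))])   # intercept column
w = np.zeros(A.shape[1])
for _ in range(2000):
    w -= 0.1 * A.T @ (sigmoid(A @ w) - y) / len(y)

oa = ((sigmoid(A @ w) > 0.5) == y).mean()  # overall accuracy (OA)
print(f"OA = {oa:.1%}")
```

On real hyperspectral data the reported accuracies (OA around 61-66%) are far lower than on this easy synthetic example, which is the point of the paper: rot is a weak spectral signal.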
in Remote sensing > vol 14 n° 8 (April-2 2022) . - n° 1892

Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data / Andras Balazs in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
[article]
Title: Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data Document type: Article/Communication Authors: Andras Balazs, Author; Eero Liski, Author; Sakari Tuominen, Author Publication year: 2022 Pagination: n° 100012 General note: bibliography Languages: English (eng) Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] genetic algorithm
[IGN terms] standing timber
[IGN terms] centroid-based classification
[IGN terms] convolutional neural network classification
[IGN terms] covariance
[IGN terms] diameter at breast height
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] Finland
[IGN terms] tree height
[IGN terms] foreign forest inventory (data)
[IGN terms] forest stand
[IGN terms] artificial neural network
[IGN terms] point cloud
[IGN terms] timber volume
Abstract: (author) In the remote sensing of forests, point cloud data from airborne laser scanning contain high-value information for predicting the volume of growing stock and the size of trees. At the same time, laser scanning allows a very high number of potential features to be extracted from the point cloud data for predicting forest variables. In some methods, the features are first extracted by user-defined algorithms and the best features are selected based on supervised learning, whereas both tasks can be carried out automatically by deep learning methods, typically based on deep neural networks. In this study we tested the k-nearest neighbor method combined with a genetic algorithm (k-NN), an artificial neural network (ANN), a 2-dimensional convolutional neural network (2D-CNN) and a 3-dimensional CNN (3D-CNN) for estimating the following forest variables: volume of growing stock, stand mean height and mean diameter. The results indicate that there were no major differences in the accuracy of the tested methods, but the ANN and 3D-CNN generally yielded the lowest RMSE values for the predicted forest variables and the highest R2 values between the predicted and observed forest variables. The lowest RMSE scores were 20.3% (3D-CNN), 6.4% (3D-CNN) and 11.2% (ANN), and the highest R2 values were 0.90 (3D-CNN), 0.95 (3D-CNN) and 0.85 (ANN), for volume of growing stock, stand mean height and mean diameter, respectively. Covariances of all response variable combinations and all prediction methods were lower than the corresponding covariances of the field observations. ANN predictions had the highest covariances for the mean height vs. mean diameter and total growing stock vs. mean diameter combinations, and 3D-CNN for mean height vs. total growing stock.
CNNs have a distinct theoretical advantage over the other methods in complex recognition or classification tasks, but exploiting their full potential may require higher-density point clouds than those applied here. Thus, the relatively low density of the point cloud data may have contributed to the somewhat inconclusive ranking of the methods in this study. The input data and computer code are available at: https://github.com/balazsan/ALS_NNs. Record number: A2022-265 Author affiliation: non IGN Subject area: FORET/IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.ophoto.2022.100012 Online publication date: 12/03/2022 Online: https://doi.org/10.1016/j.ophoto.2022.100012 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100263
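The evaluation reported in this abstract (relative RMSE in percent of the observed mean, and R2 between predicted and observed values) is easy to reproduce in outline. The sketch below uses a plain unweighted k-NN regressor on synthetic "height metric" features; it does not reproduce the authors' feature selection, genetic algorithm, or networks, and every name in it is illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """k-NN regression: average the responses of the k nearest plots."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return y_train[nearest].mean(axis=1)

def relative_rmse(y_true, y_pred):
    """RMSE as a percentage of the observed mean, as in the abstract."""
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / y_true.mean()

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic "ALS height metrics" predicting stand mean height (metres).
rng = np.random.default_rng(42)
X = rng.uniform(5, 30, size=(120, 3))        # e.g. h_mean, h_p90, density
y = 0.9 * X[:, 1] + rng.normal(0, 1.0, 120)  # mean height response
X_tr, y_tr, X_te, y_te = X[:80], y[:80], X[80:], y[80:]

pred = knn_predict(X_tr, y_tr, X_te)
print(relative_rmse(y_te, pred), r_squared(y_te, pred))
```

The two metric functions match the headline numbers in the abstract (e.g. RMSE 6.4% and R2 0.95 for mean height) in definition, not in value; the synthetic data here is only a placeholder.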
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 4 (April 2022) . - n° 100012

Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
[article]
Title: Deep generative model for spatial–spectral unmixing with multiple endmember priors Document type: Article/Communication Authors: Shuaikai Shi, Author; Lijun Zhang, Author; Yoann Altmann, Author; et al. Publication year: 2022 Pagination: n° 5527214 General note: bibliography Languages: English (eng) Descriptors: [IGN subject headings] Optical image processing
[IGN terms] multiple endmember spectral mixture analysis
[IGN terms] linear spectral mixture analysis
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] hyperspectral imagery
[IGN terms] graph neural network
Abstract: (author) Spectral unmixing is an effective tool for mining information at the subpixel level from complex hyperspectral images. To account for the spatially correlated material distributions in the scene, many algorithms unmix the data in a spatial–spectral fashion; however, existing models are usually unable to model spectral variability simultaneously. In this article, we present a variational autoencoder-based deep generative model for spatial–spectral unmixing (DGMSSU) with endmember variability, linking the generated endmembers to the probability distributions of endmember bundles extracted from the hyperspectral imagery via discriminators. Beyond the convolutional autoencoder-like architecture, which can only model the spatial information within regular patch inputs, DGMSSU can alternatively employ graph convolutional networks or self-attention mechanism modules to handle irregular but more flexible inputs: superpixels. Experimental results on a simulated dataset, as well as two well-known real hyperspectral images, show the superiority of the proposed approach over other state-of-the-art spatial–spectral unmixing methods. Compared to conventional unmixing methods that consider endmember variability, the proposed model generates more accurate endmembers for each subimage through the adversarial training process. The code for this work will be available at https://github.com/shuaikaishi/DGMSSU for the sake of reproducibility. Record number: A2022-380 Author affiliation: non IGN Subject area: IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2022.3168712 Online publication date: 18/04/2022 Online: https://doi.org/10.1109/TGRS.2022.3168712 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100645
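For context, the classical linear mixing model that DGMSSU generalizes writes each pixel as a nonnegative, sum-to-one combination of endmember spectra. Below is a minimal numpy sketch of constrained unmixing by projected gradient descent; the clip-and-renormalize step is a common simplex heuristic, not the paper's method, and all names and sizes are illustrative.

```python
import numpy as np

def unmix(pixel, E, n_iter=500, lr=0.01):
    """Estimate abundances a (a >= 0, sum(a) = 1) with pixel ≈ E @ a
    by projected gradient descent on the squared reconstruction error."""
    m = E.shape[1]
    a = np.full(m, 1.0 / m)               # start at the simplex centre
    for _ in range(n_iter):
        a = a - lr * (E.T @ (E @ a - pixel))
        a = np.clip(a, 0.0, None)         # nonnegativity constraint
        a = a / a.sum()                   # heuristic sum-to-one projection
    return a

# Toy scene: 50 spectral bands, 3 endmembers, one noiseless mixed pixel.
rng = np.random.default_rng(3)
E = rng.uniform(0.0, 1.0, size=(50, 3))   # endmember spectra (columns)
a_true = np.array([0.6, 0.3, 0.1])
a_hat = unmix(E @ a_true, E)
print(a_hat)
```

The paper's contribution is precisely what this sketch lacks: the endmember matrix E is not fixed but generated per subimage, with its variability modeled by a variational autoencoder trained adversarially against endmember-bundle priors.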
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 4 (April 2022) . - n° 5527214

Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
[article]
Title: Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights Document type: Article/Communication Authors: Marco Fiorucci, Author; Wouter Baernd Verschoof-van der Vaart, Author; Paolo Soleni, Author; et al. Publication year: 2022 Pagination: n° 1694 General note: bibliography Languages: English (eng) Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] centroid-based classification
[IGN terms] convolutional neural network classification
[IGN terms] per-pixel classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] archaeological site
Abstract: (author) Machine learning-based workflows are progressively being used for the automatic detection of archaeological objects (understood as below-surface sites) in remote sensing data. Despite promising results in the detection phase, there is still no standard set of measures for evaluating the performance of object detection methods, since buried archaeological sites often have distinctive shapes that set them apart from the other types of objects included in mainstream remote sensing datasets (e.g., the Dataset of Object deTection in Aerial images, DOTA). Additionally, archaeological research relies heavily on geospatial information when validating the output of an object detection procedure, a type of information that is not normally considered in regular machine learning validation pipelines. This paper tackles these shortcomings by introducing two novel automatic evaluation measures, namely 'centroid-based' and 'pixel-based', designed to encode the salient aspects of the archaeologists' thinking process. To test their usability, an experiment with different object detection deep neural networks was conducted on a LiDAR dataset. The experimental results show that these two automatic measures closely resemble the semi-automatic one currently used by archaeologists, and they can therefore be adopted as fully automatic evaluation measures in archaeological remote sensing detection. Their adoption will facilitate cross-study comparisons and close collaboration between machine learning and archaeological researchers, which in turn will encourage the development of novel human-centred archaeological object detection tools. Record number: A2022-282 Author affiliation: non IGN Subject area: IMAGERIE Nature: Article DOI: 10.3390/rs14071694 Online: https://doi.org/10.3390/rs14071694 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100298
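One hypothetical reading of a 'centroid-based' detection measure is: a prediction counts as a true positive when its centroid falls inside an as-yet-unmatched ground-truth extent. The sketch below implements that reading with axis-aligned boxes; it is an illustration of the general idea, not the authors' exact definition.

```python
def centroid_based_scores(pred_boxes, gt_boxes):
    """Precision, recall, F1 under centroid-in-ground-truth matching.
    Boxes are (x0, y0, x1, y1) tuples with x0 <= x1 and y0 <= y1."""
    def centroid(b):
        x0, y0, x1, y1 = b
        return ((x0 + x1) / 2, (y0 + y1) / 2)

    matched = set()
    tp = 0
    for pb in pred_boxes:
        cx, cy = centroid(pb)
        for i, (x0, y0, x1, y1) in enumerate(gt_boxes):
            if i not in matched and x0 <= cx <= x1 and y0 <= cy <= y1:
                matched.add(i)      # each ground truth matched at most once
                tp += 1
                break
    precision = tp / len(pred_boxes) if pred_boxes else 0.0
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# One correct detection, one false alarm, one missed site.
gt_demo = [(0, 0, 10, 10), (20, 20, 30, 30)]
pred_demo = [(1, 1, 9, 9), (40, 40, 50, 50)]
print(centroid_based_scores(pred_demo, gt_demo))  # → (0.5, 0.5, 0.5)
```

Centroid matching is more forgiving of loose bounding geometry than IoU thresholds, which is plausibly why it suits irregularly shaped buried sites.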
in Remote sensing > vol 14 n° 7 (April-1 2022) . - n° 1694

Determination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
[article]
Title: Determination of building flood risk maps from LiDAR mobile mapping data Document type: Article/Communication Authors: Yu Feng, Author; Qing Xiao, Author; Claus Brenner, Author; et al. Publication year: 2022 Pagination: n° 101759 General note: bibliography Languages: English (eng) Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] building
[IGN terms] emergency mapping
[IGN terms] risk mapping
[IGN terms] semi-supervised classification
[IGN terms] object detection
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] facade
[IGN terms] infiltration
[IGN terms] flood
[IGN terms] simulation model
[IGN terms] risk prevention
[IGN terms] natural hazard
[IGN terms] semantic segmentation
Abstract: (author) With increasing urbanization, flooding is a major challenge for many cities today. Based on forecast precipitation, topography, and pipe networks, flood simulations can provide early warnings for areas and buildings at risk of flooding. Basement windows, doors, and underground garage entrances are common places where floodwater can flow into a building. Some buildings have been prepared or designed with the threat of flooding in mind, but others have not. Therefore, knowing the heights of these facade openings helps to identify places that are more susceptible to water ingress. However, such data are not yet readily available in most cities. Traditional surveying of the desired targets could be used, but it is a very time-consuming and laborious process. Instead, mobile mapping using LiDAR (light detection and ranging) is an efficient tool for obtaining large amounts of high-density 3D measurement data. To use this method, the desired facade openings must be extracted from the data in a fully automatic manner. This research presents a new process for the extraction of windows and doors from LiDAR mobile mapping data. Deep learning object detection models are trained to identify these objects. Usually, this requires providing large amounts of manual annotations.
In this paper, we mitigate this problem by leveraging a rule-based method. As a first step, the rule-based method is used to generate pseudo-labels. A semi-supervised learning strategy is then applied with three different levels of supervision. The results show that, using only automatically generated pseudo-labels, the learning-based model outperforms the rule-based approach by 14.6% in terms of F1-score. After five hours of human supervision, the model can be improved by a further 6.2%. By comparing the detected facade openings' heights with the water levels predicted by a flood simulation model, a map can be produced that assigns a flood risk level to each building. Our research thus provides a new geographic information layer for fine-grained urban emergency response. This information can be combined with flood forecasting to provide more targeted disaster prevention guidance for a city's infrastructure and residential buildings. To the best of our knowledge, this work is the first attempt to achieve such large-scale, fine-grained building flood risk mapping. Record number: A2022-196 Author affiliation: non IGN Subject area: IMAGERIE Nature: Article DOI: 10.1016/j.compenvurbsys.2022.101759 Online publication date: 01/02/2022 Online: https://doi.org/10.1016/j.compenvurbsys.2022.101759 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99964
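The two-stage idea described above (a rule-based detector generates pseudo-labels, then a learned model is trained on them and compared by F1-score) can be sketched on toy data. The rule, the 2-D features, and the nearest-centroid learner below are all stand-ins invented for illustration; only the pseudo-labeling pattern and the F1 metric correspond to the abstract.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

rng = np.random.default_rng(1)
# Toy 2-D "facade" features; class 1 stands in for window/door openings.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Stage 1: a crude threshold rule stands in for the rule-based detector.
pseudo = (X[:, 0] > 1.5).astype(int)

# Stage 2: train a nearest-centroid learner on the pseudo-labels only;
# it can use both feature dimensions, which the rule ignores.
c0, c1 = X[pseudo == 0].mean(axis=0), X[pseudo == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1)
        < np.linalg.norm(X - c0, axis=1)).astype(int)

print("rule-based F1:", round(f1_score(y, pseudo), 3))
print("learned    F1:", round(f1_score(y, pred), 3))
```

In the paper the learned model's gain over the rule (14.6% F1, plus 6.2% after five hours of supervision) comes from the detector generalizing beyond the rule's blind spots; this sketch only shows the mechanics of the comparison.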
in Computers, Environment and Urban Systems > vol 93 (April 2022) . - n° 101759

Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation / Yingjie Hu in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
Exploring the association between street built environment and street vitality using deep learning methods / Yunqin Li in Sustainable Cities and Society, vol 79 (April 2022)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
A graph attention network for road marking classification from mobile LiDAR point clouds / Lina Fang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Graph learning based on signal smoothness representation for homogeneous and heterogeneous change detection / David Alejandro Jimenez-Sierra in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Graph neural network based model for multi-behavior session-based recommendation / Bo Yu in Geoinformatica, vol 26 n° 2 (April 2022)
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Uncertainty estimation for stereo matching based on evidential deep learning / Chen Wang in Pattern recognition, vol 124 (April 2022)
VD-LAB: A view-decoupled network with local-global aggregation bridge for airborne laser scanning point cloud classification / Jihao Li in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)