Descriptor
Documents available in this category (1332)
Weakly supervised semantic segmentation of airborne laser scanning point clouds / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Weakly supervised semantic segmentation of airborne laser scanning point clouds Document type: Article/Communication Authors: Yaping Lin, Author; M. George Vosselman, Author; Michael Ying Yang, Author Publication year: 2022 Pages: pp 79 - 100 General note: Bibliography Languages: English (eng) Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] chevauchement
[Termes IGN] classification dirigée
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] données laser
[Termes IGN] données localisées 3D
[Termes IGN] hétérogénéité sémantique
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (author) While modern deep learning algorithms for semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Pointwise annotation of 3D point clouds, especially for large-scale ALS datasets, is extremely time-consuming work. Weak supervision, which needs only a small annotation effort yet lets networks achieve comparable performance, is an alternative solution. Assigning a weak label to a subcloud, a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. The pseudo labels are then taken as the input of a segmentation network, which gives the final predictions on the testing data. As the quality of the pseudo labels determines the performance of the segmentation network on testing data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss, which considers nearby subcloud semantic information, is introduced to enhance awareness of the semantic heterogeneity within a subcloud. The elevation attention helps the classification network encode more representative features for ALS point clouds. For the segmentation network, in order to learn representative features effectively from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model over the baseline method (Wei et al., 2020).
With the same amount of labelling effort, for the ISPRS benchmark dataset, the Rotterdam dataset and the DFC2019 dataset, our method raises the overall accuracy by 0.062, 0.112 and 0.031, and the average F1 score by 0.09, 0.178 and 0.043, respectively. Our code is publicly available at ‘https://github.com/yaping222/Weak_ALS.git’. Record number: A2022-227 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2022.03.001 Online publication date: 11/03/2022 Online: https://doi.org/10.1016/j.isprsjprs.2022.03.001 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100197
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 79 - 100 [article]
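The supervised contrastive loss adopted for the segmentation network can be illustrated with a minimal NumPy sketch of the generic formulation (positives are the other samples sharing the anchor's class); the function name and the temperature value are assumptions, and this is not the authors' code.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Minimal supervised contrastive loss over L2-normalised features:
    for each anchor, pull samples of the same class together and push
    all other samples apart."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # scaled pairwise cosine similarities
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i                      # exclude the anchor
        log_den = np.log(np.exp(sim[i, others]).sum())  # log contrastive denominator
        positives = np.where(others & (labels == labels[i]))[0]
        if positives.size == 0:
            continue                                    # anchor with no positive pair
        total += np.mean(log_den - sim[i, positives])   # mean -log softmax over positives
        anchors += 1
    return total / anchors
```

A feature space where same-class points cluster tightly yields a lower loss than one where classes are mixed, which is what drives the network toward class-consistent features despite noisy pseudo labels.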
Copies (3)
Barcode      Call number  Type     Location                 Section          Availability
081-2022051  SL           Journal  Centre de documentation  Revues en salle  Available
081-2022053  DEP-RECP     Journal  LASTIG                   Dépôt en unité   Not for loan
081-2022052  DEP-RECF     Journal  Nancy                    Dépôt en unité   Not for loan

Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation / Seyyed Ali Ahmadi in Geocarto international, vol 37 n° 7 ([15/04/2022])
[article]
Title: Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation Document type: Article/Communication Authors: Seyyed Ali Ahmadi, Author; Nasser Mehrshad, Author; Seyyed Mohammadali Arghavan, Author Publication year: 2022 Pages: pp 2031 - 2054 General note: bibliography Languages: English (eng) Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de sensibilité
[Termes IGN] apprentissage profond
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] échantillonnage de données
[Termes IGN] filtre de Gabor
[Termes IGN] image hyperspectrale
Abstract: (author) Recently, deep learning (DL)-based methods have attracted increasing attention for hyperspectral image (HSI) classification. However, the complex structure and limited number of labelled training samples of HSIs negatively affect the performance of DL models. In this paper, a spectral-spatial classification method is proposed based on the combination of local and global spatial information, including extended multi-attribute profiles and multiscale Gabor features, with a sparse stacked autoencoder (GEAE). GEAE stacks the spatial and spectral information to form the fused features. GEAE also generates virtual samples, as weighted averages of available samples, to expand the training set so that the many parameters of the DL network can be learned optimally in limited labelled sample situations. The similarity between samples is determined with distance metric learning to overcome the problems of Euclidean-distance-based similarity metrics. The experimental results on three HSI datasets demonstrate the effectiveness of GEAE in comparison to some existing classification methods. Record number: A2022-498 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1080/10106049.2020.1797188 Online publication date: 10/08/2020 Online: https://doi.org/10.1080/10106049.2020.1797188 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100990
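The virtual-sample idea summarised above (expanding the training set with weighted averages of available same-class samples) can be sketched as follows; this is an illustrative reconstruction, not the GEAE implementation, and the uniform mixing weight is an assumption.

```python
import numpy as np

def generate_virtual_samples(X, y, n_new, rng=None):
    """Create synthetic spectra as convex combinations of two random
    same-class samples, so the class label carries over unchanged."""
    rng = np.random.default_rng(rng)
    X_new, y_new = [], []
    for _ in range(n_new):
        c = rng.choice(np.unique(y))                  # pick a class
        idx = rng.choice(np.where(y == c)[0], size=2) # two samples of that class
        lam = rng.uniform()                           # mixing weight
        X_new.append(lam * X[idx[0]] + (1 - lam) * X[idx[1]])
        y_new.append(c)
    return np.vstack([X, X_new]), np.concatenate([y, y_new])
```

Because each virtual sample lies on the segment between two real samples of one class, it stays inside that class's convex hull, which is what makes carrying the label over safe.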
in Geocarto international > vol 37 n° 7 [15/04/2022] . - pp 2031 - 2054 [article]

Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data / Michele Dalponte in Remote sensing, vol 14 n° 8 (April-2 2022)
[article]
Title: Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data Document type: Article/Communication Authors: Michele Dalponte, Author; Alvar J. I. Kallio, Author; Hans Ole Ørka, Author; et al. Publication year: 2022 Pages: n° 1892 General note: bibliography Languages: English (eng) Descriptor: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] bois sur pied
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] dépérissement
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] image hyperspectrale
[Termes IGN] image infrarouge
[Termes IGN] Norvège
[Termes IGN] Perceptron multicouche
[Termes IGN] Picea abies
[Termes IGN] régression linéaire
[Termes IGN] régression logistique
[Termes IGN] santé des forêts
[Termes IGN] semis de points
Abstract: (author) Wood decay caused by pathogenic fungi in Norway spruce forests causes severe economic losses in the forestry sector, and currently no efficient methods exist to detect infected trees. The detection of wood decay could potentially lead to improvements in forest management and could help reduce economic losses. In this study, airborne hyperspectral data were used to detect the presence of wood decay in trees in two forest areas located in the Etnedal (dataset I) and Gran (dataset II) municipalities in southern Norway. The hyperspectral data consisted of images acquired by two sensors operating in the VNIR and SWIR parts of the spectrum. Corresponding ground reference data were collected in Etnedal using a cut-to-length harvester, while in Gran, field measurements were collected manually. Airborne laser scanning (ALS) data were used to detect the individual tree crowns (ITCs) in both sites. Different approaches to dealing with the pixels inside each ITC were considered: pixels were either aggregated to a unique value per ITC (i.e., mean, weighted mean, median, centermost pixel) or analyzed in an unaggregated way. Multiple classification methods were explored to predict rot presence: logistic regression, feed-forward neural networks, and convolutional neural networks. The results showed that wood decay could be detected, albeit with accuracy varying between the two datasets. The best results on the Etnedal dataset were obtained using a convolutional neural network with the first five components of a principal component analysis as input (OA = 65.5%), while on the Gran dataset, the best result was obtained using LASSO with logistic regression and data aggregated using the weighted mean (OA = 61.4%). In general, the differences between aggregated and unaggregated data were small.
Record number: A2022-352 Author affiliation: non IGN Theme: FOREST/IMAGERY/COMPUTER SCIENCE Nature: Article DOI: 10.3390/rs14081892 Online publication date: 14/04/2022 Online: https://doi.org/10.3390/rs14081892 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100541
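The four per-crown aggregation strategies compared above (mean, weighted mean, median, centermost pixel) can be sketched as follows; a minimal NumPy illustration, with the inverse-distance weighting for the weighted mean being an assumption rather than the paper's exact scheme.

```python
import numpy as np

def aggregate_itc_pixels(spectra, xy, how="mean"):
    """Aggregate the hyperspectral pixels of one tree crown (ITC) to a
    single spectrum. `spectra` is (n_pixels, n_bands), `xy` (n_pixels, 2)."""
    if how == "mean":
        return spectra.mean(axis=0)
    if how == "median":
        return np.median(spectra, axis=0)
    centroid = xy.mean(axis=0)
    d = np.linalg.norm(xy - centroid, axis=1)   # distance to crown centre
    if how == "centermost":
        return spectra[np.argmin(d)]            # the pixel nearest the centre
    if how == "weighted_mean":                  # weights fall off with distance
        w = 1.0 / (1.0 + d)
        return (w[:, None] * spectra).sum(axis=0) / w.sum()
    raise ValueError(how)
```

Aggregation trades spatial detail for noise suppression, which is why the study also keeps an unaggregated variant for comparison.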
in Remote sensing > vol 14 n° 8 (April-2 2022) . - n° 1892 [article]

Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data / Andras Balazs in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
[article]
Title: Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data Document type: Article/Communication Authors: Andras Balazs, Author; Eero Liski, Author; Sakari Tuominen, Author Publication year: 2022 Pages: n° 100012 General note: bibliography Languages: English (eng) Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] algorithme génétique
[Termes IGN] bois sur pied
[Termes IGN] classification barycentrique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] covariance
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Finlande
[Termes IGN] hauteur des arbres
[Termes IGN] inventaire forestier étranger (données)
[Termes IGN] peuplement forestier
[Termes IGN] réseau neuronal artificiel
[Termes IGN] semis de points
[Termes IGN] volume en bois
Abstract: (author) In the remote sensing of forests, point cloud data from airborne laser scanning contains high-value information for predicting the volume of growing stock and the size of trees. At the same time, laser scanning data allows a very high number of potential features to be extracted from the point cloud for predicting the forest variables. In some methods, the features are first extracted by user-defined algorithms and the best features are selected based on supervised learning, whereas both tasks can be carried out automatically by deep learning methods typically based on deep neural networks. In this study we tested the k-nearest neighbors method combined with a genetic algorithm (k-NN), an artificial neural network (ANN), a 2-dimensional convolutional neural network (2D-CNN) and a 3-dimensional CNN (3D-CNN) for estimating the following forest variables: volume of growing stock, stand mean height and mean diameter. The results indicate that there were no major differences in the accuracy of the tested methods, but the ANN and 3D-CNN generally resulted in the lowest RMSE values for the predicted forest variables and the highest R2 values between the predicted and observed forest variables. The lowest RMSE scores were 20.3% (3D-CNN), 6.4% (3D-CNN) and 11.2% (ANN), and the highest R2 values 0.90 (3D-CNN), 0.95 (3D-CNN) and 0.85 (ANN), for volume of growing stock, stand mean height and mean diameter, respectively. The covariances of all response-variable combinations and all prediction methods were lower than the corresponding covariances of the field observations. ANN predictions had the highest covariances for the mean height vs. mean diameter and total growing stock vs. mean diameter combinations, and 3D-CNN for mean height vs. total growing stock.
CNNs have a distinct theoretical advantage over the other methods in complex recognition or classification tasks, but utilizing their full potential may require higher-density point clouds than those applied here. Thus, the relatively low density of the point cloud data may have been a contributing factor to the somewhat inconclusive ranking of the methods in this study. The input data and computer codes are available at: https://github.com/balazsan/ALS_NNs. Record number: A2022-265 Author affiliation: non IGN Theme: FOREST/IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.ophoto.2022.100012 Online publication date: 12/03/2022 Online: https://doi.org/10.1016/j.ophoto.2022.100012 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100263
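The accuracy measures quoted above (RMSE as a percentage and R²) follow standard definitions, which can be sketched as below; expressing RMSE relative to the mean observed value is the usual forest-inventory convention and is assumed here, not stated explicitly in the record.

```python
import numpy as np

def relative_rmse(y_obs, y_pred):
    """RMSE expressed as a percentage of the mean observed value."""
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_obs)

def r_squared(y_obs, y_pred):
    """Coefficient of determination between observed and predicted values."""
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A relative RMSE lets variables on very different scales (stock volume in m³/ha vs. height in m) be compared on one footing, which is why the study reports percentages.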
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 4 (April 2022) . - n° 100012 [article]

Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
[article]
Title: Deep generative model for spatial–spectral unmixing with multiple endmember priors Document type: Article/Communication Authors: Shuaikai Shi, Author; Lijun Zhang, Author; Yoann Altmann, Author; et al. Publication year: 2022 Pages: n° 5527214 General note: bibliography Languages: English (eng) Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de mélange spectral d’extrémités multiples
[Termes IGN] analyse linéaire des mélanges spectraux
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image hyperspectrale
[Termes IGN] réseau neuronal de graphes
Abstract: (author) Spectral unmixing is an effective tool to mine information at the subpixel level from complex hyperspectral images. To account for the spatially correlated material distributions in the scene, many algorithms unmix the data in a spatial–spectral fashion; however, existing models are usually unable to model spectral variability simultaneously. In this article, we present a variational autoencoder-based deep generative model for spatial–spectral unmixing (DGMSSU) with endmember variability, linking the generated endmembers to the probability distributions of endmember bundles extracted from the hyperspectral imagery via discriminators. Beyond the convolutional autoencoder-like architecture, which can only model the spatial information within regular patch inputs, DGMSSU can alternatively use graph convolutional networks or self-attention modules to handle irregular but more flexible data: superpixels. Experimental results on a simulated dataset, as well as two well-known real hyperspectral images, show the superiority of our proposed approach in comparison with other state-of-the-art spatial–spectral unmixing methods. Compared to conventional unmixing methods that consider endmember variability, our model generates more accurate endmembers on each subimage through the adversarial training process. The code of this work will be available at https://github.com/shuaikaishi/DGMSSU for the sake of reproducibility. Record number: A2022-380 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2022.3168712 Online publication date: 18/04/2022 Online: https://doi.org/10.1109/TGRS.2022.3168712 Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100645
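The linear mixing model with endmember bundles that DGMSSU builds on can be illustrated with a minimal sketch: for each material, one spectral realisation is drawn from its bundle before mixing, which is one simple way to express spectral variability (the uniform sampling scheme here is an assumption, not the paper's generative model).

```python
import numpy as np

def mix_pixel(bundles, abundances, rng=None):
    """Synthesise one pixel under the linear mixing model with endmember
    variability: draw one realisation per material from its bundle
    (each bundle is (n_realisations, n_bands)), then mix by abundance."""
    rng = np.random.default_rng(rng)
    drawn = np.stack([b[rng.integers(len(b))] for b in bundles])
    return np.asarray(abundances) @ drawn   # (n_bands,) mixed spectrum
```

With one realisation per bundle this reduces to the classic fixed-endmember linear model; larger bundles make the same abundances yield different pixels, which is the variability the paper's discriminators are meant to capture.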
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 4 (April 2022) . - n° 5527214 [article]

Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
Detecting individuals' spatial familiarity with urban environments using eye movement data / Hua Liao in Computers, Environment and Urban Systems, vol 93 (April 2022)
Determination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
Discovering co-location patterns in multivariate spatial flow data / Jiannan Cai in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation / Yingjie Hu in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
Exploring scientific literature by textual and image content using DRIFT / Ximena Pocco in Computers and graphics, vol 103 (April 2022)
Exploring the association between street built environment and street vitality using deep learning methods / Yunqin Li in Sustainable Cities and Society, vol 79 (April 2022)
A GAN-based approach toward architectural line drawing colorization prototyping / Qian (Chayn) Sun in The Visual Computer, vol 38 n° 4 (April 2022)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
A graph attention network for road marking classification from mobile LiDAR point clouds / Lina Fang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)