Descripteur
Documents disponibles dans cette catégorie (617)
Unsupervised multi-view CNN for salient view selection and 3D interest point detection / Ran Song in International journal of computer vision, vol 130 n° 5 (May 2022)
[article]
Titre : Unsupervised multi-view CNN for salient view selection and 3D interest point detection Type de document : Article/Communication Auteurs : Ran Song, Auteur ; Wei Zhang, Auteur ; Yitian Zhao, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 1210 - 1227 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification non dirigée
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection d'objet
[Termes IGN] objet 3D
[Termes IGN] point d'intérêt
[Termes IGN] saillance
Résumé : (auteur) We present an unsupervised 3D deep learning framework based on a ubiquitously true proposition named by us view-object consistency, as it states that a 3D object and its projected 2D views always belong to the same object class. To validate its effectiveness, we design a multi-view CNN instantiating it for salient view selection and interest point detection of 3D objects, which quintessentially cannot be handled by supervised learning due to the difficulty of collecting sufficient and consistent training data. Our unsupervised multi-view CNN, namely UMVCNN, branches off two channels which encode the knowledge within each 2D view and the 3D object respectively and also exploits both intra-view and inter-view knowledge of the object. It ends with a new loss layer which formulates the view-object consistency by impelling the two channels to generate consistent classification outcomes. The UMVCNN is then integrated with a global distinction adjustment scheme to incorporate global cues into salient view selection. We evaluate our method for salient view selection both qualitatively and quantitatively, demonstrating its superiority over several state-of-the-art methods. In addition, we showcase that our method can be used to select salient views of 3D scenes containing multiple objects. We also develop a method based on the UMVCNN for 3D interest point detection and conduct comparative evaluations on a publicly available benchmark, which shows that the UMVCNN is amenable to different 3D shape understanding tasks. Numéro de notice : A2022-415 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1007/s11263-022-01592-x Date de publication en ligne : 16/03/2022 En ligne : https://doi.org/10.1007/s11263-022-01592-x Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100771
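The view-object consistency proposition in the abstract above (a 3D object and its projected 2D views must receive the same class prediction) can be illustrated with a toy consistency loss. This is a minimal sketch, not the paper's actual loss layer: the KL-divergence formulation, the per-view averaging, and all names below are assumptions.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of raw class scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def view_object_consistency_loss(view_logits, object_logits):
    """Toy consistency loss: average KL(view || object) over all 2D views.

    Illustrative stand-in for the UMVCNN loss layer: it is zero when every
    view's class distribution matches the 3D object's, positive otherwise.
    """
    q = softmax(object_logits)
    loss = 0.0
    for logits in view_logits:
        p = softmax(logits)
        loss += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return loss / len(view_logits)
```

With identical logits for every view and the object the loss is exactly zero; any disagreement between the two channels makes it positive, which is the behaviour the abstract describes as "impelling the two channels to generate consistent classification outcomes".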
in International journal of computer vision > vol 130 n° 5 (May 2022) . - pp 1210 - 1227 [article]
Weakly supervised semantic segmentation of airborne laser scanning point clouds / Yaping Lin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Titre : Weakly supervised semantic segmentation of airborne laser scanning point clouds Type de document : Article/Communication Auteurs : Yaping Lin, Auteur ; M. George Vosselman, Auteur ; Michael Ying Yang, Auteur Année de publication : 2022 Article en page(s) : pp 79 - 100 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] chevauchement
[Termes IGN] classification dirigée
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] données laser
[Termes IGN] données localisées 3D
[Termes IGN] hétérogénéité sémantique
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Résumé : (Auteur) While modern deep learning algorithms for semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Pointwise annotation of 3D point clouds, especially for large scale ALS datasets, is extremely time-consuming work. Weak supervision that only needs a few annotation efforts but can make networks achieve comparable performance is an alternative solution. Assigning a weak label to a subcloud, a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. Then the pseudo labels are taken as the input of a segmentation network which gives the final predictions on the testing data. As the quality of pseudo labels determines the performance of the segmentation network on testing data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss that considers the nearby subcloud semantic information is introduced to enhance the awareness of the semantic heterogeneity within a subcloud. The elevation attention helps the classification network to encode more representative features for ALS point clouds. For the segmentation network, in order to effectively learn representative features from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model to the baseline method (Wei et al., 2020).
With the same amount of labelling effort, for the ISPRS benchmark dataset, the Rotterdam dataset and the DFC2019 dataset, our method raises the overall accuracy by 0.062, 0.112 and 0.031, and the average F1 score by 0.09, 0.178 and 0.043 respectively. Our code is publicly available at ‘https://github.com/yaping222/Weak_ALS.git’. Numéro de notice : A2022-227 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.isprsjprs.2022.03.001 Date de publication en ligne : 11/03/2022 En ligne : https://doi.org/10.1016/j.isprsjprs.2022.03.001 Format de la ressource électronique : URL Article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100197
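The supervised contrastive loss the abstract adopts for learning from noisy pseudo labels can be illustrated with a minimal pure-Python version of the standard formulation (in the style of Khosla et al.). The authors' implementation in the linked repository may differ; the temperature value, the normalisation step, and all names below are assumptions.

```python
import math

def supervised_contrastive_loss(features, labels, tau=0.1):
    """Toy supervised contrastive loss on a small batch.

    For each anchor i, it pulls together L2-normalised features that share
    its label and pushes apart all others, averaging -log probabilities
    over the anchor's positives. Illustrative, not the paper's code.
    """
    def normalise(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    z = [normalise(v) for v in features]
    n = len(z)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(sum(a * b for a, b in zip(z[i], z[k])) / tau)
                    for k in range(n) if k != i)
        for p in positives:
            sim = sum(a * b for a, b in zip(z[i], z[p])) / tau
            total += -math.log(math.exp(sim) / denom) / len(positives)
        anchors += 1
    return total / anchors
```

A batch whose same-class features already point in the same direction gets a lower loss than one where classes are interleaved, which is the "underlying correlations of class-specific features" the abstract refers to.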
in ISPRS Journal of photogrammetry and remote sensing > vol 187 (May 2022) . - pp 79 - 100 [article]
Exemplaires (3)
Code-barres   Cote      Support  Localisation             Section          Disponibilité
081-2022051   SL        Revue    Centre de documentation  Revues en salle  Disponible
081-2022053   DEP-RECP  Revue    LASTIG                   Dépôt en unité   Exclu du prêt
081-2022052   DEP-RECF  Revue    Nancy                    Dépôt en unité   Exclu du prêt
Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation / Seyyed Ali Ahmadi in Geocarto international, vol 37 n° 7 ([15/04/2022])
[article]
Titre : Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation Type de document : Article/Communication Auteurs : Seyyed Ali Ahmadi, Auteur ; Nasser Mehrshad, Auteur ; Seyyed Mohammadali Arghavan, Auteur Année de publication : 2022 Article en page(s) : pp 2031 - 2054 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse de sensibilité
[Termes IGN] apprentissage profond
[Termes IGN] données étiquetées d'entrainement
[Termes IGN] échantillonnage de données
[Termes IGN] filtre de Gabor
[Termes IGN] image hyperspectrale
Résumé : (auteur) Recently, deep learning (DL)-based methods have attracted increasing attention for hyperspectral images (HSIs) classification. However, the complex structure and limited number of labelled training samples of HSIs negatively affect the performance of DL models. In this paper, a spectral-spatial classification method is proposed based on the combination of local and global spatial information, including extended multi-attribute profiles and multiscale Gabor features, with sparse stacked autoencoder (GEAE). GEAE stacks the spatial and spectral information to form the fused features. Also, GEAE generates virtual samples using weighted average of available samples for expanding the training set so that many parameters of DL network can be learned optimally in limited labelled samples situations. Therefore, the similarity between samples is determined with distance metric learning to overcome the problems of Euclidean distance-based similarity metrics. The experimental results on three HSIs datasets demonstrate the effectiveness of the GEAE in comparison to some existing classification methods. Numéro de notice : A2022-498 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/10106049.2020.1797188 Date de publication en ligne : 10/08/2020 En ligne : https://doi.org/10.1080/10106049.2020.1797188 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100990
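The virtual-sample idea in the abstract above, expanding a small labelled set by weighted averages of real samples, can be sketched in a few lines. The GEAE paper derives the weights from learned similarity via distance metric learning, which is not reproduced here; the equal-weight default and the function name are assumptions.

```python
def virtual_sample(samples, weights=None):
    """Create one virtual training sample as a weighted average of real
    same-class samples (e.g., spectral vectors of labelled HSI pixels).

    Sketch of the training-set expansion step: equal weights by default;
    the paper's similarity-derived weights are an assumption left out.
    """
    n = len(samples)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    weights = [w / total for w in weights]  # normalise so weights sum to 1
    dim = len(samples[0])
    return [sum(w * s[d] for w, s in zip(weights, samples)) for d in range(dim)]
```

Because the result is a convex combination of genuine samples, each virtual sample stays inside the span of its class, which is what lets the network's many parameters be fitted from a small labelled set.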
in Geocarto international > vol 37 n° 7 [15/04/2022] . - pp 2031 - 2054 [article]
Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data / Michele Dalponte in Remote sensing, vol 14 n° 8 (April-2 2022)
[article]
Titre : Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data Type de document : Article/Communication Auteurs : Michele Dalponte, Auteur ; Alvar J. I. Kallio, Auteur ; Hans Ole Ørka, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : n° 1892 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] bois sur pied
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] dépérissement
[Termes IGN] données d'entrainement (apprentissage automatique)
[Termes IGN] données lidar
[Termes IGN] image hyperspectrale
[Termes IGN] image infrarouge
[Termes IGN] Norvège
[Termes IGN] Perceptron multicouche
[Termes IGN] Picea abies
[Termes IGN] régression linéaire
[Termes IGN] régression logistique
[Termes IGN] santé des forêts
[Termes IGN] semis de points
Résumé : (auteur) Wood decay caused by pathogenic fungi in Norway spruce forests causes severe economic losses in the forestry sector, and currently no efficient methods exist to detect infected trees. The detection of wood decay could potentially lead to improvements in forest management and could help in reducing economic losses. In this study, airborne hyperspectral data were used to detect the presence of wood decay in the trees in two forest areas located in Etnedal (dataset I) and Gran (dataset II) municipalities, in southern Norway. The hyperspectral data used consisted of images acquired by two sensors operating in the VNIR and SWIR parts of the spectrum. Corresponding ground reference data were collected in Etnedal using a cut-to-length harvester while in Gran, field measurements were collected manually. Airborne laser scanning (ALS) data were used to detect the individual tree crowns (ITCs) in both sites. Different approaches to deal with pixels inside each ITC were considered: in particular, pixels were either aggregated to a unique value per ITC (i.e., mean, weighted mean, median, centermost pixel) or analyzed in an unaggregated way. Multiple classification methods were explored to predict rot presence: logistic regression, feed forward neural networks, and convolutional neural networks. The results showed that wood decay could be detected, even if with accuracy varying among the two datasets. The best results on the Etnedal dataset were obtained using a convolutional neural network with the first five components of a principal component analysis as input (OA = 65.5%), while on the Gran dataset, the best result was obtained using LASSO with logistic regression and data aggregated using the weighted mean (OA = 61.4%). In general, the differences among aggregated and unaggregated data were small.
Numéro de notice : A2022-352 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE/INFORMATIQUE Nature : Article DOI : 10.3390/rs14081892 Date de publication en ligne : 14/04/2022 En ligne : https://doi.org/10.3390/rs14081892 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100541
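The four pixel-aggregation strategies the abstract compares (mean, weighted mean, median, centermost pixel) can be sketched for a single ITC as follows. The argument names and the squared-distance definition of "centermost" are assumptions; the paper does not specify them in this notice.

```python
def aggregate_itc_pixels(pixels, method="mean", weights=None,
                         coords=None, center=None):
    """Aggregate the hyperspectral pixels of one individual tree crown
    (ITC) into a single spectrum. Each pixel is a list of band values;
    'coords' and 'center' (for the centermost strategy) are hypothetical
    (x, y) positions, e.g. relative to the ALS-derived treetop.
    """
    dim = len(pixels[0])
    if method == "mean":
        return [sum(p[d] for p in pixels) / len(pixels) for d in range(dim)]
    if method == "weighted_mean":
        total = sum(weights)
        return [sum(w * p[d] for w, p in zip(weights, pixels)) / total
                for d in range(dim)]
    if method == "median":
        # per-band median (upper median for even-sized crowns)
        return [sorted(p[d] for p in pixels)[len(pixels) // 2]
                for d in range(dim)]
    if method == "centermost":
        # keep the spectrum of the pixel closest to the crown centre
        i = min(range(len(pixels)),
                key=lambda j: sum((a - b) ** 2
                                  for a, b in zip(coords[j], center)))
        return pixels[i]
    raise ValueError("unknown aggregation method: %s" % method)
```

Aggregation trades spatial detail for robustness; per the abstract, the differences between aggregated and unaggregated inputs turned out to be small on both datasets.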
in Remote sensing > vol 14 n° 8 (April-2 2022) . - n° 1892 [article]
Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data / Andras Balazs in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
[article]
Titre : Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data Type de document : Article/Communication Auteurs : Andras Balazs, Auteur ; Eero Liski, Auteur ; Sakari Tuominen, Auteur Année de publication : 2022 Article en page(s) : n° 100012 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] algorithme génétique
[Termes IGN] bois sur pied
[Termes IGN] classification barycentrique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] covariance
[Termes IGN] diamètre à hauteur de poitrine
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Finlande
[Termes IGN] hauteur des arbres
[Termes IGN] inventaire forestier étranger (données)
[Termes IGN] peuplement forestier
[Termes IGN] réseau neuronal artificiel
[Termes IGN] semis de points
[Termes IGN] volume en bois
Résumé : (auteur) In the remote sensing of forests, point cloud data from airborne laser scanning contains high-value information for predicting the volume of growing stock and the size of trees. At the same time, laser scanning data allows a very high number of potential features that can be extracted from the point cloud data for predicting the forest variables. In some methods, the features are first extracted by user-defined algorithms and the best features are selected based on supervised learning, whereas both tasks can be carried out automatically by deep learning methods typically based on deep neural networks. In this study we tested the k-nearest neighbor method combined with a genetic algorithm (k-NN), an artificial neural network (ANN), a 2-dimensional convolutional neural network (2D-CNN) and a 3-dimensional CNN (3D-CNN) for estimating the following forest variables: volume of growing stock, stand mean height and mean diameter. The results indicate that there were no major differences in the accuracy of the tested methods, but the ANN and 3D-CNN generally resulted in the lowest RMSE values for the predicted forest variables and the highest R2 values between the predicted and observed forest variables. The lowest RMSE scores were 20.3% (3D-CNN), 6.4% (3D-CNN) and 11.2% (ANN) and the highest R2 results 0.90 (3D-CNN), 0.95 (3D-CNN) and 0.85 (ANN) for volume of growing stock, stand mean height and mean diameter, respectively. Covariances of all response variable combinations and all prediction methods were lower than corresponding covariances of the field observations. ANN predictions had the highest covariances for mean height vs. mean diameter and total growing stock vs. mean diameter combinations and 3D-CNN for mean height vs. total growing stock.
CNNs have a distinct theoretical advantage over the other methods in complex recognition or classification tasks, but the utilization of their full potential may require higher point density clouds than applied here. Thus, the relatively low density of the point cloud data may have been a contributing factor to the somewhat inconclusive ranking of the methods in this study. The input data and computer codes are available at: https://github.com/balazsan/ALS_NNs. Numéro de notice : A2022-265 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1016/j.ophoto.2022.100012 Date de publication en ligne : 12/03/2022 En ligne : https://doi.org/10.1016/j.ophoto.2022.100012 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100263
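The RMSE% and R2 figures used above to rank the estimators follow standard definitions, which can be sketched as below. Normalising RMSE by the mean of the field observations is an assumption; the notice does not state which denominator the relative RMSE uses.

```python
import math

def rmse_percent(observed, predicted):
    """Relative RMSE (%): root-mean-square error divided by the mean of
    the field observations (assumed denominator)."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

def r_squared(observed, predicted):
    """Coefficient of determination between observed and predicted values."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Under these definitions a perfect predictor scores RMSE% = 0 and R2 = 1, and a predictor that always outputs the observed mean scores R2 = 0, which matches how scores like 6.4% RMSE and 0.95 R2 for stand mean height should be read.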
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 4 (April 2022) . - n° 100012 [article]
Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Deep learning for archaeological object detection on LiDAR: New evaluation measures and insights / Marco Fiorucci in Remote sensing, vol 14 n° 7 (April-1 2022)
Detecting individuals' spatial familiarity with urban environments using eye movement data / Hua Liao in Computers, Environment and Urban Systems, vol 93 (April 2022)
Determination of building flood risk maps from LiDAR mobile mapping data / Yu Feng in Computers, Environment and Urban Systems, vol 93 (April 2022)
Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation / Yingjie Hu in International journal of geographical information science IJGIS, vol 36 n° 4 (April 2022)
Exploring the association between street built environment and street vitality using deep learning methods / Yunqin Li in Sustainable Cities and Society, vol 79 (April 2022)
A GAN-based approach toward architectural line drawing colorization prototyping / Qian (Chayn) Sun in The Visual Computer, vol 38 n° 4 (April 2022)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
A graph attention network for road marking classification from mobile LiDAR point clouds / Lina Fang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Graph learning based on signal smoothness representation for homogeneous and heterogeneous change detection / David Alejandro Jimenez-Sierra in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Graph neural network based model for multi-behavior session-based recommendation / Bo Yu in Geoinformatica, vol 26 n° 2 (April 2022)
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
MTLM: a multi-task learning model for travel time estimation / Saijun Xu in Geoinformatica, vol 26 n° 2 (April 2022)
PolGAN: A deep-learning-based unsupervised forest height estimation based on the synergy of PolInSAR and LiDAR data / Qi Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Regularized integer least-squares estimation: Tikhonov’s regularization in a weak GNSS model / Zemin Wu in Journal of geodesy, vol 96 n° 4 (April 2022)
Simulating future LUCC by coupling climate change and human effects based on multi-phase remote sensing data / Zihao Huang in Remote sensing, vol 14 n° 7 (April-1 2022)
Spatially oriented convolutional neural network for spatial relation extraction from natural language texts / Qinjun Qiu in Transactions in GIS, vol 26 n° 2 (April 2022)
Species level classification of Mediterranean sparse forests-maquis formations using Sentinel-2 imagery / Semiha Demirbaş Çağlayana in Geocarto international, vol 37 n° 6 ([01/04/2022])
Uncertainty estimation for stereo matching based on evidential deep learning / Chen Wang in Pattern recognition, vol 124 (April 2022)
VD-LAB: A view-decoupled network with local-global aggregation bridge for airborne laser scanning point cloud classification / Jihao Li in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)