Descriptor
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal
Documents available in this category (563)
Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds / Zhou Guo in International journal of geographical information science IJGIS, vol 34 n° 4 (April 2020)
[article]
Title: Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds
Document type: Article/Communication
Authors: Zhou Guo, Author; Chen-Chieh Feng, Author
Publication year: 2020
Pagination: pp 661 - 680
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] analyse multiéchelle
[Termes IGN] apprentissage profond
[Termes IGN] approche hiérarchique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données lidar
[Termes IGN] modélisation 3D
[Termes IGN] Oakland (Californie)
[Termes IGN] régression
[Termes IGN] semis de points
Abstract: (author) Point cloud classification, which provides meaningful semantic labels to the points in a point cloud, is essential for generating three-dimensional (3D) models. Its automation, however, remains challenging due to varying point densities and irregular point distributions. Adapting existing deep-learning approaches for two-dimensional (2D) image classification to point cloud classification is inefficient and results in the loss of information valuable for point cloud classification. In this article, a new approach that classifies point cloud directly in 3D is proposed. The approach uses multi-scale features generated by deep learning. It comprises three steps: (1) extract single-scale deep features using 3D convolutional neural network (CNN); (2) subsample the input point cloud at multiple scales, with the point cloud at each scale being an input to the 3D CNN, and combine deep features at multiple scales to form multi-scale and hierarchical features; and (3) retrieve the probabilities that each point belongs to the intended semantic category using a softmax regression classifier. The proposed approach was tested against two publicly available point cloud datasets to demonstrate its performance and compared to the results produced by other existing approaches. The experiment results achieved 96.89% overall accuracy on the Oakland dataset and 91.89% overall accuracy on the Europe dataset, which are the highest among the considered methods.
Record number: A2020-109
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2018.1552790
Online publication date: 10/12/2018
Online: https://doi.org/10.1080/13658816.2018.1552790
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94711
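The three-step pipeline described in the abstract above (per-scale feature extraction, multi-scale fusion, softmax classification) can be sketched as follows. This is a minimal illustration only: the grid subsampling and the neighbour-count "feature" stand in for the paper's learned 3D CNN descriptors, and all function names are hypothetical.

```python
import math

def subsample(points, cell):
    """Grid subsampling: keep one point per occupied voxel of size `cell`."""
    seen, kept = set(), []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, z))
    return kept

def local_feature(points, center, radius):
    """Toy stand-in for the per-scale 3D CNN: neighbour count in a sphere."""
    r2 = radius * radius
    return sum(1 for p in points
               if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2)

def multiscale_features(points, center, scales):
    """Steps 1-2: extract a feature at each scale and concatenate them."""
    return [local_feature(subsample(points, cell), center, 2 * cell)
            for cell in scales]

def softmax(logits):
    """Step 3: turn per-class scores into class probabilities."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

A real implementation would replace `local_feature` with the network's learned per-scale descriptor and feed the concatenated vector into the trained classifier.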
in International journal of geographical information science IJGIS > vol 34 n° 4 (April 2020) . - pp 661 - 680 [article]

What, where, and how to transfer in SAR target recognition based on deep CNNs / Zhongling Huang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
[article]
Title: What, where, and how to transfer in SAR target recognition based on deep CNNs
Document type: Article/Communication
Authors: Zhongling Huang, Author; Zongxu Pan, Author; Bin Lei, Author
Publication year: 2020
Pagination: pp 2324 - 2336
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image mixte
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de cible
[Termes IGN] données multisources
[Termes IGN] image optique
[Termes IGN] image radar moirée
[Termes IGN] source de données
[Termes IGN] transmission de données
Abstract: (author) Deep convolutional neural networks (DCNNs) have attracted much attention in remote sensing recently. Compared with the large-scale annotated data set in natural images, the lack of labeled data in remote sensing becomes an obstacle to train a deep network very well, especially in synthetic aperture radar (SAR) image interpretation. Transfer learning provides an effective way to solve this problem by borrowing knowledge from the source task to the target task. In optical remote sensing application, a prevalent mechanism is to fine-tune on an existing model pretrained with a large-scale natural image data set, such as ImageNet. However, this scheme does not achieve satisfactory performance for SAR applications because of the prominent discrepancy between SAR and optical images. In this article, we attempt to discuss three issues that are seldom studied before in detail: 1) what network and source tasks are better to transfer to SAR targets; 2) in which layer are transferred features more generic to SAR targets; and 3) how to transfer effectively to SAR targets recognition. Based on the analysis, a transitive transfer method via multisource data with domain adaptation is proposed in this article to decrease the discrepancy between the source data and SAR targets. Several experiments are conducted on OpenSARShip. The results indicate that the universal conclusions about transfer learning in natural images cannot be completely applied to SAR targets, and the analysis of what and where to transfer in SAR target recognition is helpful to decide how to transfer more effectively.
Record number: A2020-195
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2947634
Online publication date: 20/11/2019
Online: https://doi.org/10.1109/TGRS.2019.2947634
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94863
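The fine-tuning mechanism the abstract discusses (reuse generic lower-layer features, retrain only the task-specific part on scarce target labels) can be illustrated with a toy model. The fixed feature map below is a hypothetical stand-in for pretrained convolutional layers, not the authors' network or training procedure.

```python
def pretrained_features(x):
    """Frozen feature extractor: a stand-in for lower layers pretrained on
    a large source task. Its 'weights' are never updated during transfer."""
    return [x, x * x, 1.0]

def finetune_head(samples, lr=0.01, epochs=3000):
    """Retrain only the small task-specific head on the target data,
    one SGD step per labelled sample."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            f = pretrained_features(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    """Frozen features + fine-tuned head give the target-task prediction."""
    return sum(wi * fi for wi, fi in zip(w, pretrained_features(x)))
```

Because only the head is trained, the few target labels go a long way, which is the core appeal of transfer learning when annotated SAR data are scarce.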
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 4 (April 2020) . - pp 2324 - 2336 [article]

Dimension reduction methods applied to coastline extraction on hyperspectral imagery / Ozan Arslan in Geocarto international, vol 35 n° 4 (15/03/2020)
[article]
Title: Dimension reduction methods applied to coastline extraction on hyperspectral imagery
Document type: Article/Communication
Authors: Ozan Arslan, Author; Özer Akyürek, Author; Sinasi Kaya, Author; et al.
Publication year: 2020
Pagination: pp 376 - 390
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse en composantes principales
[Termes IGN] Bosphore, détroit du
[Termes IGN] classification par réseau neuronal
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] détection de contours
[Termes IGN] extraction
[Termes IGN] image EO1-Hyperion
[Termes IGN] image hyperspectrale
[Termes IGN] Istanbul (Turquie)
[Termes IGN] littoral
[Termes IGN] rapport signal sur bruit
[Termes IGN] réduction
[Termes IGN] télédétection
[Termes IGN] trait de côte
Abstract: (author) In this study, dimensionality reduction (DR) methods on a hyperspectral dataset to explore the influence on the process of extraction of coastlines were examined and performance of different DR algorithms on the detection of coastline in Bosphorus, Istanbul was investigated. Among these methods, principal component (PC) analysis, maximum noise fraction and independent component (IC) analysis were used in the experiments with the aim of comparing. The study was carried out using these well-known DR techniques on a real hyperspectral image, an Hyperion data set with 161 bands, in the course of the experiments. Three different classifiers (i.e. ML, SVM and neural network) were used for the classification of dimensionally reduced and original images to detect coastline in the region. The DR results were evaluated quantitatively and visually in order to determine the reduced dimensions of the image subsets. Findings show that there is no significant influence of using DR methods on the dataset on the detection of coastline.
Record number: A2020-099
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2018.1520920
Online publication date: 22/10/2018
Online: https://doi.org/10.1080/10106049.2018.1520920
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94690
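Of the DR methods compared in the abstract above, principal component analysis is the most standard. A minimal sketch of band-wise PCA on a (rows, cols, bands) cube, assuming an in-memory NumPy array rather than the authors' Hyperion processing chain:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project a (rows, cols, bands) hyperspectral cube onto its leading
    principal components, treating each pixel's spectrum as one sample."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X = X - X.mean(axis=0)                 # centre each band
    cov = np.cov(X, rowvar=False)          # (bands, bands) band covariance
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return (X @ vecs[:, order]).reshape(rows, cols, n_components)
```

On a 161-band Hyperion scene this would compress the spectrum to a handful of components before feeding the ML, SVM, or neural-network classifier.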
in Geocarto international > vol 35 n° 4 (15/03/2020) . - pp 376 - 390 [article]

Classification and segmentation of mining area objects in large-scale spares Lidar point cloud using a novel rotated density network / Yueguan Yan in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020)
[article]
Title: Classification and segmentation of mining area objects in large-scale spares Lidar point cloud using a novel rotated density network
Document type: Article/Communication
Authors: Yueguan Yan, Author; Haixu Yan, Author; Junting Guo, Author; Huayang Dai, Author
Publication year: 2020
Pagination: 19 p.
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] classification barycentrique
[Termes IGN] classification orientée objet
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] corrélation automatique de points homologues
[Termes IGN] densité des points
[Termes IGN] données lidar
[Termes IGN] objet 3D
[Termes IGN] reconnaissance d'objets
[Termes IGN] semis de points clairsemés
Abstract: (author) The classification and segmentation of large-scale, sparse, LiDAR point cloud with deep learning are widely used in engineering survey and geoscience. The loose structure and the non-uniform point density are the two major constraints to utilize the sparse point cloud. This paper proposes a lightweight auxiliary network, called the rotated density-based network (RD-Net), and a novel point cloud preprocessing method, Grid Trajectory Box (GT-Box), to solve these problems. The combination of RD-Net and PointNet was used to achieve high-precision 3D classification and segmentation of the sparse point cloud. It emphasizes the importance of the density feature of LiDAR points for 3D object recognition of sparse point cloud. Furthermore, RD-Net plus PointCNN, PointNet, PointCNN, and RD-Net were introduced as comparisons. Public datasets were used to evaluate the performance of the proposed method. The results showed that the RD-Net could significantly improve the performance of sparse point cloud recognition for the coordinate-based network and could improve the classification accuracy to 94% and the segmentation per-accuracy to 70%. Additionally, the results concluded that point-density information has an independent spatial–local correlation and plays an essential role in the process of sparse point cloud recognition.
Record number: A2020-256
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9030182
Online publication date: 24/03/2020
Online: https://doi.org/10.3390/ijgi9030182
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95012
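The abstract's point that per-point density carries information of its own can be illustrated with a naive O(n²) density feature. The sphere-volume normalisation below is an illustrative assumption, not RD-Net's actual input encoding.

```python
import math

def local_density(points, radius):
    """Per-point density: number of neighbours within `radius`, divided by
    the neighbourhood sphere volume. A brute-force stand-in for the kind
    of density feature a network like RD-Net could consume."""
    vol = 4.0 / 3.0 * math.pi * radius ** 3
    r2 = radius * radius
    densities = []
    for i, p in enumerate(points):
        n = sum(1 for j, q in enumerate(points) if i != j
                and sum((a - b) ** 2 for a, b in zip(p, q)) <= r2)
        densities.append(n / vol)
    return densities
```

In a sparse scan, isolated returns get near-zero density while points on solid objects score high, which is exactly the non-uniformity the paper exploits; a production version would use a spatial index instead of the quadratic scan.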
in ISPRS International journal of geo-information > vol 9 n° 3 (March 2020) . - 19 p. [article]

Deep learning for geometric and semantic tasks in photogrammetry and remote sensing / Christian Heipke in Geo-spatial Information Science, vol 23 n° 1 (March 2020)
[article]
Title: Deep learning for geometric and semantic tasks in photogrammetry and remote sensing
Document type: Article/Communication
Authors: Christian Heipke, Author; Franz Rottensteiner, Author
Publication year: 2020
Pagination: pp 10 - 19
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] image aérienne
[Termes IGN] intelligence artificielle
[Termes IGN] photogrammétrie numérique
[Termes IGN] télédétection
Abstract: (author) During the last few years, artificial intelligence based on deep learning, and particularly based on convolutional neural networks, has acted as a game changer in just about all tasks related to photogrammetry and remote sensing. Results have shown partly significant improvements in many projects all across the photogrammetric processing chain from image orientation to surface reconstruction, scene classification as well as change detection, object extraction and object tracking and recognition in image sequences. This paper summarizes the foundations of deep learning for photogrammetry and remote sensing before illustrating, by way of example, different projects being carried out at the Institute of Photogrammetry and GeoInformation, Leibniz University Hannover, in this exciting and fast moving field of research and development.
Record number: A2020-161
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/10095020.2020.1718003
Online publication date: 03/02/2020
Online: https://doi.org/10.1080/10095020.2020.1718003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94821
in Geo-spatial Information Science > vol 23 n° 1 (March 2020) . - pp 10 - 19 [article]

Deep SAR-Net: learning objects from signals / Zhongling Huang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
Permalink

Edge-reinforced convolutional neural network for road detection in very-high-resolution remote sensing imagery / Xiaoyan Lu in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 3 (March 2020)
Permalink

Poststack seismic data denoising based on 3-D convolutional neural network / Dawei Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 3 (March 2020)
Permalink

Sea-land segmentation using deep learning techniques for Landsat-8 OLI imagery / Ting Yang in Marine geodesy, Vol 43 n° 2 (March 2020)
Permalink

Unsupervised extraction of urban features from airborne lidar data by using self-organizing maps / Alper Sen in Survey review, vol 52 n° 371 (March 2020)
Permalink

Real-time mapping of natural disasters using citizen update streams / Iranga Subasinghe in International journal of geographical information science IJGIS, vol 34 n° 2 (February 2020)
Permalink

Tree annotations in LiDAR data using point densities and convolutional neural networks / Ananya Gupta in IEEE Transactions on geoscience and remote sensing, vol 58 n° 2 (February 2020)
Permalink

Volcano-seismic transfer learning and uncertainty quantification with bayesian neural networks / Angel Bueno in IEEE Transactions on geoscience and remote sensing, vol 58 n° 2 (February 2020)
Permalink

10th Colour and Visual Computing Symposium 2020 (CVCS 2020), Gjøvik, Norway, and Virtual, September 16-17, 2020 / Jean-Baptiste Thomas (2020)
Permalink