Descriptor
IGN Terms > computer science > artificial intelligence > machine learning > deep learning > artificial neural network > convolutional neural network
convolutional neural network
Documents available in this category (100)



Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification / Yuntao Liu in The Visual Computer, vol 38 n° 3 (March 2022)
[article]
Title: Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification
Document type: Article/Communication
Authors: Yuntao Liu; Yong Dou; Ruochun Jin; et al.
Publication year: 2022
Pages: pp 897 - 917
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN Terms] machine learning
[IGN Terms] Bayesian classification
[IGN Terms] training data (machine learning)
[IGN Terms] convolutional neural network
[IGN Terms] semantic segmentation
Abstract: (author) In this paper, a hierarchical learning algorithm based on the Bayesian Neural Network classifier with backtracking is proposed to support large-scale image classification, where a Visual Confusion Label Tree is established for constructing a hierarchical structure over large numbers of categories in image datasets and for determining the hierarchical learning tasks automatically. Specifically, the Visual Confusion Label Tree is established based on the outputs of convolutional neural network models. A parent node of the Visual Confusion Label Tree contains a set of sibling coarse-grained categories, and its child nodes hold sets of fine-grained categories which are partitions of the categories on the parent node. The proposed Hierarchical Bayesian Neural Network with backtracking algorithm benefits from the hierarchical structure of the Visual Confusion Label Tree: focusing on those confusion subsets instead of the entire set of categories makes the classification ability of the tree classifier stronger. The backtracking algorithm uses the uncertainty information captured from the Bayesian Neural Network to make a second classification that re-corrects samples classified incorrectly in the previous classification pass. Experiments on four large-scale datasets show that our tree classifier obtains a significant improvement over the state-of-the-art tree classifier, which demonstrates the discriminative hierarchical structure of our Visual Confusion Label Tree and the effectiveness of our Hierarchical Bayesian Neural Network with backtracking algorithm.
Record number: A2022-149
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-021-02058-w
Online publication date: 04/02/2021
Online: http://dx.doi.org/10.1007/s00371-021-02058-w
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100070
in The Visual Computer > vol 38 n° 3 (March 2022) . - pp 897 - 917 [article]
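The backtracking idea in this abstract lends itself to a short illustration. The sketch below is not the authors' code: the two-level label tree, the stub classifiers, and the entropy threshold are all hypothetical, and it only shows how Monte-Carlo predictive uncertainty at a coarse node can trigger a second, corrective descent into the runner-up branch.

```python
# Illustrative sketch only (hypothetical tree, stub classifiers): a coarse node
# routes a sample to a child; if the Monte-Carlo predictive entropy is high,
# the second-best child is also visited (backtracking) and the more confident
# fine-grained prediction wins.
import numpy as np

rng = np.random.default_rng(0)
TREE = {"animal": ["cat", "dog", "fox"], "vehicle": ["car", "truck", "bicycle"]}
COARSE = list(TREE)

def mc_predict(n_classes, n_passes=20):
    """Stand-in for n_passes stochastic forward passes of a Bayesian (MC-dropout) net."""
    logits = rng.normal(size=(n_passes, n_classes))
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    mean = probs.mean(axis=0)                                # predictive mean
    entropy = float(-(mean * np.log(mean + 1e-12)).sum())    # predictive entropy
    return mean, entropy

def classify_with_backtracking(entropy_threshold=0.6):
    coarse_probs, coarse_entropy = mc_predict(len(COARSE))
    ranked = np.argsort(coarse_probs)[::-1]
    # Backtrack into the runner-up coarse node only when the coarse decision is uncertain.
    candidates = ranked[:2] if coarse_entropy > entropy_threshold else ranked[:1]
    best_label, best_conf = None, -1.0
    for idx in candidates:
        fine_labels = TREE[COARSE[idx]]
        fine_probs, _ = mc_predict(len(fine_labels))
        k = int(fine_probs.argmax())
        if fine_probs[k] > best_conf:
            best_label, best_conf = fine_labels[k], float(fine_probs[k])
    return best_label, best_conf

print(classify_with_backtracking())
```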
Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation / Ramazan Unlu in The Visual Computer, vol 38 n° 2 (February 2022)
[article]
Title: Detection of damaged buildings after an earthquake with convolutional neural networks in conjunction with image segmentation
Document type: Article/Communication
Authors: Ramazan Unlu; Recep Kiriş
Publication year: 2022
Pages: pp 685 - 694
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN Terms] building
[IGN Terms] k-means classification
[IGN Terms] change detection
[IGN Terms] material damage
[IGN Terms] labelled training data
[IGN Terms] convolutional neural network
[IGN Terms] image segmentation
[IGN Terms] earthquake
Abstract: (author) Detecting damaged buildings after an earthquake as quickly as possible is important for emergency teams to reach these buildings and save the lives of many people. Today, the detection of damaged buildings after an earthquake relies on survivors contacting the authorities or on air vehicles such as helicopters. In this study, AI-based systems were tested to detect damaged or destroyed buildings by integrating them into street camera systems after unexpected disasters. For this purpose, we have used the VGG-16, VGG-19, and NASNet convolutional neural network models, which are often used for image recognition problems in the literature, to detect damaged buildings. In order to apply these models effectively, we first segmented all the images with the K-means clustering algorithm. Thereafter, in the first phase of this study, segmented images labeled "damaged buildings" and "normal" were classified, and the VGG-19 model was the most successful model with a 90% accuracy on the test set. In the second phase of the study, we created a multiclass classification problem by labeling segmented images as "damaged buildings," "less damaged buildings," and "normal." The same three architectures were used to achieve the most accurate classification results on the test set. VGG-19, VGG-16, and NASNet achieved considerable success on the test set with about 70%, 67%, and 62% accuracy, respectively.
Record number: A2022-145
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-020-02043-9
Online publication date: 03/01/2022
Online: https://doi.org/10.1007/s00371-020-02043-9
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100039
in The Visual Computer > vol 38 n° 2 (February 2022) . - pp 685 - 694 [article]
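As a rough illustration of the preprocessing step described above, the sketch below quantizes an image into K colour clusters with a small NumPy K-means before such an image would be handed to a classifier like VGG-16, VGG-19 or NASNet. It is a minimal sketch under assumed parameters (random input, k=3, fixed iteration count), not the authors' pipeline.

```python
# Minimal sketch (assumed parameters, random input): K-means colour quantization
# of an image, i.e. the segmentation step applied before CNN classification.
import numpy as np

def kmeans_segment(image, k=3, n_iter=10, seed=0):
    """Cluster pixels by colour and return the image rebuilt from cluster centres."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centre.
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centres; keep a centre unchanged if its cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return centres[labels].reshape(image.shape).astype(image.dtype)

# Demo on a random RGB image; a real pipeline would then feed the segmented
# image to VGG-16 / VGG-19 / NASNet for "damaged" vs "normal" classification.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
segmented = kmeans_segment(img, k=3)
print(segmented.shape, np.unique(segmented.reshape(-1, 3), axis=0).shape[0], "distinct colours")
```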
GCN-Denoiser: mesh denoising with graph convolutional networks / Yuefan Shen in ACM Transactions on Graphics, TOG, Vol 41 n° 1 (February 2022)
[article]
Title: GCN-Denoiser: mesh denoising with graph convolutional networks
Document type: Article/Communication
Authors: Yuefan Shen; Hongbo Fu; Zhongshuo Du; et al.
Publication year: 2022
Pages: n° 8
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN Terms] deep learning
[IGN Terms] noise filtering
[IGN Terms] triangular mesh
[IGN Terms] optimization (mathematics)
[IGN Terms] convolutional neural network
[IGN Terms] graph neural network
Abstract: (author) In this article, we present GCN-Denoiser, a novel feature-preserving mesh denoising method based on graph convolutional networks (GCNs). Unlike previous learning-based mesh denoising methods that exploit handcrafted or voxel-based representations for feature learning, our method explores the structure of a triangular mesh itself and introduces a graph representation followed by graph convolution operations in the dual space of triangles. We show that such a graph representation naturally captures the geometry features while being lightweight for both training and inference. To facilitate effective feature learning, our network exploits both static and dynamic edge convolutions, which allow us to learn information from both the explicit mesh structure and potential implicit relations among unconnected neighbors. To better approximate an unknown noise function, we introduce a cascaded optimization paradigm to progressively regress the noise-free facet normals with multiple GCNs. GCN-Denoiser achieves new state-of-the-art results on multiple noise datasets, including CAD models often containing sharp features and raw scan models with real noise captured from different devices. We also create a new dataset called PrintData containing 20 real scans with their corresponding ground-truth meshes for the research community. Our code and data are available at https://github.com/Jhonve/GCN-Denoiser.
Record number: A2022-302
Author affiliation: non IGN
Other associated URL: to ArXiv
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1145/3480168
Online publication date: 09/02/2022
Online: https://doi.org/10.1145/3480168
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100373
in ACM Transactions on Graphics, TOG > Vol 41 n° 1 (February 2022) . - n° 8 [article]
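To make the "dual space of triangles" concrete, here is a small sketch (toy two-triangle mesh, plain mean-aggregation convolution; not the GCN-Denoiser code from the linked repository): mesh facets become graph nodes, facets sharing a mesh edge become graph edges, and one graph-convolution step smooths per-facet normals over that dual graph.

```python
# Toy sketch (not the GCN-Denoiser implementation): build the dual graph of a
# triangle mesh (node = facet, edge = shared mesh edge) and apply one
# mean-aggregation graph convolution to the facet normals.
import numpy as np
from collections import defaultdict

# Two triangles sharing the mesh edge (1, 2): a tiny "mesh".
vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.2]])
faces = np.array([[0, 1, 2], [1, 3, 2]])

def facet_normals(vertices, faces):
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def dual_adjacency(faces):
    """Facets sharing a mesh edge become neighbours in the dual graph."""
    edge_to_faces = defaultdict(list)
    for f, tri in enumerate(faces):
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_to_faces[edge].append(f)
    adj = defaultdict(set)
    for fs in edge_to_faces.values():
        for f in fs:
            adj[f].update(g for g in fs if g != f)
    return adj

def graph_conv_step(features, adj):
    """One mean-aggregation 'convolution': average each node with its neighbours."""
    out = features.copy()
    for f, neighbours in adj.items():
        if neighbours:
            out[f] = (features[f] + features[list(neighbours)].sum(axis=0)) / (len(neighbours) + 1)
    return out / np.linalg.norm(out, axis=1, keepdims=True)

normals = facet_normals(vertices, faces)
smoothed = graph_conv_step(normals, dual_adjacency(faces))
print(np.round(smoothed, 3))
```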
GNSS reflectometry global ocean wind speed using deep learning: Development and assessment of CyGNSSnet / Milad Asgarimehr in Remote sensing of environment, vol 269 (February 2022)
[article]
Title: GNSS reflectometry global ocean wind speed using deep learning: Development and assessment of CyGNSSnet
Document type: Article/Communication
Authors: Milad Asgarimehr; Caroline Arnold; Tobias Weigel; Chris Ruf; Jens Wickert
Publication year: 2022
Pages: n° 112801
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Space geodesy applications
[IGN Terms] deep learning
[IGN Terms] numerical model
[IGN Terms] GNSS reflectometry
[IGN Terms] convolutional neural network
[IGN Terms] wind
[IGN Terms] speed
Abstract: (author) GNSS Reflectometry (GNSS-R) is a novel remote sensing technique for monitoring geophysical parameters using GNSS signals reflected from the Earth's surface. Ocean wind speed monitoring is the main objective of Cyclone GNSS (CyGNSS), a GNSS-R constellation of eight microsatellites launched in late 2016. In this study, the capability of deep learning, especially for operational wind speed derivation from the measured Delay-Doppler Maps (DDMs), is characterized. CyGNSSnet is based on convolutional layers for feature extraction from bistatic radar cross section (BRCS) DDMs, along with fully connected layers for processing ancillary technical and higher-level input parameters. The best architecture is determined on a validation set and is evaluated on a completely blind dataset from a different time span than that of the training data, to validate the generality of the model for operational usage. After data quality control, CyGNSSnet reaches an RMSE of 1.36 m/s, a significant improvement of 28% over the officially operational retrieval algorithm. This RMSE is the lowest reported in the literature for any conventional or machine learning-based algorithm. The benefits of the convolutional layers and the advantages and weaknesses of the model are discussed. CyGNSSnet offers efficient processing of GNSS-R measurements for high-quality global ocean winds.
Record number: A2022-079
Author affiliation: non IGN
Theme: POSITIONING
Nature: Article
DOI: 10.1016/j.rse.2021.112801
Online publication date: 23/11/2021
Online: https://doi.org/10.1016/j.rse.2021.112801
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99764
in Remote sensing of environment > vol 269 (February 2022) . - n° 112801 [article]
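The architecture described here (convolutional feature extraction from BRCS DDMs fused with fully connected layers for ancillary parameters) can be sketched as a two-branch regression network. The PyTorch block below is an assumption-laden illustration, not CyGNSSnet itself: the DDM size (17 delay x 11 Doppler bins), the number of ancillary inputs, the class name and all layer widths are placeholders.

```python
# Illustrative two-branch regressor (hypothetical sizes, not CyGNSSnet):
# a CNN branch over the BRCS delay-Doppler map and an MLP branch over
# ancillary scalar inputs, concatenated before a wind-speed head.
import torch
import torch.nn as nn

class TwoBranchWindNet(nn.Module):
    def __init__(self, n_ancillary=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.mlp = nn.Sequential(nn.Linear(n_ancillary, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 * 4 * 4 + 32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, ddm, ancillary):
        features = torch.cat([self.cnn(ddm), self.mlp(ancillary)], dim=1)
        return self.head(features).squeeze(-1)  # wind speed estimate in m/s

# Forward pass on random tensors, batch of 4 samples.
model = TwoBranchWindNet()
ddm = torch.randn(4, 1, 17, 11)   # BRCS DDM: 17 delay x 11 Doppler bins (assumed)
ancillary = torch.randn(4, 8)     # e.g. incidence angle, SNR, antenna gain (placeholders)
print(model(ddm, ancillary).shape)  # torch.Size([4])
```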
Object recognition algorithm based on optimized nonlinear activation function-global convolutional neural network / Feng-Ping An in The Visual Computer, vol 38 n° 2 (February 2022)
[article]
Title: Object recognition algorithm based on optimized nonlinear activation function-global convolutional neural network
Document type: Article/Communication
Authors: Feng-Ping An; Jun-e Liu; Lei Bai
Publication year: 2022
Pages: pp 541 - 553
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Image processing
[IGN Terms] object detection
[IGN Terms] nonlinear programming
[IGN Terms] convolutional neural network
Abstract: (author) Traditional object recognition algorithms cannot meet the accuracy requirements of object recognition in the actual warehousing and logistics field. In recent years, the rapid development of deep learning theory has provided a technical approach to solving this problem, and a number of object recognition algorithms based on deep learning have been proposed, promoted, and applied. However, deep learning has the following problems when applied to object recognition: first, the nonlinear modeling ability of the activation function in the deep learning model is poor; second, the deep learning model performs a large number of repeated pooling operations during which information is lost. In view of these shortcomings, this paper proposes multiple-parameter exponential linear units with uniform and learnable parameter forms and introduces two learned parameters into the exponential linear unit (ELU), enabling it to represent both piecewise linear and exponential nonlinear functions. The ELU therefore has good nonlinear modeling capabilities. At the same time, to address the loss of information in the large number of repeated pooling operations, this paper proposes a new global convolutional neural network structure. This network structure makes full use of the local and global information of feature maps from different layers of the network and reduces the loss of feature information caused by the many pooling operations. Based on the above ideas, this paper proposes an object recognition algorithm based on an optimized nonlinear activation function and a global convolutional neural network. Experiments were carried out on the CIFAR100 dataset and the ImageNet dataset using the proposed algorithm. The results show that the proposed object recognition method not only has better recognition accuracy than traditional machine learning and other deep learning models but also good stability and robustness.
Record number: A2022-147
Author affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1007/s00371-020-02033-x
Online publication date: 03/01/2022
Online: https://doi.org/10.1007/s00371-020-02033-x
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100041
in The Visual Computer > vol 38 n° 2 (February 2022) . - pp 541 - 553 [article]
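The activation described here, an ELU whose negative branch is controlled by two learned parameters, can be written down compactly. The block below is a hedged sketch of such a multiple-parameter ELU; the module name and the initial values are my own choices, not taken from the paper.

```python
# Sketch of a multiple-parameter ELU: f(x) = x for x > 0 and
# f(x) = alpha * (exp(beta * x) - 1) for x <= 0, with alpha and beta learned.
# Initial values (1.0) are arbitrary placeholders, not from the paper.
import torch
import torch.nn as nn

class MultiParamELU(nn.Module):
    def __init__(self, alpha_init=1.0, beta_init=1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x):
        # Positive part is identity; negative part is a scaled, stretched exponential.
        return torch.where(x > 0, x, self.alpha * (torch.exp(self.beta * x) - 1.0))

act = MultiParamELU()
x = torch.linspace(-3, 3, 7)
print(act(x))                                                             # activation values on a small grid
print(sum(p.numel() for p in act.parameters()), "learnable parameters")   # 2
```

Because alpha and beta are registered as parameters, they are updated together with the convolutional weights during training, which is what lets the unit interpolate between piecewise-linear and exponential shapes.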
Spatiotemporal temperature fusion based on a deep convolutional network / Xuehan Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 2 (February 2022)
Photogrammetric point clouds: quality assessment, filtering, and change detection / Zhenchao Zhang (2022)
A deep multi-modal learning method and a new RGB-depth data set for building roof extraction / Mehdi Khoshboresh Masouleh in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 10 (October 2021)
Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network / Jussi Leinonen in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 9 (September 2021)
Unsupervised representation high-resolution remote sensing image scene classification via contrastive learning convolutional neural network / Fengpeng Li in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 8 (August 2021)
Improving human mobility identification with trajectory augmentation / Fan Zhou in Geoinformatica [online], vol 25 n° 3 (July 2021)
A convolutional neural network approach to predict non-permissive environments from moderate-resolution imagery / Seth Goodman in Transactions in GIS, Vol 25 n° 2 (April 2021)
Apprentissage profond et IA pour l'amélioration de la robustesse des techniques de localisation par vision artificielle [Deep learning and AI for improving the robustness of computer-vision localization techniques] / Achref Elouni (2021)