Descriptor
IGN terms > mathematics > mathematical statistics > data analysis > classification > neural network classification > convolutional neural network classification
Documents available in this category (92)
Using street view images to identify road noise barriers with ensemble classification model and geospatial analysis / Kai Zhang in Sustainable Cities and Society, vol 78 (March 2022)
[article]
Title: Using street view images to identify road noise barriers with ensemble classification model and geospatial analysis
Document type: Article/Communication
Authors: Kai Zhang; Zhen Qian; Yue Yang; et al.
Publication year: 2022
Article pages: n° 103598
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Spatial analysis
[IGN terms] cluster analysis
[IGN terms] deep learning
[IGN terms] noise mapping
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] spatial distribution
[IGN terms] Streetview image
[IGN terms] noise control
[IGN terms] urban environment
[IGN terms] OpenStreetMap
[IGN terms] urban planning
[IGN terms] noise pollution
[IGN terms] road traffic
[IGN terms] sustainable city
Abstract: (author) Road noise barriers (RNBs) are important urban infrastructures that relieve the harm of traffic noise pollution for citizens. Obtaining the spatial distribution characteristics of RNBs, such as precise positions and mileage, can therefore greatly help in producing more accurate urban noise maps and in assessing the quality of the urban living environment for sustainable urban development. However, effective and efficient methods for identifying RNBs and acquiring their attributes over large areas are scarce. This study constructs an ensemble classification model (ECM) to automatically identify RNBs at the city level based on Baidu Street View (BSV). Firstly, a bootstrap sampling method is proposed to build a street-view-image-based training set, in which the effect of imbalanced sample categories is reduced by adding confusing negative samples. Secondly, two state-of-the-art deep learning models, ResNet and DenseNet, are ensembled to construct an ECM based on the bagging framework. Finally, a post-processing method based on geospatial analysis is proposed to eliminate street view images (SVIs) that are misclassified as RNBs. This study takes Suzhou, China as the study area to validate the proposed method. The model achieved an accuracy and F1-score of 0.98 and 0.90, respectively. The total mileage of the RNBs in Suzhou was 178,919 m. The results demonstrate the performance of the proposed RNB identification framework, and the case of photovoltaic noise barriers (PVNBs) illustrates the significance of obtaining RNB attributes for accelerating sustainable urban development.
Record number: A2022-241
Author affiliation: non IGN
Theme: GEOMATIQUE/IMAGERIE/INFORMATIQUE
Type: Article
DOI: 10.1016/j.scs.2021.103598
Online publication date: 20/12/2021
Online: https://doi.org/10.1016/j.scs.2021.103598
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100167
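The bagging-style ensembling described in the abstract (ResNet and DenseNet combined into one ECM) can be illustrated with a minimal soft-voting sketch. This is an assumption-laden illustration of the general technique, not the authors' actual code; the model names and probability values are hypothetical placeholders.

```python
# Soft-voting ensemble sketch: average per-class probabilities from several
# base classifiers and pick the class with the highest averaged score.

def soft_vote(prob_lists):
    """Average per-class probabilities over the base classifiers."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    return [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]

def predict(prob_lists, labels):
    """Return the label with the highest averaged probability."""
    avg = soft_vote(prob_lists)
    return labels[max(range(len(avg)), key=avg.__getitem__)]

# Two hypothetical base models (e.g. a ResNet head and a DenseNet head)
# scoring one street view image for the classes ("barrier", "no_barrier"):
resnet_probs = [0.7, 0.3]
densenet_probs = [0.4, 0.6]
print(predict([resnet_probs, densenet_probs], ["barrier", "no_barrier"]))
# prints: barrier  (averaged scores 0.55 vs 0.45)
```

In a bagging framework each base model would additionally be trained on its own bootstrap resample of the training set, which is what the abstract's first step provides.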
in Sustainable Cities and Society > vol 78 (March 2022) . - n° 103598

Visual vs internal attention mechanisms in deep neural networks for image classification and object detection / Abraham Montoya Obeso in Pattern recognition, vol 123 (March 2022)
[article]
Title: Visual vs internal attention mechanisms in deep neural networks for image classification and object detection
Document type: Article/Communication
Authors: Abraham Montoya Obeso; Jenny Benois-Pineau; Mireya S. García Vázquez; et al.
Publication year: 2022
Article pages: n° 108411
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] visual analysis
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] training data (machine learning)
[IGN terms] feature extraction
[IGN terms] eye tracking
[IGN terms] saliency
[IGN terms] semantic segmentation
[IGN terms] data visualization
Abstract: (author) The so-called "attention mechanisms" in Deep Neural Networks (DNNs) denote an automatic adaptation of DNNs to capture representative features for a given classification task and its data. Such attention mechanisms act both globally, by reinforcing feature channels, and locally, by stressing features within each feature map. Channel and feature importance are learnt in the global end-to-end DNN training process. In this paper, we present a study and propose a method with a different approach: adding supplementary visual data alongside the training images. We use human visual attention maps obtained independently through psycho-visual experiments, in both task-driven and free-viewing conditions, or through powerful models for the prediction of visual attention maps. We add visual attention maps as new data alongside the images, thus introducing human visual attention into DNN training, and compare this with both global and local automatic attention mechanisms. Experimental results show that known attention mechanisms in DNNs behave much like human visual attention, but the proposed approach still allows faster convergence and better performance in image classification tasks.
Record number: A2022-197
Author affiliation: non IGN
Theme: IMAGERIE
Type: Article
DOI: 10.1016/j.patcog.2021.108411
Online publication date: 12/11/2021
Online: https://doi.org/10.1016/j.patcog.2021.108411
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99988
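One common way to feed an externally obtained visual attention map to a network, consistent with "adding visual attention maps as new data alongside images", is to stack the map as an extra input channel next to the RGB channels. The sketch below illustrates that general idea only; it is an assumption, not the authors' exact architecture, and the tiny image values are made up.

```python
# Stack a human visual-attention (saliency) map as a fourth input channel
# next to the three RGB channels, producing a 4-channel network input.

def stack_attention_channel(rgb, attention):
    """rgb: list of 3 HxW channels; attention: one HxW map in [0, 1]."""
    assert len(rgb) == 3
    h, w = len(attention), len(attention[0])
    for ch in rgb:  # every channel must match the attention map's size
        assert len(ch) == h and len(ch[0]) == w
    return rgb + [attention]  # 4-channel input for the network's first layer

image = [[[0.1, 0.2], [0.3, 0.4]]] * 3   # tiny 2x2 "RGB" image (3 channels)
saliency = [[0.9, 0.1], [0.2, 0.8]]      # matching attention map
x = stack_attention_channel(image, saliency)
print(len(x))  # prints: 4
```

The network's first convolution then only needs its input-channel count increased from 3 to 4; everything downstream is unchanged.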
in Pattern recognition > vol 123 (March 2022) . - n° 108411

Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests / Chong Zhang in Remote sensing, vol 14 n° 4 (February-2 2022)
[article]
Title: Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests
Document type: Article/Communication
Authors: Chong Zhang; Jiawei Zhou; Huiwen Wang; et al.
Publication year: 2022
Article pages: n° 874
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] edge detection
[IGN terms] data sampling
[IGN terms] entropy
[IGN terms] quantitative estimation
[IGN terms] broadleaf tree
[IGN terms] tree height
[IGN terms] UAV image
[IGN terms] mixed stand
[IGN terms] Pinophyta
[IGN terms] image segmentation
Abstract: (author) High-resolution UAV imagery paired with a convolutional neural network approach offers significant advantages for accurately measuring forest ecosystems. Although numerous studies exist on individual tree crown delineation, species classification, and quantity detection, performing all of these tasks simultaneously has rarely been explored, especially in mixed forests. In this study, we propose a new method for individual tree segmentation and identification based on an improved Mask R-CNN. In the optimized network, the fusion direction in the feature pyramid network is changed from bottom-up to top-down to shorten the feature acquisition path between the different levels. Meanwhile, a boundary-weighted loss module is introduced into the cross-entropy loss function Lmask to refine the target loss. All geometric parameters associated with the canopies (contour, centre of gravity, and area) are ultimately extracted from the mask by a boundary segmentation algorithm. The results showed that the F1-score and mAP for coniferous species were higher than 90%, while those of broadleaf species ranged from 75% to 85.44%. The producer's accuracy for coniferous forests ranged from 0.80 to 0.95 and that for broadleaf from 0.87 to 0.93; the user's accuracy for coniferous ranged from 0.81 to 0.84 and that for broadleaf from 0.71 to 0.76. The total number of trees predicted was 50,041 for the entire study area, with an overall error of 5.11%. The method under study is compared with other networks, including U-net and YOLOv3. The results show that the improved Mask R-CNN has advantages in broadleaf canopy segmentation and tree number detection.
Record number: A2022-168
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Type: Article
DOI: 10.3390/rs14040874
Online publication date: 11/02/2022
Online: https://doi.org/10.3390/rs14040874
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99793
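The "boundary-weighted loss module" in the abstract can be sketched as a cross-entropy in which pixels flagged as lying on a crown boundary receive a larger weight, so boundary errors are penalized more. The specific weighting scheme and values below are illustrative assumptions, not the paper's exact formulation of Lmask.

```python
# Boundary-weighted binary cross-entropy sketch: boundary pixels (b == 1)
# contribute with weight w_boundary, interior pixels with weight 1.

import math

def boundary_weighted_ce(probs, targets, boundary, w_boundary=2.0):
    """probs/targets/boundary are flat per-pixel lists; boundary is 0/1."""
    total, weight_sum = 0.0, 0.0
    for p, t, b in zip(probs, targets, boundary):
        w = w_boundary if b else 1.0
        p = min(max(p, 1e-7), 1 - 1e-7)  # numerical safety for log()
        ce = -(t * math.log(p) + (1 - t) * math.log(1 - p))
        total += w * ce
        weight_sum += w
    return total / weight_sum  # normalized so weights rescale, not inflate

# A well-predicted interior pixel and a badly predicted boundary pixel:
weighted = boundary_weighted_ce([0.9, 0.2], [1, 1], [0, 1])
plain = boundary_weighted_ce([0.9, 0.2], [1, 1], [0, 0])
print(weighted > plain)  # prints: True (the boundary error is penalized more)
```

Emphasizing boundary pixels this way is a standard trick for sharpening mask edges in instance segmentation, which matches the abstract's stated goal of refining the target loss.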
in Remote sensing > vol 14 n° 4 (February-2 2022) . - n° 874

A combination of convolutional and graph neural networks for regularized road surface extraction / Jingjing Yan in IEEE Transactions on geoscience and remote sensing, vol 60 n° 2 (February 2022)
[article]
Title: A combination of convolutional and graph neural networks for regularized road surface extraction
Document type: Article/Communication
Authors: Jingjing Yan; Shunping Ji; Yao Wei
Publication year: 2022
Article pages: n° 4409113
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] Bavaria (Germany)
[IGN terms] convolutional neural network classification
[IGN terms] edge detection
[IGN terms] feature extraction
[IGN terms] road network extraction
[IGN terms] aerial image
[IGN terms] dataset
[IGN terms] optimization (mathematics)
[IGN terms] regression
[IGN terms] graph neural network
[IGN terms] Wuhan (China)
Abstract: (author) Road surface extraction from high-resolution remote sensing images has many engineering applications; however, extracting regularized and smooth road surface maps that reach the level of human delineation is a very challenging task, and substantial, time-consuming manual work is usually unavoidable. In this article, we propose a novel regularized road surface extraction framework that solves this problem by introducing a graph neural network (GNN) to process a road graph preconstructed from easily accessible road centerlines. The proposed framework formulates road surface extraction as two-sided width inference on the road graph and consists of a convolutional neural network (CNN)-based feature extractor and a GNN model for vertex attribute adjustment. The CNN extracts the high-level abstract features of each vertex in the graph as the input of the GNN, as well as the road boundary features that allow roads to be distinguished from the background. The GNN propagates and aggregates the features of the vertices in the graph to globally optimize the regression of the regularized widths of the vertices. At the same time, a biased centerline map can be corrected based on the width prediction result. To the best of the authors' knowledge, this is the first study to introduce a GNN into regularized human-level road surface extraction. The proposed method was evaluated on four diverse datasets. The results show that it comprehensively outperforms recent CNN-based segmentation methods and other regularization methods in intersection over union (IoU) and smoothness score, and a visual check shows that a majority of its predictions approach the level of human delineation.
Record number: A2022-297
Author affiliation: non IGN
Theme: IMAGERIE
Type: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3151688
Online publication date: 15/02/2022
Online: https://doi.org/10.1109/TGRS.2022.3151688
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100355
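The GNN step the abstract describes, propagating and aggregating vertex features along the road graph to regularize per-vertex width regression, can be sketched as a simple one-hop mean aggregation over the centerline graph. The graph, the mixing factor alpha, and the width values below are illustrative assumptions, not the paper's model.

```python
# One-hop mean aggregation over a road graph: each vertex's width estimate
# is blended with the mean of its neighbours', smoothing widths along the
# centerline (the regularizing effect a GNN layer provides).

def propagate_widths(widths, edges, alpha=0.5, steps=1):
    """w_i <- (1 - alpha) * w_i + alpha * mean(widths of i's neighbours)."""
    n = len(widths)
    neighbours = {i: [] for i in range(n)}
    for a, b in edges:  # undirected centerline graph
        neighbours[a].append(b)
        neighbours[b].append(a)
    for _ in range(steps):
        new = []
        for i in range(n):
            ns = neighbours[i]
            mean_n = sum(widths[j] for j in ns) / len(ns) if ns else widths[i]
            new.append((1 - alpha) * widths[i] + alpha * mean_n)
        widths = new
    return widths

# A chain of three centerline vertices; the noisy middle estimate is pulled
# toward its neighbours and the chain converges to a consistent width:
print(propagate_widths([6.0, 12.0, 6.0], [(0, 1), (1, 2)]))
# prints: [9.0, 9.0, 9.0]
```

A real GNN would learn the aggregation weights from the CNN's vertex features rather than use a fixed alpha, but the propagate-and-aggregate structure is the same.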
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 2 (February 2022) . - n° 4409113

Decision fusion of deep learning and shallow learning for marine oil spill detection / Junfang Yang in Remote sensing, vol 14 n° 3 (February-1 2022)
[article]
Title: Decision fusion of deep learning and shallow learning for marine oil spill detection
Document type: Article/Communication
Authors: Junfang Yang; Yi Ma; Yabin Hu; et al.
Publication year: 2022
Article pages: n° 666
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] fusion algorithm
[IGN terms] multiscale analysis
[IGN terms] convolutional neural network classification
[IGN terms] support vector machine classification
[IGN terms] hydrocarbon
[IGN terms] hyperspectral image
[IGN terms] oil spill
[IGN terms] marine environment
[IGN terms] marine pollution
[IGN terms] classification accuracy
[IGN terms] fuzzy subset
[IGN terms] ecological monitoring
[IGN terms] wavelet transform
Abstract: (author) Marine oil spills are highly harmful emergencies and have become a hot topic in marine environmental monitoring research. Optical remote sensing is an important means of monitoring marine oil spills, but clouds, weather, and light control the amount of available data, which often limits feature characterization with a single classifier and therefore makes accurate monitoring of marine oil spills difficult. In this paper, we develop a decision fusion algorithm that integrates deep learning methods and shallow learning methods based on multi-scale features to improve oil spill detection accuracy when samples are limited. Based on the multi-scale features obtained after a wavelet transform, two deep learning methods and two classical shallow learning algorithms are used to extract oil slick information from hyperspectral oil spill images. A decision fusion algorithm based on fuzzy membership degree is introduced to fuse the multi-source oil spill information. The research shows that oil spill detection accuracy using the decision fusion algorithm is higher than that of the individual detection algorithms. It is worth noting that detection accuracy is affected by features at different scales: the decision fusion algorithm applied to the first-level scale features further improves oil spill detection accuracy. The overall classification accuracy of the proposed method is 91.93%, which is 2.03%, 2.15%, 1.32%, and 0.43% higher than that of the SVM, DBN, 1D-CNN, and MRF-CNN algorithms, respectively.
Record number: A2022-125
Author affiliation: non IGN
Theme: IMAGERIE
Type: Article
DOI: 10.3390/rs14030666
Online publication date: 30/01/2022
Online: https://doi.org/10.3390/rs14030666
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99688
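Decision fusion based on fuzzy membership degrees, the general scheme the abstract describes, can be sketched as follows: each classifier outputs a membership degree per class, the degrees are aggregated, and the fused decision is the class with the largest combined membership. Aggregation by the mean, and the classifier names and values below, are assumptions for illustration only.

```python
# Fuzzy-membership decision fusion sketch: combine per-class membership
# degrees from several classifiers and take the class with the largest
# fused membership.

def fuse_memberships(memberships):
    """memberships: list over classifiers of {class_name: degree in [0, 1]}."""
    classes = memberships[0].keys()
    fused = {c: sum(m[c] for m in memberships) / len(memberships)
             for c in classes}
    return max(fused, key=fused.get), fused

# Hypothetical outputs of a shallow learner (SVM) and a deep learner (CNN)
# for one hyperspectral pixel:
svm = {"oil": 0.55, "water": 0.45}
cnn = {"oil": 0.80, "water": 0.20}
label, fused = fuse_memberships([svm, cnn])
print(label)  # prints: oil
```

The appeal of fusing at the decision level, rather than the feature level, is that the base classifiers can remain entirely heterogeneous, which is exactly the deep/shallow mix the paper exploits.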
in Remote sensing > vol 14 n° 3 (February-1 2022) . - n° 666

Other records in this category:
GazPNE: annotation-free deep learning for place name extraction from microblogs leveraging gazetteer and synthetic data by rules / Xuke Hu in International journal of geographical information science IJGIS, vol 36 n° 2 (February 2022)
GisGCN: a visual graph-based framework to match geographical areas through time / Margarita Khokhlova in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)
Monthly mapping of forest harvesting using dense time series Sentinel-1 SAR imagery and deep learning / Feng Zhao in Remote sensing of environment, vol 269 (February 2022)
PCEDNet: a lightweight neural network for fast and interactive edge detection in 3D point clouds / Chems-Eddine Himeur in ACM Transactions on Graphics, TOG, Vol 41 n° 1 (February 2022)
Semantic segmentation of land cover from high resolution multispectral satellite images by spectral-spatial convolutional neural network / Ekrem Saralioglu in Geocarto international, vol 37 n° 2 ([15/01/2022])
Analysis of pedestrian movements and gestures using an on-board camera to predict their intentions / Joseph Gesnouin (2022)
Attributs de texture extraits d'images multispectrales acquises en conditions d'éclairage non contrôlées : application à l'agriculture de précision / Anis Amziane (2022)
A benchmark of named entity recognition approaches in historical documents : application to 19th century French directories / Nathalie Abadie (2022)
Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)
Deep image translation with an affinity-based change prior for unsupervised multimodal change detection / Luigi Tommaso Luppino in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)