Descriptor
IGN Terms > mathematics > mathematical statistics > data analysis > classification > neural network classification
Documents available in this category (167)
Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
[article]
Title: Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images Document type: Article/Communication Authors: Hanwen Xu, Author; Xinming Tang, Author; Bo Ai, Author; et al., Author Publication year: 2022 Article pages: n° 4411915 General note: bibliography Languages: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN Terms] deep learning
[IGN Terms] network architecture
[IGN Terms] attention (machine learning)
[IGN Terms] classification by convolutional neural network
[IGN Terms] entropy
[IGN Terms] feature extraction
[IGN Terms] very high resolution image
[IGN Terms] multiscale segmentation
[IGN Terms] semantic segmentation
Abstract: (author) Very-high-resolution (VHR) remote sensing images contain various multiscale objects, such as large-scale buildings and small-scale cars. However, these multiscale objects cannot be considered simultaneously in the widely used backbones with a large downsampling factor (e.g., VGG-like and ResNet-like), resulting in the appearance of various context aggregation approaches, such as fusing low-level features and attention-based modules. To alleviate this problem caused by backbones with a large downsampling factor, we propose a feature-selection high-resolution network (FSHRNet) based on an observation: if the features maintain high resolution throughout the network, a high-precision segmentation result can be obtained using only a 1×1 convolution layer, with no need for complex context aggregation modules. Specifically, the backbone of FSHRNet is a multibranch structure similar to HRNet in which the high-resolution branch is the principal line. Then, a lightweight dynamic weight module, named the feature-selection convolution (FSConv) layer, is presented to fuse multiresolution features, allowing adaptive feature selection based on the characteristics of objects. Finally, a specially designed 1×1 convolution layer derived from hypersphere embedding is used to produce the segmentation result. Experiments against other widely used methods show that the proposed FSHRNet obtains competitive performance on the ISPRS Vaihingen dataset, the ISPRS Potsdam dataset, and the iSAID dataset.
Record number: A2022-559 Author affiliation: not IGN Theme: IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2022.3183144 Online: https://doi.org/10.1109/TGRS.2022.3183144 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101184
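The hypersphere-embedding idea behind the final 1×1 convolution can be illustrated with a minimal sketch: once a pixel's feature vector and each class weight vector are projected onto the unit hypersphere, the 1×1 convolution reduces to cosine similarity between them. This is an illustrative Python sketch under that assumption, not the authors' implementation; the function names are hypothetical.

```python
import math

def l2_normalize(v):
    """Project a vector onto the unit hypersphere."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine_logits(feature, class_weights):
    """Per-pixel class scores as cosine similarity between the
    normalized feature and each normalized class weight vector,
    i.e. the role a 1x1 convolution plays after hypersphere
    embedding (illustrative only)."""
    f = l2_normalize(feature)
    return [sum(a * b for a, b in zip(f, l2_normalize(w)))
            for w in class_weights]

# A feature aligned with class 0's weight direction scores highest.
scores = cosine_logits([2.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Because both vectors are unit length, the scores are bounded in [-1, 1] regardless of feature magnitude, which is the point of the embedding.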
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 6 (June 2022) . - n° 4411915[article]
Invariant structure representation for remote sensing object detection based on graph modeling / Zicong Zhu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
[article]
Title: Invariant structure representation for remote sensing object detection based on graph modeling Document type: Article/Communication Authors: Zicong Zhu, Author; Xian Sun, Author; Wenhui Diao, Author; et al., Author Publication year: 2022 Article pages: n° 5625217 General note: bibliography Languages: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN Terms] classification by convolutional neural network
[IGN Terms] object detection
[IGN Terms] unlabelled training data
[IGN Terms] feature extraction
[IGN Terms] digital image filtering
[IGN Terms] image granularity
[IGN Terms] graph
[IGN Terms] invariant
Abstract: (author) Due to the characteristics of vertical orthophoto imaging, the apparent structural features of objects in remote sensing (RS) images are relatively stable, such as the cross-shaped structure of an aircraft and the rectangular structure of a vehicle. Compared with traditional visual features, using these features is conducive to improving the accuracy of object detection. However, there are few studies on such characteristics. In this article, we systematically study the invariant structural features of remote sensing objects and propose a graph focusing aggregation network (GFA-Net) to represent them. In view of the problem that traditional convolutional neural networks (CNNs) are sensitive to changes in rotation, scale, and other factors, which makes it difficult to extract structural features, we propose the graph focusing process (GFP) based on the idea of graph convolution. Analysis and experiments show that the graph structure has significant advantages over the Euclidean feature space of a CNN in expressing such structural features. To realize efficient end-to-end training of the above model, we design a graph aggregation network (GAN) to update the weights of nodes. We verify the effectiveness of our method on the proposed multitask aircraft component segmentation dataset (ACSD) and the large-scale Fine-grAined object recognItion in high-Resolution RS imagery (FAIR1M) dataset. Experiments conducted on the object detection datasets large-scale Dataset for Object deTection in Aerial images (DOTA) and HRSC2016 prove that the proposed method is superior to the current state-of-the-art (SOTA) methods.
Record number: A2022-560 Author affiliation: not IGN Theme: IMAGERIE Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1109/TGRS.2022.3181686 Online publication date: 09/06/2022 Online: https://doi.org/10.1109/TGRS.2022.3181686 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101186
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 6 (June 2022) . - n° 5625217[article]
Line-based deep learning method for tree branch detection from digital images / Rodrigo L. S. Silva in International journal of applied Earth observation and geoinformation, vol 110 (June 2022)
[article]
Title: Line-based deep learning method for tree branch detection from digital images Document type: Article/Communication Authors: Rodrigo L. S. Silva, Author; José Marcato Junior, Author; Laisa Almeida, Author; et al., Author Publication year: 2022 Article pages: n° 102759 General note: bibliography Languages: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN Terms] deep learning
[IGN Terms] branch (tree)
[IGN Terms] classification by convolutional neural network
[IGN Terms] object detection
[IGN Terms] qualitative data
[IGN Terms] quantitative estimation
[IGN Terms] high resolution image
[IGN Terms] line (geometry)
[IGN Terms] Hough transform
Abstract: (author) Preventive maintenance of power lines, including cutting and pruning of tree branches, is essential to avoid interruptions in the energy supply. Automatic methods can support this risky task and also reduce the time it requires. Here, we propose a method in which the orientation and grasping positions of tree branches are estimated. The proposed method first predicts the straight line (representing the tree branch extension) using a convolutional neural network (CNN). Second, a Hough transform is applied to estimate the direction and position of the line. Finally, we estimate the grip point as the pixel with the highest probability of belonging to the line. We generated a dataset based on internet searches and annotated 1868 images covering challenging scenarios with different tree branch shapes, capture devices, and environmental conditions. Ten-fold cross-validation was adopted, with 90% for training and 10% for testing. We also assessed the method under corruptions (Gaussian and shot noise) with different severity levels. The experimental analysis showed the effectiveness of the proposed method, reporting an F1-score of 96.78%. Our method outperformed the state-of-the-art Deep Hough Transform (DHT) and Fully Convolutional Line Parsing (F-Clip). Record number: A2022-550 Author affiliation: not IGN Theme: FORET/IMAGERIE Nature: Article DOI: 10.1016/j.jag.2022.102759 Online publication date: 09/05/2022 Online: https://doi.org/10.1016/j.jag.2022.102759 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101153
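As a rough illustration of the Hough step described in the abstract (estimating a line's direction and position from CNN-predicted line pixels), a minimal Hough-style voting scheme can be sketched as follows. This is a generic sketch, not the paper's implementation; the function name and the coarse discretization are assumptions.

```python
import math

def dominant_line_angle(points, n_angles=180):
    """Minimal Hough-style vote: for each candidate normal angle
    theta, bin each point's signed distance rho = x*cos(theta) +
    y*sin(theta); the angle whose most popular rho bin collects
    the most votes approximates the line's normal direction."""
    best_angle, best_votes = 0.0, -1
    for k in range(n_angles):
        theta = math.pi * k / n_angles
        votes = {}
        for (x, y) in points:
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[rho] = votes.get(rho, 0) + 1
        top = max(votes.values())
        if top > best_votes:
            best_votes, best_angle = top, theta
    return best_angle

# Points on the horizontal line y = 3 vote for an angle close to
# pi/2 (the line's normal is vertical).
angle = dominant_line_angle([(x, 3) for x in range(20)])
```

In practice one would keep the full (rho, theta) accumulator to recover the line's position as well as its direction, as OpenCV's standard Hough line transform does.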
in International journal of applied Earth observation and geoinformation > vol 110 (June 2022) . - n° 102759[article]
Precise crop classification of hyperspectral images using multi-branch feature fusion and dilation-based MLP / Haibin Wu in Remote sensing, vol 14 n° 11 (June-1 2022)
[article]
Title: Precise crop classification of hyperspectral images using multi-branch feature fusion and dilation-based MLP Document type: Article/Communication Authors: Haibin Wu, Author; Huaming Zhou, Author; Aili Wang, Author; et al., Author Publication year: 2022 Article pages: n° 2713 General note: bibliography Languages: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN Terms] principal component analysis
[IGN Terms] classification by convolutional neural network
[IGN Terms] crops
[IGN Terms] feature extraction
[IGN Terms] hyperspectral image
[IGN Terms] multilayer perceptron
Abstract: (author) The precise classification of crop types using hyperspectral remote sensing imaging is an essential application in the field of agriculture, and is of significance for crop yield estimation and growth monitoring. Among deep learning methods, Convolutional Neural Networks (CNNs) are the premier model for hyperspectral image (HSI) classification owing to their outstanding local contextual modeling capability, which facilitates spatial and spectral feature extraction. Nevertheless, existing CNNs have a fixed shape and are limited to restricted receptive fields, which makes it difficult to model long-range dependencies. To tackle this challenge, this paper proposes two novel classification frameworks, both built from multilayer perceptrons (MLPs). First, we put forward a dilation-based MLP (DMLP) model, in which a dilated convolutional layer replaces the ordinary convolution of the MLP, enlarging the receptive field without losing resolution and keeping the relative spatial positions of pixels unchanged. Second, the paper proposes a network combining multi-branch residual blocks and DMLP for feature fusion after principal component analysis (PCA), called DMLPFFN, which makes full use of the multi-level feature information of the HSI. The proposed approaches are evaluated on two widely used hyperspectral datasets, Salinas and KSC, and two practical crop hyperspectral datasets, WHU-Hi-LongKou and WHU-Hi-HanChuan. Experimental results show that the proposed methods outperform several state-of-the-art methods, beating CNN by 6.81%, 12.45%, 4.38% and 8.84%, and ResNet by 4.48%, 7.74%, 3.53% and 6.39% on the Salinas, KSC, WHU-Hi-LongKou and WHU-Hi-HanChuan datasets, respectively. As a result of this study, it was confirmed that the proposed methods offer remarkable performance for precise hyperspectral crop classification.
Record number: A2022-539 Author affiliation: not IGN Theme: IMAGERIE Nature: Article DOI: 10.3390/rs14112713 Online publication date: 05/06/2022 Online: https://doi.org/10.3390/rs14112713 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101102
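The receptive-field enlargement that the DMLP abstract attributes to dilated convolution is easy to see in one dimension: inserting gaps between kernel taps widens the input span covered by each output without adding parameters. A minimal sketch, illustrative only and not the paper's 2D layer:

```python
def dilated_conv1d(signal, kernel, dilation):
    """1D convolution with gaps of `dilation` between kernel taps.
    A k-tap kernel with dilation d covers (k-1)*d + 1 input
    samples per output, versus k samples for an ordinary
    convolution with the same parameter count."""
    k = len(kernel)
    span = (k - 1) * dilation + 1   # receptive field of one output
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span + 1)]

# A 3-tap kernel with dilation 2 covers 5 input samples per output:
# y[0] = s[0] + s[2] + s[4], y[1] = s[1] + s[3] + s[5].
y = dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], 2)
```

With dilation 1 this reduces to an ordinary valid convolution; stacking layers with growing dilation grows the receptive field exponentially while the resolution of the output grid is preserved.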
in Remote sensing > vol 14 n° 11 (June-1 2022) . - n° 2713[article]
Deep learning for the detection of early signs for forest damage based on satellite imagery / Dennis Wittich in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
[article]
Title: Deep learning for the detection of early signs for forest damage based on satellite imagery Document type: Article/Communication Authors: Dennis Wittich, Author; Franz Rottensteiner, Author; Mirjana Voelsen, Author; Christian Heipke, Author; Sönke Müller, Author Publication year: 2022 Article pages: pp 307 - 315 General note: bibliography Languages: English (eng) Descriptor: [IGN subject headings] Remote sensing applications
[IGN Terms] deep learning
[IGN Terms] classification by convolutional neural network
[IGN Terms] flora degradation
[IGN Terms] forest damage caused by natural factors
[IGN Terms] loss function
[IGN Terms] Sentinel-MSI image
[IGN Terms] regression
[IGN Terms] time series
[IGN Terms] forest monitoring
Abstract: (author) We present an approach for detecting early signs of upcoming forest damage by training a Convolutional Neural Network (CNN) for the pixel-wise prediction of the remaining life-time (RLT) of trees in forests based on Sentinel-2 imagery. We focus on a scenario in which reference data are only available for a related task, namely a bi-temporal pixel-wise classification of forest degradation. This reference is used to train a CNN for the pixel-wise prediction of forest degradation. In this context, we propose a new sub-sampling-based approach for compensating the effects of a heavy class imbalance in the training data. Using the resulting classification model, we predict semi-labels for images of a Sentinel-2 time series, from which training data for a CNN designed to regress the RLT can be derived after some label cleansing. However, due to data gaps in the time series, e.g. caused by clouds, only intervals can be derived for the target variable to be regressed, and for some training pixels one of the interval limits may even be unknown. Consequently, we propose a new loss function for training a CNN to regress the RLT that only requires the known interval limits. The method is evaluated on a data set in Germany covering a time-span of 5 years. We show that the proposed sub-sampling strategy for dealing with strong label imbalance when training the classifier significantly reduces the training time compared to other approaches. We further show that our model predicts the RLT with a maximum error of two months for 80% of the forest pixels that die within one year of the acquisition date of the Sentinel-2 image.
Record number: A2022-432 Author affiliation: not IGN Theme: FORET/IMAGERIE/INFORMATIQUE Nature: Article DOI: 10.5194/isprs-annals-V-2-2022-307-2022 Online publication date: 17/05/2022 Online: https://doi.org/10.5194/isprs-annals-V-2-2022-307-2022 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100738
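The interval-based supervision described in the abstract (the RLT target is known only as interval limits, and one limit may be missing) can be sketched generically: the prediction incurs a penalty only when it falls outside the known limits. This is a hedged illustration of the general idea, not the authors' exact loss; the squared penalty and the function name are assumptions.

```python
def interval_loss(pred, lower=None, upper=None):
    """Penalize a prediction only when it lies outside the known
    interval limits of the target (squared-hinge style). A missing
    limit (None) contributes nothing, so samples where one bound is
    unknown, e.g. due to cloud gaps, can still be used."""
    loss = 0.0
    if lower is not None and pred < lower:
        loss += (lower - pred) ** 2
    if upper is not None and pred > upper:
        loss += (pred - upper) ** 2
    return loss

# A prediction inside the interval costs nothing; outside it, the
# loss grows with the distance to the violated limit.
inside = interval_loss(4.0, lower=3.0, upper=6.0)
below = interval_loss(2.0, lower=3.0, upper=6.0)
```

The gradient of such a loss is zero anywhere inside the interval, so the network is free to place its prediction at any value consistent with the observed limits.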
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-2-2022 (2022 edition) . - pp 307 - 315[article]
Railway lidar semantic segmentation with axially symmetrical convolutional learning / Antoine Manier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition) Permalink
Research on automatic identification method of terraces on the Loess plateau based on deep transfer learning / Mingge Yu in Remote sensing, vol 14 n° 10 (May-2 2022) Permalink
3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation / Heyang Thomas Li in The Visual Computer, vol 38 n° 5 (May 2022) Permalink
ChineseTR: A weakly supervised toponym recognition architecture based on automatic training data generator and deep neural network / Qinjun Qiu in Transactions in GIS, vol 26 n° 3 (May 2022) Permalink
A context feature enhancement network for building extraction from high-resolution remote sensing imagery / Jinzhi Chen in Remote sensing, vol 14 n° 9 (May-1 2022) Permalink
Efficient convolutional neural architecture search for LiDAR DSM classification / Aili Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022) Permalink
Revising cadastral data on land boundaries using deep learning in image-based mapping / Bujar Fetai in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022) Permalink
Unsupervised multi-view CNN for salient view selection and 3D interest point detection / Ran Song in International journal of computer vision, vol 130 n° 5 (May 2022) Permalink
Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data / Michele Dalponte in Remote sensing, vol 14 n° 8 (April-2 2022) Permalink
Assessing surface drainage conditions at the street and neighborhood scale: A computer vision and flow direction method applied to lidar data / Cheng-Chun Lee in Computers, Environment and Urban Systems, vol 93 (April 2022) Permalink