Descriptor
Termes IGN > mathématiques > statistique mathématique > analyse de données > classification > classification par réseau neuronal > classification par réseau neuronal convolutif
classification par réseau neuronal convolutif
Documents available in this category (215)
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title: 3D browsing of wide-angle fisheye images under view-dependent perspective correction
Document type: Article/Communication
Authors: Mingyi Huang; Jun Wu; Zhiyong Peng; et al.
Publication year: 2022
Article pages: pp 185-207
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] correction d'image
[Termes IGN] distorsion d'image
[Termes IGN] étalonnage d'instrument
[Termes IGN] image hémisphérique
[Termes IGN] objectif très grand angulaire
[Termes IGN] panorama sphérique
[Termes IGN] perspective
[Termes IGN] processeur graphique
[Termes IGN] projection orthogonale
[Termes IGN] projection perspective
Abstract: (author) This paper presents a novel technique for 3D browsing of wide-angle fisheye images using view-dependent perspective correction (VDPC). First, the fisheye imaging model with interior orientation parameters (IOPs) is established. Thereafter, a VDPC model for wide-angle fisheye images is proposed that adaptively selects correction planes for different areas of the image format. Finally, the wide-angle fisheye image is re-projected to obtain the visual effect of browsing in hemispherical space, using the VDPC model and the IOPs of the fisheye camera calibrated with the ideal projection ellipse constraint. The proposed technique is tested on several internet images with unknown IOPs. Results show that the proposed VDPC model achieves a more uniform perspective correction of fisheye images across different areas and preserves detailed information with greater flexibility than the traditional perspective projection conversion (PPC) technique. The proposed algorithm generates a corrected image of 512 × 512 pixels at 58 fps when run on a pure central processing unit (CPU). With an ordinary graphics processing unit (GPU), a corrected image of 1024 × 1024 pixels can be generated at 60 fps. Smooth 3D visualisation of a fisheye image can therefore be realised on a computer using the proposed algorithm, which may benefit applications such as panorama surveillance and robot navigation.
Record number: A2022-518
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12410
Online publication date: 10/05/2022
Online: https://doi.org/10.1111/phor.12410
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101068
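The re-projection idea summarised in the abstract above can be illustrated with a minimal sketch. This is not the authors' VDPC model: it assumes a simple equidistant fisheye (r = f·θ) whose optical axis coincides with the output view's centre, and the function and parameter names (`fisheye_to_perspective`, `f_fish`, `fov_deg`) are made up for illustration.

```python
import numpy as np

def fisheye_to_perspective(img, f_fish, fov_deg=60, out_size=512):
    """Re-project an equidistant fisheye image onto a flat perspective plane.

    Assumes the equidistant model r = f * theta, with the fisheye optical
    axis aligned with the output view centre (a simplification of
    view-dependent correction).
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # focal length of the virtual perspective camera for the requested FOV
    f_persp = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    # pixel grid of the output perspective image
    u, v = np.meshgrid(np.arange(out_size), np.arange(out_size))
    x = (u - out_size / 2.0) / f_persp
    y = (v - out_size / 2.0) / f_persp
    # angle between each viewing ray and the optical axis
    theta = np.arctan(np.sqrt(x**2 + y**2))
    phi = np.arctan2(y, x)
    # equidistant model: radial distance on the fisheye image is f * theta
    r = f_fish * theta
    src_x = (cx + r * np.cos(phi)).astype(int).clip(0, w - 1)
    src_y = (cy + r * np.sin(phi)).astype(int).clip(0, h - 1)
    return img[src_y, src_x]
```

A nearest-neighbour lookup is used for brevity; a real implementation would interpolate and, as in the paper, run the per-pixel mapping on the GPU.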
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 185 - 207 [article]

Beyond single receptive field: A receptive field fusion-and-stratification network for airborne laser scanning point cloud classification / Yongqiang Mao in ISPRS Journal of photogrammetry and remote sensing, vol 188 (June 2022)
[article]
Title: Beyond single receptive field: A receptive field fusion-and-stratification network for airborne laser scanning point cloud classification
Document type: Article/Communication
Authors: Yongqiang Mao; Kaiqiang Chen; Wenhui Diao; et al.
Publication year: 2022
Article pages: pp 45-61
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage automatique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] données laser
[Termes IGN] données localisées 3D
[Termes IGN] Perceptron multicouche
[Termes IGN] représentation parcimonieuse
[Termes IGN] réseau neuronal de graphes
[Termes IGN] semis de points
[Termes IGN] stratification de données
[Termes IGN] voxel
Abstract: (author) The classification of airborne laser scanning (ALS) point clouds is a critical task in the remote sensing and photogrammetry fields. Although recent deep learning-based methods have achieved satisfactory performance, they have ignored the unicity of the receptive field, so that ALS point cloud classification remains challenging in areas with complex structures and extreme scale variations. In this article, with the objective of configuring multi-receptive-field features, we propose a novel receptive field fusion-and-stratification network (RFFS-Net). With a novel dilated graph convolution (DGConv) and its extension, annular dilated convolution (ADConv), as basic building blocks, the receptive field fusion process is implemented with the dilated and annular graph fusion (DAGFusion) module, which obtains multi-receptive-field feature representations by capturing dilated and annular graphs with various receptive regions. The stratification of the receptive fields, with point sets of different resolutions as the calculation bases, is performed with multi-level decoders nested in RFFS-Net and driven by the multi-level receptive field aggregation loss (MRFALoss), so that the network learns in the direction of the supervision labels at different resolutions. With receptive field fusion-and-stratification, RFFS-Net is more adaptable to the classification of regions with complex structures and extreme scale variations in large-scale ALS point clouds. Evaluated on the ISPRS Vaihingen 3D dataset, RFFS-Net significantly outperforms the baseline approach (PointConv) by 5.3% on mF1 and 5.4% on mIoU, achieving an overall accuracy of 82.1%, an mF1 of 71.6%, and an mIoU of 58.2%. The experiments show that RFFS-Net achieves new state-of-the-art classification performance on the powerline, car, and fence classes. Furthermore, experiments on the LASDU dataset and the 2019 IEEE-GRSS Data Fusion Contest dataset show that RFFS-Net achieves new state-of-the-art classification performance. The code is available at github.com/WingkeungM/RFFS-Net.
Record number: A2022-273
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.019
Online publication date: 07/04/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.019
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100532
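The dilation idea behind the DGConv block described in the abstract above can be sketched as a dilated k-nearest-neighbour graph: keep every d-th of the k·d nearest neighbours, so each point sees a wider region without more edges. This is an illustrative sketch, not the paper's implementation; the function name `dilated_knn` is an assumption.

```python
import numpy as np

def dilated_knn(points, k=4, dilation=2):
    """Dilated k-NN neighbourhood for graph convolution on point clouds.

    For each point, take every `dilation`-th of its k * dilation nearest
    neighbours, enlarging the receptive field at constant neighbour count.
    """
    # pairwise Euclidean distances (n x n); fine for a small sketch
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    order = np.argsort(d, axis=1)[:, 1:]       # drop self (column 0)
    return order[:, :k * dilation:dilation]    # every dilation-th neighbour
```

With dilation=1 this reduces to a plain k-NN graph; larger dilations approximate the "annular" neighbourhoods the DAGFusion module aggregates over.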
in ISPRS Journal of photogrammetry and remote sensing > vol 188 (June 2022) . - pp 45 - 61 [article]
Copies (3)
Barcode      Call number  Type     Location                 Section          Availability
081-2022061  SL           Journal  Centre de documentation  Revues en salle  Available
081-2022063  DEP-RECP     Journal  LASTIG                   Dépôt en unité   Not for loan
081-2022062  DEP-RECF     Journal  Nancy                    Dépôt en unité   Not for loan

Detecting interchanges in road networks using a graph convolutional network approach / Min Yang in International journal of geographical information science IJGIS, vol 36 n° 6 (June 2022)
[article]
Title: Detecting interchanges in road networks using a graph convolutional network approach
Document type: Article/Communication
Authors: Min Yang; Chenjun Jiang; Xiongfeng Yan; et al.
Publication year: 2022
Article pages: pp 1119-1139
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Analyse spatiale
[Termes IGN] analyse de groupement
[Termes IGN] analyse vectorielle
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] classification semi-dirigée
[Termes IGN] détection d'objet
[Termes IGN] échangeur routier
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] modélisation
[Termes IGN] noeud
[Termes IGN] Pékin (Chine)
[Termes IGN] réseau neuronal de graphes
[Termes IGN] réseau routier
[Termes IGN] Wuhan (Chine)
Abstract: (author) Detecting interchanges in road networks benefits many applications, such as vehicle navigation and map generalization. Traditional approaches use manually defined rules based on geometric properties, topological properties, or both, and thus struggle with structurally complex interchanges. To overcome this drawback, we propose a graph-based deep learning approach for interchange detection. First, we model the road network as a graph in which the nodes represent road segments and the edges represent their connections. The proposed approach computes the shape measures and contextual properties of individual road segments as features characterizing the associated nodes in the graph. Next, a semi-supervised approach uses these features and a limited number of labeled interchanges to train a graph convolutional network that classifies road segments into interchange and non-interchange segments. Finally, an adaptive clustering approach groups the detected interchange segments into interchanges. Our experiments with the road networks of Beijing and Wuhan achieved a classification accuracy >95% at a label rate of 10%. Moreover, the interchange detection precision and recall were 79.6% and 75.7% on the Beijing dataset and 80.6% and 74.8% on the Wuhan dataset, respectively, which were 18.3–36.1 and 17.4–19.4% higher than those of the existing approaches based on characteristic node clustering.
Record number: A2022-404
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.2024195
Online publication date: 11/03/2022
Online: https://doi.org/10.1080/13658816.2021.2024195
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100716
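The node-classification step described in the abstract above rests on standard graph-convolution propagation. Below is a minimal single-layer sketch in the Kipf-and-Welling style (symmetrically normalised adjacency with self-loops, then a linear map and ReLU); it illustrates the general mechanism, not the paper's exact network.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution propagation step over a road-segment graph.

    adj:      (n, n) binary adjacency of road segments (nodes).
    features: (n, f) node features (e.g. shape measures, context).
    weight:   (f, h) learnable linear map.
    """
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU activation
```

Stacking two such layers and ending with a softmax over the two classes (interchange / non-interchange) gives the semi-supervised classifier shape the abstract describes.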
in International journal of geographical information science IJGIS > vol 36 n° 6 (June 2022) . - pp 1119 - 1139 [article]

Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)
[article]
Title: Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area
Document type: Article/Communication
Authors: Siming Yin; Xian Guo; Jie Jiang
Publication year: 2022
Article pages: n° 326
General note: abstract
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image Streetview
[Termes IGN] paysage urbain
[Termes IGN] Pékin (Chine)
[Termes IGN] segmentation sémantique
[Termes IGN] site historique
Abstract: (author) Accurate extraction of urban landscape features in the historic districts of China is an essential task for the protection of cultural and historical heritage. In recent years, deep learning (DL)-based methods have made substantial progress in landscape feature extraction. However, the lack of annotated data and the complex scenes inside alleyways limit the performance of the available DL-based methods when extracting landscape features. To deal with this problem, we built a small yet comprehensive history-core street view (HCSV) dataset and propose a polarized attention-based landscape feature segmentation network (PALESNet) in this article. The polarized self-attention block is employed in PALESNet to discriminate each landscape feature in various situations, whereas the atrous spatial pyramid pooling (ASPP) block is utilized to capture multi-scale features. As an auxiliary, a transfer learning module was introduced to supplement the knowledge of the network, overcoming the shortage of labeled data and improving its learning capability in historic districts. Compared to other state-of-the-art methods, our network achieved the highest accuracy in the case study of the Beijing Core Area, with an mIoU of 63.7% on the HCSV dataset, and thus can provide sufficient and accurate data for further protection and renewal in Chinese historic districts.
Record number: A2022-410
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi11060326
Online publication date: 28/05/2022
Online: https://doi.org/10.3390/ijgi11060326
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100760
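The ASPP block mentioned in the abstract above captures multi-scale context by running convolutions at several dilation (atrous) rates in parallel. A minimal 1-D sketch of the idea follows; it is illustrative only (the names `dilated_conv1d` and `aspp_1d` are assumptions, and real ASPP operates on 2-D feature maps with learned kernels).

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D atrous (dilated) convolution for an odd-length kernel.

    Kernel taps are spaced `dilation` apart, enlarging the receptive field
    without adding parameters; zero padding keeps the output length.
    """
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    return np.array([sum(kernel[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

def aspp_1d(x, kernel, rates=(1, 2, 4)):
    """Stack responses at several dilation rates: multi-scale features."""
    return np.stack([dilated_conv1d(x, kernel, r) for r in rates])
```

The stacked responses would then be fused (e.g. by a 1×1 convolution), which is the role ASPP plays inside segmentation networks such as PALESNet.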
in ISPRS International journal of geo-information > vol 11 n° 6 (June 2022) . - n° 326 [article]

Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
[article]
Title: Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images
Document type: Article/Communication
Authors: Hanwen Xu; Xinming Tang; Bo Ai; et al.
Publication year: 2022
Article pages: n° 4411915
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] architecture de réseau
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] entropie
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image à très haute résolution
[Termes IGN] segmentation multi-échelle
[Termes IGN] segmentation sémantique
Abstract: (author) Very-high-resolution (VHR) remote sensing images contain objects at various scales, such as large buildings and small cars. However, these multiscale objects cannot be considered simultaneously in the widely used backbones with a large downsampling factor (e.g., VGG-like and ResNet-like), which has led to various context aggregation approaches, such as fusing low-level features and attention-based modules. To alleviate this problem, we propose a feature-selection high-resolution network (FSHRNet) based on an observation: if the features maintain high resolution throughout the network, a high-precision segmentation result can be obtained using only a 1×1 convolution layer, with no need for complex context aggregation modules. Specifically, the backbone of FSHRNet is a multibranch structure similar to HRNet, where the high-resolution branch is the principal line. Then, a lightweight dynamic-weight module, named the feature-selection convolution (FSConv) layer, is presented to fuse multiresolution features, allowing adaptive feature selection based on the characteristics of objects. Finally, a specially designed 1×1 convolution layer derived from hypersphere embedding is used to produce the segmentation result. Experiments against other widely used methods show that the proposed FSHRNet obtains competitive performance on the ISPRS Vaihingen dataset, the ISPRS Potsdam dataset, and the iSAID dataset.
Record number: A2022-559
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3183144
Online: https://doi.org/10.1109/TGRS.2022.3183144
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101184
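The dynamic-weight fusion described in the abstract above can be sketched as softmax gating over parallel feature branches. This is only the general idea, not the paper's FSConv layer; the function name `feature_selection_fuse` and its argument shapes are assumptions.

```python
import numpy as np

def feature_selection_fuse(branches, gate_logits):
    """Fuse multi-resolution feature maps with per-branch softmax gates.

    branches:    (B, H, W, C) stack of B feature maps at a common resolution.
    gate_logits: (B,) or (B, H, W) dynamic gate scores, one per branch.
    """
    # numerically stable softmax over the branch axis
    g = np.exp(gate_logits - gate_logits.max(axis=0, keepdims=True))
    g = g / g.sum(axis=0, keepdims=True)
    # broadcast gates over any remaining spatial/channel dimensions
    while g.ndim < branches.ndim:
        g = g[..., None]
    return (g * branches).sum(axis=0)  # gated weighted sum of branches
```

In a real network the gate logits would themselves be predicted from the features, which is what makes the fusion "dynamic" rather than a fixed average.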
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 6 (June 2022) . - n° 4411915 [article]

Invariant structure representation for remote sensing object detection based on graph modeling / Zicong Zhu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
Line-based deep learning method for tree branch detection from digital images / Rodrigo L. S. Silva in International journal of applied Earth observation and geoinformation, vol 110 (June 2022)
Precise crop classification of hyperspectral images using multi-branch feature fusion and dilation-based MLP / Haibin Wu in Remote sensing, vol 14 n° 11 (June-1 2022)
Deep learning for the detection of early signs for forest damage based on satellite imagery / Dennis Wittich in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
Railway lidar semantic segmentation with axially symmetrical convolutional learning / Antoine Manier in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
Research on automatic identification method of terraces on the Loess plateau based on deep transfer learning / Mingge Yu in Remote sensing, vol 14 n° 10 (May-2 2022)
3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation / Heyang Thomas Li in The Visual Computer, vol 38 n° 5 (May 2022)
A context feature enhancement network for building extraction from high-resolution remote sensing imagery / Jinzhi Chen in Remote sensing, vol 14 n° 9 (May-1 2022)
Efficient convolutional neural architecture search for LiDAR DSM classification / Aili Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 5 (May 2022)
Revising cadastral data on land boundaries using deep learning in image-based mapping / Bujar Fetai in ISPRS International journal of geo-information, vol 11 n° 5 (May 2022)