Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > extraction de traits caractéristiques
extraction de traits caractéristiques. Synonym(s): extraction des caractéristiques; extraction de primitive
Documents available in this category (632)
Foreground-aware refinement network for building extraction from remote sensing images / Zhang Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 11 (November 2022)
[article]
Title: Foreground-aware refinement network for building extraction from remote sensing images
Document type: Article/Communication
Authors: Zhang Yan; Wang Xiangyu; Zhang Zhongwei; et al.
Year of publication: 2022
Pages: pp 731 - 738
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse visuelle
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de régions
[Termes IGN] détection du bâti
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image RVB
[Termes IGN] jeu de données
Abstract (author): To extract buildings accurately, we propose a foreground-aware refinement network for building extraction. In particular, to reduce false positives, we design a foreground-aware module using an attention gate block, which effectively suppresses non-building features and enhances the model's sensitivity to buildings. In addition, we introduce a reverse attention mechanism in the detail refinement module. Specifically, this module guides the network to supplement the missing details of buildings by erasing the currently predicted building regions, achieving more accurate and complete building extraction. To further optimize the network, we design a hybrid loss that combines BCE loss and SSIM loss to supervise network learning at both the pixel and structure levels. Experimental results demonstrate the superiority of our network over state-of-the-art methods in terms of both quantitative metrics and visual quality.
Record number: A2022-842
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.21-00081R2
Online publication date: 01/11/2022
Online: https://doi.org/10.14358/PERS.21-00081R2
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102055
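The hybrid loss this abstract describes (BCE plus SSIM, supervising both pixel and structure levels) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the global (unwindowed) SSIM, the weight `alpha`, and all function names are assumptions.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Pixel-level binary cross-entropy between predicted and true masks."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def ssim_loss(pred, target, c1=0.01**2, c2=0.03**2):
    """Structure-level loss: 1 - SSIM, computed globally for simplicity
    (a windowed SSIM is the more usual choice)."""
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p**2 + mu_t**2 + c1) * (var_p + var_t + c2))
    return float(1 - ssim)

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted combination supervising pixel and structure levels together."""
    return alpha * bce_loss(pred, target) + (1 - alpha) * ssim_loss(pred, target)
```

A prediction close to the ground-truth mask scores lower than an inverted one under both terms, which is the behaviour the combined supervision relies on.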
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 11 (November 2022) . - pp 731 - 738 [article]
GA-Net: A geometry prior assisted neural network for road extraction / Xin Chen in International journal of applied Earth observation and geoinformation, vol 114 (November 2022)
[article]
Title: GA-Net: A geometry prior assisted neural network for road extraction
Document type: Article/Communication
Authors: Xin Chen; Qun Sun; Wenyue Guo; et al.
Year of publication: 2022
Pages: n° 103004
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de contours
[Termes IGN] données multiéchelles
[Termes IGN] extraction automatique
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] extraction du réseau routier
[Termes IGN] jeu de données
[Termes IGN] Massachusetts (Etats-Unis)
Abstract (author): With geospatial intelligence research developing rapidly, automatic road extraction has become a fundamental and challenging task. Because of the special geometric structure and spectral information of road networks, existing methods suffer from incomplete and fractured results. In this work, a novel road extraction convolutional neural network, incorporating road boundary details and road junction information via a dual-branch multi-task structure, is proposed to learn synergistic feature representations and strengthen road connectivity. Firstly, a BiFPN-based feature aggregation module is utilised to bridge the semantic gap between low-level and high-level feature maps, allowing multi-scale spatial details to be fully fused. Secondly, the boundary auxiliary branch, using a U-shaped network with a spatial-channel attention module, captures residential information for the backbone to enhance the subtleties of road edges. Thirdly, the node inferring branch models the road junction positions jointly with the road surface, aiming to strengthen the topology structure and reduce fragmented road segments. We perform experiments on three diverse road datasets: the DeepGlobe dataset, the Massachusetts dataset, and the SpaceNet dataset. The results demonstrate that our model shows an overall performance improvement over several state-of-the-art algorithms, and the IoU indicator achieves 3.86%, 0.79%, and 1.71% improvements over U-Net on the three datasets, respectively.
Record number: A2022-785
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.jag.2022.103004
Online: https://doi.org/10.1016/j.jag.2022.103004
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101888
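The IoU gains quoted in the GA-Net abstract refer to the standard intersection-over-union metric for binary road masks; a minimal sketch (not the authors' evaluation code; the empty-union convention is an assumption):

```python
import numpy as np

def road_iou(pred_mask, gt_mask):
    """Intersection-over-union between two binary road masks.

    Both inputs are 0/1 arrays of the same shape; returns a value in
    [0, 1] (1.0 when both masks are empty, by convention).
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return inter / union
```

A reported "3.86% improvement" then means the difference between this score for GA-Net's masks and for U-Net's masks on the same ground truth.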
in International journal of applied Earth observation and geoinformation > vol 114 (November 2022) . - n° 103004 [article]
Graph-based leaf–wood separation method for individual trees using terrestrial lidar point clouds / Zhilin Tian in IEEE Transactions on geoscience and remote sensing, vol 60 n° 11 (November 2022)
[article]
Title: Graph-based leaf–wood separation method for individual trees using terrestrial lidar point clouds
Document type: Article/Communication
Authors: Zhilin Tian; Shihua Li
Year of publication: 2022
Pages: n° 5705111
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] bois
[Termes IGN] branche (arbre)
[Termes IGN] chemin le plus court, algorithme du
[Termes IGN] données lidar
[Termes IGN] échantillonnage de données
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] feuille (végétation)
[Termes IGN] graphe
[Termes IGN] Python (langage de programmation)
[Termes IGN] segmentation
[Termes IGN] semis de points
Abstract (author): Terrestrial light detection and ranging (lidar) is capable of resolving trees at the branch/leaf level with accurate and dense point clouds. The separation of leaf and wood components is a prerequisite for estimating branch/leaf-scale biophysical properties and for realistic tree model reconstruction. Most existing methods have been tested on trees with similar structures; their robustness for trees of different species and sizes remains relatively unexplored. This study proposes a new graph-based leaf–wood separation (GBS) method for individual trees that uses only the xyz information of the point cloud. The GBS method fully exploits shortest-path-based features, as the shortest path can effectively reflect the structure of trees of different species and sizes. Ten types of tree data (covering tropical, temperate, and boreal species), with heights ranging from 5.4 to 43.7 m, were used to test the method's performance. The mean accuracy and kappa coefficient at the point level were 94% and 0.78, respectively, and our method outperformed two other state-of-the-art methods. Through further analysis and testing, the GBS method exhibited a strong ability to detect small and leaf-surrounded branches, and was also sufficiently robust to data subsampling. Our research further demonstrates the potential of shortest-path-based features in leaf–wood separation. The entire framework is provided as an open-source Python package, along with our labeled validation data.
Record number: A2022-853
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3218603
Online publication date: 01/11/2022
Online: https://doi.org/10.1109/TGRS.2022.3218603
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102099
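The shortest-path features at the core of the GBS abstract can be sketched as geodesic distances over a neighbourhood graph built from the raw xyz points. The sketch below (brute-force k-NN graph plus Dijkstra, with `k`, the root choice, and all names as assumptions) illustrates the idea, not the authors' package.

```python
import heapq
import numpy as np

def knn_graph(points, k=5):
    """Build a symmetric k-nearest-neighbour graph over 3D points.
    Returns an adjacency list {i: [(j, dist), ...]} (brute force, for
    illustration; a KD-tree would be used at real point-cloud scale)."""
    n = len(points)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:       # skip the point itself
            adj[i].append((int(j), float(d[j])))
            adj[int(j)].append((i, float(d[j])))
    return adj

def shortest_path_feature(points, root=0, k=5):
    """Per-point geodesic distance from the root (e.g. the trunk base),
    computed with Dijkstra on the k-NN graph; wood points tend to lie on
    short, shared paths while leaf points branch off them."""
    adj = knn_graph(points, k)
    dist = np.full(len(points), np.inf)
    dist[root] = 0.0
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

On a straight chain of points the geodesic distance reduces to the cumulative spacing, which makes the feature easy to sanity-check.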
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 11 (November 2022) . - n° 5705111 [article]
Improving image segmentation with boundary patch refinement / Xiaolin Hu in International journal of computer vision, vol 130 n° 11 (November 2022)
[article]
Title: Improving image segmentation with boundary patch refinement
Document type: Article/Communication
Authors: Xiaolin Hu; Chufeng Tang; Hang Chen; et al.
Year of publication: 2022
Pages: pp 2571 - 2589
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] contour
[Termes IGN] détection de contours
[Termes IGN] distance euclidienne
[Termes IGN] masque
[Termes IGN] segmentation d'image
[Termes IGN] segmentation fondée sur les contours
[Termes IGN] segmentation sémantique
Abstract (author): Tremendous efforts have been made on image segmentation, but mask quality is still not satisfactory. The boundaries of predicted masks are usually imprecise, owing to the low spatial resolution of feature maps and to the imbalance caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework, termed BPR, to improve the boundary quality of the predictions of any image segmentation model. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted boundaries. The refinement is performed by a boundary patch refinement network at a higher resolution. The trained BPR model can also be easily transferred to refine the results of other models. Extensive experiments show that the proposed BPR framework yields significant improvements on semantic, instance, and panoptic segmentation tasks over a variety of baselines on the Cityscapes dataset.
Record number: A2022-741
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s11263-022-01662-0
Online publication date: 12/08/2022
Online: https://doi.org/10.1007/s11263-022-01662-0
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101719
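The patch-extraction step the BPR abstract describes (cropping small windows along the predicted mask boundary before refining them at higher resolution) can be sketched as below. The 4-neighbour boundary test, window size, and stride-grid deduplication are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def boundary_pixels(mask):
    """Pixels of a binary mask that touch at least one background
    4-neighbour, i.e. the predicted boundary."""
    m = mask.astype(bool)
    pad = np.pad(m, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)

def extract_boundary_patches(image, mask, size=8, stride=8):
    """Crop size x size windows around boundary pixels, keeping at most
    one window per stride-grid cell; each patch would then be refined
    at higher resolution by the refinement network."""
    h, w = mask.shape
    half = size // 2
    seen, patches = set(), []
    for r, c in boundary_pixels(mask):
        key = (r // stride, c // stride)   # one patch per grid cell
        if key in seen:
            continue
        seen.add(key)
        r0 = min(max(r - half, 0), h - size)   # clamp to image bounds
        c0 = min(max(c - half, 0), w - size)
        patches.append(image[r0:r0 + size, c0:c0 + size])
    return patches
```

Because only boundary windows are refined, the expensive high-resolution pass touches a small fraction of the image, which is what makes the post-processing framework practical.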
in International journal of computer vision > vol 130 n° 11 (November 2022) . - pp 2571 - 2589 [article]
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Title: A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds
Document type: Article/Communication
Authors: Lina Fang; Zhilong You; Guixi Shen; et al.
Year of publication: 2022
Pages: pp 115 - 136
General note: bibliography
Language: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification orientée objet
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion d'images
[Termes IGN] image captée par drone
[Termes IGN] reconnaissance d'objets
[Termes IGN] route
[Termes IGN] scène urbaine
[Termes IGN] semis de points
Abstract (author): Urban management and survey departments have begun investigating the feasibility of acquiring data from various laser scanning systems for urban infrastructure measurement and assessment. Roadside objects such as cars, trees, traffic poles, pedestrians, bicycles and e-bicycles carry the static and dynamic urban information available for acquisition. Because of the unstructured nature of 3D point clouds, the rich variety of targets in complex road scenes, and the varying scales of roadside objects, finely classifying these objects from various point clouds is a challenging task. In this paper, we integrate two representations of roadside objects, point clouds and multiview images, in a point-group-view network named PGVNet that classifies roadside objects into cars, trees, traffic poles, and small objects (pedestrians, bicycles and e-bicycles) from generalized point clouds. To exploit the topological information of the point clouds, we propose a graph attention convolution operation, called AtEdgeConv, to mine the relationships among local points and to extract local geometric features. In addition, we employ a hierarchical view-group-object architecture to reduce the redundant information between similar views and to obtain salient view-wise global features. To fuse the local geometric features from the point clouds with the global features from the multiview images, we stack an attention-guided fusion network in PGVNet. In particular, we quantify and leverage the global features as an attention mask to capture the intrinsic correlation and discriminability of the local geometric features, which helps in recognizing different roadside objects with similar shapes. To verify the effectiveness and generalization of our methods, we conduct extensive experiments on six test datasets of different urban scenes, captured by different laser scanning systems, including mobile laser scanning (MLS), unmanned aerial vehicle (UAV)-based laser scanning (ULS) and backpack laser scanning (BLS) systems. Experimental results, and comparisons with state-of-the-art methods, demonstrate that the PGVNet model is able to effectively identify cars, trees, traffic poles and small objects from generalized point clouds, and achieves promising performance on roadside object classification, with an overall accuracy of 95.76%. Our code is released at https://github.com/flidarcode/PGVNet.
Record number: A2022-756
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.022
Online publication date: 22/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101759
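The attention-guided fusion the PGVNet abstract describes (global view features recast as an attention mask over local geometric features) can be sketched as follows. The projection matrix, feature dimensions, and concatenation step are illustrative assumptions standing in for learned layers, not the authors' implementation (see their repository for the real one).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_guided_fusion(local_feats, global_feat, w_att):
    """Fuse per-point local features (n x d) with a single global view
    feature (g,) by turning the global feature into a d-dimensional
    channel attention mask that re-weights the local features.

    w_att: (g x d) projection from global feature to channel attention,
    an illustrative stand-in for a learned layer.
    """
    att = sigmoid(global_feat @ w_att)      # (d,) channel mask in (0, 1)
    attended = local_feats * att            # re-weighted local features
    # Concatenate the attended local features with the broadcast global
    # feature, a common fusion pattern for point/view networks.
    n = local_feats.shape[0]
    return np.concatenate([attended, np.tile(global_feat, (n, 1))], axis=1)
```

The mask lets the global, view-derived evidence decide which local geometric channels to emphasize, which is how the abstract explains the network tells apart objects with similar shapes.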
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022) . - pp 115 - 136 [article]
Point2Roof: End-to-end 3D building roof modeling from airborne LiDAR point clouds / Li Li in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
A robust edge detection algorithm based on feature-based image registration (FBIR) using improved canny with fuzzy logic (ICWFL) / Anchal Kumawat in The Visual Computer, vol 38 n° 11 (November 2022)
Application of a graph convolutional network with visual and semantic features to classify urban scenes / Yongyang Xu in International journal of geographical information science IJGIS, vol 36 n° 10 (October 2022)
Incremental road network update method with trajectory data and UAV remote sensing imagery / Jianxin Qin in ISPRS International journal of geo-information, vol 11 n° 10 (October 2022)
Investigation of recognition and classification of forest fires based on fusion color and textural features of images / Cong Li in Forests, vol 13 n° 10 (October 2022)
Novel algorithm based on geometric characteristics for tree branch skeleton extraction from LiDAR point cloud / Jie Yang in Forests, vol 13 n° 10 (October 2022)
Semi-supervised adversarial recognition of refined window structures for inverse procedural façade modelling / Han Hu in ISPRS Journal of photogrammetry and remote sensing, vol 192 (October 2022)
Comparison of deep neural networks in detecting field grapevine diseases using transfer learning / Antonios Morellos in Remote sensing, vol 14 n° 18 (September-2 2022)
A boundary-based ground-point filtering method for photogrammetric point-cloud data / Seyed Mohammad Ayazi in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 9 (September 2022)
Deep learning method for Chinese multisource point of interest matching / Pengpeng Li in Computers, Environment and Urban Systems, vol 96 (September 2022)