Descriptor
IGN terms > mathematics > mathematical statistics > data analysis > segmentation > semantic segmentation
semantic segmentation. Synonym(s): semantic labelling; data labelling
Documents available in this category (58)
Semantic segmentation of bridge components and road infrastructure from mobile LiDAR data / Yi-Chun Lin in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 6 (December 2022)
[article]
Title: Semantic segmentation of bridge components and road infrastructure from mobile LiDAR data Document type: Article/Communication Authors: Yi-Chun Lin, Author; Ayman Habib, Author Publication year: 2022 Article pages: no. 100023 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Lasergrammetry
[IGN terms] deep learning
[IGN terms] motorway
[IGN terms] GNSS-INS integration
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] mobile lidar
[IGN terms] bridge
[IGN terms] graph neural network
[IGN terms] road network
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) Emerging mobile LiDAR mapping systems exhibit great potential as an alternative for mapping urban environments. Such systems can acquire high-quality, dense point clouds that capture detailed information over an area of interest through efficient field surveys. However, automatically recognizing and semantically segmenting different components from the point clouds with efficiency and high accuracy remains a challenge. Towards this end, this study proposes a semantic segmentation framework to simultaneously classify bridge components and road infrastructure using mobile LiDAR point clouds while providing the following contributions: 1) a deep learning approach exploiting graph convolutions is adopted for point cloud semantic segmentation; 2) cross-labeling and transfer learning techniques are developed to reduce the need for manual annotation; and 3) geometric quality control strategies are proposed to refine the semantic segmentation results. The proposed framework is evaluated using data from two mobile mapping systems along an interstate highway with 27 highway bridges. With the help of the proposed cross-labeling and transfer learning strategies, the deep learning model achieves an overall accuracy of 84% using limited training data. Moreover, the effectiveness of the proposed framework is verified through tests covering approximately 42 miles of the interstate highway, where substantial improvement after quality control can be observed. Record number: A2022-814 Authors' affiliation: non-IGN Theme: IMAGERY/COMPUTING Nature: Article DOI: 10.1016/j.ophoto.2022.100023 Online publication date: 24/10/2022 Online: https://doi.org/10.1016/j.ophoto.2022.100023 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101975
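The abstract above mentions a deep learning approach exploiting graph convolutions on point clouds. As an illustration only, not the authors' network, here is a minimal numpy sketch of one EdgeConv-style graph-convolution step, with a random linear layer standing in for the learned MLP:

```python
import numpy as np

def edge_conv(points, feats, k=4, rng=None):
    """One EdgeConv-style graph-convolution step over a point cloud.

    Builds a kNN graph, applies a shared linear map to [x_i, x_j - x_i]
    for each edge, and max-pools over each point's neighbourhood.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = feats.shape
    # kNN graph from pairwise Euclidean distances (fine for small n)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)           # exclude self-loops
    nbrs = np.argsort(dist, axis=1)[:, :k]   # (n, k) neighbour indices
    w = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)  # stand-in for a learned MLP
    x_i = np.repeat(feats[:, None, :], k, axis=1)         # (n, k, d) centre features
    x_j = feats[nbrs]                                     # (n, k, d) neighbour features
    edge = np.concatenate([x_i, x_j - x_i], axis=-1)      # (n, k, 2d) edge features
    return np.maximum(edge @ w, 0.0).max(axis=1)          # ReLU, then max over neighbours

points = np.random.default_rng(1).standard_normal((32, 3))
h = edge_conv(points, points.copy(), k=4)
print(h.shape)  # (32, 3)
```

Stacking such layers and ending with a per-point classifier is the usual shape of a graph-convolutional segmentation network; the edge feature x_j - x_i makes the layer translation-invariant.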
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 6 (December 2022) . - no. 100023 [article]
Improving deep learning on point cloud by maximizing mutual information across layers / Di Wang in Pattern recognition, vol 131 (November 2022)
[article]
Title: Improving deep learning on point cloud by maximizing mutual information across layers Document type: Article/Communication Authors: Di Wang, Author; Lulu Tang, Author; Xu Wang, Author; et al. Publication year: 2022 Article pages: no. 108892 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] Shannon entropy
[IGN terms] semantic information
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] geometric transformation
[IGN terms] computer vision
[IGN terms] 3D visualisation
Abstract: (author) Enhancing the perception capability of point cloud learning networks is a fundamental and vital task in 3D machine vision applications. Most existing methods use feature fusion and geometric transformation to improve point cloud learning, without paying enough attention to mining further intrinsic information across multiple network layers. Motivated by the goal of improving consistency between hierarchical features and strengthening the perception capability of the point cloud network, we explore whether maximizing the mutual information (MI) across shallow and deep layers is beneficial for representation learning on point clouds. A novel Maximizing Mutual Information (MMI) module is proposed, which assists the training of the main network in capturing discriminative features of the input point clouds. Specifically, an MI-based loss function constrains the differences in semantic information between two hierarchical features extracted from the shallow and deep layers of the network. Extensive experiments show that our method is generally applicable to point cloud tasks, including classification, shape retrieval, indoor scene segmentation, 3D object detection, and completion, and illustrate the efficacy of our proposed method and its advantages over existing ones. Our source code is available at https://github.com/wendydidi/MMI.git. Record number: A2022-780 Authors' affiliation: non-IGN Theme: IMAGERY Nature: Article DOI: 10.1016/j.patcog.2022.108892 Online publication date: 08/07/2022 Online: https://doi.org/10.1016/j.patcog.2022.108892 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101859
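The record above describes a loss that maximizes mutual information between shallow and deep features. A common way to maximize an MI lower bound is an InfoNCE-style contrastive loss; the sketch below illustrates that generic idea, not the paper's exact MMI loss:

```python
import numpy as np

def infonce_mi_loss(shallow, deep, temperature=0.1):
    """InfoNCE-style lower bound on MI between two feature sets.

    Matching (shallow_i, deep_i) pairs are positives; every other pair
    in the batch is a negative. Minimising this loss maximises the bound.
    """
    s = shallow / np.linalg.norm(shallow, axis=1, keepdims=True)
    d = deep / np.linalg.norm(deep, axis=1, keepdims=True)
    logits = (s @ d.T) / temperature                 # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # cross-entropy on the diagonal

rng = np.random.default_rng(0)
deep = rng.standard_normal((8, 16))
# aligned shallow/deep pairs should score a much lower loss than mismatched ones
aligned = infonce_mi_loss(deep + 0.01 * rng.standard_normal((8, 16)), deep)
mismatched = infonce_mi_loss(np.roll(deep, 1, axis=0), deep)
print(aligned < mismatched)
```

Adding such a term to the main task loss pushes shallow and deep layers toward consistent semantic content, which is the effect the abstract attributes to the MMI module.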
in Pattern recognition > vol 131 (November 2022) . - no. 108892 [article]
Improving image segmentation with boundary patch refinement / Xiaolin Hu in International journal of computer vision, vol 130 no. 11 (November 2022)
[article]
Title: Improving image segmentation with boundary patch refinement Document type: Article/Communication Authors: Xiaolin Hu, Author; Chufeng Tang, Author; Hang Chen, Author; et al. Publication year: 2022 Article pages: pp 2571 - 2589 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] contour
[IGN terms] edge detection
[IGN terms] Euclidean distance
[IGN terms] mask
[IGN terms] image segmentation
[IGN terms] edge-based segmentation
[IGN terms] semantic segmentation
Abstract: (author) Tremendous efforts have been made on image segmentation, but mask quality is still not satisfactory. The boundaries of predicted masks are usually imprecise, due to the low spatial resolution of feature maps and the imbalance caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework, termed BPR, to improve the boundary quality of the predictions of any image segmentation model. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted boundaries. The refinement is accomplished by a boundary patch refinement network at a higher resolution. The trained BPR model can easily be transferred to refine the results of other models as well. Extensive experiments show that the proposed BPR framework yields significant improvements on the semantic, instance, and panoptic segmentation tasks over a variety of baselines on the Cityscapes dataset. Record number: A2022-741 Authors' affiliation: non-IGN Theme: IMAGERY Nature: Article DOI: 10.1007/s11263-022-01662-0 Online publication date: 12/08/2022 Online: https://doi.org/10.1007/s11263-022-01662-0 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101719
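BPR crops small patches along predicted mask boundaries and re-predicts them at higher resolution. Below is a minimal numpy sketch of the patch-extraction step only; the refinement network is omitted, and the patch size and 4-neighbourhood boundary rule are illustrative assumptions:

```python
import numpy as np

def boundary_patches(mask, patch=4):
    """Collect small patches centred on boundary pixels of a binary mask.

    A boundary pixel is a foreground pixel with at least one background
    4-neighbour; a BPR-style refiner would re-predict each cropped patch.
    """
    fg = mask.astype(bool)
    # roll the mask one step in each of the four directions
    shifted = [np.roll(fg, s, axis=a) for a in (0, 1) for s in (1, -1)]
    boundary = fg & ~np.logical_and.reduce(shifted)  # fg pixel touching bg
    ys, xs = np.nonzero(boundary)
    half = patch // 2
    crops = [mask[max(y - half, 0):max(y - half, 0) + patch,
                  max(x - half, 0):max(x - half, 0) + patch]
             for y, x in zip(ys, xs)]
    return boundary, crops

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1                 # a 4x4 square object
boundary, crops = boundary_patches(mask)
print(boundary.sum())              # 12 pixels on the square's ring
```

Each refined patch is then pasted back over the original prediction, which is why the scheme works as post-processing for any segmentation model.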
in International journal of computer vision > vol 130 no. 11 (November 2022) . - pp 2571 - 2589 [article]
Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning / Yunqin Li in Sustainable Cities and Society, vol 86 (November 2022)
[article]
Title: Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning Document type: Article/Communication Authors: Yunqin Li, Author; Nobuyoshi Yabuki, Author; Tomohiro Fukuda, Author Publication year: 2022 Article pages: no. 104140 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] support vector machine classification
[IGN terms] correlation
[IGN terms] panoramic image
[IGN terms] Street View imagery
[IGN terms] regression model
[IGN terms] pedestrian
[IGN terms] virtual reality
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] vision
Abstract: (author) Measuring perceptions of visual walkability in urban streets, and exploring which visual features of the street built environment make walking attractive to humans, are both theoretically and practically important. Previous studies have used either environmental audits and subjective evaluations, which are limited in cost, time, and measurement scale, or computer-aided audits based on natural street view images (SVIs), which leave gaps relative to real perception. In this study, a virtual reality panoramic image-based deep learning framework is proposed for measuring visual walkability perception (VWP) and then quantifying and visualizing the contributing visual features. A VWP classification deep multitask learning (VWPCL) model was first developed and trained on human ratings of panoramic SVIs in virtual reality to predict VWP in six categories. Second, a regression model was used to determine the degree of correlation of various objects with one of the six VWP categories based on semantic segmentation. Furthermore, an interpretable deep learning model was used to assist in identifying and visualizing elements that contribute to VWP. The experiment validated the accuracy of the VWPCL model for predicting VWP. The results represent a further step in understanding the interplay of VWP and street-level semantics and features. Record number: A2022-816 Authors' affiliation: non-IGN Theme: IMAGERY Nature: Article DOI: 10.1016/j.scs.2022.104140 Online publication date: 21/08/2022 Online: https://doi.org/10.1016/j.scs.2022.104140 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101982
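The second stage of the framework correlates segmented object classes with perception ratings via regression. A hypothetical least-squares sketch of that idea, with synthetic per-image class fractions and walkability scores (the classes, weights, and data here are invented for illustration, not the paper's):

```python
import numpy as np

# Hypothetical data: per-image fractions of three segmented classes
# (say vegetation, sidewalk, vehicles) and a walkability rating.
rng = np.random.default_rng(0)
fractions = rng.random((50, 3))            # 50 street views, 3 classes
true_w = np.array([0.8, 0.5, -0.6])        # assumed class contributions
scores = fractions @ true_w + 0.01 * rng.standard_normal(50)

# Least-squares regression recovers each class's association with the score,
# mirroring how segmentation outputs can be correlated with perception ratings.
X = np.column_stack([fractions, np.ones(50)])    # add an intercept column
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(np.round(coef[:3], 2))   # close to the assumed contributions
```

The sign and magnitude of each coefficient then say which scene elements raise or lower the perceived walkability, which is the kind of interpretation the abstract describes.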
in Sustainable Cities and Society > vol 86 (November 2022) . - no. 104140 [article]
An improved multi-task pointwise network for segmentation of building roofs in airborne laser scanning point clouds / Chaoquan Zhang in Photogrammetric record, vol 37 no. 179 (September 2022)
[article]
Title: An improved multi-task pointwise network for segmentation of building roofs in airborne laser scanning point clouds Document type: Article/Communication Authors: Chaoquan Zhang, Author; Hongchao Fan, Author Publication year: 2022 Article pages: pp 260 - 284 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Photogrammetry
[IGN terms] cluster analysis
[IGN terms] deep learning
[IGN terms] nearest-centroid classification
[IGN terms] recurrent neural network classification
[IGN terms] training data (machine learning)
[IGN terms] data fusion
[IGN terms] Norway
[IGN terms] RANSAC (algorithm)
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] roof
Abstract: (author) Roof plane segmentation is an essential step in 3D building reconstruction from airborne laser scanning (ALS) point clouds. Existing approaches either rely on human intervention to select appropriate input parameters for different datasets, or are not automatic and efficient. To tackle these issues, an improved multi-task pointwise network is proposed to simultaneously segment instances (that is, individual roof planes) and semantics (that is, groups of roof planes with similar geometric shapes) in point clouds. PointNet++ is used as a backbone network to extract robust features in the first step. The features from the semantics branch are then added to the instance branch to facilitate the learning of instance embeddings. After that, a feature fusion module is added to the semantics branch to acquire more discriminative features from the backbone network. To increase the accuracy of semantic predictions, the fused semantic features of points belonging to the same instance are aggregated together. Finally, a mean-shift clustering algorithm is applied to the instance embeddings to produce the instance predictions. Furthermore, a new roof dataset (called RoofNTNU) is established, taking ALS point clouds as training data for automatic and more general segmentation. Experiments on the new roof dataset show that the method achieves promising segmentation results: a mean precision (mPrec) of 96.2% for the instance segmentation task and a mean accuracy (mAcc) of 94.4% for the semantic segmentation task. Record number: A2022-936 Authors' affiliation: non-IGN Theme: IMAGERY Nature: Article DOI: 10.1111/phor.12420 Online publication date: 13/07/2022 Online: https://doi.org/10.1111/phor.12420 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102682
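The abstract's final step clusters instance embeddings with mean shift to separate roof planes. A plain flat-kernel mean-shift sketch on synthetic two-blob "embeddings" (the bandwidth, merge threshold, and data are illustrative choices, not the paper's settings):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Flat-kernel mean shift: each point repeatedly moves to the mean of
    all data points within `bandwidth`; nearby modes are merged into labels.
    """
    shifted = points.copy()
    for _ in range(iters):
        for i in range(len(shifted)):
            close = points[np.linalg.norm(points - shifted[i], axis=1) < bandwidth]
            shifted[i] = close.mean(axis=0)
    # merge modes closer than half the bandwidth into shared cluster labels
    labels = -np.ones(len(points), dtype=int)
    modes = []
    for i, p in enumerate(shifted):
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels[i] = j
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels

# two well-separated blobs stand in for embeddings of two roof planes
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = mean_shift(emb, bandwidth=1.0)
print(len(set(labels)))  # 2 clusters
```

Mean shift is attractive here because the number of roof planes per building is unknown in advance; unlike k-means, it infers the cluster count from the embedding density.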
in Photogrammetric record > vol 37 no. 179 (September 2022) . - pp 260 - 284 [article]
Further results in this category:
- Learning indoor point cloud semantic segmentation from image-level labels / Youcheng Song in The Visual Computer, vol 38 no. 9 (September 2022)
- Structured binary neural networks for image recognition / Bohan Zhuang in International journal of computer vision, vol 130 no. 9 (September 2022)
- 3D semantic scene completion: A survey / Luis Roldão in International journal of computer vision, vol 130 no. 8 (August 2022)
- An automatic approach for tree species detection and profile estimation of urban street trees using deep learning and Google street view images / Kwanghun Choi in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
- Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
- A lightweight network with attention decoder for real-time semantic segmentation / Kang Wang in The Visual Computer, vol 38 no. 7 (July 2022)
- Street-view imagery guided street furniture inventory from mobile laser scanning point clouds / Yuzhou Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
- Context-aware network for semantic segmentation toward large-scale point clouds in urban environments / Chun Liu in IEEE Transactions on geoscience and remote sensing, vol 60 no. 6 (June 2022)
- Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 no. 6 (June 2022)
- Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 no. 6 (June 2022)