Descriptor
Termes IGN > mathématiques > statistique mathématique > analyse de données > segmentation > segmentation sémantique
segmentation sémantique. Synonym(s): étiquetage sémantique | étiquetage de données
Documents available in this category (235)
Semantic segmentation of road furniture in mobile laser scanning data / Fashuai Li in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
[article]
Title: Semantic segmentation of road furniture in mobile laser scanning data
Document type: Article/Communication
Authors: Fashuai Li; Matti Lehtomäki; Sander J. Oude Elberink; et al.
Publication year: 2019
Pages: pp 98 - 113
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Lasergrammétrie
[Termes IGN] classification bayesienne
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par séparateurs à vaste marge
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] mobilier urbain
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (author) Road furniture recognition has become a prevalent issue in the past few years because of its great importance in smart cities and autonomous driving. Previous research has especially focussed on pole-like road furniture, such as traffic signs and lamp posts. Published methods have mainly classified road furniture as individual objects. However, most road furniture consists of a combination of classes, such as a traffic sign mounted on a street light pole. To tackle this problem, we propose a framework to interpret road furniture at a more detailed level. Instead of being interpreted as single objects, mobile laser scanning data of road furniture is decomposed into elements individually labelled as poles and objects attached to them, such as street lights, traffic signs and traffic lights. In our framework, we first detect road furniture from unorganised mobile laser scanning point clouds. Detected road furniture is then decomposed into poles and attachments (e.g. traffic signs). In the interpretation stage, we extract a set of features to classify the attachments, using a knowledge-driven method and four representative types of machine learning classifiers (random forest, support vector machine, Gaussian mixture model and naïve Bayes) to identify the best-performing method. The designed features are the unary features of the attachments and the spatial relations between poles and their attachments. Two experimental test sites were used, the Enschede dataset and the Saunalahti dataset; the Saunalahti dataset was collected in two different epochs. In the experiments, the random forest classifier outperforms the other methods, with an overall accuracy higher than 80% on the Enschede test site and higher than 90% in both Saunalahti epochs. The designed features play an important role in the interpretation of road furniture. The results of two epochs in the same area demonstrate the reliability of our framework and show that the method transfers well, with an accuracy over 90% when the training data of one epoch is used to test the data of the other.
Record number: A2019-266
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.06.001
Online publication date: 08/06/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.06.001
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93081
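The abstract compares four classifiers for labelling pole attachments. As an illustration only, here is a minimal sketch of one of them, naïve Bayes with Gaussian likelihoods, applied to hypothetical unary attachment features (height above ground and attachment area); the feature choice and toy values are assumptions, not the paper's data.

```python
import math

def fit_gaussian_nb(samples):
    """Fit per-class, per-feature Gaussian parameters (mean, variance)."""
    # samples: {class_label: [feature_vector, ...]}
    params = {}
    for label, rows in samples.items():
        dims = len(rows[0])
        means = [sum(r[d] for r in rows) / len(rows) for d in range(dims)]
        variances = [
            sum((r[d] - means[d]) ** 2 for r in rows) / len(rows) + 1e-6
            for d in range(dims)
        ]
        params[label] = (means, variances)
    return params

def log_likelihood(x, means, variances):
    # Sum of per-feature Gaussian log-densities (uniform class prior assumed).
    return sum(
        -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        for xi, m, v in zip(x, means, variances)
    )

def classify(params, x):
    return max(params, key=lambda label: log_likelihood(x, *params[label]))

# Toy training data: [height above ground (m), attachment area (m^2)]
training = {
    "traffic sign": [[2.2, 0.5], [2.4, 0.6], [2.0, 0.4]],
    "street light": [[6.0, 0.2], [6.5, 0.25], [5.8, 0.15]],
}
model = fit_gaussian_nb(training)
print(classify(model, [2.3, 0.55]))  # a low, large attachment
print(classify(model, [6.2, 0.2]))   # a high, small attachment
```

The paper found random forests to outperform this family of classifiers; naïve Bayes is shown here only because it fits in a few lines.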
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 98 - 113 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019081 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2019083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Structural segmentation and classification of mobile laser scanning point clouds with large variations in point density / Yuan Li in ISPRS Journal of photogrammetry and remote sensing, vol 153 (July 2019)
[article]
Title: Structural segmentation and classification of mobile laser scanning point clouds with large variations in point density
Document type: Article/Communication
Authors: Yuan Li; Bo Wu; Xuming Ge
Publication year: 2019
Pages: pp 151 - 165
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Lasergrammétrie
[Termes IGN] champ aléatoire conditionnel
[Termes IGN] classification
[Termes IGN] classification basée sur les régions
[Termes IGN] densité des points
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] Hong-Kong
[Termes IGN] modèle 3D de l'espace urbain
[Termes IGN] Paris (75)
[Termes IGN] scène urbaine
[Termes IGN] segmentation en régions
[Termes IGN] segmentation hiérarchique
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (author) Objects are formed by various structures, and such structural information is essential for the identification of objects, especially for street facilities captured by mobile laser scanning (MLS) data with abundant detail. However, because of the large volume of data, large variations in point density, noise, and the complexity of scanned scenes, effective decomposition of objects into physically meaningful structures remains a challenging issue. Moreover, structural information has rarely been considered to improve the accuracy of distinguishing between objects with global or local similarity, such as traffic signs and traffic lights. We therefore propose a structural segmentation and classification method for MLS point clouds that is efficient and robust to variations in point density and to complex urban scenes. During the segmentation stage, a novel region-growing approach and a multi-size supervoxel segmentation algorithm robust to noise and varying density are combined to extract effective local shape descriptors. Structural components with physically meaningful labels are generated via structural labelling and clustering. During the classification stage, we consider the structural information at various scales and locations and encode it into a conditional random field model for unary and pairwise inference. High-order potentials are also introduced into the conditional random field to eliminate regional label noise. These high-order potentials are defined over regions independent of connection relationships and can therefore act on isolated nodes. Experiments with two MLS datasets of typical urban scenes in Paris and Hong Kong were used to evaluate the performance of the proposed method. Nine and eleven object classes were recognized in the two datasets with overall accuracies of 97.13% and 95.79%, respectively, indicating the effectiveness of the proposed method for interpreting complex urban scenes from point clouds with large variations in point density. Compared with previous studies on the Paris dataset, our method recognized more classes and obtained a mean F1-score of 72.70% over seven common classes, higher than the best previously reported result.
Record number: A2019-262
Author affiliation: non IGN
Theme: IMAGERIE/URBANISME
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.05.007
Online publication date: 28/05/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.05.007
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93075
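The segmentation stage above combines region growing with multi-size supervoxels; the density-robust algorithm itself cannot be reconstructed from the abstract. The following is only a generic Euclidean region-growing sketch on a toy point set, with the distance threshold as an assumed parameter.

```python
def region_grow(points, radius):
    """Group points into connected clusters: two points fall in the same
    region when their Euclidean distance is within `radius`, transitively."""
    unvisited = set(range(len(points)))
    regions = []
    while unvisited:
        seed = unvisited.pop()
        region, frontier = [seed], [seed]
        while frontier:
            current = frontier.pop()
            # Brute-force neighbour search; real pipelines use a k-d tree.
            neighbours = [
                j for j in unvisited
                if sum((a - b) ** 2 for a, b in zip(points[current], points[j]))
                <= radius ** 2
            ]
            for j in neighbours:
                unvisited.discard(j)
            region.extend(neighbours)
            frontier.extend(neighbours)
        regions.append(sorted(region))
    return regions

# Two clusters of 3D points separated by a large gap
pts = [(0, 0, 0), (0.5, 0, 0), (0.9, 0.2, 0), (10, 10, 0), (10.4, 10, 0)]
print(region_grow(pts, radius=1.0))
```

The region order depends on set iteration; only the cluster memberships are deterministic.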
in ISPRS Journal of photogrammetry and remote sensing > vol 153 (July 2019) . - pp 151 - 165 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019071 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2019073 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019072 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Semantic façade segmentation from airborne oblique images / Yaping Lin in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
[article]
Title: Semantic façade segmentation from airborne oblique images
Document type: Article/Communication
Authors: Yaping Lin; Francesco Nex; Michael Ying Yang
Publication year: 2019
Pages: pp 425 - 433
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] champ aléatoire conditionnel
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] façade
[Termes IGN] image aérienne oblique
[Termes IGN] image RVB
[Termes IGN] segmentation d'image
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (author) In this paper, very-high-resolution oblique airborne images are used to address façade segmentation from aerial views in urban areas. A traditional classification method (random forests) is compared with state-of-the-art fully convolutional networks (FCNs). The random forests use hand-crafted image features, including red, green, blue (RGB), scale-invariant feature transform (SIFT) and Texton features, and point cloud features consisting of normal vectors and planarity extracted at different scales. In contrast, the inputs of the FCNs are the RGB bands and the third component of the normal vectors. In both cases, three-dimensional (3D) features are projected back into image space to support the façade interpretation. A fully connected conditional random field (CRF) is finally applied as a post-processing step of the FCN to refine the segmentation results. Several tests have been performed, and the results show that the models embedding the 3D component outperform the solution using only images. The FCNs significantly outperformed the random forests, especially for balcony delineation.
Record number: A2019-247
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.85.6.425
Online publication date: 01/06/2019
Online: https://doi.org/10.14358/PERS.85.6.425
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93003
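The abstract states that the FCN inputs are the RGB bands plus the third (z) component of the projected normal vectors. Below is a minimal sketch of assembling such a per-pixel 4-channel input; the nested-list data layout and the toy values are assumptions, not the paper's pipeline.

```python
def build_fcn_input(rgb, normals):
    """Stack each pixel's (R, G, B) with the z component of its surface
    normal, yielding a 4-channel image as nested lists."""
    assert len(rgb) == len(normals), "images must share dimensions"
    return [
        [list(px) + [n[2]] for px, n in zip(rgb_row, normal_row)]
        for rgb_row, normal_row in zip(rgb, normals)
    ]

# One-row toy image: RGB triples and unit-ish normals (nx, ny, nz)
rgb = [[(120, 90, 60), (130, 95, 70)]]
normals = [[(0.0, 0.0, 1.0), (0.1, 0.0, 0.995)]]
features = build_fcn_input(rgb, normals)
print(features[0][0])  # → [120, 90, 60, 1.0]
```

A flat, upward-facing surface contributes nz near 1, a vertical façade nz near 0, which is the geometric cue the extra channel carries.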
in Photogrammetric Engineering & Remote Sensing, PERS > vol 85 n° 6 (June 2019) . - pp 425 - 433 [article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
105-2019061 | SL | Journal | Centre de documentation | Journals in reading room | Available

Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network
Document type: Article/Communication
Authors: Jianfeng Huang; Xinchang Zhang; Qinchuan Xin; et al.
Publication year: 2019
Pages: pp 91 - 105
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] réseau neuronal convolutif
[Termes IGN] résidu
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] zone urbaine
Abstract: (author) Automated extraction of buildings from remotely sensed data is important for a wide range of applications but challenging because of the difficulty of extracting semantic features from complex scenes such as urban areas. Recently developed fully convolutional networks (FCNs) have been shown to perform well on urban object extraction because of their outstanding feature learning and end-to-end pixel labeling abilities. The commonly used feature fusion or skip-connection refinement modules of FCNs often overlook the problem of feature selection, which can reduce the learning efficiency of the networks. In this paper, we develop an end-to-end trainable gated residual refinement network (GRRNet) that fuses high-resolution aerial images and LiDAR point clouds for building extraction. A modified residual learning network is applied as the encoder of GRRNet to learn multi-level features from the fused data, and a gated feature labeling (GFL) unit is introduced to reduce unnecessary feature transmission and refine the classification results. The proposed model, GRRNet, is tested on a publicly available dataset with urban and suburban scenes. Comparison results show that GRRNet has competitive building extraction performance relative to other approaches. The source code of GRRNet is publicly available.
Record number: A2019-206
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.019
Online publication date: 20/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92669
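The gated feature labeling (GFL) unit is described only at a high level in the abstract. The sketch below shows a generic element-wise sigmoid gate blending an image feature vector with a LiDAR feature vector; the gating form g*a + (1 - g)*b and all parameter values are assumptions for illustration, not the paper's architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(image_feats, lidar_feats, gate_weights, gate_biases):
    """For each channel, compute a gate g = sigmoid(w * (a + b) + c) and
    output g * a + (1 - g) * b, letting the gate pick which modality
    dominates that channel. A generic sketch, not the paper's GFL unit."""
    fused = []
    for a, b, w, c in zip(image_feats, lidar_feats, gate_weights, gate_biases):
        g = sigmoid(w * (a + b) + c)
        fused.append(g * a + (1 - g) * b)
    return fused

# Channel 0: a strongly positive weight drives the gate open (keep image).
# Channel 1: a strongly negative weight drives it shut (keep LiDAR).
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [10.0, -10.0], [0.0, 0.0])
print(fused)
```

In a trained network the weights would be learned, letting the model suppress uninformative channels instead of passing everything through a plain skip connection.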
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 91 - 105 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019051 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2019053 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019052 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Voxel-based 3D point cloud semantic segmentation: unsupervised geometric and relationship featuring vs deep learning methods / Florent Poux in ISPRS International journal of geo-information, vol 8 n° 5 (May 2019)
[article]
Title: Voxel-based 3D point cloud semantic segmentation: unsupervised geometric and relationship featuring vs deep learning methods
Document type: Article/Communication
Authors: Florent Poux; Roland Billen
Publication year: 2019
Pages: n° 213
General note: Bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Lasergrammétrie
[Termes IGN] arbre de décision
[Termes IGN] classification dirigée
[Termes IGN] classification non dirigée
[Termes IGN] connexité (topologie)
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voxel
Abstract: (author) Automation in point cloud data processing is central to knowledge discovery within decision-making systems. The definition of relevant features is often key for segmentation and classification, with automated workflows presenting the main challenges. In this paper, we propose voxel-based feature engineering that better characterizes point clusters and provides strong support for supervised or unsupervised classification. We provide different feature generalization levels to permit interoperable frameworks. First, we recommend a shape-based feature set (SF1) that leverages only the raw X, Y, Z attributes of any point cloud. We then derive relationships and topology between voxel entities to obtain a three-dimensional (3D) structural connectivity feature set (SF2). Finally, we provide a knowledge-based decision tree to permit infrastructure-related classification. We study the SF1/SF2 synergy in a new semantic segmentation framework for the constitution of a higher semantic representation of point clouds in relevant clusters. Finally, we benchmark the approach against novel and best-performing deep learning methods on the full S3DIS dataset. We highlight good performance, easy integration, and high F1-scores (> 85%) for planar-dominant classes, comparable to state-of-the-art deep learning.
Record number: A2019-656
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/ijgi8050213
Online publication date: 07/05/2019
Online: https://doi.org/10.3390/ijgi8050213
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97890
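The voxel-based features described above start from bucketing points into a regular grid. Below is a minimal voxelization sketch with per-voxel centroids; the voxel size is an assumed parameter, and this is not the paper's SF1/SF2 feature set, only the grouping step such features build on.

```python
from collections import defaultdict

def voxelize(points, voxel_size):
    """Bucket 3D points into axis-aligned voxels, keyed by integer grid
    coordinates. Floor division keeps negative coordinates consistent."""
    voxels = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        voxels[key].append(p)
    return dict(voxels)

def voxel_centroids(voxels):
    """Per-voxel centroid: the mean of each coordinate over the voxel's points."""
    return {
        key: tuple(sum(axis) / len(pts) for axis in zip(*pts))
        for key, pts in voxels.items()
    }

# Two points share a voxel; the third lands three cells away along x.
pts = [(0.1, 0.1, 0.1), (0.4, 0.2, 0.3), (1.6, 0.1, 0.2)]
v = voxelize(pts, voxel_size=0.5)
print(sorted(v))  # occupied voxel keys
print(voxel_centroids(v))
```

Shape descriptors (planarity, linearity, and similar) are then computed per voxel rather than per point, which is what makes the approach tractable on large clouds.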
in ISPRS International journal of geo-information > vol 8 n° 5 (May 2019) . - n° 213 [article]

Other records in this category:
3D hyperspectral point cloud generation: Fusing airborne laser scanning and hyperspectral imaging sensors for improved object-based information extraction / Maximilian Brell in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019)
An exploratory analysis of usability of Flickr tags for land use/land cover attribution / Yingwei Yan in Geo-spatial Information Science, vol 22 n° 1 (March 2019)
Modeling and visualizing semantic and spatio-temporal evolution of topics in interpersonal communication on Twitter / Caglar Koylu in International journal of geographical information science IJGIS, Vol 33 n° 3-4 (March - April 2019)
Semantic understanding of scenes through the ADE20K dataset / Bolei Zhou in International journal of computer vision, vol 127 n° 3 (March 2019)
GeoTxt: A scalable geoparsing system for unstructured text geolocation / Morteza Karimzadeh in Transactions in GIS, vol 23 n° 1 (February 2019)
Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
Enrichissement d'orthophotographie par des données OpenStreetMap pour l'apprentissage machine / Gauthier Fillières-Riveau (2019)
Integration of lidar data and GIS data for point cloud semantic enrichment at the point level / Harith Aljumaily in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 1 (January 2019)
LU-Net, An efficient network for 3D LiDAR point cloud semantic segmentation based on end-to-end-learned 3D features and U-Net / Pierre Biasutti (2019)