Descriptor
Termes IGN > sciences naturelles > physique > traitement d'image > analyse d'image numérique > extraction de traits caractéristiques
extraction de traits caractéristiques. Synonym(s): extraction des caractéristiques; extraction de primitive.
Documents available in this category (653)
Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network
Document type: Article/Communication
Authors: Jianfeng Huang; Xinchang Zhang; Qinchuan Xin; et al.
Publication year: 2019
Pages: pp 91 - 105
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] réseau neuronal convolutif
[Termes IGN] résidu
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] zone urbaine
Abstract: (Author) Automated extraction of buildings from remotely sensed data is important for a wide range of applications but challenging due to difficulties in extracting semantic features from complex scenes such as urban areas. Recently developed fully convolutional neural networks (FCNs) have been shown to perform well on urban object extraction because of their outstanding feature learning and end-to-end pixel labeling abilities. The commonly used feature fusion or skip-connection refinement modules of FCNs often overlook the problem of feature selection and can reduce the learning efficiency of the networks. In this paper, we develop an end-to-end trainable gated residual refinement network (GRRNet) that fuses high-resolution aerial images and LiDAR point clouds for building extraction. A modified residual learning network is applied as the encoder part of GRRNet to learn multi-level features from the fused data, and a gated feature labeling (GFL) unit is introduced to reduce unnecessary feature transmission and refine classification results. The proposed model, GRRNet, is tested on a publicly available dataset with urban and suburban scenes. Comparison results show that GRRNet achieves competitive building extraction performance relative to other approaches. The source code of GRRNet is made publicly available for further study.
Record number: A2019-206
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.019
Online publication date: 20/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92669
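The gating idea behind a unit like the GFL described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: in GRRNet the gate would be learned, whereas here the gate scores are hypothetical plain inputs. A sigmoid gate scales each feature so that low-scoring (unnecessary) features are suppressed before further refinement.

```python
import math

def sigmoid(x):
    """Logistic function mapping any score to a gate value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(features, gate_scores):
    """Scale each feature by a sigmoid gate: strongly positive scores
    pass the feature almost unchanged, strongly negative ones suppress it."""
    return [f * sigmoid(s) for f, s in zip(features, gate_scores)]

# toy example: three fused features with hypothetical gate scores
fused = gated_fusion([1.0, 2.0, 3.0], [10.0, 0.0, -10.0])
```

The first feature survives nearly intact, the second is halved (gate 0.5 at score 0), and the third is pushed toward zero, which is the "reduce unnecessary feature transmission" behavior the abstract describes.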
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019) . - pp 91 - 105 [article]
Coastline extraction from SAR images using robust ridge tracing / Dailiang Wang in Marine geodesy, vol 42 n° 3 (May 2019)
[article]
Title: Coastline extraction from SAR images using robust ridge tracing
Document type: Article/Communication
Authors: Dailiang Wang; Xiaoyan Liu
Publication year: 2019
Pages: pp 286 - 315
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] détection de contours
[Termes IGN] érosion côtière
[Termes IGN] filtre de déchatoiement
[Termes IGN] image radar moirée
[Termes IGN] image Radarsat
[Termes IGN] image Sentinel-SAR
[Termes IGN] littoral
[Termes IGN] méthode robuste
[Termes IGN] trait de côte
[Termes IGN] variance
Abstract: (author) Although ridge tracing has the advantages of continuity and high positioning accuracy compared with other edge-based methods, it is difficult to use ridge tracing to extract coastlines from Synthetic Aperture Radar (SAR) images because of the speckle noise that occurs in SAR images. This paper presents a new coastline extraction method for SAR images based on a more robust ridge tracing method. First, according to the statistical properties of the pixel intensities in the land and sea regions of a SAR image, an edge magnitude map that characterizes the boundary between them is produced from the ratio of the variance to the mean, such that the magnitude at the land-sea boundary is much higher than that at other locations. Second, the pixel with the maximum magnitude in the map is adopted as the starting point for tracing, and strip windows, which reduce tracing failures, are adopted to obtain the average magnitudes corresponding to the eight neighborhood pixels around the starting point. The neighborhood pixel with the maximum average magnitude is then adopted as the next tracing point. This procedure is repeated to determine each subsequent point; it covers one part of the tracing operation, and the complete coastline is extracted by performing the remaining part. The experimental results show that the proposed method is more robust than traditional methods, and we demonstrate its effectiveness with RADARSAT-2 and Sentinel-1A data.
Record number: A2019-280
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/01490419.2019.1583147
Online publication date: 29/03/2019
Online: https://doi.org/10.1080/01490419.2019.1583147
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93114
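The variance-to-mean edge magnitude described in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' code: window handling is naive and speckle filtering is omitted, but it shows why the ratio peaks at the land-sea boundary and vanishes inside homogeneous regions.

```python
def edge_magnitude(image, r=1):
    """Variance-to-mean ratio over a (2r+1)x(2r+1) window at each pixel.
    Homogeneous sea or land gives near-zero variance, hence near-zero
    magnitude; the mixed window at the land-sea boundary scores high."""
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            mag[i][j] = var / mean if mean > 0 else 0.0
    return mag

# toy SAR-like image: dark "sea" (left columns) against bright "land" (right)
img = [[1, 1, 1, 9, 9, 9] for _ in range(5)]
mag = edge_magnitude(img)
```

The pixel with the maximum magnitude in `mag` would then serve as the starting point for the ridge tracing step.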
in Marine geodesy > vol 42 n° 3 (May 2019) . - pp 286 - 315 [article]

Voxel-based 3D point cloud semantic segmentation: unsupervised geometric and relationship featuring vs deep learning methods / Florent Poux in ISPRS International journal of geo-information, vol 8 n° 5 (May 2019)
[article]
Title: Voxel-based 3D point cloud semantic segmentation: unsupervised geometric and relationship featuring vs deep learning methods
Document type: Article/Communication
Authors: Florent Poux; Roland Billen
Publication year: 2019
Pages: n° 213
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] arbre de décision
[Termes IGN] classification dirigée
[Termes IGN] classification non dirigée
[Termes IGN] connexité (topologie)
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] voxel
Abstract: (author) Automation in point cloud data processing is central to knowledge discovery within decision-making systems. The definition of relevant features is often key for segmentation and classification, with automated workflows presenting the main challenges. In this paper, we propose voxel-based feature engineering that better characterizes point clusters and provides strong support for supervised or unsupervised classification. We provide different feature generalization levels to permit interoperable frameworks. First, we recommend a shape-based feature set (SF1) that leverages only the raw X, Y, Z attributes of any point cloud. Afterwards, we derive the relationships and topology between voxel entities to obtain a three-dimensional (3D) structural connectivity feature set (SF2). We then provide a knowledge-based decision tree to permit infrastructure-related classification, and study the SF1/SF2 synergy within a new semantic segmentation framework that builds a higher-level semantic representation of point clouds as relevant clusters. Finally, we benchmark the approach against novel and best-performing deep-learning methods using the full S3DIS dataset. We highlight good performance, easy integration, and high F1-scores (> 85%) for planar-dominant classes, comparable to state-of-the-art deep learning.
Record number: A2019-656
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/ijgi8050213
Online publication date: 07/05/2019
Online: https://doi.org/10.3390/ijgi8050213
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97890
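The voxel-based, shape-only (SF1-style) idea, features computed from nothing but the raw X, Y, Z attributes, can be sketched as follows. This is a hypothetical illustration, not the authors' feature set: their SF1 features are richer than the axis-aligned extents used here, but the voxelization step and the "planar clusters have one small extent" cue carry the same intuition.

```python
from collections import defaultdict

def voxelize(points, size):
    """Group raw (x, y, z) points into cubic voxels of edge length `size`."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // size), int(y // size), int(z // size))
        voxels[key].append((x, y, z))
    return voxels

def extent_features(pts):
    """Crude per-voxel shape cue: axis-aligned extents of the cluster.
    A planar-dominant cluster has one extent much smaller than the others."""
    xs, ys, zs = zip(*pts)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# toy point cloud: a flat horizontal patch at z = 0.5
points = [(0.1 * i, 0.1 * j, 0.5) for i in range(10) for j in range(10)]
voxels = voxelize(points, size=1.0)
feats = {k: extent_features(v) for k, v in voxels.items()}
```

Connectivity features in the spirit of SF2 would then be derived from adjacency between the occupied voxel keys, before a decision tree (or any classifier) consumes both sets.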
in ISPRS International journal of geo-information > vol 8 n° 5 (May 2019) . - n° 213 [article]

Automatic sensor orientation using horizontal and vertical line feature constraints / Yanbiao Sun in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Automatic sensor orientation using horizontal and vertical line feature constraints
Document type: Article/Communication
Authors: Yanbiao Sun; Stuart Robson; Daniel Scott; et al.
Publication year: 2019
Pages: pp 172 - 184
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] angle azimutal
[Termes IGN] angle vertical
[Termes IGN] compensation par faisceaux
[Termes IGN] coordonnées horizontales
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] forme linéaire
[Termes IGN] image aérienne
[Termes IGN] ligne caractéristique
[Termes IGN] orientation d'image
[Termes IGN] orientation du capteur
[Termes IGN] point d'appui
Abstract: (Author) To improve the accuracy of sensor orientation using calibrated aerial images, this paper proposes an automatic sensor orientation method that exploits horizontal and vertical constraints on human-engineered structures, addressing the limitations faced when a sub-optimal number of Ground Control Points (GCPs) is available within a scene. Related state-of-the-art methods rely on structured building edges and necessitate manual identification of end points. Our method makes use of line segments but does not require matched end points, thus eliminating inefficient manual intervention.
To achieve this, a 3D line in object space is represented by the intersection of two planes passing through two camera centers. The normal vector of each plane can be written as a function of a pair of azimuth and elevation angles, and the direction vector of the 3D line can be expressed as the cross product of these two planes' normal vectors. We then create observation functions for the horizontal and vertical line constraints based on the vanishing cross product and dot product of the 3D lines' direction vectors. These observation functions are introduced into a hybrid Bundle Adjustment (BA) method as constraints, together with observed image points and observed line segment projections. Finally, to assess the feasibility and effectiveness of the proposed method, simulated and real data are tested. The results demonstrate that, in cases with only 3 GCPs, the accuracy of the proposed method, which uses automatically extracted line features, is increased by 50% compared to a BA using only point constraints.
Record number: A2019-140
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.011
Online publication date: 28/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.011
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92478
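The line-constraint geometry described in this abstract can be sketched as follows. The azimuth/elevation parameterization used here is an assumed convention, and these residuals are only a toy version of the paper's BA observation functions: a line is vertical when its direction's cross product with the plumb axis vanishes, and horizontal when the dot product vanishes.

```python
import math

def normal(azimuth, elevation):
    """Unit normal of a plane, parameterized by azimuth/elevation angles
    (assumed spherical convention)."""
    return (math.cos(elevation) * math.cos(azimuth),
            math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_direction(az1, el1, az2, el2):
    """Direction of the 3D line formed by intersecting two planes:
    the cross product of the two plane normals."""
    return cross(normal(az1, el1), normal(az2, el2))

def vertical_residual(d):
    """A vertical line is parallel to the plumb axis (0, 0, 1),
    so the cross product with it must vanish."""
    return cross(d, (0.0, 0.0, 1.0))

def horizontal_residual(d):
    """A horizontal line is perpendicular to the plumb axis,
    so the dot product with (0, 0, 1) must vanish."""
    return d[2]

# two vertical planes with horizontal normals intersect in a vertical line
d = line_direction(0.0, 0.0, math.pi / 2, 0.0)
```

For this vertical line the vertical residual is the zero vector (constraint satisfied), while the horizontal residual is maximal, which is how a BA would tell the two constraint types apart.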
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019) . - pp 172 - 184 [article]
Journées de la recherche 2019 / Anonyme in Géomatique expert, n° 127 (avril - mai 2019)
[article]
Title: Journées de la recherche 2019
Document type: Article/Communication
Authors: Anonyme
Publication year: 2019
Pages: pp 23 - 34
Languages: French (fre)
Descriptor: [Vedettes matières IGN] Information géographique
[Termes IGN] apprentissage profond
[Termes IGN] base de connaissances
[Termes IGN] carte de Cassini
[Termes IGN] données localisées
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] géoréférencement
[Termes IGN] parcelle agricole
[Termes IGN] paroisse
[Termes IGN] photographie argentique
[Termes IGN] qualité des données
[Termes IGN] réseau neuronal convolutif
[Termes IGN] segmentation
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] série temporelle
Abstract: (Author) This year, the IGN research days gave pride of place to neural networks, a decidedly fashionable topic, as well as to various initiatives for archiving and consulting historical geographic data.
Record number: A2019-308
Author affiliation: non IGN
Theme: GEOMATIQUE/IMAGERIE/INFORMATIQUE/POSITIONNEMENT
Nature: Article
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93284
in Géomatique expert > n° 127 (avril - mai 2019) . - pp 23 - 34 [article]
Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
Multilane roads extracted from the OpenStreetMap urban road network using random forests / Yongyang Xu in Transactions in GIS, vol 23 n° 2 (April 2019)
Building detection and regularisation using DSM and imagery information / Yousif A. Mousa in Photogrammetric record, vol 34 n° 165 (March 2019)
A new waveform decomposition method for multispectral LiDAR / Shalei Song in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019)
A local projection-based approach to individual tree detection and 3-D crown delineation in multistoried coniferous forests using high-density airborne LiDAR data / Aravind Harikumar in IEEE Transactions on geoscience and remote sensing, vol 57 n° 2 (February 2019)
Repeated structure detection for 3D reconstruction of building façade from mobile lidar data / Yanming Chen in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 2 (February 2019)
Challenging deep image descriptors for retrieval in heterogeneous iconographic collections / Dimitri Gominski (2019)
Détection de fenêtres dans un nuage de points de façade et positionnement semi-automatique dans un logiciel BIM / Julie Thierry (2019)
Integration of lidar data and GIS data for point cloud semantic enrichment at the point level / Harith Aljumaily in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 1 (January 2019)