Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > feature extraction
extraction de traits caractéristiques (feature extraction). Synonyms: extraction des caractéristiques, extraction de primitive.
Documents available in this category (714)
Deep learning method for Chinese multisource point of interest matching / Pengpeng Li in Computers, Environment and Urban Systems, vol 96 (September 2022)
[article]
Title: Deep learning method for Chinese multisource point of interest matching
Document type: Article/Communication
Authors: Pengpeng Li; Jiping Liu; An Luo; et al.
Publication year: 2022
Pages: n° 101821
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN terms] semantic matching
[IGN terms] deep learning
[IGN terms] multilayer perceptron classification
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] semantic inference
[IGN terms] semantic information
[IGN terms] point of interest
[IGN terms] vector representation
[IGN terms] natural language processing
Abstract (author): Multisource point of interest (POI) matching refers to the pairing of POIs that refer to the same geographic entity in different data sources; it is also the core issue in geospatial data fusion and updating. Existing methods cannot effectively capture complex semantic information from text, and manually defined rules strongly affect matching results. This study developed a multisource POI matching method based on deep learning that transforms the POI-pair matching problem into a binary classification problem. First, we used three different Chinese word segmentation methods to segment the POI text attributes and used the segmentation results to train a Word2Vec model to generate the corresponding word vector representation. Then, we used a text convolutional neural network (Text-CNN) and a multilayer perceptron (MLP) to extract the POI attributes' features and generate the corresponding feature vector representation. Finally, we used the enhanced sequential inference model (ESIM) to perform local inference and inference composition on each attribute to classify POI pairs. We used a POI dataset covering Baidu Map, Tencent Map, and Gaode Map in Chengdu to train, validate, and test the model. The experimental results show that the matching precision, recall, and F1 score of the proposed method exceed 98% on the test set, significantly better than existing matching methods.
Record number: A2022-513
Author affiliation: non-IGN
Theme: GEOMATIQUE/INFORMATIQUE
Nature: Article
DOI: 10.1016/j.compenvurbsys.2022.101821
Online publication date: 18/06/2022
Online: https://doi.org/10.1016/j.compenvurbsys.2022.101821
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101053
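The record above reduces POI-pair matching to binary classification over learned text embeddings. As a minimal stdlib sketch of that framing (not the authors' Word2Vec/Text-CNN/ESIM pipeline), the toy word vectors, tokenization, and threshold below are all invented for illustration:

```python
import math

# Toy word vectors standing in for Word2Vec output (hypothetical values);
# the paper trains Word2Vec on segmented Chinese POI name attributes.
WORD_VECS = {
    "coffee": [0.9, 0.1, 0.0],
    "cafe":   [0.8, 0.2, 0.1],
    "bank":   [0.0, 0.9, 0.4],
}

def embed(tokens):
    """Average the word vectors of a tokenized POI name."""
    dims = len(next(iter(WORD_VECS.values())))
    acc = [0.0] * dims
    for t in tokens:
        for i, v in enumerate(WORD_VECS.get(t, [0.0] * dims)):
            acc[i] += v
    return [x / max(len(tokens), 1) for x in acc]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match(name_a, name_b, threshold=0.9):
    """Classify a POI name pair as matched/unmatched by embedding similarity."""
    return cosine(embed(name_a), embed(name_b)) >= threshold
```

The real model replaces the fixed threshold with a learned classifier over several attributes, but the matched/unmatched decision boundary plays the same role.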
in Computers, Environment and Urban Systems > vol 96 (September 2022). - n° 101821 [article]

Deep learning feature representation for image matching under large viewpoint and viewing direction change / Lin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Title: Deep learning feature representation for image matching under large viewpoint and viewing direction change
Document type: Article/Communication
Authors: Lin Chen; Christian Heipke
Publication year: 2022
Pages: pp 94-112
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] pattern recognition
[IGN terms] Siamese neural network
[IGN terms] SIFT (algorithm)
Abstract (author): Feature-based image matching has been a research focus in photogrammetry and computer vision for decades, as it is the basis for many applications where multi-view geometry is needed. A typical feature-based image matching algorithm contains five steps: feature detection, affine shape estimation, orientation assignment, description, and descriptor matching. This paper contains innovative work in different steps of feature matching based on convolutional neural networks (CNN). For the affine shape estimation and orientation assignment, the main contribution of this paper is twofold. First, we define a canonical shape and orientation for each feature. As a consequence, instead of the usual Siamese CNN, only single-branch CNNs need to be employed to learn the affine shape and orientation parameters, which turns the related tasks from supervised into self-supervised learning problems, removing the need for known matching relationships between features. Second, the affine shape and orientation are solved simultaneously. To the best of our knowledge, this is the first time these two modules are reported to have been successfully trained together. In addition, for the descriptor learning part, a new weak match finder is suggested to better explore the intra-variance of the appearance of matched features. For any input feature patch, a transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features; they are subsequently used in the standard descriptor learning framework. The proposed modules are integrated into an inference pipeline to form the proposed feature matching algorithm. The algorithm is evaluated on standard benchmarks and is used to solve for the image orientation parameters of aerial oblique images. It is shown that deep learning feature-based image matching leads to more registered images, more reconstructed 3D points, and a more stable block geometry than conventional methods. The code is available at https://github.com/Childhoo/Chen_Matcher.git.
Record number: A2022-502
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.06.003
Online publication date: 14/06/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.06.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101000
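Of the five steps the abstract lists, the final one, descriptor matching, is classically done with nearest-neighbour search plus a ratio test (as in the SIFT literature; the paper's own matcher is learned, so this is background, not the authors' method). A minimal stdlib sketch with invented descriptors:

```python
import math

def dist(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def ratio_test_match(query, candidates, ratio=0.8):
    """Return the index of the nearest candidate descriptor if it passes
    the ratio test (clearly closer than the second-nearest), else None."""
    order = sorted(range(len(candidates)), key=lambda i: dist(query, candidates[i]))
    best, second = order[0], order[1]
    if dist(query, candidates[best]) < ratio * dist(query, candidates[second]):
        return best
    return None  # ambiguous match, rejected
```

Rejecting ambiguous matches this way is what keeps the subsequent orientation/bundle steps from being polluted by wrong correspondences.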
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022). - pp 94-112 [article]

Copies (3):
Barcode      Call number  Support        Location                 Section                Availability
081-2022081  SL           Journal issue  Centre de documentation  Reading-room journals  Available
081-2022083  DEP-RECP     Journal issue  LASTIG                   Unit deposit           Not for loan
081-2022082  DEP-RECF     Journal issue  Nancy                    Unit deposit           Not for loan

Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
[article]
Title: Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation
Document type: Article/Communication
Authors: Huan Ning; Zhenlong Li; Xinyue Ye; et al.
Publication year: 2022
Pages: pp 1317-1342
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] image distortion
[IGN terms] feature extraction
[IGN terms] building height
[IGN terms] Street View image
[IGN terms] tacheometric survey
[IGN terms] digital surface model
[IGN terms] door
Abstract (author): Street view imagery such as Google Street View is widely used in people's daily lives. Many studies have detected and mapped objects such as traffic signs and sidewalks for analysis of the urban built environment. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measurement over large areas is underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored vertical measurement in street view imagery using the principle of tacheometric surveying. In a case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure door elevation. The results suggest that the average error of the estimated elevation is 0.218 m. The depth maps of Google Street View were used to traverse the elevation from the roadway surface to target objects. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies over large areas.
Record number: A2022-465
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1981334
Online publication date: 06/10/2021
Online: https://doi.org/10.1080/13658816.2021.1981334
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100970
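The tacheometric idea in this record (a known door height calibrates a metres-per-pixel scale, which converts pixel offsets into vertical distances) can be sketched as below. The geometry is deliberately simplified to a fronto-parallel view, and every parameter name and value is an illustrative assumption, not taken from the paper:

```python
def lowest_floor_elevation(road_elev_m, door_px_top, door_px_bottom,
                           camera_px_row, known_door_height_m=2.0,
                           camera_height_m=2.5):
    """Estimate door-bottom (lowest floor) elevation from a street view image.

    Simplified tacheometric scheme: the known real-world door height
    calibrates a metres-per-pixel scale at the door's depth, and pixel
    offsets then convert to vertical distances. Assumes the door face is
    roughly fronto-parallel to the camera.
    """
    door_px_height = door_px_bottom - door_px_top
    m_per_px = known_door_height_m / door_px_height
    # Vertical offset of the door bottom relative to the camera's horizon
    # row (image rows grow downward, so a larger row means further below).
    offset_m = (door_px_bottom - camera_px_row) * m_per_px
    return road_elev_m + camera_height_m - offset_m
```

The paper additionally uses Google Street View depth maps to carry the elevation from the road surface to the target; this sketch only shows the pixel-to-metre scaling step.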
in International journal of geographical information science IJGIS > vol 36 n° 7 (July 2022). - pp 1317-1342 [article]

Copies (1):
Barcode      Call number  Support        Location                 Section                Availability
079-2022071  SL           Journal issue  Centre de documentation  Reading-room journals  Available

Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
[article]
Title: Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery
Document type: Article/Communication
Authors: Qian Shen; Jiru Huang; Min Wang; et al.
Publication year: 2022
Pages: pp 78-94
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] change detection
[IGN terms] building detection
[IGN terms] qualitative data
[IGN terms] quantitative estimation
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] multiband image
[IGN terms] dataset
[IGN terms] Siamese neural network
Abstract (author): In remote sensing applications, semantic change detection (SCD) simultaneously identifies changed areas and their change types by jointly conducting bitemporal image classification and change detection. It facilitates change reasoning and provides more application value than binary change detection (BCD), which offers only a binary map of changed/unchanged areas. In this study, we propose a multitask Siamese network, named the semantic feature-constrained change detection (SFCCD) network, for building change detection in bitemporal high-spatial-resolution (HSR) images. SFCCD conducts feature extraction, semantic segmentation and change detection simultaneously, where change detection and semantic segmentation are the main and auxiliary tasks, respectively. For the segmentation task, ResNet50 is used for image feature extraction, and the extracted semantic features are provided to the change detection task via a series of skip connections. For the change detection task, a global channel attention (GCA) module and a multiscale feature fusion (MSFF) module are designed, where high-level features guide the training of the low-level feature maps, and multiscale features are fused by multiple convolutions with different receptive fields. In bitemporal HSR images with different view angles, high-rise buildings exhibit different directional height displacements, which generally cause serious false alarms for common change detection methods, yet known public building change detection datasets often lack buildings with height displacement. We therefore created the Nanjing Dataset (NJDS) and designed the aforementioned network structures and modules to target this issue. Experiments for method validation and comparison were conducted on the NJDS and two public datasets, the WHU Building Dataset (WBDS) and the Google Dataset (GDS). Ablation experiments on the NJDS show that the joint use of the GCA and MSFF modules performs better than several classic modules, including atrous spatial pyramid pooling (ASPP), efficient spatial pyramid (ESP), channel attention block (CAB) and global attention upsampling (GAU), in dealing with building height displacement. Furthermore, SFCCD achieves higher accuracy in terms of OA, recall, F1 score and mIoU than several state-of-the-art change detection methods, including the deeply supervised image fusion network (DSIFN), the dual-task constrained deep Siamese convolutional network (DTCDSCN), and multitask U-Net (MTU-Net).
Record number: A2022-412
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.05.001
Online publication date: 12/05/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.05.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100762
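The record reports OA, recall, F1 and mIoU; for binary change maps these reduce to counts of true/false positives and negatives between predicted and reference masks. A small stdlib sketch of the standard definitions (not code from the paper):

```python
def change_metrics(pred, truth):
    """Precision, recall, F1 and IoU for binary change maps (flat 0/1 lists)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
```

The view-angle false alarms the authors target show up directly in these counts: a displaced rooftop predicted as "changed" inflates fp, depressing precision, F1 and IoU.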
in ISPRS Journal of photogrammetry and remote sensing > vol 189 (July 2022). - pp 78-94 [article]

Copies (1):
Barcode      Call number  Support        Location                 Section                Availability
081-2022071  SL           Journal issue  Centre de documentation  Reading-room journals  Available

Artificial intelligence techniques in extracting building and tree footprints using aerial imagery and LiDAR data / Saeideh Sahebi Vayghan in Geocarto international, vol 37 n° 10 (01/06/2022)
[article]
Title: Artificial intelligence techniques in extracting building and tree footprints using aerial imagery and LiDAR data
Document type: Article/Communication
Authors: Saeideh Sahebi Vayghan; Mohammad Salmani; Neda Ghasemkhanic; et al.
Publication year: 2022
Pages: pp 2967-2995
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetric applications
[IGN terms] genetic algorithm
[IGN terms] k-means classification
[IGN terms] neural network classification
[IGN terms] tree detection
[IGN terms] building detection
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] footprint
[IGN terms] aerial image
[IGN terms] optical image
[IGN terms] fuzzy inference
[IGN terms] mathematical morphology
Abstract (author): One of the most important considerations in urban environments is the extraction of urban objects with a high level of automation. This study presents a new method that uses aerial images and LiDAR data to extract building and tree footprints in urban areas. High-elevation objects were extracted from the LiDAR data using the developed scan-labeling method; then the classification methods Neural Networks (NN), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Genetic-Based K-Means algorithm (GBKMs) were used to separate buildings from trees, and their performance was evaluated. The features used for classification were extracted from the aerial images and LiDAR data, and the training data were selected automatically. Mathematical morphology functions were also used to post-process the classification results. The results show that NN and ANFIS are more effective than the genetic-based K-Means algorithm in detecting small and large buildings.
Record number: A2022-596
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1844311
Online: https://doi.org/10.1080/10106049.2020.1844311
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101300
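The GBKMs baseline mentioned above builds on K-Means clustering. A minimal Lloyd's-algorithm sketch (without the genetic initialisation the paper adds, with a naive first-k-points initialisation, and with invented 2-D feature vectors standing in for the height/spectral attributes derived from LiDAR and imagery) might look like:

```python
def kmeans(points, k=2, iters=20):
    """Minimal K-Means (Lloyd's algorithm) over small feature vectors.

    Returns (centroids, clusters), where clusters[i] holds the points
    assigned to centroids[i] in the last iteration.
    """
    centroids = [list(p) for p in points[:k]]  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [[sum(col) / len(cl) for col in zip(*cl)]
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters
```

In the paper's setting, the resulting clusters would correspond to the building-like and tree-like classes, which mathematical morphology then cleans up.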
in Geocarto international > vol 37 n° 10 (01/06/2022). - pp 2967-2995 [article]

Other records in this category:
- Context-aware network for semantic segmentation toward large-scale point clouds in urban environments / Chun Liu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
- Detecting interchanges in road networks using a graph convolutional network approach / Min Yang in International journal of geographical information science IJGIS, vol 36 n° 6 (June 2022)
- Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)
- Feature-selection high-resolution network with hypersphere embedding for semantic segmentation of VHR remote sensing images / Hanwen Xu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
- Graph-based block-level urban change detection using Sentinel-2 time series / Nan Wang in Remote sensing of environment, vol 274 (June 2022)
- Invariant structure representation for remote sensing object detection based on graph modeling / Zicong Zhu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 6 (June 2022)
- Precise crop classification of hyperspectral images using multi-branch feature fusion and dilation-based MLP / Haibin Wu in Remote sensing, vol 14 n° 11 (June-1 2022)
- Recent advances in forest insect pests and diseases monitoring using UAV-based data: A systematic review / André Duarte in Forests, vol 13 n° 6 (June 2022)
- The promising combination of a remote sensing approach and landscape connectivity modelling at a fine scale in urban planning / Elie Morin in Ecological indicators, vol 139 (June 2022)
- Towards the automated large-scale reconstruction of past road networks from historical maps / Johannes H. Uhl in Computers, Environment and Urban Systems, vol 94 (June 2022)