Descriptor
IGN terms > natural sciences > physics > image processing > digital image analysis > feature extraction
feature extraction (extraction de traits caractéristiques)
Synonym(s): extraction des caractéristiques; extraction de primitive
Documents available in this category (804)


Deep learning method for Chinese multisource point of interest matching / Pengpeng Li in Computers, Environment and Urban Systems, vol 96 (September 2022)
[article]
Title: Deep learning method for Chinese multisource point of interest matching
Document type: Article/Communication
Authors: Pengpeng Li; Jiping Liu; An Luo; et al.
Publication year: 2022
Article on page(s): no. 101821
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Geomatics
[IGN terms] semantic matching
[IGN terms] deep learning
[IGN terms] multilayer perceptron classification
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] semantic inference
[IGN terms] semantic information
[IGN terms] point of interest
[IGN terms] vector representation
[IGN terms] natural language processing
Abstract: (author) Multisource point of interest (POI) matching refers to the pairing of POIs that refer to the same geographic entity in different data sources. This also constitutes the core issue in geospatial data fusion and update. The existing methods cannot effectively capture the complex semantic information from a text, and the manually defined rules largely affect matching results. This study developed a multisource POI matching method based on deep learning that transforms the POI pair matching problem into a binary classification problem. First, we used three different Chinese word segmentation methods to segment the POI text attributes and used the segmentation results to train the Word2Vec model to generate the corresponding word vector representation. Then, we used the text convolutional neural network (Text-CNN) and multilayer perceptron (MLP) to extract the POI attributes' features and generate the corresponding feature vector representation. Finally, we used the enhanced sequential inference model (ESIM) to perform local inference and inference combination on each attribute to realize the classification of POI pairs. We used the POI dataset containing Baidu Map, Tencent Map, and Gaode Map from Chengdu to train, verify, and test the model. The experimental results show that the matching precision, recall rate, and F1 score of the proposed method exceed 98% on the test set, and it is significantly better than the existing matching methods.
Record number: A2022-513
Authors' affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1016/j.compenvurbsys.2022.101821
Online publication date: 18/06/2022
Online: https://doi.org/10.1016/j.compenvurbsys.2022.101821
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101053
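The abstract above reduces POI pairing to a binary classification over text attributes. A minimal, illustrative sketch of that framing, using character n-grams and cosine similarity as a crude stand-in for the paper's Word2Vec / Text-CNN / ESIM stack (the POI names and the 0.5 threshold are assumed example values, not the authors' data or code):

```python
# Illustrative sketch only: POI pair matching as a binary decision.
# Character n-grams stand in for Chinese word segmentation + Word2Vec.
from collections import Counter
import math

def char_ngrams(text, n=2):
    """Bag of character n-grams as a simple text representation."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_poi(name_a, name_b, threshold=0.5):
    """Binary classification: do the two records describe the same entity?"""
    return cosine(char_ngrams(name_a), char_ngrams(name_b)) >= threshold

print(match_poi("Chengdu East Railway Station", "Chengdu East Station"))
print(match_poi("Chengdu East Railway Station", "Sichuan University"))
```

In the paper the learned ESIM classifier replaces this fixed similarity threshold; the sketch only shows the pair-in, binary-label-out shape of the problem.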
in Computers, Environment and Urban Systems > vol 96 (September 2022) . - no. 101821 [article]

Deep learning feature representation for image matching under large viewpoint and viewing direction change / Lin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Title: Deep learning feature representation for image matching under large viewpoint and viewing direction change
Document type: Article/Communication
Authors: Lin Chen; Christian Heipke
Publication year: 2022
Article on page(s): pp 94-112
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] pattern recognition
[IGN terms] Siamese neural network
[IGN terms] SIFT (algorithm)
Abstract: (author) Feature-based image matching has been a research focus in photogrammetry and computer vision for decades, as it is the basis for many applications where multi-view geometry is needed. A typical feature-based image matching algorithm contains five steps: feature detection, affine shape estimation, orientation assignment, description and descriptor matching. This paper contains innovative work in different steps of feature matching based on convolutional neural networks (CNN). For the affine shape estimation and orientation assignment, the main contribution of this paper is twofold. First, we define a canonical shape and orientation for each feature. As a consequence, instead of the usual Siamese CNN, only single-branch CNNs need to be employed to learn the affine shape and orientation parameters, which turns the related tasks from supervised to self-supervised learning problems, removing the need for known matching relationships between features. Second, the affine shape and orientation are solved simultaneously. To the best of our knowledge, this is the first time these two modules are reported to have been successfully trained together. In addition, for the descriptor learning part, a new weak match finder is suggested to better explore the intra-variance of the appearance of matched features. For any input feature patch, a transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features; they are subsequently used in the standard descriptor learning framework. The proposed modules are integrated into an inference pipeline to form the proposed feature matching algorithm. The algorithm is evaluated on standard benchmarks and is used to solve for the parameters of image orientation of aerial oblique images. It is shown that deep learning feature-based image matching leads to more registered images, more reconstructed 3D points and a more stable block geometry than conventional methods. The code is available at https://github.com/Childhoo/Chen_Matcher.git.
Record number: A2022-502
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.06.003
Online publication date: 14/06/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.06.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101000
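The "weak match" notion in the abstract above (a transformed version of a patch whose descriptor lies far from the source descriptor) can be illustrated as hard-positive selection. The toy 2-D descriptors and the plain L2 distance below are placeholders, not the paper's learned finder network:

```python
# Toy illustration of weak-match (hard-positive) selection in descriptor space.
import math

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def weak_match(anchor, transformed_descriptors):
    """Among descriptors of transformed copies of the same patch,
    return the one farthest from the anchor: the hardest positive."""
    return max(transformed_descriptors, key=lambda d: l2(anchor, d))

anchor = [0.0, 0.0]
candidates = [[0.1, 0.0], [0.5, 0.5], [0.0, 0.2]]
print(weak_match(anchor, candidates))  # the farthest positive wins
```

Training descriptor networks on such far-apart positives pushes the embedding to tolerate large appearance changes, which is the motivation the abstract gives for the weak match finder.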
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp 94-112 [article]

Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
[article]
Title: Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation
Document type: Article/Communication
Authors: Huan Ning; Zhenlong Li; Xinyue Ye; et al.
Publication year: 2022
Article on page(s): pp 1317-1342
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] image distortion
[IGN terms] feature extraction
[IGN terms] building height
[IGN terms] Street View image
[IGN terms] tacheometric survey
[IGN terms] digital surface model
[IGN terms] door
Abstract: (author) Street view imagery such as Google Street View is widely used in people's daily lives. Many studies have been conducted to detect and map objects such as traffic signs and sidewalks for urban built-up environment analysis. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measuring in large areas is underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored the vertical measurement in street view imagery using the principle of tacheometric surveying. In the case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure doors' elevation. The results suggest that the average error of estimated elevation is 0.218 m. The depthmaps of Google Street View were utilized to traverse the elevation from the roadway surface to target objects. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies for large areas.
Record number: A2022-465
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1981334
Online publication date: 06/10/2021
Online: https://doi.org/10.1080/13658816.2021.1981334
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100970
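The tacheometric idea the abstract relies on is simple: a detected door of known real-world height fixes the metres-per-pixel scale at that facade, and that scale converts vertical pixel offsets into elevations. A minimal sketch of the geometry; the 2.03 m door height, pixel coordinates, and function names are assumed example values, not the paper's data or code:

```python
# Sketch of the tacheometric principle: known object height -> image scale
# -> elevation from pixel offsets. All numbers below are assumed examples.
DOOR_HEIGHT_M = 2.03  # assumed standard door height, fixed by convention

def metres_per_pixel(door_top_px, door_bottom_px):
    """Image scale at the facade, from the door's pixel extent."""
    return DOOR_HEIGHT_M / abs(door_bottom_px - door_top_px)

def elevation_above_reference(ref_row_px, target_row_px, scale_m_per_px):
    """Vertical offset in metres; positive when the target sits above the
    reference row (image rows grow downward)."""
    return (ref_row_px - target_row_px) * scale_m_per_px

# A door spanning rows 400..806 gives 2.03 m / 406 px = 0.005 m per pixel.
scale = metres_per_pixel(door_top_px=400, door_bottom_px=806)
# Door bottom (row 806) measured against the road surface (row 900).
print(round(elevation_above_reference(900, 806, scale), 3))
```

In the paper this per-facade scaling is combined with Google Street View depthmaps to carry the elevation from the roadway to the door threshold; the sketch covers only the single-facade case.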
in International journal of geographical information science IJGIS > vol 36 n° 7 (July 2022) . - pp 1317-1342 [article]

Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
[article]
Title: Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery
Document type: Article/Communication
Authors: Qian Shen; Jiru Huang; Min Wang; et al.
Publication year: 2022
Article on page(s): pp 78-94
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] change detection
[IGN terms] building detection
[IGN terms] qualitative data
[IGN terms] quantitative estimation
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] multiband image
[IGN terms] dataset
[IGN terms] Siamese neural network
Abstract: (author) In the field of remote sensing applications, semantic change detection (SCD) simultaneously identifies changed areas and their change types by jointly conducting bitemporal image classification and change detection. It facilitates change reasoning and provides more application value than binary change detection (BCD), which offers only a binary map of the changed/unchanged areas. In this study, we propose a multitask Siamese network, named the semantic feature-constrained change detection (SFCCD) network, for building change detection in bitemporal high-spatial-resolution (HSR) images. SFCCD conducts feature extraction, semantic segmentation and change detection simultaneously, where change detection and semantic segmentation are the main and auxiliary tasks, respectively. For the segmentation task, ResNet50 is used to conduct image feature extraction, and the extracted semantic features are provided to execute the change detection task via a series of jump connections. For the change detection task, a global channel attention (GCA) module and a multiscale feature fusion (MSFF) module are designed, where high-level features offer training guidance to the low-level feature maps, and multiscale features are fused with multiple convolutions that possess different receptive fields. In bitemporal HSR images with different view angles, high-rise buildings have different directional height displacements, which generally cause serious false alarms for common change detection methods. However, known public building change detection datasets often lack buildings with height displacement. We thus create the Nanjing Dataset (NJDS) and design the aforementioned network structures and modules to target this issue. Experiments for method validation and comparison are conducted on the NJDS and two additional public datasets, i.e., the WHU Building Dataset (WBDS) and Google Dataset (GDS). Ablation experiments on the NJDS show that the joint utilization of the GCA and MSFF modules performs better than several classic modules, including atrous spatial pyramid pooling (ASPP), efficient spatial pyramid (ESP), channel attention block (CAB) and global attention upsampling (GAU) modules, in dealing with building height displacement. Furthermore, SFCCD achieves higher accuracy in terms of the OA, recall, F1-score and mIoU measures than several state-of-the-art change detection methods, including the deeply supervised image fusion network (DSIFN), the dual-task constrained deep Siamese convolutional network (DTCDSCN), and multitask U-Net (MTU-Net).
Record number: A2022-412
Authors' affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.05.001
Online publication date: 12/05/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.05.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100762
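The Siamese pattern underlying this abstract (both acquisition dates pass through the same shared-weight encoder, and change is scored on the feature difference) can be shown with a toy scalar encoder. This is a hedged sketch of the pattern only, not the SFCCD network; the images, weight, and 0.1 threshold are assumed example values:

```python
# Toy Siamese change detection: one shared encoder, change scored on
# the difference of the two encodings. Illustration only.
def encoder(pixels, weight=0.5):
    """Shared-weight encoder: a single scalar feature per image (toy)."""
    return weight * sum(pixels) / len(pixels)

def change_score(img_t1, img_t2):
    """Distance between the two dates in feature space."""
    return abs(encoder(img_t1) - encoder(img_t2))

def changed(img_t1, img_t2, threshold=0.1):
    """Binary change map entry for one location."""
    return change_score(img_t1, img_t2) > threshold

print(changed([0.1, 0.1, 0.1], [0.9, 0.9, 0.9]))  # large difference: True
print(changed([0.5, 0.5, 0.5], [0.5, 0.6, 0.5]))  # small difference: False
```

Because the encoder weights are shared, identical scenes map to identical features regardless of date, so the difference isolates genuine change; SFCCD adds semantic segmentation as an auxiliary task on top of this skeleton.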
in ISPRS Journal of photogrammetry and remote sensing > vol 189 (July 2022) . - pp 78-94 [article]
Copies (1): barcode 081-2022071, call number SL, support: journal, location: Centre de documentation, section: Revues en salle, availability: available

Human cognition based framework for detecting roads from remote sensing images / Naveen Chandra in Geocarto international, vol 37 n° 8 ([22/06/2022])
[article]
Title: Human cognition based framework for detecting roads from remote sensing images
Document type: Article/Communication
Authors: Naveen Chandra; Himadri Vaidya; Jayanta Kumar Ghosh
Publication year: 2022
Article on page(s): pp 2365-2384
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] digital image analysis
[IGN terms] classification
[IGN terms] cognition
[IGN terms] road network extraction
[IGN terms] high-resolution image
[IGN terms] interpretation (psychology)
[IGN terms] cognitive representation
[IGN terms] image segmentation
Abstract: (author) The complete extraction of roads from remote sensing images (RSIs) is an emerging area of research. It is an interesting topic, as it involves diverse procedures for detecting roads. The detection of roads using high-resolution satellite images (HRSi) is challenging because of the occurrence of several types of noise, such as bridges, vehicles, and crossing lines. The extraction of the correct road network is crucial due to its broad range of applications, such as transportation, map updating, navigation, and map generation. Therefore, our paper concentrates on understanding the cognitive processes, reasoning, and knowledge used by the analyst through visual cognition while performing the task of road detection from HRSi. The novel process emulates human cognition within a cognitive task analysis carried out in five stages. The suggested cognitive procedure for road extraction is validated on fifteen HRSi of four different land cover patterns, specifically developed-sub-urban (DSUr), developed-urban (DUr), emerging-sub-urban (ESUr), and emerging-urban (EUr). The experimental results and the comparative assessment demonstrate the impact of the presented cognitive method.
Record number: A2022-506
Authors' affiliation: non-IGN
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1810330
Online publication date: 14/10/2020
Online: https://doi.org/10.1080/10106049.2020.1810330
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101027
in Geocarto international > vol 37 n° 8 [22/06/2022] . - pp 2365-2384 [article]

Detecting interchanges in road networks using a graph convolutional network approach / Min Yang in International journal of geographical information science IJGIS, vol 36 n° 6 (June 2022)
Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)
Graph-based block-level urban change detection using Sentinel-2 time series / Nan Wang in Remote sensing of environment, vol 274 (June 2022)
Precise crop classification of hyperspectral images using multi-branch feature fusion and dilation-based MLP / Haibin Wu in Remote sensing, vol 14 n° 11 (June-1 2022)
Recent advances in forest insect pests and diseases monitoring using UAV-based data: A systematic review / André Duarte in Forests, vol 13 n° 6 (June 2022)
Species level classification of Mediterranean sparse forests-maquis formations using Sentinel-2 imagery / Semiha Demirbaş Çağlayana in Geocarto international, vol 37 n° 6 (June 2022)
The promising combination of a remote sensing approach and landscape connectivity modelling at a fine scale in urban planning / Elie Morin in Ecological indicators, vol 139 (June 2022)
Towards the automated large-scale reconstruction of past road networks from historical maps / Johannes H. Uhl in Computers, Environment and Urban Systems, vol 94 (June 2022)
True orthophoto generation based on unmanned aerial vehicle images using reconstructed edge points / Mojdeh Ebrahimikia in Photogrammetric record, vol 37 n° 178 (June 2022)
An informal road detection neural network for societal impact in developing countries / Inger Fabris-Rotelli in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-4-2022 (2022 edition)