Descriptor
IGN descriptor terms > natural sciences > physics > image processing > digital image analysis > feature extraction
feature extraction. Synonym(s): feature extraction; primitive extraction.


Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation / Huan Ning in Annals of GIS, vol 26 n° 4 (December 2020)
[article]
Title: Choosing an appropriate training set size when using existing data to train neural networks for land cover segmentation
Document type: Article/Communication
Authors: Huan Ning; Zhenlong Li; Cuizhen Wang; et al.
Publication year: 2020
Pagination: pp 329 - 342
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] remote sensing applications
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] contour
[IGN descriptor terms] feature extraction
[IGN descriptor terms] dataset
[IGN descriptor terms] Kiangsi (China)
[IGN descriptor terms] land cover
[IGN descriptor terms] image segmentation
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] training set size
Abstract: (author) Land cover data is an inventory of objects on the Earth's surface, which is often derived from remotely sensed imagery. Deep Convolutional Neural Network (DCNN) is a competitive method in image semantic segmentation. Some scholars argue that inadequate training sets are an obstacle when applying DCNNs to remote sensing image segmentation. While existing land cover data can be converted into large training sets, the size of the training data set needs to be carefully considered. In this paper, we used different portions of a high-resolution land cover map to produce training sets of different sizes, trained DCNNs (SegNet and U-Net) on them, and then quantitatively evaluated the impact of training set size on the performance of the trained DCNN. We also introduced a new metric, Edge-ratio, to assess the performance of a DCNN in maintaining the boundaries of land cover objects. Based on the experiments, we document the relationship between segmentation accuracy and the size of the training set, as well as the nonstationary accuracies among different land cover types. The findings of this paper can be used to effectively tailor existing land cover data into training sets, and thus accelerate the assessment and deployment of deep learning techniques for high-resolution land cover map extraction.
Record number: A2020-800
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/19475683.2020.1803402
Online publication date: 10/08/2020
Online: https://doi.org/10.1080/19475683.2020.1803402
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96723
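The experiment above tailors one existing land cover product into training sets of several sizes. A minimal sketch of that sampling step, assuming random square tiles (the function name, patch shape, and sampling strategy are assumptions, not the authors' exact protocol):

```python
import numpy as np

def sample_training_patches(image, labels, n_patches, patch_size, seed=0):
    """Cut n_patches random (image, mask) tiles from one land cover map.

    Illustrative only: the paper derives training sets of different sizes
    from a high-resolution land cover product; its exact tiling scheme is
    not reproduced here.
    """
    rng = np.random.default_rng(seed)
    h, w = labels.shape[:2]
    # Top-left corners of each tile, kept fully inside the map.
    xs = rng.integers(0, h - patch_size + 1, size=n_patches)
    ys = rng.integers(0, w - patch_size + 1, size=n_patches)
    patches = np.stack([image[x:x + patch_size, y:y + patch_size]
                        for x, y in zip(xs, ys)])
    masks = np.stack([labels[x:x + patch_size, y:y + patch_size]
                      for x, y in zip(xs, ys)])
    return patches, masks
```

Varying `n_patches` while keeping the evaluation set fixed reproduces the shape of the paper's experiment: accuracy as a function of training set size.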
in Annals of GIS > vol 26 n° 4 (December 2020) . - pp 329 - 342 [article]

MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds / Haifeng Luo in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
[article]
Title: MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds
Document type: Article/Communication
Authors: Haifeng Luo; Chongcheng Chen; Lina Fang; et al.
Publication year: 2020
Pagination: pp 8301 - 8315
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] lasergrammetry
[IGN descriptor terms] deep learning
[IGN descriptor terms] cognition
[IGN descriptor terms] lidar data
[IGN descriptor terms] feature extraction
[IGN descriptor terms] multiple representation
[IGN descriptor terms] urban scene
[IGN descriptor terms] semantic segmentation
[IGN descriptor terms] point cloud
Abstract: (author) Semantic segmentation is one of the fundamental tasks in understanding and applying urban scene point clouds. Recently, deep learning has been introduced to the field of point cloud processing. However, compared to images, which are characterized by their regular data structure, a point cloud is a set of unordered points, which makes semantic segmentation a challenge. Consequently, existing deep learning methods for semantic segmentation of point clouds achieve less success than those applied to images. In this article, we propose a novel method for urban scene point cloud semantic segmentation using deep learning. First, we use homogeneous supervoxels to reorganize raw point clouds, effectively reducing computational complexity and mitigating the nonuniform point distribution. We then use supervoxels as the basic processing units, which further expands receptive fields to obtain more descriptive contexts. Next, a sparse autoencoder (SAE) is presented for feature embedding representations of the supervoxels. Subsequently, we propose a regional relation feature reasoning module (RRFRM) inspired by relation reasoning networks and design a multiscale regional relation feature segmentation network (MS-RRFSegNet) based on the RRFRM to semantically label supervoxels. Finally, the supervoxel-level inferences are transformed into point-level fine-grained predictions. The proposed framework is evaluated on two open benchmarks (Paris-Lille-3D and Semantic3D). The evaluation results show that the proposed method achieves competitive overall performance and outperforms related approaches in several object categories. An implementation of our method is available at: https://github.com/HiphonL/MS_RRFSegNet .
Record number: A2020-738
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2985695
Online publication date: 28/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2985695
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96363
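The pipeline above reasons at supervoxel level and then maps inferences back to points. The two bookkeeping steps at either end can be sketched as follows (a minimal illustration, assuming a precomputed point-to-supervoxel assignment; the authors' implementation is at the GitHub link above):

```python
import numpy as np

def pool_supervoxel_features(point_features, point_sv, n_supervoxels):
    """Average per-point features into one descriptor per supervoxel."""
    dim = point_features.shape[1]
    sums = np.zeros((n_supervoxels, dim))
    counts = np.zeros(n_supervoxels)
    # Unbuffered accumulation handles repeated supervoxel indices.
    np.add.at(sums, point_sv, point_features)
    np.add.at(counts, point_sv, 1)
    return sums / np.maximum(counts, 1)[:, None]

def supervoxel_labels_to_points(sv_labels, point_sv):
    """Broadcast supervoxel-level class predictions back to every point."""
    return sv_labels[point_sv]
```

Working on a few thousand supervoxels instead of millions of raw points is what makes the relation-reasoning stage tractable.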
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8301 - 8315 [article]

Building change detection using a shape context similarity model for LiDAR data / Xuzhe Lyu in ISPRS International journal of geo-information, vol 9 n° 11 (November 2020)
[article]
Title: Building change detection using a shape context similarity model for LiDAR data
Document type: Article/Communication
Authors: Xuzhe Lyu; Ming Hao; Wenzhong Shi
Publication year: 2020
Pagination: n° 678
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] lasergrammetry
[IGN descriptor terms] object-based image analysis
[IGN descriptor terms] change detection
[IGN descriptor terms] building detection
[IGN descriptor terms] lidar data
[IGN descriptor terms] image fusion
[IGN descriptor terms] digital surface model
[IGN descriptor terms] pattern recognition
[IGN descriptor terms] image segmentation
[IGN descriptor terms] point cloud
Abstract: (author) In this paper, a novel building change detection approach is proposed using statistical region merging (SRM) and a shape context similarity model for Light Detection and Ranging (LiDAR) data. First, digital surface models (DSMs) are generated from LiDAR data acquired at two different epochs, and the difference data D-DSM is created by differencing. Second, to reduce the noise and registration error of the pixel-based method, the SRM algorithm is applied to segment the D-DSM, and multi-scale segmentation results are obtained under different scale values. Then, the shape context similarity model is used to calculate the shape similarity between the segmented objects and the buildings. Finally, the refined building change map is produced by the k-means clustering method based on shape context similarity and area-to-length ratio. The experimental results indicate that the proposed method effectively improves the accuracy of building change detection compared with several popular change detection methods.
Record number: A2020-732
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi9110678
Online publication date: 15/11/2020
Online: https://doi.org/10.3390/ijgi9110678
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96345
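Two of the quantities in the pipeline above are easy to make concrete: the difference DSM and the area-to-length ratio fed to the final clustering. A sketch under assumed conventions (co-registered rasters, 4-connectivity boundary counting; not the authors' exact formulation):

```python
import numpy as np

def d_dsm(dsm_t1, dsm_t2):
    """Difference data between two co-registered digital surface models."""
    return dsm_t2 - dsm_t1

def area_to_length_ratio(mask):
    """Area divided by boundary length for one binary object mask.

    A pixel counts as boundary if any of its 4-neighbours lies outside
    the object; compact blobs score high, thin or ragged shapes score low.
    """
    padded = np.pad(mask.astype(bool), 1)
    core = padded[1:-1, 1:-1]
    interior = (core & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = core & ~interior
    return core.sum() / max(boundary.sum(), 1)
```

On a 4×4 square the ratio is 16/12: sixteen pixels, of which the twelve outer ones touch the background.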
in ISPRS International journal of geo-information > vol 9 n° 11 (November 2020) . - n° 678 [article]

A deep learning framework for matching of SAR and optical imagery / Lloyd Haydn Hughes in ISPRS Journal of photogrammetry and remote sensing, vol 169 (November 2020)
[article]
Title: A deep learning framework for matching of SAR and optical imagery
Document type: Article/Communication
Authors: Lloyd Haydn Hughes; Diego Marcos; Sylvain Lobry; et al.
Publication year: 2020
Pagination: pp 166 - 179
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] mixed image processing
[IGN descriptor terms] image matching
[IGN descriptor terms] deep learning
[IGN descriptor terms] sparse data
[IGN descriptor terms] feature extraction
[IGN descriptor terms] data fusion
[IGN descriptor terms] georeferencing
[IGN descriptor terms] optical image
[IGN descriptor terms] SAR image
[IGN descriptor terms] image superimposition
Abstract: (author) SAR and optical imagery provide highly complementary information about observed scenes. A combined use of these two modalities is thus desirable in many data fusion scenarios. However, any data fusion task requires measurements to be accurately aligned. While images from both data sources are usually provided in a georeferenced manner, the geo-localization of optical images is often inaccurate due to the propagation of angular measurement errors. Many methods for the matching of homologous image regions exist for both SAR and optical imagery; however, these methods are unsuitable for SAR-optical image matching due to significant geometric and radiometric differences between the two modalities. In this paper, we present a three-step framework for sparse image matching of SAR and optical imagery, whereby each step is encoded by a deep neural network. We first predict regions in each image which are deemed most suitable for matching. A correspondence heatmap is then generated through a multi-scale, feature-space cross-correlation operator. Finally, outliers are removed by classifying the correspondence surface as a positive or negative match. Our experiments show that the proposed approach provides a substantial improvement over previous methods for SAR-optical image matching and can be used to register even large-scale scenes. This opens up the possibility of using both types of data jointly, for example for the improvement of the geo-localization of optical satellite imagery or for multi-sensor stereogrammetry.
Record number: A2020-639
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.09.012
Online publication date: 03/12/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96062
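The second step of the framework builds a correspondence heatmap by cross-correlating deep features. A simplified, single-scale stand-in for that operator in plain NumPy (the paper uses a learned multi-scale version on network features; this sketch only shows the sliding zero-mean correlation itself):

```python
import numpy as np

def correspondence_heatmap(template_feat, search_feat):
    """Slide a small feature patch over a larger feature map and record
    the zero-mean correlation score at each offset.

    Both inputs are channels-last arrays; the argmax of the returned
    heatmap is the best candidate match location.
    """
    H, W, _ = search_feat.shape
    h, w, _ = template_feat.shape
    t = template_feat - template_feat.mean()
    heat = np.empty((H - h + 1, W - w + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            window = search_feat[i:i + h, j:j + w]
            heat[i, j] = np.sum(t * (window - window.mean()))
    return heat
```

Correlating in feature space rather than pixel space is what lets the method bridge the radiometric gap between SAR and optical imagery.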
in ISPRS Journal of photogrammetry and remote sensing > vol 169 (November 2020) . - pp 166 - 179 [article]

Copies (3)
Barcode	Call number	Type	Location	Section	Availability
081-2020111	SL	Journal	Documentation centre	Journals room	Available
081-2020113	DEP-RECP	Journal	MATIS	Unit deposit	Not for loan
081-2020112	DEP-RECF	Journal	Nancy	Unit deposit	Not for loan

High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network / Fengpeng Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 11 (November 2020)
[article]
Title: High-resolution remote sensing image scene classification via key filter bank based on convolutional neural network
Document type: Article/Communication
Authors: Fengpeng Li; Ruyi Feng; Wei Han; et al.
Publication year: 2020
Pagination: pp 8077 - 8092
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] optical image processing
[IGN descriptor terms] deep learning
[IGN descriptor terms] convolutional neural network classification
[IGN descriptor terms] semantic labelling
[IGN descriptor terms] feature extraction
[IGN descriptor terms] digital image filtering
[IGN descriptor terms] high-resolution image
[IGN descriptor terms] dataset
[IGN descriptor terms] statistical test
Abstract: (author) High-resolution remote sensing (HRRS) image scene classification has attracted an enormous amount of attention due to its wide application in a range of tasks. With the rapid development of deep learning (DL), models based on convolutional neural networks (CNNs) have made competitive achievements in HRRS image scene classification because of the excellent representation capacity of DL. The scene labels of HRRS images depend strongly on the combination of global information and information from key regions or locations. However, most existing CNN-based models tend either to represent only the global features of images or to overstate local information captured from key regions or locations, which may confuse different categories. To address this issue, a key region or location capturing method called key filter bank (KFB) is proposed in this article; KFB retains global information at the same time. This method can be combined with different CNN models to improve the performance of HRRS imagery scene classification. Moreover, for the convenience of practical tasks, an end-to-end model called KFBNet, in which KFB is combined with DenseNet-121, is proposed to compare performance with existing models. The model is evaluated on public benchmark data sets and performs better on these benchmarks than the state-of-the-art methods.
Record number: A2020-683
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2987060
Online publication date: 23/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2987060
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96208
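The key idea above, keeping global context while emphasizing responses from key regions, can be caricatured with a pooling sketch (an illustration of the idea only, not the KFB module; the function name and the top-k scheme are assumptions):

```python
import numpy as np

def global_plus_key_pooling(feature_map, k=4):
    """Concatenate global average pooling with the mean of the k
    strongest spatial activations per channel, so the descriptor keeps
    both scene-level context and key-location responses.
    """
    h, w, c = feature_map.shape
    flat = feature_map.reshape(h * w, c)
    global_part = flat.mean(axis=0)           # global context
    key_part = np.sort(flat, axis=0)[-k:].mean(axis=0)  # key responses
    return np.concatenate([global_part, key_part])
```

Plain global average pooling alone would wash out the few strong activations that distinguish, say, an airport from a generic paved area; keeping the top-k responses preserves them.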
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 11 (November 2020) . - pp 8077 - 8092 [article]

Topographic connection method for automated mapping of landslide inventories, study case: semi urban sub-basin from Monterrey, Northeast of México / Nelly L. Ramirez Serrato in Geocarto international, vol 35 n° 15 ([01/11/2020])
Permalink
Textural classification of remotely sensed images using multiresolution techniques / Rizwan Ahmed Ansari in Geocarto international, vol 35 n° 14 ([15/10/2020])
Permalink
A graph convolutional network model for evaluating potential congestion spots based on local urban built environments / Kun Qin in Transactions in GIS, Vol 24 n° 5 (October 2020)
Permalink
Multiview automatic target recognition for infrared imagery using collaborative sparse priors / Xuelu Li in IEEE Transactions on geoscience and remote sensing, vol 58 n° 10 (October 2020)
Permalink
Tree species classification using structural features derived from terrestrial laser scanning / Louise Terryn in ISPRS Journal of photogrammetry and remote sensing, vol 168 (October 2020)
Permalink
Crater detection and registration of planetary images through marked point processes, multiscale decomposition, and region-based analysis / David Solarna in IEEE Transactions on geoscience and remote sensing, vol 58 n° 9 (September 2020)
Permalink
Local color and morphological image feature based vegetation identification and its application to human environment street view vegetation mapping, or how green is our county? / Istvan G. Lauko in Geo-spatial Information Science, vol 23 n° 3 (September 2020)
Permalink
A novel deep learning instance segmentation model for automated marine oil spill detection / Shamsudeen Temitope Yekeen in ISPRS Journal of photogrammetry and remote sensing, vol 167 (September 2020)
Permalink
A novel deep network and aggregation model for saliency detection / Ye Liang in The Visual Computer, vol 36 n° 9 (September 2020)
Permalink
Precise extraction of citrus fruit trees from a Digital Surface Model using a unified strategy: detection, delineation, and clustering / Ali Ozgun Ok in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 9 (September 2020)
Permalink