Descriptor
Documents available in this category (18)



Siamese KPConv: 3D multiple change detection from raw point clouds using deep learning / Iris de Gelis in ISPRS Journal of photogrammetry and remote sensing, vol 197 (March 2023)
[article]
Title: Siamese KPConv: 3D multiple change detection from raw point clouds using deep learning
Document type: Article/Communication
Authors: Iris de Gelis, Author; Sébastien Lefèvre, Author; Thomas Corpetti, Author
Publication year: 2023
Article pages: pp 274 - 291
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] deep learning
[IGN terms] building
[IGN terms] change detection
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] digital surface model
[IGN terms] Siamese neural network
[IGN terms] point cloud
[IGN terms] vegetation
[IGN terms] urban area
Abstract: (author) This study is concerned with urban change detection and categorization in point clouds. In such situations, objects are mainly characterized by their vertical axis, and the use of native 3D data such as 3D Point Clouds (PCs) is, in general, preferred to rasterized versions because of the significant loss of information implied by any rasterization process. Yet, for obvious practical reasons, most existing studies only focus on 2D images for change detection purposes. In this paper, we propose a method capable of performing change detection directly within 3D data. Despite recent deep learning developments in remote sensing, to the best of our knowledge there is no such method to tackle multi-class change segmentation that directly processes raw 3D PCs. Therefore, based on advances in deep learning for change detection in 2D images and for analysis of 3D point clouds, we propose a deep Siamese KPConv network that deals with raw 3D PCs to perform change detection and categorization in a single step. Experiments are conducted on synthetic and real data of various kinds (LiDAR, multi-sensor). Tests performed on simulated low density LiDAR and multi-sensor datasets show that our proposed method can obtain up to 80% of mean IoU over classes of changes, leading to an improvement ranging from 10% to 30% over the state-of-the-art. A similar range of improvements is attainable on real data. Then, we show that pre-training Siamese KPConv on simulated PCs allows us to greatly reduce (by more than 3,000×) the annotations required on real data. This is a highly significant result for dealing with practical scenarios. Finally, an adaptation of Siamese KPConv is realized to deal with change classification at PC scale. Our network outperforms the current state-of-the-art deep learning method by 23% and 15% of mean IoU when assessed on synthetic and public Change3D datasets, respectively. The code is available at the following link: https://github.com/IdeGelis/torch-points3d-SiameseKPConv.
Record number: A2023-147
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2023.02.001
Online publication date: 17/02/2023
Online: https://doi.org/10.1016/j.isprsjprs.2023.02.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102805
in ISPRS Journal of photogrammetry and remote sensing > vol 197 (March 2023) . - pp 274 - 291 [article]
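The Siamese KPConv notice above describes a network in which both epochs of a raw point cloud pass through the same encoder (shared weights) and per-point change classes are predicted from compared features. Below is a minimal, hypothetical sketch of that weight-sharing and feature-comparison pattern; it is not the authors' Siamese KPConv (which uses kernel point convolutions and nearest-neighbour feature transfer between epochs), and the PointNet-style encoder, dimensions and class count are illustrative assumptions only.

```python
# Minimal sketch of the Siamese shared-encoder idea: both epochs go through the
# SAME encoder, per-point features are compared (here by subtraction), and a
# small head predicts a change class per point. All names are hypothetical.
import torch
import torch.nn as nn

class SharedPointEncoder(nn.Module):
    """Per-point MLP encoder applied identically to both epochs (shared weights)."""
    def __init__(self, in_dim=3, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):          # pts: (N, 3) raw coordinates
        return self.mlp(pts)         # (N, feat_dim)

class SiameseChangeSegmenter(nn.Module):
    def __init__(self, n_change_classes=4, feat_dim=64):
        super().__init__()
        self.encoder = SharedPointEncoder(feat_dim=feat_dim)   # one encoder, two inputs
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(),
            nn.Linear(32, n_change_classes),
        )

    def forward(self, pts_t0, pts_t1):
        # Both epochs are encoded with the same weights (Siamese principle).
        f0 = self.encoder(pts_t0)
        f1 = self.encoder(pts_t1)
        # Change classes are predicted from the feature difference. The toy
        # example assumes point-to-point correspondence between epochs.
        return self.head(f1 - f0)    # (N, n_change_classes) logits per point

if __name__ == "__main__":
    model = SiameseChangeSegmenter()
    t0 = torch.rand(1024, 3)
    t1 = torch.rand(1024, 3)
    print(model(t0, t1).shape)       # torch.Size([1024, 4])
```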
DSNUNet: An improved forest change detection network by combining Sentinel-1 and Sentinel-2 images / Jiawei Jiang in Remote sensing, vol 14 n° 19 (October-1 2022)
[article]
Title: DSNUNet: An improved forest change detection network by combining Sentinel-1 and Sentinel-2 images
Document type: Article/Communication
Authors: Jiawei Jiang, Author; Yuanjun Xing, Author; Wei Wei, Author; et al., Author
Publication year: 2022
Article pages: n° 5046
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] China
[IGN terms] change detection
[IGN terms] forest management
[IGN terms] speckled radar image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] Siamese neural network
[IGN terms] forest resources
Abstract: (author) The use of remote sensing images to detect forest changes is of great significance for forest resource management. With the development and implementation of deep learning algorithms in change detection, a large number of models have been designed to detect changes in multi-phase remote sensing images. Although synthetic aperture radar (SAR) data have strong potential for application in forest change detection tasks, most existing deep learning-based models have been designed for optical imagery. Therefore, to effectively combine optical and SAR data in forest change detection, this paper proposes a double Siamese branch-based change detection network called DSNUNet. DSNUNet uses two sets of feature branches to extract features from dual-phase optical and SAR images and employs shared weights to combine features into groups. In the proposed DSNUNet, different feature extraction branch widths were used to compensate for the difference in the amount of information between optical and SAR images. The proposed DSNUNet was validated by experiments on a manually annotated forest change detection dataset. According to the obtained results, the proposed method outperformed other change detection methods, achieving an F1-score of 76.40%. In addition, different combinations of widths between feature extraction branches were analyzed in this study. The results revealed optimal model performance with initial channel numbers of 32 for the optical branch and 8 for the SAR branch. The prediction results demonstrated the effectiveness of the proposed method in accurately predicting forest changes and suppressing cloud interference to some extent.
Record number: A2022-772
Authors' affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.3390/rs14195046
Online publication date: 10/10/2022
Online: https://doi.org/10.3390/rs14195046
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101800
in Remote sensing > vol 14 n° 19 (October-1 2022) . - n° 5046 [article]
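The DSNUNet notice above describes two Siamese feature branches, one for optical and one for SAR imagery, each applied with shared weights to both acquisition dates, with a wider optical branch (initial channel numbers of 32 versus 8). The sketch below only illustrates that double-branch, shared-weight layout; the single convolution block per branch, the simple concatenation fusion, the band counts and all names are assumptions, not the published architecture.

```python
# Minimal sketch of a double-Siamese layout: an optical branch and a SAR branch,
# each reused for both dates, with the optical branch wider than the SAR one.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DoubleSiameseChangeNet(nn.Module):
    def __init__(self, opt_bands=4, sar_bands=2, opt_width=32, sar_width=8):
        super().__init__()
        self.opt_branch = conv_block(opt_bands, opt_width)   # shared over both dates
        self.sar_branch = conv_block(sar_bands, sar_width)   # shared over both dates
        fused = 2 * (opt_width + sar_width)                  # features of both dates
        self.head = nn.Conv2d(fused, 1, kernel_size=1)       # binary change logits

    def forward(self, opt_t1, opt_t2, sar_t1, sar_t2):
        f = torch.cat([
            self.opt_branch(opt_t1), self.opt_branch(opt_t2),
            self.sar_branch(sar_t1), self.sar_branch(sar_t2),
        ], dim=1)
        return self.head(f)                                  # (B, 1, H, W)

if __name__ == "__main__":
    net = DoubleSiameseChangeNet()
    o1 = torch.rand(1, 4, 64, 64); o2 = torch.rand(1, 4, 64, 64)
    s1 = torch.rand(1, 2, 64, 64); s2 = torch.rand(1, 2, 64, 64)
    print(net(o1, o2, s1, s2).shape)                         # torch.Size([1, 1, 64, 64])
```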
Deep learning feature representation for image matching under large viewpoint and viewing direction change / Lin Chen in ISPRS Journal of photogrammetry and remote sensing, vol 190 (August 2022)
[article]
Title: Deep learning feature representation for image matching under large viewpoint and viewing direction change
Document type: Article/Communication
Authors: Lin Chen, Author; Christian Heipke, Author
Publication year: 2022
Article pages: pp 94 - 112
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] feature extraction
[IGN terms] oblique aerial image
[IGN terms] image orientation
[IGN terms] pattern recognition
[IGN terms] Siamese neural network
[IGN terms] SIFT (algorithm)
Abstract: (author) Feature based image matching has been a research focus in photogrammetry and computer vision for decades, as it is the basis for many applications where multi-view geometry is needed. A typical feature based image matching algorithm contains five steps: feature detection, affine shape estimation, orientation assignment, description and descriptor matching. This paper contains innovative work in different steps of feature matching based on convolutional neural networks (CNN). For the affine shape estimation and orientation assignment, the main contribution of this paper is twofold. First, we define a canonical shape and orientation for each feature. As a consequence, instead of the usual Siamese CNN, only single-branch CNNs need to be employed to learn the affine shape and orientation parameters, which turns the related tasks from supervised to self-supervised learning problems, removing the need for known matching relationships between features. Second, the affine shape and orientation are solved simultaneously. To the best of our knowledge, this is the first time these two modules are reported to have been successfully trained together. In addition, for the descriptor learning part, a new weak match finder is suggested to better explore the intra-variance of the appearance of matched features. For any input feature patch, a transformed patch that lies far from the input feature patch in descriptor space is defined as a weak match feature. A weak match finder network is proposed to actively find these weak match features; they are subsequently used in the standard descriptor learning framework. The proposed modules are integrated into an inference pipeline to form the proposed feature matching algorithm. The algorithm is evaluated on standard benchmarks and is used to solve for the parameters of image orientation of aerial oblique images. It is shown that deep learning feature based image matching leads to more registered images, more reconstructed 3D points and a more stable block geometry than conventional methods. The code is available at https://github.com/Childhoo/Chen_Matcher.git.
Record number: A2022-502
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.06.003
Online publication date: 14/06/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.06.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101000
in ISPRS Journal of photogrammetry and remote sensing > vol 190 (August 2022) . - pp 94 - 112 [article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2022081 | SL | Journal | Documentation centre | Journals in reading room | Available
081-2022083 | DEP-RECP | Journal | LaSTIG | Unit deposit | Not for loan
081-2022082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
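The matching pipeline in the notice above ends with descriptor matching. As a small, generic illustration of that final step (not the paper's CNN modules or its weak-match-finder training), the sketch below performs mutual nearest-neighbour matching between two sets of L2-normalised descriptors; the function name and the random data are assumptions.

```python
# Mutual nearest-neighbour matching between two descriptor sets under Euclidean
# distance: a pair (i, j) is kept only if i is the best match of j and vice versa.
import numpy as np

def mutual_nn_matches(desc_a: np.ndarray, desc_b: np.ndarray):
    """desc_a: (Na, D), desc_b: (Nb, D). Returns mutual nearest-neighbour pairs."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    a2 = (desc_a ** 2).sum(axis=1, keepdims=True)        # (Na, 1)
    b2 = (desc_b ** 2).sum(axis=1, keepdims=True).T      # (1, Nb)
    d2 = a2 + b2 - 2.0 * desc_a @ desc_b.T               # (Na, Nb)
    nn_ab = d2.argmin(axis=1)                            # best b for each a
    nn_ba = d2.argmin(axis=0)                            # best a for each b
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(100, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
    b = rng.normal(size=(120, 128)); b /= np.linalg.norm(b, axis=1, keepdims=True)
    print(len(mutual_nn_matches(a, b)), "mutual matches")
```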
Spatial–spectral attention network guided with change magnitude image for land cover change detection using remote sensing images / Zhiyong Lv in IEEE Transactions on geoscience and remote sensing, vol 60 n° 8 (August 2022)
[article]
Title: Spatial–spectral attention network guided with change magnitude image for land cover change detection using remote sensing images
Document type: Article/Communication
Authors: Zhiyong Lv, Author; Fengjun Wang, Author; Guoqing Cui, Author; et al., Author
Publication year: 2022
Article pages: n° 4412712
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] change detection
[IGN terms] Landsat-TM image
[IGN terms] dataset
[IGN terms] land cover
[IGN terms] risk prevention
[IGN terms] Siamese neural network
Abstract: (author) Land cover change detection (LCCD) using remote sensing images (RSIs) plays an important role in natural disaster evaluation, forest deformation monitoring, and wildfire destruction detection. However, bitemporal images are usually acquired under different atmospheric conditions, such as sun height and soil moisture, which usually cause pseudo-changes and noise in the change detection map. Changed areas on the ground also generally have various shapes and sizes, consequently making the utilization of spatial contextual information a challenging task. In this article, we design a novel neural network with a spatial–spectral attention mechanism and multiscale dilation convolution modules. This work is based on the previously demonstrated promising performance of convolutional neural networks for LCCD with RSIs and attempts to capture more positive changes and further enhance the detection accuracies. The learning of the proposed neural network is guided with a change magnitude image. The performance and feasibility of the proposed network are validated with four pairs of RSIs that depict real land cover change events on the Earth's surface. Comparison of the performance of the proposed approach with that of five state-of-the-art methods indicates the superiority of the proposed network in terms of ten quantitative evaluation metrics and visual performance. For example, the proposed network achieved an improvement of about 0.08%–14.87% in terms of overall accuracy (OA) for Dataset-A.
Record number: A2022-660
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3197901
Online publication date: 17/08/2022
Online: https://doi.org/10.1109/TGRS.2022.3197901
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101516
in IEEE Transactions on geoscience and remote sensing > vol 60 n° 8 (August 2022) . - n° 4412712 [article]
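The network in the notice above is guided by a change magnitude image. The sketch below shows one common, assumed way such a guidance image can be computed from co-registered bitemporal images: the per-pixel Euclidean norm of the spectral difference, rescaled to [0, 1]. The attention and dilated-convolution modules of the paper are not reproduced here.

```python
# Change magnitude image (CMI): per-pixel spectral Euclidean distance between
# two co-registered acquisitions, min-max rescaled to [0, 1].
import numpy as np

def change_magnitude_image(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """img_t1, img_t2: (H, W, B) co-registered images. Returns an (H, W) magnitude map."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    cmi = np.sqrt((diff ** 2).sum(axis=-1))          # spectral Euclidean distance
    rng = cmi.max() - cmi.min()
    return (cmi - cmi.min()) / rng if rng > 0 else np.zeros_like(cmi)

if __name__ == "__main__":
    t1 = np.random.rand(64, 64, 6)
    t2 = t1.copy()
    t2[20:30, 20:30, :] += 0.5                       # simulate a changed patch
    cmi = change_magnitude_image(t1, t2)
    print(cmi.shape, round(float(cmi[25, 25]), 2), round(float(cmi[0, 0]), 2))
```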
Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
[article]
Title: Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery
Document type: Article/Communication
Authors: Qian Shen, Author; Jiru Huang, Author; Min Wang, Author; et al., Author
Publication year: 2022
Article pages: pp 78 - 94
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] change detection
[IGN terms] building detection
[IGN terms] qualitative data
[IGN terms] quantitative estimation
[IGN terms] image fusion
[IGN terms] high-resolution image
[IGN terms] multiband image
[IGN terms] dataset
[IGN terms] Siamese neural network
Abstract: (author) In the field of remote sensing applications, semantic change detection (SCD) simultaneously identifies changed areas and their change types by jointly conducting bitemporal image classification and change detection. It facilitates change reasoning and provides more application value than binary change detection (BCD), which offers only a binary map of the changed/unchanged areas. In this study, we propose a multitask Siamese network, named the semantic feature-constrained change detection (SFCCD) network, for building change detection in bitemporal high-spatial-resolution (HSR) images. SFCCD conducts feature extraction, semantic segmentation and change detection simultaneously, where change detection and semantic segmentation are the main and auxiliary tasks, respectively. For the segmentation task, ResNet50 is used to conduct image feature extraction, and the extracted semantic features are provided to the change detection task via a series of skip connections. For the change detection task, a global channel attention (GCA) module and a multiscale feature fusion (MSFF) module are designed, where high-level features offer training guidance to the low-level feature maps, and multiscale features are fused with multiple convolutions that possess different receptive fields. In bitemporal HSR images with different view angles, high-rise buildings have different directional height displacements, which generally cause serious false alarms for common change detection methods. However, known public building change detection datasets often lack buildings with height displacement. We thus create the Nanjing Dataset (NJDS) and design the aforementioned network structures and modules to target this issue. Experiments for method validation and comparison are conducted on the NJDS and two additional public datasets, i.e., the WHU Building Dataset (WBDS) and the Google Dataset (GDS). Ablation experiments on the NJDS show that the joint utilization of the GCA and MSFF modules performs better than several classic modules, including atrous spatial pyramid pooling (ASPP), efficient spatial pyramid (ESP), channel attention block (CAB) and global attention upsampling (GAU) modules, in dealing with building height displacement. Furthermore, SFCCD achieves higher accuracy in terms of the OA, recall, F1-score and mIoU measures than several state-of-the-art change detection methods, including the deeply supervised image fusion network (DSIFN), the dual-task constrained deep Siamese convolutional network (DTCDSCN), and multitask U-Net (MTU-Net).
Record number: A2022-412
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.05.001
Online publication date: 12/05/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.05.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100762
in ISPRS Journal of photogrammetry and remote sensing > vol 189 (July 2022) . - pp 78 - 94 [article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
081-2022071 | SL | Journal | Documentation centre | Journals in reading room | Available
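The SFCCD notice above describes a multitask Siamese design: one shared encoder over both dates, an auxiliary per-date semantic segmentation head, and a main change detection head. The sketch below illustrates only that multitask wiring; the tiny encoder (instead of ResNet50), the absence of the GCA and MSFF modules, the loss weighting hinted at in the comment, and all names are assumptions.

```python
# Multitask Siamese pattern: shared encoder, per-date segmentation head
# (auxiliary task), change head on both dates' features (main task).
import torch
import torch.nn as nn

class MultitaskSiameseCD(nn.Module):
    def __init__(self, in_ch=3, width=16, n_sem_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                          # shared by both dates
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(width, n_sem_classes, 1)     # auxiliary task
        self.change_head = nn.Conv2d(2 * width, 1, 1)          # main task

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        seg1, seg2 = self.seg_head(f1), self.seg_head(f2)      # per-date semantic maps
        change = self.change_head(torch.cat([f1, f2], dim=1))  # change logits
        return seg1, seg2, change

if __name__ == "__main__":
    net = MultitaskSiameseCD()
    x1 = torch.rand(1, 3, 64, 64); x2 = torch.rand(1, 3, 64, 64)
    s1, s2, c = net(x1, x2)
    # A multitask loss would typically down-weight the auxiliary terms, e.g.
    # loss = bce(c, change_gt) + 0.5 * (ce(s1, sem_gt1) + ce(s2, sem_gt2))
    print(s1.shape, c.shape)
```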
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Permalink
Siamese Adversarial Network for image classification of heavy mineral grains / Huizhen Hao in Computers & geosciences, vol 159 (February 2022)
Permalink
Effective triplet mining improves training of multi-scale pooled CNN for image retrieval / Federico Vaccaro in Machine Vision and Applications, vol 33 n° 1 (January 2022)
Permalink
Adaptive feature weighted fusion nested U-Net with discrete wavelet transform for change detection of high-resolution remote sensing images / Congcong Wang in Remote sensing, vol 13 n° 24 (December-2 2021)
Permalink
Deep-learning-based burned area mapping using the synergy of Sentinel-1&2 data / Qi Zhang in Remote sensing of environment, vol 264 (October 2021)
Permalink
Multiple convolutional features in Siamese networks for object tracking / Zhenxi Li in Machine Vision and Applications, vol 32 n° 3 (May 2021)
Permalink
Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
Permalink
Unsupervised deep representation learning for real-time tracking / Ning Wang in International journal of computer vision, vol 129 n° 2 (February 2021)
Permalink
Convolutional neural networks for change analysis in earth observation images with noisy labels and domain shifts / Rodrigo Caye Daudt (2020)
PermalinkPermalink