Remote sensing. Vol 14 n° 12. Published: 15/06/2022
Contents
A dual-generator translation network fusing texture and structure features for SAR and optical image matching / Han Nie in Remote sensing, Vol 14 n° 12 (June-2 2022)
[article]
Title: A dual-generator translation network fusing texture and structure features for SAR and optical image matching
Document type: Article/Communication
Authors: Han Nie; Zhitao Fu; Bo-Hui Tang; et al.
Publication year: 2022
Article number: 2946
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] detail aggregation
[IGN terms] image matching
[IGN terms] image fusion
[IGN terms] moiré radar image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] signal-to-noise ratio
[IGN terms] rift
[IGN terms] image texture
Abstract: (author) The matching problem for heterologous remote sensing images can be simplified to the matching problem for pseudo-homologous remote sensing images via image translation to improve the matching performance. Among such applications, the translation of synthetic aperture radar (SAR) and optical images is the current focus of research. However, the existing methods for SAR-to-optical translation have two main drawbacks. First, single generators usually sacrifice either structure or texture features to balance the model performance and complexity, which often results in textural or structural distortion; second, due to large nonlinear radiation distortions (NRDs) in SAR images, there are still visual differences between the pseudo-optical images generated by current generative adversarial networks (GANs) and real optical images. Therefore, we propose a dual-generator translation network for fusing structure and texture features. On the one hand, the proposed network has dual generators (a texture generator and a structure generator) with good cross-coupling to obtain high-accuracy structure and texture features; on the other hand, frequency-domain and spatial-domain loss functions are introduced to reduce the differences between pseudo-optical images and real optical images. Extensive quantitative and qualitative experiments show that our method achieves state-of-the-art performance on publicly available optical and SAR datasets. Our method improves the peak signal-to-noise ratio (PSNR) by 21.0%, the chromatic feature similarity (FSIMc) by 6.9%, and the structural similarity (SSIM) by 161.7% in terms of the average metric values on all test images compared with the next best results. In addition, we present a before-and-after translation comparison experiment to show that our method improves the average keypoint repeatability by approximately 111.7% and the matching accuracy by approximately 5.25%.
Record number: A2022-562
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs14122946
Online publication date: 20/06/2022
Online: https://doi.org/10.3390/rs14122946
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101237
in Remote sensing > Vol 14 n° 12 (June-2 2022) . - n° 2946
[article]
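The abstract above quantifies translation quality with PSNR and SSIM and mentions paired frequency-domain and spatial-domain loss terms. The following Python sketch is only a minimal illustration of those two ideas, not the authors' implementation: the function names, image sizes, and the simple Fourier-magnitude L1 distance are assumptions made for the example.

import numpy as np

def psnr(reference, generated, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means the generated image is closer to the reference.
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def frequency_l1(reference, generated):
    # One simple way to compare images in the frequency domain: L1 distance between 2-D Fourier magnitudes.
    ref_spec = np.abs(np.fft.fft2(reference.astype(np.float64)))
    gen_spec = np.abs(np.fft.fft2(generated.astype(np.float64)))
    return float(np.mean(np.abs(ref_spec - gen_spec)))

# Hypothetical usage on two single-band 256 x 256 images.
optical = np.random.randint(0, 256, (256, 256))
pseudo_optical = np.random.randint(0, 256, (256, 256))
print(psnr(optical, pseudo_optical), frequency_l1(optical, pseudo_optical))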
Encoder-decoder structure with multiscale receptive field block for unsupervised depth estimation from monocular video / Songnan Chen in Remote sensing, Vol 14 n° 12 (June-2 2022)
[article]
Title: Encoder-decoder structure with multiscale receptive field block for unsupervised depth estimation from monocular video
Document type: Article/Communication
Authors: Songnan Chen; Junyu Han; Mengxia Tang; et al.
Publication year: 2022
Article number: 2906
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] unsupervised learning
[IGN terms] classification by convolutional neural network
[IGN terms] stereoscopic pair
[IGN terms] training data (machine learning)
[IGN terms] single image
[IGN terms] optimization (mathematics)
[IGN terms] depth
[IGN terms] image sequence
[IGN terms] structure-from-motion
Abstract: (author) Monocular depth estimation is a fundamental yet challenging task in computer vision, as depth information is lost when 3D scenes are mapped to 2D images. Although deep learning-based methods have led to considerable improvements for this task in a single image, most existing approaches still fail to overcome this limitation. Supervised learning methods model depth estimation as a regression problem and, as a result, require large amounts of ground-truth depth data for training in actual scenarios. Unsupervised learning methods treat depth estimation as the synthesis of a new disparity map, which means that rectified stereo image pairs need to be used as the training dataset. Aiming to solve this problem, we present an encoder-decoder based framework, which infers depth maps from monocular video snippets in an unsupervised manner. First, we design an unsupervised learning scheme for the monocular depth estimation task based on the basic principles of structure from motion (SfM); it uses only adjacent video clips rather than paired training data as supervision. Second, our method predicts two confidence masks to improve the robustness of the depth estimation model and avoid the occlusion problem. Finally, we leverage the largest-scale and minimum depth loss instead of the multiscale and average loss to improve the accuracy of depth estimation. The experimental results on the benchmark KITTI dataset for depth estimation show that our method outperforms competing unsupervised methods.
Record number: A2022-563
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.3390/rs14122906
Online: https://doi.org/10.3390/rs14122906
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101240
in Remote sensing > Vol 14 n° 12 (June-2 2022) . - n° 2906
[article]
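The abstract above describes an SfM-based unsupervised training signal with confidence masks and a minimum-style depth loss. As a rough sketch only (not the paper's code), the snippet below shows one common way such methods compute a per-pixel minimum photometric error over source frames already warped into the target view; the helper name and frame shapes are assumptions for the example.

import numpy as np

def min_photometric_loss(target, warped_sources):
    # Mean of the per-pixel minimum L1 error across warped source frames;
    # taking the minimum lets occluded pixels fall back to the best-matching source.
    errors = np.stack([np.abs(target - w) for w in warped_sources], axis=0)
    return float(errors.min(axis=0).mean())

# Hypothetical usage: one target frame and two neighbouring frames already warped into its view.
target = np.random.rand(128, 416)
warped = [np.random.rand(128, 416), np.random.rand(128, 416)]
print(min_photometric_loss(target, warped))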