Author details
Author: Zhongling Huang
Available documents by this author (2)
What, where, and how to transfer in SAR target recognition based on deep CNNs / Zhongling Huang in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
[article]
Title: What, where, and how to transfer in SAR target recognition based on deep CNNs
Document type: Article/Communication
Authors: Zhongling Huang; Zongxu Pan; Bin Lei
Year of publication: 2020
Pages: pp 2324 - 2336
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] target detection
[IGN terms] multi-source data
[IGN terms] optical image
[IGN terms] moiré radar image
[IGN terms] data source
[IGN terms] data transmission
Abstract: (author) Deep convolutional neural networks (DCNNs) have recently attracted much attention in remote sensing. Compared with the large-scale annotated data sets available for natural images, the lack of labeled data in remote sensing is an obstacle to training a deep network well, especially in synthetic aperture radar (SAR) image interpretation. Transfer learning provides an effective way to solve this problem by borrowing knowledge from a source task for the target task. In optical remote sensing applications, a prevalent mechanism is to fine-tune an existing model pretrained on a large-scale natural image data set such as ImageNet. However, this scheme does not achieve satisfactory performance for SAR applications because of the prominent discrepancy between SAR and optical images. In this article, we discuss three issues that have seldom been studied in detail before: 1) which networks and source tasks are better to transfer to SAR targets; 2) in which layers transferred features are more generic to SAR targets; and 3) how to transfer effectively to SAR target recognition. Based on this analysis, a transitive transfer method via multi-source data with domain adaptation is proposed to decrease the discrepancy between the source data and SAR targets. Several experiments are conducted on OpenSARShip. The results indicate that the universal conclusions about transfer learning in natural images cannot be applied wholesale to SAR targets, and that analyzing what and where to transfer in SAR target recognition helps decide how to transfer more effectively.
Record number: A2020-195
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2947634
Online publication date: 20/11/2019
Online: https://doi.org/10.1109/TGRS.2019.2947634
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94863
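The fine-tuning mechanism this abstract contrasts with (reusing layers pretrained on a source task and retraining only the task-specific head on the target data) can be sketched with a minimal, hypothetical NumPy toy model. The network shape, data, and learning rate here are illustrative assumptions, not the authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: the first-layer weights stand in for features
# "transferred" from a pretrained source model; they stay frozen, and only
# the task-specific head W2 is trained on the small target set.
W1 = rng.standard_normal((8, 4))       # transferred (frozen) layer
W1_frozen = W1.copy()                  # kept to verify it is never updated
W2 = rng.standard_normal((4, 1)) * 0.1 # trainable head

X = rng.standard_normal((64, 8))       # small labeled "target" set
y = (np.tanh(X @ W1) @ rng.standard_normal((4, 1)) > 0).astype(float)

lr = 0.5
losses = []
for _ in range(200):
    H = np.tanh(X @ W1)                             # generic transferred features
    p = 1.0 / (1.0 + np.exp(-(H @ W2)))             # sigmoid head
    losses.append(float(np.mean((p - y) ** 2)))
    grad = H.T @ ((p - y) * p * (1 - p)) / len(X)   # gradient flows only to W2
    W2 -= lr * grad
```

After training, the loss has dropped while the transferred layer is byte-for-byte unchanged, which is exactly the freeze-and-fine-tune regime the article questions for SAR targets.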
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 4 (April 2020) . - pp 2324 - 2336
[article]
Deep SAR-Net: learning objects from signals / Zhongling Huang in ISPRS Journal of photogrammetry and remote sensing, vol 161 (March 2020)
[article]
Title: Deep SAR-Net: learning objects from signals
Document type: Article/Communication
Authors: Zhongling Huang; Mihai Datcu; Zongxu Pan; Bin Lei
Year of publication: 2020
Pages: pp 179 - 193
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] moiré radar image
[IGN terms] Sentinel SAR image
[IGN terms] Terra image
[IGN terms] covariance matrix
[IGN terms] microwave
[IGN terms] polarization
[IGN terms] time-frequency
Abstract: (author) This paper introduces a novel Synthetic Aperture Radar (SAR)-specific deep learning framework for complex-valued SAR images. Conventional methods based on deep convolutional neural networks usually take the amplitude information of single-polarization SAR images as input to learn hierarchical spatial features automatically, and may therefore have difficulty discriminating objects with similar texture but distinctive scattering patterns. Our framework, Deep SAR-Net, takes complex-valued SAR images into consideration to learn both the spatial texture information and the backscattering patterns of objects on the ground. On the one hand, we transfer layers pre-trained on detected SAR images to extract spatial features from intensity images. On the other hand, we dig into the Fourier domain to learn physical properties of the objects by joint time-frequency analysis of complex-valued SAR images. We evaluate the effectiveness of Deep SAR-Net on three complex-valued SAR datasets from the Sentinel-1 and TerraSAR-X satellites and demonstrate that it works better than conventional deep CNNs, especially on man-made object classes. The proposed datasets and the trained Deep SAR-Net model, with all code, are provided.
Record number: A2020-065
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.01.016
Online publication date: 23/01/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.01.016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94583
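The joint time-frequency analysis this abstract applies to complex-valued SAR signals can be illustrated with a minimal short-time Fourier transform sketch. The chirp signal, window length, and hop size below are assumptions chosen for demonstration; this is not the Deep SAR-Net pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex-valued signal: a linear chirp plus complex noise,
# standing in for the complex SAR data analysed in the Fourier domain.
n = np.arange(256)
x = np.exp(1j * 2 * np.pi * (0.05 * n + 0.0005 * n ** 2))
x = x + 0.05 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))

def stft_mag(sig, win=64, hop=16):
    """Magnitude short-time Fourier transform: slide a Hann window over the
    complex signal and FFT each frame, yielding a (frames, bins) map."""
    w = np.hanning(win)
    frames = [sig[i:i + win] * w for i in range(0, sig.size - win + 1, hop)]
    return np.abs(np.fft.fft(np.asarray(frames), axis=1))

tf = stft_mag(x)  # time-frequency magnitude map
```

The dominant frequency bin drifts upward from the first frame to the last, tracking the chirp. Amplitude-only CNN inputs discard the phase that carries this structure, which is the motivation the abstract gives for working on complex-valued data.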
in ISPRS Journal of photogrammetry and remote sensing > vol 161 (March 2020) . - pp 179 - 193
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020031 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2020033 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2020032 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan