Author details
Author: Tatjana Bürgmann
Documents available by this author (1)
Matching of TerraSAR-X derived ground control points to optical image patches using deep learning / Tatjana Bürgmann in ISPRS Journal of photogrammetry and remote sensing, Vol 158 (December 2019)
[article]
Title: Matching of TerraSAR-X derived ground control points to optical image patches using deep learning
Document type: Article/Communication
Authors: Tatjana Bürgmann, Author; Wolfgang Koppe, Author; Michael Schmitt, Author
Year of publication: 2019
Pages: pp. 241-248
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] geolocation
[IGN terms] multisensor image
[IGN terms] optical image
[IGN terms] Pléiades image
[IGN terms] speckled radar image
[IGN terms] Sentinel-MSI image
[IGN terms] Sentinel-SAR image
[IGN terms] TerraSAR-X image
[IGN terms] ground control point
Abstract: (author) High resolution synthetic aperture radar (SAR) satellites like TerraSAR-X are capable of acquiring images exhibiting an absolute geolocation accuracy within a few centimeters, mainly because of the availability of precise orbit information and the compensation of range delay errors caused by atmospheric conditions. In contrast, satellite images from optical missions generally exhibit comparably low geolocation accuracy because of the propagation of errors in angular measurements over large distances. However, a variety of remote sensing applications, such as change detection, surface movement monitoring or ice flow measurements, require precisely geo-referenced and co-registered satellite images. By using Ground Control Points (GCPs) derived from TerraSAR-X, the absolute geolocation accuracy of optical satellite images can be improved. For this purpose, the corresponding matching points in the optical images need to be localized. In this paper, a deep learning based approach is investigated for automated matching of SAR-derived GCPs to optical image elements. To this end, a convolutional neural network is pretrained with medium resolution Sentinel-1 and Sentinel-2 imagery and fine-tuned on precisely co-registered TerraSAR-X and Pléiades training image pairs to learn a common descriptor representation. Using these descriptors, the similarity of SAR and optical image patches can be calculated. This similarity metric is then used in a sliding window approach to identify the matching points in the optical reference image. Subsequently, the derived points can be utilized for co-registration of the underlying images. The network is evaluated over nine study areas showing airports and their rural surroundings in several different countries around the world. The results show that, based on TerraSAR-X-derived GCPs, corresponding points in the optical image can be identified automatically and reliably with pixel-level localization accuracy.
Record number: A2019-548
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.09.010
Online publication date: 05/11/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.09.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94194
in ISPRS Journal of photogrammetry and remote sensing > Vol 158 (December 2019), pp. 241-248 [article]
Copies (3)
Barcode      | Call number | Medium  | Location                | Section         | Availability
081-2019121  | RAB         | Journal | Centre de documentation | In storage L003 | Available
081-2019123  | DEP-RECP    | Journal | LASTIG                  | Unit deposit    | Not for loan
081-2019122  | DEP-RECF    | Journal | Nancy                   | Unit deposit    | Not for loan
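To make the matching step described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' code): a two-branch convolutional network maps SAR and optical patches to a common descriptor space, and a sliding-window search over the optical reference image returns the window whose descriptor is most similar to the TerraSAR-X GCP patch. Network depth, descriptor size (128), patch size (64 px) and stride are illustrative assumptions; the pretraining on Sentinel-1/Sentinel-2 and fine-tuning on co-registered TerraSAR-X/Pléiades pairs mentioned in the abstract is assumed to have happened beforehand and is not shown.

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_stream(in_ch):
    # One branch of the pseudo-siamese network (separate weights per modality).
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, 128),
    )

class PatchMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.sar_branch = conv_stream(1)   # single-channel SAR patch
        self.opt_branch = conv_stream(3)   # RGB optical patch

    def describe_sar(self, x):
        # L2-normalized descriptor so the dot product is a cosine similarity.
        return F.normalize(self.sar_branch(x), dim=1)

    def describe_opt(self, x):
        return F.normalize(self.opt_branch(x), dim=1)

def match_gcp(model, sar_patch, optical_image, patch=64, stride=4):
    """Slide a window over the optical search image and return the top-left
    corner (row, col) of the window most similar to the SAR GCP patch."""
    model.eval()
    with torch.no_grad():
        d_sar = model.describe_sar(sar_patch.unsqueeze(0))             # (1, 128)
        # Extract all candidate optical windows at the given stride.
        windows = optical_image.unsqueeze(0).unfold(2, patch, stride).unfold(3, patch, stride)
        _, c, ny, nx, _, _ = windows.shape
        windows = windows.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch, patch)
        d_opt = model.describe_opt(windows)                             # (ny*nx, 128)
        sim = d_opt @ d_sar.squeeze(0)                                  # cosine similarities
        best = int(sim.argmax())
        row, col = divmod(best, nx)
        return row * stride, col * stride, float(sim[best])

In practice one would batch the candidate windows or restrict the search to a neighbourhood of the predicted GCP position to keep memory bounded, and a stride of 1 px would be needed to reach the pixel-level localization reported in the article; the stride of 4 above only keeps the sketch fast.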