Author detail
Author: Diego Marcos
Documents available by this author (3)
A deep learning framework for matching of SAR and optical imagery / Lloyd Haydn Hughes in ISPRS Journal of photogrammetry and remote sensing, vol 169 (November 2020)
[article]
Title: A deep learning framework for matching of SAR and optical imagery
Document type: Article/Communication
Authors: Lloyd Haydn Hughes, Author; Diego Marcos, Author; Sylvain Lobry, Author; et al.
Year of publication: 2020
Pages: pp 166-179
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] sparse data
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] georeferencing
[IGN terms] optical image
[IGN terms] speckled radar image
[IGN terms] image overlay
Abstract: (author) SAR and optical imagery provide highly complementary information about observed scenes. A combined use of these two modalities is thus desirable in many data fusion scenarios. However, any data fusion task requires measurements to be accurately aligned. While images from both data sources are usually provided in a georeferenced form, the geo-localization of optical images is often inaccurate due to the propagation of angular measurement errors. Many methods for the matching of homologous image regions exist for both SAR and optical imagery; however, these methods are unsuitable for SAR-optical image matching due to significant geometric and radiometric differences between the two modalities. In this paper, we present a three-step framework for sparse image matching of SAR and optical imagery, whereby each step is encoded by a deep neural network. We first predict regions in each image which are deemed most suitable for matching. A correspondence heatmap is then generated through a multi-scale, feature-space cross-correlation operator. Finally, outliers are removed by classifying the correspondence surface as a positive or negative match. Our experiments show that the proposed approach provides a substantial improvement over previous methods for SAR-optical image matching and can be used to register even large-scale scenes. This opens up the possibility of using both types of data jointly, for example for the improvement of the geo-localization of optical satellite imagery or multi-sensor stereogrammetry.
Record number: A2020-639
Authors' affiliation: non-IGN
Subject area: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.09.012
Online publication date: 03/12/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96062
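The abstract's second step, a multi-scale, feature-space cross-correlation that yields a correspondence heatmap, can be illustrated with a short sketch. This is a hypothetical PyTorch reading of the abstract, not the authors' released code; the function name, the choice of scales, and the template normalisation are all my own assumptions.

```python
# Hypothetical sketch of the multi-scale, feature-space cross-correlation
# step: a SAR-derived template is correlated against an optical feature
# map to produce a correspondence heatmap. Not the authors' code.
import torch
import torch.nn.functional as F

def correspondence_heatmap(feat_opt, feat_sar_patch, scales=(1.0, 0.75, 0.5)):
    """feat_opt: (1, C, H, W) optical feature map; feat_sar_patch:
    (1, C, h, w) SAR template embedded in the same feature space."""
    heatmaps = []
    for s in scales:
        # Resize the template so correspondence is probed at several scales.
        k = F.interpolate(feat_sar_patch, scale_factor=s,
                          mode="bilinear", align_corners=False)
        k = k / (k.norm() + 1e-8)          # normalise template energy
        # PyTorch's conv2d computes cross-correlation; the template acts
        # as a single output filter of shape (1, C, kh, kw).
        heatmaps.append(F.conv2d(feat_opt, k, padding="same"))
    # Average the per-scale correspondence surfaces into one heatmap.
    return torch.stack(heatmaps).mean(dim=0)   # (1, 1, H, W)
```

The argmax of the returned surface would give the candidate match location; the abstract's third step then classifies this correspondence surface as a positive or negative match.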
in ISPRS Journal of photogrammetry and remote sensing > vol 169 (November 2020) . - pp 166-179 [article]

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2020111 | RAB | Journal | Documentation centre | In storage L003 | Available
081-2020113 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2020112 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan

Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
[article]
Title: Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning
Document type: Article/Communication
Authors: Benjamin Kellenberger, Author; Diego Marcos, Author; Sylvain Lobry, Author; Devis Tuia, Author
Year of publication: 2019
Pages: pp 9524-9533
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] object-oriented classification
[IGN terms] neural network classification
[IGN terms] object detection
[IGN terms] geolocated data
[IGN terms] data sampling
[IGN terms] local fauna
[IGN terms] UAV imagery
[IGN terms] Namibia
[IGN terms] moving object
[IGN terms] ground truth
[IGN terms] census
Abstract: (author) We present an Active Learning (AL) strategy for reusing a deep Convolutional Neural Network (CNN)-based object detector on a new data set. This is of particular interest for wildlife conservation: given a set of images acquired with an Unmanned Aerial Vehicle (UAV) and manually labeled ground truth, our goal is to train an animal detector that can be reused for repeated acquisitions, e.g., in follow-up years. Domain shifts between data sets typically prevent such a direct model application. We thus propose to bridge this gap using AL and introduce a new criterion called Transfer Sampling (TS). TS uses Optimal Transport (OT) to find corresponding regions between the source and the target data sets in the space of CNN activations. The CNN scores in the source data set are used to rank the samples according to their likelihood of being animals, and this ranking is transferred to the target data set. Unlike conventional AL criteria that exploit model uncertainty, TS focuses on very confident samples, thus allowing quick retrieval of true positives in the target data set, where positives are typically extremely rare and difficult to find by visual inspection. We extend TS with a new window cropping strategy that further accelerates sample retrieval. Our experiments show that with both strategies combined, less than half a percent of oracle-provided labels are enough to find almost 80% of the animals in challenging sets of UAV images, beating all baselines by a margin.
Record number: A2019-598
Authors' affiliation: non-IGN
Subject area: IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2927393
Online publication date: 20/08/2019
Online: http://doi.org/10.1109/TGRS.2019.2927393
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94592
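The Transfer Sampling criterion described in the abstract, ranking target samples by transporting source confidences through an Optimal Transport plan over CNN activations, can be sketched as follows. This is a minimal illustration assuming the POT library (`ot`); it is my reading of the abstract, not the authors' implementation, and the entropic regularisation, uniform masses, and function name are assumptions.

```python
# Hedged sketch of the Transfer Sampling idea: an optimal-transport plan
# between source and target CNN activations carries the source confidence
# ranking over to the target set. Hypothetical code, not the authors'.
import numpy as np
import ot  # POT: Python Optimal Transport

def transfer_sampling_scores(src_feats, tgt_feats, src_scores, reg=0.05):
    """src_feats: (n, d) source CNN activations; tgt_feats: (m, d) target
    activations; src_scores: (n,) detector confidences on the source set."""
    n, m = len(src_feats), len(tgt_feats)
    a = np.full(n, 1.0 / n)              # uniform mass on source samples
    b = np.full(m, 1.0 / m)              # uniform mass on target samples
    M = ot.dist(src_feats, tgt_feats)    # pairwise squared Euclidean costs
    M /= M.max()                         # normalise costs for stability
    G = ot.sinkhorn(a, b, M, reg)        # (n, m) entropic transport plan
    # Each target sample inherits the confidences of the source samples
    # it is coupled with, weighted by the transported mass.
    return G.T @ src_scores              # (m,) acquisition priorities
```

Target samples with the highest transferred score would be sent to the oracle first, matching the abstract's goal of quickly retrieving very confident positives where true positives are rare.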
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 12 (December 2019) . - pp 9524-9533 [article]

Land cover mapping at very high resolution with rotation equivariant CNNs : Towards small yet accurate models / Diego Marcos in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
[article]
Title: Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models
Document type: Article/Communication
Authors: Diego Marcos, Author; Michele Volpi, Author; Benjamin Kellenberger, Author; Devis Tuia, Author
Year of publication: 2018
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] Baden-Württemberg (Germany)
[IGN terms] land cover map
[IGN terms] semantic enrichment
[IGN terms] digital image filtering
[IGN terms] ultra high resolution image
[IGN terms] digital surface model
[IGN terms] orthoimage
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
Abstract: (author) In remote sensing images, the absolute orientation of objects is arbitrary. Depending on an object's orientation and on a sensor's flight path, objects of the same semantic class can be observed in different orientations in the same image. Equivariance to rotation, in this context understood as responding with a rotated semantic label map when subject to a rotation of the input image, is therefore a very desirable feature, in particular for high capacity models, such as Convolutional Neural Networks (CNNs). If rotation equivariance is encoded in the network, the model is confronted with a simpler task and does not need to learn specific (and redundant) weights to address rotated versions of the same object class. In this work we propose a CNN architecture called Rotation Equivariant Vector Field Network (RotEqNet) to encode rotation equivariance in the network itself. By using rotating convolutions as building blocks and passing only the values corresponding to the maximally activating orientation throughout the network in the form of orientation encoding vector fields, RotEqNet treats rotated versions of the same object with the same filter bank and therefore achieves state-of-the-art performance even when using very small architectures trained from scratch. We test RotEqNet in two challenging sub-decimeter resolution semantic labeling problems, and show that we can perform better than a standard CNN while requiring one order of magnitude fewer parameters.
Record number: A2018-491
Authors' affiliation: non-IGN
Subject area: IMAGERY/COMPUTER SCIENCE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.01.021
Online publication date: 19/02/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.01.021
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91227
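The abstract's building block, a rotating convolution followed by max pooling over orientations, can be sketched as follows. This is a hypothetical PyTorch re-implementation of the idea as described, not the RotEqNet code; the number of orientations, the filter size, and the use of torchvision's rotate on the weight bank are assumptions. Note that RotEqNet additionally propagates the maximising orientation as a vector field, which this sketch omits.

```python
# Hypothetical sketch of a rotating convolution with max-orientation
# pooling, in the spirit of RotEqNet as described in the abstract.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

class RotConv(torch.nn.Module):
    """One canonical filter bank applied at n_angles orientations; only
    the maximally activating response is kept at each spatial location."""
    def __init__(self, in_ch, out_ch, k=7, n_angles=8):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        self.angles = [i * 360.0 / n_angles for i in range(n_angles)]

    def forward(self, x):
        responses = []
        for a in self.angles:
            # Rotate the shared canonical filters; the (out, in, k, k)
            # weight tensor is treated by TF.rotate as a batch of images.
            w = TF.rotate(self.weight, a,
                          interpolation=TF.InterpolationMode.BILINEAR)
            responses.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        # Max over orientations: rotated versions of the same object are
        # handled by one filter bank instead of redundant learned copies.
        return torch.stack(responses, dim=0).max(dim=0).values
```

Because the same weights answer for every orientation, parameter count stays small, consistent with the abstract's claim of matching a standard CNN with an order of magnitude fewer parameters.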
in ISPRS Journal of photogrammetry and remote sensing > vol 145 - part A (November 2018) [article]

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018111 | RAB | Journal | Documentation centre | In storage L003 | Available
081-2018113 | DEP-EXM | Journal | LASTIG | Unit deposit | Not for loan
081-2018112 | DEP-EAF | Journal | Nancy | Unit deposit | Not for loan