Author details
Author: Sylvain Lobry
Available documents by this author (4)
A deep learning framework for matching of SAR and optical imagery / Lloyd Haydn Hughes in ISPRS Journal of Photogrammetry and Remote Sensing, vol. 169 (November 2020)
[article]
Title: A deep learning framework for matching of SAR and optical imagery
Document type: Article/Paper
Authors: Lloyd Haydn Hughes; Diego Marcos; Sylvain Lobry; et al.
Year of publication: 2020
Pages: pp. 166-179
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] image matching
[IGN terms] deep learning
[IGN terms] sparse data
[IGN terms] feature extraction
[IGN terms] data fusion
[IGN terms] georeferencing
[IGN terms] optical image
[IGN terms] speckled radar image
[IGN terms] image overlay
Abstract: (author) SAR and optical imagery provide highly complementary information about observed scenes. A combined use of these two modalities is thus desirable in many data fusion scenarios. However, any data fusion task requires measurements to be accurately aligned. While for both data sources images are usually provided in a georeferenced manner, the geo-localization of optical images is often inaccurate due to propagation of angular measurement errors. Many methods for the matching of homologous image regions exist for both SAR and optical imagery; however, these methods are unsuitable for SAR-optical image matching due to significant geometric and radiometric differences between the two modalities. In this paper, we present a three-step framework for sparse image matching of SAR and optical imagery, whereby each step is encoded by a deep neural network. We first predict regions in each image which are deemed most suitable for matching. A correspondence heatmap is then generated through a multi-scale, feature-space cross-correlation operator. Finally, outliers are removed by classifying the correspondence surface as a positive or negative match. Our experiments show that the proposed approach provides a substantial improvement over previous methods for SAR-optical image matching and can be used to register even large-scale scenes. This opens up the possibility of using both types of data jointly, for example for the improvement of the geo-localization of optical satellite imagery or multi-sensor stereogrammetry.
Record number: A2020-639
Author affiliation: non-IGN
Subject area: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.09.012
Online publication date: 03/12/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.09.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96062
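The abstract above describes a three-step pipeline whose middle step builds a correspondence heatmap by cross-correlating features of the two images. A minimal NumPy sketch of that idea follows; the array shapes, the unit normalisation, and the brute-force sliding loop are illustrative assumptions, not the paper's actual multi-scale, learned-feature implementation:

```python
import numpy as np

def correspondence_heatmap(feat_search, feat_template):
    """Feature-space cross-correlation: slide a unit-normalised template
    feature patch over a search feature map and record the correlation
    score at every offset, yielding a correspondence heatmap."""
    H, W, C = feat_search.shape
    h, w, _ = feat_template.shape
    t = feat_template / (np.linalg.norm(feat_template) + 1e-8)
    heat = np.zeros((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = feat_search[i:i + h, j:j + w]
            patch = patch / (np.linalg.norm(patch) + 1e-8)
            heat[i, j] = np.sum(patch * t)  # cosine similarity at this offset
    return heat
```

The argmax of the heatmap gives the most likely match location; in the paper a further network then classifies the whole correspondence surface to reject outliers.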
in ISPRS Journal of Photogrammetry and Remote Sensing > vol. 169 (November 2020). - pp. 166-179 [article]
Copies (3)
Barcode       Call number  Format   Location                Section        Availability
081-2020111   RAB          Journal  Documentation centre    Reserve L003   Available
081-2020113   DEP-RECP     Journal  LASTIG                  Unit deposit   Not for loan
081-2020112   DEP-RECF     Journal  Nancy                   Unit deposit   Not for loan

Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data / Shivangi Srivastava in International Journal of Geographical Information Science IJGIS, vol. 34 no. 6 (June 2020)
[article]
Title: Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data
Document type: Article/Paper
Authors: Shivangi Srivastava; John E. Vargas-Muñoz; Sylvain Lobry; Devis Tuia
Year of publication: 2020
Pages: pp. 1117-1136
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Spatial analysis
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] urban database
[IGN terms] land cover map
[IGN terms] convolutional neural network classification
[IGN terms] volunteered geographic information
[IGN terms] open geographic data
[IGN terms] Île-de-France
[IGN terms] Street View image
[IGN terms] ground-level image
[IGN terms] geographic information
[IGN terms] heuristic method
[IGN terms] OpenStreetMap
[IGN terms] social network
Abstract: (author) We study the problem of landuse characterization at the urban-object level using deep learning algorithms. Traditionally, this task is performed by surveys or manual photo interpretation, which are expensive and difficult to update regularly. We seek to characterize usages at the single-object level and to differentiate classes such as educational institutes, hospitals and religious places by visual cues contained in side-view pictures from Google Street View (GSV). These pictures provide geo-referenced information not only about the material composition of the objects but also about their actual usage, which is otherwise difficult to capture from classical sources of data such as aerial imagery. Since the GSV database is regularly updated, the landuse maps can be updated accordingly, at lower cost than that of authoritative surveys. Because every urban object is imaged from a number of viewpoints in street-level pictures, we propose a deep-learning-based architecture that accepts an arbitrary number of GSV pictures to predict the fine-grained landuse classes at the object level. These classes are taken from OpenStreetMap. A quantitative evaluation of the area of Île-de-France, France, shows that our model outperforms other deep-learning-based methods, making it a suitable alternative to manual landuse characterization.
Record number: A2020-269
Author affiliation: non-IGN
Subject area: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2018.1542698
Online publication date: 18/11/2018
Online: https://doi.org/10.1080/13658816.2018.1542698
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95041
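The abstract mentions an architecture that accepts an arbitrary number of GSV pictures per urban object. One standard way to get that property, sketched here as an assumption rather than the paper's actual design, is to mean-pool per-view feature vectors before classifying: the pooled representation is independent of the number and order of views.

```python
import numpy as np

def predict_object_class(view_features, W, b):
    """Aggregate any number of per-view feature vectors (one row per GSV
    picture of the object) into a single object-level prediction:
    mean-pool across views, then apply a linear classifier head."""
    pooled = np.mean(view_features, axis=0)  # shape (C,), whatever the view count
    logits = pooled @ W + b                  # shape (num_classes,)
    return int(np.argmax(logits))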
in International Journal of Geographical Information Science IJGIS > vol. 34 no. 6 (June 2020). - pp. 1117-1136 [article]

Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on Geoscience and Remote Sensing, vol. 57 no. 12 (December 2019)
[article]
Title: Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning
Document type: Article/Paper
Authors: Benjamin Kellenberger; Diego Marcos; Sylvain Lobry; Devis Tuia
Year of publication: 2019
Pages: pp. 9524-9533
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] object-based classification
[IGN terms] neural network classification
[IGN terms] object detection
[IGN terms] geolocated data
[IGN terms] data sampling
[IGN terms] local fauna
[IGN terms] UAV imagery
[IGN terms] Namibia
[IGN terms] moving object
[IGN terms] ground truth
[IGN terms] census
Abstract: (author) We present an Active Learning (AL) strategy for reusing a deep Convolutional Neural Network (CNN)-based object detector on a new data set. This is of particular interest for wildlife conservation: given a set of images acquired with an Unmanned Aerial Vehicle (UAV) and manually labeled ground truth, our goal is to train an animal detector that can be reused for repeated acquisitions, e.g., in follow-up years. Domain shifts between data sets typically prevent such a direct model application. We thus propose to bridge this gap using AL and introduce a new criterion called Transfer Sampling (TS). TS uses Optimal Transport (OT) to find corresponding regions between the source and the target data sets in the space of CNN activations. The CNN scores in the source data set are used to rank the samples according to their likelihood of being animals, and this ranking is transferred to the target data set. Unlike conventional AL criteria that exploit model uncertainty, TS focuses on very confident samples, thus allowing quick retrieval of true positives in the target data set, where positives are typically extremely rare and difficult to find by visual inspection. We extend TS with a new window cropping strategy that further accelerates sample retrieval. Our experiments show that with both strategies combined, less than half a percent of oracle-provided labels are enough to find almost 80% of the animals in challenging sets of UAV images, beating all baselines by a margin.
Record number: A2019-598
Author affiliation: non-IGN
Subject area: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2927393
Online publication date: 20/08/2019
Online: http://doi.org/10.1109/TGRS.2019.2927393
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94592
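The Transfer Sampling criterion described in the abstract can be illustrated with a toy version: compute an entropy-regularised optimal-transport coupling between source and target CNN activations, then push the source detector's confidence scores through the coupling to rank target samples. The Sinkhorn solver, the uniform marginals, and the tiny feature vectors below are simplifying assumptions, not the paper's exact setup:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularised optimal transport between two uniform
    distributions via Sinkhorn iterations; returns the coupling matrix."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a = np.full(n, 1.0 / n)  # uniform source marginal
    b = np.full(m, 1.0 / m)  # uniform target marginal
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def transfer_sampling(src_feats, src_scores, tgt_feats, k=1):
    """Rank target samples by transporting source detector confidences
    through the OT coupling in activation space; return the indices of
    the k most promising (likely-positive) target samples."""
    cost = ((src_feats[:, None, :] - tgt_feats[None, :, :]) ** 2).sum(-1)
    P = sinkhorn(cost)
    tgt_scores = P.T @ src_scores  # mass-weighted transfer of scores
    return np.argsort(tgt_scores)[::-1][:k]
```

Target samples that sit near confident source positives in feature space inherit high scores and are shown to the oracle first, which is why true positives are retrieved quickly even when they are rare.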
in IEEE Transactions on Geoscience and Remote Sensing > vol. 57 no. 12 (December 2019). - pp. 9524-9533 [article]

Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of Photogrammetry and Remote Sensing, vol. 147 (January 2019)
[article]
Title: Correcting rural building annotations in OpenStreetMap using convolutional neural networks
Document type: Article/Paper
Authors: John E. Vargas-Muñoz; Sylvain Lobry; Alexandre X. Falcão; Devis Tuia
Year of publication: 2019
Pages: pp. 283-293
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Web geomatics
[IGN terms] buildings
[IGN terms] Markov random field
[IGN terms] convolutional neural network classification
[IGN terms] geometric correction
[IGN terms] volunteered geographic information
[IGN terms] rural settlement
[IGN terms] database updating
[IGN terms] OpenStreetMap
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
[IGN terms] Tanzania
[IGN terms] Zimbabwe
[IGN terms] rural area
Abstract: (author) Rural building mapping is paramount to support demographic studies and to plan actions in response to crises that affect those areas. Rural building annotations exist in OpenStreetMap (OSM), but their quality and quantity are not sufficient for training models that can create accurate rural building maps. The problems with these annotations essentially fall into three categories: (i) most commonly, many annotations are geometrically misaligned with the updated imagery; (ii) some annotations do not correspond to buildings in the images (they are misannotations or the buildings have been destroyed); and (iii) some annotations are missing for buildings in the images (the buildings were never annotated or were built between subsequent image acquisitions). First, we propose a method based on Markov Random Field (MRF) to align the buildings with their annotations. The method maximizes the correlation between annotations and a building probability map while enforcing that nearby buildings have similar alignment vectors. Second, the annotations with no evidence in the building probability map are removed. Third, we present a method to detect non-annotated buildings with predefined shapes and add their annotation. The proposed methodology shows considerable improvement in the accuracy of the OSM annotations for two regions of Tanzania and Zimbabwe, being more accurate than state-of-the-art baselines.
Record number: A2019-038
Author affiliation: non-IGN
Subject area: GEOMATICS
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.11.010
Online publication date: 06/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2018.11.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91975
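The first correction step in the abstract, aligning annotations with a building probability map while encouraging nearby buildings to share similar alignment vectors, can be sketched as a tiny MRF-style coordinate-ascent (ICM) optimisation. The data and smoothness terms, the candidate-shift grid, and the wrap-around shifting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from itertools import product

def align_annotations(prob_map, masks, shifts, lam=0.5, n_iter=5):
    """Assign each annotation mask a shift vector that maximises the
    building-probability mass under the shifted mask (data term) minus a
    penalty for disagreeing with the other annotations' shifts (pairwise
    smoothness), optimised by iterated conditional modes (ICM)."""
    def data_term(mask, dy, dx):
        # Probability mass covered by the mask after shifting by (dy, dx).
        shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        return float((prob_map * shifted).sum())

    cur = [(0, 0) for _ in masks]  # all shifts start at zero
    for _ in range(n_iter):
        for i, mask in enumerate(masks):
            best, best_val = cur[i], -np.inf
            for dy, dx in shifts:
                smooth = -lam * sum(abs(dy - oy) + abs(dx - ox)
                                    for j, (oy, ox) in enumerate(cur) if j != i)
                val = data_term(mask, dy, dx) + smooth
                if val > best_val:
                    best, best_val = (dy, dx), val
            cur[i] = best  # greedily update this annotation's shift
    return cur
```

With a single annotation the smoothness term vanishes and the method reduces to picking the shift with maximal overlap; with many annotations the pairwise penalty pulls neighbouring buildings toward a common alignment vector, as the abstract describes.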
in ISPRS Journal of Photogrammetry and Remote Sensing > vol. 147 (January 2019). - pp. 283-293 [article]
Copies (3)
Barcode       Call number  Format   Location                Section        Availability
081-2019011   RAB          Journal  Documentation centre    Reserve L003   Available
081-2019013   DEP-EXM      Journal  LASTIG                  Unit deposit   Not for loan
081-2019012   DEP-EAF      Journal  Nancy                   Unit deposit   Not for loan