Author details
Author: Devis Tuia
Available documents by this author (15)
Mapping forest in the Swiss Alps treeline ecotone with explainable deep learning / Thiên-Anh Nguyen in Remote sensing of environment, vol 281 (November 2022)
[article]
Title: Mapping forest in the Swiss Alps treeline ecotone with explainable deep learning
Document type: Article/Communication
Authors: Thiên-Anh Nguyen; Benjamin Kellenberger; Devis Tuia
Year of publication: 2022
Pages: n° 113217
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] Alps
[IGN terms] deep learning
[IGN terms] canopy
[IGN terms] forest map
[IGN terms] convolutional neural network classification
[IGN terms] ecotone
[IGN terms] tree height
[IGN terms] very high resolution imagery
[IGN terms] aerial imagery
[IGN terms] RGB image
[IGN terms] foreign forest inventory (data)
[IGN terms] canopy digital surface model
[IGN terms] Switzerland
Abstract: (author) Forest maps are essential to understand forest dynamics. Thanks to the increasing availability of remote sensing data and machine learning models such as convolutional neural networks, forest maps can now be created at large scale with high accuracy. Common methods usually predict a map from remote sensing images without deliberately considering the intermediate semantic concepts that are relevant to the final map. This makes the mapping process difficult to interpret, especially with opaque deep learning models. Moreover, such a procedure is entirely agnostic to the definitions of the mapping targets (e.g., forest types that depend on variables such as tree height and tree density). Common models can at best learn these rules implicitly from data, which greatly hinders trust in the produced maps. In this work, we aim to build an explainable deep learning model for forest mapping that leverages prior knowledge about forest definitions to explain its decisions. We propose a model that explicitly quantifies intermediate variables involved in the forest definitions, such as tree height and tree canopy density, corresponding to those used to create the forest maps on which the model is trained, and combines them accordingly. We apply our model to mapping forest types from very high resolution aerial imagery, with particular focus on the treeline ecotone at high altitudes, where forest boundaries are complex and highly dependent on the chosen forest definition. Results show that our rule-informed model is able to quantify intermediate key variables and predict forest maps that reflect forest definitions. Through its interpretable design, it can further reveal implicit patterns in the manually annotated forest labels, which facilitates the analysis of the produced maps and their comparison with other datasets.
Record number: A2022-794
Author affiliation: non-IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.rse.2022.113217
Online publication date: 01/09/2022
Online: https://doi.org/10.1016/j.rse.2022.113217
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101928
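The abstract above describes combining explicitly predicted intermediate variables (tree height, canopy density) through an explicit forest definition rule. A minimal sketch of that final rule-combination step; the function name and the `min_height`/`min_density` thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np

def forest_map_from_rules(tree_height, canopy_density,
                          min_height=3.0, min_density=0.2):
    """Derive a binary forest map from predicted intermediate variables.

    tree_height    : (H, W) array of predicted tree heights in metres
    canopy_density : (H, W) array of predicted canopy cover in [0, 1]
    Thresholds are illustrative; real forest definitions vary by country.
    """
    return (tree_height >= min_height) & (canopy_density >= min_density)

# Toy 2x2 tile: only the top-left pixel satisfies both criteria.
height = np.array([[5.0, 1.0], [4.0, 0.5]])
density = np.array([[0.6, 0.9], [0.1, 0.05]])
mask = forest_map_from_rules(height, density)
```

Because the decision is an explicit rule over interpretable quantities, a disagreement with reference labels can be traced back to either variable, which is the kind of explanation the paper aims for.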
in Remote sensing of environment > vol 281 (November 2022) . - n° 113217 [article]

Multi-modal learning in photogrammetry and remote sensing / Michael Ying Yang in ISPRS Journal of photogrammetry and remote sensing, vol 176 (June 2021)
[article]
Title: Multi-modal learning in photogrammetry and remote sensing
Document type: Article/Communication
Authors: Michael Ying Yang; Loïc Landrieu; Devis Tuia; Charles Toth
Year of publication: 2021
Projects: 1-No project
Pages: pp 54 - 54
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing
[IGN terms] image acquisition
[IGN terms] machine learning
[IGN terms] multisource data
Abstract: (author) [Editorial] There is growing interest in the photogrammetry and remote sensing community in multi-modal data, i.e., data acquired simultaneously from a variety of platforms (satellites, aircraft, UAS/UGS, autonomous vehicles, etc.) and by different sensors, such as radar, optical and lidar. Thanks to their different spatial, spectral, or temporal resolutions, complementary data sources lead to richer and more robust information extraction. We expect that the use of multiple modalities will rapidly become a standard approach in the future. The main difficulty of jointly processing multi-modal data lies in the structural differences among modalities. Another issue is the unbalanced number of labelled samples available across modalities, which results in a significant performance gap when models are trained separately. Clearly, the photogrammetry and remote sensing community has not yet exploited the full potential of multi-modal data. Neural networks seem well suited to accommodating different data sources, thanks to their ability to learn task-adapted representations in an end-to-end fashion. In this context, there is a strong need for research and development of approaches for multi-sensory and multi-modal deep learning within the geospatial domain.
Record number: A2021-364
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.03.022
Online publication date: 23/04/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.03.022
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97660
in ISPRS Journal of photogrammetry and remote sensing > vol 176 (June 2021) . - pp 54 - 54 [article]

Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data / Shivangi Srivastava in International journal of geographical information science IJGIS, vol 34 n° 6 (June 2020)
[article]
Title: Fine-grained landuse characterization using ground-based pictures: a deep learning solution based on globally available data
Document type: Article/Communication
Authors: Shivangi Srivastava; John E. Vargas-Muñoz; Sylvain Lobry; Devis Tuia
Year of publication: 2020
Pages: pp 1117 - 1136
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Spatial analysis
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] urban database
[IGN terms] land cover map
[IGN terms] convolutional neural network classification
[IGN terms] volunteered geographic information
[IGN terms] open geodata
[IGN terms] Île-de-France
[IGN terms] Street View imagery
[IGN terms] terrestrial image
[IGN terms] geographic information
[IGN terms] heuristic method
[IGN terms] OpenStreetMap
[IGN terms] social network
Abstract: (author) We study the problem of landuse characterization at the urban-object level using deep learning algorithms. Traditionally, this task is performed by surveys or manual photo interpretation, which are expensive and difficult to update regularly. We seek to characterize usage at the single-object level and to differentiate classes such as educational institutes, hospitals and religious places by visual cues contained in side-view pictures from Google Street View (GSV). These pictures provide geo-referenced information not only about the material composition of the objects but also about their actual usage, which is otherwise difficult to capture from classical data sources such as aerial imagery. Since the GSV database is regularly updated, landuse maps can be updated accordingly, at lower cost than authoritative surveys. Because every urban object is imaged from a number of viewpoints in street-level pictures, we propose a deep-learning-based architecture that accepts an arbitrary number of GSV pictures to predict fine-grained landuse classes at the object level. These classes are taken from OpenStreetMap. A quantitative evaluation on the Île-de-France area, France, shows that our model outperforms other deep learning-based methods, making it a suitable alternative to manual landuse characterization.
Record number: A2020-269
Author affiliation: non-IGN
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2018.1542698
Online publication date: 18/11/2018
Online: https://doi.org/10.1080/13658816.2018.1542698
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95041
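The architecture described above must accept an arbitrary number of GSV pictures per urban object. One simple order-invariant way to achieve this is mean pooling of per-view embeddings; this is a sketch of the idea, not the paper's exact aggregation, and the function name is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_views(view_features):
    """Pool per-picture CNN embeddings into one object-level descriptor.

    view_features : (n_views, d) array; n_views may differ per object.
    Mean pooling is permutation-invariant, so view order does not matter.
    """
    return view_features.mean(axis=0)

# One object photographed from 3 viewpoints, another from 5:
obj_a = aggregate_views(rng.normal(size=(3, 128)))
obj_b = aggregate_views(rng.normal(size=(5, 128)))
# Both descriptors have the same shape and can feed one shared classifier.
```

The fixed-size output is what lets a single downstream classifier handle objects imaged from any number of viewpoints.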
in International journal of geographical information science IJGIS > vol 34 n° 6 (June 2020) . - pp 1117 - 1136 [article]

Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning / Benjamin Kellenberger in IEEE Transactions on geoscience and remote sensing, vol 57 n° 12 (December 2019)
[article]
Title: Half a percent of labels is enough: efficient animal detection in UAV imagery using deep CNNs and active learning
Document type: Article/Communication
Authors: Benjamin Kellenberger; Diego Marcos; Sylvain Lobry; Devis Tuia
Year of publication: 2019
Pages: pp 9524 - 9533
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] object-based classification
[IGN terms] neural network classification
[IGN terms] object detection
[IGN terms] georeferenced data
[IGN terms] data sampling
[IGN terms] local fauna
[IGN terms] UAV imagery
[IGN terms] Namibia
[IGN terms] moving object
[IGN terms] ground truth
[IGN terms] census
Abstract: (author) We present an Active Learning (AL) strategy for reusing a deep Convolutional Neural Network (CNN)-based object detector on a new data set. This is of particular interest for wildlife conservation: given a set of images acquired with an Unmanned Aerial Vehicle (UAV) and manually labeled ground truth, our goal is to train an animal detector that can be reused for repeated acquisitions, e.g., in follow-up years. Domain shifts between data sets typically prevent such direct model application. We thus propose to bridge this gap using AL and introduce a new criterion called Transfer Sampling (TS). TS uses Optimal Transport (OT) to find corresponding regions between the source and target data sets in the space of CNN activations. The CNN scores in the source data set are used to rank the samples according to their likelihood of being animals, and this ranking is transferred to the target data set. Unlike conventional AL criteria that exploit model uncertainty, TS focuses on very confident samples, allowing quick retrieval of true positives in the target data set, where positives are typically extremely rare and difficult to find by visual inspection. We extend TS with a new window-cropping strategy that further accelerates sample retrieval. Our experiments show that, with both strategies combined, less than half a percent of oracle-provided labels is enough to find almost 80% of the animals in challenging sets of UAV images, beating all baselines by a margin.
Record number: A2019-598
Author affiliation: non-IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2019.2927393
Online publication date: 20/08/2019
Online: http://doi.org/10.1109/TGRS.2019.2927393
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94592
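Transfer Sampling as summarized above matches source and target samples in CNN feature space and carries the source detector's confidence over to the matched targets. A minimal sketch, substituting a one-to-one min-cost matching for the paper's full optimal-transport coupling; the function and variable names are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def transfer_sampling(src_feats, src_scores, tgt_feats, budget):
    """Rank target samples by confidence transferred from the source set.

    A min-cost assignment stands in for the OT coupling here; any
    unmatched target keeps a transferred score of zero.
    """
    cost = cdist(src_feats, tgt_feats)            # distances in CNN feature space
    src_idx, tgt_idx = linear_sum_assignment(cost)
    transferred = np.zeros(len(tgt_feats))
    transferred[tgt_idx] = src_scores[src_idx]    # move detector confidence across
    return np.argsort(-transferred)[:budget]      # most confident targets first

# Toy example: the target point closest to the high-score source sample
# receives its confidence and is selected first.
src = np.array([[0.0], [10.0]])
scores = np.array([0.9, 0.1])
tgt = np.array([[9.0], [1.0]])
picks = transfer_sampling(src, scores, tgt, budget=1)
```

Selecting the most confident transferred samples, rather than the most uncertain ones, is what makes rare positives quick to retrieve for the oracle.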
in IEEE Transactions on geoscience and remote sensing > vol 57 n° 12 (December 2019) . - pp 9524 - 9533 [article]

Correcting rural building annotations in OpenStreetMap using convolutional neural networks / John E. Vargas-Muñoz in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
[article]
Title: Correcting rural building annotations in OpenStreetMap using convolutional neural networks
Document type: Article/Communication
Authors: John E. Vargas-Muñoz; Sylvain Lobry; Alexandre X. Falcão; Devis Tuia
Year of publication: 2019
Pages: pp 283 - 293
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Web geomatics
[IGN terms] buildings
[IGN terms] Markov random field
[IGN terms] convolutional neural network classification
[IGN terms] geometric correction
[IGN terms] volunteered geographic information
[IGN terms] rural settlement
[IGN terms] database updating
[IGN terms] OpenStreetMap
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
[IGN terms] Tanzania
[IGN terms] Zimbabwe
[IGN terms] rural area
Abstract: (author) Rural building mapping is paramount to support demographic studies and to plan actions in response to crises affecting those areas. Rural building annotations exist in OpenStreetMap (OSM), but their quality and quantity are not sufficient for training models that can create accurate rural building maps. The problems with these annotations essentially fall into three categories: (i) most commonly, many annotations are geometrically misaligned with the updated imagery; (ii) some annotations do not correspond to buildings in the images (they are misannotations, or the buildings have been destroyed); and (iii) some annotations are missing for buildings in the images (the buildings were never annotated or were built between subsequent image acquisitions). First, we propose a method based on a Markov Random Field (MRF) to align the buildings with their annotations. The method maximizes the correlation between annotations and a building probability map while enforcing that nearby buildings have similar alignment vectors. Second, annotations with no evidence in the building probability map are removed. Third, we present a method to detect non-annotated buildings with predefined shapes and to add their annotations. The proposed methodology considerably improves the accuracy of the OSM annotations for two regions of Tanzania and Zimbabwe and is more accurate than state-of-the-art baselines.
Record number: A2019-038
Author affiliation: non-IGN
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.11.010
Online publication date: 06/12/2019
Online: https://doi.org/10.1016/j.isprsjprs.2018.11.010
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=91975
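The first step described above searches for the alignment that maximizes correlation between each annotation and a building probability map. A much-reduced sketch for a single building, dropping the MRF pairwise term that couples nearby buildings; the function name and shift range are assumptions:

```python
import numpy as np

def best_shift(annotation, prob_map, max_shift=2):
    """Exhaustively search the integer shift maximizing the overlap
    (unnormalized correlation) between a binary annotation mask and a
    building probability map. The paper additionally encourages nearby
    buildings to share similar alignment vectors; that term is omitted.
    """
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(annotation, (dy, dx), axis=(0, 1))
            score = float((shifted * prob_map).sum())
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Toy example: an annotation sitting one pixel up-left of the detected
# building; the search recovers the (1, 1) correcting shift.
ann = np.zeros((4, 4)); ann[1, 1] = 1.0
prob = np.zeros((4, 4)); prob[2, 2] = 1.0
shift = best_shift(ann, prob)
```

In the full method this per-building search becomes the unary term of an MRF whose pairwise term keeps neighbouring buildings' shifts consistent.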
in ISPRS Journal of photogrammetry and remote sensing > vol 147 (January 2019) . - pp 283 - 293 [article]

Copies (3)
Barcode        Call no.   Medium    Location                  Section        Availability
081-2019011    RAB        Journal   Centre de documentation   Reserve L003   Available
081-2019013    DEP-EXM    Journal   LASTIG                    Unit deposit   Not for loan
081-2019012    DEP-EAF    Journal   Nancy                     Unit deposit   Not for loan

Other records by this author:
- Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models / Diego Marcos in ISPRS Journal of photogrammetry and remote sensing, vol 145 - part A (November 2018)
- Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images / Michele Volpi in ISPRS Journal of photogrammetry and remote sensing, vol 144 (October 2018)
- Foreword to the special issue on urban remote sensing for smarter cities / Prashanth Reddy Marpu in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol 11 n° 8 (August 2018)
- Foreword to the theme issue on geospatial computer vision / Jan Dirk Wegner in ISPRS Journal of photogrammetry and remote sensing, vol 140 (June 2018)
- vol 140 - June 2018 - Geospatial computer vision (Bulletin of ISPRS Journal of photogrammetry and remote sensing) / Jan Dirk Wegner
- Foreword to the Special Issue on 'GeoVision: Computer Vision for Geospatial Applications' / Devis Tuia in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol 9 n° 7 (July 2016)
- Multiclass feature learning for hyperspectral image classification: Sparse and hierarchical solutions / Devis Tuia in ISPRS Journal of photogrammetry and remote sensing, vol 105 (July 2015)
- Semisupervised manifold alignment of multimodal remote sensing images / Devis Tuia in IEEE Transactions on geoscience and remote sensing, vol 52 n° 12 (December 2014)
- Semisupervised classification of remote sensing images with active queries / Jordi Munoz-Mari in IEEE Transactions on geoscience and remote sensing, vol 50 n° 10 Tome 1 (October 2012)
- Memory-based cluster sampling for remote sensing image classification / Michele Volpi in IEEE Transactions on geoscience and remote sensing, vol 50 n° 8 (August 2012)