Author details
Author: Loïc Landrieu
Comment: Researcher at LASTIG, STRUDEL team (September 2015 - March 2023), then at LIGM (ENPC)
Linked authorities:
idHAL: loic-landrieu
Documents available written by this author (36)
Scalable surface reconstruction with Delaunay-Graph neural networks / Raphaël Sulzer in Computer graphics forum, vol 40 n° 5 (2021)
[article]
Title: Scalable surface reconstruction with Delaunay-Graph neural networks
Document type: Article/Communication
Authors: Raphaël Sulzer, Author; Loïc Landrieu, Author; Renaud Marlet, Author; Bruno Vallet, Author
Year of publication: 2021
Projects: BIOM / Vallet, Bruno
Conference: SGP 2021, Symposium on Geometry Processing, 12/07/2021-14/07/2021, Toronto, Ontario, Canada, open access proceedings
Article on page(s): pp 157-167
General note: bibliography. The presentation of this work at SGP 2021 is available at https://youtu.be/KIrCDGhS10o
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetric applications
[IGN terms] graph-cut algorithm
[IGN terms] deep learning
[IGN terms] context awareness
[IGN terms] object reconstruction
[IGN terms] graph neural network
[IGN terms] point cloud
[IGN terms] tetrahedron
[IGN terms] Delaunay triangulation
Abstract: (author) We introduce a novel learning-based, visibility-aware surface reconstruction method for large-scale, defect-laden point clouds. Our approach can cope with the scale and variety of point cloud defects encountered in real-life Multi-View Stereo (MVS) acquisitions. Our method relies on a 3D Delaunay tetrahedralization whose cells are classified as inside or outside the surface by a graph neural network and an energy model solvable with a graph cut. Our model, making use of both local geometric attributes and line-of-sight visibility information, is able to learn a visibility model from a small amount of synthetic training data and generalizes to real-life acquisitions. Combining the efficiency of deep learning methods and the scalability of energy-based models, our approach outperforms both learning-based and non-learning-based reconstruction algorithms on two publicly available reconstruction benchmarks.
Record number: A2021-400
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1111/cgf.14364
Online: https://doi.org/10.1111/cgf.14364
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98219
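The pipeline summarized in the abstract above (tetrahedral cells classified as inside or outside, with the final labeling solved by a graph cut) can be illustrated with a toy binary-labeling example. This is a minimal sketch, not the authors' implementation: the unary costs stand in for the graph neural network's per-cell predictions, `min_cut_labels` and its tiny Edmonds-Karp solver are hypothetical helpers, and a real system would use a dedicated max-flow library.

```python
from collections import defaultdict, deque

def min_cut_labels(n, cost_out, cost_in, smooth):
    """Label n cells as inside (1) or outside (0) via an s-t min cut.
    cost_out[i]/cost_in[i]: unary penalty for labeling cell i outside/inside
    (in the paper, such scores would come from the graph neural network).
    smooth: dict {(i, j): w}, penalty paid when adjacent cells disagree."""
    S, T = n, n + 1
    cap = defaultdict(lambda: defaultdict(float))
    for i in range(n):
        cap[S][i] += cost_out[i]  # cut s->i  => i on sink side   => label 0
        cap[i][T] += cost_in[i]   # cut i->t  => i on source side => label 1
    for (i, j), w in smooth.items():
        cap[i][j] += w
        cap[j][i] += w
    # Edmonds-Karp max flow (max-flow value = min-cut value)
    while True:
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v, c in list(cap[u].items()):
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
    # cells still reachable from the source sit on the 'inside' half of the cut
    reachable, queue = {S}, deque([S])
    while queue:
        u = queue.popleft()
        for v, c in cap[u].items():
            if c > 1e-12 and v not in reachable:
                reachable.add(v)
                queue.append(v)
    return [1 if i in reachable else 0 for i in range(n)]
```

With four chain-adjacent cells whose unaries favor inside on one end and outside on the other, the optimal cut pays only the single smoothness edge it must cross, producing labels [1, 1, 0, 0].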
in Computer graphics forum > vol 40 n° 5 (2021), pp 157-167

Multi-modal learning in photogrammetry and remote sensing / Michael Ying Yang in ISPRS Journal of photogrammetry and remote sensing, vol 176 (June 2021)
[article]
Title: Multi-modal learning in photogrammetry and remote sensing
Document type: Article/Communication
Authors: Michael Ying Yang, Author; Loïc Landrieu, Author; Devis Tuia, Author; Charles Toth, Author
Year of publication: 2021
Projects: 1-No project / Vallet, Bruno
Article on page(s): pp 54-54
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Remote sensing
[IGN terms] image acquisition
[IGN terms] machine learning
[IGN terms] multi-source data
Abstract: (Author) [Editorial] There is a growing interest in the photogrammetry and remote sensing community for multi-modal data, i.e., data simultaneously acquired from a variety of platforms, including satellites, aircraft, UAS/UGS, autonomous vehicles, etc., by different sensors, such as radar, optical, and LiDAR. Thanks to their different spatial, spectral, or temporal resolutions, the use of complementary data sources leads to richer and more robust information extraction. We expect that the use of multiple modalities will rapidly become a standard approach in the future. The main difficulty of jointly processing multi-modal data is due to the differences in structure among modalities. Another issue is the unbalanced number of labelled samples available across modalities, resulting in a significant gap in performance when models are trained separately. Clearly, the photogrammetry and remote sensing community has not yet exploited the full potential of multi-modal data. Neural networks seem well suited for accommodating different data sources, thanks to their capabilities to learn representations adapted to each task in an end-to-end fashion. In this context, there is a strong need for research and development of approaches for multi-sensory and multi-modal deep learning within the geospatial domain.
Record number: A2021-364
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2021.03.022
Online publication date: 23/04/2021
Online: https://doi.org/10.1016/j.isprsjprs.2021.03.022
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=97660
in ISPRS Journal of photogrammetry and remote sensing > vol 176 (June 2021), pp 54-54

Deep learning pour les données 3D en télédétection / Loïc Landrieu (2021)
Title: Deep learning pour les données 3D en télédétection
Document type: Article/Communication
Authors: Loïc Landrieu, Author
Publisher: Saint-Mandé: Institut national de l'information géographique et forestière - IGN (2012-)
Year of publication: 2021
Conference: Journées 2021 inter-GdR CNRS MAGIS-MADICS-IGRV, Observation 3D : outils et verrous, 24/11/2021-25/11/2021, Paris, France, open access proceedings
Languages: French (fre)
Record number: C2021-026
Authors' affiliation: UGE-LASTIG (2020- )
Nature: Communication
nature-HAL: ComSansActesPubliés-Unpublished
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98974

Leveraging class hierarchies with metric-guided prototype learning / Vivien Sainte Fare Garnot (2021)
Title: Leveraging class hierarchies with metric-guided prototype learning
Document type: Article/Communication
Authors: Vivien Sainte Fare Garnot, Author; Loïc Landrieu, Author
Publisher: Ithaca [New York - United States]: ArXiv - Cornell University
Year of publication: 2021
Projects: 1-No project / Vallet, Bruno
Conference: BMVC 2021, 32nd British Machine Vision Conference, 22/11/2021-25/11/2021, online, United Kingdom, OA proceedings
Extent: 31 p.
General note: bibliography; preprint deposited on ArXiv
Languages: English (eng)
Descriptor: [IGN subject headings] Artificial intelligence
[IGN terms] machine learning
[IGN terms] classification
[IGN terms] error matrix
[IGN terms] prototype
[IGN terms] semantic segmentation
Abstract: (author) Not all errors are created equal. This is especially true for many key machine learning applications. In the case of classification tasks, the severity of errors can be summarized under the form of a cost matrix, which assesses the gravity of confusing each pair of classes. When the target classes are organized into a hierarchical structure, this matrix defines a metric. We propose to integrate this metric in a new and versatile classification layer in order to model the disparity of errors. Our method relies on jointly learning a feature-extracting network and a set of class representations, or prototypes, which incorporate the error metric into their relative arrangement in the embedding space. Our approach allows for consistent improvement of the severity of the network's errors with regard to the cost matrix. Furthermore, when the induced metric contains insight on the data structure, our approach improves the overall precision as well. Experiments on four different public datasets -- from agricultural time series classification to depth image semantic segmentation -- validate our approach.
Record number: C2021-027
Authors' affiliation: UGE-LASTIG (2020- )
Other associated URL: to ArXiv
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Poster
nature-HAL: Poster-avec-CL
DOI: 10.48550/arXiv.2007.03047
Online: https://www.bmvc2021-virtualconference.com/assets/papers/0084.pdf
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98983
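The key idea in the abstract above, prototypes whose relative arrangement encodes the hierarchy-induced error metric, can be sketched in isolation. The snippet below is a toy NumPy illustration, not the paper's method: it only fits class prototypes so that their pairwise Euclidean distances approximate a given cost matrix (a stress/MDS-style objective); `fit_prototypes` and `stress` are hypothetical names, and the joint training with a feature-extracting network is omitted.

```python
import numpy as np

def fit_prototypes(D, dim=2, lr=0.02, steps=2000, seed=0):
    """Place one prototype per class so that pairwise Euclidean distances
    approximate the cost matrix D (symmetric, zero diagonal)."""
    rng = np.random.default_rng(seed)
    k = D.shape[0]
    P = rng.normal(scale=0.1, size=(k, dim))
    for _ in range(steps):
        diff = P[:, None, :] - P[None, :, :]      # (k, k, dim)
        dist = np.linalg.norm(diff, axis=-1)      # (k, k)
        dist = np.maximum(dist, 1e-9)
        np.fill_diagonal(dist, 1.0)               # avoid division by zero
        err = dist - D
        np.fill_diagonal(err, 0.0)
        grad = (err / dist)[:, :, None] * diff    # stress gradient, up to a constant
        P -= lr * grad.sum(axis=1)
    return P

def stress(P, D):
    """Sum of squared differences between prototype distances and D."""
    dist = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    mask = ~np.eye(D.shape[0], dtype=bool)
    return float(((dist - D)[mask] ** 2).sum())
```

Classification then reduces to assigning each embedded sample to its nearest prototype, so classes that the cost matrix deems similar end up close together in the embedding space.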
Title: Multi-modal temporal attention models for crop mapping from satellite time series
Document type: Article/Communication
Authors: Vivien Sainte Fare Garnot, Author; Loïc Landrieu, Author; Nesrine Chehata, Author
Publisher: Saint-Mandé: Institut national de l'information géographique et forestière - IGN (2012-)
Year of publication: 2021
Languages: English (eng)
Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] image database
[IGN terms] agricultural map
[IGN terms] optical image
[IGN terms] radar image
[IGN terms] Pastis
[IGN terms] image segmentation
Abstract: (author) Optical and radar satellite time series are synergetic: optical images contain rich spectral information, while C-band radar captures useful geometrical information and is immune to cloud cover. Motivated by the recent success of temporal attention-based methods across multiple crop mapping tasks, we propose to investigate how these models can be adapted to operate on several modalities. We implement and evaluate multiple fusion schemes, including a novel approach and simple adjustments to the training procedure, significantly improving performance and efficiency with little added complexity. We show that most fusion schemes have advantages and drawbacks, making them relevant for specific settings. We then evaluate the benefit of multimodality across several tasks: parcel classification, pixel-based segmentation, and panoptic parcel segmentation. We show that by leveraging both optical and radar time series, multimodal temporal attention-based models can outmatch single-modality models in terms of performance and resilience to cloud cover. To conduct these experiments, we augment the PASTIS dataset with spatially aligned radar image time series. The resulting dataset, PASTIS-R, constitutes the first large-scale, multimodal, and open-access satellite time series dataset with semantic and instance annotations.
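Two of the fusion schemes the abstract contrasts (input-level concatenation of aligned modalities versus decision-level averaging of per-modality predictions), together with a bare-bones attention pooling over time, can be sketched as follows. This is a hedged NumPy illustration only: the function names and shapes are assumptions, and the paper's actual models use learned temporal attention networks rather than a fixed query vector.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def early_fusion(optical, radar):
    """Input-level fusion: concatenate channels of temporally aligned
    optical (T, C_opt) and radar (T, C_sar) series -> (T, C_opt + C_sar)."""
    return np.concatenate([optical, radar], axis=-1)

def temporal_attention_pool(x, query):
    """Collapse a series (T, C) to one feature vector (C,) using attention
    weights from a query vector (C,) (a stand-in for learned parameters)."""
    scores = x @ query / np.sqrt(x.shape[-1])  # (T,)
    return softmax(scores) @ x                 # (C,)

def late_fusion(logits_optical, logits_radar):
    """Decision-level fusion: average per-modality class probabilities."""
    return 0.5 * (softmax(logits_optical) + softmax(logits_radar))
```

Early fusion requires the modalities to be temporally aligned (as in the PASTIS-R augmentation), whereas late fusion only requires each branch to emit class scores, which is why the two schemes suit different settings.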
Record number: P2021-005
Authors' affiliation: UGE-LASTIG (2020- )
Theme: IMAGERY
Nature: Preprint
nature-HAL: Préprint
DOI: none
Online publication date: 14/12/2021
Online: https://arxiv.org/abs/2112.07558v1
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99392

Other records by this author:
Panoptic segmentation of satellite image time series with convolutional temporal attention networks / Vivien Sainte Fare Garnot (2021)
Suivi de la rotation des cultures à partir de séries temporelles d'images satellite / Félix Quinton (2021)
Supplementary material for: Panoptic segmentation of satellite image time series with convolutional temporal attention networks / Vivien Sainte Fare Garnot (2021)
Vegetation stratum occupancy prediction from airborne LiDAR 3D point clouds / Ekaterina Kalinicheva (2021)
Improved crop classification with rotation knowledge using Sentinel-1 and -2 time series / Sébastien Giordano in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 7 (July 2020)
Lightweight temporal self-attention for classifying satellite images time series / Vivien Sainte Fare Garnot (2020)
Satellite image time series classification with pixel-set encoders and temporal self-attention / Vivien Sainte Fare Garnot (2020)
Torch-Points3D: A modular multi-task framework for reproducible deep learning on 3D point clouds / Thomas Chaton (2020)
Piecewise-planar approximation of large 3D data as graph-structured optimization / Stéphane Guinard in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol IV-2/W5 (May 2019)