Author details
Author: Sébastien Lefèvre
Documents available written by this author (7)
Siamese KPConv: 3D multiple change detection from raw point clouds using deep learning / Iris de Gelis in ISPRS Journal of photogrammetry and remote sensing, vol 197 (March 2023)
[article]
Title: Siamese KPConv: 3D multiple change detection from raw point clouds using deep learning
Document type: Article/Communication
Authors: Iris de Gelis, Author; Sébastien Lefèvre, Author; Thomas Corpetti, Author
Publication year: 2023
Article on page(s): pp 274 - 291
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] deep learning
[IGN terms] building
[IGN terms] change detection
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] digital surface model
[IGN terms] Siamese neural network
[IGN terms] point cloud
[IGN terms] vegetation
[IGN terms] urban area
Abstract: (author) This study is concerned with urban change detection and categorization in point clouds. In such situations, objects are mainly characterized by their vertical axis, and the use of native 3D data such as 3D Point Clouds (PCs) is generally preferred to rasterized versions because of the significant loss of information implied by any rasterization process. Yet, for obvious practical reasons, most existing studies focus only on 2D images for change detection purposes. In this paper, we propose a method capable of performing change detection directly within 3D data. Despite recent deep learning developments in remote sensing, to the best of our knowledge there is no such method for multi-class change segmentation that directly processes raw 3D PCs. Therefore, building on advances in deep learning for change detection in 2D images and for the analysis of 3D point clouds, we propose a deep Siamese KPConv network that operates on raw 3D PCs to perform change detection and categorization in a single step. Experiments are conducted on synthetic and real data of various kinds (LiDAR, multi-sensor). Tests performed on simulated low-density LiDAR and multi-sensor datasets show that our proposed method can obtain up to 80% mean IoU over change classes, an improvement ranging from 10% to 30% over the state of the art. A similar range of improvement is attainable on real data. We then show that pre-training Siamese KPConv on simulated PCs allows us to greatly reduce (by more than a factor of 3,000) the annotations required on real data, a highly significant result for practical scenarios. Finally, an adaptation of Siamese KPConv is presented to deal with change classification at the PC scale. Our network outperforms the current state-of-the-art deep learning method by 23% and 15% mean IoU when assessed on synthetic and public Change3D datasets, respectively. The code is available at the following link: https://github.com/IdeGelis/torch-points3d-SiameseKPConv.
Record number: A2023-147
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2023.02.001
Online publication date: 17/02/2023
Online: https://doi.org/10.1016/j.isprsjprs.2023.02.001
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102805
in ISPRS Journal of photogrammetry and remote sensing > vol 197 (March 2023) . - pp 274 - 291 [article]
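The record above describes a Siamese network in which both epochs of a point cloud pass through a shared KPConv encoder and their features are compared to label changes per point. The sketch below is a minimal, hedged illustration of that principle only: a toy MLP encoder stands in for KPConv, the point-wise difference assumes pre-matched points (the real method compares neighbourhoods across clouds), and all names and dimensions are assumptions rather than the authors' torch-points3d implementation.

```python
# Minimal sketch of the Siamese change-detection idea: encode both epochs of a
# point cloud with a SHARED encoder, take the feature difference, and predict a
# per-point change class. A toy MLP stands in for the KPConv backbone; this
# illustrates the principle, not the authors' torch-points3d code.
import torch
import torch.nn as nn


class SharedPointEncoder(nn.Module):
    """Per-point MLP encoder (illustrative stand-in for KPConv)."""

    def __init__(self, in_dim=3, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):            # pts: (B, N, 3)
        return self.mlp(pts)           # (B, N, feat_dim)


class SiameseChangeNet(nn.Module):
    """Shared encoder + feature difference + per-point change classifier."""

    def __init__(self, n_change_classes=4, feat_dim=64):
        super().__init__()
        self.encoder = SharedPointEncoder(feat_dim=feat_dim)   # shared weights
        self.classifier = nn.Linear(feat_dim, n_change_classes)

    def forward(self, pc_t0, pc_t1):
        f0 = self.encoder(pc_t0)       # features of epoch 0
        f1 = self.encoder(pc_t1)       # same weights applied to epoch 1
        diff = f1 - f0                 # change signal (assumes matched points)
        return self.classifier(diff)   # (B, N, n_change_classes) logits


if __name__ == "__main__":
    net = SiameseChangeNet()
    t0 = torch.rand(2, 1024, 3)        # two toy clouds, 1024 points each
    t1 = torch.rand(2, 1024, 3)
    print(net(t0, t1).shape)           # torch.Size([2, 1024, 4])
```

The weight sharing is the key design choice: identical features are extracted from both epochs, so the difference isolates what changed rather than differences between two encoders.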
Title: Learning to map street-side objects using multiple views
Document type: Thesis/HDR
Authors: Ahmed Samy Nassar, Author; Sébastien Lefèvre, Thesis supervisor; Jan Dirk Wegner, Thesis supervisor
Publisher: Vannes: Université de Bretagne Sud
Publication year: 2021
Extent: 139 p.
Format: 21 x 30 cm
General note: bibliography
PhD thesis of the Université de Bretagne Sud, specialty: Computer Science
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] urban tree
[IGN terms] web mapping
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] multi-source data
[IGN terms] pose estimation
[IGN terms] geolocation
[IGN terms] graph
[IGN terms] Street View image
[IGN terms] inventory
[IGN terms] street furniture
[IGN terms] computer vision
Decimal index: THESE Theses and HDR
Abstract: (author) Creating inventories of street-side objects and monitoring them in cities is a labor-intensive and costly process. Field workers typically conduct this process on site to record properties of each object, for example the location, species, height, and health of a tree. Gathering such information at the scale of a city is challenging. With the abundance of imagery, adequate coverage of a city is achieved from different views provided by online mapping services (e.g., Google Maps and Street View, Mapillary). The availability of such imagery allows efficient creation and updating of street-side object inventories using computer vision methods such as object detection and multiple object tracking. This thesis aims at detecting and geo-localizing street-side objects, especially trees and street signs, from multiple views using novel deep learning methods.
Contents note: 1- Introduction
2- Background
3- Multi-view instance matching with learned geometric soft-constraints
4- Simultaneous multi-view instance detection with learned geometric soft-constraints
5- GeoGraphV2: Graph-based aerial & street view multi-view object detection with geometric cues end-to-end
6- Conclusion
Record number: 28674
Authors' affiliation: non IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: French thesis
Thesis note: PhD thesis: Computer Science: Université de Bretagne Sud: 2021
Host organization: IRISA
DOI: none
Online: https://hal.science/tel-03523658
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99920
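The thesis record above is about geo-localizing street-side objects (e.g. trees, street signs) seen in several views. As a hedged illustration of the underlying geometry only, not the learned multi-view matching method of the thesis, the sketch below triangulates an object position from two geo-referenced street-level views by intersecting their viewing rays in a local metric frame; the function name and coordinate conventions are assumptions.

```python
# Hedged geometric baseline: given two camera positions and compass bearings
# toward the same street-side object, estimate the object's position as the
# least-squares intersection of the two viewing rays (2D, local metric frame).
import numpy as np


def triangulate_2d(cam_positions, bearings_deg):
    """cam_positions: (K, 2) camera positions in metres (local frame).
    bearings_deg:  (K,) compass bearings toward the object, degrees from north.
    Returns the least-squares intersection point (x, y)."""
    A, b = [], []
    for (x, y), theta in zip(cam_positions, bearings_deg):
        t = np.deg2rad(theta)
        d = np.array([np.sin(t), np.cos(t)])   # unit direction of the viewing ray
        n = np.array([-d[1], d[0]])            # normal to the ray
        A.append(n)                            # constraint: n . p = n . c for points p on the ray
        b.append(n @ np.array([x, y]))
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p


if __name__ == "__main__":
    cams = np.array([[0.0, 0.0], [10.0, 0.0]])  # two cameras 10 m apart
    bearings = [45.0, 315.0]                    # both look toward (5, 5)
    print(triangulate_2d(cams, bearings))       # ≈ [5. 5.]
```

In the thesis setting, the hard part handled by learning is deciding which detections across views correspond to the same physical object before any such geometric estimate is made.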
Title: Multispectral object detection
Document type: Thesis/HDR
Authors: Heng Zhang, Author; Elisa Fromont, Thesis supervisor; Sébastien Lefèvre, Thesis supervisor
Publisher: Rennes: Université de Rennes 1
Publication year: 2021
Extent: 114 p.
Format: 21 x 30 cm
General note: bibliography
Thesis presented for the degree of Doctor in Computer Science of the Université de Rennes 1
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] thermal camera
[IGN terms] object detection
[IGN terms] unlabeled training data
[IGN terms] efficiency
[IGN terms] multi-source data fusion
[IGN terms] multiband image
[IGN terms] classification accuracy
[IGN terms] model quality
[IGN terms] semantic segmentation
Decimal index: THESE Theses and HDR
Abstract: (Author) Using only RGB cameras for automatic outdoor scene analysis is challenging when, for example, facing insufficient illumination or adverse weather. To improve recognition reliability, multispectral systems add additional cameras (e.g., infra-red) and perform object detection from multispectral data. Although multispectral scene analysis with deep learning has been shown to have great potential, there are still many open research questions and it has not been widely deployed in industrial contexts. In this thesis, we investigated three main challenges of multispectral object detection: (1) the fast and accurate detection of objects of interest in images; (2) the dynamic and adaptive fusion of information from different modalities; (3) low-cost and low-energy multispectral object detection and the reduction of its manual annotation effort. For the first challenge, we first optimize the label assignment of object detection training with a mutual guidance strategy between the classification and localization tasks; we then achieve efficient compression of object detection models by including teacher-student prediction disagreements in a feature-based knowledge distillation framework. With regard to the second challenge, three different multispectral feature fusion schemes are proposed to deal with the most difficult fusion cases, where different cameras provide contradictory information. For the third challenge, a novel modality distillation framework is first presented to tackle the hardware and software constraints of current multispectral systems; then a multi-sensor-based active learning strategy is designed to reduce the labeling cost of constructing multispectral datasets.
Contents note: 1. Introduction
1.1 Context and motivations
1.2 Thesis outline
2. Deep learning background
2.1 General object detection
2.2 Multispectral object detection
2.3 Knowledge distillation
2.4 Active learning
2.5 Datasets
3. Efficient object detection on embedded devices
3.1 Best practices for training object detection models
3.2 Mutual Guidance for Anchor Matching
3.3 Prediction Disagreement aware Feature Distillation
3.4 Experimental results
4. Information fusion from multispectral data
4.1 Multispectral Fusion with Cyclic Fuse-and-Refine
4.2 Progressive Spectral Fusion
4.3 Experimental results for CFR and PS-Fuse
4.4 Guided Attentive Feature Fusion
4.5 Experimental results for GAFF
5. Sensors and annotations: low cost multispectral data processing
5.1 Deep Active Learning from Multispectral Data
5.2 Low-cost Multispectral Scene Analysis with Modality Distillation
6. Conclusions and future works
6.1 Conclusions
6.2 Application to remote sensing data
6.3 Perspectives
Record number: 26765
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: French thesis
Thesis note: PhD thesis: Computer Science: Rennes 1: 2021
Host organization: (IRISA) INRIA
nature-HAL: Thèse
DOI: none
Online publication date: 17/01/2022
Online: https://hal.science/tel-03530257/
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99855
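The thesis abstract above discusses fusing features from an RGB branch and a thermal branch so that the more reliable modality dominates where the other is uninformative. The sketch below is a generic, hedged illustration of gated feature-level fusion in PyTorch; it mirrors that general idea but is not any specific module from the thesis, and the module name and channel sizes are assumptions.

```python
# Hedged sketch of feature-level multispectral fusion: a learned, spatially
# varying gate in [0, 1] mixes RGB and thermal feature maps per pixel.
import torch
import torch.nn as nn


class GatedModalityFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Predict a per-pixel gate from both modalities' features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_rgb, feat_thermal):       # both: (B, C, H, W)
        g = self.gate(torch.cat([feat_rgb, feat_thermal], dim=1))
        return g * feat_rgb + (1.0 - g) * feat_thermal


if __name__ == "__main__":
    fuse = GatedModalityFusion(channels=64)
    rgb = torch.rand(1, 64, 32, 32)
    thermal = torch.rand(1, 64, 32, 32)
    print(fuse(rgb, thermal).shape)                   # torch.Size([1, 64, 32, 32])
```

Such a fused map would then feed a standard detection head; the gate is what lets the network down-weight, say, the RGB branch at night or the thermal branch when temperatures are uniform.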
Foreword to the special issue on paving the way for the future of urban remote sensing / Sébastien Lefèvre in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol 13 ([01/01/2020])
[article]
Title: Foreword to the special issue on paving the way for the future of urban remote sensing
Document type: Article/Communication
Authors: Sébastien Lefèvre, Scientific editor; Thomas Corpetti, Scientific editor; Monika Kuffer, Scientific editor; Hannes Taubenböck, Scientific editor; Clément Mallet, Scientific editor
Publication year: 2020
Article on page(s): pp 6533 - 6536
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing
Record number: A2020-278
Authors' affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/JSTARS.2020.3046096
Online: https://doi.org/10.1109/JSTARS.2020.3046096
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96817
in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing > vol 13 [01/01/2020] . - pp 6533 - 6536 [article]
Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning / Clément Dechesne in Remote sensing, Vol 11 n° 24 (December-2 2019)
[article]
Title: Ship identification and characterization in Sentinel-1 SAR images with multi-task deep learning
Document type: Article/Communication
Authors: Clément Dechesne, Author; Sébastien Lefèvre, Author; Rodolphe Vadaine, Author; Guillaume Hajduch, Author; Ronan Fablet, Author
Publication year: 2019
Projects: SESAME / Fablet, Ronan
Article on page(s): n° 2997
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Radar image processing and applications
[IGN terms] deep learning
[IGN terms] classification by convolutional neural network
[IGN terms] object detection
[IGN terms] target detection
[IGN terms] Sentinel-SAR image
[IGN terms] ship
[IGN terms] moving object
Abstract: (author) The monitoring and surveillance of maritime activities are critical issues in both military and civilian fields, including among others fisheries monitoring, maritime traffic surveillance, coastal and at-sea safety operations, and tactical situations. In operational contexts, ship detection and identification is traditionally performed by a human observer who identifies all kinds of ships from a visual analysis of remotely sensed images. Such a task is very time-consuming and cannot be conducted at a very large scale, while Sentinel-1 SAR data now provide regular and worldwide coverage. Meanwhile, with the emergence of GPUs, deep learning methods are now established as state-of-the-art solutions for computer vision, replacing human intervention in many contexts. They have been shown to be well suited to ship detection, most often with very high resolution SAR or optical imagery. In this paper, we go one step further and investigate a deep neural network for the joint classification and characterization of ships from Sentinel-1 SAR data. We benefit from the synergies between AIS (Automatic Identification System) and Sentinel-1 data to build significant training datasets. We design a multi-task neural network architecture composed of one joint convolutional network connected to three task-specific networks, namely for ship detection, classification, and length estimation. The experimental assessment shows that our network provides promising results, with accurate classification and length performance (classification overall accuracy: 97.25%, mean length error: 4.65 m ± 8.55 m).
Record number: A2019-632
Authors' affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/rs11242997
Online publication date: 13/12/2019
Online: https://doi.org/10.3390/rs11242997
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95325
in Remote sensing > Vol 11 n° 24 (December-2 2019) . - n° 2997 [article]
Attribute profiles on derived features for urban land cover classification / Bharath Bhushan Damodaran in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 3 (March 2017) Permalink
Vector attribute profiles for hyperspectral image classification / Erchan Aptoula in IEEE Transactions on geoscience and remote sensing, vol 54 n° 6 (June 2016) Permalink
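The ship identification record above describes a multi-task architecture with one shared convolutional trunk connected to three task-specific heads (detection, classification, length estimation). The sketch below is a hedged, minimal PyTorch illustration of that layout only; layer sizes, names, and the patch-level formulation are assumptions, not the authors' network.

```python
# Hedged sketch of a multi-task layout: one shared SAR feature extractor
# feeding three heads for ship presence, ship type, and length estimation.
import torch
import torch.nn as nn


class MultiTaskShipNet(nn.Module):
    def __init__(self, n_ship_types=5):
        super().__init__()
        self.trunk = nn.Sequential(                     # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.detect_head = nn.Linear(32, 1)             # ship present? (logit)
        self.class_head = nn.Linear(32, n_ship_types)   # ship type logits
        self.length_head = nn.Linear(32, 1)             # length in metres

    def forward(self, sar_patch):                       # (B, 1, H, W) SAR patch
        f = self.trunk(sar_patch)
        return self.detect_head(f), self.class_head(f), self.length_head(f)


if __name__ == "__main__":
    net = MultiTaskShipNet()
    det, cls, length = net(torch.rand(4, 1, 64, 64))
    print(det.shape, cls.shape, length.shape)           # (4,1) (4,5) (4,1)
```

Training such a layout would typically sum a detection loss, a classification loss, and a length-regression loss computed from the three outputs, so the shared trunk learns features useful to all tasks.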