Publisher details
Documents available from this publisher (137)
Analysis and modelling of remote sensing reflectance during anoxic crisis in the Thau lagoon using satellite images / Manchun Lei (2019)
Title: Analysis and modelling of remote sensing reflectance during anoxic crisis in the Thau lagoon using satellite images
Document type: Article/Communication
Authors: Manchun Lei, Author; Audrey Minghelli, Author; Annie Fiandrino, Author
Publisher: New York : Institute of Electrical and Electronics Engineers IEEE
Publication year: 2019
Projects: 1-No project /
Conference: Oceans Europe 2019, MTS / IEEE, 17/06/2019 - 20/06/2019, Marseille, France, Proceedings IEEE
Pagination: pp 1 - 6
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] capteur multibande
[Termes IGN] image Envisat-MERIS
[Termes IGN] oxygène (O2)
[Termes IGN] pollution des eaux
[Termes IGN] réflectance spectrale
[Termes IGN] Thau (bassin de)
Free keywords: bactéries du soufre
Abstract: (author) This study concerns the spectral behavior of the remote sensing reflectance Rrs of Thau lagoon water during the anoxic crises of the summers of 2003 and 2006. The proliferation of photosynthetic sulfur-oxidizing bacteria (SOB) in anoxic water makes the water opaque and turbid, eventually milky. An identification method using the 665 nm, 709 nm and 754 nm bands of the Medium Resolution Imaging Spectrometer (MERIS) sensor is proposed to identify anoxic water filled with SOB. The reflectance of the SOB layer is described as a specific reflectance related to the SOB concentration. The results show that water contaminated by SOB during an anoxic crisis can be quantitatively remote sensed with a multispectral sensor.
Record number: C2019-021
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Other associated URL: to HAL
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/OCEANSE.2019.8867565
Online publication date: 14/10/2019
Online: http://doi.org/10.1109/OCEANSE.2019.8867565
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93845
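The three-band identification method summarized in the abstract can be illustrated with a line-height index: the Rrs peak at 709 nm, measured above a linear baseline drawn between 665 nm and 754 nm, grows with the milky SOB signal. This is a hedged sketch only; the exact index and the threshold value are assumptions, not taken from the paper.

```python
# Hypothetical three-band line-height test on MERIS remote sensing
# reflectance; the actual criterion of Lei et al. is not given in
# this notice.

def line_height_709(rrs_665, rrs_709, rrs_754):
    """Height of Rrs at 709 nm above the 665-754 nm linear baseline."""
    w = (709.0 - 665.0) / (754.0 - 665.0)
    baseline = rrs_665 + w * (rrs_754 - rrs_665)
    return rrs_709 - baseline

def looks_anoxic(rrs_665, rrs_709, rrs_754, threshold=0.005):
    """Flag a pixel as SOB-dominated when the 709 nm peak exceeds an
    (assumed) threshold; SOB-rich milky water shows a strong peak."""
    return line_height_709(rrs_665, rrs_709, rrs_754) > threshold
```

A strong reflectance peak at 709 nm relative to its neighbours is what distinguishes the turbid, SOB-filled water from clear water in this kind of scheme.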
Geometric camera pose refinement with learned depth maps / Nathan Piasco (2019)

Title: Geometric camera pose refinement with learned depth maps
Document type: Article/Communication
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Cédric Demonceaux, Author; Valérie Gouet-Brunet, Author
Publisher: New York : Institute of Electrical and Electronics Engineers IEEE
Publication year: 2019
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ICIP 2019, 26th IEEE International Conference on Image Processing, 22/09/2019 - 25/09/2019, Taipei, Taiwan, Proceedings IEEE
Pagination: 5 p.
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme ICP
[Termes IGN] carte de profondeur
[Termes IGN] estimation de pose
[Termes IGN] réseau neuronal convolutif
[Termes IGN] scène intérieure
[Termes IGN] semis de points
Abstract: (author) We present a new method for image-only camera relocalisation composed of a fast image retrieval step followed by pose refinement based on ICP (Iterative Closest Point). The first step finds an initial pose for the query by evaluating image similarity with low-dimensional global deep descriptors. Subsequently, a fully convolutional deep encoder-decoder neural network predicts a dense depth map from the query image. We use this depth map to create a local point cloud and refine the initial query pose with an ICP algorithm. We demonstrate the effectiveness of our new approach on various indoor scenes. Compared to learned pose-regression methods, our proposal can be used on multiple scenes without needing scene-specific weights, while showing equivalent results.
Record number: C2019-015
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICIP.2019.8803014
Online publication date: 26/08/2019
Online: https://doi.org/10.1109/ICIP.2019.8803014
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93279
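The refinement pipeline described above back-projects the predicted dense depth map into a local point cloud before aligning it with ICP. A minimal sketch of that back-projection step, assuming a pinhole camera with hypothetical intrinsics `fx`, `fy`, `cx`, `cy` (the paper's camera model details are not given in this notice):

```python
import numpy as np

# Back-project an (H, W) depth map into an (H*W, 3) local point cloud.
# The resulting cloud would then be registered against the reference
# model with an ICP algorithm to refine the retrieved initial pose.

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Return an (H*W, 3) point cloud from an (H, W) depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each pixel is lifted along its viewing ray to the predicted depth, which is the standard inverse of the pinhole projection.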
Learning scene geometry for visual localization in challenging conditions / Nathan Piasco (2019)

Title: Learning scene geometry for visual localization in challenging conditions
Document type: Article/Communication
Authors: Nathan Piasco, Author; Désiré Sidibé, Author; Valérie Gouet-Brunet, Author; Cédric Demonceaux, Author
Publisher: New York : Institute of Electrical and Electronics Engineers IEEE
Publication year: 2019
Projects: PLaTINUM / Gouet-Brunet, Valérie
Conference: ICRA 2019, International Conference on Robotics and Automation, 20/05/2019 - 24/05/2019, Montréal, Québec - Canada, Proceedings IEEE
Pagination: pp 9094 - 9100
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] analyse visuelle
[Termes IGN] appariement d'images
[Termes IGN] carte de profondeur
[Termes IGN] descripteur
[Termes IGN] géométrie de l'image
[Termes IGN] image RVB
[Termes IGN] localisation basée vision
[Termes IGN] précision de localisation
[Termes IGN] prise de vue nocturne
[Termes IGN] robotique
[Termes IGN] scène urbaine
[Termes IGN] variation diurne
[Termes IGN] variation saisonnière
[Termes IGN] vision par ordinateur
Abstract: (author) We propose a new approach for outdoor large-scale image-based localization that can deal with challenging scenarios such as cross-season, cross-weather, day/night and long-term localization. The key component of our method is a new learned global image descriptor that can effectively benefit from scene geometry information during training. At test time, our system is capable of inferring the depth map related to the query image and of using it to increase localization accuracy. We increase recall@1 performance by 2.15% on cross-weather and long-term localization scenarios and by 4.24 points on a challenging winter/summer localization sequence versus state-of-the-art methods. Our method can also use weakly annotated data to localize night images against a reference dataset of daytime images.
Record number: C2019-002
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICRA.2019.8794221
Online publication date: 12/08/2019
Online: http://doi.org/10.1109/ICRA.2019.8794221
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93774
Digital documents
Open access: Learning scene geometry... - author PDF (Adobe Acrobat PDF)

LU-Net, An efficient network for 3D LiDAR point cloud semantic segmentation based on end-to-end-learned 3D features and U-Net / Pierre Biasutti (2019)
Title: LU-Net, An efficient network for 3D LiDAR point cloud semantic segmentation based on end-to-end-learned 3D features and U-Net
Document type: Article/Communication
Authors: Pierre Biasutti, Author; Vincent Lepetit, Author; Mathieu Brédif, Author; Jean-François Aujol, Author; Aurélie Bugeau, Author
Publisher: New York : Institute of Electrical and Electronics Engineers IEEE
Publication year: 2019
Projects: 1-No project / Gouet-Brunet, Valérie
Conference: ICCVW 2019, IEEE/CVF International Conference on Computer Vision Workshop, 27/10/2019 - 28/10/2019, Seoul, South Korea, Proceedings
Pagination: pp 942 - 950
Format: 21 x 30 cm
General note: bibliography
Preprint in HAL https://hal.archives-ouvertes.fr/hal-02269915v1 with a slightly different title - final version in HAL https://hal.archives-ouvertes.fr/hal-02269915v2
This project has also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 777826.
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (Author) We propose LU-Net (for LiDAR U-Net) for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying a global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the task as an image processing problem. First, a high-level 3D feature extraction module computes 3D local features for each point given its neighbors. Then, these features are projected into a 2D multichannel range image by considering the topology of the sensor. This range image then serves as the input to a U-Net segmentation network, a simple yet sufficient architecture for our purpose. In this way, we exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach efficiently bridges 3D point cloud processing and image processing, and our experiments show that it outperforms the state of the art by a large margin on the KITTI dataset. Moreover, it operates at 24 fps on a single GPU, above the acquisition rate of common LiDAR sensors, which makes it suitable for real-time applications.
Record number: C2019-037
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Other associated URL: to HAL
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/ICCVW.2019.00123
Online publication date: 05/03/2020
Online: https://doi.org/10.1109/ICCVW.2019.00123
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93282
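The sensor-topology projection described in the LU-Net abstract maps each 3D LiDAR point to a pixel of a 2D range image through its azimuth and elevation angles. A minimal sketch under assumed image size and vertical field of view (the paper's actual parameters are not given in this notice):

```python
import numpy as np

# Spherical projection of a spinning-LiDAR point cloud to a range
# image; h, w and the vertical field of view are assumed values.

def points_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project (N, 3) points to an (h, w) range image (0 where empty)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                       # range per point
    yaw = np.arctan2(y, x)                                # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    col = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = ((fu - pitch) / (fu - fd) * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w))
    img[row, col] = r                                     # keep last point per pixel
    return img
```

The learned per-point features would be stacked as additional channels of this image before it is fed to the U-Net, which is what lets a 2D segmentation network consume 3D LiDAR data.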
Piecewise horizontal 3D roof reconstruction from aerial Lidar / Slim Namouchi (2019)

Title: Piecewise horizontal 3D roof reconstruction from aerial Lidar
Document type: Article/Communication
Authors: Slim Namouchi, Author; Bruno Vallet, Author; Imed Riadh Farah, Author; Haythem Ismail, Author
Publisher: New York : Institute of Electrical and Electronics Engineers IEEE
Publication year: 2019
Projects: 2-No accessible info - article not open / Gouet-Brunet, Valérie
Conference: IGARSS 2019, IEEE International Geoscience And Remote Sensing Symposium, 28/07/2019 - 02/08/2019, Yokohama, Japan, Proceedings IEEE
Pagination: pp 8992 - 8995
Format: 21 x 30 cm
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] aide à la décision
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de points
[Termes IGN] image RVB
[Termes IGN] planification urbaine
[Termes IGN] reconstruction 3D du bâti
[Termes IGN] reconstruction d'image
[Termes IGN] semis de points
[Termes IGN] toit
[Termes IGN] ville intelligente
Abstract: (author) 3D urban models provide convincing analytic tools for decision making, city planning and smart-city services. However, developing a fully automated method that can produce 3D building models of high quality, fidelity and accuracy is still a challenging task. Currently, most of the proposed approaches handle polyhedral roofs (consisting of planar polygons) because they assume that all roofs in a given area follow this prior. However, the reconstruction method could have its prior adapted to the roof type. In this paper, we deal with a specific case, piecewise horizontal roofs, which are very frequent in most countries of North Africa and in particular in Tunisia. Our building reconstruction method follows four main steps: building LiDAR point extraction, piecewise horizontal roof clustering, boundary creation and 3D geometric modeling. To prove the suitability and effectiveness of the introduced method, experiments are conducted with real LiDAR data and an aerial RGB image.
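A hedged sketch of the second of the four steps above, piecewise horizontal roof clustering: since each roof piece is flat and horizontal, roof points can be grouped by quantizing their heights. The bin width is an assumed parameter, not a value from the paper, and the actual clustering in the article may well be more elaborate.

```python
# Hypothetical height-quantization clustering for piecewise
# horizontal roofs; bin_m (0.2 m) is an assumed bin width.

def cluster_by_height(z_values, bin_m=0.2):
    """Group point indices into clusters of near-equal height."""
    clusters = {}
    for i, z in enumerate(z_values):
        key = round(z / bin_m)                # quantized height level
        clusters.setdefault(key, []).append(i)
    return list(clusters.values())
```

Each resulting cluster would then go through boundary creation and extrusion into a flat 3D roof slab in the later steps of the pipeline.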
Record number: C2019-038
Author affiliation: LASTIG MATIS+Ext (2012-2019)
Theme: IMAGERIE
Nature: Communication
nature-HAL: ComAvecCL&ActesPubliésIntl
DOI: 10.1109/IGARSS.2019.8898650
Online publication date: 14/11/2019
Online: https://doi.org/10.1109/IGARSS.2019.8898650
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=95354

Further documents from this publisher:
Sensitivity of urban material classification to spatial and spectral configurations from visible to short-wave infrared / Arnaud Le Bris (2019)
The necessary yet complex evaluation of 3D city models: a semantic approach / Oussama Ennafii (2019)
Time-space tradeoff in deep learning models for crop classification on satellite multi-spectral image time series / Vivien Sainte Fare Garnot (2019)
Urban morpho-types classification from SPOT-6/7 imagery and Sentinel-2 time series / Arnaud Le Bris (2019)
Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
Crop-rotation structured classification using multi-source sentinel images and LPIS for crop type mapping / Simon Bailly (2018)
Decision fusion of SPOT6 and multitemporal Sentinel2 images for urban area detection / Cyril Wendl (2018)
Detection and area estimation for photovoltaic panels in urban hyperspectral remote sensing data by an original NMF-based unmixing method / Moussa Sofiane Karoui (2018)
Domain adaptation for large scale classification of very high resolution satellite images with deep convolutional neural networks / Tristan Postadjian (2018)
Potential and limits of Sentinel-1 data for small alpine glaciers monitoring / Matthias Jauvin (2018)
Sentinel-2 level-1 calibration and validation status from the mission performance centre / Catherine Bouzinac (2018)
A stixel approach for enhancing semantic image segmentation using prior map information / Sylvain Jonchery (2018)
Superpixel partitioning of very high resolution satellite images for large-scale classification perspectives with deep convolutional neural networks / Tristan Postadjian (2018)
Automatic production of large-scale cloud-free orthomosaics from multitemporal satellite images / Nicolas Champion (2017)
Comparison of belief propagation and graph-cut approaches for contextual classification of 3D LIDAR point cloud data / Loïc Landrieu (2017)