Descriptor
IGN Terms > imagery > image numérique
image numérique — Synonym(s): image en mode maillé
Documents available in this category (2388)
HackAIR : towards raising awareness about air quality in Europe by developing a collective online platform / Evangelos Kosmidis in ISPRS International journal of geo-information, vol 7 n° 5 (May 2018)
[article]
Title: HackAIR : towards raising awareness about air quality in Europe by developing a collective online platform
Document type: Article/Communication
Authors: Evangelos Kosmidis; Panagiota Syropoulou; Stavros Tekes; Philipp Schneider; Eleftherios Spyromitros-Xioufis; et al.
Year of publication: 2018
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Géomatique web
[Termes IGN] données environnementales
[Termes IGN] fusion de données
[Termes IGN] image numérique
[Termes IGN] image RVB
[Termes IGN] participation du public
[Termes IGN] pollution atmosphérique
[Termes IGN] qualité de l'air
[Termes IGN] réseau social
[Termes IGN] science citoyenne
[Termes IGN] surveillance écologique
Abstract: (Author) Although air pollution is one of the most significant environmental factors posing a threat to human health worldwide, air quality data are scarce or not easily accessible in most European countries. The current work aims to develop a centralized air quality data hub that enables citizens to contribute to air quality monitoring. In this work, data from official air quality monitoring stations are combined with air pollution estimates from sky-depicting photos and from low-cost sensing devices that citizens build on their own, so that citizens receive improved information about the quality of the air they breathe. Additionally, a data fusion algorithm merges air quality information from various sources to provide information in areas where no air quality measurements exist.
Record number: A2018-342
Authors' affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi7050187
Online publication date: 12/05/2018
Online: https://doi.org/10.3390/ijgi7050187
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90564
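The record above only mentions that a fusion algorithm merges station, photo-based, and low-cost-sensor estimates; the article's actual method is not reproduced here. As an illustration only, a standard inverse-variance weighted average is one simple way to combine such heterogeneous estimates — `fuse_estimates` and the sample numbers below are hypothetical:

```python
import numpy as np

def fuse_estimates(values, variances):
    """Inverse-variance weighted fusion of independent estimates.

    values, variances: per-source pollutant estimates (e.g. PM2.5 in ug/m3)
    and their uncertainties. Returns the fused estimate and its variance;
    more certain sources receive proportionally larger weights.
    """
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # precision weights
    fused = np.sum(w * values) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# A precise station reading dominates a noisy photo-based estimate:
est, var = fuse_estimates([20.0, 35.0], [1.0, 25.0])
```

Because the weights are precisions, the fused variance is always smaller than the best individual source, which is the usual motivation for merging sparse official data with crowd-sourced estimates.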
in ISPRS International journal of geo-information > vol 7 n° 5 (May 2018) [article]

Large-scale supervised learning for 3D Point cloud labeling : Semantic3d.Net / Timo Hackel in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 5 (May 2018)
[article]
Title: Large-scale supervised learning for 3D point cloud labeling : Semantic3d.Net
Document type: Article/Communication
Authors: Timo Hackel; Jan Dirk Wegner; Nikolay Savinov; Lubor Ladicky; Konrad Schindler; Marc Pollefeys
Year of publication: 2018
Pages: pp 297 - 308
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage dirigé
[Termes IGN] apprentissage profond
[Termes IGN] classification
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] état de l'art
[Termes IGN] réseau neuronal convolutif
[Termes IGN] scène urbaine
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Abstract: (Author) In this paper, we review the current state of the art in 3D point cloud classification, present a new 3D point cloud classification benchmark data set of single scans with over four billion manually labeled points, and discuss the first available results on the benchmark. Much of the stunning recent progress in 2D image interpretation can be attributed to the availability of large amounts of training data, which have enabled the (supervised) learning of deep neural networks. With the data set presented in this paper, we aim to boost the performance of CNNs for 3D point cloud labeling as well. Our hope is that this will lead to a breakthrough of deep learning for 3D (geo-)data too. The semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains eight semantic classes and covers a wide range of urban outdoor scenes, including churches, streets, railroad tracks, squares, villages, soccer fields, and castles. We describe our labeling interface and show that, compared to those already available to the research community, our data set provides denser and more complete point clouds, with a much higher overall number of labeled points. We further provide descriptions of baseline methods and of the first independent submissions, which are indeed based on CNNs and already show remarkable improvements over prior art. We hope that semantic3D.net will pave the way for deep learning in 3D point cloud analysis, and for 3D representation learning in general.
Record number: A2018-162
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.84.5.297
Online publication date: 01/05/2018
Online: https://doi.org/10.14358/PERS.84.5.297
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89795
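Point cloud labeling benchmarks such as the one described above are typically scored with per-class intersection-over-union (IoU) on the point labels. The sketch below is a generic implementation of that metric, not semantic3D.net's official evaluation code:

```python
import numpy as np

def per_class_iou(gt, pred, num_classes):
    """Intersection-over-Union per semantic class for labeled points.

    gt, pred: integer label arrays of equal length (one label per 3D point).
    Returns one IoU per class; classes absent from both arrays yield NaN.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((gt == c) & (pred == c))
        union = np.sum((gt == c) | (pred == c))
        if union > 0:
            ious[c] = inter / union
    return ious

# Toy example with 6 points and 3 classes:
gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
ious = per_class_iou(gt, pred, 3)
```

The mean over valid (non-NaN) classes gives the usual mIoU summary number reported on such leaderboards.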
in Photogrammetric Engineering & Remote Sensing, PERS > vol 84 n° 5 (May 2018) . - pp 297 - 308 [article]

Holdings (1):
Barcode 105-2018051 — Call number RAB — Journal — Centre de documentation — En réserve L003 — Available

Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification / Rama Rao Nidamanuri in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
[article]
Title: Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification
Document type: Article/Communication
Authors: Rama Rao Nidamanuri; Fahad Shahbaz Khan; Joost van de Weijer; Matthieu Molinier; Jorma Laaksonen
Year of publication: 2018
Pages: pp 74 - 85
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse texturale
[Termes IGN] apprentissage profond
[Termes IGN] classification
[Termes IGN] image RVB
[Termes IGN] motif binaire local
[Termes IGN] réseau neuronal convolutif
[Termes IGN] texture d'image
Abstract: (Author) Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state of the art for remote sensing scene classification.
Record number: A2018-121
Authors' affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.01.023
Online publication date: 15/02/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.01.023
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89590
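The abstract above refers to LBP-coded input maps. A minimal 8-neighbour Local Binary Pattern, in its commonly used form (the exact encoding fed to TEX-Nets may differ), can be sketched as:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for a 2-D grayscale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour that is
    >= the centre contributes one bit, giving a texture code in [0, 255].
    Border pixels are skipped. Such coded maps can then be fed to a CNN.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # interior (centre) pixels
    # Neighbour offsets clockwise from top-left; bit weights 1, 2, ..., 128.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbours
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes
```

On a flat patch every neighbour ties with the centre, so all eight bits fire (code 255); a strict local maximum gets code 0. Libraries such as scikit-image provide a more complete `local_binary_pattern` with rotation-invariant variants.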
in ISPRS Journal of photogrammetry and remote sensing > vol 138 (April 2018) . - pp 74 - 85 [article]

Holdings (3):
Barcode 081-2018041 — Call number RAB — Journal — Centre de documentation — En réserve L003 — Available
Barcode 081-2018043 — Call number DEP-EXM — Journal — LASTIG — Dépôt en unité — Not for loan
Barcode 081-2018042 — Call number DEP-EAF — Journal — Nancy — Dépôt en unité — Not for loan

Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform / Mohd Shahrimie Mohd Asaari in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
[article]
Title: Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform
Document type: Article/Communication
Authors: Mohd Shahrimie Mohd Asaari; Puneet Mishra; Stien Mertens; Stijn Dhondt; et al.
Year of publication: 2018
Pages: pp 121 - 138
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Applications de télédétection
[Termes IGN] analyse spectrale
[Termes IGN] image hyperspectrale
[Termes IGN] maïs (céréale)
[Termes IGN] mesure de similitude
[Termes IGN] réflectance végétale
[Termes IGN] signature spectrale
[Termes IGN] similitude spectrale
[Termes IGN] stress hydrique
Abstract: (Author) The potential of close-range hyperspectral imaging (HSI) as a tool for detecting early drought stress responses in plants grown in a high-throughput plant phenotyping platform (HTPPP) was explored. Reflectance spectra from leaves in close-range imaging are highly influenced by plant geometry and its specific alignment towards the imaging system. This induces high uninformative variability in the recorded signals, while the spectral signature that carries information on plant biological traits remains obscured. A linear reflectance model that describes the effect of the distance and orientation of each pixel of a plant with respect to the imaging system was applied. By solving this model for the linear coefficients, the spectra were corrected for the uninformative illumination effects. This approach, however, was constrained by the requirement of a reference spectrum, which was difficult to obtain. As an alternative, the standard normal variate (SNV) normalisation method was applied to reduce this uninformative variability.
Once the envisioned illumination effects were eliminated, the remaining differences in plant spectra were assumed to be related to changes in plant traits. To distinguish the stress-related phenomena from regular growth dynamics, a spectral analysis procedure was developed based on clustering, a supervised band selection, and a direct calculation of a spectral similarity measure against a reference. To test the significance of the discrimination between healthy and stressed plants, a statistical test was conducted using a one-way analysis of variance (ANOVA) technique.
The proposed analysis technique was validated with HSI data of maize plants (Zea mays L.) acquired in a HTPPP for early detection of drought stress. Results showed that pre-processing the reflectance spectra with SNV effectively reduces the variability due to the expected illumination effects. The proposed spectral analysis method on the normalized spectra successfully detected drought stress from the third day of drought induction, confirming the potential of HSI for drought stress detection studies and further supporting its adoption in HTPPP.
Record number: A2018-122
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.02.003
Online: https://doi.org/10.1016/j.isprsjprs.2018.02.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89570
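The SNV normalisation mentioned in the abstract has a standard closed form: each spectrum is centred on its own mean and scaled by its own standard deviation. A minimal sketch, assuming one spectrum per row:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate normalisation, one spectrum per row.

    Centring and scaling each spectrum by its own mean and standard deviation
    suppresses multiplicative illumination/geometry effects, so spectra of the
    same material become comparable across pixels.
    """
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two pixels of the same material under different illumination collapse
# onto the same normalised spectrum:
a = np.array([[0.2, 0.4, 0.6]])
b = 2.5 * a  # brighter copy of the same spectral signature
```

Since scaling a spectrum by a constant scales its mean and standard deviation by the same factor, `snv(a)` and `snv(b)` are identical, which is exactly the illumination invariance the abstract exploits.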
in ISPRS Journal of photogrammetry and remote sensing > vol 138 (April 2018) . - pp 121 - 138 [article]

Holdings (3):
Barcode 081-2018041 — Call number RAB — Journal — Centre de documentation — En réserve L003 — Available
Barcode 081-2018043 — Call number DEP-EXM — Journal — LASTIG — Dépôt en unité — Not for loan
Barcode 081-2018042 — Call number DEP-EAF — Journal — Nancy — Dépôt en unité — Not for loan

Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras / David Joseph Tan in International journal of computer vision, vol 126 n° 2-4 (April 2018)
[article]
Title: Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras
Document type: Article/Communication
Authors: David Joseph Tan; Federico Tombari; Nassir Navab
Year of publication: 2018
Pages: pp 158 - 183
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] détection de visage
[Termes IGN] données localisées 3D
[Termes IGN] estimation de pose
[Termes IGN] image RVB
[Termes IGN] méthode robuste
[Termes IGN] séquence d'images
[Termes IGN] temps réel
Abstract: (Author) We demonstrate how 3D head tracking and pose estimation can be effectively and efficiently achieved from noisy RGB-D sequences. Our proposal leverages a random forest framework, designed to regress the 3D head pose at every frame in a temporal tracking manner. One peculiarity of the algorithm is that it exploits together (1) a generic training dataset of 3D head models, which is learned once offline; and (2) an online refinement with subject-specific 3D data, which aims for the tracker to withstand slight facial deformations and to adapt its forest to the specific characteristics of an individual subject. The combination of these allows our algorithm to be robust even under extreme poses, where the user's face is no longer visible in the image. Finally, we also propose another solution that utilizes a multi-camera system, such that the data simultaneously acquired from multiple RGB-D sensors help the tracker to handle challenging conditions that affect a subset of the cameras. Notably, the proposed multi-camera framework yields a real-time performance of approximately 8 ms per frame given six cameras and one CPU core, and scales up linearly to 30 fps with 25 cameras.
Record number: A2018-406
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-017-0988-8
Online publication date: 02/02/2017
Online: https://doi.org/10.1007/s11263-017-0988-8
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90879
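The abstract describes regressing the 3D head pose with a random forest. As a loose, generic sketch only — synthetic data, not the paper's depth features, offline/online split, or tracking framework — multi-output forest regression of pose angles looks like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: each sample is a flattened depth-patch feature vector
# around the head; the target is the head pose (yaw, pitch, roll).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))      # 500 synthetic depth-patch features
true_w = rng.normal(size=(64, 3))
y = X @ true_w                      # synthetic linear pose targets

# One forest regresses all three pose angles jointly (multi-output).
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = forest.predict(X[:5])        # per-frame pose estimates, shape (5, 3)
```

In a tracking setting, the same predict step would run once per frame, optionally refining the forest online with subject-specific samples as the paper proposes.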
in International journal of computer vision > vol 126 n° 2-4 (April 2018) . - pp 158 - 183 [article]

Other documents in this category:
- Revue des descripteurs tridimensionnels (3D) pour la catégorisation des nuages de points acquis avec un système LiDAR de télémétrie mobile / Sylvie Daniel in Geomatica, vol 72 n° 1 (March 2018)
- Saint-Quentin-en-Yvelines à 2,5 cm / Anonyme in Géomatique expert, n° 121 (March - April 2018)
- Towards automatic SAR-optical stereogrammetry over urban areas using very high resolution imagery / Chunping Qiu in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
- Video event recognition and anomaly detection by combining gaussian process and hierarchical dirichlet process models / Michael Ying Yang in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 4 (April 2018)
- Mapping tree cover with Sentinel-2 data using the Support Vector Machine (SVM) / Anna Mirończuk in Geoinformation issues, Vol 9 n° 1 (2017)
- Sensitivity analysis of pansharpening in hyperspectral change detection / Seyd Teymoor Seydi in Applied geomatics, vol 10 n° 1 (March 2018)
- Understanding the temporal dimension of the red-edge spectral region for forest decline detection using high-resolution hyperspectral and Sentinel-2a imagery / Pablo J. Zarco-Tejada in ISPRS Journal of photogrammetry and remote sensing, vol 137 (March 2018)
- Estimating forest standing biomass in savanna woodlands as an indicator of forest productivity using the new generation WorldView-2 sensor / Timothy Dube in Geocarto international, vol 33 n° 2 (February 2018)
- Fine-grained object recognition and zero-shot learning in remote sensing imagery / Gencer Sumbul in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
- Littoral, "Ricochet" ausculte / Marielle Mayo in Géomètre, n° 2155 (February 2018)
- LRAGE : learning latent relationships with adaptive graph embedding for aerial scene classification / Yuebin Wang in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
- Multisource remote sensing data classification based on convolutional neural network / Xiaodong Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 2 (February 2018)
- Active learning-based optimized training library generation for object-oriented image classification / Rajeswari Balasubramaniam in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)
- Colorisation of LiDAR point cloud / Mathieu Brédif (2018)
- Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs / Abraham Montoya Obeso (2018)
- Crop-rotation structured classification using multi-source sentinel images and LPIS for crop type mapping / Simon Bailly (2018)
- Detection and area estimation for photovoltaic panels in urban hyperspectral remote sensing data by an original NMF-based unmixing method / Moussa Sofiane Karoui (2018)
- Exploring image fusion of ALOS/PALSAR data and LANDSAT data to differentiate forest area / Saygin Abdikan in Geocarto international, vol 33 n° 1 (January 2018)
- Fusion tardive d'images SPOT-6/7 et de données multitemporelles Sentinel-2 pour la détection de la tache urbaine / Cyril Wendl (2018)
- Machine learning and pose estimation for autonomous robot grasping with collaborative robots / Victor Talbot (2018)
- Multiobjective subpixel land-cover mapping / Ailong Ma in IEEE Transactions on geoscience and remote sensing, vol 56 n° 1 (January 2018)
- QGIS in Remote Sensing, Volume 2. QGIS and applications in agriculture and forest / Nicolas Baghdadi (2018)
- QGIS in Remote Sensing, Volume 4. QGIS and Applications in Water and Risks / Nicolas Baghdadi (2018)
- Rectified feature matching for spherical panoramic images / Tzu-Yi Chuang in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 1 (January 2018)
- A stixel approach for enhancing semantic image segmentation using prior map information / Sylvain Jonchery (2018)
- Superpixel partitioning of very high resolution satellite images for large-scale classification perspectives with deep convolutional neural networks / Tristan Postadjian (2018)
- Télédétection multispectrale et hyperspectrale des eaux littorales turbides / Morgane Larnicol (2018)
- TERRISCOPE, une nouvelle plateforme mutualisée de recherche en télédétection optique à partir d'avions et de drones / Yannick Boucher (2018)
- Testing, analysis and improvement of FGI-NLS Sentinel-2 data processing chain for land use applications / Emile Blettery (2018)
- Utilisation de QGIS en télédétection, Ch. 2. Apports du MNT topo-bathymétrique pour l'évolution bio-géomorphologique des marais d'Ichkeul (Tunisie) / Zeineb Kassouk (2018)
- Utilisation de QGIS en télédétection, Volume 2. QGIS et applications en agriculture et forêt / Nicolas Baghdadi (2018)
- Utilisation de QGIS en télédétection, Volume 4. QGIS et applications en eau et risques / Nicolas Baghdadi (2018)
- Area-based estimation of growing stock volume in Scots pine stands using ALS and airborne image-based point clouds / Paweł Hawryło in Forestry, an international journal of forest research, vol 90 n° 5 (December 2017)
- Building extraction from fused LiDAR and hyperspectral data using Random Forest Algorithm / Saeid Parsian in Geomatica, vol 71 n° 4 (December 2017)