Descriptor
Documents available in this category (107)
Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography / Lukas Roth in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Title: Extracting leaf area index using viewing geometry effects: A new perspective on high-resolution unmanned aerial system photography
Document type: Article/Communication
Authors: Lukas Roth, Author; Helge Aasen, Author; Achim Walter, Author; Frank Liebisch, Author
Publication year: 2018
Pages: pp 161 - 175
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] cultures
[Termes IGN] drone
[Termes IGN] Glycine max
[Termes IGN] image aérienne
[Termes IGN] image RVB
[Termes IGN] indice foliaire
[Termes IGN] Leaf Area Index
[Termes IGN] modélisation géométrique de prise de vue
[Termes IGN] orthoimage géoréférencée
[Termes IGN] segmentation d'image
[Termes IGN] simulation 3D
[Termes IGN] Suisse
Abstract: (Publisher) Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI based on RGB images taken by an unmanned aerial system (UAS) is introduced. Soybean was taken as the model crop of investigation. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped develop and verify the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The resulting LAI prediction accuracy was comparable with that of a gap fraction-based handheld device (… of …, RMSE of … m² m⁻²) and correlated well with destructive LAI measurements (… of …, RMSE of … m² m⁻²). These results indicate that, within the range (LAI …) the method was tested for, extracting LAI from UAS-derived RGB images using viewing geometry information represents a valid alternative to destructive and optical handheld device LAI measurements in soybean. Thereby, we open the door for automated, high-throughput assessment of LAI in plant and crop science.
Record number: A2018-287
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.012
Online publication date: 07/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.012
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90402
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018) . - pp 161 - 175
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018071 | RAB | Journal | Centre de documentation | In storage L003 | Available
081-2018073 | DEP-EXM | Journal | LASTIG | Deposited in unit | Not for loan
081-2018072 | DEP-EAF | Journal | Nancy | Deposited in unit | Not for loan
SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data / Miroslava Slavcheva in International journal of computer vision, vol 126 n° 6 (June 2018)
[article]
Title: SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data
Document type: Article/Communication
Authors: Miroslava Slavcheva, Author; Wadim Kehl, Author; Nassir Navab, Author; Slobodan Ilic, Author
Publication year: 2018
Pages: pp 615 - 636
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] contrainte géométrique
[Termes IGN] estimation de pose
[Termes IGN] image RVB
[Termes IGN] Kinect
[Termes IGN] méthode de réduction d'énergie
[Termes IGN] optimisation (mathématiques)
[Termes IGN] reconstruction d'objet
[Termes IGN] semis de points
[Termes IGN] voxel
Free keywords: simultaneous localization and mapping (SLAM)
Abstract: (Author) We tackle the task of dense 3D reconstruction from RGB-D data. Contrary to the majority of existing methods, we focus not only on trajectory estimation accuracy, but also on reconstruction precision. The key technique is SDF-2-SDF registration, which is a correspondence-free, symmetric, dense energy minimization method, performed via the direct voxel-wise difference between a pair of signed distance fields. It has a wider convergence basin than traditional point cloud registration and cloud-to-volume alignment techniques. Furthermore, its formulation allows for straightforward incorporation of photometric and additional geometric constraints. We employ SDF-2-SDF registration in two applications. First, we perform small-to-medium scale object reconstruction entirely on the CPU. To this end, the camera is tracked frame-to-frame in real time. Then, the initial pose estimates are refined globally in a lightweight optimization framework, which does not involve a pose graph. We combine these procedures into our second, fully real-time application for larger-scale object reconstruction and SLAM. It is implemented as a hybrid system, whereby tracking is done on the GPU, while refinement runs concurrently over batches on the CPU. To bound memory and runtime footprints, registration is done over a fixed number of limited-extent volumes, anchored at geometry-rich locations. Extensive qualitative and quantitative evaluation of both trajectory accuracy and model fidelity on several public RGB-D datasets, acquired with various quality sensors, demonstrates higher precision than related techniques.
Record number: A2018-410
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-017-1057-z
Online publication date: 18/12/2017
Online: https://doi.org/10.1007/s11263-017-1057-z
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90884
in International journal of computer vision > vol 126 n° 6 (June 2018) . - pp 615 - 636
HackAIR: towards raising awareness about air quality in Europe by developing a collective online platform / Evangelos Kosmidis in ISPRS International journal of geo-information, vol 7 n° 5 (May 2018)
[article]
Title: HackAIR: towards raising awareness about air quality in Europe by developing a collective online platform
Document type: Article/Communication
Authors: Evangelos Kosmidis, Author; Panagiota Syropoulou, Author; Stavros Tekes, Author; Philipp Schneider, Author; Eleftherios Spyromitros-Xioufis, Author; et al.
Publication year: 2018
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Géomatique web
[Termes IGN] données environnementales
[Termes IGN] fusion de données
[Termes IGN] image numérique
[Termes IGN] image RVB
[Termes IGN] participation du public
[Termes IGN] pollution atmosphérique
[Termes IGN] qualité de l'air
[Termes IGN] réseau social
[Termes IGN] science citoyenne
[Termes IGN] surveillance écologique
Abstract: (Author) Although air pollution is one of the most significant environmental factors posing a threat to human health worldwide, air quality data are scarce or not easily accessible in most European countries. The current work aims to develop a centralized air quality data hub that enables citizens to contribute to air quality monitoring. In this work, data from official air quality monitoring stations are combined with air pollution estimates from sky-depicting photos and from low-cost sensing devices that citizens build on their own, so that citizens receive improved information about the quality of the air they breathe. Additionally, a data fusion algorithm merges air quality information from various sources to provide information in areas where no air quality measurements exist.
Record number: A2018-342
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi7050187
Online publication date: 12/05/2018
Online: https://doi.org/10.3390/ijgi7050187
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90564
in ISPRS International journal of geo-information > vol 7 n° 5 (May 2018)
Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification / Rama Rao Nidamanuri in ISPRS Journal of photogrammetry and remote sensing, vol 138 (April 2018)
[article]
Title: Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification
Document type: Article/Communication
Authors: Rama Rao Nidamanuri, Author; Fahad Shahbaz Khan, Author; Joost van de Weijer, Author; Matthieu Molinier, Author; Jorma Laaksonen, Author
Publication year: 2018
Pages: pp 74 - 85
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse texturale
[Termes IGN] apprentissage profond
[Termes IGN] classification
[Termes IGN] image RVB
[Termes IGN] motif binaire local
[Termes IGN] réseau neuronal convolutif
[Termes IGN] texture d'image
Abstract: (Author) Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
Record number: A2018-121
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.01.023
Online publication date: 15/02/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.01.023
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=89590
in ISPRS Journal of photogrammetry and remote sensing > vol 138 (April 2018) . - pp 74 - 85
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018041 | RAB | Journal | Centre de documentation | In storage L003 | Available
081-2018043 | DEP-EXM | Journal | LASTIG | Deposited in unit | Not for loan
081-2018042 | DEP-EAF | Journal | Nancy | Deposited in unit | Not for loan
Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras / David Joseph Tan in International journal of computer vision, vol 126 n° 2-4 (April 2018)
[article]
Title: Real-time accurate 3D head tracking and pose estimation with consumer RGB-D cameras
Document type: Article/Communication
Authors: David Joseph Tan, Author; Federico Tombari, Author; Nassir Navab, Author
Publication year: 2018
Pages: pp 158 - 183
General note: Bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] détection de visage
[Termes IGN] données localisées 3D
[Termes IGN] estimation de pose
[Termes IGN] image RVB
[Termes IGN] méthode robuste
[Termes IGN] séquence d'images
[Termes IGN] temps réel
Abstract: (Author) We demonstrate how 3D head tracking and pose estimation can be effectively and efficiently achieved from noisy RGB-D sequences. Our proposal leverages a random forest framework, designed to regress the 3D head pose at every frame in a temporal tracking manner. One peculiarity of the algorithm is that it exploits together (1) a generic training dataset of 3D head models, which is learned once offline; and (2) an online refinement with subject-specific 3D data, which aims for the tracker to withstand slight facial deformations and to adapt its forest to the specific characteristics of an individual subject. The combination of these allows our algorithm to be robust even under extreme poses, where the user's face is no longer visible in the image. Finally, we also propose another solution that utilizes a multi-camera system, such that the data simultaneously acquired from multiple RGB-D sensors helps the tracker to handle challenging conditions that affect a subset of the cameras. Notably, the proposed multi-camera framework yields a real-time performance of approximately 8 ms per frame given six cameras and one CPU core, and scales up linearly to 30 fps with 25 cameras.
Record number: A2018-406
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-017-0988-8
Online publication date: 02/02/2017
Online: https://doi.org/10.1007/s11263-017-0988-8
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90879
in International journal of computer vision > vol 126 n° 2-4 (April 2018) . - pp 158 - 183
Machine learning and pose estimation for autonomous robot grasping with collaborative robots / Victor Talbot (2018)
Superpixel partitioning of very high resolution satellite images for large-scale classification perspectives with deep convolutional neural networks / Tristan Postadjian (2018)
Testing, analysis and improvement of FGI-NLS Sentinel-2 data processing chain for land use applications / Emile Blettery (2018)
Learning aggregated features and optimizing model for semantic labeling / Jianhua Wang in The Visual Computer, vol 33 n° 12 (December 2017)
Hyperspectral UAV-imagery and photogrammetric canopy height model in estimating forest stand variables / Sakari Tuominen in Silva fennica, vol 51 n° 5 (2017)
Unsupervised feature learning for land-use scene recognition / Jiayuan Fan in IEEE Transactions on geoscience and remote sensing, vol 55 n° 4 (April 2017)
On the fusion of lidar and aerial color imagery to detect urban vegetation and buildings / Madhurima Bandyopadhyay in Photogrammetric Engineering & Remote Sensing, PERS, vol 83 n° 2 (February 2017)
Shadow detection and removal in RGB VHR images for land use unsupervised classification / A. Movia in ISPRS Journal of photogrammetry and remote sensing, vol 119 (September 2016)
Change detection and deformation analysis in point clouds: Application to rock face monitoring / Marco Scaioni in Photogrammetric Engineering & Remote Sensing, PERS, vol 79 n° 5 (May 2013)