Descriptor
Documents available in this category (890)
Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography / Lukas Roth in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Title: Extracting leaf area index using viewing geometry effects: A new perspective on high-resolution unmanned aerial system photography
Document type: Article/Communication
Authors: Lukas Roth; Helge Aasen; Achim Walter; Frank Liebisch
Year of publication: 2018
Pages: pp 161-175
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] cultures
[Termes IGN] drone
[Termes IGN] Glycine max
[Termes IGN] image aérienne
[Termes IGN] image RVB
[Termes IGN] indice foliaire
[Termes IGN] Leaf Area Index
[Termes IGN] modélisation géométrique de prise de vue
[Termes IGN] orthoimage géoréférencée
[Termes IGN] segmentation d'image
[Termes IGN] simulation 3D
[Termes IGN] Suisse
Abstract: (Publisher) Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI from RGB images taken by an unmanned aerial system (UAS) is introduced, with soybean as the model crop. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped to develop and verify the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The resulting LAI prediction accuracy was comparable to that of a gap fraction-based handheld device and correlated well with destructive LAI measurements. These results indicate that, within the LAI range the method was tested for, extracting LAI from UAS-derived RGB images using viewing geometry information is a valid alternative to destructive and optical handheld-device LAI measurements in soybean. This opens the door to automated, high-throughput assessment of LAI in plant and crop science.
Record no.: A2018-287
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.012
Online publication date: 07/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.012
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90402
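The gap fraction theory the abstract builds on inverts the Beer-Lambert extinction law. A minimal sketch of that inversion (not the authors' viewing-geometry model; it assumes a spherical leaf angle distribution, i.e. projection coefficient g = 0.5):

```python
import math

def lai_from_gap_fraction(gap_fraction, view_zenith_deg=0.0, g=0.5):
    """Invert the Beer-Lambert gap-fraction model:
    P(theta) = exp(-g * LAI / cos(theta))
    => LAI = -cos(theta) * ln(P) / g,
    where g is the leaf projection coefficient (0.5 for a
    spherical leaf angle distribution)."""
    theta = math.radians(view_zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / g

# A nadir gap fraction of exp(-1.5) ~ 0.22 corresponds to LAI = 3
print(round(lai_from_gap_fraction(math.exp(-1.5)), 6))  # 3.0
```

Viewing the same canopy at several zenith angles, as oblique UAS images do, yields several such equations and hence a better-constrained LAI estimate than a single nadir view.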
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018). - pp 161-175 [article]
Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2018071 | RAB | Journal | Centre de documentation | Reserve L003 | Available
081-2018073 | DEP-EXM | Journal | LASTIG | Unit deposit | Not for loan
081-2018072 | DEP-EAF | Journal | Nancy | Unit deposit | Not for loan
Hierarchical cellular automata for visual saliency / Yao Qin in International journal of computer vision, vol 126 n° 7 (July 2018)
[article]
Title: Hierarchical cellular automata for visual saliency
Document type: Article/Communication
Authors: Yao Qin; Mengyang Feng; Huchuan Lu; Garrison W. Cottrell
Year of publication: 2018
Pages: pp 751-770
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] automate cellulaire
[Termes IGN] classification bayesienne
[Termes IGN] réseau neuronal artificiel
[Termes IGN] zone saillante 3D
Abstract: (Author) Saliency detection, finding the most important parts of an image, has become increasingly popular in computer vision. In this paper, we introduce Hierarchical Cellular Automata (HCA), a temporally evolving model to intelligently detect salient objects. HCA consists of two main components: Single-layer Cellular Automata (SCA) and Cuboid Cellular Automata (CCA). As an unsupervised propagation mechanism, Single-layer Cellular Automata can exploit the intrinsic relevance of similar regions through interactions with neighbors. Low-level image features as well as high-level semantic information extracted from deep neural networks are incorporated into the SCA to measure the correlation between different image patches. With these hierarchical deep features, an impact factor matrix and a coherence matrix are constructed to balance the influences on each cell's next state. The saliency values of all cells are iteratively updated according to a well-defined update rule. Furthermore, we propose CCA to integrate multiple saliency maps generated by SCA at different scales in a Bayesian framework. Therefore, single-layer propagation and multi-scale integration are jointly modeled in our unified HCA. Surprisingly, we find that the SCA can improve all existing methods that we applied it to, resulting in a similar precision level regardless of the original results. The CCA can act as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, resulting in even better results. Extensive experiments on four challenging datasets demonstrate that the proposed algorithm outperforms state-of-the-art conventional methods and is competitive with deep learning based approaches.
Record no.: A2018-413
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1007/s11263-017-1062-2
Online publication date: 23/02/2018
Online: https://doi.org/10.1007/s11263-017-1062-2
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90896
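The SCA update described in the abstract, where an impact factor matrix and a coherence value balance a cell's own state against its neighbours' states, can be sketched as follows. This is a simplified stand-in, not the paper's exact update rule, and the 3-cell chain is an invented example:

```python
def sca_update(saliency, impact, coherence, steps=10):
    """Evolve a saliency vector in the spirit of a single-layer
    cellular automaton: each cell's next state blends its current
    state (weighted by its coherence) with the impact-weighted
    mean of its neighbours' states."""
    s = list(saliency)
    n = len(s)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            total = sum(impact[i])
            neigh = (sum(impact[i][j] * s[j] for j in range(n)) / total
                     if total else s[i])
            nxt.append(coherence[i] * s[i] + (1.0 - coherence[i]) * neigh)
        s = nxt
    return s

# One step on a 3-cell chain: saliency diffuses from the first cell
print(sca_update([1.0, 0.0, 0.0], [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
                 [0.5, 0.5, 0.5], steps=1))  # [0.5, 0.25, 0.0]
```

Iterating pulls similar (strongly coupled) cells toward a common saliency value, which is the propagation effect the authors exploit to refine existing saliency maps.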
in International journal of computer vision > vol 126 n° 7 (July 2018). - pp 751-770 [article]
A light and faster regional convolutional neural network for object detection in optical remote sensing images / Peng Ding in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Title: A light and faster regional convolutional neural network for object detection in optical remote sensing images
Document type: Article/Communication
Authors: Peng Ding; Ye Zhang; Wei-Jian Deng; Ping Jia; Arjan Kuijper
Year of publication: 2018
Pages: pp 208-218
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classification orientée objet
[Termes IGN] détection d'objet
[Termes IGN] image aérienne
[Termes IGN] image terrestre
[Termes IGN] représentation multiple
[Termes IGN] réseau neuronal convolutif
Abstract: (author) Detection of objects in satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances, and objects in satellite remote sensing images can now be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the original Faster R-CNN framework does not yield a suitably high precision on them. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve the precision. We also propose an approach to reduce the detection time and memory requirements. To validate the effectiveness of our approach, we perform experiments on satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure detects objects in satellite optical remote sensing images more accurately and efficiently.
Record no.: A2018-288
Author affiliation: non IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.05.005
Online publication date: 14/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.05.005
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90403
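Detection precision of the kind this abstract reports is conventionally scored by intersection over union (IoU) between predicted and ground-truth boxes. A standard sketch of that criterion (not code from the paper; the example boxes are invented):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the standard overlap criterion used to decide
    whether an object detection counts as correct."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a 1x1 corner: IoU = 1/7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

For the dense, small objects the authors target, even small localization errors move IoU sharply, which is why detector precision on such imagery is hard to raise.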
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018). - pp 208-218 [article]
Copies (3)
Barcode | Call number | Type | Location | Section | Availability
081-2018071 | RAB | Journal | Centre de documentation | Reserve L003 | Available
081-2018073 | DEP-EXM | Journal | LASTIG | Unit deposit | Not for loan
081-2018072 | DEP-EAF | Journal | Nancy | Unit deposit | Not for loan
Mining and visual exploration of closed contiguous sequential patterns in trajectories / Can Yang in International journal of geographical information science IJGIS, vol 32 n° 7-8 (July - August 2018)
[article]
Title: Mining and visual exploration of closed contiguous sequential patterns in trajectories
Document type: Article/Communication
Authors: Can Yang; Gyözö Gidofalvi
Year of publication: 2018
Pages: pp 1413-1435
General note: Bibliography
Languages: English (eng)
Descriptor: [Termes IGN] analyse spatio-temporelle
[Termes IGN] arbre de décision
[Termes IGN] exploration de données géographiques
[Termes IGN] réseau routier
[Termes IGN] trafic routier
[Termes IGN] trajectoire (véhicule non spatial)
[Termes IGN] visualisation de données
[Vedettes matières IGN] Géovisualisation
Free keywords: closed contiguous sequential pattern = motif séquentiel contigu fermé
Abstract: (author) Large collections of trajectories provide rich insight into the movement patterns of the tracked objects. By map matching trajectories to a road network as sequences of road edge IDs, contiguous sequential patterns can be extracted as a certain number of objects traversing a specific path, which provides valuable information for travel demand modeling and transportation planning. Mining and visualization of such patterns still face challenges in efficiency, scalability, and visual cluttering of patterns. To address these challenges, this article first proposes a Bidirectional Pruning based Closed Contiguous Sequential pattern Mining (BP-CCSM) algorithm. By employing tree structures to create partitions of input sequences and candidate patterns, closedness can be checked efficiently by comparing nodes in a tree. Second, a system called Sequential Pattern Explorer for Trajectories (SPET) is built for spatial and temporal exploration of the mined patterns. Two types of maps are designed: a conventional traffic map gives an overview of the movement patterns, and a dynamic offset map presents detailed information according to user-specified filters. Extensive experiments are performed. BP-CCSM is compared with three other state-of-the-art algorithms on two datasets: a small public dataset containing clickstreams from an e-commerce site and a large GPS dataset with more than 600,000 taxi trip trajectories. The results show that BP-CCSM considerably outperforms the three other algorithms in running time and memory consumption. In addition, SPET provides an efficient and convenient way to inspect spatial and temporal variations in closed contiguous sequential patterns from a large number of trajectories.
Record no.: A2018-279
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1080/13658816.2017.1393542
Online publication date: 31/10/2017
Online: https://doi.org/10.1080/13658816.2017.1393542
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90361
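The pattern definition that BP-CCSM mines, a contiguous run of road edge IDs shared by at least min_support trajectories, can be illustrated with a brute-force counter. The tree-based pruning that makes BP-CCSM fast is omitted, and the edge IDs below are hypothetical:

```python
from collections import Counter

def frequent_contiguous_patterns(trajectories, min_support, max_len=4):
    """Count every contiguous run of edge IDs (up to max_len), once
    per trajectory that contains it, and keep the runs whose support
    reaches min_support. BP-CCSM prunes this search with tree
    structures; this brute-force version only illustrates what a
    contiguous sequential pattern is."""
    counts = Counter()
    for traj in trajectories:
        seen = set()  # count each pattern once per trajectory
        for i in range(len(traj)):
            for j in range(i + 1, min(i + max_len, len(traj)) + 1):
                seen.add(tuple(traj[i:j]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}

# Three map-matched trips over hypothetical road edges e1..e4
trips = [["e1", "e2", "e3"], ["e2", "e3", "e4"], ["e2", "e3"]]
pats = frequent_contiguous_patterns(trips, min_support=3)
print(sorted(pats))  # [('e2',), ('e2', 'e3'), ('e3',)]
```

Here ('e2',) and ('e3',) have the same support as their super-pattern ('e2', 'e3'), so a closed-pattern miner such as BP-CCSM would report only ('e2', 'e3'), which is what keeps its output compact.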
in International journal of geographical information science IJGIS > vol 32 n° 7-8 (July - August 2018). - pp 1413-1435 [article]
Copies (1)
Barcode | Call number | Type | Location | Section | Availability
079-2018041 | RAB | Journal | Centre de documentation | Reserve L003 | Available
Testing time-geographic density estimation for home range analysis using an agent-based model of animal movement / Joni A. Downs in International journal of geographical information science IJGIS, vol 32 n° 7-8 (July - August 2018)
[article]
Title: Testing time-geographic density estimation for home range analysis using an agent-based model of animal movement
Document type: Article/Communication
Authors: Joni A. Downs; Mark Horner; David Lamb; Rebecca W. Loraamm; James Anderson; Brittany Wood
Year of publication: 2018
Pages: pp 1505-1522
General note: Bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Analyse spatiale
[Termes IGN] aire naturelle (écologie)
[Termes IGN] densité de population
[Termes IGN] données localisées
[Termes IGN] méthode fondée sur le noyau
[Termes IGN] migration animale
[Termes IGN] modèle orienté agent
[Termes IGN] population animale
[Termes IGN] Time-geography
Abstract: (author) Time-geographic density estimation (TGDE) is a method of movement pattern analysis that generates a continuous intensity surface from a set of tracking data. TGDE has recently been proposed as a method of animal home range estimation, where the goal is to delineate the spatial extents that an animal occupies. This paper tests TGDE's effectiveness as a home range estimator using simulated movement data. First, an agent-based model is used to simulate tracking data under 16 movement scenarios representing a variety of animal life history traits (habitat preferences, homing behaviour, mobility) and habitat configurations (levels of habitat fragmentation). Second, the accuracy of TGDE is evaluated for four temporal sampling frequencies using three adaptive velocity parameters for 30 sample data sets from each scenario. Third, TGDE accuracy is compared to two other common home range estimation methods, kernel density estimation (KDE) and characteristic hull polygons (CHP). The results demonstrate that TGDE is the most effective at estimating core areas, home ranges and total areas at high sampling frequencies, while CHP performs better at low sampling frequencies. KDE was ineffective across all scenarios explored.
Record no.: A2018-281
Author affiliation: non IGN
Theme: GEOMATIQUE
Nature: Article
DOI: 10.1080/13658816.2017.1421764
Online publication date: 03/01/2018
Online: https://doi.org/10.1080/13658816.2017.1421764
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90363
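TGDE differs from plain KDE in that it spreads density only over locations the animal could physically have visited between consecutive fixes. That time-geographic reachability constraint (the space-time prism) can be sketched as follows, assuming straight-line distance and a fixed maximum speed; the coordinates and speed are invented:

```python
import math

def reachable(p1, t1, p2, t2, x, v_max):
    """Time-geographic constraint behind TGDE: between fixes
    (p1, t1) and (p2, t2), an object moving at most v_max can have
    visited x only if the detour through x fits in the elapsed time
    (i.e. x lies inside the space-time prism)."""
    detour = math.dist(p1, x) + math.dist(x, p2)
    return detour <= v_max * (t2 - t1)

# Fixes 100 m apart, 60 s elapsed, v_max = 2 m/s: the direct
# midpoint is reachable, a 200 m lateral detour is not
print(reachable((0, 0), 0, (100, 0), 60, (50, 0), 2.0))    # True
print(reachable((0, 0), 0, (100, 0), 60, (50, 200), 2.0))  # False
```

Restricting density to this elliptical region is why TGDE avoids the over-smoothing that, per the abstract, makes KDE ineffective at home range estimation here.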
in International journal of geographical information science IJGIS > vol 32 n° 7-8 (July - August 2018). - pp 1505-1522 [article]
Copies (1)
Barcode | Call number | Type | Location | Section | Availability
079-2018041 | RAB | Journal | Centre de documentation | Reserve L003 | Available
Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
Classification à très large échelle d'images satellites à très haute résolution spatiale par réseaux de neurones convolutifs / Tristan Postadjian in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (June - September 2018)
Fusion tardive d'images SPOT 6/7 et de données multitemporelles Sentinel-2 pour la détection de la tache urbaine / Cyril Wendl in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (June - September 2018)
A voxel- and graph-based strategy for segmenting man-made infrastructures using perceptual grouping laws: comparison and evaluation / Yusheng Xu in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 6 (June 2018)
An object-based approach for mapping forest structural types based on low-density LiDAR and multispectral imagery / Luis Angel Ruiz in Geocarto international, vol 33 n° 5 (May 2018)
Classifying airborne LiDAR point clouds via deep features learned by a multi-scale convolutional neural network / Ruibin Zhao in International journal of geographical information science IJGIS, vol 32 n° 5-6 (May - June 2018)
Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification / Tao Liu in ISPRS Journal of photogrammetry and remote sensing, vol 139 (May 2018)
Do semantic parts emerge in convolutional neural networks? / Abel Gonzalez-Garcia in International journal of computer vision, vol 126 n° 5 (May 2018)
A geometric-based approach for road matching on multi-scale datasets using a genetic algorithm / Alireza Chehreghan in Cartography and Geographic Information Science, Vol 45 n° 3 (May 2018)
Large-scale supervised learning for 3D Point cloud labeling: Semantic3d.Net / Timo Hackel in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 5 (May 2018)