Descripteur
Documents disponibles dans cette catégorie (1415)
Combining machine-learning topic models and spatiotemporal analysis of social media data for disaster footprint and damage assessment / Bernd Resch in Cartography and Geographic Information Science, Vol 45 n° 4 (July 2018)
[article]
Titre : Combining machine-learning topic models and spatiotemporal analysis of social media data for disaster footprint and damage assessment
Type de document : Article/Communication
Auteurs : Bernd Resch, Auteur ; Florian Usländer, Auteur ; Clemens Havas, Auteur
Année de publication : 2018
Article en page(s) : pp 362 - 376
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Analyse spatiale
[Termes IGN] analyse spatio-temporelle
[Termes IGN] apprentissage automatique
[Termes IGN] carte thématique
[Termes IGN] catastrophe naturelle
[Termes IGN] dommage matériel
[Termes IGN] données issues des réseaux sociaux
[Termes IGN] empreinte
[Termes IGN] gestion de crise
Résumé : (Auteur) Current disaster management procedures to cope with human and economic losses and to manage a disaster’s aftermath suffer from a number of shortcomings, such as high temporal lags or limited temporal and spatial resolution. This paper presents an approach to analyze social media posts to assess the footprint of, and the damage caused by, natural disasters by combining machine-learning techniques (Latent Dirichlet Allocation) for semantic information extraction with spatial and temporal analysis (local spatial autocorrelation) for hot spot detection. Our results demonstrate that earthquake footprints can be reliably and accurately identified in our use case. Moreover, a number of relevant semantic topics can be automatically identified without a priori knowledge, revealing clearly differing temporal and spatial signatures. Furthermore, we are able to generate a damage map that indicates where significant losses have occurred. The validation of our results using statistical measures, complemented by the official earthquake footprint of the US Geological Survey and the results of the HAZUS loss model, shows that our approach produces valid and reliable outputs. Thus, our approach may improve current disaster management procedures by generating a new and previously unseen information layer in near real time.
Numéro de notice : A2018-136
Affiliation des auteurs : non IGN
Thématique : GEOMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1080/15230406.2017.1356242
Date de publication en ligne : 03/08/2017
En ligne : https://doi.org/10.1080/15230406.2017.1356242
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=89678
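As a reading aid only (this is not the authors' code), the hot-spot step described in the abstract above, local spatial autocorrelation over gridded social media posts, can be sketched with a hand-rolled Getis-Ord Gi* statistic; the 5x5 grid of post counts below is hypothetical:

```python
import math

# Hypothetical counts of disaster-related posts per grid cell (5x5 toy grid).
counts = [
    [0, 1, 0, 0, 0],
    [1, 8, 9, 1, 0],
    [0, 9, 12, 2, 0],
    [0, 1, 2, 0, 0],
    [0, 0, 0, 0, 1],
]

def getis_ord_gi_star(grid, i, j):
    """Getis-Ord Gi* z-score for cell (i, j), using binary weights over the
    cell itself plus its 8-neighbourhood (queen contiguity)."""
    flat = [v for row in grid for v in row]
    n = len(flat)
    mean = sum(flat) / n
    s = math.sqrt(sum(v * v for v in flat) / n - mean ** 2)
    window = [
        grid[a][b]
        for a in range(max(0, i - 1), min(len(grid), i + 2))
        for b in range(max(0, j - 1), min(len(grid[0]), j + 2))
    ]
    w = len(window)  # sum of binary weights in the window
    numerator = sum(window) - mean * w
    denominator = s * math.sqrt((n * w - w * w) / (n - 1))
    return numerator / denominator

# The centre of the simulated post cluster gets a strongly positive z-score,
# i.e. it is flagged as a hot spot.
print(round(getis_ord_gi_star(counts, 2, 2), 2))
```

A conventional cut-off of about 1.96 (5% significance) separates hot-spot cells from the background; aggregating such cells yields a footprint map of the kind the abstract describes.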
in Cartography and Geographic Information Science > Vol 45 n° 4 (July 2018) . - pp 362 - 376 [article]
Exemplaires (1)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
032-2018041 | RAB | Revue | Centre de documentation | En réserve L003 | Disponible

Evolutionary approach for detection of buried remains using hyperspectral images / Leon Dozal in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 7 (juillet 2018)
[article]
Titre : Evolutionary approach for detection of buried remains using hyperspectral images
Type de document : Article/Communication
Auteurs : Leon Dozal, Auteur ; José L. Silvan-Cardenas, Auteur ; Daniela Moctezuma, Auteur ; Oscar S. Siordia, Auteur ; Enrique Naredo, Auteur
Année de publication : 2018
Article en page(s) : pp 435 - 450
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] algorithme génétique
[Termes IGN] image hyperspectrale
[Termes IGN] Mexique
[Termes IGN] précision de la classification
[Termes IGN] teneur en eau de la végétation
[Termes IGN] tombe
Résumé : (Auteur) Hyperspectral imaging has been successfully utilized to locate clandestine graves. This study applied a Genetic Programming technique called Brain Programming (BP) to automate the design of Hyperspectral Visual Attention Models (H-VAM), which is proposed as a new method for the detection of buried remains. Four graves were simulated and monitored for six months by taking in situ spectral measurements of the ground. Two experiments were implemented using the Kappa and weighted Kappa coefficients as classification accuracy measures to guide the BP search for the best H-VAM. Experimental results demonstrate that the proposed BP method improves classification accuracy compared to a previous approach. A better detection performance was observed for the image acquired three months after burial. Moreover, the results suggest the use of spectral bands that respond to vegetation and to the water content of plants, and provide evidence that the number of buried bodies plays a crucial role in successful detection.
Numéro de notice : A2018-359
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.14358/PERS.84.7.435
Date de publication en ligne : 01/07/2018
En ligne : https://doi.org/10.14358/PERS.84.7.435
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=90599
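The Kappa coefficient used above as the fitness measure guiding the BP search can be written down in a few lines. A minimal, unweighted Cohen's kappa (the weighted variant the paper also uses adds a penalty matrix), with hypothetical grave/soil labels:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two labelings, corrected for the
    agreement expected by chance."""
    assert len(y_true) == len(y_pred)
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_expected = sum(
        (y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical ground truth vs. classifier output for 8 pixels.
truth = ["grave", "grave", "grave", "soil", "soil", "soil", "grave", "soil"]
pred  = ["grave", "grave", "soil",  "soil", "soil", "grave", "grave", "soil"]
print(cohens_kappa(truth, pred))  # 0.5
```

In an evolutionary loop, this value would simply be returned as the fitness of a candidate H-VAM: higher kappa, fitter individual.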
in Photogrammetric Engineering & Remote Sensing, PERS > vol 84 n° 7 (juillet 2018) . - pp 435 - 450 [article]
Exemplaires (1)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
105-2018071 | RAB | Revue | Centre de documentation | En réserve L003 | Disponible

Exploring geo-tagged photos for land cover validation with deep learning / Hanfa Xing in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Titre : Exploring geo-tagged photos for land cover validation with deep learning
Type de document : Article/Communication
Auteurs : Hanfa Xing, Auteur ; Yuan Meng, Auteur ; Zixuan Wang, Auteur ; Kaixuan Fan, Auteur ; Dongyang Hou, Auteur
Année de publication : 2018
Article en page(s) : pp 237 - 251
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] apprentissage profond
[Termes IGN] base de données d'occupation du sol
[Termes IGN] Californie (Etats-Unis)
[Termes IGN] échantillon
[Termes IGN] estimation de précision
[Termes IGN] géobalise
[Termes IGN] image numérique
[Termes IGN] occupation du sol
[Termes IGN] production participative
[Termes IGN] réseau neuronal convolutif
Résumé : (Auteur) Land cover validation plays an important role in the process of generating and distributing land cover thematic maps, and usually entails costly sample interpretation with remotely sensed images or field surveys. With the increasing availability of geo-tagged landscape photos, automatic photo recognition methodologies, e.g., deep learning, can be effectively utilised for land cover applications. However, they have hardly been utilised in validation processes, as challenges remain in sample selection and classification for highly heterogeneous photos. This study proposed an approach to employing geo-tagged photos for land cover validation using deep learning. The approach first identified photos automatically based on the VGG-16 network. Then, samples for validation were selected and further classified by considering photo distribution and classification probabilities. The implementation was conducted for the validation of the GlobeLand30 land cover product in a heterogeneous area, western California. Experimental results showed promise for land cover validation, given that GlobeLand30 achieved an overall accuracy of 83.80% with classified samples, close to the validation result of 80.45% based on visual interpretation. Additionally, the performances of deep learning based on ResNet-50 and AlexNet were also quantified, revealing no substantial differences in the final validation results. The proposed approach ensures geo-tagged photo quality, and supports the sample classification strategy by considering photo distribution, with accuracy improvement from 72.07% to 79.33% compared with considering only the single nearest photo. Consequently, the presented approach proves the feasibility of deep learning for land cover information identification from geo-tagged photos, and has great potential to support and improve the efficiency of land cover validation.
Numéro de notice : A2018-289
Affiliation des auteurs : non IGN
Thématique : IMAGERIE/INFORMATIQUE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2018.04.025
Date de publication en ligne : 16/05/2018
En ligne : https://doi.org/10.1016/j.isprsjprs.2018.04.025
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=90404
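The sample classification strategy sketched in the abstract, combining the classification probabilities of several surrounding photos rather than trusting only the single nearest one, can be illustrated as a probability-summing vote. The labels and scores below are hypothetical, not taken from the paper:

```python
from collections import defaultdict

def classify_sample(photo_predictions):
    """Aggregate per-photo (label, probability) predictions around one
    validation sample by summing probabilities per label and returning
    the best-supported land cover class."""
    scores = defaultdict(float)
    for label, prob in photo_predictions:
        scores[label] += prob
    return max(scores, key=scores.get)

# Three geo-tagged photos near one sample: two weaker "forest" votes
# together outweigh one fairly confident "cropland" photo.
photos = [("forest", 0.62), ("cropland", 0.88), ("forest", 0.55)]
print(classify_sample(photos))  # forest
```

A distance-dependent weighting of each photo's contribution would be a natural refinement of this toy aggregation.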
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018) . - pp 237 - 251 [article]
Exemplaires (3)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2018071 | RAB | Revue | Centre de documentation | En réserve L003 | Disponible
081-2018073 | DEP-EXM | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2018072 | DEP-EAF | Revue | Nancy | Dépôt en unité | Exclu du prêt

Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography / Lukas Roth in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
[article]
Titre : Extracting leaf area index using viewing geometry effects : A new perspective on high-resolution unmanned aerial system photography
Type de document : Article/Communication
Auteurs : Lukas Roth, Auteur ; Helge Aasen, Auteur ; Achim Walter, Auteur ; Frank Liebisch, Auteur
Année de publication : 2018
Article en page(s) : pp 161 - 175
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage automatique
[Termes IGN] cultures
[Termes IGN] drone
[Termes IGN] Glycine max
[Termes IGN] image aérienne
[Termes IGN] image RVB
[Termes IGN] indice foliaire
[Termes IGN] Leaf Area Index
[Termes IGN] modélisation géométrique de prise de vue
[Termes IGN] orthoimage géoréférencée
[Termes IGN] segmentation d'image
[Termes IGN] simulation 3D
[Termes IGN] Suisse
Résumé : (Editeur) Extraction of leaf area index (LAI) is an important prerequisite in numerous studies related to plant ecology, physiology and breeding. LAI is indicative of the performance of a plant canopy and of its potential for growth and yield. In this study, a novel method to estimate LAI from RGB images taken by an unmanned aerial system (UAS) is introduced. Soybean was taken as the model crop of investigation. The method integrates viewing geometry information in an approach related to gap fraction theory. A 3-D simulation of virtual canopies helped to develop and verify the underlying model. In addition, the method includes techniques to extract plot-based data from individual oblique images using image projection, as well as image segmentation applying an active learning approach. Data from a soybean field experiment were used to validate the method. The measured LAI prediction accuracy was comparable with that of a gap fraction-based handheld device (R² of …, RMSE of … m² m⁻²) and correlated well with destructive LAI measurements (R² of …, RMSE of … m² m⁻²). These results indicate that, within the range (LAI …) the method was tested for, extracting LAI from UAS-derived RGB images using viewing geometry information is a valid alternative to destructive and optical handheld device LAI measurements in soybean. Thereby, we open the door for automated, high-throughput assessment of LAI in plant and crop science.
Numéro de notice : A2018-287
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2018.04.012
Date de publication en ligne : 07/05/2018
En ligne : https://doi.org/10.1016/j.isprsjprs.2018.04.012
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=90402
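The "approach related to gap fraction theory" mentioned in the abstract builds on the classical Beer-Lambert inversion that any gap-fraction method shares. The paper's viewing-geometry model is more elaborate than this sketch, and the extinction coefficient k below is an assumed value, not one from the paper:

```python
import math

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Beer-Lambert inversion: P = exp(-k * LAI)  =>  LAI = -ln(P) / k,
    where P is the fraction of view rays reaching the ground (gaps)
    and k the extinction coefficient for the viewing/canopy geometry."""
    return -math.log(gap_fraction) / k

# 20% visible ground with an assumed k of 0.5 implies an LAI near 3.2.
print(round(lai_from_gap_fraction(0.2), 2))
```

Varying k with view zenith angle is one way viewing geometry enters such models; the oblique UAS images in the paper provide exactly that angular diversity.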
in ISPRS Journal of photogrammetry and remote sensing > vol 141 (July 2018) . - pp 161 - 175 [article]
Exemplaires (3)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
081-2018071 | RAB | Revue | Centre de documentation | En réserve L003 | Disponible
081-2018073 | DEP-EXM | Revue | LASTIG | Dépôt en unité | Exclu du prêt
081-2018072 | DEP-EAF | Revue | Nancy | Dépôt en unité | Exclu du prêt

Hierarchical cellular automata for visual saliency / Yao Qin in International journal of computer vision, vol 126 n° 7 (July 2018)
[article]
Titre : Hierarchical cellular automata for visual saliency
Type de document : Article/Communication
Auteurs : Yao Qin, Auteur ; Mengyang Feng, Auteur ; Huchuan Lu, Auteur ; Garrison W. Cottrell, Auteur
Année de publication : 2018
Article en page(s) : pp 751 - 770
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image
[Termes IGN] apprentissage profond
[Termes IGN] automate cellulaire
[Termes IGN] classification bayesienne
[Termes IGN] réseau neuronal artificiel
[Termes IGN] zone saillante 3D
Résumé : (Auteur) Saliency detection, finding the most important parts of an image, has become increasingly popular in computer vision. In this paper, we introduce Hierarchical Cellular Automata (HCA), a temporally evolving model to intelligently detect salient objects. HCA consists of two main components: Single-layer Cellular Automata (SCA) and Cuboid Cellular Automata (CCA). As an unsupervised propagation mechanism, Single-layer Cellular Automata can exploit the intrinsic relevance of similar regions through interactions with neighbors. Low-level image features as well as high-level semantic information extracted from deep neural networks are incorporated into the SCA to measure the correlation between different image patches. With these hierarchical deep features, an impact factor matrix and a coherence matrix are constructed to balance the influences on each cell’s next state. The saliency values of all cells are iteratively updated according to a well-defined update rule. Furthermore, we propose CCA to integrate multiple saliency maps generated by SCA at different scales in a Bayesian framework. Therefore, single-layer propagation and multi-scale integration are jointly modeled in our unified HCA. Surprisingly, we find that the SCA can improve all existing methods that we applied it to, resulting in a similar precision level regardless of the original results. The CCA can act as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, resulting in even better results. Extensive experiments on four challenging datasets demonstrate that the proposed algorithm outperforms state-of-the-art conventional methods and is competitive with deep learning based approaches.
Numéro de notice : A2018-413
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1007/s11263-017-1062-2
Date de publication en ligne : 23/02/2018
En ligne : https://doi.org/10.1007/s11263-017-1062-2
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=90896
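The "well-defined update rule" of the SCA described above can be caricatured in one dimension: each cell keeps part of its own saliency and adopts the rest from a similarity-weighted mean of its neighbours. The uniform weights and the balance factor lam below are hypothetical stand-ins for the paper's impact factor and coherence matrices:

```python
def sca_step(saliency, weights, lam=0.6):
    """One synchronous cellular automaton update (toy 1-D version):
    new state = lam * own saliency + (1 - lam) * weighted neighbour mean."""
    n = len(saliency)
    out = []
    for i in range(n):
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < n]
        influence = sum(weights[i][j] * saliency[j] for j in neighbours)
        norm = sum(weights[i][j] for j in neighbours)
        out.append(lam * saliency[i] + (1 - lam) * influence / norm)
    return out

# A single salient seed diffuses towards its neighbours over iterations.
state = [0.0, 1.0, 0.0]
uniform = [[1.0] * 3 for _ in range(3)]
for _ in range(3):
    state = sca_step(state, uniform)
print([round(v, 3) for v in state])
```

In the paper's 2-D setting, cells are superpixels and the weights come from deep-feature similarity, so saliency propagates within coherent regions instead of uniformly.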
in International journal of computer vision > vol 126 n° 7 (July 2018) . - pp 751 - 770 [article]

A light and faster regional convolutional neural network for object detection in optical remote sensing images / Peng Ding in ISPRS Journal of photogrammetry and remote sensing, vol 141 (July 2018)
Mining and visual exploration of closed contiguous sequential patterns in trajectories / Can Yang in International journal of geographical information science IJGIS, vol 32 n° 7-8 (July - August 2018)
Testing time-geographic density estimation for home range analysis using an agent-based model of animal movement / Joni A. Downs in International journal of geographical information science IJGIS, vol 32 n° 7-8 (July - August 2018)
Advancing New Testament interpretation through spatio‐temporal analysis: Demonstrated by case studies / Vincent Van Altena in Transactions in GIS, vol 22 n° 3 (June 2018)
Application of deep learning for object detection / Ajeet Ram Pathak in Procedia Computer Science, vol 132 (2018)
Classification à très large échelle d’images satellites à très haute résolution spatiale par réseaux de neurones convolutifs / Tristan Postadjian in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (juin - septembre 2018)
Foreword to the theme issue on geospatial computer vision / Jan Dirk Wegner in ISPRS Journal of photogrammetry and remote sensing, vol 140 (June 2018)
Fusion tardive d’images SPOT 6/7 et de données multitemporelles Sentinel-2 pour la détection de la tache urbaine / Cyril Wendl in Revue Française de Photogrammétrie et de Télédétection, n° 217-218 (juin - septembre 2018)
Geometric reasoning with uncertain polygonal faces / Jochen Meidow in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 6 (juin 2018)
Pré-estimation et analyse de la précision pour la cartographie par drone / Laurent Valentin Jospin in XYZ, n° 155 (juin - août 2018)