Descriptor
Documents available in this category (1844)
Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
[article]
Title: Transfer learning from citizen science photographs enables plant species identification in UAV imagery
Document type: Article/Communication
Authors: Salim Soltani; Hannes Feilhauer; Robbert Duker; et al.
Publication year: 2022
Article pages: n° 100016
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] naturalist databases
[IGN terms] convolutional neural network classification
[IGN terms] spatial distribution
[IGN terms] volunteered geographic information
[IGN terms] plant species
[IGN terms] vegetation filtering
[IGN terms] plant identification
[IGN terms] UAV imagery
[IGN terms] colour orthoimage
[IGN terms] citizen science
[IGN terms] semantic segmentation
Abstract: (author) Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular with data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge of the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could hold great potential for application to bird's-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We therefore investigate, first, whether we can use crowd-sourced plant photographs for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Second, we test whether the predictive performance can be increased by a priori selecting photographs that share a more similar perspective to the remote sensing data. We used two case studies to test our proposed approach with multiple RGB orthoimages acquired from UAV, targeting the plant species Fallopia japonica and Portulacaria afra, respectively.
Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively ease a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications.
Record number: A2022-488
Authors' affiliation: non IGN
Subject area: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.ophoto.2022.100016
Online publication date: 23/05/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100956
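The a-priori photograph selection the abstract describes, keeping only citizen-science photos whose acquisition properties resemble a remote-sensing view, can be sketched as a simple metadata filter. The field names (camera tilt, subject distance) are hypothetical illustrations, not fields from the paper or the iNaturalist API:

```python
def filter_training_photos(photos, max_tilt_deg=30.0, min_distance_m=1.0):
    """Keep photographs whose (hypothetical) acquisition metadata suggests
    a perspective closer to a UAV bird's-eye view."""
    return [
        p for p in photos
        if p.get("camera_tilt_deg", 90.0) <= max_tilt_deg      # near-nadir views only
        and p.get("subject_distance_m", 0.0) >= min_distance_m  # drop extreme close-ups
    ]

photos = [
    {"id": 1, "camera_tilt_deg": 10.0, "subject_distance_m": 3.0},  # drone-like view
    {"id": 2, "camera_tilt_deg": 85.0, "subject_distance_m": 0.3},  # ground-level close-up
]
kept = filter_training_photos(photos)  # only photo 1 survives the filter
```

The retained subset would then feed CNN training in place of the full heterogeneous collection.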
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 5 (August 2022) . - n° 100016 [article]

Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition / Tiantian Yan in Pattern recognition, vol 127 (July 2022)
[article]
Title: Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition
Document type: Article/Communication
Authors: Tiantian Yan; Jian Shi; Haojie Li; et al.
Publication year: 2022
Article pages: n° 108629
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] discriminant analysis
[IGN terms] minimum spanning tree
[IGN terms] convolutional neural network classification
[IGN terms] data extraction
[IGN terms] image granularity
[IGN terms] low-resolution imagery
[IGN terms] high-resolution imagery
[IGN terms] semantic relation
[IGN terms] image texture
Abstract: (author) Existing fine-grained image recognition methods are mainly devoted to learning subtle yet discriminative features from high-resolution input. However, their performance deteriorates significantly on low-quality images, because many discriminative details are missing. We propose a discriminative information restoration and extraction network, termed DRE-Net, to address the problem of low-resolution fine-grained image recognition, which has widespread application potential, such as in shelf auditing and surveillance scenarios. DRE-Net is the first framework for weakly supervised low-resolution fine-grained image recognition and consists of two sub-networks: (1) a fine-grained discriminative information restoration sub-network (FDR) and (2) a recognition sub-network with the semantic relation distillation loss (SRD-loss). The first module utilizes the structural characteristic of the minimum spanning tree (MST) to establish context information for each pixel from the spatial structures between that pixel and the others, which helps FDR focus on and restore the critical texture details. The second module employs the SRD-loss to calibrate the recognition sub-network by transferring the correct relationships between every two pixels on the feature map. Meanwhile, the SRD-loss further prompts the FDR to recover reliable and accurate fine-grained details and guides the recognition sub-network to perceive the discriminative features from the correct relationships. Extensive experiments on three benchmark datasets and one retail product dataset demonstrate the effectiveness of our proposed framework.
Record number: A2022-555
Authors' affiliation: non IGN
Subject area: IMAGERIE
Nature: Article
DOI: 10.1016/j.patcog.2022.108629
Online publication date: 06/03/2022
Online: https://doi.org/10.1016/j.patcog.2022.108629
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101168
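The MST structure that FDR builds over pixel similarities can be illustrated with the standard Kruskal's algorithm on a small dissimilarity graph. This is a generic sketch of the textbook algorithm, not the authors' implementation:

```python
def kruskal_mst(n_nodes, edges):
    """Minimum spanning tree via Kruskal's algorithm.
    edges: list of (weight, u, v) tuples; returns the MST edge list."""
    parent = list(range(n_nodes))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Four "pixels" with pairwise dissimilarity weights between neighbours
edges = [(1, 0, 1), (4, 1, 2), (2, 0, 2), (5, 2, 3), (3, 1, 3)]
tree = kruskal_mst(4, edges)  # 3 edges, total weight 1 + 2 + 3 = 6
```

In DRE-Net the tree edges would connect similar pixels, so walking the tree aggregates context from structurally related regions.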
in Pattern recognition > vol 127 (July 2022) . - n° 108629 [article]

Estimating generalized measures of local neighbourhood context from multispectral satellite images using a convolutional neural network / Alex David Singleton in Computers, Environment and Urban Systems, vol 95 (July 2022)
[article]
Title: Estimating generalized measures of local neighbourhood context from multispectral satellite images using a convolutional neural network
Document type: Article/Communication
Authors: Alex David Singleton; Dani Arribas-Bel; John Murray; et al.
Publication year: 2022
Article pages: n° 101802
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] cluster analysis
[IGN terms] principal component analysis
[IGN terms] machine learning
[IGN terms] building
[IGN terms] land cover map
[IGN terms] convolutional neural network classification
[IGN terms] Great Britain
[IGN terms] multispectral imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] urban morphology
[IGN terms] weighting
[IGN terms] graphics processing unit
Abstract: (author) The increased availability of high-resolution multispectral imagery captured by remote sensing platforms provides new opportunities for the characterisation and differentiation of urban context. The discovery of generalized latent representations from such data is, however, under-researched within the social sciences. As such, this paper exploits advances in machine learning to implement a new method of capturing measures of urban context from multispectral satellite imagery at a very small area level through the application of a convolutional autoencoder (CAE). The utility of the CAE outputs is enhanced through the application of spatial weighting, and the smoothed outputs are then summarised using cluster analysis to generate a typology comprising seven groups describing salient patterns of differentiated urban context. The limits of the technique are discussed with reference to the resolution of the satellite data utilised within the study and the interaction between the geography of the input data and the learned structure. The method is implemented for Great Britain but is applicable to any location where similar high-resolution multispectral imagery is available.
Record number: A2022-370
Authors' affiliation: non IGN
Subject area: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1016/j.compenvurbsys.2022.101802
Online publication date: 19/04/2022
Online: https://doi.org/10.1016/j.compenvurbsys.2022.101802
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100606
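The spatial-weighting step applied to the CAE outputs can be illustrated as a queen-contiguity average over a grid of latent values. This is a one-feature sketch under the assumption of a simple 3×3 neighbourhood mean; the study's actual weighting scheme may differ:

```python
def spatially_smooth(grid):
    """Replace each cell with the mean of itself and its 8 grid neighbours
    (edges use whatever neighbours exist)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [
                grid[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
            ]
            out[r][c] = sum(vals) / len(vals)
    return out

# A single latent feature over a 3x3 area grid, with one standout cell
latent = [
    [0.0, 0.0, 0.0],
    [0.0, 9.0, 0.0],
    [0.0, 0.0, 0.0],
]
smoothed = spatially_smooth(latent)  # the spike is spread across neighbours
```

After smoothing each small area's latent vector, a clustering step (e.g. k-means with k = 7) would yield the typology the abstract describes.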
in Computers, Environment and Urban Systems > vol 95 (July 2022) . - n° 101802 [article]

Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
[article]
Title: Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation
Document type: Article/Communication
Authors: Huan Ning; Zhenlong Li; Xinyue Ye; et al.
Publication year: 2022
Article pages: pp 1317 - 1342
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] image distortion
[IGN terms] feature extraction
[IGN terms] building height
[IGN terms] Street View imagery
[IGN terms] tacheometric survey
[IGN terms] digital surface model
[IGN terms] door
Abstract: (author) Street view imagery such as Google Street View is widely used in people's daily lives. Many studies have been conducted to detect and map objects such as traffic signs and sidewalks for urban built-up environment analysis. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measurement over large areas is under-exploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored vertical measurement in street view imagery using the principle of tacheometric surveying. In the case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure the doors' elevation. The results suggest that the average error of the estimated elevation is 0.218 m. The depth maps of Google Street View were utilized to traverse the elevation from the roadway surface to target objects. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies for large areas.
Record number: A2022-465
Authors' affiliation: non IGN
Subject area: IMAGERIE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1981334
Online publication date: 06/10/2021
Online: https://doi.org/10.1080/13658816.2021.1981334
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100970
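The fixed-door-height trick in the abstract reduces to a scale computation: the known real-world door height divided by the door's apparent height in pixels gives a metres-per-pixel scale, which converts the pixel offset between the door's bottom edge and the road surface into an elevation difference. A minimal sketch with made-up numbers, ignoring the perspective and depth-map corrections the paper applies:

```python
def lowest_floor_elevation(road_elevation_m, door_px_height,
                           door_bottom_px_above_road, door_real_height_m=2.03):
    """Estimate the lowest floor elevation from a detected door's pixel geometry.
    door_real_height_m is an assumed standard door height, not a measured value."""
    metres_per_pixel = door_real_height_m / door_px_height
    return road_elevation_m + door_bottom_px_above_road * metres_per_pixel

# A door detected 203 px tall, its bottom edge 50 px above the road surface,
# on a road whose elevation is 10.0 m: the floor sits about 0.5 m higher.
elev = lowest_floor_elevation(road_elevation_m=10.0,
                              door_px_height=203,
                              door_bottom_px_above_road=50)
```

In the actual pipeline the pixel offsets would come from the YOLO-v5 door bounding box and the Street View depth map rather than being supplied by hand.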
in International journal of geographical information science IJGIS > vol 36 n° 7 (July 2022) . - pp 1317 - 1342 [article]

Copies (1)
Barcode: 079-2022071 | Call number: SL | Medium: Journal | Location: Documentation centre | Section: Journals room | Availability: Available

Fusing Sentinel-2 and Landsat 8 satellite images using a model-based method / Jakob Sigurdsson in Remote sensing, vol 14 n° 13 (July-1 2022)
[article]
Title: Fusing Sentinel-2 and Landsat 8 satellite images using a model-based method
Document type: Article/Communication
Authors: Jakob Sigurdsson; Sveinn E. Armannsson; Magnus Orn Ulfarsson; Johannes R. Sveinsson
Publication year: 2022
Article pages: n° 3224
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] image fusion
[IGN terms] Landsat-8 imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] geometric resolution limit
[IGN terms] image acquisition geometry model
Abstract: (author) The Copernicus Sentinel-2 (S2) constellation comprises two satellites in a sun-synchronous orbit. The S2 sensors have three spatial resolutions: 10, 20, and 60 m. The Landsat 8 (L8) satellite has sensors that provide seasonal coverage at spatial resolutions of 15, 30, and 60 m. Many remote sensing applications require all data to be at the highest resolution possible, i.e., 10 m for S2. To address this demand, researchers have proposed various methods that exploit the spectral and spatial correlations within multispectral data to sharpen the S2 bands to 10 m. In this study, we combined S2 and L8 data. An S2 sharpening method called Sentinel-2 Sharpening (S2Sharp) was modified to include the 30 m and 15 m spectral bands from L8 and to sharpen all bands (S2 and L8) to the highest resolution of the data, which was 10 m. The method was evaluated using both real and simulated data.
Record number: A2022-573
Authors' affiliation: non IGN
Subject area: IMAGERIE
Nature: Article
DOI: 10.3390/rs14133224
Online publication date: 05/07/2022
Online: https://doi.org/10.3390/rs14133224
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101289
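Bringing every band onto a common 10 m grid is a prerequisite for this kind of fusion. The crudest baseline is nearest-neighbour replication, e.g. expanding a 30 m band by a factor of 3. This is only a baseline sketch for intuition, not the S2Sharp model itself:

```python
def upsample_nearest(band, factor):
    """Nearest-neighbour upsampling: replicate each pixel factor x factor times."""
    return [
        [value for value in row for _ in range(factor)]  # repeat each pixel along x
        for row in band
        for _ in range(factor)                           # repeat each row along y
    ]

# A 2x2 "30 m" band becomes a 6x6 "10 m" band
band_30m = [[1, 2],
            [3, 4]]
band_10m = upsample_nearest(band_30m, 3)
```

Model-based sharpening methods such as S2Sharp instead solve for the high-resolution bands jointly, using the spectral correlations between bands; the nearest-neighbour result is what such methods improve upon.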
in Remote sensing > vol 14 n° 13 (July-1 2022) . - n° 3224 [article]

Improving remote sensing classification: A deep-learning-assisted model / Tsimur Davydzenka in Computers & geosciences, vol 164 (July 2022)
Investigating the ability to identify new constructions in urban areas using images from unmanned aerial vehicles, Google Earth, and Sentinel-2 / Fahime Arabi Aliabad in Remote sensing, vol 14 n° 13 (July-1 2022)
Investigating the role of image retrieval for visual localization / Martin Humenberger in International journal of computer vision, vol 130 n° 7 (July 2022)
A lightweight network with attention decoder for real-time semantic segmentation / Kang Wang in The Visual Computer, vol 38 n° 7 (July 2022)
Modeling human–human interaction with attention-based high-order GCN for trajectory prediction / Yanyan Fang in The Visual Computer, vol 38 n° 7 (July 2022)
A second-order attention network for glacial lake segmentation from remotely sensed imagery / Shidong Wang in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Encoder-decoder structure with multiscale receptive field block for unsupervised depth estimation from monocular video / Songnan Chen in Remote sensing, vol 14 n° 12 (June-2 2022)
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
Adversarial defenses for object detectors based on Gabor convolutional layers / Abdollah Amirkhani in The Visual Computer, vol 38 n° 6 (June 2022)