Descriptor
IGN terms > imagery
imagery
Comment:
Umbrella term covering photographs and images produced by different sensors.
Documents available in this category (8192)
Transfer learning from citizen science photographs enables plant species identification in UAV imagery / Salim Soltani in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 5 (August 2022)
[article]
Title: Transfer learning from citizen science photographs enables plant species identification in UAV imagery
Document type: Article/Communication
Authors: Salim Soltani; Hannes Feilhauer; Robbert Duker; et al.
Year of publication: 2022
Pagination: n° 100016
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] naturalist database
[IGN terms] convolutional neural network classification
[IGN terms] spatial distribution
[IGN terms] volunteered geographic information
[IGN terms] plant species
[IGN terms] vegetation filtering
[IGN terms] plant identification
[IGN terms] UAV-acquired imagery
[IGN terms] colour orthoimage
[IGN terms] citizen science
[IGN terms] semantic segmentation
Abstract: (author) Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAVs). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge of the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data holds great potential for application to bird’s-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We investigate, first, whether crowd-sourced plant photographs can be used for CNN training and the subsequent mapping of plant species in high-resolution remote sensing imagery. Second, we test whether the predictive performance can be increased by a priori selecting photographs that share a perspective more similar to the remote sensing data. We used two case studies to test the proposed approach, with multiple RGB orthoimages acquired from UAVs and the target plant species Fallopia japonica and Portulacaria afra, respectively. Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively relieve a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications.
Record number: A2022-488
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.ophoto.2022.100016
Online publication date: 23/05/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100956
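The mapping step summarized in this abstract applies a photograph-trained CNN to a UAV orthoimage patch by patch. A minimal sketch of that tiling step, assuming a simple sliding window (the function name and parameters are illustrative assumptions; the trained classifier itself is not reproduced here):

```python
import numpy as np

def tile_orthoimage(image, patch, stride):
    """Slice an (H, W, 3) orthoimage into square patches for per-patch
    classification; returns the patch stack and top-left pixel offsets."""
    h, w = image.shape[:2]
    patches, offsets = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            offsets.append((y, x))
    return np.stack(patches), offsets
```

Each patch would then be scored by the classifier, and the per-patch predictions mosaicked back into a species map using the returned offsets.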
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 5 (August 2022) . - n° 100016 [article]

Multiscale assimilation of Sentinel and Landsat data for soil moisture and Leaf Area Index predictions using an ensemble-Kalman-filter-based assimilation approach in a heterogeneous ecosystem / Nicola Montaldo in Remote sensing, vol 14 n° 14 (July-2 2022)
[article]
Title: Multiscale assimilation of Sentinel and Landsat data for soil moisture and Leaf Area Index predictions using an ensemble-Kalman-filter-based assimilation approach in a heterogeneous ecosystem
Document type: Article/Communication
Authors: Nicola Montaldo; Andrea Gaspa; Roberto Corona
Year of publication: 2022
Pagination: n° 3458
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] data assimilation
[IGN terms] Mediterranean basin
[IGN terms] ecosystem
[IGN terms] Kalman filter
[IGN terms] soil moisture
[IGN terms] Landsat-8 imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] Sentinel-SAR imagery
[IGN terms] Leaf Area Index
[IGN terms] dynamic model
[IGN terms] hydrographic model
[IGN terms] Sardinia
[IGN terms] semi-arid zone
Abstract: (author) Data assimilation techniques allow researchers to optimally merge remote sensing observations into ecohydrological models, guiding them toward improved predictions of land surface fluxes. Freely available remote sensing products, such as those of the Sentinel 1 radar and the Landsat 8 and Sentinel 2 sensors, now allow the monitoring of land surface variables (e.g., radar backscatter for soil moisture, and the normalized difference vegetation index (NDVI) for leaf area index (LAI)) at unprecedentedly high spatial and temporal resolutions, appropriate for the heterogeneous ecosystems typical of semiarid regions, characterized by contrasting vegetation components (grass and trees) competing for water. A multiscale assimilation approach that assimilates radar backscatter and grass and tree NDVI into a coupled vegetation-dynamics and land-surface model is proposed. It is based on the ensemble Kalman filter (EnKF) and is not limited to assimilating remote sensing data for model predictions: it also uses the assimilated data to dynamically update key model parameters (the EnKFdc approach), including saturated hydraulic conductivity and the grass and tree maintenance respiration coefficients, which are highly sensitive parameters of soil-water balance and biomass budget models, respectively. The proposed EnKFdc assimilation approach provided good predictions of soil moisture and of grass and tree LAI in a heterogeneous ecosystem in Sardinia over a 3-year period with contrasting hydrometeorological (dry vs. wet) conditions. Contrary to the plain EnKF approach, the proposed EnKFdc approach performed well over the full range of hydrometeorological conditions and parameters, even assuming extremely biased model conditions with very high or low parameter values compared with the calibrated ("true") values. The EnKFdc approach is crucial for soil moisture and LAI predictions in winter and spring, key seasons for water resources management in Mediterranean water-limited ecosystems. The use of EnKFdc also enabled us to predict evapotranspiration and carbon flux well, with errors of less than 4% and 15%, respectively; these results were obtained even with extremely biased initial model conditions.
Record number: A2022-574
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/rs14143458
Online: https://doi.org/10.3390/rs14143458
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101293
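The analysis step of the ensemble Kalman filter underlying the approach above can be sketched as follows. This is a generic stochastic EnKF with perturbed observations, not the paper's EnKFdc parameter-updating variant; all function and variable names are illustrative assumptions:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_members, n_state) forecast ensemble
    obs:      (n_obs,) observation vector
    obs_op:   (n_obs, n_state) linear observation operator H
    obs_var:  scalar observation-error variance
    """
    n = ensemble.shape[0]
    Hx = ensemble @ obs_op.T                     # ensemble mapped to observation space
    A = ensemble - ensemble.mean(axis=0)         # state anomalies
    HA = Hx - Hx.mean(axis=0)                    # observation-space anomalies
    PHt = A.T @ HA / (n - 1)                     # sample estimate of P H^T
    S = HA.T @ HA / (n - 1) + obs_var * np.eye(obs.size)
    K = PHt @ np.linalg.inv(S)                   # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, obs.size))
    return ensemble + (perturbed - Hx) @ K.T     # analysis ensemble
```

EnKFdc additionally feeds the analysis back into selected model parameters (hydraulic conductivity, respiration coefficients); that extension is specific to the paper and not shown here.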
in Remote sensing > vol 14 n° 14 (July-2 2022) . - n° 3458 [article]

Validation of a corner reflector installation at Côte d’Azur multi-technique geodetic observatory / Xavier Collilieux in Advances in space research, vol 70 n° 2 (15 July 2022)
[article]
Title: Validation of a corner reflector installation at Côte d’Azur multi-technique geodetic observatory
Document type: Article/Communication
Authors: Xavier Collilieux; Clément Courde; Bénédicte Fruneau; Mourad Aimar; Guillaume Schmidt; Isabelle Delprat; Marie-Amélie Defresne; Damien Pesce; Fabien Bergerault; Guy Wöppelmann
Year of publication: 2022
Projects: Université de Paris / Clerici, Christine
Pagination: pp 360 - 370
General note: bibliography. This study contributes to the IdEx Université de Paris ANR-18-IDEX-0001. It was supported by the Programme National GRAM INSAROME of CNRS/INSU, with INP and IN2P3, co-funded by CNES and also by BQR-OCA.
Language: English (eng)
Descriptors: [IGN subject headings] Image and data acquisition
[IGN terms] corner reflector
[IGN terms] Sentinel constellation
[IGN terms] Global Geodetic Observing System
[IGN terms] moiré radar image
[IGN terms] synthetic aperture radar interferometry
[IGN terms] astronomical observatory
Abstract: (author) We present the procedure we followed to design an artificial corner reflector (CR) at the Calern site of the Côte d’Azur Observatory (France). Although still few in number, such reflectors are an integral part of the Global Geodetic Observing System (GGOS) infrastructure. They can be used as stable radar targets in SAR images to connect local InSAR deformation maps to the global Terrestrial Reference Frame, and for absolute SAR positioning. During a test phase, the orientation of the CR was changed in order to align it toward all possible orbits of the Sentinel-1A/1B satellites. In the different SAR images, the CR exhibits a strong backscattering signal and provides a Signal-to-Clutter Ratio larger than 26 dB. Since December 2018, the CR has been specifically oriented toward relative orbit 88. It is clearly detected as a persistent scatterer (PS) in our InSAR analyses and, as expected, the standard deviation of the displacement measured on the CR is lower than on surrounding PS. A first local survey was performed to precisely locate this CR with respect to the existing geodetic instruments, and annual campaigns have been carried out since then to ensure its stability over time.
Record number: A2022-337
Authors' affiliation: ENSG+Ext (2020- )
Theme: POSITIONNEMENT
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.asr.2022.04.050
Online publication date: 29/04/2022
Online: https://doi.org/10.1016/j.asr.2022.04.050
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100694
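The Signal-to-Clutter Ratio quoted in the abstract (larger than 26 dB) compares the reflector's peak backscatter intensity with the mean intensity of the surrounding clutter, on a decibel scale. A minimal sketch of that computation, assuming intensity (power) values and a simple mean-clutter convention (names are assumptions, not taken from the paper):

```python
import numpy as np

def signal_to_clutter_db(peak_intensity, clutter_intensities):
    """Signal-to-Clutter Ratio in dB: the target's peak intensity over the
    mean clutter intensity estimated in a window around the reflector."""
    return 10.0 * np.log10(peak_intensity / np.mean(clutter_intensities))
```

A reflector whose peak is 1000 times brighter than the mean clutter therefore scores 30 dB, comfortably above the 26 dB reported for this installation.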
in Advances in space research > vol 70 n° 2 (15 July 2022) . - pp 360 - 370 [article]

Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks / Gensheng Hu in Geocarto international, vol 37 n° 12 ([01/07/2022])
[article]
Title: Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks
Document type: Article/Communication
Authors: Gensheng Hu; Yanqiu Zhu; et al.
Year of publication: 2022
Pagination: pp 3520 - 3539
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] UAV-acquired imagery
[IGN terms] Pinus (genus)
[IGN terms] forest health
Abstract: (author) This study presents a method that uses high-resolution remote sensing images collected by an unmanned aerial vehicle (UAV) and combines MobileNet and Faster R-CNN to detect diseased pine trees. MobileNet is used to remove backgrounds and thus reduce the interference of background information; Faster R-CNN is adopted to distinguish between diseased and healthy pine trees. Because the number of available UAV images is insufficient, the set of training samples is expanded. Experimental results show that the proposed method outperforms traditional machine learning approaches, such as support vector machines and AdaBoost, as well as deep convolutional neural network (DCNN) methods such as AlexNet, Inception, and Faster R-CNN alone. Through sample expansion and background removal, the proposed method achieves effective detection of diseased pine trees in UAV images using deep learning.
Record number: A2022-588
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE/INFORMATIQUE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1864025
Online publication date: 06/01/2021
Online: https://doi.org/10.1080/10106049.2020.1864025
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101362
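The two-stage design described above (a lightweight network filters out background, a detector runs only on what remains) can be sketched as a simple composition. Both models are stand-in callables here, since the trained MobileNet and Faster R-CNN weights are not part of this record:

```python
def detect_diseased_trees(patches, is_background, detect):
    """Two-stage pipeline: skip patches flagged as background by the first
    model, and apply the (more expensive) detector only to the rest."""
    return [detect(p) for p in patches if not is_background(p)]
```

The benefit of the composition is that the costly detector never sees patches the cheap filter has already discarded, which is the interference-reduction argument made in the abstract.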
in Geocarto international > vol 37 n° 12 [01/07/2022] . - pp 3520 - 3539 [article]

Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition / Tiantian Yan in Pattern recognition, vol 127 (July 2022)
[article]
Title: Discriminative information restoration and extraction for weakly supervised low-resolution fine-grained image recognition
Document type: Article/Communication
Authors: Tiantian Yan; Jian Shi; Haojie Li; et al.
Year of publication: 2022
Pagination: n° 108629
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] discriminant analysis
[IGN terms] minimum spanning tree
[IGN terms] convolutional neural network classification
[IGN terms] data extraction
[IGN terms] image granularity
[IGN terms] low-resolution imagery
[IGN terms] high-resolution imagery
[IGN terms] semantic relation
[IGN terms] image texture
Abstract: (author) Existing methods for fine-grained image recognition mainly focus on learning subtle yet discriminative features from high-resolution input. However, their performance deteriorates significantly on low-quality images, because many discriminative details are missing. We propose a discriminative information restoration and extraction network, termed DRE-Net, to address low-resolution fine-grained image recognition, which has widespread application potential in, for example, shelf auditing and surveillance scenarios. DRE-Net is the first framework for weakly supervised low-resolution fine-grained image recognition and consists of two sub-networks: (1) a fine-grained discriminative information restoration sub-network (FDR) and (2) a recognition sub-network with a semantic relation distillation loss (SRD-loss). The first module exploits the structural characteristics of the minimum spanning tree (MST) to establish context information for each pixel from the spatial structure between that pixel and all others, which helps the FDR focus on and restore critical texture details. The second module employs the SRD-loss to calibrate the recognition sub-network by transferring the correct relationships between every two pixels on the feature map. The SRD-loss also further prompts the FDR to recover reliable and accurate fine-grained details, and guides the recognition sub-network to perceive discriminative features from these correct relationships. Extensive experiments on three benchmark datasets and one retail product dataset demonstrate the effectiveness of the proposed framework.
Record number: A2022-555
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.1016/j.patcog.2022.108629
Online publication date: 06/03/2022
Online: https://doi.org/10.1016/j.patcog.2022.108629
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101168
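The FDR sub-network described above builds per-pixel context along a minimum spanning tree of the image. A generic sketch of constructing such an MST over a 4-connected pixel grid with Prim's algorithm, with edge weights taken as absolute intensity differences (tree construction only; the restoration network itself is not reproduced, and all names are assumptions):

```python
import heapq

def grid_mst(weights):
    """Minimum spanning tree over a 2-D pixel grid (4-connectivity) via
    Prim's algorithm; edge weight = |intensity difference| of neighbors.
    Returns MST edges as ((y1, x1), (y2, x2)) pairs."""
    h, w = len(weights), len(weights[0])
    visited = {(0, 0)}
    frontier = []

    def push(y, x):
        # Offer all grid edges from (y, x) into unvisited neighbors.
        for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in visited:
                cost = abs(weights[y][x] - weights[ny][nx])
                heapq.heappush(frontier, (cost, (y, x), (ny, nx)))

    push(0, 0)
    edges = []
    while frontier and len(visited) < h * w:
        cost, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        edges.append((u, v))
        push(*v)
    return edges
```

Walking such a tree lets every pixel accumulate context from all others along low-cost (similar-intensity) paths, which is the structural property the FDR module relies on.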
in Pattern recognition > vol 127 (July 2022) . - n° 108629 [article]

Estimating generalized measures of local neighbourhood context from multispectral satellite images using a convolutional neural network / Alex David Singleton in Computers, Environment and Urban Systems, vol 95 (July 2022)
Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
Fusing Sentinel-2 and Landsat 8 satellite images using a model-based method / Jakob Sigurdsson in Remote sensing, vol 14 n° 13 (July-1 2022)
Fusion of GNSS and InSAR time series using the improved STRE model: applications to the San Francisco bay area and Southern California / Huineng Yan in Journal of geodesy, vol 96 n° 7 (July 2022)
Heat wave-induced augmentation of surface urban heat islands strongly regulated by rural background / Shiqi Miao in Sustainable Cities and Society, vol 82 (July 2022)
Improving remote sensing classification: A deep-learning-assisted model / Tsimur Davydzenka in Computers & geosciences, vol 164 (July 2022)
Investigating the ability to identify new constructions in urban areas using images from unmanned aerial vehicles, Google Earth, and Sentinel-2 / Fahime Arabi Aliabad in Remote sensing, vol 14 n° 13 (July-1 2022)
Quantifying the influence of plot-level uncertainty in above ground biomass up scaling using remote sensing data in central Indian dry deciduous forest / Thangavelu Mayamanikandan in Geocarto international, vol 37 n° 12 ([01/07/2022])
A second-order attention network for glacial lake segmentation from remotely sensed imagery / Shidong Wang in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery / Qian Shen in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Street-view imagery guided street furniture inventory from mobile laser scanning point clouds / Yuzhou Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 189 (July 2022)
Synergistic use of the SRAL/MWR and SLSTR sensors on board Sentinel-3 for the wet tropospheric correction retrieval / Pedro Aguiar in Remote sensing, vol 14 n° 13 (July-1 2022)
A dual-generator translation network fusing texture and structure features for SAR and optical image matching / Han Nie in Remote sensing, vol 14 n° 12 (June-2 2022)
Estimating feature extraction changes of Berkelah Forest, Malaysia from multisensor remote sensing data using an object-based technique / Syaza Rozali in Geocarto international, vol 37 n° 11 ([15/06/2022])
How large-scale bark beetle infestations influence the protective effects of forest stands against avalanches: A case study in the Swiss Alps / Marion E. Caduff in Forest ecology and management, vol 514 (June-15 2022)
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
Artificial intelligence techniques in extracting building and tree footprints using aerial imagery and LiDAR data / Saeideh Sahebi Vayghan in Geocarto international, vol 37 n° 10 ([01/06/2022])
Combination of Sentinel-1 and Sentinel-2 data for tree species classification in a Central European biosphere reserve / Michael Lechner in Remote sensing, vol 14 n° 11 (June-1 2022)
DART-Lux: An unbiased and rapid Monte Carlo radiative transfer method for simulating remote sensing images / Yingjie Wang in Remote sensing of environment, vol 274 (June 2022)
Extracting the urban landscape features of the historic district from street view images based on deep learning: A case study in the Beijing Core area / Siming Yin in ISPRS International journal of geo-information, vol 11 n° 6 (June 2022)