Descriptor
IGN terms > imagery > digital image
Digital image. Synonym(s): raster-mode image
Documents available in this category (2138)
Fusion of optical, radar and waveform LiDAR observations for land cover classification / Huiran Jin in ISPRS Journal of photogrammetry and remote sensing, vol 187 (May 2022)
[article]
Title: Fusion of optical, radar and waveform LiDAR observations for land cover classification
Document type: Article/Communication
Authors: Huiran Jin; Giorgos Mountrakis
Year of publication: 2022
Pages: pp 171 - 190
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] comparative analysis
[IGN terms] vegetation map
[IGN terms] random forest classification
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] image fusion
[IGN terms] ALOS-PALSAR image
[IGN terms] Landsat-TM image
[IGN terms] multitemporal image
[IGN terms] land cover
Abstract: (Author) Land cover is an integral component for characterizing anthropogenic activity and promoting sustainable land use. Mapping distribution and coverage of land cover at broad spatiotemporal scales largely relies on classification of remotely sensed data. Although multi-source data fusion has recently been playing an increasingly active role in land cover classification, our intensive review of current studies shows that the integration of optical, synthetic aperture radar (SAR) and light detection and ranging (LiDAR) observations has not been thoroughly evaluated. In this research, we bridged this gap by i) summarizing related fusion studies and assessing their reported accuracy improvements, and ii) conducting our own case study in which, for the first time, fusion of optical, radar and waveform LiDAR observations and the associated improvements in classification accuracy are assessed using data collected by spaceborne platforms, or appropriately simulated ones in the LiDAR case. Multitemporal Landsat-5/Thematic Mapper (TM) and Advanced Land Observing Satellite-1/Phased Array type L-band SAR (ALOS-1/PALSAR) imagery acquired in the Central New York (CNY) region close to the collection of airborne waveform LVIS (Land, Vegetation, and Ice Sensor) data was examined. Classification was conducted using a random forest algorithm with different feature sets, in terms of sensor and seasonality, as input variables. Results indicate that the combined spectral, scattering and vertical structural information provided the maximum discriminative capability among different land cover types, giving rise to the highest overall accuracy of 83% (2–19% and 9–35% superior to the two-sensor and single-sensor scenarios with overall accuracies of 64–81% and 48–74%, respectively). Greater improvement was achieved when combining multitemporal Landsat images with LVIS-derived canopy height metrics as opposed to PALSAR features, suggesting that LVIS contributed more useful thematic information complementary to spectral data and beneficial to the classification task, especially for vegetation classes. With the Global Ecosystem Dynamics Investigation (GEDI), a recently launched LiDAR instrument with properties similar to the LVIS sensor, now operating onboard the International Space Station (ISS), it is our hope that this research will act as a literature summary and offer guidelines for further applications of multi-date and multi-type remotely sensed data fusion for improved land cover classification.
Record number: A2022-228
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.03.010
Online publication date: 17/03/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.03.010
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100214
Copies (3)
Barcode      Call number  Type     Location                 Section        Availability
081-2022051  SL           Journal  Centre de documentation  Journals room  Available
081-2022053  DEP-RECP     Journal  LASTIG                   Unit deposit   Not for loan
081-2022052  DEP-RECF     Journal  Nancy                    Unit deposit   Not for loan
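In its simplest form, the feature-level fusion summarized in the abstract above reduces to stacking co-registered per-pixel observations from each sensor and training a single random forest on the result. The sketch below is a minimal, hypothetical illustration with scikit-learn on synthetic arrays; the band counts, height metrics and five-class legend are assumptions, not the paper's actual configuration.

```python
# Minimal sketch of multi-sensor feature-level fusion for land cover
# classification with a random forest (scikit-learn). All arrays are
# synthetic stand-ins for co-registered Landsat/PALSAR/LVIS features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels = 5000

landsat = rng.random((n_pixels, 6))    # e.g., TM reflectance bands
palsar = rng.random((n_pixels, 2))     # e.g., HH and HV backscatter (scaled)
lvis = rng.random((n_pixels, 3))       # e.g., canopy height metrics
labels = rng.integers(0, 5, n_pixels)  # 5 hypothetical land cover classes

# Feature-level fusion: concatenate the per-pixel observations.
X = np.hstack([landsat, palsar, lvis])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("Overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```

Dropping one of the three blocks from the `hstack` call reproduces the two-sensor and single-sensor scenarios that the paper compares against.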
Smartphone digital photography for fractional vegetation cover estimation / Gaofei Yin in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 5 (May 2022)
[article]
Title: Smartphone digital photography for fractional vegetation cover estimation
Document type: Article/Communication
Authors: Gaofei Yin; Yonghua Qu; Aleixandre Verger; et al.
Year of publication: 2022
Pages: pp 303 - 310
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Image and data acquisition
[IGN terms] comparative analysis
[IGN terms] field of view
[IGN terms] vegetation cover
[IGN terms] mean square error
[IGN terms] alpine forest
[IGN terms] high-resolution image
[IGN terms] hemispherical image
[IGN terms] wide-angle lens
[IGN terms] smartphone
Abstract: (Author) Accurate ground measurements of fractional vegetation cover (FVC) are key for characterizing ecosystem functions and evaluating remote sensing products. The increasing performance of the cameras built into smartphones opens new opportunities for extensive FVC measurement through citizen science initiatives. However, the wide field of view (FOV) of smartphone cameras constitutes a key source of uncertainty in the estimation of vegetation parameters, one which has been largely ignored. We designed a practical method to characterize the FOV of smartphones and improve the FVC estimation. The method was assessed in a mountainous forest based on the comparison with in situ fisheye photographs. After the FOV correction, the agreement between smartphone and fisheye FVC estimates improved markedly: a root-mean-square error (RMSE) of 0.103 compared with 0.242 for the original smartphone FVC estimates that ignore the FOV effect, a mean difference of 0.074 versus 0.213, and a coefficient of determination (R²) of 0.719 versus 0.353. Smartphone cameras outperform traditional fisheye cameras: the overexposure and low vertical resolution of fisheye photographs introduced uncertainties in FVC estimation, while the insensitivity to exposure and high spatial resolution of smartphone cameras make photograph acquisition and analysis more automatic and accurate. The smartphone FVC estimates agree closely with the GF-1 satellite product: RMSE = 0.066, bias = 0.007, and R² = 0.745. This study opens new perspectives for the validation of satellite products.
Record number: A2022-527
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.21-00038R2
Online publication date: 01/05/2022
Online: https://doi.org/10.14358/PERS.21-00038R2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101375
Copies (2)
Barcode      Call number  Type     Location                 Section        Availability
105-2022052  SL           Journal  Centre de documentation  Journals room  Available
105-2022051  SL           Journal  Centre de documentation  Journals room  Available
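The core of photographic FVC estimation as summarized above is simple: classify vegetation pixels in the image and take their fraction, while limiting the analysis to near-nadir pixels to reduce view-angle effects. The sketch below is a loose stand-in for the paper's method, assuming an excess-green classification and a plain central crop in place of the authors' FOV characterization; the threshold and crop factor are illustrative assumptions.

```python
# Minimal sketch of fractional vegetation cover (FVC) estimation from a
# downward-looking photograph: classify vegetation pixels with an
# excess-green index, then take their fraction. The central crop is a
# crude stand-in for restricting the analysis to near-nadir pixels.
import numpy as np

def fvc_from_rgb(rgb, exg_threshold=0.05, keep_fraction=0.6):
    """rgb: float array (H, W, 3) in [0, 1]; returns estimated FVC."""
    h, w, _ = rgb.shape
    # Keep only the central part of the frame to reduce view-angle effects.
    dh = int(h * (1 - keep_fraction) / 2)
    dw = int(w * (1 - keep_fraction) / 2)
    crop = rgb[dh:h - dh, dw:w - dw]
    r, g, b = crop[..., 0], crop[..., 1], crop[..., 2]
    exg = 2 * g - r - b  # excess-green index
    return float((exg > exg_threshold).mean())

# Synthetic example: half green 'vegetation', half grey soil -> FVC ~ 0.5.
img = np.full((100, 100, 3), 0.4)
img[:, :50] = [0.2, 0.6, 0.2]
print("FVC:", fvc_from_rgb(img))
```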
Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation / Seyyed Ali Ahmadi in Geocarto international, vol 37 n° 7 (15/04/2022)
[article]
Title: Spectral-spatial classification method for hyperspectral images using stacked sparse autoencoder suitable in limited labelled samples situation
Document type: Article/Communication
Authors: Seyyed Ali Ahmadi; Nasser Mehrshad; Seyyed Mohammadali Arghavan
Year of publication: 2022
Pages: pp 2031 - 2054
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] sensitivity analysis
[IGN terms] deep learning
[IGN terms] labelled training data
[IGN terms] data sampling
[IGN terms] Gabor filter
[IGN terms] hyperspectral image
Abstract: (Author) Recently, deep learning (DL)-based methods have attracted increasing attention for the classification of hyperspectral images (HSIs). However, the complex structure and the limited number of labelled training samples of HSIs negatively affect the performance of DL models. In this paper, a spectral-spatial classification method is proposed that combines local and global spatial information, including extended multi-attribute profiles and multiscale Gabor features, with a stacked sparse autoencoder (GEAE). GEAE stacks the spatial and spectral information to form fused features. GEAE also expands the training set by generating virtual samples as weighted averages of the available samples, so that the many parameters of the DL network can be learned optimally even when few labelled samples are available. In addition, the similarity between samples is determined with distance metric learning to overcome the shortcomings of Euclidean distance-based similarity metrics. Experimental results on three HSI datasets demonstrate the effectiveness of GEAE in comparison with existing classification methods.
Record number: A2022-498
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/10106049.2020.1797188
Online publication date: 10/08/2020
Online: https://doi.org/10.1080/10106049.2020.1797188
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100990
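The building block of a stacked sparse autoencoder such as the one in the notice above is a single autoencoder whose hidden code is pushed toward sparsity during training. Below is a minimal PyTorch sketch assuming synthetic spectra, an L1 activation penalty and illustrative layer sizes; the actual GEAE model additionally fuses Gabor and attribute-profile features and generates virtual training samples, which this sketch does not attempt.

```python
# Minimal sketch of one layer of a sparse autoencoder (PyTorch): the
# reconstruction loss is augmented with an L1 penalty on the hidden
# activations to encourage sparse codes. Dimensions, the sparsity weight
# and the random input spectra are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_bands=103, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_bands)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

x = torch.rand(256, 103)  # synthetic stand-in for HSI pixel spectra
model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(50):
    opt.zero_grad()
    recon, code = model(x)
    # Reconstruction error plus L1 sparsity penalty on the hidden code.
    loss = mse(recon, x) + 1e-3 * code.abs().mean()
    loss.backward()
    opt.step()

# 'Stacking' trains a second autoencoder on these codes, and so on; a
# classifier (e.g., a softmax layer) is then fit on the deepest codes.
features = model.encoder(x).detach()
print(features.shape)  # torch.Size([256, 64])
```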
Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data / Michele Dalponte in Remote sensing, vol 14 n° 8 (April-2 2022)
[article]
Title: Wood decay detection in Norway spruce forests based on airborne hyperspectral and ALS data
Document type: Article/Communication
Authors: Michele Dalponte; Alvar J. I. Kallio; Hans Ole Ørka; et al.
Year of publication: 2022
Pages: n° 1892
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] standing timber
[IGN terms] convolutional neural network classification
[IGN terms] dieback
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] hyperspectral image
[IGN terms] infrared image
[IGN terms] Norway
[IGN terms] multilayer perceptron
[IGN terms] Picea abies
[IGN terms] linear regression
[IGN terms] logistic regression
[IGN terms] forest health
[IGN terms] point cloud
Abstract: (Author) Wood decay caused by pathogenic fungi in Norway spruce forests causes severe economic losses in the forestry sector, and currently no efficient methods exist to detect infected trees. The detection of wood decay could potentially lead to improvements in forest management and could help in reducing economic losses. In this study, airborne hyperspectral data were used to detect the presence of wood decay in the trees of two forest areas located in the Etnedal (dataset I) and Gran (dataset II) municipalities in southern Norway. The hyperspectral data consisted of images acquired by two sensors operating in the VNIR and SWIR parts of the spectrum. Corresponding ground reference data were collected in Etnedal using a cut-to-length harvester, while in Gran, field measurements were collected manually. Airborne laser scanning (ALS) data were used to detect the individual tree crowns (ITCs) in both sites. Different approaches to dealing with the pixels inside each ITC were considered: pixels were either aggregated to a unique value per ITC (mean, weighted mean, median, or centermost pixel) or analyzed in an unaggregated way. Multiple classification methods were explored to predict rot presence: logistic regression, feed-forward neural networks, and convolutional neural networks. The results showed that wood decay could be detected, albeit with accuracy varying between the two datasets. The best result on the Etnedal dataset was obtained using a convolutional neural network with the first five components of a principal component analysis as input (OA = 65.5%), while on the Gran dataset, the best result was obtained using LASSO with logistic regression and data aggregated using the weighted mean (OA = 61.4%). In general, the differences between aggregated and unaggregated data were small.
Record number: A2022-352
Authors' affiliation: non IGN
Theme: FORET/IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.3390/rs14081892
Online publication date: 14/04/2022
Online: https://doi.org/10.3390/rs14081892
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100541
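The crown-level aggregation plus LASSO logistic regression route that performed best on the Gran dataset can be sketched in a few lines: reduce the pixels of each individual tree crown (ITC) to one spectrum, then fit an L1-penalized logistic regression on the per-crown features. The sketch below uses synthetic data; the crown membership, band count and penalty strength are illustrative assumptions, not the study's values.

```python
# Minimal sketch of ITC-level pixel aggregation followed by LASSO
# logistic regression (scikit-learn) for per-crown rot prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_bands, n_crowns = 2000, 30, 200
pixels = rng.random((n_pixels, n_bands))  # hyperspectral pixel spectra
crown_id = np.repeat(np.arange(n_crowns), n_pixels // n_crowns)  # pixel -> ITC
rot = rng.integers(0, 2, n_crowns)        # per-crown ground reference (0/1)

# Aggregate: mean spectrum per crown (weighted mean, median and the
# centermost pixel are the variants compared in the paper).
X = np.vstack([pixels[crown_id == c].mean(axis=0) for c in range(n_crowns)])

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, rot)
print("Training OA:", accuracy_score(rot, clf.predict(X)))
print("Bands kept by the LASSO:", int((clf.coef_ != 0).sum()))
```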
Deep generative model for spatial–spectral unmixing with multiple endmember priors / Shuaikai Shi in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
[article]
Title: Deep generative model for spatial–spectral unmixing with multiple endmember priors
Document type: Article/Communication
Authors: Shuaikai Shi; Lijun Zhang; Yoann Altmann; et al.
Year of publication: 2022
Pages: n° 5527214
General note: Bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] multiple endmember spectral mixture analysis
[IGN terms] linear spectral mixture analysis
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] convolutional neural network classification
[IGN terms] graph neural network
Abstract: (Author) Spectral unmixing is an effective tool to mine information at the subpixel level from complex hyperspectral images. To account for the spatially correlated distribution of materials in the scene, many algorithms unmix the data in a spatial–spectral fashion; however, existing models are usually unable to model spectral variability at the same time. In this article, we present a variational autoencoder-based deep generative model for spatial–spectral unmixing (DGMSSU) with endmember variability, linking the generated endmembers to the probability distributions of endmember bundles extracted from the hyperspectral imagery via discriminators. Besides the convolutional autoencoder-like architecture, which can only model the spatial information within regular patch inputs, DGMSSU can alternatively employ graph convolutional networks or self-attention modules to handle irregular but more flexible superpixel inputs. Experimental results on a simulated dataset, as well as on two well-known real hyperspectral images, show the superiority of the proposed approach over other state-of-the-art spatial–spectral unmixing methods. Compared with conventional unmixing methods that consider endmember variability, the proposed model generates more accurate endmembers on each subimage through the adversarial training process. The code of this work will be available at https://github.com/shuaikaishi/DGMSSU for the sake of reproducibility.
Record number: A2022-380
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2022.3168712
Online publication date: 18/04/2022
Online: https://doi.org/10.1109/TGRS.2022.3168712
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100645
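Underlying the unmixing methods compared in this notice is the linear mixing model: a pixel spectrum y is modelled as y = E a, with nonnegative abundances a summing to one. The sketch below solves this fully constrained problem by the standard sum-to-one augmentation of a nonnegative least squares fit; the endmember matrix is a synthetic stand-in, and DGMSSU itself learns endmember distributions with a deep generative model rather than fixing E as done here.

```python
# Minimal sketch of fully constrained linear spectral unmixing: solve
# min ||E a - y|| with a >= 0 and sum(a) ~ 1, enforcing the sum-to-one
# constraint via the usual row-of-ones augmentation before NNLS.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3
E = rng.random((n_bands, n_endmembers))  # endmember spectra as columns

a_true = np.array([0.6, 0.3, 0.1])       # ground-truth abundances
y = E @ a_true + 0.001 * rng.standard_normal(n_bands)

delta = 10.0  # weight of the sum-to-one constraint in the augmented system
E_aug = np.vstack([E, delta * np.ones((1, n_endmembers))])
y_aug = np.append(y, delta)
a_hat, _ = nnls(E_aug, y_aug)

print("true:", a_true, "estimated:", np.round(a_hat, 3))
```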
More documents in this category:
Direct photogrammetry with multispectral imagery for UAV-based snow depth estimation / Kathrin Maier in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Exploring the association between street built environment and street vitality using deep learning methods / Yunqin Li in Sustainable Cities and Society, vol 79 (April 2022)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Graph learning based on signal smoothness representation for homogeneous and heterogeneous change detection / David Alejandro Jimenez-Sierra in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Meta-learning based hyperspectral target detection using siamese network / Yulei Wang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 4 (April 2022)
Procedural urban forestry / Till Niese in ACM Transactions on Graphics, TOG, vol 41 n° 2 (April 2022)
Aboveground biomass of salt-marsh vegetation in coastal wetlands: Sample expansion of in situ hyperspectral and Sentinel-2 data using a generative adversarial network / Chen Chen in Remote sensing of environment, vol 270 (March 2022)
Deep-learning-based multispectral image reconstruction from single natural color RGB image - Enhancing UAV-based phenotyping / Jiangsan Zhao in Remote sensing, vol 14 n° 5 (March-1 2022)
Evaluating the 3D integrity of underwater structure from motion workflows / Ian M. Lochhead in Photogrammetric record, vol 37 n° 177 (March 2022)
LiDAR-based method for analysing landmark visibility to pedestrians in cities: case study in Kraków, Poland / Krystian Pyka in International journal of geographical information science IJGIS, vol 36 n° 3 (March 2022)