Descripteur
Termes IGN > imagerie > image numérique > image optique > image multibande
image multibande
Synonyme(s) : image xs ; image multispectrale ; données multispectrales
Documents disponibles dans cette catégorie (1020)
Accuracy analysis of UAV photogrammetry using RGB and multispectral sensors / Nikola Santrač in Geodetski vestnik, vol 67 n° 4 (December 2023)
[article]
Titre : Accuracy analysis of UAV photogrammetry using RGB and multispectral sensors
Type de document : Article/Communication
Auteurs : Nikola Santrač, Auteur ; Pavel Benka, Auteur ; Mehmed Batilović, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : pp 459 - 472
Note générale : bibliographie
Langues : Anglais (eng) ; Slovène (slv)
Descripteur : [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] image captée par drone
[Termes IGN] image multibande
[Termes IGN] image RVB
[Termes IGN] modèle géométrique de prise de vue
[Termes IGN] point d'appui
[Termes IGN] positionnement cinématique en temps réel
[Termes IGN] qualité des données
Résumé : (auteur) In recent years, unmanned aerial vehicles (UAVs) have become increasingly important as a tool for quickly collecting high-resolution (spatial and spectral) imagery of the Earth's surface. The final products are highly dependent on the chosen flight-planning parameters, the type of sensors, and the processing of the data. In this paper, ground control points (GCPs) were first measured with the Global Navigation Satellite System (GNSS) Real-Time Kinematic (RTK) method; then, because of the low height accuracy of GNSS RTK, all points were also measured using a detailed leveling method. This study aims to provide a basic quality assessment covering four main aspects: (1) the effect of an RGB sensor versus a five-band multispectral sensor on accuracy and data volume, (2) the impact of the number of GCPs on the accuracy of the final products, (3) the impact of different flight altitudes and cross flight strips, and (4) the accuracy of multi-altitude models. The results suggest that the type of sensor, the flight configuration, and the GCP setup strongly affect the quality and quantity of the final products, while creating a multi-altitude model does not yield the expected data quality. With its unique combination of sensors and parameters, the results and recommendations presented in this paper can assist professionals and researchers in their future work.
Numéro de notice : A2023-241
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.15292/geodetski-vestnik.2023.04.459-472
Date de publication en ligne : 01/12/2023
En ligne : https://dx.doi.org/10.15292/geodetski-vestnik.2023.04.459-472
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103604
in Geodetski vestnik > vol 67 n° 4 (December 2023) . - pp 459 - 472
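For readers who want to reproduce the kind of accuracy check discussed in this record, the following is a minimal Python sketch, assuming NumPy, of checkpoint-based RMSE assessment against independently surveyed (e.g., levelled) coordinates. The function name and coordinate values are illustrative assumptions, not data or code from the paper.

    # Sketch: checkpoint-based accuracy assessment often used to compare UAV
    # photogrammetric products (e.g., RGB vs. multispectral blocks).
    import numpy as np

    def rmse_xyz(estimated, reference):
        """Per-axis RMSE plus horizontal and 3D RMSE between estimated and
        reference checkpoint coordinates, both (n, 3) arrays in metres."""
        diff = np.asarray(estimated, float) - np.asarray(reference, float)
        rmse = np.sqrt((diff ** 2).mean(axis=0))      # RMSE_x, RMSE_y, RMSE_z
        rmse_h = float(np.hypot(rmse[0], rmse[1]))    # horizontal RMSE
        rmse_3d = float(np.sqrt((rmse ** 2).sum()))   # 3D RMSE
        return rmse, rmse_h, rmse_3d

    # Hypothetical example: 3 checkpoints surveyed by levelling (reference)
    # and read from the photogrammetric model (estimated).
    ref = [[0.000, 0.000, 85.120], [25.310, 10.044, 85.410], [50.122, 19.987, 85.006]]
    est = [[0.012, -0.008, 85.102], [25.301, 10.060, 85.441], [50.140, 19.975, 84.988]]
    print(rmse_xyz(est, ref))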
Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times / Peng Wang in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023)
[article]
Titre : Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times
Type de document : Article/Communication
Auteurs : Peng Wang, Auteur ; Hongyu Yao, Auteur ; Bo Huang, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 5401217
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse multirésolution
[Termes IGN] données multitemporelles
[Termes IGN] image multibande
[Termes IGN] image panchromatique
[Termes IGN] pansharpening (fusion d'images)
[Termes IGN] pouvoir de résolution géométrique
Résumé : (auteur) Most pansharpening methods refer to the fusion of an original low-resolution multispectral (MS) image and a high-resolution panchromatic (PAN) image acquired simultaneously over the same area. Owing to its good robustness, multiresolution analysis (MRA) has become one of the important categories of pansharpening methods. However, when only MS and PAN images acquired at different times are available, the fusion results of current MRA methods are often not ideal, because the multitemporal misalignments between the MS and PAN images are not analyzed effectively. To solve this issue, MRA pansharpening based on a variation factor for MS and PAN images from different times is proposed. An MRA pansharpening model based on dual-scale regression is first established, and the variation factor is then introduced to analyze the multitemporal misalignments effectively by using the alternating direction method of multipliers (ADMM), yielding the final fusion results. Experiments with synthetic and real datasets show that the proposed method exhibits significant performance improvement compared to traditional pansharpening methods as well as state-of-the-art MRA methods. Visual comparisons demonstrate that the variation factor brings encouraging improvements in compensating multitemporal misalignments of ground objects and advances pansharpening applications for MS and PAN images acquired at different times.
Numéro de notice : A2023-184
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2023.3252001
En ligne : https://doi.org/10.1109/TGRS.2023.3252001
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102956
in IEEE Transactions on geoscience and remote sensing > vol 61 n° 3 (March 2023) . - n° 5401217
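As background to this record, the sketch below illustrates the generic MRA pansharpening scheme the abstract builds on: additive injection of PAN spatial details into resampled MS bands, MS_hr = MS_up + g * (PAN - PAN_low). It assumes NumPy and SciPy, uses a Gaussian low-pass filter and a covariance-based injection gain as illustrative choices, and is not the variation-factor method proposed in the paper.

    # Generic MRA pansharpening baseline with additive detail injection.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mra_pansharpen(ms_up, pan, sigma=2.0):
        """ms_up: (bands, H, W) MS image already resampled to the PAN grid.
        pan: (H, W) panchromatic image. Returns a fused (bands, H, W) image."""
        pan = pan.astype(float)
        pan_low = gaussian_filter(pan, sigma)      # PAN approximation at MS resolution
        details = pan - pan_low                    # high-frequency spatial details
        fused = np.empty_like(ms_up, dtype=float)
        for b, band in enumerate(ms_up.astype(float)):
            # band-wise injection gain (covariance-based, as in many MRA schemes)
            g = np.cov(band.ravel(), pan_low.ravel())[0, 1] / (pan_low.var() + 1e-12)
            fused[b] = band + g * details
        return fused

    # Hypothetical usage with random data standing in for co-registered imagery.
    ms = np.random.rand(4, 128, 128)
    pan = np.random.rand(128, 128)
    print(mra_pansharpen(ms, pan).shape)           # (4, 128, 128)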
A unified attention paradigm for hyperspectral image classification / Qian Liu in IEEE Transactions on geoscience and remote sensing, vol 61 n° 3 (March 2023)
[article]
Titre : A unified attention paradigm for hyperspectral image classification
Type de document : Article/Communication
Auteurs : Qian Liu, Auteur ; Zebin Wu, Auteur ; Yang Xu, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 5506316
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image hyperspectrale
[Termes IGN] précision de la classification
[Termes IGN] séparateur à vaste marge
Résumé : (auteur) Attention mechanisms improve classification accuracy by enhancing the salient information in hyperspectral images (HSIs). However, existing HSI attention models are driven by advances in computer vision and are not able to fully exploit the spectral-spatial structure prior of HSIs or to refine features effectively from a global perspective. In this article, we propose a unified attention paradigm (UAP) that defines the attention mechanism as a general three-stage process: optimizing feature representations, strengthening information interaction, and emphasizing meaningful information. Within this paradigm, we design a novel efficient spectral-spatial attention module (ESSAM) that adaptively adjusts feature responses along the spectral and spatial dimensions at an extremely low parameter cost. Specifically, we construct a parameter-free spectral attention block that employs multiscale structured encodings and similarity calculations to perform global cross-channel interactions, and a memory-enhanced spatial attention block that captures key image semantics stored in a learnable memory unit and models global spatial relationships by constructing semantic-to-pixel dependencies. ESSAM takes full account of the spatial distribution and low-dimensional characteristics of HSIs, with better interpretability and lower complexity. We develop a dense convolutional network based on the efficient spectral-spatial attention network (ESSAN) and experiment on three real hyperspectral datasets. The experimental results demonstrate that the proposed ESSAM brings higher accuracy improvements than advanced attention models.
Numéro de notice : A2023-185
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1109/TGRS.2023.3257321
Date de publication en ligne : 15/12/2023
En ligne : https://doi.org/10.1109/TGRS.2023.3257321
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102957
in IEEE Transactions on geoscience and remote sensing > vol 61 n° 3 (March 2023) . - n° 5506316
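As background to this record, the sketch below shows the basic idea of spectral (channel) attention on a hyperspectral cube: each band is reweighted from a global per-band descriptor. It is a generic, parameter-free illustration assuming NumPy, not the ESSAM module described in the paper.

    # Generic spectral attention: reweight bands from a global descriptor.
    import numpy as np

    def spectral_attention(cube, temperature=1.0):
        """cube: (bands, H, W) hyperspectral patch. Returns the reweighted
        cube and the per-band attention weights (softmax over bands)."""
        bands = cube.reshape(cube.shape[0], -1).astype(float)
        desc = bands.mean(axis=1)                          # global average per band ("squeeze")
        desc = (desc - desc.mean()) / (desc.std() + 1e-12) # standardize the descriptor
        weights = np.exp(desc / temperature)
        weights /= weights.sum()                           # softmax over the spectral dimension
        # multiply by the band count so the average band weight stays 1
        return cube * weights[:, None, None] * cube.shape[0], weights

    cube = np.random.rand(30, 64, 64)                      # hypothetical 30-band patch
    reweighted, w = spectral_attention(cube)
    print(reweighted.shape, round(float(w.sum()), 3))      # (30, 64, 64) 1.0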
Large-scale burn severity mapping in multispectral imagery using deep semantic segmentation models / Xikun Hu in ISPRS Journal of photogrammetry and remote sensing, vol 196 (February 2023)
[article]
Titre : Large-scale burn severity mapping in multispectral imagery using deep semantic segmentation models
Type de document : Article/Communication
Auteurs : Xikun Hu, Auteur ; Puzhao Zhang, Auteur ; Yifang Ban, Auteur
Année de publication : 2023
Article en page(s) : pp 228 - 240
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] carte thématique
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] dommage
[Termes IGN] image Landsat-ETM+
[Termes IGN] image Landsat-OLI
[Termes IGN] image Landsat-TM
[Termes IGN] image multibande
[Termes IGN] image Sentinel-MSI
[Termes IGN] incendie de forêt
[Termes IGN] jeu de données localisées
[Termes IGN] segmentation sémantique
[Termes IGN] surveillance forestière
[Termes IGN] zone sinistrée
Résumé : (auteur) Earth observation satellites now provide forest fire authorities and resource managers with comprehensive spatial information for fire stabilization and recovery. Burn severity mapping is typically performed by classifying bi-temporal indices (e.g., dNBR and RdNBR) using thresholds derived from parametric models incorporating field-based measurements. Analysts currently expend considerable manual effort, using prior knowledge and visual inspection, to determine burn severity thresholds. In this study, we aim to employ highly automated approaches to provide spatially explicit damage level estimates. We first reorganize a large-scale Landsat-based bi-temporal burn severity assessment dataset (Landsat-BSA) by visual data cleaning based on annotated MTBS data (approximately 1000 major fire events in the United States). We then apply state-of-the-art deep learning (DL) methods to map burn severity from the Landsat-BSA dataset. Experimental results emphasize that multi-class semantic segmentation algorithms can approximate the threshold-based techniques used extensively for burn severity classification. UNet-like models outperform other region-based CNN and Transformer-based models and achieve accurate pixel-wise classification results. Combined with the online hard example mining algorithm to reduce the class imbalance issue, Attention UNet achieves the highest mIoU (0.78) and the highest Kappa coefficient, close to 0.90. Bi-temporal inputs with ancillary spectral indices work much better than uni-temporal multispectral inputs. The restructured dataset will be publicly available and creates opportunities for further advances in the remote sensing and wildfire communities.
Numéro de notice : A2023-122
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.isprsjprs.2022.12.026
Date de publication en ligne : 11/01/2023
En ligne : https://doi.org/10.1016/j.isprsjprs.2022.12.026
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102498
in ISPRS Journal of photogrammetry and remote sensing > vol 196 (February 2023) . - pp 228 - 240
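The bi-temporal index workflow mentioned in this abstract can be made concrete with a short Python sketch, assuming NumPy: compute the Normalized Burn Ratio NBR = (NIR - SWIR2) / (NIR + SWIR2) before and after the fire, difference them into dNBR, and bin dNBR into severity classes. The class breaks used below are commonly cited example values only; in practice they are tuned per scene, which is the manual step the paper's deep models aim to replace.

    # Sketch: dNBR computation and threshold-based burn severity classes.
    import numpy as np

    def nbr(nir, swir2):
        return (nir - swir2) / (nir + swir2 + 1e-12)

    def burn_severity(nir_pre, swir2_pre, nir_post, swir2_post):
        """Returns dNBR and a severity map: 0 unburned, 1 low, 2 moderate-low,
        3 moderate-high, 4 high."""
        dnbr = nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)
        thresholds = [0.10, 0.27, 0.44, 0.66]      # illustrative class breaks
        severity = np.digitize(dnbr, thresholds)
        return dnbr, severity

    # Hypothetical reflectance patches standing in for Landsat NIR/SWIR2 bands.
    shape = (100, 100)
    pre_nir, pre_swir = np.full(shape, 0.45), np.full(shape, 0.15)
    post_nir, post_swir = np.full(shape, 0.20), np.full(shape, 0.30)
    dnbr, sev = burn_severity(pre_nir, pre_swir, post_nir, post_swir)
    print(float(dnbr.mean()), np.bincount(sev.ravel()))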
Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image / Taposh Mollick in Remote Sensing Applications: Society and Environment, RSASE, vol 29 (January 2023)
[article]
Titre : Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image
Type de document : Article/Communication
Auteurs : Taposh Mollick, Auteur ; MD Golam Azam, Auteur ; Sabrina Karim, Auteur
Année de publication : 2023
Article en page(s) : n° 100859
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse comparative
[Termes IGN] analyse d'image orientée objet
[Termes IGN] apprentissage automatique
[Termes IGN] Bangladesh
[Termes IGN] classification non dirigée
[Termes IGN] classification par maximum de vraisemblance
[Termes IGN] classification par nuées dynamiques
[Termes IGN] classification pixellaire
[Termes IGN] image captée par drone
[Termes IGN] image multibande
[Termes IGN] occupation du sol
[Termes IGN] rendement agricole
[Termes IGN] segmentation d'image
[Termes IGN] utilisation du sol
Résumé : (auteur) Bangladesh is primarily an agricultural country, where technological advancement in the agricultural sector can accelerate economic growth and ensure long-term food security. This research was conducted in the south-western coastal zone of Bangladesh, where rice is the main crop and other crops are also grown. Land use and land cover (LULC) classification using remote sensing techniques, such as satellite or unmanned aerial vehicle (UAV) images, can forecast crop yield and can also provide information on weeds, nutrient deficiencies, diseases, etc. to monitor and treat the crops. Depending on the reflectance received by the sensor, a remotely sensed image stores a digital number (DN) for each pixel. Traditionally, these pixel values have been used to separate clusters and classify various objects. However, this frequently generates discontinuities within a given land cover, producing small spurious objects that degrade the classification output; this is known as the salt-and-pepper effect. To classify land cover based on texture, shape, and neighbourhood, Pixel-Based Image Analysis (PBIA) and Object-Based Image Analysis (OBIA) methods use classification algorithms such as Maximum Likelihood (ML), K-Nearest Neighbors (KNN), and k-means clustering to smooth this discontinuity. The authors evaluated the accuracy of both the PBIA and OBIA approaches by classifying the land cover of an agricultural field, taking into account the development of UAV technology and enhanced image resolution. For classifying multispectral UAV images, we used the KNN machine learning algorithm for object-based supervised classification and Maximum Likelihood (ML) classification (parametric) for pixel-based supervised classification, while for unsupervised pixel-based classification we used the k-means clustering technique. The Near-infrared (NIR), Red (R), Green (G), and Blue (B) bands of a high-resolution UAV image with a ground sampling distance (GSD) of 0.0125 m were used for the image analysis. The study found that OBIA was 21% more accurate than PBIA, reaching 94.9% overall accuracy; in terms of Kappa statistics, OBIA was 27% more accurate than PBIA, with a Kappa of 93.4%. This indicates that OBIA provides better classification performance than PBIA for high-resolution UAV images. The study therefore suggests OBIA for more accurate identification of crop types and land cover, which will make crop management, agricultural monitoring, and crop yield forecasting more effective.
Numéro de notice : A2023-021
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1016/j.rsase.2022.100859
Date de publication en ligne : 22/11/2022
En ligne : https://doi.org/10.1016/j.rsase.2022.100859
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102224
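The overall accuracy and Kappa figures quoted in this abstract are derived from a confusion matrix of classified versus reference samples. The sketch below, assuming NumPy, shows that computation on a hypothetical 3-class matrix; it does not reproduce the study's data.

    # Sketch: overall accuracy and Cohen's kappa from a confusion matrix.
    import numpy as np

    def overall_accuracy_and_kappa(cm):
        """cm: square confusion matrix, rows = reference, columns = predicted."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        p_o = np.trace(cm) / n                                 # observed agreement (OA)
        p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
        kappa = (p_o - p_e) / (1.0 - p_e)
        return p_o, kappa

    # Hypothetical 3-class example (e.g., rice / other crops / non-crop).
    cm = [[50, 3, 2],
          [4, 45, 1],
          [2, 2, 41]]
    oa, kappa = overall_accuracy_and_kappa(cm)
    print(round(oa, 3), round(kappa, 3))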
in Remote Sensing Applications: Society and Environment, RSASE > vol 29 (January 2023) . - n° 100859

How to optimize the 2D/3D urban thermal environment: Insights derived from UAV LiDAR/multispectral data and multi-source remote sensing data / Rongfang Lyu in Sustainable Cities and Society, vol 88 (January 2023)
Investigating the impact of pan sharpening on the accuracy of land cover mapping in Landsat OLI imagery / Komeil Rokni in Geodesy and cartography, vol 49 n° 1 (January 2023)
Tree species classification in a typical natural secondary forest using UAV-borne LiDAR and hyperspectral data / Ying Quan in GIScience and remote sensing, vol 60 n° 1 (2023)
Bathymetry and benthic habitat mapping in shallow waters from Sentinel-2A imagery: A case study in Xisha islands, China / Wei Huang in IEEE Transactions on geoscience and remote sensing, vol 60 n° 12 (December 2022)
Bayesian hyperspectral image super-resolution in the presence of spectral variability / Fei Ye in IEEE Transactions on geoscience and remote sensing, vol 60 n° 12 (December 2022)
Fusion of SAR and multi-spectral time series for determination of water table depth and lake area in peatlands / Katrin Krzepek in PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol 90 n° 6 (December 2022)
Hyperspectral imagery and urban areas: results of the HYEP project / Christiane Weber in Revue Française de Photogrammétrie et de Télédétection, n° 224 (2022)
Cross-guided pyramid attention-based residual hyperdense network for hyperspectral image pansharpening / Jiahui Qu in IEEE Transactions on geoscience and remote sensing, vol 60 n° 11 (November 2022)
A high-resolution panchromatic-multispectral satellite image fusion method assisted with building segmentation / Fang Gao in Computers & geosciences, vol 168 (November 2022)
A deep 2D/3D Feature-Level fusion for classification of UAV multispectral imagery in urban areas / Hossein Pourazar in Geocarto international, vol 37 n° 23 ([15/10/2022])