Descriptor
Documents available in this category (1887)
Cross-supervised learning for cloud detection / Kang Wu in GIScience and remote sensing, vol 60 n° 1 (2023)
[article]
Title: Cross-supervised learning for cloud detection
Document type: Article/Communication
Authors: Kang Wu; Zunxiao Xu; Xinrong Lyu; et al.
Year of publication: 2023
Article (pages): n° 2147298
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] supervised learning
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] object detection
[IGN terms] labeled training data
[IGN terms] cloud
Abstract: (author) We present a new learning paradigm, cross-supervised learning, and explore its use for cloud detection. The cross-supervised learning paradigm is characterized by both supervised training and mutually supervised training, and is performed by two base networks. In addition to the individual supervised training on labeled data, the two base networks perform mutually supervised training on unlabeled data, using the prediction results provided by each other. Specifically, we develop In-extensive Nets for implementing the base networks. The In-extensive Nets consist of two Intensive Nets and are trained using the cross-supervised learning paradigm. The Intensive Net leverages information from the labeled cloudy images using a focal attention guidance module (FAGM) and a regression block. The cross-supervised learning paradigm empowers the In-extensive Nets to learn from both labeled and unlabeled cloudy images, substantially reducing the number of labeled cloudy images (which tend to require expensive manual effort) needed for training. Experimental results verify that In-extensive Nets perform well and have an obvious advantage in situations where only a few labeled cloudy images are available for training. The implementation code for the proposed paradigm is available at https://gitee.com/kang_wu/in-extensive-nets.
Record number: A2023-190
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/15481603.2022.2147298
Online publication date: 03/01/2023
Online: https://doi.org/10.1080/15481603.2022.2147298
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102969
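The mutually supervised training loop described in the abstract (two base networks, each taking supervised steps on the labeled set and learning from the other's pseudo-labels on the unlabeled set) can be sketched with toy logistic models standing in for the In-extensive Nets. All names (`TinyNet`, `cross_supervised_train`) and the data are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

class TinyNet:
    """A one-layer logistic model standing in for one base network."""
    def __init__(self, dim, seed):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0

    def predict_proba(self, X):
        return sigmoid(X @ self.w + self.b)

    def step(self, X, y, lr=0.5):
        # one gradient step on binary cross-entropy
        g = self.predict_proba(X) - y
        self.w -= lr * (X.T @ g) / len(y)
        self.b -= lr * g.mean()

def cross_supervised_train(net_a, net_b, X_lab, y_lab, X_unlab, epochs=200):
    for _ in range(epochs):
        # individual supervised training on the labeled samples
        net_a.step(X_lab, y_lab)
        net_b.step(X_lab, y_lab)
        # mutual supervision: each net learns from the other's
        # pseudo-labels on the unlabeled samples
        pseudo_a = (net_a.predict_proba(X_unlab) > 0.5).astype(float)
        pseudo_b = (net_b.predict_proba(X_unlab) > 0.5).astype(float)
        net_a.step(X_unlab, pseudo_b)
        net_b.step(X_unlab, pseudo_a)

# toy "cloud / no cloud" features: two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (60, 2)), rng.normal(2.0, 1.0, (60, 2))])
y = np.array([0.0] * 60 + [1.0] * 60)
lab = np.arange(0, 120, 30)                  # only 4 labeled samples
unlab = np.setdiff1d(np.arange(120), lab)
net_a, net_b = TinyNet(2, seed=1), TinyNet(2, seed=2)
cross_supervised_train(net_a, net_b, X[lab], y[lab], X[unlab])
acc = float(((net_a.predict_proba(X) > 0.5) == (y > 0.5)).mean())
```

With only four labeled samples, the mutual pseudo-label steps let both models exploit the 116 unlabeled points, which is the core idea the paper scales up to cloud segmentation networks.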
in GIScience and remote sensing > vol 60 n° 1 (2023) . - n° 2147298 [article]
Decision tree-based machine learning models for above-ground biomass estimation using multi-source remote sensing data and object-based image analysis / Haifa Tamiminia in Geocarto international, vol 38 n° unknown ([01/01/2023])
[article]
Title: Decision tree-based machine learning models for above-ground biomass estimation using multi-source remote sensing data and object-based image analysis
Document type: Article/Communication
Authors: Haifa Tamiminia; Bahram Salehi; Masoud Mahdianpari; et al.
Year of publication: 2023
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] object-based image analysis
[IGN terms] above-ground biomass
[IGN terms] adaptive boosting
[IGN terms] random forest classification
[IGN terms] per-pixel classification
[IGN terms] training data (machine learning)
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] Extreme Gradient Machine
[IGN terms] ALOS-PALSAR imagery
[IGN terms] Landsat imagery
[IGN terms] Sentinel-MSI imagery
[IGN terms] Sentinel-SAR imagery
[IGN terms] New York (United States; state)
[IGN terms] nature reserve
Abstract: (author) Forest above-ground biomass (AGB) estimation provides valuable information about the carbon cycle. Thus, the overall goal of this paper is to present an approach to enhance the accuracy of AGB estimation. The main objectives are to: 1) investigate the performance of remote sensing data sources, including airborne light detection and ranging (LiDAR), optical, SAR, and their combination, to improve the AGB predictions, 2) examine the capability of tree-based machine learning models, and 3) compare the performance of pixel-based and object-based image analysis (OBIA). To investigate the performance of machine learning models, multiple tree-based algorithms were fitted to predictors derived from airborne LiDAR data, Landsat, Sentinel-2, Sentinel-1, and PALSAR-2/PALSAR SAR data collected within New York’s Adirondack Park. Combining remote sensing data from multiple sources improved the model accuracy (RMSE: 52.14 Mg ha−1 and R2: 0.49). There was no significant difference among gradient boosting machine (GBM), random forest (RF), and extreme gradient boosting (XGBoost) models. In addition, pixel-based and object-based models were compared using the airborne LiDAR-derived AGB raster as a training/testing sample. The OBIA provided the best results, with an RMSE of 33.77 Mg ha−1 and R2 of 0.81 for the combination of optical and SAR data in the GBM model.
Record number: A2022-331
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTING
Nature: Article
DOI: 10.1080/10106049.2022.2071475
Online publication date: 27/04/2022
Online: https://doi.org/10.1080/10106049.2022.2071475
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100607
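The accuracy figures reported above (RMSE in Mg ha−1 and R²) follow the standard definitions; a minimal sketch of both metrics, using hypothetical observed vs. predicted biomass values rather than the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, in the units of y (e.g. Mg/ha for AGB)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# hypothetical observed vs. predicted biomass values
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([2.0, 2.0, 3.0, 3.0])
err = rmse(y_true, y_pred)   # sqrt(0.5) ~ 0.707
fit = r2(y_true, y_pred)     # 0.6
```

Comparing GBM, RF, and XGBoost on the same folds with these two metrics is exactly the model-comparison setup the abstract describes.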
in Geocarto international > vol 38 n° unknown [01/01/2023] [article]
Detection of growth change of young forest based on UAV RGB images at single-tree level / Xiaocheng Zhou in Forests, vol 14 n° 1 (January 2023)
[article]
Title: Detection of growth change of young forest based on UAV RGB images at single-tree level
Document type: Article/Communication
Authors: Xiaocheng Zhou; Hongyu Wang; Chongcheng Chen; et al.
Year of publication: 2023
Article (pages): n° 141
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] Abies (genus)
[IGN terms] forest stand age
[IGN terms] China
[IGN terms] tree growth
[IGN terms] change detection
[IGN terms] tree height
[IGN terms] drone-acquired imagery
[IGN terms] RGB imagery
[IGN terms] young tree
[IGN terms] canopy digital surface model
[IGN terms] forest monitoring
Abstract: (author) With the rapid development of Unmanned Aerial Vehicle (UAV) technology, more and more UAVs have been used in forest surveys. UAV RGB images are the most widely used UAV data source in forest resource management. However, there is some uncertainty as to the reliability of these data when monitoring height and growth changes of low-growing saplings in an afforestation plot via UAV RGB images. This study focuses on an artificial young forest plot of Chinese fir (Cunninghamia lanceolata) in Fujian, China. Divide-and-conquer (DAC) and the local maximum (LM) method for extracting seedling height are described in the paper, and the possibility of monitoring young forest growth based on low-cost UAV remote sensing images was explored. The two algorithms were adopted and compared to extract tree height at the single-tree level from multi-temporal UAV RGB images acquired from 2019 to 2021. Compared to field survey data, the R2 of single saplings’ height extracted from digital orthophoto map (DOM) images of tree pits and original DSM information using the divide-and-conquer method reached 0.8577 in 2020 and 0.9968 in 2021, respectively. The RMSE reached 0.2141 in 2020 and 0.1609 in 2021. The R2 of tree height extracted from the canopy height model (CHM) via the LM method was 0.9462. The RMSE was 0.3354 in 2021. The results demonstrated that the survival rates of the young forest in the second year and the third year were 99.9% and 85.6%, respectively. This study shows that the height of low saplings can be obtained from UAV RGB images through algorithms operating on 3D point cloud data derived from high-precision UAV images, and that the growth of individual trees can be monitored by combining multi-stage UAV RGB images after afforestation. This research provides a fully automated method for evaluating afforestation results from UAV RGB images. In the future, the universality of the method should be evaluated in more afforestation plots featuring different tree species and terrain.
Record number: A2023-115
Author affiliation: non-IGN
Theme: FOREST/IMAGERY
Nature: Article
DOI: 10.3390/f14010141
Online publication date: 10/01/2023
Online: https://doi.org/10.3390/f14010141
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102482
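The local maximum (LM) method mentioned above detects treetops as cells of a canopy height model (CHM) that dominate their neighborhood. A minimal sketch under assumed parameters (window radius, minimum tree height); the function name and the toy CHM are hypothetical, not the authors' code:

```python
import numpy as np

def local_maxima(chm, radius=1, min_height=2.0):
    """Treetop candidates: cells that are the unique maximum of their
    (2*radius+1)^2 neighborhood and taller than min_height (meters)."""
    peaks = []
    rows, cols = chm.shape
    for r in range(rows):
        for c in range(cols):
            h = chm[r, c]
            if h < min_height:
                continue
            window = chm[max(0, r - radius):r + radius + 1,
                         max(0, c - radius):c + radius + 1]
            if h >= window.max() and (window == h).sum() == 1:
                peaks.append((r, c))
    return peaks

# toy 5x5 CHM (meters) with two isolated crowns
chm = np.zeros((5, 5))
chm[1, 1] = 3.0
chm[3, 3] = 4.5
treetops = local_maxima(chm)   # [(1, 1), (3, 3)]
```

The peak heights at the detected cells are then the per-tree height estimates that the paper compares against field survey data.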
in Forests > vol 14 n° 1 (January 2023) . - n° 141 [article]
Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image / Taposh Mollick in Remote Sensing Applications: Society and Environment, RSASE, vol 29 (January 2023)
[article]
Title: Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image
Document type: Article/Communication
Authors: Taposh Mollick; MD Golam Azam; Sabrina Karim
Year of publication: 2023
Article (pages): n° 100859
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] object-based image analysis
[IGN terms] machine learning
[IGN terms] Bangladesh
[IGN terms] unsupervised classification
[IGN terms] maximum likelihood classification
[IGN terms] k-means clustering classification
[IGN terms] per-pixel classification
[IGN terms] drone-acquired imagery
[IGN terms] multiband imagery
[IGN terms] land cover
[IGN terms] agricultural yield
[IGN terms] image segmentation
[IGN terms] land use
Abstract: (author) Bangladesh is primarily an agricultural country, where technological advancement in the agricultural sector can accelerate economic growth and ensure long-term food security. This research was conducted in the south-western coastal zone of Bangladesh, where rice is the main crop and other crops are also grown. Land use and land cover (LULC) classification using remote sensing techniques, such as satellite or unmanned aerial vehicle (UAV) images, can forecast crop yield and can also provide information on weeds, nutrient deficiencies, diseases, etc. to monitor and treat the crops. Depending on the reflectance received by sensors, remotely sensed images store a digital number (DN) for each pixel. Traditionally, these pixel values have been used to separate clusters and classify various objects. However, this frequently generates discontinuities within a single land cover, resulting in small spurious objects that degrade the classification output, known as the salt-and-pepper effect. In order to classify land cover based on texture, shape, and neighbors, Pixel-Based Image Analysis (PBIA) and Object-Based Image Analysis (OBIA) methods use digital image classification algorithms such as Maximum Likelihood (ML), K-Nearest Neighbors (KNN), and k-means clustering to smooth this discontinuity. The authors evaluated the accuracy of both the PBIA and OBIA approaches by classifying the land cover of an agricultural field, taking into consideration the development of UAV technology and enhanced image resolution. For classifying multispectral UAV images, we used the KNN machine learning algorithm for object-based supervised image classification and Maximum Likelihood (ML) classification (parametric) for pixel-based supervised image classification. For unsupervised pixel-based classification, we used the K-means clustering technique. For image analysis, the Near-infrared (NIR), Red (R), Green (G), and Blue (B) bands of a high-resolution (0.0125 m ground sampling distance, GSD) UAV image were used in this research. The study found that OBIA was 21% more accurate than PBIA, with an overall accuracy of 94.9%. In terms of Kappa statistics, OBIA was 27% more accurate than PBIA, with a Kappa accuracy of 93.4%. This indicates that OBIA provides better classification performance than PBIA for high-resolution UAV images. The study therefore suggests OBIA for more accurate identification of crop types and land cover, which will make crop management, agricultural monitoring, and crop yield forecasting more effective.
Record number: A2023-021
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.rsase.2022.100859
Online publication date: 22/11/2022
Online: https://doi.org/10.1016/j.rsase.2022.100859
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102224
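The overall accuracy and Kappa statistic reported above are both derived from the classification confusion matrix; a minimal sketch with a hypothetical two-class matrix (not the paper's data):

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (trace over total)."""
    return float(np.trace(cm) / cm.sum())

def cohens_kappa(cm):
    """Kappa = (p_o - p_e) / (1 - p_e): agreement corrected for chance."""
    n = cm.sum()
    p_o = np.trace(cm) / n                             # observed agreement
    p_e = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2   # chance agreement
    return float((p_o - p_e) / (1.0 - p_e))

# hypothetical two-class confusion matrix (rows: reference, cols: predicted)
cm = np.array([[45, 5],
               [5, 45]])
oa = overall_accuracy(cm)   # 0.9
kappa = cohens_kappa(cm)    # 0.8
```

Kappa is always at or below overall accuracy because it discounts the agreement expected by chance, which is why the paper reports both.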
in Remote Sensing Applications: Society and Environment, RSASE > vol 29 (January 2023) . - n° 100859 [article]
A hierarchical deformable deep neural network and an aerial image benchmark dataset for surface multiview stereo reconstruction / Jiayi Li in IEEE Transactions on geoscience and remote sensing, vol 61 n° 1 (January 2023)
[article]
Title: A hierarchical deformable deep neural network and an aerial image benchmark dataset for surface multiview stereo reconstruction
Document type: Article/Communication
Authors: Jiayi Li; Xin Huang; Yujin Feng; et al.
Year of publication: 2023
Article (pages): n° 5600812
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] hierarchical approach
[IGN terms] depth map
[IGN terms] object deformation
[IGN terms] kinetic depth effect
[IGN terms] feature extraction
[IGN terms] aerial imagery
[IGN terms] dataset
[IGN terms] digital surface model
[IGN terms] stereoscopic model
[IGN terms] image reconstruction
[IGN terms] deep neural network
[IGN terms] semantic segmentation
Abstract: (author) Multiview stereo (MVS) aerial image depth estimation is a research frontier in the remote sensing field. Recent deep learning-based advances in close-range object reconstruction have suggested the great potential of this approach. Meanwhile, the deformation problem and the scale variation issue are also worthy of attention. These characteristics of aerial images limit the applicability of the current methods for aerial image depth estimation. Moreover, there are few available benchmark datasets for aerial image depth estimation. In this regard, this article describes a new benchmark dataset called the LuoJia-MVS dataset (https://irsip.whu.edu.cn/resources/resources_en_v2.php), as well as a new deep neural network known as the hierarchical deformable cascade MVS network (HDC-MVSNet). The LuoJia-MVS dataset contains 7972 five-view images with a spatial resolution of 10 cm, pixel-wise depths, and precise camera parameters, and was generated from an accurate digital surface model (DSM) built from thousands of stereo aerial images. In the HDC-MVSNet network, a new full-scale feature pyramid extraction module, a hierarchical set of 3-D convolutional blocks, and “true 3-D” deformable 3-D convolutional layers are specifically designed by considering the aforementioned characteristics of aerial images. Overall and ablation experiments on the WHU and LuoJia-MVS datasets validated the superiority of HDC-MVSNet over the current state-of-the-art MVS depth estimation methods and confirmed that the newly built dataset can provide an effective benchmark.
Record number: A2023-117
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2023.3234694
Online: https://doi.org/10.1109/TGRS.2023.3234694
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102488
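HDC-MVSNet's internals are described in the article itself; as generic background on how MVS depth-estimation networks of this family turn a matching cost volume into a per-pixel depth map, here is the widely used soft-argmin depth regression. This is an illustrative sketch, not the authors' network head; the function name and toy data are assumptions.

```python
import numpy as np

def soft_argmin_depth(cost_volume, depth_values):
    """cost_volume: (D, H, W) matching costs (lower = better match).
    depth_values: (D,) candidate depths. Returns an (H, W) expected depth map."""
    logits = -cost_volume
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    prob = np.exp(logits)
    prob /= prob.sum(axis=0, keepdims=True)              # softmax over depth axis
    return np.tensordot(depth_values, prob, axes=1)      # probability-weighted depth

# toy volume: 3 depth hypotheses over a 2x2 image, best match at 2.0 m
depths = np.array([1.0, 2.0, 3.0])
cost = np.full((3, 2, 2), 100.0)
cost[1] = 0.0
depth_map = soft_argmin_depth(cost, depths)   # ~2.0 everywhere
```

Because the expectation is differentiable, the whole pipeline can be trained end-to-end against the pixel-wise ground-truth depths that benchmark datasets such as LuoJia-MVS provide.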
in IEEE Transactions on geoscience and remote sensing > vol 61 n° 1 (January 2023) . - n° 5600812 [article]
Large-scale individual building extraction from open-source satellite imagery via super-resolution-based instance segmentation approach / Shenglong Chen in ISPRS Journal of photogrammetry and remote sensing, vol 195 (January 2023) (Permalink)
Linear building pattern recognition in topographical maps combining convex polygon decomposition / Zhiwei Wei in Geocarto international, vol 38 n° unknown ([01/01/2023]) (Permalink)
Machine learning remote sensing using the random forest classifier to detect the building damage caused by the Anak Krakatau Volcano tsunami / Riantini Virtriana in Geomatics, Natural Hazards and Risk, vol 14 n° 1 (2023) (Permalink)
A method for remote sensing image classification by combining Pixel Neighbourhood Similarity and optimal feature combination / Kaili Zhang in Geocarto international, vol 38 n° 1 ([01/01/2023]) (Permalink)
(Permalink)
Prototype-guided multitask adversarial network for cross-domain LiDAR point clouds semantic segmentation / Zhimin Yuan in IEEE Transactions on geoscience and remote sensing, vol 61 n° 1 (January 2023) (Permalink)
Semi-supervised label propagation for multi-source remote sensing image change detection / Fan Hao in Computers & geosciences, vol 170 (January 2023) (Permalink)
The cellular automata approach in dynamic modelling of land use change detection and future simulations based on remote sensing data in Lahore Pakistan / Muhammad Nasar Ahmad in Photogrammetric Engineering & Remote Sensing, PERS, vol 89 n° 1 (January 2023) (Permalink)
Tree species classification in a typical natural secondary forest using UAV-borne LiDAR and hyperspectral data / Ying Quan in GIScience and remote sensing, vol 60 n° 1 (2023) (Permalink)
Consistency assessment of multi-date PlanetScope imagery for seagrass percent cover mapping in different seagrass meadows / Pramaditya Wicaksono in Geocarto international, vol 37 n° 27 ([20/12/2022]) (Permalink)