Descripteur
Documents disponibles dans cette catégorie (119)



Detection of growth change of young forest based on UAV RGB images at single-tree level / Xiaocheng Zhou in Forests, vol 14 n° 1 (January 2023)
[article]
Titre : Detection of growth change of young forest based on UAV RGB images at single-tree level
Type de document : Article/Communication
Auteurs : Xiaocheng Zhou, Auteur ; Hongyu Wang, Auteur ; Chongcheng Chen, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 141
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] Abies (genre)
[Termes IGN] âge du peuplement forestier
[Termes IGN] Chine
[Termes IGN] croissance des arbres
[Termes IGN] détection de changement
[Termes IGN] hauteur des arbres
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] jeune arbre
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] surveillance forestière
Résumé : (auteur) With the rapid development of Unmanned Aerial Vehicle (UAV) technology, more and more UAVs have been used in forest surveys. UAV RGB images are the most widely used UAV data source in forest resource management. However, there is some uncertainty as to the reliability of these data when monitoring the height and growth changes of low-growing saplings in an afforestation plot via UAV RGB images. This study focuses on an artificial Chinese fir (Cunninghamia lanceolata) young forest plot in Fujian, China. The divide-and-conquer (DAC) and local maximum (LM) methods for extracting seedling height are described in the paper, and the possibility of monitoring young forest growth based on low-cost UAV remote sensing images is explored. The two algorithms were applied and compared to extract tree height and monitor its change at the single-tree level from multi-temporal UAV RGB images acquired from 2019 to 2021. Compared to field survey data, the R2 of single saplings' height extracted from digital orthophoto map (DOM) images of tree pits and original DSM information using the divide-and-conquer method reached 0.8577 in 2020 and 0.9968 in 2021. The RMSE was 0.2141 in 2020 and 0.1609 in 2021. The R2 of tree height extracted from the canopy height model (CHM) via the LM method was 0.9462, with an RMSE of 0.3354 in 2021. The results show that the survival rates of the young forest in the second and third years were 99.9% and 85.6%, respectively. This study shows that the height of low saplings can be obtained from UAV RGB images through algorithms applied to 3D point cloud data derived from high-precision UAV imagery, and that the growth of individual trees after afforestation can be monitored by combining multi-temporal UAV RGB images. This research provides a fully automated method for evaluating afforestation results from UAV RGB images. In the future, the universality of the method should be evaluated in more afforestation plots featuring different tree species and terrain.
Numéro de notice : A2023-115
Affiliation des auteurs : non IGN
Thématique : FORET/IMAGERIE
Nature : Article
DOI : 10.3390/f14010141
Date de publication en ligne : 10/01/2023
En ligne : https://doi.org/10.3390/f14010141
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102482
in Forests > vol 14 n° 1 (January 2023) . - n° 141 [article]
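As a rough illustration of the local maximum (LM) approach mentioned in the abstract above, the sketch below derives a canopy height model from a DSM/DTM pair and reads tree heights at local maxima. It is a minimal sketch, not the authors' implementation; the arrays, the window size and the height threshold are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tree_heights_from_chm(dsm, dtm, window=5, min_height=0.3):
    """Pick per-tree heights as local maxima of a canopy height model (CHM).

    dsm, dtm   : 2D arrays of surface and terrain elevation on the same grid.
    window     : neighbourhood size (pixels) used to test for a local maximum.
    min_height : ignore maxima below this height (assumed threshold, in metres).
    """
    chm = dsm - dtm                                    # canopy height model
    neighbourhood_max = maximum_filter(chm, size=window)
    peaks = (chm == neighbourhood_max) & (chm > min_height)
    rows, cols = np.nonzero(peaks)
    return rows, cols, chm[rows, cols]                 # pixel positions and heights

# Toy example on a synthetic 100 x 100 grid (placeholder data).
rng = np.random.default_rng(0)
dtm = np.zeros((100, 100))
dsm = rng.random((100, 100)) * 0.2
dsm[30, 40] += 1.8                                     # one "sapling" about 1.8 m tall
rows, cols, heights = tree_heights_from_chm(dsm, dtm)
print(list(zip(rows, cols, np.round(heights, 2))))
```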
Multi-information PointNet++ fusion method for DEM construction from airborne LiDAR data / Hong Hu in Geocarto international, vol 38 n° 1 ([01/01/2023])
[article]
Titre : Multi-information PointNet++ fusion method for DEM construction from airborne LiDAR data
Type de document : Article/Communication
Auteurs : Hong Hu, Auteur ; Guanghe Zhang, Auteur ; Jianfeng Ao, Auteur ; et al., Auteur
Année de publication : 2023
Article en page(s) : n° 2153929
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] filtrage de points
[Termes IGN] image RVB
[Termes IGN] Kappa de Cohen
[Termes IGN] modèle numérique de surface
[Termes IGN] Perceptron multicouche
[Termes IGN] segmentation
[Termes IGN] semis de points
Résumé : (auteur) Airborne light detection and ranging (LiDAR) is a popular technology in remote sensing that can significantly improve the efficiency of digital elevation model (DEM) construction. However, it is challenging to identify the real terrain features in complex areas using LiDAR data. To solve this problem, this work proposes a multi-information fusion method based on PointNet++ to improve the accuracy of DEM construction. The RGB data and normalized coordinate information of the point cloud were added to increase the number of channels on the input side of the PointNet++ neural network, which improves the accuracy of classification during feature extraction. Low- and high-density point clouds obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) and the United States Geological Survey (USGS) were used to test the proposed method. The results suggest that the proposed method improves the Kappa coefficient by 8.81% compared to PointNet++. The type I error was reduced by 2.13%, the type II error by 8.29%, and the total error by 2.52% compared to the conventional algorithm. It can therefore be concluded that the proposed method can produce DEMs with higher accuracy.
Numéro de notice : A2023-056
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.1080/10106049.2022.2153929
Date de publication en ligne : 23/12/2022
En ligne : https://doi.org/10.1080/10106049.2022.2153929
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102389
in Geocarto international > vol 38 n° 1 [01/01/2023] . - n° 2153929 [article]
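To make the channel-fusion idea in the abstract above concrete, the sketch below assembles a per-point feature matrix that stacks raw coordinates, RGB values and normalized coordinates, the kind of widened input a PointNet++-style network would consume. This is a schematic illustration with synthetic data, not the authors' code; the function name and the 9-channel layout are assumptions.

```python
import numpy as np

def build_point_features(xyz, rgb):
    """Stack coordinates, colours and normalized coordinates per point.

    xyz : (N, 3) LiDAR point coordinates.
    rgb : (N, 3) colours (e.g. sampled from an orthophoto), in 0-255.
    Returns an (N, 9) array; the column count becomes the input channel
    count of the first shared MLP of a PointNet++-style network.
    """
    xyz = np.asarray(xyz, dtype=np.float32)
    rgb = np.asarray(rgb, dtype=np.float32) / 255.0           # scale colours to [0, 1]
    mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
    xyz_norm = (xyz - mins) / np.maximum(maxs - mins, 1e-6)   # per-tile normalized coords
    return np.concatenate([xyz, rgb, xyz_norm], axis=1)

# Synthetic tile of 1000 points (placeholder data).
rng = np.random.default_rng(1)
points = rng.random((1000, 3)) * [50.0, 50.0, 10.0]
colours = rng.integers(0, 256, size=(1000, 3))
features = build_point_features(points, colours)
print(features.shape)   # (1000, 9) -> set in_channels = 9 on the network side
```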
Above ground biomass estimation from UAV high resolution RGB images and LiDAR data in a pine forest in Southern Italy / Mauro Maesano in iForest, biogeosciences and forestry, vol 15 n° 6 (December 2022)
[article]
Titre : Above ground biomass estimation from UAV high resolution RGB images and LiDAR data in a pine forest in Southern Italy
Type de document : Article/Communication
Auteurs : Mauro Maesano, Auteur ; Giovanni Santopuoli, Auteur ; Federico Valerio Moresi, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : pp 451-457
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage automatique
[Termes IGN] biomasse aérienne
[Termes IGN] Calabre
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] données lidar
[Termes IGN] gestion forestière durable
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] régression
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
Résumé : (auteur) Knowledge of forest biomass is essential for managing forests in a sustainable way: reliable biomass data are needed for forestry and forest planning, for the carbon market, and to support the local economy in mountain and inland areas. However, the accurate quantification of above-ground biomass (AGB) is still a challenge at both the local and global levels. Remote sensing with Unmanned Aerial Vehicle (UAV) platforms offers an excellent trade-off between resolution, scale and acquisition frequency for AGB estimation. In this study, we evaluated the combined use of UAV RGB images, LiDAR data and ground truth data to estimate AGB in a forested watershed in Southern Italy. A low-cost AGB estimation method was adopted using a commercial fixed-wing drone equipped with an RGB camera, combined with canopy information derived from LiDAR and validated with field data. Two modelling methods (stepwise regression, SR, and random forest, RF) were used to estimate forest AGB, producing an accurate AGB map for each model. The RF model was more accurate than the SR model: R2 increased from 0.81 to 0.86, while RMSE and MAE decreased from 45.5 to 31.7 Mg ha-1 and from 34.2 to 22.1 Mg ha-1, respectively. We demonstrated that, with the computational efficiency of a machine learning algorithm, readily available images can produce satisfactory results, as shown by the accuracy of the random forest AGB estimation model.
Numéro de notice : A2022-903
Affiliation des auteurs : non IGN
Thématique : FORET/IMAGERIE
Nature : Article
DOI : 10.3832/ifor3781-015
Date de publication en ligne : 03/11/2022
En ligne : https://doi.org/10.3832/ifor3781-015
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102299
in iForest, biogeosciences and forestry > vol 15 n° 6 (December 2022) . - pp 451-457 [article]
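The comparison reported in the abstract above (stepwise regression vs. random forest, judged by R2, RMSE and MAE) can be reproduced in outline as below. This is a generic sketch on synthetic data, not the authors' pipeline; the predictor names and all values are placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Synthetic plot-level predictors (placeholder stand-ins for canopy metrics).
rng = np.random.default_rng(42)
n_plots = 200
X = np.column_stack([
    rng.uniform(5, 30, n_plots),     # e.g. mean canopy height (m)
    rng.uniform(0.2, 1.0, n_plots),  # e.g. canopy cover fraction
    rng.uniform(0, 5, n_plots),      # e.g. height standard deviation (m)
])
agb = 8.0 * X[:, 0] + 60.0 * X[:, 1] + rng.normal(0, 20, n_plots)  # Mg/ha, toy relation

X_train, X_test, y_train, y_test = train_test_split(X, agb, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
pred = rf.predict(X_test)

print("R2  :", round(r2_score(y_test, pred), 3))
print("RMSE:", round(np.sqrt(mean_squared_error(y_test, pred)), 1), "Mg/ha")
print("MAE :", round(mean_absolute_error(y_test, pred), 1), "Mg/ha")
```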
Foreground-aware refinement network for building extraction from remote sensing images / Zhang Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 11 (November 2022)
[article]
Titre : Foreground-aware refinement network for building extraction from remote sensing images
Type de document : Article/Communication
Auteurs : Zhang Yan, Auteur ; Wang Xiangyu, Auteur ; Zhang Zhongwei, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : pp 731-738
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse visuelle
[Termes IGN] attention (apprentissage automatique)
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] détection de régions
[Termes IGN] détection du bâti
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] image RVB
[Termes IGN] jeu de données
Résumé : (auteur) To extract buildings accurately, we propose a foreground-aware refinement network for building extraction. In particular, to reduce false positives, we design a foreground-aware module based on an attention gate block, which suppresses non-building features and enhances the model's sensitivity to buildings. In addition, we introduce a reverse attention mechanism in the detail refinement module: by erasing the currently predicted building regions, this module guides the network to learn the missing details of the buildings and achieves more accurate and complete building extraction. To further optimize the network, we design a hybrid loss combining BCE loss and SSIM loss, which supervises network learning at both the pixel and structure levels. Experimental results demonstrate the superiority of our network over state-of-the-art methods in terms of both quantitative metrics and visual quality.
Numéro de notice : A2022-842
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.14358/PERS.21-00081R2
Date de publication en ligne : 01/11/2022
En ligne : https://doi.org/10.14358/PERS.21-00081R2
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102055
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 11 (November 2022) . - pp 731-738 [article]
Exemplaires (1)
Code-barres | Cote | Support | Localisation | Section | Disponibilité
105-2022111 | SL | Revue | Centre de documentation | Revues en salle | Disponible
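The hybrid loss described in the abstract above combines a pixel-level term (BCE) with a structure-level term (SSIM). The sketch below shows one plausible way to write such a loss in PyTorch; it uses a simplified box-window SSIM and equal weighting, which are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def ssim(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM between two single-channel maps, with a simplified box window."""
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_p = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_p * mu_t
    num = (2 * mu_p * mu_t + c1) * (2 * cov + c2)
    den = (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)
    return (num / den).mean()

def hybrid_loss(logits, target):
    """Pixel-level BCE plus structure-level (1 - SSIM), equally weighted (assumed)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    return bce + (1.0 - ssim(prob, target))

# Toy usage: a batch of two single-channel 64x64 building masks (placeholder data).
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = hybrid_loss(logits, target)
loss.backward()
print(float(loss))
```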
Mapping forest in the Swiss Alps treeline ecotone with explainable deep learning / Thiên-Anh Nguyen in Remote sensing of environment, vol 281 (November 2022)
[article]
Titre : Mapping forest in the Swiss Alps treeline ecotone with explainable deep learning
Type de document : Article/Communication
Auteurs : Thiên-Anh Nguyen, Auteur ; Benjamin Kellenberger, Auteur ; Devis Tuia, Auteur
Année de publication : 2022
Article en page(s) : n° 113217
Note générale : Bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] Alpes
[Termes IGN] apprentissage profond
[Termes IGN] canopée
[Termes IGN] carte forestière
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] écotone
[Termes IGN] hauteur des arbres
[Termes IGN] image à très haute résolution
[Termes IGN] image aérienne
[Termes IGN] image RVB
[Termes IGN] inventaire forestier étranger (données)
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] Suisse
Résumé : (auteur) Forest maps are essential to understand forest dynamics. Thanks to the increasing availability of remote sensing data and machine learning models such as convolutional neural networks, forest maps can nowadays be created at large scales with high accuracy. Common methods usually predict a map from remote sensing images without deliberately considering intermediate semantic concepts that are relevant to the final map. This makes the mapping process difficult to interpret, especially when using opaque deep learning models. Moreover, such a procedure is entirely agnostic to the definitions of the mapping targets (e.g., forest types depending on variables such as tree height and tree density). Common models can at best learn these rules implicitly from data, which greatly hinders trust in the produced maps. In this work, we aim at building an explainable deep learning model for forest mapping that leverages prior knowledge about forest definitions to provide explanations for its decisions. We propose a model that explicitly quantifies intermediate variables involved in the forest definitions, such as tree height and tree canopy density, corresponding to those used to create the forest maps on which the model is trained, and combines them accordingly. We apply our model to mapping forest types using very high resolution aerial imagery, with a particular focus on the treeline ecotone at high altitudes, where forest boundaries are complex and highly dependent on the chosen forest definition. Results show that our rule-informed model is able to quantify the intermediate key variables and to predict forest maps that reflect the forest definitions. Through its interpretable design, it can further reveal implicit patterns in the manually annotated forest labels, which facilitates the analysis of the produced maps and their comparison with other datasets.
Numéro de notice : A2022-794
Affiliation des auteurs : non IGN
Thématique : FORET/IMAGERIE
Nature : Article
nature-HAL : ArtAvecCL-RevueIntern
DOI : 10.1016/j.rse.2022.113217
Date de publication en ligne : 01/09/2022
En ligne : https://doi.org/10.1016/j.rse.2022.113217
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101928
in Remote sensing of environment > vol 281 (November 2022) . - n° 113217 [article]
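The explainable design described in the abstract above predicts intermediate variables (tree height, canopy density) and then applies a forest definition to them. The sketch below illustrates the rule-application step only, on synthetic rasters; the thresholds are placeholder assumptions, not the Swiss forest definition used in the paper.

```python
import numpy as np

def apply_forest_definition(tree_height, canopy_density,
                            min_height=3.0, min_density=0.2):
    """Turn per-pixel intermediate variables into a binary forest mask.

    tree_height    : 2D array of predicted dominant tree height (m).
    canopy_density : 2D array of predicted canopy cover fraction in [0, 1].
    A pixel is labelled forest when both variables exceed the (assumed)
    thresholds of the chosen forest definition.
    """
    return (tree_height >= min_height) & (canopy_density >= min_density)

# Synthetic 4 x 4 example (placeholder predictions).
height = np.array([[0.5, 4.0, 6.0, 2.0],
                   [1.0, 5.5, 7.0, 3.5],
                   [0.0, 2.5, 8.0, 9.0],
                   [0.0, 0.5, 3.0, 10.0]])
density = np.array([[0.05, 0.30, 0.60, 0.10],
                    [0.10, 0.45, 0.70, 0.15],
                    [0.00, 0.25, 0.80, 0.90],
                    [0.00, 0.05, 0.18, 0.95]])
print(apply_forest_definition(height, density).astype(int))
```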
A deep 2D/3D Feature-Level fusion for classification of UAV multispectral imagery in urban areas / Hossein Pourazar in Geocarto international, vol 37 n° 23 ([15/10/2022])
Investigation of recognition and classification of forest fires based on fusion color and textural features of images / Cong Li in Forests, vol 13 n° 10 (October 2022)
Learning indoor point cloud semantic segmentation from image-level labels / Youcheng Song in The Visual Computer, vol 38 n° 9 (September 2022)
3D semantic scene completion: A survey / Luis Roldão in International journal of computer vision, vol 130 n° 8 (August 2022)
Effective CBIR based on hybrid image features and multilevel approach / D. Latha in Multimedia tools and applications, vol 81 n° 20 (August 2022)
Summarizing large scale 3D mesh for urban navigation / Imeen Ben Salah in Robotics and autonomous systems, vol 152 (June 2022)
Vegetation cover mapping from RGB webcam time series for land surface emissivity retrieval in high mountain areas / Benedikt Hiebl in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Deep-learning-based multispectral image reconstruction from single natural color RGB image - Enhancing UAV-based phenotyping / Jiangsan Zhao in Remote sensing, vol 14 n° 5 (March-1 2022)
Analysis of pedestrian movements and gestures using an on-board camera to predict their intentions / Joseph Gesnouin (2022)