Descripteur
Documents disponibles dans cette catégorie (63)
Accuracy analysis of UAV photogrammetry using RGB and multispectral sensors / Nikola Santrač in Geodetski vestnik, vol 67 n° 4 (December 2023)
[article]
Titre : Accuracy analysis of UAV photogrammetry using RGB and multispectral sensors Type de document : Article/Communication Auteurs : Nikola Santrač, Auteur ; Pavel Benka, Auteur ; Mehmed Batilović, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : pp 459 - 472 Note générale : bibliographie Langues : Anglais (eng) Slovène (slv) Descripteur : [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] image captée par drone
[Termes IGN] image multibande
[Termes IGN] image RVB
[Termes IGN] modèle géométrique de prise de vue
[Termes IGN] point d'appui
[Termes IGN] positionnement cinématique en temps réel
[Termes IGN] qualité des données
Résumé : (auteur) In recent years, unmanned aerial vehicles (UAVs) have become increasingly important as a tool for quickly collecting high-resolution (spatial and spectral) imagery of the Earth's surface. The final products are highly dependent on the choice of values for various parameters in flight planning, the type of sensors, and the processing of the data. In this paper, ground control points (GCPs) were first measured using the Global Navigation Satellite System (GNSS) Real-Time Kinematic (RTK) method; then, due to the low height accuracy of the GNSS RTK method, all points were measured using a detailed leveling method. This study aims to provide a basic assessment of quality covering four main aspects: (1) the impact of an RGB sensor versus a five-band multispectral sensor on accuracy and data volume, (2) the impact of the number of GCPs on the accuracy of the final products, (3) the impact of different altitudes and cross flight strips, and (4) the accuracy of multi-altitude models. The results suggest that the type of sensor, flight configuration, and GCP setup strongly affect the quality and quantity of the final product data, while creating a multi-altitude model does not yield the expected data quality. Given the unique combination of sensors and parameters examined, the results and recommendations presented in this paper can assist professionals and researchers in their future work. Numéro de notice : A2023-241 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.15292/geodetski-vestnik.2023.04.459-472 Date de publication en ligne : 01/12/2023 En ligne : https://dx.doi.org/10.15292/geodetski-vestnik.2023.04.459-472 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103604
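The accuracy assessment this abstract describes reduces to comparing model coordinates against independently measured check points. A minimal sketch, with hypothetical coordinates (not the paper's data), of the per-axis RMSE computation such studies report:

```python
import numpy as np

def rmse(measured, reference):
    """Per-axis root-mean-square error between model and reference coordinates."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return np.sqrt(np.mean(d ** 2, axis=0))

# E/N from GNSS RTK; heights replaced by detailed levelling, as in the paper.
# Values below are illustrative only.
reference = np.array([[100.00, 200.00, 50.00],
                      [110.00, 210.00, 51.00]])
model     = np.array([[100.02, 199.97, 50.05],
                      [110.01, 210.03, 50.94]])
rmse_e, rmse_n, rmse_h = rmse(model, reference)
```

Separating the horizontal (E, N) and height components mirrors the paper's motivation for levelling: GNSS RTK heights are usually the weakest axis.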
in Geodetski vestnik > vol 67 n° 4 (December 2023) . - pp 459 - 472 [article]
Deblurring low-light images with events / Chu Zhou in International journal of computer vision, vol 131 n° 5 (May 2023)
[article]
Titre : Deblurring low-light images with events Type de document : Article/Communication Auteurs : Chu Zhou, Auteur ; Minggui Teng, Auteur ; Jin Han, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : pp 1284 - 1298 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] caméra d'événement
[Termes IGN] correction d'image
[Termes IGN] filtrage du bruit
[Termes IGN] flou
[Termes IGN] image à basse résolution
[Termes IGN] image RVB
Résumé : (auteur) Modern image-based deblurring methods usually degrade in low-light conditions, since the images consist mostly of poorly visible dark regions with a few saturated bright regions, limiting the amount of effective features that can be extracted for deblurring. In contrast, event cameras can trigger events with a very high dynamic range and low latency; they hardly suffer from saturation and naturally encode dense temporal information about motion. However, in low-light conditions existing event-based deblurring methods become less robust, since the events triggered in dark regions are often severely contaminated by noise, leading to inaccurate reconstruction of the corresponding intensity values. Besides, since they directly adopt the event-based double integral model to perform pixel-wise reconstruction, they can only handle the low-resolution grayscale active pixel sensor images provided by the DAVIS camera, which cannot meet the requirements of daily photography. In this paper, to apply events to deblurring low-light images robustly, we propose a unified two-stage framework along with a motion-aware neural network tailored to it, reconstructing the sharp image under the guidance of high-fidelity motion clues extracted from events. Besides, we build an RGB-DAVIS hybrid camera system to demonstrate that our method can deblur high-resolution RGB images thanks to the natural advantages of our two-stage framework. Experimental results show our method achieves state-of-the-art performance on both synthetic and real-world images. Numéro de notice : A2023-210 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1007/s11263-023-01754-5 Date de publication en ligne : 06/02/2023 En ligne : https://doi.org/10.1007/s11263-023-01754-5 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=103062
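The event-based double integral (EDI) model the abstract refers to relates a blurry frame to its latent sharp frame through the exponential of cumulative event counts within the exposure. A minimal numpy sketch under that model; the function name, array shapes, and contrast value are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def edi_sharp(blurry, event_sums, contrast=0.2):
    """Recover a latent sharp frame from a blurry frame via the EDI relation.

    blurry:     (H, W) blurred intensity, the temporal mean of I(t)
    event_sums: (N, H, W) per-pixel cumulative signed event counts sampled
                at N instants, relative to the latent timestamp
    contrast:   event contrast threshold c, so I(t) = I_sharp * exp(c * E(t))
    """
    # B = mean_t I(t) = I_sharp * mean_t exp(c * E(t))  =>  invert the gain
    gain = np.exp(contrast * event_sums).mean(axis=0)
    return blurry / gain

# Self-check on synthetic data: forward-blur a flat latent frame with known
# events, then invert it.
sharp = np.full((2, 2), 0.5)
events = np.array([[[0., 1.], [2., -1.]],
                   [[1., 0.], [-2., 3.]],
                   [[0., 0.], [0., 0.]]])       # cumulative counts at 3 instants
blurry = sharp * np.exp(0.2 * events).mean(axis=0)
recovered = edi_sharp(blurry, events, contrast=0.2)
```

The abstract's point is that this pixel-wise inversion is exactly what breaks down when dark-region events are noisy, which motivates the paper's learned two-stage alternative.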
in International journal of computer vision > vol 131 n° 5 (May 2023) . - pp 1284 - 1298 [article]
Detection of growth change of young forest based on UAV RGB images at single-tree level / Xiaocheng Zhou in Forests, vol 14 n° 1 (January 2023)
[article]
Titre : Detection of growth change of young forest based on UAV RGB images at single-tree level Type de document : Article/Communication Auteurs : Xiaocheng Zhou, Auteur ; Hongyu Wang, Auteur ; Chongcheng Chen, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : n° 141 Note générale : Bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] Abies (genre)
[Termes IGN] âge du peuplement forestier
[Termes IGN] Chine
[Termes IGN] croissance des arbres
[Termes IGN] détection de changement
[Termes IGN] hauteur des arbres
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] jeune arbre
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] surveillance forestière
Résumé : (auteur) With the rapid development of Unmanned Aerial Vehicle (UAV) technology, more and more UAVs have been used in forest surveys. UAV RGB images are the most widely used UAV data source in forest resource management. However, there is some uncertainty as to the reliability of these data when monitoring height and growth changes of low-growing saplings in an afforestation plot via UAV RGB images. This study focuses on an artificial Chinese fir (Cunninghamia lanceolata) young forest plot in Fujian, China. Divide-and-conquer (DAC) and local maximum (LM) methods for extracting seedling height are described in the paper, and the possibility of monitoring young forest growth based on low-cost UAV remote sensing images was explored. Two key algorithms were adopted and compared for extracting tree height at the single-tree level from multi-temporal UAV RGB images from 2019 to 2021. Compared to field survey data, the R2 of single saplings' height extracted from digital orthophoto map (DOM) images of tree pits and original DSM information using the divide-and-conquer method reached 0.8577 in 2020 and 0.9968 in 2021, with RMSE values of 0.2141 and 0.1609, respectively. The R2 of tree height extracted from the canopy height model (CHM) via the LM method was 0.9462, with an RMSE of 0.3354 in 2021. The results demonstrated that the survival rates of the young forest in the second and third years were 99.9% and 85.6%, respectively. This study shows that UAV RGB images can provide the height of low saplings through a computer algorithm based on 3D point cloud data derived from high-precision UAV images, and can monitor the growth of individual trees after afforestation when combined with multi-stage UAV RGB images. This research provides a fully automated method for evaluating afforestation results from UAV RGB images.
In the future, the universality of the method should be evaluated in more afforestation plots featuring different tree species and terrain. Numéro de notice : A2023-115 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.3390/f14010141 Date de publication en ligne : 10/01/2023 En ligne : https://doi.org/10.3390/f14010141 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102482
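The LM (local maximum) step described above amounts to subtracting a terrain model from a surface model to obtain a canopy height model, then keeping cells that dominate their neighbourhood. A minimal sketch with illustrative grid values, window, and height threshold (not the study's parameters):

```python
import numpy as np

def local_maxima(chm, min_height=1.0):
    """Return (row, col) indices of CHM cells that are 3x3 local maxima
    and exceed a minimum height — candidate treetops."""
    # Pad with -inf so border cells compare only against real neighbours.
    p = np.pad(chm, 1, constant_values=-np.inf)
    h, w = chm.shape
    neigh = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    peaks = (chm >= neigh.max(axis=0)) & (chm >= min_height)
    return np.argwhere(peaks)

# Illustrative 3 m grid: one sapling crown rising above low ground cover.
dsm = np.array([[52.0, 52.1, 51.9],
                [52.0, 53.5, 52.0],
                [51.9, 52.0, 52.0]])
dtm = np.full_like(dsm, 50.0)
chm = dsm - dtm                      # canopy height model = DSM - DTM
tops = local_maxima(chm, min_height=1.0)
```

On this toy grid the single detected treetop is the 3.5 m cell at the centre; per-tree height is then just the CHM value at each detected peak.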
in Forests > vol 14 n° 1 (January 2023) . - n° 141 [article]
Multi-information PointNet++ fusion method for DEM construction from airborne LiDAR data / Hong Hu in Geocarto international, vol 38 n° 1 ([01/01/2023])
[article]
Titre : Multi-information PointNet++ fusion method for DEM construction from airborne LiDAR data Type de document : Article/Communication Auteurs : Hong Hu, Auteur ; Guanghe Zhang, Auteur ; Jianfeng Ao, Auteur ; et al., Auteur Année de publication : 2023 Article en page(s) : n° 2153929 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] filtrage de points
[Termes IGN] image RVB
[Termes IGN] Kappa de Cohen
[Termes IGN] modèle numérique de surface
[Termes IGN] Perceptron multicouche
[Termes IGN] segmentation
[Termes IGN] semis de points
Résumé : (auteur) Airborne light detection and ranging (LiDAR) is a popular technology in remote sensing that can significantly improve the efficiency of digital elevation model (DEM) construction. However, it is challenging to identify the real terrain features in complex areas using LiDAR data. To solve this problem, this work proposes a multi-information fusion method based on PointNet++ to improve the accuracy of DEM construction. The RGB data and normalized coordinate information of the point cloud were added to increase the number of channels on the input side of the PointNet++ neural network, which can improve classification accuracy during feature extraction. Low- and high-density point clouds obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) and the United States Geological Survey (USGS) were used to test the proposed method. The results suggest that the proposed method improves the Kappa coefficient by 8.81% compared to PointNet++. The type I error was reduced by 2.13%, the type II error by 8.29%, and the total error by 2.52% compared to the conventional algorithm. Therefore, it is possible to conclude that the proposed method can obtain DEMs with higher accuracy. Numéro de notice : A2023-056 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article DOI : 10.1080/10106049.2022.2153929 Date de publication en ligne : 23/12/2022 En ligne : https://doi.org/10.1080/10106049.2022.2153929 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102389
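The channel-widening idea in the abstract — attaching RGB and normalized coordinates to each point before it enters PointNet++ — can be sketched as simple per-point feature stacking. Shapes and the min-max normalization choice are illustrative assumptions, not the authors' code:

```python
import numpy as np

def build_point_features(xyz, rgb):
    """Stack raw XYZ, scaled RGB, and block-normalized XYZ into the
    widened (N, 9) per-point input channels described in the abstract."""
    xyz = np.asarray(xyz, float)
    rgb = np.asarray(rgb, float) / 255.0                 # colors to [0, 1]
    mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)       # avoid divide-by-zero
    norm = (xyz - mins) / span                           # coords to [0, 1]
    return np.hstack([xyz, rgb, norm])

# Two illustrative LiDAR returns with colors from a co-registered orthophoto.
pts = build_point_features([[0.0, 0.0, 10.0], [2.0, 4.0, 14.0]],
                           [[255, 0, 0], [0, 255, 0]])
```

The network side of the change is then just configuring PointNet++ to expect 9 input channels instead of 3.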
in Geocarto international > vol 38 n° 1 [01/01/2023] . - n° 2153929 [article]
Above ground biomass estimation from UAV high resolution RGB images and LiDAR data in a pine forest in Southern Italy / Mauro Maesano in iForest, biogeosciences and forestry, vol 15 n° 6 (December 2022)
[article]
Titre : Above ground biomass estimation from UAV high resolution RGB images and LiDAR data in a pine forest in Southern Italy Type de document : Article/Communication Auteurs : Mauro Maesano, Auteur ; Giovanni Santopuoli, Auteur ; Federico Valerio Moresi, Auteur ; et al., Auteur Année de publication : 2022 Article en page(s) : pp 451-457 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage automatique
[Termes IGN] biomasse aérienne
[Termes IGN] Calabre
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] données lidar
[Termes IGN] gestion forestière durable
[Termes IGN] image captée par drone
[Termes IGN] image RVB
[Termes IGN] modèle numérique de surface de la canopée
[Termes IGN] régression
[Termes IGN] semis de points
[Termes IGN] structure-from-motion
Résumé : (auteur) Knowledge of forest biomass is an essential parameter for managing the forest in a sustainable way: reliable forest biomass data are necessary not only for forestry and forest planning, but also for the carbon market and for supporting the local economy in mountain and inner areas. However, the accurate quantification of above-ground biomass (AGB) is still a challenge at both local and global levels. The use of remote sensing techniques with Unmanned Aerial Vehicle (UAV) platforms can be an excellent trade-off between resolution, scale, and frequency of AGB estimation. In this study, we evaluated the combined use of RGB images from UAV, LiDAR data, and ground truth data to estimate AGB in a forested watershed in Southern Italy. A low-cost AGB estimation method was adopted using a commercial fixed-wing drone equipped with an RGB camera, combined with canopy information derived from LiDAR and validated by field data. Two modelling methods, stepwise regression (SR) and random forest (RF), were used to estimate forest AGB. The output was an accurate map of AGB for each model. The RF model showed better accuracy than the SR model: R2 increased from 0.81 to 0.86, while RMSE and MAE decreased from 45.5 to 31.7 Mg ha-1 and from 34.2 to 22.1 Mg ha-1, respectively. We demonstrated that by increasing computing efficiency through a machine learning algorithm, readily available images can be used to obtain satisfactory results, as proven by the accuracy of the random forest above-ground biomass estimation model. Numéro de notice : A2022-903 Affiliation des auteurs : non IGN Thématique : FORET/IMAGERIE Nature : Article DOI : 10.3832/ifor3781-015 Date de publication en ligne : 03/11/2022 En ligne : https://doi.org/10.3832/ifor3781-015 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102299
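The SR-versus-RF comparison in this abstract rests on three standard metrics: R2, RMSE, and MAE. A minimal sketch of their computation on illustrative AGB values (not the study's data):

```python
import numpy as np

def r2_rmse_mae(obs, pred):
    """Standard goodness-of-fit metrics for a biomass regression model."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    return r2, rmse, mae

obs  = [120.0, 80.0, 150.0, 60.0]    # field-measured AGB, Mg ha-1 (illustrative)
pred = [110.0, 85.0, 145.0, 70.0]    # model-predicted AGB, Mg ha-1
r2, rmse, mae = r2_rmse_mae(obs, pred)
```

Computed on the same validation plots, these three numbers are what let the study rank RF above SR (higher R2, lower RMSE and MAE).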
in iForest, biogeosciences and forestry > vol 15 n° 6 (December 2022) . - pp 451-457 [article]
Foreground-aware refinement network for building extraction from remote sensing images / Zhang Yan in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 11 (November 2022)
Mapping forest in the Swiss Alps treeline ecotone with explainable deep learning / Thiên-Anh Nguyen in Remote sensing of environment, vol 281 (November 2022)
A deep 2D/3D Feature-Level fusion for classification of UAV multispectral imagery in urban areas / Hossein Pourazar in Geocarto international, vol 37 n° 23 ([15/10/2022])
Investigation of recognition and classification of forest fires based on fusion color and textural features of images / Cong Li in Forests, vol 13 n° 10 (October 2022)
Learning indoor point cloud semantic segmentation from image-level labels / Youcheng Song in The Visual Computer, vol 38 n° 9 (September 2022)
3D semantic scene completion: A survey / Luis Roldão in International journal of computer vision, vol 130 n° 8 (August 2022)
Effective CBIR based on hybrid image features and multilevel approach / D. Latha in Multimedia tools and applications, vol 81 n° 20 (August 2022)
Summarizing large scale 3D mesh for urban navigation / Imeen Ben Salah in Robotics and autonomous systems, vol 152 (June 2022)
Vegetation cover mapping from RGB webcam time series for land surface emissivity retrieval in high mountain areas / Benedikt Hiebl in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2022 (2022 edition)
GeoRec: Geometry-enhanced semantic 3D reconstruction of RGB-D indoor scenes / Linxi Huan in ISPRS Journal of photogrammetry and remote sensing, vol 186 (April 2022)
Deep-learning-based multispectral image reconstruction from single natural color RGB image - Enhancing UAV-based phenotyping / Jiangsan Zhao in Remote sensing, vol 14 n° 5 (March-1 2022)
Analysis of pedestrian movements and gestures using an on-board camera to predict their intentions / Joseph Gesnouin (2022)
Interactive semantic segmentation of aerial images with deep neural networks / Gaston Lenczner (2022)
Building detection with convolutional networks trained with transfer learning / Simon Šanca in Geodetski vestnik, vol 65 n° 4 (December 2021 - February 2022)
Feature matching for multi-epoch historical aerial images: A new pipeline feature detection pipeline in open-source MicMac / Lulin Zhang in Blog de la RFPT, sans n° ([17/11/2021])
Feature matching for multi-epoch historical aerial images / Lulin Zhang in ISPRS Journal of photogrammetry and remote sensing, Vol 182 (December 2021)
A deep multi-modal learning method and a new RGB-depth data set for building roof extraction / Mehdi Khoshboresh Masouleh in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 10 (October 2021)
CNN-based RGB-D salient object detection: Learn, select, and fuse / Hao Chen in International journal of computer vision, vol 129 n° 7 (July 2021)
Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space / Min Wu in The Visual Computer, vol 37 n° 7 (July 2021)
Semantic unsupervised change detection of natural land cover with multitemporal object-based analysis on SAR images / Donato Amitrano in IEEE Transactions on geoscience and remote sensing, Vol 59 n° 7 (July 2021)
Assessing forest phenology: A multi-scale comparison of near-surface (UAV, spectral reflectance sensor, PhenoCam) and satellite (MODIS, Sentinel-2) remote sensing / Shangharsha Thapa in Remote sensing, vol 13 n° 8 (April-2 2021)
Visual positioning in indoor environments using RGB-D images and improved vector of local aggregated descriptors / Longyu Zhang in ISPRS International journal of geo-information, vol 10 n° 4 (April 2021)
Multi-level progressive parallel attention guided salient object detection for RGB-D images / Zhengyi Liu in The Visual Computer, vol 37 n° 3 (March 2021)
Activity recognition in residential spaces with Internet of things devices and thermal imaging / Kshirasagar Naik in Sensors, vol 21 n° 3 (February 2021)
Aleatoric uncertainty estimation for dense stereo matching via CNN-based cost volume analysis / Max Mehltretter in ISPRS Journal of photogrammetry and remote sensing, vol 171 (January 2021)
Cartographie dense et compacte par vision RGB-D pour la navigation d’un robot mobile / Bruce Canovas (2021)
Détection d’ouvertures par segmentation sémantique de nuages de points 3D : apport de l’apprentissage profond / Camille Lhenry (2021)
Real-time multimodal semantic scene understanding for autonomous UGV navigation / Yifei Zhang (2021)
The challenge of robust trait estimates with deep learning on high resolution RGB images / Etienne David (2021)
CNN-based tree species classification using high resolution RGB image data from automated UAV observations / Sebastian Egli in Remote sensing, vol 12 n° 23 (December-2 2020)
Automatic building footprint extraction from UAV images using neural networks / Zoran Kokeza in Geodetski vestnik, vol 64 n° 4 (December 2020 - February 2021)
Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery / Teja Kattenborn in Remote sensing in ecology and conservation, vol 6 n° 4 (December 2020)
Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks / Felix Schiefer in ISPRS Journal of photogrammetry and remote sensing, vol 170 (December 2020)
Textural classification of remotely sensed images using multiresolution techniques / Rizwan Ahmed Ansari in Geocarto international, vol 35 n° 14 ([15/10/2020])
3D hand mesh reconstruction from a monocular RGB image / Hao Peng in The Visual Computer, vol 36 n° 10 - 12 (October 2020)
Trajectory drift–compensated solution of a stereo RGB-D mapping system / Shengjun Tang in Photogrammetric Engineering & Remote Sensing, PERS, vol 86 n° 6 (June 2020)
Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks / Mahmoud Saeedimoghaddam in International journal of geographical information science IJGIS, vol 34 n° 5 (May 2020)
A review of techniques for 3D reconstruction of indoor environments / Zhizhong Kang in ISPRS International journal of geo-information, vol 9 n° 5 (May 2020)
Shrub biomass estimates in former burnt areas using Sentinel 2 images processing and classification / Jose Aranha in Forests, vol 11 n° 5 (May 2020)
Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging / Bo Li in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
Multichannel Pulse-Coupled Neural Network-Based Hyperspectral Image Visualization / Puhong Duan in IEEE Transactions on geoscience and remote sensing, vol 58 n° 4 (April 2020)
Integration of remote sensing and GIS to extract plantation rows from a drone-based image point cloud digital surface model / Nadeem Fareed in ISPRS International journal of geo-information, vol 9 n° 3 (March 2020)
Plant survival monitoring with UAVs and multispectral data in difficult access afforested areas / Maria Luz Gil-Docampo in Geocarto international, vol 35 n° 2 ([01/02/2020])
Analyse automatique du couvert végétal pour la gestion du risque végétation en milieu ferroviaire à partir d'imagerie aérienne / Hélène Rouillon (2020)
Application of machine learning techniques for evidential 3D perception, in the context of autonomous driving / Edouard Capellier (2020)
Regional-scale forest mapping over fragmented landscapes using global forest products and Landsat time series classification / Viktor Myroniuk in Remote sensing, vol 12 n° 1 (January 2020)
Combining thermal imaging with photogrammetry of an active volcano using UAV: an example from Stromboli, Italy / Zoë E. Wakeford in Photogrammetric record, vol 34 n° 168 (December 2019)
Estimating pasture biomass and canopy height in brazilian savanna using UAV photogrammetry / Juliana Batistoti in Remote sensing, Vol 11 n° 20 (October-2 2019)
Automatic canola mapping using time series of Sentinel 2 images / Davoud Ashourloo in ISPRS Journal of photogrammetry and remote sensing, vol 156 (October 2019)
Unmanned aerial vehicles (UAVs) for monitoring macroalgal biodiversity: comparison of RGB and multispectral imaging sensors for biodiversity assessments / Leigh Tait in Remote sensing, vol 11 n° 19 (October-1 2019)
Development and evaluation of a deep learning model for real-time ground vehicle semantic segmentation from UAV-based thermal infrared imagery / Mehdi Khoshboresh Masouleh in ISPRS Journal of photogrammetry and remote sensing, vol 155 (September 2019)
Enhanced 3D mapping with an RGB-D sensor via integration of depth measurements and image sequences / Bo Wu in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 9 (September 2019)
Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours / David Griffiths in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
Semantic façade segmentation from airborne oblique images / Yaping Lin in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)