Descripteur
Documents disponibles dans cette catégorie (853)
Integrating terrestrial laser scanning and unmanned aerial vehicle photogrammetry to estimate individual tree attributes in managed coniferous forests in Japan / Katsuto Shimizu in International journal of applied Earth observation and geoinformation, vol 106 (February 2022)
[article]
Titre : Integrating terrestrial laser scanning and unmanned aerial vehicle photogrammetry to estimate individual tree attributes in managed coniferous forests in Japan
Type de document : Article/Communication
Auteurs : Katsuto Shimizu, Auteur ; Tomohiro Nishizono, Auteur ; Fumiaki Kitahara, Auteur ; Keiko Fukumoto, Auteur ; Hideki Saito, Auteur
Année de publication : 2022
Article en page(s) : n° 102658
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] estimation statistique
[Termes IGN] hauteur des arbres
[Termes IGN] image captée par drone
[Termes IGN] Japon
[Termes IGN] Pinophyta
[Termes IGN] volume en bois
Résumé : (auteur) The accurate estimation of tree attributes is essential for sustainable forest management. Terrestrial Laser Scanning (TLS) is a viable remote sensing technology suitable for estimating under-canopy structure. However, TLS measurements generally underestimate tree height in taller trees, which leads to the underestimation of other tree attributes (e.g., stem volume). The integration of information derived from TLS and Unmanned Aerial Vehicle (UAV) photogrammetry could potentially improve tree height estimation. This study investigated the applicability of integrating TLS and UAV photogrammetry to estimate individual tree attributes in managed coniferous forests of Japan. Diameter at breast height (DBH), tree height, and stem volume were estimated by (1) TLS data only, (2) integrating TLS and UAV data with TLS tree locations, and (3) integrating TLS and UAV data with treetop detections of the tree canopy. The TLS data only approach achieved high accuracy for DBH estimations with a root mean squared error (RMSE) of 2.36 cm (RMSE% 5.6%); however, tree height was greatly underestimated, with an RMSE of 8.87 m (28.9%) and a bias of −8.39 m. Integrating TLS and UAV photogrammetric data improved tree height estimation accuracy for both the TLS tree location (RMSE of 1.89 m and a bias of −0.46 m) and the treetop detection (RMSE of 1.77 m and a bias of 0.36 m) approaches. Integrating TLS and UAV photogrammetric data also improved the accuracy of the stem volume estimations with RMSEs of 0.21 m3 (10.8%) and 0.21 m3 (10.5%) for the TLS tree location and treetop detection approaches, respectively. Although the tree height of suppressed trees tended to be overestimated by TLS and UAV photogrammetric data integration, a good performance was obtained for dominant trees.
The results of this study indicate that the integration of TLS and UAV photogrammetry is beneficial for the accurate estimation of tree attributes in coniferous forests.
Numéro de notice : A2022-071
Affiliation des auteurs : non IGN
Thématique : FORET/IMAGERIE
Nature : Article
DOI : 10.1016/j.jag.2021.102658
En ligne : https://doi.org/10.1016/j.jag.2021.102658
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99423
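The integration the abstract describes can be sketched as a nearest-neighbour height substitution (UAV treetop height replacing the underestimated TLS height) followed by an allometric stem-volume formula. A minimal Python sketch: the matching radius, the generic form v = a · DBH^b · h^c, and its coefficients are illustrative assumptions, not the paper's actual models.

```python
import math

def match_uav_height(tls_tree, uav_treetops, max_dist=2.0):
    """Replace the (often underestimated) TLS tree height with the nearest
    UAV-photogrammetry treetop height, if one lies within max_dist metres.
    tls_tree: dict with 'x', 'y', 'height'; uav_treetops: list of such dicts."""
    best, best_d = None, max_dist
    for top in uav_treetops:
        d = math.hypot(top['x'] - tls_tree['x'], top['y'] - tls_tree['y'])
        if d <= best_d:
            best, best_d = top, d
    return best['height'] if best else tls_tree['height']

def stem_volume(dbh_cm, height_m, a=6.0e-5, b=1.8, c=1.1):
    """Generic allometric stem volume v = a * DBH^b * h^c (m^3).
    The coefficients here are placeholders, not those used in the paper."""
    return a * dbh_cm**b * height_m**c

tls_tree = {'x': 10.0, 'y': 5.0, 'height': 18.0}   # TLS underestimates height
uav_tops = [{'x': 10.4, 'y': 5.3, 'height': 26.5}]
h = match_uav_height(tls_tree, uav_tops)
print(h)                                 # 26.5: UAV height replaces the TLS one
print(round(stem_volume(40.0, h), 3))    # volume from DBH and corrected height
```

Because volume grows with height, correcting the height bias propagates directly into the stem-volume estimate, which is why the paper's volume RMSE improves alongside the height RMSE.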
in International journal of applied Earth observation and geoinformation > vol 106 (February 2022) . - n° 102658 [article]
3D modeling of urban area based on oblique UAS images - An end-to-end pipeline / Valeria-Ersilia Oniga in Remote sensing, vol 14 n° 2 (January-2 2022)
[article]
Titre : 3D modeling of urban area based on oblique UAS images - An end-to-end pipeline
Type de document : Article/Communication
Auteurs : Valeria-Ersilia Oniga, Auteur ; Ana-Ioana Breaban, Auteur ; Norbert Pfeifer, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 422
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage automatique
[Termes IGN] Bâti-3D
[Termes IGN] CityGML
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] données lidar
[Termes IGN] image aérienne oblique
[Termes IGN] image captée par drone
[Termes IGN] indice de végétation
[Termes IGN] lasergrammétrie
[Termes IGN] modèle numérique de surface
[Termes IGN] modélisation 3D
[Termes IGN] point d'appui
[Termes IGN] Roumanie
[Termes IGN] segmentation
[Termes IGN] semis de points
[Termes IGN] zone urbaine
Résumé : (auteur) 3D modelling of urban areas is an attractive and active research topic, as 3D digital models of cities are becoming increasingly common for urban management as a consequence of the constantly growing number of people living in cities. Viewed as a digital representation of the Earth’s surface, an urban area modeled in 3D includes objects such as buildings, trees, vegetation and other anthropogenic structures, highlighting the buildings as the most prominent category. A city’s 3D model can be created based on different data sources, especially LiDAR or photogrammetric point clouds. This paper’s aim is to provide an end-to-end pipeline for 3D building modeling based on oblique UAS images only, the result being a parametrized 3D model with the Open Geospatial Consortium (OGC) CityGML standard, Level of Detail 2 (LOD2). For this purpose, a flight over an urban area of about 20.6 ha has been taken with a low-cost UAS, i.e., a DJI Phantom 4 Pro Professional (P4P), at 100 m height. The resulting UAS point cloud with the best scenario, i.e., 45 Ground Control Points (GCP), has been processed as follows: filtering to extract the ground points using two algorithms, CSF and terrain-mark; classification, using two methods, based on attributes only and a random forest machine learning algorithm; segmentation using local homogeneity implemented into Opals software; plane creation based on a region-growing algorithm; and plane editing and 3D model reconstruction based on piece-wise intersection of planar faces. The classification performed with ~35% training data and 31 attributes showed that the Visible-band difference vegetation index (VDVI) is a key attribute and 77% of the data was classified using only five attributes. The global accuracy for each modeled building through the workflow proposed in this study was around 0.15 m, so it can be concluded that the proposed pipeline is reliable.
Numéro de notice : A2022-101
Affiliation des auteurs : non IGN
Thématique : GEOMATIQUE/IMAGERIE
Nature : Article
DOI : 10.3390/rs14020422
Date de publication en ligne : 17/01/2022
En ligne : https://doi.org/10.3390/rs14020422
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99566
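The abstract singles out the Visible-band Difference Vegetation Index (VDVI) as a key classification attribute. A minimal numpy sketch of the standard formula VDVI = (2G − R − B) / (2G + R + B); the toy band values below are made up for illustration.

```python
import numpy as np

def vdvi(r, g, b, eps=1e-9):
    """Visible-band Difference Vegetation Index:
    VDVI = (2G - R - B) / (2G + R + B), valued in [-1, 1].
    Needs only the visible RGB bands, hence its use with UAS cameras."""
    r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
    return (2 * g - r - b) / (2 * g + r + b + eps)

# Toy per-point band values: vegetation is green-dominant, bare ground is not.
r = np.array([60.0, 120.0])
g = np.array([140.0, 110.0])
b = np.array([50.0, 100.0])
print(vdvi(r, g, b))   # higher index for the green-dominant (vegetated) point
```

Because it needs no near-infrared band, VDVI can be computed for every point of an RGB photogrammetric cloud, which is what makes it usable as a per-point classification attribute here.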
in Remote sensing > vol 14 n° 2 (January-2 2022) . - n° 422 [article]
Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi / Yafei Jing in Remote sensing, vol 14 n° 2 (January-2 2022)
[article]
Titre : Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi
Type de document : Article/Communication
Auteurs : Yafei Jing, Auteur ; Yuhuan Ren, Auteur ; Yalan Liu, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 382
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications de télédétection
[Termes IGN] apprentissage profond
[Termes IGN] détection d'objet
[Termes IGN] détection de cible
[Termes IGN] détection du bâti
[Termes IGN] dommage matériel
[Termes IGN] extraction automatique
[Termes IGN] image captée par drone
[Termes IGN] orthoimage
[Termes IGN] séisme
[Termes IGN] Yunnan (Chine)
Résumé : (auteur) Efficiently and automatically acquiring information on earthquake damage through remote sensing has posed great challenges because the classical methods of detecting houses damaged by destructive earthquakes are often both time consuming and low in accuracy. A series of deep-learning-based techniques have been developed and recent studies have demonstrated their high intelligence for automatic target extraction for natural and remote sensing images. For the detection of small artificial targets, current studies show that You Only Look Once (YOLO) has a good performance in aerial and Unmanned Aerial Vehicle (UAV) images. However, less work has been conducted on the extraction of damaged houses. In this study, we propose a YOLOv5s-ViT-BiFPN-based neural network for the detection of rural houses. Specifically, to enhance the feature information of damaged houses from the global information of the feature map, we introduce the Vision Transformer into the feature extraction network. Furthermore, regarding the scale differences for damaged houses in UAV images due to the changes in flying height, we apply the Bi-Directional Feature Pyramid Network (BiFPN) for multi-scale feature fusion to aggregate features with different resolutions and test the model. We took the 2021 Yangbi earthquake with a surface wave magnitude (Ms) of 6.4 in Yunnan, China, as an example; the results show that the proposed model presents a better performance, with the average precision (AP) being increased by 9.31% and 1.23% compared to YOLOv3 and YOLOv5s, respectively, and a detection speed of 80 FPS, which is 2.96 times faster than YOLOv3. In addition, the transferability test for five other areas showed that the average accuracy was 91.23% and the total processing time was 4 min, while 100 min were needed for professional visual interpreters.
The experimental results demonstrate that the YOLOv5s-ViT-BiFPN model can automatically detect damaged rural houses due to destructive earthquakes in UAV images with a good performance in terms of accuracy and timeliness, as well as being robust and transferable.
Numéro de notice : A2022-104
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Article
DOI : 10.3390/rs14020382
Date de publication en ligne : 14/01/2022
En ligne : https://doi.org/10.3390/rs14020382
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99577
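The BiFPN named in the abstract aggregates multi-scale features with a fast normalized weighted fusion, O = Σᵢ wᵢ Iᵢ / (Σᵢ wᵢ + ε), where the learnable weights are kept non-negative (ReLU). A numpy sketch of that single fusion step, assuming the input maps have already been resampled to a common resolution; it is a generic illustration, not the paper's implementation.

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion of same-shape feature maps:
    O = sum_i (w_i * I_i) / (sum_i w_i + eps), with w_i clipped to >= 0."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)   # ReLU on weights
    feats = np.stack([np.asarray(f, dtype=float) for f in features])
    return np.tensordot(w, feats, axes=1) / (w.sum() + eps)

f1 = np.ones((4, 4))        # e.g. an upsampled coarse feature map
f2 = np.full((4, 4), 3.0)   # a same-resolution lateral feature map
out = fast_normalized_fusion([f1, f2], weights=[1.0, 1.0])
print(out[0, 0])            # ~2.0: equal weights average the two maps
```

The normalization keeps the fused map on the same scale as its inputs regardless of how many levels feed it, which is what lets the network handle the flying-height scale differences the abstract mentions.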
in Remote sensing > vol 14 n° 2 (January-2 2022) . - n° 382[article]Classification of mediterranean shrub species from UAV point clouds / Juan Pedro Carbonell-Rivera in Remote sensing, vol 14 n° 1 (January-1 2022)
[article]
Titre : Classification of mediterranean shrub species from UAV point clouds
Type de document : Article/Communication
Auteurs : Juan Pedro Carbonell-Rivera, Auteur ; Jesus Torralba, Auteur ; Javier Estornell, Auteur ; et al., Auteur
Année de publication : 2022
Article en page(s) : n° 199
Note générale : bibliographie
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage automatique
[Termes IGN] arbuste
[Termes IGN] classification par forêts d'arbres décisionnels
[Termes IGN] classification par Perceptron multicouche
[Termes IGN] Espagne
[Termes IGN] Extreme Gradient Machine
[Termes IGN] forêt méditerranéenne
[Termes IGN] image captée par drone
[Termes IGN] incendie de forêt
[Termes IGN] indice de végétation
[Termes IGN] modèle de simulation
[Termes IGN] modèle numérique de terrain
[Termes IGN] parc naturel
[Termes IGN] photogrammétrie aérienne
[Termes IGN] semis de points
Résumé : (auteur) Modelling fire behaviour in forest fires is based on meteorological, topographical, and vegetation data, including species’ type. To accurately parameterise these models, an inventory of the area of analysis with the maximum spatial and temporal resolution is required. This study investigated the use of UAV-based digital aerial photogrammetry (UAV-DAP) point clouds to classify tree and shrub species in Mediterranean forests, and this information is key for the correct generation of wildfire models. In July 2020, two test sites located in the Natural Park of Sierra Calderona (eastern Spain) were analysed, registering 1036 vegetation individuals as reference data, corresponding to 11 shrub and one tree species. Meanwhile, photogrammetric flights were carried out over the test sites, using a UAV DJI Inspire 2 equipped with a Micasense RedEdge multispectral camera. Geometrical, spectral, and neighbour-based features were obtained from the resulting point cloud generated. Using these features, points belonging to tree and shrub species were classified using several machine learning methods, i.e., Decision Trees, Extra Trees, Gradient Boosting, Random Forest, and MultiLayer Perceptron. The best results were obtained using Gradient Boosting, with a mean cross-validation accuracy of 81.7% and 91.5% for test sites 1 and 2, respectively. Once the best classifier was selected, classified points were clustered based on their geometry and tested with evaluation data, and overall accuracies of 81.9% and 96.4% were obtained for test sites 1 and 2, respectively. Results showed that the use of UAV-DAP allows the classification of Mediterranean tree and shrub species. This technique opens a wide range of possibilities, including the identification of species as a first step for further extraction of structure and fuel variables as input for wildfire behaviour models.
Numéro de notice : A2022-057
Affiliation des auteurs : non IGN
Thématique : FORET/IMAGERIE
Nature : Article
DOI : 10.3390/rs14010199
En ligne : https://doi.org/10.3390/rs14010199
Format de la ressource électronique : URL article
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=99462
in Remote sensing > vol 14 n° 1 (January-1 2022) . - n° 199 [article]
Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)
Titre : Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery
Type de document : Thèse/HDR
Auteurs : Calimanut-Ionut Cira, Auteur
Editeur : Madrid [Espagne] : Universidad politécnica de Madrid
Année de publication : 2022
Importance : 227 p.
Format : 21 x 30 cm
Note générale : bibliographie
Thèse de Doctorat en Topographie, Géodésie et cartographie, Universidad politécnica de Madrid
Langues : Anglais (eng)
Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] analyse d'image orientée objet
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] extraction du réseau routier
[Termes IGN] image aérienne
[Termes IGN] orthoimage
[Termes IGN] réseau antagoniste génératif
[Termes IGN] réseau neuronal artificiel
[Termes IGN] route
[Termes IGN] segmentation sémantique
Index. décimale : THESE Thèses et HDR
Résumé : (auteur) Remote sensing imagery combined with deep learning strategies is often regarded as an ideal solution for interpreting scenes and monitoring infrastructures with remarkable performance levels. Remote sensing experts have been actively using deep neural networks to solve object extraction tasks in high-resolution aerial imagery by means of supervised operations. However, the extraction operation is imperfect, due to the nature of remotely sensed images (noise, obstructions, etc.), the limitations of sensing resolution, or the occlusions often present in the scenes. The road network plays an important part in transportation and, nowadays, one of the main related challenges is keeping the existing cartographic support up to date. This task can be considered very challenging due to the complex nature of the geospatial object (continuous, with irregular geometry, and significant differences in width). We also need to take into account that secondary roads represent the largest part of the road transport network, but due to the absence of clearly defined edges and the different spectral signatures of the materials used for pavement, monitoring and mapping them represents a great effort for public administration, and their extraction is often omitted altogether. We believe that recent advancements in machine vision can enable a successful extraction of the road structures from high-resolution, remotely sensed imagery and a greater automation of the road mapping operation. In this PhD thesis, we leverage recent computer vision advances and propose a deep learning-based end-to-end solution, capable of efficiently extracting the surface area of roads at a large scale.
The novel approach is based on a disjoint execution of three different image processing operations (recognition, semantic segmentation, and post-processing with conditional generative learning) within a common framework. We focused on improving the state-of-the-art results for each of the mentioned components, before incorporating the resulting models into the proposed solution architecture. For the recognition operation, we proposed two framework candidates based on convolutional neural networks to classify roads in openly available aerial orthoimages divided into tiles of 256×256 pixels, with a spatial resolution of 0.5 m. The frameworks are based on ensemble learning and transfer learning and combine weak classifiers to leverage the strengths of different state-of-the-art models that we heavily modified for computational efficiency. We evaluated their performance on unseen test data and compared the results with those obtained by the state-of-the-art convolutional neural networks trained for the same task, observing improvements in performance metrics of 2-3%. Secondly, we implemented hybrid semantic segmentation models (where the default backbones are replaced by neural networks specialised in image segmentation) and trained them with high-resolution remote sensing imagery and their corresponding ground-truth masks. Our models achieved mean increases in performance metrics of 2.7-3.5%, when compared to the original state-of-the-art semantic segmentation architectures trained from scratch for the same task. The best-performing model was integrated on a web platform that handles the evaluation of large areas, the association of the semantic predictions with geographical coordinates, the conversion of the tiles’ format, and the generation of GeoTIFF results (compatible with geospatial databases).
Thirdly, the road surface area extraction task is generally carried out via semantic segmentation over remotely sensed imagery; however, this supervised learning task can be considered very costly because it requires remote sensing images labelled at pixel level and the results are not always satisfactory (presence of discontinuities, overlooked connection points, or isolated road segments). We consider that unsupervised learning (not requiring labelled data) can be employed for post-processing the geometries of geospatial objects extracted via semantic segmentation. For this reason, we also approached the post-processing of the road surface areas obtained with the best performing segmentation model to improve the initial segmentation predictions. Along this line, we proposed two post-processing operations based on conditional generative learning for deep inpainting and image-to-image translation operations and trained the networks to learn the distribution of the road network present in official cartography, using a novel dataset covering representative areas of Spain. The first proposed conditional Generative Adversarial Network (cGAN) model was trained for the deep inpainting operation and obtained improvements in performance metrics of up to 1.3%. The second cGAN model, trained for image-to-image translation, is based on a popular model heavily modified for computational efficiency (a 92.4% decrease in the number of parameters in the generator network and a 61.3% decrease in the discriminator network) and achieved a maximum increase of 11.6% in performance metrics. We also conducted a qualitative comparison to visually assess the effectiveness of the generative operations and observed great improvements with respect to the initial semantic segmentation predictions.
Lastly, we proposed an end-to-end processing strategy that combines image classification, semantic segmentation, and post-processing operations to extract the road surface area from high-resolution aerial orthophotography. The training of the model components was carried out on a large-scale dataset containing more than 537,500 tiles, covering approximately 20,800 km2 of the Spanish territory, manually tagged at pixel level. The consecutive execution of the resulting deep learning models delivered higher quality results when compared to state-of-the-art implementations trained for the same task. The versatility and flexibility of the solution given by the disjoint execution of the three separate sub-operations proved its effectiveness and economic efficiency and enables the integration of a web application that alleviates the manipulation of geospatial data, while allowing for an easy integration of future models and algorithms. In summary, applying the models proposed in this PhD thesis translates to operations that check whether the latest aerial orthoimages contain the studied continuous geospatial element, obtain an approximation of its surface area using supervised learning, and improve the initial segmentation results with post-processing methods based on conditional generative learning. The results obtained with the proposed end-to-end solution improve the state-of-the-art in the field of road extraction with deep learning techniques and prove the appropriateness of applying the proposed extraction workflow for a more robust and more efficient extraction operation of the road transport network.
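The pipeline above consumes orthoimages as 256×256-pixel tiles. A minimal numpy sketch of such a tiling; dropping partial edge tiles is a simplifying assumption here, and the thesis's own tiling scheme may differ.

```python
import numpy as np

def tile_image(img, tile=256):
    """Split an H x W x C orthoimage into non-overlapping tile x tile patches.
    Edge remainders that do not fill a full tile are dropped in this sketch."""
    h, w = img.shape[:2]
    tiles = [img[r:r + tile, c:c + tile]
             for r in range(0, h - tile + 1, tile)
             for c in range(0, w - tile + 1, tile)]
    return (np.stack(tiles) if tiles
            else np.empty((0, tile, tile) + img.shape[2:], dtype=img.dtype))

ortho = np.zeros((600, 520, 3), dtype=np.uint8)   # toy RGB orthoimage
patches = tile_image(ortho)
print(patches.shape)    # (4, 256, 256, 3): 2 rows x 2 cols of full tiles
```

Each patch can then be fed to the recognition and segmentation models independently, and the per-tile predictions reassembled using the tile's row/column offset to recover geographical coordinates.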
We strongly believe that the processing strategy can be applied to enhance other similar extraction tasks of continuous geospatial elements (such as the mapping of riverbeds or railroads), or serve as a base for developing additional extraction workflows of geospatial objects from remote sensing images.
Note de contenu :
1- Introduction
2- Methodology
3- Theoretical framework
4- Literature review
5- Road recognition: A framework based on ensembles of convolutional neural networks and transfer learning to recognise road elements
6- Road segmentation: An approach based on hybrid semantic segmentation models to extract the surface area of road elements from aerial orthoimagery
7- Post-processing of semantic segmentation predictions I: A conditional generative adversarial network to improve the extraction of road surface areas via deep inpainting operations
8- Post-processing of semantic segmentation predictions II: A lightweight conditional generative adversarial network to improve the extraction of road surface areas via image-to-image translation
9- An end-to-end road extraction solution based on recognition, segmentation, and post-processing operations for a large-scale mapping of the road transport network from aerial orthophotography
10- Conclusions
Numéro de notice : 24069
Affiliation des auteurs : non IGN
Thématique : IMAGERIE
Nature : Thèse étrangère
Note de thèse : Thèse de Doctorat : Topographie, Géodésie et cartographie : Universidad politécnica de Madrid : 2022
DOI : 10.20868/UPM.thesis.70152
En ligne : https://doi.org/10.20868/UPM.thesis.70152
Format de la ressource électronique : URL
Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=102113
Detection and biomass estimation of phaeocystis globosa blooms off Southern China from UAV-based hyperspectral measurements / Xue Li in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
Detection of windthrown tree stems on UAV-orthomosaics using U-Net convolutional networks / Stefan Reder in Remote sensing, vol 14 n° 1 (January-1 2022)
Development of object detectors for satellite images by deep learning / Alissa Kouraeva (2022)
Développement d’outils et de méthodes permettant l’acquisition, le traitement et la diffusion de données issues de levés par drone / Guillaume Feuillatre (2022)
Évolution rétrospective et prospective d’un massif dunaire par imagerie multispectrale et LiDAR / Iris Jeuffrard (2022)
FLAIR: French Land cover from Aerial ImageRy - Challenge FLAIR #1: semantic segmentation and domain adaptation / Anatol Garioud (2022)
Interactive semantic segmentation of aerial images with deep neural networks / Gaston Lenczner (2022)
Latent heat flux variability and response to drought stress of black poplar: A multi-platform multi-sensor remote and proximal sensing approach to relieve the data scarcity bottleneck / Flavia Tauro in Remote sensing of environment, vol 268 (January 2022)