Descriptor
Documents available in this category (959)
GisGCN: a visual graph-based framework to match geographical areas through time / Margarita Khokhlova in ISPRS International journal of geo-information, vol 11 n° 2 (February 2022)
[article]
Title: GisGCN: a visual graph-based framework to match geographical areas through time
Document type: Article/Communication
Authors: Margarita Khokhlova, Author; Nathalie Abadie, Author; Valérie Gouet-Brunet, Author; Liming Chen, Author
Publication year: 2022
Projects: Alegoria / Gouet-Brunet, Valérie
Article on page(s): n° 97
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] geometric attribute
[IGN terms] semantic attribute
[IGN terms] convolutional neural network classification
[IGN terms] labeled training data
[IGN terms] geographic entity
[IGN terms] aerial image
[IGN terms] semantic network
Abstract: (author) Historical visual sources are particularly useful for reconstructing the successive past states of a territory and for analysing its evolution. However, finding visual sources covering a given area within a large mass of archives can be very difficult if they are poorly documented. For aerial photographs, this task is most often carried out by relying solely on the visual content of the images. Convolutional neural networks are capable of capturing the visual cues of images and matching them to each other, given a sufficient amount of training data. However, over time and across seasons, natural and man-made landscapes evolve, making historical image-based retrieval a challenging task. We approach this cross-time aerial indexing and retrieval problem from a novel point of view: we use geometrical and topological properties of the geographic entities of the searched zone, encoded as graph representations, which are more robust to appearance changes than purely image-based ones. Geographic entities in vertical aerial images are treated as nodes in a graph, linked to each other by edges representing their spatial relationships. To build such graphs, we propose to use instances from topographic vector databases and state-of-the-art spatial analysis methods. We demonstrate that these geospatial graphs can be successfully matched across time by means of a learned graph embedding.
Record number: A2022-156
Author affiliation: UGE-LASTIG+Ext (2020- )
Theme: IMAGERIE
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi11020097
Online publication date: 29/01/2022
Online: https://doi.org/10.3390/ijgi11020097
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100316
in ISPRS International journal of geo-information > vol 11 n° 2 (February 2022) . - n° 97
[article]
Integrating terrestrial laser scanning and unmanned aerial vehicle photogrammetry to estimate individual tree attributes in managed coniferous forests in Japan / Katsuto Shimizu in International journal of applied Earth observation and geoinformation, vol 106 (February 2022)
[article]
Title: Integrating terrestrial laser scanning and unmanned aerial vehicle photogrammetry to estimate individual tree attributes in managed coniferous forests in Japan
Document type: Article/Communication
Authors: Katsuto Shimizu, Author; Tomohiro Nishizono, Author; Fumiaki Kitahara, Author; Keiko Fukumoto, Author; Hideki Saito, Author
Publication year: 2022
Article on page(s): n° 102658
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] lidar data
[IGN terms] 3D localized data
[IGN terms] statistical estimation
[IGN terms] tree height
[IGN terms] UAV-captured image
[IGN terms] Japan
[IGN terms] Pinophyta
[IGN terms] wood volume
Abstract: (author) The accurate estimation of tree attributes is essential for sustainable forest management. Terrestrial Laser Scanning (TLS) is a viable remote sensing technology for estimating under-canopy structure. However, TLS measurements generally underestimate the height of taller trees, which leads to the underestimation of other tree attributes (e.g., stem volume). Integrating information derived from TLS and Unmanned Aerial Vehicle (UAV) photogrammetry could potentially improve tree height estimation. This study investigated the applicability of integrating TLS and UAV photogrammetry to estimate individual tree attributes in managed coniferous forests of Japan. Diameter at breast height (DBH), tree height, and stem volume were estimated by (1) TLS data only, (2) integrating TLS and UAV data with TLS tree locations, and (3) integrating TLS and UAV data with treetop detections on the tree canopy. The TLS-only approach achieved high accuracy for DBH estimation, with a root mean squared error (RMSE) of 2.36 cm (RMSE% 5.6%); however, tree height was greatly underestimated, with an RMSE of 8.87 m (28.9%) and a bias of −8.39 m. Integrating TLS and UAV photogrammetric data improved tree height estimation accuracy for both the TLS tree location approach (RMSE of 1.89 m and a bias of −0.46 m) and the treetop detection approach (RMSE of 1.77 m and a bias of 0.36 m). Integrating TLS and UAV photogrammetric data also improved the accuracy of the stem volume estimations, with RMSEs of 0.21 m3 (10.8%) and 0.21 m3 (10.5%) for the TLS tree location and treetop detection approaches, respectively. Although the heights of suppressed trees tended to be overestimated by TLS and UAV photogrammetric data integration, good performance was obtained for dominant trees. The results of this study indicate that the integration of TLS and UAV photogrammetry is beneficial for the accurate estimation of tree attributes in coniferous forests.
Record number: A2022-071
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.jag.2021.102658
Online: https://doi.org/10.1016/j.jag.2021.102658
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99423
in International journal of applied Earth observation and geoinformation > vol 106 (February 2022) . - n° 102658
[article]
3D modeling of urban area based on oblique UAS images - An end-to-end pipeline / Valeria-Ersilia Oniga in Remote sensing, vol 14 n° 2 (January-2 2022)
[article]
Title: 3D modeling of urban area based on oblique UAS images - An end-to-end pipeline
Document type: Article/Communication
Authors: Valeria-Ersilia Oniga, Author; Ana-Ioana Breaban, Author; Norbert Pfeifer, Author; et al., Author
Publication year: 2022
Article on page(s): n° 422
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] machine learning
[IGN terms] Bâti-3D
[IGN terms] CityGML
[IGN terms] random forest classification
[IGN terms] lidar data
[IGN terms] oblique aerial image
[IGN terms] UAV-captured image
[IGN terms] vegetation index
[IGN terms] laser scanning
[IGN terms] digital surface model
[IGN terms] 3D modeling
[IGN terms] ground control point
[IGN terms] Romania
[IGN terms] segmentation
[IGN terms] point cloud
[IGN terms] urban area
Abstract: (author) 3D modelling of urban areas is an attractive and active research topic: as the number of people living in cities constantly grows, 3D digital models of cities are becoming increasingly common for urban management. Viewed as a digital representation of the Earth’s surface, an urban area modeled in 3D includes objects such as buildings, trees, vegetation and other anthropogenic structures, with buildings as the most prominent category. A city’s 3D model can be created from different data sources, especially LiDAR or photogrammetric point clouds. This paper aims to provide an end-to-end pipeline for 3D building modeling based on oblique UAS images only, the result being a parametrized 3D model compliant with the Open Geospatial Consortium (OGC) CityGML standard, Level of Detail 2 (LOD2). For this purpose, a flight over an urban area of about 20.6 ha was carried out with a low-cost UAS, i.e., a DJI Phantom 4 Pro (P4P), at 100 m height. The UAS point cloud from the best scenario, i.e., 45 Ground Control Points (GCP), was processed as follows: filtering to extract the ground points using two algorithms, CSF and terrain-mark; classification, using two methods, one based on attributes only and one using a random forest machine learning algorithm; segmentation using local homogeneity implemented in the Opals software; plane creation based on a region-growing algorithm; and plane editing and 3D model reconstruction based on piece-wise intersection of planar faces. The classification performed with ~35% training data and 31 attributes showed that the Visible-band difference vegetation index (VDVI) is a key attribute, and 77% of the data was classified using only five attributes. The global accuracy for each building modeled through the proposed workflow was around 0.15 m, so it can be concluded that the proposed pipeline is reliable.
Record number: A2022-101
Author affiliation: non IGN
Theme: GEOMATIQUE/IMAGERIE
Nature: Article
DOI: 10.3390/rs14020422
Online publication date: 17/01/2022
Online: https://doi.org/10.3390/rs14020422
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99566
in Remote sensing > vol 14 n° 2 (January-2 2022) . - n° 422
[article]
Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi / Yafei Jing in Remote sensing, vol 14 n° 2 (January-2 2022)
[article]
Title: Automatic extraction of damaged houses by earthquake based on improved YOLOv5: A case study in Yangbi
Document type: Article/Communication
Authors: Yafei Jing, Author; Yuhuan Ren, Author; Yalan Liu, Author; et al., Author
Publication year: 2022
Article on page(s): n° 382
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] target detection
[IGN terms] building detection
[IGN terms] material damage
[IGN terms] automatic extraction
[IGN terms] UAV-captured image
[IGN terms] orthoimage
[IGN terms] earthquake
[IGN terms] Yunnan (China)
Abstract: (author) Efficiently and automatically acquiring information on earthquake damage through remote sensing has posed great challenges, because the classical methods of detecting houses damaged by destructive earthquakes are often both time-consuming and low in accuracy. A series of deep-learning-based techniques have been developed, and recent studies have demonstrated their strong performance for automatic target extraction in natural and remote sensing images. For the detection of small artificial targets, current studies show that You Only Look Once (YOLO) performs well on aerial and Unmanned Aerial Vehicle (UAV) images. However, less work has been conducted on the extraction of damaged houses. In this study, we propose a YOLOv5s-ViT-BiFPN-based neural network for the detection of rural houses. Specifically, to enhance the feature information of damaged houses with the global information of the feature map, we introduce the Vision Transformer into the feature extraction network. Furthermore, to handle the scale differences of damaged houses in UAV images caused by changes in flying height, we apply the Bi-Directional Feature Pyramid Network (BiFPN) for multi-scale feature fusion to aggregate features with different resolutions, and test the model. We took the 2021 Yangbi earthquake in Yunnan, China, with a surface wave magnitude (Ms) of 6.4, as an example; the results show that the proposed model performs better, with the average precision (AP) increased by 9.31% and 1.23% compared to YOLOv3 and YOLOv5s, respectively, and a detection speed of 80 FPS, which is 2.96 times faster than YOLOv3. In addition, a transferability test on five other areas showed an average accuracy of 91.23% and a total processing time of 4 min, whereas professional visual interpreters needed 100 min. The experimental results demonstrate that the YOLOv5s-ViT-BiFPN model can automatically detect rural houses damaged by destructive earthquakes in UAV images with good accuracy and timeliness, as well as being robust and transferable.
Record number: A2022-104
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
DOI: 10.3390/rs14020382
Online publication date: 14/01/2022
Online: https://doi.org/10.3390/rs14020382
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99577
in Remote sensing > vol 14 n° 2 (January-2 2022) . - n° 382
[article]
Classification of mediterranean shrub species from UAV point clouds / Juan Pedro Carbonell-Rivera in Remote sensing, vol 14 n° 1 (January-1 2022)
[article]
Title: Classification of mediterranean shrub species from UAV point clouds
Document type: Article/Communication
Authors: Juan Pedro Carbonell-Rivera, Author; Jesus Torralba, Author; Javier Estornell, Author; et al., Author
Publication year: 2022
Article on page(s): n° 199
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] machine learning
[IGN terms] shrub
[IGN terms] random forest classification
[IGN terms] multilayer perceptron classification
[IGN terms] Spain
[IGN terms] Extreme Gradient Machine
[IGN terms] Mediterranean forest
[IGN terms] UAV-captured image
[IGN terms] forest fire
[IGN terms] vegetation index
[IGN terms] simulation model
[IGN terms] digital terrain model
[IGN terms] natural park
[IGN terms] aerial photogrammetry
[IGN terms] point cloud
Abstract: (author) Modelling fire behaviour in forest fires relies on meteorological, topographical, and vegetation data, including species type. To accurately parameterise these models, an inventory of the analysed area at the maximum spatial and temporal resolution is required. This study investigated the use of UAV-based digital aerial photogrammetry (UAV-DAP) point clouds to classify tree and shrub species in Mediterranean forests; this information is key for the correct generation of wildfire models. In July 2020, two test sites located in the Natural Park of Sierra Calderona (eastern Spain) were analysed, registering 1036 vegetation individuals as reference data, corresponding to 11 shrub species and one tree species. Meanwhile, photogrammetric flights were carried out over the test sites using a UAV DJI Inspire 2 equipped with a Micasense RedEdge multispectral camera. Geometrical, spectral, and neighbour-based features were obtained from the resulting point cloud. Using these features, points belonging to tree and shrub species were classified with several machine learning methods, i.e., Decision Trees, Extra Trees, Gradient Boosting, Random Forest, and MultiLayer Perceptron. The best results were obtained using Gradient Boosting, with a mean cross-validation accuracy of 81.7% and 91.5% for test sites 1 and 2, respectively. Once the best classifier was selected, classified points were clustered based on their geometry and tested against evaluation data, yielding overall accuracies of 81.9% and 96.4% for test sites 1 and 2, respectively. The results showed that the use of UAV-DAP allows the classification of Mediterranean tree and shrub species. This technique opens a wide range of possibilities, including the identification of species as a first step for the further extraction of structure and fuel variables as input for wildfire behaviour models.
Record number: A2022-057
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.3390/rs14010199
Online: https://doi.org/10.3390/rs14010199
Electronic resource format: URL article
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99462
in Remote sensing > vol 14 n° 1 (January-1 2022) . - n° 199
[article]
Contribution to object extraction in cartography : A novel deep learning-based solution to recognise, segment and post-process the road transport network as a continuous geospatial element in high-resolution aerial orthoimagery / Calimanut-Ionut Cira (2022)
Permalink
Permalink
Detection and biomass estimation of phaeocystis globosa blooms off Southern China from UAV-based hyperspectral measurements / Xue Li in IEEE Transactions on geoscience and remote sensing, vol 60 n° 1 (January 2022)
Permalink
Permalink
Detection of windthrown tree stems on UAV-orthomosaics using U-Net convolutional networks / Stefan Reder in Remote sensing, vol 14 n° 1 (January-1 2022)
Permalink
Development of tools and methods for the acquisition, processing and dissemination of drone survey data / Guillaume Feuillatre (2022)
Permalink
Retrospective and prospective evolution of a dune massif using multispectral imagery and LiDAR / Iris Jeuffrard (2022)
Permalink
FLAIR: French Land cover from Aerial ImageRy - Challenge FLAIR #1: semantic segmentation and domain adaptation / Anatol Garioud (2022)
Permalink
Interactive semantic segmentation of aerial images with deep neural networks / Gaston Lenczner (2022)
Permalink