Date: December 2022
Published: 01/12/2022
Contents
Semantic segmentation of bridge components and road infrastructure from mobile LiDAR data / Yi-Chun Lin in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 6 (December 2022)
[article]
Title: Semantic segmentation of bridge components and road infrastructure from mobile LiDAR data
Document type: Article/Communication
Authors: Yi-Chun Lin, Author; Ayman Habib, Author
Publication year: 2022
Pagination: no. 100023
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] motorway
[IGN terms] GNSS/INS integration
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] mobile lidar
[IGN terms] bridge
[IGN terms] graph neural network
[IGN terms] road network
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract (author): Emerging mobile LiDAR mapping systems exhibit great potential as an alternative for mapping urban environments. Such systems can acquire high-quality, dense point clouds that capture detailed information over an area of interest through efficient field surveys. However, automatically recognizing and semantically segmenting different components from the point clouds with efficiency and high accuracy remains a challenge. Towards this end, this study proposes a semantic segmentation framework to simultaneously classify bridge components and road infrastructure using mobile LiDAR point clouds, with the following contributions: 1) a deep learning approach exploiting graph convolutions is adopted for point cloud semantic segmentation; 2) cross-labeling and transfer learning techniques are developed to reduce the need for manual annotation; and 3) geometric quality control strategies are proposed to refine the semantic segmentation results. The proposed framework is evaluated using data from two mobile mapping systems along an interstate highway with 27 highway bridges. With the help of the proposed cross-labeling and transfer learning strategies, the deep learning model achieves an overall accuracy of 84% using limited training data. Moreover, the effectiveness of the proposed framework is verified through tests covering approximately 42 miles along the interstate highway, where substantial improvement after quality control can be observed.
Record number: A2022-814
Author affiliation: non-IGN
Theme: IMAGERIE/INFORMATIQUE
Nature: Article
DOI: 10.1016/j.ophoto.2022.100023
Online publication date: 24/10/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100023
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101975
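The abstract above describes a graph-convolution approach to point cloud semantic segmentation. As a rough illustration only (not the authors' network), a single EdgeConv-style layer over a k-nearest-neighbour graph can be sketched in NumPy; the neighbourhood size, feature width, and random weights are arbitrary assumptions:

```python
import numpy as np

def knn_graph(points, k):
    """Indices of the k nearest neighbours of each point (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def graph_conv(features, neighbors, weight):
    """One EdgeConv-style layer: ReLU on projected (neighbour - centre)
    differences, then max-pooling over the k neighbours."""
    diff = features[neighbors] - features[:, None, :]   # (N, k, C)
    messages = np.maximum(diff @ weight, 0.0)           # (N, k, C_out)
    return messages.max(axis=1)                         # (N, C_out)

rng = np.random.default_rng(0)
pts = rng.random((32, 3))                               # toy point cloud
w = rng.standard_normal((3, 8))                         # untrained weights
feats = graph_conv(pts, knn_graph(pts, k=4), w)
print(feats.shape)  # (32, 8)
```

A trained network would stack several such layers and learn the weights; this sketch only shows the message-passing step on raw 3D coordinates.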
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 6 (December 2022). - no. 100023 [article]

Assessment of camera focal length influence on canopy reconstruction quality / Martin Denter in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 6 (December 2022)
[article]
Title: Assessment of camera focal length influence on canopy reconstruction quality
Document type: Article/Communication
Authors: Martin Denter, Author; Julian Frey, Author; Teja Kattenborn, Author; et al.
Publication year: 2022
Pagination: no. 100025
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Photogrammetry
[IGN terms] Abies alba
[IGN terms] Acer pseudoplatanus
[IGN terms] Germany
[IGN terms] canopy
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] Fagus sylvatica
[IGN terms] UAV-acquired imagery
[IGN terms] Larix decidua
[IGN terms] focal length
[IGN terms] canopy surface model
[IGN terms] forest plot
[IGN terms] Picea abies
[IGN terms] image reconstruction
[IGN terms] point cloud
[IGN terms] structure-from-motion
Abstract (author): Unoccupied aerial vehicles (UAV) with RGB cameras are affordable and versatile devices for generating a range of remote sensing products for forest inventory tasks, such as high-resolution orthomosaics and canopy height models. The latter may serve purposes including tree species identification, forest damage assessment, and canopy height or timber stock assessment. Besides flight and image acquisition parameters such as image overlap, flight height, and weather conditions, the focal length, which determines the opening angle of the camera lens, influences reconstruction quality. Despite its importance, the effect of focal length on the quality of 3D reconstructions of forests has received little attention in the literature. Shorter focal lengths result in more accurate distance estimates in the nadir direction, since small angular errors lead to large positional errors in narrow opening angles. In this study, 3D reconstructions from four UAV acquisitions with different focal lengths (21, 35, 50, and 85 mm) over a 1 ha mature mixed forest plot were compared to reference point clouds derived from high-quality terrestrial laser scanning (TLS). Shorter focal lengths (21 and 35 mm) agreed better with the TLS scans and thus gave better reconstruction quality; at 50 mm, quality losses were observed, and at 85 mm, quality was considerably worse. F1-scores calculated from a voxel representation of the point clouds amounted to 0.254 with 35 mm and 0.201 with 85 mm; precision was 0.466 with 21 mm and 0.302 with 85 mm. We thus recommend a focal length no longer than 35 mm for UAV Structure from Motion (SfM) data acquisition in forest management practice.
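The F1-scores above are computed from a voxel representation of the point clouds. A minimal sketch of that kind of voxel-occupancy comparison (not the authors' evaluation code; the helper names and the 0.5 m voxel size are assumptions):

```python
import numpy as np

def voxel_set(points, size):
    """Occupied voxel indices of a point cloud at a given voxel size."""
    return set(map(tuple, np.floor(points / size).astype(int)))

def voxel_f1(reconstructed, reference, size=0.5):
    """Precision, recall, and F1 of occupied voxels against a reference cloud."""
    rec, ref = voxel_set(reconstructed, size), voxel_set(reference, size)
    tp = len(rec & ref)                          # voxels occupied in both
    precision = tp / len(rec) if rec else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

ref = np.array([[0.1, 0.1, 0.1], [1.2, 0.2, 0.3], [2.4, 1.1, 0.0]])
print(voxel_f1(ref, ref))  # (1.0, 1.0, 1.0)
```

Voxelizing both clouds before comparison makes the score insensitive to point density, which differs strongly between SfM and TLS data.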
Record number: A2022-870
Author affiliation: non-IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.ophoto.2022.100025
Online publication date: 09/11/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100025
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102164
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 6 (December 2022). - no. 100025 [article]

Instance segmentation of standing dead trees in dense forest from aerial imagery using deep learning / Aboubakar Sani-Mohammed in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 6 (December 2022)
[article]
Title: Instance segmentation of standing dead trees in dense forest from aerial imagery using deep learning
Document type: Article/Communication
Authors: Aboubakar Sani-Mohammed, Author; Wei Yao, Author; Marco Heurich, Author
Publication year: 2022
Pagination: no. 100024
General note: bibliography
Language: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] dead tree
[IGN terms] Bavaria (Germany)
[IGN terms] standing timber
[IGN terms] convolutional neural network classification
[IGN terms] automatic detection
[IGN terms] sustainable forest management
[IGN terms] high-resolution image
[IGN terms] aerial image
[IGN terms] colour infrared image
[IGN terms] mixed stand
[IGN terms] carbon sink
[IGN terms] semantic segmentation
Abstract (author): Mapping standing dead trees, especially in natural forests, is very important for evaluating forest health, carbon storage capacity, and biodiversity conservation. Natural forests cover large areas, which makes classical field surveying challenging, time-consuming, labor-intensive, and unsustainable; effective forest management therefore needs an automated, cost-effective approach. With the advent of machine learning, deep learning has proven able to achieve excellent results. This study presents an adjusted Mask R-CNN deep learning approach for detecting and segmenting standing dead trees in a dense mixed forest from CIR aerial imagery using a limited (195 images) training dataset. First, transfer learning is combined with image augmentation to compensate for the limited training data. Then, hyperparameters are strategically selected to suit the model architecture and the type of data (dead trees in images). Finally, to assess the generalization capability of the model, it is comprehensively evaluated on a test dataset the network had never seen. The model achieves promising results, reaching a mean average precision, average recall, and average F1-score of 0.85, 0.88, and 0.87 respectively, despite the relatively low resolution (20 cm) of the dataset. It could therefore be used to automate standing dead tree detection and segmentation for enhanced forest management, which is equally significant for biodiversity conservation and forest carbon storage estimation.
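The abstract mentions image augmentation to stretch a limited (195-image) training set. A minimal sketch of paired image/mask augmentation of the kind typically used when training instance segmentation models such as Mask R-CNN (not the authors' pipeline; the function name and transform choices are assumptions):

```python
import numpy as np

def augment(image, mask, rng):
    """Apply an identical random 90-degree rotation and optional horizontal
    flip to a CIR image and its instance mask, preserving their alignment."""
    k = int(rng.integers(4))                    # rotate by k * 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                      # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    return image.copy(), mask.copy()

rng = np.random.default_rng(42)
img = np.arange(48).reshape(4, 4, 3)            # toy 4x4 "CIR" image
msk = np.arange(16).reshape(4, 4) % 3           # toy instance mask
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape, aug_msk.shape)  # (4, 4, 3) (4, 4)
```

Applying the same geometric transform to image and mask keeps instance labels aligned with the pixels they annotate, which is essential for segmentation training data.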
Record number: A2022-871
Author affiliation: non-IGN
Theme: FORET/IMAGERIE
Nature: Article
DOI: 10.1016/j.ophoto.2022.100024
Online publication date: 10/11/2022
Online: https://doi.org/10.1016/j.ophoto.2022.100024
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102165
in ISPRS Open Journal of Photogrammetry and Remote Sensing > vol 6 (December 2022). - no. 100024 [article]