ISPRS Journal of Photogrammetry and Remote Sensing / International Society for Photogrammetry and Remote Sensing (1980-), vol. 193. Published: 01/11/2022
[issue or bulletin]
is an issue of ISPRS Journal of Photogrammetry and Remote Sensing / International Society for Photogrammetry and Remote Sensing (1980-)
Contents
An advanced bidirectional reflectance factor (BRF) spectral approach for estimating flavonoid content in leaves of Ginkgo plantations / Kai Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Title: An advanced bidirectional reflectance factor (BRF) spectral approach for estimating flavonoid content in leaves of Ginkgo plantations
Document type: Article/Communication
Authors: Kai Zhou; Lin Cao; Shiyun Yin; et al.
Year of publication: 2022
Pages: pp 1-16
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] spectral band
[IGN terms] correlation coefficient
[IGN terms] bidirectional reflectance distribution function (BRDF)
[IGN terms] leaf (vegetation)
[IGN terms] Ginkgo biloba
[IGN terms] high-resolution image
[IGN terms] leaf area index
[IGN terms] Jiangsu (China)
[IGN terms] vegetation reflectance
Abstract: (author) As a key phenolic pigment concentrated in the surface tissues of leaves, flavonoids (Flav) are the major bioactive ingredients in Ginkgo leaf extracts. Flav are also notable natural antioxidants and significant indicators of biotic and abiotic stress, making them critical for determining cultivation quality and enhancing Flav yield. In particular, area-based Flav (Flavarea) is related to the shortwave-blue light interaction within leaves per unit leaf area, whereas mass-based Flav (Flavmass) is useful for the quantitative assessment of Flav yield. To accurately estimate Flavarea and Flavmass content in leaves of Ginkgo plantations, in this study we developed an advanced bidirectional reflectance factor (BRF) spectra-based approach that reduces the effects of specular reflection and enhances the absorption signals of Flav (in the shortwave-blue region of the spectrum), using a suite of new spectral indices (SIs) (i.e., flavonoid index (FI), modified flavonoid index (mFI) and double difference index (DD)) calculated from data collected with leaf-clip-equipped spectrometers. The results demonstrated that most of the SIs derived from the developed BRF spectra-based approach achieved relatively high performance for Flav estimation by alleviating the adverse effects of specular reflection to different extents (CV-R2 = 0.60-0.76). Specifically, DDnir434,421, selected from the DD-type indices, performed better than the other indices (CV-R2 = 0.76 for Flavarea; CV-R2 = 0.69 for Flavmass). These findings demonstrate the marked potential of the developed BRF spectra-based approach for non-destructively estimating leaf Flav content, as well as for improving the understanding of the mechanisms of specular effects on Flav estimation in leaves of Ginkgo plantations.
Record number: A2022-744
Author affiliation: non IGN
Theme: FORET/IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.020
Online publication date: 09/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.020
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101727
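The DD-type indices described in the abstract combine reflectance at a few narrow bands against a reference band. As an illustration only, the sketch below computes a hypothetical DD-style index over a measured spectrum; the band combination is an assumed normalized form, not the paper's published DDnir434,421 formula.

```python
import numpy as np

def band_reflectance(wavelengths, reflectance, target_nm):
    """Linearly interpolate reflectance at a target wavelength (nm)."""
    return float(np.interp(target_nm, wavelengths, reflectance))

def dd_style_index(wavelengths, reflectance, b1=434.0, b2=421.0, nir=780.0):
    """Hypothetical DD-style spectral index: contrasts two shortwave-blue
    bands (where flavonoids absorb) against a NIR reference band.
    Illustrative form only; the paper's exact formula differs."""
    r1 = band_reflectance(wavelengths, reflectance, b1)
    r2 = band_reflectance(wavelengths, reflectance, b2)
    rn = band_reflectance(wavelengths, reflectance, nir)
    return (rn - r1) / (rn + r1) - (rn - r2) / (rn + r2)

# Example on a synthetic leaf-like spectrum (reflectance rising from blue to NIR)
wl = np.arange(400.0, 801.0, 1.0)
refl = np.linspace(0.05, 0.50, wl.size)
print(dd_style_index(wl, refl))
```

On this synthetic ramp the index is small and negative, since reflectance at 434 nm exceeds that at 421 nm.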
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022). - pp 1-16 [article]
Point2Roof: End-to-end 3D building roof modeling from airborne LiDAR point clouds / Li Li in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Title: Point2Roof: End-to-end 3D building roof modeling from airborne LiDAR point clouds
Document type: Article/Communication
Authors: Li Li; Nan Song; Fei Sun; et al.
Year of publication: 2022
Pages: pp 17-28
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] 3D modelling
[IGN terms] multilayer perceptron
[IGN terms] geometric primitive
[IGN terms] 3D building reconstruction
[IGN terms] point cloud
[IGN terms] roof
Abstract: (author) Three-dimensional (3D) building roof reconstruction from airborne LiDAR point clouds is an important task in photogrammetry and computer vision. To automatically reconstruct 3D building models at Level of Detail 2 (LoD-2) from airborne LiDAR point clouds, data-driven approaches usually proceed in two steps: geometric primitive extraction and roof structure inference. Because these traditional approaches are not end-to-end, errors accumulated across the stages cannot be avoided, and the final 3D roof models may not be optimal. In addition, the quality of the 3D roof models largely depends on the accuracy of the geometric primitives (planes, lines, etc.). To solve these problems, we present a deep learning-based approach, named Point2Roof, to directly reconstruct building roofs from airborne LiDAR point clouds. In our method, we start by extracting deep features for each input point using PointNet++. Then, we identify a set of candidate corner points from the input point clouds using the extracted deep features. We also regress an offset for each candidate corner point to refine its location. After that, these candidates are clustered into a set of initial vertices, whose locations we further refine to obtain the final accurate vertices. Finally, we propose a Paired Point Attention (PPA) module to predict the true model edges from an exhaustive set of candidate edges between the vertices. Unlike traditional roof modeling approaches, the proposed Point2Roof is end-to-end. Due to the lack of a building reconstruction dataset, we construct a large-scale synthetic dataset to verify the effectiveness and robustness of the proposed Point2Roof. The experimental results on this synthetic benchmark demonstrate that the proposed Point2Roof significantly outperforms traditional roof modeling approaches.
The experiments also show that the network trained on the synthetic dataset can be applied to real point clouds after fine-tuning the trained model on a small real dataset. The large-scale synthetic dataset, the small real dataset and the source code of our approach are publicly available at https://github.com/Li-Li-Whu/Point2Roof.
Record number: A2022-745
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.027
Online publication date: 10/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.027
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101728
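The candidate-corner clustering and exhaustive edge enumeration steps described in the abstract can be sketched minimally as below. This is a greedy radius-clustering stand-in under assumed parameters; the paper's learned offset regression and PPA edge scoring are not reproduced.

```python
import numpy as np
from itertools import combinations

def cluster_candidates(points, radius=0.5):
    """Greedily merge predicted corner candidates lying within `radius`
    of a seed point into one vertex (a simple stand-in for clustering
    candidates into initial roof vertices)."""
    vertices, used = [], np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        if used[i]:
            continue
        members = (np.linalg.norm(points - points[i], axis=1) < radius) & ~used
        used |= members
        vertices.append(points[members].mean(axis=0))
    return np.array(vertices)

def candidate_edges(n_vertices):
    """Exhaustive vertex pairs; in Point2Roof these would be scored by the
    Paired Point Attention (PPA) module to keep only true roof edges."""
    return list(combinations(range(n_vertices), 2))

# Two noisy corner clusters -> two vertices, one candidate edge
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 5.0, 0.0], [5.05, 5.0, 0.02]])
verts = cluster_candidates(pts)
print(len(verts), candidate_edges(len(verts)))
```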
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022). - pp 17-28 [article]
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
[article]
Title: A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds
Document type: Article/Communication
Authors: Lina Fang; Zhilong You; Guixi Shen; et al.
Year of publication: 2022
Pages: pp 115-136
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] object-oriented classification
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] feature extraction
[IGN terms] image fusion
[IGN terms] UAV image
[IGN terms] object recognition
[IGN terms] road
[IGN terms] urban scene
[IGN terms] point cloud
Abstract: (author) Urban management and survey departments have begun investigating the feasibility of acquiring data from various laser scanning systems for urban infrastructure measurements and assessments. Roadside objects such as cars, trees, traffic poles, pedestrians, bicycles and e-bicycles describe the static and dynamic urban information available for acquisition. Because of the unstructured nature of 3D point clouds, the rich targets in complex road scenes, and the varying scales of roadside objects, finely classifying these roadside objects from various point clouds is a challenging task. In this paper, we integrate two representations of roadside objects, point clouds and multi-view images, to propose a point-group-view network named PGVNet for classifying roadside objects into cars, trees, traffic poles, and small objects (pedestrians, bicycles and e-bicycles) from generalized point clouds. To utilize the topological information of the point clouds, we propose a graph attention convolution operation called AtEdgeConv to mine the relationships among local points and to extract local geometric features. In addition, we employ a hierarchical view-group-object architecture to diminish the redundant information between similar views and to obtain salient view-wise global features. To fuse the local geometric features from the point clouds and the global features from the multi-view images, we stack an attention-guided fusion network in PGVNet. In particular, we quantify and leverage the global features as an attention mask to capture the intrinsic correlation and discriminability of the local geometric features, which contributes to recognizing different roadside objects with similar shapes.
To verify the effectiveness and generalization of our methods, we conduct extensive experiments on six test datasets of different urban scenes, captured by different laser scanning systems, including mobile laser scanning (MLS) systems, unmanned aerial vehicle (UAV)-based laser scanning (ULS) systems and backpack laser scanning (BLS) systems. Experimental results and comparisons with state-of-the-art methods demonstrate that the PGVNet model is able to effectively identify various cars, trees, traffic poles and small objects from generalized point clouds, and achieves promising performance on roadside object classification, with an overall accuracy of 95.76%. Our code is released at https://github.com/flidarcode/PGVNet.
Record number: A2022-756
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.022
Online publication date: 22/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101759
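The attention-guided fusion idea in the abstract, using the global multi-view feature as an attention mask over local per-point geometric features, can be sketched with a simplified dot-product attention in NumPy. The real PGVNet layers are learned and more elaborate; this is only an illustration of the masking-and-pooling pattern.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_guided_fusion(local_feats, global_feat):
    """Score each local (per-point) geometric feature by its similarity to
    the global multi-view feature, pool the locals with the resulting
    attention mask, and concatenate with the global feature. A simplified
    sketch of the fusion idea, not PGVNet's exact learned layers."""
    scores = local_feats @ global_feat      # (N,) similarity scores
    weights = softmax(scores)               # attention mask over points
    pooled = weights @ local_feats          # (D,) attended local summary
    return np.concatenate([pooled, global_feat])

rng = np.random.default_rng(0)
fused = attention_guided_fusion(rng.normal(size=(100, 32)),
                                rng.normal(size=32))
print(fused.shape)
```

With 100 points of 32-dimensional local features and a 32-dimensional global feature, the fused vector has dimension 64.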
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022). - pp 115-136 [article]