Author details
Author: Lina Fang
Documents available written by this author (4)
A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds / Lina Fang in ISPRS Journal of photogrammetry and remote sensing, vol 193 (November 2022)
Title: A joint deep learning network of point clouds and multiple views for roadside object classification from lidar point clouds
Document type: Article/Communication
Authors: Lina Fang; Zhilong You; Guixi Shen; et al.
Year of publication: 2022
Pagination: pp 115-136
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] object-oriented classification
[IGN terms] lidar data
[IGN terms] 3D georeferenced data
[IGN terms] feature extraction
[IGN terms] image fusion
[IGN terms] UAV-captured imagery
[IGN terms] object recognition
[IGN terms] road
[IGN terms] urban scene
[IGN terms] point cloud
Abstract: (author) Urban management and survey departments have begun investigating the feasibility of acquiring data from various laser scanning systems for urban infrastructure measurements and assessments. Roadside objects such as cars, trees, traffic poles, pedestrians, bicycles and e-bicycles describe the static and dynamic urban information available for acquisition. Because of the unstructured nature of 3D point clouds, the rich targets in complex road scenes, and the varying scales of roadside objects, finely classifying these roadside objects from various point clouds is a challenging task. In this paper, we integrate two representations of roadside objects, point clouds and multiview images, to propose a point-group-view network named PGVNet for classifying roadside objects into cars, trees, traffic poles, and small objects (pedestrians, bicycles and e-bicycles) from generalized point clouds. To utilize the topological information of the point clouds, we propose a graph attention convolution operation called AtEdgeConv to mine the relationships among local points and to extract local geometric features. In addition, we employ a hierarchical view-group-object architecture to diminish the redundant information between similar views and to obtain salient view-wise global features. To fuse the local geometric features from the point clouds and the global features from the multiview images, we stack an attention-guided fusion network in PGVNet. In particular, we quantify and leverage the global features as an attention mask to capture the intrinsic correlation and discriminability of the local geometric features, which contributes to recognizing different roadside objects with similar shapes.
To verify the effectiveness and generalization of our methods, we conduct extensive experiments on six test datasets of different urban scenes, captured by different laser scanning systems, including mobile laser scanning (MLS), unmanned aerial vehicle (UAV)-based laser scanning (ULS) and backpack laser scanning (BLS) systems. Experimental results and comparisons with state-of-the-art methods demonstrate that the PGVNet model can effectively identify various cars, trees, traffic poles and small objects from generalized point clouds, and achieves promising performance on roadside object classification, with an overall accuracy of 95.76%. Our code is released at https://github.com/flidarcode/PGVNet.
Record number: A2022-756
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.08.022
Online publication date: 22/09/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.08.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101759
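The AtEdgeConv operation described in the abstract above (a graph attention convolution that weights edge features over a local point neighborhood) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature dimensions, the ReLU/softmax choices, and the random weights are illustrative assumptions.

```python
import numpy as np

def knn(points, k):
    # brute-force k nearest neighbours; row i holds the indices of the
    # k closest points to point i, excluding point i itself
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]

def at_edge_conv(feats, neighbors, w_edge, w_attn):
    # feats: (N, C) per-point features; neighbors: (N, k) index array
    # edge feature = [x_i, x_j - x_i], as in EdgeConv-style operators
    k = neighbors.shape[1]
    xi = np.repeat(feats[:, None, :], k, axis=1)       # (N, k, C)
    xj = feats[neighbors]                              # (N, k, C)
    edge = np.concatenate([xi, xj - xi], axis=-1)      # (N, k, 2C)
    msg = np.maximum(edge @ w_edge, 0.0)               # ReLU edge messages
    logits = (edge @ w_attn).squeeze(-1)               # (N, k) edge scores
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)            # softmax over neighbours
    return (attn[..., None] * msg).sum(axis=1)         # attention-weighted sum

rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 3))                         # toy point cloud
out = at_edge_conv(pts, knn(pts, 8),
                   rng.normal(size=(6, 16)),           # 2C = 6 -> 16 channels
                   rng.normal(size=(6, 1)))
print(out.shape)                                       # (32, 16)
```

Compared with plain EdgeConv, which max-pools the edge messages, the attention weights let the operator emphasize the neighbors most relevant to each point.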
in ISPRS Journal of photogrammetry and remote sensing > vol 193 (November 2022) . - pp 115-136

A graph attention network for road marking classification from mobile LiDAR point clouds / Lina Fang in International journal of applied Earth observation and geoinformation, vol 108 (April 2022)
Title: A graph attention network for road marking classification from mobile LiDAR point clouds
Document type: Article/Communication
Authors: Lina Fang; Tongtong Sun; Shuang Wang; et al.
Year of publication: 2022
Pagination: no. 102735
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] attention (machine learning)
[IGN terms] multilayer perceptron classification
[IGN terms] lidar data
[IGN terms] 3D georeferenced data
[IGN terms] node
[IGN terms] graph neural network
[IGN terms] road network
[IGN terms] point cloud
[IGN terms] road signage
Abstract: (author) The category of road marking is a crucial element in mobile laser scanning (MLS) applications such as intelligent traffic systems, high-definition maps, and location and navigation services. Due to the complexity of road scenes, the large number and variety of categories, occlusion, and uneven intensities in MLS point clouds, fine-grained road marking classification is a challenging task. This paper proposes a graph attention network named GAT_SCNet to simultaneously group road markings into 11 categories from MLS point clouds. Concretely, the proposed GAT_SCNet model constructs serially computable subgraphs and applies a multi-head attention mechanism to encode the geometric, topological, and spatial relationships between each node and its neighbors to generate a distinguishable descriptor for each road marking. To assess the effectiveness and generalization of the GAT_SCNet model, we conduct extensive experiments on five test datasets of about 100 km in total, captured by different MLS systems. Three accuracy evaluation metrics (average Precision, Recall, and F1-score) of the 11 categories on the test datasets each exceed 91%. Accuracy evaluations and comparative studies show that our method achieves state-of-the-art performance on road marking classification, especially on similar linear road markings such as stop lines, zebra crossings, and dotted lines.
Record number: A2022-234
Author affiliation: non-IGN
Theme: IMAGERY/COMPUTER SCIENCE
Nature: Article
DOI: 10.1016/j.jag.2022.102735
Online publication date: 10/03/2022
Online: https://doi.org/10.1016/j.jag.2022.102735
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100124
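The multi-head graph attention that GAT_SCNet applies over its subgraphs follows, per the abstract, the general GAT pattern of scoring node-neighbor pairs per head and averaging heads. The sketch below assumes the standard GAT scoring function; the paper's exact formulation, head count, and feature sizes may differ.

```python
import numpy as np

def gat_layer(h, adj, Ws, a_vecs):
    # h: (N, C) node features; adj: (N, N) binary adjacency with self-loops
    # Ws / a_vecs: per-head projections (C, Cp) and score vectors (2*Cp,)
    heads = []
    for W, a in zip(Ws, a_vecs):
        z = h @ W                                        # (N, Cp)
        cp = z.shape[1]
        # GAT-style scores e_ij = LeakyReLU(a1·z_i + a2·z_j)
        e = (z @ a[:cp])[:, None] + (z @ a[cp:])[None, :]
        e = np.where(e > 0, e, 0.2 * e)                  # LeakyReLU
        e = np.where(adj > 0, e, -1e9)                   # attend along edges only
        w = np.exp(e - e.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                # row-wise softmax
        heads.append(w @ z)                              # aggregate neighbours
    return np.mean(heads, axis=0)                        # average the heads

# toy graph: 5 nodes on a ring, 2 attention heads
rng = np.random.default_rng(1)
adj = np.eye(5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
h = rng.normal(size=(5, 4))
Ws = [rng.normal(size=(4, 8)) for _ in range(2)]
a_vecs = [rng.normal(size=16) for _ in range(2)]
out = gat_layer(h, adj, Ws, a_vecs)
print(out.shape)                                         # (5, 8)
```

Masking non-edges with a large negative score before the softmax restricts each node's descriptor to information from its subgraph neighborhood.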
in International journal of applied Earth observation and geoinformation > vol 108 (April 2022) . - no. 102735

MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds / Haifeng Luo in IEEE Transactions on geoscience and remote sensing, Vol 58 n° 12 (December 2020)
Title: MS-RRFSegNet: Multiscale regional relation feature segmentation network for semantic segmentation of urban scene point clouds
Document type: Article/Communication
Authors: Haifeng Luo; Chongcheng Chen; Lina Fang; et al.
Year of publication: 2020
Pagination: pp 8301-8315
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] deep learning
[IGN terms] cognition
[IGN terms] lidar data
[IGN terms] feature extraction
[IGN terms] multiple representation
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) Semantic segmentation is one of the fundamental tasks in understanding and applying urban scene point clouds. Recently, deep learning has been introduced to the field of point cloud processing. However, compared to images, which are characterized by their regular data structure, a point cloud is a set of unordered points, which makes semantic segmentation a challenge. Consequently, the existing deep learning methods for semantic segmentation of point clouds achieve less success than those applied to images. In this article, we propose a novel method for urban scene point cloud semantic segmentation using deep learning. First, we use homogeneous supervoxels to reorganize raw point clouds to effectively reduce the computational complexity and improve the nonuniform distribution. Then, we use supervoxels as basic processing units, which can further expand receptive fields to obtain more descriptive contexts. Next, a sparse autoencoder (SAE) is presented for feature embedding representations of the supervoxels. Subsequently, we propose a regional relation feature reasoning module (RRFRM) inspired by relation reasoning networks and design a multiscale regional relation feature segmentation network (MS-RRFSegNet) based on the RRFRM to semantically label supervoxels. Finally, the supervoxel-level inferences are transformed into point-level fine-grained predictions. The proposed framework is evaluated on two open benchmarks (Paris-Lille-3D and Semantic3D). The evaluation results show that the proposed method achieves competitive overall performance and outperforms other related approaches in several object categories. An implementation of our method is available at https://github.com/HiphonL/MS_RRFSegNet.
Record number: A2020-738
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.2985695
Online publication date: 28/04/2020
Online: https://doi.org/10.1109/TGRS.2020.2985695
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96363
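The pipeline's first and last steps (reorganizing raw points into supervoxels, then lifting supervoxel-level labels back to points) can be approximated in a few lines. The paper uses homogeneous supervoxels; the uniform voxel grid below is a simplified, hypothetical stand-in for that step.

```python
import numpy as np

def grid_supervoxels(points, size):
    # group points into uniform grid cells; return each point's cell index
    # and each cell's centroid (a crude proxy for a supervoxel)
    keys = np.floor(points / size).astype(np.int64)
    cells, inv = np.unique(keys, axis=0, return_inverse=True)
    n = len(cells)
    centers = np.zeros((n, 3))
    np.add.at(centers, inv, points)                  # sum points per cell
    centers /= np.bincount(inv, minlength=n)[:, None]
    return inv, centers

def lift_labels(cell_labels, inv):
    # transform cell-level (supervoxel-level) inferences into
    # point-level predictions, mirroring the pipeline's final step
    return cell_labels[inv]

# two well-separated toy clusters fall into two distinct cells
rng = np.random.default_rng(2)
pts = np.vstack([rng.uniform(-0.2, 0.2, (20, 3)) + 0.5,
                 rng.uniform(-0.2, 0.2, (20, 3)) + 10.5])
inv, centers = grid_supervoxels(pts, 1.0)
labels = lift_labels(np.array([3, 7]), inv)
print(centers.shape[0], labels[0], labels[-1])       # 2 3 7
```

Working on a few thousand supervoxels instead of millions of raw points is what reduces the computational cost and enlarges the receptive field per processing unit.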
in IEEE Transactions on geoscience and remote sensing > Vol 58 n° 12 (December 2020) . - pp 8301-8315

Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds / Bishen Yang in ISPRS Journal of photogrammetry and remote sensing, vol 79 (May 2013)
Title: Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds
Document type: Article/Communication
Authors: Bishen Yang; Lina Fang; Jonathan Li
Year of publication: 2013
Pagination: pp 80-93
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Laser scanning
[IGN terms] delineation
[IGN terms] lidar data
[IGN terms] 3D georeferenced data
[IGN terms] road network extraction
[IGN terms] semi-automatic extraction
[IGN terms] 3D reconstruction
[IGN terms] road
[IGN terms] point cloud
[IGN terms] mobile mapping system
Abstract: (author) Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which corresponds to a road cross-section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which shows that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
Record number: A2013-234
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2013.01.016
Online: https://doi.org/10.1016/j.isprsjprs.2013.01.016
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32372
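The per-scanning-line processing described in the abstract (a moving window to separate ground from non-ground points, then curb detection from elevation patterns) can be sketched as follows. The window size and height thresholds below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def process_scan_line(scan_z, window=5, height=0.3, jump=(0.10, 0.50)):
    # scan_z: elevations of consecutive points along one "scanning line"
    n = len(scan_z)
    # moving-window ground estimate: local minimum elevation
    ground = np.array([scan_z[max(0, i - window):i + window + 1].min()
                       for i in range(n)])
    nonground = (scan_z - ground) > height          # points filtered out
    # curb candidates: an abrupt elevation change between neighbours
    # that stays within a plausible curb-height band
    dz = np.abs(np.diff(scan_z))
    curbs = np.flatnonzero((dz > jump[0]) & (dz < jump[1]))
    return nonground, curbs

# flat road with a single 15 cm curb step between indices 9 and 10
z = np.array([0.0] * 10 + [0.15] * 10)
nonground, curbs = process_scan_line(z)
print(curbs)                                        # [9]
```

In the full method, curb candidates found line by line would then be tracked across consecutive scanning lines and refined for global consistency.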
in ISPRS Journal of photogrammetry and remote sensing > vol 79 (May 2013) . - pp 80-93

Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
081-2013051 | RAB | Journal | Documentation center | Reserve L003 | Available