Author details
Author: Lina Yang
Documents available by this author (2)
Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification / Congcong Wen in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
[article]
Title: Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification
Document type: Article/Communication
Authors: Congcong Wen, Author; Lina Yang, Author; Xiang Li, Author; et al.
Year of publication: 2020
Pages: pp. 50-62
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammetry
[Termes IGN] machine learning
[Termes IGN] classification by convolutional neural network
[Termes IGN] lidar data
[Termes IGN] data fusion
[Termes IGN] nearest-neighbour algorithm
[Termes IGN] classification accuracy
[Termes IGN] semantic segmentation
[Termes IGN] point cloud
[Termes IGN] point cloud processing
Abstract: (author) Point cloud classification plays an important role in a wide range of airborne light detection and ranging (LiDAR) applications, such as topographic mapping, forest monitoring, power line detection, and road detection. However, point cloud classification is challenging because of the sensor noise, high redundancy, incompleteness, and complexity of airborne LiDAR data. Traditional point cloud classification methods mostly focus on developing handcrafted point geometry features and employ machine learning-based classification models to conduct point classification. In recent years, advances in deep learning models have caused researchers to shift their focus towards machine learning-based models, specifically deep neural networks, to classify airborne LiDAR point clouds. These learning-based methods start by transforming the unstructured 3D point sets into regular 2D representations, such as collections of feature images, and then employ a 2D CNN for point classification. Moreover, these methods usually need to compute additional local geometry features, such as planarity, sphericity, and roughness, to make use of the local structural information in the original 3D space. Nonetheless, the 3D-to-2D conversion results in information loss. In this paper, we propose a directionally constrained fully convolutional neural network (D-FCN) that takes the original 3D coordinates and LiDAR intensity as input; it can thus be applied directly to unstructured 3D point clouds for semantic labeling. Specifically, we first introduce a novel directionally constrained point convolution (D-Conv) module to extract locally representative features of 3D point sets from the projected 2D receptive fields. To make full use of the orientation information of neighborhood points, the proposed D-Conv module performs convolution in an orientation-aware manner by using a directionally constrained nearest-neighborhood search.
Then, we design a multiscale fully convolutional neural network with downsampling and upsampling blocks to enable multiscale point feature learning. The proposed D-FCN model can therefore process input point clouds of arbitrary size and directly predict the semantic labels for all the input points in an end-to-end manner. Without involving additional geometry features as input, the proposed method demonstrates superior performance on the International Society for Photogrammetry and Remote Sensing (ISPRS) 3D labeling benchmark dataset. The results show that our model achieves a new state-of-the-art performance on the powerline, car, and facade categories. Moreover, to demonstrate the generalization ability of the proposed method, we conduct further experiments on the 2019 Data Fusion Contest dataset. Our method outperforms the compared methods, achieving an overall accuracy of 95.6% and an average F1 score of 0.810.
Record number: A2020-119
Authors' affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.02.004
Online publication date: 18/02/2020
Online: https://doi.org/10.1016/j.isprsjprs.2020.02.004
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94743
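The directionally constrained nearest-neighbourhood search described in the abstract can be illustrated as follows. This is a minimal sketch of the general idea only, not the authors' implementation: it assumes neighbours are projected onto the horizontal (x, y) plane, the plane is split into equal angular sectors around a query point, and the nearest neighbour within each sector is kept, so that every direction contributes to the receptive field.

```python
import numpy as np

def directional_nearest_neighbors(points, center_idx, n_sectors=8):
    """Sketch of a directionally constrained nearest-neighbour search.

    Neighbours of the query point are binned into `n_sectors` angular
    sectors of the projected (x, y) plane; the closest neighbour in
    each sector is retained, giving an orientation-aware neighbourhood.
    """
    center = points[center_idx]
    # Offsets of all other points relative to the query point.
    offsets = np.delete(points, center_idx, axis=0) - center
    # Planar angle of each neighbour, mapped to a sector index.
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])          # in [-pi, pi]
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    # Planar distance to the query point.
    dists = np.linalg.norm(offsets[:, :2], axis=1)
    nearest = {}
    for i, s in enumerate(sectors):
        if s not in nearest or dists[i] < dists[nearest[s]]:
            nearest[s] = i
    # One representative offset per occupied sector.
    return {s: offsets[i] for s, i in nearest.items()}
```

In a D-Conv-style module, the per-sector neighbours selected this way would then be fed to a shared convolution so that the learned filter sees the local structure in every direction rather than only the globally nearest points.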
in ISPRS Journal of photogrammetry and remote sensing > vol 162 (April 2020) . - pp 50-62 [article]

An indoor spatial accessible area generation approach considering distance constraints / Lina Yang in Annals of GIS, Vol 26 n° 1 (January 2020)
[article]
Title: An indoor spatial accessible area generation approach considering distance constraints
Document type: Article/Communication
Authors: Lina Yang, Author; Hongru Bi, Author; Xiaojing Yao, Author; et al.
Year of publication: 2020
Pages: pp. 25-34
General note: bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Geographic information systems
[Termes IGN] spatial analysis
[Termes IGN] geometric constraint
[Termes IGN] distance
[Termes IGN] indoor space
[Termes IGN] spatial data conceptual model
[Termes IGN] geometric model
[Termes IGN] buffer zone
Abstract: (author) Generating indoor objects' accessible areas under distance constraints is of significant practical importance for spatial analysis. To the best of our knowledge, few studies have addressed indoor accessible area generation with distance constraints, and because the spatial characteristics of indoor and outdoor environments differ, the approaches commonly used in outdoor space are not suitable for indoor space. In this paper, based on a hybrid spatial data model combining geometric and symbolic models, an accessible area generation approach considering distance constraints for indoor environments is proposed by improving the traditional spatial buffer zone generation technique. A buffer zone with a predefined distance is first generated around indoor objects within their containing subspace; then, based on indoor spatial connectivity, buffer generation around exit points is executed successively in the connected subspaces until the remaining distance decreases to zero. Merging these generated buffer zones yields the accessible area under the predefined distance constraint. During this process, two spatial search strategies, depth-first search and breadth-first search, are presented. Two sets of experiments are conducted to validate the correctness and efficiency of the proposed approach. The results show that the approach can effectively solve the problem of generating indoor objects' accessible areas with distance constraints. The potential uses, as well as the limitations, of the proposed approach are also discussed.
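The distance-budget propagation across connected subspaces described in the abstract can be sketched with a breadth-first traversal. This is a hypothetical simplification, not the paper's geometric model: subspaces are plain graph nodes, and crossing an exit from one subspace to a neighbour is assumed to consume a fixed number of metres of the remaining budget; the actual buffer generation inside each subspace is omitted.

```python
from collections import deque

def accessible_subspaces(adjacency, exit_cost, start, max_dist):
    """Sketch of the breadth-first search strategy over indoor connectivity.

    `adjacency` maps a subspace to its connected neighbours, and
    `exit_cost[(a, b)]` is the distance consumed when passing from
    subspace `a` into `b`.  Returns the remaining distance budget for
    every reachable subspace; a real implementation would use that
    budget as the buffer radius around the exit point in each subspace.
    """
    remaining = {start: max_dist}
    queue = deque([start])
    while queue:
        room = queue.popleft()
        budget = remaining[room]
        for nxt in adjacency.get(room, ()):
            left = budget - exit_cost[(room, nxt)]
            # Propagate only while the budget stays positive, and only
            # if this path leaves more distance than any earlier one.
            if left > 0 and left > remaining.get(nxt, float("-inf")):
                remaining[nxt] = left
                queue.append(nxt)
    return remaining
```

Swapping the queue for a stack would give the depth-first variant; the relaxation check keeps the result independent of visiting order in either case.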
Record number: A2020-118
Authors' affiliation: non IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1080/19475683.2019.1680575
Online publication date: 18/10/2019
Online: https://doi.org/10.1080/19475683.2019.1680575
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=94737
in Annals of GIS > Vol 26 n° 1 (January 2020) . - pp 25-34 [article]