Author details
Author: Minglei Li
Available documents written by this author (3)
ComNet: combinational neural network for object detection in UAV-borne thermal images / Minglei Li in IEEE Transactions on geoscience and remote sensing, vol 59 no. 8 (August 2021)
[article]
Title: ComNet: combinational neural network for object detection in UAV-borne thermal images
Document type: Article/Communication
Authors: Minglei Li, Author; Xingke Zhao, Author; Jiasong Li, Author; et al., Author
Year of publication: 2021
Article on page(s): pp 6662 - 6673
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] drone-captured image
[IGN terms] thermal image
[IGN terms] pedestrian
[IGN terms] saliency
[IGN terms] vehicle
Abstract: (author) We propose a deep learning-based method for object detection in UAV-borne thermal images, which can capture scenes both day and night. Compared with visible images, thermal images have lower requirements for illumination conditions, but they typically have blurred edges and low contrast. Using a boundary-aware salient object detection network, we extract the saliency maps of the thermal images to improve their distinguishability. Thermal images are augmented with the corresponding saliency maps through channel replacement and pixel-level weighted fusion methods. Considering the limited computing power of UAV platforms, a lightweight combinational neural network, ComNet, is used as the core object detection method. The YOLOv3 model trained on the original images is used as a benchmark and compared with the proposed method. In the experiments, we analyze the detection performance of the ComNet models with different image fusion schemes. The experimental results show that the average precisions (APs) for pedestrian and vehicle detection are improved by 2%~5% compared with the benchmark without saliency map fusion and MobileNetv2. The detection speed is increased by over 50%, while the model size is reduced by 58%. The results demonstrate that the proposed method provides a compromise model with application potential in UAV-borne detection tasks.
Record number: A2021-632
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2020.3029945
Online publication date: 21/10/2020
Online: https://doi.org/10.1109/TGRS.2020.3029945
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98288
in IEEE Transactions on geoscience and remote sensing > vol 59 no. 8 (August 2021) . - pp 6662 - 6673 [article]
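
The abstract above describes augmenting thermal images with saliency maps through channel replacement and pixel-level weighted fusion. The sketch below only illustrates those two generic fusion schemes, not the authors' implementation: the replaced channel index, the weight alpha, and the array shapes are assumptions.

```python
import numpy as np

def channel_replacement(thermal_rgb: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """Replace one channel of a 3-channel thermal image with the saliency map.

    thermal_rgb: HxWx3 uint8 array (thermal frame encoded as 3 channels)
    saliency:    HxW   uint8 array (saliency map, same spatial size)
    """
    fused = thermal_rgb.copy()
    fused[..., 2] = saliency            # hypothetical choice: overwrite the last channel
    return fused

def weighted_fusion(thermal: np.ndarray, saliency: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Pixel-level weighted fusion: alpha * thermal + (1 - alpha) * saliency."""
    t = thermal.astype(np.float32)
    s = saliency.astype(np.float32)
    if s.ndim == 2 and t.ndim == 3:     # broadcast a single-channel saliency map
        s = s[..., None]
    fused = alpha * t + (1.0 - alpha) * s
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example usage with random data standing in for a thermal frame and its saliency map.
thermal = np.random.randint(0, 256, (512, 640, 3), dtype=np.uint8)
saliency = np.random.randint(0, 256, (512, 640), dtype=np.uint8)
fused = weighted_fusion(thermal, saliency, alpha=0.7)
```
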
Modelling of buildings from aerial LiDAR point clouds using TINs and label maps / Minglei Li in ISPRS Journal of photogrammetry and remote sensing, vol 154 (August 2019)
[article]
Title: Modelling of buildings from aerial LiDAR point clouds using TINs and label maps
Document type: Article/Communication
Authors: Minglei Li, Author; Franz Rottensteiner, Author; Christian Heipke, Author
Year of publication: 2019
Article on page(s): pp 127 - 138
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] digital building model
[IGN terms] point cloud
[IGN terms] roof
[IGN terms] Triangulated Irregular Network
Abstract: (author) This paper presents a new framework for automatically creating compact building models from aerial LiDAR point clouds, where each point is known to belong to the class building. The approach addresses the issues of non-uniform point density and outlier detection to extract and refine semantic roof structures through a sequence of operations on a label map. We first partition the points into coarse regions using a region growing method over the Triangulated Irregular Network (TIN) model. The region label IDs are then projected to a 2D grid map, which is used to refine the roof regions and their boundaries. We design an energy optimization approach on the label map to optimize the region labels. To regularize the contours of roof regions extracted from the label map, we propose a new method for refining contour segment vertices, which iteratively filters the normals of contour segments and uses them to guide the update of contour vertices. The effectiveness of this method is evaluated on LiDAR point clouds from different scenes, and its performance is validated by extensive comparisons to state-of-the-art techniques.
Record number: A2019-267
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.06.003
Online publication date: 11/06/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.06.003
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=93082
in ISPRS Journal of photogrammetry and remote sensing > vol 154 (August 2019) . - pp 127 - 138 [article]

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2019081 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2019083 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2019082 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
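
The abstract above mentions projecting region label IDs from the 3D points onto a 2D grid (label map) before refining regions and boundaries. Below is a minimal, hypothetical sketch of such a projection step, assuming each point already carries a region label from the TIN-based region growing; the cell size and the majority-vote rule are illustrative choices, not taken from the paper.

```python
import numpy as np
from collections import Counter

def project_labels_to_grid(points_xy: np.ndarray, labels: np.ndarray, cell_size: float = 0.5):
    """Rasterize labelled points into a 2D label map by majority vote per cell.

    points_xy: Nx2 array of planimetric (x, y) coordinates
    labels:    N   array of integer region labels (from region growing over the TIN)
    Returns (label_map, origin); cells without points keep the label -1.
    """
    origin = points_xy.min(axis=0)
    cols = np.floor((points_xy[:, 0] - origin[0]) / cell_size).astype(int)
    rows = np.floor((points_xy[:, 1] - origin[1]) / cell_size).astype(int)
    label_map = np.full((rows.max() + 1, cols.max() + 1), -1, dtype=int)

    cells = {}
    for r, c, lab in zip(rows, cols, labels):
        cells.setdefault((r, c), []).append(lab)
    for (r, c), labs in cells.items():
        label_map[r, c] = Counter(labs).most_common(1)[0][0]   # majority label per cell
    return label_map, origin

# Example with synthetic data: 1000 points split into two regions along x.
pts = np.random.rand(1000, 2) * 20.0
labs = (pts[:, 0] > 10.0).astype(int)
label_map, origin = project_labels_to_grid(pts, labs, cell_size=1.0)
```
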
Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning / Rui Zhang in ISPRS Journal of photogrammetry and remote sensing, vol 143 (September 2018)
[article]
Title: Fusion of images and point clouds for the semantic segmentation of large-scale 3D scenes based on deep learning
Document type: Article/Communication
Authors: Rui Zhang, Author; Guangyun Li, Author; Minglei Li, Author; Li Wang, Author
Year of publication: 2018
Article on page(s): pp 85 - 96
General note: bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] deep learning
[IGN terms] building detection
[IGN terms] data fusion
[IGN terms] convolutional neural network
[IGN terms] 3D scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
Abstract: (author) We address the issue of the semantic segmentation of large-scale 3D scenes by fusing 2D images and 3D point clouds. First, a Deeplab-Vgg16 based Large-Scale and High-Resolution model (DVLSHR), built on the deep Visual Geometry Group network (VGG16), is created and fine-tuned by training seven deep convolutional neural networks with four benchmark datasets. On the validation set of CityScapes, DVLSHR achieves a 74.98% mean Pixel Accuracy (mPA) and a 64.17% mean Intersection over Union (mIoU), and can be adapted to segment the captured images (image resolution 2832 × 4256 pixels). Second, the preliminary segmentation results from the 2D images are mapped to the 3D point clouds according to the coordinate relationships between the images and the point clouds. Third, based on the mapping results, fine features of buildings are further extracted directly from the 3D point clouds. Our experiments show that the proposed fusion method can segment local and global features efficiently and effectively.
Record number: A2018-356
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2018.04.022
Online publication date: 11/05/2018
Online: https://doi.org/10.1016/j.isprsjprs.2018.04.022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=90590
in ISPRS Journal of photogrammetry and remote sensing > vol 143 (September 2018) . - pp 85 - 96 [article]

Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2018091 | RAB | Book | Centre de documentation | In reserve L003 | Available
081-2018093 | DEP-EXM | Book | LASTIG | Unit deposit | Not for loan
081-2018092 | DEP-EAF | Book | Nancy | Unit deposit | Not for loan
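
The second step of the abstract above maps per-pixel segmentation labels onto the 3D points using the coordinate relationship between images and point clouds. The sketch below assumes a simple pinhole camera (intrinsics K, rotation R, translation t, world-to-camera) and a nearest-pixel lookup; the paper does not specify these details, so they are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def map_labels_to_points(points: np.ndarray, seg_labels: np.ndarray,
                         K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assign each 3D point the semantic label of the pixel it projects onto.

    points:     Nx3 array of 3D coordinates in the world frame
    seg_labels: HxW array of per-pixel class labels from the 2D segmentation
    K, R, t:    3x3 intrinsics, 3x3 rotation, 3-vector translation (world -> camera)
    Points behind the camera or outside the image keep the label -1.
    """
    h, w = seg_labels.shape
    cam = points @ R.T + t                     # world -> camera coordinates
    point_labels = np.full(len(points), -1, dtype=int)

    in_front = cam[:, 2] > 1e-6                # keep only points in front of the camera
    uvw = cam[in_front] @ K.T                  # pinhole projection
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    idx = np.flatnonzero(in_front)[inside]     # indices of points visible in the image
    point_labels[idx] = seg_labels[v[inside], u[inside]]
    return point_labels

# Example: a synthetic camera looking down the +Z axis of the world frame.
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.random.rand(500, 3) * [2.0, 2.0, 5.0] + [0.0, 0.0, 1.0]
seg = np.random.randint(0, 5, (480, 640))
labels_3d = map_labels_to_points(pts, seg, K, R, t)
```
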