Author detail
Author: Cheng Wang
Available documents written by this author (4)
Footprint size design of large-footprint full-waveform LiDAR for forest and topography applications: A theoretical study / Xuebo Yang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 11 (November 2021)
[article]
Title: Footprint size design of large-footprint full-waveform LiDAR for forest and topography applications: A theoretical study
Document type: Article/Communication
Authors: Xuebo Yang; Cheng Wang; Xiaohuan Xi; et al.
Publication year: 2021
Pages: pp 9745 - 9757
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] footprint
[IGN terms] vegetation extraction
[IGN terms] full waveform
[IGN terms] tree height
[IGN terms] full-waveform lidar
[IGN terms] lidar waveform
[IGN terms] Gaussian process
[IGN terms] lidar signal
Abstract: (author) The LiDAR footprint, defined as the area on the ground illuminated by the LiDAR sensor, is the fundamental unit from which the sensor collects information. The design of the footprint size crucially influences the acquired LiDAR signals. For large-footprint full-waveform LiDAR, a well-designed footprint size is indispensable for acquiring accurate and complete vertical profiles of scene targets. Methods for designing the footprint size are increasingly needed to satisfy various application requirements. In this study, an analytical method for designing the footprint size is proposed for forest and topography applications. It is based on a Gaussian mixture model, and the designed footprint size ensures that the signals of vegetation and ground can be completely extracted. Experimental results with our method show that the footprint size is preferably in the range of 10.6–25.0 m for forest applications, while it is less than 32.3 m for topography applications. The intersection of the two sets satisfies both applications. Furthermore, a series of sensitivity studies were performed to analyze the influence of several key parameters on the optimal footprint size, including scene characteristics, instrument configurations, and application requirements. This study provides a theoretical basis for the design of future large-footprint full-waveform laser altimeters.
Record number: A2021-812
Author affiliation: not IGN
Theme: FOREST/IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2021.3054324
Online publication date: 08/02/2021
Online: https://doi.org/10.1109/TGRS.2021.3054324
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98885
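The Gaussian-mixture waveform model invoked in the abstract can be illustrated with a minimal sketch: a full-waveform return modeled as the sum of a canopy echo and a ground echo, whose peaks must remain separable for both signals to be extracted. All amplitudes, positions, and pulse widths below are invented for illustration and are not the paper's values:

```python
import math

def gaussian(t, amp, mu, sigma):
    """One Gaussian echo component of a full-waveform lidar return."""
    return amp * math.exp(-0.5 * ((t - mu) / sigma) ** 2)

def waveform(t):
    """Two-component mixture: a canopy echo plus a ground echo.
    Amplitudes, positions (ns), and widths are invented."""
    return gaussian(t, 0.8, 40.0, 5.0) + gaussian(t, 1.0, 70.0, 3.0)

# Sample the waveform and locate its echo peaks (local maxima).
samples = [waveform(0.1 * i) for i in range(1001)]
peaks = [0.1 * i for i in range(1, 1000)
         if samples[i] > samples[i - 1] and samples[i] > samples[i + 1]]
# Two distinct peaks -> the canopy and ground contributions are
# separable at this hypothetical configuration.
```

A larger footprint mixes more terrain relief into each echo, widening the components until the peaks merge; the paper's analysis bounds the footprint size so this separability is preserved.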
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 11 (November 2021) . - pp 9745 - 9757 [article]

PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery / Xian Sun in ISPRS Journal of photogrammetry and remote sensing, vol 173 (March 2021)
[article]
Title: PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery
Document type: Article/Communication
Authors: Xian Sun; Peijin Wang; Cheng Wang; et al.
Publication year: 2021
Pages: pp 50 - 65
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] deep learning
[IGN terms] China
[IGN terms] convolutional neural network classification
[IGN terms] object detection
[IGN terms] complex geographic object
[IGN terms] context awareness
[IGN terms] minimum bounding rectangle
Abstract: (author) In recent years, deep learning-based algorithms have brought great improvements to rigid object detection. In addition to rigid objects, remote sensing images also contain many complex composite objects, such as sewage treatment plants, golf courses, and airports, which have neither a fixed shape nor a fixed size. In this paper, we validate through experiments that the results of existing methods in detecting composite objects are not satisfactory. We therefore propose a unified part-based convolutional neural network (PBNet), specifically designed for composite object detection in remote sensing imagery. PBNet treats a composite object as a group of parts and incorporates part information into context information to improve composite object detection. Correct part information can guide the prediction of a composite object, thus solving the problems caused by varying shapes and sizes. To generate accurate part information, we design a part localization module that learns the classification and localization of part points using bounding box annotation only. A context refinement module is designed to generate more discriminative features by aggregating local and global context information, which enhances the learning of part information and improves feature representation. We selected three typical categories of composite objects from a public dataset to conduct experiments verifying the detection performance and generalization ability of our method. We also built a more challenging dataset for a typical kind of complex composite object, sewage treatment plants, drawing on information from authorities and experts. This dataset contains sewage treatment plants in seven cities in the Yangtze valley, covering a wide range of regions. Comprehensive experiments on the two datasets show that PBNet surpasses existing detection algorithms and achieves state-of-the-art accuracy.
Record number: A2021-105
Author affiliation: not IGN
Theme: IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2020.12.015
Online publication date: 16/01/2021
Online: https://doi.org/10.1016/j.isprsjprs.2020.12.015
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=96891
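The part-to-object grouping idea can be caricatured in a few lines: given detected part points of one composite object, the simplest proposal for the whole object is the axis-aligned box enclosing them. This is a toy stand-in for PBNet's learned part localization and context refinement, and the coordinates below are invented:

```python
def parts_to_composite_box(parts):
    """Naive aggregation: the axis-aligned box (x_min, y_min, x_max, y_max)
    enclosing all detected part points of one composite object."""
    xs = [x for x, _ in parts]
    ys = [y for _, y in parts]
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical part detections (pixel coordinates) of one
# sewage-treatment plant: tanks, basins, buildings.
parts = [(120, 80), (150, 95), (138, 60), (170, 88)]
box = parts_to_composite_box(parts)  # (120, 60, 170, 95)
```

The point of the sketch is why parts help: the enclosing box adapts to whatever shape and extent the parts span, which is exactly the property a fixed-shape detector lacks for composite objects.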
in ISPRS Journal of photogrammetry and remote sensing > vol 173 (March 2021) . - pp 50 - 65 [article]

Copies (3)
Barcode: 081-2021031 | Call number: SL | Medium: Journal | Location: Centre de documentation | Section: Reading-room journals | Availability: Available
Barcode: 081-2021033 | Call number: DEP-RECP | Medium: Journal | Location: LASTIG | Section: Unit deposit | Availability: Not for loan
Barcode: 081-2021032 | Call number: DEP-RECF | Medium: Journal | Location: Nancy | Section: Unit deposit | Availability: Not for loan

Facet segmentation-based line segment extraction for large-scale point clouds / Yangbin Lin in IEEE Transactions on geoscience and remote sensing, vol 55 n° 9 (September 2017)
[article]
Title: Facet segmentation-based line segment extraction for large-scale point clouds
Document type: Article/Communication
Authors: Yangbin Lin; Cheng Wang; Bili Chen; et al.
Publication year: 2017
Pages: pp 4839 - 4854
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Lasergrammetry
[IGN terms] comparative analysis
[IGN terms] data mining
[IGN terms] feature extraction
[IGN terms] image segmentation
[IGN terms] point cloud
Abstract: (author) As one of the most common features in man-made environments, straight lines play an important role in many applications. In this paper, we present a new framework for extracting line segments from large-scale point clouds. The proposed method produces results quickly, is easy to implement and understand, and is suitable for various point cloud data. The key idea is to segment the input point cloud efficiently into a collection of facets. These facets provide sufficient information for determining linear features in the local planar region and make line segment extraction relatively convenient. Moreover, we introduce the concept of "number of false alarms" into the 3-D point cloud context to filter out false positive line segment detections. We test our approach on various types of point clouds acquired in different ways. We also compared the proposed method with several other methods and provide both quantitative and visual comparison results. The experimental results show that our algorithm is efficient and effective, and produces more accurate and complete line segments than the comparative methods. To further verify the accuracy of the extracted line segments, we also present a line-based registration framework that employs these line segments for point cloud registration.
Record number: A2017-656
Author affiliation: not IGN
Theme: IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2016.2639025
Online: http://dx.doi.org/10.1109/TGRS.2016.2639025
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=87066
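The validation step the abstract mentions, filtering out false-positive line segments, can be sketched with a much cruder test than the paper's number-of-false-alarms criterion: accept a candidate only if every point lies close to the chord between the two endpoint samples. The tolerance and the point sets below are invented:

```python
def point_line_dist(p, a, b):
    """Distance from 3-D point p to the line through a and b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    # |ap x ab| / |ab| is the perpendicular distance.
    cx = ap[1] * ab[2] - ap[2] * ab[1]
    cy = ap[2] * ab[0] - ap[0] * ab[2]
    cz = ap[0] * ab[1] - ap[1] * ab[0]
    cross = (cx * cx + cy * cy + cz * cz) ** 0.5
    return cross / sum(c * c for c in ab) ** 0.5

def is_line_segment(points, tol=0.05):
    """Accept the candidate only if every interior point lies within
    `tol` of the chord between the endpoints (a crude stand-in for
    the paper's number-of-false-alarms validation)."""
    a, b = points[0], points[-1]
    return all(point_line_dist(p, a, b) <= tol for p in points[1:-1])

edge = [(0.0, 0.0, 0.0), (0.5, 0.01, 0.0), (1.0, 0.0, 0.01), (1.5, 0.0, 0.0)]
blob = [(0.0, 0.0, 0.0), (0.4, 0.5, 0.2), (0.2, 0.9, 0.1), (1.0, 1.0, 0.0)]
```

Here `edge` passes (points nearly collinear) while `blob` is rejected; the paper's statistical criterion plays the same role but adapts the decision to point density and noise.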
in IEEE Transactions on geoscience and remote sensing > vol 55 n° 9 (September 2017) . - pp 4839 - 4854 [article]

Forest above ground biomass inversion by fusing GLAS with optical remote sensing data / Xiaohuan Xi in ISPRS International journal of geo-information, vol 5 n° 4 (April 2016)
[article]
Title: Forest above ground biomass inversion by fusing GLAS with optical remote sensing data
Document type: Article/Communication
Authors: Xiaohuan Xi; Tingting Han; Cheng Wang; et al.
Publication year: 2016
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] above-ground biomass
[IGN terms] neural network classification
[IGN terms] bidirectional reflectance distribution function (BRDF)
[IGN terms] ICESat data
[IGN terms] forest
[IGN terms] vegetation height
[IGN terms] Landsat-TM image
[IGN terms] optical image
[IGN terms] Terra-MODIS image
[IGN terms] Leaf Area Index
[IGN terms] ASTER DSM
[IGN terms] regression
[IGN terms] Yunnan (China)
Abstract: (author) Forest biomass is an important parameter for quantifying and understanding biological and physical processes on the Earth's surface. Rapid, reliable, and objective estimations of forest biomass are essential to terrestrial ecosystem research. The Geoscience Laser Altimeter System (GLAS) produced substantial scientific data for detecting the vegetation structure at the footprint level. This study combined GLAS data with MODIS/BRDF (Bidirectional Reflectance Distribution Function) and ASTER GDEM data to estimate forest aboveground biomass (AGB) in Xishuangbanna, Yunnan Province, China. The GLAS waveform characteristic parameters were extracted using the wavelet method. The ASTER DEM was used to compute a terrain index for reducing the topographic influence on the GLAS canopy height estimation. A neural network method was applied to assimilate the MODIS BRDF data with the canopy heights for estimating continuous forest heights. Forest leaf area indices (LAIs) were derived from Landsat TM imagery. A series of biomass estimation models were developed and validated using regression analyses between field-estimated biomass, canopy height, and LAI. The GLAS-derived canopy heights in Xishuangbanna correlated well with the field-estimated AGB (R2 = 0.61, RMSE = 52.79 Mg/ha). Combining the GLAS-estimated canopy heights and LAI yielded a stronger correlation with the field-estimated AGB (R2 = 0.73, RMSE = 38.20 Mg/ha), which indicates that the accuracy of the estimated biomass in complex terrains can be improved significantly by integrating GLAS and optical remote sensing data.
Record number: A2016-820
Author affiliation: not IGN
Theme: FOREST/IMAGERY
Nature: Article
HAL type: ArtAvecCL-RevueIntern
DOI: 10.3390/ijgi5040045
Online: https://doi.org/10.3390/ijgi5040045
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=82625
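The regression-and-R² workflow described in the abstract can be sketched with ordinary least squares on a single predictor. The canopy-height and biomass values below are invented plot data, not the study's measurements:

```python
def linear_fit(x, y):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def r_squared(x, y, a, b):
    """Coefficient of determination of the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Invented samples: GLAS-style canopy height (m) vs field AGB (Mg/ha).
height = [12.0, 18.0, 25.0, 30.0, 35.0]
agb = [60.0, 110.0, 170.0, 210.0, 240.0]
a, b = linear_fit(height, agb)
r2 = r_squared(height, agb, a, b)
```

Adding a second predictor such as LAI, as the study does, is the multivariate version of the same fit; the reported jump from R² = 0.61 to R² = 0.73 is exactly the gain such an added predictor can provide.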
in ISPRS International journal of geo-information > vol 5 n° 4 (April 2016) [article]