Author details
Author: H. Huang
Documents available by this author (4)
Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope / V.S. Martins in Remote sensing of environment, vol 280 (October 2022)
[article]
Title: Deep learning high resolution burned area mapping by transfer learning from Landsat-8 to PlanetScope
Document type: Article/Communication
Authors: V.S. Martins; D.P. Roy; H. Huang; et al.
Year of publication: 2022
Pages: n° 113203
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] Africa (political geography)
[IGN terms] deep learning
[IGN terms] thematic map
[IGN terms] automated mapping
[IGN terms] radiometric correction
[IGN terms] training data (machine learning)
[IGN terms] tropical forest
[IGN terms] Landsat-OLI image
[IGN terms] PlanetScope image
[IGN terms] fire
[IGN terms] classification accuracy
[IGN terms] regression
[IGN terms] savanna
Abstract: (author) High spatial resolution commercial satellite data provide new opportunities for terrestrial monitoring. The recent availability of near-daily 3 m observations provided by the PlanetScope constellation enables mapping of small and spatially fragmented burns that are not detected at coarser spatial resolution. This study demonstrates, for the first time, the potential for automated PlanetScope 3 m burned area mapping. The PlanetScope sensors have no onboard calibration or short-wave infrared bands, and have variable overpass times, making them challenging to use for large area, automated, burned area mapping. To help overcome these issues, a U-Net deep learning algorithm was developed to classify burned areas from two-date PlanetScope 3 m image pairs acquired at the same location. The deep learning approach, unlike conventional burned area mapping algorithms, is applied to image spatial subsets and not to single pixels and so incorporates spatial as well as spectral information. Deep learning requires large amounts of training data. Consequently, transfer learning was undertaken using pre-existing Landsat-8 derived burned area reference data to train the U-Net, which was then refined with a smaller set of PlanetScope training data. Results across Africa considering 659 PlanetScope radiometrically normalized image pairs sensed one day apart in 2019 are presented. The U-Net was first trained with different numbers of randomly selected 256 × 256 30 m pixel patches extracted from 92 pre-existing Landsat-8 burned area reference data sets defined for 2014 and 2015. The U-Net trained with 300,000 Landsat patches provided about 13% 30 m burn omission and commission errors with respect to 65,000 independent 30 m evaluation patches. The U-Net was then refined by training on 5,000 256 × 256 3 m patches extracted from independently interpreted PlanetScope burned area reference data. Qualitatively, the refined U-Net was able to more precisely delineate 3 m burn boundaries, including the interiors of unburned areas, and better classify "faint" burned areas indicative of low combustion completeness and/or sparse burns. The refined U-Net 3 m classification accuracy was assessed with respect to 20 independently interpreted PlanetScope burned area reference data sets, composed of 339.4 million 3 m pixels, with low 12.29% commission and 12.09% omission errors. The dependency of the U-Net classification accuracy on the burned area proportion within 3 m pixel 256 × 256 patches was also examined, and patches…
Record number: A2022-774
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.rse.2022.113203
Online publication date: 08/08/2022
Online: https://doi.org/10.1016/j.rse.2022.113203
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101802
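The two-stage strategy described in the abstract above (pre-train a U-Net on 30 m Landsat-8 reference patches, then fine-tune on a smaller set of 3 m PlanetScope patches) can be sketched roughly as follows. This is a minimal illustration only: the model stand-in, channel count (two dates × four bands assumed), data, and hyperparameters are assumptions, not the authors' implementation.

# Hedged sketch of a two-stage (transfer learning) training loop, Python/PyTorch.
# The network here is a placeholder stand-in for a full U-Net; data are random tensors.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def tiny_unet(in_ch=8, out_ch=1):
    # Stand-in for a U-Net: a small fully convolutional per-pixel classifier.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 1),
    )

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()   # burned / unburned per pixel
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

# Stage 1: pre-train on two-date 256 x 256 Landsat-8 patch pairs (8 channels assumed).
model = tiny_unet(in_ch=8)
landsat = TensorDataset(torch.randn(16, 8, 256, 256),
                        torch.randint(0, 2, (16, 1, 256, 256)).float())
train(model, DataLoader(landsat, batch_size=4), epochs=2, lr=1e-3)

# Stage 2: fine-tune on the smaller PlanetScope 3 m patch set with a lower learning rate.
planet = TensorDataset(torch.randn(8, 8, 256, 256),
                       torch.randint(0, 2, (8, 1, 256, 256)).float())
train(model, DataLoader(planet, batch_size=4), epochs=2, lr=1e-4)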
in Remote sensing of environment > vol 280 (October 2022) . - n° 113203 [article]

Improved nonsubsampled contourlet transform for multi-sensor image registration / R. Wang in Photogrammetric Engineering & Remote Sensing, PERS, vol 79 n° 1 (January 2013)
[article]
Title: Improved nonsubsampled contourlet transform for multi-sensor image registration
Document type: Article/Communication
Authors: R. Wang; J. Ma; H. Huang; Wei Shi
Year of publication: 2013
Pages: pp 51 - 66
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] line matching
[IGN terms] point matching
[IGN terms] edge detection
[IGN terms] SPOT 5 image
[IGN terms] Terra-ASTER image
[IGN terms] similarity
[IGN terms] image overlay
[IGN terms] transformation
Abstract: (author) Homologous feature point extraction is a key problem in multi-sensor image registration. In this paper, a new feature point extraction method using the nonsubsampled contourlet transform (NSCT) and an adaptive shrink operator (ASO_NSCT) is proposed for multi-sensor image registration. Moreover, the proposed feature matching differs from traditional feature matching strategies and is performed using a similarity measure computed from neighborhood circles in the low-frequency bands. A number of reliable matched couples with even distributions are then obtained, which assures the accuracy of the registration. Applications of the proposed algorithm to different optical images, as well as to optical and synthetic aperture radar images, show that in each case a large number of accurate matched couples can be identified. Additionally, the RMSE patterns are analyzed and the parameters of the registration models are compared against the actual ground structures, which further demonstrates the effectiveness of the proposed algorithm.
Record number: A2013-005
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.14358/PERS.79.1.51
Online: https://doi.org/10.14358/PERS.79.1.51
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=32143
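Once matched point couples such as those described above are available, a registration model can be fitted by least squares and its RMSE evaluated. The sketch below assumes a 6-parameter affine model and synthetic matched points purely for illustration; the paper itself compares several registration models and does not publish this code.

# Hedged sketch: least-squares affine registration from matched point couples, plus RMSE.
import numpy as np

def fit_affine(src, dst):
    # src, dst: (N, 2) arrays of matched (x, y) coordinates in the two images.
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])               # design matrix [x y 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) matrix mapping src -> dst
    return params

def rmse(src, dst, params):
    pred = np.hstack([src, np.ones((src.shape[0], 1))]) @ params
    return float(np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))

# Usage with synthetic matched couples (an affine warp plus noise):
rng = np.random.default_rng(0)
src = rng.uniform(0, 1000, size=(50, 2))
true = np.array([[1.01, 0.02], [-0.02, 0.99], [5.0, -3.0]])
dst = np.hstack([src, np.ones((50, 1))]) @ true + rng.normal(0, 0.3, (50, 2))
p = fit_affine(src, dst)
print("RMSE (pixels):", rmse(src, dst, p))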
in Photogrammetric Engineering & Remote Sensing, PERS > vol 79 n° 1 (January 2013) . - pp 51 - 66 [article]

Spatial knowledge acquisition with mobile maps, augmented reality and voice in the context of GPS-based pedestrian navigation: results from a field test / H. Huang in Cartography and Geographic Information Science, vol 39 n° 2 (April 2012)
[article]
Title: Spatial knowledge acquisition with mobile maps, augmented reality and voice in the context of GPS-based pedestrian navigation: results from a field test
Document type: Article/Communication
Authors: H. Huang; M. Schmidt; Georg Gartner
Year of publication: 2012
Pages: pp 107 - 116
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Web geomatics
[IGN terms] knowledge acquisition
[IGN terms] comparative analysis
[IGN terms] portable device
[IGN terms] mobile interface
[IGN terms] web interface
[IGN terms] pedestrian navigation
[IGN terms] GPS positioning
[IGN terms] augmented reality
[IGN terms] navigation system
[IGN terms] mobile user
[IGN terms] voice
Abstract: (author) GPS-based pedestrian navigation systems have become increasingly popular. Different interface technologies can be used to convey route directions to pedestrians. This paper empirically studies the influence of different interface technologies on spatial knowledge acquisition in the context of GPS-based pedestrian navigation. A field experiment was implemented to address this question. First, the suitability of the evaluation methods for assessing spatial knowledge acquisition was analyzed empirically (focusing on their ability to differentiate "familiar" and "unfamiliar" participants). The suitable methods were then used to compare the influence of mobile maps, augmented reality, and voice on spatial learning. The field test showed that, in terms of spatial knowledge acquisition, the three interface technologies led to comparable results that were not significantly different from each other. These results raise some challenging issues to consider when designing mobile pedestrian navigation systems.
Record number: A2012-455
Author affiliation: non-IGN
Theme: GEOMATICS
Nature: Article
DOI: 10.1559/15230406392107
Online: https://doi.org/10.1559/15230406392107
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=31901
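The conclusion above that the three interface conditions were "not significantly different" implies a between-groups comparison of spatial-knowledge scores. The record does not name the test or provide data, so the snippet below only illustrates what such a comparison could look like, using a one-way ANOVA on invented scores; it is not the authors' analysis.

# Hedged illustration: one-way ANOVA across three interface conditions.
# Scores are invented placeholders; the actual test and data are not given in this record.
from scipy.stats import f_oneway

map_scores   = [7.2, 6.8, 7.5, 6.9, 7.1]   # mobile map condition
ar_scores    = [6.9, 7.3, 7.0, 6.7, 7.2]   # augmented reality condition
voice_scores = [7.0, 6.6, 7.4, 7.1, 6.8]   # voice condition

stat, p = f_oneway(map_scores, ar_scores, voice_scores)
print(f"F = {stat:.2f}, p = {p:.3f}")   # a large p would indicate no significant difference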
in Cartography and Geographic Information Science > vol 39 n° 2 (April 2012) . - pp 107 - 116 [article]

Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
032-2012021 | RAB | Journal | Documentation centre | In storage L003 | Available

ICESat GLAS data for urban environment monitoring / P. Gong in IEEE Transactions on geoscience and remote sensing, vol 49 n° 3 (March 2011)
[article]
Title: ICESat GLAS data for urban environment monitoring
Document type: Article/Communication
Authors: P. Gong; Z. Li; H. Huang; G. Sun; L. Wang
Year of publication: 2011
Pages: pp 1158 - 1172
General note: bibliography
Language: English (eng)
Descriptors: [IGN subject headings] Remote sensing applications
[IGN terms] laser altimeter
[IGN terms] China
[IGN terms] Geoscience Laser Altimeter System
[IGN terms] building height
[IGN terms] Beijing (China)
[IGN terms] monitoring
[IGN terms] data processing
[IGN terms] urbanization
Abstract: (author) Although the Geoscience Laser Altimeter System (GLAS) onboard the NASA Ice, Cloud and Land Elevation Satellite was not designed for urban applications, its 3-D measurement capability over the globe makes it worth considering for monitoring urban heights. However, this had not been done previously. In this paper, we report a preliminary assessment of GLAS data for building height and density estimation in a suburb of Beijing, China. Building heights can be calculated directly from a GLAS data product (GLA14). Because GLA14 limits height levels to six in each ground footprint, we developed a new method to remove this restriction by processing the raw GLAS data. The maximum heights measured in the field at selected GLAS footprints were used to validate the GLAS measurement results. By assuming a constant incident energy and surface reflectance within a GLAS footprint, the building density can be estimated from GLA14 or from our newly processed GLAS data. The building density determined from high-resolution images in Google Earth was used to validate the GLAS estimation results. The results indicate that the newly developed method can produce more accurate building height estimates within each GLAS footprint (R² = 0.937, RMSE = 6.4 m, n = 26) than the GLA14 data product (R² = 0.808, RMSE = 11.5 m, n = 26). However, satisfactory estimates of building density could not be obtained from the GLAS data with the methods investigated in this paper. Forest cover could be a challenge to building height and density estimation from GLAS data and should be addressed in future research.
Record number: A2011-069
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1109/TGRS.2010.2070514
Online publication date: 11/10/2010
Online: https://doi.org/10.1109/TGRS.2010.2070514
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=30850
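The abstract above validates GLAS-derived building heights against field-measured maximum heights using R² and RMSE. The short sketch below shows how those two validation statistics are computed; the height values are placeholders, and the waveform-processing step that produces the GLAS estimates is not shown.

# Hedged sketch: R^2 and RMSE between estimated and field-measured building heights.
import numpy as np

def r2_rmse(estimated, measured):
    estimated = np.asarray(estimated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    resid = measured - estimated
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean(resid ** 2)))

# Usage with placeholder heights (metres), one value per GLAS footprint:
est = [12.0, 30.5, 18.2, 44.0, 25.3]
obs = [13.1, 29.0, 20.0, 41.5, 26.7]
r2, err = r2_rmse(est, obs)
print(f"R^2 = {r2:.3f}, RMSE = {err:.1f} m")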
in IEEE Transactions on geoscience and remote sensing > vol 49 n° 3 (March 2011) . - pp 1158 - 1172 [article]

Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
065-2011031 | RAB | Journal | Documentation centre | In storage L003 | Available