Photogrammetric Engineering & Remote Sensing, PERS / American Society for Photogrammetry and Remote Sensing. vol 88 n° 1. Published: 01/01/2022
[issue]
is an issue of Photogrammetric Engineering & Remote Sensing, PERS / American Society for Photogrammetry and Remote Sensing (1975 -)
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability |
---|---|---|---|---|---|
105-2022011 | SL | Journal | Documentation center | Reading-room journals | Available |
Contents
Improving urban land cover mapping with the fusion of optical and SAR data based on feature selection strategy / Qing Ding in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 1 (January 2022)
[article]
Title: Improving urban land cover mapping with the fusion of optical and SAR data based on feature selection strategy
Document type: Article/Paper
Authors: Qing Ding, Author; Zhenfeng Shao, Author; Xiao Huang, Author; et al.
Year of publication: 2022
Pagination: pp 17 - 28
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] comparative analysis
[IGN terms] land cover map
[IGN terms] urban mapping
[IGN terms] China
[IGN terms] multi-source data fusion
[IGN terms] optical image
[IGN terms] radar image
[IGN terms] classification accuracy
Abstract: (Author) Taking the Futian District as the research area, this study proposed an effective urban land cover mapping framework fusing optical and SAR data. To reduce model complexity and improve the mapping results, various feature selection methods were compared and evaluated. The results showed that feature selection can eliminate irrelevant features, slightly increase the mean correlation between features, and significantly improve classification accuracy and computational efficiency. The recursive feature elimination-support vector machine (RFE-SVM) model obtained the best results, with an overall accuracy of 89.17% and a kappa coefficient of 0.8695. In addition, this study proved that the fusion of optical and SAR data can effectively improve mapping and reduce the confusion between different land covers. The novelty of this study lies in its insight into the merits of multi-source data fusion and feature selection for land cover mapping over complex urban environments, and in its evaluation of the performance differences between feature selection methods.
Record number: A2022-061
Authors' affiliation: non-IGN
Theme: URBAN PLANNING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.21-00030R2
Online publication date: 01/01/2022
Online: https://doi.org/10.14358/PERS.21-00030R2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99703
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 1 (January 2022) . - pp 17 - 28 [article]
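The RFE-SVM approach named in the abstract pairs recursive feature elimination with a support vector machine ranker. A minimal sketch with scikit-learn's `RFE` wrapper around a linear-kernel SVM follows; this is a generic illustration on synthetic data, not code from the article, and the feature counts and parameters are invented for demonstration.

```python
# Sketch of recursive feature elimination with a linear SVM (RFE-SVM).
# Synthetic data stands in for a per-pixel stack of optical + SAR features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Invented example: 30 candidate features, of which 8 are informative
X, y = make_classification(n_samples=300, n_features=30,
                           n_informative=8, random_state=0)

# RFE needs an estimator exposing coef_, hence a linear-kernel SVM
svm = SVC(kernel="linear")
rfe = RFE(estimator=svm, n_features_to_select=10, step=2)
rfe.fit(X, y)

print(rfe.support_.sum())   # number of retained features
print(rfe.ranking_.min())   # retained features carry rank 1
```

At each round, the features with the smallest SVM weights are dropped and the model is refit, which is why the method can both shrink the feature set and preserve accuracy.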
Examining the integration of Landsat operational land imager with Sentinel-1 and vegetation indices in mapping southern yellow pines (Loblolly, Shortleaf, and Virginia pines) / Clement E. Akumu in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 1 (January 2022)
[article]
Title: Examining the integration of Landsat operational land imager with Sentinel-1 and vegetation indices in mapping southern yellow pines (Loblolly, Shortleaf, and Virginia pines)
Document type: Article/Paper
Authors: Clement E. Akumu, Author; Eze O. Amadi, Author
Year of publication: 2022
Pagination: pp 29 - 38
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Mixed image processing
[IGN terms] C band
[IGN terms] canopy
[IGN terms] vegetation map
[IGN terms] backscatter coefficient
[IGN terms] Landsat-OLI image
[IGN terms] Sentinel-SAR image
[IGN terms] vegetation index
[IGN terms] data integration
[IGN terms] local forest inventory
[IGN terms] Pinus (genus)
[IGN terms] Pinus ponderosa
[IGN terms] classification accuracy
[IGN terms] Soil Adjusted Vegetation Index
Abstract: (Author) The mapping of southern yellow pines (loblolly, shortleaf, and Virginia pines) is important to supporting forest inventory and the management of forest resources. The overall aim of this study was to examine the integration of Landsat Operational Land Imager (OLI) optical data with Sentinel-1 microwave C-band satellite data and vegetation indices in mapping the canopy cover of southern yellow pines. Specifically, this study assessed the overall mapping accuracies of the canopy cover classification of southern yellow pines derived using four data-integration scenarios: (1) Landsat OLI alone; (2) Landsat OLI and Sentinel-1; (3) Landsat OLI with vegetation indices derived from satellite data (normalized difference vegetation index, soil-adjusted vegetation index, modified soil-adjusted vegetation index, transformed soil-adjusted vegetation index, and infrared percentage vegetation index); and (4) Landsat OLI with Sentinel-1 and vegetation indices. The results showed that the integration of Landsat OLI reflectance bands with Sentinel-1 backscattering coefficients and vegetation indices yielded the best overall classification accuracy, about 77%, and standalone Landsat OLI the weakest accuracy, approximately 67%. The findings in this study demonstrate that adding Sentinel-1 backscattering coefficients and vegetation indices positively contributed to the mapping of southern yellow pines.
Record number: A2022-062
Authors' affiliation: non-IGN
Theme: FORESTRY/IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.21-00024R2
Online publication date: 01/01/2022
Online: https://doi.org/10.14358/PERS.21-00024R2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99706
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 1 (January 2022) . - pp 29 - 38 [article]
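For reference, most of the vegetation indices listed in the abstract have standard closed forms in the red and near-infrared reflectance bands. A minimal sketch follows (TSAVI is omitted because it additionally requires soil-line parameters); this is a generic illustration, not code from the article.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index; L is the soil-brightness factor."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

def msavi(nir, red):
    """Modified SAVI; the soil factor is solved for internally."""
    return (2*nir + 1 - np.sqrt((2*nir + 1)**2 - 8*(nir - red))) / 2

def ipvi(nir, red):
    """Infrared percentage vegetation index; equals (NDVI + 1) / 2."""
    return nir / (nir + red)

# Invented sample reflectances for a vegetated pixel
nir, red = np.array([0.5]), np.array([0.1])
print(float(ndvi(nir, red)[0]))  # ≈ 0.667
```

Indices like these are cheap per-pixel band ratios, which is why stacking them alongside the raw OLI bands adds discriminative power at little cost.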
Multi-view urban scene classification with a complementary-information learning model / Wanxuan Geng in Photogrammetric Engineering & Remote Sensing, PERS, vol 88 n° 1 (January 2022)
[article]
Title: Multi-view urban scene classification with a complementary-information learning model
Document type: Article/Paper
Authors: Wanxuan Geng, Author; Weixun Zhou, Author; Shuanggen Jin, Author
Year of publication: 2022
Pagination: pp 65 - 72
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Optical image processing
[IGN terms] machine learning
[IGN terms] support vector machine classification
[IGN terms] field data
[IGN terms] multi-source data
[IGN terms] feature extraction
[IGN terms] multi-source data fusion
[IGN terms] aerial image
[IGN terms] ground level
[IGN terms] classification accuracy
[IGN terms] urban scene
Abstract: (Author) Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary-information learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss combining cross-entropy and contrastive losses is used to make the network more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train a support vector machine classifier for classification. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
Record number: A2022-063
Authors' affiliation: non-IGN
Theme: IMAGERY/URBAN PLANNING
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.14358/PERS.21-00062R2
Online publication date: 01/01/2022
Online: https://doi.org/10.14358/PERS.21-00062R2
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99708
in Photogrammetric Engineering & Remote Sensing, PERS > vol 88 n° 1 (January 2022) . - pp 65 - 72 [article]
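The unified loss described in the abstract (cross-entropy for classification plus a contrastive term on aerial/ground feature pairs) can be sketched in NumPy. This is a generic illustration of the two loss components, with an invented weighting factor `lam` and the classic pairwise contrastive form; it is not the authors' implementation.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Softmax cross-entropy, averaged over the batch
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def contrastive(f_air, f_ground, same, margin=1.0):
    # Pairwise contrastive loss: pull matched aerial/ground feature
    # pairs together, push mismatched pairs apart by at least `margin`
    d = np.linalg.norm(f_air - f_ground, axis=1)
    return (same * d**2 + (1 - same) * np.maximum(0.0, margin - d)**2).mean()

def unified_loss(logits, labels, f_air, f_ground, same, lam=1.0):
    # lam balances the two terms (hypothetical weighting, for illustration)
    return cross_entropy(logits, labels) + lam * contrastive(f_air, f_ground, same)
```

The contrastive term is what forces the two view-specific branches to embed matching scenes near each other, so the fused features carry complementary rather than redundant information.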