Descriptor
IGN terms > imagery > spatial image > satellite image > very high resolution image
Documents available in this category (305)
Classification of tree species in a heterogeneous urban environment using object-based ensemble analysis and World View-2 satellite imagery / Simbarashe Jombo in Applied geomatics, vol 13 n° 3 (September 2021)
[article]
Title: Classification of tree species in a heterogeneous urban environment using object-based ensemble analysis and World View-2 satellite imagery Document type: Article/Communication Authors: Simbarashe Jombo; Elhadi Adam; John Odindi Year of publication: 2021 Pagination: pp 373 - 387 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] urban tree
[IGN terms] random forest classification
[IGN terms] support vector machine classification
[IGN terms] plant species
[IGN terms] very high resolution image
[IGN terms] multiband image
[IGN terms] Worldview image
[IGN terms] vegetation index
[IGN terms] Johannesburg
[IGN terms] image segmentation
Abstract: (author) Urban trees are valuable in, inter alia, ameliorating air pollution and mitigating the effects associated with urban heat islands. The dearth of tree cover maps is a major challenge for urban planners in the management of urban trees. This work adopts remote sensing approaches to provide urban tree cover maps that can strengthen urban landscape management. Whereas traditional pixel-based classification approaches have been commonly used in image classification, they are not well suited to urban tree mapping because they fail to fully exploit an image's spatial and spectral characteristics. Object-based classification techniques produce improved accuracies by using additional variables. This study demonstrates the capability of object-based image analysis (OBIA) in mapping common urban trees using very high-resolution (VHR) WorldView-2 (WV-2) imagery. The study tests the utility of WV-2 bands and other feature variables in the object-based mapping of common urban trees and other land cover classes, and compares the utility of Support Vector Machine (SVM) and Random Forest (RF) classifiers for this task. The results show that the Normalized Difference Vegetation Index (NDVI) and the NIR 1 and NIR 2 bands were important in the classification of common urban trees and other land cover classes. The RF classifier performed better than SVM, with an overall accuracy of 91.9% compared to 87.3% for SVM. The results offer urban authorities insight into the segmentation parameters, classification methods and feature variables for mapping urban trees, valuable in urban tree management.
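The object-based comparison described in this abstract (RF vs. SVM on per-segment features) can be sketched as follows. This is not the authors' code: the features, labels and parameters below are synthetic stand-ins for the WorldView-2 band statistics, NDVI values and tree-species labels used in the study.

```python
# Sketch: comparing Random Forest and SVM classifiers on per-object
# feature vectors, as in an OBIA workflow. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_objects, n_features = 600, 10          # e.g. 8 WV-2 bands + NDVI + texture
X = rng.normal(size=(n_objects, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # two stand-in classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

oa_rf = accuracy_score(y_te, rf.predict(X_te))
oa_svm = accuracy_score(y_te, svm.predict(X_te))
print(f"RF overall accuracy:  {oa_rf:.3f}")
print(f"SVM overall accuracy: {oa_svm:.3f}")
# Feature importances indicate which variables (e.g. NDVI, NIR bands)
# drive the random forest, mirroring the variable-importance analysis.
print(rf.feature_importances_.round(2))
```

Overall accuracy on a held-out set is the figure the abstract reports (91.9% for RF vs. 87.3% for SVM on the real data).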
Record number: A2021-624 Author affiliation: non IGN Theme: FOREST/IMAGERY Nature: Article DOI: 10.1007/s12518-021-00358-3 Online publication date: 20/01/2021 Online: https://doi.org/10.1007/s12518-021-00358-3 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98248
in Applied geomatics > vol 13 n° 3 (September 2021) . - pp 373 - 387
[article]
A deep translation (GAN) based change detection network for optical and SAR remote sensing images / Xinghua Li in ISPRS Journal of photogrammetry and remote sensing, vol 179 (September 2021)
[article]
Title: A deep translation (GAN) based change detection network for optical and SAR remote sensing images Document type: Article/Communication Authors: Xinghua Li; Zhengshun Du; Yanyuan Huang; Zhenyu Tan Year of publication: 2021 Pagination: pp 14 - 34 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Mixed image processing
[IGN terms] change detection
[IGN terms] very high resolution image
[IGN terms] optical image
[IGN terms] speckled radar image
[IGN terms] Sentinel-SAR image
[IGN terms] robust method
[IGN terms] polarization
[IGN terms] generative adversarial network
[IGN terms] deep neural network
[IGN terms] region of interest
Abstract: (publisher) With the development of space-based imaging technology, an increasing number of images with different modalities and resolutions are available. Optical images capture the rich spectral information and geometric shape of ground objects, but their quality degrades easily in poor atmospheric conditions. Although synthetic aperture radar (SAR) images cannot provide spectral features of the region of interest (ROI), they capture all-weather, all-time polarization information. Optical and SAR images therefore carry a great deal of complementary information, which is of great significance for change detection (CD) in poor weather. However, because their imaging mechanisms differ, it is difficult to perform CD on them directly using the traditional difference or ratio algorithms. Most recent CD methods introduce image translation to reduce this difference, but the results are obtained by ordinary algebraic methods and threshold segmentation, with limited accuracy. To this end, this work proposes a deep translation based change detection network (DTCDN) for optical and SAR images. The deep translation first maps images from one domain (e.g., optical) to another (e.g., SAR) through a cyclic structure into the same feature space. With similar characteristics after deep translation, the images become comparable. Unlike most previous research, the translation results are fed into a supervised CD network that uses deep context features to separate unchanged and changed pixels. In the experiments, the proposed DTCDN was tested on four representative data sets from Gloucester, California, and Shuguang village. Compared with state-of-the-art methods, the effectiveness and robustness of the proposed method were confirmed.
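For context, the classical "translate then difference" baseline that DTCDN improves on can be sketched in a few lines. This is an illustrative assumption, not the paper's network: arrays stand in for an optical image translated into the SAR domain and a real SAR image, and change is flagged where their absolute difference exceeds a simple statistical threshold.

```python
# Sketch of the algebraic difference-plus-threshold baseline mentioned
# in the abstract; the GAN translation step itself is not reproduced here.
import numpy as np

def difference_change_map(translated: np.ndarray, target: np.ndarray,
                          k: float = 2.0) -> np.ndarray:
    """Binary change map: pixels whose absolute difference exceeds
    mean + k standard deviations of the difference image."""
    diff = np.abs(translated.astype(float) - target.astype(float))
    thresh = diff.mean() + k * diff.std()
    return diff > thresh

rng = np.random.default_rng(1)
t1 = rng.normal(size=(64, 64))         # stand-in for the translated image
t2 = t1.copy()
t2[20:30, 20:30] += 5.0                # simulated changed region
cmap = difference_change_map(t1, t2)
print("changed pixels:", int(cmap.sum()))
```

The paper's point is that such hand-set thresholds have limited accuracy, which motivates replacing this step with a supervised CD network.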
Record number: A2021-574 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1016/j.isprsjprs.2021.07.007 Online publication date: 23/07/2021 Online: https://doi.org/10.1016/j.isprsjprs.2021.07.007 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98174
in ISPRS Journal of photogrammetry and remote sensing > vol 179 (September 2021) . - pp 14 - 34
[article]
Copies (3)
Barcode | Call number | Medium | Location | Section | Availability
081-2021091 | SL | Journal | Documentation centre | Journals room | Available
081-2021093 | DEP-RECP | Journal | LASTIG | Unit deposit | Not for loan
081-2021092 | DEP-RECF | Journal | Nancy | Unit deposit | Not for loan
Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area / Xungen Li in ISPRS International journal of geo-information, vol 10 n° 8 (August 2021)
[article]
Title: Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area Document type: Article/Communication Authors: Xungen Li; Feifei Men; Shuaishuai Lv; et al. Year of publication: 2021 Pagination: n° 549 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] comparative analysis
[IGN terms] convolutional neural network classification
[IGN terms] target detection
[IGN terms] very high resolution image
[IGN terms] aerial image
[IGN terms] vehicle
Abstract: (author) Vehicle detection in aerial images is a challenging task. The complexity of the background information and the redundancy of the detection area are the main obstacles that limit the success of anchor-based vehicle detection in very-high-resolution (VHR) remote sensing images. In this paper, an anchor-free target detection method is proposed to solve these problems. First, a multi-attention feature pyramid network (MA-FPN) was designed to address the influence of noise and background information on vehicle detection by fusing attention information into the feature pyramid network (FPN) structure. Second, a more precise foveal area (MPFA) is proposed to provide better ground truth for the anchor-free method by determining a more accurate positive-sample selection area. The proposed anchor-free model with MA-FPN and MPFA can predict vehicles accurately and quickly in VHR remote sensing images through direct regression on the pixels of the feature map. A detailed evaluation on the remote sensing image (RSI) and vehicle detection in aerial imagery (VEDAI) data sets shows that our detection method performs well, the network is simple, and detection is fast.
Record number: A2021-589 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.3390/ijgi10080549 Online publication date: 14/08/2021 Online: https://doi.org/10.3390/ijgi10080549 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98209
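The "more precise foveal area" idea, in which only a shrunken central region of each ground-truth box provides positive training samples so that ambiguous border pixels are excluded, can be illustrated as follows. The shrink factor and box geometry here are hypothetical, not the paper's values.

```python
# Illustrative sketch of a foveal (shrunken-center) positive-sample mask,
# the kind of region anchor-free detectors regress from. Not the paper's code.
import numpy as np

def foveal_mask(h: int, w: int, box: tuple, shrink: float = 0.4) -> np.ndarray:
    """Boolean (h, w) mask covering the central `shrink` fraction of
    `box`, given as (x1, y1, x2, y2) pixel coordinates."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    half_w = (x2 - x1) * shrink / 2
    half_h = (y2 - y1) * shrink / 2
    ys, xs = np.mgrid[0:h, 0:w]
    return (np.abs(xs - cx) <= half_w) & (np.abs(ys - cy) <= half_h)

mask = foveal_mask(100, 100, (10, 10, 50, 50))
print("positive pixels:", int(mask.sum()))   # far fewer than the full box
```

Only the pixels inside this central mask would be treated as positives during training, which is what "a more accurate positive sample selection area" refers to.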
in ISPRS International journal of geo-information > vol 10 n° 8 (August 2021) . - n° 549
[article]
Comparison of classification methods for urban green space extraction using very high resolution worldview-3 imagery / S. Vigneshwaran in Geocarto international, vol 36 n° 13 ([15/07/2021])
[article]
Title: Comparison of classification methods for urban green space extraction using very high resolution worldview-3 imagery Document type: Article/Communication Authors: S. Vigneshwaran; S. Vasantha Kumar Year of publication: 2021 Pagination: pp 1429 - 1442 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Remote sensing applications
[IGN terms] vegetation map
[IGN terms] supervised classification
[IGN terms] unsupervised classification
[IGN terms] object-based classification
[IGN terms] green space
[IGN terms] urban flora
[IGN terms] very high resolution image
[IGN terms] Worldview image
[IGN terms] India
[IGN terms] Normalized Difference Vegetation Index
[IGN terms] urban planning
Abstract: (author) Urban green space (UGS) plays a vital role in maintaining the ecological balance of a city and in ensuring the healthy living of its inhabitants. It is generally suggested that one-third of a city should be covered by greenery, and to ensure this, city administrators must have an accurate map of the existing UGS. Such a map is useful for visualizing the distribution of the existing green cover and for identifying areas that could be converted to UGS. Reported studies on UGS mapping have mostly used medium and high resolution images such as Landsat-TM, ETM+, Sentinel-2A, IKONOS, etc.; studies using very high resolution images for UGS extraction are very limited. The present study is a first attempt at utilizing very high resolution Worldview-3 imagery for UGS extraction. The performance of different classification methods, namely unsupervised, supervised, object-based and normalized difference vegetation index (NDVI) classification, was compared using a pan-sharpened Worldview-3 image covering part of New Delhi, India. Unsupervised classification followed by manual recoding showed superior performance, with an overall accuracy (OA) of 99% and a κ coefficient of 0.98; this OA is the highest among reported studies on UGS extraction. The UGS map revealed that almost 40% of the study area is covered by greenery, more than the recommended value of 33% (one-third). To check the universality of the unsupervised classification approach, a Worldview-3 image covering Rio in Brazil was also tested; an OA of 98% and a κ coefficient of 0.95 were obtained, which clearly indicates that the proposed approach works very well for extracting UGS from any Worldview-3 imagery.
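The quantities this study reports, NDVI for vegetation extraction plus overall accuracy and the κ coefficient for evaluation, can be computed as sketched below. The band values and confusion matrix are invented for illustration, not the study's data.

```python
# Sketch of NDVI and of the standard OA / Cohen's kappa accuracy measures
# used to compare the classification methods. All numbers are illustrative.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

def kappa(cm: np.ndarray) -> float:
    """Cohen's kappa from a confusion matrix (rows = reference)."""
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement = OA
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

nir = np.array([0.6, 0.5, 0.2])                  # last pixel is non-vegetation
red = np.array([0.1, 0.1, 0.3])
print("NDVI:", ndvi(nir, red).round(2))          # vegetation has high NDVI

cm = np.array([[48, 2], [1, 49]])                # green vs. non-green counts
print("OA:", np.trace(cm) / cm.sum(), "kappa:", round(kappa(cm), 3))
```

κ discounts the agreement expected by chance, which is why the study reports it alongside OA (e.g. OA 99% with κ 0.98 for New Delhi).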
Record number: A2021-553 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1080/10106049.2019.1665714 Online publication date: 18/09/2019 Online: https://doi.org/10.1080/10106049.2019.1665714 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98104
in Geocarto international > vol 36 n° 13 [15/07/2021] . - pp 1429 - 1442
[article]
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
059-2021131 | RAB | Journal | Documentation centre | In reserve L003 | Available
Mask R-CNN-based building extraction from VHR satellite data in operational humanitarian action: An example related to Covid-19 response in Khartoum, Sudan / Dirk Tiede in Transactions in GIS, Vol 25 n° 3 (June 2021)
[article]
Title: Mask R-CNN-based building extraction from VHR satellite data in operational humanitarian action: An example related to Covid-19 response in Khartoum, Sudan Document type: Article/Communication Authors: Dirk Tiede; Gina Schwendemann; Ahmad Alobaidi; et al. Year of publication: 2021 Pagination: pp 1213-1227 General note: bibliography Language: English (eng) Descriptor: [IGN subject headings] Optical image processing
[IGN terms] object-based image analysis
[IGN terms] deep learning
[IGN terms] convolutional neural network classification
[IGN terms] building detection
[IGN terms] sampling
[IGN terms] epidemic
[IGN terms] crisis management
[IGN terms] HRV (sensor)
[IGN terms] very high resolution image
[IGN terms] Pléiades-HR image
[IGN terms] iteration
[IGN terms] Sudan
Abstract: (author) Within the constraints of operational work supporting humanitarian organizations in their response to the Covid-19 pandemic, we conducted building extraction for Khartoum, Sudan. We extracted approximately 1.2 million dwellings and buildings, using a Mask R-CNN deep learning approach, from a Pléiades very high-resolution satellite image with 0.5 m pixel resolution. Starting from an untrained network, we digitized a few hundred samples and iteratively increased the number of samples by validating initial classification results and adding them to the sample collection. We struck a balance between the need for timely information and the accuracy of the result by combining the output of three different models, each aimed at a distinctive type of building, in a post-processing workflow. We obtained a recall of 0.78, a precision of 0.77 and an F1 score of 0.78, and were able to deliver the first results only 10 days after the initial request. The procedure shows the great potential of convolutional neural network frameworks, in combination with GIS routines, for dwelling extraction even in an operational setting.
Record number: A2021-464 Author affiliation: non IGN Theme: IMAGERY Nature: Article nature-HAL: ArtAvecCL-RevueIntern DOI: 10.1111/tgis.12766 Online publication date: 06/05/2021 Online: https://doi.org/10.1111/tgis.12766 Electronic resource format: article URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=98060
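The evaluation metrics quoted in this abstract (recall, precision, F1) are derived from true-positive, false-positive and false-negative counts. The counts below are invented for illustration and do not reproduce the paper's exact figures.

```python
# Standard detection metrics from TP/FP/FN counts, as reported for the
# Mask R-CNN building extraction. Counts here are illustrative only.
def prf(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)               # correct detections / all detections
    recall = tp / (tp + fn)                  # correct detections / all buildings
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f1

p, r, f = prf(tp=77, fp=23, fn=22)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

F1 balances the two error types, which matters here because missing buildings (low recall) and spurious buildings (low precision) have different costs in a humanitarian-response setting.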
in Transactions in GIS > Vol 25 n° 3 (June 2021) . - pp 1213-1227
[article]
Other records in this category:
- Uncertainty management for robust probabilistic change detection from multi-temporal Geoeye-1 imagery / Mahmoud Salah in Applied geomatics, vol 13 n° 2 (June 2021)
- Rotation-invariant feature learning in VHR optical remote sensing images via nested siamese structure with double center loss / Ruoqiao Jiang in IEEE Transactions on geoscience and remote sensing, vol 59 n° 4 (April 2021)
- Apports des méthodes d'apprentissage profond pour la reconnaissance automatique des modes d'occupation des sols et d'objets par télédétection en milieu tropical / Guillaume Rousset (2021)
- Automated detection of individual Juniper tree location and forest cover changes using Google Earth Engine / Sudeera Wickramarathna in Annals of forest research, vol 64 n° 1 (2021)
- Mask R-CNN and OBIA fusion improves the segmentation of scattered vegetation in very high-resolution optical sensors / Emilio Guirado in Sensors, vol 21 n° 1 (January 2021)
- Steps-based tree crown delineation by analyzing local minima for counting the trees in very high resolution satellite imagery / Debasish Chakraborty in Geocarto international, vol 36 n° 1 ([01/01/2021])
- Urban construction waste with VHR remote sensing using multi-feature analysis and a hierarchical segmentation method / Qiang Chen in Remote sensing, vol 13 n° 1 (January-1 2021)
- Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery / Teja Kattenborn in Remote sensing in ecology and conservation, vol 6 n° 4 (December 2020)
- A framework for unsupervised wildfire damage assessment using VHR satellite images with PlanetScope data / Minkyung Chung in Remote sensing, vol 12 n° 22 (December-1 2020)