Descriptor
IGN terms: imagerie > image spatiale > image satellite > image à haute résolution
Documents available in this category: 352
Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network / Jianfeng Huang in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network
Document type: Article/Communication
Authors: Jianfeng Huang; Xinchang Zhang; Qinchuan Xin; et al.
Publication year: 2019
Pages: pp 91-105
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] détection du bâti
[Termes IGN] image à haute résolution
[Termes IGN] réseau neuronal convolutif
[Termes IGN] résidu
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
[Termes IGN] zone urbaine
Abstract: (author) Automated extraction of buildings from remotely sensed data is important for a wide range of applications but challenging due to difficulties in extracting semantic features from complex scenes such as urban areas. Recently developed fully convolutional networks (FCNs) have been shown to perform well on urban object extraction thanks to their outstanding feature-learning and end-to-end pixel-labeling abilities. However, the feature-fusion and skip-connection refinement modules commonly used in FCNs often overlook the problem of feature selection, which can reduce the learning efficiency of the networks. In this paper, we develop an end-to-end trainable gated residual refinement network (GRRNet) that fuses high-resolution aerial images and LiDAR point clouds for building extraction. A modified residual learning network is used as the encoder of GRRNet to learn multi-level features from the fused data, and a gated feature labeling (GFL) unit is introduced to reduce unnecessary feature transmission and refine classification results. The proposed model, GRRNet, is tested on a publicly available dataset with urban and suburban scenes. Comparison results show that GRRNet achieves competitive building extraction performance relative to other approaches. The source code of GRRNet is publicly available.
Record number: A2019-206
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.019
Online publication date: 20/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92669
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019). - pp 91-105 [article]
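The gating idea summarized in the abstract can be illustrated with a minimal, hypothetical NumPy sketch (not the authors' GRRNet code; the function and weight names are invented for illustration): a sigmoid gate computed from the input features controls how much of a refinement residual is added back through the skip connection.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual_refine(features, w_gate, w_res):
    """Minimal sketch of a gated residual unit: a learned gate
    modulates how much of the residual branch is added back.
    features: (n, d) array; w_gate, w_res: (d, d) weight matrices."""
    gate = sigmoid(features @ w_gate)     # per-feature gate in (0, 1)
    residual = np.tanh(features @ w_res)  # refinement branch
    return features + gate * residual     # gated skip connection

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8))
out = gated_residual_refine(f, rng.normal(size=(8, 8)) * 0.1,
                            rng.normal(size=(8, 8)) * 0.1)
print(out.shape)  # (4, 8)
```

With zero residual weights the unit reduces to an identity mapping, which is the property that makes such gated skip connections easy to train.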
Copies (3)
Barcode       Call no.   Medium   Location                 Section            Availability
081-2019051   RAB        Journal  Centre de documentation  In reserve L003    Available
081-2019053   DEP-RECP   Journal  LASTIG                   Deposited in unit  Not for loan
081-2019052   DEP-RECF   Journal  Nancy                    Deposited in unit  Not for loan

Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM) / Wenzhi Zhao in ISPRS Journal of photogrammetry and remote sensing, vol 151 (May 2019)
[article]
Title: Exploring semantic elements for urban scene recognition: Deep integration of high-resolution imagery and OpenStreetMap (OSM)
Document type: Article/Communication
Authors: Wenzhi Zhao; Yanchen Bo; Jiage Chen; et al.
Publication year: 2019
Pages: pp 237-250
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] apprentissage profond
[Termes IGN] classe sémantique
[Termes IGN] compréhension de l'image
[Termes IGN] fusion de données
[Termes IGN] image à haute résolution
[Termes IGN] reconnaissance d'objets
[Termes IGN] scène urbaine
Abstract: (author) Urban scenes are city blocks, the basic units of megacities; they play an important role in citizens' welfare and city management. Remote sensing imagery, with its large-scale coverage and accurate target descriptions, is regarded as an ideal solution for monitoring the urban environment. However, due to the heterogeneity of remote sensing images, it is difficult to access their geographical content at the object level, let alone understand urban scenes at the block level. Recently, deep learning-based strategies have been applied to interpret urban scenes with remarkable accuracy. However, deep neural networks require a substantial number of training samples, which are hard to obtain, especially for high-resolution images. Meanwhile, crowd-sourced OpenStreetMap (OSM) data provides rich annotation information about urban targets but may suffer from insufficient sampling (limited by the places people can go). The combination of OSM and remote sensing images for efficient urban scene recognition is therefore urgently needed. In this paper, we present a novel strategy to transfer existing OSM data to high-resolution images for semantic element determination and urban scene understanding. Specifically, an object-based convolutional neural network (OCNN) is used for geographical object detection, fed with rich semantic elements derived from OSM data. Geographical objects are then assigned functional labels by integrating points of interest (POIs), which carry rich semantic terms such as commercial or educational labels. Lastly, the categories of urban scenes are derived from the semantic objects they contain. Experimental results indicate that the proposed method can classify complex urban scenes. The classification accuracies on the Beijing dataset are as high as 91% at the object level and 88% at the scene level. Additionally, we are probably the first to investigate object-level semantic mapping by combining high-resolution images and OSM data of urban areas. The method is thus effective in delineating urban scenes and could further support urban environment monitoring and planning with high-resolution images.
Record number: A2019-209
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.03.019
Online publication date: 29/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.03.019
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92675
in ISPRS Journal of photogrammetry and remote sensing > vol 151 (May 2019). - pp 237-250 [article]
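The final step the abstract describes (deriving a block's scene category from the semantic objects it contains) can be sketched as a toy majority vote over object-level functional labels; this is a hypothetical illustration, not the authors' pipeline.

```python
from collections import Counter

def scene_category(object_labels):
    """Toy sketch: a block's scene category is taken from the
    functional labels of the semantic objects inside it, here by
    simple majority vote (the real method integrates POIs and an
    object-based CNN, which this sketch omits)."""
    if not object_labels:
        return "unknown"
    return Counter(object_labels).most_common(1)[0][0]

# hypothetical block containing four detected objects
block = ["commercial", "commercial", "residential", "educational"]
print(scene_category(block))  # commercial
```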
Copies (3)
Barcode       Call no.   Medium   Location                 Section            Availability
081-2019051   RAB        Journal  Centre de documentation  In reserve L003    Available
081-2019053   DEP-RECP   Journal  LASTIG                   Deposited in unit  Not for loan
081-2019052   DEP-RECF   Journal  Nancy                    Deposited in unit  Not for loan

Including Sentinel-1 radar data to improve the disaggregation of MODIS land surface temperature data / Abdelhakim Amazirh in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Including Sentinel-1 radar data to improve the disaggregation of MODIS land surface temperature data
Document type: Article/Communication
Authors: Abdelhakim Amazirh; Olivier Merlin; Salah Er-Raki
Publication year: 2019
Pages: pp 11-26
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] désagrégation
[Termes IGN] humidité du sol
[Termes IGN] image à haute résolution
[Termes IGN] image Landsat
[Termes IGN] image Landsat-8
[Termes IGN] image Sentinel-SAR
[Termes IGN] image Terra-MODIS
[Termes IGN] Maroc
[Termes IGN] modèle de transfert radiatif
[Termes IGN] réflectance spectrale
[Termes IGN] régression multiple
[Termes IGN] température au sol
[Termes IGN] zone semi-aride
Abstract: (author) Using land surface temperature (LST) to monitor the water consumption and water status of crops requires data at fine spatial and temporal resolutions. Unfortunately, current spaceborne thermal sensors provide data at either high temporal (e.g. MODIS: Moderate Resolution Imaging Spectroradiometer) or high spatial (e.g. Landsat) resolution, but not both. Disaggregating low spatial resolution (LR) LST data using ancillary data available at high spatio-temporal resolution can compensate for the lack of high spatial resolution (HR) LST observations. Existing LST downscaling approaches generally rely on the fractional green vegetation cover (fgv) derived from HR reflectances, but they do not take into account soil water availability to explain the spatial variability of LST at HR. In this context, a new method is developed to disaggregate kilometric MODIS LST to 100 m resolution by including the Sentinel-1 (S-1) backscatter, which is indirectly linked to surface soil moisture, in addition to the Landsat-7 and Landsat-8 (L-7 & L-8) reflectances. The approach is tested over two different sites, an 8 km by 8 km irrigated crop area named "R3" and a 12 km by 12 km rainfed area named "Sidi Rahal" in central Morocco (Marrakech), on the seven dates when S-1 and L-7 or L-8 acquisitions coincide within one day during the 2015-2016 growing season. The downscaling methods are applied to the 1 km resolution MODIS-Terra LST data, and their performance is assessed by comparing the 100 m disaggregated LST to Landsat LST in three cases: no disaggregation, disaggregation using Landsat fgv only, and disaggregation using both Landsat fgv and S-1 backscatter. When including fgv only in the disaggregation procedure, the mean root mean square error in LST decreases from 4.20 to 3.60 °C and the mean correlation coefficient (R) increases from 0.45 to 0.69 compared to the non-disaggregated case within R3. The new methodology including the S-1 backscatter as input to the disaggregation is found to be systematically more accurate on the available dates, with a mean disaggregation error decreasing to 3.35 °C and a mean R increasing to 0.75.
Record number: A2019-136
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.004
Online publication date: 15/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.004
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92467
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp 11-26 [article]
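The general downscaling scheme the abstract describes can be sketched generically in NumPy (a hypothetical illustration in the spirit of predictor-based LST disaggregation, not the authors' implementation): fit a linear model between LR LST and LR predictors (such as fgv and S-1 backscatter), apply it at HR, and add back each coarse pixel's model residual so coarse-pixel averages are preserved.

```python
import numpy as np

def disaggregate_lst(lst_lr, preds_lr, preds_hr, scale):
    """Generic downscaling sketch with residual correction.
    lst_lr: (H, W) low-resolution LST; preds_lr: (H, W, k) predictors
    aggregated to LR; preds_hr: (H*scale, W*scale, k) HR predictors."""
    # fit LST ~ predictors + intercept at low resolution
    X = np.column_stack([preds_lr.reshape(-1, preds_lr.shape[-1]),
                         np.ones(lst_lr.size)])
    coef, *_ = np.linalg.lstsq(X, lst_lr.ravel(), rcond=None)
    resid_lr = lst_lr.ravel() - X @ coef  # model residual at LR
    # apply the fitted model at high resolution
    n_hr = preds_hr.shape[0] * preds_hr.shape[1]
    Xh = np.column_stack([preds_hr.reshape(-1, preds_hr.shape[-1]),
                          np.ones(n_hr)])
    lst_hr = (Xh @ coef).reshape(preds_hr.shape[:2])
    # redistribute each coarse pixel's residual to its fine pixels
    lst_hr += np.kron(resid_lr.reshape(lst_lr.shape),
                      np.ones((scale, scale)))
    return lst_hr
```

Because the model is linear and the residual is added back per coarse pixel, averaging the disaggregated field over each coarse pixel recovers the input LR LST, which is the usual consistency requirement for such schemes.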
Copies (3)
Barcode       Call no.   Medium   Location                 Section            Availability
081-2019041   RAB        Journal  Centre de documentation  In reserve L003    Available
081-2019043   DEP-RECP   Journal  LASTIG                   Deposited in unit  Not for loan
081-2019042   DEP-RECF   Journal  Nancy                    Deposited in unit  Not for loan

Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective / Mohammad D. Hossain in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective
Document type: Article/Communication
Authors: Mohammad D. Hossain; Dongmei Chen
Publication year: 2019
Pages: pp 115-134
General note: bibliography
Languages: English (eng)
Descriptor: [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse d'image orientée objet
[Termes IGN] appariement de données localisées
[Termes IGN] apprentissage automatique
[Termes IGN] classification hybride
[Termes IGN] image à haute résolution
[Termes IGN] objet géographique
[Termes IGN] segmentation d'image
[Termes IGN] segmentation en régions
[Termes IGN] segmentation par décomposition-fusion
Abstract: (author) Image segmentation is a critical step in (GEographic) Object-Based Image Analysis (GEOBIA or OBIA). The final feature extraction and classification in OBIA depend heavily on the quality of image segmentation. Segmentation has been used in remote sensing image processing since the advent of the Landsat-1 satellite. However, after the launch of the high-resolution IKONOS satellite in 1999, the image-analysis paradigm moved from pixel-based to object-based, and the purpose of segmentation shifted from supporting pixel labeling to object identification. Although several articles have reviewed segmentation algorithms, it remains unclear whether some segmentation algorithms are generally better suited to (GE)OBIA than others. This article conducts an extensive state-of-the-art survey of OBIA techniques and discusses different segmentation techniques and their applicability to OBIA. Conceptual details of these techniques are explained along with their strengths and weaknesses, and the available tools and software packages for segmentation are summarized. The key challenge in image segmentation is to select optimal parameters and algorithms that can generate image objects matching meaningful geographic objects. Recent research indicates a clear movement towards improving segmentation algorithms, aiming at more accurate, automated, and computationally efficient techniques.
Record number: A2019-138
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.009
Online publication date: 23/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.009
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92469
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp 115-134 [article]
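As a toy illustration of the region-based segmentation family covered by this review, the sketch below flood-fills pixels into segments by spectral similarity to a seed pixel. This is a hypothetical example, not code from the article; real OBIA segmenters (e.g. multiresolution segmentation) additionally weigh shape, scale, and texture.

```python
from collections import deque
import numpy as np

def region_grow(img, tol):
    """Label connected segments: a pixel joins a region when it
    differs from that region's seed by at most `tol` (4-connectivity).
    Returns an integer label image of the same shape as `img`."""
    labels = np.full(img.shape, -1, dtype=int)
    current = 0
    for seed in np.ndindex(img.shape):
        if labels[seed] != -1:
            continue
        q = deque([seed])
        labels[seed] = current
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and labels[ny, nx] == -1
                        and abs(img[ny, nx] - img[seed]) <= tol):
                    labels[ny, nx] = current
                    q.append((ny, nx))
        current += 1
    return labels

# two spectrally distinct regions in a tiny "image"
img = np.array([[0.0, 0.1, 5.0],
                [0.1, 0.0, 5.1],
                [0.2, 5.2, 5.0]])
print(region_grow(img, 1.0))
```

The tolerance plays the role of the scale parameter discussed in the review: too small and objects over-segment, too large and distinct geographic objects merge.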
Copies (3)
Barcode       Call no.   Medium   Location                 Section            Availability
081-2019041   RAB        Journal  Centre de documentation  In reserve L003    Available
081-2019043   DEP-RECP   Journal  LASTIG                   Deposited in unit  Not for loan
081-2019042   DEP-RECF   Journal  Nancy                    Deposited in unit  Not for loan

Comparaison de MNT à haute résolution issus de techniques laser et photogrammétriques / Michel Kasser in XYZ, n° 158 (mars 2019)
[article]
Title: Comparaison de MNT à haute résolution issus de techniques laser et photogrammétriques [Comparison of high-resolution DTMs derived from laser and photogrammetric techniques]
Document type: Article/Communication
Authors: Michel Kasser; Nicolas Delley; Stéphane Cretegny
Publication year: 2019
Pages: pp 17-20
General note: bibliography
Languages: French (fre)
Descriptor: [Vedettes matières IGN] Acquisition d'image(s) et de donnée(s)
[Termes IGN] analyse comparative
[Termes IGN] image à haute résolution
[Termes IGN] image captée par drone
[Termes IGN] modèle numérique de terrain
[Termes IGN] montagne
[Termes IGN] photogrammétrie aérienne
[Termes IGN] télémétrie laser aéroporté
[Termes IGN] Vaud (Suisse)
Abstract: (author) As part of a genomic study of high-altitude plants requiring extremely accurate terrain models, a study comparing models acquired with different tools was carried out on sites free of tall vegetation. Several possible explanations for the observed differences are presented.
Record number: A2019-082
Authors' affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueNat
DOI: none
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92219
in XYZ > n° 158 (mars 2019). - pp 17-20 [article]
Copies (1)
Barcode       Call no.   Medium   Location                 Section            Availability
112-2019011   RAB        Journal  Centre de documentation  In reserve L003    Available

More results in this category:
- DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn / Roberto Interdonato in ISPRS Journal of photogrammetry and remote sensing, vol 149 (March 2019)
- Evaluation of time-series SAR and optical images for the study of winter land-use / Julien Denize (2019)
- Individual tree detection and crown delineation with 3D information from multi-view satellite Images / Changlin Xiao in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 1 (January 2019)
- A multi-faceted CNN architecture for automatic classification of mobile LiDAR data and an algorithm to reproduce point cloud samples for enhanced training / Bhavesh Kumar in ISPRS Journal of photogrammetry and remote sensing, vol 147 (January 2019)
- Sensitivity of urban material classification to spatial and spectral configurations from visible to short-wave infrared / Arnaud Le Bris (2019)
- Automatic building rooftop extraction from aerial images via hierarchical RGB-D priors / Shibiao Xu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)
- Scene classification based on multiscale convolutional neural network / Yanfei Liu in IEEE Transactions on geoscience and remote sensing, vol 56 n° 12 (December 2018)