ISPRS Journal of photogrammetry and remote sensing / International society for photogrammetry and remote sensing (1980-), vol. 150. Published: 01/04/2019
[issue]
Is an issue of: ISPRS Journal of photogrammetry and remote sensing / International society for photogrammetry and remote sensing (1980-)
Copies (3)
Barcode | Call number | Format | Location | Section | Availability
---|---|---|---|---|---
081-2019041 | RAB | Journal | Centre de documentation | In reserve L003 | Available
081-2019043 | DEP-RECP | Journal | LASTIG | On deposit in unit | Not for loan
081-2019042 | DEP-RECF | Journal | Nancy | On deposit in unit | Not for loan
Contents
Including Sentinel-1 radar data to improve the disaggregation of MODIS land surface temperature data / Abdelhakim Amazirh in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Including Sentinel-1 radar data to improve the disaggregation of MODIS land surface temperature data
Document type: Article/Communication
Authors: Abdelhakim Amazirh; Olivier Merlin; Salah Er-Raki
Publication year: 2019
Pages: pp. 11-26
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image radar et applications
[Termes IGN] désagrégation
[Termes IGN] humidité du sol
[Termes IGN] image à haute résolution
[Termes IGN] image Landsat
[Termes IGN] image Landsat-8
[Termes IGN] image Sentinel-SAR
[Termes IGN] image Terra-MODIS
[Termes IGN] Maroc
[Termes IGN] modèle de transfert radiatif
[Termes IGN] réflectance spectrale
[Termes IGN] régression multiple
[Termes IGN] température au sol
[Termes IGN] zone semi-aride
Abstract: (Author) The use of land surface temperature (LST) for monitoring the consumption and water status of crops requires data at fine spatial and temporal resolutions. Unfortunately, current spaceborne thermal sensors provide data at either high temporal (e.g. MODIS: Moderate Resolution Imaging Spectroradiometer) or high spatial (e.g. Landsat) resolution, but not both. Disaggregating low spatial resolution (LR) LST data using ancillary data available at high spatio-temporal resolution can compensate for the lack of high spatial resolution (HR) LST observations. Existing LST downscaling approaches generally rely on the fractional green vegetation cover (fgv) derived from HR reflectances, but they do not take into account the soil water availability that also explains the spatial variability in LST at HR. In this context, a new method is developed to disaggregate kilometric MODIS LST to 100 m resolution by including the Sentinel-1 (S-1) backscatter, which is indirectly linked to surface soil moisture, in addition to the Landsat-7 and Landsat-8 (L-7 & L-8) reflectances. The approach is tested over two different sites – an 8 km by 8 km irrigated crop area named “R3” and a 12 km by 12 km rainfed area named “Sidi Rahal” in central Morocco (Marrakech) – on the seven dates when S-1 and L-7 or L-8 acquisitions coincide within one day during the 2015–2016 growing season. The downscaling methods are applied to the 1 km resolution MODIS-Terra LST data, and their performance is assessed by comparing the 100 m disaggregated LST to Landsat LST in three cases: no disaggregation, disaggregation using Landsat fgv only, and disaggregation using both Landsat fgv and S-1 backscatter. When including fgv only in the disaggregation procedure, the mean root mean square error in LST decreases from 4.20 to 3.60 °C and the mean correlation coefficient (R) increases from 0.45 to 0.69 compared to the non-disaggregated case within R3.
The new methodology including the S-1 backscatter as input to the disaggregation is found to be systematically more accurate on the available dates, with a mean disaggregation error decreasing to 3.35 °C and a mean R increasing to 0.75.
Record number: A2019-136
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.004
Online publication date: 15/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.004
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92467
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 11-26 [article]
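The downscaling scheme summarized in this record (calibrate a regression of LST against fgv and radar backscatter at the coarse scale, apply it at the fine scale, and redistribute the coarse-scale residual) can be illustrated with a minimal sketch. This is not the authors' implementation; all function and variable names are hypothetical, and a plain least-squares fit stands in for whatever regression the paper actually uses.

```python
import numpy as np

def disaggregate_lst(lst_lr, fgv_hr, sigma0_hr, scale):
    """Sketch of regression-based LST downscaling.

    lst_lr    : (H, W) coarse-resolution LST (e.g. 1 km MODIS)
    fgv_hr    : (H*scale, W*scale) fractional green vegetation cover
    sigma0_hr : (H*scale, W*scale) radar backscatter (e.g. Sentinel-1)
    scale     : resolution ratio (e.g. 10 for 1 km -> 100 m)
    """
    def block_mean(a):
        # Aggregate an HR field to the LR grid by block averaging
        H, W = a.shape[0] // scale, a.shape[1] // scale
        return a.reshape(H, scale, W, scale).mean(axis=(1, 3))

    fgv_lr, sig_lr = block_mean(fgv_hr), block_mean(sigma0_hr)

    # Calibrate a multiple linear regression at LR: LST ~ a*fgv + b*sigma0 + c
    A = np.column_stack([fgv_lr.ravel(), sig_lr.ravel(), np.ones(fgv_lr.size)])
    coef, *_ = np.linalg.lstsq(A, lst_lr.ravel(), rcond=None)

    # Apply the model at HR, then add back the LR residual so that the
    # disaggregated field remains consistent with the coarse observation
    lst_model_lr = (A @ coef).reshape(lst_lr.shape)
    residual_hr = np.kron(lst_lr - lst_model_lr, np.ones((scale, scale)))
    lst_model_hr = coef[0] * fgv_hr + coef[1] * sigma0_hr + coef[2]
    return lst_model_hr + residual_hr
```

By construction, block-averaging the output reproduces the input coarse LST, which is the usual sanity check for a disaggregation scheme.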
Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments / Zhipeng Luo in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments
Document type: Article/Communication
Authors: Zhipeng Luo; Jonathan Li; Zhenlong Xiao; et al.
Publication year: 2019
Pages: pp. 44-58
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] fusion de données
[Termes IGN] jointure spatiale
[Termes IGN] objet 3D
[Termes IGN] reconnaissance d'objets
[Termes IGN] représentation multiple
[Termes IGN] réseau neuronal convolutif
[Termes IGN] semis de points
Abstract: (Author) Most existing 3D object recognition methods still suffer from low descriptiveness and weak robustness, although remarkable progress has been made in 3D computer vision. The major challenge lies in effectively mining high-level 3D shape features. This paper presents a high-level feature learning framework for 3D object recognition that fuses multiple 2D representations of point clouds. The framework has two key components: (1) three discriminative low-level 3D shape descriptors for obtaining multi-view 2D representations of 3D point clouds; these descriptors preserve both local and global spatial relationships of points from different perspectives and build a bridge between 3D point clouds and 2D convolutional neural networks (CNNs); (2) a two-stage fusion network, consisting of a deep feature learning module and two fusion modules, for extracting and fusing high-level features. The proposed method was tested on three datasets, one of which is the Sydney Urban Objects dataset; the other two were acquired by a mobile laser scanning (MLS) system along urban roads. Comprehensive experiments demonstrate that our method is superior to state-of-the-art methods in descriptiveness, robustness and efficiency, achieving recognition rates of 94.6%, 93.1% and 74.9% on the three datasets, respectively.
Record number: A2019-137
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.01.024
Online publication date: 16/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.01.024
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92468
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 44-58 [article]
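The "bridge" this abstract describes between 3D point clouds and 2D CNNs can be illustrated with a minimal, hypothetical projection: rasterising a point cloud into a top-view height image that a standard 2D CNN could consume. This sketches only the general idea, not the three descriptors actually proposed in the paper.

```python
import numpy as np

def top_view_height_image(points, grid=32):
    """Project an (N, 3) point cloud to a (grid, grid) top-view image
    whose pixel value is the maximum height falling in that cell.
    Heights are assumed non-negative (cells with no points stay 0)."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    # Normalise x, y into [0, grid) cell indices
    idx = np.floor((xy - lo) / (hi - lo + 1e-9) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    img = np.zeros((grid, grid))
    # Keep the highest z per cell (a simple occupancy/height descriptor)
    np.maximum.at(img, (idx[:, 0], idx[:, 1]), points[:, 2])
    return img
```

A fixed-size image like this is what lets pretrained 2D CNN backbones be reused on irregular, unordered point sets.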
Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective / Mohammad D. Hossain in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective
Document type: Article/Communication
Authors: Mohammad D. Hossain; Dongmei Chen
Publication year: 2019
Pages: pp. 115-134
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image
[Termes IGN] analyse d'image orientée objet
[Termes IGN] appariement de données localisées
[Termes IGN] apprentissage automatique
[Termes IGN] classification hybride
[Termes IGN] image à haute résolution
[Termes IGN] objet géographique
[Termes IGN] segmentation d'image
[Termes IGN] segmentation en régions
[Termes IGN] segmentation par décomposition-fusion
Abstract: (Author) Image segmentation is a critical step in (GEographic) Object-Based Image Analysis (GEOBIA or OBIA): the final feature extraction and classification in OBIA are highly dependent on the quality of image segmentation. Segmentation has been used in remote sensing image processing since the advent of the Landsat-1 satellite. However, after the launch of the high-resolution IKONOS satellite in 1999, the paradigm of image analysis moved from pixel-based to object-based, and the purpose of segmentation changed from supporting pixel labeling to object identification. Although several articles have reviewed segmentation algorithms, it is unclear whether some segmentation algorithms are generally better suited to (GE)OBIA than others. This article presents an extensive state-of-the-art survey of OBIA techniques and discusses different segmentation techniques and their applicability to OBIA. Conceptual details of those techniques are explained, along with their strengths and weaknesses, and the available tools and software packages for segmentation are summarized. The key challenge in image segmentation is to select optimal parameters and algorithms that can generate image objects matching meaningful geographic objects. Recent research indicates an apparent movement towards improved segmentation algorithms, aiming at more accurate, automated, and computationally efficient techniques.
Record number: A2019-138
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.009
Online publication date: 23/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.009
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92469
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 115-134 [article]
BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: BIM-Tracker: A model-based visual tracking approach for indoor localisation using a 3D building model
Document type: Article/Communication
Authors: Debaditya Acharya; Milad Ramezani; Kourosh Khoshelham; Stephan Winter
Publication year: 2019
Pages: pp. 157-171
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] algorithme de Gauss-Newton
[Termes IGN] appariement de données localisées
[Termes IGN] corrélation à l'aide de traits caractéristiques
[Termes IGN] estimation de pose
[Termes IGN] filtre de Kalman
[Termes IGN] longueur focale
[Termes IGN] Matlab
[Termes IGN] méthode des moindres carrés
[Termes IGN] modèle 3D du site
[Termes IGN] positionnement en intérieur
[Termes IGN] trajet (mobilité)
Abstract: (Author) This article presents an accurate and robust visual indoor localisation approach that is infrastructure-free and avoids error accumulation by taking advantage of (1) the widespread ubiquity of mobile devices with cameras and (2) the availability of 3D building models for most modern buildings. Localisation is performed by matching image sequences captured by a camera with a 3D model of the building in a model-based visual tracking framework. A comprehensive evaluation on a photo-realistic synthetic dataset shows the robustness of the localisation approach under challenging conditions. Additionally, the approach is tested and evaluated on real data captured by a smartphone. The experiments indicate that a localisation accuracy better than 10 cm can be achieved with this approach. Since localisation errors do not accumulate, the proposed approach is suitable for long-duration indoor localisation tasks and augmented reality applications, without requiring any local infrastructure. A MATLAB implementation can be found at https://github.com/debaditya-unimelb/BIM-Tracker.
Record number: A2019-139
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.014
Online publication date: 27/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.014
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92473
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 157-171 [article]
Automatic sensor orientation using horizontal and vertical line feature constraints / Yanbiao Sun in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Automatic sensor orientation using horizontal and vertical line feature constraints
Document type: Article/Communication
Authors: Yanbiao Sun; Stuart Robson; Daniel Scott; et al.
Publication year: 2019
Pages: pp. 172-184
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] angle azimutal
[Termes IGN] angle vertical
[Termes IGN] compensation par faisceaux
[Termes IGN] coordonnées horizontales
[Termes IGN] extraction de traits caractéristiques
[Termes IGN] forme linéaire
[Termes IGN] image aérienne
[Termes IGN] ligne caractéristique
[Termes IGN] orientation d'image
[Termes IGN] orientation du capteur
[Termes IGN] point d'appui
Abstract: (Author) To improve the accuracy of sensor orientation using calibrated aerial images, this paper proposes an automatic sensor orientation method that exploits horizontal and vertical constraints on human-engineered structures, addressing the limitations faced when a sub-optimal number of Ground Control Points (GCPs) is available within a scene. Related state-of-the-art methods rely on structured building edges and necessitate manual identification of end points. Our method makes use of line segments but eliminates the need for these matched end points, and thus the need for inefficient manual intervention.
To achieve this, a 3D line in object space is represented by the intersection of two planes passing through two camera centers. The normal vector of each plane can be written as a function of a pair of azimuth and elevation angles, and the direction vector of the 3D line can be expressed as the cross product of the two planes' normal vectors. Observation functions for the horizontal and vertical line constraints are then built from a zero cross product (for vertical lines) and a zero dot product (for horizontal lines) between the line direction and the vertical direction. These observation functions are introduced as constraints into a hybrid Bundle Adjustment (BA), alongside observed image points and observed line-segment projections. Finally, to assess the feasibility and effectiveness of the proposed method, simulated and real data are tested. The results demonstrate that, with only 3 GCPs, the accuracy of the proposed method using automatically extracted line features is improved by 50% compared to a BA using only point constraints.
Record number: A2019-140
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.011
Online publication date: 28/02/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.011
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92478
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 172-184 [article]
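The line geometry this abstract describes (a 3D line as the intersection of two back-projection planes, with horizontality and verticality imposed through dot and cross products with the vertical direction) can be sketched as follows. Function names and the plane parameterisation are illustrative only, not the paper's formulation.

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])  # vertical direction in object space

def plane_normal(azimuth, elevation):
    """Unit normal of a plane parameterised by azimuth/elevation angles
    (radians), as in a spherical-coordinate parameterisation."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

def line_direction(n1, n2):
    """Direction of the 3D line formed by intersecting two planes with
    normals n1 and n2: the cross product of the two normals."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

def vertical_residual(d):
    """A vertical line is parallel to UP, so the cross product vanishes."""
    return np.cross(d, UP)

def horizontal_residual(d):
    """A horizontal line is perpendicular to UP, so the dot product vanishes."""
    return np.dot(d, UP)
```

In a bundle adjustment these residuals would enter the normal equations as additional observation functions, driven to zero together with the reprojection errors.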
Albedo estimation for real-time 3D reconstruction using RGB-D and IR data / Patrick Stotko in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: Albedo estimation for real-time 3D reconstruction using RGB-D and IR data
Document type: Article/Communication
Authors: Patrick Stotko; Michael Weinmann; Reinhard Klein
Publication year: 2019
Pages: pp. 213-225
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] albedo
[Termes IGN] image infrarouge
[Termes IGN] image RVB
[Termes IGN] longueur d'onde
[Termes IGN] méthode de réduction d'énergie
[Termes IGN] reconstruction 3D
[Termes IGN] réflectance
[Termes IGN] segmentation d'image
[Termes IGN] temps réel
[Termes IGN] texture d'image
Abstract: (Author) Reconstructing scenes in real time using low-cost sensors has gained increasing attention in recent research and has enabled numerous applications in graphics, vision, and robotics. While current techniques offer a substantial improvement in the quality of the reconstructed geometry, the degree of realism of the overall appearance is still lacking, as reconstructing accurate surface appearance is highly challenging due to the complex interplay of surface geometry, reflectance properties and surrounding illumination. We present a novel approach that reconstructs both the geometry and the spatially varying surface albedo of a scene from RGB-D and IR data obtained with commodity sensors. In comparison to previous approaches, our approach offers improved robustness and a significant speed-up, even fulfilling real-time requirements. For this purpose, we exploit scene segmentation to improve albedo estimation: the resulting segment-wise coupling of IR and RGB data takes into account the wavelength characteristics of the different materials within the scene. The estimated albedo is directly integrated into the dense volumetric reconstruction framework using a novel weighting scheme to generate high-quality results. In our evaluation, we demonstrate that our approach allows albedo capture in complicated scenarios, including complex, high-frequency and strongly varying lighting as well as shadows.
Record number: A2019-141
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.01.018
Online publication date: 04/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.01.018
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92479
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 213-225 [article]
BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images / Debaditya Acharya in ISPRS Journal of photogrammetry and remote sensing, vol 150 (April 2019)
[article]
Title: BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images
Document type: Article/Communication
Authors: Debaditya Acharya; Kourosh Khoshelham; Stephan Winter
Publication year: 2019
Pages: pp. 245-258
General note: Bibliography
Language: English (eng)
Descriptors: [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] apprentissage profond
[Termes IGN] compréhension de l'image
[Termes IGN] estimation de pose
[Termes IGN] image de synthèse
[Termes IGN] modélisation 3D du bâti BIM
[Termes IGN] positionnement en intérieur
[Termes IGN] réseau neuronal convolutif
[Termes IGN] structure-from-motion
Abstract: (Author) The ubiquity of cameras built into mobile devices has renewed interest in image-based localisation in indoor environments, where global navigation satellite system (GNSS) signals are not available. Existing approaches to indoor localisation using images either require an initial location or must first perform a 3D reconstruction of the whole environment using structure-from-motion (SfM) methods, which is challenging and time-consuming for large indoor spaces. In this paper, a visual localisation approach is proposed that eliminates the need for image-based reconstruction of the indoor environment by using a 3D indoor model instead. A deep convolutional neural network (DCNN) is fine-tuned on synthetic images obtained from the 3D indoor model to regress the camera pose. Experimental results indicate that the proposed approach can be used for indoor localisation in real time with an accuracy of approximately 2 m.
Record number: A2019-142
Author affiliation: non IGN
Theme: IMAGERIE
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2019.02.020
Online publication date: 05/03/2019
Online: https://doi.org/10.1016/j.isprsjprs.2019.02.020
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=92480
in ISPRS Journal of photogrammetry and remote sensing > vol 150 (April 2019). - pp. 245-258 [article]