Descriptor
Documents available in this category (45)



Production of orthophoto map using mobile photogrammetry and comparative assessment of cost and accuracy with satellite imagery for corridor mapping: a case study in Manesar, Haryana, India / Manuj Dev in Annals of GIS, vol 29 n° 1 (January 2023)
[article]
Title: Production of orthophoto map using mobile photogrammetry and comparative assessment of cost and accuracy with satellite imagery for corridor mapping: a case study in Manesar, Haryana, India
Document type: Article/Communication
Authors: Manuj Dev, Author; Shetru M. Veerabhadrappa, Author; Ashutosh Kainthola, Author; et al., Author
Publication year: 2023
Pagination: pp 163 - 176
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Photogrammetry
[IGN terms] aerotriangulation
[IGN terms] comparative analysis
[IGN terms] panoramic image
[IGN terms] satellite image
[IGN terms] stereoscopic model
[IGN terms] orthoimage
[IGN terms] orthophotomap
[IGN terms] ground control point
[IGN terms] real-time kinematic positioning
[IGN terms] mobile mapping system
Abstract: (author) The study aims to find a low-cost alternative technology to acquire imagery from a mobile platform and produce a digital orthophoto for corridor mapping, with a higher degree of accuracy and a reduced data-acquisition lag time. The study uses digital single-lens reflex cameras mounted on a mobile vehicle, acquiring data in video format rather than as still photographs, as traditionally used in mobile mapping systems. The videos are used to create a set of images and orthophotos. Widespread ground control points were recorded in the study area using a global navigation satellite system receiver, which measured the control points in real-time kinematic mode. A digital orthophoto was generated using the captured mobile imagery and ground control points. Furthermore, satellite imagery was procured and aerial triangulation was performed using the ground control points. Comparing the planimetric accuracy of the orthophoto against the satellite imagery using the ground control points, the root mean square error (RMSE) of the produced orthophoto is 0.171 m on the X axis and 0.205 m on the Y axis, whereas for Cartosat-1 satellite imagery the RMSE is 1.22 m for X and 1.98 m for Y. This research proposes an alternative low-cost mobile mapping method to capture imagery for orthophoto production. The cost of orthophoto production from mobile imagery was found to be 77% cheaper than from freshly procured satellite imagery, and overall production was 70% more cost-effective than orthophoto maps made from archived imagery.
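The planimetric-accuracy figures quoted in this abstract are per-axis root mean square errors over ground control points. A minimal sketch of that computation (the coordinates below are hypothetical, not the study's data):

```python
import math

def rmse(measured, reference):
    """Root mean square error between measured and reference coordinates (one axis)."""
    assert len(measured) == len(reference) and measured
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / len(measured))

# Hypothetical orthophoto vs. GNSS check-point X coordinates (metres)
x_meas = [100.10, 250.32, 410.05]
x_ref = [100.00, 250.20, 409.90]
rmse_x = rmse(x_meas, x_ref)
```

The same function applied separately to the X and Y coordinate lists yields the per-axis values reported in the abstract.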
Record number: A2023-161
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1080/19475683.2022.2141853
Online publication date: 12/11/2022
Online: https://doi.org/10.1080/19475683.2022.2141853
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102864
in Annals of GIS > vol 29 n° 1 (January 2023) . - pp 163 - 176 [article]

Automatic registration of point cloud and panoramic images in urban scenes based on pole matching / Yuan Wang in International journal of applied Earth observation and geoinformation, vol 115 (December 2022)
[article]
Title: Automatic registration of point cloud and panoramic images in urban scenes based on pole matching
Document type: Article/Communication
Authors: Yuan Wang, Author; Yuhao Li, Author; Yiping Chen, Author; et al., Author
Publication year: 2022
Pagination: n° 103083
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Lasergrammetry
[IGN terms] shape matching
[IGN terms] overlap
[IGN terms] feature extraction
[IGN terms] panoramic image
[IGN terms] virtual image
[IGN terms] particle swarm optimization
[IGN terms] point registration
[IGN terms] image registration
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] point cloud
[IGN terms] mobile laser scanning
[IGN terms] buffer zone
Abstract: (author) Given the initial calibration of multiple sensors, fine registration between Mobile Laser Scanning (MLS) point clouds and panoramic images remains challenging due to unforeseen movement and temporal misalignment during collection. To tackle this issue, we propose a novel automatic method to register panoramic images and MLS point clouds based on the matching of pole objects. Firstly, 2D pole instances in the panoramic images are extracted by a semantic segmentation network and then optimized. Secondly, the frustum point cloud corresponding to each pole instance is obtained by a shape-adaptive buffer region in the panoramic image, the 3D pole object is extracted via a combination of slicing, clustering, and connected-domain analysis, and all 3D pole objects are fused. Finally, 2D and 3D pole objects are re-projected onto virtual images, fine 2D-3D correspondences are collected by maximizing the pole overlapping area with Particle Swarm Optimization (PSO), and accurate extrinsic orientation parameters are acquired by the Efficient Perspective-n-Point (EPnP) algorithm. The experiments indicate that the proposed method performs effectively on two challenging urban scenes, with average registration errors of 2.01 pixels (RMSE 0.88) and 2.35 pixels (RMSE 1.03), respectively.
Record number: A2022-827
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.jag.2022.103083
Online publication date: 07/11/2022
Online: https://doi.org/10.1016/j.jag.2022.103083
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=102011
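The final registration step in this abstract maximizes the 2D-3D pole overlap with Particle Swarm Optimization. The paper's objective is the overlapping pole area on virtual images; the optimizer itself can be sketched generically on a toy 1-D objective (all parameter values below are hypothetical defaults, not the paper's):

```python
import random

def pso_maximize(f, lo, hi, n_particles=20, n_iters=60, seed=0):
    """Minimal 1-D particle swarm optimizer: each particle keeps its personal best
    and is pulled toward the swarm-wide best while maximizing f."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                         # personal best positions
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive pull (own best) + social pull (swarm best)
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (gbest - pos[i])
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest, gbest_val

# Toy objective standing in for the pole-overlap score: maximum at x = 2
best_x, best_val = pso_maximize(lambda x: -(x - 2.0) ** 2, -10.0, 10.0)
```

In the paper's setting the search variables would be the extrinsic orientation parameters and the objective the pole overlap area, but the update loop is the same.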
in International journal of applied Earth observation and geoinformation > vol 115 (December 2022) . - n° 103083 [article]

Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning / Yunqin Li in Sustainable Cities and Society, vol 86 (November 2022)
[article]
Title: Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning
Document type: Article/Communication
Authors: Yunqin Li, Author; Nobuyoshi Yabuki, Author; Tomohiro Fukuda, Author
Publication year: 2022
Pagination: n° 104140
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] support vector machine classification
[IGN terms] correlation
[IGN terms] panoramic image
[IGN terms] street view image
[IGN terms] regression model
[IGN terms] pedestrian
[IGN terms] virtual reality
[IGN terms] urban scene
[IGN terms] semantic segmentation
[IGN terms] vision
Abstract: (author) Measuring perceptions of visual walkability in urban streets and exploring which visual features of the street built environment make walking attractive to humans are both theoretically and practically important. Previous studies have used either environmental audits and subjective evaluations, which have limitations in cost, time, and measurement scale, or computer-aided audits based on natural street view images (SVIs), which leave gaps with respect to real perception. In this study, a virtual reality panoramic image-based deep learning framework is proposed for measuring visual walkability perception (VWP) and then quantifying and visualizing the contributing visual features. A VWP classification deep multitask learning (VWPCL) model was first developed and trained on human ratings of panoramic SVIs in virtual reality to predict VWP in six categories. Second, a regression model was used to determine the degree of correlation of various objects with one of the six VWP categories based on semantic segmentation. Furthermore, an interpretable deep learning model was used to assist in identifying and visualizing elements that contribute to VWP. The experiment validated the accuracy of the VWPCL model for predicting VWP. The results represent a further step in understanding the interplay of VWP and street-level semantics and features.
Record number: A2022-816
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1016/j.scs.2022.104140
Online publication date: 21/08/2022
Online: https://doi.org/10.1016/j.scs.2022.104140
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101982
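The regression step in this abstract relates per-class pixel fractions from semantic segmentation to perception scores. As a hedged illustration (hypothetical numbers, not the study's data or model), the simplest such association measure is a Pearson correlation between one class's pixel fraction and the rated score:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: per-image 'tree' pixel fraction vs. rated walkability (1-5)
tree_fraction = [0.05, 0.12, 0.20, 0.28, 0.35]
walkability = [2.1, 2.8, 3.5, 4.0, 4.6]
r = pearson(tree_fraction, walkability)
```

The paper fits a multivariate regression across many segmented object classes; this sketch shows only the single-variable association underlying it.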
in Sustainable Cities and Society > vol 86 (November 2022) . - n° 104140 [article]

Traffic sign three-dimensional reconstruction based on point clouds and panoramic images / Minye Wang in Photogrammetric record, vol 37 n° 177 (March 2022)
[article]
Title: Traffic sign three-dimensional reconstruction based on point clouds and panoramic images
Document type: Article/Communication
Authors: Minye Wang, Author; Rufei Liu, Author; Jiben Yang, Author; et al., Author
Publication year: 2022
Pagination: pp 87 - 110
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] image correction
[IGN terms] feature extraction
[IGN terms] panoramic image
[IGN terms] mobile lidar
[IGN terms] 3D reconstruction
[IGN terms] point cloud
[IGN terms] road signage
Abstract: (author) Traffic signs are a very important source of information for drivers and driverless vehicles. With the advance of Mobile LiDAR Systems (MLS), massive point clouds have been applied in three-dimensional digital city modelling. However, traffic signs in MLS point clouds are low-density, colourless, and incomplete. This paper presents a new method for the reconstruction of vertical rectangular traffic sign point clouds based on panoramic images. In this method, traffic sign point clouds are extracted based on arc features and spatial semantic feature analysis. Traffic signs in images are detected by colour and shape features and a convolutional neural network. The traffic sign point cloud and images are registered based on outline features. Finally, traffic sign points are matched to traffic sign pixels to reconstruct the traffic sign point cloud. Experimental results have demonstrated that the proposed method can effectively obtain colourful and complete traffic sign point clouds with high resolution.
Record number: A2022-254
Author affiliation: non IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12398
Online publication date: 05/03/2022
Online: https://doi.org/10.1111/phor.12398
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100217
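Matching point-cloud points to panoramic pixels, as in the final step of this abstract, ultimately requires projecting a 3D point into the panorama. The paper registers via outline features; this sketch shows only the basic projection, assuming an equirectangular panorama and a point already expressed in the camera frame (z up, x forward) — both assumptions, not details from the paper:

```python
import math

def lidar_point_to_pano_pixel(x, y, z, width, height):
    """Map a 3D point in camera-centred coordinates to the (column, row) pixel of
    an equirectangular panorama: longitude -> column, latitude -> row."""
    lon = math.atan2(y, x)                              # -pi .. pi around the horizon
    lat = math.asin(z / math.sqrt(x * x + y * y + z * z))  # -pi/2 .. pi/2 up/down
    col = (lon / (2 * math.pi) + 0.5) * (width - 1)
    row = (0.5 - lat / math.pi) * (height - 1)
    return col, row

# A point straight ahead on the horizon lands at the image centre
col_px, row_px = lidar_point_to_pano_pixel(10.0, 0.0, 0.0, 4096, 2048)
```

Once a point's pixel is known, its colour can be sampled from the panorama to colourize the sign's point cloud.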
in Photogrammetric record > vol 37 n° 177 (March 2022) . - pp 87 - 110 [article]

Quantifying the shape of urban street trees and evaluating its influence on their aesthetic functions based on mobile lidar data / Tianyu Hu in ISPRS Journal of photogrammetry and remote sensing, vol 184 (February 2022)
[article]
Title: Quantifying the shape of urban street trees and evaluating its influence on their aesthetic functions based on mobile lidar data
Document type: Article/Communication
Authors: Tianyu Hu, Author; Dengjie Wei, Author; Yanjun Su, Author; et al., Author
Publication year: 2022
Pagination: pp 203 - 214
General note: bibliography
Languages: English (eng)
Descriptor: [IGN subject headings] Lasergrammetry
[IGN terms] urban tree
[IGN terms] canopy
[IGN terms] China
[IGN terms] vegetation cover
[IGN terms] spatial distribution
[IGN terms] lidar data
[IGN terms] 3D geolocated data
[IGN terms] panoramic image
[IGN terms] point cloud
[IGN terms] mobile mapping system
Abstract: (author) Street trees are important components of urban green space, and understanding and measuring their ecological and cultural services is crucial for assessing the quality of streets and managing urban environments. Most studies so far have evaluated the ecological services of street trees by measuring the amount of greenness, but how to evaluate their aesthetic functions through quantitative measurements of street trees remains unclear. To address this problem, we propose a method to assess the aesthetic functions of street trees by quantifying the shape of greenness, inspired by assessments of skyline aesthetics. Using a state-of-the-art mobile mapping system, we collected downtown-wide lidar data and panoramic images in Jinzhou City, Hebei Province, China. We developed a method for extracting the canopy line from the mobile lidar data, then identified two basic elements, peaks and gaps, in street canopy lines and extracted six indexes (richness of peaks, evenness of peaks, frequency of peaks, total length of gaps, evenness of gaps, and frequency of gaps) to describe the fluctuations and continuities of street canopy lines. We analyzed the abundance and spatial distribution of these indexes together with survey responses on the streets' aesthetics and found that most of them were significantly correlated with human perception of streets. Compared with indexes of the amount of greenness (e.g., green volume and green view index), these shape indexes have stronger influences on the physical aesthetic beauty of street trees. These findings suggest that a comprehensive assessment of the aesthetic function of street trees should consider both the shape and the amount of greenness. This study provides a new perspective for the assessment of urban green spaces and can assist future urban greening planning and urban landscape management.
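The peak and gap elements described in this abstract can be illustrated on a sampled 1-D canopy line. A minimal sketch, with a hypothetical height profile and a deliberately simplified peak definition (a strict local maximum) compared with the paper's method:

```python
def canopy_peaks_and_gaps(heights, gap_height=0.0):
    """From a canopy line sampled along a street (one height per step), list the
    indexes of local maxima ('peaks') and the [start, end] index runs with no
    canopy ('gaps', height <= gap_height)."""
    peaks = [i for i in range(1, len(heights) - 1)
             if heights[i] > heights[i - 1] and heights[i] > heights[i + 1]]
    gaps, in_gap = [], False
    for i, h in enumerate(heights):
        if h <= gap_height and not in_gap:
            gaps.append([i, i])
            in_gap = True
        elif h <= gap_height:
            gaps[-1][1] = i           # extend the current gap
        else:
            in_gap = False
    return peaks, gaps

# Hypothetical canopy profile (metres) along ten samples of a street
profile = [0.0, 4.0, 6.0, 5.0, 0.0, 0.0, 3.0, 7.0, 4.0, 0.0]
peaks, gaps = canopy_peaks_and_gaps(profile)
```

The paper's six indexes (richness, evenness, and frequency of peaks; total length, evenness, and frequency of gaps) are then summary statistics over these two element lists.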
Record number: A2022-105
Author affiliation: non IGN
Theme: FOREST/IMAGERY
Nature: Article
HAL nature: ArtAvecCL-RevueIntern
DOI: 10.1016/j.isprsjprs.2022.01.002
Online publication date: 15/01/2022
Online: https://doi.org/10.1016/j.isprsjprs.2022.01.002
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=99602
in ISPRS Journal of photogrammetry and remote sensing > vol 184 (February 2022) . - pp 203 - 214 [article]
Copies (3)
Barcode / Call number / Medium / Location / Section / Availability
081-2022021 / SL / Journal / Centre de documentation / Reading-room journals / Available
081-2022023 / DEP-RECP / Journal / LaSTIG / Unit deposit / Not for loan
081-2022022 / DEP-RECF / Journal / Nancy / Unit deposit / Not for loan

Adaptation of a SLAM algorithm for multi-exposure panoramic vision in high dynamic range scenes / Eva Goichon (2022)
Permalink
A pipeline for automated processing of Corona KH-4 (1962-1972) stereo imagery / Sajid Ghuffar (2022)
Permalink
Automatic registration of mobile mapping system Lidar points and panoramic-image sequences by relative orientation model / Ningning Zhu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 12 (December 2021)
Permalink
Geometric computer vision: omnidirectional visual and remotely sensed data analysis / Pouria Babahajiani (2021)
Permalink
Scene perception by a multi-sensor system, with application to navigation in structured indoor environments / Marwa Chakroun (2021)
Permalink
Local color and morphological image feature based vegetation identification and its application to human environment street view vegetation mapping, or how green is our county? / Istvan G. Lauko in Geo-spatial Information Science, vol 23 n° 3 (September 2020)
Permalink
Geocoding of trees from street addresses and street-level images / Daniel Laumer in ISPRS Journal of photogrammetry and remote sensing, vol 162 (April 2020)
Permalink
Hybrid semantic mapping of urban scenes from image and Lidar data / Mohamed Boussaha (2020)
Permalink
Semiautomatically register MMS LiDAR points and panoramic image sequence using road lamp and lane / Ningning Zhu in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 11 (November 2019)
Permalink