Descriptor
Termes IGN > natural sciences > physics > image processing > image acquisition
image acquisition
Documents available in this category (1184)


Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
[article]
Title: Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation
Document type: Article/Communication
Authors: Huan Ning, Author; Zhenlong Li, Author; Xinyue Ye, Author; et al., Author
Year of publication: 2022
Pages: pp 1317 - 1342
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] deep learning
[Termes IGN] object detection
[Termes IGN] image distortion
[Termes IGN] feature extraction
[Termes IGN] building height
[Termes IGN] Streetview image
[Termes IGN] tacheometric survey
[Termes IGN] digital surface model
[Termes IGN] door
Abstract: (author) Street view imagery such as Google Street View is widely used in people’s daily lives. Many studies have been conducted to detect and map objects such as traffic signs and sidewalks for urban built-up environment analysis. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measuring in large areas is underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored vertical measurement in street view imagery using the principle of tacheometric surveying. In the case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure the doors’ elevation. The results suggest that the average error of the estimated elevation is 0.218 m. The depth maps of Google Street View were utilized to traverse the elevation from the roadway surface to target objects. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies for large areas.
Record number: A2022-465
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.1080/13658816.2021.1981334
Online publication date: 06/10/2021
Online: https://doi.org/10.1080/13658816.2021.1981334
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100970
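As an illustration of the tacheometric principle summarised in this abstract, the sketch below converts a detected door's pixel height into a depth and then into an elevation offset. It is not the authors' pipeline: the plain pinhole model, the nominal 2.03 m door height, the function name `door_bottom_elevation` and all numeric values are assumptions chosen for the example.

```python
"""Minimal sketch (not the paper's code): door-bottom elevation from one
street-view frame, using a fixed, known door height as the scale reference."""

from dataclasses import dataclass

@dataclass
class DoorBox:
    """Axis-aligned door detection in image coordinates (pixels, v grows downward)."""
    v_top: float
    v_bottom: float

def door_bottom_elevation(box: DoorBox,
                          focal_px: float,
                          cy_px: float,
                          camera_elev_m: float,
                          door_height_m: float = 2.03) -> float:
    """Return the estimated elevation (metres) of the door bottom.

    The known door height fixes the scale: depth ~ f * H / h_pixels.
    The vertical pixel offset from the principal point then converts to a
    metric height difference at that depth (pinhole model, no distortion).
    """
    h_px = box.v_bottom - box.v_top            # apparent door height in pixels
    depth_m = focal_px * door_height_m / h_px  # distance from camera to door plane
    # Offset below the optical axis, in metres, at the estimated depth.
    drop_m = (box.v_bottom - cy_px) * depth_m / focal_px
    return camera_elev_m - drop_m

if __name__ == "__main__":
    box = DoorBox(v_top=410.0, v_bottom=630.0)   # hypothetical detection
    elev = door_bottom_elevation(box, focal_px=1100.0, cy_px=512.0,
                                 camera_elev_m=32.5)
    print(f"Estimated door-bottom elevation: {elev:.2f} m")
```

In the paper the roadway elevation reached through the Street View depth maps would play the role of `camera_elev_m`; here it is simply given as an input.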
in International journal of geographical information science IJGIS > vol 36 n° 7 (July 2022) . - pp 1317 - 1342

3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title: 3D browsing of wide-angle fisheye images under view-dependent perspective correction
Document type: Article/Communication
Authors: Mingyi Huang, Author; Jun Wu, Author; Zhiyong Peng, Author; et al., Author
Year of publication: 2022
Pages: pp 185 - 207
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Optical image processing
[Termes IGN] convolutional neural network classification
[Termes IGN] image correction
[Termes IGN] image distortion
[Termes IGN] instrument calibration
[Termes IGN] hemispherical image
[Termes IGN] ultra-wide-angle lens
[Termes IGN] spherical panorama
[Termes IGN] perspective
[Termes IGN] graphics processing unit
[Termes IGN] orthogonal projection
[Termes IGN] perspective projection
Abstract: (author) This paper presents a novel technique for 3D browsing of wide-angle fisheye images using view-dependent perspective correction (VDPC). First, the fisheye imaging model with interior orientation parameters (IOPs) is established. Thereafter, a VDPC model for wide-angle fisheye images is proposed that adaptively selects correction planes for different areas of the image format. Finally, the wide-angle fisheye image is re-projected to obtain the visual effect of browsing in hemispherical space, using the VDPC model and the IOPs of the fisheye camera calibrated under the ideal projection ellipse constraint. The proposed technique is tested on several downloaded internet images with unknown IOPs. Results show that the proposed VDPC model achieves a more uniform perspective correction of fisheye images in different areas, and preserves detailed information with greater flexibility than the traditional perspective projection conversion (PPC) technique. The proposed algorithm generates a corrected image of 512 × 512 pixels at 58 fps when run on a central processing unit (CPU) alone. With an ordinary graphics processing unit (GPU), a corrected image of 1024 × 1024 pixels can be generated at 60 fps. Therefore, smooth 3D visualisation of a fisheye image can be realised on a computer using the proposed algorithm, which may benefit applications such as panorama surveillance, robot navigation, etc.
Record number: A2022-518
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12410
Online publication date: 10/05/2022
Online: https://doi.org/10.1111/phor.12410
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101068
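The correction described in this abstract ultimately amounts to re-projecting fisheye pixels onto perspective planes. The sketch below shows only that underlying mapping, assuming an ideal equidistant fisheye model (r = f·θ) and a single correction plane orthogonal to the optical axis; the view-dependent plane selection and the ellipse-constrained calibration of the paper are not reproduced, and the function name `perspective_to_fisheye` and its parameters are illustrative.

```python
import math

def perspective_to_fisheye(u_out: float, v_out: float,
                           out_f: float, out_cx: float, out_cy: float,
                           fish_f: float, fish_cx: float, fish_cy: float):
    """Map a pixel of a corrected perspective view back to the source fisheye image.

    Assumes an equidistant fisheye model (radius = focal * theta); a corrected
    image is produced by evaluating this mapping for every output pixel and
    sampling the fisheye image at the returned coordinates.
    """
    # Ray through the perspective pixel (camera frame, z axis forward).
    x = (u_out - out_cx) / out_f
    y = (v_out - out_cy) / out_f
    z = 1.0
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    r = fish_f * theta                        # equidistant projection radius
    return fish_cx + r * math.cos(phi), fish_cy + r * math.sin(phi)
```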
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 185 - 207

True orthophoto generation based on unmanned aerial vehicle images using reconstructed edge points / Mojdeh Ebrahimikia in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title: True orthophoto generation based on unmanned aerial vehicle images using reconstructed edge points
Document type: Article/Communication
Authors: Mojdeh Ebrahimikia, Author; Ali Hosseininaveh, Author
Year of publication: 2022
Pages: pp 161 - 184
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Photogrammetry
[Termes IGN] deep learning
[Termes IGN] edge detection
[Termes IGN] building detection
[Termes IGN] image distortion
[Termes IGN] graph
[Termes IGN] UAV imagery
[Termes IGN] digital surface model
[Termes IGN] orthophotography
[Termes IGN] digital orthophoto map
[Termes IGN] aerial photogrammetry
[Termes IGN] edge pixel
[Termes IGN] structure-from-motion
[Termes IGN] urban area
Abstract: (author) After considering state-of-the-art algorithms, this paper presents a novel method for generating true orthophotos from unmanned aerial vehicle (UAV) images of urban areas. The procedure consists of four steps: 2D edge detection in building regions, 3D edge graph generation, digital surface model (DSM) modification and, finally, true orthophoto and orthomosaic generation. The main contribution of this paper concerns the first two steps, in which deep-learning approaches are used to identify the structural edges of the buildings and the estimated 3D edge points are added to the point cloud for DSM modification. Running the proposed method as well as four state-of-the-art methods on two different datasets demonstrates that the proposed method outperforms the existing orthophoto improvement methods by up to 50% on the first dataset and by 70% on the second dataset, by reducing true orthophoto distortion along the structured edges of the buildings.
Record number: A2022-517
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1111/phor.12409
Online publication date: 05/04/2022
Online: https://doi.org/10.1111/phor.12409
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=101065
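True orthophoto generation ultimately resamples the aerial image onto the DSM grid through the collinearity relation; improving the DSM along building edges, as this paper does, improves that resampling. The sketch below shows only the basic resampling step, without the edge reconstruction, DSM modification or occlusion masking that the paper contributes; `orthorectify` and its arguments are hypothetical names, and numpy is used for brevity.

```python
import numpy as np

def orthorectify(dsm: np.ndarray, x0: float, y0: float, gsd: float,
                 K: np.ndarray, R: np.ndarray, C: np.ndarray,
                 image: np.ndarray) -> np.ndarray:
    """Resample one aerial image onto the DSM grid (nearest-neighbour, no occlusion test).

    dsm   : heights on a regular grid (rows from north to south)
    x0, y0: map coordinates of the upper-left DSM cell, gsd: cell size in metres
    K     : 3x3 camera intrinsics, R: 3x3 world-to-camera rotation, C: projection centre
    """
    h, w = dsm.shape
    ortho = np.zeros((h, w, image.shape[2]), dtype=image.dtype)
    for row in range(h):
        for col in range(w):
            ground = np.array([x0 + col * gsd, y0 - row * gsd, dsm[row, col]])
            cam = R @ (ground - C)          # ground point in the camera frame
            if cam[2] <= 0:                 # behind the camera
                continue
            uvw = K @ cam                   # collinearity: x ~ K R (X - C)
            u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
                ortho[row, col] = image[vi, ui]
    return ortho
```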
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 161 - 184

Calibration of a light hemispherical radiance field imaging system / Manchun Lei in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-1-2022 (2022 edition)
[article]
Title: Calibration of a light hemispherical radiance field imaging system
Document type: Article/Communication
Authors: Manchun Lei, Author; Christian Thom, Author; Christophe Meynard, Author; Jean-Michaël Muller, Author
Year of publication: 2022
Projects: 1-No project
Pages: pp 195 - 202
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Image and data acquisition
[Termes IGN] geometric calibration
[Termes IGN] radiometric calibration
[Termes IGN] hemispherical image
[Termes IGN] geometric camera model
[Termes IGN] radiance
[Termes IGN] urban area
Abstract: (author) A light hemispherical radiance field imaging system based on a fish-eye camera was developed for the measurement of surface incident radiance in an urban environment, which is often affected by radiometric heterogeneity problems. A linear radiometric model and a polynomial fish-eye lens model are used. A temperature-dependent dark level model is proposed to improve the dark correction for high dynamic range photography. This paper describes the calibration procedure for spectral and geometrical radiance field measurements and presents the results of the calibration. The spectral radiometric calibration error is 2.07%, 1.34% and 0.98% for the blue, green and red bands, respectively. The mean geometrical calibration error is 2.037 pixels.
Record number: A2022-438
Author affiliation: UGE-LASTIG (2020- )
Theme: IMAGERY
Nature: Article
nature-HAL: ArtAvecCL-RevueIntern
DOI: 10.5194/isprs-annals-V-1-2022-195-2022
Online publication date: 17/05/2022
Online: https://doi.org/10.5194/isprs-annals-V-1-2022-195-2022
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100750
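The calibration described in this abstract rests on a linear radiometric model with a temperature-dependent dark level. A minimal sketch of such a conversion is given below; the linear-in-temperature dark model, the coefficient names and the function `dn_to_radiance` are assumptions for illustration, not the calibrated model reported in the paper.

```python
def dn_to_radiance(dn: float, exposure_s: float, temperature_c: float,
                   gain: float, dark_offset: float, dark_temp_coeff: float) -> float:
    """Convert a raw digital number to radiance with a linear sensor model.

    Hypothetical form: the dark level grows linearly with sensor temperature,
    and the dark-corrected signal scales linearly with radiance times exposure.
    All coefficients would come from a calibration procedure such as the one
    described above, typically per band and per pixel.
    """
    dark = dark_offset + dark_temp_coeff * temperature_c   # temperature-dependent dark level
    return gain * (dn - dark) / exposure_s                  # linear radiometric response
```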
in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences > vol V-1-2022 (2022 edition) . - pp 195 - 202

3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation / Heyang Thomas Li in The Visual Computer, vol 38 n° 5 (May 2022)
[article]
Title: 3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation
Document type: Article/Communication
Authors: Heyang Thomas Li, Author; Zachary Todd, Author; Nikolas Bielski, Author; et al., Author
Year of publication: 2022
Pages: pp 1759 - 1774
General note: bibliography
Languages: English (eng)
Descriptors: [Vedettes matières IGN] Lasergrammetry
[Termes IGN] deep learning
[Termes IGN] processing pipeline
[Termes IGN] object-oriented classification
[Termes IGN] convolutional neural network classification
[Termes IGN] lidar data
[Termes IGN] 3D geolocated data
[Termes IGN] image space
[Termes IGN] feature extraction
[Termes IGN] road
[Termes IGN] image segmentation
[Termes IGN] point cloud
[Termes IGN] road signage
Abstract: (author) The classification and extraction of road markings and lanes are of critical importance to infrastructure assessment, planning and road safety. We present a pipeline for the accurate segmentation and extraction of rural road surface objects in 3D lidar point clouds, as well as a method to extract geometric parameters belonging to tar seal. To decrease the computational resources needed, the point clouds were aggregated into a 2D image space before being transformed using affine transformations. The Mask R-CNN algorithm is then applied to the transformed image space to localize, segment and classify the road objects. The segmentation results for road surfaces and markings can then be used for geometric parameter estimation, such as road width estimation, while the segmentation results show that the efficacy of the existing Mask R-CNN in segmenting needle-type objects is improved by our proposed transformations.
Record number: A2022-376
Author affiliation: non-IGN
Theme: IMAGERY
Nature: Article
DOI: 10.1007/s00371-021-02103-8
Online publication date: 28/06/2021
Online: https://doi.org/10.1007/s00371-021-02103-8
Electronic resource format: article URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100627
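The pipeline described in this abstract starts by aggregating the 3D point cloud into a 2D image space before applying Mask R-CNN. The sketch below shows one plausible aggregation of this kind, a top-down intensity raster; the column layout of the input array, the cell size, the max-intensity rule and the function name `points_to_bev` are assumptions, and the affine transformations and Mask R-CNN stages of the paper are not included.

```python
import numpy as np

def points_to_bev(points: np.ndarray, cell: float = 0.05) -> np.ndarray:
    """Aggregate lidar returns (columns assumed x, y, z, intensity) into a
    top-down intensity raster on the ground plane.

    Each point is binned into a grid cell of size `cell` metres; the cell keeps
    the maximum intensity, which tends to make painted road markings stand out
    against the asphalt for a downstream image-space segmenter.
    """
    x, y, inten = points[:, 0], points[:, 1], points[:, 3]
    col = ((x - x.min()) / cell).astype(int)      # east -> image column
    row = ((y.max() - y) / cell).astype(int)      # north -> image row (top of raster)
    bev = np.zeros((row.max() + 1, col.max() + 1), dtype=np.float32)
    np.maximum.at(bev, (row, col), inten)          # per-cell maximum intensity
    return bev
```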
in The Visual Computer > vol 38 n° 5 (May 2022) . - pp 1759 - 1774

Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy / Norbert Haala in ISPRS Open Journal of Photogrammetry and Remote Sensing, vol 4 (April 2022)
An approach to extracting digital elevation model for undulating and hilly terrain using de-noised stereo images of Cartosat-1 sensor / Litesh Bopche in Applied geomatics, vol 14 n° 1 (March 2022)
Exploiting light directionality for image-based 3D reconstruction of non-collaborative surfaces / Ali Karami in Photogrammetric record, vol 37 n° 177 (March 2022)
Automatic registration of mobile mapping system Lidar points and panoramic-image sequences by relative orientation model / Ningning Zhu in Photogrammetric Engineering & Remote Sensing, PERS, vol 87 n° 12 (December 2021)
Relevés d’obstacles à la navigation aérienne au service de l’information aéronautique / Olivier de Joinville in XYZ, n° 169 (December 2021)
Semi-automatic reconstruction of object lines using a smartphone’s dual camera / Mohammed Aldelgawy in Photogrammetric record, Vol 36 n° 176 (December 2021)
Accurate mapping method for UAV photogrammetry without ground control points in the map projection frame / Jianchen Liu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 11 (November 2021)