Descriptor
Documents available in this category (85)
Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation / Huan Ning in International journal of geographical information science IJGIS, vol 36 n° 7 (July 2022)
[article]
Title : Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation Document type : Article/Communication Authors : Huan Ning, Author ; Zhenlong Li, Author ; Xinyue Ye, Author ; et al. Publication year : 2022 Article pages : pp 1317 - 1342 General note : bibliography Languages : English (eng) Descriptor : [IGN subject headings] Optical image processing
[IGN terms] deep learning
[IGN terms] object detection
[IGN terms] image distortion
[IGN terms] feature extraction
[IGN terms] building height
[IGN terms] Street View image
[IGN terms] tacheometric survey
[IGN terms] digital surface model
[IGN terms] door
Abstract : (author) Street view imagery such as Google Street View is widely used in people's daily lives. Many studies have been conducted to detect and map objects such as traffic signs and sidewalks for urban built-up environment analysis. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measurement over large areas remains underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored vertical measurement in street view imagery using the principle of tacheometric surveying. In a case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure door elevations; the depth maps of Google Street View were utilized to traverse the elevation from the roadway surface to the target objects. The results suggest that the average error of the estimated elevation is 0.218 m. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies over large areas. Record number : A2022-465 Authors' affiliation : non IGN Theme : IMAGERY Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1080/13658816.2021.1981334 Online publication date : 06/10/2021 Online : https://doi.org/10.1080/13658816.2021.1981334 Electronic resource format : article URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100970
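The tacheometric idea summarized in the abstract can be sketched numerically: knowing a door's true height, the pixel height of its detected bounding box yields the camera-to-door distance, and the vertical pixel offset of the door bottom yields its height relative to the camera. This is a minimal illustration, not the authors' pipeline; the constants (door height, focal length, camera height) and the helper name are assumptions.

```python
import math

DOOR_HEIGHT_M = 2.03   # assumed standard door height
FOCAL_PX = 1400.0      # assumed focal length in pixels
CAM_HEIGHT_M = 2.5     # assumed camera height above the road surface

def door_bottom_elevation(box_top_px, box_bottom_px, principal_row_px,
                          road_elevation_m):
    """Estimate the elevation of a detected door's bottom edge (a proxy
    for lowest floor elevation) from a known road-surface elevation."""
    pixel_height = box_bottom_px - box_top_px
    # Pinhole model: distance = f * true_height / pixel_height
    distance = FOCAL_PX * DOOR_HEIGHT_M / pixel_height
    # Vertical angle from the optical axis down to the door bottom
    angle = math.atan2(box_bottom_px - principal_row_px, FOCAL_PX)
    # Height of the door bottom relative to the camera (negative = below)
    rel_height = -distance * math.tan(angle)
    return road_elevation_m + CAM_HEIGHT_M + rel_height
```

In the real pipeline the camera-to-door distance comes from the Street View depth maps rather than the pinhole ratio alone, but the trigonometry is the same.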
in International journal of geographical information science IJGIS > vol 36 n° 7 (July 2022) . - pp 1317 - 1342 [article]
Copies (1)
Barcode 079-2022071 | Call number SL | Medium Journal | Location Documentation centre | Section Journals in reading room | Available
3D browsing of wide-angle fisheye images under view-dependent perspective correction / Mingyi Huang in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title : 3D browsing of wide-angle fisheye images under view-dependent perspective correction Document type : Article/Communication Authors : Mingyi Huang, Author ; Jun Wu, Author ; Zhiyong Peng, Author ; et al. Publication year : 2022 Article pages : pp 185 - 207 General note : bibliography Languages : English (eng) Descriptor : [IGN subject headings] Optical image processing
[IGN terms] convolutional neural network classification
[IGN terms] image correction
[IGN terms] image distortion
[IGN terms] instrument calibration
[IGN terms] hemispherical image
[IGN terms] ultra-wide-angle lens
[IGN terms] spherical panorama
[IGN terms] perspective
[IGN terms] graphics processing unit
[IGN terms] orthogonal projection
[IGN terms] perspective projection
Abstract : (author) This paper presents a novel technique for 3D browsing of wide-angle fisheye images using view-dependent perspective correction (VDPC). First, the fisheye imaging model with interior orientation parameters (IOPs) is established. A VDPC model for wide-angle fisheye images is then proposed that adaptively selects correction planes for different areas of the image format. Finally, the wide-angle fisheye image is re-projected to obtain the visual effect of browsing in hemispherical space, using the VDPC model and the IOPs of the fisheye camera calibrated under the ideal projection ellipse constraint. The proposed technique is tested on several internet images with unknown IOPs. Results show that the proposed VDPC model achieves a more uniform perspective correction of fisheye images across different areas and preserves detailed information with greater flexibility than the traditional perspective projection conversion (PPC) technique. The proposed algorithm generates a corrected image of 512 × 512 pixels at 58 fps on a central processing unit (CPU) alone; with an ordinary graphics processing unit (GPU), a corrected image of 1024 × 1024 pixels can be generated at 60 fps. Smooth 3D visualisation of a fisheye image can therefore be realised on a computer using the proposed algorithm, which may benefit applications such as panorama surveillance and robot navigation. Record number : A2022-518 Authors' affiliation : non IGN Theme : IMAGERY Nature : Article DOI : 10.1111/phor.12410 Online publication date : 10/05/2022 Online : https://doi.org/10.1111/phor.12410 Electronic resource format : article URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101068
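The classic perspective correction that VDPC improves on can be sketched as a backward sampling map from a virtual perspective view into the fisheye image. This sketch assumes an ideal equidistant fisheye model (r = f·θ), whereas the paper calibrates the actual IOPs; the function name and parameters are illustrative.

```python
import numpy as np

def perspective_to_fisheye_map(out_size, f_persp, f_fish, cx, cy):
    """Return (u, v) fisheye source coordinates for each pixel of an
    out_size x out_size perspective view aligned with the optical axis."""
    n = out_size
    xs = np.arange(n) - n / 2.0
    x, y = np.meshgrid(xs, xs)
    # Off-axis angle of the ray through each perspective pixel
    r_persp = np.hypot(x, y)
    theta = np.arctan2(r_persp, f_persp)
    # Equidistant fisheye projection: image radius proportional to angle
    r_fish = f_fish * theta
    with np.errstate(invalid="ignore", divide="ignore"):
        scale = np.where(r_persp > 0, r_fish / r_persp, 0.0)
    return cx + x * scale, cy + y * scale
```

The resulting maps can be fed to an image-resampling routine (e.g. a remap with bilinear interpolation) to render the corrected view; VDPC generalizes this by varying the correction plane across the image format.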
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 185 - 207 [article]
True orthophoto generation based on unmanned aerial vehicle images using reconstructed edge points / Mojdeh Ebrahimikia in Photogrammetric record, vol 37 n° 178 (June 2022)
[article]
Title : True orthophoto generation based on unmanned aerial vehicle images using reconstructed edge points Document type : Article/Communication Authors : Mojdeh Ebrahimikia, Author ; Ali Hosseininaveh, Author Publication year : 2022 Article pages : pp 161 - 184 General note : bibliography Languages : English (eng) Descriptor : [IGN subject headings] Photogrammetry
[IGN terms] deep learning
[IGN terms] edge detection
[IGN terms] building detection
[IGN terms] image distortion
[IGN terms] graph
[IGN terms] UAV imagery
[IGN terms] digital surface model
[IGN terms] orthophotography
[IGN terms] digital orthophotomap
[IGN terms] aerial photogrammetry
[IGN terms] edge pixel
[IGN terms] point cloud
[IGN terms] structure-from-motion
[IGN terms] urban area
Abstract : (author) After reviewing state-of-the-art algorithms, this paper presents a novel method for generating true orthophotos from unmanned aerial vehicle (UAV) images of urban areas. The procedure consists of four steps: 2D edge detection in building regions, 3D edge graph generation, digital surface model (DSM) modification and, finally, true orthophoto and orthomosaic generation. The main contribution of this paper concerns the first two steps, in which deep-learning approaches are used to identify the structural edges of buildings and the estimated 3D edge points are added to the point cloud for DSM modification. Running the proposed method and four state-of-the-art methods on two different datasets demonstrates that the proposed method outperforms existing orthophoto improvement methods, reducing true orthophoto distortion at the structural edges of buildings by up to 50% on the first dataset and 70% on the second. Record number : A2022-517 Authors' affiliation : non IGN Theme : IMAGERY Nature : Article DOI : 10.1111/phor.12409 Online publication date : 05/04/2022 Online : https://doi.org/10.1111/phor.12409 Electronic resource format : article URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=101065
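The final resampling step that true orthophoto generation relies on is the collinearity projection: each DSM ground cell is projected into a source image, and per-cell depths feed a Z-buffer so occluded cells are masked rather than smeared. A minimal sketch, assuming a camera frame looking along +Z and an illustrative pixel convention:

```python
import numpy as np

def project_ground_point(X, R, t, f, cx, cy):
    """Collinearity projection of world points X (N x 3) into pixel
    coordinates, returning depths for a subsequent Z-buffer test."""
    cam = (R @ (X - t).T).T          # world frame -> camera frame
    u = cx + f * cam[:, 0] / cam[:, 2]
    v = cy + f * cam[:, 1] / cam[:, 2]
    return np.stack([u, v], axis=1), cam[:, 2]
```

The paper's contribution sits upstream of this step: sharpening the DSM with reconstructed 3D edge points so that the projected building outlines land on the correct cells.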
in Photogrammetric record > vol 37 n° 178 (June 2022) . - pp 161 - 184 [article]
Title : Learning to represent and reconstruct 3D deformable objects Document type : Thesis/HDR Authors : Jan Bednarik, Author ; Pascal Fua, Thesis supervisor ; M. Salzmann, Thesis supervisor Publisher : Lausanne : Ecole Polytechnique Fédérale de Lausanne EPFL Publication year : 2022 Extent : 138 p. Format : 21 x 30 cm General note : bibliography
Thesis presented for the degree of Docteur ès Sciences, Ecole Polytechnique Fédérale de Lausanne Languages : English (eng) Descriptor : [IGN subject headings] Optical image processing
[IGN terms] shape matching
[IGN terms] deep learning
[IGN terms] temporal consistency
[IGN terms] surface deformation
[IGN terms] image distortion
[IGN terms] Riemannian geometry
[IGN terms] 3D image
[IGN terms] object reconstruction
[IGN terms] point cloud
[IGN terms] computer vision
Decimal index : THESE Theses and HDR Abstract : (author) Representing and reconstructing 3D deformable shapes are two tightly linked problems that have long been studied within the computer vision field. Deformable shapes are ubiquitous in the real world, be it specific object classes such as humans, garments and animals, or more abstract ones such as generic materials deforming under stress caused by an external force. Truly practical computer vision algorithms must be able to understand the shapes of objects in the observed scenes to unlock a wide spectrum of much-sought-after applications ranging from virtual try-on to automated surgery. Automatic shape reconstruction, however, is known to be an ill-posed problem, especially in the common scenario of a single input image. Modern approaches therefore rely on the deep-learning paradigm, which has proven extremely effective even for severely under-constrained computer vision problems. We, too, exploit the success of data-driven approaches; however, we also show that generic deep-learning models can greatly benefit from being combined with explicit knowledge originating in traditional computational geometry. We analyze the use of various 3D shape representations for deformable object reconstruction and focus on one of them, the atlas-based representation, which turns out to be especially suitable for modeling deformable shapes and which we further improve and extend to yield higher-quality reconstructions. The atlas-based representation models surfaces as an ensemble of continuous functions and thus allows for arbitrary resolution and analytical surface analysis. We identify major shortcomings of the base formulation, namely the phenomena of patch collapse, patch overlap and arbitrarily strong mapping distortions, and we propose novel regularizers based on analytically computed properties of the reconstructed surfaces.
Our approach counteracts these drawbacks while yielding higher reconstruction accuracy in terms of surface normals on the tasks of single-view reconstruction, shape completion and point cloud auto-encoding. We then dive deeper into the atlas-based shape representation and focus on another pressing design flaw, the global inconsistency among the individual mappings. While this inconsistency is not reflected in the traditional quantitative metrics of reconstruction accuracy, it is detrimental to the visual quality of the reconstructed surfaces. Specifically, we design loss functions encouraging intercommunication among the individual mappings, which pushes the resulting surface towards a C1-smooth function. Our experiments on the tasks of single-view reconstruction and point cloud auto-encoding reveal that our method significantly improves visual quality compared to the baselines. Furthermore, we adapt the atlas-based representation and the related training procedure so that they can model a full sequence of a deforming object in a temporally consistent way. In other words, the goal is to produce a reconstruction in which each surface point always represents the same semantic point on the target ground-truth surface. To achieve this behavior, we note that if each surface point deforms close to isometrically, its semantic location likely remains unchanged. Practically, we make use of the Riemannian metric, which is computed analytically on the surfaces, and force it to remain point-wise constant throughout the sequence. Our experimental results reveal that our method yields state-of-the-art results on the task of unsupervised dense shape correspondence estimation, while also improving visual reconstruction quality. Finally, we look into the particular problem of monocular texture-less deformable shape reconstruction, an instance of the shape-from-shading problem.
We propose a multi-task learning approach that takes an RGB image of an unknown object as input and jointly produces a normal map, a depth map and a mesh corresponding to the observed part of the surface. We show that forcing the model to produce multiple different 3D representations of the same object results in higher reconstruction quality. To train the network, we acquire a large real-world annotated dataset of texture-less deforming objects and release it for public use. Finally, we show through experiments that our approach outperforms a previous optimization-based method on the single-view reconstruction task. Contents : 1- Introduction
2- Related work
3- Atlas-based representation for deformable shape reconstruction
4- Shape reconstruction by learning differentiable surface representations
5- Better patch stitching for parametric surface reconstruction
6- Temporally-consistent surface reconstruction using metrically-consistent atlases
7- Learning to reconstruct texture-less deformable surfaces from a single view
8- Conclusion Record number : 15761 Authors' affiliation : non IGN Theme : IMAGERY Nature : Foreign thesis Thesis note : Doctoral thesis : Sciences : Lausanne, EPFL : 2022 DOI : 10.5075/epfl-thesis-7974 Online : https://doi.org/10.5075/epfl-thesis-7974 Electronic resource format : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100958
Accurate mapping method for UAV photogrammetry without ground control points in the map projection frame / Jianchen Liu in IEEE Transactions on geoscience and remote sensing, vol 59 n° 11 (November 2021)
[article]
Title : Accurate mapping method for UAV photogrammetry without ground control points in the map projection frame Document type : Article/Communication Authors : Jianchen Liu, Author ; Wei Xu, Author ; Bingxuan Guo, Author ; et al. Publication year : 2021 Article pages : pp 9673 - 9681 General note : bibliography Languages : English (eng) Descriptor : [IGN subject headings] Digital photogrammetry
[IGN terms] aerotriangulation
[IGN terms] self-calibration
[IGN terms] bundle adjustment
[IGN terms] Earth curvature
[IGN terms] image distortion
[IGN terms] GNSS data
[IGN terms] flying height
[IGN terms] UAV imagery
[IGN terms] ground control point
[IGN terms] elevation accuracy
[IGN terms] map accuracy
[IGN terms] projection
Abstract : (author) Unmanned aerial vehicle (UAV) photogrammetry without ground control points (GCPs) can effectively improve production efficiency and reduce production costs; the method is especially advantageous in areas that are difficult for people to reach. However, UAV photogrammetry without GCPs faces a series of problems. One of the main problems is that accurate camera parameters cannot be obtained through the on-the-job calibration method, and an inaccurate principal distance seriously degrades the elevation accuracy of object points. Another is that projection deformation and Earth curvature also affect the elevation accuracy when the mapping task is carried out in the map projection frame. This article explains the specific causes of these elevation errors and proposes an effective solution. First, camera self-calibration is performed in a geocentric frame with control strips. Then, the exterior orientation elements of the images are calculated in the map projection frame without control strips. Finally, the elevation errors caused by map projection deformation and the Earth's curvature are corrected. The experimental results show that the proposed method achieves accurate mapping, with significantly improved elevation accuracy. Record number : A2021-811 Authors' affiliation : non IGN Theme : IMAGERY Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2021.3052466 Online publication date : 29/01/2021 Online : https://doi.org/10.1109/TGRS.2021.3052466 Electronic resource format : article URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98884
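The magnitude of the Earth-curvature effect the abstract refers to is easy to sketch: the curved surface drops roughly d²/(2R) below the local tangent plane at horizontal distance d. This is a back-of-the-envelope illustration, not the paper's correction; the first-order (k − 1) projection-scale term is a simplifying assumption.

```python
EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, assumed spherical

def curvature_drop(dist_m, radius_m=EARTH_RADIUS_M):
    """Apparent drop of the curved Earth surface below the tangent plane."""
    return dist_m ** 2 / (2.0 * radius_m)

def corrected_elevation(raw_elev_m, dist_m, scale_factor_k=1.0):
    """Add back the curvature drop; the hypothetical (k - 1) term models a
    first-order effect of map-projection scale deformation on heights."""
    return raw_elev_m + curvature_drop(dist_m) - (scale_factor_k - 1.0) * raw_elev_m
```

At 1 km from the nadir the curvature drop is already about 8 cm, i.e. non-negligible against the centimetre-level elevation accuracy GCP-free UAV mapping aims for.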
in IEEE Transactions on geoscience and remote sensing > vol 59 n° 11 (November 2021) . - pp 9673 - 9681 [article]
Digital camera calibration for cultural heritage documentation: the case study of a mass digitization project of religious monuments in Cyprus / Evagoras Evagorou in European journal of remote sensing, vol 54 sup 1 (2021) Permalink
Spherically optimized RANSAC aided by an IMU for Fisheye Image Matching / Anbang Liang in Remote sensing, vol 13 n° 10 (May-2 2021) Permalink
The Influence of camera calibration on nearshore bathymetry estimation from UAV videos / Gonzalo Simarro in Remote sensing, vol 13 n° 1 (January-1 2021) Permalink
Visual exploration of historical image collections: An interactive approach through space and time / Evelyn Paiz-Reyes (2021) Permalink
Geometric distortion of historical images for 3D visualization / Evelyn Paiz-Reyes in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol V-2-2020 (August 2020) Permalink
Structure from motion for complex image sets / Mario Michelini in ISPRS Journal of photogrammetry and remote sensing, vol 166 (August 2020) Permalink
Dense stereo matching strategy for oblique images that considers the plane directions in urban areas / Jianchen Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020) Permalink
An integrated approach to registration and fusion of hyperspectral and multispectral images / Yuan Zhou in IEEE Transactions on geoscience and remote sensing, vol 58 n° 5 (May 2020) Permalink
Optimising drone flight planning for measuring horticultural tree crop structure / Yu-Hsuan Tu in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020) Permalink
A two-step approach for the correction of rolling shutter distortion in UAV photogrammetry / Yilin Zhou in ISPRS Journal of photogrammetry and remote sensing, vol 160 (February 2020) Permalink