Descriptor
Documents available in this category (17)



Photogrammetric point clouds: quality assessment, filtering, and change detection / Zhenchao Zhang (2022)
Titre : Photogrammetric point clouds: quality assessment, filtering, and change detection Type de document : Thèse/HDR Auteurs : Zhenchao Zhang, Auteur ; M. George Vosselman, Auteur ; Markus Gerke, Auteur ; Michael Ying Yang, Auteur Editeur : Enschede [Pays-Bas] : International Institute for Geo-Information Science and Earth Observation ITC Année de publication : 2022 Note générale : bibliographie
NB: text under embargo until 1 July 2022
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Applications photogrammétriques
[Termes IGN] appariement dense
[Termes IGN] détection de changement
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] qualité des données
[Termes IGN] réseau neuronal convolutif
[Termes IGN] segmentation sémantique
[Termes IGN] semis de points
Résumé : (auteur) 3D change detection has drawn increasing attention in recent years due to the growing availability of 3D data. It can be used for land use / land cover (LULC) change detection, 3D geographic information updating, terrain deformation analysis, urban construction monitoring, etc. Our motivation to study 3D change detection is mainly the practical need to update outdated point clouds captured by Airborne Laser Scanning (ALS) with new point clouds obtained by dense image matching (DIM).
The thesis has three main parts. The first part, chapter 1, explains the motivation, reviews current ALS and airborne photogrammetry techniques, and presents the research objectives and questions. The second part, comprising chapters 2 and 3, evaluates the quality of photogrammetric products and investigates their potential for change detection. The third part, comprising chapters 4 and 5, proposes two change detection methods that meet different requirements.
To investigate the potential of using point clouds derived by dense matching for change detection, we propose a framework for evaluating the quality of 3D point clouds and DSMs generated by dense image matching. Our evaluation framework, based on a large number of square patches, reveals the distribution of dense matching errors across the whole photogrammetric block. Robust quality measures are proposed to quantify DIM accuracy and precision. The overall mean offset to the reference is 0.1 Ground Sample Distance (GSD); the maximum mean deviation reaches 1.0 GSD. Based on the many patch-based samples, we also find that the distribution of dense matching errors is homogeneous across the block and close to a normal distribution. However, in some locations, especially along narrow alleys, the mean deviations can be considerably larger. In addition, profiles of ALS points and DIM points reveal that the DIM profile fluctuates around the ALS profile. We find that the accuracy of the DIM point cloud improves and that the noise level on smooth ground decreases when oblique images are used in dense matching together with nadir images.
Then we evaluate whether standard LiDAR filters are effective at filtering dense matching points in order to derive accurate DTMs. Filtering results on a city block show that LiDAR filters perform well on grassland, along bushes and around individual trees if the point cloud is sufficiently precise. When a ranking filter is applied to the point clouds before filtering, fewer but more reliable ground points are identified; however, some small objects on the terrain are filtered out. Since we aim at obtaining accurate DTMs, the ranking filter shows its value in identifying reliable ground points. Based on the previous findings on DIM quality, we propose a method to detect building changes between ALS and photogrammetric data. First, the ALS points and DIM points are split out and concatenated with the orthoimages. The multimodal data are normalized and fed into a pseudo-Siamese neural network for change detection. The changed objects are then delineated through per-pixel classification and artefact removal. The change detection module based on a pseudo-Siamese CNN can quickly localize changes and generate coarse change maps; the following module can then be used to map change boundaries precisely. Experimental results show that the proposed pseudo-Siamese neural network can cope with DIM errors and output plausible change detection results. Although the point cloud quality from dense matching is not as fine as that of laser scanning, the spectral and textural information provided by the orthoimages serves as a supplement.
Considering that the tasks of semantic segmentation and change detection are correlated, we propose the SiamPointNet++ model to combine the two tasks in one framework. The method outputs a pointwise joint label for each ALS point: if an ALS point is unchanged, it is assigned a semantic label; if it is changed, it is assigned a change label. The semantic and change information are thus included in the joint labels with minimal information redundancy. The combined Siamese network learns both intra-epoch and inter-epoch features. Intra-epoch features are extracted at multiple scales to embed local and global information. Inter-epoch features are extracted by Conjugated Ball Sampling (CBS) and concatenated to make the change inference. Experiments on the Rotterdam data set indicate that the network is effective in learning multi-task features. It is invariant to the permutation and noise of the inputs and robust to the data differences between ALS and DIM data. Compared with a sophisticated object-based method and supervised change detection, this method requires far fewer hyper-parameters and much less human intervention but achieves superior performance.
In conclusion, the thesis evaluates the quality of dense matching points and investigates their potential for updating outdated ALS points. The two change detection methods, developed for different applications, show their potential for automating topographic change detection and point cloud updating. Future work may focus on improving the generalizability and interpretability of the proposed models.
Numéro de notice : 20403 Affiliation des auteurs : non IGN Thématique : IMAGERIE/INFORMATIQUE Nature : Thèse étrangère Note de thèse : PhD thesis : Geo-Information Science and Earth Observation : Enschede, University of Twente : 2022 DOI : 10.3990/1.9789036552653 Date de publication en ligne : 14/01/2022 En ligne : https://research.utwente.nl/en/publications/photogrammetric-point-clouds-quality [...] Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=100963
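The building change detection module summarized in the record above pairs an ALS-derived input with a DIM/orthoimage-derived input in two convolutional branches that do not share weights (hence "pseudo"-Siamese). Below is a minimal sketch of that kind of architecture in PyTorch; the channel counts, layer sizes and the patch-level two-class output are illustrative assumptions, not the thesis's actual configuration.

```python
# Illustrative pseudo-Siamese CNN for ALS vs. DIM change detection.
# One branch takes a normalised ALS height raster, the other a DIM height
# raster stacked with orthoimage bands; the branches do NOT share weights.
import torch
import torch.nn as nn

def conv_branch(in_channels: int) -> nn.Sequential:
    """Small convolutional feature extractor for one input modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),  # -> (N, 64, 1, 1)
    )

class PseudoSiameseChangeNet(nn.Module):
    def __init__(self, als_channels: int = 1, dim_channels: int = 4):
        super().__init__()
        self.als_branch = conv_branch(als_channels)   # ALS height patch
        self.dim_branch = conv_branch(dim_channels)   # DIM height + RGB ortho
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),   # unchanged / changed (building change)
        )

    def forward(self, als_patch, dim_patch):
        f_als = self.als_branch(als_patch).flatten(1)
        f_dim = self.dim_branch(dim_patch).flatten(1)
        return self.classifier(torch.cat([f_als, f_dim], dim=1))

# Example: classify a batch of 64x64 patches as changed / unchanged.
net = PseudoSiameseChangeNet()
logits = net(torch.randn(8, 1, 64, 64), torch.randn(8, 4, 64, 64))
print(logits.shape)  # torch.Size([8, 2])
```

Keeping the two branches separate (rather than weight-sharing as in a true Siamese network) is what lets each branch adapt to the statistics of its own modality, which is the point the abstract makes about coping with DIM noise.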
Titre : A new stereo dense matching benchmark dataset for deep learning Type de document : Article/Communication Auteurs : Teng Wu, Auteur ; Bruno Vallet, Auteur ; Marc Pierrot-Deseilligny, Auteur ; Ewelina Rupnik, Auteur
Editeur : International Society for Photogrammetry and Remote Sensing ISPRS Année de publication : 2021 Collection : International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 1682-1750 num. 43-B2-2021 Projets : AI4GEO / Conférence : ISPRS 2021, Commission 2, XXIV ISPRS Congress, Imaging today foreseeing tomorrow 05/07/2021 09/07/2021 Nice Virtuel France OA Archives Commission 2 Importance : pp 405 - 412 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Lasergrammétrie
[Termes IGN] appariement de données localisées
[Termes IGN] appariement dense
[Termes IGN] apprentissage profond
[Termes IGN] données lidar
[Termes IGN] données localisées 3D
[Termes IGN] jeu de données localisées
[Termes IGN] parangonnage
[Termes IGN] photogrammétrie aérienne
[Termes IGN] reconstruction 3D
Résumé : (auteur) Stereo dense matching is a fundamental task for 3D scene reconstruction. Recently, deep learning based methods have proven effective on some benchmark datasets, for example Middlebury and KITTI stereo. However, it is not easy to find a training dataset for aerial photogrammetry, as generating ground truth data for real scenes is a challenging task. In the photogrammetry community, many evaluation methods use digital surface models (DSM) to generate the ground truth disparity for the stereo pairs, but in this case interpolation may introduce errors into the estimated disparity. In this paper, we publish a stereo dense matching dataset based on the ISPRS Vaihingen dataset and use it to evaluate several traditional and deep learning based methods. The evaluation shows that learning-based methods outperform traditional methods significantly when fine-tuning is done on a similar landscape. The benchmark also investigates the impact of the base-to-height ratio on the performance of the evaluated methods. The dataset is available at https://github.com/whuwuteng/benchmark_ISPRS2021. Numéro de notice : C2021-012 Affiliation des auteurs : UGE-LASTIG (2020- ) Thématique : IMAGERIE/INFORMATIQUE Nature : Communication nature-HAL : ComAvecCL&ActesPubliésIntl DOI : 10.5194/isprs-archives-XLIII-B2-2021-405-2021 Date de publication en ligne : 28/06/2021 En ligne : https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-405-2021 Format de la ressource électronique : URL article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98066
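The abstract above notes that ground-truth disparities for aerial stereo pairs are commonly derived from a DSM, which is where interpolation can introduce errors. The sketch below shows the underlying depth-to-disparity conversion for an idealised, epipolar-rectified nadir pair; the constant-baseline, vertical-camera geometry assumed here is a simplification and not the benchmark's actual generation pipeline.

```python
# Sketch: deriving ground-truth disparity from a DSM for an epipolar-rectified
# aerial stereo pair, assuming nadir views so that depth is simply the camera
# height above the DSM surface. A real pipeline handles full orientations and
# interpolation of the DSM at sub-pixel positions.
import numpy as np

def dsm_to_disparity(dsm: np.ndarray, camera_height: float,
                     baseline: float, focal_px: float) -> np.ndarray:
    """Disparity (in pixels) from surface heights, using d = f * B / depth."""
    depth = camera_height - dsm              # distance camera -> surface
    valid = depth > 0
    disparity = np.full(dsm.shape, np.nan)   # NaN marks invalid cells
    disparity[valid] = focal_px * baseline / depth[valid]
    return disparity

# Toy example: 100 m baseline, 1250 m camera height, 10000 px focal length,
# flat surface at 250 m elevation -> uniform 1000 px disparity.
dsm = np.full((4, 4), 250.0)
print(dsm_to_disparity(dsm, camera_height=1250.0,
                       baseline=100.0, focal_px=10000.0))
```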
Titre : Robust and fast global image orientation Type de document : Thèse/HDR Auteurs : Xin Wang, Auteur ; Christian Heipke, Directeur de thèse Editeur : Munich : Bayerische Akademie der Wissenschaften Année de publication : 2021 Collection : DGK - C, ISSN 0065-5325 num. 871 Importance : 141 p. Note générale : bibliographie
This work is published simultaneously in: Wissenschaftliche Arbeiten der Fachrichtung Geodäsie und Geoinformatik der Leibniz Universität Hannover, ISSN 0174-1454, Nr. 373, Hannover 2021
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Photogrammétrie numérique
[Termes IGN] appariement d'images
[Termes IGN] appariement dense
[Termes IGN] chaîne de traitement
[Termes IGN] estimation de pose
[Termes IGN] méthode robuste
[Termes IGN] orientation d'image
[Termes IGN] orientation relative
[Termes IGN] rotation
[Termes IGN] structure-from-motion
[Termes IGN] translation
[Termes IGN] valeur aberrante
Résumé : (auteur) The estimation of image orientation (also called pose) has always played a crucial role in photogrammetry, since it is a fundamental prerequisite for subsequent work such as multi-view dense matching and DEM/DSM generation. In the computer vision community, the task is also well known as Structure-from-Motion (SfM), in which image poses and the positions of object points are determined interdependently. Despite much effort over the last decades, the problem has recently regained photogrammetrists' interest due to the fast-growing number and variety of image sources. New challenges arise in accurately and efficiently orienting various image datasets (e.g., unordered datasets with a large number of images, or datasets containing critical stereo pairs). The ambition of this thesis is to develop a new fast and robust method for image orientation that is capable of coping with different types of datasets. To achieve this goal, particular attention is paid to the two most time-consuming steps of image orientation: (a) image matching and (b) the estimation process. To accelerate image matching, a new method employing a random k-d forest is proposed to quickly obtain pairs of overlapping images from an unordered image set. After that, image matching and the estimation of relative orientation parameters are performed only for pairs found to be very likely overlapping. To estimate the image poses in a time-efficient manner, a global image orientation strategy is advocated: all image poses are first solved simultaneously, before a single final bundle adjustment is carried out for refinement. The conventional two-step global approach is pursued in this work, separating the determination of rotation matrices and translation parameters; the former are solved by the popular method of Chatterjee and Govindu [2013], while the latter are estimated globally using a newly developed method that integrates both the relative translations and tie points. Tie points within triplets are first used to compute globally unified scale factors for each available pairwise relative translation. Then, analogous to rotation estimation, translations are determined by averaging the scaled relative translations. To improve the robustness of the solution, this thesis also focuses on coping with outliers in the relative orientations (ROs), to which global image orientation approaches are particularly sensitive. A general method based on triplet compatibility with respect to loop-closure errors of relative rotations and translations is presented for detecting blunders in relative orientations. Although this procedure eliminates many gross errors in the input ROs, it typically cannot sort out blunders caused by repetitive structures and critical configurations, such as inappropriate baselines (very short baselines or baselines parallel to the viewing direction). Therefore, another new method is proposed to eliminate wrong ROs resulting from repetitive structures and very short baselines. Two corresponding criteria that indicate the quality of ROs are introduced.
Repetitive structure is detected based on counts of conjugate points of the various image pairs, while very short baselines are found by inspecting the intersection angles of corresponding image rays. By analyzing these two criteria, incorrect ROs are detected and eliminated. As correct ROs of image pairs with a wider baseline nearly parallel to both viewing directions can be valuable, a method to identify and keep these ROs is also part of this research. The validation and evaluation of the proposed method are conducted thoroughly on various benchmarks, including ordered and unordered image sets, images with repetitive structures, inappropriate baselines, etc. In particular, robustness is investigated by demonstrating the efficacy of the corresponding RO outlier detection methods. The performance and time efficiency of the image orientation are evaluated and compared with several state-of-the-art global image orientation approaches. In summary, the experimental results show that the developed methods accomplish the image orientation task quickly and robustly on different kinds of datasets. Numéro de notice : 17672 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse étrangère Note de thèse : PhD dissertation : Fachrichtung Geodäsie und Geoinformatik : Hanovre : 2021 En ligne : https://dgk.badw.de/fileadmin/user_upload/Files/DGK/docs/c-871.pdf Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=97997
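One of the outlier-detection steps described in the record above checks triplet compatibility: chaining the three relative rotations around a closed loop should return (approximately) the identity. Below is a minimal sketch of such a rotation loop-closure test; the 2° threshold and the helper names are illustrative choices, not values taken from the thesis.

```python
# Sketch: triplet loop-closure test on relative rotations, of the kind used to
# flag blunders in relative orientations before global rotation averaging.
# Chaining R_ij (frame i -> j), R_jk and R_ki should give the identity.
import numpy as np

def rotation_angle_deg(R: np.ndarray) -> float:
    """Rotation angle of a 3x3 rotation matrix, in degrees."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def triplet_is_consistent(R_ij, R_jk, R_ki, max_error_deg: float = 2.0) -> bool:
    """True if the loop-closure residual rotation stays below the threshold."""
    loop = R_ki @ R_jk @ R_ij          # should be close to the identity
    return rotation_angle_deg(loop) <= max_error_deg

# Toy example with consistent rotations about the z-axis (10 + 20 - 30 = 0).
def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

print(triplet_is_consistent(rot_z(10), rot_z(20), rot_z(-30)))  # True
```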
Dense stereo matching strategy for oblique images that considers the plane directions in urban areas / Jianchen Liu in IEEE Transactions on geoscience and remote sensing, vol 58 n° 7 (July 2020)
[article]
Titre : Dense stereo matching strategy for oblique images that considers the plane directions in urban areas Type de document : Article/Communication Auteurs : Jianchen Liu, Auteur ; Linjing Zhang, Auteur ; Zhen Wang, Auteur ; et al., Auteur Année de publication : 2020 Article en page(s) : pp 5109 - 5116 Note générale : bibliographie Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement automatique
[Termes IGN] appariement d'images
[Termes IGN] appariement dense
[Termes IGN] appariement semi-global
[Termes IGN] bâti
[Termes IGN] carte de profondeur
[Termes IGN] corrélation épipolaire dense
[Termes IGN] distorsion d'image
[Termes IGN] erreur moyenne quadratique
[Termes IGN] image oblique
[Termes IGN] perspective
[Termes IGN] planéité
[Termes IGN] zone urbaine
Résumé : (auteur) The perspective distortion of oblique images has a substantial impact on dense matching, i.e., it reduces the matching precision. In this article, a dense matching strategy that takes the object plane direction into account is proposed. Because urban areas contain many regular planes, epipolar rectifications with minimal distortion relative to selected reference planes can be generated. The matching results of the epipolar images relative to the various reference planes are weighted and fused into a single, improved depth map. The experimental results demonstrate that the perspective distortion has a substantial influence on the dense matching performance. The root-mean-square error (RMSE) of the flatness for horizontal objects is increased by approximately 30%, and the RMSE of the flatness for façades is increased by approximately 40%. Hence, the proposed matching strategy, in which the object plane is considered, can effectively improve the matching results. Numéro de notice : A2020-394 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Article nature-HAL : ArtAvecCL-RevueIntern DOI : 10.1109/TGRS.2020.2972312 Date de publication en ligne : 20/02/2020 En ligne : https://doi.org/10.1109/TGRS.2020.2972312 Format de la ressource électronique : url article Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=95390
in IEEE Transactions on geoscience and remote sensing > vol 58 n° 7 (July 2020) . - pp 5109 - 5116 [article]
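The strategy summarized above matches on several epipolar rectifications, each aligned with a different reference plane, and then merges the per-rectification results into a single depth map by weighted fusion. Below is a minimal per-pixel fusion sketch; treating NaN as the invalid-pixel marker and using generic per-pixel weights are assumptions made for illustration, not details taken from the article.

```python
# Sketch: weighted per-pixel fusion of depth maps obtained by matching on
# epipolar images rectified relative to different reference planes.
# Invalid pixels are NaN; weights could e.g. reflect matching confidence.
import numpy as np

def fuse_depth_maps(depth_maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """depth_maps, weights: (n_planes, H, W) arrays -> fused depth (H, W)."""
    valid = ~np.isnan(depth_maps)
    w = np.where(valid, weights, 0.0)
    d = np.where(valid, depth_maps, 0.0)
    w_sum = w.sum(axis=0)
    fused = np.full(depth_maps.shape[1:], np.nan)
    ok = w_sum > 0
    fused[ok] = (w * d).sum(axis=0)[ok] / w_sum[ok]
    return fused

# Toy example: two rectifications, the second one missing a pixel.
d1 = np.array([[10.0, 12.0], [11.0, 13.0]])
d2 = np.array([[10.4, np.nan], [11.2, 12.6]])
print(fuse_depth_maps(np.stack([d1, d2]), np.ones((2, 2, 2))))
```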
Titre : Learning 3D generation and matching Type de document : Thèse/HDR Auteurs : Thibault Groueix, Auteur ; Mathieu Aubry, Directeur de thèse Editeur : Paris : Ecole Nationale des Ponts et Chaussées ENPC Année de publication : 2020 Importance : 169 p. Format : 21 x 30 cm Note générale : bibliographie
A doctoral thesis in the domain of automated signal and image processing submitted to École Doctorale Paris-Est Mathématiques et Sciences et Technologies de l'Information et de la Communication
Langues : Anglais (eng) Descripteur : [Vedettes matières IGN] Traitement d'image optique
[Termes IGN] appariement de formes
[Termes IGN] appariement dense
[Termes IGN] apprentissage profond
[Termes IGN] classification par réseau neuronal convolutif
[Termes IGN] déformation de surface
[Termes IGN] isométrie
[Termes IGN] maillage
[Termes IGN] modélisation 3D
[Termes IGN] reconstruction 3D
[Termes IGN] reconstruction d'image
[Termes IGN] segmentation d'image
[Termes IGN] semis de points
[Termes IGN] voxel
Index. décimale : THESE Thèses et HDR Résumé : (auteur) The goal of this thesis is to develop deep learning approaches to model and analyse 3D shapes. Progress in this field could democratize artistic creation of 3D assets, which currently requires time and expert skills with technical software. We focus on the design of deep learning solutions for two particular tasks, key to many 3D modeling applications: single-view reconstruction and shape matching. A single-view reconstruction (SVR) method takes as input a single image and predicts the physical world that produced that image. SVR dates back to the early days of computer vision. In particular, in the 1960s, Lawrence G. Roberts proposed to align simple 3D primitives to the input image under the assumption that the physical world is made of cuboids. Another approach, proposed by Berthold Horn in the 1970s, is to decompose the input image into intrinsic images and use those to predict the depth of every input pixel. Since several configurations of shape, texture and illumination can explain the same image, both approaches need to make assumptions about the distribution of images and 3D shapes to resolve the ambiguity. In this thesis, we learn these assumptions from large-scale datasets instead of designing them manually. Learning allows us to perform complete object reconstruction, including parts that are not visible in the input image. Shape matching aims at finding correspondences between 3D objects. Solving this task requires both a local and a global understanding of 3D shapes, which is hard to achieve explicitly. Instead, we train neural networks on large-scale datasets to solve this task and capture this knowledge implicitly through their internal parameters. Shape matching supports many 3D modeling applications such as attribute transfer, automatic rigging for animation, or mesh editing. The first technical contribution of this thesis is a new parametric representation of 3D surfaces modeled by neural networks. The choice of data representation is a critical aspect of any 3D reconstruction algorithm. Until recently, most approaches to deep 3D model generation predicted volumetric voxel grids or point clouds, which are discrete representations. Instead, we present an alternative approach that predicts a parametric surface deformation, i.e., a mapping from a template to a target geometry. To demonstrate the benefits of such a representation, we train a deep encoder-decoder for single-view reconstruction using our new representation. Our approach, dubbed AtlasNet, is the first deep single-view reconstruction approach able to reconstruct meshes from images without relying on independent post-processing, and it can do so at arbitrary resolution without memory issues. A more detailed analysis of AtlasNet reveals that it also generalizes better to categories it has not been trained on than other deep 3D generation approaches. Our second main contribution is a novel shape matching approach based purely on reconstruction via deformations. We show that the quality of the shape reconstructions is critical to obtaining good correspondences, and therefore introduce a test-time optimization scheme to refine the learned deformations. For humans and other deformable shape categories deviating by a near-isometry, our approach can leverage a shape template and isometric regularization of the surface deformations.
As categories exhibiting non-isometric variations, such as chairs, do not have a clear template, we learn how to deform any shape into any other and leverage cycle-consistency constraints to learn meaningful correspondences. Our reconstruction-for-matching strategy operates directly on point clouds, is robust to many types of perturbations, and outperforms the state of the art by 15% on dense matching of real human scans. Note de contenu : 1- Introduction
2 Related Work
3 AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
4 3D-CODED : 3D Correspondences by Deep Deformation
5 Unsupervised cycle-consistent deformation for shape matching
6 Conclusion
Numéro de notice : 28310 Affiliation des auteurs : non IGN Thématique : IMAGERIE Nature : Thèse française Note de thèse : Thèse de Doctorat : Automated signal and image processing : Paris-Est : 2020 Organisme de stage : LIGM DOI : sans En ligne : https://tel.archives-ouvertes.fr/tel-03127055v2/document Format de la ressource électronique : URL Permalink : https://documentation.ensg.eu/index.php?lvl=notice_display&id=98201
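AtlasNet, summarized in the record above, represents a surface as a learned mapping from 2D points sampled on a template (plus a shape latent code) to 3D points. Below is a minimal sketch of one such patch decoder in PyTorch; the layer widths, latent size and activation choices are illustrative assumptions rather than the published architecture.

```python
# Sketch: an AtlasNet-style patch decoder. An MLP maps a 2D point sampled on
# the unit-square template, concatenated with a shape latent code, to a 3D
# point on the surface; sampling many 2D points yields a dense point set.
import torch
import torch.nn as nn

class PatchDecoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Tanh(),   # 3D coordinates in [-1, 1]
        )

    def forward(self, uv: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        """uv: (N, P, 2) template samples; latent: (N, latent_dim) shape code."""
        latent = latent.unsqueeze(1).expand(-1, uv.shape[1], -1)
        return self.mlp(torch.cat([uv, latent], dim=-1))

# Example: decode 1024 sampled template points for a batch of 2 shapes.
decoder = PatchDecoder()
uv = torch.rand(2, 1024, 2)      # points sampled on the unit-square template
code = torch.randn(2, 128)       # shape latent (e.g. from an image encoder)
points = decoder(uv, code)
print(points.shape)              # torch.Size([2, 1024, 3])
```

Because the surface is predicted as a continuous mapping rather than a fixed grid, the same trained decoder can be sampled at any resolution, which is the memory advantage the abstract mentions.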
CNN-based dense image matching for aerial remote sensing images / Shunping Ji in Photogrammetric Engineering & Remote Sensing, PERS, vol 85 n° 6 (June 2019)
Integrated image matching and segmentation for 3D surface reconstruction in urban areas / Lei Ye in Photogrammetric Engineering & Remote Sensing, PERS, vol 84 n° 3 (March 2018)
Stand-level wind damage can be assessed using diachronic photogrammetric canopy height models / Jean-Pierre Renaud in Annals of Forest Science, vol 74 n° 4 (December 2017)
Documentation of heritage buildings using close-range UAV images: dense matching issues, comparison and case studies / Arnadi Murtiyoso in Photogrammetric record, vol 32 n° 159 (September 2017)
MicMac – a free, open-source solution for photogrammetry / Ewelina Rupnik in Open Geospatial Data, Software and Standards, vol 2 (2017)
Analysis of different methods for 3D reconstruction of natural surfaces from parallel-axes UAV images / Annette Eltner in Photogrammetric record, vol 30 n° 151 (September - November 2015)
Image matching using SIFT features and relaxation labeling technique—A constraint initializing method for dense stereo matching / Jyoti Joglekar in IEEE Transactions on geoscience and remote sensing, vol 52 n° 9 Tome 1 (September 2014)
A comparison of dense matching algorithms for scaled surface reconstruction using stereo camera rigs / Ali Hosseininaveh Ahmadabadian in ISPRS Journal of photogrammetry and remote sensing, vol 78 (April 2013)
Multi-view dense matching supported by triangular meshes / D. Butalov in ISPRS Journal of photogrammetry and remote sensing, vol 66 n° 6 (November 2011)