Publisher details
International Institute for Geo-Information Science and Earth Observation ITC
Located in:
Enschede
Documents available from this publisher (7)
Photogrammetric point clouds: quality assessment, filtering, and change detection / Zhenchao Zhang (2022)
Title: Photogrammetric point clouds: quality assessment, filtering, and change detection. Document type: Thesis/HDR. Authors: Zhenchao Zhang, Author; M. George Vosselman, Author; Markus Gerke, Author; Michael Ying Yang, Author. Publisher: Enschede [Netherlands]: International Institute for Geo-Information Science and Earth Observation ITC. Publication year: 2022. General note: bibliography
NB: TEXT UNDER EMBARGO UNTIL 1 JULY 2022. Languages: English (eng). Descriptors: [IGN subject headings] Photogrammetric applications
[IGN terms] dense matching
[IGN terms] change detection
[IGN terms] lidar data
[IGN terms] 3D geospatial data
[IGN terms] data quality
[IGN terms] convolutional neural network
[IGN terms] semantic segmentation
[IGN terms] point cloud. Abstract: (author) 3D change detection has drawn increasing attention in recent years due to the growing availability of 3D data. It can be used in land use / land cover (LULC) change detection, 3D geographic information updating, terrain deformation analysis, urban construction monitoring, and other fields. Our motivation to study 3D change detection is mainly the practical need to update outdated point clouds captured by Airborne Laser Scanning (ALS) with new point clouds obtained by dense image matching (DIM).
The thesis has three main parts. The first part, chapter 1, explains the motivation, reviews current ALS and airborne photogrammetry techniques, and presents the research objectives and questions. The second part, comprising chapters 2 and 3, evaluates the quality of photogrammetric products and investigates their potential for change detection. The third part, comprising chapters 4 and 5, proposes two change detection methods that meet different requirements.
To investigate the potential of using densely matched point clouds for change detection, we propose a framework for evaluating the quality of 3D point clouds and DSMs generated by dense image matching. The framework, based on a large number of square patches, reveals the distribution of dense matching errors across the whole photogrammetric block. Robust quality measures quantify DIM accuracy and precision. The overall mean offset to the reference is 0.1 Ground Sample Distance (GSD); the maximum mean deviation reaches 1.0 GSD. Based on many patch-based samples, we also find that the distribution of dense matching errors is homogeneous across the block and close to a normal distribution. In some locations, however, especially along narrow alleys, the mean deviations are larger. In addition, profiles of ALS points and DIM points show that the DIM profile fluctuates around the ALS profile. The accuracy of the DIM point cloud improves, and the noise level on smooth ground areas decreases, when oblique images are used in dense matching together with nadir images.
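The patch-based evaluation described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the thesis's actual implementation: the function name, patch size, and synthetic data are all made up for the example.

```python
import numpy as np

def patch_quality(dim_dsm, ref_dsm, gsd, patch=32):
    """Patch-based quality measures for a DIM DSM against a reference DSM.
    Per square patch: mean height offset (accuracy) and standard deviation
    of residuals (precision), both expressed in GSD units. Sketch only."""
    h, w = ref_dsm.shape
    offsets, spreads = [], []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            res = dim_dsm[r:r+patch, c:c+patch] - ref_dsm[r:r+patch, c:c+patch]
            offsets.append(res.mean() / gsd)   # mean offset, in GSD
            spreads.append(res.std() / gsd)    # noise level, in GSD
    return np.array(offsets), np.array(spreads)

# Synthetic example: DIM surface = reference + 1 cm bias + 2 cm noise, 10 cm GSD
rng = np.random.default_rng(0)
ref = rng.normal(100.0, 5.0, (128, 128))
dim = ref + 0.01 + rng.normal(0.0, 0.02, ref.shape)
off, spr = patch_quality(dim, ref, gsd=0.1)
print(off.mean(), spr.mean())  # mean offset ~0.1 GSD for this synthetic bias
```

Collecting one offset per patch, rather than pooling all residuals, is what makes the block-wide error distribution visible.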
Then we evaluate whether standard LiDAR filters are effective at filtering dense matching points in order to derive accurate DTMs. Filtering results on a city block show that LiDAR filters perform well on grassland, along bushes, and around individual trees if the point cloud is sufficiently precise. When a ranking filter is applied to the point cloud beforehand, the filtering identifies fewer but more reliable ground points, although some small objects on the terrain are filtered out. Since we aim at accurate DTMs, the ranking filter proves valuable for identifying reliable ground points. Building on these findings on DIM quality, we propose a method to detect building changes between ALS and photogrammetric data. First, the ALS points and DIM points are split out and concatenated with the orthoimages. The multimodal data are normalized and fed into a pseudo-Siamese neural network for change detection. The changed objects are then delineated through per-pixel classification and artefact removal. The change detection module, based on a pseudo-Siamese CNN, can quickly localize changes and generate coarse change maps; the next module can then map change boundaries precisely. Experimental results show that the proposed pseudo-Siamese network copes with DIM errors and outputs plausible change detection results. Although the point cloud quality from dense matching is not as fine as that of laser scanning, the spectral and textural information provided by the orthoimages serves as a supplement.
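The idea behind ground filtering can be illustrated with a deliberately crude stand-in: keep only points close to the lowest point of their grid cell. Real LiDAR filters and the ranking filter discussed above are considerably more elaborate; the function below is only a sketch, and its name, cell size, and tolerance are invented for the example.

```python
import numpy as np

def lowest_point_filter(points, cell=1.0, tol=0.2):
    """Label likely ground points: keep points within `tol` metres of the
    lowest point in their grid cell. A minimal stand-in for a real ground
    filter, for illustration only."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    zmin_per_cell = {}
    for k, key in enumerate(map(tuple, ij)):
        z = points[k, 2]
        if key not in zmin_per_cell or z < zmin_per_cell[key]:
            zmin_per_cell[key] = z
    zmin = np.array([zmin_per_cell[tuple(c)] for c in ij])
    return points[:, 2] <= zmin + tol

# Flat terrain near z=0 with two elevated (non-ground) points
pts = np.array([[0.1, 0.1, 0.0], [0.4, 0.2, 0.05], [0.5, 0.5, 2.0],
                [1.2, 0.3, 0.0], [1.6, 0.8, 1.9]])
mask = lowest_point_filter(pts, cell=1.0, tol=0.2)
print(mask)  # ground points True, elevated points False
```

The trade-off mentioned in the abstract shows up even here: a tight tolerance rejects noisy ground points, while a loose one admits low objects on the terrain.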
Considering that semantic segmentation and change detection are correlated tasks, we propose the SiamPointNet++ model to combine them in one framework. The method outputs a pointwise joint label for each ALS point: if an ALS point is unchanged, it is assigned a semantic label; if it is changed, it is assigned a change label. The semantic and change information are thus encoded in the joint labels with minimal redundancy. The combined Siamese network learns both intra-epoch and inter-epoch features. Intra-epoch features are extracted at multiple scales to embed local and global information. Inter-epoch features are extracted by Conjugated Ball Sampling (CBS) and concatenated to infer changes. Experiments on the Rotterdam data set indicate that the network learns multi-task features effectively. It is invariant to permutation and noise of the inputs and robust to the differences between ALS and DIM data. Compared with a sophisticated object-based method and supervised change detection, this method requires far fewer hyperparameters and much less human intervention, yet achieves superior performance.
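The joint labelling scheme described above (a semantic label for unchanged points, a change label for changed ones) can be sketched as a simple encoding. The class lists below are hypothetical examples, not the categories used in the thesis.

```python
# Hypothetical label sets -- not the ones used in the thesis.
SEMANTIC = ["ground", "building", "vegetation", "water"]
CHANGE = ["new_building", "demolished", "tree_removed"]

def joint_label(changed: bool, cls: int) -> int:
    """Map (changed flag, class index) to a single joint label id:
    unchanged points use [0, len(SEMANTIC)), changed points the rest."""
    return len(SEMANTIC) + cls if changed else cls

def decode(label: int):
    """Recover (changed flag, class name) from a joint label id."""
    if label < len(SEMANTIC):
        return False, SEMANTIC[label]
    return True, CHANGE[label - len(SEMANTIC)]

print(joint_label(False, 1))          # 1 -> unchanged "building"
print(decode(joint_label(True, 0)))   # (True, 'new_building')
```

One flat label space lets a single pointwise classifier answer both questions at once, which is what keeps the information redundancy minimal.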
In conclusion, the thesis evaluates the quality of dense matching points and investigates their potential for updating outdated ALS points. The two change detection methods, developed for different applications, show their potential for automating topographic change detection and point cloud updating. Future work may focus on improving the generalizability and interpretability of the proposed models. Record number: 20403. Author affiliation: non-IGN. Subject area: IMAGERY/COMPUTING. Nature: foreign thesis. Thesis note: PhD thesis: Geo-Information Science and Earth Observation: Enschede, University of Twente: 2022. DOI: 10.3990/1.9789036552653. Online publication date: 14/01/2022. Online: https://research.utwente.nl/en/publications/photogrammetric-point-clouds-quality [...] Electronic resource format: URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100963
Dynamic scene understanding using deep neural networks / Ye Lyu (2021)
Title: Dynamic scene understanding using deep neural networks. Document type: Thesis/HDR. Authors: Ye Lyu, Author; M. George Vosselman, Thesis supervisor; Michael Ying Yang, Thesis supervisor. Publisher: Enschede [Netherlands]: International Institute for Geo-Information Science and Earth Observation ITC. Publication year: 2021. Languages: English (eng). Descriptors: [IGN subject headings] Image processing
[IGN terms] attention (machine learning)
[IGN terms] processing pipeline
[IGN terms] conditional random field
[IGN terms] image understanding
[IGN terms] object detection
[IGN terms] UAV imagery
[IGN terms] video frame
[IGN terms] target tracking
[IGN terms] regression
[IGN terms] semantic segmentation. Abstract: (author) Scene understanding is an important and fundamental research field in computer vision, and is quite useful for many applications in photogrammetry and remote sensing. It focuses on locating and classifying objects in images and understanding the relationships between them. The higher goal is to interpret what event happens in the scene, when and why it happens, and what we should do based on that information. Dynamic scene understanding uses information from different times to interpret scenes and answer the above questions. For modern scene understanding technology, deep learning has shown great potential for such tasks. "Deep" in deep learning refers to the use of multiple layers in the neural networks. Deep neural networks are powerful because they are highly non-linear functions that, after proper training, can map one domain to a quite different one. They are currently the leading solution for many fundamental research tasks in scene understanding. This PhD research also takes advantage of deep learning for dynamic scene understanding. Temporal information plays an important role here. Compared with static scene understanding from images, information distilled from the time dimension adds value in several ways. Images in consecutive frames are highly correlated, i.e., objects observed in one frame are very likely to be observed and identified in nearby frames as well. Such redundancy in observation can reduce the uncertainty of object recognition with deep learning based methods, resulting in more consistent inference. High correlation across frames can also improve the chance of recognizing objects correctly. If the camera or the object moves, the object can be observed in multiple views with different poses and appearance.
The information captured for object recognition is then more diverse and complementary, and can be aggregated to jointly infer the categories and properties of objects. This PhD research involves several tasks related to dynamic scene understanding in computer vision, including semantic segmentation for aerial platform images (chapters 2, 3), video object segmentation and video object detection for common objects in natural scenes (chapters 4, 5), and multi-object tracking and segmentation for cars and pedestrians in driving scenes (chapter 6). Chapter 2 investigates how to establish a semantic segmentation benchmark for UAV images, covering data collection, data labeling, dataset construction, and performance evaluation with baseline deep neural networks and the proposed multi-scale dilation net. A conditional random field with feature space optimization is used to achieve consistent semantic segmentation predictions in videos. Chapter 3 investigates how to better extract scene context information for better object recognition performance by proposing novel bidirectional multi-scale attention networks. These achieve better performance by inferring features and attention weights for feature fusion from both higher-level and lower-level branches. Chapter 4 investigates how to simultaneously segment multiple objects across multiple frames by combining memory modules with instance segmentation networks. Our method learns to propagate the target object labels without auxiliary data such as optical flow, which simplifies the model. Chapter 5 investigates how to improve the performance of well-trained object detectors with a lightweight and efficient plug-and-play tracker for object detection in video. This chapter also investigates how the proposed model performs when video training data are lacking. Chapter 6 investigates how to improve the performance of detection, segmentation, and tracking by jointly considering top-down and bottom-up inference.
The whole pipeline follows a multi-task design, i.e., a single feature extraction backbone with multiple heads for different sub-tasks. Overall, this manuscript has delved into several computer vision tasks that share fundamental research problems, including detection, segmentation, and tracking. Based on the research experiments and the literature review, several reflections on dynamic scene understanding are discussed: the range of object context influences the quality of object recognition; the quality of video data affects the choice of method for a specific computer vision task; detection and tracking are complementary to each other. For future work, a unified dynamic scene understanding task could be a trend, and transformers plus self-supervised learning are one promising research direction. Real-time processing for dynamic scene understanding requires further research before the methods can be used in real-world applications. Record number: 12984. Author affiliation: non-IGN. Subject area: IMAGERY/COMPUTING. Nature: foreign thesis. Thesis note: PhD thesis: Geo-Information Science and Earth Observation: Enschede, University of Twente: 2021. DOI: 10.3990/1.9789036552233. Online publication date: 08/09/2021. Online: https://library.itc.utwente.nl/papers_2021/phd/lyu.pdf Electronic resource format: URL. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=100962
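The multi-task design mentioned in the abstract (one shared backbone, one head per sub-task) can be sketched in a few lines. The layer sizes, head names, and random weights below are arbitrary illustrative choices, not those of the thesis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared backbone weights plus one small linear head per sub-task.
W_backbone = rng.normal(size=(16, 8))
heads = {
    "detection":    rng.normal(size=(8, 4)),  # e.g. box regression outputs
    "segmentation": rng.normal(size=(8, 5)),  # e.g. 5 class scores
    "tracking":     rng.normal(size=(8, 2)),  # e.g. association scores
}

def forward(x):
    """Extract shared ReLU features once, then apply every task head."""
    feat = np.maximum(x @ W_backbone, 0.0)
    return {name: feat @ W for name, W in heads.items()}

out = forward(rng.normal(size=(3, 16)))  # batch of 3 feature vectors
print({k: v.shape for k, v in out.items()})
```

The design choice is that the expensive feature extraction runs once per input, and each sub-task only pays for its own small head.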
Principles of geographic information systems: an introductory textbook / Otto Huisman (2009)
Title: Principles of geographic information systems: an introductory textbook. Document type: Guide/Manual. Authors: Otto Huisman, Scientific editor; A. De By Rolf, Scientific editor. Publisher: Enschede [Netherlands]: International Institute for Geo-Information Science and Earth Observation ITC. Publication year: 2009. Series: ITC Educational textbook series, ISSN 1567-5777. Extent: 540 p. Format: 30 x 21 cm. ISBN/ISSN/EAN: 978-90-6164-269-5. Languages: English (eng). Descriptors: [IGN subject headings] Geographic information systems
[IGN terms] spatial analysis
[IGN terms] geospatial database
[IGN terms] geospatial data
[IGN terms] geographic information system
[IGN terms] data visualization. Abstract: (librarian) Digital course material in English covering the essential concepts of GIS. Contents: 1. A gentle introduction to GIS
2. Geographical information and spatial data types
3. Data management and processing systems
4. Spatial referencing and positioning
5. Data entry and preparation
6. Spatial data analysis
7. Data visualization
Record number: 14954. Author affiliation: non-IGN. Subject area: GEOMATICS. Nature: course manual. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=77820 Digital documents
open access
principles of GIS (Adobe Acrobat PDF)
Sampling scheme optimization from hyperspectral data / Pravesh Debba (2006)
Title: Sampling scheme optimization from hyperspectral data. Document type: Thesis/HDR. Authors: Pravesh Debba, Author. Publisher: Enschede [Netherlands]: International Institute for Geo-Information Science and Earth Observation ITC. Publication year: 2006. Series: ITC Publication no. 136. Extent: 164 p. Format: 16 x 24 cm. ISBN/ISSN/EAN: 978-90-8504-462-8. General note: bibliography
Thesis submitted in fulfilment of the requirements for the degree of doctor on the authority of the rector magnificus of Wageningen University. Languages: English (eng). Descriptors: [IGN subject headings] Optical image processing
[IGN terms] sampling
[IGN terms] hyperspectral imagery
[IGN terms] optimization (mathematics). Decimal index: 35.20 Image processing. Record number: 17213. Author affiliation: non-IGN. Subject area: IMAGERY. Nature: foreign thesis. Thesis note: thesis: Geoinformation science: Wageningen University: 2006. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=81337 Copies (1)
Barcode: 17213-01. Call number: 35.20. Medium: Book. Location: Documentation centre. Section: Remote sensing. Availability: Available
The space-time cube revisited from a geovisualization perspective / Menno-Jan Kraak (2003)
Title: The space-time cube revisited from a geovisualization perspective. Document type: Article/Paper. Authors: Menno-Jan Kraak, Author. Publisher: Enschede [Netherlands]: International Institute for Geo-Information Science and Earth Observation ITC. Publication year: 2003. Conference: ICC 2003, 21st International Cartographic Conference of ICA, 10/08/2003-16/08/2003, Durban, South Africa. Extent: 9 p. General note: bibliography. Languages: English (eng). Descriptors: [IGN terms] spatio-temporal database
[IGN terms] space-time cube
[IGN terms] Time-geography
[IGN subject headings] Geovisualization. Abstract: (author) At the end of the sixties Hägerstrand introduced a space-time model which included features such as a Space-Time-Path and a Space-Time-Prism. His model is often seen as the start of time-geography studies. Over the years his model has been applied and improved to understand our movements through space. Problems studied come from different fields of geography and range from individual movement to whole theories for optimizing transportation. From a visualization perspective, the Space-Time-Cube was the most prominent element of Hägerstrand's approach. In its basic appearance, the image consists of a cube with a representation of geography on its base (along the x- and y-axes), while the cube's height represents time (z-axis). A typical Space-Time-Cube could contain the space-time paths of, for instance, individuals or bus routes. However, when the concept was introduced, the options for creating the graphics were limited to manual methods, and the user could only experience the single view created by the draftsperson; an alternative view on the cube meant going through a laborious drawing exercise. Today's software can automatically create the cube and its contents from a database. Data acquisition of space-time paths for both individuals and groups is also made easier by GPS. Today, the user's viewing environment is interactive by default and allows one to view the cube from any direction. In this paper an extended interactive and dynamic visualization environment is proposed and demonstrated, in which the user has full flexibility to view, manipulate, and query the data in a Space-Time-Cube. Included are options to move slider planes along each of the axes to, for instance, select or highlight a period in time or a location in space. Examples are discussed in which the time axis is manipulated, for instance by exchanging world time for event time (time cartograms).
Creativity should not stop here, since it has been shown that an alternative perspective on the data in particular will spark the mind with new ideas. The user should be allowed, for instance, to let the x- and/or y-axis represent other variables of the theme studied. Since the cube is seen as an integral part of a geovisualization environment, the option to link other views with other graphic representations also exists. Record number: C2003-037. Author affiliation: non-IGN. Subject area: GEOMATICS. Nature: conference paper. nature-HAL: ComAvecCL&ActesPubliésIntl. DOI: none. Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=85102 Digital documents
open access
The space-time cube revisited (Adobe Acrobat PDF)
Exploring coastal morphodynamics of Ameland (the Netherlands) with remote sensing monitoring techniques and modelling in GIS / M. Eleved (1999) Permalink
Electronic atlases 2 / B. Kobben (1997) Permalink
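The core interaction Kraak's abstract describes, sliding a plane along the cube's time axis to select part of a space-time path, can be sketched with a few (x, y, t) samples. The path data below are made up for illustration.

```python
import numpy as np

# A space-time path as (x, y, t) rows: geography on the x/y base of the
# cube, time along the z-axis. Synthetic sample data.
path = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.5, 1.0],
                 [2.0, 1.5, 2.0],
                 [2.5, 3.0, 3.0]])

def time_slice(path, t0, t1):
    """Return the path samples whose time coordinate lies in [t0, t1],
    mimicking two slider planes moved along the cube's time axis."""
    t = path[:, 2]
    return path[(t >= t0) & (t <= t1)]

segment = time_slice(path, 1.0, 2.5)
print(segment[:, 2])  # the samples at t = 1.0 and t = 2.0 fall inside
```

Sliders along the x- or y-axis would work the same way on columns 0 and 1, which is why the cube generalizes so naturally to interactive querying.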