Publisher details
Netherlands Geodetic Commission NGC
Located in:
Delft
Related collections:
Documents available from this publisher (87)
Title: 3D topography: a simplicial complex-based solution in a spatial DBMS
Document type: Thesis/HDR
Authors: F. Penninga, Author
Publisher: Delft: Netherlands Geodetic Commission NGC
Publication year: 2008
Series: Netherlands Geodetic Commission Publications on Geodesy, ISSN 0165-1706, no. 66
Extent: 192 p.
Format: 17 x 24 cm
ISBN/ISSN/EAN: 978-90-6132-304-4
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Geomatics
[IGN terms] simplex algorithm
[IGN terms] 3D spatial database
[IGN terms] 3D spatial data
[IGN terms] urban environment
[IGN terms] conceptual spatial data model
[IGN terms] 3D geographic object
[IGN terms] database management system
[IGN terms] tetrahedron
[IGN terms] Triangulated Irregular Network
[IGN terms] 3D visualisation
Decimal index: 32.00 Topography - general
Abstract: (Author) Current topographic products are limited to a real-world representation in only two dimensions, with at best some additional point heights and contour lines. Modelling the real world in two dimensions implies a rather drastic simplification of three-dimensional real-world elements. By representing these elements in two dimensions, loss of information is inevitable. Due to this simplification, the accuracy of analysis results is limited and a meaningful, insightful representation of complex situations is hard to obtain. Environmental issues like high concentrations of particulate matter along highways in urban areas, the effects of noise and odour propagation, and risk analysis of liquefied petroleum gas storage tanks are random examples of current issues in 3D urban planning in which more precision is required than 2D analyses can offer. In a time of increasing attention for these kinds of environmental and sustainability issues, the limitations of 2D models become really problematic and trigger the demand for 3D topography.
The development of 3D topography is also supply-driven, especially by the increasing availability of high-density laser scan data. Height data becomes available with point densities - multiple height points per square meter - that were previously unthinkable with traditional photogrammetric stereo techniques. Direct 3D data acquisition by terrestrial laser scanning is emerging, thus providing detailed measurements of facades, tunnels and even indoor topography. The fast developments in this field are partly triggered by the emerging popularity of personal navigation devices, which will use 3D models in the future to simplify user interpretation of the (map) display.
Objective and research question
The objective of this research is to develop a data structure that is capable of handling large data volumes and offers support for loading, updating, querying, analysis and especially validation. To achieve this, a triangular approach will be used, due to its advantages in maintaining consistency, its robustness and its editability. This triangular approach creates a network of triangles (in 2D) or tetrahedrons (in 3D), in which topographic features are represented by sets of triangles or tetrahedrons. Such a network is an example of an irregular tessellation, in which the real world is decomposed into smaller (triangle/tetrahedron-shaped) building blocks. The resulting networks are called TINs (Triangular Irregular Networks) or TENs (TEtrahedronised irregular Networks). The presence of boundaries of topographic features is ensured by the use of constraints, preventing the deletion of crucial boundary edges and triangles. Algorithms exist to calculate these constrained triangulations and constrained tetrahedronisations of topographic data.
In this research a two-step approach will be adopted. First, one has to decide how real-world objects should be modelled into features; second, one needs to store these features in such a way that the requirements in terms of querying, analysis and validation are met. An obvious step in dealing with large volumes of geographically referenced data is to use a spatial database.
This objective is expressed in the main research question:
How can a 3D topographic representation be realised in a feature-based triangular data model?
Note that the term 'triangular' is used here in a general dimension, so both triangle- and tetrahedron-based models will be considered. As mentioned before, a two-step approach will be adopted to achieve a solution to the main research question. In accordance with the two steps, two key questions can be distinguished:
How to develop a conceptual model that describes the real-world phenomena (the topographic features), regarding the general-purpose characteristic of topographic data sets?
How to implement this conceptual model, i.e. how to develop a suitable DBMS data structure?
The results of this research will be summarised according to this two-step approach.
A conceptual data model for 3D topography
One of the basic assumptions within this research is the use of triangular data models. As a result, topographic features will be described as sets of triangles and these features will be connected by triangles as well, thus creating one triangular network. This research explored two different approaches to triangular modelling of 3D topography.
The first one is a very pragmatic hybrid approach that combines a 2.5D* surface with 3D objects for those cases where 2.5D modelling is not sufficient. In terms of triangular data structures, this approach combines a TIN with several TENs. These irregular data structures not only allow varying point density (depending on local model complexity), but even extend this irregularity into varying model dimensionality, thus offering the ultimate fit-for-purpose approach. Unfortunately, connecting TIN and TEN networks appeared to be very difficult at design level and during prototype implementation.
The second approach avoids these problems, since it is a full 3D approach using only a TEN. Two fundamental observations are of great importance:
Physical objects have by definition a volume. In reality, there are no point, line or polygon objects; only point, line or polygon representations exist (at a certain level of abstraction/generalisation).
The real world can be considered a volume partition: a set of non-overlapping volumes that form a closed (i.e. no gaps within the domain) modelled space. Objects like 'earth' or 'air' are thus explicitly included in the model.
In topographic data models, planar features like walls or roofs are obviously very useful. They can be part of the volumetric data model as 'derived features', i.e. these features depend on the relationship between volume features. For example, the earth surface is the boundary between air and earth features, while a wall or a roof is the result of adjacent building and air features. In terms of UML, these planar features are modelled as association classes. As a result, planar features are lifetime-dependent on the association between two volume features.
Among the advantages of the full volumetric approach are its explicit inclusion of air and earth (often the subject of analysis), its extensibility (geology, air traffic/telecommunication corridors, etc.) and its strong mathematical definition (full connectivity enables the use of topology for query, analysis and validation). As a result, topographic features will be modelled in a TEN. Each feature will be represented by a set of tetrahedrons.
A data structure for 3D topography
The newly developed data structure has three important characteristics:
It has a solid mathematical foundation. Operators and definitions from the mathematical field of Poincaré simplicial homology (part of algebraic topology) are used to handle simplexes†, the basic elements in a triangular data structure. Simplexes are well defined, ordered and constructed of simplexes of lower dimension. The boundary operator can be used to derive these lower-dimensional simplexes. Based on the ordering of simplexes, one can determine orientation, a useful concept in GIS. Another important concept from simplicial homology is the simplicial complex, since such a set of connected simplexes will be used to model 3D topographic features.
*See section 2.2 for an overview of often-used dimension indicators.
†A simplex can loosely be defined as the simplest shape in a dimension, where simplest refers to minimising the number of points required to define such a shape, for instance a point, a line, a triangle and a tetrahedron. See section 4.1 for a proper mathematical definition.
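The boundary operator and the role of vertex ordering described above can be illustrated with a short sketch. This is a generic Python illustration of the simplicial-homology machinery, not code from the thesis: the boundary of an ordered simplex is the alternating sum of its faces, and applying the operator twice yields zero.

```python
from collections import Counter

def boundary(simplex):
    # Oriented boundary of an ordered simplex (vertex tuple):
    # face i omits vertex i and carries sign (-1)**i.
    return [((-1) ** i, simplex[:i] + simplex[i + 1:])
            for i in range(len(simplex))]

# The boundary of tetrahedron <0, 1, 2, 3> is four oriented triangles:
tet = (0, 1, 2, 3)
print(boundary(tet))
# -> [(1, (1, 2, 3)), (-1, (0, 2, 3)), (1, (0, 1, 3)), (-1, (0, 1, 2))]

# Fundamental property of simplicial homology: the boundary of a
# boundary vanishes (every edge appears twice with opposite signs).
edges = Counter()
for s1, tri in boundary(tet):
    for s2, edge in boundary(tri):
        edges[edge] += s1 * s2
assert all(coefficient == 0 for coefficient in edges.values())
```

The sign alternation is exactly what gives full control over orientation: reversing two vertices of a simplex flips the signs of its boundary faces.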
It is developed as a spatial database data structure. Applying definitions and operators from simplicial homology enables one to store a TEN in a relatively compact way. The new simplicial complex-based method requires only explicit storage of tetrahedrons, while simplexes of lower dimensions (triangles, edges, nodes), constraints (which guarantee feature boundary presence) and topological relationships can be derived in views. Using functions to derive views from a table is typical database functionality. In this implementation, simplexes are encoded by their vertices, similar to the notation in simplicial homology. These simplex encodings are extended with a feature identifier, indicating which topographic feature is (partly) represented by this simplex. So, a tetrahedron is encoded as S3 = <v0, v1, v2, v3, fid>. Two variants in simplex encoding have been developed: coordinate concatenation and identifier concatenation. The concept of coordinate concatenation is to concatenate x, y and z coordinates as node identifiers and to concatenate the resulting unique node codes to describe simplexes of higher dimension. The alternative approach, identifier concatenation, uses separate (meaningless) node identifiers to encode simplexes to reduce the number of coordinate repetitions, since a specific node will be part of multiple tetrahedrons. This approach requires an additional node table to store node geometries.
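The claim that only tetrahedrons need explicit storage can be sketched outside the DBMS as well. The following is a hypothetical Python stand-in for the triangle view described above (the data and identifiers are illustrative, not from the thesis): lower-dimensional simplexes are derived on the fly from the stored vertex encodings.

```python
def triangle_view(tetrahedrons):
    """Derive the triangle 'view' from explicitly stored tetrahedrons.

    tetrahedrons maps a tetrahedron id to its vertex-encoded simplex
    <v0, v1, v2, v3>; each tetrahedron contributes four triangular
    faces, and a face shared by two tetrahedrons is listed only once,
    together with both incident tetrahedron ids (a topological
    adjacency relationship, also derived rather than stored).
    """
    faces = {}
    for tet_id, verts in tetrahedrons.items():
        for i in range(4):
            face = tuple(sorted(verts[:i] + verts[i + 1:]))
            faces.setdefault(face, []).append(tet_id)
    return faces

# Two adjacent tetrahedrons sharing the triangle (1, 2, 3):
tets = {101: (0, 1, 2, 3), 102: (1, 2, 3, 4)}
faces = triangle_view(tets)
assert len(faces) == 7                 # 4 + 4 faces, one shared
assert faces[(1, 2, 3)] == [101, 102]  # the shared internal triangle
```

In the database the same derivation would live in a view over the tetrahedron table, so inserting or deleting a tetrahedron automatically keeps the triangle level consistent.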
It is an editable data structure, which is a crucial prerequisite for a feasible approach to topographic data storage. Incremental updates are required, since complete rebuilds of the TEN structure will be time-consuming due to the expected data volumes. Whereas most existing update algorithms for constrained tetrahedronisations use node insertions, followed by edge reconstruction, this research presents edge insertion operators. Nine exhaustive and mutually exclusive cases are distinguished, based on the location in the TEN of the inserted edge's nodes. These operators guarantee the constrained edge's presence in the structure. Existing operators might fail to recover these edges, due to the presence of nearby constrained edges, which would typically happen in topographic data sets.
Conclusions
This dissertation presents a new topological approach to data modelling, based on a tetrahedral network. Operators and definitions from the field of simplicial homology are used to define and handle this structure of tetrahedrons. Simplicial homology provides a solid mathematical foundation for the data structure, offers full control over the orientation of simplexes and enables one to derive substantial parts of the TEN structure efficiently, instead of explicitly storing all primitives. DBMS characteristics such as the use of views, functions and function-based indexes are extensively used to realise this potential data reduction. A proof-of-concept implementation was created, and tests with several data sets show that the prevailing view that tetrahedrons are more expensive in terms of storage when compared to polyhedrons is not correct when using the proposed approach. Storage requirements for 3D objects in tetrahedronised form (excluding the space in between these objects) and for 3D objects stored as polyhedrons are in the same order of magnitude.
A TEN has favourable characteristics from a computational point of view. All elements of the tetrahedral network consist by definition of flat faces, all elements are convex and they are well defined. Validation of 3D objects is also simplified by tetrahedronisation. Furthermore, a full volumetric approach enables future integration of topography with other 3D data like geological layers, polluted regions or air traffic and telecommunication corridors. The price of this full volumetric approach in terms of storage space is high (about 75% of the tetrahedrons model air or earth); nevertheless, this approach is likely to become justifiable due to current developments towards sustainable urban development and an increased focus on environmental issues.
Now the innovative aspects of the proposed method have to be identified. Neither the idea to use a TEN data structure for 3D data nor the idea to use simplexes (in terms of simplicial homology) in a DBMS implementation is new. However, the proposed approach reduces data storage and eliminates the need for explicit updates of both topology and simplexes of lower dimension. By doing so, the approach tackles common drawbacks such as TEN extensiveness and the laboriousness of maintaining topology. Furthermore, applying simplicial homology offers full control over the orientation of simplexes, which is a significant advantage, especially in 3D. In addition to this aspect, the mathematical theory of simplicial homology offers a solid theoretical foundation for both the data structure and data operations. Integrating these concepts with database functionality results in a new, innovative approach to 3D data modelling.
An often raised objection to a TEN approach is its presumed complexity. Obviously, a 1:n relation exists between features and their tetrahedron representations. However, as long as a user handles only features (as polyhedrons) and implemented algorithms translate these polyhedrons into operations on the TEN, one can overcome the perceived complexity. Furthermore, the prevailing view that tetrahedrons are more expensive in terms of storage than polyhedrons has been falsified in this research.
Overall, the simplicial complex-based modelling approach provides a provably correct modelling method. The use of tetrahedrons is justified by the mathematical benefits and the acceptable storage requirements. Obviously, including air and earth within the model comes at a price, but - as stated earlier - this approach is likely to become justifiable, due to current sustainability and environmentally driven developments. The decision to develop the data structure as a database structure contributes to the overall feasibility, since a database will become indispensable due to the expected data volumes.
Contents: Acknowledgements
1 Introduction
1.1 Motivation
1.2 Objective and main research question
1.3 Research scope and limitations
1.4 Contribution of the work
1.5 Outline
2 Research background
2.1 Problem domain: Towards 3D topography
2.2 Defining dimensions in the range 2D-3D
2.3 Deriving requirements for the conceptual data model and structure from the problem
2.4 Managing 3D data: related research on 3D data structures
2.5 Triangular data structures and algorithms
2.6 Relevant database concepts
I Conceptual modelling of 3D Topography
3 Two triangular data models for 3D topography
3.1 Approach 1: an integrated 2.5D/3D model
3.2 Approach 2: a full 3D data model
3.3 The choice for the full 3D approach
II A Data structure for 3D Topography
4 Theoretical foundations: Poincare simplicial homology
4.1 Mathematical description of simplexes
4.2 Orientation of simplexes
4.3 Combining simplexes: simplicial complexes
4.4 Operations on simplexes and simplicial complexes
5 A simplicial complex-based solution for 3D topography
5.1 Representing topographic features in a TEN
5.2 Early ideas: three TEN-based data structures for the full 3D approach
5.3 Preferred solution: applying simplicial homology to the TEN
5.4 Implementing the data structure in a DBMS environment
5.5 Summary
6 Updating features in the Data Structure
6.1 Incremental update: feature insertion
6.2 Incremental update: feature deletion
6.3 Quality improvement of TEN structure
6.4 Initial bulk loading and bulk rebuild
III Evaluation and conclusions
7 Evaluation and discussion
7.1 Evaluation material: three different data sets
7.2 Evaluating bulk tetrahedronisation process
7.3 Evaluating storage requirements
7.4 Evaluating initial visualisation tools
7.5 Discussing requirements for 3D data sets with correct topology
7.6 Identifying future developments
8 Conclusions
8.1 Results
8.2 Main conclusions
8.3 Discussion
8.4 Future research
Record number: 15361
Author affiliation: non IGN
Theme: GEOMATICS
Nature: Foreign thesis
DOI: none
Non-digital accessibility: Not accessible via SUDOC
Online: https://www.ncgeo.nl/index.php/en/publicatiesgb/publications-on-geodesy/item/250 [...]
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=62703
Copies (2)
Barcode | Call number | Medium | Location | Section | Availability
15361-01 | 32.00 | Book | Documentation centre | Topography | Available
15361-02 | 32.00 | Book | Documentation centre | Topography | Available
Assessment and socio-economic aspects of Geographic Information Infrastructures / Bastiaan Van Loenen (2008)
Title: Assessment and socio-economic aspects of Geographic Information Infrastructures: Proceedings of the Workshop
Document type: Conference proceedings
Authors: Bastiaan Van Loenen, Scientific editor
Publisher: Delft: Netherlands Geodetic Commission NGC
Publication year: 2008
Series: Netherlands Geodetic Commission Green series, no. 46
Conference: NGC 2008, Assessment and socio-economic aspects of Geographic Information Infrastructures Workshop, 11/04/2008, Delft, Netherlands, OA proceedings
Extent: 87 p.
Format: 17 x 24 cm
ISBN/ISSN/EAN: 978-90-6132-308-2
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Data infrastructure
[IGN terms] socio-economic analysis
[IGN terms] spatial data
[IGN terms] data evaluation
[IGN terms] European spatial data infrastructure
[IGN terms] INSPIRE
Abstract: (Author) [Introduction] Socio-economic aspects of geographic information (infrastructures) (GII) are increasingly considered in GII development and especially in GII research. Where once the technological dimension of GII was the dimension assessed to be most relevant, it is now commonly understood that the non-technical aspects should also be addressed and understood in order to promote GII development. A trend towards a non-technical focus of GII strategies may even be recognized.
The socio-economic side of GII is also attracting the attention of the research community. It is evident that we are only at the beginning of the development of this loose network of those researching socio-economic GII issues (see De Man's paper). For one socio-economic aspect, GII assessment, a true loose network linking those with assessment expertise now not only shares experiences, but is also cooperatively working on the assessment issue. For other socio-economic issues such a network is still in development. One way of extending the socio-economic network is to disseminate the pool of ideas and research outcomes, so that one can speak of a true community of practice, as De Man has put it.
The proceedings of the workshop on Assessment and Socio-economic Aspects of Spatial Data Infrastructures contribute to this objective. This booklet presents the outcomes of this workshop. The workshop was initiated by the OTB Research Institute of Delft University of Technology to commemorate two years of research by one of its staff members, Garfield Giff.
A selected group of academics and professionals were invited to join the workshop and to share experiences with the socio-economic frameworks within which geographic information infrastructures are emerging, whether within an individual nation or across multinational regions. The contributions reflect the variety of socio-economic aspects of GII. More specifically, the presented work covers GII assessment theory, applied GII assessment, GII and eGovt, sociology and privacy. [...]
Contents: - Preface / Bastiaan van Loenen, Editor
- Theoretical considerations for multi-view SDI assessment / Lukasz Grus, Joep Crompvoets, Arnold Bregt
- Using Performance Indicators to assess SDIs/GISs / Garfield Giff
- SDI assessment from an organizational perspective / Wilbert Kurvers
- Examining SDI development of Turkey as a socio-technical approach / Arif Cagdas Aydinoglu, Halil Ibrahim Inan and Tahsin Yomralioglu
- SDI as a distant ship on the horizon of EGov / Walter de Vries
- Crisis in the SDI field? Or a vibrant market of ideas and initiatives between rhetoric and praxis / Erik de Man
- Implications of privacy for INSPIRE and vice versa / Bastiaan van Loenen
Record number: 15457
Author affiliation: non IGN
Theme: GEOMATICS
Nature: Proceedings
DOI: none
Online: https://www.ncgeo.nl/downloads/46VanLoenen.pdf
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=34767
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
15457-01 | CG2008 | Book | Documentation centre | Congresses | Available
Title: Sensor web enablement: Seminar day of the Netherlands Geodetic Commission, 1st February 2007, Utrecht
Document type: Conference proceedings
Authors: M. Grothe, Scientific editor; J. Kooijman, Scientific editor
Publisher: Delft: Netherlands Geodetic Commission NGC
Publication year: 2008
Series: Netherlands Geodetic Commission Green series, no. 45
Conference: NGC 2007, Seminar day of the Netherlands Geodetic Commission, Sensor web enablement, 01/02/2007, Utrecht, Netherlands, OA proceedings
Extent: 77 p.
Format: 17 x 24 cm
ISBN/ISSN/EAN: 978-90-6132-305-1
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Telematics
[IGN terms] decision support
[IGN terms] sensor (remote sensing)
[IGN terms] spatial data
[IGN terms] dynamic positioning
[IGN terms] static positioning
[IGN terms] Sensor Web Enablement
[IGN terms] web service
[IGN terms] real time
Abstract: (Publisher) The seminar 'Sensor Web Enablement' of the Netherlands Geodetic Commission was devoted to the creation of awareness of the Sensor Web and the OGC sensor web interoperability standards. The seminar aimed to improve the understanding of SWE: its concepts and applications, but also future trends and scenarios for location and sensor services. We hope that the seminar has resulted in lasting new contacts between all people in the Netherlands with an interest in sensors, location and sensor services, sensor networks and in particular the SWE standards.
The contributions in the seminar proceedings reflect the future perspective on the position and value of sensors and sensor technology, the conceptual framework of processing sensor data, and the ins and outs of the Sensor Web Enablement family of sensor standards, its test beds and applications, but also issues and items for discussion. This publication is a reflection of the different seminar contributions.
The first paper 'Location Awareness 2020. A foresight study on auto-identification and location in year 2020, and the implications for mobility' by Euro Beinat (SPINLab Vrije Universiteit Amsterdam and Salzburg University) and John Steenbruggen (Rijkswaterstaat) introduces a way to explore the future of the application of sensors and sensor networks. The authors have developed scenarios for location awareness and sensor services in 2020 with an emphasis on transportation and mobility. The paper outlines the relevant drivers and trends for the adoption of sensor services and sensor networks for future location awareness, as well as barriers to adoption. It also presents some of the recent results obtained from the Location Awareness 2020 study conducted for the innovation program on Transportation and Water management in the Netherlands (under contract to Rijkswaterstaat). The authors conclude that interoperability will be the kernel of successful adoption of location and sensor technologies in transportation.
Zoltan Papp and Henk Hakkesteegt from TNO Science and Industry address the issue of making sensors and sensor web networks more applicable in practice, namely the handling of sensor web data from interpretation to monitoring, control, maintenance and decision making. Their paper investigates how the potential of data richness can be fully utilized. More specifically, it attempts to answer questions around the integration of sensor networks and the sensor web into the data interpretation process. They illustrate that the data interpretation process has to be adjusted in order to accommodate the advantageous features of sensor web based observations. Without these adjustments the sensor web is still useful, but cannot deliver on its promises. They advocate the use of SWE and illustrate this with a water management example. At the same time, they come up with some drawbacks and issues that need further attention.
In the next paper, Alexander Walkowski (Westfälische Wilhelms-Universität Münster) introduces the main concepts and ideas of the Sensor Web Enablement initiative. One of the main objectives of SWE is finding all sensors available via the world wide web. Walkowski advocates the advantages of the standardization of access to sensors and sensor data by SWE. The SWE framework is outlined from the information model perspective and the services model perspective. A use case scenario illustrates the possibilities of SWE. It is concluded that, after the long period of evolution and testing, it is time to start applications based on the SWE framework.
In their paper 'A testbed for SWE technology' Rowena Smilie, Yves Coene (both Spacebel), Philippe Merigot, Didier Giacobbo (both Spotimage), Steven Smolders and Caroline Heylen (both GIM) outline the use of SWE technology in a number of projects of the European Space Agency (ESA). They illustrate the maturity of the used SWE concepts in several testbed projects of ESA and OGC, like the Observations and Measurements standard of the SWE information model and the application of the SWE Sensor Observation Service and Planning Service. All projects are related to the ESA Services Support Environment (SSE). Issues faced in these projects with the application of SWE concepts are raised by the authors, e.g. missing SOAP bindings in the SWE service specifications. Furthermore, future work on the application of SWE within SSE is elaborated on.
Another example of the use of SWE is given by Jan Jellema (TNO) and Peter Gijsbers (WL | Delft Hydraulics) in their paper 'Sensor Networks, basis for the Dutch Geo-infrastructure'. The paper gives a short overview of a recently started project on the application of the Sensor Web Enablement framework for water management. This project is the sensor innovation project under the 'Space for Geo-information' program in the Netherlands. The goal of the project, conducted by a consortium of major scientific institutes and sensor suppliers, is to explore the SWE concept and test its advantages and disadvantages.
The last paper 'Research topics for SWE' is by the editors Michel Grothe (Rijkswaterstaat) and Jan Kooijman (TNO). This short paper reflects the discussions and brainstorming during the seminar. The input of the seminar participants is used here to sum up the research topics for Sensor Web Enablement.
Contents: - Editorial / Michel Grothe and Jan Kooijman
- Location Awareness 2020. A foresight study on auto-identification and location in year 2020, and the implications for mobility / Euro Beinat and John Steenbruggen
- Sensor Web, Sensor Networks: New possibilities and new challenges / Zoltan Papp and Henk Hakkesteegt
- Sensor Web Enablement – An overview / Alexander C. Walkowski
- A testbed for SWE technology / Rowena Smilie, Yves Coene, Philippe Merigot, Didier Giacobbo, Steven Smolders and Caroline Heylen
- Sensor Networks, basis for the Dutch Geo-infrastructure / Jan Jellema and Peter Gijsbers
- Research topics for the Sensor Web / Michel Grothe and Jan Kooijman
Record number: 15405
Author affiliation: non IGN
Other associated URL: download
Theme: COMPUTER SCIENCE
Nature: Proceedings
DOI: none
Online: https://www.ncgeo.nl/index.php/en/publicatiesgb/green-series/item/2364-gs-45-mic [...]
Electronic resource format: URL
Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=34764
Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
15405-01 | CG2007 | Book | Documentation centre | Congresses | Available
Digital documents (open access): Sensor web enablement - publisher PDF (Adobe Acrobat PDF)
Title: Least-squares variance component estimation: Theory and GPS applications
Document type: Thesis/HDR
Authors: Ali Reza Amiri-Simkooei, Author
Publisher: Delft: Netherlands Geodetic Commission NGC
Publication year: 2007
Series: Netherlands Geodetic Commission Publications on Geodesy, ISSN 0165-1706, no. 64
Extent: 208 p.
Format: 17 x 24 cm
ISBN/ISSN/EAN: 978-90-6132-301-3
General note: Bibliography
Languages: English (eng)
Descriptors: [IGN subject headings] Space geodesy
[IGN terms] analysis of variance
[IGN terms] multivariate analysis
[IGN terms] white noise
[IGN terms] pink noise
[IGN terms] GPS coordinates
[IGN terms] statistical estimation
[IGN terms] covariance matrix
[IGN terms] least-squares method
[IGN terms] stochastic model
[IGN terms] time series
[IGN terms] GPS signal
[IGN terms] variance
Decimal index: 30.61 GNSS satellite positioning systems
Abstract: (Author) Data processing in geodetic applications often relies on the least-squares method, for which one needs a proper stochastic model of the observables. Such a realistic covariance matrix allows one first to obtain the best (minimum variance) linear unbiased estimator of the unknown parameters; second, to determine a realistic precision description of the unknowns; and, third, along with the distribution of the data, to correctly perform hypothesis testing and assess quality control measures such as reliability. In many practical applications the covariance matrix is only partly known. The covariance matrix is then usually written as an unknown linear combination of known cofactor matrices. The estimation of the unknown (co)variance components is generally referred to as variance component estimation (VCE). In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined weight matrix. Different weight matrix classes can be defined which all automatically lead to unbiased estimators of (co)variance components. LS-VCE is attractive since it allows one to apply the existing body of knowledge of least-squares theory to the problem of (co)variance component estimation.
With this method, one can 1) obtain measures of discrepancies in the stochastic model, 2) determine the covariance matrix of the (co)variance components, 3) obtain the minimum variance estimator of (co)variance components by choosing the weight matrix as the inverse of the covariance matrix, 4) take the a-priori information on the (co)variance component into account, 5) solve for a nonlinear (co)variance component model, 6) apply the idea of robust estimation to (co)variance components, 7) evaluate the estimability of the (co)variance components, and 8) avoid the problem of obtaining negative variance components. LS-VCE is capable of unifying many of the existing VCE-methods such as MINQUE, BIQUE, and REML, which can be recovered by making appropriate choices for the weight matrix. An important feature of the LS-VCE method is the capability of applying hypothesis testing to the stochastic model, for which we rely on the w-test, v-test, and overall model test. We aim to find an appropriate structure for the stochastic model which includes the relevant noise components into the covariance matrix. The w-test statistic is introduced to see whether or not a certain noise component is likely to be present in the observations, which consequently can be included in the stochastic model. Based on the normal distribution of the original observables we determine the mean and the variance of the w-test statistic, which are zero and one, respectively. The distribution is a linear combination of mutually independent central chi-square distributions each with one degree of freedom. This distribution can be approximated by the standard normal distribution for some special cases. An equivalent expression for the w-test is given by introducing the v-test statistic. The goal is to decrease the number of (co)variance components of the stochastic model by testing the significance of the components. 
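The core of LS-VCE, writing the covariance matrix as an unknown linear combination of known cofactor matrices, E{ee^T} = sum_k sigma_k Q_k, and estimating the sigma_k by least-squares, can be sketched as follows. This is a minimal illustration under simplifying assumptions (no functional model, known cofactor matrices, unit weight matrix choice yielding the standard normal equations), not the author's implementation:

```python
import numpy as np

def ls_vce(e, cofactors, n_iter=10):
    """Least-squares VCE for E{e e^T} = sum_k sigma_k * Q_k.

    Iterates the LS-VCE normal equations N sigma = r with
    N_kl = 1/2 tr(Q^-1 Q_k Q^-1 Q_l) and
    r_k  = 1/2 e^T Q^-1 Q_k Q^-1 e,
    where Q is rebuilt from the current sigma estimate each pass.
    """
    p = len(cofactors)
    # Start from a pure first-component model (e.g. white noise only).
    sigma = np.array([1.0] + [0.0] * (p - 1))
    for _ in range(n_iter):
        Q_inv = np.linalg.inv(sum(s * Qk for s, Qk in zip(sigma, cofactors)))
        N = np.empty((p, p))
        r = np.empty(p)
        for k in range(p):
            QiQk = Q_inv @ cofactors[k]
            r[k] = 0.5 * e @ QiQk @ Q_inv @ e
            for l in range(p):
                N[k, l] = 0.5 * np.trace(QiQk @ Q_inv @ cofactors[l])
        sigma = np.linalg.solve(N, r)
    return sigma

# Toy example: white noise (Q1 = I) plus an elevation-style weighted
# component (Q2 diagonal). e = ones is fully explained by unit white
# noise, so the estimate converges to sigma = (1, 0).
e = np.ones(4)
Q1 = np.eye(4)
Q2 = np.diag([1.0, 2.0, 3.0, 4.0])
sigma = ls_vce(e, [Q1, Q2])
assert np.allclose(sigma, [1.0, 0.0])
```

Choosing the weight matrix as the inverse of the covariance matrix, as in point 3 above, is what makes this particular iteration yield the minimum variance estimator; other weight matrix classes change N and r but still give unbiased estimates.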
The overall model test is introduced to generally test the appropriateness of a proposed stochastic model. We also apply LS-VCE to real data in two GPS applications. First, LS-VCE is applied to the GPS geometry-free model. We present the functional and stochastic model of the GPS observables. The variance components of different observation types, the satellite elevation dependence of the GPS observables’ precision, and the correlation between different observation types are estimated by LS-VCE. We show that the precision of the GPS observables clearly depends on the elevation angle of the satellites. Also, significant correlation between observation types is found. For the second application we assess the noise characteristics of time series of daily coordinates for permanent GPS stations. We apply LS-VCE to estimate white noise and power-law noise (flicker noise and random walk noise) amplitudes in these time series. The results confirm that the time series are highly time-correlated. We also use the w-test statistic to find an appropriate stochastic model of GPS time series. A combination of white noise, autoregressive noise, and flicker noise in general best characterizes the noise in all three position components. Unmodelled periodic effects in the data are then captured by a set of harmonic functions, for which we rely on least-squares harmonic estimation (LS-HE), developed in the same framework as LS-VCE. The results confirm the presence of annual and semiannual signals, as well as other significant periodic patterns in the series. To avoid biased estimation of the variance components, such sinusoidal signals should be included in the functional part of the model before applying LS-VCE.
Contents note: 1. Introduction
2. Least-Squares Estimation and Validation
3. Variance Component Estimation: A Review
4. Least-Squares Variance Component Estimation
5. Detection and Validation in Stochastic Model
6. Multivariate Variance-Covariance Analysis
7. GPS Geometry-Free Model
8. GPS Coordinate Time Series
9. Conclusions and Recommendations
A. Mathematical Background
B. Derivation of Equations
C. Moments of Normally Distributed Data
D. Mixed model with hard constraints
Bibliography
Record number: 15303 Author affiliation: non-IGN Theme: POSITIONING Nature: Foreign thesis DOI: none Online: https://www.ncgeo.nl/downloads/71Memarzadeh.pdf Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=62689 Copies (1)
Barcode | Call number | Medium | Location | Section | Availability
15303-01 | 30.61 | Book | Centre de documentation | Geodesy | Available
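As a concrete illustration of the LS-VCE procedure summarized in the abstract above, the following sketch estimates two variance components of a grouped-noise model, choosing the weight matrix as the inverse of the covariance matrix (the minimum-variance choice the abstract mentions). This is my own minimal simulation, not code from the thesis; the toy functional model and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: y = A x + e, with two observation groups whose noise variances
# (the two variance components) are unknown: Q_yy = s1 * Q1 + s2 * Q2.
m1 = m2 = 150
m = m1 + m2
t = np.linspace(0.0, 1.0, m)
A = np.column_stack([np.ones(m), t])             # design matrix: intercept + trend
Q1 = np.diag(np.r_[np.ones(m1), np.zeros(m2)])   # cofactor matrix, group 1
Q2 = np.diag(np.r_[np.zeros(m1), np.ones(m2)])   # cofactor matrix, group 2
e = np.r_[rng.normal(0.0, 2.0, m1), rng.normal(0.0, 1.0, m2)]
y = A @ np.array([1.0, 2.0]) + e                 # true components: 4.0 and 1.0

# Iterated LS-VCE: a linear model of observation equations for the components,
# solved through its normal equations N s = u at each iteration.
Qk = [Q1, Q2]
s = np.array([1.0, 1.0])                         # initial guess
for _ in range(20):
    Q = s[0] * Q1 + s[1] * Q2
    Qi = np.linalg.inv(Q)
    # Orthogonal projector onto the complement of the range of A.
    P = np.eye(m) - A @ np.linalg.solve(A.T @ Qi @ A, A.T @ Qi)
    r = Qi @ (P @ y)                             # Q^-1 times the LS residuals
    QiP = Qi @ P
    N = np.array([[0.5 * np.trace(QiP @ Qk[k] @ QiP @ Qk[l]) for l in range(2)]
                  for k in range(2)])
    u = np.array([0.5 * r @ (Qk[k] @ r) for k in range(2)])
    s = np.linalg.solve(N, u)

print(s)  # estimates close to the true components [4.0, 1.0]
```

With this choice of weight matrix the iteration reproduces the BIQUE/REML-type estimates that the abstract says LS-VCE unifies; a different user-defined weight matrix would change `N` and `u` but still give unbiased components.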
Title: Towards a rigorous logic for spatial data representation Document type: Thesis/HDR Authors: Rodney James Thompson, Author Publisher: Delft : Netherlands Geodetic Commission NGC Publication year: 2007 Series: Netherlands Geodetic Commission Publications on Geodesy, ISSN 0165-1706, no. 65 Extent: 332 p. Format: 17 x 24 cm ISBN/ISSN/EAN: 978-90-6132-303-7 General note: Bibliography Languages: English (eng) Descriptors: [IGN subject headings] Geographic information systems
[IGN terms] spatial data
[IGN terms] vector data
[IGN terms] logic
[IGN terms] conceptual model of spatial data
[IGN terms] geometric primitive
[IGN terms] data representation
[IGN terms] automatic data processing
Abstract: (Author) The storage and retrieval of spatial data in computer systems has matured greatly over recent years, from the earliest approaches (simple digitised linework and text) to the representation of features and their attributes, with the semantics of their behaviour associated. This has led to massive cost savings, as data captured for a specific purpose can be shared and reused for other purposes.
In this first generation of Geographic Information Systems (GIS), the data is stored locally, with each vendor using different nomenclature and definitions of spatial objects and very different rules for what is accepted as ''valid''. As a result, a scientist using a desktop GIS may need to expend a considerable portion of his/her research effort and funds on translating, cleaning and preparing pre-existing data to convert it to the form required for the study.
For some years now, there has been a trend towards spatial data being housed within a database management system, these being considered as a corporate resource, leading to the realisation that the geographic data itself is in fact an infrastructure, in the same way as is, for example, a telephone network. This moves the ownership of the data from the desktop, firstly to the corporation, and ultimately to being a shared resource between public and private organisations - a Geographic Information Infrastructure (GII).
An inhibiting factor in these trends is the lack of standardisation alluded to above. Where every data-sharing operation involves manual intervention, it is difficult, if not impossible, to create a GII. Thus a strong and consistent set of standards is needed, the most basic requirement being consistency in the geometric concepts used. While progress is being made by groups such as the International Organization for Standardization Technical Committee 211 (ISO TC211) and the Open Geospatial Consortium (OGC), there is still much to be done.
The success of these standardisation efforts has been compromised by the requirement to be vendor neutral - i.e. to avoid specifying an internal representation to be used for storage. For example, the standards will remain silent on whether coordinate values should be stored in floating point or integer format.
As a result, definitions of spatial objects are expressed in mathematical terms assuming an infinite precision real number system, with the details of how this is to be translated into the computational representation being left to the implementer. Therefore there is no agreed normative meaning of the ''equals'' predicate when applied to geometric objects, and definitions of validity are in general left to the implementers.
If the standardisation effort is to allow spatial data to be interchanged without expensive manual intervention, a well defined logic is needed to underpin the standards and support the definition of validity of that data. This would also ensure that inferences drawn from the digital model remain consistent and do not lead to logical fallacies.
The terminology of spatial databases is couched in the language of mathematics, with operations given names such as ''union'' and ''intersection'' and with vector-like representations in use. This naturally leads to the impression that the representations form a topological and/or vector space. Unfortunately this is not the case. Generally speaking, the rigorous mathematics used in the definition of spatial objects ends before the database representation itself, which is only an approximation of the theoretical formalism used to define it.
This thesis documents a number of cases that illustrate the potential breakdown of logic to be found in current technology, for example, cases where the union or intersection operations lead to inconsistent results. Various alternative approaches that have been investigated in search of solutions are discussed, and their advantages and disadvantages indicated.
This current research has been motivated by an attempt to apply the mathematical approach to the actual representation of spatial features within the computer system. In this rigorous approach, the assumptions (or ''axioms'') are clearly identified, and used to develop a chain of argument, leading to a proof of the required proposition. The advantage of this approach in the field of spatial data representation is that, if the computer hardware can be verified to obey the axioms, then the correct results of the algorithms are assured.
In order to facilitate such a chain of proof, a form of representation known as the regular polytope has been defined, based on a small set of axioms and definitions, and shown to possess a consistent and complete logic. That is to say, the computational representation itself expresses the algebraic formalism, rather than being an approximation to an idealized mathematical model.
Thus this representation is capable of providing a potential storage structure for a useful class of features, but this should not be seen as the sole object of the research. Rather the regular polytope should be seen as an exemplar for any approach to spatial data representation and storage.
The fact that this particular representation can be axiomatically defined and implemented demonstrates that such an approach is feasible, and opens the possibility that all computational representations can be similarly analysed. The regular polytope is a particularly tractable construct for this type of analysis, which is the reason for choosing it. By contrast the kind of structure embedded in many current systems is far more complex. In particular, floating point numbers add a significant level of complexity, and only the most basic topological behaviour has been proved where floating point operations are assumed.
Based on integer and domain restricted rational arithmetic, it is shown that the logic of topology, the Boolean connection algebra and the region connection calculus can be expressed directly by the database implementation. Thus a database built on this structure cannot suffer from the kinds of breakdown of logic discussed above. In addition, this raises the prospect of a definition of validity and robustness of representation that is not vendor specific.
A regular polytope representation of spatial objects is defined as the union of a finite set of (possibly overlapping) "convex regular polytopes", which are in turn defined as the intersection of a finite set of half spaces. These half spaces are defined by finite precision number representations. The term ''Regular Polytope'' here does not carry its conventional meaning as the generalisation of a regular polyhedron (one having equal sides, faces and angles etc.). In the form used here, it combines the topological term ''regular'' with the conventional geometric meaning of ''polyhedron''.
The actual definition is given in axiomatic form, structured so as to form a ''boundary free'' representation, valid in any number of dimensions. Although it is explored here principally in 3D, particular reference is made to the mixture of 2D and 3D found in many current application areas such as cadastral property boundaries. Particular attention is paid to the issue of connectivity, both within and between regular polytopes, and the resultant logic is developed in terms of well studied concepts such as the region connection calculus.
The particular representation chosen for the half space ensures that adjoining regular polytopes have no points in common, and that no points exist between them. Thus it is possible to define a complete partition of space in which every point that can be represented computationally belongs to one and only one region. In the traditional representations of 2D polygons and 3D polyhedra, points play the very important role of carrying the metric information. By contrast, in regular polytopes points play no role in the definition at all; instead the metric is specified via the half-plane or half-space coefficients, using 3 integers in 2D or 4 in 3D.
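The point behaviour described above can be made concrete with a small sketch. The code below is my own simplified 2D illustration, not Thompson's actual definitions (which handle boundary cases lexicographically): with integer coefficients evaluated on integer grid points, the complement of a strict half-plane is again a strict half-plane, so two adjoining regions share no representable point and leave none between them, and a convex polytope is simply an intersection of such half-planes.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class HalfPlane:
    # Integer points (x, y) with a*x + b*y + c > 0, evaluated in exact arithmetic.
    a: int
    b: int
    c: int

    def contains(self, x: int, y: int) -> bool:
        return self.a * x + self.b * y + self.c > 0

    def complement(self) -> "HalfPlane":
        # On an integer grid, a*x + b*y + c <= 0 is the same as
        # -a*x - b*y - c + 1 > 0, so the complement is itself a strict
        # half-plane: no shared boundary points, and no gap between regions.
        return HalfPlane(-self.a, -self.b, -self.c + 1)

def in_convex(planes, x, y):
    # A convex polytope is the intersection of a finite set of half-planes.
    return all(h.contains(x, y) for h in planes)

# Partition the grid along the line x = 2: every representable point lies in
# exactly one of the two adjoining regions.
right = HalfPlane(1, 0, -2)   # integer points with x > 2
left = right.complement()     # integer points with x <= 2
exclusive = all(left.contains(x, y) != right.contains(x, y)
                for x, y in product(range(-5, 8), repeat=2))
print(exclusive)  # True

# An open box as an intersection of four half-planes: 0 < x < 5, 0 < y < 5.
box = [HalfPlane(1, 0, 0), HalfPlane(-1, 0, 5),
       HalfPlane(0, 1, 0), HalfPlane(0, -1, 5)]
inside = sum(in_convex(box, x, y) for x, y in product(range(-1, 7), repeat=2))
print(inside)  # 16 grid points: x and y each in {1, 2, 3, 4}
```

Note that no point coordinates appear in the region definitions at all; the metric lives entirely in the three integers of each half-plane, matching the paragraph above.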
This theoretic basis is then applied to actual database schema design, and several alternative models proposed and analysed. As a check on the practicality of the algorithms, ''proof of concept'' classes have been developed in the Java programming language, and tested on a significant set of cadastral parcels (2D and 3D) from the Queensland cadastre.
Finally, further areas of research are identified, including extensions of the approach to wider problem domains.
Contents note: 1. Introduction
1.1. Research Question
1.2. Research Approach
1.3. Scope of Research
1.4. Nomenclature
1.5. Computational Representation of Vector Spatial Data
1.6. Contribution of this Work
1.7. Organisation of the Thesis
2. Case Studies
2.1. Case 1. Polygon Union
2.2. Case 2. Data Interchange
2.3. Case 3. ISO 19107 Definition of Equality
2.4. Case 4. ISO 19107 Definition of Simplicity
2.5. Case 5. Intersection of a Point with a Line
2.6. Case 6. Narrow Cadastral Parcels
2.7. Case 7. 3D Surfaces and Lines
2.8. Case 8. ISO 19107 Definition of "interior to" association
2.9. Case 9. Adjoining polygon points
3. Related Work and Theory
3.1. Historic Perspective
3.2. Spatial Logic
3.3. Precision of Calculations and Representation
3.4. The Digital Representation
3.5. Conclusions
4. The Regular Polytope Representation
4.1. The Regular Polytope
4.2. Properties of the Regular Polytope Representation
4.3. Integer Approach
4.4. Domain-Restricted Rational Number Approach
4.5. Floating Point Number Approach
4.6. Conclusion
5. Connectivity in the Regular Polytope Representation
5.1. Connectivity of Geometric Objects
5.2. Connectivity of Convex Polytopes
5.3. Connectivity of Regular Polytopes
5.4. Properties of CA and CB
5.5. Further Connectivity Relations
5.6. Partitioning of Space
5.7. Robustness of Regular Polytopes
5.8. Robustness of Connected Regular Polytopes
5.9. Conclusions
6. Algebras of Connectivity
6.1. The Region Connection Calculus (RCC)
6.2. The Spatial Relations on Regular Polytopes
6.3. Dimensionality of Overlap
6.4. Proximity Space
6.5. Boolean Connection Algebra
6.6. Properties of the Space of Regular Polytopes
6.7. The Convex Hull
6.8. Expressiveness of the Relations and Functions
6.9. Relationship with Constraint Databases
6.10. Conclusions
7. The Data Model
7.1. Vertex-based Representations
7.2. The Discrete Regular Polytope Model
7.3. Topological Encoding of Regular Polytopes
7.4. The Approximated Polytope Model
7.5. Extension to Topological Encoding
7.6. Spatial Indexing of the Regular Polytope
7.7. Relationship with Other Approaches
7.8. Summary of Data Volumes
7.9. Conclusions
8. Implementation Issues
8.1. Rationale for the Approach Taken
8.2. Description of the Java Objects
8.3. Proof of Concept Data
8.4. Algorithmic Complexity
8.5. Optimising the Model
8.6. Data Load Issues
8.7. Conclusions
9. Review of Case Studies
9.1. Case 1. Polygon Union
9.2. Case 2. Data Interchange
9.3. Case 3. ISO 19107 Definition of equals()
9.4. Case 4. ISO 19107 Definition of isSimple()
9.5. Case 5. Intersection of a Point with a Line
9.6. Case 6. Narrow Cadastral Parcels
9.7. Case 7. 3D Surfaces and Lines
9.8. Case 8. ISO 19107 Definition of "interior to" Association
9.9. Case 9. Adjoining Polygon Points
9.10. Case 10. 3D Cadastre Issues
9.11. Case 11. Datum Conversion
9.12. Case 12. Uniqueness Of Representation
9.13. Case 13. GeoTools/GeoAPI definition of Object.equals()
9.14. Conclusions
10. Conclusions
10.1. Application of the Regular Polytope to Lower Dimensionality
10.2. Learnings and Future Research
10.3. Conclusion
Record number: 15360 Author affiliation: non-IGN Theme: GEOMATICS/COMPUTING Nature: Foreign thesis DOI: none Online: https://www.ncgeo.nl/downloads/65Thompson.pdf Electronic resource format: URL Permalink: https://documentation.ensg.eu/index.php?lvl=notice_display&id=62702