Abstract:
(Editor) The contributions reflect the diversity of the possible interactions between computational geometry and GIS. The topics range from overviews of relevant techniques and tools to solutions of specific spatial problems in either the object-based (vector) or the field-based (raster) domain. This publication is a reflection of the different seminar contributions. The first paper, 'Computational Geometry: its objectives and relation to GIS', is by Marc van Kreveld (Utrecht University). The analysis of algorithms involves understanding how efficiently an algorithm solves a problem, and one of the main objectives of computational geometry is finding the most efficient algorithms for all sorts of geometric problems. He introduces the main concepts and ideas in computational geometry, including efficiency analysis, intractability, output-sensitive algorithms, and approximation algorithms. The basic problems of computational geometry all have a direct or indirect use in GIS. He also indicates why computational geometry is not as useful to GIS as it could be (complicated algorithms, a focus on worst-case efficiency and on well-defined, simply stated problems) and how this is currently improving (available software libraries, simpler algorithms that are provably efficient under realistic assumptions).
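As an illustration of one such basic geometric problem, the convex hull of a point set can be computed in O(n log n) time. A minimal sketch using Andrew's monotone chain, a standard textbook algorithm (not taken from van Kreveld's paper):

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: O(n log n) for the sort, O(n) for the scans.
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each chain's last point is the other chain's first; drop the duplicates.
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (1, 1), (2, 2), (0, 2), (2, 0)])
```

The worst-case analysis mentioned above applies directly here: the running time is dominated by the sort regardless of the input distribution.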
Mark de Berg (TU Eindhoven) addresses one of the issues that must be resolved to make computational geometry techniques more applicable in practice, namely the handling of large data sets that do not fit in main memory (a fit that is often implicitly assumed in the description of many data structures and algorithms). In his paper 'I/O- and Cache-efficient Algorithms for Spatial Data', he explains how the hierarchical memory, consisting of a disk, main memory, and several levels of cache, should be taken into account in data structure and algorithm design. The differences between the access times of these levels of memory are quite large: accessing the disk is typically about 100,000 times slower than accessing main memory. The paper discusses some of the recent results on I/O- and cache-efficient algorithms, with a focus on spatial data.
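The underlying I/O model measures an algorithm's cost in block transfers between disk and main memory rather than in CPU steps. A toy sketch of that accounting (the block size, the data, and the counter are invented for illustration; they are not from de Berg's paper):

```python
# Toy model of the external-memory (I/O) model: cost = number of block
# transfers between "disk" and main memory, not CPU operations.
B = 4          # records per disk block (hypothetical)
io_count = 0   # number of block transfers performed so far

def read_block(disk, i):
    """Fetch one block of B records from 'disk' (a list standing in for a file)."""
    global io_count
    io_count += 1
    return disk[i * B:(i + 1) * B]

def external_scan_max(disk):
    """A sequential scan of n records costs ceil(n / B) I/Os -- the
    baseline that I/O-efficient algorithms are compared against."""
    n_blocks = -(-len(disk) // B)  # ceiling division
    best = float("-inf")
    for i in range(n_blocks):
        for x in read_block(disk, i):
            best = max(best, x)
    return best

data = [7, 2, 9, 4, 1, 8, 3, 6, 5, 0]
result = external_scan_max(data)  # 10 records with B = 4 cost ceil(10/4) = 3 I/Os
```

The point of the model is visible even in this toy: halving the number of block transfers matters far more than halving the number of comparisons.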
One specific data structure, based on quad-edges and applied to creating and editing three-dimensional models, is described by Christopher Gold and Rebecca Tse (University of Glamorgan, UK) in their paper 'Quad-Edges and Euler Operators for Automatic Building Extrusion Using LiDAR Data' (LiDAR: LIght Detection And Ranging). The long-term research objective for their models is to integrate man-made objects with the landscape, so that topological properties, such as connectedness, can be used in applications such as flood modeling. Man-made objects such as buildings, as well as terrain elevation, should be extracted directly from LiDAR data. Their model is a triangle-based boundary description of the relevant objects and the earth's surface. Model creation and local modifications (updates) are performed on the quad-edge data structure using Euler operators. These operators permit various extrusion operations as well as the manual insertion of bridges and tunnels.
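Euler operators are designed to modify a boundary model while keeping it a topologically valid closed surface, for which the Euler formula V - E + F = 2 holds. A toy check for a rectangular building footprint extruded into a box (the counts below are standard polyhedron facts, not taken from the paper):

```python
def euler_characteristic(v, e, f):
    """V - E + F; equals 2 for any closed, sphere-like polyhedral surface.
    Euler operators preserve this invariant across every model update."""
    return v - e + f

# A rectangular footprint extruded into a box: 8 vertices, 12 edges, 6 faces.
box_ok = euler_characteristic(8, 12, 6) == 2
```

This invariant is what lets a sequence of extrusion operations be validated locally instead of re-checking the whole model.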
A description of computational geometry tools used to solve a few specific cartographic problems is given by Bettina Speckmann (TU Eindhoven) in her paper 'Algorithms for cartograms and other specialized maps'. Cartograms are a useful and intuitive tool for visualizing statistical data about a set of regions such as countries, states, or counties. The size of a region in a cartogram corresponds to a particular geographic variable, so the regions generally cannot keep both their shape and their adjacencies. A good cartogram, however, preserves recognizability in some way. The paper gives a short overview of cartogram algorithms and focuses in particular on the computation of rectangular cartograms, in which each region is represented by a rectangle. An implementation and various tests show that, in practice, visually pleasing rectangular cartograms with small cartographic error can be generated effectively. The computation of proportional symbol maps is also discussed briefly.
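The defining constraint of a cartogram, that each region's area be proportional to its data value, can be sketched as follows (the region names, values, and total map area are invented; the hard combinatorial step the paper addresses, packing the rectangles so adjacencies survive, is deliberately omitted):

```python
# Sizing step of a rectangular cartogram: give every region an area
# proportional to its data value. Regions and values are hypothetical.
populations = {"A": 4.0, "B": 2.0, "C": 1.0}
total_value = sum(populations.values())
total_area = 100.0  # total map area to distribute

areas = {region: total_area * value / total_value
         for region, value in populations.items()}
# A real rectangular cartogram must additionally choose each rectangle's
# width/height and position so that neighboring regions stay adjacent;
# deviations from the exact target areas are the "cartographic error".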
Three-dimensional topographic modeling is also the topic of the paper by Friso Penninga (TU Delft): 'Constrained tetrahedral models and update algorithms for topographic data'. In contrast to the work of Gold and Tse, he does not represent the bounding surfaces but represents the three-dimensional objects themselves as sets of tetrahedra. The whole model then becomes a tetrahedronized irregular network (TEN), the 3D counterpart of the better-known triangulated irregular network (TIN). The TEN is a well-defined and robust data structure that enables complex processing: each primitive is processed separately first, and the partial results are then joined into a final result. To represent object boundaries, certain edges and faces are handled as constraints; updating a topographic dataset therefore amounts to adding and removing constraints within the network. One of the biggest challenges in realizing such a data structure and the corresponding algorithms is to reach acceptable performance despite the potentially enormous amount of data.

The last paper, 'Towards improved solution schemes for Monte Carlo simulation in environmental modeling languages', is by Derek Karssenberg and Kor de Jong (Utrecht University). They deal with the field-based representation of spatial data, in contrast to the object-based representation used in the other papers. On the most frequently used field-based data structure, the regular grid, the algorithmic challenges are quite different from their counterparts in the object-based approaches. Environmental modeling languages such as PCRaster are programming languages embedded in GIS to simulate environmental processes. These languages are used to construct dynamic models, also called forward models: simulations run forward in time, where the state of the model at time t is defined as a function of its state in a time step preceding t.
For future applications, at least two extensions to these languages are required: support for three spatial dimensions (as the real world is often 3D) and inclusion of Monte Carlo simulation techniques (to calculate how input errors propagate to the output of a model).
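A dynamic model with Monte Carlo error propagation can be sketched as follows. The decay model, the grid, the rate, and the noise level are all invented for illustration, and PCRaster's actual syntax differs; only the structure (state at t computed from state at t-1, repeated over sampled inputs) reflects the papers' description:

```python
import random

def step(grid, rate):
    """State at time t as a function of the state at t-1 (simple decay)."""
    return [[cell * (1.0 - rate) for cell in row] for row in grid]

def run_model(initial, rate, n_steps):
    """Forward model: iterate the transition function over time."""
    grid = initial
    for _ in range(n_steps):
        grid = step(grid, rate)
    return grid

def monte_carlo(initial, rate_mean, rate_sd, n_steps, n_runs, seed=0):
    """Sample the uncertain input (rate) and collect output realizations;
    their spread estimates how input error propagates to the output."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_runs):
        rate = min(max(rng.gauss(rate_mean, rate_sd), 0.0), 1.0)
        outputs.append(run_model(initial, rate, n_steps)[0][0])
    return outputs

initial = [[100.0, 80.0], [60.0, 40.0]]
samples = monte_carlo(initial, rate_mean=0.1, rate_sd=0.02,
                      n_steps=5, n_runs=200)
mean = sum(samples) / len(samples)
```

With an exact rate of 0.1 the first cell would end at 100 * 0.9^5 ≈ 59.05; the spread of `samples` around that value is the propagated input error.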