3.4 Chapter 3 Summary

After the structure of the global quaternary triangular mesh tessellation was described, specific aspects of its geometry, topology, and node and facet numbering were elaborated. The variability of facet sizes was discussed, using an analysis which shows that the largest facets at any level have less than twice the area of the smallest ones, and that these extreme cases lie thousands of kilometers apart. Algorithms for computing facet areas were presented, along with summary statistics of areal variation for five levels of detail.
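To make the facet-area computation concrete, here is a minimal sketch (not necessarily the algorithm presented in chapter 3) that derives the area of a spherical triangle from its vertex coordinates using L'Huilier's formula for spherical excess. The function names and the mean Earth radius constant are illustrative assumptions.

```python
import math

R_EARTH_KM = 6371.0  # assumed mean Earth radius

def to_unit_vector(lon_deg, lat_deg):
    """Convert geographic coordinates to a 3D unit vector."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def arc(u, v):
    """Central angle between two unit vectors, via atan2 for stability."""
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    dot = sum(a * b for a, b in zip(u, v))
    return math.atan2(math.sqrt(sum(c * c for c in cross)), dot)

def spherical_triangle_area(p1, p2, p3, radius=R_EARTH_KM):
    """Area from spherical excess E (L'Huilier): area = E * radius**2."""
    u1, u2, u3 = (to_unit_vector(*p) for p in (p1, p2, p3))
    a, b, c = arc(u2, u3), arc(u1, u3), arc(u1, u2)
    s = (a + b + c) / 2.0
    e = 4.0 * math.atan(math.sqrt(max(0.0,
        math.tan(s / 2) * math.tan((s - a) / 2) *
        math.tan((s - b) / 2) * math.tan((s - c) / 2))))
    return e * radius * radius

# Sanity check on one octant face: 1/8 of the sphere's surface,
# i.e. 4*pi*R**2 / 8, about 63.8 million square km.
print(spherical_triangle_area((0, 0), (90, 0), (0, 90)))
```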


The method of facet numbering was described next, and shown to derive from the properties of node basis numbers (either 1, 2 or 3) and how they permute at successive levels of detail. The numbering scheme was compared to the similar models described in chapter 2. A set of in-line functions for computing octant numbers from coordinates and for identifying neighboring octants was also specified.
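To give the flavor of such in-line functions, here is a hedged sketch of computing an octant number and an eastern neighbor from geographic coordinates. The numbering convention assumed here (octants 1 through 4 tiling the northern hemisphere eastward from Greenwich, 5 through 8 directly south of them) is for illustration only and may not match the convention defined in chapter 3.

```python
def octant_number(lon_deg, lat_deg):
    """Octant containing a point, under the assumed numbering convention."""
    lon = lon_deg % 360.0            # normalize longitude to [0, 360)
    quadrant = int(lon // 90.0)      # 0..3, one per 90-degree lune
    return quadrant + 1 if lat_deg >= 0.0 else quadrant + 5

def east_neighbor(octant):
    """Octant across the eastern meridional edge, in the same hemisphere."""
    base = 1 if octant <= 4 else 5
    return base + (octant - base + 1) % 4
```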
Coordinate conversion is the operation central to the aspects of geoprocessing reported in this chapter. A method for deriving QTM IDs from geographic coordinates was presented, as was its inverse, which uses the same algorithm but requires making certain assumptions, since generating geographic locations from QTM IDs is non-deterministic. It was stressed that to encode points into QTM properly, reasonable estimates of their accuracy are just as essential as their locations.
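The following planar sketch illustrates the recursive character of the forward conversion. Actual QTM encoding operates on (projected) spherical octant geometry; here a 2D triangle stands in for an octant, and the digit convention assumed (0 for the central quadrant, 1 through 3 for the corner quadrants) is an illustrative assumption rather than the text's specification.

```python
def barycentric(p, tri):
    """Barycentric coordinates of point p with respect to triangle tri."""
    (x, y), ((x1, y1), (x2, y2), (x3, y3)) = p, tri
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / d
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / d
    return (l1, l2, 1.0 - l1 - l2)

def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def qtm_digits(p, tri, levels):
    """One quaternary digit per level: which child triangle contains p."""
    digits = []
    for _ in range(levels):
        lam = barycentric(p, tri)
        # m[i] is the midpoint of the edge opposite vertex i
        m = [midpoint(tri[1], tri[2]),
             midpoint(tri[0], tri[2]),
             midpoint(tri[0], tri[1])]
        corner = next((i for i in range(3) if lam[i] >= 0.5), None)
        if corner is None:       # all barycentrics < 1/2: central child
            digits.append(0)
            tri = m              # the central child is the midpoint triangle
        else:                    # corner child keeps vertex `corner`
            digits.append(corner + 1)
            tri = [tri[i] if i == corner else m[3 - corner - i]
                   for i in range(3)]
    return digits

# Five-level digits of a point in a unit right triangle standing in
# for an octant; prepending an octant number would complete the ID.
print(qtm_digits((0.3, 0.2), [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], 5))
```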
The nature and structure of QTM identifiers were described next, for both string and binary representations. Strings are more wasteful of space but more efficient to parse. Binary QTM IDs hold more information in less space (8 bytes versus up to 32 for strings) and are better suited for archival use. Two metadata encoding schemes were presented for “enriching” and “enhancing” QTM IDs; these add useful information in unused portions of the identifiers, either permanently or temporarily.
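The arithmetic behind the 8-byte figure is easy to sketch: one octant digit plus a level count plus 2 bits per quaternary digit fit comfortably in a 64-bit word. The field layout below is an assumption for illustration, not the layout specified in the text; in particular, the enriched or enhanced metadata would occupy the unused low-order bits.

```python
def pack_qtm(octant, digits):
    """Pack an octant (1-8) and up to 28 quaternary digits into 64 bits:
    3 bits of octant, 5 bits of level count, then 2 bits per digit."""
    assert 1 <= octant <= 8 and len(digits) <= 28
    word = ((octant - 1) << 61) | (len(digits) << 56)
    for i, d in enumerate(digits):
        word |= d << (54 - 2 * i)    # digits left-justified below the header
    return word

def unpack_qtm(word):
    """Recover (octant, digits) from a packed 64-bit QTM word."""
    octant = (word >> 61) + 1
    level = (word >> 56) & 0x1F
    return octant, [(word >> (54 - 2 * i)) & 0x3 for i in range(level)]

assert unpack_qtm(pack_qtm(3, [1, 0, 2, 3])) == (3, [1, 0, 2, 3])
```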
The next section discussed general requirements for spatial access to coordinate data. Some access mechanisms that use quadtrees to perform indexing were discussed, noting that although QTM facilitates such indexing, the use of QTM IDs for storing hierarchical coordinates does not dictate a particular spatial indexing strategy.
The notion of attractors was explored in the next section. These are nodal regions, consisting of either a single QTM quadrant or a group of six, that serve to relate data points falling in neighboring sub-trees. Attractors are identified with QTM mesh nodes, and can be named for one of the quadrants they contain. The hierarchy of attractors was described, and some of their roles in generalizing map data were discussed.
The chapter concluded with a discussion of how QTM can be used to verify, discover or estimate the positional accuracy and certainty of data it encodes. This capability is vital when spatial data to be encoded are poorly documented or heterogeneous. Statistical trend analysis of lines filtered via QTM can reveal the upper and lower limits of useful resolution for files, feature sets and individual features. It was shown that a side-effect of such analyses can be robust estimates of the fractal dimensionality of feature data.
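The fractal estimate alluded to here follows the logic of a Richardson plot: measure a line with coarser and coarser steps, regress log length on log step size, and take D as approximately 1 minus the slope. The sketch below uses simple vertex-striding rather than QTM filtering, so it illustrates the principle rather than the text's method; numpy is assumed to be available.

```python
import math
import numpy as np

def stepped_length(pts, step):
    """Chord length of a polyline walked with strides of at least `step`."""
    total, last = 0.0, pts[0]
    for p in pts[1:]:
        d = math.dist(last, p)
        if d >= step:
            total += d
            last = p
    return total

def fractal_dimension(pts, steps):
    """Estimate D from the slope of log(length) against log(step).
    Steps must be small enough that every measured length is nonzero."""
    lengths = [stepped_length(pts, s) for s in steps]
    slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
    return 1.0 - slope
```

For a smooth arc this returns a value near 1; increasingly convoluted lines yield higher estimates, mirroring the resolution-limit behavior described above.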

Chapter 4

Using QTM to Support Map Generalization



We think in generalities, but we live in detail.

— Alfred North Whitehead


Our life is frittered away by detail ... Simplify, simplify.

— Henry David Thoreau


Because map generalization is so often necessitated by changing the scale of representation of spatial data, it is a natural application for a hierarchical coordinate system such as QTM. Not only are QTM-encoded coordinates intrinsically scale-specific (within a factor of two); the partitioning of the planet that the tessellation defines also provides a matrix of locations within and among which spatial data can be referenced and manipulated. First this chapter will discuss what “generalization operators” are and which of them might be aided by spatial encodings such as QTM. Differences between hierarchical and non-hierarchical operators will be outlined, to help explain why hierarchical coordinates need not be generalized hierarchically.
After a brief overview of the breadth of operators that QTM might enable or assist, the chapter narrows its focus to a single one, line simplification. A large number of algorithms for this purpose have been proposed and implemented, several of which are briefly discussed to illustrate their properties, strengths and weaknesses. Each of these algorithms requires specification of one or more parameters that control and constrain its operation, values for which are not always easy to decide upon. The discussion of setting parameters leads to the introduction of QTM-based line generalization, an approach to which is then outlined (with further details and algorithms provided in appendix A).
One of the most troublesome and difficult-to-implement aspects of digital map generalization — detection and resolution of spatial conflicts — is addressed next. We describe aspects of conflict identification that use QTM attractor regions to index areas where potential for congestion or overlap exists. After discussing several alternative strategies for this, our implemented method — a directory-based approach — is described and illustrated.
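In outline, a directory of this kind can be as simple as a hash table keyed by attractor identifier, listing the features whose vertices occupy each attractor; only features that share an entry need pairwise conflict tests. The sketch below is a hedged rendering of that idea, not the implemented method, and attractor_of is a hypothetical stand-in for the attractor computation of chapter 3.

```python
from collections import defaultdict

def build_conflict_directory(features, attractor_of):
    """features: mapping of feature id -> sequence of vertex coordinates.
    attractor_of: function from a vertex to its attractor identifier."""
    directory = defaultdict(set)
    for fid, vertices in features.items():
        for v in vertices:
            directory[attractor_of(v)].add(fid)
    # Keep only attractors hosting two or more features: these are the
    # only places where congestion or overlap can arise.
    return {a: fids for a, fids in directory.items() if len(fids) > 1}
```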
The final section concerns some useful and important refinements to our basic line simplification algorithm involving vertex selection and line classification. A method for characterizing line sinuosity and embedding this information as positional metadata is described. Whether or not this particular classification method is optimal in all cases, the strategy is a general one that can utilize other parametric methods for steering point selection.
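As one concrete (and deliberately simplified) example of such a parametric measure, the ratio of arc length to anchor-line length over a window of vertices gauges local sinuosity; the text's own classifier may differ in both measure and windowing.

```python
import math

def sinuosity(pts, start, end):
    """Arc length divided by chord length over vertices start..end;
    1.0 for a straight run, larger for wigglier stretches."""
    arc = sum(math.dist(pts[i], pts[i + 1]) for i in range(start, end))
    chord = math.dist(pts[start], pts[end])
    return arc / chord if chord > 0.0 else float("inf")
```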
Having detailed our generalization strategies in this chapter, we shall devote chapter 5 to reporting empirical tests of QTM-based generalization operators that reveal some of the behavior of the parameters affecting the process. In that chapter, the system assembled as a testbed will be described, including the hardware and software platforms involved, the data model employed to structure test data, and the datasets themselves. Further tables and figures describing the test runs are provided in appendices C and D.

4.1 Map Generalization: Methods and Digital Implementations

The need for map generalization is as easy to understand as its methods are difficult to formalize. Before grappling with the latter, consider the former from the point of view of a poet and an acute observer of nature:


Wherever there is life, there is twist and mess: the frizz of an arctic lichen, the tangle of brush along a bank, the dogleg of a dog’s leg, the way a line has got to curve, split or knob. The planet is characterized by its very jaggedness, its random heaps of mountains, its frayed fringes of shore. (Dillard 1974: 141)
As for modeling such a mess, Annie Dillard goes on to say:
Think of a globe, revolving on a stand. Think of a contour globe, whose mountain ranges cast shadows, whose continents rise in bas-relief above the oceans. But then: think of how it really is. These heights aren’t just suggested; they’re there ... It is all so sculptural, three-dimensional, casting a shadow. What if you had an enormous globe in relief that was so huge it showed roads and houses — a geological survey globe, a quarter of a mile to an inch — of the whole world, and the ocean floor! Looking at it, you would know what had to be left out: the free-standing sculptural arrangement of furniture in rooms, the jumble of broken rocks in a creek bed, tools in a box, labyrinthine ocean liners, the shape of snapdragons, walrus. (Dillard 1974: 141)
Deciding what to leave out from such a globe (or map), and where and how to do that, is the essence of map generalization, a graphic art that has resisted many attempts to verbalize and codify its practice. Although digital mapping has been used since the 1960s and interactive map generation practiced since the mid-1970s, it was only toward the end of the 1980s that systematic overviews of generalization of digital map data were put forward (Brassel and Weibel 1988; McMaster 1989), enabling software engineers to more precisely emulate what map makers do.
Human cartographers generalize maps by imploding graphic representations of places and things on the earth’s surface; in the process they decide what original entities to include, and whether and how their symbolization should be altered. Mapping houses have developed guidelines that specify how symbolism should change according to scale, which feature classes take priority over others, and other rules of a more aesthetic nature, such as for managing feature density, label placement and line simplification. In digital mapping environments, which nearly all mapping houses now utilize, such guidelines are still useful, but are not usually specific or formal enough to enable automation of the process, unless a human is constantly available for steering it — the “amplified intelligence” approach to generalization (Weibel 1991).
The tasks that generalization involves have been classified in a number of ways. One of the most widely used typologies is that developed by McMaster and Shea (1992). In it, about a half-dozen generalization operators are defined, each of which may be applied in a variety of ways, depending on the type of map features involved, and via a variety of algorithms, each having its own logic, efficiency, advantages and disadvantages. However, the McMaster and Shea typology — now the dominant paradigm — failed to include an important operator, feature selection. Selection (and its inverse, elimination) is usually performed first, before any map data or symbolism is changed. Adding this to the list provided by McMaster and Shea and reorganizing it somewhat results in the following set of operators (recapped in a short code sketch after the list):
Elimination/Selection — Determining which features (or feature classes) are significant at a given scale and selecting those to be displayed.

Simplification — Caricaturing shapes and reducing data density in either the geometric or the attribute domain.

Smoothing — Making lines and regions less angular.

Aggregation — Merging adjacent symbols or attribute categories, sometimes changing their form of symbolization.

Collapse — Reducing feature dimensionality; e.g., transforming regions into lines or points.

Displacement — Moving map symbols to alleviate crowding.

Enhancement — Adding (invented, presumed or actual) detail to features to locally or globally exaggerate their graphic character.
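For readers who think in code, the operator set can be recapped as a simple enumeration that a generalization pipeline might dispatch on; the names follow the list above and do not correspond to any published API.

```python
from enum import Enum, auto

class GeneralizationOperator(Enum):
    SELECTION = auto()       # choose (or eliminate) features or classes
    SIMPLIFICATION = auto()  # reduce geometric or attribute density
    SMOOTHING = auto()       # make lines and regions less angular
    AGGREGATION = auto()     # merge adjacent symbols or categories
    COLLAPSE = auto()        # reduce dimensionality (area -> line -> point)
    DISPLACEMENT = auto()    # move symbols apart to relieve crowding
    ENHANCEMENT = auto()     # add detail to exaggerate graphic character
```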
A GIS may use a number of different abstract datatypes to encode map features. Different data structures such as polylines, polygons, splines, grids, triangulations or hierarchical data models tend to have specialized generalization requirements, so that the above operators work differently in various domains. Applying them may also cause side-effects, such as altering feature topology, and may require referring to the attributes of features, or even altering them in the course of generalizing map data. Table 4.1, adapted from Weibel and Dutton (1997), gives a simple summary of such contingencies, with illustrative examples of how operators apply to the principal datatypes that GISs manipulate.

As table 4.1 illustrates, map generalization has many components and contexts. Most discussions of generalization operators are limited to vector primitives, and do not include the categories Fields and Hierarchies, as we do here. While these are higher-level constructs than Points, Lines and Areas, we have included them in this table to show that most operators are still valid and in many cases can be implemented with little or no more difficulty than the basic spatial primitives require. In addition, it should be clear that many of the operators in table 4.1 may interact with one another in various ways, which must be dealt with in any map production environment, even for interactive work. Most of these complexities must be put aside here, for better or worse. In the following discussion of QTM-based generalization strategies, primary attention will be paid to the process of simplification; in the empirical tests described below and documented in appendices C and D, only results of simplification of linear and polygonal features are reported. However, there are other generalization operators that QTM can help to implement, and some of these applications will be mentioned in the next section and discussed further in chapter 5.




