5.3 Evaluation and Discussion of Findings

Our experiments demonstrate that using the QTM grid as a spatial filter works reasonably well, at least in terms of the aesthetic quality of the map features generalized to date. Even when vertices are selected by mechanical sampling, the character of the features seems to be maintained over a wide range of scales (on the order of 1:32, and possibly more), across which other map simplification methods may not work as gracefully. The parameters we have implemented to control the process provide for many subtle possibilities, only some of which have been examined so far.


Degree of Generalization. As worked examples accumulated that showed the effects of our parameters, we continued to observe that linework was not being simplified by our algorithm to the degree that a cartographer might insist upon. This is evident in most of the maps presented in D2-D7 in the form of small details that would have been eliminated in a manual process. Strategies that seemed to help most in maximizing the degree of simplification are:
• Using Attractors rather than QTM Facets as mesh elements,
• Using a Hierarchical Generalization Mode, and
• Selecting only one vertex to represent a mesh element run.
But even applied in concert, these strategies still did not always yield sufficient simplification. We suspect some of these problems are due to having more than one run of vertices occupy a given mesh element; when this happens, even if each run is reduced to a single vertex, there can still be a number of them remaining in the mesh element. Replacing all the runs with one point would probably achieve sufficient simplification, but this would also tend to introduce topological errors that might be quite difficult to diagnose and correct. So, we must learn to live with the degree of generalization that our current methods provide, and look for better methods to handle specific problem areas. Some ideas that may help in this regard are outlined in section 5.4.
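To make the run-reduction strategy concrete, the sketch below collapses each run of consecutive vertices falling in the same mesh element to one representative point. It is only an illustration under stated assumptions: vertex coordinates and precomputed mesh-element identifiers (QTM facet or attractor IDs at the retrieval level) arrive as parallel lists, and the middle vertex of a run stands in for whatever selection rule weedQids actually applies. Because grouping is by consecutive runs, a feature that re-enters the same element still leaves one vertex per visit, which is exactly the residual detail discussed above.

    from itertools import groupby

    def reduce_runs(vertices, mesh_ids):
        """Collapse each run of consecutive vertices sharing a mesh-element
        identifier to a single representative vertex. Endpoints are always
        retained so that shared boundaries stay consistent."""
        kept = [vertices[0]]                              # always keep the first vertex
        interior = range(1, len(vertices) - 1)
        for _, run in groupby(interior, key=lambda i: mesh_ids[i]):
            idx = list(run)
            kept.append(vertices[idx[len(idx) // 2]])     # middle vertex stands in for the run
        kept.append(vertices[-1])                         # always keep the last vertex
        return kept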
Useful Metrics. Many statistics were collected, derived, analyzed and charted, some of which are reproduced in appendix C. Most of the derived ones are ratios that relate the amount of detail to map scale or resolution. For example, the SEGMM statistic describes the average size of line segments resulting from each generalization run at “Standard Scale” (see section 5.2). This value can in turn be used to derive a map scale (and associated QTM detail level) at which SEGMM would take on a constant, “optimal” value, such as one millimeter. These scale values are listed in the tables in C1 under the heading Scale for seg(mm)=1. Discrepancies between this “ideal” scale and the target scale quantified what we observed above: that our parameter settings — if not the methods themselves — were resulting in too much line detail. The ideal scales could be three to ten times larger than the QTM Standard Scales, which are based purely on mesh resolution, and double at each level.
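The "Scale for seg(mm)=1" figures can be reproduced, at least approximately, from the mean ground segment length alone; the exact formula used for appendix C is not restated here, so the following is an assumption about it rather than a transcription. With SEGKM in kilometres and a map at scale 1:D, the plotted segment length is SEGKM x 10^6 / D millimetres, so the denominator at which segments average 1 mm is simply SEGKM x 10^6.

    def scale_for_target_segmm(seg_km, target_mm=1.0):
        """Scale denominator D at which a mean ground segment of seg_km
        kilometres plots at target_mm millimetres (1 km = 1e6 mm)."""
        return seg_km * 1e6 / target_mm

    # a 0.5 km mean ground segment reaches 1 mm only at about 1:500,000
    print(scale_for_target_segmm(0.5))   # 500000.0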
Figure 5.6, which plots average segment lengths as measured on the ground (SEGKM) and on standard-scale maps (SEGMM), shows how variations in methods yield quite different results according to these measures. In the four charts, striped lines depict segment sizes on the ground (in km) and solid lines are the same data, transformed to what they would measure (in mm) on maps made at the standard QTM scales (i.e., doubling at each level). Note the cross-tabulated legend: each column presents one of the datasets, using two scales of measurement. Within a row, the units of measure are constant, but different mesh elements are compared.

Figure 5.6: Mean segment size on ground and in map units at standard scales for 8 or 9 levels of detail, by generalization mesh type and mode
The erratic behavior toward the far right of the lower series is due to retaining a very small number of points (fewer than 6), at which point the caricatures are no longer useful. But before such a catastrophic scale is reached (above level 12 or 13, around 1:3M), we note a steady rise in ground segment size and slight declines or stability in the corresponding map segment sizes. As stated earlier, the most successful generalizations seem to yield an average map segment size between 0.5 and 1 mm. The hierarchical mode simplifications tend to produce segments in that range, while non-hierarchical ones produce somewhat less constant lengths that tend to be a bit below the optimum range.
Sense from Sinuosity. We found that computing and manipulating the distribution of vertex sinuosities can have major and rather interesting effects, and feel more attention should be given to this area. Some suggestions for further research are given in section 5.3.1. The measure of sinuosity we have developed (described in figure 4.8) is certainly not the only one that could be used, but it seems to behave the way it should in characterizing lines and selecting points. How such information is managed and used may be more important than how it comes to be computed; as figure 3.6 illustrates, the QTM encoding scheme makes it possible to build such attribute data into each and every vertex in a dataset at no additional storage cost (in our character-based implementation, however, it costs one byte per vertex). When vertices are qualified in this or a similar manner (and more than one method can be used in concert), decisions about selecting them become easier to make. These decisions could even be made contextually, based on the purpose of the map, the feature class being simplified and the density of features and vertices in the vicinity (using attractor occupancy information). And segmenting features according to sinuosity, similar to the approach reported by Plazanet et al (1995), can allow generalization operators to be chosen more appropriately and their parameters to be more finely tuned.
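Since figure 4.8 is not reproduced here, the sketch below uses a generic local sinuosity measure, the ratio of path length to chord length across a small window of vertices, rather than the specific measure defined in that figure; the window size and the quantization to one byte are likewise assumptions made only for illustration of how a per-vertex attribute of this kind could be computed and stored.

    from math import hypot

    def local_sinuosity(vertices, i, window=2):
        """Ratio of the along-line path length to the chord length across a
        window of +/- `window` vertices centred on vertex i. Values near 1
        mark locally straight stretches; larger values mark sinuous ones."""
        lo, hi = max(0, i - window), min(len(vertices) - 1, i + window)
        path = sum(hypot(vertices[k + 1][0] - vertices[k][0],
                         vertices[k + 1][1] - vertices[k][1])
                   for k in range(lo, hi))
        chord = hypot(vertices[hi][0] - vertices[lo][0],
                      vertices[hi][1] - vertices[lo][1])
        return path / chord if chord > 0 else float('inf')

    def sinuosity_byte(value, max_value=4.0):
        """Quantize a sinuosity value into one byte (0-255), the per-vertex
        storage budget mentioned above; values at or above max_value
        saturate at 255."""
        if value >= max_value:
            return 255
        return max(0, int(255 * (value - 1.0) / (max_value - 1.0)))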

The “Other” Algorithm. When we compare our methods to the widely-used Ramer-Douglas-Peucker algorithm we find things to like and dislike about each. Results can favor either method or neither. Looking at figure 5.7, displaying data from Switzerland, it is difficult to decide which method works better for a relatively convoluted feature such as Schaffhausen Canton (many other Swiss cantons have a similar character).

Figure 5.7: Schaffhausen Canton, Switzerland: Comparison of QTM and RDP generalizations


Other comparisons of these methods are shown in figures D1.9 and D2.9, for the two U.S. test areas. In none of these figures was sinuosity selection used for the QTM-based versions, which could well have improved those results. Additional differences exist between the QTM and RDP algorithms besides the quality of their results. Before closing this discussion, it may help to compare these characteristics in order to summarize the differences between our approach and that commonly used one. This is provided in table 5.2, and further elucidated below.





                         QTM             RDP
Operational Scope        local           global
Hierarchy                Y/N             Y/N
Parameters               6-8             1
Uniformity               higher          lower
Useful Scale Range       ca. 1:32        ca. 1:8
Point Attributes         used            not used
Span of Control          large           small
Comprehensibility        complex         simple

Table 5.2: Characteristics of Simplification Algorithms Compared
Operational Scope. While QTM is a global coordinate system, as applied to map generalization its operators work locally, usually looking only at the contents of one mesh element at a time. RDP is a global operator, in the sense that it treats each given feature as a unit of analysis. Its results can be biased by the choice of these units (it is sensitive to where arcs and polygons begin and end), just as QTM-based methods are sensitive to the placement of features with respect to mesh elements.
Hierarchy. Both approaches can be implemented hierarchically, but this is not a defining characteristic of either one. By modifying their logic, one can ensure that vertices will be selected in a top-down way, such that points eliminated at small tolerance levels will not suddenly reappear at coarser levels of detail. By working this way, features can be pre-processed for rapid display at a later time. Barber et al (1995) studied the effects of hierarchical elimination in great detail, and concluded that using it caused few perceptible or statistical differences in comparison to non-hierarchical algorithms.
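One way to realize that top-down nesting, offered as an assumption about how it could be arranged rather than a description of the implementation used here, is to record for each vertex the coarsest level at which it is ever selected and then filter on that attribute, so that the vertex set retained at any level contains the sets retained at all coarser levels.

    def hierarchical_filter(vertices, retention_levels, level):
        """Keep every vertex whose recorded retention level (the coarsest
        QTM level at which it is selected) is at or below `level`. A point
        absent at one level of detail then stays absent at every coarser
        level, and each level's selection nests inside the next finer one."""
        return [v for v, lev in zip(vertices, retention_levels) if lev <= level]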
Parameters. Section 4.4.3 outlined eight parameters or modes that affect QTM generalization, and most of these have interactions of some sort. RDP has but one parameter, which, like most of QTM's, is not very intuitive. Setting a bandwidth tolerance properly can be frustrating and take time, whereas specifying a QTM level of detail is more easily grasped (although other parameters are less obvious).
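For readers unfamiliar with it, a compact rendering of RDP follows, with its single bandwidth tolerance expressed in the same units as the coordinates, which is part of why setting it well is unintuitive. This is the textbook recursive form, not any particular implementation used in the comparisons above, and closed rings, whose start point needs special treatment, are ignored for brevity.

    def rdp(points, tolerance):
        """Ramer-Douglas-Peucker: if the interior point farthest from the
        chord between the endpoints lies within `tolerance`, keep only the
        endpoints; otherwise split there and recurse on both halves."""
        if len(points) < 3:
            return list(points)
        (x1, y1), (x2, y2) = points[0], points[-1]
        dx, dy = x2 - x1, y2 - y1
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        # perpendicular distance of each interior point from the chord
        dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm
                 for x, y in points[1:-1]]
        imax = max(range(len(dists)), key=dists.__getitem__)
        if dists[imax] <= tolerance:
            return [points[0], points[-1]]
        split = imax + 1                      # index of the farthest point
        return rdp(points[:split + 1], tolerance)[:-1] + rdp(points[split:], tolerance)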
Uniformity. Despite not being a global procedure — or perhaps because of it — QTM simplification tends to yield rather uniform results along primitives, evening-out vertex spacing in the process. When attractors are used as mesh elements, two types of interlaced uniformity result (triangular and hexagonal cells), but their effects blend together to become unnoticeable. RDP does not respect or coerce uniformity, and can cause point density to vary erratically along features. Such non-uniformity is unpredictable, depending on the choice of anchor points, tolerance levels and local geometry. The most common results are to create featureless spikes and long, straight lines, where intermediate details get swallowed by the band imputed to lie between two anchor points.
Useful Scale Range. Problems can occur when RDP is used to greatly change the scale of representation of map displays. Often these take the form of self-crossing features in areas of complex geometry. Topological post-processing normally is invoked to eliminate such overlaps. QTM simplification is not immune from these artifacts, but they are relatively rare, especially when points of low sinuosity are selected. This indicates that “characteristic points” such as RDP identifies may not always be the best choice. This idea is amplified below.
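A post-process of the kind alluded to above can be as simple as flagging pairs of non-adjacent segments that properly cross; the brute-force check sketched here illustrates that idea, is not the topological processing any particular system applies, and ignores degenerate collinear contacts.

    def _ccw(a, b, c):
        """Signed area of triangle abc; the sign tells which side c lies on."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def self_crossings(vertices):
        """Return index pairs (i, j) of non-adjacent polyline segments that
        properly cross one another; O(n^2), but adequate for flagging
        simplification artifacts feature by feature."""
        segs = list(zip(vertices[:-1], vertices[1:]))
        hits = []
        for i in range(len(segs)):
            for j in range(i + 2, len(segs)):
                (p1, p2), (p3, p4) = segs[i], segs[j]
                d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
                d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
                if d1 * d2 < 0 and d3 * d4 < 0:
                    hits.append((i, j))
        return hits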
Point Attributes. Because it is possible to coerce the weedQids algorithm to select certain vertices in preference to others, users potentially have some control over results. RDP could be modified to work in a similar way (for example, by declaring certain vertices to be “sacred”), but this would essentially create a new algorithm. Also, QTM’s built-in vertex attributes (metadata) need not be limited to measures of local sinuosity, as is currently the case.
Span of Control. By availing themselves of multiple parameters, QTM-based methods are inherently more flexible than RDP, or any algorithm with a single parameter. If special generalization situations can be identified, better results can probably be obtained by tinkering with one parameter or another. We are only now learning how to do this. The experiments with segmenting features according to sinuosity are a start in this direction.
Comprehensibility. One usually pays for flexibility of control, and the price is often confusion and the potential for making mistakes. This is the dark side of span of control, its evil twin. It will take some time before QTM generalization is well enough studied to understand its behavior. If it is going to succeed, some degree of automated parameter setting will be needed to routinize its application, so that GIS users can call on it without being constantly confused. While they need not necessarily understand how QTM-based algorithms work in all their detail, they will need more help than their software or data normally provides them.
Point Placement Possibilities. There are additional ways in which QTM generalization results can be subtly modified. Figure 5.8, using the Schaffhausen boundaries, shows some of these other possibilities; it varies two QTM parameters: QTM retrieval depth (by now familiar to readers) and one not usually altered, QTM decoding depth (discussed briefly in section 4.4.2). The latter determines the precision with which QTM IDs are translated back to latitude and longitude. Normally the full precision of vertices (their encoding depth) is used, but in certain circumstances it may help to coarsen the locational precision of features, particularly for choropleth thematic maps.

Figure 5.8: Effects of Varying QTM Retrieval Resolution and Point Decoding Resolution


At extremely low levels of decoding precision (as shown in the lower right of figure 5.8) spikes can appear in odd places. These result from aliasing the base vertices of small features such as peninsulas which reduce to triangles. In such situations one might count on the lookahead parameter to delete the spikes, but only if it is set high enough to traverse all the vertices from one base point to another lying in the same mesh element. A better way to handle this might be to post-process filtered vertices to identify spikes at retrieval resolution rather than at source resolution. If these minor artifacts can be dealt with by this or other means, the decoding depth parameter can probably be made more useful.
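One way the suggested post-processing could work, offered as a sketch applied to the already-filtered vertex list rather than as the method intended above, is to drop any vertex at which the line doubles back on itself, that is, where the angle between the incoming and outgoing segments falls below a small threshold; the 5-degree default is an arbitrary illustrative value.

    from math import hypot, acos, degrees

    def remove_spikes(vertices, min_angle_deg=5.0):
        """Drop interior vertices where the line turns back on itself: a
        near-zero angle between the vectors running from a vertex to its
        two neighbours is the signature of a spike."""
        out = [vertices[0]]
        for prev, cur, nxt in zip(vertices, vertices[1:], vertices[2:]):
            ax, ay = prev[0] - cur[0], prev[1] - cur[1]
            bx, by = nxt[0] - cur[0], nxt[1] - cur[1]
            na, nb = hypot(ax, ay), hypot(bx, by)
            if na == 0 or nb == 0:
                continue                     # also drops duplicate points
            cosang = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
            if degrees(acos(cosang)) >= min_angle_deg:
                out.append(cur)              # ordinary vertex: keep it
        out.append(vertices[-1])
        return out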

