Digital image warping




sampling includes point sampling, as well as the supersampling and adaptive sampling

techniques described below.

6.2.1. Supersampling

The process of using more than one regularly-spaced sample per pixel is known as

supersampling. Each output pixel value is evaluated by computing a weighted average

of the samples taken from their respective preimages. For example, if the supersampling

grid is three times denser than the output grid (i.e., there are nine grid points per pixel

area), each output pixel will be an average of the nine samples taken from its projection

in the input image. If, say, three samples hit a green object and the remaining six

samples hit a blue object, the composite color in the output pixel will be one-third green

and two-thirds blue, assuming a box filter is used.

Supersampling reduces aliasing by bandlimiting the input signal. The purpose of

the high-resolution supersampling grid is to refine the estimate of the preimages seen by

the output pixels. The samples then enter the prefiltering stage, consisting of a low-pass

filter. This permits the input to be resampled onto the (relatively) low-resolution output

grid without any offending high frequencies introducing aliasing artifacts. In Fig. 6.6 we see an output pixel subdivided into nine subpixel samples, each of which undergoes inverse mapping, sampling the input at nine positions. Those nine values then pass through a low-pass filter and are averaged into a single output value.


Figure 6.6: Supersampling.
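To make the procedure concrete, the following minimal C sketch evaluates one output pixel on a 3x3 supersampling grid with a box filter. The routines inverse_map and sample_input are hypothetical placeholders for the warp's inverse mapping and the input-image reconstruction step; they are assumptions for illustration, not code from the text.

double supersample_pixel(int u, int v,
                         void (*inverse_map)(double, double, double *, double *),
                         double (*sample_input)(double, double))
{
    double sum = 0.0;
    int i, j;

    for (j = 0; j < 3; j++) {                 /* 3x3 regular subpixel grid */
        for (i = 0; i < 3; i++) {
            double su = u + (i + 0.5) / 3.0;  /* subpixel center in output space */
            double sv = v + (j + 0.5) / 3.0;
            double x, y;

            inverse_map(su, sv, &x, &y);      /* project into the input image */
            sum += sample_input(x, y);        /* sample the preimage */
        }
    }
    return sum / 9.0;                         /* box filter: unweighted average */
}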

The impact of supersampling is easily demonstrated in the following example of a

checkerboard projected onto an oblique plane. Figure 6.7 shows four different sampling

rates used to perform an inverse mapping. In Fig. 6.7a, only one checkerboard sample

per output pixel is used. This contributes to the jagged edges at the bottom of the image

and to the moire patterns at the top. They directly correspond to poor reconstruction and

antialiasing, respectively. The results are progressively refined as more samples are used

to compute each output pixel.

There are two problems associated with straightforward supersampling. The first

problem is that the newly designated highest frequency of the prefiltered image remains fixed. Therefore, there will always be sufficiently higher frequencies that will alias.

The second problem is cost. In our example, supersampling will take nine times longer

than point sampling. Although there is a clear need for the additional computation, the

dense placement of samples can be optimized. Adaptive supersampling is introduced to

address these drawbacks.

6.2.2. Adaptive Supersampling

In adaptive supersampling, the samples are distributed more densely in areas of

high intensity variance. In this manner, supersamples are collected only in regions that

warrant their use. Early work in adaptive supersampling for computer graphics is

described in [Whitted 80]. The strategy is to subdivide areas between previous samples





Figure 6.7: Supersampling an oblique checkerboard: (a) 1; (b) 4; (c) 16; and (d) 256

samples per output pixel. Images have been enlarged with pixel replication.


when an edge, or some other high frequency pattern, is present. Two approaches to

adaptive supersampling have been described in the literature. The first approach allows

sampling density to vary as a function of local image variance [Lee 85, Kajiya 86]. A

second approach introduces two levels of sampling densities: a regular pattern for most

areas and a higher-density pattern for regions demonstrating high frequencies. The regu-

lar pattern simply consists of one sample per output pixel. The high density pattern

involves local supersampling at a rate of 4 to 16 samples per pixel. Typically, these rates

are adequate for suppressing aliasing artifacts.

A strategy is required to determine where supersampling is necessary. In [Mitchell

87], the author describes a method in which the image is divided into small square super-

sampling cells, each containing eight or nine of the low-density samples. The entire cell

is supersampled if its samples exhibit excessive variation. In [Lee 85], the variance of

the samples is used to indicate high frequency. It is well-known, however, that variance

is a poor measure of visual perception of local variation. Another alternative is to use

contrast, which more closely models the nonlinear response of the human eye to rapid

fluctuations in light intensities [Caelli 81]. Contrast is given as

C = (Imax - Imin) / (Imax + Imin)    (6.2.1)

where Imax and Imin are the maximum and minimum intensities of the samples under consideration.
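Equation (6.2.1) is the familiar Michelson contrast measure. A minimal C sketch, assuming the samples of a supersampling cell have been gathered into an array:

/* Michelson contrast of a set of samples, as in Eq. (6.2.1).
 * Assumes nonnegative intensities; returns 0 for a flat region.
 */
double contrast(const double *s, int n)
{
    double imax = s[0], imin = s[0];
    int k;

    for (k = 1; k < n; k++) {
        if (s[k] > imax) imax = s[k];
        if (s[k] < imin) imin = s[k];
    }
    return (imax + imin > 0.0) ? (imax - imin) / (imax + imin) : 0.0;
}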

Adaptive sampling reduces the number of samples required for a given image qual-

ity. The problem with this technique, however, is that the variance measurement is itself

based on point samples, and so this method can fail as well. This is particularly true for

sub-pixel objects that do not cross pixel boundaries. Nevertheless, adaptive sampling

presents a far more reliable and cost-effective alternative to supersampling.

An example of the effectiveness of adaptive supersampling is shown in Fig. 6.8.

The image, depicting a bowl on a wooden table, is a computer-generated picture that

made use of bilinear interpolation for reconstruction and box filtering for antialiasing.
Higher sampling rates were chosen in regions of high variance. For each output pixel,
the following operations were taken. First, the four pixel corners were projected into the
input. The average of these point samples was computed. If any of the corner values dif-

fered from that average by more than some user-specified threshold, then the output pixel

was subdivided into four subpixels. The process repeats until the four corners satisfy the

uniformity condition. Each output pixel is the average of all the computed input values

that map onto it.
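The corner-subdivision test just described might be sketched as follows. Here sample_at is a hypothetical routine that inverse-maps an output-space point and reconstructs the input there (e.g., by bilinear interpolation); THRESHOLD and MAX_DEPTH are assumed parameters not given in the text, and the result weights each quadrant equally rather than averaging every computed sample, a small simplification.

#include <math.h>

#define THRESHOLD 0.05    /* user-specified uniformity tolerance (assumed) */
#define MAX_DEPTH 4       /* recursion cap, an added assumption */

extern double sample_at(double u, double v);   /* hypothetical: inverse map + reconstruct */

static double adaptive_area(double u0, double v0, double u1, double v1, int depth)
{
    double c[4], avg;
    double um = 0.5 * (u0 + u1), vm = 0.5 * (v0 + v1);
    int k, uniform = 1;

    /* project the four corners of this (sub)pixel into the input */
    c[0] = sample_at(u0, v0);  c[1] = sample_at(u1, v0);
    c[2] = sample_at(u0, v1);  c[3] = sample_at(u1, v1);
    avg = 0.25 * (c[0] + c[1] + c[2] + c[3]);

    for (k = 0; k < 4; k++)
        if (fabs(c[k] - avg) > THRESHOLD) uniform = 0;

    if (uniform || depth >= MAX_DEPTH)
        return avg;

    /* otherwise subdivide into four subpixels and average their results */
    return 0.25 * (adaptive_area(u0, v0, um, vm, depth + 1) +
                   adaptive_area(um, v0, u1, vm, depth + 1) +
                   adaptive_area(u0, vm, um, v1, depth + 1) +
                   adaptive_area(um, vm, u1, v1, depth + 1));
}

double adaptive_pixel(int u, int v)
{
    return adaptive_area(u, v, u + 1.0, v + 1.0, 0);
}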

6.2.3. Reconstruction from Regular Samples

Each output pixel is evaluated as a linear combination of the preimage samples. The
low-pass filters shown in Figs. 6.4 and 6.6 are actually reconstruction filters used to inter-
polate the output point. They share the identical function of the reconstruction filters dis-
cussed in Chapter 5: they bandlimit the sampled signal (suppress the replicated spectra)

so that the resampling process does not itself introduce aliasing. The careful reader will

notice that reconstruction serves two roles:


Figure 6.8: A ray-traced image using adaptive supersampling.

1) Reconstruction filters interpolate the input samples to compute values at nonintegral

positions. These values are the preimage samples that are assigned to the supersam-

pling grid.

2) The very same filters are used to interpolate a new value from the dense set of sam-

ples collected in step (1). The result is applied to the output pixel.

When reconstruction filters are applied to interpolate new values from regularly-

spaced samples, errors may appear as observable derivative discontinuities across pixel

boundaries. In antialiasing, reconstruction errors are more subtle. Consider an object of

constant intensity which is entirely embedded in pixel p, i.e., a sub-pixel sized object.

We will assume that the popular triangle filter is used as the reconstruction kernel. As

the object moves away from the center of p, the computed intensity for p decreases as it

moves towards the edge. Upon crossing the pixel boundary, the object begins to


contribute to the adjacent pixel, no longer having an influence on p. If this motion were

animated, the object would appear to flicker as it crossed the image. This artifact is due

to the limited range of the filter. This suggests that a wider filter is required, in order to

reflect the object's contribution to neighboring pixels.

One ad hoc solution is to use a square pyramid with a base width of 2x2 pixels.

This approach was used in [Blinn 76], an early paper on texture mapping. In general, by

varying the width of the filter, a compromise is reached between passband transmission

and stopband attenuation. This underscores the need for high-quality reconstruction

filters to prevent aliasing in image resampling.

Despite the apparent benefits of supersampling and adaptive sampling, all regular

sampling methods share a common problem: information is discarded in a coherent way.

This produces coherent aliasing artifacts that are easily perceived. Since spatially corre-

lated errors are a consequence of the regularity of the sampling grid, the use of irregular

sampling grids has been proposed to address this problem.

6.3. IRREGULAR SAMPLING

Irregular sampling is the process of using an irregular sampling grid with which to

sample the input image. This process is also referred to as nonuniform sampling and sto-

chastic sampling. As before, the term nonuniform sampling is a slight misnomer since

irregular sampling can be used to produce a uniform distribution of samples. The name

stochastic sampling is more appropriate since it denotes the fact that the irregularly-

spaced locations are determined probabilistically via a Monte Carlo technique.

The motivation for irregular sampling is that coherent aliasing artifacts can be ren-

dered incoherent, and thus less conspicuous. By collecting irregularly-spaced samples,

the energies of the offending high frequencies are made to appear as featureless noise of

the correct average intensity, an artifact that is much less objectionable than aliasing.

This claim is supported by evidence from work in color television encoding [Limb 77],

image noise measurement [Sakrison 77], dithering [Limb 69, Ulichney 87], and the dis-

tribution of retinal cells in the human eye [Yellott 83].

6.3.1. Stochastic Sampling

Although the mathematical properties of stochastic sampling have received a great

deal of attention, this technique has only recently been advocated as a new approach to

antialiasing for images. In particular, it has played an increasing role in ray tracing

where the rays (point samples) are now stochastically distributed to perform a Monte

Carlo evaluation of integrals in the rendering equation. This is called distributed ray

tracing and has been used with great success in computer graphics to simulate motion

blur, depth of field, penumbrae, gloss, and translucency [Cook 84, 86].

There are three common forms of stochastic sampling discussed in the literature:

Poisson sampling, jittered sampling, and point-diffusion sampling.

6.3.2. Poisson Sampling

Poisson sampling uses an irregular sampling grid that is stochastically generated to

yield a uniform distribution of sample points. This approximation to uniform sampling

can be improved with the addition of a minimum-distance constraint between sample

points. The result, known as the Poisson-disk distribution, has been suggested as the

optimal sampling pattern to mask aliasing artifacts. This is motivated by evidence that

the Poisson-disk distribution is found among the sparse retinal cells outside the foveal

region of the eye. It has been suggested that this spatial organization serves to scatter

aliasing into high-frequency random noise [Yellott 83].

Figure 6.9: Poisson-disk sampling: (a) Point samples; (b) Fourier transform.

A Poisson-disk sampling pattern and its Fourier transform are shown in Fig. 6.9.

Theoretical arguments can be given in favor of this sampling pattern, in terms of its spec-

tral characteristics. An ideal sampling pattern, it is argued, should have a broad noisy

spectrum with minimal low-frequency energy. A perfectly random pattern such as white

noise is an example of such a signal where all frequency components have equal magni-

tude. This is equivalent to the "snow", or random dot patterns, that appear on a televi-

sion with poor reception. Such a pattern exhibits no coherence which can give rise to
structured aliasing artifacts.

Dot-patterns with low-frequency noise often give rise to clusters of dots that

coalesce to form clumps. Such granular appearances are undesirable for a uniformly dis-

tributed sampling pattern. Consequently, low-frequency attenuation is imposed to con-

centrate the noise energy into the higher frequencies which are not readily perceived.

These properties have direct analogs in the Poisson and Poisson-disk distributions, respec-

tively. That is, white noise approximates the Poisson distribution while the low-

frequency attenuation approximates the minimal-distance constraint necessary for the

Poisson-disk distribution.


Distributions which satisfy these conditions are known as blue noise. Similar con-

straints have been applied towards improving the quality of dithered images. These dis-

tinct applications share the same problem of masking undesirable artifacts under the

guise of less objectionable noise. The solution offered by Poisson-disk sampling is

appealing in that it accounts for the response of the human visual system in establishing

the optimal sampling pattern.

Poisson-disk sampling patterns are difficult to generate. One possible implementa-

tion requires a large lookup table to store random sample locations. As each new random

sample location is generated, it is tested against all locations already chosen to be on the

sampling pattern. The point is added onto the pattern unless it is found to be closer than

a certain distance to any previously chosen point. This cycle iterates until the sampling

region is full. The pattern can then be replicated to fill the image provided that care is

taken to prevent regularities from appearing at the boundaries of the copies.
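A direct, if naive, rendering of this lookup-table scheme (often called dart throwing) is sketched below. The minimum distance and the attempt limit are assumed parameters, and the quadratic search over all previously accepted points is precisely what makes the method costly.

#include <stdlib.h>

#define MIN_DIST   0.05          /* minimum-distance constraint (assumed) */
#define MAX_TRIES  100000        /* give up after this many candidates (assumed) */

typedef struct { double x, y; } Point;

static double frand(void) { return (double)rand() / RAND_MAX; }

int poisson_disk(Point *pts, int max_pts)
{
    int n = 0, tries = 0;

    while (n < max_pts && tries < MAX_TRIES) {
        Point p = { frand(), frand() };   /* candidate in the unit square */
        int k, ok = 1;

        for (k = 0; k < n; k++) {         /* test against all accepted points */
            double dx = p.x - pts[k].x, dy = p.y - pts[k].y;
            if (dx * dx + dy * dy < MIN_DIST * MIN_DIST) { ok = 0; break; }
        }
        if (ok) pts[n++] = p;             /* accept only if far enough away */
        tries++;
    }
    return n;                             /* number of samples generated */
}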

In practice, this costly algorithm is approximated by cheaper alternatives. Two such

methods are jittered sampling and point-diffusion sampling.

6.3.3. Jittered Sampling

Jittered sampling is a class of stochastic sampling introduced to approximate a

Poisson-disk distribution. A jittered sampling pattern is created by randomly perturbing

each sample point on a regular sampling pattern. The result, shown in Fig. 6.10, is infe-

rior to that of the optimal Poisson-disk distribution. This is evident in the granularity of

the distribution and the increased low-frequency energy found in the spectrum. How-
ever, since the magnitude of the noise is inversely proportional to the sampling rate,

improved image quality is achieved with increased sample density.

Figure 6.10: Jittered sampling: (a) Point samples; (b) Fourier transform.
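A jittered pattern is simple to generate; the sketch below perturbs one sample per cell of a regular grid by a uniform random offset within that cell. The grid dimensions and the use of rand() are illustrative assumptions.

#include <stdlib.h>

typedef struct { double x, y; } Sample;

static double frand01(void) { return (double)rand() / RAND_MAX; }

/* One jittered sample per grid cell: regular positions plus random offsets. */
void jitter_grid(Sample *s, int nx, int ny)
{
    int i, j;

    for (j = 0; j < ny; j++)
        for (i = 0; i < nx; i++) {
            s[j * nx + i].x = i + frand01();   /* perturb within the cell */
            s[j * nx + i].y = j + frand01();
        }
}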


6.3.4. Point-Diffusion Sampling

The point-diffusion sampling algorithm has been proposed by Mitchell as a compu-

tationally efficient technique to generate Poisson-disk samples [Mitchell 87]. It is based

on the Floyd-Steinberg error-diffusion algorithm used for dithering, i.e., to convert gray-

scale images into bitmaps. The idea is as follows. An intensity value, g, may be con-

verted to black or white by thresholding. However, an error e is made in the process:

e = MIN(white - g, g - black)    (6.3.1)

This error can be used to refine successive thresholding decisions. By spreading, or

diffusing, e within a small neighborhood we can compensate for previous errors and pro-

vide sufficient fluctuation to prevent a regular pattern from appearing at the output.

These fluctuations are known as dithering signals. They are effectively high-

frequency noise that is added to an image in order to mask the false contours that inevit-

ably arise in the subsequent quantization (thresholding) stage. Regularities in the form of

textured patterns, for example, are typical in ordered dithering where the dither signal is

predefined and replicated across the image. In contrast, the Floyd-Steinberg algorithm is

an adaptive thresholding scheme in which the dither signal is generated on-the-fly based

on errors collected from previous thresholding decisions.
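The underlying dithering algorithm might be sketched as follows. Equation (6.3.1) gives the magnitude of the thresholding error; the sketch diffuses the signed error (old value minus thresholded value), which is the usual formulation, using the standard Floyd-Steinberg coefficients 7/16, 3/16, 5/16, and 1/16. Mitchell's point-diffusion variant biases these coefficients, as noted below; that biasing is not shown here.

/* Floyd-Steinberg error diffusion for a grayscale image in [0, 255]. */
void floyd_steinberg(float *img, int w, int h)
{
    int x, y;

    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            float old = img[y * w + x];
            float bw  = (old < 128.0f) ? 0.0f : 255.0f;   /* threshold to black/white */
            float err = old - bw;                         /* signed quantization error */

            img[y * w + x] = bw;

            /* diffuse the error to the unprocessed neighbors */
            if (x + 1 < w) img[y * w + x + 1]           += err * 7.0f / 16.0f;
            if (y + 1 < h) {
                if (x > 0)     img[(y + 1) * w + x - 1] += err * 3.0f / 16.0f;
                               img[(y + 1) * w + x]     += err * 5.0f / 16.0f;
                if (x + 1 < w) img[(y + 1) * w + x + 1] += err * 1.0f / 16.0f;
            }
        }
    }
}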

The success of this method is due to the pleasant distribution of points it generates

to simulate gray scale. It finds use in stochastic sampling because the distribution

satisfies the blue-noise criteria. In this application, the point samples are selected from a

supersampling grid that is four times denser than the display grid (i.e., there are 16 grid

points per pixel area). The diffusion coefficients are biased to ensure that an average of

about one out of 16 grid points will be selected as sample points. The pattern and its

Fourier transform are shown in Fig. 6.11.


Figure 6.11: Point-diffusion sampling: (a) Point samples; (b) Fourier transform.




The Floyd-Steinberg algorithm was first introduced in [Floyd 75] and has been

described in various sources [Jarvis 76, Stoffel 81, Foley 90], including a recent disserta-

tion analyzing the role of blue-noise in dithering [Ulichney 87].

6.3.5. Adaptive Stochastic Sampling

Supersampling and adaptive sampling, introduced earlier as regular sampling

methods, can be applied to irregular sampling as well. In general, irregular sampling

requires rather high sampling densities and thus adaptive sampling plays a natural role in

this process. It serves to dramatically reduce the noise level while avoiding needless

computation.

As before, an initial set of samples is collected in the neighborhood about each

pixel. If the sample values are very similar, then a smooth region is implied and a lower

sampling rate is adequate. However, if these samples are very dissimilar, then a rapidly

varying region is indicated and a higher sampling rate is warranted. Suggestions for an

error estimator, error bound, and initial sampling rate can be found in [Dippe 85a, 85b].

6.3.6. Reconstruction from Irregular Samples

Once the irregularly-spaced samples are collected, they must pass through a recon-
struction filter to be resampled at the display resolution. Reconstruction is made difficult

by the irregular distribution of the samples. One common approach is to use weighted-

average filters:

f(x) = [ Σk h(x - xk) f(xk) ] / [ Σk h(x - xk) ]    (6.3.2)

The value of each pixel f(x) is the sum of the values of the nearby sample points f(xk)
multiplied by their respective filter values h(x - xk). This total is then normalized by

dividing by the sum of the filter values. This technique, however, can be shown to fail

upon extreme variation in sampling density.
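Equation (6.3.2) translates almost directly into code. The sketch below is one-dimensional for brevity; kernel stands for any reconstruction filter h, and the zero-denominator guard marks exactly the failure mode noted above, where no sample falls within the filter's support.

/* Normalized weighted-average reconstruction of Eq. (6.3.2). */
double reconstruct(double x,
                   const double *xk, const double *fk, int n,
                   double (*kernel)(double))
{
    double num = 0.0, den = 0.0;
    int k;

    for (k = 0; k < n; k++) {
        double w = kernel(x - xk[k]);   /* filter value at this sample */
        num += w * fk[k];               /* weighted sample value */
        den += w;                       /* running sum of weights */
    }
    return (den != 0.0) ? num / den : 0.0;   /* fails if no weight reaches x */
}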

Mitchell proposes a multi-stage filter [Mitchell 87]. Bandlimiting is achieved

through repeated application of weighted-average filters with ever-narrowing low-pass

cutoff. The strategy is to compute normalized averages over dense clusters of supersam-
ples before combining them with nearby values. Since averaging is done over a dense
grid (16 supersamples per pixel area), a crude box filter is used for efficiency. Ideally,

the sophistication of the applied filters should increase with every iteration, thereby

refining the shape of the bandlimited spectrum.

Various other filtering suggestions are given in [Dippe 85a, 85b], including Wiener

filtering and the use of the raised cosine function. The raised cosine function, often used

in image restoration, is recommended as a reconstruction kernel to reduce Gibbs

phenomenon and guarantee strictly positive results. The filter is given by


h(x) = (1/2) [ cos(π|x| / W) + 1 ],    |x| ≤ W    (6.3.3)

where W is the radius of kernel h, and x is the distance from its center.
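Taking Eq. (6.3.3) in the standard raised-cosine (Hann) form reconstructed above, the kernel might read as follows; the exact normalization is an assumption of that reconstruction.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Raised cosine kernel: nonnegative everywhere, zero outside radius W. */
double raised_cosine(double x, double W)
{
    double ax = fabs(x);
    return (ax <= W) ? 0.5 * (cos(M_PI * ax / W) + 1.0) : 0.0;
}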


6.4. DIRECT CONVOLUTION

Whether regular or irregular sampling is used, direct convolution requires fast

space-variant filtering. Most of the work in antialiasing research has focused on this

problem. These efforts have generally addressed approximations to the convolution integral of

Eq. (6.1.1).

In the general case, a preimage can be of arbitrary shape and the kernel can be an

arbitrary filter. Solutions to this problem have typically achieved performance gains by

adding constraints. For example, most methods approximate a curvilinear preimage by a

quadrilateral. In this manner, techniques discussed in Chapter 3 can be used to locate

points in the preimage. Furthermore, simple kernels are often used for computational

efficiency. Below we summarize several direct convolution techniques. For consistency

with the texture mapping literature from which they are derived, we shall refer to the

input and output coordinate systems as texture space and screen space, respectively.

6.4.1. Catmull, 1974

The earliest work in texture mapping is rooted in Catmull's dissertation on subdivi-

sion algorithms for curved surfaces [Catmull 74]. For every screen pixel, his subdivision

patch renderer computed an unweighted average (i.e., box filter convolution) over the

corresponding quadrilateral preimage. An accumulator array was used to properly

integrate weighted contributions from patch fragments at each pixel.

6.4.2. Blinn and Newell, 1976

Blinn and Newell extended Catmull's results by using a triangle filter. In order to

avoid the artifacts mentioned in Sec. 6.2.3, the filter formed overlapping square pyramids

two pixels wide in screen space. In this manner, the 2x2 region surrounding the given

output pixel is inverse mapped to the corresponding quadrilateral in the input. The input

samples within the quadrilateral are weighted by a pyramid distorted to fit the quadrila-

teral. Note that the pyramid is a 2-D separable realization of the triangle filter. The sum

of the weighted values is then computed and assigned to the output pixel [Blinn 76].
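The separable pyramid can be sketched as a screen-space weighting function: the base is 2x2 pixels, so the weight falls linearly to zero one pixel from the output pixel's center in each direction. Expressing each input sample's position within the distorted quadrilateral as a screen-space offset (dx, dy) is assumed to have been done already; the weighted values are then summed to form the output pixel, as described above.

#include <math.h>

/* Separable 2-D triangle ("pyramid") filter with a 2x2-pixel base,
 * evaluated at screen-space offset (dx, dy) from the output pixel center.
 */
double pyramid_weight(double dx, double dy)
{
    double wx = 1.0 - fabs(dx);
    double wy = 1.0 - fabs(dy);
    return (wx > 0.0 && wy > 0.0) ? wx * wy : 0.0;
}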

6.4.3. Feibush, Levoy, and Cook, 1980

High-quality filtering was advanced in computer graphics by Feibush, Levoy, and

Cook in [Feibush 80]. Their method is summarized as follows. At each output pixel, the

bounding rectangle of the kernel is transformed into texture space where it is mapped

into an arbitrary quadrilateral. All input samples contained within the bounding rectan-

gle of this quadrilateral are then mapped into the output. The extra points selected in this

procedure are eliminated by clipping the transformed input points against the bounding


rectangle of the kernel mask in screen space. A weighted average of the selected

