Digital image warping




discrete input samples. This permits input values to be evaluated at arbitrary positions in

the input, not just those defined at the sample points. While sampling generates an

infinite bandwidth signal from one that is bandlimited, interpolation plays an opposite

role: it reduces the bandwidth of a signal by applying a low-pass filter to the discrete sig-

nal. That is, interpolation reconstructs the signal lost in the sampling process by smooth-

ing the data samples with an interpolation function.

For equally spaced data, interpolation can be expressed as

f(x) = \sum_{k=0}^{K-1} c_k \, h(x - x_k) \qquad (5.3.1)

where h is the interpolation kernel weighted by coefficients c_k and applied to K data sam-

ples, x_k. Equation (5.3.1) formulates interpolation as a convolution operation. In prac-

tice, h is nearly always a symmetric kernel, i.e., h(-x)=h(x). We shall assume this to be

true in the discussion that follows. Furthermore, in all but one case that we will consider,

the c_k coefficients are the data samples themselves.
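As an illustration of Eq. (5.3.1), the following Python sketch evaluates f(x) directly as a weighted sum of neighboring samples. It is a minimal example of ours, not code from the text: the kernel function and its support radius are supplied by the caller, and the data are assumed to lie on the integer grid.

    import numpy as np

    def interpolate(samples, x, kernel, radius):
        # Evaluate Eq. (5.3.1): f(x) = sum_k c_k h(x - x_k), where the
        # coefficients c_k are taken to be the data samples themselves
        # and x_k are the integer sample positions.
        lo = max(0, int(np.floor(x)) - radius + 1)
        hi = min(len(samples) - 1, int(np.floor(x)) + radius)
        return sum(samples[k] * kernel(x - k) for k in range(lo, hi + 1))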

Figure 5.5: Interpolation of a single point.

The computation of one interpolated point is illustrated in Fig. 5.5. The interpolat-

ing function is centered at x, the location of the point to be interpolated. The value of

that point is equal to the sum of the values of the discrete input scaled by the correspond-

ing values of the interpolation kernel. This follows directly from the definition of convo-

lution.


The interpolation function shown in the figure extends over four points. If x is

offset from the nearest point by distance d, where 0 < d < 1, we sample the kernel at


h(-d), h(-1-d), h(1-d), and h(2-d). Since h is symmetric, it is defined only over the

positive interval. Therefore, h(d) and h(1+d) are used in place of h(-d) and h(-1-d),

respectively. Note that if the resampling grid is uniformly spaced, only a fixed number

of points on the interpolation kernel must be evaluated. Large performance gains can be

achieved by precomputing these weights and storing them in lookup tables for fast access

during convolution. This approach will be described in more detail later in this chapter.
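A sketch of this lookup-table strategy follows; the table resolution and layout are our assumptions, not a prescription from the text. For a four-point kernel, each row stores the four weights for one quantized offset d:

    import numpy as np

    def build_weight_table(kernel, levels=256):
        # Row i holds the kernel weights for fractional offset d = i/levels:
        # h(1+d), h(d), h(1-d), h(2-d), which multiply the samples at
        # x_(k-1), x_k, x_(k+1), x_(k+2) respectively.
        table = np.empty((levels, 4))
        for i in range(levels):
            d = i / levels
            table[i] = [kernel(d + 1), kernel(d), kernel(d - 1), kernel(d - 2)]
        return table

During resampling, an output position x is split into k = floor(x) and d = x - k; the row nearest to d is fetched and dotted with the four neighboring samples, so no kernel evaluation occurs in the inner loop.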

Although interpolation has been posed in terms of convolution, it is rarely imple-

mented this way. Instead, it is simpler to directly evaluate the corresponding interpolat-

ing polynomial at the resampling positions. Why then is it necessary to introduce the

interpolation kernel and the convolution process into the discussion? The answer lies in

the ability to compare interpolation algorithms. Whereas evaluation of the interpolation

polynomial is used to implement the interpolation, analysis of the kernel is used to deter-

mine the numerical accuracy of the interpolated function. This provides us with a quanti-

tative measure which facilitates a comparison of various interpolation methods [Schafer

73].


Interpolation kernels are typically evaluated by analyzing their performance in the

passband and stopband. Recall that an ideal reconstruction filter will have unity gain in

the passband and zero gain in the stopband in order to transmit and suppress the signal's

spectrum in these respective frequency ranges. Ideal filters, as well as superior nonideal

filters, generally have wide extent in the spatial domain. For instance, the sinc function

has infinite extent. As a result, they are categorized as infinite impulse response filters

(IIR). It should be noted, however, that sinc functions are not physically realizable IIR

filters. That is, they can only be realized approximately. The physically realizable IIR

filters must necessarily use a finite number of computational elements. Such filters are

also known as recursive filters due to their structure: they always have feedback, where

the output is fed back to the input after passing through some delay element.

An alternative is to use filters with finite support that do not incorporate feedback,

called finite impulse response filters (FIR). In FIR filters, each output value is computed

as the weighted sum of a finite number of neighboring input elements. Note that they are

not functions of past output, as is the case with IIR filters. Although IIR filters can

achieve superior results over FIR filters for a given number of coefficients, they are

difficult to design and implement. Consequently, FIR filters find widespread use in sig-

nal and image processing applications. Commonly used FIR filters include the box, tri-

angle, cubic convolution kernel, cubic B-spline, and windowed sinc functions. They

serve as the interpolating functions, or kernels, described below.


5.4. INTERPOLATION KERNELS

The numerical accuracy and computational cost of interpolation algorithms are

directly tied to the interpolation kernel. As a result, interpolation kernels are the target of

design and analysis in the creation and evaluation of interpolation algorithms. They are

subject to conditions influencing the tradeoff between accuracy and efficiency.

In this section, the analysis is applied to the 1-D case. Interpolation in 2-D will be

shown to be a simple extension of the 1-D results. In addition, the data samples are

assumed to be equally spaced along each dimension. This restriction imposes no serious

problems since images tend to be defined on regular grids. We now review the interpola-

tion schemes in the order of their complexity.

5.4.1. Nearest Neighbor

The simplest interpolation algorithm from a computational standpoint is the nearest

neighbor algorithm, where each interpolated output pixel is assigned the value of the

nearest sample point in the input image. This technique, also known as the point shift

algorithm, is given by the following interpolating polynomial:

f(x) = f(x_k), \qquad \frac{x_{k-1} + x_k}{2} < x \le \frac{x_k + x_{k+1}}{2} \qquad (5.4.1)

It can be achieved by convolving the image with a one-pixel width rectangle in the spa-

tial domain. The interpolation kernel for the nearest neighbor algorithm is defined as

h(x) = \begin{cases} 1 & 0 \le |x| < 0.5 \\ 0 & 0.5 \le |x| \end{cases} \qquad (5.4.2)

Various names are used to denote this simple kernel. They include the box filter,

sample-and-hold function, and Fourier window. The kernel and its Fourier transform are

shown in Fig. 5.6. The reader should note that the figure refers to frequency f in H (f),

not function f.

Figure 5.6: Nearest neighbor: (a) kernel, (b) Fourier transform.


Convolution in the spatial domain with the rectangle function h is equivalent in the

frequency domain to multiplication with a sinc function. Due to the prominent side lobes

and infinite extent, a sinc function makes a poor low-pass filter. Consequently, the

nearest neighbor algorithm has a poor frequency domain response relative to that of the

ideal low-pass filter.

The technique achieves magnification by pixel replication, and minification by

sparse point sampling. For large-scale changes, nearest neighbor interpolation produces

images with a blocky appearance. In addition, shift errors of up to one-half pixel are pos-

sible. These problems make this technique inappropriate when sub-pixel accuracy is

required.
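In code, nearest neighbor interpolation amounts to rounding the resampling position. This small sketch is ours, using the integer-grid assumption made throughout and following the convention of Eq. (5.4.1), which assigns a midpoint to the lower sample:

    import math

    def nearest_neighbor(samples, x):
        # ceil(x - 0.5) picks the nearest sample and resolves a
        # half-pixel tie toward the lower index, as in Eq. (5.4.1).
        k = max(0, min(len(samples) - 1, math.ceil(x - 0.5)))
        return samples[k]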

One notable property of this algorithm is that, except for the shift error, the resam-

pled data exactly reproduce the original data if the resampling grid has the same spacing

as that of the input. This means that the frequency spectra of the original and resampled

images differ only by a pure linear phase shift. In general, the nearest neighbor algo-

rithm permits zero-degree reconstruction and yields exact results only when the sampled

function is piecewise constant.

Nearest neighbor interpolation was first used in remote sensing at a time when the

processing time limitations of general purpose computers prohibited more sophisticated

algorithms. It was found to simplify the entire mapping problem because each output

point is a function of only one input sample. Furthermore, since the majority of prob-

lems involved only slight distortions with a scale factor near one, the results were con-

sidered adequate.

Currently, this method has been superseded by more elaborate interpolation algo-

rithms. Dramatic improvements in digital computers account for this transition.

Nevertheless, the nearest neighbor algorithm continues to find widespread use in one

area: frame buffer hardware zoom functions. By simply diminishing the rate at which to

sample the image and by increasing the cycle period in which the sample is displayed,

pixels are easily replicated on the display monitor. This scheme is known as a sample-

and-hold function. Although it generates images with large blocky patches, the nearest

neighbor algorithm derives its primary use as a means for real-time magnification. For

more sophisticated algorithms, this has only recently become realizable with the use of

special-purpose hardware.

5.4.2. Linear Interpolation

Linear interpolation is a first-degree method that passes a straight line through

every two consecutive points of the input signal. Given an interval (x_0, x_1) and function

values f_0 and f_1 for the endpoints, the interpolating polynomial is

f(x) = a_1 x + a_0 \qquad (5.4.3)

where a_0 and a_1 are determined by solving

\begin{bmatrix} f_0 \\ f_1 \end{bmatrix} = \begin{bmatrix} x_0 & 1 \\ x_1 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_0 \end{bmatrix}

This gives rise to the following interpolating polynomial:

f(x) = f_0 + \frac{x - x_0}{x_1 - x_0}\,(f_1 - f_0) \qquad (5.4.4)

Not surprisingly, we have just derived the equation of a line joining points (x_0, f_0) and

(x_1, f_1). In order to evaluate this method of interpolation, we must examine the fre-

quency response of its interpolation kernel.

In the spatial domain, linear interpolation is equivalent to convolving the sampled

input with the following interpolation kernel.

h(x) = \begin{cases} 1 - |x| & 0 \le |x| < 1 \\ 0 & 1 \le |x| \end{cases} \qquad (5.4.5)

Kernel h is referred to as a triangle filter, tent filter, roof function, Chateau function, or

Bartlett window.
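Evaluating Eq. (5.4.4) on the integer grid gives a particularly simple routine; the sketch below is our illustration, with x_{k+1} - x_k = 1 so the denominator drops out:

    def linear_interpolate(samples, x):
        # Clamp so that both endpoints x_k and x_(k+1) exist.
        k = min(int(x), len(samples) - 2)
        d = x - k                       # fractional offset, 0 <= d <= 1
        # Triangle-kernel weights (1 - d) and d applied to f_k and f_(k+1).
        return (1 - d) * samples[k] + d * samples[k + 1]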

This interpolation kernel corresponds to a reasonably good low-pass filter in the fre-

quency domain. As shown in Fig. 5.7, its response is superior to that of the nearest

neighbor interpolation function. In particular, the side lobes are far less prominent, indi-

cating improved performance in the stopband. Nevertheless, a significant amount of

spurious high-frequency components continue to leak into the passband, contributing to

some aliasing. In addition, the passband is moderately attenuated, resulting in image

smoothing.

Figure 5.7: Linear interpolation: (a) kernel, (b) Fourier transform.

Linear interpolation offers improved image quality over nearest neighbor tech-

niques by accommodating first-degree fits. It is the most widely used interpolation algo-

rithm for reconstruction since it produces reasonably good results at moderate cost.

Often, though, higher fidelity is required and thus more sophisticated algorithms have

been formulated.

Although second-degree interpolating polynomials appear to be the next step in the

progression, it was shown that their filters are space-variant with phase distortion

[Schafer 73]. These problems are shared by all polynomial interpolators of even-degree.


This is attributed to the fact that the number of sampling points on each side of the inter-

polated point always differs by one. As a result, interpolating polynomials of even-degree

are not considered.

5.4.3. Cubic Convolution

Cubic convolution is a third-degree interpolation algorithm originally suggested by

Rifman and McKinnon [Rifman 74] as an efficient approximation to the theoretically

optimum sinc interpolation function. Its interpolation kernel is derived from constraints

imposed on the general cubic spline interpolation formula. The kernel is composed of

piecewise cubic polynomials defined on the unit subintervals (-2,-1), (-1,0), (0,1), and

(1,2). Outside the interval (-2,2), the interpolation kernel is zero.† As a result, each

interpolated point is a weighted sum of four consecutive input points. This has the desir-

able symmetry property of retaining two input points on each side of the interpolating

region. It gives rise to a symmetric, space-invariant interpolation kernel of the form

h(x) = \begin{cases} a_{30}|x|^3 + a_{20}|x|^2 + a_{10}|x| + a_{00} & 0 \le |x| < 1 \\ a_{31}|x|^3 + a_{21}|x|^2 + a_{11}|x| + a_{01} & 1 \le |x| < 2 \\ 0 & 2 \le |x| \end{cases} \qquad (5.4.6)

The values of the coefficients can be determined by applying the following set of con-

straints to the interpolation kernel.

1. h(0) = 1 and h(x) = 0 for |x| = 1 and 2.

2. h must be continuous at |x| = 0, 1, and 2.

3. h must have a continuous first derivative at |x| = 0, 1, and 2.

The first constraint states that when h is centered on an input sample, the interpola-

tion function is independent of neighboring samples. This permits f to actually pass

through the input points. In addition, it establishes that the c_k coefficients in Eq. (5.3.1)

are the data samples themselves. This follows from the observation that at data point x_j,

f(x_j) = \sum_k c_k \, h(x_j - x_k) = \sum_{k=j-2}^{j+1} c_k \, h(x_j - x_k) \qquad (5.4.7)

According to the first constraint listed above, h(x_j - x_k) = 0 unless j = k. Therefore, the

right-hand side of Eq. (5.4.7) reduces to c_j. Since this equals f(x_j), we see that all

coefficients must equal the data samples in the four-point interval.

The first two constraints provide four equations for these coefficients:

† We again assume that our data points are located on the integer grid.


1 = h(0) = a_{00} \qquad (5.4.8a)

0 = h(1^-) = a_{30} + a_{20} + a_{10} + a_{00} \qquad (5.4.8b)

0 = h(1^+) = a_{31} + a_{21} + a_{11} + a_{01} \qquad (5.4.8c)

0 = h(2^-) = 8a_{31} + 4a_{21} + 2a_{11} + a_{01} \qquad (5.4.8d)

Three more equations are obtained from constraint (3):

-a_{10} = h'(0^-) = h'(0^+) = a_{10} \qquad (5.4.8e)

3a_{30} + 2a_{20} + a_{10} = h'(1^-) = h'(1^+) = 3a_{31} + 2a_{21} + a_{11} \qquad (5.4.8f)

12a_{31} + 4a_{21} + a_{11} = h'(2^-) = h'(2^+) = 0 \qquad (5.4.8g)

The constraints given above have resulted in seven equations. However, there are eight

unknown coefficients. This requires another constraint in order to obtain a unique solu-

tion. By allowing a = a_{31} to be a free parameter that may be controlled by the user, the

family of solutions given below may be obtained.

h(x) = \begin{cases} (a+2)|x|^3 - (a+3)|x|^2 + 1 & 0 \le |x| < 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a & 1 \le |x| < 2 \\ 0 & 2 \le |x| \end{cases} \qquad (5.4.9)
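Coded directly, Eq. (5.4.9) becomes a small piecewise function. This sketch is our example, not the book's implementation; the default a = -0.5 anticipates Keys' choice discussed below, and the routine can be passed to the generic interpolate sketch of Section 5.3 with radius 2:

    def cubic_conv_kernel(t, a=-0.5):
        # Piecewise cubic of Eq. (5.4.9); zero outside (-2, 2).
        t = abs(t)
        if t < 1:
            return (a + 2) * t**3 - (a + 3) * t**2 + 1
        if t < 2:
            return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
        return 0.0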


Additional knowledge about the shape of the desired result may be imposed upon

Eq. (5.4.9) to yield bounds on the value of a. The heuristics applied to derive the kernel

are motivated from properties of the ideal reconstruction filter, the sinc function. By

requiring h to be concave upward at |x| = 1, and concave downward at x = 0, we have

h''(0) = -2(a + 3) < 0 \;\Rightarrow\; a > -3 \qquad (5.4.10a)

h''(1) = -4a > 0 \;\Rightarrow\; a < 0 \qquad (5.4.10b)

Bounding a to values between -3 and 0 makes h resemble the sinc function. In

[Rifman 74], the authors use the constraint that a = -1 in order to match the slope of the

sinc function at x = 1. This choice results in some amplification of the frequencies at the

high end of the passband. As stated earlier, such behavior is characteristic of image shar-

pening.

Other choices for a include -.5 and -.75. Keys selected a = -.5 by making the Tay-

lor series approximation of the interpolated function agree in as many terms as possible

with the original signal [Keys 81]. He found that the resulting interpolating polynomial

will exactly reconstruct a second-degree polynomial. Finally, a = -.75 is used to set the

second derivatives of the two cubic polynomials in h equal at x = 1 [Simon 75]. This allows the

second derivative to be continuous at x = 1.

Of the three choices for a, the value -1 is preferable if visually enhanced results are

desired. That is, the image is sharpened, making visual detail perceived more readily.


However, the results are not mathematically precise, where precision is measured by the

order of the Taylor series. To maximize this order, the value a = -.5 is preferable. The

kernel and spectrum of a cubic convolution kernel with a = -.5 are shown in Fig. 5.8.

Figure 5.8: Cubic convolution: (a) kernel (a = -.5), (b) Fourier transform.

In a recent paper [Maeland 88], Maeland showed that at the Nyquist frequency the

spectrum attains a value that is independent of the free parameter a. The value is equal

to (48/\pi^4) f_s, while the value at the zero frequency is H(0) = f_s. This result implies that

adjusting a can alter the cut-off rate between the passband and stopband, but not the

attenuation at the Nyquist frequency. In comparing the effect of varying a, Maeland

points out that cubic convolution with a = 0 is superior to the simple linear interpolation

method when a strictly positive kernel is necessary. The role of a has also been studied

in [Park 83], where a discussion is given on its optimal selection based on the frequency

content of the image.

It is important to note that in the general case cubic convolution can give rise to

values outside the range of the input data. Consequently, when using this method in

image processing it is necessary to properly clip or rescale the results into the appropriate

range for display.
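The guard itself is a one-liner; the 8-bit display range below is an assumption for the sake of the example:

    import numpy as np

    def clamp_for_display(values, lo=0, hi=255):
        # Cubic convolution can overshoot near sharp edges; clip the
        # interpolated values back into the displayable range.
        return np.clip(values, lo, hi)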

5.4.4. Two-Parameter Cubic Filters

In [Mitchell 88], Mitchell and Netravali describe a variation of cubic convolution in

which two parameters are used to describe a family of cubic reconstruction filters.

Through a different set of constraints, the number of free parameters in Eq. (5.4.6) is

reduced from eight to two. The constraints they use are:

1. h(x) = 0 for |x| = 2.

2. h'(x) = 0 for |x| = 0 and 2.

3. h must be continuous at |x| = 1. That is, h(1^-) = h(1^+).

4. h must have a continuous first derivative at |x| = 1. That is, h'(1^-) = h'(1^+).

5. \sum_n h(x - n) = 1.


The first four constraints ensure that the interpolation kernel is flat at |x| = 0 and 2,

and has continuous first derivatives at |x| = 1. They result in five equations for the

unknown coefficients. The last constraint enforces a flat-field response, meaning that if

the digital image has constant pixel values, then the reconstructed image will also have

constant value. This yields the sixth of eight equations needed to solve for the unknown

coefficients in Eq. (5.4.6). That leaves us with the following two-parameter family of

solutions.

h(x) = \frac{1}{6} \begin{cases} (-9b - 6c + 12)|x|^3 + (12b + 6c - 18)|x|^2 + (-2b + 6) & 0 \le |x| < 1 \\ (-b - 6c)|x|^3 + (6b + 30c)|x|^2 + (-12b - 48c)|x| + K & 1 \le |x| < 2 \\ 0 & 2 \le |x| \end{cases} \qquad (5.4.11)

where K = 8b + 24c. Several well-known cubic filters are derivable from Eq. (5.4.11)

through an appropriate choice of values for (b,c). For instance, (0, -a) corresponds to the

cubic convolution kernel in Eq. (5.4.9) and (1,0) is the cubic B-spline given later in Eq.

(5.4.18).
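As a sketch of ours, Eq. (5.4.11) with its 1/6 prefactor can be evaluated directly; the defaults correspond to the (b, c) = (1/3, 1/3) pair recommended below:

    def mitchell_kernel(t, b=1/3, c=1/3):
        # Two-parameter cubic of Eq. (5.4.11); K = 8b + 24c.
        t = abs(t)
        if t < 1:
            return ((-9*b - 6*c + 12) * t**3 + (12*b + 6*c - 18) * t**2
                    + (-2*b + 6)) / 6
        if t < 2:
            return ((-b - 6*c) * t**3 + (6*b + 30*c) * t**2
                    + (-12*b - 48*c) * t + (8*b + 24*c)) / 6
        return 0.0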

The evaluation of these parameters is performed in the spatial domain, using the

visual artifacts described in [Schreiber 85] as the criteria for judging image quality. In

order to better understand the behavior of (b,c), the authors partitioned the parameter

space into regions characterizing different artifacts, including blur, anisotropy, and ring-

ing. As a result, the parameter pair (.33,.33) is found to offer superior image quality.

Another suggestion is (1.5,-.25), corresponding to a band-reject, or notch, filter. This

suppresses the signal energy near the Nyquist frequency that is most responsible for con-

spicuous moiré patterns.

Despite the added flexibility made possible by a second free parameter, the benefits

of the method for reconstruction fidelity are subject to scrutiny. In a recent paper

[Reichenbach 89], the frequency domain analysis developed in [Park 82] was used to

show that the additional parameter beyond that of the one-parameter cubic convolution

does not improve the reconstruction fidelity. That is, the optimal two-parameter convolu-

tion kernel is identical to the optimal kernel for the traditional one-parameter algorithm,

where optimality is taken to mean the minimization of the squared error at low spatial

frequencies. It is then reasonable to ask whether this optimality criterion is useful. If so,

why might images reconstructed with other interpolation kernels be preferred in a subjec-

tive test? Ultimately, any quantity that represents reconstruction error must necessarily

conform to the subjective properties of the human visual system. This suggests that

merging image restoration with reconstruction can yield significant improvements in the

quality of reconstruction filters.

Further improvements in reconstruction are possible when derivative values can be

given along with the signal amplitude. This is possible for synthetic images where this

information may be available. In that case, Eq. (5.3.1) can be rewritten as

f(x) = \sum_{k=0}^{K-1} \left[ f_k \, g(x - x_k) + f'_k \, h(x - x_k) \right] \qquad (5.4.12)



where


g(x) = \frac{\sin^2 \pi x}{\pi^2 x^2} \qquad (5.4.13a)

h(x) = \frac{\sin^2 \pi x}{\pi^2 x} \qquad (5.4.13b)

An approximation to the resulting reconstruction formula can be given by Hermite cubic



interpolation.

g(x) = \begin{cases} 2|x|^3 - 3|x|^2 + 1 & 0 \le |x| < 1 \\ 0 & 1 \le |x| \end{cases} \qquad (5.4.14a)

h(x) = \begin{cases} x|x|^2 - 2x|x| + x & 0 \le |x| < 1 \\ 0 & 1 \le |x| \end{cases} \qquad (5.4.14b)
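On the integer grid, Eqs. (5.4.12) and (5.4.14) combine into a two-sample Hermite step. The sketch below is our own rendering under that assumption, with f_k and f'_k stored in parallel arrays:

    def hermite_interpolate(values, derivs, x):
        k = min(int(x), len(values) - 2)
        t = x - k                        # offset in [0, 1]
        g0 = 2*t**3 - 3*t**2 + 1         # g(t),   weight of f_k
        g1 = -2*t**3 + 3*t**2            # g(t-1), weight of f_(k+1)
        h0 = t**3 - 2*t**2 + t           # h(t),   weight of f'_k
        h1 = t**3 - t**2                 # h(t-1), weight of f'_(k+1)
        return (values[k] * g0 + values[k + 1] * g1
                + derivs[k] * h0 + derivs[k + 1] * h1)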

5.4.5. Cubic Splines

The next reconstruction technique we describe is the method of cubic spline inter-

polation. A cubic spline is a piecewise continuous third-degree polynomial. Given n

points labeled (x_k, y_k) for 0 \le k < n, the interpolating cubic spline consists of n-1 cubic

polynomials. They pass through the supplied points, which are also known as control

points.

We now derive the piecewise interpolating polynomials. The kth polynomial piece,

f_k, is defined to pass through two consecutive input points in the fixed interval (x_k, x_{k+1}).

Furthermore, the f_k are joined at x_k (for k = 1, ..., n-2) such that f_k, f_k', and f_k'' are continuous

(Fig. 5.9). The interpolating polynomial f_k is given as

f_k(x) = a_3(x - x_k)^3 + a_2(x - x_k)^2 + a_1(x - x_k) + a_0 \qquad (5.4.15)

