	for(j=0; j<T; j++)			/* compute alpha coefficients; T terms	*/
	for(k=0; k<=j; k++)
		A[j][k] = init_alpha(j, k);

	for(y=0; y<h; y++)			/* visit each point in the surface	*/
	for(x=0; x<w; x++) {
		/* evaluate surface at (x,y) over all terms */
		for(i=f=0; i<T; i++) {
			a = coef(i);
			p = poly(i, (float) x, (float) y);
			f += (a * p);
		}
		*S++ = (float) f;	/* save fitted surface values */
	}
}
/*
 * init_alpha:
 *
 * Compute parameter alpha (Eq. 3.6.23).
 */
float init_alpha(j, k)
int j, k;
{
	int	i;
	float	a, h, p, num, denum;

	if(k == 0) a = 1.0;			/* case 0: a0 */
	else if(j == k) {			/* case 1: ajj */
		num = denum = 0;
		for(i=0; i<M; i++) {		/* M = number of control points */
			h = basis(j, X[i], Y[i]);
			num   += (W[i]);
			denum += (W[i] * h);
		}
		a = num / denum;
	} else {				/* case 2: ajk, j != k */
		num = denum = 0;
		for(i=0; i<M; i++) {
			h = basis(j, X[i], Y[i]);
			p = poly(k, X[i], Y[i]);
			num   += (W[i] * p * h);
			denum += (W[i] * p * p);
		}
		a = -A[j][j] * num / denum;
	}
	return(a);
}

/*
 * coef:
 *
 * Find the kth mapping function coefficient (Eq. 3.6.24).
 */
float coef(k)
int k;
{
	int	i;
	float	p, num, denum;

	num = denum = 0;
	for(i=0; i<M; i++) {
		p = poly(k, X[i], Y[i]);
		num   += (W[i] * Z[i] * p);
		denum += (W[i] * p * p);
	}
	return(num / denum);
}
/*
 * poly:
 *
 * Determine the polynomial function at point (x,y).
 */
float poly(k, x, y)
int k;
float x, y;
{
	int	i;
	float	p;

	for(i=p=0; i<k; i++)
		p += (A[k][i] * poly(i, x, y));
	p += (A[k][k] * basis(k, x, y));
	return(p);
}

/*
 * basis:
 *
 * Return the (x,y) value of basis function f.
 */
float basis(f, x, y)
int f;
float x, y;
{
	float h;

	switch(f) {
	case 0: h = 1.0;	break;
	case 1: h = x;		break;
	case 2: h = y;		break;
	case 3: h = x*x;	break;
	case 4: h = x*y;	break;
	case 5: h = y*y;	break;
	case 6: h = x*x*x;	break;
	case 7: h = x*x*y;	break;
	case 8: h = x*y*y;	break;
	case 9: h = y*y*y;	break;
	}
	return(h);
}

3.7. PIECEWISE POLYNOMIAL TRANSFORMATIONS
Global polynomial transformations impose a single mapping function upon the

whole image. They often do not account for local geometric distortions such as scene

elevation, atmospheric turbulence, and sensor nonlinearity. Consequently, piecewise

mapping functions have been introduced to handle local deformations [Goshtasby 86,

87].

The study of piecewise interpolation has received much attention in the spline
literature. The majority of the work, however, assumes that the data points are available

on a rectangular grid. In our application, this is generally not the case. Instead, we must

consider the problem of fitting a composite surface to scattered 3-D data [Franke 79].

3.7.1. A Surface Fitting Paradigm for Geometric Correction

The problem of determining functions U and V can be conveniently posed as a surface
fitting problem. Consider, for example, knowing M control points labeled (xk,yk) in
the observed image and (uk,vk) in the reference image, where 1 ≤ k ≤ M. Deriving map-
ping functions U and V is equivalent to determining two smooth surfaces: one that passes
through points (xk,yk,uk) and the other that passes through (xk,yk,vk) for 1 ≤ k ≤ M. Fig-
ure 3.13 shows a surface for U(x,y) with control points given at the grid points.


Figure 3.13: Surface U(x,y).

Before an image undergoes geometric distortion, these surfaces are defined to be

ramp functions. This follows from the observation that uk = xk and vk = yk in the absence

of any deformation. Introducing geometric distortions will cause these surfaces to devi-

ate from their initial ramp configurations. Note that as long as the surface is monotoni-

cally nondecreasing, the resulting image does not fold back upon itself.

Given only sparse control points, it is necessary to interpolate a surface through

these points and closely approximate the unknown distortion function. The problem of

smooth surface interpolation/approximation from scattered data has been the subject of

much attention across many fields. It is of great practical importance to all disciplines

concerned with inferring an analytic solution given only a few samples. Traditionally,

the solution to this problem is posed in one of two forms: global or local transformations.

A global transformation considers all the control points in order to derive the

unknown coefficients of the mapping function. Most of the solutions described thus far

are global polynomial methods. This chapter devotes considerable attention to polynomials

due to their popularity. Generally, the polynomial coefficients computed from global

methods will remain fixed across the entire image. That is, the same polynomial

transformation is applied over each pixel.

It is clear that global low-order polynomial mapping functions can only approxi-

mate these surfaces. Furthermore, the least-squares technique used to determine the

coefficients averages a local geometric difference over the whole image area independent

of the position of the difference. As a result, local distortions cannot be handled and they

instead contribute to errors at distant locations. We may, instead, interpolate the surface

with a global mapping by increasing the degree of the polynomial to match the number

of control points. However, the resulting polynomial is likely to exhibit excessive spatial

undulations and thereby introduce further artifacts.


Weighted least-squares was introduced as an alternate approach. Although it is a

global method that considers all control points, it recomputes the coefficients at each

pixel by using a weighting function that is biased in favor of nearby control points. In

this manner, it constitutes a hybrid global/local method, computing polynomial

coefficients through a global technique, but permitting the coefficients to be spatially-

varying. The extent to which the surface is interpolated or approximated is left to a

user-specified parameter.

If the control points all lie on a rectangular mesh, as in Fig. 3.13, it is possible to use

bicubic spline interpolation. For example, interpolating B-splines or Bezier surface

patches can be fitted to the data [Goshtasby 89]. These methods are described globally

but remain sensitive to local data. This behavior is contrary to least-squares for fitting

polynomials to local data, where a local distortion is averaged out equally over the entire

image. With global spline interpolation (see Section 3.8), a local distortion has a global

effect on the transformed image, but its effect is vanishingly small on distant points.

A local transformation considers only nearby control points in evaluating interpo-

lated values along a surface. In this section, we describe piecewise polynomial transfor-

marion, a local technique for computing a surface from scattered points.

3.7.2. Procedure

One general procedure for performing surface interpolation on irregularly-spaced

3-D points consists of the following operations.

1. Partition each image into triangular regions by connecting neighboring control

points with noncrossing line segments, forming a planar graph. This process,

known as triangulation, serves to delimit local neighborhoods over which surface

patches will be defined.

2. Estimate partial derivatives of U (and similarly V) with respect to x and y at each of

the control points. This may be done by using a local method, with data values

taken from nearby control points, or with a global method using all the control

points. Computing the partial derivatives is necessary only if the surface patches

are to join smoothly, i.e., for C1, C2, or smoother results.†

3. For each triangular region, fit a smooth surface through the vertices satisfying the

constraints imposed by the partial derivatives. The surface patches are generated by

using low-order bivariate polynomials. A linear system of equations must be solved

to compute the polynomial coefficients.

4. Those regions lying outside the convex hull of the data points must extrapolate the

surface from the patches lying along the boundary.

5. For each point (x,y), determine its enclosing triangle and compute an interpolated

value u (similarly for v) by using the polynomial coefficients derived for that trian-

gle. This yields the (u,v) coordinates necessary to resample the input image.
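For the linear-patch case, steps 3 and 5 above can be sketched with barycentric coordinates: the same signed-area ratios that locate a point inside a triangle also serve as interpolation weights at its vertices. The function names below are illustrative, not from the text:

```c
#include <math.h>

/* Twice the signed area of the triangle (x1,y1)-(x2,y2)-(x3,y3). */
static double tri_area2(double x1, double y1, double x2, double y2,
                        double x3, double y3)
{
    return (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1);
}

/* Interpolate u at (x,y) from the triangle's vertex values u1, u2, u3.
 * Returns 1 if (x,y) lies inside (or on) the triangle, 0 otherwise.
 */
int tri_interp(double x, double y,
               double x1, double y1, double u1,
               double x2, double y2, double u2,
               double x3, double y3, double u3,
               double *u)
{
    double a  = tri_area2(x1, y1, x2, y2, x3, y3);
    double w1 = tri_area2(x,  y,  x2, y2, x3, y3) / a;  /* weight of vertex 1 */
    double w2 = tri_area2(x1, y1, x,  y,  x3, y3) / a;  /* weight of vertex 2 */
    double w3 = 1.0 - w1 - w2;                          /* weight of vertex 3 */

    *u = w1 * u1 + w2 * u2 + w3 * u3;
    return (w1 >= 0 && w2 >= 0 && w3 >= 0);
}
```

The inside test in step 5 falls out for free: the point lies in its enclosing triangle exactly when all three weights are nonnegative.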

† C1 and C2 denote continuous first and second derivatives, respectively.



3.7.3. Triangulation

Triangulation is the process of tesselating the convex hull of a set of N distinct

points into triangular regions. This is done by connecting neighboring control points

with noncrossing line segments, forming a planar graph. Although many configurations

are possible, we are interested in achieving a partition such that points inside a triangle

are closer to its three vertices than to vertices of any other triangle. This is called the

optimal triangulation and it avoids generating triangles with sharp angles and long edges.

In this manner, only nearby data points will be used in the surface patch computations

that follow. Several algorithms to obtain optimal triangulations are reviewed below.

In [Lawson 77], the author describes how to optimize an arbitrary triangulation ini-

tially created from the given data. He gives the following three criteria for optimality.

1. Max-min criterion: For each quadrilateral in the set of triangles, choose the triangu-

lation that maximizes the minimum interior angle of the two obtained triangles.

This tends to bias the tesselation against undesirable long thin triangles. Figure

3.14a shows triangle ABC selected in favor of triangle BCD under this criterion.

The technique has computational complexity O(N^(4/3)).

2. The circle criterion: For each quadrilateral in the set of triangles, pass a circle

through three of its vertices. If the fourth vertex lies outside the circle then split the

quadrilateral into two triangles by drawing the diagonal that does not pass through

the vertex. This is illustrated in Fig. 3.14b.

3. Thiessen region criterion: For each quadrilateral in the set of triangles, construct the
Thiessen regions. In computational geometry, the Thiessen regions are also known
as Dirichlet or Voronoi regions. They are the result of intersecting the
perpendicular bisectors of the quadrilateral edges, as shown in Fig. 3.14c. This
serves to create regions around each control point P such that points in that region
are closer to P than to any other control point. A triangulation is obtained by joining
the control points of adjacent Thiessen regions, a result known as the Delaunay
triangulation (Fig. 3.15). An O(N^(3/2)) triangulation algorithm using this method
is described in [Green 78].
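The circle criterion above reduces to the standard in-circle determinant test used throughout Delaunay triangulation codes. A minimal sketch follows; the function name is illustrative and the triangle a, b, c is assumed to be given in counterclockwise order:

```c
/* Returns nonzero if point d lies strictly inside the circle through
 * a, b, c (counterclockwise order). The 3x3 determinant below is the
 * classical in-circle predicate.
 */
int in_circle(double ax, double ay, double bx, double by,
              double cx, double cy, double dx, double dy)
{
    /* translate so that d is the origin */
    double a1 = ax - dx, a2 = ay - dy;
    double b1 = bx - dx, b2 = by - dy;
    double c1 = cx - dx, c2 = cy - dy;
    double a3 = a1*a1 + a2*a2;           /* squared lengths */
    double b3 = b1*b1 + b2*b2;
    double c3 = c1*c1 + c2*c2;

    /* positive determinant => d inside the circumcircle */
    return a1*(b2*c3 - b3*c2) - a2*(b1*c3 - b3*c1) + a3*(b1*c2 - b2*c1) > 0;
}
```

If the fourth vertex of a quadrilateral tests inside, the diagonal is swapped, exactly as in Fig. 3.14b.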

An O(N log2 N) recursive algorithm that determines the optimal triangulation is

given in [Lee 80]. The method recursively splits the data into halves using the x-values

of the control points until each subset contains only three or four points. These small

subsets are then easily triangulated using any of Lawson's three criteria. Finally, they are

merged into larger subsets until all the triangular subsets are consumed, resulting in an
optimal triangulation of the control points. Due to its speed and simplicity, this
divide-and-conquer technique was used in [Goshtasby 87] to compute piecewise cubic mapping

functions. The subject of triangulations and data structures for them is reviewed in [De

Floriani 87].

3.7.4. Linear Triangular Patches

Once the triangular regions are determined, the scattered 3-D data (xi,yi,ui) or

(xi,yi,vi) are partitioned into groups of three points. Each group is fitted with a low-order

bivariate polynomial to generate a surface patch. In this manner, triangulation allows


Figure 3.14: Three criteria for optimal triangulation [Goshtasby 86].


Figure 3.15: (a) Delaunay tesselation; (b) Triangulation [Goshtasby 86].

only nearby control points to influence the surface patch calculations. Together, these

patches comprise a composite surface defining the corresponding u or v coordinates at

each point in the observed image.

We now consider the case of fitting the triangular patches with a linear interpolant,
i.e., a plane. The equation of a plane through three points (x1,y1,u1), (x2,y2,u2), and
(x3,y3,u3) is given by

	Ax + By + Cu + D = 0					(3.7.1)

where

	    | y1 u1 1 |       | u1 x1 1 |       | x1 y1 1 |         | x1 y1 u1 |
	A = | y2 u2 1 | ; B = | u2 x2 1 | ; C = | x2 y2 1 | ; D = - | x2 y2 u2 |
	    | y3 u3 1 |       | u3 x3 1 |       | x3 y3 1 |         | x3 y3 u3 |

Reprinted with permission from Pattern Recognition, Volume 19, Number 6, "Piecewise Linear

Mapping Functions for Image Registration" by A. Goshtasby. Copyright © 1986 by Pergamon Press.
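A direct implementation of Eq. (3.7.1) simply expands the four determinants; the interpolated value at any (x,y) then follows as u = -(Ax + By + D)/C. The routine below is an illustrative sketch, not code from the text:

```c
/* Coefficients of the plane Ax + By + Cu + D = 0 through three points
 * (x1,y1,u1), (x2,y2,u2), (x3,y3,u3), per the determinants of Eq. (3.7.1).
 */
void plane_coef(double x1, double y1, double u1,
                double x2, double y2, double u2,
                double x3, double y3, double u3,
                double *A, double *B, double *C, double *D)
{
    *A = y1*(u2 - u3) - u1*(y2 - y3) + (y2*u3 - y3*u2);
    *B = u1*(x2 - x3) - x1*(u2 - u3) + (u2*x3 - u3*x2);
    *C = x1*(y2 - y3) - y1*(x2 - x3) + (x2*y3 - x3*y2);
    *D = -(x1*(y2*u3 - y3*u2) - y1*(x2*u3 - x3*u2) + u1*(x2*y3 - x3*y2));
}
```

For example, the plane through (0,0,5), (1,0,6), (0,1,7) is u = 5 + x + 2y, and evaluating -(Ax + By + D)/C at any (x,y) reproduces that surface.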


As seen in Fig. 3.15b, the triangulation covers only the convex hull of the set of

control points. In order to extrapolate points outside the convex hull, the planar triangles

along the boundary are extended to the image border. Their extents are limited to the

intersections of neighboring planes.

3.7.5. Cubic Triangular Patches

Although piecewise linear mapping functions are continuous at the boundaries

between neighboring functions, they do not provide a smooth transition across patches.

In order to obtain smoother results, the patches must at least use C1 interpolants. This is

achieved by fitting the patches with higher-ordered bivariate polynomials.

This subject has received much attention in the field of computer-aided geometric

design. Many algorithms using N-degree polynomials have been proposed. They

include N=2 [Powell 77], N=3, 4 [Percell 76], and N=5 [Akima 78]. In this section,

we examine the case of fitting triangular regions with cubic patches (N = 3). A cubic
patch is a third-degree bivariate polynomial of the form

	f(x,y) = a1 + a2x + a3y + a4x^2 + a5xy + a6y^2 + a7x^3 + a8x^2y + a9xy^2 + a10y^3	(3.7.2)

The ten coefficients can be solved by determining ten constraints among them.

Three relations are obtained from the coordinates of the three vertices. Six relations are

derived from the partial derivatives of the patch with respect to x and y at the three ver-

tices. Smoothly joining a patch with its neighbors requires the partial derivatives of the

two patches to be the same in the direction normal to the common edge. This adds three

more constraints, yielding a total of twelve relations. Since we have ten unknowns and

twelve equations, the system is overdetermined and cannot be solved as given.

The solution lies in the use of the Clough-Tocher triangle, a widely known C1 triangular

interpolant [Clough 65]. Interpolation with the Clough-Tocher triangle requires

the triangular region to be divided into three subtriangles. Fitting a surface patch to each

subtriangle yields a total of thirty unknown parameters. Since exactly thirty constraints

can be derived in this process, a linear system of thirty equations must be solved to com-

pute a surface patch for each region in the triangulation. A full derivation of this method

is given in [Goshtasby 87]. A complete review of triangular interpolants can be found in

[Barnhill 77].

An interpolation algorithm offering smooth blending across patches requires partial

derivative data. Since this is generally not available with the supplied data, it must be

estimated. A straightforward approach to estimating the partial derivative at point P con-

sists of fitting a second-degree bivariate polynomial through P and five of its neighbors.

This allows us to determine the six parameters of the polynomial and directly compute

the partial derivative. More accurate estimates can be obtained by a weighted least-

squares technique using more than six points [Lawson 77].

Another approach is given in [Akima 78] where the author uses P and its m nearest

points P1, P2, ..., Pm, to form vector products Vij = (P - Pi) x (P - Pj) with Pi and Pj

being all possible combinations of the points. The vector sum V of all Vij's is then


calculated. Finally, the partial derivatives are estimated from the slopes of a plane that is

normal to the vector sum. A similar approach is described in [Klucewicz 78]. Akima

later improved this technique by weighting the contribution of each triangle such that

small weights were assigned to large or narrow triangles when the vector sum was calcu-

lated [Akima 84]. For a comparison of methods, see [Nielson 83] and [Stead 84].
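Akima's vector-product scheme can be sketched as follows. The interface and the sign convention used to keep all normals on the same side of the surface are assumptions made for illustration:

```c
#include <math.h>

/* Estimate du/dx and du/dy at point (px,py,pu) from its m nearest neighbor
 * points (nx[i],ny[i],nu[i]): sum the cross products Vij = (P-Pi) x (P-Pj)
 * over all pairs, then read the slopes off the plane normal to the sum.
 */
void estimate_partials(double px, double py, double pu,
                       const double *nx, const double *ny, const double *nu,
                       int m, double *dudx, double *dudy)
{
    double vx = 0, vy = 0, vz = 0;
    int i, j;

    for(i = 0; i < m; i++)
    for(j = i+1; j < m; j++) {
        /* difference vectors P - Pi and P - Pj */
        double ax = px - nx[i], ay = py - ny[i], az = pu - nu[i];
        double bx = px - nx[j], by = py - ny[j], bz = pu - nu[j];
        /* cross product Vij = (P - Pi) x (P - Pj) */
        double cxp = ay*bz - az*by;
        double cyp = az*bx - ax*bz;
        double czp = ax*by - ay*bx;
        /* flip so every normal points to the same side before summing */
        if(czp < 0) { cxp = -cxp; cyp = -cyp; czp = -czp; }
        vx += cxp; vy += cyp; vz += czp;
    }
    /* slopes of the plane whose normal is the vector sum (vx,vy,vz) */
    *dudx = -vx / vz;
    *dudy = -vy / vz;
}
```

On data sampled from a plane the estimate is exact; on curved data it behaves as the weighted average of facet normals around P.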

Despite the apparently intuitive formulation of performing surface interpolation by

fitting linear or cubic patches to triangulated regions, partitioning a set of irregularly-

spaced points into distinct neighborhoods is not straightforward. Three criteria for

"optimal" triangulation were described. These heuristics are arbitrary and are not

without problems.

In an effort to circumvent the problem of defining neighborhoods, a uniform

hierarchical procedure has recently been proposed [Burr 88]. This method fits a polyno-

mial surface to the available data within a local neighborhood of each sample point. The

procedure, called hierarchical polynomial fit filtering, yields a multiresolution set of

low-pass filtered images, i.e., a pyramid. Finally, the set of blurred images are combined

through multiresolution interpolation to form a smooth surface passing through the origi-

nal data. The recent literature clearly indicates that surface interpolation and approxima-

tion from scattered data remains an open problem.

3.8. GLOBAL SPLINES

As is evident, inferring a mapping function given only sparse scattered correspon-

dence points is an important problem. In this section, we examine this problem in terms

of a more general framework: surface fitting with sophisticated mapping functions well

beyond those defined by polynomials. In particular, we introduce global splines as a gen-

eral solution. We discuss their definition in terms of basis functions and regularization

theory. Research in this area demonstrates that global splines are useful for our purposes,

particularly since they provide us a means of imposing constraints on the properties of

our inferred mapping functions.

We examine the use of global splines defined in two ways: through basis functions

and regularization theory. Although the use of global splines defined through basis func-

tions overlaps with some of the techniques described earlier, we present it here-to draw

attention to the single underlying mathematical framework. Since they do not depend on

any regular structure for the data, they are particularly useful for surface interpolation

from scattered data.

3.8.1. Basis Functions

Global spline interpolation with basis functions is one of the oldest global tech-

niques. It consists of the following procedure:

1. Define a set of basis functions hi(x,y), where i = 1, ..., K.

2. Define a set of correspondence points (xj,yj,uj), where j = 1, ..., M, and uj refers to

the surface height associated with point (xj,yj). In this discussion, we limit our-

selves to computing a surface for u. The process must be repeated for v as well.


3. Define the interpolating function to be a linear combination of these basis functions.

We refer to the interpolation function as a spline. For example, the expression for

mapping function U is a spline that passes through the supplied correspondence

points. It is given as

	          K
	U(x,y) =  Σ  ai hi(x,y)				(3.8.1)
	         i=1

for some ai.
4. Determine the unknown ai coefficients by solving a system of linear equations to

ensure that the function interpolates the data. The system of equations is given as

U = HA, or equivalently as

	| u1 |   | h1(x1,y1)  h2(x1,y1)  ...  hK(x1,y1) | | a1 |
	| u2 |   | h1(x2,y2)  h2(x2,y2)  ...  hK(x2,y2) | | a2 |
	| .  | = |    .          .                .     | | .  |		(3.8.2)
	| .  |   |    .          .                .     | | .  |
	| uM |   | h1(xM,yM)  h2(xM,yM)  ...  hK(xM,yM) | | aK |

The matrix H is often called the design matrix or the Gram matrix of the problem.

While the definition of this approach is rather simple, the choice of the basis func-

tions is very nontrivial. Although we present a simple introduction here, a more

thorough investigation can be found in [Franke 79], which includes a critical comparison

of many global and local methods for scattered interpolation.

A simple choice for the set of basis functions is: hi(x,y) = 1, x, y, xy, x^2, y^2 for
i = 1, ..., 6. This choice, coincidentally, is identical to that used in Eqs. (3.6.1) and (3.6.2)
for a second-degree fit. If we were given exactly six data points, it becomes possible to
interpolate the data. In that case, the coefficients may be determined by computing
A = H^-1 U, assuming H is nonsingular. Otherwise, if the number of data points exceeds the
number of basis functions (M > K), then any approximate solution to the overconstrained
linear system can be used. It is then no longer possible to interpolate the data
unless, of course, the input coincides with a function of order K.
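For the six-point case just described, computing A = H^-1 U amounts to filling the design matrix and solving one small linear system. The sketch below uses Gaussian elimination with partial pivoting rather than an explicit inverse; all names are illustrative:

```c
#include <math.h>

#define K 6	/* number of basis functions: 1, x, y, xy, x^2, y^2 */

static double hbasis(int i, double x, double y)
{
    switch(i) {
    case 0: return 1.0;
    case 1: return x;
    case 2: return y;
    case 3: return x*y;
    case 4: return x*x;
    case 5: return y*y;
    }
    return 0.0;
}

/* Fill the design (Gram) matrix H from K data points (x[j],y[j],u[j]) and
 * solve H a = u (Eq. 3.8.2). Returns 0 if H is singular, 1 on success.
 */
int spline_fit(const double *x, const double *y, const double *u, double *a)
{
    double H[K][K+1];
    int i, j, k;

    for(j = 0; j < K; j++) {
        for(i = 0; i < K; i++) H[j][i] = hbasis(i, x[j], y[j]);
        H[j][K] = u[j];                  /* augmented column */
    }
    for(k = 0; k < K; k++) {             /* elimination with partial pivoting */
        int p = k;
        for(j = k+1; j < K; j++)
            if(fabs(H[j][k]) > fabs(H[p][k])) p = j;
        for(i = k; i <= K; i++) { double t = H[k][i]; H[k][i] = H[p][i]; H[p][i] = t; }
        if(H[k][k] == 0.0) return 0;     /* singular design matrix */
        for(j = k+1; j < K; j++) {
            double f = H[j][k] / H[k][k];
            for(i = k; i <= K; i++) H[j][i] -= f * H[k][i];
        }
    }
    for(j = K-1; j >= 0; j--) {          /* back substitution */
        a[j] = H[j][K];
        for(i = j+1; i < K; i++) a[j] -= H[j][i] * a[i];
        a[j] /= H[j][j];
    }
    return 1;
}

/* Evaluate the spline S(x,y) = sum ai hi(x,y) of Eq. (3.8.1). */
double spline_eval(const double *a, double x, double y)
{
    double s = 0.0;
    int i;
    for(i = 0; i < K; i++) s += a[i] * hbasis(i, x, y);
    return s;
}
```

With M > K points the same design matrix simply gains extra rows, and the solve is replaced by a least-squares step, as noted above.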

For numerical reasons, it is preferable to compute a decomposition of H rather than

compute its inverse. This is sometimes necessary because design matrices are often very

ill-conditioned, and care should be taken in solving them. This tends to happen when a

cluster of supplied correspondence points causes several rows in the design matrix to

differ only marginally. In such instances, it is suggested that an estimate of the condition

number of the particular design matrix be obtained before interpreting the results. Tech-

niques for simultaneously solving a linear system and estimating its condition number

can be found in standard linear algebra packages (e.g., LINPACK [Dongarra 79], [NAG

80, IMSL 80]), or directly as the ratio of the largest to smallest nonzero singular value

computed through singular value decomposition.
