Digital image warping


Horizontal scanlines have a pixel-to-pixel offset of one, while vertical scanlines have an offset equal to the width of a row. The function resample_gen(A, B, C, D, E) applies the mapping function A to input scanline B, generating output C. The input (and output) dimension is D and the inter-pixel offset is E. The function performs linear interpolation for magnification and box filtering for minification. This is equivalent to the reconstruction and antialiasing methods used in [Smythe 90]. Superior filters can be added within this framework by incorporating the results of Chapters 5 and 6.

Figure 7.27: Facial Animation from the Pseudopod sequence in The Abyss.
Courtesy of Industrial Light & Magic, a Division of Lucasfilm Ltd.
Copyright © 1989 Twentieth Century Fox. All rights reserved.
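The text does not reproduce resample_gen itself. A minimal sketch of such a routine, assuming the argument order described above (map, src, dst, dimension, offset) and simplifying the box filter to whole-pixel averaging, might read as follows; the details are illustrative, not the book's listing:

```c
/* Hypothetical sketch of resample_gen: map[x] gives the input coordinate
 * of output sample x; offset is the stride between successive pixels
 * (1 for rows, the row width for columns). Spans narrower than one input
 * pixel are linearly interpolated (magnification); wider spans are box
 * filtered by simple whole-pixel averaging (minification). */
void resample_gen(const double *map, const unsigned char *src,
                  unsigned char *dst, int len, int offset)
{
    int x, k, k0, k1;
    double a, b, f, sum;

    for (x = 0; x < len; x++) {
        a = map[x];
        b = (x < len - 1) ? map[x + 1] : a + 1.0;
        if (b - a <= 1.0) {
            /* magnification (or unit scale): linear interpolation at a */
            k  = (int)a;
            f  = a - k;
            k1 = (k + 1 < len) ? k + 1 : k;
            dst[x * offset] =
                (unsigned char)((1.0 - f) * src[k * offset] +
                                f * src[k1 * offset] + 0.5);
        } else {
            /* minification: average the source pixels covered by [a, b) */
            k0 = (int)a;
            k1 = (int)b;
            if ((double)k1 == b) k1--;
            if (k1 >= len) k1 = len - 1;
            sum = 0.0;
            for (k = k0; k <= k1; k++)
                sum += src[k * offset];
            dst[x * offset] = (unsigned char)(sum / (k1 - k0 + 1) + 0.5);
        }
    }
}
```

A fractional-coverage box filter, as in [Fant 86], would weight the boundary pixels by their partial overlap instead of averaging whole pixels.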

Figure 7.28: A warped image of Piazza San Marco.
Copyright © 1989 Pixar. All rights reserved.

Two-pass mesh warping based on the algorithm in [Smythe 90].
Input image IN has height IN_h and width IN_w.
Xs,Ys contain the x,y coordinates of the source mesh.
Xd,Yd contain the x,y coordinates of the destination mesh.
Their height and width dimensions are T_h and T_w.
The output is stored in OUT. Due to the frozen edge
assumption, OUT has the same dimensions as IN.

warp_mesh(IN, OUT, Xs, Ys, Xd, Yd, IN_h, IN_w, T_h, T_w)
unsigned char *IN, *OUT;
double *Xs, *Ys, *Xd, *Yd;
int IN_h, IN_w, T_h, T_w;
{
    int a, b, x, y;
    unsigned char *src, *dst, *TMP;
    double *x1, *y1, *x2, *y2, *xrow1, *yrow1, *xrow2, *yrow2,
           *map1, *map2, *indx, *Ts, *Ti, *Td;

    /*
     * allocate memory for buffers: indx stores indices used to sample splines;
     * xrow1, xrow2, yrow1, yrow2 store column data in row order for ispline_gen();
     * map1, map2 store mapping functions computed in row order in ispline_gen()
     */
    a = MAX(IN_h, IN_w) + 1;
    b = sizeof(double);
    indx  = (double *) calloc(a, b);
    xrow1 = (double *) calloc(a, b);    yrow1 = (double *) calloc(a, b);
    xrow2 = (double *) calloc(a, b);    yrow2 = (double *) calloc(a, b);
    map1  = (double *) calloc(a, b);    map2  = (double *) calloc(a, b);

    /*
     * First pass (phase one): create tables Ts and Ti for x-intercepts of
     * vertical splines in S and I. Tables have T_w columns of height IN_h
     */
    Ts = (double *) calloc(T_w * IN_h, sizeof(double));
    Ti = (double *) calloc(T_w * IN_h, sizeof(double));
    for(x=0; x<T_w; x++) {              /* visit each vertical spline */
        /* store columns as rows for ispline_gen */
        for(y=0; y<T_h; y++) {
            xrow1[y] = Xs[y*T_w + x];   yrow1[y] = Ys[y*T_w + x];
            xrow2[y] = Xd[y*T_w + x];   yrow2[y] = Yd[y*T_w + x];
        }

        /* scan convert vertical splines of S and I */
        ispline_gen(yrow1, xrow1, T_h, indx, map1, IN_h);
        ispline_gen(yrow2, xrow2, T_h, indx, map2, IN_h);

        /* store resampled rows back into columns */
        for(y=0; y<IN_h; y++) {
            Ts[y*T_w + x] = map1[y];
            Ti[y*T_w + x] = map2[y];
        }
    }

    /* First pass (phase two): warp x using Ts and Ti. TMP holds intermediate image */
    TMP = (unsigned char *) calloc(IN_h, IN_w);
    for(y=0; y<IN_h; y++) {             /* visit each row */
        /* fit spline to x-intercepts; resample over all columns */
        x1 = &Ts[y * T_w];
        x2 = &Ti[y * T_w];
        ispline_gen(x1, x2, T_w, indx, map1, IN_w);

        /* resample source row based on map1 */
        src = &IN[y * IN_w];
        dst = &TMP[y * IN_w];
        resample_gen(map1, src, dst, IN_w, 1);
    }

    /* free buffers */
    cfree((char *) Ts);
    cfree((char *) Ti);

    /*
     * Second pass (phase one): create tables Ti and Td for y-intercepts of
     * horizontal splines in I and D. Tables have T_h rows of width IN_w
     */
    Ti = (double *) calloc(T_h * IN_w, sizeof(double));
    Td = (double *) calloc(T_h * IN_w, sizeof(double));
    for(y=0; y<T_h; y++) {              /* visit each horizontal spline */
        /* scan convert horizontal splines of I and D */
        x1 = &Xs[y * T_w];   y1 = &Ys[y * T_w];
        x2 = &Xd[y * T_w];   y2 = &Yd[y * T_w];
        ispline_gen(x1, y1, T_w, indx, &Ti[y*IN_w], IN_w);
        ispline_gen(x2, y2, T_w, indx, &Td[y*IN_w], IN_w);
    }

    /* Second pass (phase two): warp y using Ti and Td */
    for(x=0; x<IN_w; x++) {             /* visit each column */
        /* store column as row for ispline_gen */
        for(y=0; y<T_h; y++) {
            xrow1[y] = Ti[y*IN_w + x];
            yrow1[y] = Td[y*IN_w + x];
        }

        /* fit spline to y-intercepts; resample over all rows */
        ispline_gen(xrow1, yrow1, T_h, indx, map1, IN_h);

        /* resample intermediate image column based on map1 */
        src = &TMP[x];
        dst = &OUT[x];
        resample_gen(map1, src, dst, IN_h, IN_w);
    }

    /* free buffers */
    cfree((char *) TMP);    cfree((char *) indx);
    cfree((char *) Ti);     cfree((char *) Td);
    cfree((char *) xrow1);  cfree((char *) yrow1);
    cfree((char *) xrow2);  cfree((char *) yrow2);
    cfree((char *) map1);   cfree((char *) map2);
}

Figure 7.29: A caricature of Albert Einstein.
Copyright © 1989 Pixar. All rights reserved.


Additional separable geometric transformations are described in this section. They rely on the simplifications of 1-D processing to perform perspective projections, mappings among arbitrary planar shapes, and warps specified by spatial lookup tables.

7.6.1. Perspective Projection: Robertson, 1987

The perspective projection of 3-D surfaces has been shown to be reducible to a series of fast 1-D resampling operations [Robertson 87, 89]. In the traditional approach, this task has proved to be computationally expensive due to the problems in determining visibility and performing hidden-point removal. With the introduction of this algorithm, the problem can be decomposed into efficient separable components that can each be implemented at rates approaching real-time.

The procedure begins by rotating the image into alignment with the frontal (nearest) edge of the viewing window. Each horizontal scanline is then compressed so that all pixels which lie in a line of sight from the viewpoint are aligned into columns in the intermediate image. That is, each resulting column comprises a line of sight between the viewpoint and the surface.

Occlusion of a pixel can now only be due to another pixel in that column that lies closer to the viewer. This simplifies the perspective projection and hidden-pixel removal stages. These operations are performed along the vertical scanlines. By processing each column in back-to-front order, hidden-pixel removal is executed trivially.
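As an illustrative fragment (not taken from [Robertson 87]), the trivial hidden-pixel removal can be sketched as a back-to-front splat along one intermediate-image column; the function name and data layout here are hypothetical:

```c
/* Back-to-front splat along one line-of-sight column: the n samples arrive
 * sorted farthest-first, with row[i] the projected row of sample i. Nearer
 * samples simply overwrite farther ones, so no explicit visibility test is
 * needed. */
void project_column(const unsigned char *color, const int *row,
                    int n, unsigned char *out)
{
    int i;
    for (i = 0; i < n; i++)
        out[row[i]] = color[i];   /* later sample = nearer = wins */
}
```

The entire cost of hidden-pixel removal is thus folded into the traversal order.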

Finally, the intermediate image undergoes a horizontal pass to apply the horizontal projection. This pass is complicated by the need to invert the previously applied horizontal compression. The difficulty arises since the image has already undergone hidden-pixel removal. Consequently, it is not directly known which surface point has been mapped to the current projected point. This can be uniquely determined only after additional calculations. The resulting image is the perspective transformation of the input, performed at rates which make real-time interactive manipulation possible.

7.6.2. Warping Among Arbitrary Planar Shapes: Wolberg, 1988

The advantages of 1-D resampling have been exploited for use in warping images among arbitrary planar shapes [Wolberg 88, 89a]. The algorithm addresses the following inadequately solved problem: mapping between two images that are delimited by arbitrary, closed, planar curves, e.g., hand-drawn curves.

Unlike many other problems treated in image processing or computer graphics, the stretching of an arbitrary shape onto another, and the associated mapping, is a problem not addressed in a tractable fashion in the literature. The lack of attention to this class of problems can be easily explained. In image processing, there is a well-defined 2-D rectilinear coordinate system. Correcting for distortions amounts to mapping the four corners of a nonrectangular patch onto the four corners of a rectangular patch. In computer graphics, a parameterization exists for the 2-D image, the 3-D object, and the 2-D screen. Consequently, warping amounts to a change of coordinate system (2-D to 3-D) followed by a projection onto the 2-D screen. The problems considered in this work fail to meet the above properties. They are neither parameterized nor are they well suited for four-corner mapping.

The algorithm treats an image as a collection of interior layers. Informally, the layers are extracted in a manner similar to peeling an onion. A radial path emanates from each boundary point, crossing interior layers until the innermost layer, the skeleton, is reached. Assuming correspondences may be established between the boundary points of the source and target images, the warping problem is reduced to mapping between radial paths in both images. Note that the layers and the radial paths actually comprise a sampling grid.

This algorithm uses a generalization of polar coordinates. The extension is that
radial paths are not restricted to terminate at a single point. Rather, a fully connected

skeleton obtained from a thinning operation may serve as terminators of radial paths

directed from the boundary. This permits the processing of arbitrary shapes.

The 1-D resampling operations are introduced in three stages. First, the radial paths

in the source image must be resampled so that they all take on the same length. Then

these normalized lists, which comprise the columns in our intermediate image, are

resampled in the horizontal direction. This serves to put them in direct correspondence

to their counterparts in the target image. Finally, each column is resampled to lengths

that match those of the radial paths in the target image. In general, these lengths will

vary due to asymmetric image boundaries.
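The first stage, normalizing every radial path to a common length, is ordinary 1-D resampling. A sketch using linear interpolation follows; the function name is hypothetical, and [Wolberg 88] does not prescribe this exact kernel:

```c
/* Resample an n-sample list src into an m-sample list dst by linear
 * interpolation, so that radial paths of differing lengths can be
 * normalized to one common length. */
void resample_length(const double *src, int n, double *dst, int m)
{
    int i, k;
    double t, f;

    for (i = 0; i < m; i++) {
        /* position of output sample i in input coordinates */
        t = (m == 1) ? 0.0 : (double)i * (n - 1) / (m - 1);
        k = (int)t;
        if (k >= n - 1) {            /* clamp at the last input sample */
            dst[i] = src[n - 1];
            continue;
        }
        f = t - k;                   /* fractional offset within [k, k+1] */
        dst[i] = (1.0 - f) * src[k] + f * src[k + 1];
    }
}
```

The same routine serves all three stages, applied first along paths, then across them, then along the target paths.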

The final image is generated by wrapping the resampled radial paths onto the target

shape. This procedure is identical to the previous peeling operation except that values

are now deposited onto the traversed pixels.

7.6.3. General 2-Pass Algorithm: Wolberg and Boult, 1989

Sampling an arbitrary forward mapping function yields a 2-D spatial lookup table. This specifies the output coordinates for all input pixels. A separable technique to implement this utility is of great practical importance. The chief complications arise from the bottleneck and foldover problems described earlier. These difficulties are addressed in [Wolberg 89b].

Wolberg and Boult propose a 2-pass algorithm for realizing arbitrary warps that are specified by spatial lookup tables. It is based on the solution of the three main difficulties of the Catmull-Smith algorithm: bottlenecking, foldovers, and the need for a closed-form inverse. In addition, it addresses some of the errors caused by filtering, especially those caused by insufficient resolution in the sampling of the mapping function. Through careful attention to efficiency and graceful degradation, the method is no more costly than the Catmull-Smith algorithm when bottlenecking and foldovers are not present. However, when these problems do surface, they are resolved at a cost proportional to their manifestation. Since the underlying data structures continue to facilitate pipelining, this method offers a promising hardware solution to the implementation of arbitrary spatial mapping functions. The details of this method are given in the next section.


7.7. Separable Image Warping

In this section, we describe an algorithm introduced by Wolberg and Boult that addresses the problems that are particular to 2-pass methods [Wolberg 89b]. The result is a separable approach that is general, accurate, and efficient, with graceful degradation for transformations of arbitrary complexity.

The goal of this work is to realize an arbitrary warp with a separable algorithm. The proposed technique is an extension of the Catmull-Smith approach where attention has been directed toward solutions to the bottleneck and foldover problems, as well as to the removal of any need for closed-form inverses. Consequently, the advantages of 1-D resampling are more fully exploited.

Conceptually, the algorithm consists of four stages: intensity resampling, coordinate resampling, distortion measurement, and compositing. Figure 7.30 shows the interaction of these components. Note that bold arrows represent the flow of images through a stage, and thin arrows denote those images that act upon the input. The subscripts x and y are appended to images that have been resampled in the horizontal and vertical directions, respectively.


Figure 7.30: Block diagram of the Wolberg-Boult algorithm.

The intensity resampler applies a 2-pass algorithm to the input image. Since the result may suffer bottleneck problems, the identical process is repeated with the transpose of the image. This accounts for the vertical symmetry of Fig. 7.30. Pixels which suffer excessive bottlenecking in the natural processing can be recovered in the transposed processing. In the actual implementation, transposition is realized as a 90° clockwise rotation so as to avoid the need to reorder pixels left to right.

The coordinate resampler computes spatial information necessary for the intensity resampler. It warps the spatial lookup table Y(u,v) so that the second pass of the intensity resampler can access it without the need for an inverse function.

Local measures of shearing, perspective distortion, and bottlenecking are computed to indicate the amount of information lost at each point. This information, together with the transposed and non-transposed results of the intensity resampler, is passed to the compositor. The final output image is generated by the compositor, which samples those pixels from the two resampled images such that information loss is minimized.
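The compositing rule itself is a per-pixel selection. A hypothetical sketch, assuming a scalar distortion estimate is available for each pixel of both the natural and transposed results:

```c
/* For each output pixel, keep the value from whichever processing
 * direction (natural or transposed) reports the smaller distortion,
 * i.e., the smaller information loss. Arrays are flattened row-major;
 * the distortion measure itself is assumed to be precomputed. */
void composite(const unsigned char *nat, const double *nat_err,
               const unsigned char *tra, const double *tra_err,
               int n, unsigned char *out)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = (nat_err[i] <= tra_err[i]) ? nat[i] : tra[i];
}
```

Blending rather than selecting is an obvious variant, at the cost of softening pixels where one direction is clearly superior.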

7.7.1. Spatial Lookup Tables

Scanline algorithms generally express the coordinate transformation in terms of forward mapping functions X and Y. Sampling X and Y over all input points yields two new real-valued images, XLUT and YLUT, specifying the point-to-point mapping from each pixel in the input image onto the output images. XLUT and YLUT are referred to as spatial lookup tables since they can be viewed as 2-D tables that express a spatial transformation.

In addition to XLUT and YLUT, a mechanism is also provided for the user to specify ZLUT, which associates a z-coordinate value with each pixel. This allows warping of planar textures onto non-planar surfaces and is useful in dealing with foldovers. The z-coordinates are assumed to be from a particular point of view that the user determines before supplying ZLUT to the system.

The motivation for introducing spatial lookup tables is generality. The goal is to find a serial warp equivalent to any given parallel warp. Thus, it is impossible to retain the mathematical elegance of closed-form expressions for the mapping functions F, G, and the auxiliary function, H. Therefore, assuming the forward mapping functions, X and Y, have closed-form expressions seems overly restrictive. Instead, the authors assume that the parallel warp is defined by the samples that comprise the spatial lookup tables. This provides a general means of specifying arbitrary mapping functions.

For each pixel (u,v) in input image I, spatial lookup tables XLUT, YLUT, and ZLUT are indexed at location (u,v) to determine the corresponding (x,y,z) position of the input point after warping. This new position is orthographically projected onto the output image. Therefore, (x,y) is taken to be the position in the output image. (Of course, a perspective projection may be included as part of the warp.) The z-coordinate will only be used to resolve foldovers. This straightforward indexing applies only if the dimensions of I, XLUT, YLUT, and ZLUT are all identical. If this is not the case, then the smaller images are upsampled (magnified) to match the largest dimensions.
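The indexing scheme can be sketched as a simple forward-mapping loop. The version below splats each input pixel to its nearest output pixel and, for brevity, omits filtering and foldover handling; it is an illustration, not the authors' implementation:

```c
/* Forward map input image I (h x w, row-major) through spatial lookup
 * tables XLUT and YLUT: pixel (u,v) lands at (XLUT[v][u], YLUT[v][u]),
 * rounded to the nearest output pixel. Out-of-range targets are dropped. */
void forward_map(const unsigned char *I, const double *XLUT,
                 const double *YLUT, int h, int w, unsigned char *out)
{
    int u, v, x, y;

    for (v = 0; v < h; v++) {
        for (u = 0; u < w; u++) {
            x = (int)(XLUT[v * w + u] + 0.5);
            y = (int)(YLUT[v * w + u] + 0.5);
            if (x >= 0 && x < w && y >= 0 && y < h)
                out[y * w + x] = I[v * w + u];
        }
    }
}
```

The separable algorithm replaces this single 2-D splat with two filtered 1-D passes, which is precisely where the bottleneck and foldover issues arise.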

7.7.2. Intensity Resampling

The spatial lookup tables determine how much compression and stretching each pixel undergoes. The actual intensity resampling is implemented by using a technique similar to that proposed in [Fant 86]. As described earlier, this method exploits the benefits of operating in scanline order. As a result, it is well-suited for hardware implementation and remains compatible with spatial lookup tables.


The 1-D intensity resampler is applied to the image in two passes, each along orthogonal directions. The first pass resamples horizontal scanlines, warping pixels along a row in the intermediate image. Its purpose is to deposit them into the proper columns for vertical resampling. At that point, the second pass is applied to all columns in the intermediate image, generating the output image.

In Fig. 7.30, input image I is shown warped according to XLUT to generate intermediate image Ix. In order to apply the second pass, YLUT is warped alongside I, yielding YLUTx. This resampled spatial lookup table is applied to Ix in the second pass as a collection of 1-D vertical warps. The result is the output image.

The intensity resampling stage must allow multiple output values to be defined in case of foldovers. This is an important implementation detail that has impact on the memory requirements of the algorithm. We defer discussion of this aspect of the intensity resampler until Section 7.7.5, where foldovers are discussed in more detail.

7.7.3. Coordinate Resampling

YLUTx is computed in the coordinate resampling stage depicted in the second row of the block diagram in Fig. 7.30. The ability to resample YLUT for use in the second pass has important consequences: it circumvents the need for a closed-form inverse of the first pass. As briefly pointed out in [Catmull 80], that inverse provides exactly the same information that was available as the first pass was computed, i.e., the u-coordinate associated with a pixel in the intermediate image. Thus, instead of computing the inverse to index into YLUT, we simply warp YLUT into YLUTx, allowing direct access in the second pass.

The coordinate resampler is similar to the intensity resampler. It differs only in the notable absence of antialiasing filtering: the output coordinate values in YLUTx are computed by point sampling YLUT. Interpolation is used to compute values when no input data are supplied at the resampling locations. However, unlike the intensity resampler, the coordinate resampler neither weights the result with its area coverage nor averages it with the coordinate values of other contributions to that pixel. This serves to secure the accuracy of edge coordinates, even when the edge occupies only a partial output pixel.
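A hypothetical 1-D fragment illustrates the distinction: one row of YLUT is warped into YLUTx by depositing each entry at its first-pass x position and linearly interpolating coordinate values across any gap, with no area weighting or averaging; names and layout here are illustrative only:

```c
/* Warp one row of YLUT (n entries) into YLUTx (outlen entries) by point
 * sampling: map[u] is the first-pass x position of input column u. Gaps
 * between consecutive landing positions are filled by linear interpolation
 * of the coordinate values; nothing is area-weighted or averaged, so edge
 * coordinates remain exact. */
void warp_coord_row(const double *map, const double *ylut, int n,
                    double *ylutx, int outlen)
{
    int u, x, x0, x1;
    double f;

    for (u = 0; u < n - 1; u++) {
        x0 = (int)(map[u] + 0.5);
        x1 = (int)(map[u + 1] + 0.5);
        for (x = x0; x <= x1 && x < outlen; x++) {
            f = (x1 == x0) ? 0.0 : (double)(x - x0) / (x1 - x0);
            ylutx[x] = (1.0 - f) * ylut[u] + f * ylut[u + 1];
        }
    }
}
```

An intensity resampler covering the same span would instead filter and weight contributions, which is exactly what must be avoided for coordinates.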

7.7.4. Distortions and Errors

In forward mapping, input pixels are taken to be squares that map onto arbitrary quadrilaterals in the output image. Although separable mappings greatly simplify resampling by treating pixels as points along scanlines, the measurement of distortion must necessarily revert to 2-D to consider the deviation of each input pixel as it projects onto the output.

As is standard, we treat the mapping of a square onto a general quadrilateral as a combination of translation, scaling, shearing, rotation, and perspective transformations. Inasmuch as separable kernels exist for realizing translations and scale changes, these transformations do not suffer degradation in scanline algorithms and are not considered further. Shear, perspective, and rotations, however, offer significant challenges to the 2-pass approach. In particular, excessive shear and perspective contribute to aliasing problems while rotations account for the bottleneck problem.

We first examine the errors introduced by separable filtering. We then address the three sources of geometric distortion for 2-pass scanline algorithms: shear, perspective, and rotation.

Filtering Errors

One of the sources of error for scanline algorithms comes from the use of cascaded orthogonal 1-D filtering. Let us ignore rotation for a moment, and assume we process the image left-to-right and top-to-bottom. Then one can easily show that scanline algorithms will, in the first pass, filter a pixel based only on the horizontal coverage of its top segment. In the second pass, they will filter based only on the vertical coverage of the left-hand segment of the input pixel. As a result, a warped pixel generating a quadrilateral at the output pixel is always approximated by a rectangle (Fig. 7.31). Note this can be either an overestimate or an underestimate, and the error depends on the direction of processing. This problem is not unique to our approach. It is shared by all scanline algorithms.

Figure 7.31: Examples of filtering errors.

Shear

Figure 7.32 depicts a set of spatial lookup tables that demonstrate horizontal shear. For simplicity, the example includes no scaling or rotation. The figure also shows the result obtained after applying the tables to an image of constant intensity (100). The horizontal shear is apparent in the form of jagged edges between adjacent rows.

Scanline algorithms are particularly sensitive to this form of distortion because proper filtering is applied only along scanlines; filtering issues across scanlines are not considered. Consequently, horizontal (vertical) shear is a manifestation of aliasing along the vertical (horizontal) direction.