**Output:**
**Conclusion: **Thus we have studied how to draw a line using Bresenham's algorithm.
**Lab Exercise 6 **
**Title - Program using 2-D Transformation**
**Objective: **To study basic transformations and how to apply them to graphics objects.
**Theory:**
Geometric image transformation functions use mathematical transformations to crop,
pad, scale, rotate, transpose or otherwise alter an image array to produce a modified view of
an image. A transformation thus is the process of mapping points to other locations. Common
transformations are Translation, Scaling and Rotation.
When an image undergoes a geometric transformation, some or all of the pixels within the source image are relocated from their original spatial coordinates to a new position in the output image. When a relocated pixel does not map directly onto the center of a pixel location, but falls somewhere in between the centers of pixel locations, the pixel's value is computed by sampling the values of the neighboring pixels.
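The resampling step described above is most often done with bilinear interpolation: the output value is a weighted average of the four source pixels surrounding the mapped position. A minimal sketch (the image layout and sample point are illustrative, not taken from the lab code):

```c
#include <assert.h>

/* Bilinear sample of a grayscale image at a fractional position (x, y).
   img is row-major with the given width; the caller guarantees the four
   neighboring pixels are inside the image. */
double bilinearSample(const double *img, int width, double x, double y)
{
    int x0 = (int)x, y0 = (int)y;        /* top-left neighbor */
    double fx = x - x0, fy = y - y0;     /* fractional offsets in [0, 1) */
    double p00 = img[y0 * width + x0];
    double p10 = img[y0 * width + x0 + 1];
    double p01 = img[(y0 + 1) * width + x0];
    double p11 = img[(y0 + 1) * width + x0 + 1];
    /* blend the two rows horizontally, then blend vertically */
    double top    = p00 + fx * (p10 - p00);
    double bottom = p01 + fx * (p11 - p01);
    return top + fy * (bottom - top);
}
```

Sampling exactly halfway between four pixels returns their plain average.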
**Basic 2D Transforms**
**Translation**
This transformation changes the position of an object, moving every point the same distance along a straight-line path: x' = x + tx, y' = y + ty, where (tx, ty) is the translation distance.
Fig. Point translation
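As a sketch, translation can be applied to a point directly; the offsets used below are illustrative:

```c
#include <assert.h>

/* Translate a 2D point by (tx, ty): x' = x + tx, y' = y + ty. */
void translatePoint(double *x, double *y, double tx, double ty)
{
    *x += tx;
    *y += ty;
}
```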
**Scaling**
This transformation changes the length, and possibly the direction, of a vector.
A vector with Cartesian coordinates (x, y) is transformed to (x', y') using scaling factors Sx and Sy for x and y respectively: x' = Sx · x, y' = Sy · y.
Fig. Triangle Scaling
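A minimal sketch of scaling a single point about the origin (scale factors below are illustrative):

```c
#include <assert.h>

/* Scale a 2D point about the origin: x' = Sx * x, y' = Sy * y.
   Scaling about any other fixed point is a translate-scale-translate
   sequence. */
void scalePoint(double *x, double *y, double sx, double sy)
{
    *x *= sx;
    *y *= sy;
}
```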
**Rotation**
In matrix form, the transformation that rotates a point *a* counter-clockwise by an angle θ to *b* is b = R(θ)·a, where
R(θ) = | cos θ   −sin θ |
       | sin θ    cos θ |
For example, the matrix that rotates vectors by π/4 radians (45 degrees) is
R(π/4) = | √2/2   −√2/2 |
         | √2/2    √2/2 |
Fig. Triangle rotation
**Shearing**
A shear displaces points parallel to one axis by an amount proportional to the other coordinate, "pushing" the object sideways.
The horizontal and vertical shear matrices are
Hx = | 1   shx |        Hy = | 1     0 |
     | 0    1  |             | shy   1 |
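Both shears reduce to a single multiply-add per point; a minimal sketch (shear factors in the test are illustrative):

```c
#include <assert.h>

/* Horizontal shear: x' = x + shx * y,  y' = y.
   Vertical shear:   x' = x,            y' = y + shy * x. */
void shearX(double *x, double *y, double shx) { *x += shx * (*y); }
void shearY(double *x, double *y, double shy) { *y += shy * (*x); }
```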
__Program to implement basic transformations__
#include <windows.h> // for MS Windows
#include <GL/glut.h> // GLUT, includes glu.h and gl.h
/* Initialize OpenGL Graphics */
void initGL()
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Black and opaque window color
}
/* Handler for window-repaint event. Call back when the window first appears and
whenever the window needs to be re-painted. */
void display()
{
glClear(GL_COLOR_BUFFER_BIT); // Clear the color buffer
glMatrixMode(GL_MODELVIEW); // To operate on Model-View matrix
glLoadIdentity(); // Reset the model-view matrix
glTranslatef(-0.5f, 0.4f, 0.0f); // Translate left and up
glBegin(GL_QUADS); // Each set of 4 vertices form a quad
glColor3f(1.0f, 0.0f, 0.0f); // Red
glVertex2f(-0.3f, -0.3f); // Define vertices in counter-clockwise (CCW) order
glVertex2f( 0.3f, -0.3f); //so that the normal (front-face) is facing you
glVertex2f( 0.3f, 0.3f);
glVertex2f(-0.3f, 0.3f);
glEnd();
glTranslatef(0.1f, -0.7f, 0.0f); // Translate right and down
glBegin(GL_QUADS); // Each set of 4 vertices form a quad
glColor3f(0.0f, 1.0f, 0.0f); // Green
glVertex2f(-0.3f, -0.3f);
glVertex2f( 0.3f, -0.3f);
glVertex2f( 0.3f, 0.3f);
glVertex2f(-0.3f, 0.3f);
glEnd();
glTranslatef(-0.3f, -0.2f, 0.0f); // Translate left and down
glBegin(GL_QUADS); // Each set of 4 vertices form a quad
glColor3f(0.2f, 0.2f, 0.2f); // Dark Gray
glVertex2f(-0.2f, -0.2f);
glColor3f(1.0f, 1.0f, 1.0f); // White
glVertex2f( 0.2f, -0.2f);
glColor3f(0.2f, 0.2f, 0.2f); // Dark Gray
glVertex2f( 0.2f, 0.2f);
glColor3f(1.0f, 1.0f, 1.0f); // White
glVertex2f(-0.2f, 0.2f);
glEnd();
glTranslatef(1.1f, 0.2f, 0.0f); // Translate right and up
glBegin(GL_TRIANGLES); // Each set of 3 vertices form a triangle
glColor3f(0.0f, 0.0f, 1.0f); // Blue
glVertex2f(-0.3f, -0.2f);
glVertex2f( 0.3f, -0.2f);
glVertex2f( 0.0f, 0.3f);
glEnd();
glTranslatef(0.2f, -0.3f, 0.0f); // Translate right and down
glRotatef(180.0f, 0.0f, 0.0f, 1.0f); // Rotate 180 degrees about the z-axis
glBegin(GL_TRIANGLES); // Each set of 3 vertices form a triangle
glColor3f(1.0f, 0.0f, 0.0f); // Red
glVertex2f(-0.3f, -0.2f);
glColor3f(0.0f, 1.0f, 0.0f); // Green
glVertex2f( 0.3f, -0.2f);
glColor3f(0.0f, 0.0f, 1.0f); // Blue
glVertex2f( 0.0f, 0.3f);
glEnd();
glScalef(1.5f, 1.5f, 1.0f); // Scale up by 1.5 times in x and y
//glTranslatef(-0.1f, 1.0f, 0.0f);
glBegin(GL_POLYGON); // The vertices form one closed polygon
glColor3f(1.0f, 1.0f, 0.0f); // Yellow
glVertex2f(-0.1f, -0.2f);
glVertex2f( 0.1f, -0.2f);
glVertex2f( 0.2f, 0.0f);
glVertex2f( 0.1f, 0.2f);
glVertex2f(-0.1f, 0.2f);
glVertex2f(-0.2f, 0.0f);
glEnd();
glFlush(); // Render now
}
/* Handler for window re-size event. Called back when the window first appears and
whenever the window is re-sized with its new width and height */
void reshape(GLsizei width, GLsizei height)
{
// Compute aspect ratio of the new window
if (height == 0) height = 1; // To prevent divide by 0
GLfloat aspect = (GLfloat)width / (GLfloat)height;
// Set the viewport to cover the new window
glViewport(0, 0, width, height);
// Set the aspect ratio of the clipping area to match the viewport
glMatrixMode(GL_PROJECTION); // To operate on the Projection matrix
glLoadIdentity();
if (width >= height) {
// aspect >= 1, set the height from -1 to 1, with larger width
gluOrtho2D(-1.0 * aspect, 1.0 * aspect, -1.0, 1.0);
}
else
{
// aspect < 1, set the width to -1 to 1, with larger height
gluOrtho2D(-1.0, 1.0, -1.0 / aspect, 1.0 / aspect);
}
}
/* Main function: GLUT runs as a console application starting at main() */
int main(int argc, char** argv)
{
glutInit(&argc, argv); // Initialize GLUT
glutInitWindowSize(640, 480); // Set the window's initial width & height - non-square
glutInitWindowPosition(50, 50); // Position the window's initial top-left corner
glutCreateWindow("Model Transform"); // Create window with the given title
glutDisplayFunc(display); // Register callback handler for window re-paint event
glutReshapeFunc(reshape); // Register callback handler for window re-size event
initGL(); // Our own OpenGL initialization
glutMainLoop(); // Enter the infinite event-processing loop
return 0;
}
**Output**
**Conclusion: **Thus we have studied how transformations can be applied to objects.
**Lab Exercise 7**
**Title-Program for polygon filling using flood fill method.**
**Objective: **To study how polygons are filled using flood fill algorithm.
**Theory:**
Polygon: A polygon can be defined as an image which consists of a finite ordered set of straight boundaries called edges.
The polygon can also be defined by an ordered sequence of vertices, i.e. the corners of the polygon. The edges of the polygon are then obtained by traversing the vertices in the given order.
Two consecutive vertices define one edge. The polygon can be closed by connecting the last vertex to the first.
**Types of Polygons**
Polygons are classified according to where the line segment joining any two
points within the polygon lies. There are two types of polygons:
A convex polygon is a polygon in which the line segment joining any two points
within the polygon lies completely inside the polygon.
Example:
A concave polygon is a polygon in which the line segment joining any two points
within the polygon may not lie completely inside the polygon.
Example:
This method is used in interactive paint systems.
The user specifies a seed point by pointing to the interior of the region to initiate the flood operation.
**Algorithm**
Sometimes it is required to fill in an area that is not defined within a single color
boundary. In such cases we can fill areas by replacing a specified interior color
instead of searching for a boundary color. This approach is called a flood-fill
algorithm. Like boundary fill algorithm, here we start with some seed and examine
the neighboring pixels. However, here pixels are checked for a specified interior
color instead of a boundary color, and they are replaced by the new color. Using either a
4-connected or 8-connected approach, we can step through pixel positions until all
interior points have been filled.
**Flood-Fill Algorithm**
The following procedure illustrates the recursive method for filling 4-connected region using flood-fill algorithm.
void floodFill4 (int x, int y, int fillColor, int oldColor)
{
    if (getPixel (x, y) == oldColor)
    {
        setColor (fillColor);
        setPixel (x, y);
        floodFill4 (x + 1, y, fillColor, oldColor);
        floodFill4 (x - 1, y, fillColor, oldColor);
        floodFill4 (x, y + 1, fillColor, oldColor);
        floodFill4 (x, y - 1, fillColor, oldColor);
    }
}
The following procedure illustrates the recursive method for filling an 8-connected region using the flood-fill algorithm; it adds the four diagonal neighbors to the four used above.
void floodFill8 (int x, int y, int fillColor, int oldColor)
{
    if (getPixel (x, y) == oldColor)
    {
        setColor (fillColor);
        setPixel (x, y);
        floodFill8 (x + 1, y, fillColor, oldColor);
        floodFill8 (x - 1, y, fillColor, oldColor);
        floodFill8 (x, y + 1, fillColor, oldColor);
        floodFill8 (x, y - 1, fillColor, oldColor);
        floodFill8 (x + 1, y + 1, fillColor, oldColor);
        floodFill8 (x - 1, y - 1, fillColor, oldColor);
        floodFill8 (x + 1, y - 1, fillColor, oldColor);
        floodFill8 (x - 1, y + 1, fillColor, oldColor);
    }
}
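The routines above assume graphics helpers such as getPixel/setPixel. A self-contained way to see the algorithm work is to flood-fill a small in-memory grid instead of real pixels; the grid size and color values below are illustrative:

```c
#include <assert.h>

#define W 5
#define H 5
static int grid[H][W];   /* stand-in "frame buffer", 0 = background */

/* 4-connected flood fill on the grid, with bounds checking added
   (real frame-buffer code relies on the boundary colors instead). */
void floodFill4Demo(int x, int y, int fillColor, int oldColor)
{
    if (x < 0 || x >= W || y < 0 || y >= H) return;  /* stay in bounds */
    if (grid[y][x] != oldColor) return;              /* boundary or filled */
    grid[y][x] = fillColor;
    floodFill4Demo(x + 1, y, fillColor, oldColor);
    floodFill4Demo(x - 1, y, fillColor, oldColor);
    floodFill4Demo(x, y + 1, fillColor, oldColor);
    floodFill4Demo(x, y - 1, fillColor, oldColor);
}
```

Drawing a vertical "wall" of boundary pixels down the middle and seeding on the left fills only the left region, exactly as the seed-fill description above predicts.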
**Fig. Flood fill method**
**Output:**
**Conclusion: **Thus we have studied how to fill polygons using the flood fill method.
**Lab Exercise 8**
**Title - Drawing lines and displaying text as part of a picture**
**Objective: **To study how to display text with graphics image.
**Theory:**
A bitmap font is basically a 2D font. Although we'll place it in a 3D world, these fonts have no thickness and can't be rotated or scaled, only translated. Furthermore, the font will always face the viewer, like a billboard. Although this can be seen as a potential disadvantage, on the other hand we won't have to worry about orienting the font to face the viewer.
In this section we'll present the GLUT functions to put some bitmapped text on the screen. Basically, you just need one function: *glutBitmapCharacter*. The syntax is as follows:
void glutBitmapCharacter(void *font, int character)
Parameters:
font - the name of the font to use (see below for a list of what's available)
character - what to render, a letter, symbol, number, etc...
The font options available are:
GLUT_BITMAP_8_BY_13
GLUT_BITMAP_9_BY_15
GLUT_BITMAP_TIMES_ROMAN_10
GLUT_BITMAP_TIMES_ROMAN_24
GLUT_BITMAP_HELVETICA_10
GLUT_BITMAP_HELVETICA_12
GLUT_BITMAP_HELVETICA_18
The following line of text exemplifies a call to the *glutBitmapCharacter *function to output a single character at the current raster position:
glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, '3');
One important thing to know is what is the actual raster position. The raster position can be set with the family of functions *glRasterPos *from the OpenGL library, the syntax of two functions from this family is presented below.
void glRasterPos2f(float x, float y);
void glRasterPos3f(float x, float y, float z);
Parameters:
x, y, z - local coordinates for the text to appear
The function *glutBitmapCharacter *renders the character at the required position and advances the current raster position by the width of the character. Therefore, to render a string, successive calls to *glutBitmapCharacter *will suffice to achieve the desired output.
The following function renders a string starting at the specified raster position:
void renderBitmapString(float x, float y, float z, void *font, char *string)
{
    char *c;
    glRasterPos3f(x, y, z);
    for (c = string; *c != '\0'; c++) {
        glutBitmapCharacter(font, *c);
    }
}
Four Approaches to Drawing Text (Fonts) in OpenGL
Use Bitmaps
You can use bitmaps, not the kind that uses an image file, but a particular OpenGL construct. This is the approach used above. Each character is represented as a bitmap. Each pixel in the bitmap has a bit, which is 1 if the pixel is colored and 0 if it is transparent. Each frame, you'd send the bitmaps for the characters to the graphics card. The graphics card would then bypass the usual 3D transformations and just draw the pixels right on the top of the window. I'm not a fan of this approach. It's slow, as you have to send each bitmap to the graphics card each frame, which is a lot of data. The method is also inflexible; you can't scale or transform the characters very well. The documentation for glutBitmapCharacter is at: http://www.opengl.org/documentation/specs/glut/spec3/node75.html.
Use Textures
You can represent characters using textures. Each character would correspond to a certain part of some texture, with some of the pixels in the texture white and the rest transparent. You would draw a quadrilateral for each character and map the appropriate part of the appropriate texture to it. This approach is alright; it gives you some flexibility as to how and where you draw characters in 3D. It's also pretty fast. But the characters wouldn't scale too well; they'll look pixelated if you zoom in too far.
Draw Lines
You can draw a bunch of lines in 3D, using GL_LINES. This technique is fast and does allow scaling and otherwise transforming characters. However, the characters would look better if they covered an area rather than a perimeter. Also, it's fairly tedious to figure out a set of lines to represent each character. You can draw outlined text in GLUT using glutStrokeCharacter, whose documentation is at this site.
Draw Polygons
You can draw a bunch of polygons in 3D. This technique also allows us to transform characters well. It even lets us give the characters 3D depth, so that they look 3D rather than flat. However, it's slower than drawing lines and using textures. Also, it's even more annoying to figure out how to describe each character as a set of polygons than it is to figure out how to describe one as a set of lines.
#include <windows.h> // for MS Windows
#include <GL/glut.h> // GLUT, includes glu.h and gl.h
#include <string.h> // for strlen()
char *str = "My name";
void display()
{
int i;
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0,0.0,0.0);
glBegin(GL_LINES);
glVertex2f(0.0,0.0);
glColor3f(0.0,1.0,0.0);
glVertex2f(0.0,0.5);
glEnd();
glColor3f(1.0,0.0,0.0);
glRasterPos2f(0.0,0.0); // set the raster position of the character
// two character types are available: bitmap and stroke
glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18,'c');
glColor3f(0.0,1.0,1.0);
glRasterPos2f(0.5,0.0);
for(i=0;i<strlen(str);i++) // font type, character to be displayed
glutBitmapCharacter(GLUT_BITMAP_TIMES_ROMAN_24,str[i]);
glFlush();
}
void myinit()
{
glClearColor(0.0,0.0,0.0,1.0);
gluOrtho2D(-1.0,1.0,-1.0,1.0);
}
int main(int argc, char **argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_RGB|GLUT_SINGLE);
glutInitWindowSize(500,500);
glutInitWindowPosition(0,0);
glutCreateWindow("Simple demo");
myinit();
glutDisplayFunc(display);
glutMainLoop();
return 0;
}
**Output**
**Conclusion: **Thus we understand that text can also be printed along with
graphics objects.
**Lab Exercise 9**
**Title-Program for Cohen Sutherland Line-clipping algorithm**
**Objective: **To study the Cohen-Sutherland line clipping algorithm.
**Theory:**
Program to implement the Cohen-Sutherland line-clipping algorithm. Make provision to specify the input line, window for clipping and viewport for displaying the clipped image.
Algorithm at work:
Line-Clipping
In computer graphics, line clipping is the process of removing lines or portions of lines outside of an area of interest. Typically, any line or part thereof which is outside of the viewing area is removed.
Cohen-Sutherland Line-Clipping algorithm:
This algorithm divides a 2D space into 9 parts, of which only the middle part (viewport) is visible. The algorithm includes, excludes or partially includes the line based on where the two endpoints are:
Both endpoints are in the viewport (bitwise OR of endpoints == 0): trivial accept.
Both endpoints are in the same part, which is not visible (bitwise AND of endpoints != 0): trivial reject.
Both endpoints are in different parts: In case of this non trivial situation the algorithm finds one of the two points that are outside the viewport (there is at least one point outside). The intersection of the outpoint and extended viewport border is then calculated (i.e. with the parametric equation for the line) and this new point replaces the outpoint. The algorithm repeats until a trivial accept or reject occurs.
Steps for Cohen-Sutherland Algorithm
1. End-points pairs are checked for trivial acceptance or rejection using outcode (region code, each of the 9 parts are assigned a 4 bit code indicating their location with respect to the window/ region of interest).
2. If not trivially accepted or rejected, divide the line segment into two at a clip edge;
3. Iteratively clipped by test trivial-acceptance or trivial-rejection, and divided into two segments until completely inside or trivial-rejection.
Alternate description of the algorithm:
1. Encode end points
Bit 0 = point is left of window
Bit 1 = point is right of window
Bit 2 = point is below window
Bit 3 = point is above window
2. If C0 AND Cend ≠ 0 then P0Pend is trivially rejected
3. If C0 OR Cend = 0 then P0Pend is trivially accepted
4. Otherwise subdivide and go to step 1 with the new segment
where C0 = bit code of P0 and Cend = bit code of Pend
Clip order: Left, Right, Bottom, Top
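The outcode test and the subdivision step can be sketched as a complete clipping routine. The window bounds are illustrative, and the outside endpoint is clipped against the edges in the Left, Right, Bottom, Top order given above:

```c
#include <assert.h>

/* Outcode bits: one bit per side of the clip window. */
enum { LEFT = 1, RIGHT = 2, BELOW = 4, ABOVE = 8 };

/* Illustrative clip window. */
static const double XMIN = 0, XMAX = 10, YMIN = 0, YMAX = 10;

int outcode(double x, double y)
{
    int code = 0;
    if (x < XMIN) code |= LEFT;  else if (x > XMAX) code |= RIGHT;
    if (y < YMIN) code |= BELOW; else if (y > YMAX) code |= ABOVE;
    return code;
}

/* Clip segment (x0,y0)-(x1,y1) against the window in place.
   Returns 1 if some part of the segment is visible, 0 if rejected. */
int cohenSutherlandClip(double *x0, double *y0, double *x1, double *y1)
{
    int c0 = outcode(*x0, *y0), c1 = outcode(*x1, *y1);
    for (;;) {
        if (!(c0 | c1)) return 1;      /* both inside: trivial accept */
        if (c0 & c1) return 0;         /* same outside region: trivial reject */
        int cOut = c0 ? c0 : c1;       /* pick an endpoint that is outside */
        double x, y;
        /* intersect with the window edge, clip order Left, Right, Bottom, Top */
        if (cOut & LEFT)       { y = *y0 + (*y1 - *y0) * (XMIN - *x0) / (*x1 - *x0); x = XMIN; }
        else if (cOut & RIGHT) { y = *y0 + (*y1 - *y0) * (XMAX - *x0) / (*x1 - *x0); x = XMAX; }
        else if (cOut & BELOW) { x = *x0 + (*x1 - *x0) * (YMIN - *y0) / (*y1 - *y0); y = YMIN; }
        else /* ABOVE */       { x = *x0 + (*x1 - *x0) * (YMAX - *y0) / (*y1 - *y0); y = YMAX; }
        if (cOut == c0) { *x0 = x; *y0 = y; c0 = outcode(*x0, *y0); }
        else            { *x1 = x; *y1 = y; c1 = outcode(*x1, *y1); }
    }
}
```

Each iteration replaces one outside endpoint with its intersection on a window edge, so the loop must terminate in a trivial accept or reject.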
Example clipping sequences for three lines:
Line 1: 1) A1C1  2) B1C1  3) reject
Line 2: 1) A2E2  2) B2E2  3) B2D2  4) B2C2  5) accept
Line 3: 1) A3D3  2) A3C3  3) A3B3  4) accept
**Output**
**Conclusion: **Thus we have studied how to clip lines using the Cohen-Sutherland algorithm.
**Lab Exercise 10**
**Title - To study OpenGL Transformations**
**Objective: **To study OpenGL transformation matrices and concatenation of transformations.
**Theory:**
**Related Topics: **OpenGL Pipeline, OpenGL Projection Matrix, Homogeneous Coordinates
**Overview**
· OpenGL Transform Matrix
· Example: GL_MODELVIEW Matrix
· Example: GL_PROJECTION Matrix
**Overview**
Geometric data such as vertex positions and normal vectors are transformed via the **Vertex**
**Operation **and **Primitive Assembly **stages of the OpenGL pipeline before the rasterization
process.
OpenGL vertex transformation
**Object Coordinates**
Object coordinates form the local coordinate system of an object: the initial position and orientation of the object before any transform is applied. To transform objects, use glRotatef(), glTranslatef() and glScalef().
**Eye Coordinates**
Eye coordinates are produced by multiplying the GL_MODELVIEW matrix with object coordinates; objects are thus transformed from object space to eye space. The **GL_MODELVIEW **matrix is a combination of the Model and View matrices. The Model transform converts from object space to world space, and the View transform converts from world space to eye space.
Note that there is no separate camera (view) matrix in OpenGL. Therefore, in order to simulate transforming the camera or view, the scene (3D objects and lights) must be transformed with the inverse of the view transformation. In other words, OpenGL defines that the camera is always located at (0, 0, 0) and facing to -Z axis in the eye space coordinates, and cannot be transformed. *See more details of GL_MODELVIEW matrix in ModelView Matrix*.
Normal vectors are also transformed from object coordinates to eye coordinates for the lighting calculation. Note that normals are transformed differently from vertices: a normal vector is multiplied by the transpose of the inverse of the GL_MODELVIEW matrix.
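For the simplest case of a pure non-uniform scale diag(sx, sy, sz), the inverse transpose is just diag(1/sx, 1/sy, 1/sz). The sketch below (the vectors and scale factors are illustrative) verifies that a normal transformed by this rule stays perpendicular to the transformed surface, whereas naively scaling the normal like a vertex would not:

```c
#include <assert.h>

/* Transform a normal under a pure non-uniform scale diag(sx, sy, sz)
   using the inverse-transpose rule: n' = (nx/sx, ny/sy, nz/sz). */
void transformNormalScale(double n[3], double sx, double sy, double sz)
{
    n[0] /= sx;
    n[1] /= sy;
    n[2] /= sz;
}
```

For a surface tangent (1, 1, 0) with normal (1, -1, 0), scaling by (2, 1, 1) maps the tangent to (2, 1, 0); the inverse-transpose rule maps the normal to (0.5, -1, 0), which is still perpendicular, while the naively scaled normal (2, -1, 0) is not.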