
CUSTOMER_CODE

SMUDE

DIVISION_CODE

SMUDE

EVENT_CODE

OCTOBER15

ASSESSMENT_CODE

BC5902_OCTOBER15



QUESTION_TYPE

DESCRIPTIVE_QUESTION

QUESTION_ID

6148

QUESTION_TEXT

Explain the TIFF (Tagged Image File Format) image format.

SCHEME OF EVALUATION

1. Tagged Image File Format (TIFF) is a variable-resolution bitmapped image format developed by Aldus (now part of Adobe) in 1986. TIFF is very common for transporting color or gray-scale images into page layout applications, but is less suited to delivering web content. (2 marks)
2.TIFF files are large and are of very high quality. Baseline TIFF images are highly portable; most graphics, desktop publishing, and word processing applications understand them.   (1 mark)
3.The TIFF specification is readily extensible, though this comes at the price of some of its portability. Many applications incorporate their own extensions, but a number of application-independent extensions are recognized by most programs. (2 marks)
4.Four types of baseline TIFF images are available: bi-level (black and white), gray scale, palette (i.e. indexed), and RGB (i.e. true color). RGB images may store up to 16.7 million colors. Palette and Gray-Scale images are limited to 256 colors or shades. A common extension of TIFF also allows for CMYK images. (2 marks)
5. TIFF files may or may not be compressed. A number of methods may be used to compress TIFF files, including the Huffman and LZW algorithms. Even compressed, TIFF files are usually much larger than similar GIF or JPEG files. (2 marks)
6.Because the files are so large and because there are so many possible variations of each TIFF file type, few web browsers can display them without plug-ins. (1 mark)
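The "tagged" structure begins with a fixed 8-byte header: a byte-order mark ("II" for little-endian, "MM" for big-endian), the magic number 42, and the offset of the first image file directory. A minimal sketch of parsing that header with Python's standard `struct` module (an illustration assuming only the published TIFF 6.0 header layout, not any particular library):

```python
import struct

def read_tiff_header(data: bytes):
    """Parse the 8-byte TIFF header: byte order, magic number 42, first IFD offset."""
    order = data[:2]
    if order == b"II":
        endian = "<"   # little-endian (Intel byte order)
    elif order == b"MM":
        endian = ">"   # big-endian (Motorola byte order)
    else:
        raise ValueError("not a TIFF file")
    magic, ifd_offset = struct.unpack(endian + "HI", data[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    return ("little" if endian == "<" else "big", ifd_offset)

# A minimal little-endian header: "II", magic 42, first IFD at offset 8
header = b"II" + struct.pack("<HI", 42, 8)
print(read_tiff_header(header))  # ('little', 8)
```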



QUESTION_TYPE

DESCRIPTIVE_QUESTION

QUESTION_ID

6149

QUESTION_TEXT

Explain briefly the concept of fundamental steps in digital image processing.

SCHEME OF EVALUATION

1. Image acquisition is the first process, and it requires an imaging sensor and the capability to digitize the signal produced by the sensor. Generally, the image acquisition stage involves preprocessing, such as scaling. (1 Mark)
2. Image Enhancement is among the simplest and the most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. (1 Mark)
3. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. (1 Mark)
4. Color Image Processing is an area that has been gaining importance because of the significant increase in the use of digital images over the Internet. Color is used as the basis for extracting features of interest in an image (1 Mark)
5. Wavelets are the foundation for representing images in various degrees of resolution. This is used for image data compression and for pyramidal representation, in which images are successively subdivided into smaller regions. (1 Mark)
6. Compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Image compression is familiar to most users of computers in the form of image file extensions (1 Mark)
7. Morphological Processing deals with tools for extracting image components that are useful in the representation and description of shape. (1 Mark)
8. Segmentation procedures partition an image into its constituent parts or objects. The output of the segmentation stage usually is raw pixel data, constituting either the boundary of a region or all the points in the region itself. (1 Mark)
9. In the representation and description stage, the first decision that must be made is whether the data should be represented as a boundary or as a complete region. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another. (1 Mark)
10. Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. (1 Mark)
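A few of these stages can be illustrated on a tiny gray-scale array. A toy sketch in NumPy (the contrast-stretch enhancement, the fixed threshold, and the area descriptor are illustrative assumptions, not the only choices at each stage):

```python
import numpy as np

def mini_pipeline(img: np.ndarray):
    """Toy walk through a few of the stages above on a gray-scale array."""
    # 2) Enhancement: contrast-stretch gray levels to the full [0, 255] range
    lo, hi = img.min(), img.max()
    enhanced = (img - lo) * 255.0 / (hi - lo)
    # 8) Segmentation: partition into object/background by a fixed threshold
    segmented = enhanced > 127
    # 9) Description: one quantitative attribute, the object's area in pixels
    area = int(segmented.sum())
    return enhanced, segmented, area

img = np.array([[10, 10, 10],
                [10, 60, 60],
                [10, 60, 60]], dtype=float)
_, seg, area = mini_pipeline(img)
print(area)  # 4
```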



QUESTION_TYPE

DESCRIPTIVE_QUESTION

QUESTION_ID

72570

QUESTION_TEXT

Explain the types of digital images.

SCHEME OF EVALUATION

1.     GIF

2.     JPEG

3.     TIFF

4.     PNG

1)     GIF: It was developed in 1987. It is one of the most widely used image formats on the web. It is recognized by the .gif file extension. It is an 8-bit format, which means the maximum number of colors supported by the format is 256. There are two GIF standards, 87a and 89a.

2)     JPEG: It stands for Joint Photographic Experts Group and is a standardized image compression mechanism. It compresses either full-color or gray-scale images. It uses a lossy compression method. It was developed for two reasons: it makes image files smaller, and it stores 24-bit per pixel color data instead of 8-bit per pixel data.

3)     TIFF: Tagged Image File Format. TIFF files are large and of very high quality.

TIFF specification is readily extensible.

Four types of baseline TIFF images are available:

a) bi-level

b) gray scale

c) palette

d) RGB

TIFF files may or may not be compressed. Because the files are so large and there are so many possible variations of each TIFF file type, few browsers can display them without plug-ins.



4)     PNG: It is pronounced “ping”, as in ping-pong; it stands for Portable Network Graphics. It is superior to GIF in many ways.

Its features include:

*     Images that are the same size as or slightly smaller than their GIF counterparts.

*     Support for indexed color, gray-scale & RGB

*     Support for 2-D progressive rendering.

*     An alpha channel that allows an image to have multiple levels of opacity.

*     Gamma correction

*     File integrity checks.
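Each of the four formats above announces itself with a fixed “magic number” at the start of the file. A minimal format sniffer using only those published signature bytes (pure standard-library Python; the function name is illustrative):

```python
# Published leading signature bytes for the four formats discussed above
SIGNATURES = {
    b"GIF87a": "GIF (87a)",
    b"GIF89a": "GIF (89a)",
    b"\xff\xd8\xff": "JPEG",
    b"II*\x00": "TIFF (little-endian)",
    b"MM\x00*": "TIFF (big-endian)",
    b"\x89PNG\r\n\x1a\n": "PNG",
}

def sniff_image_format(data: bytes) -> str:
    """Identify an image format from its leading magic-number bytes."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(sniff_image_format(b"\x89PNG\r\n\x1a\n"))  # PNG
```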





QUESTION_TYPE

DESCRIPTIVE_QUESTION

QUESTION_ID

72571

QUESTION_TEXT

Explain smoothing spatial filters and smoothing linear filters.

SCHEME OF EVALUATION

Smoothing spatial filters

a.     Used for blurring and for noise reduction.

b.     Blurring is used in preprocessing steps.

c.     Noise reduction can be accomplished by blurring with a linear filter and also with a nonlinear filter.



Smoothing linear filters.

The output is simply the average of the pixels contained in the neighborhood of the filter mask.

These filters sometimes are called averaging filters or lowpass filters.

By replacing the value of every pixel in an image by the average of the gray levels in the neighborhood defined by the filter mask, this process results in an image with reduced sharp transitions in gray levels. However, a major use of averaging filters is in the reduction of irrelevant detail in an image.
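The averaging just described can be sketched in a few lines of NumPy. A toy implementation (the 3×3 mask size and edge-replication padding are illustrative assumptions): each output pixel is the mean of its neighborhood, so a sharp step edge is smoothed into a gradual transition.

```python
import numpy as np

def average_filter(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel by the mean of its size x size neighborhood
    (edge-replicated at the borders), i.e. a mask of coefficients 1/size^2."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):            # sum the shifted copies of the image
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)        # divide by the mask area -> the mean

# A sharp step edge (0 -> 90) becomes a gradual ramp after averaging
img = np.array([[0, 0, 0, 90, 90, 90]] * 3, dtype=float)
print(average_filter(img)[1])
```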





QUESTION_TYPE

DESCRIPTIVE_QUESTION

QUESTION_ID

72572

QUESTION_TEXT

Explain the basics of filtering in the frequency domain.

SCHEME OF EVALUATION

Filtering in the frequency domain consists of the following steps:

1)     Multiply the input image by (−1)^(x+y) to center the transform.

2)     Compute F(u, v), the DFT of the image from (1).

3)     Multiply F(u, v) by a filter function H(u, v).

4)     Compute the inverse DFT of the result in (3).

5)     Obtain the real part of the result in (4).

6)     Multiply the result in (5) by (−1)^(x+y).
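The six steps map directly onto NumPy's FFT routines. A minimal sketch (the function and variable names are illustrative); with an all-pass filter H = 1 the image is returned unchanged, which is a quick sanity check of the procedure:

```python
import numpy as np

def filter_frequency_domain(img: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Apply a filter transfer function H(u, v) following the six steps above."""
    M, N = img.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    centering = (-1.0) ** (x + y)
    f = img * centering                 # 1) center the transform
    F = np.fft.fft2(f)                  # 2) compute the DFT
    G = F * H                           # 3) multiply by H(u, v)
    g = np.fft.ifft2(G)                 # 4) inverse DFT
    g = np.real(g)                      # 5) take the real part
    return g * centering                # 6) undo the centering

# Sanity check: an all-pass filter (H = 1) leaves the image unchanged
img = np.arange(16.0).reshape(4, 4)
out = filter_frequency_domain(img, np.ones((4, 4)))
print(np.allclose(out, img))  # True
```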




QUESTION_TYPE

DESCRIPTIVE_QUESTION

QUESTION_ID

114231

QUESTION_TEXT

Explain the concept of Line detection with example.

SCHEME OF EVALUATION

Line detection is an important step in image processing and analysis. Lines and edges are features in any scene, from simple indoor scenes to noisy terrain images taken by satellite. Most of the earlier methods for detecting lines were based on pattern matching. The patterns directly followed from the definition of a line. (2 marks)

These pattern templates are designed with suitable coefficients and are applied at each point in an image. If the first mask were moved around an image, it would respond more strongly to lines oriented horizontally. With a constant background, the maximum response would result when the line passed through the middle row of the mask. (2 marks)

This is easily verified by sketching a simple array of 1's with a line of a different gray level running horizontally through the array. A similar experiment would reveal that the second mask responds best to lines oriented at +45 degrees; the third mask to vertical lines; and the fourth mask to lines in the −45 degree direction. (2 marks)

These directions can also be established by noting that the preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than the other possible directions. (1 mark)

Let R1, R2, R3 and R4 denote the responses of the masks from left to right. Suppose that all masks are run through an image. If, at a certain point in the image, |Ri| > |Rj| for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i. (2 marks)


Example: if, at a point in the image, |R1| > |Rj| for j = 2, 3, 4, that particular point is said to be more likely associated with a horizontal line. (1 mark)
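The four masks and the |Ri| comparison above can be sketched with NumPy. The mask coefficients follow the description in the scheme (preferred direction weighted 2, all other entries −1); the helper function name is illustrative:

```python
import numpy as np

# The four classic 3x3 line-detection masks, left to right:
# horizontal, +45 degrees, vertical, -45 degrees.
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def strongest_line_direction(patch: np.ndarray) -> str:
    """Return the mask direction with the largest |response| on a 3x3 patch."""
    responses = {name: abs(np.sum(mask * patch)) for name, mask in MASKS.items()}
    return max(responses, key=responses.get)

# A bright line through the middle row of a dark patch: R1 dominates
patch = np.array([[0, 0, 0], [9, 9, 9], [0, 0, 0]])
print(strongest_line_direction(patch))  # horizontal
```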

