Subject: Multimedia
Part A

  1. What is Motion Video?

Motion video is a sequence of still images displayed in rapid succession, combined with synchronized audio.

  2. Expand CIF.

Common Intermediate Format (CIF)

  3. What are the components of color video?

A color video signal has 3 components: one luminance component and two chrominance components.

  4. Expand EDTV.

Enhanced Definition Television System (EDTV)

  5. Expand SMPTE.

Society of Motion Picture and Television Engineers (SMPTE)

  6. Expand EDL.

Edit Decision List (EDL)

  7. Expand SCART.

Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs (SCART)

  8. What is a video capture card?

It is an expansion board that handles all kinds of audio and video input signals.

  9. What are the two types of light-sensitive cells?

The light-sensitive cells are of two types: rods and cones.

  10. What are the chrominance parts?

The chrominance part has 2 sub-components: I (In-phase) and Q (Quadrature).

Part B:

  1. Write about analog video camera.

ANALOG VIDEO CAMERA:

Analog video cameras are used to record a succession of still images, converting the brightness and color information of the images into electrical signals. These signals are transmitted from one place to another by cable or by wireless means, and the television set at the receiving end converts the signals back into images.

The tube-type analog video camera, generally used in professional studios, uses electron beams to scan in a raster pattern, while the CCD video camera uses a light-sensitive electronic device called the CCD (charge-coupled device).


(i) Monochrome Video Camera:

The essential components of an analog video camera are a vacuum tube containing an electron gun, and a photosensitive semiconductor plate called the target in front of it. A lens in front of the target focuses light from the object onto the target. The positive terminal of a battery is connected to the lens side of the target, while the negative terminal is attached to the cathode of the electron gun.

Electrons liberated from the photosensitive layer migrate towards the positive potential applied to the lens side of the target. This positive potential is applied through a thin layer of conductive but transparent material. The vacant energy states left by the liberated electrons, called holes, migrate towards the inner surface of the target.

[Diagram: camera tube showing the lens, target element, electron gun, electron beam, charge separation at the target, and the signal output.]

The charge pattern is sampled point by point by a moving beam of electrons which originates in an electron gun in the tube. The exact number of electrons needed to neutralize the charge pattern constitutes a flow of current in a series circuit. It is this current flowing across a load resistance that forms the output signal voltage of the tube.

(ii) Color Video Camera:

It essentially consists of three camera tubes, each of which receives a selectively filtered primary color. Each camera tube develops a signal voltage proportional to the respective color intensity received by it. Light from the scene is processed by the objective lens system. The image formed by the lens is split into three images by glass prisms. These prisms are designed as dichroic mirrors.

Each mirror passes one band of wavelengths and rejects the others; thus red, green and blue images are formed. This generates the three color signals VR, VG and VB, whose voltage levels are proportional to the intensity of the colored light falling on the specific tube.



  2. Write about Digital Video Standards.

DIGITAL VIDEO:

Analog video has been used for years in recording/editing studios and television broadcasting. To incorporate video content in a multimedia production, the video needs to be converted into digital format.

Handling digital video on a personal computer was initially difficult, firstly because of the huge file sizes involved and secondly because of the large bit rate and processing power required. Digitizing analog video requires a video capture card and associated recording software. The digital output from a digital video camera can also be fed to a PC after the necessary format conversion.

DIGITAL VIDEO STANDARDS:


  1. Enhanced Definition Television System (EDTV):

It is a conventional system developed to offer improved resolution. The system emerging in the US and Europe is known as IDTV (Improved Definition Television). It attempts to improve the NTSC image by doubling the number of lines from 525 to 1050.

The D2-MAC (Duobinary Multiplexed Analog Components) standard is designed as an intermediate standard for the transition from the current European analog standards to the HDTV standard.



  2. CCIR (ITU-R) Recommendations:

The CCIR is the international body for TV standards: the International Telecommunication Union – Radiocommunication sector (ITU-R) was earlier known as the Consultative Committee for International Radio (CCIR).

Its recommendation is the standard developed for digitizing the video signal. A color video signal has 3 components:



  • 1 luminance component

  • 2 chrominance components

The CCIR recommendation has 2 options, one for NTSC TV and the other for PAL TV. Refer to the notes above on NTSC and PAL TV.

  3. Common Intermediate Format (CIF):

The luminance resolution of CIF is 360 × 288 pixels per frame at 30 frames per second; the chrominance resolution is half of the luminance resolution. The line count of 288 represents the active lines in PAL TV signals, while the 30 frames per second rate comes from NTSC, making CIF a common intermediate format between NTSC and PAL.

  4. Source Input Format (SIF):

It has a luminance resolution of either 360 × 240 pixels per frame at 30 frames per second, or 360 × 288 pixels per frame at 25 frames per second. In both cases the resolution of the chrominance components is half of the luminance resolution in both the horizontal and vertical dimensions.
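As a quick worked example (a sketch, not from the notes), the uncompressed data rate implied by these resolutions can be computed directly; the 8-bit sample depth assumed here is typical but not stated above.

```python
# Worked example: uncompressed bit rate of a CIF stream, using the
# resolution quoted above (360 x 288 luminance at 30 frames/s) and
# an assumed depth of 8 bits per sample.  Each chrominance component
# is half the luminance resolution in both dimensions.

width, height, fps = 360, 288, 30
luma_samples = width * height                        # Y samples per frame
chroma_samples = 2 * (width // 2) * (height // 2)    # Cb + Cr samples

bits_per_frame = (luma_samples + chroma_samples) * 8
print(f"{bits_per_frame * fps / 1e6:.1f} Mbit/s")    # ~37.3 Mbit/s
```

Figures of this size are why digital video invariably relies on compression.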

  5. High Definition (HD) Video and HDTV:

It is a newer standard for digital video offering improved quality, and requires a high-definition monitor to be viewed. There are 2 alternate formats, one with the standard 4:3 aspect ratio and the other with a 16:9 aspect ratio.

Some films shot using HDTV include Star Wars Episodes 2 and 3, Spy Kids 2 and Spy Kids 3-D.



  3. Write a note on Video Editing.

VIDEO EDITING:

  1. Online and Offline Editing:

Online editing is the practice of editing the frames on the same computer that produces the final cut. It is done on expensive high-end workstations designed to meet the picture quality and data-processing requirements of broadcast-quality video. In offline editing, video is edited using low-quality copies of the original clips, and the final version is then produced on a high-end system.

  2. SMPTE Time Code:

SMPTE stands for the Society of Motion Picture and Television Engineers. A time code defines how frames in a movie are counted and affects the way you edit a clip. The standard way to represent time code was developed by SMPTE.

Two methods are used to generate SMPTE time code in the video world.



  • Drop

  • Non-Drop.

In non-drop time code, frame numbers are always incremented by one in exact synchronization with the video. Drop-frame time code attempts to compensate for the discrepancy between real time and SMPTE time, which arises because NTSC actually runs at 29.97 rather than 30 frames per second.
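A minimal sketch of the drop-frame counting rule may make this concrete. It assumes the standard NTSC convention: frame numbers 00 and 01 are skipped at the start of every minute, except minutes divisible by 10.

```python
# Minimal sketch of SMPTE drop-frame time code (assumed NTSC rule:
# skip frame numbers 00 and 01 each minute, except every 10th minute).

def frames_to_dropframe(frame_count: int) -> str:
    fps, drop = 30, 2
    per_min = fps * 60 - drop              # 1798 frames per drop minute
    per_10min = per_min * 10 + drop        # 17982 frames per 10 minutes

    d, m = divmod(frame_count, per_10min)
    extra = 0 if m < fps * 60 else drop * ((m - fps * 60) // per_min + 1)
    n = frame_count + drop * 9 * d + extra # add back the skipped numbers

    ff = n % fps
    ss = (n // fps) % 60
    mm = (n // (fps * 60)) % 60
    hh = n // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"   # ';' marks drop-frame

print(frames_to_dropframe(1800))    # 00:01:00;02 - frames 00/01 dropped
print(frames_to_dropframe(17982))   # 00:10:00;00 - 10 real minutes
```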

  3. Time Base:

Time is a continuous flow of events, and video requires perfect synchronization, so it is necessary to measure time using precise numbers. When editing video, the source clips may need to be inserted to create the output clip. The source frame rates of the clips determine how many frames are displayed per second.



  4. Edit Decision List (EDL):

It is required while doing offline editing. Once the offline editing is completed on a low-end system, a table of the scene sequence called the edit decision list is created. It is then moved to an edit controller on a high-end system to produce a high-quality output.

Diagram refer book pages 331, 332

  4. Give a short note on Chroma Sub-sampling.

CHROMA SUB-SAMPLING:

Small differences in color information are ignored by the eye. This limitation can be exploited to transmit reduced color information as compared to brightness information, a process called chroma sub-sampling, and thereby save on bandwidth requirements.

  • 4:2:2

It implies that when the signal is converted into an image on the TV screen, out of 4 pixels containing luminance information (Y), only 2 pixels contain color sub-component 1 (Cb) and 2 pixels contain color sub-component 2 (Cr). Essentially this means that while all pixels contain brightness information, only half of the pixels contain color information. The reduction in color information helps to reduce the bandwidth of the transmitted signal.

  • 4:1:1

This implies that the image produced by the signal will contain only one-fourth of the original color information, i.e. out of 4 pixels containing luminance information (Y), only 1 pixel contains color sub-component 1 (Cb) and 1 pixel contains color sub-component 2 (Cr).

  • 4:4:4

This scheme implies that there is no chroma sub-sampling at all, i.e. out of 4 pixels containing luminance information (Y), all 4 pixels contain color sub-component 1 (Cb) and all 4 pixels contain color sub-component 2 (Cr). There is no loss in the color components and hence the picture is of the best quality, although the signal has the highest bandwidth.

  • 4:2:0

It indicates both horizontal and vertical sub-sampling. It implies that out of 4 pixels containing luminance information (Y), only 2 pixels contain color sub-component 1 (Cb) and 2 pixels contain color sub-component 2 (Cr), both along each row as well as along each column.
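The data saving is easy to see with a small sketch. The NumPy arrays and sizes below are illustrative assumptions, not values from the notes; the code keeps one Cb and one Cr sample per 2 × 2 block of pixels, as in 4:2:0.

```python
# Minimal sketch of 4:2:0 chroma sub-sampling on illustrative arrays.
import numpy as np

h, w = 4, 8                      # toy frame size (assumption)
Y  = np.random.rand(h, w)        # full-resolution luminance
Cb = np.random.rand(h, w)        # full-resolution chrominance
Cr = np.random.rand(h, w)

Cb_420 = Cb[::2, ::2]            # keep every 2nd sample per row and column
Cr_420 = Cr[::2, ::2]

full = Y.size + Cb.size + Cr.size
sub  = Y.size + Cb_420.size + Cr_420.size
print(f"{full} -> {sub} samples ({sub / full:.0%} of the original)")
# 96 -> 48 samples (50% of the original), with luminance detail intact
```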

  5. Write about Video Signal Formats.

VIDEO SIGNAL FORMATS:

  1. Component Video:

Component video is stored or transmitted as three separate component signals. The simplest form is the collection of R, G and B signals, which usually form the output of analog video cameras. These RGB signals may be replaced by Y, Cb and Cr signals, also delivered along three separate wires. These connectors are used to store to, or play back from, a camera to other devices.

  2. Composite Video:

For ease of signal transmission, especially TV broadcasting, and also to reduce cable/channel requirements, component signals are often combined into a single signal which is transmitted along a single wire or channel. This is referred to as composite video.

The total bandwidth of the channel is split into separate portions allotted to the luminance and chrominance parts. In some cases a SCART or BNC connector is used instead. Since a single wire carries different types of signals, a certain amount of crosstalk or interference is introduced, which leads to a slight degradation in video quality compared to the component format.



Diagram refer book page 300

  3. S-Video:

Short for Super-Video, this is an analog video signal format in which the luminance and chrominance portions are transmitted separately using multiple wires, instead of over the same wire as in composite video. The picture quality is better than that of composite video because of reduced interference, but the cable is more expensive; it is usually found on high-end VCRs and capture cards. The connector used is a 4-pin mini-DIN connector with 75-ohm termination impedance.

Pin assignments of the 4-pin mini-DIN connector:

Pin 1 – Ground

Pin 2 – Ground

Pin 3 – Luminance (Y)

Pin 4 – Chrominance (C)


  4. SCART Connector:

SCART (Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs) is a French standard for a 21-pin audio and video connector. It can be used to connect VCRs, DVD players, set-top boxes, game systems and computers to television sets. SCART-compatible devices have multiple connectors which can be used for daisy-chaining purposes.

The signal levels are around 1 volt, so they are not much influenced by noise. The connector cannot carry both S-Video and RGB signals at the same time, cannot transmit surround-sound formats, and can only transmit analog signals, not digital.

SCART connectors use coaxial cables to transmit audio and video signals; however, cheaper versions may use plain wires, resulting in degraded image quality.

Part C


  1. Write about transmission of video signals.

TRANSMISSION OF VIDEO SIGNALS:

Problems in Transmitting Color Signals:

A color video camera produces three color signals corresponding to the R, G, B components of the color image. These signals must be combined in a monitor to produce the original image. Such a scheme is suitable when the monitor is close to the camera and three cables can be used to transmit the signals from the camera to the monitor.

This requires three separate cables, wires or channels, which increases the cost of the setup over large distances. Secondly, it was found difficult to transmit the signals in exact synchronism with each other so that they arrived at the same instant at the receiving end.

Thirdly, for TV signals the transmission scheme had to adapt to the existing monochrome TV transmission setup, i.e. the same signals would need to produce a monochrome image on a B/W TV set and a color image on a color TV set. Due to these reasons, a new format called the YC format was developed, which addressed all these issues.

This format was based on the luminance chrominance principle which originates from the human visual perception of color.

Color Perception Curve:

All objects that we observe are focused sharply by the lens system of the human eye on the retina. The retina, which is located at the back of the eye, has light-sensitive cells which capture the visual sensation. The retina is connected to the optic nerve, which conducts the light stimuli to the optical center of the brain.

The light-sensitive cells are of two types: rods and cones. Rods provide the brightness sensation and thus perceive objects in various shades of grey, from black to white. The cones are sensitive to color and can broadly be classified into three different groups: one set of cones detects the presence of blue color, the second set perceives red, and the third is sensitive to green shades.

Any color other than red, green and blue excites different sets of cones to generate a cumulative sensation of that color.

[Diagram: color perception curve — relative sensitivity of the eye versus wavelength, from violet through green to red.]

The reference white for color television transmission has been chosen to be a mixture of 30% red, 59% green and 11% blue.



  1. Luminance and Chrominance:

This has to do with the color perception of the HVS (human visual system). It is known that the HVS is more sensitive to green than to red, and least sensitive to blue. An equal representation of red, green and blue therefore leads to an inefficient data representation when the HVS is the ultimate viewer.

The luminance component describes the variation of perceived brightness in different portions of the image without regard to any color information; e.g. an image with only luminance would be a grayscale image similar to that seen on a monochrome TV set. The luminance component is denoted by Y.

The chrominance component describes the variation of color information in different parts of the image without regard to any brightness information. It is denoted by C and consists of two sub-components: hue (H), which is the actual color, e.g. red, and saturation (S), which denotes the purity of the color, i.e. the amount of gray mixed with the original color, e.g. bright red, dull red, etc.


  2. Generating YC Signals from RGB:

The RGB output signals from a video camera are transformed to the YC format using electronic circuitry before being transmitted. To generate the YC signals from the RGB signals, they have to be defined quantitatively, i.e. how Y and C are related to R, G and B. As a first estimate, the brightness (Y) component can be taken as the average of R, G and B.

To generate a perceptually realistic grayscale image, more emphasis should be given to the green component and least to the blue component. The relation between Y and RGB which is used universally nowadays is

Y = 0.3R + 0.59G + 0.11B

This states that the brightness of an image is composed of 30% red information, 59% green information and 11% blue information. The C sub-components, i.e. H and S, are quantitatively defined in terms of color difference signals referred to as the blue chrominance Cb and the red chrominance Cr. These are defined as:

Cb = B-Y

Cr = R-Y


Thus, Cb is generated by subtracting the Y signal from the blue signal, while Cr is generated by subtracting the Y signal from the red signal. Subtraction of Y in both cases ensures that Cb and Cr are devoid of any brightness information. At the receiver, the Y component can be added to Cb to obtain the B signal, and to Cr to obtain the R signal, whereas the Y signal itself is obtained from the R, G, B signals through a resistor bridge.

Diagram refer book page 296

The color difference signals are generated by inverting Y and adding the inverted signal separately to R and B to obtain (R − Y), i.e. Cr, and (B − Y), i.e. Cb.



Diagram refer book page 269

The color difference signals equal zero when white or grey shades are being transmitted. The calculation is as follows:

For any grey shade (including white), let R = G = B = V volts.

Then Y = 0.3V + 0.59V + 0.11V = V

Thus, (R − Y) = V − V = 0 volts, and (B − Y) = V − V = 0 volts.

Conversion of RGB signals into the YC format also has another important advantage: when the R, G and B voltages are not equal, the Y signal represents the monochrome equivalent of that color.
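A small sketch (using the equations above, with R, G, B as normalized voltages) shows both directions of the conversion and the grey-shade property:

```python
# Sketch of the YC conversion described above:
# Y = 0.3R + 0.59G + 0.11B, Cb = B - Y, Cr = R - Y.

def rgb_to_ycc(r, g, b):
    y = 0.3 * r + 0.59 * g + 0.11 * b   # luminance
    return y, b - y, r - y              # (Y, Cb, Cr)

def ycc_to_rgb(y, cb, cr):
    b = cb + y                          # add Y back to recover blue
    r = cr + y                          # add Y back to recover red
    g = (y - 0.3 * r - 0.11 * b) / 0.59 # solve the Y equation for green
    return r, g, b

print(rgb_to_ycc(1.0, 0.0, 0.0))        # pure red   -> (0.3, -0.3, 0.7)
print(rgb_to_ycc(0.5, 0.5, 0.5))        # grey shade -> (0.5, 0.0, 0.0)
```

Note that for the grey shade both color-difference signals come out as zero, matching the calculation above.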



  3. Chroma Sub-sampling:

Small differences in color information are ignored by the eye. This limitation can be exploited to transmit reduced color information as compared to brightness information, a process called chroma sub-sampling, and thereby save on bandwidth requirements.

  • 4:2:2

It implies that when the signal is converted into an image on the TV screen, out of 4 pixels containing luminance information (Y), only 2 pixels contain color sub-component 1 (Cb) and 2 pixels contain color sub-component 2 (Cr). Essentially this means that while all pixels contain brightness information, only half of the pixels contain color information. The reduction in color information helps to reduce the bandwidth of the transmitted signal.

  • 4:1:1

This implies that the image produced by the signal will contain only one-fourth of the original color information, i.e. out of 4 pixels containing luminance information (Y), only 1 pixel contains color sub-component 1 (Cb) and 1 pixel contains color sub-component 2 (Cr).

  • 4:4:4

This scheme implies that there is no chroma sub-sampling at all, i.e. out of 4 pixels containing luminance information (Y), all 4 pixels contain color sub-component 1 (Cb) and all 4 pixels contain color sub-component 2 (Cr). There is no loss in the color components and hence the picture is of the best quality, although the signal has the highest bandwidth.

  • 4:2:0

It indicates both horizontal and vertical sub-sampling. It implies that out of 4 pixels containing luminance information (Y), only 2 pixels contain color sub-component 1 (Cb) and 2 pixels contain color sub-component 2 (Cr), both along each row as well as along each column.

  2. Write about Television Broadcasting Standards.

TELEVISION BROADCASTING STANDARDS:

  1. National Television System Committee (NTSC):

The National Television System Committee was set up by the Federal Communications Commission in 1940 to establish a standard for black-and-white television. This standard specifies 525 horizontal lines, 30 frames per second with 2:1 interlacing (two fields per frame), and an aspect ratio of 4:3. In the early 1950s the committee developed a standard for color TV.

The TV signals are transmitted with luminance as the main signal and chrominance as a sub-part. Out of the 525 lines, only 480 lines are used actively for picture generation; the others are used for synchronization and vertical retrace. In color TV, the luminance signal takes the place of the original monochrome signal used for black and white.

The chrominance part has 2 sub-components, I (In-phase) and Q (Quadrature), which are amplitude-modulated in quadrature onto the color sub-carrier wave. I represents the orange–cyan axis: positive I represents orange and negative I represents cyan. Q represents the magenta–green axis: positive Q represents magenta and negative Q represents green.

Luminance, represented by Y, is composed of 30% red, 59% green and 11% blue. The relation is defined as

Y = 0.3R + 0.59G + 0.11B

I and Q can also be defined in terms of R, G and B as shown below.

I = 0.74Cr − 0.27Cb = 0.74(R − Y) − 0.27(B − Y) = 0.60R − 0.28G − 0.32B

Q = 0.48Cr + 0.41Cb = 0.48(R − Y) + 0.41(B − Y) = 0.21R − 0.52G + 0.31B
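A quick sketch verifying the algebra: expanding the (R − Y) and (B − Y) forms with Y = 0.3R + 0.59G + 0.11B should reproduce the RGB coefficients quoted above, up to the rounding already present in the two-decimal coefficients.

```python
# Expand a*(R - Y) + b*(B - Y) into RGB coefficients,
# with Y = 0.3R + 0.59G + 0.11B.
yr, yg, yb = 0.30, 0.59, 0.11

def expand(a, b):
    return (a * (1 - yr) - b * yr,      # coefficient of R
            -(a + b) * yg,              # coefficient of G
            -a * yb + b * (1 - yb))     # coefficient of B

print([round(c, 2) for c in expand(0.74, -0.27)])  # I: [0.6, -0.28, -0.32]
print([round(c, 2) for c in expand(0.48, 0.41)])   # Q: [0.21, -0.53, 0.31]
```

The small difference in the G coefficient of Q (−0.53 here versus the quoted −0.52) is purely an artifact of rounding the 0.48 and 0.41 factors.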

Video professionals and television engineers are often critical of the NTSC standard for the specific reason that, during signal propagation, the phase relationships associated with the color signals are liable to drift, resulting in incorrect hues at the receiver set.


  2. PAL (Phase Alternating Line):

This TV standard is used in Europe, Asia, Australia and a few other countries. It was introduced in 1967 by Walter Bruch at Telefunken in Germany.

It uses 625 horizontal lines at 25 frames per second, with 2:1 interlacing (2 fields per frame) and an aspect ratio of 4:3. The name PAL refers to the fact that the phase of part of the color information is reversed with each line, which helps to correct phase errors in transmission.

It gives better quality than NTSC. Out of the 625 lines, 576 are actively used to display the image on the screen; the remainder are used for synchronization and vertical retrace.


  3. Sequential Color with Memory (SECAM):

SECAM, short for Séquentiel Couleur à Mémoire (French for "Sequential Color with Memory"), is a TV broadcasting standard used in France, Russia and the Middle East. It was invented by Henri de France and is similar to PAL.

The fundamental difference is that PAL transmits the two color signals simultaneously, while SECAM transmits one color signal at a time; the receiver obtains the other color information from the preceding line, which is stored in memory.



  3. Write about PC Video.

PC VIDEO:

Analog video needs to be digitized before it can be displayed on a PC screen. The conversion involves two elements: a source and a source device.



  1. Sources and Source Devices:

The source is the medium on which the analog video is recorded; the recorded video conforms to one of the video recording standards. To play it, a source device, called the playback device, is used.

The output conforms to one of two standards: component and composite are the output standards, while NTSC, PAL and SECAM are the input standards. The source and source device can be one of the following:



  • Camcorder with pre-recorded video tape

  • VCP with pre-recorded video cassette

  • Video camera with live footage



  2. Video Capture Card:

It is an expansion board that handles all kinds of audio and video input signals, converting analog to digital and vice versa. It supports various signal formats. The circuit board consists of the following components:

  • Video INPUT port to accept the video signals from NTSC/PAL/SECAM broadcast signals, video camera or VCR.

  • Video compression- decompression hardware for video data.

  • Audio compression- decompression hardware for audio data.

  • A/D convertor to convert the analog input video signal to digital form.

  • Video OUTPUT port to feed output video to a camera or VCR.

  • D/A convertor to convert the digital video data to analog signal for feeding to output analog devices.

  • Audio INPUT/OUTPUT ports for audio input and output functions.

Video Channel Multiplexer: Since a video capture card supports a number of input ports (e.g. composite video, S-Video) and a number of input formats (e.g. NTSC, PAL, HDTV), a video channel multiplexer allows the proper input port and format to be selected under program control, and enables the circuitry appropriate for the selected channel.

ADC: The analog-to-digital convertor reads the input analog video signal from an analog video camera or VCP, and digitizes it using the standard procedures of sampling and quantization.

Image Processing Parameters: These include brightness, contrast, color, audio volume, etc., which are specified using the video capture software. The frame buffer must be large enough to hold all the data related to a frame, and must hold the data until it is written by the CPU to the hard disk.
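As a rough worked example (the 640 × 480 frame size and 24-bit depth are illustrative assumptions, not values from the notes), the buffer requirement per frame is easy to estimate:

```python
# Illustrative frame-buffer size for one uncompressed frame
# (assumed 640 x 480 resolution, 24-bit color).
width, height, bytes_per_pixel = 640, 480, 3

frame_bytes = width * height * bytes_per_pixel
print(f"{frame_bytes / 1024:.0f} KB per frame")       # 900 KB
print(f"{frame_bytes * 30 / 1e6:.1f} MB/s at 30 fps") # ~27.6 MB/s
```

Sustained rates of this order are why real-time capture usually depends on the card's on-board compression hardware.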

Compression/Decompression: The video capture card contains a chip for hardware compression and decompression in real time. There can be multiple standards, like MPEG-1, MPEG-2 and H.261/263, which would require a programmable CODEC on the card.

  3. Video Capture Software:

Tuning Parameters: These parameters define the final output of both audio and video in the digital file, including the frame rate and frame size.

AVI Capture: It allows the user to specify whether the digital video should be stored in the AVI format. Compression is not required for small files.

AVI to MPEG Converter: Allows the user to convert an AVI file to the MPEG format.

MPEG Capture: Certain video capture cards allow the user to capture directly in the MPEG format, so that the data is stored in compressed form. This is suitable for capturing large volumes of video data.

DAT to MPEG Converter: It converts the DAT format of a video into MPEG, usually for editing purposes.

MPEG Editor: Some capture software provides the facility of editing an MPEG file. The MPEG movie file is opened in a timeline structure, and functions are provided for splitting the file into small parts by specifying the start and end of each portion.

  4. Write about Video File Formats and Codecs.

VIDEO FILE FORMATS AND CODECs:

  1. AVI (Audio/Video Interleaved):

The native video file format on the Windows platform is AVI, or Audio-Video Interleaved. The term "interleaved" means that within the file the video data and the corresponding audio data are kept in small chunks instead of widely separated blocks. To avoid delay due to head seek time, the audio data is kept as close to the video data as possible; otherwise synchronization between the visual and audio media would be disturbed. This architecture is called Video for Windows. AVI is an uncompressed format, i.e. the image frames and audio are stored without any type of compression, and hence the sizes of AVI files can be large.

  2. MOV (QuickTime Movie):

It was developed by Apple for both the Windows and Macintosh platforms. These files have the extension MOV and require a program called Movie Player for playback, which can be freely downloaded from the Apple website.

  3. MPEG (Moving Picture Experts Group):

Developed by the Moving Picture Experts Group (MPEG), this is a compressed format based on both intra-frame and inter-frame compression. There are several versions of MPEG: MPEG-1 is designed for CD-ROM-based applications and Video CDs, and provides a quality comparable to VHS.

MPEG-2 is designed for DVD applications and provides a quality comparable to SVHS. MPEG-4 provides an efficient method for object-oriented, content-based storage and retrieval of multimedia content. MPEG-7 is a scheme for describing multimedia content so that media objects may be retrieved through queries.



  4. Real Video:

It supports streaming, which means that a video file starts playing even before it is fully downloaded from the Internet. A program called RealPlayer is required to play back an RM file, which can be freely downloaded from the Real Networks website.

Helix Player is an open-source media player built on top of the Helix DNA client for Linux and other operating systems, and contains support for a number of media formats.



  5. Indeo Video Interactive:

This CODEC by Intel is used for video distribution over the Internet for computers with MMX or Pentium II processors. The CODEC includes features like flexible key-frame control, chroma keying and on-the-fly cropping that reduce the data load.

Full use of these features requires utility software available separately from Intel.



  6. Cinepak:

It was originally developed to play small movies on '386 and '030 systems from a single-speed CD-ROM drive. Its greatest strength is its extremely low CPU requirements. There are higher-quality solutions for almost any application; however, if you need your movies to play back on the widest range of machines, you may not be able to use many of the newer codecs.

  7. Nero Digital:

It is a software product that uses MPEG-4-conforming compression technology, with support for two MPEG-4 video coding algorithms and two forms of MPEG-4 AAC audio; that is, it offers two different video CODECs and two different audio CODECs.

  8. FFmpeg:

It is a set of open-source computer programs that can record, convert and stream digital audio and video. The project is made up of several components:



  • ffmpeg: a command-line tool to convert one video file format to another.

  • ffserver: an HTTP multimedia streaming server for live broadcasts. Time-shifting of live broadcasts is also supported.

  • ffplay: a simple media player based on SDL (Simple DirectMedia Layer) and the FFmpeg libraries.

  • libavcodec: a library containing all the FFmpeg audio/video encoders and decoders.

  • libavformat: a library containing parsers and generators for all common audio/video formats.
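As a usage sketch, the ffmpeg tool can be driven from Python via subprocess. This assumes ffmpeg is installed and on the PATH; the file names and bit rate below are placeholders.

```python
# Sketch: convert an AVI file to MPEG with the ffmpeg command-line
# tool, mirroring the "AVI to MPEG converter" function described
# earlier.  Assumes ffmpeg is installed; file names are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg",
     "-i", "input.avi",        # source file
     "-c:v", "mpeg2video",     # MPEG-2 video encoder
     "-b:v", "4M",             # target video bit rate
     "output.mpg"],
    check=True,                # raise an error if ffmpeg fails
)
```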



  5. Write about Video Recording Formats and Systems.

VIDEO RECORDING FORMATS AND SYSTEMS:

  1. Ampex (Alexander M. Poniatoff Excellence):

The Ampex broadcast video tape recorder facilitated time-zone broadcast delay, which allowed TV stations to broadcast programs simultaneously in different time zones. Ampex also invented the portable broadcast video recorder, which enabled broadcast-quality video to be shot in outdoor locations.

In 1967, ABC used Ampex recorders for the first time to display instant slow-motion replays of sporting events. In 1970, Ampex introduced the first robotic library system, which enabled TV stations to re-sequence TV commercials instantaneously.



  2. VERA:

It was an early videotape format developed by the BBC in the 1950s. It used very high tape speeds of about 200 inches per second to record the high frequencies required by video signals, and could record about 15 minutes of black-and-white video per reel.

  3. U-matic:

It is a video cassette format developed by Sony in 1969, and was among the first formats to enclose the 3/4-inch-wide tape inside a cassette, in contrast to the open-reel formats of that time. Sony made final improvements to the BVU format and called it BVU-SP. In the 1990s it was made obsolete by Sony's Betacam-SP.

  4. Betamax:

It is smaller than the VHS cassette and is said to produce improved picture quality; the rewind and fast-forward tape operations are also faster. Technically, Betamax has more bandwidth than the VHS format; however, it lost to VHS in the domestic sector due to licensing difficulties with other companies.

Its cost compared with the cheaper VHS cassette might also have been a relevant factor in its lack of acceptance in the home sector.



  5. Betacam:

A family of 1/2-inch videotape formats developed by Sony from 1982 onwards. Betacam-SP was later developed, which increased the resolution to 340 lines and became the industry standard for most TV stations. Digital Betacam was launched in 1993 and replaced both Betacam and Betacam-SP. The running time of the S cassette is about 40 minutes, while for the L cassette it is about 124 minutes. It also implements support for serial digital interface (SDI) coaxial digital connections.

Different cassette colors identify the types: Betacam and Betacam-SP tapes are grey, Digital Betacam tapes are light blue, Betacam SX tapes are yellow, MPEG IMX tapes are light green, and HDCAM/HDCAM-SR tapes are black.


  6. Video Home System (VHS):

It is a recording and playing standard for video cassette recorders developed by JVC and launched in 1976. A VHS cassette contains a 12.70 mm (1/2-inch) wide magnetic tape wound between two spools, allowing it to be slowly passed over the various playback and recording heads of the video cassette recorder.

A linear control track at the tape's lower edge holds pulses that mark the beginning of every frame of video; these are used to fine-tune the tape speed during playback and to position the rotating heads exactly on their helical tracks, rather than having them end up somewhere between two adjacent tracks.



  7. Video Cassette Recorder (VCR):

Sometimes called a video tape recorder, it is a device used to record and play audio and video information using removable cassettes containing magnetic tape. The VCR has begun to be superseded by digital media such as the DVD.

  8. Video 2000:

The cassette was slightly larger than VHS and could record four hours of video on each side. Technically it was superior to both VHS and Betamax, but it was introduced at a time when VHS was already established, and it failed to make a dent in the market.

  9. Video Compact Cassette (VCC):

The system used 1/2-inch tapes coated with chromium dioxide, available in three versions: 30, 45 and 60 minutes. To prevent crosstalk between adjacent video tracks, it used an unrecorded guard band, essentially a small space between tracks to prevent interference when reading data.

  10. Camcorder:

It is a generic term for a portable device that records audio and video onto a storage device within it. The term is a combination of "camera" and "recorder" in one unit. Initially camcorders used VHS or Betamax tapes and were quite bulky, but later, with the introduction of 8 mm tapes, they reduced in size.

  6. Explain about Video Editing Software.

VIDEO EDITING SOFTWARE:

  1. Importing Clips:

For editing, the user is provided with an interface for importing the video clips into the software package. Usually the native file format AVI is always supported; some additional formats like QuickTime and MPEG may also be supported.

  2. Timeline Structure:

Most video editing software presents a timeline structure for video editing. A series of horizontal lines, called tracks, is presented to the user, each of these lines serving as a time slot where the video clips to be edited are placed. After the video clips are imported into the software, they are available in a separate window, from which they are usually dragged and placed on the timeline by the user.

A playback head moves along the timeline from left to right, and the portion of the video data under the head is played back in a separate playback or monitor window.



  3. Playback of Clips:

Once the clips are placed on the timeline, each of them can be selected and played back in a monitor window. Another monitor window is usually used to playback the entire sequence of clips one after another along with added effects like transitions.

  4. Trimming Clips:

One of the basic operations in a video editor is trimming a clip to discard unwanted portions. The user is asked to select two points along a clip, the start point and the end point; the editor keeps all frames between these points and deletes the rest of the frames.
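Conceptually, trimming is just a slice over the clip's frame sequence. A minimal sketch, with the clip modeled as a plain Python list purely for illustration:

```python
# Conceptual sketch: trimming keeps frames between the two chosen
# points (inclusive) and discards the rest.

def trim(frames, start, end):
    return frames[start:end + 1]

clip = list(range(100))            # a 100-frame clip (illustrative)
print(len(trim(clip, 20, 59)))     # 40 frames remain
```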

  5. Splitting a Clip:

A video editor allows a user to split a video clip into any number of parts. On a timeline a video clip is represented by a rectangular shape, and the user is usually required to specify the points where the clip is to be split by clicking at specific points of the rectangle. After splitting, the user can drag the different pieces and place them at different locations, on the same track or on different tracks.

  6. Manipulating the Audio Content:

A video editor allows the audio content of a video clip to be separated from the visual content. The audio content is displayed on a separate audio track in the timeline structure, from where it may be replaced by some other audio data. Audio can also be added to a video clip which originally did not have any audio content.

  7. Adding Transitions:

A video editor enables the user to insert various types of transitions between two video clips. Transitions like dissolves, wipes, blinds, etc. may be dragged from a separate transition palette and placed between two clips on the timeline. On playback, the end part of the first clip is seen to gradually merge into the beginning of the second clip through the applied transition.

  8. Changing the Speed of a Clip:

A video editor usually allows the playback speed of a clip to be changed, i.e. to simulate slow motion or fast motion. This is usually done by selecting the clip on the timeline and specifying a percentage by which the speed is to be changed. If the speed is decreased, the duration of the clip is increased.

  9. Changing the Opacity of a Clip:

A clip is usually opaque, i.e. its opacity is 100%. If the opacity is decreased, the clip becomes partially transparent. This implies that, through the clip, other clips placed on lower tracks become visible.

  10. Applying Special Effects:

Visual filters like blur, emboss, lens flare, etc. can be applied to video clips. The extent of these filters can also be varied over the length of the clip, e.g. a blur filter could be gradually increased from the beginning to the end of a clip.

  11. Superimposing an Image:

An image can be superimposed over a video clip. Additionally, some editors allow the superimposed image to be animated, i.e. the image can be made to gradually move from left to right or from top to bottom of the video frame over time.

  12. Exporting a Movie:

While editing and exporting digital video, a concept which needs to be understood is rendering. A video editor provides a visual interface where we can click and drag to specify editing operations on video clips. When exporting a movie, the changes specified are first rendered and then copied to an output file in a format supported by the editor. In most cases the AVI and MOV formats are supported; in some cases creating other formats like MPG may also be possible.
